The Question Every Leader Gets Wrong About AI

When people find out we’ve embedded AI deep into how we build and operate at Newel Health, the first question is almost always the same: which tools do you use?

It’s the wrong question.

The tools are the easy part. Any team with a budget and a browser can access the same AI capabilities we do. What separates organizations that extract genuine value from AI from those that create expensive noise is something more fundamental: a clear, deliberate framework for trust.

Specifically, knowing which decisions you hand to AI, and which ones you don’t.

Why “Which Tools?” Is the Wrong Starting Point

There’s a seductive logic to the tools-first approach. Evaluate options, run pilots, pick winners, roll out. It feels like execution. It looks like progress.

But it skips the only question that actually matters: what are you trusting this system to do?

AI tools are not neutral. Every time you integrate one into a workflow, you are implicitly making a decision about where machine output is sufficient and where human judgment is required. If you haven’t made that decision explicitly (in advance, with clear criteria) you’ve made it by default. And decisions made by default in fast-moving organizations tend to be inconsistent, invisible, and occasionally dangerous.

In most industries, the consequences of getting this wrong are operational. A bad AI output costs you time, money, or reputation. Those are real costs. But they’re recoverable.

In Software as a Medical Device, the stakes are categorically different.

When the Output Has a Patient at the End of It

At Newel, we develop AI-driven SaMD for chronic disease management: cardiometabolic conditions, neurological disorders, and chronic pain. Our products are certified medical devices. They operate inside clinical pathways. They generate data that informs care decisions.

That context changes everything about how we think about AI in our operations.

Our quality management system, certified to ISO 13485 and ISO 9001, doesn’t just suggest human accountability at key decision points. It requires it. And that requirement isn’t a bureaucratic imposition. It reflects a genuine truth about where AI is and isn’t ready to operate without oversight in a high-stakes environment.

This doesn’t mean we use AI cautiously or sparingly. Quite the opposite.

It means we use it with precision. We know exactly where it creates leverage and exactly where it doesn’t belong. That clarity is what enables us to move fast without cutting corners, which is, ultimately, the only kind of speed that matters in regulated medtech.

A Framework Built on Trust Boundaries

The mental model we operate with is straightforward, even if the implementation requires rigor.

AI earns trust where tasks are well-defined, inputs are structured, and outputs can be meaningfully verified by a human with the right domain expertise. In these zones, AI is not just acceptable; it is a compounding asset. It accelerates synthesis, improves consistency, surfaces patterns that human review alone would miss, and frees qualified people to focus on the decisions only they can make.

AI does not get trust where outputs carry clinical, regulatory, or safety weight that cannot be fully and reliably reviewed. Not because the technology lacks sophistication. Because the accountability structure of a regulated medical device company demands that certain decision points remain owned by qualified humans. That is not a temporary limitation to be engineered around. It is an appropriate boundary that reflects the current state of both the technology and the regulatory landscape.

What most organizations discover, often too late, is that failing to define these boundaries explicitly doesn’t mean AI operates safely within them. It means the boundaries shift informally, inconsistently, and invisibly across teams and workflows.
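To make the model concrete, here is a minimal sketch of what declaring trust boundaries as explicit data might look like. It is an illustration, not Newel's implementation; every name in it (TrustLevel, TrustBoundary, the example tasks and roles) is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class TrustLevel(Enum):
    AI_AUTONOMOUS = "ai_autonomous"    # output may proceed without review
    AI_WITH_REVIEW = "ai_with_review"  # AI drafts; a qualified human verifies
    HUMAN_ONLY = "human_only"          # the decision is owned by a human

@dataclass(frozen=True)
class TrustBoundary:
    task: str                  # the workflow step being classified
    trust_level: TrustLevel    # which zone the step falls into
    reviewer_role: str | None  # qualification required, if review is needed
    rationale: str             # why the boundary sits here (auditable)

# Invented examples, for illustration only.
BOUNDARIES = [
    TrustBoundary("draft_internal_meeting_notes", TrustLevel.AI_AUTONOMOUS,
                  reviewer_role=None,
                  rationale="Low stakes; errors are cheap and self-correcting"),
    TrustBoundary("summarize_literature_search", TrustLevel.AI_WITH_REVIEW,
                  reviewer_role="clinical_affairs",
                  rationale="Output is verifiable against cited sources"),
    TrustBoundary("approve_risk_control_measure", TrustLevel.HUMAN_ONLY,
                  reviewer_role="quality_manager",
                  rationale="Safety-weighted decision; accountability stays human"),
]
```

The point of writing boundaries down as data rather than prose is that they become reviewable, versionable, and enforceable, which is exactly what an implicit boundary is not.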

How H.Core Makes This Operational

This framework is not abstract inside Newel. It is built into the infrastructure we use to develop and scale our products.

H.Core, our proprietary adaptive development platform, is the operational environment where AI meets regulated SaMD development. It integrates regulatory compliance, behavioral science, clinical-grade AI, and real-world data into a single coherent system. Crucially, it encodes trust boundaries directly into development and operational workflows, defining at every step where AI-generated output is sufficient to proceed and where a qualified human decision is required before moving forward.

This isn’t AI governance as a policy document sitting in a shared drive. It’s AI governance as architecture, embedded in the platform itself, consistent across every product we build, and auditable as part of our quality management system.

The result is that our teams don’t have to make ad hoc judgments about where to trust AI. The structure makes those decisions in advance, consistently, and in alignment with our regulatory obligations. That frees them to use AI ambitiously within the zones where it belongs — without the friction of uncertainty or the risk of compliance exposure.
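Continuing the same hypothetical sketch, the gating idea behind "governance as architecture" can be pictured as a check that runs before a workflow step closes, not a policy consulted after the fact. H.Core's actual interfaces are proprietary and not shown here; this only illustrates the pattern, reusing the TrustBoundary and TrustLevel names from the sketch above.

```python
class HumanApprovalRequired(Exception):
    """Raised when a workflow step cannot close on AI output alone."""

def advance_step(boundary: TrustBoundary, ai_output: str,
                 signoff_by: str | None = None) -> str:
    """Enforce a declared trust boundary before a step proceeds.

    Reuses the hypothetical TrustBoundary/TrustLevel sketch above.
    """
    if boundary.trust_level is TrustLevel.HUMAN_ONLY:
        # AI output is advisory at most; the decision belongs to a human.
        raise HumanApprovalRequired(
            f"{boundary.task!r} is owned by role {boundary.reviewer_role!r}")
    if boundary.trust_level is TrustLevel.AI_WITH_REVIEW and signoff_by is None:
        # AI may draft, but a qualified reviewer must sign off first.
        raise HumanApprovalRequired(
            f"{boundary.task!r} needs sign-off from {boundary.reviewer_role!r}")
    # Autonomous zone, or a reviewed output with the sign-off recorded.
    return ai_output
```

The design point is the failure mode: a step that needs human judgment cannot silently proceed on AI output. It fails loudly until the right person has signed off.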

The Principle Behind the Practice

There is a broader lesson here that applies well beyond regulated medtech.

Organizations that use AI most effectively are not necessarily the ones with the most sophisticated tools or the largest AI budgets. They are the ones with the clearest thinking about trust. They have done the harder, slower work of defining what good output looks like, who is accountable for which decisions, and where the line sits between augmentation and delegation.

That work is operational and cultural before it is technological. It requires leadership to set the frame, not just IT to implement the tools.

Once that frame is set, once the trust boundaries are explicit, shared, and embedded in how work actually gets done, AI becomes an extraordinary multiplier. Every capable team member becomes more capable. Every well-designed process becomes faster. Every insight that previously required significant human synthesis arrives sooner and with greater consistency.

Clarity about limits is what creates freedom within them.

Before You Evaluate the Next Tool

If you are a leader currently assessing how to integrate AI more deeply into your operations, the most valuable thing you can do before opening another product demo is to map your trust boundaries.

Where does your operation require human accountability — by regulation, by ethics, by the nature of the consequences? Where can AI output be verified reliably by people with the right expertise? Where are you currently making these decisions implicitly, by default, without a shared framework?

Answer those questions first. The tool evaluation follows naturally from the answers. Do it in the other order and you will spend significant resources solving the wrong problem.
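If it helps to make the exercise concrete, the first pass can be nothing more than an inventory with one row per AI touchpoint, mirroring the three questions above. A minimal sketch, with invented example rows:

```python
# A hypothetical first pass at the mapping exercise: one row per place where
# AI output currently enters a workflow. All rows here are invented examples.
touchpoints = [
    {"step": "drafting regulatory summaries",
     "human_accountability_because": "regulation",  # regulation / ethics / consequences
     "verifiable_by": "regulatory_affairs",         # role able to check the output
     "boundary_written_down": False},               # False = decided implicitly today
    {"step": "summarizing user interviews",
     "human_accountability_because": None,
     "verifiable_by": "product_manager",
     "boundary_written_down": True},
]

# The rows where the boundary is implicit are where the framework work starts.
for t in touchpoints:
    if not t["boundary_written_down"]:
        print(f"Implicit boundary: {t['step']}")
```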

The organizations building durable AI advantage right now are not moving faster than everyone else. They are moving with more precision. In a world where AI capabilities are increasingly commoditized, precision is the real differentiator: knowing exactly what you are trusting, and why.


At Newel Health, we partner with pharma and medtech companies to develop and legally manufacture AI-driven SaMD — combining regulatory rigor with the speed of a purpose-built platform. If you’re exploring how to build digital therapeutics that meet the highest standards without slowing down, we’d like to talk.

WRITTEN BY

PierPaolo Iagulli

Chief Operating Officer and Co-Founder at Newel Health in charge of operational excellence across the company’s multiple business units, from R&D to development and commercialization. Pier Paolo is a technology entrepreneur and web marketing expert passionate about Artificial Intelligence and the impact that new technologies will have on our lives and society.