Why Clinical Evidence Is Not Enough for Real-World Impact: How Behavioral Science Can Help Bridge the Gap


Imagine a patient. Let’s call her Julie. She has type 2 diabetes and has just enrolled in a clinical trial for a new digital therapeutic. For twelve weeks, she logs in every day, tracks her meals, completes her breathing exercises, and checks in with her care team. Her blood sugar improves. The trial, by every measure, is a success.

Six months after the trial ends, Julie has stopped using the app entirely.

This is not an unusual story. In fact, it is one of the most common stories in digital health, and one of the most expensive. Billions go into developing interventions that work well in controlled conditions and then fall apart when they meet the unpredictability of real life.

So what went wrong? And more importantly, what can we do about it?

A Study Flow Is Not the Same as Daily Life

Clinical evidence tells us what can work. It does not always tell us what will work, for this person, in this context, on a Tuesday afternoon when they are exhausted and their app sends its third notification of the day.

Clinical trials are designed to control for noise. They select motivated participants, monitor adherence closely, and measure outcomes in constrained windows of time. That is how we establish that an intervention has a real effect rather than a lucky one.

Real life does not run on trial protocols. Real patients have competing priorities, fluctuating motivation, and deeply ingrained habits that do not bend simply because a doctor recommends otherwise. The gap between “it works in a trial” and “it works in the world” is not a failure of science. It is a failure to account for what drives human behavior.

And that gap is costing us. Studies consistently show that adherence to long-term therapies for chronic conditions averages around 50%, meaning roughly half of patients are not getting the benefit of treatments that evidence shows should help them. For pharmaceutical companies and healthcare systems investing in digital therapeutics, the stakes are even higher. Regulators now want real-world evidence. Payers want proof of sustained impact. And patients want experiences that actually fit their lives.

Enter Behavioral Science

Behavioral science is the study of why people do what they do, and why they often do not do what they intend to do.

Clinical science asks: does this molecule, or this app, or this therapy, change the outcome we are measuring? Behavioral science asks: given everything we know about how human beings think, feel, and decide, how do we design something that people will actually use, consistently, over time, in the middle of their messy, complicated lives?

What Gets in the Way of Change

Most people living with a chronic condition know what their healthcare team has told them. They understand the importance of taking their medication, attending appointments, adjusting their diet, or managing stress. Awareness, in most cases, is not what is missing.

What is missing, far more often, are the conditions that make sustained change genuinely possible.

Think about what a typical self-management program assumes of its participants: reliable access to a smartphone, a stable home environment, enough time and energy after work and caregiving responsibilities, health literacy in the language the app was designed in, and the emotional bandwidth to engage with new technology during one of the most stressful periods of their lives. These are not small asks. And they are not equally available to everyone.

A single parent working night shifts faces very different challenges from a retired professional with flexible time and a strong support network. A patient managing depression alongside a physical condition faces different barriers from one without. Someone navigating a healthcare system in their second language, or with a disability that makes standard interfaces difficult to use, is being asked to do more, not less, than others to access the same benefit.

This is not a question of individual willpower or knowledge. It is about the gap between what interventions assume of the people using them, and the reality of the lives those people are actually living.

Behavioral science takes this seriously. It looks beyond individual decision-making to the social, environmental, and structural factors that shape whether change is even possible in a given context. And it asks a harder question than “how do we motivate this person?” It asks: what do we need to change in the design of this product, this intervention, this system, to make the desired behavior more accessible and more realistic for the broadest possible range of people?

The mechanisms behavioral science draws on are tools for reducing that gap: environmental design (making the healthier choice the easier choice), social support (embedding connection into the intervention itself), and adaptive personalization (adjusting the experience to the individual, rather than the other way around). They are not tricks to overcome human weakness. They are ways of building interventions that meet people where they actually are, and that work for people whose lives look very different from the average trial participant.

And when they are built into a digital health product, not as afterthoughts but as architectural principles, something shifts. Engagement climbs. Drop-off rates fall. Outcomes extend beyond the trial window, and across a wider range of patients.

The Real-World Evidence Revolution

Here is where the stakes get particularly high. Value-based care models are replacing the old fee-for-service logic, and with that shift comes new demand: show us that your intervention works out here, not just in the controlled environment of a randomized controlled trial.

This is a paradigm shift in what counts as evidence. Regulatory bodies increasingly expect manufacturers of digital therapeutics to demonstrate that patients actually use their products, that engagement translates to sustained behavioral outcomes, and that those outcomes persist beyond the point where trial incentives disappear.

For pharmaceutical companies investing in digital health, this raises an uncomfortable question. If your product is clinically validated but behaviorally naive, designed to pass a trial rather than to be used by a real human being over months and years, how will it perform under this new scrutiny?

The answer, increasingly, is: not well enough.

Bridging the Gap: What Good Looks Like

So what does it actually look like to bridge clinical rigor and behavioral intelligence in a digital therapeutic?

It starts before a single line of code is written. Behavioral scientists work alongside clinical and regulatory teams to map the specific mechanisms of action the product needs to engage: not just the clinical outcome, but the behavioral determinants that drive it. What needs to change in how a patient thinks, feels, or acts for this therapeutic to deliver its effect? And what is the evidence-based technique most likely to produce that change?

It continues in the design of the user experience itself. The most effective digital health products are not just clinically sound. They are genuinely enjoyable to use. They are habit-forming in the best sense: they fit naturally into daily routines, reward engagement, and adapt to the individual user rather than treating every patient as an identical avatar of their diagnosis.

And it does not stop at launch. The richest source of behavioral data is what happens when real patients use a product in the real world. Continuous monitoring, adaptive personalization, and feedback loops that learn from actual use patterns are how a behavioral intervention improves over time, rather than degrading as novelty fades.

The Bottom Line

Clinical evidence is essential. Without it, we have no basis for believing an intervention can help at all. But evidence of efficacy is not the same as evidence of impact.

Real-world impact requires real-world engagement. And real-world engagement requires understanding, and designing for, the whole person on the other end of the screen: their circumstances, their constraints, and the systems that either support them or leave them to manage alone.


*Newel Health is a digital therapeutics company embedding behavioral science at the heart of regulated digital health solutions.*

WRITTEN BY

Dr. Silja-Riin Voolma

Silja-Riin Voolma, PhD, is Head of Behavioral Science at Newel Health, where she leads the integration of behavioral science into digital therapeutics for neurological, cardiovascular, and chronic pain conditions. With expertise spanning clinical research, applied behavioral science, and human-centered design, she ensures that Newel’s products are grounded in evidence-based mechanisms of change and deliver meaningful impact for patients, clinicians, and healthcare systems. Silja has extensive experience in co-design, qualitative research, product engagement & retention metrics, and strategy development. She is passionate about translating behavioral insights into scalable, patient-centered digital health solutions.