In this series, David Lapidus talks with members of the Rare Collective to debunk common myths surrounding rare diseases. In this post, Chris Smith flips the script on David for his thoughts, and David talks about the challenges of estimating a treatable population.
Chris:
David, you’ve spent more than a decade counting patients with rare diseases. What is a common misperception that companies have about estimating their treatable population?
David:
Clients usually approach me with a preconceived idea about the number of patients who have a disease. These estimates typically come from published papers, which seems perfectly reasonable at first. But those papers can be misleading. It takes a lot of digging and careful interpretation to understand what those numbers really mean.
Chris:
If these numbers come from peer-reviewed publications, shouldn’t they be trustworthy?
David:
They should be, but sometimes they aren’t. To understand what a number means, you have to think about why it was created in the first place. A pharma company is concerned about the size of a patient population because it has a huge impact on revenue forecasts, company valuations, and commercial planning. They need to be conservative and precise: they’re responsible to their investors, and they have to negotiate with outside parties. But those aren’t the same motivations that drive the authors of publications, especially review articles. Put yourself in their shoes: these are often researchers or clinicians who are personally invested in a disease. They don’t want to minimize a group of people who are suffering from a very serious problem. Nor do they want to minimize a problem that might define their career. So when they review population studies, they tend to interpret the results very broadly.
Chris:
If we’re talking about the results of quantitative studies, why is there room for interpretation?
David:
Ideally, a population study would send a researcher to every home in the country, test every single person, and report a perfectly unbiased result. Real research is limited by practical concerns, but those limits tend to be glossed over when later papers review the work. For example, an article might refer to the prevalence of a disease in the United States, but it turns out that the original data come from a county in Minnesota—that’s a common example because the Mayo Clinic publishes so much research. So a number that looks like an authoritative estimate for the US is based on data from people who are, in many health-related ways, different from the rest of the country. That’s why it’s essential to dig up the source of your data and see what was really studied. Once we discover the true nature of the original data, we have to interpret whether it’s meaningful for a pharmaceutical market—for some analyses, it’s perfectly fine to use data from Olmsted County in Minnesota.
Chris:
So once you get to the bottom, at least there’s a fact that everyone can agree on?
David:
That’s often true, but sometimes it isn’t. I recently researched a disease where everyone believed that the US patient population was a particular number, because of a value reported in one paper. But a close reading of that paper showed that the authors didn’t do the same research in the US that they did in Europe. The US number was just a guesstimate, but people gave it the same credibility as the European data that took up most of the paper. This seriously undermined the client’s market size estimate, but at least it happened early enough for them to design their own projects to shore up the US estimate.
Chris:
What can you do when the published data let you down like that?
David:
First you have to decide how serious the problem is—do you really need to run your own data-collection project? In many cases, especially early in drug development, we can apply a careful and conservative interpretation to the published literature. We might get a different number than the standard estimate, but a well-founded interpretation can tide you over for a long time.
If that isn’t good enough, there are many ways to generate the data you need. We can use insurance claims to see how many patients have a related diagnosis. If patients are concentrated with experts or specialists, we can work with them to count their patients—and that’s also a good opportunity to ask doctors about the disease or the patients’ experience. Sometimes you’ll want to do this even if you have perfectly reliable population studies, because these different ways of counting patients have different implications for a company.
Chris:
How are you counting patients differently? Isn’t a patient a patient?
David:
There are three ways of counting patients, and they all mean different things to a pharma company. Prevalent patients are the ones who have the disease. Diagnosed patients are the ones whose doctor told them that they have the disease. Identified patients are the ones that your company is somehow connected to. The prevalent patients are the maximum number whom your therapy could treat, the diagnosed patients are the real-world candidates, and the identified ones are the great candidates—maybe you’re in touch with their doctors, or maybe they participate in a patient group that your company supports, or maybe they chose to connect with you via a registry or a less-formal program. As you commercialize your therapy, you want these three groups to converge: first, those prevalent patients must be diagnosed, and then you have to make contact and identify them to your company. A good forecast tracks them separately, which illustrates the commercial challenges that the company faces.
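To make the distinction concrete, here is a minimal sketch of how a forecast might track the three groups separately. All names and numbers are illustrative assumptions for this example, not real epidemiology or anything from the conversation above.

```python
from dataclasses import dataclass

@dataclass
class PatientCounts:
    """Three ways of counting patients, tracked separately in a forecast."""
    prevalent: int   # everyone who has the disease (theoretical maximum)
    diagnosed: int   # patients whose doctor has told them they have it
    identified: int  # patients the company is somehow connected to

    def diagnosis_rate(self) -> float:
        """Share of prevalent patients who have been diagnosed."""
        return self.diagnosed / self.prevalent

    def identification_rate(self) -> float:
        """Share of diagnosed patients the company can actually reach."""
        return self.identified / self.diagnosed

# Illustrative numbers only -- not real data for any disease.
year_one = PatientCounts(prevalent=10_000, diagnosed=3_000, identified=450)
print(f"Diagnosis rate: {year_one.diagnosis_rate():.0%}")
print(f"Identification rate: {year_one.identification_rate():.0%}")
```

Keeping the two conversion rates separate makes the commercial challenge visible: a low diagnosis rate points to an awareness or screening problem, while a low identification rate points to an outreach problem.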
Chris:
It seems like counting patients depends on a lot more than published papers!
David:
Those papers are a great place to start to figure out your therapy’s potential, but they won’t help with identified patients. That takes a different kind of approach: you might need to survey their doctors, or maybe raise awareness of your therapy among doctors and patients, or help those patients get organized so that they have a structure for communication and action.
Start with a robust population estimate so you understand your potential. Then layer on the work that turns potential patients into identified patients. That’s how you can build a forecast that people believe.
Chris Smith is the President and CEO of SmithSolve, a communications agency helping clients earn trust, manage risk and break through the noise.