Think about Risk to Pick the Right Process

A few months ago, I had a conversation about project management with a very smart biotech executive. I described the agile project management methods common in software projects, and he was horrified. Among other things, he wondered how those teams do a critical path analysis. I told him that mostly, they don’t. He was even more shocked.


This is the mirror image of conversations I’ve had with software professionals about drug development projects. They marvel at the large amount of upfront planning and the detailed timelines, and snort at the “wasted time” the teams spend producing those timelines.

Which group is right? Both are. The reason these two types of projects use such wildly different processes is that they have wildly different risks.

To understand the differences, think about the largest mitigatable risk for each project. We have to limit this to mitigatable risks, because the largest overall risk to a drug development study is that the candidate drug just won’t work. There are things that project teams do to mitigate the impact of this on their overall program (e.g., have backup candidates), but in general, those things are outside of the main timeline that shows which studies are going to be run and when they will be run, so for the purposes of this post, we’ll ignore this risk.

For a drug development project, the largest mitigatable risk is probably delaying a study. At this point in the project, a study is a specific type of experiment, run in what is called a “validated” environment. This means that the experiments are conducted according to the rules of “Good Laboratory Practice” (for animal studies) and “Good Clinical Practice” (for clinical studies). The material used in the studies must have been produced using “Good Manufacturing Practices.” All of these rules are necessary to ensure that the results are meaningful for a submission to the regulatory agencies (the US FDA and the European Medicines Agency).

Studies are very expensive and often need to be scheduled months in advance. Delaying a study can cost hundreds of thousands of dollars, and it can also jeopardize other studies that are planned further along in the project. In addition, there is a ticking clock on all of this: the patent clock. The company has probably filed for patent protection on the “chemical matter” (i.e., the drug candidate) well before the trials start. The patent only gives twenty years of exclusivity, so the more time the drug candidate spends making its way through the trial process, the less time the company has to recoup its investment.

In small companies, there is an additional ticking clock: the runway. This is the amount of time the money in the company’s coffers will allow the company to stay in business. The company needs to either secure more funding or find an “exit strategy” before the money runs out.

Given all of this, you can hopefully see why drug development projects spend a fair amount of time planning out their studies and have a well-defined timeline for these studies, with defined intermediate milestones. This allows them to mitigate the risk of delays. For instance, if a potential delay in manufacturing is identified early enough, the team may be able to add more resources to speed things up, or they may be able to redesign their planned studies to use less material.
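The critical path analysis the executive asked about at the start of this post is the formal version of this planning: given the studies, their durations, and their dependencies, find the longest dependency chain, because a delay to anything on that chain delays the whole project. Here is a minimal sketch in Python; the study names, durations, and dependencies are invented for illustration:

```python
# Critical-path sketch: earliest finish times via a topological pass,
# then walk back along the longest chain. Names and durations (in
# months) are invented for illustration only.
from graphlib import TopologicalSorter

durations = {"manufacture": 4, "tox_study": 6, "phase1": 8, "phase2": 14}
depends_on = {
    "manufacture": [],
    "tox_study": ["manufacture"],
    "phase1": ["manufacture", "tox_study"],
    "phase2": ["phase1"],
}

def critical_path(durations, depends_on):
    finish = {}  # earliest finish time for each study
    via = {}     # predecessor on the longest chain into each study
    for study in TopologicalSorter(depends_on).static_order():
        preds = depends_on[study]
        start = max((finish[p] for p in preds), default=0)
        finish[study] = start + durations[study]
        via[study] = max(preds, key=finish.get) if preds else None
    # Recover the path by walking back from the latest-finishing study.
    study = max(finish, key=finish.get)
    path = []
    while study is not None:
        path.append(study)
        study = via[study]
    return list(reversed(path)), max(finish.values())

path, total = critical_path(durations, depends_on)
print(path, total)
# → ['manufacture', 'tox_study', 'phase1', 'phase2'] 32
```

With these made-up numbers, the chain through the toxicology study is the critical path at 32 months, so the mitigation options from the paragraph above (adding manufacturing resources, redesigning studies to use less material) are exactly moves that shorten or route around that chain.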

OK, so what about the other group of people? What is the largest mitigatable risk for software projects? I would argue it is changing requirements, particularly requirements relating to the match between the software’s design and the users’ natural work processes. Life does not freeze while the team is building software. People continue to work, and their needs may change. It is not uncommon at all to deliver software that works exactly as intended, only to discover that it no longer meets the needs of its users.

This is particularly true for software written for fast-changing industries like science. The entire point of doing science is to learn new things, which often leads to changing how you do your experiments, which often leads to changing your requirements for the software you use to analyze your data or integrate it with other data.

I am sure that similar sentences could be written for other industries, too. Innovation is great and necessary—and it kills software.

So, if you’re running a software team that is trying to produce tools in this sort of environment, it is completely natural that you would look for methods that help you mitigate the risk of changing requirements. One such method is to “release early, release often,” which is one of the fundamental tenets of agile processes. These teams don’t spend time producing detailed timelines because those timelines would always be completely out of date. The more important thing is to have a good list of features to develop, prioritized by the users. This is called a backlog, and it is designed to be constantly reprioritized.

I think you can learn a lot from these two examples, no matter what industry you’re in. If you have a project and want to bring some sort of process to the management of it, you need to make sure that you pick the process that is best suited to help you mitigate the largest risks on your type of project. All too often, people simply borrow a process without thinking through whether or not it is appropriate to their type of project.

I have witnessed a depressing number of attempts to “bring project management to discovery” in the pharma/biotech industry that import some project managers from the drug development division and make no attempt to tailor the processes to the unique needs of the discovery environment. Discovery is the part of drug discovery and development that happens before a drug candidate is identified. It encompasses research into disease processes to identify specific targets for drugs, research into how these targets function and what sorts of chemicals can be used to disrupt their function, and research into what else those chemicals might do if they are given as a drug, among other things.

Broadly speaking, the studies needed are much less well-defined than those needed in the drug development phase. Most importantly, the risks are much different. The largest mitigatable risks are that the team will find a compound that inhibits the desired target, but that this will either not have the desired effect in the intact system (i.e., the human body) or it will have some additional, undesirable effect that renders the compound useless as a drug. These risks are related to the incompleteness of our understanding of how the human body functions, and cannot be avoided. Project teams put a lot of effort into developing a research strategy that mitigates them as well as they can, while also keeping their eye on those ticking clocks I mentioned above.

These risks actually look a lot like the risk of building software that doesn’t do what the users need. Both types of risk are related to incomplete information, and both are generally mitigated by allowing the team to respond to new information as it comes in and trying to design the overall project so that the biggest unknowns are resolved first. In software, this means building the most fundamental part of the tool first, so that users can provide feedback on your overall design before you spend too much time on the details. In drug discovery, this means running the experiments that answer the highest value questions as early as you can. Therefore, I suspect discovery teams would get a lot more value from borrowing from the agile methods used in the software industry than from borrowing from the methods used by development teams in their own industry.

There are examples of the reverse, too. If I think about software written for a well-defined process where the cost of a bug could be catastrophic—for instance, software that runs medical equipment—I’d much rather that team use a process with more upfront analysis and detailed timelines that ensure adequate time is built in for identifying and testing edge cases than a standard agile process. For this sort of software, quick response to changing requirements could literally be deadly if that response does not allow time for the team to explore and test edge cases. This is one of the reasons that I worry a bit about how much the “always be releasing” mentality has become accepted as dogma in the software industry. It is fine (and maybe even wonderful) for ecommerce software and other software where the cost of a bug is some lost time for a user. It is not fine (and maybe even catastrophic) for software where the cost of a bug might be someone’s health or even life.

Of course, most of us don’t work on such high stakes projects. But no matter what type of project you work on, there are costs to having a mismatch between the risks you need to mitigate and the types of risk the process you are using is designed to mitigate. As you think about what sort of process to use, think first about what sort of risks your project faces, and I think you’ll have a happier and more productive team.

Image: “Red and blue pill” by W.carter – Own work. Licensed under CC BY-SA 4.0 via Commons.
