Shiny Use Cases within Pharma
Eric Nantz sat with us recently to walk through how he has seen Shiny create value in the clinical trials process. This blog post shares his insights, adding context and color, along with Posit’s recommendations for how organizations can best leverage Shiny in their workflows.
Eric Nantz is the Director of the Statistical Innovation Center at Eli Lilly and Company. His team of statisticians supports clinical teams with the design of clinical trials, bringing innovation and automation into the analysis and presentation of clinical trial data.
His ultimate goal, like the goal of thousands of other life scientists and the agencies that regulate their work, is to improve the process of getting safe and effective therapeutics into the hands of patients who need them.
Clinical trials are the central process by which new drugs and therapies demonstrate their safety and efficacy and gain approval for distribution. In these controlled experiments, researchers randomize patients into either a control group or a treatment group that receives the novel therapy under consideration. The clinical trial process helps establish whether this new treatment is better in efficacy and safety than a placebo or the standard of care.
Just getting a single clinical trial off the ground has many moving parts. Before the trial starts, the team needs to find ideal sites to run the trial, identify key outcomes to measure that are meaningful to patients and payers, document the protocol, prespecify the analytical methodology, set up data management teams and processes, and work with quality teams to ensure there are no inconsistencies or issues in this process. This only scratches the surface. A single clinical trial is a complex process involving numerous teams, important statistical considerations, and hard choices.
And it is extremely difficult to find beneficial treatments. Discovery teams consider many potential treatments, few of which will ultimately be considered for testing with people. There may be multiple clinical trials across different disease states, or even at different time points within one disease state, each with its own dedicated team, all running in parallel. Many trials fail to show efficacy; others fail due to side effects, safety concerns, or other issues. The vast majority of treatments will fail the development stage, so it is vital to iterate quickly.
As trials and other tests generate vast amounts of complex data, medical teams must promptly analyze and interpret the results. Investigators want to rapidly visualize, summarize, and apply appropriate statistical tests to the data. The faster these results can be analyzed and reviewed by quality and compliance teams, the faster medical domain experts and others can determine a recommendation to seek regulatory approval.
Once clinical trial data are submitted to regulators, it’s a matter of working with them directly to answer all their questions and address all their concerns satisfactorily. If everything goes well, it is only then that a new therapeutic becomes available to the public.
Life science organizations constantly seek ways to expedite processes, alleviate coordination workloads, access helpful tools and methodologies, and enhance safety while reducing risk. Anything that can ultimately reduce the time it takes for life-saving treatments to reach patients in need is highly sought after. As we will explore, Shiny has proven to be a valuable tool in facilitating these objectives.
Shiny in Pharma
Shiny was inspired by the R package manipulate, a tool created for a professor seeking a better way to demonstrate statistical concepts to students. Instead of showing these in a console or IDE or collecting static results into a report, he sought interactive elements like buttons and sliders to update plots, tables, or statistical results. This dynamic GUI interaction can more quickly and easily show a relationship between input and statistical output. Students used it to explore these concepts themselves.
It’s fitting, then, that Shiny became so popular in life science. Shiny makes it easy to build interactive web apps straight from R. These can be simple webpages or complex websites or dashboards. At first, it may seem counterintuitive for people with analytical or statistical domain expertise to develop web apps with Shiny. But at the end of the day, their job is to help others understand statistical results, and these are often better communicated via the interaction and collaboration that Shiny empowers.
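To make that concrete, here is a minimal sketch of the kind of app Shiny makes possible in a few lines of R: a slider drives a histogram that redraws automatically whenever the input changes. (This is an illustrative example using a built-in dataset, not an app from Eric's team.)

```r
library(shiny)

# A slider input and a plot output: moving the slider
# immediately re-renders the histogram.
ui <- fluidPage(
  sliderInput("bins", "Number of bins:", min = 5, max = 50, value = 25),
  plotOutput("hist")
)

server <- function(input, output) {
  output$hist <- renderPlot({
    hist(faithful$waiting, breaks = input$bins,
         main = "Old Faithful waiting times", xlab = "Minutes")
  })
}

shinyApp(ui, server)
```

The whole app is the `ui` layout, the `server` logic, and one call to `shinyApp()` — no HTML, CSS, or JavaScript required.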
One apocryphal story from the early days of Shiny is that it was named after “the shine” in the movie The Shining. The shine refers to one’s psychic ability to communicate with others and see things that have happened in the past or will occur in the future. This rumor turned out to be untrue. Joe Cheng, the author of Shiny, has said it is a reference to “Shiny” in the 2002 TV show Firefly, which borrows from the old western usage, where it just means “good.”
Shiny for Exploratory Data Analysis
For data scientists performing exploratory data analysis, using an IDE such as RStudio, a notebook like Jupyter, or even spreadsheets can be an effective and fun home for this work. However, exploring data via an interactive interface is often useful.
Some analyses produce large amounts of static output to review. But let’s face it, thumbing through static output can be as engaging as watching paint dry. Shiny can provide a friendly front end to explore that output: for example, an app with inputs like checkboxes, dropdowns, or selectize controls that immediately produce outputs like summary tables and plots. A data scientist can work through numerous parameter options without producing an immense static output.
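As a hedged sketch of this pattern (the dataset and transformation are illustrative), an exploratory front end might pair a variable picker with a live summary and plot:

```r
library(shiny)

# Illustrative exploratory app: pick a variable and immediately
# see a summary table and a plot, instead of paging through
# static output for every parameter combination.
ui <- fluidPage(
  selectInput("var", "Variable:", choices = names(mtcars)),
  checkboxInput("log", "Log scale (log1p)", FALSE),
  verbatimTextOutput("summary"),
  plotOutput("plot")
)

server <- function(input, output) {
  values <- reactive({
    x <- mtcars[[input$var]]
    if (input$log) log1p(x) else x  # log1p tolerates zero values
  })
  output$summary <- renderPrint(summary(values()))
  output$plot    <- renderPlot(hist(values(), main = input$var,
                                    xlab = input$var))
}

shinyApp(ui, server)
```

Because both outputs depend on the same reactive expression, every input change updates the summary and the plot together.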
It is at this point in the exploratory analytic journey that many first discover the value of Shiny: you can produce an interactive data application using the same toolkit you were already using to explore your data. These first apps are often extensions of the custom scripts you wrote for yourself.
Shiny for Collaboration
Many analyses, whether simple or complex, often start with a lone analyst working on a single machine. The people doing this work can experience a sense of isolation as they create their own statistical programming and share static presentations and reports with their colleagues. All review questions need to go back to that individual for answers. They can feel like they are a bottleneck in the process of synthesizing trial data into the decision of how to proceed.
Shiny allows data scientists to bring these processes off their own machines and into the hands of the rest of the organization. Eric sees Shiny developers producing applications where medical, regulatory, and commercial leadership now all sit in the same review meeting, quickly turning around their questions and ideas via one interface. Now, these data customers can drill down and answer many of their questions independently, without an extensive technical understanding of the underlying data cleaning process or how their desired statistical models are applied in code.
Building these kinds of self-service applications introduces many challenges. One significant challenge is the potential for non-experts to misinterpret statistical results, which can be exacerbated by misdesigned applications that make such misinterpretations too easy. Given the highly-regulated nature of the life sciences industry and the need to protect patient data and proprietary research, organizations are understandably keen to safeguard their confidential data and results. But many tools can help you manage these risks.
Shiny for Automation
One of the most important ways Shiny has proven valuable to life science teams is in helping automate the analysis of clinical trial data. Routine processes can be made repeatable, with a Shiny application now a front-end interface where users input data and other parameters to produce standard results.
For example, traditionally, as new data came in from a study, an analysis team would have to build up a set of scripts from scratch to produce statistical results presented in tables and plots. The same analytical steps would often be rerun with new data or on other studies. Shiny developers build general-purpose tools to automate this previously bespoke work, so multiple teams can make use of the same Shiny application. That puts the power to perform these analyses into more hands, saving time and helping standardize review processes.
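A minimal sketch of that automation pattern might look like the following, where `run_standard_analysis()` is a placeholder for a team's own validated, reusable analysis function (not a real package function):

```r
library(shiny)

# Illustrative automation front end: users upload a new study
# extract and choose a prespecified analysis; the app reruns the
# same standardized pipeline that used to be rebuilt by hand.
ui <- fluidPage(
  fileInput("data", "Upload study data (CSV):", accept = ".csv"),
  selectInput("analysis", "Standard analysis:",
              choices = c("Demographics", "Adverse events")),
  tableOutput("result")
)

server <- function(input, output) {
  study <- reactive({
    req(input$data)                   # wait until a file is uploaded
    read.csv(input$data$datapath)
  })
  output$result <- renderTable({
    # run_standard_analysis() stands in for a team's validated,
    # general-purpose analysis code
    run_standard_analysis(study(), input$analysis)
  })
}

shinyApp(ui, server)
```

The key design choice is that the analysis logic lives in a reusable function outside the app, so the same qualified code can serve many studies and teams.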
And bringing quality assurance and compliance teams into the Shiny development process can save even more time and improve confidence in the integrity of the results. A qualified Shiny application can speed up and ease highly standardized review processes.
Shiny as a Front End to High-Performance Computing Environments
Another use case we increasingly see is using Shiny as a front-end to offer access to cloud or other high-performance computational resources.
Some analyses in clinical trial workflows require significantly more computational power than is available on a single desktop machine. Eric shares one example of this kind: running simulations of trials under consideration. Simulations help quantify the uncertainty around potential designs, which inform recommendations for how clinical trials should be run. For example, they may offer insight into the optimal number of treatment arms, how long a study should run, and what outcomes should be assessed. But this simulation may be very complex and require many iterations.
Analytics teams will therefore leverage high-performance computing technologies – hosted locally or on a cloud platform – to perform these tasks. These resources are often challenging to use directly, but Shiny applications can serve as light, built-for-purpose interfaces that provide access to computing resources while reducing the technical overhead.
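One way such a front end could be sketched is with the future and promises packages, which let a Shiny app hand heavy work to a compute backend without blocking the interface. Here `simulate_trial()` is a hypothetical placeholder for a team's simulation code, and `plan(multisession)` stands in for a plan pointing at a real cluster:

```r
library(shiny)
library(future)
library(promises)

# In production this plan could target an HPC scheduler or cloud
# backend; multisession is used here only as a stand-in.
plan(multisession)

ui <- fluidPage(
  numericInput("n_sims", "Number of simulated trials:", 1000),
  numericInput("n_arms", "Treatment arms:", 2),
  actionButton("run", "Run simulation"),
  verbatimTextOutput("status")
)

server <- function(input, output) {
  results <- eventReactive(input$run, {
    n    <- input$n_sims   # capture reactive values before the future,
    arms <- input$n_arms   # since futures run outside the reactive context
    future({
      # simulate_trial() is a placeholder for a team's simulation code,
      # dispatched to a worker instead of the app's own process
      replicate(n, simulate_trial(arms = arms))
    })
  })
  output$status <- renderPrint({
    results() %...>% summary()  # promise pipe: render when the job finishes
  })
}

shinyApp(ui, server)
```

The app stays responsive while the simulation runs elsewhere, which is exactly the appeal of a light interface in front of heavyweight compute.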
Shiny for Regulatory Submissions
It’s still early days, but there are teams in life science companies and in regulators seeking to leverage Shiny applications in the review of data for clinical trial submissions.
It is well known that R is increasingly relied upon in trial submissions, but Shiny is also proving valuable to the process. A 2018 FDA Regulatory Science Report noted, “A Shiny R-based application was developed to facilitate the usability of received PK data for reviewers. This application can save 4-8 hours per study for reviewers for managing PK data. Its user-friendly web interface enables the application as an easy-to-use tool for most reviewers”. Mentors at the FDA have recently shared with students some of the Shiny applications used at the agency.
When well executed, Shiny applications can save reviewers time and streamline the review process. We recently wrote on the Posit blog about the efforts of the R Consortium’s R Submissions Working Group to trial and publicly document the process to include Shiny in an FDA clinical trial submission.
Bright and Shiny
Shiny provides value to life scientists in many ways. The ability to extend R, R packages, and your own code into interactive visualizations and full web applications is incredibly powerful. Eric has shown how teams leverage Shiny to aid collaboration, automate routine processes, streamline review processes, connect to high-performance computing technologies, and even support FDA clinical trial submissions. And we believe that Shiny developers are just scratching the surface.
At the end of the day, life scientists like Eric Nantz strive to improve the process of developing novel therapeutics, demonstrate their safety and efficacy, and get them into the hands of patients as quickly as possible. The people here at Posit PBC contributing to Shiny are incredibly pleased to make a small, positive contribution to this important process.
The future of Shiny is clearly quite bright!
And as Eric has described so well, Shiny is a great framework for building interactive web applications. As individual data scientists bring tools like Shiny to the data science teams they are a part of, they often need additional, enterprise-friendly features to support their work. That is where Posit’s professional products come in. They help teams scale, secure, and sustain their open-source data science workflows.
Posit Workbench is where teams can collaboratively build open-source data science projects using Shiny or other frameworks at scale. It supports both R and Python, giving data scientists access to all the development environments they love, including the RStudio IDE, Jupyter, and VS Code. Workbench provides enterprise-friendly features like centralized management, security, and commercial support.
Data scientists use Posit Connect to automate time-consuming tasks with code, distribute custom-built tools and solutions (built using Shiny or other frameworks) across teams, and securely share insights with decision-makers. With Connect, you can easily and securely publish and share Shiny applications and other interactive applications, documents, notebooks, and dashboards.
Posit Package Manager is a package repository management server to organize and centralize R and Python packages across your organization. Use it to provide full mirrors of CRAN, Bioconductor, and PyPI, share internal packages, and restrict access to potentially harmful public packages by curating your own custom repository with only the packages you need. This helps ensure your Shiny applications (and other data products) are fully reproducible over time and helps ensure your development and deployment environments always use the same package versions, mitigating unexpected errors due to package changes.
At Posit, we have a dedicated Pharma team to help organizations migrate and utilize open source for drug development. To learn more about our support for life sciences, please see our dedicated Pharma page, where you can book a call with our team.