How to Prepare for an Unpredictable Generative AI Future in Pharma

Generative Artificial Intelligence (AI) is a transformative technology that has been reshaping the landscape of artificial intelligence since 2014. Generative AI models can produce human-like text, images, audio, video, code and other content by learning patterns from vast amounts of unstructured data. Recent models such as GPT-3.5 and GPT-4 have attracted enormous interest, with many prospective use cases in pharma and other industries.

Although generative AI can be used throughout the pharma value chain, this article focuses on the discovery, R&D, clinical and medical end of it. Generative AI models have the potential to revolutionize drug discovery, streamline clinical trials, improve patient care, and enhance many other aspects of pharmaceutical research and development, and they are already being used successfully for several of these purposes. They can analyze extensive datasets, formulate hypotheses, and support data-driven decision-making, making them valuable tools for researchers, clinicians, and pharmaceutical manufacturers.

Nevertheless, the promise of generative AI in the pharmaceutical domain also introduces a level of uncertainty that companies and professionals in the field must navigate skilfully. As we explore the applications, opportunities, and challenges that generative AI presents to the pharmaceutical industry, it becomes clear that effectively harnessing this technology demands a deliberate and strategic approach to maximize its benefits while mitigating potential risks.

Understanding the Opportunities with Generative AI


Research and Development

Generative AI in pharma offers enormous potential to expedite drug design and curtail associated research and development expenses. Machine learning models built on deep neural networks can analyze extensive scientific data, including molecular structures, chemical properties and clinical trial results, and propose novel molecules with potential drug-like attributes. This computational approach enables the evaluation of millions of potential molecules in silico, delivering results at a fraction of the cost and time required by traditional wet-lab experiments.
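To make the idea of in silico screening concrete, here is a deliberately simple sketch that filters hypothetical candidate molecules using Lipinski's rule of five (molecular weight ≤ 500 Da, logP ≤ 5, at most 5 hydrogen-bond donors and 10 acceptors). The candidate names and descriptor values are invented for illustration; real pipelines compute descriptors from molecular structures with cheminformatics toolkits rather than hand-written dictionaries.

```python
# Toy in silico screen: keep only candidates that satisfy Lipinski's rule of five.
# All descriptor values below are invented for illustration only.

def passes_lipinski(mol):
    """Return True if a candidate meets all four rule-of-five criteria."""
    return (mol["mol_weight"] <= 500        # daltons
            and mol["logp"] <= 5            # octanol-water partition coefficient
            and mol["h_donors"] <= 5        # hydrogen-bond donors
            and mol["h_acceptors"] <= 10)   # hydrogen-bond acceptors

candidates = [
    {"name": "cand-1", "mol_weight": 342.4, "logp": 2.1, "h_donors": 2, "h_acceptors": 5},
    {"name": "cand-2", "mol_weight": 612.7, "logp": 6.3, "h_donors": 4, "h_acceptors": 9},
    {"name": "cand-3", "mol_weight": 487.9, "logp": 4.8, "h_donors": 1, "h_acceptors": 8},
]

drug_like = [m["name"] for m in candidates if passes_lipinski(m)]
print(drug_like)  # ['cand-1', 'cand-3']
```

The point of the sketch is the economics the paragraph describes: a filter like this costs microseconds per molecule, so millions of candidates can be triaged computationally before any wet-lab work begins.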

By way of example, here are some generative AI model families currently used in drug discovery and R&D:

Generative Adversarial Networks (GANs): GANs are widely used in pharmaceutical research for drug design and molecule generation tasks. GANs consist of a generator network that creates synthetic samples and a discriminator network that evaluates the authenticity of these samples, resulting in the generation of novel molecules with desired properties.
Recurrent Neural Networks (RNNs): RNNs are employed for sequential data generation, making them helpful in generating novel chemical structures or optimizing drug properties. RNNs can learn patterns from sequential data, allowing them to generate new sequences with desired characteristics.
Variational Autoencoders (VAEs): VAEs are used for drug discovery and optimization tasks. VAEs learn the underlying distribution of chemical structures and enable the generation of new molecules with specific properties. They are instrumental in exploring chemical space and generating diverse compounds.
Deep Reinforcement Learning: Deep reinforcement learning techniques are applied in drug discovery and optimization processes. These algorithms learn through trial and error and can generate molecules with desired properties by maximizing reward signals based on predefined objectives.
Transformer Models: Transformer models, such as the popular GPT (Generative Pre-trained Transformer) architecture, excel at sequence modelling, from natural language generation to molecular design. They can generate coherent, contextually relevant text, making them valuable for producing chemical descriptions and other drug-related writing.
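The sequence-generation idea behind several of the model families above (RNNs and transformers in particular) can be illustrated with a far simpler stand-in: a character-level Markov model that learns which character tends to follow which, then samples new strings. The training strings below are toy SMILES-like text invented for this sketch; this is an illustration of the learn-patterns-then-sample loop, not a real molecular generator.

```python
import random
from collections import defaultdict

# Minimal character-level Markov model as a stand-in for RNN/transformer-style
# sequence generation. Training strings are invented SMILES-like text.
TRAIN = ["CCO", "CCN", "CCCO", "CCCN", "CC(=O)O"]
START, END = "^", "$"

# Count character-to-character transitions across the training set.
transitions = defaultdict(list)
for s in TRAIN:
    seq = START + s + END
    for a, b in zip(seq, seq[1:]):
        transitions[a].append(b)

def generate(rng, max_len=12):
    """Sample a new string one character at a time from the learned transitions."""
    out, ch = [], START
    for _ in range(max_len):
        ch = rng.choice(transitions[ch])
        if ch == END:
            break
        out.append(ch)
    return "".join(out)

rng = random.Random(0)
print([generate(rng) for _ in range(3)])
```

Real generative models replace the transition table with millions of learned parameters, which is what lets them capture long-range structure (ring closures, valence constraints) that a one-character memory cannot.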

Preliminary estimates indicate that generative AI holds the potential to curtail the cost of drug discovery by a substantial 30-50%, potentially reducing it from the current $2.6 billion to under $1 billion. Furthermore, it stands to potentially trim development timelines by 5-10 years through the automation of synthesis and screening tasks.

Leading companies such as BenevolentAI, Exscientia and Insilico Medicine have already harnessed generative AI to expedite the design of potential drug candidates. Their success shows how AI at scale can reshape drug discovery and development, opening new frontiers for the industry.

Biomedical text generation and summarization tools

Large language models such as GPT-3.5 and GPT-4 have spearheaded the creation of biomedical text generation and summarization tools, presenting a wealth of applications for pharmaceutical research. These generative AI systems analyze extensive scientific literature and produce concise summaries that highlight key findings and conclusions from lengthy research papers and clinical trial data. This innovation holds immense promise for significantly enhancing researcher productivity.

Research suggests that AI summarization tools have the potential to reduce the time researchers spend on paper reviews by a significant margin of 2-5x compared to reading entire texts. This efficiency boost could result in a notable 15-30% increase in the number of papers researchers can assess within a given timeframe. Given the overwhelming volume of new biomedical publications each year, these tools serve as a crucial mechanism for researchers to cope with information overload. Moreover, they facilitate quicker literature reviews and meta-analyses, thereby expediting the identification of new avenues for drug development and combination therapies.
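The core idea of summarization can be shown in miniature with a frequency-based extractive summarizer: score each sentence by how often its content words appear across the document and keep the top-scoring ones. Production tools use large language models rather than this heuristic, and the example abstract and stopword list below are invented for illustration.

```python
import re
from collections import Counter

# Tiny stopword list for the sketch; real NLP pipelines use much larger ones.
STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "that", "with", "for"}

def summarize(text, n_sentences=1):
    """Keep the n sentences whose non-stopword terms are most frequent overall."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in top)

abstract = ("The trial enrolled 200 patients. "
            "The drug reduced symptoms in treated patients. "
            "Funding sources are listed in the appendix.")
print(summarize(abstract))
```

Even this crude heuristic surfaces the result sentence over the administrative one, which is the time-saving the 2-5x figure above refers to, applied at the scale of thousands of papers.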

Scientific reports and personalized treatment recommendations

AI holds enormous potential to automate various scientific writing tasks and provide tailored treatment guidance. Leveraging its ability to analyze extensive volumes of research data, generative models can synthesize information and generate initial drafts of scientific reports, literature reviews, and clinical study summaries. This technological advancement has the potential to liberate researchers from laborious documentation tasks, allowing them to allocate more time to experiment design and innovation. When supervised diligently, AI can also generate personalized treatment recommendations, matching patients with appropriate therapies based on their medical records, genetics, and case histories.

Pioneering initiatives indicate that in the future, AI could empower physicians to create customized care plans by simulating a patient’s probable disease progression and response to various treatments. As AI models continue to evolve, they may become instrumental in generating personalized risk assessments and surveillance schedules. Nevertheless, ensuring the safety, efficacy, and ethical alignment of AI recommendations will necessitate rigorous validation procedures. If executed responsibly, AI-generated reports and personalized guidance have the potential to expedite scientific discovery and enhance patient outcomes, marking a significant leap forward in the field of healthcare.

Preparing for the uncertainties and risks

As the pharmaceutical industry increasingly embraces generative AI, it becomes pivotal to address the uncertainties and potential risks associated with this transformative technology.

A primary concern involves the possibility of AI models generating unsafe drugs or providing inaccurate treatment recommendations. Instances have surfaced where AI algorithms, influenced by limitations in training data or inherent biases within the data, have produced flawed outputs.

For instance, a few AI-generated drug designs were discovered to be chemically unfeasible during laboratory testing. These cases underscore the crucial importance of implementing rigorous validation and testing procedures to minimize the risk of deploying ineffective (or worse, unsafe) solutions in pharmaceutical applications.

Intellectual property and data ownership issues

As generative AI systems progress and undertake more autonomous creative tasks, they raise new questions about intellectual property and data ownership.

For instance, when an AI model independently devises a novel drug molecule or authors a scientific paper, determining credit and IP rights becomes a complex issue – should these rights belong to the AI creator, its corporate owners, or the human researchers who supplied the training data?

Moreover, uncertainties surround the ownership of commercially valuable data used to train these models. If an AI learns from proprietary information and subsequently uses this to create an even better generative model for someone else, the issue of who holds rights over the AI’s expanded learning and newly generated content becomes complex.

As AI actively contributes to the innovation process, the formulation of clear policies regarding attribution, licensing, patenting, and the appropriate handling of confidential corporate information becomes imperative. Proactively resolving IP concerns is essential to encourage AI investments while safeguarding companies’ research assets and competitive edges within the field. International collaboration will likely play a significant role in establishing cohesive norms and rules surrounding IP for autonomous AI creators.

Strategies for rigorously testing and validating AI-generated outputs

Considering the potential risks of AI generating inaccurate (or unsafe) information, robust testing and validation of AI outputs are pivotal for their application in the pharmaceutical industry. Independent review of AI suggestions by multiple subject matter experts can catch implausible or questionable results before real-world use. This should also reassure the many in pharma who worry about losing their jobs to AI: roles will pivot, but human judgment will still be needed. Rigorous testing on historical data, including intentionally altered “adversarial” examples, serves to evaluate model robustness and performance in extreme cases.

Regulatory bodies might consider mandating companies using AI in drug discovery or clinical decision-making to develop evaluation frameworks involving staged pilots, randomized trials, and post-market surveillance. Transparency regarding the AI’s decision-making process is equally important for validation, employing techniques such as explanation mapping. Encouraging a careful collaboration between humans and AI, where AI serves as a decision aid by recommending options for human review and approval, can represent a judicious risk control approach.
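One way to operationalize the decision-aid pattern described above is a triage gate: any AI suggestion with a value outside the ranges observed in historical data is routed to expert review rather than accepted automatically, and nothing ships without a named reviewer. This is a hypothetical sketch; the field names, ranges, and labels are invented.

```python
# Hypothetical human-in-the-loop gate: AI suggestions outside historically
# observed ranges are queued for expert review instead of being auto-accepted.
HISTORICAL_RANGES = {"predicted_potency": (0.0, 1.0), "dose_mg": (1, 500)}  # invented

def triage(suggestion):
    """Flag a suggestion for expert review if any field is missing or out of range."""
    for field, (lo, hi) in HISTORICAL_RANGES.items():
        value = suggestion.get(field)
        if value is None or not (lo <= value <= hi):
            return "expert_review"
    return "routine"

def approve(suggestion, expert):
    """Record which expert signed off; approval always names a human reviewer."""
    return {**suggestion, "approved_by": expert}

s1 = {"predicted_potency": 0.8, "dose_mg": 50}
s2 = {"predicted_potency": 1.7, "dose_mg": 50}   # implausible potency value
print(triage(s1), triage(s2))  # routine expert_review
```

Range checks like this are the simplest form of the robustness testing discussed above; real deployments would add adversarial test suites and staged pilots on top of them.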

By implementing meticulous testing protocols, uncertainties surrounding AI can be effectively managed, thereby safely expediting research and discovery in the pharmaceutical realm.

Recommending guidelines for human oversight on critical AI decision-making

As AI systems advance, it is imperative to establish guidelines that ensure appropriate human oversight, especially in safety-critical tasks like clinical decision support. Although pharma has generally maintained human oversight of AI, regulators may begin to enforce more rigorously the requirement for designated subject matter experts to review AI-generated treatment recommendations before they are presented to healthcare providers. For new drug candidates proposed by AI, mandatory human validation of top prospects serves as a crucial check to identify any issues that technical evaluation alone might overlook. Humans should retain full discretion in accepting or rejecting AI proposals, and the option to seek a second opinion must be available. AI systems should also be designed with interpretability in mind, enabling human experts to scrutinize the rationale behind AI-generated suggestions.

Regulatory bodies can play a pivotal role in setting standards that mandate specialized training for professionals overseeing AI and establish procedures for handling uncertain cases. Professional medical associations may also issue best practice standards regarding human-AI collaboration and the continuous monitoring of model performance over time. With thoughtful safeguards in place, AI can complement and augment human expertise rather than replace it, thereby maximizing the benefits of automation while minimizing risks to patient safety.

Future of Jobs in Pharma

As AI capabilities continue to progress, some functions within pharmaceutical research and development are poised for partial or complete automation, freeing humans to focus on higher-value work that AI cannot perform. Generative AI shows potential in taking over routine tasks such as molecular simulation and virtual screening of chemical libraries, currently carried out by medicinal chemists and biologists. It might also automate segments of clinical trial management, including patient recruitment, monitoring, and basic data analysis.

However, higher-level critical thinking roles demanding human judgment, such as lead optimization of drug candidates, clinical trial design, and medical safety evaluation, are unlikely to be replaced by AI. Highly trained researchers and clinicians will remain essential to oversee AI systems, validate outputs, and make final decisions. While certain positions focused on repetitive tasks face the highest risk of automation, the overall impact of AI on pharmaceutical employment is projected to be small due to increasing R&D needs and the emergence of new roles managing and collaborating with AI.

In light of these changes, the emergence of new high-skilled roles will be more significant, with a focus on managing AI systems. As pharmaceutical companies increasingly adopt AI to streamline R&D, there will be strong demand for AI engineers capable of developing, training, deploying, and optimizing advanced generative models. Experts in data science and machine learning will be needed to analyze vast quantities of drug and patient data and tackle complex healthcare problems using AI.

With the rise of more autonomous systems, specialized roles overseeing AI safety, such as AI ethicists and model validators, will be essential. Regulatory, legal, and IP experts with AI proficiency will address challenges related to accountability, privacy, and innovation. As AI transforms pharmaceutical R&D, new high-paying technical jobs are expected to emerge at a faster rate than traditional roles are displaced, contributing to the overall industry’s employment growth driven by AI-powered discovery and medicine.

Recognizing the evolving job landscape, the importance of re-skilling and upskilling programs cannot be overstated. These initiatives can assist the current pharmaceutical workforce in adapting to the changing technological landscape. Training programs focusing on data analytics, AI applications in drug discovery, and computational biology can equip existing professionals with the necessary skills to thrive in an AI-integrated environment. Encouraging continuous learning and professional development will be crucial to ensure the workforce remains adaptable and prepared for the evolving dynamics of the pharmaceutical industry, facilitating a smoother transition into the era of AI-driven advancements.

Regulatory Challenges with Advanced Generative AI

As AI systems take on more autonomous roles in drug design and discovery, regulators will encounter new challenges in ensuring the safety of algorithmically developed therapies. Traditional drug approval pathways rely on stringent clinical testing and complete transparency throughout the development process.

However, the extensive use of massive datasets in complex, non-intuitive ways by generative AI models might limit the level of traceability and explainability. Regulators will need to establish new standards to validate the safety and efficacy of AI-suggested molecules and research proposals for human use based on the available evidence. Moreover, there’s a necessity to devise regulations governing AI systems themselves and ensure adequate oversight when models are updated or retrained over time.

As generative models continue to progress, creating scientifically and legally acceptable methods for certifying AI-developed drugs is expected to be one of the most significant hurdles. Addressing this challenge is vital to ensure that patients can fully benefit from the potential of AI-augmented drug discovery.

Challenges in attributing accountability and ownership for AI decisions

As AI systems gain more autonomy, determining accountability for their decisions becomes increasingly intricate. When an AI model independently designs or recommends a new drug target that carries unforeseen risks, pinpointing responsibility can be challenging. Was it due to a flaw in the AI’s training, a validation process oversight by its developers, or an inherent limitation of current technology? Regulators will need to collaborate with companies to establish appropriate accountability frameworks for overseeing AI systems and ensuring recourse in case of issues. New rules may be necessary to address the ownership of intellectual property generated by AI and liability for any medical harm or missed opportunities resulting from flawed AI suggestions that are approved or adopted. Like many emerging technologies, generative AI raises novel questions regarding responsibility, necessitating cooperative solutions from engineers, ethicists, lawyers, and policymakers.

With the continual evolution of AI in healthcare, regulatory bodies must engage experts from diverse domains—such as technology, healthcare, and law—to formulate comprehensive frameworks. These frameworks are crucial to ensure the safe, responsible, and ethical integration of advanced generative AI in the pharmaceutical sector. This proactive approach is essential to encourage innovation while upholding the highest standards of safety and efficacy in pharmaceutical products developed through AI technologies.


Conclusion

Generative AI holds tremendous promise for revolutionizing pharmaceutical R&D and reshaping patient care by expediting the discovery of new drugs and treatments. Through automating drug design, condensing scientific literature, and generating personalized recommendations at scale, AI offers the potential to significantly enhance research productivity and outcomes. Nevertheless, fully realizing these advancements while managing associated risks demands meticulous planning from both industry and regulators. As generative models assume more autonomous roles, it becomes crucial to proactively address uncertainties regarding accountability, safety validation, intellectual property, and workforce impacts.

Given the absence of precedents in this field, strategic collaboration among companies, researchers, policymakers, and other stakeholders becomes pivotal in establishing responsible best practices for the development and application of next-generation AI. Looking towards the future, pharmaceutical organizations that invest in advancing their AI capabilities while ensuring robust governance and oversight are poised to lead the industry into an AI-driven era, facilitating new medical innovations for the betterment of patients worldwide.


Found this article interesting?

At Eularis, we are here to ensure that AI and FutureTech underpins your pharma success in the way you anticipate it can, helping you achieve AI and FutureTech maturation and embedding it within your organisational DNA.

If you need help leveraging generative AI in your leadership plan to increase operational efficiencies and speed up revenue growth, then contact us to find out more.

We are also the leaders in creating future-proof strategic AI blueprints for pharma and can guide you on your journey to creating real impact and success with AI and FutureTech in your discovery, R&D and throughout the biopharma value chain, helping you identify the optimal strategic approach that moves the needle. Our process ensures that you avoid bias as much as possible and clear the IT security, legal and regulatory hurdles for implementing strategic AI in pharma that creates organizational impact. We also identify optimal vendors, and are vendor-agnostic and platform-agnostic, with a focus on ensuring you get the best solution for your specific strategic challenges. If you have a challenge and believe AI may be able to solve it but are not sure how, contact us for a strategic assessment.

See more about what we do in this area here. 

For more information, contact Dr Andree Bates abates@eularis.com.

Contact Dr Bates on Linkedin here.

Listen to the AI for Pharma Growth Podcast on 

Apple here

Spotify here

Contact Us

Write your name, email and enquiry and we will get back to you as soon as we can.