The Opportunities and Threats to Pharma of Buying Off-the-Shelf AI Solutions

Artificial Intelligence is the present and the future for companies that want to remain competitive. Companies recognize the competitive advantages it brings, including cost and time savings, greater efficiency, better decision-making and better customer service.

However, many companies do not have the budget approval, or the inclination, to develop fully custom systems. In these cases, they turn to off-the-shelf solutions. You buy or subscribe to one of these off-the-shelf solutions and your job is done. Right?

Possibly, but… and there is a big but.

I am all about democratising AI in healthcare and not reinventing the wheel. If a great off-the-shelf solution exists that solves your identified business issue, is built with pharma-specific needs in mind, is white-box, and is compliant with pharma regulations, then I will definitely recommend it rather than recommend a build. Unfortunately, the majority of off-the-shelf tools do not meet that bar for pharma. Those among us who do not understand the pros, cons and pitfalls can be led into dangerous (and sometimes litigious) waters if they are in pharma. Many of these tools, when not designed for pharma, fail to produce accurate results for pharma use cases. This is because pharma have unique needs, and most off-the-shelf tools lack pharma's contextual data and therefore fail to produce relevant results. At least, not without some serious data wrangling.

Without at least a white-box understanding of the off-the-shelf AI tools they are using, companies could face dire legal and financial consequences, especially since the introduction, in May 2018, of the European Union's General Data Protection Regulation (GDPR), along with the California Consumer Privacy Act (CCPA) and Australia's health data privacy legislation. Although these cover only three regions, more are likely to follow suit soon. There was a recent case in which a pharma company that bought an off-the-shelf tool without understanding how the AI works found itself in breach of code, as well as facing legal issues over a reported data privacy infringement.

Although there are more and more AI-as-a-service tools about, and many of them are very powerful, they should only be chosen when they are the right fit for the job to be done, for your data requirements and technical infrastructure, and for pharma's regulatory and compliance requirements. It must be borne in mind that each company has different needs, different data, different infrastructure, different processes, and different customers (they may all be physicians or patients, but, as you know, across different specialties, therapy areas and countries the needs can be very different). For example, if a pharma company wanted to understand which physicians will respond to specific messages, or which patients are likely not to adhere to their medication, the training data would be unique to that pharma company, as would the algorithms. The same applies in many AI areas in pharma and other sectors.
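
To make that concrete, here is a minimal sketch, in Python, of what training a company-specific adherence model might look like. The file name, column names and features are all hypothetical; the point is simply that both the training data and the resulting model are unique to the company that owns the data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical internal extract from the company's own patient support programme
df = pd.read_csv("patient_programme_data.csv")

# Hypothetical features; every company will have different ones available
features = ["age", "refill_gap_days", "nurse_calls", "copay_level"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["non_adherent"], test_size=0.2, random_state=0
)

# The model is trained on this company's data only, so its behaviour reflects
# this company's patients, markets and support programme, not anyone else's
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Hold-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```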

So let's examine a few of the dangers, for pharma, of buying off-the-shelf solutions that are not specifically designed for pharma and its unique challenges.

Data

AI is heavily dependent on big data: algorithms can only find specific patterns or predict outcomes once they have been trained on large, relevant data sets. If your data is not public data, then in most cases it is not likely to work well with an off-the-shelf AI tool that is not designed for pharma-specific data. The only sources of big data that come to mind that can be used easily with off-the-shelf tools, without serious data wrangling, are social media data (very big data), PubMed data (big data) and language data (with the caveat that, depending on how technical or medical the language data sets it was trained on were, you may still need some additional computational linguistics if you are focusing on a specific disease category or therapy area).
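
A simple way to see why contextual data matters: a text model whose vocabulary was learned on general-purpose text has no representation for pharma-specific language at all. The sketch below uses made-up example sentences and a basic vectoriser, rather than any particular vendor's tool, purely to illustrate the idea.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Vocabulary learned from general business text (illustrative examples only)
general_corpus = ["the product launch went well", "customers liked the new app"]
# Vocabulary learned from pharma text (illustrative examples only)
pharma_corpus = [
    "adverse event reported after second dose",
    "progression-free survival improved in the treatment arm",
]

general_vec = TfidfVectorizer().fit(general_corpus)
pharma_vec = TfidfVectorizer().fit(pharma_corpus)

new_doc = ["adverse event after dose escalation"]
# Count of terms each model recognises in the new document
print(general_vec.transform(new_doc).nnz)  # 0 - the general model sees nothing it knows
print(pharma_vec.transform(new_doc).nnz)   # 4 - the domain model recognises most terms
```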

However, that is not the only data minefield to be careful of. If the data is not processed appropriately (structured, sorted, cleaned, features extracted, etc.) then I am afraid we have the 'garbage in, garbage out' scenario. As a case in point, I signed up for an IBM Watson platform because it has an API for a data source I was using. I connected my data to the API and the result was meaningless. Why? Because the data had not been sorted, structured, cleaned and so on. I was simply testing the claim that the data could be plugged and played – and although it may have looked that way on the surface, the results proved this was certainly not the case. The platform algorithms were no doubt sound, but the data had not been prepared adequately for it to be of use as a 'plug and play'. However, if your team are fully grounded in AI and in how to prepare the data appropriately, then some of these tools can be very advantageous for part of the process.
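
For illustration, here is a minimal sketch of the kind of preparation that typically has to happen before data goes anywhere near an external tool's API. The file and column names are hypothetical; the steps (deduplication, handling missing values, fixing types, simple feature extraction) are the point.

```python
import pandas as pd

# Hypothetical raw export of field interaction data
raw = pd.read_csv("hcp_interactions_raw.csv")

clean = (
    raw.drop_duplicates()
       .dropna(subset=["hcp_id", "interaction_date"])  # hypothetical required fields
       .assign(
           interaction_date=lambda d: pd.to_datetime(
               d["interaction_date"], errors="coerce"
           )
       )
       .dropna(subset=["interaction_date"])  # drop rows whose dates could not be parsed
)

# A simple engineered feature: days since the previous contact, per physician
clean = clean.sort_values("interaction_date")
clean["days_since_last_contact"] = (
    clean.groupby("hcp_id")["interaction_date"].diff().dt.days
)

# Only now is the data worth sending to an off-the-shelf tool
clean.to_csv("hcp_interactions_prepared.csv", index=False)
```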

Legal and regulatory compliance

Those who are new to the AI space, and naïve about what it takes to do AI, are treading in dangerous waters when they buy any off-the-shelf tool not specifically designed for a highly regulated industry such as pharma. A recent case in pharma involving an off-the-shelf AI solution, reported only this month, highlights the point, and in fact is what prompted me to write on this topic. There are numerous legal and regulatory pitfalls to be wary of. One is the location of the data. If you are using a big data database of patient or physician data (e.g. claims data or IQVIA data), it is typically a requirement that it does not leave the country in which it was created. However, if the off-the-shelf tool is hosted in another country and your data gets uploaded into the tool, it is leaving your country, and hence putting you in breach. There are also all the data privacy laws to be wary of. On top of these, we have the challenge of black-box algorithms, which are becoming more and more of a no-go for pharma given the transparency required around how decisions are made.

That is why many of the great tools out there are not being used in pharma. It is not because pharma have not found them, but because they come with their own set of challenges in this sector. You need to be very careful here, especially now that there is a precedent.

Still not solving the challenge

I recently worked on several pharma projects in which the pharma client had an array of off-the-shelf AI tools they were using. But they were still stuck, spending even more time doing their work and not getting the results they hoped for. The tools they had bought were no doubt all great tools. But having great AI tools is not enough. You need to know what the right tool is for the job required. That requires an understanding of both the job to be done and the pros and cons of each AI tool for that job for your specific needs (what data you have, your tech environment, etc.). For one of these clients, we are building a tech stack to bring their tools together in a way that works for their needs; only two of the plethora of tools they have will be used in it, but they will deliver the originally desired result: faster work, better results. In another pharma example, the tool was made for pharma and for the data, and although all the data has been integrated, cleaned, structured and sorted, there are still numerous data manipulations that need to be done with AI to make the data appropriate for the AI tool to actually unleash its power.

Having the right tool for the right job and the right data is only the first step of the battle we face in pharma.

When you buy an off-the-shelf tool, you are relying on black-box algorithms without understanding the logic behind the decision-making – as this is the vendor's secret sauce in many cases. There is now at least one legal case in healthcare where this has backfired enormously. Tools that were not designed for pharma and the regulatory constraints we operate under must be approached with caution, thorough planning and forethought.
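
By contrast, 'white box' can be as simple as a model whose full decision logic can be printed, reviewed and archived. The sketch below uses synthetic data and hypothetical feature names purely to show what that kind of transparency looks like in practice; it is not a claim about any specific vendor's product.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for an internal dataset; feature names are hypothetical
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["visits", "samples_given", "emails_opened", "tenure_months"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every rule the model uses is visible and can be reviewed by compliance
print(export_text(tree, feature_names=feature_names))
```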

Not actually AI

And finally, one point to consider is what is actually in the tool. Because 'AI' is the latest 'must-have', we are now seeing tools being sold as AI that have no AI in them at all. I saw a tool being sold as an 'AI-powered tool' recently and investigated, as some of the claims appeared odd to me. After a little questioning of the company, I found that there was no AI in their tool whatsoever, despite it being described as AI-powered on their website and in their marketing claims. When pushed on this point, they said 'it is in the plan'! Unscrupulous companies are relying on the lack of knowledge of their potential clients to sell inferior products that have no AI in them at all as 'AI-powered'.

Conclusion

Off-the-shelf AI tools are a useful and convenient way to get a specific task done fast and cost-effectively and to hit the ground running with AI. However, if you are in pharma, you need to weigh the pros and cons of using them. And don't expect them to be smart about your specific issues! That only comes from your unique data and how you train the AI algorithms on it.
AI is important for pharma, but there is no quick fix to getting there. It still requires customization, rigorous training and experience to deliver measurable value. Organizations that rely on black-box off-the-shelf AI tools are treading on dangerous ground when they do not understand what needs to be done to the data to get the power out of these tools, or the underlying logic, especially when things go wrong. And they can and do. On top of that, with the newer data privacy and compliance regulations, there must be transparency on how decisions have been made in order to meet compliance requirements. With many off-the-shelf tools, this cannot always be demonstrated.

Found this article interesting?

Would you like help in planning a strategic approach to AI that helps your customers while achieving business objectives for your company? Our easy on-demand training shows healthcare teams how to plan an AI solution from a commercial point of view: it is business focused (not math and tech focused), yet gives you enough understanding of the math and tech to guide those teams. With it, you will not only provide a strong customer experience but also achieve your business objectives. We can help you plan the strategy, or we can train you to do this yourself.

Our training (Artificial Intelligence: From Understanding to Strategy to Implementation for Healthcare) covers everything non-tech folk need to know, from the fundamentals of deep learning to the most effective applications of machine intelligence. In addition, Eularis training demonstrates the processes executive and management teams need to follow, step by step, to make use of the incredible capabilities of AI.

For more information, contact Dr Andree Bates abates@eularis.com.
