How to Spot and Prevent Deepfakes Spreading Medical Misinformation 

Back in 2021, people died and the US economy suffered estimated losses of $50-300 million each day, both consequences of people refusing COVID-19 vaccines due to the spread of medical misinformation.

If social media conspiracy theories alone can cause this much damage, imagine what will happen once more sophisticated technology like deepfakes enters the mix. With deep learning algorithms able to manufacture falsified healthcare videos at scale, the potential for damage is far greater.

Christopher Doss of the RAND Corporation recently led a study on the damage potential of scientific and medical deepfakes. The group found that 27-50% of people, depending on the population, could not distinguish the real videos from the fake ones.

“Imagine a scenario where the likeness of well-respected doctors is deepfaked to endorse questionable health products online,” warns Justin Marciano, CEO of a deepfake detection startup called IdentifAI. “While these fabricated endorsements accumulate millions of views, the implications are disastrous for the medical community.” We are already witnessing fake endorsements from deepfakes of celebrities; it is only a matter of time before this spreads to healthcare.

Deepfakes could also be used to spread false claims against pharmaceutical products, manipulate medical research data, or impersonate healthcare company executives to commit identity theft.

As a healthcare, biotech, or pharma company, how do you insulate your organization against these pervasive threats? At Eularis, we have helped companies like Pfizer, Roche, and Sanofi use strategic employee training and business consultation to understand and operationalize the impact of deepfakes. Here’s how we recommend preventing and combating deepfake attacks against your business.

What are Deepfakes?

A deepfake is a type of synthetic media in which a person’s face or voice is cloned using AI, making it appear as though they are saying or doing things that never actually happened. This is achieved through advanced deep learning networks and algorithms.

Imagine you have a jigsaw puzzle, where each piece represents a data point about a person’s facial expressions, voice, or movements. When creating a deepfake, AI algorithms, like deep learning networks, work to understand and replicate how these pieces fit together to represent someone’s identity.

  • The first step is gathering visual or audio data of the individual involved.
  • This data is fed into sophisticated AI models, typically Generative Adversarial Networks (GANs). They create the deepfake content, then verify it against real data, repeating the cycle until the models are properly trained (a simplified training loop is sketched in code after this list).
  • The last step often involves refining the output to remove any glitches or unnatural artefacts that could give away the fact that it’s a fake.
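
To make this generate-and-verify loop concrete, here is a heavily simplified PyTorch sketch of GAN training. The tiny network sizes, flat input vectors, and hyperparameters are illustrative assumptions only; production deepfake models are far larger and operate on aligned face crops rather than generic vectors.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two competing networks in a GAN.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_media: torch.Tensor):
    batch = real_media.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Step 1: the discriminator learns to separate real media from fakes.
    fakes = generator(torch.randn(batch, 100)).detach()
    d_loss = (loss_fn(discriminator(real_media), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Step 2: the generator learns to fool the discriminator, so its
    # output is effectively "verified against real data" each round.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, 100))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Repeating train_step over many batches is the iterative loop described
# above: the fakes improve until the discriminator can no longer tell.
```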

While it is possible to harness deepfake technology for positive impact, the threat of it being used for business identity theft, corporate phishing, and more is very real.

In fact, this is already happening. In a report from CNN, the Hong Kong police revealed how deepfakes were used to lure a business employee into remitting US$25.6 million to con artists. The worker was tricked by a multi-person video conference that appeared to include several high-level company executives, including the CFO, all of whom were deepfakes. How long before such attacks reach the healthcare and pharmaceutical industry, too?

The challenge today is that anyone can create a deepfake of anyone else with only a few seconds of video taken from social media, and the damage that can be caused is significant.

Deepfakes vs Digital Twins

We see a lot of confusion between deepfakes and digital twins, but there’s a clear difference between the two concepts:

A deepfake is an artificial image, video, or audio recording that has been convincingly edited, using deep learning (a type of machine learning) algorithms, to depict a real person saying or doing something they did not actually say or do. Deepfakes are often created by malicious actors without the consent of the person being depicted, with a clear intent to cause harm.

A digital twin is a very different thing: a virtual model or representation of a real-world object, system, or process, designed to accurately simulate the characteristics and behavior of its physical counterpart. Human digital twins are extremely complex, as they replicate a person’s body processes with high accuracy. So much so that the FDA changed its regulations in 2022 to allow digital twins to be used in lieu of humans in clinical trials. Digital twins are built using real-time data from sensors to keep the virtual and physical versions in sync.

A simple example is a digital twin of a person’s pancreas. A base model of the human pancreas, built from data on thousands of people, serves as the starting point. That model is then adapted to the individual using an insulin pump: a sensor continuously streams glucose readings to the pump, which hosts the digital twin and runs a mathematical model of glucose metabolism. The model is calibrated to the patient’s blood glucose readings, health status, and individual characteristics, such as gender, age, weight, and activity level, so that it mimics that person’s pancreas.
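
To illustrate the concept (not a clinical implementation), here is a deliberately simplified Python sketch of how a pancreas digital twin might blend incoming sensor readings with a patient-calibrated model. The equations, parameter values, and update rule are toy assumptions for exposition; real glucose-metabolism models are far more sophisticated and clinically validated.

```python
from dataclasses import dataclass

@dataclass
class PancreasTwin:
    glucose: float = 120.0       # mg/dL, current model estimate
    insulin_sens: float = 0.02   # per-patient sensitivity (calibrated)
    basal_glucose: float = 90.0  # per-patient baseline (calibrated)

    def step(self, sensor_glucose: float, insulin_dose: float, dt_min: float = 5.0) -> float:
        # Blend the model state with the latest sensor reading (a crude
        # filter, standing in for the real-time sync between twin and body).
        self.glucose = 0.7 * self.glucose + 0.3 * sensor_glucose
        # Insulin lowers glucose; the body drifts back toward its baseline.
        drift = 0.01 * (self.basal_glucose - self.glucose)
        effect = -self.insulin_sens * insulin_dose * self.glucose
        self.glucose += (drift + effect) * dt_min
        return self.glucose

twin = PancreasTwin(insulin_sens=0.025)  # calibrated to one patient
for reading, dose in [(180, 2.0), (165, 1.5), (150, 1.0)]:
    print(f"predicted glucose: {twin.step(reading, dose):.1f} mg/dL")
```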

How to Spot a Deepfake?

“You might think that as deepfakes proliferate, people are going to get good at it just by being able to pick it out better with experience,” says Christopher Doss. But he warns that the opposite might be true.

Spotting a deepfake requires a keen eye for detail. In fact, your brain may very well register that something is wrong long before you consciously realize it. Still, here are some signs that can give a deepfake away, although the technology is improving constantly:

  • Unnatural Eye Movement and Lack of Blinking: Deepfakes often struggle to replicate the natural movement of eyes or the act of blinking convincingly. A consistent absence of blinking or unnatural eye movements could be a significant red flag (a simple blink-counting heuristic is sketched in code after this list).
  • Awkward Facial Expressions and Misalignments: If facial expressions seem off or don’t match the emotional tone of the speech, or if there’s awkward positioning of facial features (like the nose not aligning with the direction the face is pointing), these could indicate a deepfake.
  • Inconsistent Audio: Deepfakes often focus more on the visual aspects, leading to discrepancies in audio quality. Misalignments between the voice and lip movements, robotic-sounding voices, or unnatural pronunciations are common giveaways. Additionally, digital noise or the absence of natural background sounds might be present.
  • Unnatural Coloring and Lighting: Look for abnormal skin tones, discoloration, strange lighting, or misplaced shadows. These visual anomalies can suggest that the footage has been digitally altered.
  • Poorly Rendered Hair and Teeth: Deepfake algorithms may not accurately replicate the individual characteristics of hair and teeth. Lack of detailed textures, such as frizzy hair or defined teeth edges, can be clues.
  • Blurring or Misalignment at Edges: Noticeable blurring where the face meets the neck, or misaligned visuals, particularly in the hairline or where different parts of the body come together, are potential indicators of a deepfake.
  • Unnatural Body Movement or Posture: Deepfakes primarily focus on the face, often neglecting the natural movement or positioning of the body. If the body shape appears distorted, or if there’s awkward or inconsistent positioning, it might be a deepfake.
  • Reflections and Environment Consistency: Inconsistencies in reflections in the eyes or mismatching environmental lighting can also reveal a deepfake. Natural reflections should mirror the surroundings and match across both eyes.
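
To show how one of these cues can be checked programmatically, here is a small Python sketch of the widely used eye aspect ratio (EAR) heuristic for the blinking cue. It assumes eye landmarks have already been extracted by a facial-landmark detector (dlib and MediaPipe are common choices), and the 0.21 threshold is an illustrative convention rather than a universal constant.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, ordered as in the
    common 68-point facial-landmark convention."""
    vertical = (np.linalg.norm(eye[1] - eye[5])
                + np.linalg.norm(eye[2] - eye[4]))
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.21) -> int:
    """Count blinks as dips of the eye aspect ratio below a threshold."""
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold and not eye_closed:
            blinks, eye_closed = blinks + 1, True
        elif ear >= closed_threshold:
            eye_closed = False
    return blinks

# People typically blink around 15-20 times per minute. A long clip whose
# EAR trace almost never dips warrants closer scrutiny, though a low blink
# count alone is a signal, not proof, of a fake.
```
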
Deepfake Examples in Healthcare

Not only can deepfake attacks cause an erosion of trust in healthcare organizations, but they can also lead to compliance violations and expensive lawsuits. Meanwhile, the people who trust this misinformation can be exposed to real danger to their physical and psychological well-being.

Here’s an in-depth look at the harm that AI deepfakes can cause:

Impersonating Healthcare Executives

Deepfakes could be used to create videos or audio recordings of healthcare executives making false statements. This can potentially cause panic among patients or influence the stock market, as investors react to the fabricated news. The trustworthiness of healthcare organizations could be severely damaged if stakeholders cannot differentiate between real and fabricated statements.

Misinformation Campaigns

Healthcare organizations could be targeted by misinformation campaigns using deepfakes, spreading false health information. This could undermine efforts to combat public health crises, erode trust in vaccinations, or propagate health myths, with serious implications for public health and safety.

Insurance Fraud

Criminals could use deepfake technology to create false evidence of medical conditions or treatments to submit fraudulent insurance claims. This not only affects the financial stability of insurance companies but also increases premiums and costs for legitimate patients and healthcare providers.

Identity Theft

Deepfakes can be used to impersonate patients or healthcare providers, allowing attackers to gain unauthorized access to medical records or personal information. This could lead to privacy breaches, financial theft, or the unauthorized use of sensitive health information.

Fraudulent Clinical Instructions

A deepfake impersonating doctors or nurses could give incorrect medical advice or prescriptions. This dangerous scenario could lead to patients receiving harmful treatments or drugs, endangering their health and potentially leading to legal consequences for healthcare providers mistakenly believed to be at fault.

Phishing Attacks

Using deepfake audio of familiar voices within an organization, attackers can trick employees into divulging sensitive information or making unauthorized transfers. This represents a sophisticated evolution of phishing attacks, exploiting trust in known voices to breach security protocols.

Manipulating Research Data

Falsifying research results or clinical trial outcomes through deepfake videos or altered voice recordings could lead to the adoption of ineffective or dangerous medical practices and treatments. The scientific integrity of healthcare research is at risk if such manipulations are not detected and addressed.

Disrupting Telehealth Services

Impersonating doctors or patients during telehealth sessions using deepfakes could result in misdiagnosis, prescription fraud, or breaches of patient confidentiality. As telehealth becomes more prevalent, ensuring the authenticity of participants in virtual consultations is paramount.

Sowing Discord within Organizations

Creating deepfake content designed to incite conflicts or disputes among healthcare staff or between organizations and their partners could undermine operational efficiency and patient care. Such targeted disruptions could lead to a breakdown in communication and trust within healthcare ecosystems.

Undermining Public Trust

The spread of deepfake content damaging the reputation of healthcare facilities or professionals could erode public trust in the healthcare system. Restoring this trust can be challenging and time-consuming, requiring significant effort and transparency from affected organizations.

Strategies for Preventing Deepfakes

AI deepfakes have introduced a new and complex challenge for the healthcare sector, raising concerns about misinformation, fraud, and privacy violations. However, by adopting an approach that combines employee awareness training, brand monitoring, and deepfake detection tools, healthcare teams can significantly mitigate these risks:

Employee Awareness Training

One of the first lines of defence against deepfakes is to ensure that all healthcare employees are equipped with the knowledge and tools to recognize and respond to potential deepfake threats. Employees should understand what deepfakes are, how they are created, and the potential they have to cause harm within the healthcare sector.

By presenting case studies or hypothetical scenarios, employees can better grasp the tangible impacts of deepfakes on patient care, privacy, and organizational integrity. AI training should also encourage employees to critically assess the authenticity of digital content, especially when it pertains to sensitive healthcare information or directives that could affect patient care.

Monitoring Online Content

Healthcare organizations should actively monitor digital platforms for content that could damage their reputation or spread misinformation. This includes social media, forums, and other digital spaces where deepfakes may circulate. Utilizing social listening tools can help identify harmful content early, allowing organizations to respond promptly and mitigate damage.
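
As a purely hypothetical illustration of the kind of filter a social listening pipeline might apply, the Python sketch below flags posts that mention a brand and carry video, routing them to human review. The post structure, brand terms, and URLs are invented for the example; real tools connect to platform APIs and use much richer signals.

```python
# Hypothetical brand terms to watch for; replace with your own.
BRAND_TERMS = {"acme pharma", "acmecillin"}

def flag_for_review(posts):
    """posts: iterable of dicts like {'text': str, 'has_video': bool, 'url': str}."""
    for post in posts:
        text = post["text"].lower()
        if post.get("has_video") and any(term in text for term in BRAND_TERMS):
            yield post["url"]  # route to the comms/security review queue

sample = [
    {"text": "ACME Pharma CEO admits the drug is dangerous!", "has_video": True,
     "url": "https://example.com/post/1"},
    {"text": "Lovely weather today", "has_video": False,
     "url": "https://example.com/post/2"},
]
print(list(flag_for_review(sample)))  # -> ['https://example.com/post/1']
```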

Deepfake Detection Tools

Investing in advanced detection tools is crucial for identifying and responding to deepfakes effectively. These tools analyze videos and images for signs of manipulation, such as irregular blinking patterns, unnatural facial movements, or inconsistencies in audio-visual synchronization. Some of the most promising technologies and platforms include:

  • Deep Neural Networks: Frameworks like TensorFlow and PyTorch are commonly used to build detection models that learn to spot inconsistencies in images, videos, and audio cues. These models must be trained on labeled examples of genuine and manipulated media (a minimal classifier sketch follows this list).
  • Sentinel: Sentinel’s cybersecurity platform is designed to help organizations identify AI-forged digital media. It analyzes uploaded content and provides detailed reports on manipulation, offering insights into how and where the media has been altered.
  • Intel’s Real-Time Deepfake Detector: Known as FakeCatcher, this technology boasts a 96% accuracy rate in detecting fake videos, identifying deepfakes by analyzing subtle blood flow signals in video pixels.
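
For teams exploring the deep-neural-network route from the first bullet, here is a minimal PyTorch sketch of a frame-level real/fake classifier built on a pretrained backbone. The architecture, hyperparameters, and data format are assumptions for illustration; serious detectors add temporal modeling, audio-visual fusion, and large curated datasets.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a pretrained image backbone and replace the final layer
# with a two-way head: class 0 = real, class 1 = fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_epoch(loader):
    """loader yields (frames, labels): 3x224x224 face crops and 0/1 labels,
    i.e. the labeled genuine and manipulated media mentioned above."""
    model.train()
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        optimizer.step()
```
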
Training and Literacy Can Insulate Pharma Companies from Deepfakes

The rise of deepfake technology poses significant risks to the pharmaceutical and healthcare industries. From impersonating executives to spread disinformation, to manipulating research data and disrupting clinical trials, the potential for harm is vast. Deepfakes could erode public trust, lead to dangerous treatment decisions, and expose companies to costly lawsuits and reputational damage.

To protect against these threats, a multi-pronged approach focused on employee training and awareness is critical. Pharma and healthcare organizations should educate their workforce on how to spot the signs of a deepfake, such as unnatural facial movements, inconsistent audio, and visual anomalies. Employees should be trained to verify the source and authenticity of provocative content before acting on it.

In addition to detection skills, workers need to understand the risks deepfakes pose to their specific roles and the company overall. Tailored training scenarios can prepare employees to respond appropriately if targeted by a deepfake-enabled phishing attack, impersonation attempt, or other scam.

Technical tools like deepfake detection software can provide a valuable defense as well. However, given the rapid advancement of synthetic media, over-reliance on automation is unwise. Ultimately, a holistic awareness program that empowers employees to think critically, stay vigilant, and make smart trust decisions is key to safeguarding pharma businesses from deepfake attacks.

Conclusion

Deepfakes are becoming part of our everyday life, and it is critical that you understand what they are, as well as the implications and risks to your organization and career if they are not detected and stopped quickly.

Found this article interesting?

At Eularis, we are here to ensure that AI and FutureTech underpins your pharma success in the way you anticipate it can, helping you achieve AI and FutureTech maturation and embedding it within your organisational DNA.

We combine over 20 years of condensed, tactical, and practical knowledge in pharmaceuticals and healthcare with AI expertise to deliver intensive in-person training in AI for pharmaceutical professionals.

We show you the exact processes, important techniques, and monumental touch points that’ll put you light years ahead of your competition. You will be able to mould, direct, and craft reliable and role-specific AI frameworks – that work every single time!

Register for the ‘AI for Pharma Leaders Masterclass’ training to become the leader in your business by strategically using next-generation AI tech to drive superior growth and market dominance. For pharma and healthcare, it gives you the edge our industry has needed for years.

For more information, contact Dr Andree Bates abates@eularis.com.

Contact Dr Bates on LinkedIn here.

Listen to the AI for Pharma Growth Podcast on Apple here or on Spotify here.
