A recent study has shed light on a concerning trend in the UK, where some of the country’s best-known TV doctors are being exploited to promote fraudulent products through deepfake technology. Deepfakes are sophisticated digital fabrications created with artificial intelligence: videos that convincingly superimpose a person’s likeness onto another body or replicate their voice.
The research, published as a feature article in the BMJ on Wednesday, highlights the unauthorized use of the names and likenesses of general practitioners Hilary Jones and Rangan Chatterjee, as well as Michael Mosley, the TV health guru who died last month. All three have been unwittingly associated with the promotion of products ranging from purported blood pressure and diabetes cures to hemp gummies.
Dr. Hilary Jones, 71, known for his appearances on “Good Morning Britain” and other TV shows, has expressed frustration with the growing phenomenon. He has hired a social media specialist to scour the web for deepfake videos misrepresenting his views and to have them taken down. Even so, he notes, videos that are removed often resurface under different names the next day.
Differentiating between authentic videos and deepfakes can be difficult: recent research suggests that between 27% and 50% of people cannot tell the two apart, particularly when the video features a trusted medical professional who has long appeared in the media.
To gauge the prevalence of deepfake doctor videos on social media, retired UK doctor John Cormack worked with the BMJ. Cormack points out that producing such videos is far cheaper than conducting research, developing new products, and bringing them to market through conventional means. He argues that the platforms hosting this content, including Facebook, Instagram, YouTube, TikTok, and X, should be held accountable for the proliferation of computer-generated videos.
In response to the findings, a spokesperson for Meta, which owns Facebook and Instagram, said the company would investigate the examples highlighted in the article. The spokesperson affirmed that intentionally deceptive or fraudulent content is not permitted on its platforms and that the company continually works to improve detection and enforcement, encouraging users to report any content that may violate its policies.
Anyone who comes across a suspected deepfake video should exercise caution and verify the authenticity of the content before drawing conclusions or acting on it.