    Unmasking deepfakes: Navigating the threat to medical professional identity and trust

    Social media and artificial intelligence (AI) are double-edged swords. Social media has given everyone a voice that can spread as far and wide as their imagination, for good and for ill.
    Source: Pexels

    Not long ago, the Covid-19 pandemic gave rise to armchair health experts: ordinary individuals without relevant credentials sharing scientific information and health advice on social media.

    We also saw a rise in fake news, or disinformation: the intentional spread of false information online.

    AI enables the deliberate generation of false content. The rapid advancement of, and access to, generative AI raise new concerns about privacy, identity theft, misinformation and disinformation.

    Deepfakes use a person's likeness to mislead others: elements of videos, images or audio are superimposed or replaced with fabricated content so that they appear real, yet depict events or scenarios that never occurred.

    A recent story described a urology professor whose likeness was used in a deepfake that exploited her image, voice and professional reputation to sell sexual health products.

    She only became aware that she was a deepfake victim when a colleague alerted her to the content. She was also flooded with messages from people demanding refunds for a product bought online that she knew nothing about. This is an example of the dark side of AI.

    The content of the deepfake video also attacks scientifically approved medical products, which threatens victims' professional reputations and puts them in conflict with their professional code of conduct. She is among many individuals whose likenesses have been used in deepfakes to promote products or services online.

    Examples include a deepfake that used the likenesses of a famous news anchor and Elon Musk to promote an investment scam. Health professionals are seen as custodians of health information, and the public trusts them as credible sources. Low health and media literacy levels compound the threat that these deepfakes pose to society.

    Navigating the deepfake dilemma

    These cases are the tip of the iceberg, especially in a country with no regulations governing AI, which underscores how ill-prepared the law enforcement system is when such cases are reported.

    Do people even know where to report such cases in South Africa? It also raises questions about the threat deepfakes pose on the road to the national elections.

    The lack of regulation exposes citizens and the state to these fraudulent acts with malicious intent. Medical professionals are not spared this risk, which puts science, a public good requiring protection, under attack.

    Deepfakes impact the public perception of the profession and threaten the public's trust in health professionals, including undermining public health initiatives. Increased internet access over the years has democratised access to health information, but credibility is left to chance.

    Combating misinformation

    Manipulated health information exacerbates misinformation and erodes trust in the health system and science. Low health and technology literacy weakens the ability to verify information using multiple credible online sources.

    Some of this false information is then shared further via various online platforms and direct messaging apps like WhatsApp.

    There is a need to educate the public about practical strategies for verifying the credibility of health information and recognising the signs of manipulated content, which highlights the need to improve media and health literacy.

    Social media platforms could also proactively invest money and time in measures to identify and remove suspect content. They should bear corporate and social responsibility to the public by preventing the misuse of their platforms.

    We must advocate for strengthened laws to govern AI - elevating the importance of protecting personal data and safeguarding individuals from unauthorised use of their likenesses and voices.

    Deepfakes are just an example of how AI can harm individuals and societies. AI can advance humanity when used to achieve ends that benefit society. Without the relevant AI regulations in South Africa, individuals and society are at grave risk of identity, reputational, and socio-political harm.
