
Deepfakes: A Potential Threat to Civil Society

Dr. M. Suresh Babu, President, Praja Science Vedika

Deepfakes represent a significant advance in synthetic media, powered by machine learning and artificial intelligence. A deepfake is digital content manipulated through advanced machine learning techniques, particularly deep learning; the name itself is a portmanteau of “deep learning” and “fake.” Their creation relies chiefly on generative neural network architectures such as autoencoders and generative adversarial networks (GANs), which allow one person’s likeness to be replaced realistically with another’s in visual and audio content.

The technology is open to serious misuse. It has been used to create child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying material, and tools of financial fraud. Deepfakes can also undermine democratic systems by spreading disinformation and hate speech, interfering with people’s ability to participate in decisions, shape collective agendas, and express political will through informed decision-making. These challenges have prompted responses from both industry and government, with efforts to detect deepfakes and limit misuse of the technology. Meanwhile, deepfakes have grown steadily more convincing, and their availability to the public has raised concerns about disruption to the traditional entertainment, gaming, and media industries.
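
For readers curious about the mechanics, the sketch below illustrates, in broad strokes, the generator-versus-discriminator training loop that underlies GAN-based synthesis. It is a minimal, illustrative example written in Python with PyTorch; the network sizes, learning rates, and data are placeholder assumptions, not details of any particular deepfake system.

```python
# Minimal GAN training-step sketch (illustrative only; sizes and data are placeholders).
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64  # assumed toy dimensions

generator = nn.Sequential(            # maps random noise to a synthetic image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(        # scores an image as real (1) or fake (0)
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator: learn to separate real images from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: learn to produce images the discriminator accepts as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

As the two networks compete over many such steps, the generator's outputs become progressively harder to distinguish from real imagery, which is what makes modern deepfakes so convincing.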


Machine learning techniques

It’s important for society to be aware of the existence and potential consequences of deepfakes, and ongoing efforts in research, technology development, and policy-making are crucial to addressing these challenges effectively.

Video Rewrite Program (1997): Developed in 1997, the Video Rewrite program is an early landmark project in the field. It automated facial reanimation by modifying existing video footage of a person speaking to synchronize with a different audio track, using machine learning to connect the sounds produced by the subject with the corresponding facial movements.

“Synthesizing Obama” Program (2017): This project, published in 2017, focused on modifying video footage of former President Barack Obama to make it appear as though he was mouthing words from a separate audio track. A notable research contribution was the development of a photorealistic technique for synthesizing mouth shapes from audio, enhancing the realism of the generated content.


Face2Face Program (2016): The Face2Face program, introduced in 2016, aimed to modify video footage in real time to depict a person mimicking the facial expressions of another individual. A key research contribution was the development of the first method for re-enacting facial expressions in real time, even when using a camera that does not capture depth, making it accessible for common consumer cameras.

Expansion to Full Body Manipulation (2018): In August 2018, researchers at the University of California, Berkeley, introduced a fake dancing app that utilized AI to create the illusion of masterful dancing ability. This marked an expansion of deepfake applications from focusing on the head or parts of the face to manipulating the entire body.

Diversification into Other Domains: Deepfake techniques have also spread into other domains, such as tampering with medical imagery. Researchers demonstrated how attackers could automatically inject or remove lung cancer in a patient’s 3D CT scan; the manipulated images were convincing enough to deceive both radiologists and a state-of-the-art lung cancer detection AI system. This expansion into medical imaging poses serious risks and challenges for the healthcare industry.

The Indian government is planning to introduce regulations aimed at addressing the issue of AI-generated deepfakes and misinformation. These regulations may involve financial penalties for both creators of deepfake content and the social media platforms facilitating its dissemination.

Timeline for Actionable Items: The government, in collaboration with stakeholders, plans to develop actionable items within 10 days. These items will focus on detecting deepfakes, preventing their upload and viral sharing, and strengthening mechanisms for reporting such content. The goal is to provide citizens with recourse against AI-generated harmful content on the internet.

Concerns About Threats to Democracy: Union Information Technology and Telecom Minister Ashwini Vaishnaw emphasized the emerging threat of deepfakes to democracy. He stated that deepfakes can erode trust in society and its institutions, making it crucial to take urgent steps to protect democracy.


Financial Penalties: The proposed regulation may include financial penalties for both the creators of deepfake content and the platforms hosting such content. Minister Vaishnaw mentioned the need to consider penalties during the regulatory process.

Industry Consultation: The government consulted representatives from the technology industry, including Meta, Google, and Amazon, to gather insights on handling deepfake content. This collaboration reflects an effort to involve key stakeholders in the development of effective measures.

Proactive Measures by Social Media Platforms: Minister Vaishnaw stressed the importance of social media platforms being proactive in addressing the issue of deepfake content. Rapid and effective responses are deemed necessary to prevent the immediate damage caused by the spread of such content.

The impact on trust in society

This initiative reflects a growing awareness of the potential harm caused by deepfake technology, not only in terms of misinformation but also its impact on trust in society and democratic institutions. The proposed regulations and collaborative efforts with industry players signal a commitment to addressing these challenges in a timely and comprehensive manner.

The discussions between the government and industry stakeholders are based on four key pillars: detection of deepfakes, prevention of publishing and viral sharing of deepfake and misinformation content, strengthening the reporting mechanism, and spreading awareness through joint efforts by the government and industry entities.


Dr. M. Suresh Babu
Dr. M. Suresh Babu has been a Professor, Dean and Principal in various engineering colleges and institutions in Hyderabad and Anantapur. His approach to teaching is “For the student, by the student and to the student.” He is associated with several Civil Society Organizations like Praja Science Vedika and Election Watch.
