India Combats Deepfakes with Research, Committees, and Regulatory Frameworks

New Delhi is taking a multi-pronged approach to the rising threat of deepfakes, with the Ministry of Electronics and Information Technology (MeitY) leading the charge. MeitY is funding two research projects designed to detect AI-generated fake videos, audio, and images, a proactive step that underscores the government’s recognition of the damage deepfakes can inflict on individuals and society. Alongside the research, MeitY is establishing committees and advisory groups to formulate comprehensive regulatory frameworks for artificial intelligence, addressing the broader implications of AI technologies beyond deepfakes.

The first research project, focused on fake speech detection, leverages deep learning frameworks to identify manipulated audio. With a budget of ₹47.846 lakhs and a timeline extending to December 2024, the project aims to develop robust detection software accessible through a web interface, along with a speaker verification platform. The final report, anticipated in January 2025, should offer valuable insights into the effectiveness of deep learning in combating audio manipulation. The second project tackles deepfake videos and images, offering both a web-based tool and a desktop application called "FakeCheck" for offline detection. Developed by C-DAC Hyderabad and C-DAC Kolkata, this project is nearing completion, with the desktop tool currently undergoing testing with law enforcement agencies. This real-world deployment demonstrates the practical focus of these research initiatives.
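The article does not describe the detection models themselves. As a rough illustration of how a deep-learning fake-speech detector of this general kind typically works, the sketch below classifies short audio clips as real or synthetic from their log-mel spectrograms. The architecture, feature settings, and library choices (PyTorch and torchaudio) are assumptions made purely for illustration, not the MeitY-funded project's actual implementation.

```python
# Illustrative sketch only: a minimal binary real-vs-fake speech classifier
# over log-mel spectrograms. All design choices here are assumptions for
# illustration; the article does not specify the project's actual models.

import torch
import torch.nn as nn
import torchaudio


class FakeSpeechDetector(nn.Module):
    """Small CNN mapping a mel-spectrogram to a single real/fake logit."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size output regardless of clip length
        )
        self.classifier = nn.Linear(32 * 4 * 4, 1)  # one logit: higher = more likely fake

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, 1, n_mels, time_frames)
        x = self.features(mel)
        return self.classifier(x.flatten(1))


def waveform_to_log_mel(waveform: torch.Tensor, sample_rate: int = 16_000) -> torch.Tensor:
    """Convert mono waveforms of shape (batch, samples) to log-mel spectrograms."""
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
    return torch.log(mel + 1e-6).unsqueeze(1)  # add a channel dimension for the CNN


if __name__ == "__main__":
    model = FakeSpeechDetector()
    dummy_audio = torch.randn(2, 16_000)              # two one-second clips of random noise
    logits = model(waveform_to_log_mel(dummy_audio))  # shape: (2, 1)
    prob_fake = torch.sigmoid(logits)                 # probability each clip is synthetic
    print(prob_fake.squeeze(1))
```

In practice such a classifier would be trained on labelled corpora of genuine and synthesized speech and served behind a web interface, which is consistent with, but not confirmed by, the project description above.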

Beyond research, MeitY established a nine-member committee on November 20th specifically tasked with addressing the deepfake challenge. This committee comprises experts from various domains, including MeitY’s cybersecurity and cyber law divisions, the Indian Cyber Crime Coordination Centre, C-DAC Hyderabad, the Data Security Council of India, Amrita Vishwa Vidyapeetham, a legal representative, and Dr. Balaraman Ravindran, head of IIT Madras’s Department of Data Science and AI. The Delhi High Court has directed the committee to expedite its report, expected within three months, after consulting with stakeholders such as intermediaries, telecom service providers, victims of deepfakes, and relevant websites. The committee’s broad representation aims to ensure a holistic approach to the issue.

In addition to the deepfake-focused committee, MeitY has formed other groups to address the wider implications of AI. An advisory group formed in September 2023 advises the government on AI regulation, focusing on balancing innovation and oversight. This group is tasked with drafting policies, establishing ethical guidelines tailored to India’s context, developing testing and certification mechanisms, and creating techno-legal AI frameworks. The group’s diverse membership, including government officials, industry representatives, and legal experts, highlights the collaborative approach being adopted.

A subcommittee dedicated to AI Governance Guidelines, formed in November 2023, operates under the advisory group’s umbrella. This subcommittee, chaired by Dr. Balaraman Ravindran, is responsible for developing specific governance guidelines and recommendations. One key proposal from this subcommittee involves the creation of an inter-ministerial AI coordination committee, championing a whole-of-government approach to AI governance. Furthermore, the subcommittee emphasizes the importance of establishing a technical advisory group and a central coordination point, alongside an AI incident database to track AI-related risks within India.

This comprehensive approach, encompassing research, committee deliberations, and regulatory framework development, signals India’s commitment to tackling the multifaceted challenges posed by AI-generated misinformation and deepfakes. The collaboration between government bodies, research institutions, industry stakeholders, and legal experts underscores the seriousness with which India is addressing these emerging technological threats. The focus on developing context-specific guidelines and frameworks reflects a nuanced understanding of the unique challenges and opportunities presented by AI in the Indian context. The establishment of an incident database and a centralized coordination point further demonstrates a proactive stance towards monitoring and mitigating risks associated with AI technologies.

The Delhi High Court’s involvement, through its directive for an expedited report and stakeholder consultations, ensures a degree of public accountability and transparency in the process. This judicial oversight reinforces the importance of these initiatives and adds weight to the government’s efforts to regulate AI effectively. The ongoing research projects, coupled with the work of the committees and advisory groups, position India at the forefront of addressing the complex and rapidly evolving landscape of AI-generated content and its potential impact on society. As these initiatives progress, they will contribute valuable insights and frameworks not only for India but potentially for other nations grappling with similar challenges.
