For more than two decades, Google has worked with machine learning and AI to make its products more helpful. In India, AI has enabled Google to deliver language translation at scale, more precise flood forecasting, and improved agricultural productivity.
Google is partnering with the Indian government to continue the dialogue on AI. This partnership includes Google’s upcoming engagement at the Global Partnership on Artificial Intelligence (GPAI) Summit.
Google is advancing AI responsibly by striking a balance between maximizing its positive impact and addressing its potential risks. Google anticipates and tests for a wide range of safety and security risks, including the rise of new forms of AI-generated, photo-realistic, synthetic audio or video content known as “synthetic media”. This technology has useful applications – for instance, it opens new possibilities for people affected by speech or reading impairments, and new creative ground for artists and movie studios around the world – but it raises concerns when it is used in disinformation campaigns and for other malicious purposes, such as deepfakes that spread false narratives and manipulated content.
Providing additional context for generative AI outputs
Google is helping users identify AI-generated content and making clear when people are interacting with AI-generated media. This is why we’ve added “About this result” to generative AI in Google Search, to help people evaluate the information they find in the experience. We also introduced new ways to help people double-check the responses they see in Google Bard by grounding them in Google Search.
Context is equally important for images. We’re committed to finding ways to make sure every image generated through our products carries metadata labeling and an embedded watermark created with SynthID. SynthID is currently being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models, which creates photorealistic images from input text.
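For developers experimenting with Imagen on Vertex AI, the flow looks roughly like the minimal sketch below. The model version string, project details, and the add_watermark parameter are assumptions for illustration and may differ by SDK version; check the current Vertex AI documentation before relying on them.

```python
# Minimal sketch: generating a watermarked image with Imagen on Vertex AI.
# The model version string and the add_watermark flag are assumptions for
# illustration; consult the Vertex AI SDK docs for the current interface.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

# Hypothetical project and region values.
vertexai.init(project="my-project-id", location="us-central1")

model = ImageGenerationModel.from_pretrained("imagegeneration@006")
images = model.generate_images(
    prompt="A watercolor painting of the Howrah Bridge at sunrise",
    number_of_images=1,
    add_watermark=True,  # embed an imperceptible SynthID watermark (assumed flag)
)
images[0].save(location="bridge.png")
```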
In the coming months, YouTube will require creators to disclose realistic altered or synthetic content, including content made with AI tools, and Google will inform viewers about such content through labels in the description panel and video player.
Implementing guardrails and safeguards to address AI misuse
In the coming months, YouTube will make it possible to request, through its privacy request process, the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.
Google has a prohibited use policy for new AI releases outlining the harmful, inappropriate, misleading or illegal content we do not allow, based on early identification of harms during the research, development, and ethics review process for our products.
Furthermore, Google has recently updated its election advertising policies to require advertisers to disclose when their election ads include material that’s been digitally altered or generated. This will help provide additional context to people seeing election advertising on its platforms.
Google also has long-standing policies, across our products and services, that apply to content created by generative AI. For instance, as part of its misrepresentation policy for Google Ads, Google prohibits the use of manipulated media, deepfakes and other forms of doctored content meant to deceive, defraud, or mislead users.
The policies for Search features like Knowledge Panels or Featured Snippets prohibit audio, video, or image content that’s been manipulated to deceive, defraud, or mislead. And on Google Play, apps that generate content using AI have always had to comply with all Google Play Developer Policies – this includes prohibiting and preventing the generation of restricted content and content that enables deceptive behavior.
Combating deep fakes and AI-generated misinformation
On YouTube, Google uses a combination of people and machine learning technologies to enforce its Community Guidelines, with reviewers operating around the world. AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is helping to continuously increase both the speed and accuracy of these content moderation systems.
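As a rough illustration of this classifier-plus-human-review pattern (not YouTube’s actual system), the sketch below shows how a model could score content at scale and route only likely violations to human reviewers. The threshold, feature names, and stub classifier are hypothetical.

```python
# Illustrative sketch of a classifier-plus-human-review moderation pipeline:
# an AI model flags potentially violative content at scale, and only flagged
# items are routed to human reviewers for a final decision.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.5  # assumed threshold; real systems tune this per policy area


@dataclass
class ModerationResult:
    video_id: str
    score: float              # estimated probability of a policy violation
    needs_human_review: bool  # True if a reviewer should confirm the decision


def classify(features: dict) -> float:
    """Placeholder for a trained ML classifier that scores content against policy."""
    # In practice this would call a real model; here we return a stub score.
    return features.get("stub_score", 0.0)


def triage(video_id: str, features: dict) -> ModerationResult:
    score = classify(features)
    return ModerationResult(
        video_id=video_id,
        score=score,
        needs_human_review=score >= REVIEW_THRESHOLD,
    )


if __name__ == "__main__":
    result = triage("abc123", {"stub_score": 0.82})
    print(result)  # routed to human review because the score exceeds the threshold
```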
Google has invested US$1 million in grants to the Indian Institute of Technology, Madras, to establish a first-of-its-kind multidisciplinary center for Responsible AI. This center will foster a collective effort, involving not just researchers but also domain experts, developers, community members, policy makers and more, to get AI right and localize it to the Indian context.