Google is reportedly developing a new feature for its photo and video platform, Google Photos, aimed at helping users identify whether an image has been generated or altered using artificial intelligence (AI).
The update, driven by rising concerns over deepfakes, would add new resource tags that surface AI-related information and the digital source type of images stored in the app.
According to reports, hidden code strings in version 7.3 of the Google Photos app point to the upcoming functionality. Though the feature is not yet live, strings found in the app’s XML resource files indicate the addition of an “ai_info” tag, which could show whether an image was created using AI technology.
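For readers unfamiliar with APK teardowns, string resources like the one reported typically live in an Android app’s XML resource files. A purely illustrative sketch of what such entries might look like follows; only the “ai_info” name (and the “digital source type” concept) comes from the reports, while the resource names and label text here are assumptions:

```xml
<!-- Hypothetical Android string resources. Only "ai_info" is reported;
     the exact names and label text below are illustrative, not leaked values. -->
<resources>
    <string name="photos_ai_info_title">AI info</string>
    <string name="photos_digital_source_type_label">Digital source type</string>
</resources>
```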
This initiative is part of Google’s larger effort to combat the spread of misinformation and deepfakes, which are increasingly used to manipulate public opinion with hyper-realistic, digitally altered images and videos.
While details on how this information will be presented to users remain unclear, one option could be embedding the AI data within the image’s metadata using Exchangeable Image File Format (EXIF) tags.
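To illustrate how an EXIF-based scheme might work, here is a minimal Python sketch using the Pillow imaging library. It checks an image’s standard EXIF “Software” field (tag 0x0131) for an AI-generation marker. The tag choice and marker strings are assumptions for illustration only; Google has not documented where or how Photos would store this data.

```python
import io
from PIL import Image  # Pillow imaging library

# Hypothetical marker strings; Google has not published the actual values.
AI_MARKERS = ("Made with Google AI", "AI-Generated")

EXIF_SOFTWARE_TAG = 0x0131  # standard EXIF "Software" field


def ai_hint_from_exif(source):
    """Return True if the image's EXIF Software field contains a known
    AI marker. `source` may be a file path or a file-like object."""
    with Image.open(source) as img:
        software = str(img.getexif().get(EXIF_SOFTWARE_TAG, ""))
        return any(marker in software for marker in AI_MARKERS)
```

A robust implementation would likely also inspect richer metadata containers such as XMP or IPTC fields, since basic EXIF tags are easily stripped or overwritten by common editing tools.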
Google has not yet announced an official release date for this feature, but its inclusion in the app’s code hints that the launch may be imminent.