Meta ordered to delete explicit AI-generated images of public figures

The Board determined that Meta’s current policies on such content were imprecise and ineffectual, as per reports.

By Storyboard18 | July 26, 2024, 3:43 pm

Meta, the parent company of Facebook and Instagram, has been instructed by its Oversight Board to remove AI-generated pornographic images of public figures. The ruling followed two cases the board reviewed involving images of women, one Indian and one American, according to reports.

The Board determined that Meta’s current policies on such content are imprecise and ineffectual. In particular, it found the guideline prohibiting “derogatory sexualized photoshop” too vague and too narrow to address the growing problem of AI-generated deepfakes, as per reports.

To improve Meta’s approach, the Board recommended reclassifying this type of content under the Adult Sexual Exploitation policy and renaming the rule “Non-Consensual Sexual Content.” It also proposed replacing the terms “derogatory” and “photoshop” with more precise and inclusive wording that reflects the expanding range of image-manipulation techniques, the reports clarified.

The Oversight Board also voiced concern about Meta’s handling of user reports, citing complaints that were closed automatically without adequate review as evidence of the platform’s shortcomings. Given the serious harm deepfake intimate imagery causes its victims, the Board stressed the need for prompt action in such cases.

The ruling underscores the difficulty social media platforms face in keeping pace with rapid advances in AI technology. As deepfake creation becomes more accessible, platforms must adapt their policies and enforcement methods to protect users from the harmful effects of such content. The Oversight Board’s decision is a significant step toward holding digital companies accountable for their role in stopping the spread of non-consensual sexual content.
