Google is intensifying its efforts to curb unauthorised AI-generated deepfakes, implementing new measures to demote or remove websites hosting such content from its search results.
Understanding Deepfakes
Deepfakes are synthetic media created with generative AI: lifelike videos, images, or audio clips. They often feature public figures such as Scarlett Johansson or Joe Biden and, alarmingly, sometimes depict children.
Google's Enhanced Removal Process
In a recent blog post, Google explained that people have long been able to request the removal of non-consensual explicit imagery. Now, new systems are in place to streamline this process, helping users address the issue on a broader scale.
Impact on Website Visibility
A Google spokesperson informed Decrypt that sites receiving multiple removal requests could see their visibility impacted. These sites may be considered low-quality, and Google will reflect this in its ranking systems, reducing the site's prominence in search results.
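Google has not published how this demotion works, but the underlying idea, treating a high volume of upheld removal requests as a site-quality signal, can be sketched roughly. The threshold and penalty below are purely illustrative assumptions, not Google's actual ranking logic:

```python
# Illustrative sketch only: Google's real ranking signals are not public.
# Assumes a site's base relevance score is scaled down once upheld
# removal requests pass a (hypothetical) threshold.

def demoted_score(base_score: float, removal_requests: int,
                  threshold: int = 5, penalty: float = 0.5) -> float:
    """Scale a site's ranking score down when it accumulates
    many upheld removal requests (all numbers hypothetical)."""
    if removal_requests >= threshold:
        return base_score * penalty
    return base_score

# A site with 12 upheld requests ranks at half its base score.
print(demoted_score(10.0, 12))   # 5.0
print(demoted_score(10.0, 2))    # 10.0
```

The key design point is that the penalty applies site-wide, not per page, which is what makes repeated hosting of flagged content costly.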
Filtering Explicit Search Results
When Google receives a removal request for non-consensual deepfake content, it will also filter similar search results, making explicit content less likely to appear. This system aims to prevent harmful content from resurfacing in related searches, providing a layer of protection for the people being impersonated.
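Google has not detailed how "similar" results are identified, but the general shape of such a filter can be sketched. The similarity rule here (matching a protected person's name in the query) is a hypothetical stand-in for whatever signals Google actually uses:

```python
# Illustrative sketch of filtering explicit results for queries about a
# person who has an upheld removal request. The name-matching rule is a
# hypothetical simplification, not Google's actual similarity logic.

def filter_results(query: str, results: list[dict],
                   protected_names: set[str]) -> list[dict]:
    """Drop explicit results when the query mentions a protected person."""
    if any(name in query.lower() for name in protected_names):
        return [r for r in results if not r["explicit"]]
    return results

results = [{"url": "a", "explicit": True}, {"url": "b", "explicit": False}]
print(filter_results("jane doe deepfake", results, {"jane doe"}))
# Only the non-explicit result survives.
```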
SAG-AFTRA Expresses Support
The entertainment industry has praised Google's new policy. Organisations like SAG-AFTRA support measures that protect against unauthorised digital replicas, calling the move a significant step for industry-wide cooperation.
Addressing Challenges
One challenge Google faces is distinguishing between non-consensual deepfakes and consensual or legitimate content, such as nude scenes in films. Despite this, Google continues to refine its algorithms to better separate real content from explicit fake material.
Collaboration and Industry Standards
Google has joined other AI developers in pledging to prevent their models from generating harmful content, such as child sexual abuse material (CSAM). The company employs hashing technologies to detect and block CSAM, adhering to industry standards.
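Hash matching works by comparing a fingerprint of a file against a database of fingerprints of known abusive material. Production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; as a simplified illustration, here is the same idea with an exact cryptographic hash (the blocklist entry is hypothetical):

```python
# Simplified illustration of hash-based blocking. Real systems such as
# PhotoDNA use perceptual hashes that tolerate re-encoding; a
# cryptographic hash like SHA-256 only catches byte-identical copies.
import hashlib

known_bad_hashes = {
    # Hypothetical fingerprint of a previously flagged file
    # (this happens to be the SHA-256 digest of an empty file).
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_blocked(data: bytes) -> bool:
    """Return True if the file's SHA-256 digest is on the blocklist."""
    return hashlib.sha256(data).hexdigest() in known_bad_hashes

print(is_blocked(b""))        # True: matches the sample blocklist entry
print(is_blocked(b"hello"))   # False
```

The advantage of matching fingerprints rather than content is that the blocklist never needs to store the abusive material itself.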
Expert Opinions
Ben Clayton, CEO of Media Medic, notes that combating deepfakes is an ongoing challenge. While Google's update is a positive step, continual improvements are necessary to prevent the spread of harmful content. The technology poses risks to privacy, security, and even the justice system, where deepfakes could fabricate evidence.
Legislative Efforts
Policymakers are also responding. Senator Maria Cantwell introduced the COPIED Act, proposing a standardised method for watermarking AI-generated content. This initiative aims to protect individuals' likenesses from unauthorised exploitation.
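The COPIED Act's proposal centres on attaching tamper-evident provenance information to AI-generated content. One common way to make a label tamper-evident is to sign it with a keyed hash; the scheme, key, and label format below are illustrative assumptions, not the Act's actual specification:

```python
# Hypothetical sketch of tamper-evident provenance labelling: attach an
# HMAC-signed "ai-generated" tag so altering the label or content is
# detectable. Key and format are illustrative, not a real standard.
import hashlib
import hmac

SECRET_KEY = b"provenance-demo-key"   # hypothetical signing key

def tag_content(content: bytes, label: str = "ai-generated") -> dict:
    """Bundle content with a label and an HMAC over both."""
    mac = hmac.new(SECRET_KEY, content + label.encode(), hashlib.sha256)
    return {"content": content, "label": label, "mac": mac.hexdigest()}

def verify(tagged: dict) -> bool:
    """Recompute the HMAC; a mismatch means label or content changed."""
    expected = hmac.new(SECRET_KEY,
                        tagged["content"] + tagged["label"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tagged["mac"])

t = tag_content(b"synthetic image bytes")
print(verify(t))        # True
t["label"] = "authentic"
print(verify(t))        # False: tampered label detected
```

Real-world provenance standards (e.g. C2PA-style content credentials) use public-key signatures rather than a shared secret, so anyone can verify a label without being able to forge one.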