On Friday, 15th March 2019, Facebook announced the launch of its new Artificial Intelligence technology that can detect “near-n**e” images and videos shared without consent. The social network says that, with the tool, it can now automatically flag revenge p**n before anyone reports or even sees it.
The new AI tool will proactively stop revenge p**n from being shared on the platform. Until now, non-consensual images and videos had to be flagged or reported, usually by the victims themselves, before Facebook took them down.
Facebook’s global head of safety, Antigone Davis, explained in a post that victims are often afraid or reluctant to report the content themselves, either out of fear of retribution or because they are unaware the content exists at all.
Davis added:
“If the image or video violates our Community Standards, we will remove it, and in most cases, we will also disable an account for sharing intimate content without permission.”
Specially trained members of Facebook’s Community Operations team will review flagged content once it is detected. The platform said the new tool will work alongside its existing anti-revenge p**n measure.
The non-consensual intimate image pilot program invites users to upload their intimate images privately to Facebook before they are posted anywhere else. Facebook, in turn, creates a “digital fingerprint” of each image, which it uses to stop the same images and videos from being shared on the platform.
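To illustrate the general idea of fingerprint matching, here is a minimal Python sketch. It is not Facebook’s implementation: the function names and the in-memory store are hypothetical, and a plain SHA-256 hash stands in for the photo-matching technology Facebook actually uses, which can recognise an image even after resizing or re-encoding.

```python
import hashlib

# Hypothetical in-memory store of fingerprints for images submitted through
# the pilot program. Facebook's real system relies on photo-matching
# (perceptual hashing) rather than a plain cryptographic hash; SHA-256 is
# used here only to illustrate the overall flow.
known_fingerprints = set()

def fingerprint(image_bytes: bytes) -> str:
    """Create a digital fingerprint (hash) of an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_image(image_bytes: bytes) -> None:
    """Store the fingerprint of an image submitted via the pilot program."""
    known_fingerprints.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Check a new upload against known fingerprints before it is shared."""
    return fingerprint(image_bytes) in known_fingerprints
```

In this sketch, only the fingerprint is kept, not the image itself, which mirrors the stated goal of the pilot: the platform can recognise and block a known image without redistributing or permanently storing it.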
Facebook has also announced “Not Without My Consent,” a support hub for victims. It tells users what to do and whom to contact if they become victims of revenge p**n.
Facebook already employs Artificial Intelligence in other areas, such as suicide prevention.