Bumble introduces feature to flag AI-generated profile photos

Bumble has introduced a new feature that allows users to report fake profiles that use AI-generated images. This move is part of Bumble’s ongoing effort to maintain the integrity and safety of its dating platform. Users can now select “Fake profile” and then choose the option “Using AI-generated photos or videos” when reporting suspicious profiles.

This new reporting option is an addition to existing ones, which include inappropriate content, underage users, scams, and the use of someone else’s photos. Bumble aims to deter individuals from using AI-generated images to mislead or deceive other users.

AI-generated photos have become a common tactic used by malicious actors on dating apps to lure victims into sharing personal information, which can then be exploited for targeted attacks. Bumble’s latest feature is designed to combat this threat by making it easier for users to flag suspicious profiles.

In February, Bumble launched a tool called “Deception Detector,” which utilizes AI and human moderation to identify and remove fake profiles. Since the tool’s introduction, Bumble has reported a 45% decrease in user reports of spam, scams, and fake profiles. Additionally, Bumble has implemented an AI-powered “Private Detector” tool to automatically blur unsolicited nude photos.

Whitney Wolfe Herd, Bumble’s founder and executive chair, has previously floated the idea of an AI dating concierge that could assist users with dating while minimizing the need for direct interaction with potential matches. The concept reflects Bumble’s ongoing exploration of AI’s potential to enhance the online dating experience.

Image courtesy: Bumble

Risa Stein, Bumble’s Vice President of Product, emphasized the importance of removing misleading or dangerous elements from the platform. “An essential part of creating a space to build meaningful connections is removing any element that is misleading or dangerous,” Stein said in an official statement. “We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.”

A recent Bumble user survey revealed that 71% of Gen Z and Millennial respondents support limiting the use of AI-generated content on dating apps. Another 71% view AI-generated photos of people in places they’ve never been or doing activities they’ve never done as a form of catfishing.

In 2022, the Federal Trade Commission received reports of romance scams from nearly 70,000 people, with losses totaling $1.3 billion. Many dating apps, including Bumble, are taking extensive measures to protect users from scams and physical dangers, and the misuse of AI in creating fake profiles is the latest challenge they are addressing.

Bumble’s proactive steps in leveraging AI for safety and trust on its platform demonstrate its commitment to providing a secure dating environment for its users.