February 16, 2024

Fighting Deepfakes: Unveiling the VC-Backed Startup’s Revolutionary Tactics

TLDR:
– Clarity, an AI-powered cybersecurity startup, has raised $16 million in venture capital funding to combat deepfakes.
– The company uses AI to recognize common patterns in deepfake creation and offers a watermark for authentic content.

The New York-based startup Clarity has secured $16 million in venture capital funding to combat deepfakes: artificial images, videos, or audio depicting events that never occurred. Clarity uses artificial intelligence to identify common patterns in how deepfakes are created and provides a watermark to designate authentic content. The funding will allow the company to expand its core research and deliver solutions more efficiently. Its aim is to preserve trust in the digital world, where deepfakes pose a significant threat to the authenticity of content. Walden Catalyst Ventures and Bessemer Venture Partners led the round, with participation from Secret Chord Ventures, Ascend Ventures, and Flying Fish Partners.
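The general idea behind watermarking authentic content can be illustrated with a minimal sketch. This is not Clarity’s actual system, whose design is not public; the key, function names, and HMAC-based scheme below are all invented for illustration. A publisher signs content bytes with a secret key at publication time, and anyone holding the key can later confirm the content is unaltered:

```python
import hmac
import hashlib

# Hypothetical publisher signing key -- a real deployment would manage
# keys securely and likely use public-key signatures instead.
SECRET_KEY = b"publisher-signing-key"

def sign_content(content: bytes) -> str:
    """Produce a tamper-evident tag (a 'watermark') for authentic content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches the tag issued at publication."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"authentic news photo bytes"
tag = sign_content(original)
print(verify_content(original, tag))            # True
print(verify_content(b"edited deepfake", tag))  # False
```

Any modification to the bytes invalidates the tag, so doctored copies fail verification. Production systems for media provenance typically embed such signatures in metadata (as in the C2PA standard) rather than shipping them separately.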

Clarity’s technology is already used by media publishers and government agencies to tackle deepfakes. A key differentiator from competitors is how rapidly the company can respond to new types of deepfakes. With the upcoming U.S. presidential election, concerns about AI and deepfakes interfering with democracy have intensified: a Pew Research survey found that 66% of Americans at least occasionally encounter doctored content that leaves them confused about current events, and 77% of respondents support restrictions on deepfakes. Government agencies are responding to this growing threat. The Federal Trade Commission (FTC) has proposed expanding a rule that penalizes impersonation of businesses and government agencies to cover individuals as well. The FTC also plans to strengthen measures against scammers using voice cloning and other AI-driven techniques, with the aim of protecting Americans from impersonator fraud. The agency seeks the authority to take fraudsters to court for repayment and is considering making it illegal for AI companies to provide services when they know those services are being used to impersonate individuals.
