Hackers Expose Age-Verification Software Powering Surveillance Web

Three hacktivists tried to find a workaround to Discord’s age-verification software. Instead, they found its frontend exposed to the open internet.

by L0la L33tz

Ten days ago, the social chat app Discord announced that it would launch “teen-by-default” settings for its global audience. As part of this update, all new and existing users worldwide will have a teen-appropriate experience, with updated communication settings, restricted access to age-gated spaces, and content filtering that preserves privacy and meaningful connections, the platform said.

This, of course, means that to use Discord the way you are used to, you’ll have to let it scan your face, and the internet wasn’t happy. Many communities quickly announced their move to other platforms. Others, like the security researcher Celeste, who goes by the handle vmfunc, were convinced there would be a workaround. 

Together with two other researchers, they set out to look into Persona, the San Francisco-based startup that Discord uses for biometric identity verification – and found a Persona frontend exposed to the open internet on a US government-authorized server.

In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting – and a parallel implementation that appears designed to serve federal agencies. On Monday, Discord stated that it will not be proceeding with Persona for identity verification.

Screengrab of Persona's exposed interface displaying a US government systems notification.

Persona, Beyond Age Verification

Persona Identity, Inc. is a Peter Thiel-backed venture that offers Know Your Customer (KYC) and Anti-Money Laundering (AML) solutions, leveraging biometric identity checks to estimate a user's age alongside a proprietary “liveliness check” meant to distinguish real people from AI-generated identities. At a $2 billion valuation, Persona powers identity verification for the likes of OpenAI, Roblox, Heritage Bank, and the ride-sharing service Lime.

Persona feeds on a growing trend of age-verification legislation that is making its way around the world. From the EU’s Chat Control to the UK’s Online Safety Act and the KOSA and EARN IT Acts proposed in the US, governments argue that as long as we can verify anyone’s age on the World Wide Web, we can keep children safe from the dangers of free information. This, it seems, is far from true.

Beyond offering simple services to estimate your age, Persona's exposed code compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media ranging from mentions of terrorism to espionage, and tags reports with codenames from active intelligence programs: public-private partnerships to combat online child exploitation material, cannabis trafficking, fentanyl trafficking, romance fraud, money laundering, and the illegal wildlife trade.

Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, for instance by matching your face against lists of politically exposed persons (PEPs), and generates risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.

The checks run on the images themselves include “Selfie Suspicious Entity Detection,” a “Selfie Age Inconsistency Comparison,” a similar-background detection that appears to match selfie backgrounds against those of other users in the database, and a “Selfie Pose Repeated Detection,” which seems to determine whether you are using the same pose as in previous pictures.
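The exposed frontend names these checks, but how Persona combines their scores into a verdict is not visible. Purely as a hypothetical sketch (every identifier, score, and threshold below is invented for illustration, not taken from Persona's code), per-check scoring of this kind might roll up into flags like so:

```python
# Hypothetical illustration only: check names, scores, and the
# threshold are invented and do not come from Persona's exposed code.
def aggregate_selfie_checks(results, threshold=0.8):
    """Return the names of checks whose risk score crosses the threshold."""
    return [name for name, score in results.items() if score >= threshold]

flags = aggregate_selfie_checks({
    "selfie_suspicious_entity_detection": 0.91,
    "selfie_age_inconsistency_comparison": 0.42,
    "selfie_pose_repeated_detection": 0.85,
})
# flags now holds the two checks that scored at or above 0.8
```

Even a toy aggregation like this shows why the researchers' concern matters: a single high-scoring facial check is enough to mark an account for review.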

In short, the software “flags you as a ‘suspicious entity’ based on your face alone,” the researchers write. That may prove dangerous: Persona's software has reportedly made significant mistakes when estimating users' ages in the past. When paired with AML reporting, such suspicion analysis can quickly lead to the unjust termination of bank accounts. And that seems to be exactly what Persona was built to do.

In addition to facial recognition, Persona's software can run checks on financial data: screening against sanctions lists, analyzing cryptocurrency activity via the blockchain analysis firms Chainalysis and TRM Labs, and filing suspicious activity reports (SARs) directly with US and Canadian federal agencies through a built-in interface.

It’s nothing spectacular for a KYC/AML program, but it marks a growing trend in AML compliance where financial service providers increasingly utilize AI features, such as automated social media screening, to determine risk scores for their customers. This carries inherent risks of discrimination based on factors such as race or political affiliation.

Identity verification efforts like these mislead people into thinking they are better protected. On the contrary, Celeste tells The Rage, “you're making the internet less safe, not safer.” “Normies,” they say, “won't be able to bypass these,” while less benevolent people “will always find ways to exploit your system.”

Computer scientists have long sounded the alarm that a central database of thousands of ID documents will always be a lucrative target for attackers. Just last year, 70,000 ID photos were stolen from Discord and held for ransom. Every week, a new leak puts people at risk of identity theft.

A Wake-Up Call

At best, Persona can be used to create comprehensive surveillance graphs of users for the companies it serves, the researchers say. At worst, the software could file automated reports directly with the government.

The exposed code, which has since been removed, sat at a US government-authorized endpoint that appears to have been isolated from Persona's regular work environment, according to the researchers. It is unusual for such a program to sit on dedicated infrastructure rather than on Cloudflare or Persona's shared systems, they say.

“You do this when the data requires compartmentalization. When the compliance requirements for the data you’re collecting, demand that level of isolation. When the damage of a breach is bad enough to warrant dedicated infrastructure,” the researchers say.

From the publicly exposed domain “openai-watchlistdb.withpersona.com,” which appears to query identity verification requests against an OpenAI database, the researchers found a FedRAMP-authorized parallel implementation of the software at “withpersona-gov.com.”

FedRAMP, or the Federal Risk and Authorization Management Program, is a United States federal government-wide compliance program founded in 2011 that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. 

The program, according to the researchers, performs product analytics and user behavior tracking on a government identity-verification platform, provides real-time user monitoring — every click, every page load — on a FedRAMP platform processing PII and biometrics, and includes financial identity-verification capabilities on the government platform. All of these features, the researchers note, may also be available in the implementation used by OpenAI, though no direct feedback between the two platforms could be determined.

The difference seems to be that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb. “They quickly used this opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves,” the researchers write, noting that this is not necessarily a new development. Persona’s CEO has since personally reached out to the researchers, but the watchlist has not been addressed, says Celeste.

The government implementation, on the other hand, features data that is served by an ONYX deployment — a mere coincidence in name, or a potential link to Fivecast ONYX, as the researchers point out — an AI-powered “open source” surveillance platform purchased by ICE for $4.2 million in 2023, the same year that Persona’s now exposed service first came online.

Fivecast ONYX offers the automated collection of multimedia data from social media and the dark web, the building of “digital footprints” from biographical data, and can track shifts in sentiment and emotion, assign risk scores, search across 300+ platforms and 28+ billion data points, and identify people with “violent tendencies.” Just like Persona, Fivecast ONYX specifically sells its services for KYC/AML to financial institutions.

The researchers note that no direct link to a Fivecast ONYX implementation could be found. Persona Identity, Inc. does not appear to hold government procurement contracts at this time.

The entire service appears to be powered by an OpenAI chatbot that runs on the same government platform that handles SARs, facial biometrics, and watchlist screening — operators using AI chat assistance while reviewing suspicious activity reports and facial recognition matches, the researchers say. “The code doesn’t show PII flowing to OpenAI but honestly the questions about what context the copilot has access to are worth asking.” 

The researchers hope that their findings will serve as a wake-up call. “The internet was supposed to be the great equalizer. Information wants to be free, the network interprets censorship as damage and routes around it, all that beautiful optimism. And for a minute it was true,” Celeste tells The Rage.

But with surveillance capitalism as the default business model, “they took the most powerful communication technology in human history and turned it into a slot machine that makes you sad. We're all rats in a Skinner box pressing the lever for pellets of validation.”

“The state wants to see everything. The corporations want to see everything. And they've learned to work together.”

