Over the last year, federal agents had been struggling to uncover the identity of the administrator of a dark web child exploitation site. Then a possible avenue opened up, thanks to the suspect’s use of ChatGPT.

In the first known federal search warrant asking OpenAI for user data, reviewed by Forbes after it was unsealed in Maine last week, Homeland Security Investigations revealed it had been chatting with the administrator in an undercover capacity on the child exploitation site when the suspect noted they’d been using ChatGPT.

The suspect then disclosed some prompts and the responses they had received, detailing an apparently innocuous discussion that began with, “What would happen if Sherlock Holmes met Q from Star Trek?” In another conversation, the suspect said they had asked ChatGPT an unspecified question about a 200,000-word poem and received “a sample excerpt of a humorous, Trump-style poem about his love for the Village People’s Y.M.C.A., written in that over-the-top, self-aggrandizing, stream-of-consciousness style he’s known for.” They then copied and pasted that poem. The warrant ordered OpenAI to provide various kinds of information on the person who entered the prompts, including details of other conversations they’d had with ChatGPT, names and addresses associated with the relevant accounts, and any payment data.

The case shows how American law enforcement can use ChatGPT prompts to gather data on users suspected of criminal activity. Search engines like Google have previously been asked to hand over personal information on users who entered certain searches, but this is the first public example of a generative AI platform receiving the same kind of reverse prompt request. OpenAI had not responded to a request for comment at the time of publication.

However, the government did not need the OpenAI data to identify their man. Instead, agents gleaned enough information during undercover chats with the suspect to discover that he was connected to the U.S. military. The suspect disclosed, for instance, that he was going through health assessments, had lived in Germany for seven years and that his father had served in Afghanistan. Investigators later learned from the military that the suspect had worked on Ramstein Air Base in Germany and had applied for further work with the Department of Defense, though the warrant didn’t specify which branch. With enough indicators, the government has alleged that 36-year-old Drew Hoehner is the site admin. He has been charged with one count of conspiracy to advertise child sexual abuse material (CSAM). He has not entered a plea, and his lawyer had not responded to a request for comment at the time of publication.

Homeland Security Investigations, a specialist team inside U.S. Immigration and Customs Enforcement (ICE) focused on child exploitation, cybercrime and human trafficking, had been trying to discover the identity of this person since 2019. Investigators believed the same person, now identified as Hoehner, was either a moderator or an administrator of 15 different dark web sites containing CSAM, with a combined user base of at least 300,000. All were based on the Tor network, which encrypts users’ traffic and routes it through a series of servers to make it hard to track their online movements and identities.

The warrant does not reveal the names of the suspect’s latest sites, but they were highly organized, run by a team of administrators and moderators who would hand out badges and commendations to those who contributed the most. The sites had various subcategories of illegal material, including one dedicated to AI, likely for hosting CSAM generated by artificial intelligence programs.

It’s unclear what specific data the government received. A document showed the search had been completed and OpenAI had provided agents with one Excel spreadsheet of information. No more details were released and the DOJ had not responded to a request for comment. It’s possible the information from OpenAI could be used to help prosecutors corroborate their identification of the defendant.

While the prompt itself had nothing to do with child exploitation, ChatGPT, like all big platforms, can be a target for pedophiles. OpenAI data shows it reported 31,500 pieces of CSAM-related content between July and December last year to the National Center for Missing and Exploited Children, the clearinghouse to which all tech companies have to report child abuse imagery. Over the same six months, OpenAI was asked to disclose either user information or content 71 times, providing governments with information from 132 accounts.
