
The Federal Trade Commission is launching an investigation into AI chatbots from seven companies, including Alphabet, Meta and OpenAI, over their use as companions. The inquiry seeks to learn how the companies test, monitor and measure the potential harm of these chatbots to children and teens.

A Common Sense Media survey of 1,060 teens in April and May found that over 70% used AI companions and that more than 50% used them consistently — a few times or more per month. 

Experts have been warning for some time that exposure to chatbots could be harmful to young people. A study revealed that ChatGPT provided bad advice to teenagers, such as how to conceal an eating disorder or how to personalize a suicide note. In some cases, chatbots have ignored comments that should have been recognized as concerning, skipping past them to continue the previous conversation. Psychologists are calling for guardrails to protect young people, such as in-chat reminders that the chatbot is not human, and say educators should prioritize AI literacy in schools.

It’s not just children and teens, though. Plenty of adults have experienced negative consequences from relying on chatbots, whether for companionship, advice or as a personal search engine for facts and trusted sources. Chatbots more often than not tell you what they think you want to hear, which can lead to flat-out lies. And blindly following the instructions of a chatbot isn’t always the right thing to do.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children,” FTC Chairman Andrew N. Ferguson said in a statement. “The study we’re launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children.”

A Character.ai spokesperson told CNET that every conversation on the service carries prominent disclaimers saying all chats should be treated as fiction.

“In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the spokesperson said.

The company behind the Snapchat social network likewise said it has taken steps to reduce risks. “Since introducing My AI, Snap has harnessed its rigorous safety and privacy processes to create a product that is not only beneficial for our community, but is also transparent and clear about its capabilities and limitations,” a Snap spokesperson said.

Meta declined to comment, and neither the FTC nor any of the remaining four companies immediately responded to our request for comment. 

The FTC has issued orders to the seven companies and is seeking a teleconference with each about the timing and format of their submissions no later than Sept. 25. The companies under investigation include the makers of some of the biggest AI chatbots in the world, as well as popular social networks that incorporate generative AI:

  • Alphabet (parent company of Google)
  • Character Technologies
  • Instagram
  • Meta Platforms
  • OpenAI
  • Snap
  • X.AI

Since late last year, some of those companies have updated or bolstered their protection features for younger users. Character.ai began imposing limits on how chatbots can respond to people under the age of 17 and added parental controls. Instagram introduced teen accounts last year and switched all users under the age of 17 to them, and Meta recently set limits on the topics teens can discuss with chatbots.

The FTC is seeking information from the seven companies on how they:

  • monetize user engagement
  • process user inputs and generate outputs in response to user inquiries
  • develop and approve characters
  • measure, test, and monitor for negative impacts before and after deployment
  • mitigate negative impacts, particularly to children
  • employ disclosures, advertising and other representations to inform users and parents about features, capabilities, the intended audience, potential negative impacts and data collection and handling practices
  • monitor and enforce compliance with company rules and terms of service (for example, community guidelines and age restrictions) and
  • use or share personal information obtained through users’ conversations with the chatbots
