ChatGPT owner in probe over risks around false answers

Image caption: ChatGPT on a computer (Getty Images)

US regulators are probing artificial intelligence company OpenAI over the risks to consumers from ChatGPT generating false information.

The Federal Trade Commission (FTC) sent a letter to the Microsoft-backed business requesting information on how it addresses risks to people’s reputations.

The inquiry is a sign of the rising regulatory scrutiny of the technology.

OpenAI has sparked a furore since launching its chatbot last year.

ChatGPT generates convincing, human-like responses to user queries within seconds, instead of the series of links returned by a traditional internet search. It and similar AI products are expected to dramatically change the way people find information online.

Tech rivals are racing to offer their own versions of the technology, even as it generates fierce debate, including over the data it uses, the accuracy of the responses and whether the company violated authors’ rights as it was training the technology.

The FTC’s letter asks what steps OpenAI has taken to address its products’ potential to “generate statements about real individuals that are false, misleading, disparaging or harmful”.

The FTC is also looking at OpenAI’s approach to data privacy and how it obtains data to train and inform the AI.

This spring, Congress hosted OpenAI's chief executive Sam Altman for a hearing, in which he admitted the technology could be a source of errors. He called for regulations to be crafted for the emerging industry and recommended that a new agency be formed to oversee it. He said he expected the technology to have a significant impact as its uses become clear, including on jobs.

“I think if this technology goes wrong, it can go quite wrong… we want to be vocal about that,” Mr Altman said at the time. “We want to work with the government to prevent that from happening.”

The investigation by the FTC was first reported by the Washington Post, which published a copy of the letter. OpenAI did not respond to a request for comment.

The FTC also declined to comment. The consumer watchdog has taken a high profile role policing the tech giants under its current chair, Lina Khan.

Ms Khan rose to prominence as a Yale law student, when she criticised America's record on anti-monopoly enforcement, focusing on Amazon.

Appointed by President Joe Biden, she is a controversial figure, with critics arguing that she is pushing the FTC beyond the boundaries of its authority.

Some of her most high-profile challenges to tech firms' activities – including a push to block the merger of Microsoft with gaming giant Activision Blizzard – have faced setbacks in the courts.

During a five-hour hearing in Congress on Thursday, she faced tough criticism from Republicans over her leadership of the agency.

She did not mention the FTC’s investigation into OpenAI, which is at a preliminary stage. But she said she had concerns about the product’s output.

“We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else,” Ms Khan said.

“We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about,” she added.

The FTC probe is not the company’s first challenge over such issues. Italy banned ChatGPT in April, citing privacy concerns. The service was restored after it added a tool to verify users’ ages and provided more information about its privacy policy.
