US officials seek to crack down on harmful AI products

By AP News

The U.S. government will “not hesitate to crack down” on harmful business practices involving artificial intelligence, the head of the Federal Trade Commission warned Tuesday in a message partly directed at the developers of widely used AI tools such as ChatGPT.

FTC Chair Lina Khan joined top officials from U.S. civil rights and consumer protection agencies to put businesses on notice that regulators are working to track and stop illegal behavior in the use and development of biased or deceptive AI tools.

Much of the scrutiny has been on those who deploy automated tools that amplify bias in decisions about who is hired, how worker productivity is monitored, or who gets access to housing and loans.

But amid a fast-moving race between tech giants such as Google and Microsoft in selling more advanced tools that generate text, images and other content resembling the work of humans, Khan also raised the possibility of the FTC wielding its antitrust authority to protect competition.

“We all know that in moments of technological disruption, established players and incumbents may be tempted to crush, absorb or otherwise unlawfully restrain new entrants in order to maintain their dominance,” Khan said at a virtual press event Tuesday. “And we already can see these risks. A handful of powerful firms today control the necessary raw materials, not only the vast stores of data, but also the cloud services and computing power that startups and other businesses rely on to develop and deploy AI products.”

Khan didn't name any specific companies or products but expressed concern about tools that scammers could use to “manipulate and deceive people on a large scale, deploying fake or convincing content more widely and targeting specific groups with greater precision.”

She added that “if AI tools are being deployed to engage in unfair, deceptive practices or unfair methods of competition, the FTC will not hesitate to crack down on this unlawful behavior.”

Khan was joined by Charlotte Burrows, chair of the Equal Employment Opportunity Commission; Rohit Chopra, director of the Consumer Financial Protection Bureau; and Assistant Attorney General Kristen Clarke, who leads the civil rights division of the Department of Justice.

As lawmakers in the European Union negotiate the passage of new AI rules, and amid calls for similar laws in the U.S., the top U.S. regulators emphasized Tuesday that many of the most harmful AI products might already run afoul of existing laws protecting civil rights and preventing fraud.

“There is no AI exemption to the laws on the books,” Khan said.
