The tech executive and lawmakers agreed that new A.I. systems must be regulated. Just how that would happen is not yet clear.
The tone of congressional hearings featuring tech industry executives in recent years can best be described as antagonistic. Mark Zuckerberg, Jeff Bezos and other tech luminaries have all been dressed down on Capitol Hill by lawmakers upset with their companies.
But on Tuesday, Sam Altman, the chief executive of the San Francisco start-up OpenAI, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful A.I. technology being created inside his company and others like Google and Microsoft.
In his first testimony before Congress, Mr. Altman implored lawmakers to regulate artificial intelligence as members of the committee displayed a budding understanding of the technology. The hearing underscored the deep unease felt by technologists and government over A.I.’s potential harms. But that unease did not extend to Mr. Altman, who had a friendly audience in the members of the subcommittee.
The appearance of Mr. Altman, a 38-year-old Stanford University dropout and tech entrepreneur, was his christening as the leading figure in A.I. The boyish-looking Mr. Altman traded in his usual pullover sweater and jeans for a blue suit and tie for the three-hour hearing.
Mr. Altman also talked about his company’s technology at a dinner with dozens of House members on Monday night, and met privately with a number of senators before the hearing, according to people who attended the dinner and the meetings. He offered a loose framework to manage what happens next with the fast-developing systems that some believe could fundamentally change the economy.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.”
Mr. Altman made his public debut on Capitol Hill as interest in A.I. has exploded. Tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid rising concerns about A.I.’s role in spreading misinformation, killing jobs and one day matching human intelligence.
That has thrust the technology into the spotlight in Washington. President Biden this month said at a meeting with a group of chief executives of A.I. companies that “what you’re doing has enormous potential and enormous danger.” Top leaders in Congress have also promised A.I. regulations.
That members of the Senate subcommittee for privacy, technology and the law did not plan a rough grilling for Mr. Altman was clear from the start, as they thanked him for his private meetings with them and for agreeing to appear at the hearing. Senator Cory Booker, Democrat of New Jersey, repeatedly referred to Mr. Altman by his first name.
Mr. Altman was joined at the hearing by Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a well-known professor and frequent critic of A.I. technology.
Mr. Altman said his company’s technology may destroy some jobs but also create new ones, and that it will be important for “government to figure out how we want to mitigate that.” Echoing an idea suggested by Dr. Marcus, he proposed the creation of an agency that would issue licenses for the development of large-scale A.I. models, set safety regulations and devise tests that A.I. models must pass before being released to the public.
“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Mr. Altman said.
But it was unclear how lawmakers would respond to the call to regulate A.I. The track record of Congress on tech regulations is grim. Dozens of privacy, speech and safety bills have failed over the past decade because of partisan bickering and fierce opposition by tech giants.
The United States has trailed the globe on regulations in privacy, speech and protections for children. It is also behind on A.I. regulations. Lawmakers in the European Union are set to introduce rules for the technology later this year. And China has created A.I. rules that comply with its censorship laws.
Senator Richard Blumenthal, Democrat of Connecticut and chairman of the Senate panel, said the hearing was the first in a series to learn more about the potential benefits and harms of A.I. to eventually “write the rules” for it.
He also acknowledged Congress’s failure to keep up with the introduction of new technologies in the past. “Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Mr. Blumenthal said. “Congress failed to meet the moment on social media.”
Members of the subcommittee suggested an independent agency to oversee A.I.; rules that force companies to disclose how their models work and the data sets they use; and antitrust rules to prevent companies like Microsoft and Google from monopolizing the nascent market.
“The devil will be in the details,” said Sarah Myers West, managing director of the AI Now Institute, a policy research center. She said Mr. Altman’s suggestions for regulations did not go far enough and should include limits on the use of A.I. in policing and on the use of biometric data. She noted that Mr. Altman gave no indication of slowing down the development of OpenAI’s ChatGPT tool.
“It’s such an irony seeing a posture about the concern of harms by people who are rapidly releasing into commercial use the system responsible for those very harms,” Ms. West said.
Some lawmakers in the hearing still displayed the persistent gap in technological know-how between Washington and Silicon Valley. Senator Lindsey Graham, Republican of South Carolina, repeatedly asked witnesses whether a speech liability shield for online platforms like Facebook and Google also applies to A.I.
Mr. Altman, calm and unruffled, tried several times to draw a distinction between A.I. and social media. “We need to work together to find a totally new approach,” he said.
Some subcommittee members also showed a reluctance to clamp down too strongly on an industry that holds great economic promise for the United States and competes directly with adversaries such as China.
The Chinese are creating A.I. that “reinforce the core values of the Chinese Communist Party and the Chinese system,” said Senator Chris Coons, Democrat of Delaware. “And I’m concerned about how we promote A.I. that reinforces and strengthens open markets, open societies and democracy.”
Some of the toughest questions and comments directed at Mr. Altman came from Dr. Marcus, who noted that OpenAI has not been transparent about the data it uses to develop its systems. He expressed doubt about Mr. Altman’s prediction that new jobs will replace those killed off by A.I.
“We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” Dr. Marcus said.
Tech companies have argued that Congress should be careful with any broad rules that lump different kinds of A.I. together. In Tuesday’s hearing, Ms. Montgomery of IBM called for an A.I. law similar to Europe’s proposed regulations, which outline various levels of risk. She urged rules that focus on specific uses of the technology rather than on the technology itself.
“At its core, A.I. is just a tool, and tools can serve different purposes,” she said, adding that Congress should take a “precision regulation approach to A.I.”