Original Source Here
Altman’s Big Asks Going To Congress On AI Safety
Safety standards, an oversight agency, independent auditors, international cooperation and legal responsibilities were discussed with Congress.
We’re familiar with tech CEOs heading to testify to congress. Facebook’s Mark Zuckerberg, Google’s Sundar Pichai and Jack Dorsey of Twitter appeared many times to be grilled by politicians, on a variety of subjects.
Usually, those types of hearings are a nothingburger, fortified only with political brownie points.
This week hits differently, though.
First, OpenAI’s Sam Altman ASKED to testify. He pushed for it.
This is a case of a tech CEO showing leadership and pushing our politicians to do what is right, not waiting to be called accountable later for a grey area of governance.
In fairness, we are behind other world powers. The EU approved a final version of its AI Act on Thursday making it very close to law. China released draft Administrative Measures for Generative Artificial Intelligence Services (official Chinese version available here) in April for review. The review period closed last week.
Meanwhile, in America—we are just getting warmed up. Fortunately, industry leaders like Sam Altman have spent years contemplating the challenges we the people face.
His first ask is that we get ahead of it. He wants us to lead the conversation on global regulation. That seems to be happening, and mercifully, there is bipartisan recognition of the effort. So far, they haven’t weaponized the discussion for political benefit. Let’s hope it stays that way.
Let’s take a look at the rest of his asks.
Altman Is Looking To Create Safety Standards
Citing the most immediate threats to democracy and to our societal fabric, Altman is focused on how to avoid highly personalized disinformation campaigns that can now run at scale thanks to generative AI. AI’s ability to fool us is inherent to its design, and the root of its danger.
He did not elaborate on the specific threats that we need to set standards on, but they range from warnings about the spread of misinformation and bias to bringing about the complete destruction of biological life.
To underscore this danger, Sen. Richard Blumenthal kicked off Tuesday’s hearing with some theatrics. Using a fake recording of his own voice, written by ChatGPT, and audio of Blumenthal’s actual voice produced using recordings of his floor speeches, he applauded how accurately ChatGPT reflected his views. However, he pointed out that ChatGPT just as easily could have produced “an endorsement of Ukraine’s surrendering or Vladimir Putin’s leadership.”
Frightening.
Altman drove fears further, reminding folks that we will have another election in just 18 months, and the models are just getting better.
“Some of us might characterize it more like a bomb in a china shop, not a bull.”
—Sen. Richard Blumenthal (D., Conn.), chair of the group’s subcommittee on Privacy, Technology, and the Law
As for what Altman wants to regulate, he broadly suggested that AI systems that can “self-replicate and self-exfiltrate into the wild” and manipulate humans would be violations. He suggests barring models from self-replication and creating specific functionality tests the models have to pass, such as verifying the model’s ability to produce accurate information, or ensuring it doesn’t generate dangerous content.
Two Senators, Marcus and Montgomery, advocated for universal warning transparency from AI creators so that users would always know when they were interacting with a chatbot, for example. Marcus even discussed creating a type of “nutrition label” where AI creators would explain the components or data sets that went into training their models.
Altman did not include transparency considerations in his regulation recommendations.
Altman Suggests a Governing Agency, and License
With AI utility exploding, Altman believes we need strong AI regulation, including government licensing models.
This yet-to-be-born-agency would have the authority to license companies working on advanced AI models and revoke licenses if safety standards are violated.
This would act a lot like the SEC does for financial security. A necessary oversight and encumbrance to ensure investors can trust the system. A stabilizing force could open investment to flow into the AI market at large, spurring innovation, hindering bad actors, and creating a safe space for citizens to adopt AI.
At least four lawmakers addressed or supported the ideas of a new regulatory body to help navigate this new world with AI.
Does regulation benefit OpenAI?
The short answer is, yes!
OpenAI is a business, and its major competition is open source. They have near-term plans to release a new open-source language model to combat the rise of other open-source projects.
Regulation and licensing are expensive hurdles for any business, requiring lawyers, countless hours of work, and fees that could be prohibitive to loosely organized and not-well-funded open-source projects. It could skew the market towards private, licensed models.
So yes, this is also a strategy to help protect OpenAI’s business.
But, I’ve worked in open source for almost 20 years, and I consider that a weak argument. Good open-source projects get hundreds or thousands of people to help them. Good open-source projects have many eyeballs and hearts and wallets invested in them doing well.
Bad projects will suffer though. And that’s kinda the point. Stability is paramount to growing a large market, as the SEC experiment has demonstrated.
Altman Suggests Independent Auditors
To button up the package, Altman urged legislators to require independent oversight. He suggests audits from experts unaffiliated with the creators or the government would create the necessary checks and balances so we could ensure AI tools operated within the legislative guidelines.
Altman Calls For International Cooperation and Leadership
Recognizing that AI issues transcend national borders, Altman urged legislators to create international AI regulations and for the United States to take a leadership role in this effort.
Other Issues Discussed at the Altman Congressional Hearing
Job loss fears not a hot issue
Altman and Senators alike seem to agree that AI may eliminate some jobs, but new ones will form in their place. The important thing is to prepare the workforce for AI-related training.
“There will be an impact on jobs. We try to be very clear about that, and I think it’ll require partnership between industry and government, but mostly action by government, to figure out how we want to mitigate that. But I’m very optimistic about how great the jobs of the future will be.”
—Sam Altman, Open AI CEO
Creator compensation seems to be lower urgency
AI models use artists’ works in their training, and they can now produce similar works quickly and prolifically. Should creators be compensated?
Altman agrees we need to do something to reward their inputs, but was vague on how. He also sidestepped sharing how ChatGPT’s recent models were trained and whether they used copyrighted content.
His lawyers probably advised him to avoid sharing specific tactics, given the regulation is yet to be written. This is just an obvious landmine that could incriminate them later.
Really though, these types of issues tend to take much longer to work out. They are such complicated, wide-reaching issues that it often takes a big name to go to court to move the law forward here. We’ve been through this before with digital rights.
These things tend to get worked out, with the government focused on safety and stabilization, and the courts working out the money side. I expect it will go the same way this time.
Social media protection (Section 230) doesn’t apply to AI models
Section 230 is the contentious legislation that protects social media companies from liability for their users’ posted content. It’s a much-hated loophole that protects platforms from the individual actions of their users and fails to urge them to proactively govern bad actors.
This week, Altman argued that Section 230 doesn’t apply to AI models, and called for new AI-specific regulation instead. This is a rare time when a CEO begs the government to regulate his company.
Voter influence at scale is AI’s nearest and greatest threat
Altman thinks the most immediate threat AI presents is to democracy and our societal fabric. Its ability to create a deluge of personalized disinformation is so great, it has the power to reshape elections and our fabric of reality.
With just 18 months until the next presidential election, this should be a fire under legislators’ feet. With the last election’s “alternative facts” being tested in court today, it should be evident that these huge disinformation campaigns carried out by a sitting president happened without the power of AI.
AI critics are worried corporations are leading too much
Sen. Cory Booker (D-NJ) shared his concern about how much AI power was concentrated in the OpenAI-Microsoft alliance.
Others complained that letting Altman lead this conversation was a bad example of letting corporations write their own rules, which is roughly how legislation is proceeding in the EU.
AI/ML
Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot