We need a new Federal AI Safety Administration (FAISA) to be created, and quickly
I'm glad that Sen. Chuck Schumer, the Senate Majority Leader, seems to grasp the importance of AI and the need to regulate it. He is leading a congressional process to draft new legislation.
Schumer is quoted in this NPR article:
“Look, it’s probably the most important issue facing our country, our families and humanity in the next 100 years,” he said. “And how we deal with AI is going to determine the quality of life for this generation and future generations probably more than anything else.”
Yes, agreed, resoundingly so. Some accurately compare AI to the harnessing of fire or electricity, and it may well exceed both in importance in the coming decades.
Schumer's ideas are a good start, but we will need something far more ambitious than what he has in mind, including possibly a national moratorium on developing LLMs more powerful than GPT-4.
This is the inflection point, and Congress needs to rise to the occasion, quickly.
It also seems clear that we need a new federal agency created for the specific purpose of regulating the safety of AI products, much as the Federal Aviation Agency (now the Federal Aviation Administration) was created in 1958, partly in response to the 1956 mid-air collision of two airliners over the Grand Canyon.
It could be called something like the Federal AI Safety Administration (FAISA) and could be created quickly if congressional efforts coalesce around this goal.
OpenAI’s Cullen O’Keefe recently suggested a similar idea, but called it the Office of AI and Infrastructure Safety (OASIS).
It's good to see OpenAI at least trying to get out in front on wise regulation, but since they are largely responsible for the problem itself, I'm not going to applaud just yet.