
California’s AI Safety Bill Sparks Controversy in Silicon Valley

  • by Dhanshree Shripad Shenwai

If you follow AI news, California’s AI safety bill has likely caught your attention. SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, has been passed by the State Assembly and Senate, marking a major step in California’s effort to regulate artificial intelligence (AI). The bill has stirred intense debate in the tech community, especially in Silicon Valley, and now awaits Governor Gavin Newsom’s decision.

What is SB 1047?

SB 1047 is one of the first significant AI laws in the United States, aimed at reducing the risks posed by advanced AI models. The bill targets large AI developers, particularly those building frontier models that cost at least $100 million to train. It requires these companies to put strict safety measures in place, including an “emergency stop” capability, testing procedures to identify potential risks, and annual third-party audits of their safety practices.

The bill would also create the Board of Frontier Models, a new oversight body responsible for ensuring compliance and advising on safety issues. Its members, appointed by the governor and legislature, would be drawn from the AI industry, academia, and the open-source community.

Supporters vs. Opponents

Supporters of SB 1047 argue that the bill is necessary to prevent potential misuse of AI, such as AI-powered hacking or the development of autonomous weapons. State Senator Scott Wiener, the bill’s author, emphasizes the need for swift action, drawing on past struggles to regulate social media and data privacy. 

“Let’s not wait for something bad to happen. Let’s just get ahead of it,” Wiener said, stressing the importance of putting safety measures in place before AI technologies become a global threat.

Geoffrey Hinton and Yoshua Bengio, two prominent AI researchers, have endorsed the bill, citing concerns about the existential risks of unchecked AI development. Groups such as the Center for AI Safety have also backed it, arguing that preventing a major AI safety incident is in the tech industry’s long-term interest.

Much of Silicon Valley, on the other hand, has pushed back hard. Critics argue that SB 1047 could stifle innovation, particularly for startups and open-source AI developers. Venture capital firms such as Andreessen Horowitz (a16z) contend that the bill’s thresholds are arbitrary and could harm the AI ecosystem, warning that as AI models become more expensive to build, more startups will fall under the bill’s strict requirements and growth could slow.

Even tech giants like Meta, OpenAI, and Google have voiced concerns. OpenAI argues that AI-related national security measures should be handled at the federal level rather than by individual states. Yann LeCun, Meta’s chief AI scientist, has criticized the bill as an overreaction to what he sees as an “illusion of existential risk.”

Changes and the Way Forward

In response to the backlash, SB 1047 was amended: potential criminal penalties were replaced with civil ones, and the California attorney general’s enforcement powers were narrowed. The amendments have softened some of the opposition. Dario Amodei, CEO of Anthropic, said the bill’s benefits now “likely outweigh its costs.”

Even with these changes, the bill remains controversial. Prominent California lawmakers, including Silicon Valley Congressman Ro Khanna and former House Speaker Nancy Pelosi, worry that SB 1047 could damage the state’s innovation ecosystem. The US Chamber of Commerce has also criticized the bill, warning that it might push tech companies to leave the state.

Governor Newsom’s Decision

With the bill now on his desk, the tech industry is watching closely to see what Governor Newsom will do. He has until the end of September to sign it into law or veto it. If signed, SB 1047 would set a significant precedent for AI regulation in the US, with potential ripple effects on the tech industry worldwide.

Whether or not the bill becomes law, the fight over SB 1047 shows how difficult it is to regulate emerging technologies like AI. California sits at the center of the AI revolution and is still working out how to balance innovation with safety concerns.

Sources:

https://www.morganlewis.com/pubs/2024/08/californias-sb-1047-would-impose-new-safety-requirements-for-developers-of-large-scale-ai-models

https://www.theverge.com/2024/8/28/24229068/california-sb-1047-ai-safety-bill-passed-state-assembly-governor-newsom-signature

https://www.techtimes.com/articles/307315/20240830/california-sb-1047-k-controversial-ai-safety-bill-recently-passed.htm


