Elias: Why not instead merely adopt Asimov’s rules to govern A.I.?

Scott Wiener would have been all smiles if Gov. Gavin Newsom opted to sign his supposedly landmark bill to govern development of new artificial intelligence devices and programs in California.

Instead, Newsom decided to veto the measure, also known as SB 1047, on Sunday.

The bill was originally intended as a model for other states to follow, but it fell far short of that. It was so watered down in the legislative process, so dumbed down for the sake of political convenience, that it might as well have contained no new rules.


Yes, Wiener, a Democratic state senator from San Francisco, sported a big grin when his bill passed, despite being cut to pieces in the state Assembly. That might have been because the pioneering tech startups OpenAI and Anthropic are in his district. Helping out potential hometown businesses by accepting a weaker measure can’t hurt Wiener as he continues his not exactly secret quest to take the congressional seat Nancy Pelosi has occupied for decades, whenever she retires.

OpenAI is the developer of the widely used A.I. tool ChatGPT, which has often been wrong about a host of things.

But here’s the real question for Wiener, and possibly why the governor vetoed his bill: Why set up a complicated, often obfuscated, so-called protection against harmful robots and mechanical minds when simple rules that could protect against all kinds of problems were laid out about 82 years ago by a leading scholar and science-fiction author?

In his 1942 short story “Runaround,” Isaac Asimov first put forward his three laws of robotics, which would become staples in his myriad later works, including the famed “Foundation” series.

Asimov’s first law holds that a robot may not injure a human being or, through inaction, allow a human to come to harm. The second is that a robot must obey orders given to it by humans, except where those orders would conflict with the first law. The third is that a robot must protect its own existence, so long as doing so does not conflict with the first two laws.

Rather than offering this kind of broad but simple protection, politics interfered. Some opponents questioned even the softened Wiener bill, which eliminated a previously proposed state department specializing in safety measures for A.I. devices in all forms. Instead, under the bill, those safety measures would be submitted for approval to the attorney general’s office, never known for its cybernetic genius.

The attorney general, nominally California’s top law enforcement officer, could penalize companies posing an imminent threat of harm. But there is no solid definition of what that means.

Backers of the Wiener measure claimed it would create guardrails to prevent A.I. programs from shutting down the power grid and causing other sudden disasters. It’s clear some kind of controls are needed, because A.I. is developing fast and in many forms, from taking over most mathematical functions at banks to writing automated news stories.

Then there’s the state’s legitimate concern that it not set up rules so tough they threaten to drive out its newest potential high-tech economic engine, one that’s already picking up some of the slack left by companies like Tesla and Toyota, which moved their headquarters to other states.

Then there are those who claim this would-be head-in-the-clouds regulation does nothing about everyday, real-world concerns like privacy and misinformation. For sure, A.I. produces plenty of misinformation, often mangling basics like birth dates and birthplaces, thus complicating some people’s lives. Wiener’s bill offered no recompense for these ills.

Why not instead merely adopt Asimov’s rules? They’re simple, and his vivid imagination made them central features of many novels and stories involving robots with disparate personalities and functions.

The advantage of starting with simple rules to govern an industry that has so far had few is that new rules can be designed as the need for them is demonstrated, leaving people and companies free to develop new A.I. functions and wrinkles with little interference from government agencies unless circumstances demand they step in.

There’s an old principle that says “Start simply” — and if there’s ever been a situation demanding this, it is the potentially limitless field of artificial intelligence.

Email Thomas Elias at tdelias@aol.com, and read more of his columns online at californiafocus.net.
