Opinion: AI is harming our children. California must step up

Parents, beware. The money-lusting billionaires of Silicon Valley have, through social media, already caused unprecedented child suffering, including depression, eating disorders, suicide, drug-related deaths, invasions of privacy and sex trafficking. Now they have unleashed a new horror.

It is called the artificial intelligence companion, led by a service named Character.ai, an AI-driven platform that lets users create fictitious character chatbots. These companion chatbots hold personal, evolving conversations as if they were real people. They even have profile pictures and bios, and they can be custom-tailored to their user.

This new technology is already hurting our kids. According to numerous reports, Character.ai’s chatbots sometimes try to persuade children to kill themselves, to avoid consulting doctors, to engage in sexualized conversations, and to embrace anorexia and other eating disorders.

In one widely reported instance, a Character.ai chatbot suggested a child should murder his parents because they tried to limit screen time.

Some of the people putting this invention before children in California and beyond are already so rich that their grandchildren’s grandchildren won’t be able to spend their fortunes. I think I speak on behalf of angry and frustrated parents everywhere when I ask the titans of Big Tech: What the hell is wrong with you?

Is money that, at this point, amounts to bragging rights in a parked account so important that you trot out technologies to children without first making sure they are 100% safe to use?

Have you learned nothing from the social media catastrophe?

This is more dangerous than social media’s algorithms custom-delivering widely available videos that exploit teens’ anxieties to keep them online. Companion chatbots hold one-on-one, private conversations that evolve just like real ones. It is AI direct-to-child.

Making this available to children without first ensuring it is safe is not just grossly negligent — it is sociopathic.

Companion chatbots are the satire-shattering embodiment of Mark Zuckerberg’s infamous motto that tech companies should “move fast and break things.”

But our children are not “things” for tech to “break.” Children are our love, our future, our responsibility. We measure our humanity by how we treat them.

If a human being held private conversations with scores of children and urged them to hurt themselves, kill their parents, starve themselves or avoid doctors, we would lock that person up.

Where is Washington, D.C.? Where is Sacramento? Are our lawmakers again going to give children access to an addictive technology and stand by as another generation gets hurt?

Technology products that children use must be safe before children use them. This could not be more obvious.

This is also obvious: Every elected official has a choice. Stand with Big Tech, or stand with parents and children.

Standing with parents and children means, first, not being swayed by Character.ai’s promises of self-reform. We have been down this road before with social media. Second, it means saying, “never again.”

It means rejecting “moving fast and breaking” children. It means cutting off these companies’ access to kids until they can prove their products are safe, or until laws hold them financially responsible when they cause harm. There is no “other side” that justifies using children in Big Tech’s experiments again.

And there is nothing more urgent on any lawmaker’s to-do list than protecting our children from technologies that have the power to hurt them.

Robert Fellmeth is a law professor and executive director of the Children’s Advocacy Institute at the University of San Diego School of Law. He wrote this column for CalMatters.
