I spent a couple of days last week at the University of Oxford in the UK, where I spoke at and attended the Oxford Generative AI Summit. The multi-stakeholder event brought together elected and appointed officials from the UK and other countries, along with academics, executives and scientists from tech and media companies.
Other speakers included Michael Kratsios, who served as U.S. chief technology officer during the Trump administration; Michael Bronstein, the DeepMind professor of AI at the University of Oxford; Dame Wendy Hall DBE, a professor of computer science who serves on the United Nations high-level advisory body on artificial intelligence; and Baroness Joanna Shields OBE, who served as the UK’s Minister for Internet Safety and Security under David Cameron and Theresa May. There were also executives from Google, TikTok, OpenAI and other tech companies.
GenAI explained
As a refresher, generative AI (or GenAI) is artificial intelligence that can create “original” content, including text, images, video, audio and software code, in response to a prompt or question entered by a human. It’s been around for a number of years but has come into prominence recently thanks to major players like OpenAI, Google, Microsoft and Meta, which are putting massive resources into GenAI development. I put “original” in quotes because, although the AI model generates the content, it is based on training data gathered online and from other sources. So, although the wording is original, the information comes from a great many other places. Of course, that’s also true for human-created content, but reputable journalists and academics usually cite their sources, which doesn’t necessarily happen with AI systems.
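For readers curious what “entering a prompt” looks like behind the scenes, here is a minimal sketch using OpenAI’s Python library; the model name and prompt are illustrative assumptions, and you would need your own API key.

    # Minimal sketch: asking a generative AI model to create text from a prompt.
    # Assumes the openai Python package is installed and an OPENAI_API_KEY
    # environment variable is set; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Write a short poem about Oxford."}],
    )
    print(response.choices[0].message.content)

The model returns newly worded text, but everything it “knows” comes from its training data, which is the point of the quotation marks above.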
Regulation is needed
My panel was on AI regulations. I was joined by Markus Anderljung from the Centre for the Governance of AI, Rafaela Nicolazzi from OpenAI, Joslyn Barnhart, senior research scientist at Google DeepMind, and moderator Keegan McBride from the Oxford Internet Institute.
There was near-unanimous agreement among my panel and other speakers that regulation of AI is inevitable and necessary. Most people seemed to agree with my comment that regulation should be focused and nuanced so as to prevent negative consequences without impeding the potential benefits of generative AI, which, at least as far as mainstream products are concerned, is still in its infancy. Regulation should target real harms and be flexible enough to accommodate the inevitable changes in technology. As we’ve seen over the past few decades, the tech industry moves faster than governments, so it’s important for governments to provide general guidelines without trying to micromanage the technology.
Risk of jurisdictions passing conflicting laws
A couple of speakers worried aloud about the balkanization of AI regulation with multiple countries and U.S. states considering or passing legislation, which sometimes conflicts with regulations from other jurisdictions.
In an interview at the conference, Linda Lurie, who served in the Biden White House Office of Science and Technology Policy and is now at WestExec Advisors, told me, “What’s going to happen is that any company with a presence is going to have to comply with the most stringent regulations, which is kind of unfair and undemocratic.” She argued that many jurisdictions already have laws in place that can protect against misuse of AI. “We don’t have to put an AI stamp on every other law that any country has. Make sure you know what’s currently on the books to see where the gaps are, and to do that on a harmonized level. That includes input from both governments, but also from companies and from civil society. Only then are you going to be able to get real regulation that’s going to be effective and not kill AI.”
Risks
A number of people expressed concern that large companies, mostly based in the United States, could dominate generative AI in ways that shut out other countries, especially in Africa, Latin America and other regions where the economy and tech infrastructure aren’t as developed as they are in the U.S., UK and much of Europe.
The risks include not only the exclusion of these regions from the economic and social gains of GenAI but also the biases that can be built into AI models, especially those trained on internet data that comes mostly from wealthier countries and from dominant groups within those countries. Don’t just take my word for it. ChatGPT itself admits, “Countries with less internet infrastructure or lower rates of digital content creation (e.g., in media, academia, or user-generated platforms) contribute less to the training datasets of AI models.” I suppose I should be pleased that even a bot can be self-critical when forced to reckon with a question about its own potential bias.
Optimism
Most speakers expressed cautious optimism. One UK politician spoke about how generative AI can help level the playing field not just for adults but for young people as well. When I asked whether she worried that big companies would dominate generative AI as they have social media, search and other aspects of the internet, she expressed hope that regulations could prevent that from happening. I hope she’s right, but I’m not convinced.
Although many attendees and speakers expressed concerns about negative outcomes, including employment disruption, bias, misinformation, deepfakes, privacy and security issues, lack of accountability and intellectual property disputes, just about everyone agreed that generative AI can provide humankind with huge advantages and potential economic growth.
Oxford Ph.D. student Nathan Davies, who moderated panels for the event, said, “It’s rare to get policymakers, academics and businesspeople together in one space.”
Although there were the expected disagreements, I left with a strong sense of hope in shared values, which is impressive given that the conference attendees ranged from Donald Trump’s former CTO to current Labour Party MPs.
After the conference, I walked around the nearly 1,000-year-old campus. I’m sure its founders had no idea about artificial intelligence, but they helped lay the groundwork for the nurturing of human intelligence that has led us to this place.
Larry Magid is a tech journalist and internet safety activist. Contact him at larry@larrymagid.com.