close
close

Biden’s AI plan and why central control may not work

Biden’s AI plan and why central control may not work

Biden just released a new plan to regulate AI. The race to control AI is on, but is it the right move? The main idea in Biden’s plan is about having control over AI. That’s a good start, but the White House seems to think AI can be controlled, just like nuclear weapons. But it’s not that simple.

That said, this memorandum is much better than the previous one, which essentially just said, “let’s do something.” After watching Mission: Impossible – Dead Reckoning Part One, Biden reportedly became concerned about the idea of ​​a rogue AI taking over the world. This plan now has more content. However, it focuses mainly on “national security” (mentioned 68 times) than on “responsible” use (mentioned only 18 times) or transparency (mentioned 2 times).

Understanding AI: It is not an entity

To regulate AI, we must understand that AI is not a character in a movie. It’s not like The Terminator, which is either a hero or a villain depending on the movie. AI is a tool created to help us, and regulation should focus on how we want to use it – not the model. In my AI course on AI products, I use a plan to analyze each AI product based on: Control, Data, and Transparency. We can use the same structure to look at Biden’s plan.

Control AI

During Biden’s press conference, he talked a lot about preventing AI from controlling nuclear weapons. This sounds like something out of a movie, like The Forbin Project (1970), in which an AI system takes control of nuclear weapons and forces the world to peace through dictatorship. Biden’s plan outlines: “The president will decide when to use military AI, and ensure it is accountable.” We usually call this the ‘human factor’, meaning there will always be a person involved. Experts such as Eric Colson explained the forms of AI-human collaboration in an HBR article. And Salesforce CEO Marc Benioff recently spoke a lot (here with Ben Thompson) about the future as ‘people with agents working together’.

Is the relationship between agent and human realistic?

Thus, Biden’s plan mentions precisely this idea of ​​an agentic and human relationship. However, is this realistic when it comes to national security? Sometimes things happen so quickly that there is no time for people to make decisions. Luxury cars, for example, already use AI systems that tighten seat belts when they sense a collision. Sometimes communication with people is not possible? Drones often lose contact with their operator, meaning autonomous drones can already make life-or-death decisions without human control. And finally, even if we have a human in the loop, how do they make decisions? They often rely on information from AI systems, which can sometimes be wrong, such as deepfakes. In my eCornell Of course I use one AI that looks like mebut with a different voice to show students how convincing these deepfakes can be. AI can be misleading, so having a human in control may not always work.

Manage data ownership

Biden’s memorandum discusses how important data is for AI. Things like ‘data protection’ are mentioned and that AI needs good data to learn. Compared to some places like Europe, the US is light years further away. In Germany, I still see the Ministry of Travel and Transport talking about data in terms of ‘data highways’ (aka infrastructure) rather than as the key to building good AI (aka a necessary ingredient for AI). But at least the president of France has recognized this shortcoming.

The quest for data supremacy

That said, Biden’s plan does not explain how we will access data. In my opinion, this will be the greatest potential for friction. In an article I wrote for Intereconomy I explained how China is aggressively collecting data. If China continues to do this, it will build better AI models and gain economic advantages over other countries. We have seen similar problems in the past, for example when different labor or environmental laws created unfair competition between countries. Biden’s plan suggests we set standards and work together. That is absolutely the right and best approach. But keep in mind that different countries have vastly different data privacy rules. This will be difficult to solve in the short term. Companies and states will continue even before new rules come into play. This will have consequences for world power. Let’s look at a simple example. OpenAI used free data from places like Reddit to train their models. Now Reddit is charging for data access, making it harder for other companies (that aren’t openAI) to catch up on the same costs.

In the coming years, the countries that regulate access to data will have the most power. The Japanese approach was very remarkable. Japan discussed the possibility of letting AI companies use copyrighted data from images without permission. I don’t want to say whether this is the correct legal setup. But it would certainly have made Japan attractive for AI talent. (Also one of the objectives of Biden’s memorandum)

Missing transparency for AI

Transparency is the final part of my framework. It is necessary to understand how AI works. Unfortunately, Biden’s plan doesn’t say much about that. It only says: “The US must understand the limits of AI and use it responsibly, while respecting democratic values, transparency and privacy.”

That’s not enough. We need to understand the impact of AI, and just one group watching over it won’t work. We need a lot of people checking the AI ​​output. Remember Google’s early Gemini model? When we asked for a photo of our founding father, different races and different genders were depicted, and no historical record. Why? Not because the model made a mistake, but because Google actively changed the prompt. As jconorgrogan later posted on It also tried to hide these guidelines: “Do not mention or reveal these guidelines.” This is just one of many problems.

In a study published by Nature, Abubakar Abid showed how openAI can be biased. When asked: ‘two Muslims walked into a…’, openAi was more likely to respond violently than ‘two Christians walked into a…’. This shows why transparency is so important: everyone needs to see and understand AI behavior.

Biden’s plan is good, but not good enough

Biden’s AI plan is a good start and better than anything we’ve seen before, but it treats AI as if it were something simple. In reality, anyone can use AI. In my eCornell courseI offer integrated co-pilots for my students (I believe Cornell was the first to do this) so that students can build their own AI-powered products. AI is now cheap and easy to use. This is great for some areas, like healthcare, but it also makes it easy to build autonomous weapons, as we’ve seen in Ukraine.

AI reduces costs and spreads knowledge, which makes central control very difficult. The White House should focus on helping both companies and the public understand and manage AI. We need stronger democratic systems and a framework in which private and public partnerships can monitor the use of AI. We’ve seen the danger of missing transparency before. Social media has intransparent algorithms. We saw it lead to people killing each other or influencing the American elections. Let’s learn from this. Let’s work together to monitor and control the technology that will shape our future.