Jeff Nesbit
May 3, 2021

Artificial Intelligence runs our lives — whether we like it or not

Government leaders took their first, tentative steps April 21 in a fight to keep high-risk Artificial Intelligence systems inside Pandora’s Jar, when European Union officials outlined an ambitious package of proposed regulations on the riskiest elements of AI systems.

The EU’s draft proposals — the first effort by any governments to regulate AI that is becoming ubiquitous in all facets of the global economy, but which could take months to negotiate — would ban or tightly control AI-driven systems that threaten people’s safety or rights.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

But it may be too little, too late.

At some point, it will become obvious to all of us. Our lives will be controlled, from start to finish, by Artificial Intelligence. This reality is already with us — and we can’t escape.

The signs are now everywhere.

The Mayo Clinic just launched a company to harness patient data and deliver care — using artificial intelligence. It will collect and analyze data from remote monitoring devices and diagnostic tools and then use that patient data to deliver continuous care guided by AI.

In the waning days of the Trump administration, the Department of Transportation gave its first approval to a company for a self-driving vehicle directed by artificial intelligence. DOT approved a regulatory exemption for R2, Nuro’s second-generation autonomous vehicle.

It was a milestone for the emerging autonomous vehicle industry, which is built around AI’s ability to see what humans cannot on roads and highways. While R2 will deliver packages, not people, it’s an obvious roadmap for what’s to come — and what AI means. Domino’s Pizza just added Nuro’s autonomous delivery robot R2 to its delivery options.

“We custom-designed R2 to enrich local commerce with last-mile delivery of consumer products, groceries, and hot food from local stores and restaurants,” the company said when it announced the DOT approval. “With its specially designed size, weight, pedestrian-protecting front end, operating speed, electric propulsion, and cautious driving habits, R2 is ready to begin service as a socially responsible neighborhood vehicle that you can trust.”

Microsoft just announced that it is acquiring Nuance Communications for $16 billion. The deal is Microsoft’s largest acquisition in the past five years — and was specifically targeted so the company can design real-world applications of “enterprise” artificial intelligence.

Nuance is a leader in artificial intelligence and speech-recognition software. Its products include Dragon, which uses AI to transcribe human speech for all sorts of real-world applications.

But Microsoft has much bigger designs for the use of enterprise AI and will use the Nuance acquisition to race ahead toward those applications. Specifically, it plans to aggressively use AI in the health care technology services field. The company believes the use of AI could double its addressable health care market, to nearly $500 billion.

“Nuance provides the AI layer at the health care point of delivery and is a pioneer in the real-world application of enterprise AI,” Satya Nadella, Microsoft’s chief executive, said in a statement.

The Nuance acquisition advances that goal. AI-driven medical computing is advancing at warp speed. In the very near future, virtually every aspect of health care will be impacted in one way or another by data-driven AI applications. But it is also just the beginning for Microsoft, which plans to use AI tools to race ahead in consumer and business finance and retail markets as well.

The EU’s draft proposals on AI come at a critical juncture. Governments around the world are starting to recognize both the risks and rewards of such ubiquitous AI use. The proposed rules would govern what AI can — and cannot — be used for in products that span virtually all aspects of our daily lives.

Europe wants to avoid the worst aspects of what AI might do — e.g., trampling on privacy concerns through surveillance or unintended accidents from rogue AI systems — but still allow it to be used to grow the global economy.

The debate over government regulation of AI began in earnest earlier this year when the draft EU rules were first published by POLITICO. Under the new rules, fines for violations would be steep — on the order of tens of millions of dollars per violation.

Streamlining manufacturing facilities (even though it replaces human jobs with AI ones)? Good, and allowed. Making the energy grid much more efficient so that freak snowstorms in Texas don’t kill people when power systems fail and natural gas plants freeze up? Good, and allowed.

But AI algorithms that prowl the internet and public data to create profiles of us for things like credit applications, social security benefits, visa applications or court cases? Potentially not good, or high risk, and subject to scrutiny and handwringing. AI-driven algorithms on media platforms that manipulate people’s behavior or decisions and threaten democracy? Bad, and not allowed.

AI-driven scoring systems, like those launched in China that track the trustworthiness of people and businesses, wouldn’t make the cut. They’d be banned in Europe.

The rules, if eventually enacted, would be the first of their kind to regulate artificial intelligence. The EU believes it can avoid the sorts of mistakes other governments have made — leaving powerful technology companies largely unregulated, as in the United States, at one end, or deploying AI systems as part of a sweeping, mass surveillance state, as in China, at the other.

But, as high-minded and laudable as the efforts might seem, there is very little chance they will ever actually achieve what officials might hope.

For starters, the United States isn’t likely to agree to these strict, new AI rules in Europe. China certainly won’t follow them. And if the two biggest markets in the world don’t play ball on new rules designed to keep AI inside Pandora’s Jar, then no one else will, either.

AI systems are now so ubiquitous in every aspect of our daily lives — in our health care, financial, transportation, computing, retail delivery, telecommunications, entertainment and handheld devices — that there is no chance any of it will truly be rolled back, altered or blocked.

Artificial intelligence is here to stay — and it directs our daily lives, whether we like it or not.

Jeff Nesbit was the director of legislative and public affairs at the National Science Foundation during the Bush and Obama administrations.


Written by Jeff Nesbit

Former HHS/SSA/NSF/FDA/WH; contributing writer to the NYT, Time, US News, Axios; author of THIS IS THE WAY THE WORLD ENDS & POISON TEA from St. Martin’s Press