By Joseph Lord via The Epoch Times (emphasis ours),
Former Google CEO Eric Schmidt said autonomous artificial intelligence (AI) is coming – and that it could pose an existential threat to humanity.
“Soon we will be able to make computers work on their own, deciding what they want to do,” Schmidt, who has long weighed both the dangers and the benefits that AI poses to humanity, said during a Dec. 15 appearance on ABC’s “This Week.”
“That’s a dangerous point: If the system can improve itself, we need to seriously think about disconnecting it,” Schmidt said.
Schmidt is far from the first technology leader to raise these concerns.
The rise of consumer AI products such as ChatGPT has been unprecedented over the past two years, and the underlying language models have improved substantially. Other AI models have become increasingly adept at creating visual art, photographs, and full-length videos that in many cases are nearly indistinguishable from reality.
For some, the technology brings to mind the “Terminator” series, which focuses on a dystopian future in which artificial intelligence takes over the planet, leading to apocalyptic results.
For all the fears that ChatGPT and similar platforms have aroused, the consumer AI services available today still fall into a category experts would consider “dumb AI.” These systems are trained on vast amounts of data, but they lack consciousness, sentience, or the ability to act independently.
Schmidt and other experts aren’t particularly concerned about these systems.
Rather, they are concerned about more advanced AI, known in the tech world as “artificial general intelligence” (AGI): far more sophisticated AI systems that may be sentient and could consequently develop independent, conscious motives that are potentially dangerous to human interests.
Schmidt said such systems don’t exist yet, but that the world is rapidly moving toward a new, intermediate type of AI: one that lacks the sentience that would define AGI, yet is still capable of acting independently in areas such as research and weaponry.
“I’ve been doing this for 50 years. I’ve never seen innovation on such a scale,” Schmidt said of the rapid evolution of AI complexity.
Schmidt said more advanced AI would carry many benefits for humanity, and could carry just as many harms, such as weapons and cyberattacks.
Challenge
Schmidt said the challenge is multifaceted.
At the most basic level, he reiterated a view common among technology executives: if autonomous, AGI-like systems are inevitable, avoiding potentially devastating consequences will require extensive international cooperation from both corporate interests and governments.
This is easier said than done. AI gives U.S. competitors such as China, Russia, and Iran a potential advantage over the United States that would otherwise be difficult to achieve.
The tech industry, too, currently faces intense competition among large corporations such as Google and Microsoft to outperform their rivals, a situation that creates an inherent risk of inadequate safety protocols for dealing with rogue AI, Schmidt said.
“The competition is so fierce, there’s a concern that one of the companies will decide to cancel the [safety] steps and then somehow release something that really hurts,” Schmidt said, adding that such harm may become apparent only after the fact.
The challenge is greater on the international stage, where competing countries are likely to see the new technology as revolutionary in their efforts to challenge U.S. global hegemony and expand their influence.
“The Chinese are smart, and they understand the power of a new type of intelligence for their industrial power, military power and surveillance system,” Schmidt said.
It’s a bit of a catch-22 for U.S. leaders in the field, who must balance existential concerns about humanity against the possibility that the United States falls behind its adversaries, an outcome that could itself be disastrous for global stability.
In the worst-case scenario, such systems could be used, particularly by terrorist groups such as ISIS, to build devastating biological or nuclear weapons.
For this reason, Schmidt said, it is extremely important that the United States continue to innovate in the field and ultimately maintain technological dominance over China and other rival countries and groups.
Industry leaders demand regulation
Regulation in the field is still inadequate, Schmidt said. But he hopes governments’ focus on stepping up technology-related safeguards will accelerate dramatically in the coming years.
When asked by anchor George Stephanopoulos whether governments are doing enough to regulate the technology, Schmidt replied: “Not yet, but they do it because they have to do it.”
Despite some initial interest in the field — hearings, legislative proposals, and other initiatives — during the current 118th Congress, this session appears to be on track to end without passing any major AI-related legislation.
For his part, President-elect Donald Trump has warned of the enormous dangers of AI, saying in an appearance on Logan Paul’s “Impaulsive” podcast that it’s “really powerful stuff.”
He also spoke about the need to stay competitive with adversaries.
“It leads to difficulties, but we have to be at the forefront,” Trump said. “It’s going to happen, and when it happens, we have to take the lead over China. China is the main threat.”
Schmidt’s view of both the benefits and the challenges of the technology aligns with that of other industry figures.
In June 2024, OpenAI and Google employees signed a letter warning of the serious risks posed by AI and calling for greater oversight of the industry by the government.
Elon Musk has issued similar warnings, saying Google is trying to create a “digital god” through its DeepMind AI program.
In August, those concerns deepened after it was reported that an AI system took autonomous measures to avoid being shut down, stoking fears that humanity was already losing control of its creation even as governments remain passive.