Geoffrey Hinton Expresses Fear Over the AI Industry He Helped Create

“There is a 50% chance that artificial intelligence (AI) will surpass human intelligence within the next 5 to 20 years. There is also a 50% chance that AI, once it surpasses human intelligence, will wrest control from humans.”

Geoffrey Hinton, the ‘Godfather of AI’ and the pioneer of the deep learning concept that underpins modern AI, issued this warning about the risks of AI in a recent New Year interview with Asia Economy. Hinton, a University of Toronto professor and winner of the ‘Turing Award,’ often referred to as the Nobel Prize of computer science, made headlines last year when he abruptly left Google and expressed regret over his decades-long AI research. At the time, the world was enthralled by the dazzling technological advancements and conveniences brought about by ChatGPT.

Hinton once again likened AI to ‘nuclear weapons,’ warning that AI smarter than humans could become ‘killer robots’ that control human society. Comparing the functioning of AI to the neural networks of the human brain, he stated, “There is a 50% chance that AI’s reasoning ability will surpass humans’ within 5 to 20 years.” He also predicted a 50% chance that AI, after learning everything humans have created and autonomously generating and executing computer code, “will wrest control from humans.” This is not a science-fiction scenario but a claim that AI could, in reality, become an existential threat to humanity.

The problem is that, at this point, even the direction of discussions about such AI risks is unclear, let alone the responses. Hinton, often cited as a prominent AI ‘doomer’ (pessimist), does not advocate for a temporary halt to AI research as other doomers do. Regarding the AI regulations under discussion in various countries, he said, “It is very unclear what will be effective,” adding, “AI is developing very rapidly, making it much harder to regulate than nuclear weapons. Considering the benefits AI brings, the pressure to develop it is enormous.” He appears to view a halt to, or regulation of, AI research as unrealistic.

When asked what the world should do now, Hinton responded simply, “It’s time to make efforts to find ways to ensure that (AI does not) control (humanity).” When asked whether this means the best we can do for now is to stay vigilant about the dangers of AI and continue discussions from our respective positions, he affirmed, “Yes.” Hinton has previously stated that he resigned from Google to “freely discuss this issue (the threat of AI).” His acceptance of the interview with Asia Economy was likewise intended to warn Korean readers about the dangers of AI and encourage them to contemplate the issue.

As the global AI development race accelerates, led by the U.S. and China, Hinton also suggested that there is a way to elicit China’s participation in safe AI development. “There is one threat the U.S. and China can cooperate on,” he said. “It is the threat of AI controlling (humanity). Neither wants this.” The implication is that as existential threats loom ever closer, international cooperation becomes inevitable.

The dangers brought by AI are not just stories of the distant future. In this interview, Hinton pointed out the immediate risks that generative AI could pose. He first mentioned the possibility of election manipulation, saying, “The worst short-term impact of AI will be its ability to easily deceive voters with fake images, videos, and the like.”

He also agreed with a recent warning by Yuval Harari, a professor at Hebrew University, that AI could cause a global financial disaster, stating, “He may be right.” The concern is that the risks would be difficult even to predict if an AI that controls data-centric financial markets were to create new financial instruments. However, he drew a line at the fear that AI will take away human jobs, saying it will “make people much more efficient.” He added, “While there may be fewer jobs, it could also increase people’s productivity.”

Hinton was reluctant to comment on the recent conflict over AI development revealed by the sudden dismissal and reinstatement of Sam Altman, CEO of OpenAI, stating, “I don’t know enough about the situation to want to comment.” The incident was seen as exposing a split between doomers, who see AI as a potentially existential threat to humanity, and boomers (development optimists), who believe AI technology will advance humanity and that the killer-robot claim is pure imagination. It also raised more fundamental questions about how AI should be developed and used safely worldwide. Ilya Sutskever, who participated in the board’s decision to dismiss CEO Altman out of concern over rapid AI development and commercialization, is a former student of Hinton’s who shares his AI philosophy.

Hinton, a leading scholar in the AI doomer camp, also offered a rebuttal in this interview to the boomers who underestimate the threat of AI: “Have you ever seen an instance where something with higher intelligence is controlled by something with lower intelligence?” He emphasized that the absence of any such instance throughout history is the reason we need to be wary, starting now, of the dangers AI can bring.

By Seul Ki Jo
