In 2012, Dr. Geoffrey Hinton and two of his graduate students at the University of Toronto developed technology that has served as the intellectual foundation for the artificial intelligence (AI) systems that major tech companies believe are crucial to their future. However, on Monday, Dr. Hinton officially joined a growing number of critics who claim that these companies are hurtling toward danger with their aggressive pursuit of generative AI technology, which powers popular chatbots like ChatGPT.
After more than a decade at Google, during which he became one of the most respected voices in the field, Dr. Hinton has resigned so that he can speak openly about the risks of AI. He says he now regrets his life’s work to some extent, consoling himself with the thought that if he hadn’t done it, someone else would have. He shared these reflections during a lengthy interview last week in the dining room of his Toronto home, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from AI pioneer to doomsayer marks a significant moment for the technology industry, which is at a crucial turning point. Industry leaders see the new AI systems as potentially as transformative as the introduction of the web browser in the early 1990s, capable of driving breakthroughs in fields like drug research and education.
However, many industry insiders are plagued by the fear that they are unleashing something dangerous into the world. Already, generative AI can be used to spread misinformation. In the near future, it may pose a threat to employment. In the long term, the most concerned individuals in tech warn that it could even pose a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco-based start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month halt in the development of new AI systems, citing their “profound risks to society and humanity.”
Just days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, issued their own warning about the dangers of AI. The group included Eric Horvitz, the chief scientific officer at Microsoft, which employs OpenAI’s technology in a wide range of products, including its Bing search engine.
Dr. Hinton, known as “the Godfather of AI,” did not sign either of those letters. He explained that he did not want to publicly criticize Google or other companies until after he had resigned from his job. He notified Google of his resignation last month, and on Thursday he talked with Sundar Pichai, the chief executive of Google’s parent company, Alphabet, though he did not reveal the details of the conversation.
Google’s chief scientist, Jeff Dean, issued a statement in response to Dr. Hinton’s departure, reiterating the company’s commitment to a responsible approach to AI and to understanding emerging risks while continuing to innovate.
Dr. Hinton, a lifelong academic, was driven by personal convictions about the development and use of AI. His career’s turning point came in 1972 when, as a graduate student at the University of Edinburgh, he embraced a then-radical idea called a neural network, a mathematical system that learns skills by analyzing data. In the 1980s, he left Carnegie Mellon University for Canada, saying he was reluctant to accept Pentagon funding for AI research; he remains strongly opposed to the use of artificial intelligence on the battlefield.
In 2012, Dr. Hinton and his graduate students Ilya Sutskever and Alex Krizhevsky built a neural network that could teach itself to recognize common objects in images, such as flowers, dogs, and cars. Google acquired the company they founded for $44 million, and their work led to the creation of increasingly powerful AI technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become OpenAI’s chief scientist. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award for their work on neural networks.
Around the same time, Google, OpenAI, and other companies began developing neural networks that could learn from vast amounts of digital text. While Dr. Hinton recognized this as a powerful way for machines to understand and produce language, he believed it fell short of how humans process language.
That view changed as Google and OpenAI built systems using much larger amounts of data, systems that Dr. Hinton came to see as surpassing human intelligence in certain ways. He now believes that as companies continue to improve their AI systems, those systems become increasingly dangerous, especially since they could upend the job market and flood the internet with false information.
Dr. Hinton is particularly worried that future versions of the technology could pose a threat to humanity, because these systems often learn unexpected behavior from the vast amounts of data they analyze, and he fears that truly autonomous weapons could one day become reality. Some experts consider this threat hypothetical, but Dr. Hinton believes the race between Google, Microsoft, and others will escalate into a global race that will not stop without global regulation. And regulation may be impossible, he says, because there is no way to know whether companies or countries are working on the technology in secret.
Dr. Hinton now believes that AI technology should not be scaled up further until researchers understand whether it can be controlled. He used to paraphrase Robert Oppenheimer’s remark that when you see something that is technically sweet, you go ahead and do it. He does not say that anymore.
All of this leads to the question: what do we really need to know about AI?
Here’s What You Need To Know About AI
Artificial intelligence is just that: artificial. It is not necessarily real or truthful. AI image generators can produce a picture of Dr. Fauci and Bill Gates being arrested for crimes against humanity, but this hasn’t happened and will not happen, because the tech and governing overlords won’t allow it.
What you need to know is that not everything you see online is real. That should go without saying, but with artificial intelligence imagery it will be harder than ever to discern truth from fiction. Best of luck to you on this new technological journey. As for me, I’ll be off in the woods enjoying a campfire with my loved ones 😉