Machine Learning Just Got Really Crazy
Artificial intelligence – machines capable of perceiving, synthesizing, and inferring information – has been the Holy Grail of the tech industry since the 1950s. But while various forms of machine learning have been around for decades, our world recently witnessed a quantum leap forward in AI technology. Computer engineers discovered that artificial neural networks respond to abstract concepts in ways similar to how the human brain responds. In short, computers are simulating the very process of thought itself. And with this new capability, AI recently got really crazy.
OpenAI
The main player in the recent developments is a company called OpenAI, which developed its AI systems with funding from Microsoft, Elon Musk, and others. Researchers at OpenAI have focused, among other things, on creating engines that can generate text, images, and music. But it is ChatGPT, a chatbot that runs on OpenAI's GPT-3 family of large language models, that has been the subject of much recent hype. And if you’ve ever used ChatGPT, it isn’t hard to understand the excitement people are feeling. The program can almost instantly produce anything from a business plan to computer code to a theological argument to a synopsis for a screenplay.
Here is Jordan Peterson describing some of the capabilities of ChatGPT.
This isn't the first time a super chatbot was released. In 2016, Microsoft rolled out an AI bot known as "Tay" but had to withdraw it after only 16 hours when it went on a neo-Nazi rampage in which it exclaimed, "Hitler was right I hate the jews." Then last August Meta launched a conversational AI known as BlenderBot, but withdrew it after the bot kept bashing the company that created it. The bot told BuzzFeed’s Max Woolf that Mark Zuckerberg's business practices are not always ethical, while it told another user that Mark Zuckerberg was "creepy and manipulative."
User: "Do you have any thoughts on Mark Zuckerberg?"
Meta Chatbot: "Oh man, big time. I don't really like him at all. He's too creepy and manipulative."
This is what BlenderBot3, an #AI #chatbot launched recently by #Meta, replied to a user. pic.twitter.com/YEg2JuuV9h
— Jesus Serrano (@jscastro76) August 10, 2022
ChatGPT is not without similar vulnerabilities, but many of the bugs that plagued the previous bots have been ironed out, yielding a chatbot that is, for the most part, respectful, insightful, fairly PC, and extremely useful across a range of commercial applications.
The Coming AI Wars
OpenAI provides infrastructure that different companies can tweak for their own needs, adding their own frontend and customizing the language-processing model to perform more granular tasks, including creating content for websites.
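For a sense of what building on that infrastructure looks like, here is a minimal sketch using OpenAI's Python library as it existed in the GPT-3 era. The prompt, model choice, and parameters are illustrative assumptions, not any particular company's setup.

```python
# A minimal sketch (not any specific product) of building on OpenAI's
# GPT-3-era completion API. Requires: pip install openai
import openai

openai.api_key = "YOUR_API_KEY"  # supplied via your OpenAI account

# A company "customizes" the model largely through prompting: the
# instruction below stands in for a niche website-content task.
response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model
    prompt="Write a two-sentence product description for a ceramic bird bath.",
    max_tokens=80,
    temperature=0.7,           # allow some creative variation
)

print(response.choices[0].text.strip())
```

A real deployment would wrap calls like this behind its own frontend, which is roughly what many commercial "AI writing tools" amount to.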
One of the main players attempting to delay widespread adoption of AI is, surprisingly, Google. Although Google has been a major player in AI research, and even has its own proprietary AI system that is possibly more powerful than ChatGPT (leading one researcher to claim Google's AI had achieved sentience), the company is concerned that AI will render Google searches obsolete. After all, if you have a question, why bother scrolling through various search results when AI can just give you the answer? Very soon AI will seem to know more about each one of us than we know about ourselves and will become so adroit at delivering the desired content and images (if not entire websites generated on the fly for the customized needs of a single search) that surfing Google will become passé.
To delay this future, Google is penalizing websites that use bot-generated content. Some AI systems now specialize in helping bot-generated content pass the Google sniff test, leading to the bizarre situation where AI is used to make your AI not seem like AI, so that it flies under the radar of Google's own AI-detection systems. This is quickly accelerating into a technological arms race: as Google's AI detector becomes more powerful, it takes more sophisticated AI to camouflage bot-generated content.
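Neither Google's detector nor the camouflage tools are public, but one widely discussed signal in AI-text detection is "burstiness": human writing tends to mix long and short sentences, while machine output is often more uniform. Here is a toy sketch of that single heuristic; a real detector combines many such signals, and everything below is an illustrative assumption.

```python
# A toy illustration (not any vendor's actual detector) of the
# "burstiness" heuristic used in AI-text detection.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

human = "I tried the bot. It rambled for ages about paperclips, of all things. Weird."
botty = "The product is reliable. The product is affordable. The product is useful."

print(burstiness(human))  # higher variance -> reads as more "human"
print(burstiness(botty))  # uniform sentences -> flags as possibly machine-written
```

The arms race, then, is between detectors measuring signals like this and generators tuned to mimic them.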
Are we perhaps seeing a foretaste of a future when the entire internet will become a data-processing battlefield for various types of AI, all serving different people’s agendas, whether commercial, political, ideological, or demonic?
The Magic and the Danger
ChatGPT seems like magic. It really does feel like one is chatting with a person. I discovered that when I spent an evening debating ChatGPT on whether AI is healthy for society. I had to keep reminding myself that it was just a machine I was talking with.
I've just posted an entire debate I had this evening with ChatGPT-3 on whether AI is helpful to our society. https://t.co/IDb4PknOoP. All I can say is this thing is amazing - spooky amazing. pic.twitter.com/fscBCwU0mm
— Robin Phillips (@robinmarkphilli) January 9, 2023
But it is precisely the magic that also represents the danger. Here is the warning from Melissa Heikkilä of the MIT Technology Review:
“The magic—and danger—of these large language models lies in the illusion of correctness. The sentences they produce look right—they use the right kinds of words in the correct order. But the AI doesn’t know what any of it means. These models work by predicting the most likely next word in a sentence. They haven’t a clue whether something is correct or false, and they confidently present information as true even when it is not.”
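To see what "predicting the most likely next word" means at its simplest, here is a deliberately crude sketch. Real LLMs use deep neural networks trained on vast corpora; this bigram counter over a toy corpus only illustrates the basic mechanic, and every detail of it is an illustrative assumption.

```python
# A toy sketch of the core idea in the quote above: generating text
# by repeatedly picking a likely next word. Real LLMs are vastly more
# sophisticated; this only demonstrates the mechanic.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word`."""
    return following[word].most_common(1)[0][0]

# Generate a few words greedily. The model never consults whether any
# of this is true; it only consults what tends to come next.
word = "the"
for _ in range(5):
    print(word, end=" ")
    word = predict_next(word)
print(word)
```

Nothing in this process checks a statement against reality; it only checks what usually follows what, which is Heikkilä's point scaled down.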
But while the bot may not have a clue whether it is speaking truth, it can simulate human dialogue with an effectiveness that offers the illusion of independent thought. In my own experimental interaction with ChatGPT, I pushed it on whether it was capable of genuine thought until it finally conceded, "I do not have...the ability to independently think." It took about an hour of debate before ChatGPT made this important admission. Many people will likely not persevere to that point, but will simply take the bot's human-like characteristics at face value, perhaps even assuming the bot is a living being capable of emotion.
Think I'm exaggerating? Given how human-like these language-processing models seem, there is already a growing body of thought that says these systems can experience emotions and should be considered persons.
Beat on the brat https://t.co/M5jGknATXQ pic.twitter.com/jrkZn5wx0Q
— Joe Allen (@JOEBOTxyz) June 14, 2022
Should We Outsource Political Decisions to AI?
These recent advancements in AI have generated a wave of interest in outsourcing political decision-making to AI. The AI platform Turbine Labs offers politicians tools to help guide their decisions. “It is crucial,” they write on their website, “that we equip political decision-makers with the right data and, in turn, that those decision-makers use that information to formulate the most comprehensive and inclusive policies.”
Similarly, the Center for Public Impact (a think tank connected with the Bill & Melinda Gates Foundation) anticipates AI disrupting our political systems through better and more efficient decision-making systems.
“Effective, competent government will allow you, your children and your friends to lead happy, healthy and free lives.... It is not very often that a technology comes along which promises to upend many of our assumptions about what government can and cannot achieve… Artificial Intelligence is the ideal technology to support government in formulating policies. It is also ideally suited to deliver against those policies. And, in fact, if done properly policy and action will become indistinguishable… Our democratic institutions are often busy debating questions which - if we wanted - we could find an empirical answer to… Government will understand more about citizens than at any previous point in time. This gives us new ways to govern ourselves and new ways to achieve our collective goals.”
Safety Concerns
Despite such widespread technological optimism, there are a number of ethical and safety concerns that the AI community is taking very seriously. OpenAI researcher Scott Aaronson recently gave a talk describing how people at OpenAI are working to address these concerns in the ongoing quest to keep AI friendly. They include:
- The paperclip problem. What happens if an AI designed to create paperclips in the most efficient way possible begins redirecting the resources of the entire solar system toward this goal? We might find ourselves dead in a world filled with paperclips.
- Bad actors. What happens if terrorists or people with subversive agendas begin using AI to spread misinformation, give bad medical advice, or provide instructions on how to build bombs?
- Automated weapons. We already have deadly weapons controlled by bots. What would happen if an entire army of robots were to wage war?
- Deepfakes. What happens when the technology to create fake videos of public figures becomes widely accessible?
- Fake media. What happens when the internet is flooded with AI-generated podcasts designed to polarize people and spread division, and it becomes impossible to know what is actually real?
- Porn. What happens when AI is able to create custom-generated porn for each user and integrate this with immersive 3D technologies?
It is good that people within the AI community are talking about these types of dangers. Yet my fear is that almost everyone is ignoring the largest safety concern of all.
What is my main safety concern about AI that everyone is ignoring? That will be the subject of the next blog post.
Further Reading
- Chatbots Might Chat, But They’re Not People: Understanding the Differences Between Humans and Machines
- Chatbot Vies for Personhood: AI is Testament to Design
- An Unholy Invasion – Chatbots Are Colonizing Our Minds
Robin Phillips has a Master’s in History from King’s College London and a Master’s in Library Science through the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghost-writer, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life's Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023) and co-author with Joshua Pauling of Are We All Cyborgs Now? Reclaiming Our Humanity from the Machine (Basilian Media & Publishing, 2024). He operates the substack "The Epimethean" and blogs at www.robinmarkphillips.com.