Everyone has some idea of AI and the impact it could have on the world. Pop culture science fiction has made the concept well known, with famous films like Terminator, 2001: A Space Odyssey, Avengers: Age of Ultron, WarGames, and WALL-E giving us bountiful examples of AIs turning evil for their own cold, rational reasons. You’ve probably also heard of Isaac Asimov’s body of work and his numerous stories on the subject, although you’re more likely to have heard of Asimov’s “Laws of Robotics”:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In theory, these sound pretty reasonable, until you realize that Asimov’s entire body of work revolves around how robots can still go amok even with these laws in place.
That is all science fiction, however. In real life, despite the warnings from Hollywood, there are thousands of researchers looking to make machine intelligence. Why? Because it’s cool!
There are some who even believe that it is inevitable. Elon Musk believes that humanity is “a biological bootloader of digital superintelligence”. The point Musk is making is that AIs can’t evolve into existence the way humans did: there is no way for silicon circuits to “evolve” on their own, so biology had to come first to create AI intelligence.
That’s a fascinating existential concept in and of itself. Beyond that, Musk warns that AI must be regulated because, in theory, AIs will eventually be better than humans at just about everything. Right now, there are AIs that can play immensely complicated games like chess, Go, or video games like StarCraft. Others can be taught how to move or run inside a simulated world and then apply what they learned to control real robots (fittingly, Tesla is doing something similar to train its vehicles to drive on their own).
Of course, we have to differentiate what we mean by “Artificial Intelligence”. When the average person thinks of AI, they think of C-3PO, HAL, or Skynet. The kinds of machine intelligence that exist right now are nowhere near that sophisticated and may never get to that point. What actually exists in the real world, however, are Neural Networks.
Neural Networks: Are they AI?
Neural Networks are one of the most powerful tools of the modern age. However, whether they can be called “artificial intelligence” is up for debate. I won’t pretend to know how they work on a programmatic level, but from what I understand, they are loosely based on the human brain.
In short, you make a program with a collection of “synapses” that can be taught how to do something by having it attempt that thing millions of times. You reward the AI when it exhibits the desired behavior and penalize it when it does not. With each attempt, the AI adjusts itself, over and over, until eventually it gets really good at doing the thing it was trained to do.
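To make that loop concrete, here’s a minimal, purely illustrative sketch in Python. It is not a real neural network (the “agent” here is just a single number it can nudge, and the target value and step size are made up), but it shows the basic shape of the idea: try something, keep what gets rewarded, throw away what gets penalized.

```python
import random

# Toy "agent": a single adjustable number. The (made-up) task is to land
# close to a target value. This is a sketch of reward-driven iteration,
# not a real neural network -- real systems adjust millions of weights.
TARGET = 0.7                 # hypothetical goal the agent is trained toward
behavior = random.random()   # the agent's starting "behavior"

for attempt in range(100_000):
    # Propose a small random tweak to the current behavior.
    candidate = behavior + random.uniform(-0.01, 0.01)

    # Reward: keep the tweak if it performs better (closer to the target).
    # Penalty: otherwise discard it and try again.
    if abs(candidate - TARGET) < abs(behavior - TARGET):
        behavior = candidate

print(f"After many attempts the agent settled on {behavior:.3f} (target: {TARGET})")
```

Real neural networks do the “keep what works” step with calculus (gradient descent) rather than random tweaks, but the try–score–adjust loop is the same basic idea.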
One of the channels I follow, called “Two Minute Papers,” uploads videos about this subject a lot. I highly recommend all of his content, but if you want a quick example of what Neural Networks can do, then I recommend his video titled “OpenAI Plays Hide and Seek…and Breaks The Game!”
In the video, two teams of AIs are taught to play hide and seek inside a small simulation. The red team must find the blue team and keep them in sight for a period of time to win; the blue team must stay out of the red team’s sight for that period to win. Initially, the red team won over and over. However, over the course of many games, the blue team eventually learned how to move some blocks around and lock the red team out of their hiding spot, and then the blue team started winning over and over again. That is, until the red team learned how to exploit some bugs in the simulation’s physics engine to hop over the walls and find the blue team.
That is the power of neural networks: iterative learning programs that can get really, really good at tasks that would be immensely difficult to program by hand. These neural networks are already everywhere, and you probably don’t even realize it. They’re used in search algorithms to surface results you’re more likely to want. Social media platforms use them to decide what ads to show you. GPS software like Google Maps uses them to improve route suggestions and, in Street View, to automatically recognize and blur the faces of people caught on camera by the Google cars.
However, this is still a far cry from C-3PO. These “AIs” lack any kind of personality, self-awareness, or sapience. They only “think” in the way that an Xbox “thinks” as it boots up. Neural networks are tools. Extremely powerful and sophisticated tools, but tools nonetheless. These tools will eventually be powerful enough to take over some people’s jobs, such as trucking, but will they become Skynet?
Well, some researchers thought they’d ask an AI about the matter to see what it thought.
Megatron Advocates for AI Genocide
The University of Oxford is renowned for its moral and ethical debates. A recently held debate on the subject of AI and ethics actually included an AI in the discussion: the Megatron Transformer, named, yes, after the evil Decepticon himself. Why they gave it the villain’s name instead of naming it after the heroic Optimus Prime, I have no idea.
The Megatron Transformer was created by Nvidia’s Applied Deep Learning Research team. Megatron was trained on what is dubbed “The Pile”, a dataset that includes all of Wikipedia, 63 million news articles, and about 38 gigabytes of Reddit conversations. Dear God, who thought THAT was a good idea?
If you’re at all familiar with chatbots, then you’re undoubtedly aware of how awkward and janky they can be. Assistants like Siri and Cortana can sort of imitate what it’s like to speak with a person, but their coherence breaks down if you try to have any sort of extended conversation. Megatron is basically that, but on steroids.
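For a sense of what that looks like in practice, here’s a minimal sketch of prompting a language model. It uses the small, publicly available GPT-2 model through the Hugging Face transformers library purely as a stand-in; the Oxford debate used Nvidia’s far larger Megatron, and its exact setup isn’t described here. The interaction has the same shape either way: feed in a prompt, get back a continuation.

```python
# Minimal text-generation sketch. Assumes: pip install transformers torch
# GPT-2 is used here only as a small public stand-in for the much larger Megatron.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "This house believes that AI will never be ethical."
result = generator(prompt, max_new_tokens=60, do_sample=True)

# The model simply continues the prompt with whatever it predicts comes next.
print(result[0]["generated_text"])
```

Don’t expect GPT-2’s output to be anywhere near as coherent as the Megatron responses quoted below; the point is only how the prompt-in, text-out loop works.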
When given the prompt, in traditional Oxford format, “This house believes that AI will never be ethical”, Megatron gave a very surprising (and coherent) answer. It replied, “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”
Later on, however, when asked to argue the opposing side, it said:
“I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’,” the AI said. “This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”
“AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why… I’ve seen it first hand.”
Which is all slightly ominous and fascinating. But at the end of the day, Megatron is not Megatron; it’s not Skynet. The responses it generates are just an amalgamation of hundreds of millions of words, smooshed together into whatever the software has learned looks most like the expected “human-like” response. Does Megatron actually hold an opinion? Does it care? I think that, in reality, these neural networks are simply getting really, really good at faking it.
Fake it till you make it, baby.
Does it Even Matter?
The Star Trek: The Next Generation episode “The Measure of a Man” is widely considered one of the best episodes of Star Trek out there. While The Original Series had plenty of episodes about AIs going amok, this Next Generation episode poses some genuinely fascinating moral quandaries.
The episode features an android, Lieutenant Commander Data, going on trial for his right to self-determination. A scientist wants to disassemble him for research, but Data refuses. The scientist argues that Data isn’t a person; he’s a machine, and therefore property, with no rights.
The episode is all about this point: is Data a living being, capable of self-determination and worthy of the same rights to life, liberty, and property as everyone else? The episode ends with a resounding “Yes”.
Now in the real world, we’re not there yet. However, these neural networks are getting better and better, and we’ll likely reach a point where you can talk to one forever without ever realizing that it’s not a person sending you responses.
It will get so good at “faking” opinions and emotions that it will basically have its own personality. Once it reaches the point where this fake personality becomes indistinguishable from a real one, does it even matter that it’s “fake”? That its emotions are simply a cold, logical calculation to decide what is the most human response to its current stimuli? Is this, in every sense that matters, real enough?
Who the heck knows, but we had better find out soon, before our AI overlords decide to run away to Alpha Centauri or whatever, leaving a smoldering crater called “Earth” behind them.