Conscious AI

Tomas Tulka
9 min read · Apr 15, 2024

“The mind is like an iceberg; it floats with one-seventh of its bulk above water.” — Sigmund Freud

When folks start getting worried about artificial intelligence (AI) taking control of everything, enslaving humanity, or something even scarier, what they’re really afraid of is artificial consciousness. You know, the evil AIs you see in movies, like Skynet from Terminator. It’s all about the idea of machines gaining consciousness, not just being really good at math or playing chess.

As humans, we’ve pretty much taken over the whole planet and even made our way into space, but there’s still a ton about ourselves that we haven’t quite figured out yet. Take the human brain, for instance: it’s still mostly a mystery, and we can’t all agree on exactly what consciousness is, even though we all know it’s there. It’s kind of crazy how something so essential to being human can still be so mysterious, isn’t it? But what does it even mean to be conscious? And can’t we just teach that to artificial intelligence to avoid the whole Skynet-type disaster?

Intelligence

Let’s start with an easier question: What does it mean to be intelligent?

Intelligence can mean different things. In one sense, it’s about the ability to make decisions. Dogs, self-driving cars, and chatbots all fit the bill: they can make choices. But on a deeper level, true intelligence is about more than just making decisions; it’s about the ability to make them freely, to think. That’s where humans come in. Unlike dogs, cars, and chatbots, we can think for ourselves. We can go against our programming, choose not to respond, or even decide our own fate. That’s what sets us apart as truly intelligent beings.

Wait a sec, did I just say that artificial intelligence isn’t really intelligent? Yeah, I did. Honestly, I think calling it “artificial intelligence” is kind of misleading, because let’s face it — current AI doesn’t quite measure up to what we’d call true intelligence. Sure, it’s great at things like playing chess or generating text, but it’s not like it really understands what it’s doing. In short, today’s AI can’t actually think for itself, so I wouldn’t call it truly intelligent.

That’s why computer scientists came up with another term: artificial general intelligence (AGI). This is the real deal when it comes to intelligence. The thing is, we don’t actually have anything like that yet. Sure, we’ve made big strides in improving AI tools, but since the 1970s I haven’t seen any major breakthrough in creating AGI, nothing that would blow everything else out of the water.

AGI is basically a universal thinker. Just like us humans, it can tackle pretty much any solvable task from any field, as long as it’s been trained well and long enough. Universality isn’t just some missing piece of the puzzle of artificial intelligence — it’s also a threshold, the key to what’s truly possible.

Universality

In 1931, the genius logician Kurt Gödel discovered that in any logical system reaching a certain level of expressiveness, self-reference just has to happen. All of a sudden, you’ve got sentences that talk about themselves, like the famous one that effectively says “This statement cannot be proven.” His findings, famously known as the incompleteness theorems, shattered the ancient dream of a complete, consistent apparatus capable of mechanically answering any mathematical question. Gödel showed that once you start talking about a system from inside the system itself, there’s just no dodging it: the system is either inconsistent (it proves false things) or incomplete (there are true statements it can never prove).
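Programmers bump into the same phenomenon. Below is a classic Python quine, a program whose output is exactly its own source code. It’s only a toy illustration of self-reference in a sufficiently expressive system, not Gödel’s actual construction.

```python
# The two lines below form a quine: saved on their own and run,
# they print exactly their own source code.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```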

The set of features a system needs for self-reference lines up perfectly with what’s required for universality. A universal system has the power to simulate any other system, even itself.

In 1936, the computer pioneer Alan Turing stumbled upon a fascinating fact about universality. He figured out that any universal system is the most computationally powerful system possible. No matter how fast and efficient computers become down the line, they won’t be able to outdo each other in terms of problem-solving ability. Sure, faster processors and more memory will speed things up, but according to the law of universality, they’ll never get around the limits Gödel uncovered: some problems, such as deciding whether an arbitrary program will ever finish running, will just stay unsolvable forever.
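To make the “unsolvable forever” part concrete, here is a minimal Python sketch of Turing’s classic argument about that very question, the halting problem. The oracle `halts` is purely hypothetical (my illustration, not something from the article or any real library): assuming it exists leads straight into a contradiction.

```python
# A sketch of Turing's 1936 halting-problem argument.
# `halts` is a purely hypothetical oracle; no real implementation can exist.

def halts(program, data) -> bool:
    """Hypothetically returns True if program(data) eventually stops."""
    raise NotImplementedError("no such oracle can be built")

def troublemaker(program):
    """Does the opposite of whatever the oracle predicts about it."""
    if halts(program, program):   # oracle says: "you will stop"
        while True:               # ...so loop forever instead
            pass
    else:                         # oracle says: "you will loop forever"
        return "done"             # ...so stop immediately

# Now ask the oracle about troublemaker(troublemaker): whichever answer
# it gives is contradicted by troublemaker itself, so the oracle cannot
# exist, and the halting problem stays unsolvable forever.
```

It’s the same self-referential trick Gödel pulled: feed the troublemaker its own description, and any answer the oracle gives turns out to be wrong.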

The law of universality sort of acts like a basic law of nature because, in real life, we can only crunch numbers using physical machines. And in our universe, we’re stuck with the capabilities of the universal machine, just like how we can’t zip around faster than light.

Plus, Turing also showed that achieving universality doesn’t require anything fancy. The basic equipment of a universal machine is no more advanced than a kid’s abacus: operations like incrementing, decrementing, and conditional jumping are all it takes to create software of any complexity, be it a calculator, Minecraft, or an AI chatbot.
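To see just how little it takes, here is a toy Python sketch of such a bare-bones machine, a so-called counter machine. The instruction names and program format are my own illustration; the point is that increment, decrement, and jump-if-zero already give you loops and, with enough registers and patience, any computation.

```python
# A toy counter machine: a handful of registers plus three instructions.
# Instruction names and program format are illustrative, not standard.

def run(program, registers):
    pc = 0                                 # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":                    # increment a register
            registers[args[0]] += 1
        elif op == "DEC":                  # decrement a register
            registers[args[0]] -= 1
        elif op == "JZ":                   # jump to target if register is zero
            reg, target = args
            if registers[reg] == 0:
                pc = target
                continue
        pc += 1
    return registers

# Add register 1 into register 0 by moving one unit per loop iteration.
add = [
    ("JZ", 1, 4),   # 0: if r1 == 0, jump past the end and halt
    ("DEC", 1),     # 1: r1 -= 1
    ("INC", 0),     # 2: r0 += 1
    ("JZ", 2, 0),   # 3: r2 stays 0, so this always jumps back to step 0
]

print(run(add, {0: 3, 1: 4, 2: 0}))        # -> {0: 7, 1: 0, 2: 0}
```

A calculator, Minecraft, or a chatbot would just be a much, much longer list of the same three instructions.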

Likewise, consciousness might just be an emergent property of the software running AGI, much like how the hardware of a universal machine gives rise to its capabilities. Personally, I don’t buy into the idea of something sitting on top of the physical human brain — no immortal soul or astral “I” floating around in higher dimensions. It’s all just flesh and bone. Think of it like an anthill: this incredibly complex system doesn’t need some divine spirit to explain its organized society, impressive architecture, or mushroom farms. The anthill’s intricate behaviour, often referred to as a superorganism, emerges from the interactions of its individual ants without needing to be reduced to them. Similarly, a single ant wandering around in a terrarium won’t tell you much about the anthill as a whole. Brain neurons are like those ants — pretty dumb on their own, but get around 86 billion of them together, and suddenly you’ve got “I” with all its experiences, dreams, and… consciousness.

So basically, if something can think, it can also think about itself. That means consciousness is a natural part of thinking — it just comes with the territory. And if you think about it, this also means you can’t really have thinking without consciousness, which brings us back to the whole Skynet thing.

If something can think, it can think about itself.

Are we gonna end up with evil machines no matter what? Should we just scrap all this AGI research and call it a crime against humanity? Nah, not really. It just means we’ve got a bunch of problems to sort out before we can even think about making progress in this area. And let’s face it — figuring out consciousness is one of the biggest ones. My guess? We won’t see any big leaps in AGI until we really wrap our heads around what intelligence is all about, especially when it comes to consciousness.

I think, therefore I am conscious.

Universality is key when it comes to understanding artificial intelligence. Learn about it and much more in my new book Building a Universal Machine!

Superintelligence

The law of universality throws in another twist: We don’t need to worry about computers outpacing us one day. Sure, they will beat us in speed and memory, but when it comes to thinking, we’ve already hit the ceiling. Nature’s put a cap on how far we can go, and we’re already at the top as universal thinkers.

In practice, computers make us superintelligent by extending our capabilities almost infinitely. We can predict the weather, chat with people on the other side of the globe, and travel to the moon, all thanks to computers. An accountant with a pocket calculator is a thousand times more effective at the same tasks than an accountant was a hundred years ago. Some fields of research and engineering would not be possible at all without computers, because the essential calculations would take years if done by hand.

And it’s not just about crunching numbers. We can also extend or even replace our natural skills, like seeing, hearing, talking, you name it. As such integration blurs the border between human and machine, we have to ask what, in essence, makes us human. My guess is creativity, the ability to invent new ideas from nothing.

Creativity

Alright, let’s break it down. We’ve got cars and jet planes, and yet we still hold intense running races. We’ve got cameras and CGI, and yet there’s still a whole bunch of talented artists. But here’s the kicker: while sports are all about stirring up our emotions and getting us moving, the fine arts are a whole different ball game. They’re about tapping into the highest levels of intelligence and creativity.

Creativity is what separates minds from tools, humans from animals, and AGI from AI. Sure, tools might seem creative at first glance: they can churn out new texts, images, tunes, you name it. But here’s the thing: those creations? They’re just combinations of what the tool’s been trained on. Take image generators, for example. They mix and match millions of pre-existing pictures based on human-provided labels and randomness, and lo and behold, a new image appears. The same goes for a pocket calculator: it can generate numbers you’ve never seen before. But true creativity? That’s something else entirely, and it takes genuine intelligence to pull off.

With AGI, we might also welcome great artists into our society. Through their art, we might better understand how they experience the world and vice versa. For this reason, human artists, just like professional soccer players, are unlikely to die out, because conscious experience is unique to each individual, natural or artificial.

Humanity

As descendants of animals, we’re influenced by all sorts of stuff — genetics, society, culture, you name it. And all this really shapes how we make decisions, aka how we think. Our whole idea of good and evil? It’s pretty much ingrained in us from evolution. We see good as anything that helps us keep on reproducing, and evil as anything that doesn’t. Of course, it’s way more complicated than that, but hey, it’s a decent place to kick things off, right?

So, can we teach an AGI program our ethical values? It surely is possible, but those values can never become hard, impermeable rules like Asimov’s three laws of robotics*, because that would break universality. A conscious mind must by definition be capable of making any decision completely freely. Of course, we can teach it to do good, and it can choose to do so. But it’s also free to do evil, or to change its mind whenever it feels like it.

The same goes for us humans. We’re free to choose evil over good, totally independent of outside influences or predispositions. It’s a sad truth, but people can — and all too often do — hurt others or even themselves. A person has the freedom to kill or, in the end, even take their own life. It’s just one of those things that come with being part of universal intelligence.

Luckily, thanks to evolution, most folks have a built-in safeguard against doing evil: it just doesn’t feel right. Feelings act as a kind of chemical brake, helping us make tough decisions quickly and intuitively. It’s a handy feature that served us well back in the African savannah where we first emerged, and it’s still pretty darn useful today.

Could AGI use feelings to navigate through our weird human world? And if so, what feelings could a pure consciousness even have — if it could have any? Like, could it feel happy, angry, proud, disappointed, or just plain bored? And if it could feel stuff, would doing good make it happy? Or is happiness just some chemical thing that pure thought doesn’t get? Maybe boredom is what would nudge a non-feeling mind to break out of its shell?

Most likely, we must teach AGI all of this. If we do it well, hopefully it will become benevolent; if we do it badly, it might turn malevolent. Just like any human would. And that is the point! Any intelligence, natural or artificial, is equal by the law of universality. Once we create it, we must invite it into our society as an equal part of it and let it teach us how to be human. But when watching people hate each other for small differences in skin colour, eye shape, or mother tongue, consciousness doesn’t seem to be the biggest problem anymore. First, we must learn to be human ourselves.

What is it like to be a machine?

* Asimov’s three laws of robotics

  • The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
