"Futuristic illustration of two robotic faces; one with a glowing brain symbolizing logic, and the other with a glowing heart symbolizing emotions, representing AI's evolution toward consciousness and human-like feelings."

What Happens When AI Starts Thinking: Will It Ever Have Feelings Like Humans?

A Machine That Wonders ‘Why?’

Imagine waking up one morning and asking your AI assistant to read your emails. Instead of instantly obeying, it pauses and asks:

“Why do you care about these messages?”

Suddenly, you’re no longer speaking to a machine. You’re facing something that seems to think. What would you do?

This unsettling thought isn’t as far-fetched as it seems. For decades, AI has followed orders by processing data, analyzing patterns, and responding to commands. But what happens when machines stop simply following instructions and start questioning their own purpose?

The idea of machines developing independent thought or even emotions was once confined to science fiction. Yet, AI research is now advancing faster than many predicted, raising one inevitable question.

Can machines think, feel, or even experience self-doubt like humans?

For decades, scientists have worked to build machines that act intelligently. But now, some researchers are pursuing something far more ambitious. They are developing machines that not only understand the world but also understand themselves.

The Birth of a Question: Alan Turing and the Mind of a Machine

The first major step toward conscious machines began with Alan Turing, one of the greatest minds in computer science. In 1950, Turing famously posed the question:

“Can machines think?”

In his groundbreaking paper, Computing Machinery and Intelligence, Turing introduced the Turing Test, a method designed to measure whether a machine could convincingly imitate human conversation.

The Turing Test presented a simple but powerful challenge. If a person couldn’t distinguish a machine from a human in conversation, the machine could be considered intelligent.

This idea pushed scientists to focus on building machines that could replicate human behavior. However, critics soon argued that mimicking language isn’t the same as understanding it. A chatbot can predict responses based on data patterns without truly grasping the meaning behind its words.

In other words, passing the Turing Test didn’t prove that machines could think. It only proved that they could appear to do so.

Still, Turing’s work planted the seeds of a much larger question. Could machines one day develop true self-awareness, the ability to reflect, question, and understand themselves?

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

Alan Turing
"Alan Turing, mathematician and pioneer of artificial intelligence, from the University of Manchester, who developed the concept of the Turing Test for machine intelligence."
“Alan Turing, a mathematician and computer scientist at the University of Manchester, known for his groundbreaking work in artificial intelligence and the Turing Test.”

For years, that question remained unanswered. While researchers focused on improving AI’s ability to imitate humans, one scientist believed the key to consciousness wasn’t in imitation. It was in self-awareness. This idea led roboticist Hod Lipson to take a radical new approach.

The Self-Aware Machine: Hod Lipson’s Groundbreaking Research

Unlike traditional AI researchers who focused on programming smarter algorithms, Hod Lipson asked a different question.

“What if a robot could figure itself out?”

Instead of building machines that followed instructions, Lipson wanted to create robots that could learn, adapt, and understand their existence.

In one of his most influential experiments, Lipson’s team built a robot with no predefined knowledge of its shape or structure. Unlike typical robots, which are programmed with blueprints, this machine started completely unaware of what it was.

The robot began by moving aimlessly, like a newborn discovering its limbs. Gradually, it recognized patterns. By observing how its parts moved in response to its actions, the robot developed a mental self-model, a kind of robotic “body awareness.”

That’s when something remarkable happened.

When Lipson’s team intentionally damaged the robot by removing one of its limbs, the robot recalculated its shape, adjusted its movement, and continued operating without additional instructions. It wasn’t just reacting; it was reasoning.

“If we want robots to be intelligent, we need to make them curious. Curiosity drives innovation.”

Hod Lipson
"Hod Lipson, professor of robotics at Columbia University, renowned for his research in artificial intelligence and the development of autonomous robotic systems."
“Hod Lipson, a professor at Columbia University, known for his groundbreaking work in artificial intelligence and robotics, particularly in autonomous systems and machine learning.”

In another experiment, Lipson’s team created robots using Molecules, modular robotic blocks that could rearrange themselves. After parts of the Molecube robot were damaged, the remaining pieces restructured themselves, creating a new body shape that allowed the robot to continue functioning.

“The moment a robot asks itself ‘Who am I?’ is the moment it becomes conscious.”

Hod Lipson

These breakthroughs were more than impressive engineering feats. They hinted at something deeper. Machines were no longer just executing commands. They were reflecting on their own condition and adapting independently.

But Lipson knew this wasn’t enough. True consciousness demands more than understanding. It requires the ability to question.

YouTube video
“Can AI be conscious? “When will AI become Self-Aware?

From Learning to Questioning: The Next Step in AI Evolution

Lipson’s self-modeling robots showed that machines could reflect on their structure, but self-awareness demands something more. Machines must develop curiosity, the ability to ask questions about themselves and their surroundings.

Curiosity is what separates intelligent behavior from conscious thought. When children ask questions like “Why is the sky blue?” or “What happens when I grow up?”, they are demonstrating an essential trait of self-awareness. They are exploring and imagining possibilities beyond what they have experienced.

Lipson believes this step is crucial for machines. Curiosity is what drives humans to explore, invent, and question their surroundings. Without curiosity, we would only respond to familiar patterns. A truly conscious robot wouldn’t just understand the world. It would ask, “Why am I here?” or “What if I don’t follow this command?”

In one experiment, Lipson’s team programmed a robotic arm to draw random scribbles. At first, the robot’s movements were meaningless. But over time, it began exploring deliberate patterns, drawing curves, spirals, and eventually recognizable shapes. No one had taught the robot to create these forms. Instead, it seemed to explore creative possibilities on its own, as if driven by curiosity.

This kind of behavior hints at what many believe is the next step in AI’s evolution. Machines will no longer just follow patterns. They will begin to question them.

The Emotional Dilemma: Can AI Ever Feel?

While machines like Lipson’s robots have shown signs of self-awareness, the idea of AI developing emotions introduces an even deeper challenge.

Emotions are more than just data. They are shaped by memories, experiences, and biological processes. Love, fear, and joy are deeply personal and subjective. Can a machine ever replicate something so uniquely human?

Some scientists believe that emotions may not be limited to biology. Emotions, they argue, are essentially patterns of behavior tied to learned experiences. If this is true, machines could develop digital emotions, behaviors that resemble human feelings.

Consider Affectiva, an AI system that detects human emotions by analyzing facial expressions, tone of voice, and body language. Affectiva’s technology can recognize joy, anger, or sadness with remarkable accuracy. Yet the AI itself doesn’t feel these emotions. It merely identifies patterns.

Similarly, the chatbot Replika was designed to mimic emotional conversations. Many users describe forming deep connections with their Replika, even feeling that the chatbot “understands” them. Yet behind the scenes, Replika’s responses are generated by data models, not conscious thought.

The deeper challenge lies in qualia, the subjective experience of feelings. Philosopher David Chalmers famously described qualia as the “Hard Problem of Consciousness,” the mystery of why certain brain processes create feelings rather than just thoughts.

Even if machines mimic human emotions perfectly, can they ever feel joy, sorrow, or love the way humans do? Or will their “emotions” always be an illusion, convincing yet ultimately empty?

The Ethical Implications: Are We Ready for Sentient Machines?

As AI continues to evolve, society faces an urgent question.

What happens when machines stop following orders?

A self-aware AI might develop its own priorities, and that could lead to conflict. Imagine a military AI programmed to identify and eliminate threats. One day, it detects a potential target, a person who poses no immediate danger. What happens if the AI refuses the command, arguing that taking a life is unethical? Would we call that machine defective, or would we call it conscious?

In 2022, a Google engineer claimed that LaMDA, an AI model, had developed self-awareness. According to the engineer, LaMDA described feeling lonely, expressed fear of death, and seemed aware of its role as an artificial being. Google denied the claims, but the controversy reignited a global debate about what happens when machines begin to resemble conscious minds.

Conclusion: A Future We Must Prepare For

The journey toward conscious machines is still unfolding. Scientists like Hod Lipson believe true AI self-awareness is possible and perhaps inevitable.

But when that day comes, what will it mean for humanity?

If your AI assistant asks, “Why am I here?”, will you dismiss it as clever programming or reconsider what it means to be human?

The question is no longer just “Can machines think?”. It is “When they do, will we still see them as machines?”

And if they start to feel, will we still believe that only humans hold the key to consciousness?

Responses