

In this column, philosopher, tech-critic and guest contributor Thomas Telving advises us to start a philosophical rebellion against the growing trend of believing in machine consciousness.

A disturbing new trend is spreading among powerful trendsetters in the tech community: the idea that machines powered by artificial intelligence will develop – or have already developed – consciousness. Tesla founder Elon Musk tells us that his latest invention, the Tesla Bot Optimus, will become “semi-sentient”. OpenAI co-founder Ilya Sutskever speculates that advanced AI may already possess consciousness. Similar ideas have been put forward by David Hanson, inventor of the humanoid robot Sophia.

If you think this is a nerdy discussion best reserved for eccentric tech-entrepreneurs and philosophers, you had better think again. If it becomes a general norm to think that humanoid robots, digital humans, and the like are sentient beings, we will be facing unprecedented legal and ethical challenges. In the following I will present two problems, and a possible way to prevent, or at least postpone, the worst-case scenario of our robotic future. The rationale is roughly based on the thoughts presented in my new book, Killing Sophia – Consciousness, Empathy, and Reason in the Age of Intelligent Robots.

Problem #1: We Will Develop Empathy towards Robots

Research indicates that when a human meets a humanlike robot, like Sophia or the Tesla Bot, we intuitively start to anthropomorphize. This basically means that we apply a broad range of human traits to it – in this case things like consciousness, will, personality, etc. Anthropomorphizing is a basic feature of human psychology, an inherent part of how we interact with living (and seemingly living) things around us. People have always seen human features in landforms, clouds, and trees, and it is common for artists to depict natural phenomena like the sun and moon as having faces and gender.

Because of this peculiar human characteristic, many of us will – like Musk, Sutskever and Hanson – be fooled into believing that robots and digital humans are (or will soon become) comparable to humans when it comes to consciousness. An equally inherent human characteristic is that we start feeling empathy towards robots, simply because of their appearance. When this happens (actually, it already happens), it will be very hard for us not to consider robots morally.

While this scenario might still seem exotic to some, I am sure most readers can easily imagine the huge commercial potential in humanlike technology. As an example, how about replacing the standard recommender algorithm in a web shop with a charming and helpful digital human shopping assistant? One that flirts with you and makes you think he likes you. One that has feasted on all your SoMe data and now knows exactly how to make you like him.

Automated systems capable of connecting emotionally with humans have limitless potential within everything from elderly care and education to client support, sales, and entertainment. But they are rarely made without commercial purpose. Cute robots and helpful digital humans on our screens will often be wolves in sheep’s clothing. Unfortunately, the problems of our robotic future get even worse.

Problem #2: We Cannot Know if Musk and Sutskever Are Right

Being conscious or sentient is not the same thing as being able to talk and to react seemingly reasonably to outer stimuli. You will see this after dwelling on the concept of consciousness for just a moment. Conscious experience is the inner feeling of a person: the experience of colors, sounds, tastes, or discomfort. Consciousness is what it feels like to be you.

Most of us have an intuitive sense of which things possess consciousness. Few will disagree that other human beings have it. Should someone be cruel enough to cut off the leg of a living mouse with garden shears, it will probably feel pain. Saying that a tree feels pain when we cut a branch from it would, on the other hand, be nonsense.

The problem with consciousness is that when it comes to consciousnesses other than our own, we have no access. We can look at bodily movements and facial expressions, and scientists can conduct numerous experiments, but another being’s experience of pain, discomfort or the color red is private. We can only observe our own consciousness. We cannot – as when we describe a brain process – put it on a screen in front of us and look at it.
