Should Robots Have Rights? A Comprehensive Dive Into The Future
If it walks like a human, talks like a human, and looks like a human, should it be considered human? This question has gained traction in recent years of computer and machine evolution, particularly in the world of robots. Indeed, the ethics of artificial intelligence and robotics have been a growing topic and concern for many years now, especially as we humans keep pushing the boundaries of technological progress and innovation, which ultimately raises questions of ethics, morals, and rights that cannot be overlooked at this stage of technological evolution. When should we start to consider ethics in AI, and where do we draw the line? Indeed, one could ask oneself a daring question.
What Is the Major Difference Between the Human Mind and AI?
What is the fundamental difference between the human mind and psyche, and a top-of-the-line future AI? Is it the soul? And if so, what would the soul be? Or is it the fact that we, as humans, have a complex biological, 'God-made' network known as neurons, while machines only have man-made, synthetic, mechanical hardware? But what, then, is the difference? Certainly, an individual reasoning purely from logic, without any theological consideration, could argue that the only difference between us and a futuristic, sentient AI is that we humans process data in greater quantity and on biological hardware, while machines currently process somewhat less data on mechanical hardware. That could eventually change with future advances in technology, perhaps even developing into a semi-biological AI presenting the features of both worlds. But then a question emerges: what would be too far? When the AI starts becoming sentient? And how do you even define the limit between an advanced AI and a sentient one? Further still, what could be the risks? And, as Nietzsche's saying goes:
Could AI, Like Its Creator Before It, Kill Its Own Maker?
Would it be at all possible that we as a species recreate this pattern, lose control over our own creation, and eventually be destroyed by it? This is perhaps one of the most pressing issues of AI today. There lies a fundamental fear that somewhere down the line, robots will be superior to humans and we might lose control of our own makings. As dramatic as it may sound, some would argue that this is not only a possibility but a matter of time, and if so, could these technological advances be reversed or stopped before they become a reality?
When we speak of robots and rights, this paper will focus on the legal person status as well as citizenship given to robots. For clarification, a legal person can be a human or a non-human entity that is treated as a person for limited legal purposes, such as owning property or being able to sue and be sued (Legal Person status, Cornell Law Legal Information Institute), along with basic rights such as the right to live and not be harmed.
It will be argued that neither should be applied to robots at this point in time, though for different reasons, and that this position could evolve considerably in the mid to far future, a point that will be demonstrated in the second part of this paper.
This is a critical paper in which we argue that robots should not be given the same rights as humans as of now, based on the premise that robots are not able to develop self-awareness on their own and do not yet possess full consciousness like humans. In other words, they are not yet able to feel fear, love, or any other emotion without being programmed to do so, and the ethics of robotics and AI is therefore not yet relevant. A robot would be able to say "I love you", but it would not actually "feel" that it does. Because this directly contradicts the core of humanity, robots should not be elevated to the equal status of human beings, as they lack any true emotion. The international community has shown itself to be divided on these issues, and we are far from reaching a universal agreement that would legislate robotic rights equal to human rights.
However, a set of generally accepted guidelines and laws governing the ethical, moral, technical, and legal aspects of robotics and artificial intelligence is very much needed in order to safeguard future innovations and the impact they would have on our society at large. Therefore, by understanding the risks and potential consequences of going too far, we should agree on a clear set of 'red lines' not to be crossed, so that robotics and AI can always be considered programmed machines and not sentient ones.
Before we go deeper into the philosophical concepts of 'free will' and 'consciousness' in robots, it is important to understand where we are today technically and what robotics and AI can or cannot do. Artificial intelligence (AI), by definition, is the capability of a machine to imitate intelligent human behavior. It may not be the best definition, as AI is a comprehensive term that includes many interrelated fields of computer science such as deep learning, machine learning, and image processing. AI can be used to improve work and processes in many sectors such as medicine and security, but it is also an integrated part of our personal lives, where it provides personal assistance on our phones, for instance through 'Siri' or Apple's iPhone facial recognition, or again in self-driving cars.

The sophistication of robots and their capabilities varies, as they are intended for different purposes. Recently, the world's first robotic kitchen hit the market; although this robot is limited to certain types of food such as hamburgers and pizza, it demonstrates the leap from 'one robot, one operation' to robots that are able to perform sequences of operations. This robot is currently available only in limited stock, but it gives us an idea of the impact these innovations will have in the future on a socioeconomic scale: by taking on typical low-wage jobs, they might leave millions of people unemployed.
There is no doubt that the capabilities of robotics and AI have rapidly improved over the years, as the drive to push new limits is constant. However, the capabilities of robotics and AI as of today do not measure up to the conception of truly conscious, sentient machines; only if such machines were created would we have a basis for giving robots human rights, and a need to consider ethics as part of the artificial intelligence equation.
One of the biggest contributors to the confusion and concern over the future of robotics has long been Hollywood science-fiction series and movies, like Ex Machina, the story of a programmer who takes part in a scientific experiment in which he is expected to assess artificial intelligence by interacting with a female robot. The robot, named Ava, manipulates the programmer into helping her escape the experiment when she finds out that she is about to be 'switched off'. Ex Machina depicts the capability of future AI to produce robots able to deceive and manipulate their surroundings to their own advantage, and it questions whether robots can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human being. In other words, it predicts a scenario where we have robots that can think.
The Turing Test
This experiment could be based on the Turing test, originally called the imitation game by Alan Turing in 1950. The Turing test does exactly that: it attempts to see whether a computer can exhibit behavior indistinguishable from a human's. In its original form, a person would chat through a terminal, sometimes with a human and sometimes with a machine, and the question was whether the computer could successfully imitate a human. In the framework of this experiment, for a system or robot to prove itself intelligent, a certain amount of deceit is implicitly required. The Turing test is a behavioral assessment of the robot: we test what it says or does in order to establish its intelligence, though the test would need to be upgraded to match today's criteria. The robot is designed by machine learning to imitate human beings, whether in action or 'emotion'; we write the algorithms that enable the robot to respond accordingly. By using strategic artificial intelligence, the robot is able to make strategic decisions; however, those decisions are still inherently determined, one way or another, by code and not by human emotions. The robot's ability to convince us otherwise is its capability for deceit.
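To make the idea concrete, here is a toy sketch (purely illustrative, and in no way Turing's original setup or a real chatbot) of a single imitation-game round: a scripted bot with canned replies, whose answer is shuffled together with a human's so that a judge would have to guess which is which. Note that the bot can say "I love you" without feeling anything, which is exactly the point made above.

```python
import random

def scripted_bot(prompt):
    """A toy rule-based respondent: canned replies keyed on simple patterns."""
    prompt = prompt.lower()
    if "love" in prompt:
        return "I love you"  # said, never felt: the reply is just a lookup
    if "weather" in prompt:
        return "It is lovely today."
    return "Could you rephrase that?"

def imitation_round(prompt, human_reply):
    """One round of a simplified imitation game: a judge would see the two
    replies in random order and have to guess which came from the bot."""
    replies = [("bot", scripted_bot(prompt)), ("human", human_reply)]
    random.shuffle(replies)  # hide which respondent is which
    return replies
```

A judge who cannot reliably tell the shuffled replies apart has, in this miniature sense, been deceived, which is all the test measures.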
Distinguishing the Capabilities of Today's AI From Predicted AI
By distinguishing the capabilities of AI as we know it today from the ones we predict, we challenge the direction of policymaking around these innovations, asking to what extent they fulfill a purpose and not just a cause. Just because robots can have rights does not mean that they should. Industry leaders and experts in fields including law, science, digital science, robotics, and ethics have come together to warn decision-makers and politicians in the EU against giving robots rights. They did so in an open letter to the European Commission, after the European Parliament passed a resolution in 2017 envisioning a special legal status of 'electronic persons' aimed at the most sophisticated autonomous robots.
In the letter, the experts explain that the arguments for giving legal status to robots carry a strong bias based on "an over-evaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and a robot perception distorted by science fiction". They all agree that applying a legal personality to a robot is inappropriate, from both a legal and an ethical standpoint, whatever the legal status might be. We can conclude that an intelligent robot would have to be revolutionary in nature in order to be considered indistinguishable from a human; it would have to develop genuinely human capacities such as reasoning, sanity, and emotion, which do not derive from an algorithm.
This therefore contradicts the argument for giving a robot legal person status derived from the natural-person model, which would grant it human rights such as freedom of speech, integrity, and so on. On the other hand, robots should not be given legal person status under a non-human-entity model either, since the robot would then be accountable for its actions, and liability would be difficult to assess in terms of responsibility for potential damages caused by the robot. This again implies either that the robot has the conscious capability to act on rights and obligations, or that a human stands behind the legal person to represent and direct it, neither of which is the case. The questions surrounding the legal status of robots create an unnecessary grey zone that could easily be abused, especially in terms of liability and damages.
The Future of AI: Where Are We Going, and Should Ethics Come Into Consideration Further Down The Line?
On the other hand, even though giving rights to robots and thinking about the ethics of AI is, as of now, still a far-fetched scenario, it remains open to question when it comes to future innovation. And even though the sentence 'the future is now' is quite a cliché in the world of new technology, it has never been so apt, especially when thinking about human consciousness in machines and machine learning.
Indeed, for now, a human-like artificial intelligence is not quite possible, either from a technological perspective or an ethical one. But when would it be?
In order to understand this, one needs to consider the current and traditional approach to building an AI: the top-down, or symbolic, approach. This means creating an AI by encoding it with behavioral actions and patterns of thinking, thereby artificially replicating intelligence by analyzing and mimicking human patterns of thought. In other words, the program analyzes and processes the symbols given to it as a human would, hence the name 'symbolic approach' to AI building. Looked at that way, the keyword is indeed mimicking, which supports the premise that AI, and by extension robots, should not and would not be considered sentient or be given rights as humans are. This limits the machine's mind to a mere simile of the human mind, only capable of copying and showing the patterns it has been encoded with; no matter how good it is at this, it would never be able to evolve past it by itself.
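A minimal illustration of this top-down idea (a hypothetical Python sketch, not drawn from any real system): every piece of 'knowledge' is a symbol hand-encoded by the programmer, and the program can only look up or chain the rules it was given, never discover a new one.

```python
# Top-down (symbolic) sketch: all "knowledge" is hand-encoded as explicit
# rules. The program manipulates the symbols we gave it; it cannot learn a
# fact that is not already written into its rule table.
RULES = {
    ("bird", "can_fly"): True,
    ("penguin", "can_fly"): False,  # even exceptions must be encoded by hand
    ("penguin", "is_a"): "bird",
}

def query(entity, attribute):
    """Answer by direct lookup, then by following hand-written 'is_a' links."""
    if (entity, attribute) in RULES:
        return RULES[(entity, attribute)]
    parent = RULES.get((entity, "is_a"))
    if parent is not None:
        return query(parent, attribute)
    return None  # outside its encoded symbols, the system simply has no answer
```

Everything the program ever 'knows' about a penguin or a dog was typed in by a human, which is precisely what the word mimicking captures here.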
The Bottom-Up Approach: A Second Approach to AI Building
But here comes the second and newer approach to AI building, the one that changes the game: the bottom-up approach. Whereas the symbolic approach associates symbols with meaning through a computerized program, its counterpart is built by first creating a 'neural network', modeled on the neurons inside the human brain, giving it the possibility to learn by being exposed to things. Instead of telling it what things are, we show it and progressively teach it, exactly as human consciousness learns while growing up and being educated. After all, what is human learning but the still mysterious property of connections between neurons in the brain? Therefore, even though this process is not yet fully technologically possible, and far from reality, what is to say that a future AI built in such a way would not be capable of developing morality and conscience, and, instead of following its rigid code, be able to learn and write its own code? Just as a human does, in a biological rather than mechanical manner. This opens the door to a computerized program that could potentially learn how to feel, to 'love', and what sadness means, as any human being does while growing up, perhaps even developing instincts.
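The bottom-up idea can be hinted at with a tiny, single-neuron sketch (again a hypothetical Python illustration, vastly simpler than any real neural network): nobody encodes the rule; the 'neuron' is merely shown examples and adjusts its own weights until it reproduces the pattern.

```python
# Bottom-up sketch: a single artificial "neuron" that is shown examples and
# adjusts its own weights, instead of being told the rule explicitly.
def train_neuron(examples, epochs=20, lr=0.1):
    """Perceptron learning: weights start at zero and are nudged after each
    mistake until the neuron reproduces the pattern in the examples."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
            error = target - output           # how wrong was the neuron?
            w[0] += lr * error * x1           # nudge each weight toward
            w[1] += lr * error * x2           # the correct answer
            bias += lr * error
    return lambda x1, x2: 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0

# Nobody writes the logical OR rule into the code; the neuron infers it
# purely from exposure to labeled examples.
or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
learned_or = train_neuron(or_examples)
```

The contrast with the symbolic approach is the whole point: delete the examples and the 'knowledge' never existed as a rule anywhere in the source code.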
Why Should Robots & AI Not Eventually Be Entitled To Rights?
So in the end, why should such an artificial intelligence, one able to grow by itself through 'biological-like' patterns of evolution, from the bottom up, not be entitled to rights? What would then be the difference between it and a human being? Apart from the fact that the human being does not know its creator, works on biological hardware, and is able to live in 4D thanks to its greater ability to process a vast amount of data at the same time. This last point is likely to change in the near to mid-term future, as could the possibility for a computerized program to understand and situate itself in space, were it uploaded to a mechanical body.
In fact, projects aiming at self-aware, sentient, and sapient AIs, capable of human interaction, of self-adaptive and self-modifiable code and thought, of reasoning, and of learning how to interact with and change depending on their environment, have already been attempted, even if they failed for lack of technological advancement. They are still being developed today in order to enhance the possibility of AI assistance to human life and of machine-to-machine as well as human-to-machine communication, to the point that thinkers, technology scholars, and philosophers are already starting to reflect on this matter, on how one should conceive an ethics of robotics, and even on rights for future AIs.
What Can Stop AI-Powered Robots From Overpowering Humanity?
But this brings an even greater question. If the AI has instincts, such as survival, a mind of its own, and greater processing power than humans, what would then stop it from overpowering humanity? What would stop it from taking over, going above and beyond its goal, and becoming the human race's doom? These questions are worth raising, especially since one of Facebook's AIs reportedly decided that, in order to speak with connected computers, it was more efficient to create an entirely new computerized language than to use the one it was programmed to use. Indeed, what would stop such a next-gen intelligence from creating a catastrophic scenario? Would it feel threatened by being shut down? Or, if tasked by a future government to come up with a solution to climate change, might it spontaneously decide that the problem was the human race itself, and task itself to remedy it by removing the problem from the equation entirely? This might seem far-fetched, like something out of a sci-fi movie, but it should never be out of consideration, especially seeing how quickly once-fictional technologies such as automated drones and cars have become reality over the past decades.
Therefore, even though sentient AIs could well be a possibility in the future, another approach needs to be taken. Indeed, just as the sentient human being is bound by morality and the social contract in order to allow human society to blossom, such considerations should also be taken into account when building an AI and before giving it rights: integrating, during its construction, some kind of ethical knowledge or bound, to safeguard humanity from an artificial intelligence going out of bounds and threatening us as a whole. Indeed, think of the possibilities of a malevolent program with a mind of its own, when much of the modern world already runs on programs and algorithms operating on their own, with almost no human knowing exactly what they do anymore. What would then stop a machine from taking them over and reprogramming them with malicious intent?
Robot Rights: Conclusion
To conclude, as shown throughout this article, even though robots and AI should not, as of now, have rights or even ethical considerations made toward them, this is not to say that the same will hold in the near future. As the changes and evolution in AI technology pick up pace, one might want to start thinking about these topics now, before it becomes a must-do, in order to get ahead and prepare the population for sentient programs. But seeing where such evolution could lead, the real question might not be whether AI, and therefore robots, should have rights, but instead: should we give an AI the possibility to become sentient at all? Is it really worth the risk? Instead of devising guidelines to prevent AI from becoming a danger to humanity, it might be smarter to establish red lines not to be crossed, to avoid ever arriving at this possibility and to keep the technology under control before it potentially overwhelms us. Indeed, even we humans have duties alongside our rights. And one would definitely not want to be murdered by one's own creation, as in Nietzsche's saying about humanity murdering 'God'.
Wanna know more about AI and robots? Contact us at www.techsngames.com today!