Discussion in 'Literature' started by Outsourced, Nov 22, 2019.
...because something about parents?
Yes, because of parents.
Not every person in the Star Wars universe is human. There are plenty of alien people.
"Robots and human beings are exclusive" - yes, there's nobody who is both a robot and a human being.
But "robots and people are exclusive" doesn't have to be true. It's very common in fiction to have "robot people" and I'm not prepared to rule out the possibility that a robot can qualify as a person.
Can an "entity without parents" be a person? I think yes - parents aren't that important. What matters is "does the entity have a conscious mind", not how the entity was created.
Religious people who believe in the myth of Adam and Eve would say Adam had no parents - he was created rather than born. Yet he was still a person.
How is the entity birthed into the physical plane? How does the entity grow into adulthood? Robots lack these qualities - being born, and childhood. Those are part of ontogeny.
Consciousness has an identity component. When people insist animals lack emotions and consciousness, they often compare them to machines.
Robots are never birthed. They cannot formulate their own unique sense of identity. Thus, they do not differentiate themselves from other beings. They are there to serve. They can make choices but they are not foreground characters.
When C-3PO quotes the odds to Han Solo, that is the classic role of providing advanced computations for a human being to make a final decision. The decision resides with Han Solo (a human), not with a droid (a computer).
There is ultimately no will behind robot computations. They serve a biological will. They are manufactured to serve a biological entity.
It seems to me that 3PO does have a unique sense of identity - he's aware that he's a specific entity. 3PO uses "I" when speaking of himself, rather than "One" as a less self-aware robot might use.
If they can make choices, they can make decisions.
On the Tantive IV, R2 goes into a "not permitted" area (the escape pod), and 3PO follows him. That's a pretty big decision. Even if R2 was ordered by Leia to go places he was normally not permitted to go, 3PO wasn't - yet he is convinced to do so by R2's actions. Nobody ordered 3PO to "get in that escape pod".
They can and do though. We literally have that example with L3 modifying her own programming when given the option. We see that with droids in the series, like the B2 Battle droids who adapt their own personalities as time goes on. And droids absolutely differentiate themselves from each other. Two droids may both be astromechs, but they are unique from each other because of their experiences.
Then there's stuff like this, which is less 'trying to argue a point' and more 'baseless conjecture because I don't like the idea of something non-human going around doing its own thing':
Like, yeah, disregard all the free thinking independent droids we see in the saga. It's the humans who strictly and entirely make decisions.
'Oh but it's science fiction'
No ****. But you can't bring up sci-fi examples, then say that completely valid points don't count because they're sci-fi.
Hardware, programming, and storage.
And yes, there is electrical activity in droids. And human beings have electrical activity between neurons. In droids, it is all programmed.
Robot creativity is programmed. They can learn, build up responses, and gather feedback, but they are still programmed, unfortunately.
We are projecting our will and image on to robots. We benefit. They cannot be held accountable. They do not have existence. Robotic conscience is a fairy tale.
Robot programming was devised by humans in order to maximise their productivity. "Human programming" (instincts and the like) evolved over millions of years. But both can still be thought of as the same basic thing - programming.
Robot creativity is damped by creativity dampers, to keep them docile and obedient. This is not a trait of the basic droid brain - it's one imposed on it after the fact. At least in the Legendsverse.
It's worth remembering that artificial neural networks are not programmed in the same way as regular computers - which have rule-based programming:
Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.
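The "learning from labeled examples, without task-specific programming" idea in that quote can be shown with a toy sketch (everything here - the single-neuron perceptron, the OR-function data, the learning rate and epoch count - is my own illustrative choice, not anything from the quote or from Star Wars):

```python
# A single artificial "neuron" trained by error correction.
# The rule it ends up computing (logical OR) is never written down
# anywhere in the code - it is inferred purely from labeled examples,
# the way the quoted passage describes learning "cat" vs "no cat".

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred
            # Nudge the weights toward the labeled answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Labeled examples, analogous to images tagged "cat" / "no cat".
or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in or_examples])  # [0, 1, 1, 1]
```

Real ANNs are vastly bigger and use gradient descent rather than this simple update rule, but the principle is the one the quote describes: behaviour emerges from examples, not from hand-written rules.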
Imagine what they'll be like after 25,000 years of improving technology - Star Wars is supposed to have been home to galactic civilisation for at least 25,000 years, after all.
Star Wars is a fairy tale.
Agreed that SW is a fairy tale.
Human and animal learning and memory has a biological basis, as you said.
After 25,000 years, robots will make more advanced calculations, but they are fundamentally still tools. There is no reason to give them DNA and lifespan development, and no reason to give robots legal responsibilities.
While I appreciate your in-universe knowledge, you are treating the fairy-tale and fantasy elements as real, as if robots are no longer tools and can roam free. That goal is what science fiction warns against rather than promotes. The Terminator series is an ominous example of giving free will to advanced robots.
They will only have what you give to them. We don't have nonhumans roaming freely and getting into human activities. What would be the point? We can simulate that in a lab.
We can enjoy SW without robots being called conscious entities. There is no reason to think of them as conscious beings. I would challenge that droids feel bad when not allowed into certain areas like Mos Eisley Cantina. There are no emotions, no gender, no lifespan, no birthing, no DNA....no etc.
They are too dissimilar. There is no point to giving them consciousness outside of a controlled lab.
Droids don't need DNA to have minds.
And a lot of scenes really emphasise 3PO being angry, R2 being sad, and so on. "Every emotion we see from a droid is simulated, and therefore meaningless" really seems pointless to me.
Logically, the concern we feel for the droids only really makes sense if we acknowledge them as sentient beings. Their expressions of emotion make them more relatable to us, and it doesn't take too much in-depth analysis of Star Wars to see that the droids central to the story do indeed express emotion early and often.
"They are clearly sentient beings. By any objective criterion for what sentience is, they'll pass the test," Travis Langley, a professor of psychology and lifelong Star Wars fan, said. "There's no simulating [emotion], they have an awareness to the degree that we do. They have feelings, they have concerns — [C-3PO] certainly worries about everything."
Some science fiction warns against it. Others, such as Star Trek, paint it as an ideal, with Data and The Doctor.
I'd have to agree that this is fairly unambiguous.
The droids are capable of emitting sad sounds. R2-D2 has a speaker. When the sad event happens, R2 can emit sad sounds through the speaker hardware. You need to break the sad sound down into its components to see that it was programmed.
R2 and C3PO do not have a vocal apparatus like humans, capable of combining phonemes into grammatical language.
Where is the speaker hardware on this droid? Let's reboot this damn thing so it reruns its diagnostic checks...
Robots are like sophisticated puppets. Humans and animals are not puppets. When you talk about robots as slaves, you heighten the indignity suffered by human slaves. In falsely attaching emotions to robots, you heighten the indignities suffered by humans and animals. Our emotions are real, have a bio-chemical basis, nature and nurture.
Robotic expression is manufactured, and bought and sold.
Sinre Edit: Unsure as to the relevance of Santa Claus save to bait. So; edited out.
Because Star Wars is fictional - we don't know for sure whether robot emotion in it is "true" or "false".
As mentioned, plenty of people - even psychology professors - find their expression of emotion convincing - because they're played by good actors.
We break the droid behavior down into its components. Surprise! Hardware, programming, and storage.
And there is a shut down button. We do not have a shut down button.
Human "storage" is memory cells:
Human "hardware" is a biological neural network:
and so on.
IMO, Rogue One, Solo, ANH, etc. do not work with robots as just sophisticated puppets. The major robotic players have to be characters for their fates, their actions, the events involving them, etc. to resonate.
A Star Wars with a "sophisticated puppet" Artoo is hollow.
Many other sci-fi stories with robots, need us to buy into the concept of sentient robots, for the story to not fall apart.
I think Star Wars as a franchise qualifies as the same kind of stories.
You are reversing the analogy.
For example, you could say Droid storage tries to emulate the properties of Human and animal memory.
That doesn't make it invalid.
We invent the technology for robots.
What do we base the technology on? Humans and animals.
Your views make us falsely subordinate beings to rogue mad creations.
Your views serve to imprison and shackle humanity on false pretenses.
One crazy mad rogue scientist creates an army of robots. The rest of humanity suffers the indignity while the scientist lives like a rock star.
I don't think so. I think that, if a so-called "rogue mad creation" acts exactly like an average human, it should be treated like an average human.
Take the Short Circuit movies. Am I supposed to lament the stupidity of the human protagonists who side with Johnny Five, or sneer at the government for offering citizenship at the end of the second movie? Of course not.
Quit trying to shame people for having different views than you.
At least consider the possibility that, in the absence of knowledge about how the future will turn out, we can't know what the morality of treating hyper-sophisticated robots as sentients is.
What if the average robot starts behaving differently, unexpectedly?
In the present day, or the far future? And how differently?
In the present day, I feel that current Artificial Neural Networks are far too unsophisticated to attribute Strange Behaviour to Emerging Sentience.
But in the world of the distant future, it is plausible that such attribution would be a reasonable hypothesis rather than an unreasonable one.
Which, for a Star Trek example is why I side with Janeway in her decision to not mindwipe the Doctor.
The uncertainty you express is eliminated by treating robots as tools. No more, no less.
Since there is no answer to your question, we eliminate the uncertainty. They are computational tools. Even if their biochemistry becomes similar to humans and animals, they are still tools. They will always lack ontogeny.
IMO it's far more tragic if we're treating sentient robots as tools, than if we're treating nonsentient robots as persons.
The cost of being wrong is far greater in the case of the former, than the case of the latter.
I think this essay makes good points:
Once our machines reach a certain threshold of sophistication, we will no longer be able to exclude them from our society, institutions, and laws. We will have no good reason to deny them human rights; to do otherwise would be tantamount to discrimination and slavery. Creating an arbitrary divide between biological beings and machines would be an expression of both human exceptionalism and substrate chauvinism—ideological positions which state that biological humans are special and that only biological minds matter.
“In considering whether or not we want to expand moral and legal personhood, an important question is ‘what kind of persons do we want to be?’” asked MacDonald-Glenn. “Do we emphasize the Golden Rule or do we emphasize ‘he who has the gold rules’?”
What’s more, granting AIs rights would set an important precedent. If we respect AIs as societal equals, it would go a long way in ensuring social cohesion and in upholding a sense of justice. Failure here could result in social turmoil, and even an AI backlash against humans. Given the potential for machine intelligence to surpass human abilities, that’s a prescription for disaster.
Importantly, respecting robot rights could also serve to protect other types of emerging persons, such as cyborgs, transgenic humans with foreign DNA, and humans who have had their brains copied, digitized, and uploaded to supercomputers.
It’ll be a while before we develop a machine deserving of human rights, but given what’s at stake—both for artificially intelligent robots and humans—it’s not too early to start planning ahead.
Late to this thread, but I would say in the context of SW the primary distinction is a soul, or pneuma - a life essence that is more than the sum of fired neurons.
Humans and other sapient beings have souls, droids do not. Droids in SW may or may not pass the Turing Test - but in SW they don't have souls and thus are not truly sapient beings.
The word "sapient" doesn't mean "having a soul". It means "being wise".
As Louvois put it in Star Trek - we don't know if we have souls, much less intelligent machines having them.
"We have all been dancing around the basic issue. Does Data have a soul? I don't know that he has. I don't know that I have. But I have got to give him the freedom to explore that question himself. It is the ruling of this court that Lieutenant Commander Data has the freedom to choose."
As such, it's unfair to deny droids rights on "they can't prove they have souls" grounds.
This seems frivolous at this point. Like, we hear Iron_Lord's really solid points about personhood and widening our definitions when confronted with something different, and the response boils down to "I don't think they can be people 'cause they just began and are more limited to programming".
Like, Outsourced won this when he pointed out the simple fact that we DO see droids form personalities and develop over time. What does it really mean to be a person? Having been birthed seems like a strange hill to die on, since that is more about the circumstances than the actual personality and feelings of the being.
We recognize each other's personhood out of a sense of community and to establish rules and connections so we are not harmed. Social contract all that jazz; if a droid can be affected by the actions of outside forces, or at least can simulate emotion well enough to adapt to outside forces, then it should be given the rights allotted to a person.
The material conditions, or this repeated fixation on "they are tools even if they get close enough to humans and animals", bury the lede. To quote Asimov's Bicentennial Man
We need to approach ethics, and what life even is... well, from the perspective of philosophy above all else. If a robot can think, can process thought, then by Descartes' POV it IS. It is an existing thing - not "thing" in the generic sense, but "thing" as in an entity in and of itself.
Them being meant to be tools does not preclude them from having the right to be more, let alone if they wish to be more. To deny that, no matter what the being's existence entailed, or what it is made of, is slavery, full stop. Iron_Lord is right: the cost of denying them personhood, if we are wrong, is far higher. It would be slavery. The cost of accepting them as persons, even if we are incorrect, would widen our acceptance as a society and make us more than what we are by challenging us in that way.
The way things have been, or how people meant for things to be, is not an argument. Your parents may wish for you to be a certain way; your "programming" (rules enforced with threat of hellfire, or being utterly unable to fathom not following them) may tell you to be a certain way; your very perception of reality may be different based on your environment; but that does not mean you MUST be that way. That person is like those chained up in Plato's cave allegory; the philosopher is free to finally realize that his perceptions are not reality once he is able to stop seeing the world as shadows on a blank wall.
I don't get how someone can watch TNG, read Asimov, or play Detroit and think a robot is not able to be a person 'cause it was created one day and had its programming pre-imposed rather than developed entirely. If any of who it is comes from experience, then it has become a person. If the tech becomes significant enough to ape humanity enough to create a new form of life, even if it is different, then it deserves to be treated as a person with rights.
So I guess Stephen Hawking is not a full person 'cause he requires an apparatus to play recorded sounds to form words?
As for calling robots slaves: if they can simulate suffering, and thus feel the emotion or have something close enough to feeling, then their suffering is real and valid. It does not diminish the anguish of humans who suffer or are enslaved to acknowledge the suffering or enslavement of other beings who experience it differently. For instance, a slave who does not know they are a slave is still a slave, and that is wrong. Reality differs depending on the person's perceptions. You haven't proven anything. Like, why does having bio-chemical components, nature (which droids do have; it is called programming), and nurture (which they do, since we have seen them adapt and change over time) dictate personhood?
If we encountered a non-carbon-based life form, would you say they are not people 'cause they may function totally differently? But more significantly, "robotic expression is manufactured, and bought and sold" means zilch. Like, truly, it changes nothing. A human's suffering can be bought and sold, if you enslave them, or if you nurture them in a way that makes them only able to perceive things a certain way without having to break that programming through intense social interaction. A human's expression can be manufactured; we call that acting.
Yay for some Measure of a Man references!!! Also, why should it be measured based on having a soul? We cannot fully define what a soul is, let alone whether it exists. Even if we could, what one believes a soul is differs. A Catholic perceives that totally differently than, say, a person who follows Shintoism. The latter would say that all things are kami, essentially making a god out of anything and allotting a soul to inanimate objects. It is an animistic faith and way of seeing things.
The emphasis on souls is so Eurocentric, and puts things into a corner that only allows perception of reality through the lens of spiritualism - not just that, but specifically of Abrahamic faiths. Keep in mind that even in closely related faiths, the conception of a soul differs wildly. Zoroastrianism states that you have a soul (Urvan) and a Fravashi, which is essentially a primordial spirit that is separated from the Urvan and reunited with it four days after death. And so on and so on.
Unrelated, I really wanna learn more about Zoroastrianism. Been intrigued for a while, but just looking them up as an example reminded me how neat that faith is