SyntaxBomb - Indie Coders

General Category => General Discussion => Topic started by: Matty on August 14, 2020, 04:18:56

Title: Asimov's laws of robotics
Post by: Matty on August 14, 2020, 04:18:56
I have a disagreement with the logic of the first law:

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.

-----
Asimov clearly never played sport, nor exercised at a gym.

Nor can he have experienced emotional healing through light abuse by skilled BDSM practitioners.

Nor can he ever have had experience with delivering a baby.

As such, his law might be considered broken.
Title: Re: Asimov's laws of robotics
Post by: Dan on August 14, 2020, 07:33:26
Hmm. Humans tend to invent and invert facts to their liking.


If someone is doing backflips and falls on his butt instead of his legs, then that's how it is. The risk is known to everyone who performs such an act.

But this is not what a robot should prevent. It is not a babysitter. When a human baby learns to walk, it may fall occasionally. That is natural.
If a robot had to prevent the fall at all costs, then the robot would have to be a crutch.
If a robot were to prevent the fall from a backflip, then the backflip would not be what it is.
If a backflip is performed with the mindset of showing off the skill, to impress others or a girlfriend, but a robot takes the body, rotates it safely in the air, and puts it in the end position, then it was not a performed backflip. Neither would anyone be impressed, given the lack of danger which the backflip brings.

About BDSM: a psychologist would say that a human mind is able to confuse pleasure with pain, and pain with pleasure. But what does this have to do with a robot?
Moreover, if the robot was programmed to perform in such acts, then the rule was not broken.

About delivering a baby:
The umbilical cord has to be cut. I would not want to imagine what happens if it is not...


Quote
As such, his law might be considered broken.

To set things right:

Matty, see it as it is: you are the programmer. The robot is a machine, and therefore it can be considered stupid.
As a pure machine, it knows nothing; it knows neither what harm is nor what safety is.
Therefore it needs a program which defines a set of rules (or laws). The robot does NOT need this for itself, but for the interaction with human beings (including animals, insects, other material/immaterial objects, etc.).
It is, basically, those who are near the robot who want such rules programmed into it.

And the running program which performs intelligent acts is called Artificial Intelligence, or AI.

The AI is written by your mind.
And it is in the AI, in its code, that the robot shows any intelligence, as judged by humans.

The Law is not a law but a rule of behavior. I'll call it a Law regardless, because you called it a law.

The first Law, if properly understood, says:
The robot should be programmed to help.
The robot should not be programmed to do harm.
The robot should not be programmed to withhold help, and so allow a human being to be hurt, if it is programmed to help.
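
To make that concrete: here is a minimal sketch, in Python, of what "a program which defines a set of rules" could look like. Everything in it (the Action record, the harm/help flags, first_law_permits) is invented for illustration, not any real robotics API; deciding what actually counts as "harm" is exactly the hard part Matty is pointing at.

Code:
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool   # would carrying this out injure someone?
    helps_human: bool   # would it assist someone in need?

def first_law_permits(action: Action, human_needs_help: bool) -> bool:
    """Return True if the action is allowed under the rules above."""
    # Rule: the robot must not be programmed to do harm.
    if action.harms_human:
        return False
    # Rule: the robot must not withhold help it could give.
    if human_needs_help and not action.helps_human:
        return False
    return True

# Standing idle while someone needs help is forbidden; helping is allowed.
print(first_law_permits(Action("stand idle", False, False), human_needs_help=True))    # False
print(first_law_permits(Action("catch the fall", False, True), human_needs_help=True)) # True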

The whys are obvious:

If you program a robot to hurt, sooner or later it will hurt you.

The opposite is valid as well:

If you program a robot to help, sooner or later it will help you.
Title: Re: Asimov's laws of robotics
Post by: Dan on August 14, 2020, 07:58:12
... clicked on quote instead of modify ...
Title: Re: Asimov's laws of robotics
Post by: Matty on August 14, 2020, 08:06:13
Ahh....but what is harm?

Is it harmful for a robot to sell cigarettes?
Is it harmful for a robot to participate in contact sports such as boxing where the risk for a biological participant can be high?
Is it harmful for a robot to farm potatoes to be used in making french fries thus contributing to obesity?
Is it harmful for a robot personal trainer to push an athlete to run harder and faster which involves hurting the athlete just a little?
As humans we know full well where the boundaries sit, but communicating that to a machine???
Title: Re: Asimov's laws of robotics
Post by: Dan on August 14, 2020, 08:11:07
That is all in the judgement of humans.

And some individuals will tell you, NOW, that some things are good. In a few years, if you ask the same individuals, they might tell you something different.

You can see this behavior if you search for it. People learn, and so what is good now may not be good tomorrow, but in a few years it may be good again.


For example, there is nothing harmful in selling the cigs.

The act of selling is not harmful.

Maybe even light use of cigs is not harmful at all, and if used wisely it may even, on some rare occasions, be helpful.

BUT (isn't there always a but?)

People tend to get addicted to smoking. And it is this that makes it harmful. The robot selling cigs does not do harm.
But it is humans who set up the robot, it is humans who make the cigs, and it is humans who get addicted, sick, burnt, etc.
It is humans who complain about others/robots selling cigs, it is humans who complain about the smoke, and it is some of the human smokers who are ignorant of the non-smokers who do not want to breathe it in.

P.S. non-smoker here.
Title: Re: Asimov's laws of robotics
Post by: Dan on August 14, 2020, 08:31:20
Quote
Is it harmful for a robot to farm potatoes to be used in making french fries thus contributing to obesity?

French fries are not the cause of obesity. Not solely.
As stated in the previous post, this has nothing to do with the robot, but with human minds.

You see, human minds tend to become robot-like.

Therefore they make a simple equation: proper use = good, improper use = bad.

But, as seen in your question, they tend to extend that good/bad onto everything else.

It is something psychological, but I have not identified it yet, so someone else has to answer it.
Title: Re: Asimov's laws of robotics
Post by: DaiHard on August 14, 2020, 09:23:34
Dan, I think Asimov saw his laws as applying to robots which can truly think for themselves, and make decisions with "free will", so simply delegating the task to the human programmer will not work.

Matty raises an interesting point: what if the robot's perception of harm for the human, and the human's view, do not coincide? Presumably the robot has to allow something for the autonomy of a human, but how much? Presumably the robot should catch someone who stumbles, and is about to fall over a cliff, but what about someone attempting suicide? Does the robot catch them, and condemn them to a lingering death from terminal cancer, say?

I think Asimov envisaged more direct decisions: where the robot was planning something, or deliberately planning NOT to do something, to harm a human, but there are inevitably going to be fringe cases, where a "profit and loss" calculation has to be made.
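
A hypothetical sketch of such a "profit and loss" calculation, in Python; every name and number here is invented, and assigning real values to harm and autonomy is precisely the unsolved part.

Code:
def should_intervene(harm_if_idle: float,
                     harm_if_act: float,
                     autonomy_cost: float) -> bool:
    # Intervene only when the harm prevented clearly outweighs both the
    # harm caused by intervening and the value of letting humans choose.
    return harm_if_idle > harm_if_act + autonomy_cost

# The stumbling hiker: huge harm prevented, small autonomy cost -> intervene.
print(should_intervene(harm_if_idle=100.0, harm_if_act=1.0, autonomy_cost=5.0))    # True
# The deliberate choice: a large enough autonomy term flips the answer.
print(should_intervene(harm_if_idle=100.0, harm_if_act=1.0, autonomy_cost=200.0))  # False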

It's interesting that the 0th law was later added: robots must protect humanity as a whole - does that allow a robot to kill Hitler, for example? How far in advance can it make that decision - based on a psychological profile at school?
Title: Re: Asimov's laws of robotics
Post by: Dan on August 14, 2020, 13:30:46
Quote from: DaiHard on August 14, 2020, 09:23:34
Dan, I think Asimov saw his laws as applying to robots which can truly think for themselves, and make decisions with "free will", so simply delegating the task to the human programmer will not work.

Well, let me look left and right. Done.

What did I do, you may ask? I looked left and saw no such robots. I looked right, and still I saw no robot that thinks for itself.

They are not programmed yet. The robots-in-the-making still fall down when they try to mimic humans.

When we talk about Asimov's laws of robotics, we are talking about assumptions. We are talking about "what ifs".

I dare to say that we are playing a MIND-game when we think about this topic.

And, as I said earlier in other words, a machine does not simply become consciously aware of itself.

Let me tell you this in another way: if you have programmed a game, let's say Asteroids, then you will play Asteroids today, tomorrow, and in 10 years as well, from the same code.
This code does not modify itself, unless it is written to modify itself.

For a robot to become conscious (as in this mind-game of Asimov's), a programmer has to program it first to learn, and to adapt and modify its behavior (the source code) according to what it learns.
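
A toy sketch of that difference, in Python (everything here is invented for illustration): the first function behaves the same forever, while the second changes its own rule only because a programmer wrote the learning step in.

Code:
def fixed_behavior(x: float) -> float:
    # Plays the same way today, tomorrow, and in 10 years.
    return 2.0 * x

class AdaptiveBehavior:
    def __init__(self) -> None:
        self.gain = 2.0  # the starting rule, chosen by the programmer

    def act(self, x: float) -> float:
        return self.gain * x

    def learn(self, x: float, desired: float) -> None:
        # Nudge the rule toward what experience says it should have been.
        if x != 0:
            self.gain += 0.1 * (desired / x - self.gain)

agent = AdaptiveBehavior()
for _ in range(100):
    agent.learn(3.0, 9.0)        # experience keeps saying the answer is 3*x
print(round(agent.act(3.0), 2))  # behavior has drifted to 9.0: the rule changed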

There has to be a programmer FIRST, who writes a program, before a machine can act at all.
So I'm not only delegating the task to the human programmer; I'm stating that the human programmer is responsible for how the machine acts.


But maybe we are not talking about robots, but about humanity itself? Or maybe we can talk about Peter Parker's problem?

In the end, all questions point to the same problem.
 
Title: Re: Asimov's laws of robotics
Post by: Amon on August 15, 2020, 00:03:13
Quote
I'm not scared of a computer passing the Turing test... I'm terrified of one that intentionally fails it.
Title: Re: Asimov's laws of robotics
Post by: Matty on August 15, 2020, 00:31:22
Are you sure Siri is not intentionally failing it continuously, Amon? (Or insert any other AI assistant tool.)
Title: Re: Asimov's laws of robotics
Post by: Dan on August 16, 2020, 02:03:18
You know, every computer game that feels either challenging or fair is programmed to intentionally fail.
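
As a minimal sketch of that intentional failure, in Python (names and numbers invented for illustration): an AI opponent that knows the strongest move but deliberately passes it up some of the time, which is what makes it feel beatable and therefore fair.

Code:
import random

def pick_move(moves_scored: list[tuple[str, float]], mistake_rate: float = 0.3) -> str:
    """moves_scored: (move, score) pairs, where a higher score is stronger play."""
    ranked = sorted(moves_scored, key=lambda m: m[1], reverse=True)
    if random.random() < mistake_rate and len(ranked) > 1:
        # Intentional failure: pass up the strongest move.
        return random.choice(ranked[1:])[0]
    return ranked[0][0]

# About 30% of the time the AI "misses" the winning block.
print(pick_move([("block win", 10.0), ("random push", 2.0), ("retreat", 1.0)]))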