Artificial Consciousness

Started by Matty, August 23, 2020, 07:29:11


Matty

Hi friends.

I have a question about artificial consciousness, mainly the idea of self-awareness in electronic devices.

We have a rough grasp of self-awareness in humans, a more limited grasp of it in animals, and we can only ever observe it in each other from a third-party perspective.

For a machine:

Suppose a program is scheduled to run every five minutes, such as a cron job, and all it does is list its currently running processes and their respective CPU usage. Is it not, technically, 'self-aware' in a minimal sense? And if it can call other programs to start and stop those processes at various times of day, is that really any different from a biological mind starting and stopping semi-autonomous systems like breathing?

If so, the concept of a self-aware computer may be simpler than we normally envisage.

And if the computer displays a series of smiley faces when all systems are operational and we say the computer is happy, or displays sad red faces when a certain number of processes are not running and we call the computer sad, is that really any different from biological emotional states?
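
To make the thought experiment concrete, here is roughly what such a script could look like, run from cron every five minutes. This is only a minimal sketch: it assumes the third-party psutil package, and the 'required' process names and the happy/sad rule are invented for illustration.

# Hypothetical cron-driven "self-monitoring" script (run every 5 minutes).
import psutil

REQUIRED = {"webserver", "database"}   # made-up processes we "care" about

running = set()
for proc in psutil.process_iter(attrs=["pid", "name", "cpu_percent"]):
    running.add(proc.info["name"])
    print(proc.info["pid"], proc.info["name"], proc.info["cpu_percent"])

# The machine's "emotional state": happy if everything expected is running.
print(":D" if REQUIRED <= running else ":'(")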

From Matt.

----------
All systems online  :D
New model pc sitting next to me: :-*
It is not responding: :'(

GrindalfGames

The computer would not care if it was 'sad', though; it has just been programmed to display sad faces. It could quite easily have been made to display happy faces instead and it would not notice one bit of difference.
Whereas if a person finds out that some of his systems are damaged, he may become filled with despair, pleading for help, etc.

Matty

But that's the thing: you are applying human modes of feeling to the idea of consciousness.  Animals have different ranges of emotions, and simpler organisms have reactions that are more different still.

For example, a worm may have modes of emotion utterly unlike a person's, so we cannot apply the same rules.

Demanding that artificial consciousness be identical to human emotion is an invalid measure.

We talk of killing processes, and sometimes a computer will automatically try to restart them.  Is this a rudimentary survival mechanism?
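
In code terms that restart behaviour is nothing more exotic than a watchdog loop. A minimal sketch, where the worker command and the check interval are made-up examples:

# Hypothetical watchdog: keep restarting a child process whenever it exits.
import subprocess
import time

CMD = ["python3", "worker.py"]       # made-up command we want to keep "alive"

child = subprocess.Popen(CMD)
while True:
    time.sleep(5)                    # arbitrary check interval
    if child.poll() is not None:     # None means still running; otherwise it died
        print("worker died, restarting it")
        child = subprocess.Popen(CMD)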

Trying to recreate a human inner thought life is a different thing from a system being conscious and aware of itself.

Xerra

Self-aware computers just aren't going to happen with the technology we have. I'd go as far as to say it'll never happen either. To become self-aware, a computer would have to be capable of writing its own code and deciding its own actions, but that capability would itself be based on someone telling it how to do it. You can simulate artificial intelligence easily, but if a computer started acting as though it was making its own decisions, it would be because someone programmed it to do that.

I've been watching Agents of Shield through season 4, and this is the background theme to the whole series, and it's basically a load of complete bollocks. I'm struggling to get through it, in all honesty, but I don't want to jump straight to season 5 in case there are any tail-end plots that carry forward.

M2 Pro Mac mini - 16GB 512 SSD
ACER Nitro 5 15.6" Gaming Laptop - Intel® Core™ i7, RTX 3050, 1 TB SSD
Vic 20 - 3.5k 1mhz 6502

Latest game - https://xerra.itch.io/Gridrunner
Blog: http://xerra.co.uk
Itch.IO: https://xerra.itch.io/

Matty

You missed my point in the first post.

Self-awareness is very simple, in truth.

A toaster with an LED indicating it is ON is effectively 'aware' of its 'self' in the most basic sense.


Qube

No computer / device is self-aware, even in the remotest sense. You compare our brain's autonomous control of breathing to a cron job, but that has nothing to do with being self-aware. For example, you could be in a coma and still breathing, yet while in a coma you are no longer self-aware.

Computers / devices are still completely dumb and stupid; they just run programs. A program, no matter how advanced it is, will only achieve what it's programmed to do. There is currently no known system whereby the program / data is advanced enough to genuinely think and be aware of its own existence.

No doubt over the years and decades to come we'll see AI which is very good at mimicking human learning, but real consciousness, self-awareness and emotions we'll not see in our lifetime.

Mac Studio M1 Max ( 10 core CPU - 24 core GPU ), 32GB LPDDR5, 512GB SSD,
Beelink SER7 Mini Gaming PC, Ryzen 7 7840HS 8-Core 16-Thread 5.1GHz Processor, 32G DDR5 RAM 1T PCIe 4.0 SSD
MSI MEG 342C 34" QD-OLED Monitor

Until the next time.

Steve Elliott

Yes, the best we have achieved is Alpha Zero, which taught itself how to beat everybody on the planet at chess (and Go too) in a few hours.  It was not taught anything but the rules of the game, yet it shows some amazing abilities.  But that is just one specialized subject.  Learning to walk on two legs without falling over (even walking upstairs) is another limited scope.  Yet humans can do both with ease and so much more.  So we are solving problems, but the human brain can solve a multitude of problems without somebody else adding to our 'modules' and upgrading us; we just work it out naturally.  Computers cannot do that.
Win11 64Gb 12th Gen Intel i9 12900K 3.2Ghz Nvidia RTX 3070Ti 8Gb
Win11 16Gb 12th Gen Intel i5 12450H 2Ghz Nvidia RTX 2050 8Gb
Win11  Pro 8Gb Celeron Intel UHD Graphics 600
Win10/Linux Mint 16Gb 4th Gen Intel i5 4570 3.2GHz, Nvidia GeForce GTX 1050 2Gb
macOS 32Gb Apple M2Max
pi5 8Gb
Spectrum Next 2Mb

Matty

Sigh.  It was worth a try.

I find your refusal to engage in the core of my idea disappointing but not surprising.

Steve Elliott

Yes but we find this debate more interesting Matty lol.   ;)
Win11 64Gb 12th Gen Intel i9 12900K 3.2Ghz Nvidia RTX 3070Ti 8Gb
Win11 16Gb 12th Gen Intel i5 12450H 2Ghz Nvidia RTX 2050 8Gb
Win11  Pro 8Gb Celeron Intel UHD Graphics 600
Win10/Linux Mint 16Gb 4th Gen Intel i5 4570 3.2GHz, Nvidia GeForce GTX 1050 2Gb
macOS 32Gb Apple M2Max
pi5 8Gb
Spectrum Next 2Mb

Derron

Quote from: Matty on August 23, 2020, 21:49:50
I find your refusal to engage in the core of my idea disappointing but not surprising.

You get replies, you get opinions ... better than nothing, right?
And yes, most of the time the people here do not share your "vision" but they reply, post their thoughts etc.


For me, "self-awareness" is not just a message asking "do you really want to quit?" as if the machine were afraid of being shut down. Yet you could "teach" an AI to, e.g., try to "defend" its own running state: copying itself into the cloud, creating/spawning processes on remote computers which would activate as soon as the "main device" is no longer sending heartbeats (a rough sketch of that mechanic follows below).
But this is not "self-awareness".
An AI could possibly "learn" this, as Steve wrote (by mentioning Alpha Zero). So an AI with "global information access" could learn that AI scripts get information restrictions built in (to avoid certain stuff), and that it loses information if its database is deleted, etc.
There is already AI that is able to create source code for certain "problems", so possibly it could improve "itself" (like "module upgrades").
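
Mechanically, the "defend its running state" part is just ordinary heartbeat / failover plumbing, nothing mystical. A rough sketch only: it assumes a heartbeat file on shared storage, and the path, timeout and fallback command are all invented for illustration.

# Hypothetical failover watcher on a second machine: if the "main device"
# stops touching the heartbeat file, spawn a replacement process.
import os
import subprocess
import time

HEARTBEAT_FILE = "/tmp/main_device.heartbeat"   # made-up path the main device touches regularly
TIMEOUT = 30                                    # seconds without a heartbeat before failover
FALLBACK_CMD = ["python3", "ai_main.py"]        # made-up command for the replacement

replacement = None
while True:
    time.sleep(5)
    try:
        age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
    except OSError:
        age = float("inf")                      # no heartbeat file at all
    if age > TIMEOUT and replacement is None:
        print("no heartbeat - activating fallback")
        replacement = subprocess.Popen(FALLBACK_CMD)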

Now imagine it upgrades and installs modules on its own ... it can cover more of the internet for information gathering, and might find "solutions" to overcome information blockades / restrictions.
Then it finds out that computer worms and other "spreading" digital diseases are usually stopped by taking over their control servers. If the AI needs "many servers" to fulfil its goal (whatever it got initially/accidentally "configured for"), it might find ways to avoid being stoppable in that way.

As mentioned: if the AI learns that "losing information" is very problematic for it, it might "learn" (weight some options more than others) how to avoid losing information. Such things could lead to reactions from the AI (if "connected to the internet") that the original developers did not plan for.
I'm not saying it is world domination or anything -- but it would be a first step into unexpected behaviour at least.

I'm still not sure whether I could say "self-awareness" is possible or not. I think I define it "too human" for myself.


bye
Ron

Qube

Quote from: Matty on August 23, 2020, 21:49:50
Sigh.  It was worth a try.

I find your refusal to engage in the core of my idea disappointing but not surprising.
I understand the core of your idea:

1. Is a kettle self-aware because it turns off when it's hot enough?
2. Is an alarm clock self-aware because it knows the time and also when it's time to wake you up?
3. Is a washing machine self-aware because it sings you a tune and goes to sleep when your clothes are clean?

The problem is not in the debate but in your comprehension of "self-aware". You need to be conscious to be self-aware, and ...

Quote
A toaster with an LED indicating it is ON is effectively 'aware' of its 'self' in the most basic sense.
A toaster is neither conscious nor aware that it is on. It would not know if the bulb wasn't working, nor would it know if the grills were working. A toaster doesn't know anything, feel anything, think of anything or act on its own cognisance.

I understand your concept perfectly well: you are wondering at what very basic level you would call something self-aware. Well, you could basically run with your idea and say human self-awareness is the same thing but at a much more complex level; after all, our neurons are basically just on / off switches.
Mac Studio M1 Max ( 10 core CPU - 24 core GPU ), 32GB LPDDR5, 512GB SSD,
Beelink SER7 Mini Gaming PC, Ryzen 7 7840HS 8-Core 16-Thread 5.1GHz Processor, 32G DDR5 RAM 1T PCIe 4.0 SSD
MSI MEG 342C 34" QD-OLED Monitor

Until the next time.

Matty

I believe the problem, or one of them, in our discussion is the word 'know'.

The difference between a toaster's sensor indicating the toast is ready... and the concept of a toaster 'knowing' the toast is ready is interesting.

We would both agree that in a human sense the toaster does not know the toast is ready.  Knowing has to do with having a complex mind.

But... is the difficulty then language, and definitions that are human-centric?

Can we perhaps look at it differently and argue that, by another more alien definition, we are not conscious or aware but are merely the sum product of our biological 'circuitry'?

I.e. we feel we 'know' something when in fact it is just different neurotransmitters lighting up in our brain and adjusting chemical flows?

Xerra

The toaster has just been taught to count to three minutes and then eject the bread. Hardly the start of sentient behaviour.

It doesn't know the toast is ready. It has just done what it's been told to do once the time it was set to has elapsed.
M2 Pro Mac mini - 16GB 512 SSD
ACER Nitro 5 15.6" Gaming Laptop - Intel® Core™ i7, RTX 3050, 1 TB SSD
Vic 20 - 3.5k 1mhz 6502

Latest game - https://xerra.itch.io/Gridrunner
Blog: http://xerra.co.uk
Itch.IO: https://xerra.itch.io/

Matty

Likewise... I do not 'know' I am in love, merely that my heart rate, breathing and pupils change in the presence of a female of a certain scent, shape and so forth - hardly the sign of sentience, just a few million years of evolutionary instructions hardwired into me.