
Author Topic: Artificial Consciousness  (Read 372 times)

DaiHard (Jr. Member, Posts: 36)
Re: Artificial Consciousness
« Reply #15 on: August 24, 2020, 09:46:57 AM »
For me, consciousness/self-awareness requires a sense of self: an internal model of the world that includes oneself, together with the ability to inspect that model and make decisions based on it (i.e. introspection). I don't think we are anywhere near that yet, though of course lots of software has an internal model of its "world".
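To make the distinction concrete, here is a minimal sketch (all names are illustrative assumptions, not a real framework) of an agent whose world model contains an entry for itself, and which can query that entry before acting:

```python
# Hypothetical sketch: an agent whose internal world model includes
# an entry for *itself*, which it can inspect like any other object.

class Agent:
    def __init__(self):
        # The model covers both the external world and the agent itself.
        self.world_model = {
            "door": {"state": "closed"},
            "self": {"battery": 0.9, "goal": "explore"},
        }

    def introspect(self):
        # Introspection here is just reading the model of oneself.
        return self.world_model["self"]

    def decide(self):
        me = self.introspect()
        # The decision is conditioned on the self-model, not only the world.
        if me["battery"] < 0.2:
            return "recharge"
        return me["goal"]

agent = Agent()
print(agent.decide())  # "explore" while the battery is high
```

Of course, having a `"self"` key in a dictionary is nowhere near a genuine sense of self; the sketch only shows the *shape* of the idea, not the thing itself.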

Emotions are harder: in human terms, I think they are heavily modified by hormonal actions, which make us *feel* in a particular way in response to physical or neurological stimuli.  I don't think emotions are *necessarily* either a sign of, or a requirement for, consciousness/true AI, but it's likely they will arise as an emergent consequence of introspection.

I'm not sure I totally buy the argument that "a human has to program it, so it's totally predictable and not conscious" - code I write often acts in ways I hadn't anticipated (!), but more to the point, I can easily imagine writing code that behaves in a way contingent on circumstances, drawing on evidence from sources I didn't have when I wrote the code, and which might easily pass the Turing test.  There's always the option of self-modifying code - and the idea that a computer could start programming itself in response to circumstances is not too far away, either.
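Self-modifying code in a weak sense is already easy to demonstrate. A hedged toy sketch (the greeting and function names are made up for illustration): the program generates and installs new code for itself at runtime, based on evidence its author never saw.

```python
# Hypothetical sketch of weakly "self-modifying" code: the program
# builds new source text from runtime evidence, compiles it with
# exec(), and installs the result as part of its own behaviour.

def make_responder(observed_greeting):
    # 'observed_greeting' stands in for evidence gathered at runtime,
    # unavailable when the original code was written.
    src = f"def respond(name):\n    return '{observed_greeting}, ' + name"
    namespace = {}
    exec(src, namespace)  # the program extends its own code
    return namespace["respond"]

respond = make_responder("Hello")
print(respond("DaiHard"))  # Hello, DaiHard
```

Whether behaviour the author could not have predicted in detail counts against "totally predictable" is exactly the point at issue; the mechanism itself is mundane.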

That said, I totally agree with those who have said that a machine is not conscious/showing emotion when it displays (under programmatic control) symbols of emotion: I can make a program SAY it feels happy when it wins a game, but it really isn't....
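The gap between displaying and feeling is easy to show in code. A trivial sketch (function name and strings are mine, purely for illustration):

```python
# The "emotion" below is just a string selected by a branch, displayed
# under programmatic control; nothing in this program feels anything.

def report_result(won: bool) -> str:
    return "I feel happy! :)" if won else "I feel sad. :("

print(report_result(True))  # I feel happy! :)
```

The program emits the symbols of happiness on winning, which is precisely the hollow display being objected to above.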

