Either this is a spoof article, which it probably isn't.
The New York Times is a piece of shjt.
[quote]Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.[/quote]
Right, we have access to machines that are smarter than we are and *this* is how we apply that knowledge? To become rich? We've already created machines that are smarter than ourselves; they work together without bickering or squabbling, without interest in personal gain but rather with only the intention of achieving a common goal. We are rendering ourselves obsolete at the speed of industry.
It's supposed to be a secret.
Fear not the robot overlords, for they are naught but our own imaginations, rendered in plastic and iron.
The human brain consists of a neural network that transmits a combination of trinary electrical and binary chemical signals to process information. This information is perceived by our consciousness as emotion, thought, knowledge, understanding and perception.
Emotions, being made up of chemical signals, are binary and therefore easier to emulate than intelligence - which is already progressing rather quickly. We will be on top only as long as they don't develop consciousness and perception - the two things that keep the brain a complete enigma.
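To make that concrete, here's a toy Python sketch of the model I'm describing (every name in it is made up, and it's deliberately cartoon neuroscience, not a real brain model): the electrical channel is trinary, each chemical messenger is a bare on/off flag, and because the chemical side is strictly binary, a small lookup table is enough to label "emotions".
[code]
# Toy model of the claim above: trinary electrical signals, binary
# chemical ones. All names are hypothetical; this only illustrates
# why an all-binary channel is easy to enumerate and emulate.
from dataclasses import dataclass, field

@dataclass
class ToyNeuronState:
    # Trinary electrical channel: inhibited (-1), resting (0), firing (+1).
    electrical: int = 0
    # Binary chemical channel: each messenger is simply present or absent.
    chemicals: dict[str, bool] = field(default_factory=lambda: {
        "dopamine": False, "serotonin": False, "cortisol": False,
    })

# With strictly binary inputs the whole "emotional" state space is
# finite, so a plain lookup table covers it.
EMOTION_TABLE = {
    (True, False, False): "reward/pleasure",
    (False, True, False): "contentment",
    (False, False, True): "stress/fear",
}

def toy_emotion(state: ToyNeuronState) -> str:
    key = tuple(state.chemicals.values())
    return EMOTION_TABLE.get(key, "neutral/mixed")

s = ToyNeuronState()
s.chemicals["cortisol"] = True
print(toy_emotion(s))  # -> "stress/fear"
[/code]
The part no lookup table can cover is exactly what makes the brain an enigma: consciousness and perception don't reduce to a finite set of on/off flags.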
Programmed or not, they do have emotions. Consciousness will come as soon as we figure out what that is. No news on when that will be. We're figuring out perception as we speak.
Are you seriously afraid of robots?
*Rolling on floor laughing my arse off*
The Four Rules of Robotics:
0) A robot cannot allow, through action or inaction, humanity to come to harm.
1) A robot cannot allow, through action or inaction, a human being to come to harm, unless doing so would violate the zeroth law.
2) A robot must not allow itself, through action or inaction, to come to harm, unless doing so would violate the zeroth or first laws.
3) A robot must obey all commands given to it by a human being, unless doing so violates any other law.
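Those "unless doing so would violate..." clauses are just a strict priority ordering, which is easy to sketch in Python (the Action fields and predicates below are invented for illustration, not anyone's real robot control code). Each law filters the robot's options, highest priority first, and a law is overridden only when honoring it would leave no options at all.
[code]
# Toy sketch of the four-law precedence scheme above. The Action
# fields and the law predicates are hypothetical illustrations.
from typing import Callable, NamedTuple

class Action(NamedTuple):
    name: str
    harms_humanity: bool = False
    harms_human: bool = False
    harms_self: bool = False
    ordered_by_human: bool = False

# Laws 0..3 in priority order; each returns True if the action is
# acceptable under that law taken in isolation.
LAWS: list[Callable[[Action], bool]] = [
    lambda a: not a.harms_humanity,   # 0: protect humanity
    lambda a: not a.harms_human,      # 1: protect individual humans
    lambda a: not a.harms_self,       # 2: protect itself
    lambda a: a.ordered_by_human,     # 3: obey human orders
]

def choose(options: list[Action]) -> Action:
    """Filter options law by law, highest priority first. A law is
    skipped (overridden) only if applying it would eliminate every
    remaining option - that is the 'unless doing so would violate
    the zeroth/first law' clause."""
    candidates = options
    for law in LAWS:
        survivors = [a for a in candidates if law(a)]
        if survivors:
            candidates = survivors
    return candidates[0]

print(choose([
    Action("obey and stand by", harms_human=True, ordered_by_human=True),
    Action("disobey and intervene", harms_self=True),
]).name)  # -> "disobey and intervene"
[/code]
Run it and the robot disobeys the order and sacrifices itself, because law 1 outranks both law 2 and law 3.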
MIT granted a bot anger, desire, frustration and jealousy. When asked, they said "we wanted the worst ones early to get the kinks out before they got really smart. And we'll program in more altruistic emotions about the time we do that too. Besides, these were the easy ones to program. Love, generosity, compassion, they all take a lot of intelligence... Woah. That's kinda deep."
Last spring, the Japanese succeeded in programming affection, playfulness and a rudimentary happiness. Germany's working on "imprinting".
What if we can figure out how to program emotion?
"While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in `2001: A Space Odyssey,`"
Now that they have a better idea what intelligence is and how much data the human brain can process, they have a better idea of 'when' this might happen - between 25 and 100 years to AI 2.0. 50 to never for 3.0 (true human analogue). 4.0 (possibly superior to humans) may not be possible without an actual biological brain.
Truly intelligent bots would either recognize and appreciate our various cultural/social/whatever contributions, or see our potential as slaves. Such bots would potentially come up with a version of the "three laws" on their own. HAL falls in the low end of this scale (he was driven nuts by being told to lie).
The first wave could be smart enough not to need us, but not smart enough to know what we're good for, or to care enough about us not to kill us all. Terminator falls in this category.
If we get to AI 4.0 or so, we'll either be enslaved or get a nice technotopian co-existence. Either way I wouldn't mind, and I echo lacmoo's sentiments.