Triz
19-10-2015, 11:29 AM
http://i57.tinypic.com/29v6ed.png
This debate is whether or not you think AI (Artificial Intelligence) is/will be a threat to us in the future.
Without getting onto the whole "Terminator/Skynet" topic, just have a discussion about AI in general. Our technology has rapidly grown more advanced looking back from 20-30 years ago, therefore what do you think it's going to be like in a further 20-30 years? We already have Siri and other similar AI's out there who can have a basic conversation with us humans, and even self-driving cars. However do you feel that it will get out of hand, as it's us who program computers, and it's very black and white to them.
For example if we program a AI computer for the good of the planet (something as simple as litter picking) who adapts itself to different situations and learns from those experiences, then it won't be long until said AI figures out that best method to picking up the litter is to eliminate it at the source... us.
Stephan Hawkings has long warned us about the complications with AI and has openly stated that he is against AI, and has signed a petition along with many other famous names to ban weaponizing AI, but even Stephan Hawkings doesn't think that is enough, but it's a start, considering the above example; anything can happen when programming the simplest task, as theres far too much things to over-look with all the "what if's"
With this said, an article I read a few months back had a good point
Yet, some of the enthusiasm may be premature: as I’ve noted previously, we still haven’t produced machines with common sense,
vision, natural language processing, or the ability to create other machines. Our efforts at directly simulating human brains remain primitive.
This debate is whether or not you think AI (Artificial Intelligence) is/will be a threat to us in the future.
Without getting onto the whole "Terminator/Skynet" topic, just have a discussion about AI in general. Our technology has rapidly grown more advanced looking back from 20-30 years ago, therefore what do you think it's going to be like in a further 20-30 years? We already have Siri and other similar AI's out there who can have a basic conversation with us humans, and even self-driving cars. However do you feel that it will get out of hand, as it's us who program computers, and it's very black and white to them.
For example if we program a AI computer for the good of the planet (something as simple as litter picking) who adapts itself to different situations and learns from those experiences, then it won't be long until said AI figures out that best method to picking up the litter is to eliminate it at the source... us.
Stephan Hawkings has long warned us about the complications with AI and has openly stated that he is against AI, and has signed a petition along with many other famous names to ban weaponizing AI, but even Stephan Hawkings doesn't think that is enough, but it's a start, considering the above example; anything can happen when programming the simplest task, as theres far too much things to over-look with all the "what if's"
With this said, an article I read a few months back had a good point
Yet, some of the enthusiasm may be premature: as I’ve noted previously, we still haven’t produced machines with common sense,
vision, natural language processing, or the ability to create other machines. Our efforts at directly simulating human brains remain primitive.