
Humans, machines and learning

Image by Mike MacKenzie on Flickr
One of the many topics I discuss in my forthcoming book is Artificial Intelligence (AI) and its potential impact on the future of learning and development. I, along with many others, believe this is an important subject to explore: it is a rapidly growing area of technology that will significantly influence our future.

In particular, there are several philosophical debates about the nature of intelligence and how human intelligence differs from machine intelligence. One of the texts I draw from is Tegmark's Life 3.0. Here's an excerpt from the new book:

MIT physics professor Max Tegmark presents some compelling arguments for the future of AI. He argues that the benefits of AI will far surpass the threats, provided it is aligned with human intentions. One of the greatest concerns he raises is not that computers might become sentient, or ‘evil’, but a scenario in which the goals of ‘competent’ AI become misaligned with ours. His key argument is that the discussion around whether or not computers will attain consciousness or emotional capability is spurious (Tegmark, 2017). Our future co-existence with technology will be premised on the ability of computers to make life better for humanity, not to out-think us.

For Tegmark, intelligence, whether human or artificial, is the ability to accomplish complex goals (whether those goals are good or bad). He argues that intelligence ultimately relies on information and computation, not on flesh and blood or on metal and plastic. Therefore, he reasons, given the exponential developments taking place in the world of technology, there is no barrier to computers eventually attaining and even surpassing human intelligence. Such a position can be described as ‘Strong AI’, or in Tegmark’s terms, the ‘Beneficial AI movement’.

Conversely, weak AI supporters predict that computers will never reach a level of intelligence that exceeds our own. Firstly, they argue, human and machine intelligence are not the same thing. Secondly, computers blindly follow code and have no free will to decide not to follow it (unless they are programmed to do so, which defeats the notion of free will). Thirdly, the weak AI theorists suggest, it is proving extremely difficult to create computer programs that can accurately model or reproduce human attributes such as emotion, abstract thinking and intuition.

Whichever side of the argument you subscribe to, the comparisons between human and machine are striking. Arguably, attributes such as free will, emotion, abstract thinking and intuition not only make us who we are, but also create a permanent and unbridgeable divide between humans and computers.

Reference
Tegmark, M. (2017) Life 3.0: Being Human in the Age of Artificial Intelligence. London: Penguin Books.

Creative Commons License
Humans, machines and learning by Steve Wheeler was written in Plymouth, England and is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Reviewed by MCH on December 14, 2018
