The development of technology over the last couple of decades can only be called meteoric. To take an obvious example, the internet barely existed before 1990, whereas life without it is all but inconceivable today. It would be an understatement to say that the internet has revolutionized the field of communications. This rapid progress, however, may not be an unequivocal cause for celebration. In particular, several concerns have emerged over time regarding the specific technology of artificial intelligence. This sample computer science essay will explore these concerns regarding the dangers of artificial intelligence.
Recent high-profile warnings about artificial intelligence
Recent news articles have tended to focus on comments that Bill Gates, the co-founder of Microsoft, has made regarding artificial intelligence. Gates carries obvious credibility here, given his role as a pioneer in information technology and Microsoft's part in making the personal computer ubiquitous. In this context, the following concern expressed by Gates (Holley) is quite noteworthy:
"I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I don't understand why some people are not concerned" (paragraph 15).
Essentially, Gates is suggesting that if artificial intelligence becomes advanced enough, there is a real risk that it will become too intelligent for human beings to manage or control in any effective fashion. Insofar as Gates must have a good idea of the trajectory of technological development over the next several years, his statement deserves to be taken into account when considering the issue of artificial intelligence.
Stephen Hawking's concerns about AI and the end of humans
Another high-profile comment regarding artificial intelligence comes from another well-regarded figure, the physicist Stephen Hawking, whose views on this subject would seem to be considerably more apocalyptic than Gates's (Cellan-Jones):
"The development of artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded" (paragraphs 1 and 7-8).
The basic assumption here is that human beings could design an artificial intelligence that would then become far more advanced than the natural intelligence of its designers. This concern can be explicated by analogy to capacities that existing technologies already have and human beings do not.
For example, if a human being were to perform an advanced mathematical calculation by hand, it would likely take several minutes to work through the problem on paper and produce an answer. A computer (or even a pocket calculator), by contrast, produces the same answer in less than a second. The possibility described by Hawking is analogous: the algorithms underlying an artificial intelligence would enable it to redesign and perpetuate itself at an ever-increasing rate. Machines would then essentially become the most advanced "species" on the planet, leaving human beings far behind.
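As a rough illustration of this speed gap, consider the following Python sketch; the task and the timing it reports are illustrative assumptions chosen for this essay, not figures drawn from any of the sources cited here:

    import time

    # Illustrative task: sum the squares of the first ten million integers.
    # Working this out by hand would take a human far longer than a lifetime;
    # an ordinary machine typically finishes in about a second.
    start = time.perf_counter()
    total = sum(n * n for n in range(10_000_000))
    elapsed = time.perf_counter() - start

    print(f"Result: {total}")
    print(f"Computed in {elapsed:.3f} seconds")

The point is not the specific numbers but the orders-of-magnitude gap between mechanical and manual computation that they illustrate.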
Alternative perspectives
The comments by Gates and Hawking discussed above focus on the sheer processing power that artificial intelligence may one day achieve, thereby eclipsing the powers of the human mind. Bostrom, though, has approached the problem from a somewhat different angle. The suggestion made in his article is that what is most to be feared about advanced artificial intelligence is the danger the technology poses to morality and philosophy:
"We have little reason to believe a superintelligence will share human values and no reason to believe it would place intrinsic value on its own survival either. These arguments make the mistake of anthropomorphizing artificial intelligence, projecting human emotions onto an entity that is fundamentally alien" (paragraph 2).
The idea here is that whatever kind of "mind" an artificial intelligence may have, it will not resemble the human mind, insofar as the human mind has non-rational aspects (e.g., emotions) that an artificial intelligence would lack. There is thus no telling how such an intelligence might behave or what decisions it might make.
Moral dangers of artificial intelligence
Bilton has also echoed this concern that artificial intelligence will not necessarily possess anything like morality as human beings understand it. The danger here could be described as one of uncertainty: if one cannot know how an artificial intelligence would act, it might not act in horrific ways as a matter of course, but it would almost inevitably do something, sooner or later, that most human beings would find morally horrific.
The implicit concession here is that the human mind is not driven by pure reason; rather, the reasoning processes of human beings are contextualized within a framework of emotions, chief among which are basic emotions such as empathy that give rise to basic morality. There is no telling what an intelligent agent might do if it possesses pure reason but lacks any such broader framework of emotions. Indeed, from a human perspective, one would imagine that such an agent would engage in behaviors that would generally be called sociopathic.
Overstated dangers and the reality of technology
On the other hand, an alternative perspective on artificial intelligence holds that the apocalyptic concerns are somewhat overblown. No one denies that artificial intelligence carries inherent moral and practical dangers; what is contested, however, is the idea that these dangers cannot be controlled in a meaningful way. As Kurzweil has written:
"There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology [for which a conference was called]. The resulting guidelines, which have been revised by the industry since then, have worked very well" (paragraph 5).
Not all advanced technology is harmful: smartphones, for example, have reshaped society profoundly without spiraling beyond human control. The main idea here is that, in principle, artificial intelligence cannot become more dangerous than it is programmed to become, and that an effective regulatory environment for the development of the technology can control its dangers while capitalizing on its benefits.
Cellan-Jones has also pointed out that other pioneers in the field of artificial intelligence do not share Hawking's pessimism regarding the future of the technology. In essence, such "optimists" reason that human moral reasoning about artificial intelligence will advance in step with the evolution of the technology itself. At present, it is difficult to imagine how full artificial intelligence could be controlled; but to a large extent, this may be because such a technology is still so abstract and hypothetical that it lies beyond the real grasp of the human imagination.
As the technology becomes more real, however, human beings may well prove capable of reasoning about the relevant issues more coherently and thereby offsetting any major risks that emerge in connection with the technology. Of course, the counterpoint would be that the technology itself may erode the very communication and interpersonal relationships on which such collective reasoning depends. Perhaps only time will tell which of these perspectives is the more valid one.
Philosophical and ideological analysis of AI
At this point, though, it is perhaps worth exploring some of the philosophical and ideological assumptions presupposed by serious concern over the dangers of artificial intelligence. In particular, one such basic assumption would seem to be that human beings will in fact be able to create a form of intelligence that is more sophisticated than the human mind itself. This in turn implies the assumption that the human mind is in fact reducible to the human brain.
Such an assumption is necessary because whatever artificial intelligence is, it must arise from strictly material and technological foundations; therefore, if it is supposed to be qualitatively similar to human intelligence, then the human mind must also arise strictly from the brain itself. If, on the other hand, it is assumed that there is something "invisible" within the human mind that cannot be reduced to matter, then the entire analogy between human intelligence and artificial intelligence would collapse.
This is also related to the fact that artificial intelligence is programmed entirely in terms of rational and logical algorithms (see Thomason). It is tempting to assume that logic can be captured as a complete and coherent system. However, as Raatikainen has indicated, the incompleteness theorems of the mathematician Gödel imply that any consistent formal system powerful enough to express elementary arithmetic contains true statements that cannot be proved within the system, and cannot establish its own consistency from within; in looser terms, it is impossible to comprehend a system as a system while still remaining within the confines of that system.
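For reference, a standard informal statement of the first incompleteness theorem runs as follows; this formulation is a conventional gloss consistent with Raatikainen's entry, not a quotation from it. For any consistent, effectively axiomatized formal system $F$ strong enough to carry out elementary arithmetic, there is a sentence $G_F$ in the language of $F$ such that

    $F \nvdash G_F$ and $F \nvdash \neg G_F$,

that is, $F$ can neither prove nor refute $G_F$, even though $G_F$ is true under the standard interpretation.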
Human brain versus artificial intelligence
The human brain, on this view, will never reach a point where it comprehends the mind itself as a complete system. The concrete implication that follows is that it may be conceptually impossible for a human being ever to create an intelligence that is truly more complex in all aspects than human intelligence itself, because it would be metaphysically necessary that something of human intelligence be left out of whatever artificial intelligence human beings could construct.
This analysis sheds an interesting light on the concerns regarding the dangers of artificial intelligence discussed above. On the one hand, the concern that artificial intelligence could ever truly surpass human intelligence in all its aspects would seem to be spurious. Artificial intelligence may become superior in some respects (most notably raw rational and logical processing power); but it would be impossible for such an intelligence to develop what could properly be called wisdom, or anything else rooted in the non-rational dimensions of the human mind.
On the other hand, it may still be very much possible for human beings to create an artificial intelligence that has something like free will and behaves in truly monstrous ways, precisely because of its inferiority (and not superiority) to human intelligence. In this case, human beings would still be more intelligent than their creations in the deep sense; but they may also encounter considerable difficulty in bringing their runaway creations back under control. In short: there must always remain something within the human mind that cannot be transmitted to artificial intelligence, but this will not necessarily prevent artificial intelligence from wreaking havoc on the world.
Conclusion
In summary, this essay has discussed the potential dangers of artificial intelligence. It began with a description of a couple of high-profile comments on this subject; it then considered alternative perspectives on the dangers, or lack thereof; and finally, it conducted a philosophical and ideological analysis of the subject under consideration. A main point that has emerged here is that concerns about the dangers of artificial intelligence are not unjustified; a closer analysis, however, seems to reveal that the danger lies not so much in the potential superiority of artificial intelligence to the human mind but precisely in its inferiority. Even so, it is critical that sociologists and scientists alike continue to observe and discuss the impact and potential of artificial intelligence, which will no doubt remain a rich subject of research for years to come.
Works Cited
Bilton, Nick. "Artificial Intelligence as a Threat." New York Times. 5 Nov. 2014. Web. 3 Feb. 2015. http://www.nytimes.com/2014/11/06/fashion/artificial-intelligence-as-a-threat.html?_r=0.
Bostrom, Nick. "You Should Be Terrified of Superintelligent Machines." Slate. 11 Sep. 2014. Web. 3 Feb. 2015. http://www.slate.com/articles/technology/future_tense/2014/09/will_artificial_intelligence_turn_on_us_robots_are_nothing_like_humans_and.html.
Cellan-Jones, Rory. "Stephen Hawking Warns Artificial Intelligence Could End Mankind." BBC. 2 Dec. 2014. Web. 3 Feb. 2015. http://www.bbc.com/news/technology-30290540.
Holley, Peter. "Bill Gates on Dangers of Artificial Intelligence." Washington Post. 29 Jan. 2015. Web. 3 Feb. 2015. http://www.washingtonpost.com/blogs/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/.
Kurzweil, Ray. "Don't Fear Artificial Intelligence." Time. 19 Dec. 2014. Web. 3 Feb. 2015. http://time.com/3641921/dont-fear-artificial-intelligence/.
Raatikainen, Panu. "Gödel's Incompleteness Theorems." Stanford Encyclopedia of Philosophy. 2015. Web. 3 Feb. 2015. http://plato.stanford.edu/entries/goedel-incompleteness/.
Thomason, Richard. "Logic and Artificial Intelligence." Stanford Encyclopedia of Philosophy. 2013. Web. 3 Feb. 2015. http://plato.stanford.edu/entries/logic-ai/#cs.