
What can Computing do ‘beyond the Office’?



Introduction

Attempting to Define Artificial Intelligence

The early Artificial Intelligence (AI) pioneer John McCarthy defined Artificial Intelligence as “the science and engineering of making intelligent machines” (2007). However, intelligence in the general sense is defined as “the ability to acquire and apply knowledge and skills” (Oxford Dictionary 2012); a distinction is therefore drawn when considering intelligence in the machine (or artificial) sense, which is defined as “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages” (Oxford Dictionary 2012).

Considering these definitions, artificial intelligence appears to be a somewhat elusive concept to compartmentalise, a point observed by A. M. Turing (1950) before practical methods of artificial intelligence had even been proposed; in his paper Computing Machinery and Intelligence he put forward a test, known as the Imitation Game, to address the question of whether machines can think.
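To make the shape of the test concrete, the sketch below is a minimal illustration of the Imitation Game's structure rather than Turing's own formulation; both respondent functions are hypothetical placeholders, though the arithmetic question is one Turing himself suggests in the 1950 paper.

```python
# A minimal sketch of the Imitation Game's structure (not Turing's own
# formulation): an interrogator puts the same questions to a hidden human
# and a hidden machine, then must guess which is which.
import random

def human_respondent(question: str) -> str:
    return input(f"(human) {question}\n> ")       # a person types the reply

def machine_respondent(question: str) -> str:
    return "I would rather not say."              # stand-in for any conversational program

def imitation_game(questions):
    # Randomly hide the two respondents behind the anonymous labels A and B.
    players = [human_respondent, machine_respondent]
    random.shuffle(players)
    respondents = {"A": players[0], "B": players[1]}

    for question in questions:
        for label, respondent in respondents.items():
            print(f"{label}: {respondent(question)}")

    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    print("Correct!" if respondents.get(guess) is machine_respondent else "Fooled.")

imitation_game(["Do you write poetry?", "Add 34957 to 70764."])
```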

Notably, however, even this test, which Turing devised specifically to pin down the terms “machine” and “think” as they relate to Artificial Intelligence, has been subject to academic criticism (Sterrett 2000, Moor 2004), and the difficulty of quantifying artificial intelligence has remained an ongoing academic discussion (Brooks 1987, Naas et al 1995, Kassan 2006).

Watson the computer, pictured here between its fellow human competitors, won the American game show Jeopardy

The Beginnings of Artificial Intelligence

The notion of Artificial Intelligence can be traced back some 2,500 years, with Greek mythology holding many instances of artificially intelligent beings; furthermore, feasible attempts to create Artificial Intelligence pre-date the modern computer by at least 800 years. In the 13th century an “Arabic thinking machine called a Zairja” was used by astrologers (McCorduck et al. 1977), and worked by forming “an agreement in the wording (between)… question and answer ... with the help of the technique called the technique of ‘breaking down’” (Khaldūn 1958).

The fundamental focus on logic within the Zairja can be traced forward to the ‘modern day’ foundations of AI. In their 1943 paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity”, McCulloch and Pitts established that expressions in logic could be computed by simple so-called ‘neural nets’, thereby reflecting the structure and nature of the by then established biological neuron doctrine (based upon the work of Golgi and Cajal); moreover, the design of John von Neumann’s first digital computer was greatly influenced by McCulloch and Pitts’ work (Boden 1995).
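The core of McCulloch and Pitts' idea can be illustrated in a few lines of code; the sketch below is a paraphrase rather than their original notation, showing a threshold ‘neuron’ whose weighted binary inputs reproduce the basic logical connectives.

```python
# Illustrative only (not McCulloch and Pitts' notation): a unit that "fires"
# when the weighted sum of its binary inputs reaches a threshold can compute
# ordinary logical expressions.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def AND(a, b):  return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):   return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):     return mp_neuron([a], [-1], threshold=0)

# Wiring such units together builds more complex expressions,
# e.g. XOR as (a OR b) AND NOT (a AND b).
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```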

Due to the shared fundamental principles in the biological and artificial intelligence fields, early “methods being developed were often proposed as contributions to theories about human mental processes (thus)… research in cognitive psychology and research in artificial intelligence became highly intertwined” (Nilsson 2010).

Officially, however, the term ‘Artificial Intelligence’ was not coined until 1955, when McCarthy and ten fellow academics proposed dedicating a summer to a collaborative effort to produce tangible evidence of Artificial Intelligence.

Building upon the proven logic of the Turing Machine, McCarthy et al (1955) reasoned that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”; furthermore, they contended that “a truly intelligent machine will carry out activities which may best be described as self-improvement”. These initial observations would come to represent an ongoing, arguably misguided, optimism about future advancements in Artificial Intelligence (Kassan 2006).

However, as Marr reflected in 1976, McCarthy et al’s (1955) vision of Artificial Intelligence was far from being realised; he noted that studies thus far were either “too simple to be interesting, or very complex, yet perform too poorly to be taken seriously”.

What is artificial intelligence?

Approaches

Computationalism

As McDermott (2007) notes, Computationalism is “the theory that the human brain is essentially a computer” and represents the predominant approach in cognitive modelling; the resulting model is “analogous to the synaptic structures within the brain (with)… three classes of nodes: input nodes, hidden nodes and output nodes” (Hutchinson 2007). This is the underlying theory behind McCulloch and Pitts’ (1943) paper.
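As a rough illustration of that node structure (a sketch of my own rather than any model from the cited authors), a tiny feedforward pass might look like the following, with values flowing from input nodes through hidden nodes to output nodes; the weights here are random stand-ins, not a trained model.

```python
# A minimal feedforward pass: input nodes -> hidden nodes -> output nodes.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(values, weights):
    """Each node sums its weighted inputs and passes the result through sigmoid."""
    return [sigmoid(sum(v * w for v, w in zip(values, node_weights)))
            for node_weights in weights]

random.seed(0)
n_in, n_hidden, n_out = 3, 4, 2
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out    = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

inputs = [0.5, 0.1, 0.9]                  # input nodes
hidden = layer(inputs, w_hidden)          # hidden nodes
output = layer(hidden, w_out)             # output nodes
print(output)
```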

Connectionism

The “more recent rival approach is ‘connectionism’ . . . the hypothesis that cognition is a dynamic pattern of connections and activations in a ‘neural net’ ” (Harnad, 1993: 12); Smolensky observed in 1987 that connectionism was regarded as a new approach, and at that time it appeared unclear whether the approach would succeed.

Connectionism “models behavioural or mental phenomena, as emergent processes of interconnected networks of simple units” and can therefore be described as low-level modelling that represents the actual neurological physical matter; in contrast, computationalists argue that the mind is “a discrete-state device that stores symbolic representations, manipulated by syntactic rules” (Garrido 2010: 40). Furthermore, the two models fundamentally disagree on how learning takes place.
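The practical difference can be caricatured in code. In the toy sketch below (my own assumption-laden illustration, not drawn from Garrido or Harnad), a computationalist-style system applies an explicit hand-written rule, while a connectionist unit starts with no rule and lets one emerge by adjusting connection weights from examples, here via the classic perceptron learning rule.

```python
# Toy contrast: an explicit symbolic rule versus behaviour that emerges
# from weight adjustment over examples (perceptron learning of OR).

def symbolic_or(a, b):
    # the rule is written down explicitly, in advance
    return 1 if (a == 1 or b == 1) else 0

examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                                  # repeated exposure to the examples
    for inputs, target in examples:
        prediction = int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)
        error = target - prediction
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

for inputs, target in examples:
    learned = int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)
    print(inputs, "symbolic:", symbolic_or(*inputs), "learned:", learned)
```

After a few passes over the examples the learned weights reproduce the same behaviour that the symbolic rule states outright, which is the sense in which connectionist behaviour is said to be ‘emergent’.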

Here Deep Blue beats a human opponent at chess, marking a milestone for AI (1997); see the game in action in the video below.

Deep Blue beats G. Kasparov in 1997

Milestones

Even though Artificial Intelligence in the computing sense is not yet 60 years old, the achievements and milestones within its history are vast; for the purpose of brevity, this paper analyses AI advancements via the contrasting fields of medicine and warfare.

In the medical world, doctors were captivated very early on by the potential of Artificial Intelligence and its possible implications for medicine (such as Ledley and Lusted, 1959); amongst its applications, it has been used to tackle the difficult problem of skin cancer detection (Christensen et al 2004), to provide hypertension management advice (Nilsson 2010), and to aid the prognosis of conditions that were previously subject to inadequate methods, such as thyroid ophthalmopathy (Dazzi et al 2002).

In contrast, Artificial Intelligence is increasingly being employed in warfare: “by 2010, one-third of the aircraft in the operational… force aircraft fleet were unmanned; and by 2015, one-third of the operational ground combat vehicles… (will be) unmanned” (Nilsson 2010: 604). Whilst official figures are unavailable, estimates put the dead from unmanned aerial vehicle attacks at between 1,932 and 3,176 since 2004, of which 18-23% are thought to be civilian (New America Foundation 2012).

Whilst it is debatable whether remotely controlled aircraft qualify as Artificial Intelligence, it is notable that the US Army is funding a project to equip robot soldiers with a conscience, giving them the ability to make ethical decisions (Sharkey 2007). The ethical considerations of such a move are grave, and whilst the debate over Artificial Intelligence’s possible contributions to society is grounded more in the philosophical realm, it is certainly an important point to highlight.

Not all are optimistic about the future of artificial intelligence. Fierce critic Professor Stephen Hawking recently stated that “The development of full artificial intelligence could spell the end of the human race”.

Conclusion

Artificial intelligence is more than merely a facet within the realm of computing; indeed, thousands of years before computers were even imagined, the draw of Artificial Intelligence captured imaginations and inspired human endeavours. Furthermore, neuroscience and artificial intelligence have become inextricably linked, sharing many of their foundational principles and characteristics.

When considering what computing can achieve beyond the office, we recognise that AI has the ability to achieve greatness in the form of saving lives, as well as the ability to take life away; ultimately, however, this merely reflects human nature.

This humanistic issue is epitomised by Lady Lovelace’s famous observation that “Machines can only do what we tell them to do” (McCorduck et al. 1977: 952); as such, the greatness or destruction that technology achieves will be entirely attributable to humankind.

Artificial intelligence and the future

What do you think of Artificial Intelligence? Does it hold an exciting future and plenty of promise for all that it can help us achieve? Or, like Professor Stephen Hawking, do you believe that AI could spell the end of humankind?
