
What really is a Computer?

Updated on April 17, 2015

Introduction

Computer Definition

Blundell et al. (2007: p.2) define a computer as “consist(ing) of a set of electronic and electromechanical components able to accept… input, process this input… by means of a set of instructions, and produce… output”, one that is characterised by “speed, reliability and storage capability” (Capron and Johnson 2002: p.6).

Figure 1: The world’s first human computers.

Historical overview of computing power: A comparison

Grier (2001) notes that before computers were machines, they were in fact humans; Figure 1 pictures NASA’s ‘computers’ in 1949, who undertook the calculations of high-speed flight.

The pivotal transition from human to machine computing materialised during World War II, with the advent of the ENIAC, the world’s first electronic digital computer; critically, whilst a human “comput(ed) a 60 second (firing) trajectory in about 20 hours… the ENIAC required only 30 seconds” (Moyes 1996).

What is a computer: A straightforward technical answer

The ENIAC operated at “5,000 machine cycles per second” (Mitchell 2003); by contrast, its contemporary counterpart, ‘K’, has achieved “8.162 petaflops” (8,162,000,000,000,000 floating-point operations per second) (Fujitsu 2011).

Historical overview of computing power: A brief explanation of computer generations


The ENIAC and ‘K’ represent the first and fifth (current) generations of computers respectively (although the numbering of the current generation, and the year spans, are a matter of academic debate [Gaines 1984]).

The First Generation (1946 to 1955)

First-generation computers used vacuum tubes for their circuitry, magnetic tape for memory, and punched cards for input and output.

The Second Generation (1955 to 1965)

Vacuum tubes were replaced with transistors.

The Third Generation (1965 to 1975)

Integrated circuits (a complete electrical circuit whose components (transistors, capacitors, etc.) are fabricated onto a small "chip" made of silicon [Frost 2012]) came into use.

The Fourth Generation (1975 to 1989)

Fourth generation computers are characterised by the advent of the microprocessor (a single Integrated Circuit chip that contains an entire computer processor [Microsoft® Encarta® Online 2000]).

The Fifth Generation (1989 to present)

There are various schools of thought on fifth-generation computers: whilst some argue that they represent a change in computer use (e.g. the Internet, mobile computing), others argue that they represent the field of artificial intelligence research (Gaines 1984).


Historical overview of computing power: A 1965 Prediction

The vast gulf between the ENIAC’s power and K’s power mirrors the increase in general-purpose processing power: an increase that has been exponential, roughly doubling every two years, an observation often referred to as Moore’s Law.


In actuality, the term is a misnomer; as Blundell et al. (2007: p.62) emphasise, “Moore’s law is not a law, it is simply a prediction… therefore… a more appropriate name would be Moore’s model”. Moreover, Moore initially predicted only that, between 1965 and 1975, the trend of “device yields” (the transistor count per component [CPU]) doubling roughly every 18 months looked set to continue (Moore 1965: p.2); it was only in a later paper that this rate was revised to “approximate(ly) a doubling every two years” (Moore 1975: p.1), as shown in Figures 2 and 3 respectively.
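To put that rate in perspective, here is a minimal back-of-the-envelope sketch (in Java, purely illustrative; the class name is an assumption, and the ENIAC’s machine cycles and K’s floating-point operations are not strictly comparable units) of how many two-year doublings separate the two machines’ quoted figures:

public class MooreSketch {
    public static void main(String[] args) {
        double eniac = 5_000d;      // machine cycles per second (Mitchell 2003)
        double k = 8.162e15;        // floating-point operations per second (Fujitsu 2011)
        double factor = k / eniac;                          // roughly 1.6 x 10^12
        double doublings = Math.log(factor) / Math.log(2);  // roughly 40.6 doublings
        System.out.printf("Growth factor: %.3e%n", factor);
        System.out.printf("Doublings: %.1f (about %.0f years at one doubling per two years)%n",
                doublings, doublings * 2);
    }
}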

The ENIAC: the world’s first electronic digital computer.

Multi-Core CPUs

The advent of the microprocessor meant that a computer such as the ENIAC “could now fit in the palm of the hand” (Babatunde and Mejabi 2007). However, as the capabilities of processors increased, so did expectations of performance, and as technology approached its limits in terms of transistor count per chip, sustaining ‘Moore’s Law’ became progressively more challenging; a solution has been found in the form of multi-core processors, which are effectively multiple processors on a single chip (Lee 2006).

Figure 2: Moore’s original transistor count per component increase rate (Source: Moore, 1965: p. 3).

Multi-Core Programming: An Analysis

Programming multi-core processors involves programming each ‘thread’ (“a flow of execution of one set of program statements” [Farrell 2008: p. 565]) to run in parallel, a process known as multi-threading.
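As a minimal sketch of that definition (in Java; the class and thread names are illustrative assumptions, not drawn from the cited sources), the two threads below each carry their own ‘flow of execution’ through the same set of program statements:

public class TwoThreads {
    public static void main(String[] args) throws InterruptedException {
        Runnable statements = () -> {
            String name = Thread.currentThread().getName();
            for (int i = 0; i < 3; i++) {
                System.out.println(name + " executing statement " + i);
            }
        };
        Thread a = new Thread(statements, "thread-A");
        Thread b = new Thread(statements, "thread-B");
        a.start();   // each start() begins an independent flow of execution
        b.start();   // on a multi-core processor the two flows can run in parallel
        a.join();    // wait for both flows to finish
        b.join();
    }
}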

Multi-core processors addressed the recognised problem that a processor wasted time on a single task while waiting for certain events to complete: they made it possible for one thread to run while another was waiting for something to happen (Farrell 2008), providing the advantages of economy as well as increased throughput and reliability (Blundell et al. 2007).

Additionally, threads had the advantage that existing programming “languages require(d) little or no syntactic changes to support threads” (Lee 2006: p.1); however, “making use of parallel processing facilities is difficult – some problems do not lend themselves to parallel computation. Furthermore, equitably dividing the tasks between processors is difficult” (Blundell et al. 2007: p. 171).
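The sketch below gives a feel for that division problem (again in Java; the array size, the equal split and the names are illustrative assumptions): summing an array by handing each core an equal slice only works neatly because every element costs the same to process.

public class DividedSum {
    public static void main(String[] args) throws InterruptedException {
        long[] data = new long[1_000_000];
        java.util.Arrays.fill(data, 1L);

        int cores = Runtime.getRuntime().availableProcessors();
        long[] partial = new long[cores];
        Thread[] workers = new Thread[cores];
        int chunk = data.length / cores;   // a naive equal split; remainders and uneven
                                           // per-element cost are what make real division hard
        for (int c = 0; c < cores; c++) {
            final int idx = c;
            final int start = c * chunk;
            final int end = (c == cores - 1) ? data.length : start + chunk;
            workers[c] = new Thread(() -> {
                long sum = 0;
                for (int i = start; i < end; i++) sum += data[i];
                partial[idx] = sum;        // each worker writes only its own slot
            });
            workers[c].start();
        }
        long total = 0;
        for (int c = 0; c < cores; c++) {
            workers[c].join();             // wait for each slice, then combine
            total += partial[c];
        }
        System.out.println("Sum = " + total);   // expected: 1000000
    }
}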

Furthermore, Lee (2006: p.1) contends that parallel programming “discard(s) the most essential and appealing properties of sequential computation: understandability, predictability, and determinism”. Two common approaches to this problem have been either to design a new parallel programming language or to develop a parallelising compiler; however, neither has achieved significant success (Campbell et al. 2010).

Consequently, proponents of various development environments, perhaps unsurprisingly, promote their wares as a solution, such as the Microsoft .NET Framework (Campbell et al. 2010) and Eclipse PTP (DeBardeleben and Watson 2006).

Despite the difficulties of programming multi-core processors, much documented research offers purported solutions. For example, branch predictor error duplication (branch prediction being the prediction of which path a conditional statement will take) has proposed solutions based on the use of a Graphics Processing Unit (Jiang et al. 2012) and on techniques exploiting the leading thread’s branch outcome to predict that of the trailing thread (Hyman et al. 2011); achieving capable and scalable middleware (mediating software that communicates between application software and the network [FOLDOC 2010]) has proposed solutions based on a parallel Java programming model (Al-Jaroodi et al. 2003, Taboada et al. 2010, Ramos et al. 2012); and program degradation caused by excessive threads has solutions based around limiting the number of ‘runnable’ threads (Intel 2008).
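A hedged sketch of that last remedy, using a fixed-size Java thread pool so that queued tasks wait rather than oversubscribing the cores; the pool size, task count and names are assumptions for illustration, not Intel’s published technique:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);   // cap on concurrently running threads

        for (int t = 0; t < 100; t++) {        // far more tasks than threads
            final int id = t;
            pool.submit(() -> {
                // only `cores` of these task bodies can be running at any instant
                System.out.println("task " + id + " on " + Thread.currentThread().getName());
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}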

Supercomputers: The power of computing today

Figure 3: Moore’s 1975 transistor count per component increase rate (Source: Moore, 1975: p. 3).

Multi-core Processing Achievements

Despite such criticism, the progress in computing due to multi-core processing is arguably undeniable. Gorder (2007) observed the advancement in gaming graphics and performance brought by the PlayStation 3’s ‘Cell’ chip, which utilised nine cores; this particular chip would prove especially significant, as in 2008 the IBM supercomputer ‘Roadrunner’ became the first to achieve “1.026 quadrillion calculations per second” (CNET 2008), far exceeding the prediction of processing power “reach(ing) petaflop (one quadrillion calculations per second)… by the end of the decade” (Gorder 2007: p.5). ‘Roadrunner’ is trusted with the job of simulating how nuclear materials age, to predict whether the USA’s ageing arsenal of nuclear weapons remains safe and reliable.

Currently, the record number of cores per chip stands with a field-programmable gate array chip “which has the functionality of 1000 cores” when programmed with specific methods (ZDNet 2010); however, others argue that the future of processing power lies in continuing increases in transistor count through molecular electronics and nanotechnology (Jurvetson et al. 2004).

Conclusion

To conclude, a computer is defined not merely by its specifications and components, but also by what it has enabled, and will enable, mankind to achieve; the harnessing of multi-core technology, although not without its problems, can be directly credited with the advancement in computing power and applications.

