
What is Abstraction and what is it good for?

Updated on April 17, 2015
Figure 1: Stages of Pixelation (Source: Zhang 2011).

Introduction

Defining Abstraction

Philosophically, abstraction can be described as the thought process whereby ideas are detached from concrete objects (Langer 1953). This separation of tangible object from thought is a fundamental principle of abstraction, allowing for generalisations that conceal complexity in order to capture the commonality between instances (FOLDOC 2012).

Figures 1 to 4 demonstrate the principle of 'levels' of abstraction: the higher the level of abstraction, the further removed the representation is from the original image.

Figure 2: Dora Maar Photograph (Source: Bidermanas 1946)
Figure 3: 'Dora Maar with Green Fingernails' (Source: Picasso 1936)
Figure 4: 'Portrait of Dora Maar' (Source: Picasso 1937)

Tversky (2010) observes that the process of creating abstract representations, in whatever form, inevitably results in a certain amount of ambiguity. Hofstadter's (1979: p. 329) recognition of this ambiguity is illustrated in Gödel, Escher, Bach, where the move from abstract to concrete is demonstrated as moving from "a publication" to "my copy of the May 18 edition of The San Francisco Chronicle as it was when I first picked it up (as contrasted with my copy as it was a few days later: in my fireplace, burning)". This novel representation not only illustrates the ambiguity that characterises abstraction, but also succinctly defines abstraction's purpose: the separating out of an object's relevant features, whilst others are disregarded (Jesseph 1993).

Abstraction within Computer Science

The nature of computing in all respects lends itself to abstraction, and software exemplifies this notion: it is "structured in architectures… enable(ing) humans to create enormous systems by concentration on a few issues at a time" (Mujadi 2010). Such architectures are argued to be both inevitable and complex, a notion encapsulated in Greenspun's so-called 'Tenth Rule': that any sufficiently complicated C or Fortran program contains an ad hoc, informally specified implementation of half of Common Lisp. Greenspun's advocacy of the highly abstract programming language Lisp reflects the contention that "languages fall along a continuum of abstractness, from the most powerful all the way down to machine languages" (Graham 2001).

Abstraction therefore fundamentally underpins programming and forms its conceptual and academic foundations (Shackelford and LeBlanc 1997). In exemplification of this, C# abstract classes form the basis of concrete classes: the process abstracts the respective objects' similarities to form a new abstraction, reflecting the concept of inheritance. Complexity within programs is managed by factoring details in and out, as demonstrated by figure 5.
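As a minimal sketch of this idea (the class names below are hypothetical and are not taken from figure 5), a C# abstract class can capture the commonality between objects while its concrete subclasses factor the details back in:

using System;

// Abstract class: captures the commonality between shapes while
// concealing each shape's specific complexity.
public abstract class Shape
{
    // Every concrete shape must supply its own area calculation.
    public abstract double Area();

    // Shared behaviour, written once against the abstraction.
    public void Describe() =>
        Console.WriteLine($"{GetType().Name} with area {Area():F2}");
}

// Concrete classes inherit the abstraction and factor the details back in.
public class Circle : Shape
{
    private readonly double _radius;
    public Circle(double radius) => _radius = radius;
    public override double Area() => Math.PI * _radius * _radius;
}

public class Rectangle : Shape
{
    private readonly double _width, _height;
    public Rectangle(double width, double height) { _width = width; _height = height; }
    public override double Area() => _width * _height;
}

public static class Program
{
    public static void Main()
    {
        // Callers work purely at the level of the abstraction.
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        foreach (var shape in shapes) shape.Describe();
    }
}

Note that Shape itself can never be instantiated; it exists only to gather what its subclasses share, which is precisely the inheritance relationship depicted in figure 5.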

Programming Abstractions (Lecture)

Figure 5: Abstract class with consequent concrete classes (Source: Stackoverflow 2010).
Figure 6: Random Network (Source: Huang, Sun and Lin 2005).

Abstractions of computer networks

Random Networks

The abstraction of networks has enabled misleading and groundbreaking ideas in equal measure. Erdős and Rényi (1959) established a mathematical model, based upon their work in communication networks, in which they argued that systems could be modelled by connecting their nodes with randomly placed links. Such a so-called 'Random' Network (figure 6) is characterised by the majority of nodes having approximately the same number of links, with very few nodes having markedly more or fewer links than the average, as illustrated by the distribution in figure 7 (Barabási and Bonabeau 2003).
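As a minimal sketch of the model (the node count and link probability below are illustrative assumptions, not values from the original paper), an Erdős–Rényi graph links each pair of nodes with a fixed probability, and the resulting degrees cluster tightly around the mean:

using System;
using System.Linq;

public static class RandomNetworkSketch
{
    public static void Main()
    {
        const int n = 1000;     // number of nodes (illustrative)
        const double p = 0.01;  // probability that any pair of nodes is linked
        var rng = new Random(42);
        var degree = new int[n];

        // Erdős–Rényi G(n, p): every pair of nodes is linked independently
        // with probability p, i.e. the links are placed at random.
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (rng.NextDouble() < p)
                {
                    degree[i]++;
                    degree[j]++;
                }

        // Most nodes should end up close to the expected degree
        // (n - 1) * p, roughly 10 here; extremes are rare.
        Console.WriteLine($"mean degree: {degree.Average():F2}");
        Console.WriteLine($"min degree: {degree.Min()}, max degree: {degree.Max()}");
    }
}

With these assumed values the degrees should cluster near 10, giving the peaked, bell-shaped distribution shown in figure 7.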


Figure 7: Random Network Node Link Distribution (Source: Jost and Banerjee 2008).

The elegance and simplicity of Erdős and Rényi's model (1959) established random graph theory as a focus within mathematical academia (Barabási and Bonabeau 2003), and its effectiveness in extracting the essential behaviour of various networks would go on to account for the general academic understanding of the internet and World Wide Web (Chung and Lu 2006). It would take 40 years to determine that the model was not applicable to the World Wide Web, which was instead found to follow the form of a so-called 'scale-free' network (Barabási and Albert 1999).

Figure 8: Scale Free Network (Source: Huang, Sun and Lin 2005).

Scale Free Networks

In 1999, Barabási and Albert's study of the internet found that "(i) networks expand continuously by… new vertices, and (ii) new vertices attach preferentially to (well connected) sites"; in contrast to Random Networks, it became "increasingly recognized that the topology and evolution of real networks (were) governed by robust organizing principles" (Albert and Barabási 2002).
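These two mechanisms, growth and preferential attachment, can be sketched directly (the parameter values below are illustrative assumptions of my own, not Barabási and Albert's): the network grows one node at a time, and each new link's target is drawn with probability proportional to its current degree, so well-connected nodes keep attracting links:

using System;
using System.Collections.Generic;
using System.Linq;

public static class ScaleFreeSketch
{
    public static void Main()
    {
        const int n = 10000;  // final number of nodes (illustrative)
        const int m = 2;      // links added by each new node (illustrative)
        var rng = new Random(42);
        var degree = new int[n];

        // One entry per link end, so each node appears in proportion to its
        // degree; sampling uniformly from this list implements preferential
        // attachment.
        var endpoints = new List<int>();

        // Seed the network with a single connected pair.
        degree[0] = degree[1] = 1;
        endpoints.Add(0);
        endpoints.Add(1);

        // Growth: every new node attaches m links to existing nodes.
        for (int v = 2; v < n; v++)
        {
            int snapshot = endpoints.Count;  // exclude v's own fresh link ends
            for (int k = 0; k < m; k++)
            {
                int target = endpoints[rng.Next(snapshot)];  // degree-biased pick
                degree[v]++;
                degree[target]++;
                endpoints.Add(v);
                endpoints.Add(target);
            }
        }

        // A handful of hubs should accumulate most of the links, while the
        // majority of nodes remain poorly connected.
        Console.WriteLine($"max degree: {degree.Max()}");
        Console.WriteLine($"nodes with fewer than 4 links: {degree.Count(d => d < 4) / (double)n:P1}");
    }
}

Under these assumed values, most nodes should finish with only the handful of links they started with while a few hubs gather hundreds, which is exactly the heavy-tailed pattern described below.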

They found that the internet's structure was essentially characterised by a few well-connected nodes, as illustrated by figure 8: the majority of nodes (more than 80 percent) have fewer than 4 links, while a small minority (fewer than 0.01 percent) have more than 1,000 (Barabási and Bonabeau 2003).

Gallagher and Appenzeller's (1999) paper on complex networks made observations that indirectly clarified why the Random Network model had been applied erroneously, despite being published some months before Barabási and Albert's study. Notably, they argued that when the reductionism ("analyzing and describing complex phenomena in terms of phenomena that… represent a simpler/more fundamental level" [Oxford 2012]) of problems within one field of science is applied to a different field, the result is an oversimplification of the problem at hand.

In exemplification of this, the Erdős and Rényi model (which has its basis in mathematics) took no account of social factors; the so-called 'preferential attachment' of network nodes was therefore overlooked.

The argument that network modelling should move away from reductionist approaches towards an understanding of a system as a whole (Albert and Barabási 2002) has seemingly been realised, with Newman (2003: p. 2) noting that "recent years… have witnessed a substantial new movement in network research, with the focus shifting away from the analysis of small single (models)… to consideration of large-scale statistical properties of graphs".

Abstractions of other networks

The scale-free network model has been found to describe systems ranging from "a cell's metabolic system to actors in Hollywood"; as has been succinctly observed, this has "important implications… (for areas) from drug development to Internet security" (Barabási and Bonabeau 2003: p. 1).

Notably, however, "many simulations… in… neuroscience (follow)… networks of the Erdős–Rényi type… despite it being recognized… that… brain networks are more complex in their connectivity and… exhibit… 'scale-free'… properties" (Prettejohn et al. 2011: p. 33); this is compounded by the fact that biological networks in general have been acknowledged as far from random (Newman 2003).

Thus it seems that cross-disciplinary sciences continue to suffer, at least to some extent, from the oversimplification that Gallagher and Appenzeller (1999) observed. That said, the rise of the statistical approach to networks has benefited other disciplines; a notable example is network abstraction within the social sciences, where it is used to observe social networks (Newman 2003).

Computer Science Concept: Abstraction

Conclusion

Abstraction fundamentally underpins the nature of computing, enabling humans to simplify programming tasks and to create enormous software systems.

The abstraction of networks has enabled groundbreaking advances in the observation of technological networks and promises much in the way of application to other fields; notably, however, these advances have been marred in equal measure by the application of incorrect models of abstraction.

With the notable shift away from reductionism within the abstraction of networks comes an inevitable shift in the level of abstractness; although the power of abstraction lies in the simplification of problems, history has taught us that caution should be exercised in its employment.
