The Media Zeitgeist: The Masses as Cultural Hostages and Consumers of Converging Technological Gizmos, Zines and Memes
Digi-Reality And Environments In Our Present Technological Societies
Cultural Dysfunction As Caused By Technique Enhancement
The media today has power far beyond our common understanding of the spirit of the times, and it wields this power and influence by means of Technique. Whether we are talking about the printing press, radio, television, the Internet or emerging technologies, the media's interests have always been profit and the shaping of men's attitudes.
These profits drove the development and facilitated the proliferation of nascent technologies. By craftily manipulating this technique of orchestrating and redirecting men's attitudes and behaviors, these new technologies were presented as the furtherance of democracy, the modernizing of the way men communicated with their world, and the enhancement of their appreciation of these new gadgets.
Jacques Ellul wryly observed: "Technique, as the universal and autonomous technical fact, is revealed as the Technological Society itself in which man is but a single tightly integrated and articulated component. It is a description of the way in which an autonomous technology is in process of taking over the traditional values of every society without exception, subverting and suppressing these values to produce at last a monolithic world culture in which all non-technological difference and variety is mere appearance".
Ellul says that 'technique has become indifferent to all traditional human ends and values by becoming an end-in-itself'. It seems that technique rules, controls and shapes men's attitudes and behavior.
It's easy to see that we are living in precipitous times. The media today is in the hands of the Corporate few, suited to the shareholders and advertisers. Humans now possess a greater ability to alter their destiny than they did before; thus it is the duty of the democrat to regulate this technique and power, and to work so that social decision-making and power may be made as enlightened and egalitarian as possible.
Jacques Ellul states: "Only the naïve can really believe that the world-wide movement towards centralism results from the machinations of evil statesmen. The intellectual discipline of economics itself becomes technicized. Politics in turn becomes an arena for contention among rival techniques. The technician sees the nation quite differently from the political man: to the technician, the nation is no more than another sphere in which to apply the instruments he has developed.
"To him, the state is not the expression of the will of the people, nor a divine creation nor a creature of class conflict. It is an enterprise providing services that must be made to function efficiently. He judges states in terms of their capacity to utilize technique effectively, not in terms of their relative justice. Political doctrine revolves around what is useful rather than what is good.
"Purposes drop out of sight and efficiency becomes the central concern. As the political form best suited to the massive and unprincipled use of technique, dictatorship gains in power. And this in turn narrows the range of choice for the democracies: either they too use some version of effective technique-centralized control and propaganda-or they will fall behind. Public opinion provides no control because it too is largely orientated toward "performance" and technique is regarded as the prime instrument of performance, whether in the economy or in politics, in arts or in sports."
Manipulation of Mass Consciousness
As Ellul notes in the excerpt above, technique, or the technicized environment, be it the state or the spheres of communication and media operations, has no regard for the 'will of the people' nor for justice. It is interested only in efficiency and effectiveness, to the detriment of public opinion, which is left out of the equation by both the politician and the technician. Either the people fall within the matrix of 'propaganda' or 'centralized control' as set forth by the technician and the politicians, or they will be left behind.
This affects the public view of culture and its role with regard to the state and the technician. The purpose of uniting different races and cultures is at times sacrificed for inflammatory and rabidly passionate ideas of divide and conquer. The public, viewer or cyber-user, in this case, is left out if they attempt multiculturalism, unity of races, or harmony amongst the races.
To understand how the media monopoly became concentrated in the hands of the few, one needs to understand the geo-politics of information. Through slavery and colonialism, the people who decided what's news were the very people who had the power to disseminate and control what is news.
Douglas Rushkoff encapsulates this perspective in this way: "We live in an age when the value of data, images, and ideologies has surpassed that of material acquisitions and physical territories. Gone are the days when a person's social stature could be measured by the distance he had to walk to see smoke from his neighbor's camp.
"We've finally reached the limits of our continental land masses; we've viewed the earth from space over national broadcast television. The illusion of boundless territorial frontiers has been destroyed forever. There's simply no room, nothing left to colonize". Technique has narrowed the gap between underdevelopment (backwardness) and modernization.
Efficiency has replaced the gradual and slow evolution of technology and its application and consumption by the public. The public has been effectively pacified by automation fine-tuned to technique and efficiency, so that man does not understand what the rule of technique is doing to him and to his world, leaving him beset by anxiety and feelings of insecurity.
The spirit of the times in the media today was long ago put on re-set when the television children of the sixties were exposed to the techniques of psychology, conditioning, sociology and marketing. When these fifties kids grew up, they developed and produced programming themselves, being products of the media of their day.
This included techniques of 'thought control, pattern recognition and neurolinguistic programming', used 'to create television that changes the way we view reality and thus reality itself' (Rushkoff). This feat was achieved through smart packaging.
Rushkoff summarily posits thus:
"The messages in our media come to us packaged as Trojan horses. They enter our homes in one form, but behave in a very different way than we expect once they are inside. This is not so much a conspiracy against the viewing public as it is a method for getting the mainstream media to unwittingly promote countercultural agendas that can actually empower the individuals who are exposed to them.
"The people who run network television or popular magazines, for example, are understandably unwilling to run stories or images that directly criticize the operating principles of the society that their sponsors are seeking to maintain. Clever young media strategists with new, usually threatening ideas need to invent new unthreatening forms that are capable of safely housing these dangerous concepts until they have been successfully delivered to the American public as part of our daily diet of mainstream media.
"The more harmless or inane the forum, the more unsuspecting the audience". In this manner, then, the free flow of ideas is controlled, and language distorted, as to how we deal with, address and understand our reality and politics.
The viewing public has been lulled into a false sense of entertainment, information and the data sphere through technique and effective efficiency. We have been hardwired together through devices like cable television, telephone systems and personal computers; we have been rendered remote conduits of Twitter, cell phones and other new and emerging technologies. The media and all its concomitants have become extensions of ourselves, as the nervous system extends itself throughout our torsos.
Rushkoff observes: "If we are to understand the data sphere as an extension of a planetary ecosystem or even just the breeding ground for new ideas in our culture, then we must come to terms with the fact that media events provoking real social change are more than simple Trojan horses. They are media viruses. This term is not used as a metaphor. These media events are not like viruses. They are viruses…" Our present consciousness has been designed and controlled by technique embedded in present-day emerging technologies and gizmos.
The spirit of our times, in terms of present-day media, is packaged in several myths and half-truths, zines and memes. The common lie is that we are giving the people what they want, alongside the myth that these new technologies will set us free. Yes, people are given what they want at affordable prices, but they are enslaved to the media's programs and products. This is an era of technological dependency wherein all have been made to view everything with a herd mentality.
The First Amendment, freedom of speech and democracy have been extended to cover corporate and commercial activity exclusively. McChesney says that "when commercialism penetrates everything, and when noncommercial public life diminishes or merges with commercialism, the capacity to distinguish between the two is compromised". With the Corporate Media and their think-tank ideologues aggressively advertising, they became the leading definers of the memes, zines and behavior of this new-age media.
In the age of Obama we see now, more ominously than ever, radio, television and the Internet being used by some racist organizations to wreck all the public good offered by the new president; in the wake of America's economic debacle, his solutions toward reversing the meltdown were met with vicious, vitriolic and rabid racism.
The convergence of these nascent technologies has been used and coalesced negatively against the Health Care Proposal bandied around by the present Obama administration. Insurance companies and pharmaceuticals all jumped into the fray, using every available issue to create discord and civil disorder in the form of Town Hall gatherings.
During the Presidential Primaries we saw the revamping of the whole media apparatus: phone banks, shortwave radio and right-wing web sites proliferated rapidly. Of late, in mid-to-late 2009, we saw the media used to create a Tea Party counterculture, hosted by right-wing radio, television and Internet groupies, down to the motley crew of bussed-in hecklers and sign holders (most signs racist and unbecoming of civil order and society), and racism boiled over.
Some in TV land and on Internet blogs pointed out that this is due to the fact that Obama is President and also African American; we are seeing gun-toting gunslingers, Deathers, Birthers and the whole bit display vociferous race baiting and hate. This was achieved through emerging, converging and old technologies in the media ecology, with corporate media moguls and their minions dictating discourse and dominant ideas, leaving an unhappy population dominated by technique. This can be gleaned from the social media environs.
The technique developed by mass media in dealing with racial minorities and others outside the mainstream involved symbols and stereotypes. The mass media, because they dealt with a wide audience, came to rely on symbols and stereotypes as shorthand ways of communicating through headlines, characters and pictures.
Dramatic portrayals such as those of rich bankers, heroic cowboys, or old spinsters were used so audiences would understand the character the first time it appeared on the screen or in the short story. At the same time, newspapers used symbols such as "right wing," "left wing," and "moderate" in headlines to characterize people or parties in different places on the political spectrum.
These symbols were a useful shorthand for the mass media, because they allowed the entertainment and news media to capsulize much more complex personalities and issues in a shortened character or term.
Thus, when the audience at a western movie saw a man come on the screen with a white hat, they knew he was supposed to be the hero in the western. Or when the term "leftist" was used in a headline that meant that the group to which the term was applied was on the liberal extreme, bordering on socialism/communism.
(Obama is often accused of being such by the Tea "Baggers" Party.) The terms themselves were useful because they became symbols that triggered stereotypes, which Walter Lippmann long ago described as "pictures in our heads." The symbol was the term that called up a whole set of characteristics ascribed to those associated with the term in the minds of the mass audience. It was those characteristics that became the stereotype, and vice versa, in the mentality of the audience.
Media Zeitgeist, Memes and Zines
Today's media zeitgeist, memes and zines compound present-day race and social relations. Genesis P-Orridge and other meta-media activists feel that they can break the tyrannous lineage that constructed the media sphere. Genesis succinctly states: "Studying, exploring, and exposing the gaps in our postmodern, highly edited media reality, virus formulators learn that the deepest, darkest power of the media is its ability to break through the perceptions of Linear Time.
"What I'm really saying is that the enemy resides in the concept of heritage and inheritance. The medium it uses to maintain its life is the culture. It's patently obvious that time is not linear. This is the imposition of an impractical and inappropriate construct. Things are really in chaos. But the policing of this inept construct damages and cripples us psychically, emotionally, intellectually and even physically".
Understanding the media and human communications will help plant the seeds of civility and social growth. Although we have been relegated to the back in terms of political discourse and production, we have the ability to deconstruct the news media that has been formulated and packaged for us by the professionals. With the spirit of the times, we can revamp and change the media zeitgeist.
It is doubtful that anyone ever really changes culture in the sense that this term is ordinarily used. What happens is that small informal adaptations are continually being made in the day-to-day process of living. Some of them work better than others. These adaptations eventually become technicalized as improvements, and the improvements accumulate imperceptibly until they are suddenly acclaimed as "break-throughs." Steady, small improvements in airplane design have snowballed into machines undreamed of a couple of decades ago.
If a person really wants to help introduce culture change, he should find out what is happening on the informal level, pinpoint which informal adaptations seem to be the most successful in daily operations, and bring these to the level of awareness. Even this process can only accelerate change, not actually control it in the manner desired by men of action. This is because the out-of-awareness nature of the informal is where all changes start. To paraphrase Dobzhansky, 'life is due to the dynamic interaction of living substance with itself and is not the result of either chance or design.'
Culture Is Communication
In considering man's total life as communication, we see a spectrum covering a wide range of communication events. It is possible to observe complete messages of differing duration, some of them very short (less than a minute) and others covering years and years. In general, the study of culture deals with events of fairly short duration.
The study of government and political science may involve messages that take years to unfold. Sentences can be meaningless by themselves. Other signs may be much more eloquent. The significant components of a communication on the level of culture are characterized by their brevity as compared with other types of communication.
The fact that communication can be effected in so brief a time on the cultural level is often responsible for the confusion which so often occurs in cross-cultural exchanges. According to Berry: "The one cardinal error... is to assume that man is free to choose his own long-term future. He is indeed free to choose it within narrow limits. Like a snake compelled to move through a tunnel, he may move from side to side, experimenting with this way of life or that. But there is only one forward path to unending technological expansion."
According to Lorde, the future for race relations lies in teaching people about the existing realities in society and providing mechanisms by which people can learn to function well in diverse contexts. Teaching and learning are essential future undertakings that will help facilitate a smooth transition to [and through] the twenty-first century.
Peccei (1981), president of the Club of Rome, an international body active in future-related concerns, calls for the "creation of a movement of innovative learning on the broadest possible bases; promotion of the development of alternative projects for the future by mobilizing the creativity of the young; and stimulation of a fundamental renewal in our current way of thinking."
The appeal above points to the importance of our commitment to learning as society becomes more and more complex. Peccei also stresses the centrality of learning and argues that we should learn "to live in consonance with the new, fantastic, half-artificial world of our own creation."
By examining the new technology that has rapidly swept the global scene, we begin to discern that the global presence of the Internet has serious future implications for intercultural communication. For instance, it was practically impossible for social groups opposed to oppressive domestic policies of certain countries in the Southern hemisphere to communicate their concerns to groups outside of their countries.
Through the Internet, like-minded groups residing in different continents exchange views on democracy and human rights that are shaping political behavior in countries that were once isolated from world public opinion.
With such power to access national and cultural groups in different parts of the world, the potential for collaboration is heightened. So, however, is the potential for the destruction of cultural norms revered by certain societies. A major dispute over the telecommunications law recently passed by the United States government is a case in point.
The dispute surrounds the transmission of child pornography through the world wide web. Such content is offensive to some cultures. Even in the United States where freedom of expression is guaranteed, several groups fight consistently against the practice of using the world wide web for the sale of content featuring child pornography.
Besides the issue of child pornography and other controversial content, the Internet represents one of the most effective ways people can learn quickly about other cultures, by visiting websites that specialize in providing such information. The potential of the new information and communication technologies for promoting intercultural understanding through rapid information sharing places them at the cutting edge of information packaging that could help quell international tension.
And people are becoming culturally dependent children of the technology, without any real knowledge being absorbed and understood by those who would want to know more. The speed with which information permeates all existing cultures and societies, still in its beginning stages, needs to be understood much more clearly as to its affects and effects in propelling these societies into technological cultures.
And this can only happen on a personal level, in family structures and neighborhood collectives, and universally for cultures, nations and the spheres of human civilization. For example, email exchanges between the governments of various countries can facilitate the clarification of issues that otherwise could lead to misunderstandings resulting in conflict.
We can take a few teachable moments from the election of Obama into power by interrogating how technology worked and how the media was controlled and dictated to by the masses within their social telecommunications and cultural miasma. Information sharing holds the best prospect for the future in terms of the positive impact of the new technologies on the promotion of intercultural communication and understanding.
By the same token, caution is required in determining the nature of the information that is transmitted so as not to cause unnecessary cultural damage domestically or internationally. Furthermore, as the level of complexity rises, it is becoming apparent that the future of intercultural communication rests on our willingness to learn from each other regardless of our origin, creed, race, gender, class or orientation.
McLuhan has another way of addressing the issue above: that of the misunderstanding of cultures throughout the world, and the way information is dispersed, creating damage to local and other international cultures. It is well known that technology has spread its wings worldwide, and most people are now acclimated to its arrival and usage, but not yet, in a knowledgeable manner, to its effects on them.
McLuhan writes: "Any approach to environmental problems must be sufficiently flexible and adaptable to encompass the entire environmental matrix, which is in constant flux. Effective study of the media deals not only with the content of the media but with the media themselves and the total environment within which the media function."
The reaction and self application of people to the new media zeitgeist is summed-up cogently by McLuhan when he says: "People are beginning to understand the nature of their new technology, but not yet nearly enough of them — and not nearly well enough."
"Most people, as I indicated, still cling to what I call the 'rear-view mirror' view of their world.
"This means to say that because of the invisibility of any environment during the period of its innovation, man is only consciously aware of the environment that has preceded it; in other words, an environment becomes fully visible only when it has been superseded by a new environment; thus, we are always one step behind in our view of the world. Always Looking in the rear-view mirror of our past.
"Because we are benumbed by any new technology — which in turn creates a totally new environment — we tend to make the old environment more visible; we do so by turning it into an art form and by attaching ourselves to the objects and atmosphere that characterized it, just as we've done with jazz, and we're now doing with the garbage of the mechanical environment."
Everyone is affected by the proliferation of the new technologies and techniques embedded within the media, regardless of where human beings are, their beliefs, culture and day-to-day existence. As McLuhan so expertly observed of today's terrific speed of information moving, streaming and splurging: "Early man led a complex, kaleidoscopic life precisely because the ear, unlike the eye, cannot be focused and is synesthetic rather than analytical and linear.
"...By their dependence on the spoken word for information, people were drawn together into a 'tribal' mesh; and since the spoken word is more emotionally laden than the written — conveying by intonation such rich emotions as anger, joy, sorrow, fear — 'tribal' man was more spontaneous and passionately volatile.
"Audile-tactile tribal man partook of the collective unconsciousness, lived in a magical integral world patterned by myth and ritual, its values divine and unchallenged, whereas literate or visual man creates an environment that is strongly fragmented, individualistic, explicit, logical, specialized and detached."
Man is Homogenized by Culture
McLuhan tells us that:
"Look a bit closer at both nationalism and industrialism and you'll see that both derived directly from the explosion of print technology in the 16th Century. Nationalism didn't exist in Europe until the Renaissance, when typography enabled every literate man to see his mother tongue analytically as a uniform entity.
"The printing press, by spreading mass-produced books and printed matter across Europe, turned the vernacular regional languages of the day into uniform closed systems of national languages — just another variant of what we call mass media — and gave birth to the entire concept of nationalism.
"The individual newly homogenized by print saw the nation concept as an intense and beguiling image of group destiny and status. With print, the homogeneity of money, markets and transport also became possible for the first time, thus creating economic as well as political unity and triggering all the dynamic centralizing energies of contemporary nationalism.
"By creating a speed of information movement unthinkable before printing, the Gutenberg revolution thus produced a new type of visualized centralized national entity that was gradually merged with commercial expansion until Europe was a network of states.
"By fostering continuity and competition within homogenous and contiguous territory, nationalism not only forged new nations but sealed the doom of the old corporate, noncompetitive and discontinuous mediaeval order of guilds and family structured social organization; print demanded both personal fragmentation and social uniformity, the natural expression of which was the nation-state.
"Literate nationalism's tremendous speed-up of information movement accelerated the specialist function that was nurtured by phonetic literacy and nourished by Gutenberg, and rendered obsolete such generalist encyclopedic figures as Benvenuto Cellini, the goldsmith-cum-condottiere-cum-painter-cum-sculptor-cum-writer; it was the Renaissance that destroyed Renaissance man."
In the end, McLuhan synergizes his perceptions as follows: "The electric media are the telegraph, radio, films, telephones, computers, television, the Internet, cellphones, and so forth, all of which have not only extended a single sense or function as the old mechanical media did — i.e., the wheel as an extension of the foot, clothing as an extension of the skin, the phonetic alphabet as an extension of the eye — but have enhanced and externalized our entire central nervous systems, thus transforming all aspects of our social and psychic existence.
"The use of the electronic media constitutes a break boundary between fragmented Gutenberg man and integral man, just as phonetic literacy was a break boundary between oral-'tribal' man and 'visual' man."
Visual man has been extended and replaced by media that have become social media, which in turn integrate man with machine; and the machine erases and dictates a new culture that is beholden to technology and to technical memes and zines. In the final analysis, technological technique and its gizmos have become human communications and culture.
It is important that we begin to realize, as mass consumers, that as we imbibe the new communication, writing and print cultures, we'd be better off understanding and knowing how this process came along and how and why we say it homogenizes us. Postman writes:
"As the twentieth century began, the amount of information available through words and pictures grew exponentially.
"With telegraphy and photography leading the way, a new definition of information came into being. Here was information that rejected the necessity of interconnectedness, proceeded without context, argued for instancy against historical continuity, and offered fascination in place of complexity and coherence.
"And then, with Western culture gasping for breath, the fourth stage of the information revolution occurred, broadcasting. And then the fifth, computer technology. Each of these brought with it new forms of information, unprecedented amounts of it, and increased speeds (if virtual instancy can be increased)." This gave rise to and spawned the present technological zeitgeist we are all part of and participating in, by being enabled by these new technological techniques and their constantly merging and emerging gizmos.
Postman goes on to give us a structure of the media, and we can now begin to get a picture of how and why we get sucked in wholly and completely, made part of the culture and communication systems and, as consumers, homogenized by the magnanimity, depth, breadth and extension (in the McLuhan sense) of all forms of media and communication entities, along with a new language and its meanings and effects on mass media consumers.
It is also worth noting that the proliferation of all these media and communication systems brings along with it an environmental zeitgeist infused with the memes and zines which are a by-product thereof. Postman informs us about the architectural structure of these forms as follows:
"In the United States, we have 20,000 billboards; 11,520 newspapers; 11,556 periodicals; 27,000 video outlets for renting video tapes/[DVDs]; more than 500 million radios; and more than 100 million computers.
"Ninety-eight percent of American homes have a television set; more than half of our homes have more than one. There are 40,000 new book titles published every year (300,000 worldwide), and every day in America 41 million photographs are taken. And if this is not enough, more than 60 billion pieces of junk mail (thanks to computer technology) find their way into our mail boxes every year.
"From millions of sources over the globe, through every possible channel and medium — light waves, airwaves, ticker tapes, computer banks, telephone wires, television cables, satellites, printing presses — information pours in. Behind it, in every imaginable form of storage — on paper, on video[DVD], audiotape, on discs, data tapes, film, and silicon chips. ...Information appears indiscriminately, directed at no one in particular, in enormous volume and at high speeds, and disconnected from theory, meaning, or purpose."
This has the debilitating effect of rendering all who use and depend on these gadgets and the viral world closer to ignorance and incapable of thinking for themselves. Postman observes:
"All of this has called into being a new world. I have referred to it elsewhere as a peek-a-boo world, where now this event, now that, pops into view for a moment, then vanishes again.
"It is an improbable world. It is a world in which the idea of human progress, as Bacon expressed it, has been replaced by the idea of technological progress. The aim is not to reduce ignorance, superstition, and suffering but to accommodate ourselves to the requirements of the new technologies[ultimately being held hostage to them]. We tell ourselves, of course, that such accommodations will lead to a better life, but that is only the rhetorical residue of a vanishing technocracy.
"We are a culture consuming itself with information, and many of us do not even wonder how to control the process. We proceed under the assumption that information is our friend, believing that cultures may suffer grievously from a lack of information, which of course, they do. It is only now beginning to be understood that cultures may also suffer grievously from information glut, information without meaning, information without control mechanisms."
Understanding Culture of the Internet
Hall talks about culture in its entirety as a form of communication. He sees culture as formal, informal and technical, and seeks to understand what all these mean. "Since man progresses from formal belief to informal adaptation and finally to technical analysis," writes Hall, "a theory of change is also implied in this tripartite division."
Hall notes that "little is said about mass-communication media such as the press, radio, television and the Internet, which are the instruments used to extend man's senses." His own investigation is focused instead on the ways in which man reads meaning into what other men do.
"Language is the most technical of the message systems. It is used as a model for the analysis of the others. In addition to language there are other ways in which man communicates that either reinforce or deny what he has said with words. Man learns to read different segments of a communication spectrum covering events of a fraction of a second up to events of many years.
"We must learn to understand the 'out-of-awareness' aspects of communication. We must never assume that we are fully aware of what we communicate with someone else. There exists in the world today tremendous distortions in meaning as men try to communicate with one another. The job of achieving understanding and insight into mental processes of others is much more difficult and the situation more serious than most of us care to admit."
Hall continues: "Culture hides much more than it reveals, and strangely enough, what it hides, it hides most effectively from its own participants. ...Simply learning one's own culture is an achievement of gargantuan proportions for anyone. To forestall atrophy of his intellectual powers, man can begin learning about those areas of his own culture which have been out of awareness. He can explore his own frontier.
"Once I was addressing a group of principals on the subject of culture. We were discussing the need for Americans to progress in their jobs, to get ahead, and to receive some recognition so that they would know in a tangible way that they were actually getting someplace… One of the audience said to me, "Now you are talking about something interesting, you're talking about me."
When the man in the audience learned something about himself, the study of the culture got lost in the shuffle. He did not seem to realize that a significant proportion of the material which was highly personal to him was also relevant cultural data.
The best reason for the layman to spend time studying culture is that he can learn something useful and enlightening about himself. One of the most effective ways to learn about oneself is by taking the cultures of others seriously. It forces you to pay attention to those details of life which differentiate you from them. It involves new ways of looking at, seeing, and knowing things."
The masses become cultural hostages of the new zines and memes because most of them have not yet understood the role of culture as communication, nor how that understanding would help them deal with others and with the new emerging languages and their meanings. These new cultural and linguistic meanings are emerging from the preponderance and confluence of new, culturally relevant ways of communicating brought about by our new technical and communicative environments.
Man can only escape the far-reaching effects of the new techniques, gadgets and technologies by recognizing that they are fully part of his modern and contemporary culture. Once experienced and understood, this recognition might empower all of us to manage the present and coming technological societies, which are now clearly embedded within the World Wide Web (Internet) and have become extensions of ourselves.
Cultural Entropy Of The Media Ecology
The Culture of Emergent Media Ecologies and Digital Tools
Kahn and Kellner write: "Since the blossoming of hypertext and the Internet from the early 1990s, the emergence of a utopian rhetoric of cyberdemocracy and personal liberation has accompanied the growth of the new online communities that formed the nascent World Wide Web. While the initial cyberoptimism of many ideologues and theorists of the 'virtual community' (Barlow; Gates; Kelly) now seems partisan and dated, debates continue to rage over the nature, effects, and possibilities of Internet technopolitics.
"This is about the innovative developments occurring between the Internet, other new media, and general populations, and how new World Wide Web forms are influencing and being influenced by technopolitics and culture." It is the memes and zines disseminated, proliferated and exchanged within these systems that, I argue in this Hub, affect the mass-consuming public, who ultimately become cultural hostages enmeshed within these viral memes and zines.
The masses are held as cultural consumers of converging media because of the preponderance of emerging, converging, interconnected and what Rushkoff calls "trickle-down" media. "Underground artists and writers can utilize mainstream cultural icons like Bart Simpson, the President, or Amy Fischer much more purposefully and pointedly than can their overground counterparts because they are unencumbered by the pressures of a corporate environment or mass-media censorship," writes Rushkoff.
Rushkoff further informs us thus: "There are two ways to market 'memes' without entering the mainstream and being subjected to the scrutiny of overground exposure. The first is to use alternative formatting: low-status 'trickle-down media,' including video games, comics, trading cards, and fantasy role-playing games (now, of late, the 'Wii').
"This merchandising sometimes makes use of popular iconography like Ren & Stimpy, or major political figures, but twists it for more satirical purposes or even toward activist agendas. These media can be thought of as bottom feeders in the 'data ocean'. The other marketing style is to self-publish 'memes' that are too radical for mainstream outlets in the hope that they might 'trickle up'.
"Thousands of 'zines' [self-published magazines] and independent book publishers have sprouted throughout the United States and Europe, on topics ranging from income-tax avoidance to anarchy. The proliferation of self-distributed books and 'zines' has led to an entirely new philosophy of publishing, in which sampling with attribution or even downright plagiarism is encouraged and copyrighting is scorned.
"Alternative forums of all kinds give media activists low-cost, highly resilient, and provocatively interactive viral shells for the 'memes' they wish to disseminate. The fact that these media are not taken too seriously also keeps them from appearing too threatening. Like kids' television, which is not intended entirely for children, seemingly innocuous 'zines', comics, and games are packed with deeply threatening 'memes'.
"Moreover, the formats themselves are designed to reflect a value system in which the concepts of interactivity, interconnectivity [and intra-activity], feedback, iteration, and viruses are always part of the universe" (Rushkoff).
These combine and morph into newly embedded meanings and message systems, which become the new cultural effects and affects. They show how people become handcuffed and, through convergence and divergence, transformed and conditioned by the dictates of the new 'zines' and 'memes' cruising through the viral universe, shaping the language consciousness and development of the mass-consuming culture and societies.
There are multiple factors in play in changing the consumers of the new cyber-surfing media into cultural hostages. One of these is the set of techniques embedded within the new emerging media ecologies, which bring forth new language systems and meanings/realities that affect the users. The users then end up creating a world and an existence around, within, or with these emergent media ecologies as they are experienced by people all over the Globe/Web World.
"Some claim that the Internet's role, as the primary engine driving the ecological arrangement of today's media, is simply to produce a proliferation and cyberbalkanization of 'Daily Me' news feeds and fragmented communities," writes Sunstein.
Kahn and Kellner note: "In our view, the continued growth of the Internet and emergent media ecologies ultimately have to be thought together as a complex set of digital tools for organizing novel relations of information and global-local, cultural interaction" (Kahn; Kellner). Contemporary "media ecologies" extend Marshall McLuhan's notion of media environments that constantly evolve as new media and technologies appear (McLuhan).
While people have lived in natural place-based ecologies for a long time, and have mixed elements of place with those of industrialized cultural space in modern urban ecologies, today's media ecologies relate people throughout the globe and constitute a virtual world space that is a complex amalgam of ever-shifting global and local spaces and places.
In this networked and interconnected, converging and merging world, emergent media ecologies exert and contain a variety of sociopolitical, cultural, and historical forces that interact and interface with people as they become media producers and consumers: a new reality and environment of media use and interactive communication.
If emergent media are to remain tools for human users rather than instruments of mass dehumanization, then the technopolitics of such emergent media ecologies must be retheorized from a standpoint that is both critical and reconstructive, and subject to active transformative practice (Kahn; Kellner). Knowing and understanding media ecological environments is going to be essential for the future still to come.
Embedded Techniques For Media Hostages
The Emergent Cyber/Viral Ways of Knowing and Techniques
According to Disinfopedia, "Echo Chamber" is a colloquial term used to describe a group of media outlets that tend to parrot each other's uncritical reports on the views of a single source, or that otherwise rely on unquestioning repetition of official sources. In the United States, the Republican Party uses a network of conservative foundations, coordinated by the Philanthropy Roundtable (the Koch brothers and their ilk), to support an echo chamber of think tanks, industry-friendly experts and subsidized conservative media that systematically spread its messages throughout the political and media establishment.
Typically, the message starts when conservative voices begin making an allegation, e.g., that Democratic candidates are engaged in "hate mongering." This time around it was spun by Glenn Beck, who retorted in bellicose manner and trumpeted that "Obama was racist against Whites." Such belligerent and irresponsible statements have also been made by the Tea-Baggers, along with their placards voicing their messages, 'memes' and 'zines'.
Columns start getting written on this theme, which spreads through the subsidized conservative media and eventually begins appearing in places like the local newspapers and tabloids, becoming talking points and "accepted fact" throughout the media and social-media enclaves. There are myriad techniques and ways of knowing applied to the mass-media-consuming public, just as there are different discourses about the new emerging and social media from the other media outlets noted above, each with its own applications.
We shall now look at the various ways the 'spirit of the time' (Zeitgeist) manifests itself on the Internet and in the media today, on the Web and throughout social media, with their 'memes' and 'zines', plus the effects and affects of such environments and realities.
Nathaniel Sheppard wrote on this subject in 1998 in the following manner: "A television program recently explored one of the downsides of the Internet: the surreptitious gathering and dissemination of information that is used by some unscrupulous individuals or crime rings to steal consumers' identities.
"I looked up my own name with AOL's NetFind and discovered several listings for Nathaniel Sheppard that included telephone numbers and street addresses with detailed nine-digit zip codes. Three of the listings included a middle initial I have seen on junk mail, but my parents did not give me.
"But they correctly included 'Jr.' in the name. One listed my current address, a second my former address and a third put me in Cincinnati, a city I have visited only three times. The other listings had me in Toledo, Ohio; Syracuse, NY; Berkeley, Ohio; and Denver. Investigation continues into whether these are mere coincidences.
"The Federal Trade Commission [FTC] says that advances in computer technology have made it easier than ever for anyone to get detailed personal information about others. That is a good thing when it helps law enforcement track down criminals or missing children, enables banks to prevent fraud and provides us with information to help us make intelligent decisions. But it also means we as individuals, companies, associations and even governmental bodies must safeguard that information."
Sheppard continues to inform us that: "A cottage industry of individual reference or 'look-up' services has sprung up on the Net. For a fee, they will provide detailed profiles on individuals, including spending and bill-payment patterns, involvement in civil and criminal litigation, driving and property records, social security numbers and other intrusive bits of information.
"Identity theft — the appropriation of a person's name, address, social security number or other personal information for the purposes of fraud — goes to the heart of personal privacy. Identity thieves may use consumers' identifying information to open new credit card accounts, take out loans in the victim's name, or steal funds from existing checking, savings, or investment accounts. The numbers [of such cases] seem to be exploding, and the easy availability of information electronically creates the risk that this will increase even further.
"Among those crimes are the submission of fake medical bills to private insurers and the running up of tens of thousands of dollars of debt under assumed names. A recent case brought by the Secret Service illustrates the problem. In 1997, a computer-savvy Maryland couple pled guilty to running up more than $100,000 in debt under stolen identities.
"They also admitted to routinely using Internet databases to select their victims. Congressional testimony has shown the impact of identity fraud on consumers can be substantial. Reputations and credit standing can be ruined. One woman became the target of an arrest warrant for a domestic battery crime she did not commit.
"A department store clerk whose identity had been assumed by a shoplifter spent years unsuccessfully looking for work in the retail industry. "Consumers mainly discover that they've been victimized when they apply for credit or for employment and their credit report is pulled," Medine said. By then the damage has been done" (Sheppard).
According to Sheppard: "Most people, accustomed to print and broadcast, have a high expectation of privacy, says Marc Rotenberg, director of the Electronic Privacy Information Center, a Washington-based public interest research group. "You could read the morning paper, listen to the radio, or watch TV and no one would know that you were doing any of those particular things," he says.
"This is not simply protection but the specific ability to withhold disclosure of your identity — the right to remain anonymous. Not so in cyberspace. When you go on tour on the Net, many eyes may be watching. If you visit certain areas, such as chat rooms, or post to message boards, you may wind up on some junk mailer's list and begin receiving unwanted ads for sex and other services.
"And when you visit sites, information collected from you may be passed on to online database services that in turn sell the data. In a report to Congress in June on privacy online, the FTC said protection of consumers' privacy online is scarce and efforts to promote voluntary adoption of very basic information practices had fallen short."
Sheppard informs us thus: "The commission surveyed more than 1,400 Web sites and divided these into six samples representing all US commercial Web sites. Of these, 92% collect personal information on users but only 14 percent provide notice of their information collection practices and a mere 2 percent have comprehensive privacy policies.
"Most disturbing was the commission's finding that 89 percent of the 212 children's sites surveyed collect personally identifiable information from children. The FTC said there are four information practice principles that are essential to making sure the collection and dissemination of personal information are conducted fairly: notice, choice, access and security.
"In June 1997, an association of look-up business known as the Individual Reference Services Group [IRSG] announced its intent to address Internet privacy concerns through self-regulation. The FTC said in its report to Congress that so far, those efforts have been inadequate."
Sheppard concludes with some advice: "So what can you do while the industry is trying to clean up its act? First, think twice about providing personal information or data online. Use a nickname as a screen name and don't use your full name in the personal information section of your Internet browser.
"Ask the managers of sites you visit about their policy on collecting and releasing information on users, and drop those whose policies make you uncomfortable. Fire up a Net search engine such as Yahoo!, Alta Vista or HotBot, or use AOL NetFind to see what information comes up on you. You also should use the search engines to look up 'Internet Privacy.' This will give you a list of articles and sites that provide guidance on the issue. And check out the FTC's site at www.ftc.gov.
"The FTC site will give you links to the three major credit bureaus - Equifax Inc., Experian and Trans Union Corporation - and forms you can send electronically asking them not to disclose your personal information for promotional purposes. Now that you have a little more information about road conditions on the Internet, continue to enjoy the ride. Log on or be left behind." (Sheppard).
It is important for us to note that the issues discussed above are part of the 'spirit of the time' as it relates to the use of computers and people interacting with the sites they find on the Net. The media ecology has changed drastically in the past decade, and the techniques, 'memes' and 'zines' are very quick to go viral; until users take notice, this will go on as it does today. Although the article above was written in the 1990s, it is still relevant to today's reality, wherein a lot of people are suckered into scams that they are still not wary of, even now.
Health and the Internet
The electronic media ecology is not an entirely negative place, as we learn from Sheppard's article "Surfing the Net Can Be Healthy," wherein he writes:
"My wife says that I snore and my 7-year-old son concurs. I don't recall making noise in my sleep and tend to doubt the veracity of their assertion. I went online, nonetheless, to look for information on snoring just in case I ever develop the condition. What I found was astounding. Not only was there a lot of information on snoring, and on a more serious, related illness called sleep apnea, but a plethora of information on a wide variety of well-known and obscure medical and psychological problems.
"There are numerous "message boards" (online discussion forums) in which people and medical professionals discuss illnesses and treatment options. It was like bringing an auditorium of medical experts and a well-stocked library right onto my desktop. I could probe a seemingly endless stream of health questions without the long waits and blood-pressure-raising costs such access involves in the real world.
"As with anything, though, there are caveats. Just as smut peddlers and obnoxious bulk mailers have invaded private e-mail boxes, some quacks have thrown their 2-cents worth of potentially dangerous nonsense into medical discussions."
Sheppard further informs us thus: "Science in cyberspace is no substitute for your physician, but it certainly can empower you with the knowledge to take a bigger role in the care of your body and mind. Using search engines such as HotBot (www.hotbot.com), Yahoo! (www.yahoo.com) and Alta Vista (www.altavista.digital.com), and the search term "medical" on America Online, I was able to zero in on several illnesses common among African Americans.
"HotBot turned up 1,574 responses to the search term 'high blood pressure and African-Americans' and these ranged from a simple description of hypertension and its causes to studies underway and recent developments in treatment. According to the American Heart Association, high blood pressure kills about 60,000 African-Americans each year.
"One of the many links from HotBot was to a page on the association's site (www.amhrt/hbpfactor.html) that listed factors that contribute to high blood pressure, including high salt intake, being overweight, heavy alcohol consumption, use of oral contraceptives and lack of exercise. The search also yielded a report on a study published in October 1996 in the American Journal of Public Health, by Harvard researcher Nancy Krieger, which found a correlation between discrimination and high blood pressure in African-Americans."
According to Sheppard: "Another study, by a Johns Hopkins Hospital researcher, found that an increase in potassium, either through supplements or eating more potassium-rich foods, could help lower blood pressure. Using Alta Vista, I found more than 100,000 documents after entering a search for "diabetes and African-Americans," including a link to www.diabetes.com.
"Another link was the Diabetes Texas World Wide Web site, which provided grim statistics indicating that nearly 6 percent of African-American men and 8 percent of African-American women have diabetes and that Blacks experience higher rates than other groups of three serious complications of diabetes: blindness, amputation and kidney failure. Another site (www.expressnews.net/unauth/newsfiles/diabetes/day4.htm), also accessible through Yahoo!, gives a diabetes index on African-Americans, Native Americans, Mexican-Americans and other minorities.
"It focuses on diet, such as some African-Americans' apparent penchant for drive-through hamburgers and other fat-laden foods, as a major contributor to diabetes. Specific searches for prostate and breast cancer, heart disease, glaucoma and plain-old heartburn also produced reams of material from experts."
Sheppard offers this concluding piece as follows: "For more general searches, try logging onto Yahoo! and then clicking on 'health' in its menu. This takes you to a page with links to health news reports, a listing of online chats and programs on health issues, and alternative medical therapies such as acupuncture, herbology, chiropractic, holistic, homeopathy and gemstone therapy.
"Click onto "indices" on the same page, and you are taken to a page with links to 26 medical sites. Among these are Achoo, a directory of Internet health care sites; Health A to Z, a site of health resources and topics catalogued by medical professionals; health and medical newsgroups (discussions between people with like illnesses); and Mollykat's Survivor's Resources, which deals with sexual abuse, domestic violence, ritual abuse, depression, physical and mental health and multiple personality disorders.
"Yet another jumping-off point is the Virtual Medical Clinic (www.mediconsult.com), a site that lists its objective as "to help patients make informed medical decisions in conjunction with their physician." The site, run by physician Michel Bazinet, provides information on more than 50 medical topics, such as AIDS, cirrhosis, brain tumor, Alzheimer's disease and Attention Deficit Disorder.
"It also contains a cancer support group. Or try the keyword search "medical" on America Online to get to its Medical Services Community and an impressive list of medical information links. With each main site you have a chance to either look up information on medical problems, treatments and medicines or forward questions to medical professionals who can answer them. But don't abandon common sense.
"If advice or information seems too hokey or too much like mumbo jumbo, it probably is. Always consult your physician before trying any online suggestions. By the way, I learned that in the remote possibility that I do develop a documentable snore, I can undergo allegedly painless laser surgery and make sure that my entire household gets a good night's sleep. Log on or be left behind."
The Net is a place where man is finding out more about himself and various other things. How one finds serious, or not-so-serious, data or information on any topic is part of the present-day zeitgeist... Here are Web addresses for medical links:
alt.arthritis; alt.support.depression; alt.support.diet; misc.health.diabetes; alt.infertility; alt.support.stop-smoking; misc.kids.health; alt.support.anxiety-panic
Other Medical Websites
Medscape (www.medscape.com); Oncolink (www.oncolink.upenn.edu); MedicineNet (www.medicinenet.com); American Medical Association (www.ama-assn.org); Minority Health Resource Center (www.omhhrc.gov); Hypertension Network (www.bloodpressure.com).
The Good And The Bad Of Verbal Convergence
Some Of It Is Information; Some Propaganda
It is important at this juncture to look at and consider the relation between good and bad effects, the press and information; or, one may add, how the converging 'memes' and 'zines' collude and combine to hold their users or agents culturally hostage. For this, we defer to Jacques Ellul, who informs us that:
"It seems to be a simple matter, for example, to distinguish between information and propaganda. But closer study of the problem reveals that it is practically impossible to make such a distinction. Considering but a few elements of the situation, the problem of information is today no longer that of the necessity of transmitting honest information — everybody agrees on this point. On the moral level it is a commonplace that we ought to transmit it.
I merely inquire, "How do we get it?" To remain on the moral level is simply not to understand the situation. The concrete situation, to take but a single example, is something like the following: Over the wires and into the offices of the Associated Press pass daily up to 300,000 words of world news, approximately equal to an enormous volume of 1,000 pages.
From this mass of words, it is necessary for the Associated Press, in competition with all the other world agencies, to choose, cut, and re-expedite as quickly as possible perhaps a twentieth part of the whole to its subscribers.
Ellul continues to add: "How is it possible to select from such a flood just what should be retained, what is true, what is possibly false, etc.? The editors have no criteria; they are at the mercy of whatever comes in, and (even when they judge in good faith and with knowledge) they must essentially judge subjectively. Then again, even if the editor had only true news, how should he assign it a coefficient of importance?
"To do so is his business, and here the stereotypes of the editor are true enough: The Catholic editor will deem the news of the latest Vatican Council of great significance, information which has not the slightest importance to the Communist editor.
"What we have to do with here is not a question of bad faith, but of difference of perspective on the world. The result is that we never know, even under the most favorable circumstances, if a given piece of information is subjective. And we must always bear in mind that this information, whatever it is, has been worked over by at least four or five different pairs of hands."
Ellul concludes thus: "My reasons for maintaining that good effects are inseparable from bad are now, I trust, clear. And, as communications improve, the freer will be the flow of the news and the more available to all agencies concerned. These factors will play an ever greater role, making the difficulties of editing proportionately greater, and the chance of selecting absurd rather than sound news ever higher."
We see that the words, the 'memes' and 'zines' that compose and proliferate in our vocabulary and semantics, have a way of making us cultural hostages to them and to the means through which they are acquired. It is in this spirit that McLuhan wanted us to appreciate that human technologies, like all other artifacts (to adopt the Anglo-Canadian spelling he naturally used), are 'outerings', or 'utterings', of our human faculties.
Technologies, whether they be devoted to communication or not, are thus extensions of our humanity, not the cold, alien, external forces envisioned by the paranoia of bad science fiction. Seen as utterings, technologies can thus be seen as utterances, as rhetorical tropes we use to express and enhance our humanity, and can therefore be read and analyzed for their cognitive, social and cultural effects.
True also to the medieval philosophy he knew so well, he wanted us to see that Nature, including humankind, is a book we can read, if we can only know and decode its language and analyze its significance. Such a methodology should have an interpretive power akin to that of the medieval four-level exegesis of the Book of God - the literal, the figurative (allegorical), tropological (moral), and the anagogical (eschatological) level.
McLuhan makes a distinction between the "content" of a medium and its "message," and observes that the manifest content of any communication is always less important than its cognitive impact: the technologies of communication exercise a large measure of control over the content of what is communicated.
In a nutshell, McLuhan's extension of the research conducted on the relationships between orality and literacy may provide some reasonable explanations for many of the cultural changes taking place that give contemporary observers alarm:
"The erosion of "canons" of literature, the apparent decline in literacy among college students, and the creation of an arid postmodernism that denies the existence of intrinsic meaning in texts, to name but a few. We know that the key to understanding why the "achieved values" of "Gutenberg mechanical culture" are today being undermined lies in the ways in which our various technologies condition our senses and sensibilities." (McLuhan)
But perhaps the reluctance on the part of others to take such matters seriously is tied to their reluctance to recognize and admit that technologies, and even powerful hybrid combinations of technologies, are fundamentally extensions and expressions of our deepest humanity. That humanity remains handcuffed by the 'zines' and 'memes' dictated by the technique of the gizmos and their ways of dissemination; this state of being held hostage is what the media zeitgeist is all about, and it subdues culture in the process. Below, I further discuss the cultural aspects of the electronic media zeitgeist.
"Zeitgeist" means "the spirit of the times," and this again simply means "the general intellectual, moral, and cultural climate of an era."
According to Manuel Castells, "Because culture is mediated and enacted through communication, cultures themselves, that is our historically produced systems of beliefs and codes, become fundamentally transformed, and will be more so over time." We also come across this definition by Jessica Evans and Stuart Hall, who write:
"Certain themes imbued with visual metaphors and terminologies of looking and seeing have become the staple diet of cultural and media studies: the society of the spectacle and the simulacrum; the politics of representation; the male gaze and the possibility of a female gaze; the 'mirror stage' fetishism and voyeurism; the reproduction of the image; the 'other' as the projection of racialized discourse.
"It may thus appear contentious to claim, as we do, that "visual culture" has been somewhat overlooked in the rapid expansion of cultural media studies throughout the past decade and a half. Contentious because, after all, the work of Barthes, Benjamin, Lacan and Foucault, with their clearly visual concerns — not to mention a host of others — forms the canonical foundations upon which much cultural and media studies rest."
So that, "information society," "network society," and "media society" have become central concepts to describe contemporary society. Recent technological and social developments seem to be characterized by a fast transformation that shakes the old traditions and steady structures of our communities.
Our thinking, our daily activities, and the very survival of homo sapiens are heavily interlinked with technological innovations and media cultural systems. The basic problem concerning communication and information technology continues, however, to be the lack of research carried out from the perspective of the humanities and social sciences.
Accounts based on technical and techno-economic premises — as well as various strategies by governments and central administrative agencies — can easily be found. Qualitative and critical research focusing on such issues as the values, morals and social implications of technology is rare. This despite the fact that the role of information technology can be considered so central as to justify W.C. Zimmerli's view of it as the "Cultural Technology" of our time.
We have to really and incisively delve into the Cultural Media Zeitgeist in the context of this Hub as a critical, multi-disciplinary anthology that discusses, e.g., the theories, problems and possibilities of multimedia communication, computer interaction, hypertextual representation of knowledge, contemporary utopias, television broadcasting, semiotics of media, and sexuality in the cyber age.
Instead of technological determinism or trendy techno-optimistic rhetoric, this Hub has been an attempt at an analytical approach to contemporary media, future technologies, and electronic texts, which helps to describe and clarify the information- and media-saturated culture of our time, with the possibilities and problems it brings.
For instance, we read from Sherry Turkle that, "Computer technology not only 'fulfills the postmodern aesthetic' as R. A. Lanham would have it, heightening and concretizing the postmodern experience, but helps that aesthetic hit the street as well as the seminar room. Computers embody postmodern theory and bring it down to earth."
According to Castells, "It is obvious that semiotic, aesthetic and philosophical codes of contemporary media channels — both in the form of traditional 'mass media' and in the recent forms of so-called 'new media' [Internet, multimedia, hypertext, virtual realities, etc.] — define the millennial Zeitgeist of the coming years. Through the powerful influence of the new communication system, mediated by social interests, government policies, and business strategies, a new culture is emerging: the culture of real virtuality."
This culture of ('real') virtuality emphasizes the technical, psychological and dromological aspects of communication. What characterizes the new system of communication, based on the digitized, networked integration of multiple communication modes, is its inclusiveness and comprehensiveness of all cultural expressions. Therefore, we can expect to be immersed in all kinds of ever-expanding communication — including masses of information overload, worthless data trash, and seducing media soma.
When we begin to understand the breadth and depth of the Cultural Media Zeitgeist, we then begin to wrap our heads around the technological, economic, occupational, spatial and cultural nature of how and why users or consumers are held hostage by these converging technological gizmos and their disseminated zines and memes: their effect and affect on us, and how this has changed and shifted the Media Ecology paradigm from the traditional modes of mass information.
According to Webster and Castells, "It is no coincidence that computers, information networks and media technologies in general have held a central position in the recent cultural theoretical and philosophical debate in which both sides of the 'information society' [Technological Society, a la Jacques Ellul] and the 'postmodern' state of culture have been emphasized."
"In fact," according to Poster, "it seems that the philosophical analysis of new media and information technology brings up a conflict. These technologies (hypermedia, computer networks, virtual reality, etc.) are generally closely associated with cultural postmodern(ism), the indicators of which are, e.g., global databanks, electronic communications, and the principle of operating in real time."
Beyond this, "The media [cultures] appear to be sketching the sort of qualitative definitions which are often associated with postmodernism, such as the superficiality and brokenness of our world [view], as well as the fragmentary discontinuity of the surrounding field of phenomena." (Inkinen)
At the same time, though, the media are presented as the meta-narrative of our time, the total conquest of chaos (entropy), and an ambitious utopian landscape. Sherry Turkle informs us that, "[M]uch of the conversation about electronic mail, bulletin boards, and the information superhighway in general is steeped in a language of liberation and utopian possibility. It is easy to see why. To date, a user's experience of the Internet is of a dizzying free zone. On it, information is easily accessible. One can say anything to anyone. [...]"
Over the last few years, themes such as the 'new communication paradigm,' 'digital economy,' 'techno society,' 'interactivity,' 'cyber culture,' 'cyberocratia,' etc., have been topics of never-ending discussion. Unfortunately, more often than not, comments have been focused on defending or criticizing them.
Critics like Tom Forester, for example, consider the information society utopia to be unrealistic. Forester has shown in his biting article, "Megatrends or Mega-Mistakes? What Ever Happened to the Information Society?", that many of its predictions ("the paperless office," "the electronic cottage," "the cashless society," "computerized teachers," etc.) have failed to come to pass.
It appears, on the contrary, that the computer has brought new social, psychological, and ethical problems into Western society, examples of which are unreliable programs, computerized crime, copyright violations, hackers, crackers, computer viruses, questions of privacy, and general information overload. According to Forester:
"The truth is that society has not changed very much. The microchip has had much less social impact than almost everyone predicted. All the talk about future shocks, third waves, megatrends, and post-industrial societies must now be taken with a large pinch of salt. Life goes on for the vast majority of people in much the same old way.
"Computers have infiltrated many areas of our social life, but they have not transformed it. Computers have proven to be useful tools — no more, or less. None of the more extreme predictions about the impact of computers on society have turned out to be correct. Neither Utopia nor Dystopia has arrived on earth as a result of computerization."
The doyen of Media Ecology, Marshall McLuhan, gives us his take in the following manner:
"[...] Computers offer the potential of instantaneous translation of any code or language into any other code or language. If a data feedback is possible through the computer, why not feed-forward of thought whereby a world consciousness links into a world computer? Via the computer we could logically proceed from translating languages to bypassing them entirely in favor of an integral cosmic consciousness somewhat similar to the collective unconsciousness envisioned by Bergson."
"The Spirit Of The Age" - Media Zeitgeist Redux-Part Deux
McLuhanite rhetoric is well suited for expressing the "spirit of the age" ("Zeitgeist"), stressing global media, electronic technology and transitional culture. Considering the society and technology of today, McLuhan's writings on electronic culture, the television age, the global village, hot/cool media, etc., have been prophetic…
And, as a creative dissident, his energetic vision often seems unbelievable in its eloquence in the academic world and/or a theoretical realm, with its many original and futuristic ideas (and this in the 1960s, which makes it still more than relevant today and into the future). This fact about McLuhan has been encapsulated by Benedetti & DeHart in the following manner:
"There are different reasons for McLuhan's revival. For the first time since television achieved domination of the culture in the fifties and sixties, there is a new wave of technological innovation that seems on the verge of radically remaking our world — a wave signified by the Internet and virtual reality.
"Personal computers, first used largely as glorified typewriters, now seem capable of linking individuals into an electronic, instantaneous, global communication network. These developments have sharpened our belief that an old-fashioned, content-based approach is inadequate to understanding technology.
"A comprehensive, effects-orientated approach — an attempt to grasp the whole pattern of change, including the innumerable and often ignored side effects of technological development — seems more fitting. McLuhan is the master of this approach."
McLuhan's writings were ahead of his and our time, and this point is made clear by Kevin Kelly, who writes: "Everyone thought McLuhan was talking about TV, but what he was really talking about was the Internet - two decades before it happened... McLuhan's strange ideas seem perfectly obvious in light of the Web, e-mail and cyberspace."
And in a more expanded sense, Jonathan Miller provides a critical view of McLuhan in this manner:
"[...] In fact, he [McLuhan] sees the more recent developments in electronic technology as offering a God-sent escape from the slavery exerted by wheels and levers. For in a somewhat confused way, he has identified the circuits of the electrical engineer with those of the human nervous system itself, and invites us to acknowledge that through TV and radio we have given ourselves the opportunity of communicating with one another through media that can reproduce the plural simultaneity of thought itself.
"Through these media, images and sounds can be flashed upon the attentive mind with telepathic speed; and, since the various mechanisms can be linked in a vast network, electronic man has reconvened the tribal village on a global scale."
A theorist of mass communication and media culture, McLuhan has been quite justly called a prophet in his own time. He saw the irresistible impact technological change had on the world and society, providing new ways to explain them. Such terms as "vortex," "sensorium," "sensory impact," "extension of man," "global village," originally coined by him, have become part of the language.
Particularly popular has been McLuhan's idea of a global village brought together by the mass media telecommunications infrastructure which seems to be an ideal analogy for picturing an Internet-style global information network.
It is this infrastructure, with its communicating speech patterns, that this Hub has been exploring. As this Hub develops, we will now go much more deeply into how these 'zines' and 'memes' function, effect and affect us, and change and modify our communication methods and lifestyle, along with our ways of thinking and doing things in our day-to-day lives. This will be keeping up with "the spirit of the times" [Zeitgeist, the Cultural Media Zeitgeist].
Extending Friendships and Interests
Online spaces enable youth to connect with peers in new ways. Most youth use online networks to extend the friendships that they navigate in the familiar contexts of school, religious organizations, sports, and other local activities. They can be “always on,” in constant contact with their friends via texting, instant messaging, mobile phones, and Internet connections.
This continuous presence requires ongoing maintenance and negotiation, through private communications like instant messaging or mobile phones, as well as in public ways through social network sites such as MySpace and Facebook.
With these “friendship-driven” practices, youth are almost always associating with people they already know in their offline lives. The majority of youth use new media to “hang out” and extend existing friendships in these ways.
A smaller number of youth also use the online world to explore interests and find information that goes beyond what they have access to at school or in their local community. Online groups enable youth to connect to peers who share specialized and niche interests of various kinds, whether that is online gaming, creative writing, video editing, or other artistic endeavors.
In these “interest-driven” networks, youth may find new peers outside the boundaries of their local community. They can also find opportunities to publicize and distribute their work to online audiences and to gain new forms of visibility and reputation.
Self-Directed, Peer-Based Learning
In both friendship-driven and interest-driven online activity, youth create and navigate new forms of expression and rules for social behavior. In the process, young people acquire various forms of technical and media literacy by exploring new interests, tinkering, and “messing around” with new forms of media.
They may start with a Google search or “lurk” in chat rooms to learn more about their burgeoning interest. Through trial and error, youth add new media skills to their repertoire, such as how to create a video or customize games or their MySpace page. Teens then share their creations and receive feedback from others online. By its immediacy and breadth of information, the digital world lowers barriers to self-directed learning.
Others “geek out” and dive into a topic or talent. Contrary to popular images, geeking out is highly social and engaged, although usually not driven primarily by local friendships. Youth turn instead to specialized knowledge groups of both teens and adults from around the country or world, with the goal of improving their craft and gaining reputation among expert peers. What makes these groups unique is that while adults participate, they are not automatically the resident experts by virtue of their age. Geeking out in many respects erases the traditional markers of status and authority.
New media allow for a degree of freedom and autonomy for youth that is less apparent in a classroom setting. Youth respect one another’s authority online, and they are often more motivated to learn from peers than from adults. Their efforts are also largely self-directed, and the outcome emerges through exploration, in contrast to classroom learning that is oriented toward set, predefined goals.
Implications for Educators, Parents, and Policymakers
New media forms have altered how youth socialize and learn, and this raises a new set of issues that educators, parents, and policymakers should consider.
Social and recreational new media use as a site of learning. Contrary to adult perceptions, while hanging out online, youth are picking up basic social and technological skills they need to fully participate in contemporary society. Erecting barriers to participation deprives teens of access to these forms of learning. Participation in the digital age means more than being able to access “serious” online information and culture. Youth could benefit from educators being more open to forms of experimentation and social exploration that are generally not characteristic of educational institutions.
Recognizing important distinctions in youth culture and literacy. Friendship-driven and interest-driven online participation have very different kinds of social connotations. For example, whereas friendship-driven activities center on peer culture, adult participation is more welcome in the latter, more “geeky,” forms of learning. In addition, the content, ways of relating, and skills that youth value are highly variable depending on what kinds of social groups they associate with. This diversity in forms of literacy means that it is problematic to develop a standardized set of benchmarks to measure levels of new media and technical literacy.
Capitalizing on peer-based learning. Youth using new media often learn from their peers, not teachers or adults, and notions of expertise and authority have been turned on their heads. Such learning differs fundamentally from traditional instruction and is often framed negatively by adults as a means of “peer pressure.” Yet adults can still have tremendous influence in setting “learning goals,” particularly on the interest-driven side, where adult hobbyists function as role models and more experienced peers.
New role for education? Youths’ participation in this networked world suggests new ways of thinking about the role of education. What would it mean to really exploit the potential of the learning opportunities available through online resources and networks? Rather than assuming that education is primarily about preparing for jobs and careers, what would it mean to think of it as a process guiding youths’ participation in public life more generally? Finally, what would it mean to enlist help in this endeavor from engaged and diverse publics that are broader than what we traditionally think of as educational and civic institutions? The changed reality and environment, from analog to digital, must also be taken into account.
A Sense of Wonder - Jason Silva - Zeitgeist Americas 2013
The Spirit Of The Time-Time Erasure of the Past
"When You give people too much information, they instantly resort to pattern recognition to structure the experience. The work of the artist is to find patterns." (Marshall McLuhan)
"The name of a man is a numbing blow from which he never recovers." (Marshall McLuhan)
"The Next Medium, whatever it is--it may be the extension of consciousness-will include television as its content, not its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual's encyclopedic function and flip it into a private line to speedily tailored data of a salable kind." (Marshall McLuhan)
The spirit of the time is nigh, and Marshall McLuhan was talking about this change. Even for us, as we see the change that is happening, we are aware that Western media-driven societies began to morph, and quickly, to the point, toward the end of the twentieth century, where time not only seems to be moving more quickly, but is beginning to feel funny, too. There's no more tolerance for waiting of any sort.
"We want all the facts and we want them now. To go without email for forty-eight hours can trigger a meltdown. You can't slow down, even once, ever, without becoming irrelevant. Music has become more important because music is a constant. School reunions are beside the point because we already know what our old classmates have done. Children often spend more time in dreamland and cyberspace than in real life. Time is speeding even faster.
"And then the economy collapsed in a weird way that felt like a hard-to-describe mix of Google, the New York Times's website, pop-up ads for Russian pornography websites, and psychic radiation emitted by all those people you see standing by the Loblaws produce section at 6:15 on a weeknight, phoning home to see if spinach is a good idea.
"All this information and more has overtly, osmotically, or perhaps inadvertently damaged a collective sense of time that has been working well enough since the Industrial Revolution and the rise of the Middle Classes. This "time sickness" is probably what killed the economy, and God knows what it's up to next.
"Everywhere we look, people are making online links-to conspiracy, porn, and gossip sites; to medical data sites and genetics sites; to baseball sites and sites for Fiestaware collectors. To sites where they can access free movies and free TV, arrange hookups with old flames or taunt old enemies-and time has begun to erase the twentieth century way of structuring one's day and locating one's sense of community.
"People are now doing their deepest thinking and making their most emotionally charged connections with people around the planet at all times of the day. Geography has become irrelevant. Our online phantom world has become the new us. We create complex webs of information and people who support us, and yet they are so fleeting, so tenuous. Time speeds up, then it begins to shrink.
"Years pass by in minutes. Life becomes the strange experience in which you're zooming along a freeway and suddenly realize that you haven't paid any attention to driving for the last fifteen minutes, yet you're still alive and didn't crash. The voice inside your head has become a different voice. It used to be "you." Now your voice is that of a perpetual nomad drifting along a melting landscape, living day to day, expecting everything and nothing." (Douglas Coupland)
In 1962 McLuhan wrote:
"Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as in an infantile piece of science fiction. And as our senses have gone outside us, Big Brother goes inside. So, unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence."
With those words, McLuhan anticipated the Internet four decades early. And he added: "Environments are invisible. Their ground rules, pervasive structure, and overall patterns elude easy perception." McLuhan's "the medium is the message" means that the ostensible content of all electronic media is insignificant; it is the medium itself that has the greater impact on the environment, a fact bolstered by the now medically undeniable finding that the technologies we use every day begin, after a while, to alter the way our brains work, and hence the way we experience our world.
Social Video-The Emerging and Converging New Trend
Emerging Social Videos To Capture The Watching Audience's Approval
The following article and the videos posted in it were written and composed by David Waterhouse, who dubbed it:
"The Emerging Social Video Trends of 2013 - Part 1"
We are only just over halfway through 2013, but we are already seeing some significant trends in the social video space.
It’s not surprising. If you thought a week was a long time in politics, it feels like a lifetime in social video. Every day we are seeing more and more brands setting out to create content that will engage the web.
The results have been simply staggering. Over the last few months we have seen brands, large and small, make a big impression online. 'Moonwalking ponies,' 'dogs driving cars'...even an unfashionable brand like Kmart enjoyed incredible social video success thanks to the kind of joke that would not look out of place on an 11-year-old’s social media profile.
And with good reason. Video consumption is rocketing, with Cisco even predicting that in less than four years’ time, video will be bigger than Facebook and Twitter, making up 69% of consumer internet traffic.
With such a huge rise in people now watching online, brands have been quick to follow. Online video advertising was the fastest growing category of ad spend in 2012, with 30% annual growth. Within three years, online video spend is expected to rise to $10B.
But what are the biggest trends we have seen this year?
1. Brands creating their own dramas
Let’s face it, 2012 – it was emotional. Hello ‘Empty13’. That's right, after being moved to Olympic-sized tears by P&G and inspired by Red Bull’s record-breaking space jump over the last 12 months, there hasn’t been a great deal on the calendar for 2013.
Last year offered up a wealth of opportunities for brands, which had their pick of awe-inspiring, headline-grabbing events to build social video campaigns around.
But this year there has been no Olympics to inspire us, no election to bore us or even a Mayan apocalypse to not worry about.
Following a hectic year of global events, the global watercooler has been running a little dry. In fact, our diaries have been as bare as Mitt Romney’s of late.
So it’s no surprise brands have been creating their own dramas and events to build their campaigns around. Here are some examples of how brands have been filling up their content calendars.
Tapping into the zeitgeist plays a key factor in creating contagious content — in fact it is one of the key social motivations for driving sharing. So when there’s a vacuum of real world events, it’s only natural that brands should turn to the Internet for inspiration.
This has been happening for years, with mixed results, but in the last 6 months we’ve certainly seen an intensification of brands’ interest in meme culture.
The most prominent example of this is Harlem Shake. Many have called it the natural successor to Gangnam Style, but what really made the dancing meme stand out from any other is the sheer number and speed of the parodies.
Around 40,000 Harlem Shake videos were uploaded in just the first 11 days, giving a cumulative total of 175 million views. That’s simply staggering.
Gangnam Style inspired a lot of parodies, but there was a central video as a reference point. With Harlem Shake, every man and his dog wanted to do their own version. In fact, you almost felt like you had missed out if your office, sports club or group of friends hadn’t donned the helmet and danced around like loons.
NBA team Miami Heat's video is the most popular among brands, but Manchester City FC, Seaworld, the Norwegian army, Facebook, and even Unruly produced their own Harlem Shakes in the first few days.
The Harlem Shake v1 (TSCS original)
Racial Biased Videos
With more than 100 hours of video now being uploaded to YouTube every minute, we have seen a lot of brands taking a few risks with their content in 2013 to get cut-through.
As more and more advertisers embrace their role as content creators, it seems only natural that they are increasingly pushing the boundaries and being more controversial in their content output.
Creating controversial content is high risk and does not always go to plan — just ask Hyundai and Mountain Dew - but sometimes it can drive awareness in an increasingly crowded space, especially during a year with no tent-pole events.
It has certainly paid off for brands like Kmart, with its recent “Ship My Pants” video — the 11th most shared ad of all time — and Paddy Power, whose campaigns regularly push the boundaries — one of which, 'Ladies Day,' was among the most complained about ads of 2012.
Kmart's "Ship My Pants" Commercial
On The Video Meme: Punk Ads or "Prankvertising"
With very few events in the calendar, it’s hardly surprising that some brands have decided to create their own stunts for people to talk about.
These “punk” ads or “prankverts” — a phrase first coined by AdWeek following the success of Pepsi Test Drive to describe brands that contrive an event in a public space and trick supposedly unsuspecting members of the public into joining difficult situations — are nothing new.
For example, in 2011 Carlsberg created the cinema from hell and rewarded couples brave enough to sit down — it was one of the top 10 ads of the year.
But in 2013, possibly fueled by the mammoth success of TNT's Dramatic Surprise the year before, we have seen a real shift in focus to these types of ads.
Pepsi’s Test Drive, in which NASCAR driver Jeff Gordon takes a car salesman on the test drive of his life, has been the most popular so far, attracting more than 2.58 million shares since its launch on March 12.
However, there have been loads of ads that have enjoyed a lot of success using this technique over the past six months. Some of the most notable examples are Nivea's Stress Test, Carlsberg's Friends Test, Adobe's Street Retouch, THINK!'s Public Loo Shocker, Renault's Va-Va-Voom, plus Murder Elevator and Beauty Salon Scare for the horror movies Dead Man Down and The Last Exorcism 2.
But why are they so popular? Research has found that the secret of sharing success is eliciting the strongest possible emotions from your viewers — and capturing people's reactions to a stunt is a clever way of doing this.
What better way to make people emote than to make them question how they would respond in the same situation?
Many of these prankverts also work well because they combine the content triggers ‘hilarity’ and ‘surprise’. In other words, viewers were surprised by the fact that brands have put credulous consumers in compromising positions and were amused by the ensuing results.
Media Plugged-In or Unplugged?
It has become much more difficult to unplug plugged-in media gadgets and the viral soup and stream that is on 24/7/365. These gizmos offer a lot of distraction and time consumption, beckoning the user to be attached to, turned on to, and engaged in games, social media, and so forth.
Our compass is smashed and we’ve lost our way to truly listen and respond to our own interior landscape of minds, bodies, and gut instincts, giving way to amped-up, high-stakes media drama and either/or thinking.
This “all or nothing” polarity as people struggle for life balance and media management really circles back to that same very innate survival skill I experienced this week…
…Learning to TRUST your brain and body to KNOW when something’s not right. (e.g. food, fitness and relationship dependencies, wired ways/media overload, stress, family habits, take your pick…) Admittedly, I don’t submit easily to illness, controlling forces and, so yes, it was a huge deal for me when I opted out of hugging the porcelain bowl any longer to take a ride to the ER for fluids.
For instance, there's too much tweeting, media addiction, and virtual farm games; teens have online friends as strong as family; Facebook is about to reach saturation point; brains are constantly plugged into computers without let-up; and digital devices are making us less creative. So, living in a mediarized environment and using gizmos is not a big deal for the young ones, and Ng Hwee Koon reports:
Unlike those of us who have had to migrate into a digital world, Generation Z* children are true digital natives who have never known life without mobile phones or the Internet.
Unlike the previous generations, they have never known the limits of desk bound computing. Instead they live in a world where everyone has at least one mobile phone, with the Internet increasingly available in their pockets.
So just how different does this make the kids of today? Mr Richard Gerver, an internationally renowned speaker and Education Advisor to former British Prime Minister Tony Blair, describes them as the “on demand” generation.
They expect immediacy and personal control — as can be seen from the way they listen to music, download only what they like, and create their own playlists. For them, a world without Google, Facebook, YouTube, music and games downloads, and IM has never existed.
As a result, there is a growing disconnect between how kids are learning on their own and the way lessons are taught in the formal education system, says Associate Professor Looi Chee Kit from the National Institute of Education and Founding Head of the Learning Sciences Lab, a research centre for learning and teaching.
The challenge, he thinks, is to design curricular activities that are relevant to students’ daily lives so that they would continue to pursue them on their own, beyond the classroom.
What’s the strategy?
The way not to do it is to try to confine technology in education to a computer or a laptop meant only for word or data processing. This, says Mr Gerver, is “patronizing and limiting”.
We also don’t want to be constantly nagging our kids, or banning them from activities such as gaming or social networking, "which will only make them very controlled," as one 12-year-old puts it.
Mr Gerver tells Challenge in an e-mail interview:
“We must remember that this generation do not view technology as we do. They aren’t dazzled by it, and they expect and demand more from it. To them technology is a facilitator and is temporary.”
Indeed, today it’s iPad and Kindle; tomorrow it will be something else.
Hence as we seek to harness their abilities to help them grow, we need to first be conscious of what makes them different.
Our teachers have observed that in the classroom, the students’ infocomm technology (ICT) experiences translate into the ability to multi-task, and a preference to communicate by texting — be it SMS or IM – as opposed to the act of speaking up.
They also prefer graphics and multimedia content to reams of grey text. At the same time, they tend to have shorter attention spans than children of the past, and are less focused.
Says Ms Chea Seok Choo, a teacher at Rulang Primary with 30 years of teaching experience:
“Children today are distracted by many alternative sources of information. I am constantly amazed by their intelligence. They are very capable and can learn on their own at a single click..."
"To them, school is just [a place] to socialize and interact with their friends.”
Indeed, kids today want to be friends. They want to be connected and have no use for hoity-toity airs from their teachers, or anybody.
One then has to ask: If the voice of authority no longer speaks to this generation of children, how can the old way of learning with one teacher talking to a class of 40 students work?
It is unlikely to. Not even if you use “newer” tools like PowerPoint, tablet PCs and visualizers (instead of white boards and overhead projectors), because, according to Alexandria School, a Secondary 3 student from a well-known school, such lessons — which are already the norm for her — are “boring”.
Teachers as Action Researchers
To make learning “as exciting as Disneyland” (as Mr Gerver envisions it to be) some of our schools are already experimenting with new methods that make teachers “action researchers”.
For example, Prof Looi helped Mayflower Primary put in place a “Group-Scribbles” system that works like a virtual notice board with sticky notes that students can “write” on. The teacher poses questions and students “paste” their answers onto the board.
Rather than doing it the traditional way, where teachers might strike fear into the class by singling out students to answer questions, this system creates a “safe learning environment”.
Students, now cloaked in anonymity, can respond without fear of ridicule. The entire class, including the teacher, focuses more on the ideas generated than on the person giving the answer.
In this way, teachers can gauge the students’ conceptual understanding from the general response and are able to help correct misconceptions, if any, on the spot.
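As a rough illustration of the workflow described above (the teacher poses a question, students paste anonymous notes, and the class reviews ideas rather than authors), here is a minimal sketch in Python. This is not the actual Group-Scribbles software; the class and method names are invented for illustration only.

```python
# Toy sketch of an anonymous class response board, in the spirit of the
# "Group-Scribbles" system described above. Names are hypothetical.

class ResponseBoard:
    """A virtual notice board with anonymous sticky notes."""

    def __init__(self, question):
        self.question = question
        self._notes = []          # answers stored without author names

    def paste(self, author, answer):
        # The author is deliberately discarded so responses stay anonymous;
        # the class sees only the idea, never who posted it.
        self._notes.append(answer)

    def notes(self):
        # Everyone (teacher included) reviews the ideas, not the people.
        return list(self._notes)

board = ResponseBoard("Why does ice float on water?")
board.paste("student_01", "Ice is less dense than liquid water.")
board.paste("student_02", "Water expands when it freezes.")
print(board.question)
for note in board.notes():
    print("-", note)
```

The key design choice mirrors the article's point: because authorship is stripped at the moment of posting, the teacher can gauge the spread of conceptual understanding from the notes alone and correct misconceptions on the spot.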
A similar system using “Wallwisher.com” is used in Ngee Ann Secondary without anonymity by Ms Rachel Poh, an English Language and Literature teacher. By tapping into students’ natural inclination for text messages, she has seen dramatic results.
Compared to the past, when lessons wound to a quick close from a lack of interaction, the students were so engaged that they answered not just her questions but also commented on their classmates’ answers — behavior commonly displayed on Internet forums and social networking sites. The session was extended to allow the students to continue posting their comments after they went home. Now, that’s homework on demand.
Getting More to Join the Bandwagon
The results from all that experimentation have been encouraging, with pilot studies by the Ministry of Education (MOE) showing that students found ICT-integrated lessons to be interesting and engaging.
For example, in Marsiling Secondary, about 80% of students who used an online collaborative environment in their lessons reflected high motivational levels, with a majority participating actively in online discussions.
MOE also has a range of workshops to help teachers pick up the necessary skills to respond to this new learning environment. From 2010, it will train more than 1,000 ICT Mentors to raise the level of ICT used in schools. To foster collaboration and the sharing of ideas and resources, the ministry has also created ICT platforms such as edumall2.0 and iSHARE.
Informally, schools that have experimented with ICT are also enthusiastic about sharing their knowledge. Kranji Secondary, for instance, has been organizing the “National ICT Sharing Session” since 2003. The session usually sees some 600 teachers from across Singapore gathering for an afternoon at the school’s 40 classrooms to trade notes on their ICT projects.
Children Are Being Brought Up on Technological Gadgets and Techniques
Mediarized Digital Media Environments of the Present Future
The objective, then, is to create a school environment that encourages face-to-face interaction to build students’ collaborative and communication skills, and to develop a curriculum that helps them become successful learners and confident individuals. These are the life skills that educators agree will help our children tackle the unknown future of their world.
Upgraded Facebook and YouTube on Your Computer Now
The digital zeitgeist of the present future is full of new ideas and applications/operations that need to be clarified more often than not. What is happening on Facebook and on YouTube bears much closer scrutiny. We learn this bit from Guerrilla And Chalk:
Facebook has begun rolling out Embedded Posts that let people add public posts from Facebook to their blog or web site. When embedded, posts can include pictures, videos, hashtags and other content. People can also like and share the post directly from the embed.
Why it’s interesting: Up to now, one distinctive difference between Facebook and Twitter has been openness; FB is a walled garden (talk amongst friends only), while Twitter is an open party (friends and anyone else who will listen). However, Facebook’s efforts over recent months have indicated a shift to make its content more accessible outside of just your circle of friends: Graph Search, hashtags, following users without friending, and now embedding.
At this juncture I refer the reader to read up on 'EdgeRank' online for further analysis and breakdown of Facebook's News Feed algorithm. We further learn from G&T that:
Previously, the news feed would rate all the stories published since the user last logged on, and show users only the ‘best’ (i.e., top ranked) ones when they did log on. With Story Bumping, Facebook draws from the back catalogue and considers ALL the posts the user hasn’t seen, basically helping show users more new material. From Facebook’s initial testing of 7,000 users, Story Bumping leads to an 8% increase in (brand) page interactions.
The Last Actor feature keeps track of the last 50 people or pages you interacted with and gives these people or brands a small bump in relevance. Brand pages will want to try and stay in that magic 50 to increase the likelihood of their posts being seen by fans.
Chronological by Actor
Facebook users have always been keen to see updates in the order they were posted. Facebook will attempt to tackle this with chronological tweaks, so that you see posts from the same friend or brand in chronological order. This is useful for following live, real-time updates on a sports match, TV program or similar.
Why it’s interesting: These updates, particularly Last Actor Signal and Story Bumping, are looking to reward people and pages who have a compelling story to tell, and in an engaging way. Posts that are inherently good (offer value to the reader) should typically enjoy more time in their followers’ feeds. It is therefore more important than ever to ensure, as brand guardians, the quality of our story in print and TV is matched on Facebook, as well as other social channels.
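To make the mechanics concrete, here is a toy sketch of how Story Bumping, the Last Actor signal and chronological-by-actor ordering could combine in a feed ranker. Facebook's real algorithm is proprietary; every weight, name and data shape below is an invented assumption for illustration only.

```python
from collections import deque

LAST_ACTOR_WINDOW = 50   # Last Actor: remember the last 50 people/pages engaged with
LAST_ACTOR_BONUS = 0.5   # invented relevance bump for those recent actors

class ToyFeed:
    def __init__(self):
        # deque(maxlen=...) silently drops the oldest interaction,
        # keeping only the "magic 50" the article mentions.
        self.recent_actors = deque(maxlen=LAST_ACTOR_WINDOW)

    def record_interaction(self, actor):
        self.recent_actors.append(actor)

    def rank(self, unseen_posts):
        """Rank (actor, timestamp, base_score) tuples for display.

        Story Bumping: the input is ALL posts the user hasn't seen, not
        just those published since the last log-in. Last Actor: recently
        engaged actors get a small score bump. Chronological by Actor:
        Python's stable sort keeps equally scored posts from one actor
        in their original posting order.
        """
        def score(post):
            actor, _timestamp, base = post
            return base + (LAST_ACTOR_BONUS if actor in self.recent_actors else 0.0)

        return sorted(unseen_posts, key=score, reverse=True)

feed = ToyFeed()
feed.record_interaction("brand_page")   # user liked a brand's post recently
posts = [("friend", 1, 1.0), ("brand_page", 2, 1.0), ("stranger", 3, 1.2)]
print([p[0] for p in feed.rank(posts)])   # brand_page rises: 1.0 + 0.5 bump
```

The sketch shows why the article's advice follows: staying inside the recent-interaction window, and posting content whose base score (engagement) is inherently high, are the two levers a brand page actually has.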
What’s happening? YouTube has created a Top Fans feature, making it easier for creators to keep in touch with their most influential and most engaged fans. Announced last week, Top Fans will feature a dashboard for creators, a feed of Top Fans’ activity and the ability to post content exclusively to Top Fans. Plus, if a creator syncs their Google+ and YouTube accounts, creating a circle for their Top Fans, they will be able to access demographic information about them.
Megan O'Neil informs us that:
This afternoon at VidCon, YouTube previewed a new feature called Top Fans that sounds like it’s going to revolutionize the way that YouTube creators engage with their fans. The feature will make it easier for creators to engage with their most influential and engaged fans, as well as glean demographics about these fans to aid in potential collaborations and beyond.
In a panel called Kissing Babies and Couch Potatoes: Lessons from Politicians and TV to Drive Your Watch Time, YouTube Product Manager Sherry Listgarten previewed the new feature, which will be revealed in more detail later today during the YouTube Keynote at VidCon Industry day.
We’ll update with more information after the keynote, but what we’ve gleaned so far is as follows:
- A Top Fans dashboard will exist as a new page inside the Video Manager
- Creators will be able to view a feed of the latest activity from the most-engaged and influential fans instead of having to wade through hundreds or thousands of comments
- Creators will be able to send out exclusive postings only to top fans
- When Google Plus is connected to a creator’s YouTube channel and they’ve created a circle of top fans, they will be able to view demographic details about these top fans
Listgarten said, “We think there’s a lot of potential here. It’s very early days, but we’re really excited.”
Up to this point, with the emerging media and gadgets, we keep asking the question posed by the topic of this Hub: are the users of social media and YouTube being harnessed and rammed ever deeper into these new upgrades, or liberated by them? The jury is still out on this matter until we can ascertain their effects in the very near future.
Digital Culture as Catalyst for Change: Arts Education and Creative Rights
Putting Things into Perspective
The Spirit Of The Times In Africa Is Helter-skelter
The Spirit of the Times? See the Real Time of the Zeitgeist
The South African Zeitgeist: Who Is Fooling Who? The ANC And Its Voting Polity In Focus....
This, then, must be desperation on the part of our government, to begin asking people to pray for the ANC... One thing the ANC is doing a great deal of is going door to door to rally votes for its party, and the poor are telling them off. But I often wonder if it really matters, for the very people they are canvassing are going to vote for them in the end.
But what is apparent to the underdogs (the poor of South Africa) is that the ANC is scared. This may not be readily apparent to a casual viewer, but there is some desperation rearing its uncertain head among the ANC potentates.
This is a very serious and crucial time here in Mzantsi for the ANC and its adherents, followers, sympathizers and borderline observers (and the number of these is growing by the day). The unravelling of the movement is heavily downplayed, and there are now more lies about how:
The ANC laid Foundation For South Africa's Success…
"The African National Congress has in the last 20 years laid a foundation for the country's future successes, party NEC member Thoko Didiza said on Monday.
“When we look back we can say yes, we have not completed the agenda of transformation, but have laid the foundation,” she said during an elections debate by leaders of various political parties at Unisa.
"South Africa's negotiation process before 1994 enabled everyone to engage with each other.
“History is very important when we reflect on this journey of 20 years...1994 saw South Africans agreeing on a broad agenda to take the country forward.”
"The Democratic Alliance, UDM, ACDP, and Cope agreed with Didiza that the country is a better place compared to pre-1994, but that more needed to be done.
“While others enjoy the fruits of democracy and freedom, the majority is still suffering,” said DA Gauteng premier candidate Mmusi Maimane.
Black people were landless and unemployment was growing.
UDM leader Bantu Holomisa said South Africa was becoming more of a welfare state than a developmental one.
“There is a heavy reliance on social grants...most South Africans do not have a good story to tell.”
Joan Downs, of the ACDP, said lack of access to education takes the country backwards.
“We acknowledge that the health system has improved, but access to education is still limited to a minority [of] white people.”
Now, we really have to begin to read carefully what is happening, and what is said by our leaders. This is really the problem, because the government throws 'memes' and 'zines' at us, couched in highly rigged PR-spin jargon… but, in reality, this has begun to make our leaders look ridiculous, and true frauds.
Information among the poor comes in the form of newspapers, radio, TV and, for those who can afford it, cable TV; the Internet is still remote, though many who can afford it buy time to access it. This media angle is fraught with censorship and rigid control by the present government.
So the government is trying the PR technique that says, "If you tell a lie often enough and long enough, it ends up being the truth." The repetitious format used in commercials is the modus operandi of the media and communication systems here in Mzantsi. The commercial spots on TV have begun to seduce viewers subliminally, and the news presentation and format take on the same characteristic. The government and its PR advisors, along with the sponsors of the programming, are in cahoots in hoodwinking the voter.
What I am after in this part is educing the reality of our confounding and dumbfounding messages and messaging systems, which are corralling us toward a predetermined end. It is imperative for us to begin to "Understand the Media." We can still talk and discuss all topics, but the new technologies and their emerging and merging techniques need to be fully understood by many a layman, and be utilized for his emancipation and freedom.
Thoko Didiza is a cog in the machine. She matter-of-factly informs the readers that: "The African National Congress has in the last 20 years laid a foundation for the country's future successes,"
This is a response from a reader to the article:
"ANC laid foundation for SA's failures... Alternative headline... "
Someone who thinks they know how to manipulate their ANC puppets into parroting such senseless retorts really does not understand the relationship between the ANC and its people. Meaning: the PR puppet masters and propaganda spinners may have the ANC under their control, but the people they are trying to propagandize and set into action are not so easily manipulated.
One of the prerequisites for propaganda to work is an educated population, and universities prepare the next generation to be able to manipulate its own society. This government counters that technique by dumbing down its populace. For the more educated you are, the less aware you are that you are a victim of propaganda, and the readier you are to spread your ideology to others, who will in turn reinforce you and be reinforced by you in a horizontal process.
Leaders are not telling you what to think (directly); you are being told by your peers what to think, and you pass along this information to others to inform them what to think. Then, when this ideology reaches a substantial portion of the population and you demand that the leaders comply, they reluctantly do so (which they had originally intended to do, though they would rather do something lucrative and crooked than let you know their good intentions).
This is what the article above tries to tell us and informs us to do. This is what the National Executive Committee of the ANC wants us to know, through its mouthpiece, Thoko Didiza. What is perplexing is the way the government uses the media towards its own ends. Whenever the media is not in lockstep with their ideology and objectives, it is the media that is accused of spreading all the malice about the ANC. But when it suits them, they use the very same media to propagate their intentions and policies.
Repeating their lies often has, this time around, brought the ANC face to face with the people in its door-to-door voter campaigns, as I hinted above. The poor, when asked why they vote for the ANC, often say that there is no one else to vote for. My contention is that, if that is the case, the people will have to find a way of removing these cronies from power and replacing them with legitimate leaders appointed by, and accountable to, the people who put them in power, within the structure of the ANC (which for now is a pipe dream).
The problem is the ANC's creation of an elite class, which it then falsely accuses of derailing its rule. They have managed to put in place people who have the same elitist ideas as they have, and when things are not going their way, they blame the elite. Worse, when the people decry their state of existence and poverty, the ANC tells them to hustle harder, and that the media and the elites are out to bring down the ANC.
The ANC responds to the people's poverty and suffering in the same elitist way, far removed from and oblivious to the people's realities of the concentration camps; they have no concrete way of responding to the poor, and issue palliatives, homilies and vague answers that make them seem aloof from these dreaded, decrepit living conditions of their subjects. The following article by Babalo Ndenze, Lebogang Seale and Piet Rampedi makes my point eloquently and in an erudite manner:
Angry Citizens Confront ANC Leaders
The ANC’s charm offensive in the Eastern Cape and Free State was met with cheers, jeers and tough service delivery questions from disgruntled residents who complained about poor or inadequate services.
The party’s senior officials, including President Jacob Zuma, his deputy, Cyril Ramaphosa, treasurer-general Zweli Mkhize and secretary-general Gwede Mantashe got the message first-hand after the ANC dispatched them to Port Elizabeth in the Eastern Cape, and to the Free State areas of Mangaung, Sasolburg and Welkom.
The ANC’s top brass took the party’s campaign trail to the two provinces to sell the “good story," and to rally support for the party ahead of the May 7 general elections.
While they were warmly received by some residents, others raised concerns about poor services and unfulfilled promises in the delivery of housing, water and jobs.
In Sasolburg, Mantashe met residents complaining about leaking, cracked and incomplete RDP houses.
In ward 13’s Iraq township, a woman grudgingly accepted the ANC’s T-shirt emblazoned with Zuma’s face.
“I don’t like this man sitting here,” said the woman, who did not want to be named, as she rolled out the folded T-shirt and pointed at Zuma’s picture.
A visibly embarrassed Mantashe tried to calm her down: “Don’t vote for him [Zuma], then — vote for the ANC.”
But the woman would have none of it, continuing with her protestations: “I only vote because of Mandela. This one sitting here (on the T-shirt) — he must go. He must give others (a) chance.”
Mantashe later defended Zuma’s leadership in an interview with The Sunday Independent. He said the woman’s statements were “not a representation” of other residents, and blamed the negativity around Zuma’s administration squarely on the media and sections of society.
“The media has been talking about the president negatively. The question of the president and Nkandla doesn’t come only during elections. It’s a preoccupation and obsession of the media and the elite,” said Mantashe.
Mantashe’s election trail in townships around wards 1 and 13 of the Metsimaholo Municipality at first got off to a good start, with residents chanting pro-ANC slogans.
His message to residents was unambiguous as he moved from house to house: “We are asking you to vote for the ANC on May 7.”
Many people grudgingly accepted his request, while complaining about the water seeping through their houses’ walls and floors.
Most residents grudgingly accepted Mantashe’s invitation.
“Yes, we will vote for the ANC, but my house is wet [waterlogged]. The toilets don’t flush because there’s no sewerage (pipes). We are using pit toilets,” said Jane Ncwane, 49, who lives with her four children.
Mantashe said the protests in Bekkersdal, west of Jo'burg, which resulted in last Thursday’s violent clashes between angry residents and the party’s Gauteng provincial leadership, were born of anarchy, rather than genuine concerns.
“It wasn’t as if what happened in Bekkersdal couldn’t be expected. If you have concerns and don’t talk to people (ANC leaders), then there’s something more than a protest. It’s anarchy,” Mantashe said yesterday.
Gun-toting people believed to be bodyguards escorting Gauteng Housing MEC Ntombi Mekgwe and her entourage had to fire shots to disperse angry Bekkersdal residents who barricaded the streets with burning tires, rocks and debris to stop the ANC leaders from conducting a door-to-door campaign in the area.
In Port Elizabeth, Zuma was greeted by loud cheers and jeers during his campaign in the Nelson Mandela Bay Municipality.
He was accompanied by provincial ANC chairman Phumulo Masualle, Gauteng Premier Nomvula Mokonyane, ANC Youth League national convener Mzwandile Masina and chief whip Stone Sizani.
His whirlwind tour of the region saw him visiting a number of communities in Walmer, Jacksonville, Motherwell and Uitenhage.
Walmer residents told Zuma of their daily struggles with their poorly built government houses, which flood after rain. “The rain leaks while we’re sleeping. During the night, everything happens,” said Nowandile Mayekiso, 65.
Another resident, Pumezile Dumezweni, added: “We’ve been waiting for you [Zuma]. Our beds are rotten from the rain. But we will definitely vote for the ANC. We are just mopping (because of the leaking roof) now.”
Dumezweni lives with two unemployed adult children.
However, Zuma got a different reception a few kilometers away in the colored township of Jacksonville, a DA-controlled ward.
Police were forced to create a human chain to contain disgruntled residents.
One of the placards carried by residents read: “Where were you at the beginning of the [housing] project? You come and show your face now that it is election time.”
“There’s gangsterism and selling of drugs. It’s always been a problem, especially the gangsterism here in the area. Since this morning, we knew our president was coming, so we prayed, because we just want peace among the people,” said resident Cindy Coetzee.
Another resident, Annie Lomberg, said the rectification that needed to be done on their houses had not been done.
While Ramaphosa was generally well received during his walkabout in the Bloemfontein central business district yesterday, with young and old posing for pictures with him, he was also confronted by disgruntled residents who demanded jobs and housing.
One of them, Vincent Kgoe, a 41-year-old street vendor, told Ramaphosa he had lived at the Phase 10 informal settlement for 10 years without his conditions improving. He said he needed basic services such as water, electricity and housing.
“I also want to live like other people. I am unemployed. I am hustling here at Central Park, selling cigarettes,” said Kgoe, who added that he would still vote for the ANC.
Ramaphosa said the ANC-led government was on its way to improving Kgoe’s living conditions."
I had to insert this lengthy article because it captures what is happening throughout the country as the ANC canvasses for votes in order to reach a certain percentage and rule over the very people who made it aware of their grievances. Once the 2014 elections are over, those people will get the same treatment they have received for the past 20 years. The people know this, yet they resignedly and grudgingly vote for the ANC, as the article points out.
As for the DA and the other fledgling parties on the ballot, they are just that: fledgling, weak and really ineffective; they are unlikely to tap into the kind of vote the ANC is receiving any time soon. Helen Zille seems not to understand the Africans of Mzantsi and their struggles, and the African faces in the other parties are merely opportunists working to gain, materially or otherwise, from this system.
So, the media is used to keep us entrapped and ignorant of what is going on in the world and in our own world. I have pointed out above the nature and structure of media feeds, access and information for Africans, and the Internet is still hard to come by without going bankrupt. So the media really is controlled by those with big capital, and the rest of us (the poor and wretched masses) are there for their taking. Or are we?...
I paraphrase Jacques Ellul, who writes that:
'Propaganda makes its victim believe that he or she is immune, and hence easier to manipulate.' What, then, are universities doing with these modern textbooks that claim education will prevent one from being victimized by propaganda? Well, it seems that they prove Ellul correct.
Ellul says that Nazi and Communist propaganda is the easiest form to withstand and modern scholars claim that we must be eternally vigilant from falling victim to such insidious forms of propaganda as we witnessed with the Nazi and Communist regimes.
Yet modern advertising campaigns use short term techniques that no one bothers to condemn. So we focus on the vertical integration and pretend that the horizontal integration is non-existent, demonstrating propaganda at work in our society without opposition.
The same technique described above is what we are experiencing here in Mzantsi. The ANC has been carrying on in carte blanche fashion, ignoring what the people are telling it, seeming to have 'all' the answers, and pretending that the reality it is canvassing in is what matters, while choosing to focus on issues that have no relevance to the basic needs of the poor and pooh-poohing away the poor's direct pleas and complaints.
In this way, Ellul's warning demonstrates propaganda at work in our present-day society in Mzantsi, without any opposition, because in reality, 'horizontal integration is nonexistent', to echo Ellul.
This then brings us to the sobering reality I averred above: we need to understand the media and all its merging and emerging gizmos and techniques. In a spirit of the times so filled with instability and dread, it is up to some of us to begin to dismantle the distorted myths of the past and to put in place positive and progressive media that places the people's concerns, lives, history and so forth in the forefront. We need to control our Zeitgeist much more expertly and with confidence.
Phil Donahue and Amy Goodman On The Media
Neil Postman on Cyberspace, 1995
Place-Based Knowledge in the Digital Age
Thomas Fischer writes:
In Victor Hugo's The Hunchback of Notre Dame, the archdeacon holds up a book before the cathedral and says, "This will kill that. The book will kill the edifice." Of course, we know that the printing press did not "kill" buildings. We still have cathedrals and books, and indeed, most books wouldn't survive very long unless stored in buildings.
But we also know that the book changed cathedrals, which had been thought of as "books in stone," with the stories of the Bible depicted in the statuary and stained glass of those buildings. We still have cathedrals today, but they no longer have to serve also as books, and so they have changed in fundamental ways, becoming more abstract in form, more diverse in function, and largely shorn of their didactic ornament.
I mention this because we find ourselves at another moment in time where we could hold up a digital device—a laptop, tablet, smartphone, or e-reader—and declare, in front of either a book or a building, that, "This will kill that." While we know that such devices will not "kill" books or buildings for that matter, we have also gone far enough into the digital revolution to sense that digital media—and spatial media like geographic information systems—seem destined to have the same kind of effect as the printed book did beginning some 500 years ago.
This brings to mind the observation of Marshall McLuhan that each new technology "turns its predecessor into an art form."2 Books will indeed survive the onslaught of downloadable e-books, but as we depend less and less upon books for information or even as the most convenient way to access information, we will increasingly value them as an art form, as McLuhan put it—as beautiful objects and works of great craftsmanship, exemplified in the rise of popular institutions such as the Minnesota Center for the Book Arts.
And what about Victor Hugo's claim that books ultimately trump buildings? Downloadable e-books have not "killed" the library as a building type, but as happened with the cathedral after the widespread adoption of the printing press, libraries have increasingly become places where people go to have experiences that they cannot find anywhere else. Like the modern cathedral, the modern library will likely have a greater array of functions; play a more social and less didactic role in people's lives; and, at least partly, lose the primary purpose they once served of storing large quantities of books. It may be that, in the future, we will go to libraries to admire the craftsmanship of books and then interact with others about what we have learned from the information we have downloaded on our portable devices.
The biggest effect of the digital revolution, though, may be less material and more metaphorical. As McLuhan argued, the "medium is the message," with major changes in media leading to changes in our metaphors and to the meanings that we ascribe to the world.3 The mass-produced book led to a view of the world as a kind of machine, a metaphor that reflected the very thing that made this new media possible: the printing press. And that change in metaphor, in turn, led to many of the revolutions that followed: the Protestant Revolution in the sixteenth century, the Scientific Revolution of the seventeenth century, the Democratic Revolutions of the late eighteenth century, and the Industrial Revolution of the nineteenth century.
The machine metaphor lasted well into the twentieth century and paradoxically gave rise to the very technology—computers—that would ultimately overturn that metaphor. In the early twentieth century, we still heard people talk about the world in mechanistic ways, with intellectuals like Leo Tolstoy calling the body "a living machine" and Le Corbusier calling the house a "machine for living in." Later in the last century, we still heard such mechanistic analogies, such as the physicist Stephen Hawking regarding the "brain as a computer."6
Computing, though, also gave us the Internet. And with that, along with the miniaturization of computing in mobile devices, we have gradually realized that computers represent not just a faster form of computation but an entirely new medium, which has brought with it a new metaphor that increasingly dominates our view of the world.
It took machines, in other words, to move us from a mechanistic view of reality to a networked one. We no longer view the brain as a kind of computer, but instead as a neural network; no longer speak of society as a well-oiled mechanism, but instead as a social network; and no longer see the human body as a machine, but instead as part of the web of life.
This shift in metaphor matters even more than the media that has prompted it. We will, of course, continue to use machines just as we will continue to use books, and so our material world will remain layered with technologies of the past as well as the present. But when we start to think of ourselves and see the world differently, big things begin to happen, as we saw in the wake of the printing press.
Human relationships and social structures change, as we have already seen in the "Arab Spring" revolutions taking place in areas like North Africa, fueled by the crowdsourcing capabilities of cell phone technology; in the micro-lending revolution in the developing world, enabled by the financial transfers possible through social networks; or in the green revolution going on around the world, empowered by our access to information formerly out of reach of ordinary people. The metaphor may ultimately be the message that matters.
The Ecology of Being
This metaphor of the world as a network or web will alter our intellectual lives as well. The old machine metaphor privileged physics, mechanics, and engineering—three of the fields most closely associated with mechanisms. The new web metaphor, instead, draws from fields like biology and ecology, whose understanding of how natural ecosystems work parallels the networked world we now occupy and informs our view of human ecosystems and how they relate to one another in particular places.
The increasingly weblike way of seeing the world, in turn, has profound implications for how and in what form we will seek information. The printed book offers us a linear way of doing so. We begin at the beginning—or maybe at the end, with the index—and work forward or backward through a book, or at least parts of it, to find the information we need. Digital media, in contrast, operate in networked ways, with hyperlinked texts taking us in multiple directions, social media placing us in multiple communities, and geographic information systems arranging data in multiple layers. No one starting place, relationship, or layer has privilege over any other in such a world.
The linearity of the book, compared to the multiplicity of the web, leads to an even more fundamental shift in how we assess reality: from a temporal frame to a spatial one. As with reading a book, we see time as an arrow, a linear path with starting and ending points.
While we learned from Albert Einstein that we could slow time down depending upon how fast we accelerate, we cannot reverse time or occupy different speeds of time at the same time. But that is not the case with space. Like the web, we can manipulate space, move in multiple directions within it, and reverse it—tear a space down, for example—if we choose.
The worldwide web, of course, often seems spatial. It connects us to people and places not in, and often far from, the actual spaces we occupy. Indeed, this new weblike way of engaging in the world appears to have collapsed both time and space, making everything that ever existed anywhere in the world immediately available to us, as if temporal or spatial distance no longer mattered.
Such effects, however, disguise the essentially spatial nature of digital media. The laterally linked and complexly networked nature of the web gives it a spatial form, conceptually if not always physically. And the layering of data and simultaneity of information through the web makes it place-based, even if that "place" exists in virtual space.
This line of thinking, in turn, suggests that the current way we store information—through digital documents and files—and distribute it—through e-mail, e-books, e-zines, and the like—may represent a transition stage in this technology. Such formats mimic the forms that emerged from the printing press and paper technology and, because of their familiarity, have enabled us to adapt to the access of digital information more easily. But they also reinforce a linear way of thinking about information inherently at odds with the weblike way in which we increasingly see the world.
GIS will eventually become a major way—perhaps the dominant way—in which we will access information in the future because of the essentially spatial nature of that software. Rather than see information as discrete bits, accessed linearly and temporally, like moving along a necklace of data, GIS layers information spatially, linking it according to its relevance to other data on a given layer and according to its relevance to all the other layers in a given place. It allows us to "map" information, which may become the primary way we organize, access, and distribute knowledge in the future.
This use of spatial tools to match the spatial nature of a web will have a profound effect on how we think about information itself. The book led us to see information in highly discrete ways. By packaging related content between two covers, books encourage us to see knowledge as a set of distinct disciplines, each with its own discourse and eventually its own set of assumptions and use of language that made it increasingly hard for anyone else not in that discipline to understand.
And by sorting information according to disciplines, books enabled us to think of knowledge divorced from any particular physical or conceptual space. As a result, we can take almost any subject—say, water—and find that topic addressed in myriad ways by many disciplines—the sciences and social sciences, literature and history, art and poetry—all located in different places in a library and all addressed in different ways through different disciplinary lenses.
That way of organizing knowledge has served us well in the last several centuries as we have sought to understand and control the world around us. But it's gotten in our way in recent decades, as we have come to realize the damage we have done to the world and the threat that that poses to our civilization and to us.
It has led, for example, to what Adam Smith called the paradox of value, when he asked, at the beginning of The Wealth of Nations, why we so value diamonds that have so little real use, and why we don't value water, without which we cannot live.7 By dividing information into discrete, disciplinary units, we have created what we might call the paradox of knowledge: we have so much information about the world and yet remain so ill informed about our effect on the world.
This suggests that we may need to arrange knowledge differently in the future, not according to disciplinary categories, but instead according to spatial phenomena and, as Smith would say, to the things without which we cannot live. GIS offers one way of doing so. While the data-rich digital mapping of GIS arose, initially, to ease geographic analysis and enhance spatial decision making, it has the potential to organize knowledge in ways that align more closely with the ways in which the world itself is organized: spatially.
That may make sense in spatially oriented fields, like geography, forestry, or planning, but how, you might ask, does that make sense for fields that appear to have no spatial equivalent: philosophy or pharmacy, history or histology, literature or linguistics? It's a good question, but maybe the wrong one to ask. It may be that we need to stop asking how to preserve our disciplines, which, for all their value, remain abstractions of or at best partial views of the world, and instead start asking how to preserve what remains of the natural world, which our disciplines, if they have any value, need to serve.
How might GIS help us spatialize knowledge? Rather than organize knowledge by type or discipline, we could use GIS to embed all the knowledge relevant to a place in the myriad layers of information about it. And as we scroll over a place, we can select the pertinent layers and begin to see the relationships among disciplines and the connections among data. So many talk about the need for interdisciplinarity, but as long as we organize knowledge in disciplinary silos, the connections among disciplines will continue to elude us. When we instead begin to organize knowledge spatially, the connections come to the fore, as we focus less on the layers and more on the overlay of them and on their relevance to particular situations.
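The layered, place-based organization of knowledge described above can be sketched in miniature. Real GIS software is vastly richer, of course; the class, the layer names, the grid cells, and the data below are all invented purely for illustration.

```python
# A toy sketch of organizing knowledge as spatial layers keyed to a place,
# rather than as disciplinary silos. All names and data are hypothetical.

from collections import defaultdict

class SpatialKnowledgeBase:
    """Stores knowledge as named layers, each keyed by a grid cell (a place)."""

    def __init__(self):
        self.layers = defaultdict(dict)  # layer name -> {cell: value}

    def add(self, layer, cell, value):
        self.layers[layer][cell] = value

    def overlay(self, cell, layer_names=None):
        """Return everything the selected layers (default: all) know about one place."""
        names = layer_names or self.layers.keys()
        return {name: self.layers[name][cell]
                for name in names if cell in self.layers[name]}

kb = SpatialKnowledgeBase()
cell = (12, 47)  # one place on a notional grid

# Different disciplines each contribute a layer about the same place.
kb.add("hydrology", cell, "aquifer recharge zone")
kb.add("history", cell, "nineteenth-century mill site")
kb.add("literature", cell, "setting of a regional novel")

# Overlaying the layers for one place surfaces the cross-disciplinary
# connections that shelving these subjects separately would hide.
print(kb.overlay(cell))
```

Scrolling over a place and selecting pertinent layers, as the passage above describes, corresponds here to calling `overlay` with a subset of layer names; the emphasis falls on the overlay, not on any single layer.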
This, of course, may seem too much to ask: the reorganization of knowledge and the spatializing of education. We have, however, managed over the last couple of centuries to temporalize education. Every field has a history, and almost every one requires that students study the history of the discipline as part of knowing it. Indeed, historical understanding has become such a part of what we define as an educated person that we take it almost for granted, but it wasn't always so. It wasn't until the nineteenth century that we assumed, as Georg Hegel argued, that we couldn't fully comprehend anything without knowing its history.
In the first decades of the twenty-first century, we need to see that the same holds true for space as much as it does for time. We cannot fully understand any field without also spatializing it, without also seeing how it relates to every other discipline as they come together in particular places, with a given group of people, in specific social and environmental contexts. We need to know how disciplines evolved over time, but we also need to know how they, so to speak, hit the ground and how they play out as part of the web that constitutes the whole of a place and of the people there.
This does not mean that we should see such spatial analysis as an end in itself. Except for historians, we rarely study temporal phenomena—history—as an end in itself. In most fields, history serves as a means to an end, as a way of better understanding how the present came to be and what the future might hold. The same is true for a spatial understanding of our fields. Except for a few fields, like my own field of architecture, which does study space as an end in itself, most disciplines will likely see this weblike, spatial turn in our thinking as a means of understanding their subject in new ways. Space represents, like time, an a priori condition, as Immanuel Kant argued—a precondition to everything else, and so having a sense of the relationship of space and time—how a field evolved spatially as well as temporally, what happened where as well as when—will increasingly become necessary to fathom how we have done so much damage to so many places and to the cultures of so many people on the planet even as we purportedly know more about them.
The spatializing of knowledge via its mapping onto places has another advantage as well: it becomes a visual way of conveying information across the barriers of language and to the growing percentage of the human population that remains illiterate. The book divides the literate and illiterate and, as such, has helped reinforce the power of the former over the latter. Hugo understood that when he had the archdeacon hold up the book as killing the building. The medieval cathedrals spoke to both the literate and illiterate and, in some respects, the book made large stores of knowledge inaccessible to the latter.
The digital divide threatens that as well, with the wealthier parts of the world having much more access to information than the poorer parts. The web and cloud computing may help end that division by making most of what we need to know available at low cost, with "dumb" devices able to access information anywhere in the world. But there remains the problem of literacy, as well as translation, and so closing the digital divide through such devices will only partly close the gap that exists between those who have access to knowledge and those who don't.
We may never close the latter until we spatialize knowledge through the use of visual tools like GIS. Enabling people to see the information relevant to their lives, whether or not they can read, and to map it to the places they know to understand the conditions that affect their lives, could have a transformative effect in empowering those who have been left behind by the book and even by the early incarnations of the computer.
GIS may represent the leading edge of computer mapping and visualization technology, but it also signifies, in some respects, a return to the world that Hugo's archdeacon saw as threatened. This brings to mind the observation of the novelist and semiotician Umberto Eco—that modernism represented a premedieval condition, which suggests that our post-postmodern world may have more characteristics in common with the medieval world than we have recognized.8
If the medieval cathedral tells its stories in stone and glass, GIS tells them through layers and overlays. Both do so visually and spatially, both speak to viewers whose language or even whose literacy may not matter, and both reveal relationships and meanings that no book could ever capture. At the same time, the medieval cathedral and digital cartography both have the power to move us to action, to help us see things with our own eyes and without the interpretation of an author who might want to edit what we know or affect what we think.
Just as the book helped give rise to the Protestant Revolution, in which people wanted to read the Bible for themselves and make up their own minds, so too might the visual and spatial power of GIS someday give rise to a secular version of the same, in which people, protesting the power of a few to control so much of the knowledge about the world, will want to see that information for themselves and make up their own minds.
Geodesigning the Future
This leads to my final point about the spatializing of knowledge. The temporalizing of knowledge has, through the agency of history, helped us understand the past and possibly comprehend how the present came to be, but rarely do we venture very far into the future. We call that science fiction or fantasy to set such future-oriented thinking apart from what we can reliably know about the world as it is or as it once was. And we tend to see such work as somehow of lesser quality or validity than what the sciences, social sciences, and humanities offer.
But spatial understanding has a different relationship to the future, as well as the past and present. Spatial knowledge recognizes place, rather than time, as the ultimate continuity in our lives. And while none of us can see the future as a temporal idea, we continually imagine the future of places, projecting possible spatial arrangements based on what we see around us.
The design disciplines do this all the time, using spatial means to imagine what could be, envisioning the future of a place, product, or environment, and depicting that visually for others to see. We commend or criticize a design and accept or alter it to fit our idea of what should happen in a particular place or with a particular product. We don't consider design a lesser discipline, simply a different one, operating according to its criteria and assumptions.
I mention this because the leading edge of GIS rests with the idea of "geodesign," the use of geographic analyses of what is as the basis of making design decisions about what could be. Rather than see future-oriented thinking as somehow fiction or fantasy, geodesign allows us to connect what we know about the world with what we might want the world to be. Just as GIS can serve as a means of organizing knowledge spatially, geodesign might serve as a means of projecting that knowledge into the future and assessing its merits based on what we know about a place.
Why does this matter? Because we stand on a similar precipice as Hugo's archdeacon, with even more drastic implications. We might well say that "this will kill that," but in our case, "this" represents modern civilization and "that," the natural world. Since the rise of the book, although not necessarily because of it, we have devised a Ponzi scheme with the planet over the last couple of centuries, exploiting natural resources, other species, foreign cultures, and even future generations to keep those at the top of this pyramid scheme enriched.
As we know from the collapse of other, smaller Ponzi schemes, such frauds cannot last. They tend to collapse suddenly and without warning, and those most enriched by the scheme—us—have the farthest to fall. The only way we can avoid such a fate is to realign our relationship with the natural world, to reorganize our considerable knowledge about it to reveal the forces that lead to our unsustainable practices, and to relearn how to steward what remains of the planet we have so altered. And if we don't, we have only to alter the terms of Hugo's observation slightly. This—the collapse of our Ponzi scheme—will kill that—the civilization we have built up over the last 200 years.
The spatialization of our knowledge, in other words, isn't just an academic exercise or the result of some arcane interest of a few spatial thinkers or GIS specialists. With it, we can begin to set the foundation for a more sustainable future for ourselves as we see the impact of our actions and the relevance of our knowledge to the particular places in which we live. This will not kill anything except the ridiculous illusion that we can continue to live beyond the carrying capacity of our planet. And doing so is not just about space; it's about time!
Interconnected And Interactive Memes
Social Media In Content In Context
Social Media Extensions..
Mapping It Out On The World Wide Web: Internet
A Book Facing One..
Inside The Link
"Like" Motivates Posts And Drops
Supplied Applied Applications
Modern technology is changing the way our brains work, says neuroscientist
Extensions Of Ourselves
With all the formats, structures, environments, graphics, clicking and downloading going on on the Web, and then some, it is no wonder that our brains are beginning to function in ways dictated by present-day media and mediums. For those who grew up in the analog era, the change is plain to see. But the generation called the 'natives', those born into and raised with the present technological techniques and their attendant features, are better adapted and flow along with these technologies.

We of the analog era, by contrast, still need to catch up, albeit slowly, with these fast-emerging and merging interactive technologies. This has displaced, for most of us, our past habits of cognition, and now we have to try to augment the past with the present future we find ourselves in. Apparently, we face a crisis of magnified proportions: we shall have to constantly upgrade our past knowledge and try to merge it with the current viral splurge of the digital Web. In doing so, we are slowly forgetting, and letting ourselves surrender to, the present technologies and their embedded techniques.
On this subject, I have solicited the sage musings of Susan Greenfield, and I find her article relevant to the Hub and what I have briefly stated above.
Modern technology is changing the way our brains work, says neuroscientist
By SUSAN GREENFIELD
Human identity, the idea that defines each and every one of us, could be facing an unprecedented crisis.
It is a crisis that would threaten long-held notions of who we are, what we do and how we behave.
It goes right to the heart — or the head — of us all. This crisis could reshape how we interact with each other, alter what makes us happy, and modify our capacity for reaching our full potential as individuals. And it's caused by one simple fact: the human brain, that most sensitive of organs, is under threat from the modern world.
Unless we wake up to the damage that the gadget-filled, pharmaceutically enhanced 21st century is doing to our brains, we could be sleepwalking towards a future in which neuro-chip technology blurs the line between living and non-living machines, and between our bodies and the outside world.
It would be a world where such devices could enhance our muscle power, or our senses, beyond the norm, and where we all take a daily cocktail of drugs to control our moods and performance.
Already, an electronic chip is being developed that could allow a paralyzed patient to move a robotic limb just by thinking about it. As for drug manipulated moods, they're already with us — although so far only to a medically prescribed extent.
Increasing numbers of people already take Prozac for depression, Paxil as an antidote for shyness, and give Ritalin to children to improve their concentration. But what if there were still more pills to enhance or "correct" a range of other specific mental functions?
What would such aspirations to be "perfect" or "better" do to our notions of identity, and what would it do to those who could not get their hands on the pills? Would some finally have become more equal than others, as George Orwell always feared?
Of course, there are benefits from technical progress — but there are great dangers as well, and I believe that we are seeing some of those today.
I'm a neuroscientist and my day-to-day research at Oxford University strives for an ever greater understanding — and therefore maybe, one day, a cure — for Alzheimer's disease.
But one vital fact I have learnt is that the brain is not the unchanging organ that we might imagine. It not only goes on developing, changing and, in some tragic cases, eventually deteriorating with age, it is also substantially shaped by what we do to it and by the experience of daily life. When I say "shaped," I'm not talking figuratively or metaphorically; I'm talking literally. At a microcellular level, the infinitely complex network of nerve cells that make up the constituent parts of the brain actually change in response to certain experiences and stimuli.
The brain, in other words, is malleable — not just in early childhood but right up to early adulthood, and, in certain instances, beyond. The surrounding environment has a huge impact both on the way our brains develop and how that brain is transformed into a unique human mind.
Of course, there's nothing new about that: human brains have been changing, adapting and developing in response to outside stimuli for centuries.
What prompted me to write my book is that the pace of change in the outside environment and in the development of new technologies has increased dramatically. This will affect our brains over the next 100 years in ways we might never have imagined.
Our brains are under the influence of an ever-expanding world of new technology: multichannel television, video games, MP3 players, the internet, wireless networks, Bluetooth links — the list goes on and on.
But our modern brains are also having to adapt to other 21st-century intrusions, some of which, such as prescribed drugs like Ritalin and Prozac, are supposed to be of benefit, and some of which, such as widely available illegal drugs like cannabis and heroin, are not.
Electronic devices and pharmaceutical drugs all have an impact on the micro-cellular structure and complex biochemistry of our brains. And that, in turn, affects our personality, our behavior and our characteristics. In short, the modern world could well be altering our human identity.
Three hundred years ago, our notions of human identity were vastly simpler: we were defined by the family we were born into and our position within that family. Social advancement was nigh on impossible and the concept of "individuality" took a back seat.
That only arrived with the Industrial Revolution, which for the first time offered rewards for initiative, ingenuity and ambition. Suddenly, people had their own life stories — ones which could be shaped by their own thoughts and actions. For the first time, individuals had a real sense of self.
But with our brains now under such widespread attack from the modern world, there's a danger that that cherished sense of self could be diminished or even lost.
Anyone who doubts the malleability of the adult brain should consider a startling piece of research conducted at Harvard Medical School. There, a group of adult volunteers, none of whom could previously play the piano, were split into three groups.
The first group were taken into a room with a piano and given intensive piano practice for five days. The second group were taken into an identical room with an identical piano — but had nothing to do with the instrument at all.
And the third group were taken into an identical room with an identical piano and were then told that for the next five days they had to just imagine they were practicing piano exercises.
The resultant brain scans were extraordinary. Not surprisingly, the brains of those who simply sat in the same room as the piano hadn't changed at all.
Equally unsurprising was the fact that those who had performed the piano exercises saw marked structural changes in the area of the brain associated with finger movement.
But what was truly astonishing was that the group who had merely imagined doing the piano exercises saw changes in brain structure that were almost as pronounced as those that had actually had lessons. "The power of imagination" is not a metaphor, it seems; it's real, and has a physical basis in your brain.
Alas, no neuroscientist can explain how the sort of changes that the Harvard experimenters reported at the micro-cellular level translate into changes in character, personality or behavior. But we don't need to know that to realize that changes in brain structure and our higher thoughts and feelings are incontrovertibly linked.
What worries me is that if something as innocuous as imagining a piano lesson can bring about a visible physical change in brain structure, and therefore some presumably minor change in the way the aspiring player performs, what changes might long stints playing violent computer games bring about? That eternal teenage protest of 'it's only a game, Mum' certainly begins to ring alarmingly hollow.
Already, it's pretty clear that the screen-based, two-dimensional world that so many teenagers — and a growing number of adults — choose to inhabit is producing changes in behavior. Attention spans are shorter, personal communication skills are reduced and there's a marked reduction in the ability to think abstractly.
This games-driven generation interpret the world through screen-shaped eyes. It's almost as if something hasn't really happened until it's been posted on Facebook, Bebo or YouTube.
Add that to the huge amount of personal information now stored on the internet — births, marriages, telephone numbers, credit ratings, holiday pictures — and it's sometimes difficult to know where the boundaries of our individuality actually lie. Only one thing is certain: those boundaries are weakening.
And they could weaken further still if, and when, neurochip technology becomes more widely available. These tiny devices will take advantage of the discovery that nerve cells and silicon chips can happily co-exist, allowing an interface between the electronic world and the human body. One of my colleagues recently suggested that someone could be fitted with a cochlear implant (devices that convert sound waves into electronic impulses and enable the deaf to hear) and a skull-mounted micro-chip that converts brain waves into words (a prototype is under research).
Then, if both devices were connected to a wireless network, we really would have arrived at the point which science fiction writers have been getting excited about for years. Mind reading!
He was joking, but for how long the gag remains funny is far from clear.
Today's technology is already producing a marked shift in the way we think and behave, particularly among the young.
I mustn't, however, be too censorious, because what I'm talking about is pleasure. For some, pleasure means wine, women and song; for others, more recently, sex, drugs and rock 'n' roll; and for millions today, endless hours at the computer console.
But whatever your particular variety of pleasure (and energetic sport needs to be added to the list), it's long been accepted that 'pure' pleasure — that is to say, activity during which you truly "let yourself go" — was part of the diverse portfolio of normal human life. Until now, that is.
Now, coinciding with the moment when technology and pharmaceutical companies are finding ever more ways to have a direct influence on the human brain, pleasure is becoming the sole be-all and end-all of many lives, especially among the young.
We could be raising a hedonistic generation who live only in the thrill of the computer-generated moment, and are in distinct danger of detaching themselves from what the rest of us would consider the real world.
This is a trend that worries me profoundly. For as any alcoholic or drug addict will tell you, nobody can be trapped in the moment of pleasure forever. Sooner or later, you have to come down.
I'm certainly not saying all video games are addictive (as yet, there is not enough research to back that up), and I genuinely welcome the new generation of "brain-training" computer games aimed at keeping the little grey cells active for longer.
As my Alzheimer's research has shown me, when it comes to higher brain function, it's clear that there is some truth in the adage "use it or lose it".
However, playing certain games can mimic addiction, and the heaviest users of these games might soon begin to do a pretty good impersonation of an addict.
Throw in circumstantial evidence that links a sharp rise in diagnoses of Attention Deficit Hyperactivity Disorder and the associated three-fold increase in Ritalin prescriptions over the past ten years with the boom in computer games and you have an immensely worrying scenario.
But we mustn't be too pessimistic about the future. It may sound frighteningly Orwellian, but there may be some potential advantages to be gained from our growing understanding of the human brain's tremendous plasticity. What if we could create an environment that would allow the brain to develop in a way that was seen to be of universal benefit?
I'm not convinced that scientists will ever find a way of manipulating the brain to make us all much cleverer (it would probably be cheaper and far more effective to manipulate the education system). And nor do I believe that we can somehow be made much happier — not, at least, without somehow anesthetizing ourselves against the sadness and misery that is part and parcel of the human condition.
When someone I love dies, I still want to be able to cry.
But I do, paradoxically, see potential in one particular direction. I think it possible that we might one day be able to harness outside stimuli in such a way that creativity — surely the ultimate expression of individuality — is actually boosted rather than diminished.
I am optimistic and excited by what future research will reveal into the workings of the human brain, and the extraordinary process by which it is translated into a uniquely individual mind.
But I'm also concerned that we seem to be so oblivious to the dangers that are already upon us.
Well, that debate must start now. Identity, the very essence of what it is to be human, is open to change — both good and bad. Our children, and certainly our grandchildren, will not thank us if we put off discussion much longer.
There May Be Some Ways To Avoid Being Listened To And Spied On Via One's Cell Phone
There's also a lot to be said about the gizmos we use today as if they are part and parcel of our lives. The most common is the cell phone. Cell phones are subject to many shenanigans perpetrated against citizens all over the world by security apparatuses globally. The NSA in America has already been named and written about. On this Hub, I will use an article I found on the Democracy Now website, wherein the host, Amy Goodman, interviews Christopher Soghoian under the topic:
Security Researcher Christopher Soghoian on How to Use a Cellphone Without Being Spied On.
A new investigation by The Intercept reveals the National Security Agency and its British counterpart, the GCHQ, hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe. The secret operation targeted the Dutch company Gemalto. Its clients include AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers around the world.
In part two of our interview, we speak with Christopher Soghoian of the American Civil Liberties Union about ways to securely use your cellphone.
In our online archive, watch all of our reports about NSA surveillance and the leaks of Edward Snowden.
AMY GOODMAN: This is Democracy Now! democracynow.org, The War and Peace Report. I’m Amy Goodman, with part two on a new investigation by The Intercept that reveals the National Security Agency and its British counterpart, the GCHQ, hacked into the internal computer network of the largest manufacturer of SIM cards in the world, stealing encryption keys used to protect the privacy of cellphone communications across the globe. The secret operation targeted the Dutch company Gemalto. Its clients include AT&T, T-Mobile, Verizon, Sprint and some 450 wireless network providers around the world. It produces two billion SIM cards a year.
To talk more about the significance of this story, we’re joined by Chris Soghoian. He’s the principal technologist at the American Civil Liberties Union and a visiting fellow at Yale Law School’s Information Society Project. The piece in The Intercept that reveals all of this is by former Democracy Now! correspondent and Intercept co-founder Jeremy Scahill.
Chris Soghoian, in part two of our discussion, first quickly summarize what the great SIM heist is.
CHRISTOPHER SOGHOIAN: So, in this operation, GCHQ, which is Britain’s intelligence agency, hacked into this major Dutch supplier of SIM cards. This is a company that provides these microchips to wireless carriers around the world, and these are the chips that provide the security that’s in our phones. They secure the communications between our phone and the phone network and are intended to protect our calls and text messages from interception by private parties.
AMY GOODMAN: Should Gemalto be doing—can they protect the SIM card?
CHRISTOPHER SOGHOIAN: I mean, what’s in the story, the anecdotes that are revealed in the story suggest that Gemalto and its wireless carrier partners have at times followed pretty pathetic security practices, and so it didn’t seem like—that GCHQ had to work too hard in some cases. But, you know, that was just for some of the collection. In other cases, it seems like GCHQ—GCHQ’s hackers targeted and hunted individual engineers and employees.
I mean, they were stalking these engineers online in an effort to compromise their email accounts, their Facebook accounts, and then ultimately to compromise their computers as a way to gain access to the inner sanctum of Gemalto’s network. That kind of stalking of engineers is really terrifying, particularly given that, you know, we’re not talking here about a government stalking or targeting a terrorist. These engineers are not accused of breaking any law.
These are law-abiding people who have mortgages and families and husbands and wives, and they just happen to work for companies that these intelligence agencies, you know, think are fair game. And, you know, I think this article, building on The Intercept's piece last year about GCHQ's hacking of Belgacom, Belgium’s largest phone company, is really going to serve as a wake-up call to the entire technical industry, because, you know, in essence now, it seems clear that any engineer at any company that does anything remotely interesting is now fair game for GCHQ and NSA and their other partners.
AMY GOODMAN: How do AT&T and Verizon compare to companies like Google and Apple?
CHRISTOPHER SOGHOIAN: So, the phone companies don’t do a very good job in securing their communications. The encryption technology that’s built into your phone, the technology that protects your telephone calls and text messages as they go over the air, the encryption was built in the late '80s and early ’90s. It was—forms of it were broken in the 1990s by graduate students, and we're still using it today.
These carriers are—these large, multibillion-dollar telephone companies are really not that interested in providing a secure method of communication. They’re certainly not interested in designing or deploying methods of communication that our own governments cannot intercept, let alone other governments.
And, you know, for more than a hundred years, the US telephone companies have been happily providing wiretapping assistance to law enforcement and intelligence agencies. We should just—we should give up on this idea that AT&T or T-Mobile or Verizon are ever going to deliver truly secure communications to their customers.
You know, in contrast, it really does seem like Silicon Valley companies are much more interested in providing strong, secure communications services, to the extent that their business models permit. And what I mean by that is, you know, at the end of the day, you’re not paying Google for their services, and so they want to read your emails, but a company like Apple, who—you know, Apple doesn’t make money by selling you email service.
They make money by selling you an expensive phone. And as long as you keep buying the phones, Apple makes lots of money. For companies like that, where their business model and your privacy are more aligned, I really do think we can expect that these companies will provide us with much stronger and much more secure methods of communication.
Particularly given that they’re competing in a global market where, you know, German consumers don’t want a phone that can be easily spied on by the NSA. And so, you know, these tech companies really are having to up their game.
AMY GOODMAN: So, Chris Soghoian, you started in part one of our interview talking about how people can protect themselves. Explain further what people can do.
CHRISTOPHER SOGHOIAN: Sure. So, again, the voice and text message services provided by your wireless carrier, if you’re just sending a text message through your phone or making a telephone call through your phone, those calls can be intercepted by your own government, by police and intelligence agencies.
They can be intercepted by foreign governments who are operating domestically. They can be intercepted by sophisticated criminals and by hackers and by stalkers. You should not expect that those kinds of communications services can deliver real security.
On the other hand, there are now a number of apps and Internet-based services that you can run on your smartphone that will give you much, much more secure communications. So, Apple has built iMessage into its iPhone product for several years. If you have an iPhone and you’re sending a text message to someone else who has an iPhone, this is used by default.
Those messages are encrypted in a strong way. They’re sent via Apple’s system, and it’s very, very difficult for governments to intercept those. If you’re using WhatsApp, which is a service now owned by Facebook and used by hundreds of millions of people around the world, if you’re using WhatsApp on Android, it’s encrypted, again, in a very strong way.
And if you have an Android or iPhone, you can download third-party apps, the best of which are called Signal for iOS and TextSecure, T-E-X-T Secure, for Android. These are best-of-breed free applications made by top security researchers, and actually subsidized by the State Department and by the US taxpayer. You can download these tools today. You can make encrypted telephone calls. You can send encrypted text messages. You can really up your game and protect your communications.
To be clear, if you are a target of a law enforcement or intelligence agency and they really care about you, they can hack into your phone, and these tools won’t stop that. But you can make it much more difficult. You can make it so that they have to work really hard. And, you know, it’s unfortunate that the phone companies, that AT&T and Verizon haven’t warned their customers. They should be telling the public. They haven’t. But we can do things right now to make wiretapping much more difficult and much more expensive.
AMY GOODMAN: Is it possible to use a phone without a SIM card?
CHRISTOPHER SOGHOIAN: No, the SIM card is sort of like the driver’s license for the phone. The SIM card proves that you have a valid account. That’s, you know, what lets the phone company know who to send the bill to. So, you need a SIM card. You know, think of it this way. When you go to Starbucks, you don’t expect Starbucks to provide you with a secure Internet connection.
You expect them to provide you with an Internet connection, and then you bring your own security on top. If you’re using Gmail or Facebook or Twitter, all of those services build their own security in. They don’t rely on the Internet provider to do that for you. By the same token, we should stop relying on the AT&Ts and Verizons of the world to provide security.
We should just use them for data, and then we should run secure, encrypted communications apps that go over the data portion of the network. We know how to do this.
These apps exist. We can secure our communications. And, you know, it’s been delightful post-Snowden.
You know, the journalism community is really starting to take note of this. And I now regularly have encrypted telephone calls with national security reporters around the world. I send encrypted emails with reporters, with lawyers. These tools really are getting to be easy to use, and it’s just—we just need more people to start using them.
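Soghoian's "bring your own security" principle — encrypting a message before the carrier ever sees it, so the network only transports ciphertext — can be illustrated with a toy one-time pad in Python. This is a conceptual sketch only; real apps like Signal use vetted, far more practical protocols, and the message and key here are made-up examples:

```python
import secrets

# Toy illustration of "bring your own security": encrypt *before* the
# carrier sees the data, so the network only ever carries ciphertext.
# A one-time pad XORs the plaintext with a random key of equal length.
# (Illustrative only -- real apps such as Signal use vetted protocols.)

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "one-time pad key must match length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"                    # hypothetical message
key = secrets.token_bytes(len(message))      # shared out-of-band, used once

ciphertext = encrypt(message, key)           # all the carrier can intercept
assert decrypt(ciphertext, key) == message   # only the key holder recovers it
```

The design point is exactly the Starbucks analogy above: the transport (carrier, Wi-Fi) is treated as untrusted plumbing, and security is layered on top by the endpoints.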
AMY GOODMAN: So, explain again, if you just want to make a phone call, but you don’t want it to be listened to or tracked, how would you do that? How would you encrypt a phone call?
CHRISTOPHER SOGHOIAN: So, if you have an Apple device, you could download—so FaceTime is already installed in your iPhone. It’s built by Apple. It’s built into the iPhone. If you make a FaceTime audio or video call from your iPhone to someone else’s iPhone or iPad, it’s encrypted with very strong technology, and it will be very, very difficult for a government to intercept. If you have an—if you don’t want to use an Apple encryption product, there’s a fantastic app in the app store called Signal, S-I-G-N-A-L. It’s free.
It’s open source. It’s very, very good. That makes encrypted telephone calls anywhere in the world for free. Even if you’re not worried about security, it’s actually a way of saving money on your phone bill. And then if you’re using Android, there’s a great app by the same people who do Signal called RedPhone, R-E-D-P-H-O-N-E. Again, it’s free. It’s supported by the US government.
So you’re paying for it anyway; you might as well use it. And that will also let you make free encrypted telephone calls. These tools work, and they make—they make wiretapping much more expensive, which is what we want. We want governments to have to focus their resources on the people that really matter, the real threats, but they shouldn’t be able to spy on everyone at low cost.
AMY GOODMAN: I also wanted to ask about another NSA story in the news this week, this new probe that finds the NSA has embedded spying devices deep inside hard drives in computers around the world. The Russian firm Kaspersky Lab says it uncovered the spyware in personal computers across 30 countries, from Iran to Russia, Pakistan, Libya, China, Belgium, Ecuador and the United States. The targets include government institutions, oil and gas firms, Islamic activists, scholars and the media. Can you comment on this, Chris Soghoian?
CHRISTOPHER SOGHOIAN: Sure. So, the NSA has a multibillion-dollar budget. They hire some of the best and brightest hackers, and they give these really smart people a lot of resources and basically tell them that they can do whatever they want. And, you know, it’s not surprising that a well-resourced team, that is not constrained by the law, can get up to a lot of really interesting and terrifying things.
You know, they’ve been given a mandate by their superiors to go out and hack and get access to every system they can, and they’re doing that. You know, in one way, as someone with a technical background, I’m impressed with what they’ve been able to do.
But I think, you know, we should all be very scared about what the NSA is doing, the capabilities they have and the extreme lack of effective oversight that’s taking place. For an agency that is engaged in this degree of highly sophisticated technical compromise of computer systems, the extent to which policymakers and agency overseers lack technical competence, I think, should be terrifying.
We need to make sure that those in Congress, those in the courts, who are supposed to perform oversight over the NSA, we need to make sure they have technical understanding or technical advisers. And the fact that there’s no technical oversight of these agencies, I think, is one of the reasons why they’ve been able to do as much as they have with as little oversight.
AMY GOODMAN: Chris, why is the government funding the apps that you’re recommending, that make it harder to break into, eavesdrop on texts or calls?
CHRISTOPHER SOGHOIAN: Because they’re tools of foreign policy. You know, the US government isn’t this one machine with one person, you know, dictating all of its policies. You have these different agencies squabbling, sometimes doing contradictory things. The US government, the State Department has spent millions of dollars over the last 10 years to fund the creation and the deployment and improvement to secure communications and secure computing tools that were intended to allow activists in China and Iran to communicate, that are intended to allow journalists to do their thing and spread news about democracy without fear of interception and surveillance by—
AMY GOODMAN: But maybe—
CHRISTOPHER SOGHOIAN: —the Chinese and other governments.
AMY GOODMAN: But maybe the US government has a way to break in.
CHRISTOPHER SOGHOIAN: Well, you know, it’s possible that they’ve discovered flaws, but, you know, they have—the US government hasn’t been writing the software. They’ve been giving grants to highly respected research teams, security researchers and academics, and these tools are about the best that we have. You know, I agree. I think it’s a little bit odd that, you know, the State Department’s funding this, but these tools aren’t getting a lot of funding from other places.
And so, as long as the State Department is willing to write them checks, I’m happy that the Tor Project and WhisperSystems and these other organizations are cashing them. They are creating great tools and great technology that can really improve our security. And I hope that they’ll get more money in the future.
AMY GOODMAN: And, Chris, very quickly, what’s the legislation you think needs to be passed, at least in the United States, to protect privacy?
CHRISTOPHER SOGHOIAN: I mean, we need so much. We need a technically informed FISA court. We need controls over domestic surveillance. But we also need—we need strict oversight and legislative controls over what NSA does abroad, which right now are largely regulated under Executive Order 12333. Most of the scary things that NSA does, it doesn’t have to go the FISA court to get approval for, and I think we really need to rein that in.
AMY GOODMAN: Chris Soghoian, we want to thank you so much for being with us, privacy researcher and activist, principal technologist at the American Civil Liberties Union, also visiting fellow at Yale Law School’s Information Society Project. You can also go back to democracynow.org to see part one of this conversation. I’m Amy Goodman. Thanks so much for joining us.
Mobile Apps Invading Privacy
Smartphones are a wet dream for every spy. In fact, all cell phones are, but the smart ones communicate your position all the time too. So Google or Apple will know what strip club you prefer, and when and how often you visit it.
An examination of 101 popular smartphone "apps"—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone's unique device ID to other companies without users' awareness or consent. Forty-seven apps transmitted the phone's location in some way. Five sent age, gender and other personal details to outsiders.
The findings reveal the intrusive effort by online-tracking companies to gather personal data about people in order to flesh out detailed dossiers on them.
Smartphone users are all but powerless to limit the tracking. With few exceptions, app users can't "opt out" of phone tracking, as is possible, in limited form, on regular computers. On computers it is also possible to block or delete "cookies," which are tiny tracking files. These techniques generally don't work on cellphone apps.
Here are some other points to consider and ponder:
Are you suspicious that your cell phone is being tracked, tapped, or monitored by spy software? If so, there are a few telltale signs you can look for to help you find out if any spy software has been installed on your smartphone, or if your phone is being tapped or tracked by some other means. These signs are often subtle, but as long as you know what to look for, you can sometimes find out if you are being spied upon through your cell phone. Any modern smartphone (iPhone/iOS, Android, BlackBerry or Symbian) can be infected by cell phone spy software.
Even if you do not think you are being tracked, tapped, or otherwise monitored, you should pay heed to these possible symptoms. With today’s obsession with cell phones, more and more personal information is being transmitted through our phones, and most people fail to realize just how easy it is for nefarious people to hack your phone and steal your identity.
Finding the Phone Spy Software
There are a few telltale signs that, if not caused by other means, are likely caused by spy software. These can be caused by other factors, so you have to use your best judgment, but these symptoms can at least give you a hint of a problem.
Odd Behavior: If your phone is misbehaving in odd ways, like suddenly lighting up when your phone is not in use or making random beeping noises, this could be a hint of a problem. Likewise, random shutdowns, if not caused by a fault with your operating system or hardware, can be caused by spy software. Granted, any cell phone can show strange symptoms once in a while, but if these symptoms are happening with regularity, you should be suspicious of a problem.
Poor Battery Life: Many spy software applications increase your phone’s battery usage, so you should monitor how quickly your battery runs down. If it seems to be needing a charge more often, you may have spy software on your phone. Note that you are looking for a dramatic change, not a gradual one; battery lifetimes do drop over time, but this change is usually much more subtle than that caused by apps running when they are not supposed to.
Random Reboots: If your phone randomly reboots even when the battery is fully charged, this may be another warning sign. Spy software is often buggy and is known to cause random reboots.
Background Noise: Try not to get too paranoid on this one, because the nature of cell phone communication can cause problems. However, if it is happening more often than usual, this can be a telltale sign of spy software. Listen for static, clicking, beeping or voices in the background. Some call recording features can cause this, too, so be mindful if you are intentionally recording a conversation.
Increased Data Usage: If you notice that you are using up more data than usual, this can be caused by the spy software sending data to the recipients. Some of the less reliable spy apps are known for using excess amounts of data; therefore, unusual data usage can be a significant symptom of spy software hiding on your cell phone.
Watching for these symptoms can help you detect whether your phone is being spied on in some fashion. You should always do a master reset of your phone if you suspect this problem. (SecureVoice)
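The "increased data usage" symptom above can at least be checked quantitatively. Here is a hypothetical sketch that flags a day whose mobile-data usage far exceeds a rolling baseline; the daily figures and the two-sigma threshold are assumptions for illustration, and a real check would read your carrier's or operating system's per-app data counters:

```python
import statistics

# Sketch of the "increased data usage" check: flag days whose usage
# exceeds mean + `sigmas` standard deviations of the preceding `window`
# days. The numbers below are invented for illustration.

def flag_spikes(daily_mb, window=7, sigmas=2.0):
    """Return indices of days whose usage stands out against the
    rolling baseline of the previous `window` days."""
    flagged = []
    for i in range(window, len(daily_mb)):
        baseline = daily_mb[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if daily_mb[i] > mean + sigmas * stdev:
            flagged.append(i)
    return flagged

usage = [40, 35, 42, 38, 41, 39, 37, 120, 36, 40]  # hypothetical daily MB
print(flag_spikes(usage))  # prints [7]: day 7 spikes against the prior week
```

A sudden, sustained jump like day 7 does not prove spyware, of course; it is only a prompt to look at which app is consuming the data.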
Thinking, in Any Way or Form, Is a Human Act
Analogic And Digital Environments
Social media is made of analog people. Not robots.
Social Media has always been in existence — using different mediums — and it has always been analog. Whether word-of-mouth, pamphlets, telegraph, letters to the editor, telephone, or snail mail, humans have always been social, and they have used the technology of the era to accomplish this. This communication now exists online, which means it has a larger presence thanks to the global and fast nature of the medium, but that has not changed the nature of the message.
The message is created and read by analog people.
Our gizmos always interact with us, or we with them in many strange ways, as depicted in the article above. Sometimes, some of us will throw up our arms in the air and say that 'these machines have a mind of their own'. This would be said in jest and frustration when the predictable and expected functioning of the machine goes awry.
At times, on app-based calls, cell phone calls, or even the older landline phone, one does from time to time hear echoes, strange noises, and many indiscernible sounds emanating in the background. Other times the phone shuts down for no reason known to the user, who is left perplexed, staring at or shaking the phone.
I read somewhere that the words we use in conversing are code words that trigger online phone bots to tune in to one's conversation. I do not know how true that is, but I do know that I saw on TV that the newer models of TV sets track one's voice in the house, and that one's TV is watching one. When I used to teach courses in Communication and Media, I would aver to my students that we really do not know whether our TVs are spying on or watching us.
Today we are monitored, tracked and stalked by our own gadgets: cameras in our cities, in our hands, or whilst driving (GPS tracking). I really do not know how Google Maps' cameras work, but from the frozen images of my hometown I saw firsthand what Google is capable of. One of my childhood friends was photographed at his home holding seeds, and strangely enough he was looking in the direction of the camera in the Google photo; whether he saw the machine or not, I will never know.
I have seen people on Google Maps in their yards, coming out of the outside toilet and heading into their houses, and children playing in the streets. These were only still photos, yet to me they said a great deal. I now know that we are being watched from the sky, and where our feed goes I will never know. I have tried to discuss this spying on ordinary and important people going on around the world in some of my published Hubs.
I have had friends' phones ring my house phone by accident; even when I say 'Hello,' they do not hear, and sometimes I have overheard damning details whilst the owner of the phone was not aware that it was turned on. This has happened many a time, and to me too: once it was my brother's phone, taping me, with my girlfriend listening as I talked to him about the salacious details guys share with each other about women. A phone gone rogue.
One of the problems I have encountered myself is the short battery life of the phone. Sometimes people barely touch a button and the phone turns on, and this creates a lot of problems: I can hear the person whose phone rang mine, while they, the owners of the phone, do not even know that it is on, losing a lot of battery life and airtime minutes in the process.
I do see a lot of reboots on my TV, not so much with the computer, and I use the cell phone as little as possible. Sometimes the TV has data being filled in, and at times I do not really understand why it reboots when I am sitting far from it and not even touching the remote. Then the machine goes into shutdown or reboot mode… Duh!... 'These machines have their own minds,' I often catch myself saying.
The article above by SecureVoice is very much spot on, because a lay user observes and notes many of the facts they point out regarding the strange comportment of our cell phones. I, for one, am still averse to the use of cell phones, except for one certain app I use to communicate with my friends. Otherwise, I steer clear of the cell phone, for I realize there are many unresolved issues that are compounded by our use of it.
I do understand that converging and emerging technologies bring along other unintended consequences. It is these that I am trying to tabulate, and keep tabs on, in this Hub. I realize there is going to have to be a serious and concerted effort to 'keep up' with these technologies; I am still working at it, very hard, and it is not getting any easier. My son is a 'native' of these technologies; at times I call him my "IT" man, but he keeps insisting that I learn, for he cannot fathom why I am lagging so far behind. I always sum it up by saying he never lived in the analogic era. He is a product of the Digital World.
The Nature of Digital
Digital is something we program. It’s more reliable, more predictive. We create an environment with a specific set of questions and a set number of ways to answer the questions. We determine a limited number of solutions to a problem. We create flow charts and know that while we might not control how the answer is reached, the eventual answer reached will be within our parameters.
Digital seems self-aware in that it mimics our own problem-solving abilities. We programmed it that way.
Traditional marketing is digital. Marketing wants a reliable, controllable customer. They want a customer that is easy to configure, and immune to noise and outside influence. They want a market that is simple to design, with limited ways to resolve the questions. Creating this very controlled marketing is very expensive and takes a serious investment to find the optimal programming.
Going Analog In A Digital World
Traditional marketing doesn’t really work, especially now when the technology has actually encouraged that analog path of least resistance. When there were fewer paths, we could at least herd people towards a certain outcome. Now, there are too many paths, too many ways for people to find a different way, a different message, and a better equilibrium. Traditional marketing is a digital answer to analog people who have endless paths in front of them.
Can people be made digital? Not really. We don’t really want a robot for a customer anyway. Can marketing be made analog? Definitely, by using Social Media.
Think about how viral marketing is analog; the message is sent out into the unknown and the people are allowed to incorporate it, use it, and share it according to the needs of their personal equilibrium. Social Media, even when it hasn’t gone viral, is the same. We can provide paths for people to find, we can encourage them to see our message, our brand, as the path of least resistance. We make our path the easiest and people will naturally flow into it.
Tap into both. Embrace the analog nature of people and marketing while using the great Social Media technology available. And be thankful that we aren’t selling to robots. Robots might be easier to sell to, but they’re really rough on humanity when they invade. (John Glenday)
When Mr. Rushkoff moves on to what he calls digiphrenia — digitally provoked mental chaos — he writes about present shock’s capacity to be a great leveler. Now that a single Facebook post can have as much impact as 30 years’ worth of scholarship, how do we analog creatures navigate the digital landscape? How do we shield ourselves from distraction, or gravitate to what really matters?
This section of Mr. Rushkoff’s agile, versatile book veers into chronobiology, a burgeoning science that has not yet achieved peak popular impact. Dr. Oz may speak of it on television, but the correlation between time and physiology is ripe for more exploration. Mr. Rushkoff, who likes being his own guinea pig, divided his writing of this book into weekly segments based on a lunar cycle.
Among the intuitive ideas turned tangible by “Present Shock” is “filter failure,” the writer and teacher Clay Shirky’s improved term for what used to be called “information overload.” Mr. Rushkoff’s translation: “Whatever is vibrating on the iPhone just isn’t as valuable as the eye contact you are making right now.”
Your new boss isn’t the person in the corner office; it’s the P.D.A. in your pocket. And there are the discrepancies between age and appearance that are increasingly possible in our malleable present. The book contends that young girls and Botoxed TV “housewives” all want to look 19; that hipsters in their 40s cultivate the affectations of 20-somethings, to the delight of marketers; and that apocalyptic types just want to opt out of time altogether. “Present Shock” gives them good reason to feel that way.
But in the end only some of the ills in “Present Shock” can be chalked up to dehumanizing technological advances. “I am much less concerned with whatever it is technology may be doing to people than what people are choosing to do to one another through technology,” Mr. Rushkoff writes. “Facebook’s reduction of people to predictively modeled profiles and investment banking’s convolution of the marketplace into an algorithmic battleground were not the choices of machines.” They were made by human intelligence, because present shock’s ways of targeting, pinpointing and manipulating aren’t just shocking. They’re very lucrative too.
How Our Devices Make Us Less Efficient
Speaking at PSFK CONFERENCE 2013, Douglas Rushkoff introduced the concept of ‘present shock,’ from his recent book Present Shock: When Everything Happens Now. The concept examines how we are using digital devices to manage our lives, and how something designed to give us more time can, in fact, give us less. Central to this is the dichotomy between Chronos, which is time of the clock, and Kairos, which is timing in a more abstract sense. When these two distinct concepts are confused, and when our busy and digitally connected lives pull us too far out of Kairos and into Chronos, we are left stuck in a state of ‘present shock.’
Rushkoff focuses on how the digital renaissance of the past few decades has been recast as a way to extract more and more time out of individuals, rather than an opportunity to improve efficiency and slack in people's lives. Instead of extracting us from the ‘time is money’ mentality of the industrial revolution, digital connectivity has found ways to extend this obsolete agenda into the tech age. So rather than giving us more time, time is in fact being subtracted.
Rushkoff goes on to propose that time does not create media, but rather media creates time. Looking through history, the invention of writing gave way to historical documentation and the idea of the contract, and the invention of the clock gave way to breaking down our days into segmented sections, allowing people to sell pieces of their time. In this way, the current digital age has created a world of updates and interruptions, which leaves us in the disconnected neurological state of ‘present shock.’
How Technology Killed The Future
The crises arrive from everywhere, and all at once. The responses do, too. New allegations about NSA eavesdropping, for instance, pop up on Twitter before the White House has had a chance to fully spin the last set. A Cabinet secretary is presumed ripe for firing over a botched health care website even before the site’s problems are fully diagnosed.
The pause between an event and a response to it—the space in which public opinion was once gauged—is gone, and now the feedback is indistinguishable from the initial action. The verdict, the takeaway, the very meaning behind what is happening is more elusive than ever before. We cobble together narratives and hunt for conclusions. Millions of social media posts per minute are parsed and analyzed as if those vast bits of opinion, conjecture and fancy somehow coalesce into a story.
But they don’t.
Welcome to the world of “present shock,” where everything is happening so fast that it may as well be simultaneous. One big now. The result for institutions—especially political ones—has been profound. This transformation has dramatically degraded the ability of political operatives to set long-term plans. Thrown off course, they’re now often left simply to react to the incoming barrage of events as they unfold. Gone, suddenly, is the quaint notion of “controlling the narrative”—the flood of information is often far too unruly. There’s no time for context, only for crisis management.
Sure, the rate at which information spreads and multiplies has accelerated, but what’s taking place now is more than a mere speeding up. What we’re experiencing is the amplification of everything that happens to be occurring at the moment, and a diminishment of everything that isn’t. It’s not just that Google search results favor the recent over the relevant; it’s that suddenly an entire society does.
I feel myself chasing the “now” all the time. Last June, on my way to the stage to speak about the phenomenon of present shock at the Personal Democracy Forum, the NSA scandal hit the wires and CNN began pinging my phone for me to appear on air. Sensing a kind of meta-moment, I switched the approach for my talk and wove the emerging news story into my remarks, reading live updates from my phone as I talked about our urge to be caught in the now. Using any other example of a fast moving news story would have felt past tense. My talk became more of a demonstration: an example of present shock about present shock, on a day of present shock.
It wasn’t always like this. As recently as the end of the 20th century, the zeitgeist was animated by a kind of forward-leaning futurism. There was a sense that we were accelerating toward a big shift fueled by new technologies, networks and global connectivity.
Today, that shift may have finally occurred, but rather than encouraging us to look further ahead, it has instilled in us a pervading “presentism.” Our old obsession with the pace of progress has been drowned out by the onslaught of everything that is happening right now. It’s impossible even to keep up, much less to look ahead.
This new paradigm is fundamentally scrambling our politics. Our leaders’ ability to articulate goals, organize movements or even approach long-term solutions has been stymied by an obsession—on their part and ours—with the now. Unless we adapt to this new presentism, and soon, we may edge more dangerously close to political paralysis.
As you might expect, we can blame our current condition, at least in part, on digital technology. Consider the remote control, DVR and even YouTube, which in their own way have each eroded the traditional storytelling functions of television, rendering instead a deconstructed landscape of independent memes. The typical story arcs on which both news and entertainment used to depend no longer function when the audience can dart away—or move forward and backward—with the press of a button.
Traditional stories with beginnings, middles and ends just don’t work anymore. The looping mini-movies on Vine, for instance, don’t even attempt to adhere to them. And when we’re not engaged with disjointed mashups like that, we gravitate toward epic, endless sagas—such as “Game of Thrones” or even “Breaking Bad”—which move more like fantasy roleplaying games than the TV shows of old.
Our relationship with social and political movements is changing much the same way. Gone are the days when we could follow a charismatic leader on an ends-justify-the-means journey toward a clear goal. A person like Martin Luther King Jr. wouldn’t be able to rally people to realize his great dream today.
He would be as desperate for hourly retweets as the rest of us, gathering “likes” from followers on Facebook as a substitute for marching with them. Imagine John F. Kennedy attempting to rally national support for a decade-long race to the moon. The extreme present is not an environment conducive to building lasting movements.
But without a guiding narrative to make sense and create purpose, we end up relying too much on whatever happens to be happening in the moment. When it occurs, we over-respond to the latest school shooting. But over the long term, we lack the resolve or attention span to do anything to stop others from occurring.
Terror and rage replace our ideological goals; we end up reacting only to the latest crisis. And, because of what we can find (and what we can say) on the Internet, we react with a false confidence in our command of the facts. Just because we can all blog in the same size font doesn’t mean all of our opinions are equally valid or informed.
Consider the movements that have gained the most attention so far this century. The Tea Party may have originated as an almost libertarian anti-tax movement, but it gained steam the more it became characterized by an impatience for action. As a movement, it has focused on seeing direct results, now. Better to shut down the government in the present, as proof of what can be done, than to quietly persist without knowing whether one’s actions are having an effect. Create a plot point, no matter the outcome.
On the other side of the political spectrum, the Occupy Wall Street movement began, similarly, as a protest against financial excesses, but it quickly morphed into a new style of political activity. Where the Tea Party yearned for results, the Occupiers seemed almost allergic to them.
Process mattered as much as or even more than product. The “general assembly” protocol that the demonstrators instituted required total consensus. When asked by reporters about their demands, occupiers insisted these would emerge at some point in the future, if at all. The Occupiers saw the movement not as something that would end, but as a new normative state. A permanent revolution.
Neither of these movements may augur the emergence of a third political party, but they both point to a hunger for a new way of doing things—and they suggest approaches that fit with the modern presentist landscape. People are willing to try something new. Are their leaders?
As I see it, the very technologies that brought us into this state of present shock offer two contrasting ways to contend with it in our politics. The first is simply to ratchet up the polling, the metering and the analysis we’ve been using to probe voters. Politicians have been doing this since the late 1990s, adapting computers, social networking streams and big data to home in on ever more granular shifts in opinion on ever more minuscule issues.
Technology is giving us the ability to have something like the “people meters” that measure audience responses and attitudes during television debates up and running perpetually. Pursuing this approach, our politics takes on the qualities of the Home Shopping Network, where television salespeople can adjust their pitches in real time based on the number of people placing orders.
Of course, access to continuous and instantaneous feedback is addictive—and quite counterproductive. We’ll demand that our politicians have clear answers to, say, the latest fracking disaster, lest they risk being seen as removed and non-responsive. Yet forcing them to engage at every bump in the road, however minuscule, will cause them to lose their sense of direction and discourage them from taking in new information and adjusting their thinking.
For instance, as new revelations about the attack on the US mission in Benghazi make the incident less useful as a talking point against former Secretary of State Hillary Clinton, Republicans double down in real time and dispute the new revelations, rather than wait until more evidence is revealed. There’s just no time to work with facts; opinions are being formed right now.
This relegates the processes that actually matter to the back room. The Obama administration’s deal-making with Iran over nuclear weapons occurred out of public view, lest the din of the digital reaction confuse or forestall whatever progress was being made. Unlike the Camp David accords of the Carter era, where proceedings were daily news, the public wasn’t privy to diplomatic history-making.
As a result, the public has responded with more fear and suspicion than it might have otherwise. That’s the liability of expecting constant pings from government the same way we expect text messages from our spouse throughout the day.
Instead of amplifying the effects of present shock—using digital technologies to gather for themselves more data—our leaders could work to adjust our expectations by taking new cues from digital culture. For example, most forms of interactive engagement, such as video games, abandoned the structure of the traditional narrative long ago.
Video games, like the fantasy role playing games on which they are often based, are not always contests that one wins and ends. Things like massive multiplayer games are more successful the more people get to play, and the longer the game is kept going. In our new presentist reality, those looking to rally support for causes or candidates would be wise to think about similar kinds of open-ended approaches.
Take the White House rollout of the Affordable Care Act. The troubled debut of the HealthCare.gov website was less about bad Internet programming than about creating the kinds of expectations that accompany a Broadway opening, not a website. Online launches—the White House perhaps now knows—should be thought of as a permanent beta-test wherein engineers are iterating toward improvement in a present that’s never quite perfect. The finish line is never quite reached.
Operating within today’s presentist political landscape, the administration is at the mercy of a world without an organizing story. Often it takes a real disaster—a Boston bombing or a chemical weapons deployment in Syria—to generate a plot point capable of sustaining a narrative for a few days.
But then it unravels again. Instead of imposing a narrative on this new, open, never-ending story, leaders must develop strategies to solve problems that are resistant to easy declarations of victory. Gone are the days when America could plant a flag on the moon and declare the space race won. Modern obstacles are more often chronic ones to be managed and mitigated over time. Greenhouse emissions, child hunger, mutating bacteria, drug abuse and even terrorism are not wars one wins.
The age of present shock is, it seems, forcing Americans to realize that our journey is less about reaching a conclusion than it is about sustaining ourselves for as long as possible. Our politics may come to have less to do with triumph than endurance—a shift in perspective that, while born out of an obsession with the present, wouldn’t be so bad for the future.
"If the end of the twentieth century can be characterized by futurism, the twenty-first can be defined by presentism."
This is the moment we’ve been waiting for, explains award-winning media theorist Douglas Rushkoff, but we don’t seem to have any time in which to live it. Instead we remain poised and frozen, overwhelmed by an always-on, live-streamed reality that our human bodies and minds can never truly inhabit. And our failure to do so has had wide-ranging effects on every aspect of our lives.
People spent the twentieth century obsessed with the future. We created technologies that would help connect us faster, gather news, map the planet, compile knowledge, and connect with anyone, at any time. We strove for an instantaneous network where time and space could be compressed. Well, the future’s arrived. We live in a continuous now enabled by Twitter, email, and a so-called real-time technological shift.
Yet this “now” is an elusive goal that we can never quite reach. And the dissonance between our digital selves and our analog bodies has thrown us into a new state of anxiety: present shock. Rushkoff weaves together seemingly disparate events and trends into a rich, nuanced portrait of how life in the eternal present has affected our biology, behavior, politics, and culture.
He explains how the rise of zombie apocalypse fiction signals our intense desire for an ending; how the Tea Party and Occupy Wall Street form two sides of the same post-narrative coin; how corporate investing in the future has been replaced by futile efforts to game the stock market in real time; why social networks make people anxious and email can feel like an assault. He examines how the tragedy of 9/11 disconnected an entire generation from a sense of history, and delves into why conspiracy theories actually comfort us.
As both individuals and communities, we have a choice. We can struggle through the onslaught of information and play an eternal game of catch-up. Or we can choose to live in the present: favor eye contact over texting; quality over speed; and human quirks over digital perfection. Rushkoff offers hope for anyone seeking to transcend the false now.
"Present Shock" discounts the past and the future; this moment, the now, is the paradigm of individual knowledge and belief. Everything is in the moment. Douglas Rushkoff compels one to believe that the model of what is important in the world has changed. History and the future have become irrelevant.
Part of Rushkoff’s insight is based on the advent of the computer and how it affects education. Because history is at Americans’ fingertips, memory is not used to remember the past, because the past is always present at the click of a mouse. Memory is only used to describe the present in real-time language, naturally acquired and innately available. The use of the brain becomes more focused on patterns of events in the now rather than on their relationship to a past or projection into a future.
Past and future melt into the present. Rushkoff names the phenomenon "Present Shock". When something happens, it is already past; history is irrelevant. The future takes care of itself by becoming today. All time is compressed into now. Everything is everything because the mind conflates the events of now into a construct of a mind-patterned reality, the only perceived reality.
Rushkoff underscores this loss of interest in understanding the past and future by noting that a book he writes is unlikely to be read by many: it lacks the immediacy demanded by the click generation. As Alexander Pope, an 18th-century poet, said, “Hope springs eternal…” Here is a book review that hopes Rushkoff is right that the click generation will widen its field of vision to include a future. Ironically, both Rushkoff’s hope and Pope’s rest on a matter of time.
Douglas Rushkoff, Present Shock: When Everything Happens Now
The Internet And The Dearth Of Concentration And Contemplation
The Hub above has been attempting to trace and interrogate the emerging, merging and submerging media, the gizmos that enable them, and how these affect and effect their users today. My topic above contends that we are being adversely sucked in and altered by these viral streams and consciousness, that they affect our cognitive abilities and our ability to think and be. The new social media and Web actually condition and redirect our thinking, reading and comportment in many ways which are new to us today.
I really would like to use the article below written by Nicholas Carr to elucidate my point above. I think he does it brilliantly, and I will liberally cull from it:
Is Google Making Us Stupid?
"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial 'brain.' “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”
I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy.
My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes.
A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets-reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.)
For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.”
But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation.
My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.
I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon.
Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e., I’m just seeking convenience, but because the way I THINK has changed?”
Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me.
His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”
Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think.
As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a UK educational consortium, that provide access to journal articles, e-books, and other sources of written information.
They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report:
It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense.
Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self.
“We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain.
“We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace.
When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.
Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains.
Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.
But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.”
“You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.”
The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.”
As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. In Technics and Civilization, the historian and cultural critic Lewis Mumford described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.”
The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation, the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock.
The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level.
The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.
When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration.
The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets.
When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts, its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules.
Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure.
About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines.
By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared.
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers.
The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”
Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.”
Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does.
Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind.
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers.
Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains.
“The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.”
Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it?
Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements.
Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus, Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.”
And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).
The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds.
Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different.
The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking.
If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman eloquently described what’s at stake:
I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.”
As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.”
I’m haunted by that scene in 2001. What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency.
Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
Technology: A Blessing Or A Curse? Go figure..
Gadgets – Boon or Bane...
We (humans) may really be held hostage in this world by our "must-have" contraptions, which take up the greater part of our day telling us where to go, what to do, tracking our well-being, amusing us, informing us and even managing our minutes. In fact, even though we could live without these devices, it would be a diminished life. People long ago stopped using watches to check the time. The common complaint of teenagers, and even those in their mid-thirties, is "I can't function without my phone."
We're so continuously attached to our electronic companions that they don't just live with us, they live on us. Wearable tech can relay messages from colleagues, let you know when to pack an umbrella, and even manage your music. Autonomously functioning devices count both your steps and your calories.
For everything the devices do for us, we are loyal to them only for a brief time. When something new comes along, and something new always comes along, we feel the same attraction we felt to our now-timeworn device just a year or two earlier. At first we tell ourselves the tech we have is sufficient, that we can admire without acquiring. In the end we inevitably give in.

Fascination with gadgets is nothing new. It started with telephones, but it never stopped there. Notebooks were the next big thing until they became simply too bulky, quickly supplanted by tablets that are at home everywhere.
Affects And Effects Of The Internet
Positive Internet Use And Effects
The Present Day Challenges Wrought By Merging And Converging Technologies
Media Convergence posted the following blog post:
Media Convergence – Opportunities and Challenges
Media convergence has become a vital element of life for many people. With the development of technology across different platforms and operations such as television, the Internet and mobile communication, audiences have gained both a bigger choice of media and a life made easier by media technologies. However, one question needs to be asked: does media convergence bring opportunities or challenges to the industry and to society itself?
On the one hand, in terms of industry, the development of technology has lowered the cost of products and software. Instead of having different news crews for every medium, one converged media operation can use the same reporters and staff to produce stories for television, telecommunication and Internet mediums. By combining each medium’s resources, a converged operation can increase the quality of its product. As a result, customer satisfaction increases, which leads to a larger audience. From the public’s standpoint, the increased convenience of information provided by converged stories makes using the media a better experience.
Besides, in terms of society especially, media convergence causes the fragmentation of audiences for news. Nowadays, people talk about not having enough time to do everything they want in a day, or about doing more than one thing at a time. Convergence has made the media more interactive, and audience participation is encouraged. In addition, greater audience engagement can help to enhance the way people experience the media. Moreover, with the interactive World Wide Web, audiences are able to download and share music, video and photos via social networking, and to become media producers themselves.
On the other hand, media convergence brings many challenges. Audiences complain about information overload, and they can be overwhelmed by it and find it difficult to manage. Furthermore, the rapid change of technology can obstruct audiences’ activities: people, especially older people and the disabled, may lack the skills to take full advantage of new media. So will an audience so used to traditional forms of media embrace a new way of receiving information, or not? In addition, media companies pursue audiences for the greater benefit they bring from marketing and advertising through cross-selling.
While the future of converged media seems very bright, its proprietors will have to ask themselves some questions: Will the new technology that is anticipated be as revolutionary as people expect? Will the investment in convergence be profitable enough in the short term, or in the long term? What competing technologies should be utilized in order to produce the best media? Will converged media be successful in a world marketplace?
Those are few of the many questions posed by the growing trend of media convergence.
Our world is now a digital world and its technology is forever evolving; technology drives the change. Some of the biggest technological changes over the past decade have involved phones, computers and television, and all three are still evolving.
In the 1870s, Elisha Gray and Alexander Graham Bell each created a telephone. This was a massive technological change that would shape our world today. Before this invention, people communicated via telegraph. The dot-and-dash Morse code system was a very successful way of communicating, but change was needed: people needed to talk to people. This is where the telephone began.
Ever since that bright moment, we have been communicating via telephone, though the invention has continued to change dramatically. In the 1990s the mobile phone went mainstream, enabling people to use the telephone on the go, and it has been an even greater success.
Since the early 1930s, computer technology had started its journey into the new digital world and began to evolve rapidly along with it. In 1981 a company called Apollo Computer revealed its first computer workstation, the DN100, one of the first of its kind. A year later, Commodore released its own machine, the Commodore 64; it came with 64 KB of RAM and featured impressive graphics for its time, though nothing compared to computers today.
Over the years computers evolved rapidly, becoming more powerful, faster and more reliable. Nowadays we have desktop computers and laptops, all of which vary in size, speed, graphics and the other technical components that build them. To this day computers are still evolving; what comes next in computer technology remains to be seen.
Television is also a fast-moving digital technology. Sets went from black and white to colour in the space of a decade, and nowadays they are in HD and, more recently, 3D. When televisions first appeared not everyone had them; they were expensive and very much like gold dust. Now nearly every home has at least one television. It has become a necessity in this digital world, and we rely on it: it is entertaining, educational, relaxing, informative and reliable.
Over the years we have invented many technical devices, mobile technology, television and computer technology being the most important and effective to our lifestyle. These individual technical devices are now evolving into one another. For example mobile technology now offers new possibilities and has merged with television and computer technology.
We have gone from simply phoning people to texting and messaging, through services such as Facebook and Twitter, iMessage and BlackBerry Messenger and, more recently, WhatsApp, which lets you message people across platforms. We use the internet, watch television on our phones and read the news through applications: nowadays, instead of picking up a newspaper or waiting for the news on television or radio, you can look at it straight away on your phone simply by opening an application such as BBC News. Add games, emailing, photography and video-recording, and the phone has even evolved into 3D technology.
What is going to come next? What will be the next evolution phase? What else is going to happen in our forever changing digital world?
Old media – New media
The evolution of converged media has been slow and subtle. Publications such as Time were experimenting with television in the late ’70s. Major newspapers like the Ft. Worth Star Telegram began experimenting with computer applications in the early ’80s. These attempts and most other early convergence ventures were unsuccessful. Despite the huge startup costs the companies incurred, and the lack of success they achieved, efforts to converge media continued.
With technological advancements that made computers more affordable, a new wave of convergence efforts began in the early ’90s. Newspapers such as the Atlanta Journal had graphical and navigational capabilities far beyond prior efforts. Of course, all of this was made possible by the growth of and increased access to the Internet. Now media companies had a standard format on which to build their convergence efforts.
In the mid ’90s, the computer world, especially the Internet, experienced a period of extreme growth that rivals any other in history. As more households became linked to the Internet, consumer online services such as Prodigy, America Online and Compuserve became increasingly popular. Recognizing the trend, many news organizations signed on with the consumer online services, which set up sites for the newspapers on their program.
While the sites gave many newspapers national exposure, few of them gained enough from the efforts to justify their disadvantageous revenue splits with the online service providers. It was during this time that USA Today became the first newspaper to successfully bring its signature look to the Internet.
Television also joined in the convergence effort, with networks like Bloomberg Informational Television, which combined aspects of the Internet with traditional broadcast news. In the late ’90s, most major newspapers established their own websites without the consumer online services. This combination of print and Internet paved the way for the next stage of media convergence.
In the last four to five years, media companies have been fine-tuning the concept of convergence. Local newspapers, radio stations, television broadcasters and websites have combined to form fully converged websites. National newspapers, the New York Times and Washington Post, reached cooperative agreements with the networks ABC and NBC in 2000. While these local and national efforts have brought convergence to a new level, many major and local news organizations have yet to incorporate all the elements of media.
Many so-called converged organizations merely republish or repurpose material from one medium to another. The standard newspaper-Internet combination that developed in the ’90s is still convergence’s most common form.
World Wide Web
The internet has been one of the biggest contributors to the media convergence phenomenon. It has allowed a vast range of media platforms such as print, video and audio to become almost instantly accessible from nearly anywhere and has completely changed the way in which we, as an audience, absorb information. And as mobile internet is becoming increasingly popular, it has allowed the convergence of anything at all to one handheld device.
For example, news sites have massively changed their style of writing to deliver quick hits of information, using short, to-the-point headlines followed by a brief summary of events. This is aimed at an ‘on the go’ audience who want to quickly check for updates and learn about the events occurring in the world.
The internet, however, enables this to go one step further, using images and video clips to tell stories of the day. The BBC news site frequently has a ‘story in pictures’ section which uses images taken of a certain event and captions to tell the story which benefits more visually minded people and can provide a different perspective on a news story.
Images are also well suited to handheld devices, as many smartphones and tablets are designed for smooth and clear picture viewing. There is also a link allowing you to watch a live stream of the television broadcast of BBC News. This brings television, which has always been a very static and passive form of media intake, to your handheld device, free to take anywhere.
It is not only the news industry which has seized the opportunities of the internet, however. Almost all radio stations are available to listen to live online, and in fact many are broadcast exclusively over the internet. Youtube.com is a site that allows anyone to upload a video where anyone can view it, and as a result it has become one of the biggest examples of the convergence of media online.
Almost anything can be accessed through a single website, from music videos and comedy sketches to feature films and people’s personal reviews, and all of it can be reached anywhere with an internet connection, which, with the rise of smartphones, means almost anywhere. Youtube.com is the second most popular site on the internet, and it is this ease of use that has made the site so successful, allowing anyone to create their own media and let the rest of the world see it.
I am going to talk about the three topics I find most important in the development of our digital world: mobile technology, computer technology and television technology.
Firstly I am going to talk about mobile technology.
In the early 1870s, Elisha Gray and Alexander Graham Bell each created a telephone.
Before this we were using the telegraph, with its dot-and-dash Morse system. This was a good way of communicating, but messages could be translated wrongly, and an improvement was needed.
This was a massive step in our mobile technology movement. The first major step.
This invention evolved further, leading to the mobile phone, which took off in the 1990s.
Nowadays mobile phones are a necessity in our lives. We use them everywhere, every day, for everything.
Journalists use them, we use them in a journalistic way, filming anything exciting that we come across. We use them in emergencies and much more.
Computer technology is also a massive necessity in our digital world nowadays. Just like the mobile, we use computers every day, and some people take them everywhere!
The computer started its slow journey in the early 1930s. However, it wasn’t until 1981 that a massive breakthrough happened.
Apollo Computer created the DN100.
It was a fully working workstation. Slow by today’s standards, but it worked.
A year later the Commodore 64 was created, and it came with 64 KB of RAM. WHICH IS NOTHING! But back then it was a lot.
How did we survive?
Nowadays we have Macs, laptops and decent desktop computers with GBs of RAM and TBs of storage!
Last but not least Television Technology.
Televisions have changed dramatically over the years and have probably evolved the quickest.
They’ve gone from black and white with no sound, to black and white with sound, to colour, and now HD and 3D. They’ve come in all shapes and sizes, from box- and parallelogram-shaped sets to today’s flatscreens, even mounted on the wall.
Nowadays we don’t just use mobile phones for talking; we use them for all sorts:
We have phone applications for news, Facebook, Twitter and much more
For that we need internet access.
We use them for storage
Some even have Microsoft office or similar software
They are mini computers
They are faster, stronger, smaller and bigger; we have nearly unfillable hard-drive space, HD webcams, software galore and now even mini computers such as the iPad!
Are now in HD
And the new and upcoming IPTV is coming soon. What is going to happen next?
Why have I chosen these three technologies to talk about?
Because they all link together.
Mobiles needed the computer technology to work.
Computers needed the television technology to work.
Mobiles also needed the television technology to work.
All of them linked to each other, needed each other and they wouldn’t have been successful without each other.
Our world depends on technology...[and electricity]
1st Video Blog for Creative Media Analysis
McLuhan On Media And Technological Convergence
“New” Media: Digitization, the Internet, and Mobile Devices
Sage Publication posted the following blurb:
Digitization and the rise of the Internet have blurred the boundaries between types of media and changed the broad parameters that used to be associated with all mass media. As a result, it makes more sense to speak of “new” media as breaking significantly with many of the features that characterize traditional mass media. We place the term new in double quotes because the “new” media, of course, are no longer new; the Internet is well into its third decade. However, no other single umbrella term has yet emerged to encompass the variety of media that now exist and to flag their distinctiveness from traditional mass media. For now, we’re stuck with the awkward term: “new” media.
“New” Media: Digitization, the Internet, and Mobile Devices
Any media content that is digital can be stored as the 1s and 0s of computer code, including text, audio, pictures, and video. This digital content can be delivered via different media, such as a compact disk (CD), digital video disk (DVD), or digital radio or television broadcast signal. By itself, the shift from analog (nondigital) to digital media content was significant. A music CD, for example, has different properties than a phonograph record; CDs typically have lower audio quality, but they are immune from the accumulation of scratches and pops that eventually plague vinyl records. And identical copies of a CD’s content can be made easily on a computer. However, much more significant changes developed when digital media content was united with the Internet.
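The “1s and 0s” point can be made concrete with a short sketch (my illustration, not from the source): in Python, any text can be turned into a string of bits and recovered without loss, which is precisely why digital copies are identical copies.

```python
def to_bits(text: str) -> str:
    """Encode text as a string of 0s and 1s (UTF-8, 8 bits per byte)."""
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Decode a bit string produced by to_bits back into text."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = "new media"
bits = to_bits(message)
assert from_bits(bits) == message  # a perfect, lossless copy
```

The same round trip works for audio samples or pixel values, which is why a CD can be duplicated on a computer with no degradation at all, unlike an analog record.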
The Internet is the communications platform on which digital media content can be delivered to a wide variety of devices, including desktop computers, wireless laptops, smartphones, and other mobile devices. Over the past few decades, the growth of digital media, the rise of the Internet, and the proliferation of mobile devices have combined to burst open the very meaning of mass media in several ways (Bolter and Grusin 2000; Lister et al. 2009).
First, the Internet blurs the distinction between individual and mass audiences, and replaces the one-to-many model of traditional mass media with the possibility of a many-to-many web of communication. This can be seen as people use the Internet and digital content for individual communication with single known recipients (e-mail, instant messaging), small group communication with a limited number of recipients (forums, social networking sites, microblogging like Twitter), and mass communication with an unlimited number of unknown recipients (websites, blogs, streaming video).
This blurring of the boundaries between communication to individuals and communication to a large audience has led observers to often replace the language of mass media simply with that of media (though we will see later that there is still good reason to pay attention to distinctions in audience size).
Second, the notion of known senders and anonymous receivers becomes problematic on the Internet. The producer of media content may remain anonymous to the typical reader, listener, or viewer, such as when no identifying information is provided on a website or blog. This opens the door to mischief, as with spam e-mail and false information or rumormongering through blogs or anonymous websites.
On the other hand, with the Internet, the audience is sometimes known by the producer, as when registration is required to access a website, join an online community, post comments on a site, or receive an electronic mailing. Even when we do not supply personal information to websites—or use fictitious identities—we still leave our digital footprint (in the form of our computers’ IP addresses).
This changes the relationship between users and producers because, as we will see, advertisers on the Internet can know a good deal more about the identities and behaviors of those they seek to reach than they ever could with traditional mass media.
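The “digital footprint” mentioned above is easy to picture: every request a browser makes is recorded by the web server alongside the visitor’s IP address. A minimal sketch (the log line below is a hypothetical example in the common Apache log format, not real data):

```python
import re

# A hypothetical request record in the Apache "common log format".
LOG_LINE = '203.0.113.42 - - [10/Oct/2023:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'

def extract_ip(line: str) -> str:
    """Return the leading IPv4 address of a common-log-format line."""
    match = re.match(r"(\d{1,3}(?:\.\d{1,3}){3})\s", line)
    if match is None:
        raise ValueError("no IP address found")
    return match.group(1)

print(extract_ip(LOG_LINE))  # prints 203.0.113.42
```

Aggregated over millions of such lines, these addresses become the raw material advertisers use to profile audiences far more precisely than traditional mass media ever could.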
Third, with “new” media, communication is often potentially interactive, rather than being one way.
For example, readers of newspaper websites can provide instant feedback on a story, shoppers can post their own product reviews at online retail sites such as Amazon.com, and viewers can comment or vote to “like” or “dislike” a video on YouTube. Interactivity can also mean that users are able to employ these media to communicate with each other.
Finally, the interactive capacities of “new” media blur the distinction between producers and receivers. Not only can audiences comment on or respond to media content created by others, but the widespread availability of digital media tools means that people with relatively modest financial resources and basic technological literacy can create their own media content and contribute to or alter content on other media platforms.
The requirements for such a task are still insurmountable hurdles for the world’s impoverished and illiterate—and indeed the majority of the world’s population—but the creation of media content is within the grasp of more people than ever, especially in more affluent countries. People can create blogs and websites, upload videos, post their photographs, and engage in a host of other activities.
They can also contribute content to existing sites by, for example, using a television station’s website to submit photos and video that might be broadcast. In some cases, the traditional terms audience and even readers no longer accurately reflect the active role of what can be called more appropriately users of the “new” media.
Convergence a la Carte
Media Versus Human Communication
We now know how TV and the Web are affecting and determining what and how people think, live and act. The evolution of communications media as described above by the two articles I have just posted shows us how, as we interact with and use the present-day media gizmos and their techniques, we are transformed, and our whole human person changes too. It is at this juncture that I will use Ong to give us a sense of what is happening with media versus human communication.
"In treating the technologizing of the word, one reason for caution is that the term 'medium' can give a false impression of the nature of verbal communication, and of other human communication as well. Thinking of a 'medium' of communication or of 'media' of communication suggests that communication is a pipeline transfer of units of material called 'information' from one place to another.
"My mind is a box. I take a unit of 'information' out of it, encode the unit (that is, fit it to the size and shape of the pipe it will go through), and put it into one end of the pipe (the medium, that is, something in the middle between two other things). From the one end of the pipe the 'information' proceeds to the other end, where someone decodes it (restores its proper size and shape) and puts it in his or her own box-like container called a mind. This model obviously has something to do with human communication, but, on very close inspection, very little, and it distorts the act of communication beyond recognition. Hence McLuhan's wry book title: The Medium Is the Massage (not quite the 'message').
"Human communication, verbal and other, differs from the 'medium' model most basically in that it demands anticipated feedback in order to take place at all. In the medium model, the message is moved from sender-position to receiver-position. In real human communication, the sender has to be not only in the sender-position, but also in the receiver-position, before he or she can send anything.
"To formulate anything I must have another person or other persons already 'in mind'. This is the paradox of human communication. Communication is intersubjective. The media model is not. There is no adequate model in the physical universe for this operation of consciousness, which is distinctively human and which signals the capacity of human beings to form true communities, wherein person shares with person interiority, inter-subjectivity.
"Willingness to live with the 'media' model of communication shows chirographic conditioning. First, chirographic cultures regard speech as more specifically informational than do oral cultures, where speech is more performance-orientated, more a way of doing something to someone.
"Second, the written text appears "prima facie" to be a one-way informational street, for no real recipient (reader, hearer) is present when the text comes into being. But in writing as in speaking, some recipient must be present, or there can be no text produced: so, isolated from real persons, the writer conjures up a fictional person or persons.
"The writer's audience is always a fiction. For a writer any real recipient is normally absent (if a recipient is accidentally present, the inscribing of the message itself proceeds as though the person were somehow absent—otherwise, why write?). ... Modern studies in the shift from orality to literacy and the sequels of literacy, print and electronic processing of verbalization, make more and more apparent some of the ways in which this evolution has depended on writing.
[So] ... "The interaction between the orality that all human beings are born into and the technology of writing, which no one is born into, touched the depths of the psyche. Ontogenetically and phylogenetically, it is the oral word that first illuminates consciousness with articulate language, that first divides subject and predicate and then relates them to one another, and that ties human beings to one another in society. Writing introduces division and alienation, but a higher unity as well. It intensifies the sense of self and fosters more conscious interaction between persons. Writing is Consciousness-raising."
For me, Ong does many things in breaking down and deconstructing how we have to begin to view present-day technology, which uses writing, language, audio and the visual as something that draws one into it, rather than something one merely applies oneself to. This is a very important distinction, for as we change under the influence of the present-day technologies, we ought to know how we evolved from the word, language and orality to literacy, in order to grasp and wrap our heads around what the new technologies are, with their techniques, and how these affect and effect us today.
Therefore, living in a media convergence a la carte of the oral, the visual and the written means over-stimulation of all our senses, extended in all directions as projected in the photo above. Everything, then, is Everything...
Ways Of Seeing And Understanding The Media
Reviewing The Changes Wrought By Technology In Our Use Of It
Being immersed in and living in an environment like the one projected in the photo above really does affect and change people. The mechanization and mediarization of such an environment have produced mediarized and mechanized users and people. This is what is happening. The way people carry themselves in public now is very interesting and disconcerting. On the local trains, one sees many people with their faces buried in the screens of their cell phones and iPads, listening to their iPods. This is something to see: whether sitting or crouching, some with earphones in, they surge through the viral soup, clicking uncontrollably at these contraptions, enabled by the Wi-Fi stream and by the mediating technological gizmos with their embedded techniques.
The new emerging and converging media is us, with us, and now within us. This observation is important to pay attention to, for in using the media we are affected and effected by its technique; and we are enamored, as of yet, not with the message but with the technical object (the contraption, the cell phone, and so on), as McLuhan noted below:
McLuhan’s work was not just about what he called the “electric media” but encompassed a broad sweep of history that included the development of human languages, the meaning of cave drawings, the impact of writing and the alphabet, and the immense changes induced by the invention of the printing press. In The Gutenberg Galaxy (1962), McLuhan talks about the profound changes that were wrought by the introduction of writing and the alphabetization of communications. Here are some intriguing quotes:
“The interiorization of the technology of the phonetic alphabet translates man from the magical world of the ear to the neutral visual world.”
“The alphabet is an aggressive and militant absorber and transformer of cultures.”
“… Phonetic writing destroyed Greek society without their having the slightest notion of how it happened.”
“Schizophrenia may be a necessary consequence of literacy.”
Note that McLuhan was talking about the consequences of writing as a medium, not about the content of writing. In The Medium is the Massage (1967), he writes, “Societies have always been shaped more by the nature of the media by which [humans] communicate than by the content of the communication.” This is an important point in that he contended that each new medium had consequences flowing from its use, and not from any content embedded in the new medium. In fact, McLuhan suggested in Understanding Media (1964) that content is like the meat carried by a burglar to distract the family dog while the burglar is robbing the house. A focus on content can distract us from fully seeing the impact of the new medium itself. (This doesn’t mean that the nature of content in a medium is not important, just that content in itself is not the cause of the effects of a given medium.)
As I have observed above, it is the gadgetry that has many people sucked into the mediarized world shown above, and then some. The nature of the new media is to mediate our reality to us. It is the time warp that has transformed how we interact and communicate with one another. The access we now have, and the communication it enables, is what holds many users enthralled by these tech-toys.
What we see about ourselves when using these gadgets is the total change wrought by the new technologies. How, why, where and when we use these means of interconnectivity changes us by that very action; as a result, in the environs and realm of a technological society, we are mechanized and technologized people.
And this means that, prior to the digital age, analog was what put us at the cusp of something new; in the new technological age, the Age of the Web and the Internet of Things, we have since changed and are still changing, as technique worms its way into controlling and determining our technological existence. McLuhan informs us:
"During the mechanical ages we had extended our bodies in space. Today, after more than a century of electric technology, we have extended our central nervous system itself in a global embrace, abolishing both space and time as far as our planet is concerned. Rapidly, we approach the final phase of the extensions of man—the technological simulation of consciousness, when the creative process of knowing will be collectively and corporately extended to the whole of human society, much as we have already extended our senses and our nerves by the various media.
"Whether the extension of consciousness, so long sought by advertisers for specific products, will be a "good thing" is a question that admits of a wide solution. There is little possibility of answering such questions about the extensions of man without considering all of them together. Any extension, whether of skin, hand, or foot, affects the whole psychic and social complex.
"In the mechanical age now receding, many actions could be taken without too much concern; slow movement insured that the reactions were delayed for considerable periods of time. Today the action and the reaction occur almost at the same time. We actually live mythically and integrally, as it were, but we continue to think in the old, fragmented space and time patterns of the pre-electric age."
That is why I have said, in not so many words, that as we moved from the analog era to the digital, we became caught up with the gadgets that enable and mediate our easy navigation of this environ. Caught up in the viral stream surging through this mediarized environment via our gizmos, we are still figuring out, as it morphs, merges, interconnects, and submerges in and out of our lives and existence, how to master the new technologies and their techniques. This is how we are affected and effected by the present-day media environments in which we are thoroughly immersed and submerged.
Merging, Submerging, Connecting and Virally Streaming Media Communications
Elliot King Writes:
Although interest in news is apparently an integral part of social life (Stephens, 1988), the definition of news, the practices by which news is gathered and disseminated, and the impact of news on human interaction has changed dramatically over time (Lewis 1996). Although the character of news in any specific historical period is shaped by several factors, the available technology plays a significant part.
For example, Michael Schudson has argued that new printing technology was a necessary, if not sufficient, condition for the emergence of the inexpensive mass newspapers in the United States in the 1830s (Schudson, 1978). James Carey has suggested that the invention of the telegraph and the development of news wire services in the mid-nineteenth century was decisive in the development of a specific form of presenting news--a form which has been equated with the professional value of objectivity (Carey, 1989).
The appearance of portable cameras at the end of the nineteenth century heightened the intrusiveness of the press into private lives and led directly to the notion of a zone of personal privacy in which the press could not intrude (Warren, 1890).
The twentieth century has witnessed a series of technological developments which have influenced modern and contemporary journalism also. Film news reels, radio and, of course, television changed the way journalists gathered and presented news as well as the way the public viewed and understood ongoing events. Some observers now believe that the way in which political campaigns are covered by television is the most important factor in many elections (McCubbin, 1992).
However, the interplay between journalism and technology has never been more dramatic than with the emergence of the Internet as a widespread medium of communication in the 1990s. At the beginning of the decade, few journalists and fewer still affiliated with mainstream, mainline media had even heard of the Internet. By 1994, journalists on the leading edge were predicting that the Internet would fundamentally change the way their colleagues would go about doing their work (Reddick and King, 1994). By 1997, the Internet was an integral part of the practice, presentation and social impact of journalism.
Not only did the Internet diffuse through journalism more rapidly and more completely than other once-new technologies; computer-mediated communication and the Internet have had an impact on every aspect of journalism. The Internet has emerged as a new medium of publication, a new tool for reporting, and a new focus for journalism education. Moreover, the Internet has raised new questions about the social responsibilities of journalists to inform the public as well as the ethical practice of journalism. Finally, the Internet is an international medium. Consequently, its impact has been felt around the globe.
Below, Jeffrey McIntyre reviewed Todd Gitlin's "Passion for Illusion" in the following manner:
Todd Gitlin, a leading cultural critic and professor in New York University's department of culture and communication, has made a startling, eloquent and timely contribution to the field of media studies. Surrounding ourselves with amusements, Gitlin announces, "tranquilizes us, wrecking not only democracy and spirit but even deep pleasure itself." Strong words, but it is difficult to disagree with some of his more withering observations.
The world Gitlin surveys is a place of constant (always-on) television households, in which the sentence length in bestsellers has declined 43% since 1936, in which the cost of entertainment has dramatically decreased and the supply exploded, and in which the Walkman and car cup holder, reality TV, Napster, elevator music, branding and the stars of Hollywood and webcam life reflect the "fast," convenience-based economy. The coins of the realm are speed and stimuli.
This caught my attention:
The End Of Progress
Risk has always been a part of life, but Christopher Coker argues in Spiked that we are entering a new phase in how we deal with it.
"The post-modern condition is one we all experience in a mode that is more than ever defined by risk; by the cluster of risks, insecurities and control problems that have played a crucial role in shaping our changing response to the world. Concern about risk is no longer a peripheral matter; it is built into the environment, culture and the everyday routines that guide our lives. In this sense we live in a 'risk age'. Risk has become a way of thinking about one's moment in history; it is not only inherent in the moment itself."
The implications for policy are significant. For one thing, our focus shifts away from what we want, onto what we wish to avoid. This creates a scattering of resources, because our political system can easily select a few priorities but cannot simply choose among a myriad of hypothetical insecurities. Combined with a culture of victimization, governments can no longer stick to doing a few good things, but must make certain that nothing bad happens to anyone. This situation requires a new understanding of values and norms. Not everyone perceives risk in the same way: mothers and their sons have different views about motorcycles, for example.
Perhaps the most important consequence of this ethos of risk management lies in the shifting of burden of proof. In the status quo, dangers are known, and if not totally accepted are at least 'absorbed' in some way by society. But do something new, and irrespective of benefits, you are totally responsible for unanticipated dangers. A strong motivation for doing nothing."
A Marxian view here would do well to add to the perspective I am pushing as to the origins and the present affects and effects of our burgeoning technologies. Srachma Denarius informs us thusly:
In recent years Karl Marx has been cutting a peculiar figure as the soothsayer who foretold the demise of the socio-economic systems he had inspired. As if it were not enough to be proclaimed by the New Yorker magazine as the ‘Next Great Thinker’ (Oct. 1998), his writings are now being plundered for ammunition against the latest crop of ‘anti-capitalist’ protesters. It seems that history’s revenge on the scholar who tried to subject it to scientific analysis was an avalanche of bizarre praise.
Of course Marx deserved most of his disciples. They are the price he paid for his misplaced confidence in our capacity to keep up with his soaring and sometimes erratic flights of thought. At a time when people good and evil were equally horrified by the conversion of all human relations into money exchange, calling for the merchant to be restrained and lamenting the loss of traditional communal values, Marx refused to be blinded by the evident injustice, deceit, cruelty and vulgarity of the new era. Instead he celebrated the newfangled sources of wealth and was thrilled by the shock of the new.
Behind all the cacophony, he had heard the wheels of progress churning imperceptibly, arming the weak with the technology which would, eventually, help them defeat want, transcend prejudice and acquire self-control. Marx, a poet first and foremost, combined an unsurpassable dramatic talent with the most fruitful critique of political economy in a bid to depict humanity as the rising giant who gains self-consciousness, attains genuine freedom and shakes off the shackles inherited from its idiotic past.
This is precisely how he would feel today. Not known for his sentimental attachment to failed projects, one brief comparison of the Soviet massive, but decrepit, steel industry with that of, say, the South Koreans; one short trip in an East German Trabant (after having glimpsed at an Alfa Romeo); one look at Bulgarian computers in the 1980s, and he would have joined the crowds dancing on the ruins of the Berlin Wall. A minute’s surfing on the Web would have delighted him and he would have difficulty in containing his joy before deciding what to do next: To experience more of capitalism’s ‘toys’ or to sit down and re-write his Prelude to the Critique of Political Economy at once?
Marx’s writings leave no doubt about his engrossment with capitalism’s verve. He could not tolerate pre-modern critiques of the bourgeoisie’s accomplishments, laden as they were with hypocrisy and moralism. When he waxed lyrical about cheap commodities becoming the heavy artillery with which capitalism dissolves prejudice and defeats stupidity at a global level, he was not being ironical. Equally, when he spoke of the double freedom of the proletarian (free from his bonds but also free from any autonomous means to reproduce himself), he meant every word: The wretched had come closer to liberty than ever before, not in spite of having lost their access to productive means but because of this loss."
Kindles and Nooks and Droids, Oh My!: Analog Reading Habits Help Us Adapt Better to Digital Contexts by Wendy Austin
In this age of Kindles, Nooks, e-Readers, and Droids, the size of our reading space gets smaller and smaller. With analog reading, college students in particular can get a better sense of the context of their source, while the new media reading tools tend to confuse the origins, motives, effects, and contextual cues of reading events. The college writing classroom (which inevitably includes reading) is an excellent place to clarify for students the pleasures and difficulties of sustained analog reading, while comparing it to new media reading tools that filter ads, convey other forms of media simultaneous with the reading event, and (may) include interactivity.
Joshua Quittner’s “The Future of Reading” (Fortune magazine) lays out excellent groundwork for the elements of the debate, and the means for achieving success with analog reading, while maintaining interests in new media literacies. Nicholas Carr’s The Shallows describes why giving up analog reading would be detrimental to the next generation of web citizens, but a wide array of new media fans explain why we need to help students do both: learn to love to read the long way, while embracing the tools that convey the new media literacies. Cynthia Selfe’s The Importance of Paying Attention provides a guide for how we can incorporate both types of literacies without losing too much by the wayside.
The Reading Brain in the Digital Age: The Science of Paper versus Screens
Ferris Jabr writes:
How exactly does the technology we use to read change the way we read? How reading on screens differs from reading on paper is relevant not just to the youngest among us, but to just about everyone who reads—to anyone who routinely switches between working long hours in front of a computer at the office and leisurely reading paper magazines and books at home; to people who have embraced e-readers for their convenience and portability, but admit that for some reason they still prefer reading on paper; and to those who have already vowed to forgo tree pulp entirely. As digital texts and technologies become more prevalent, we gain new and more mobile ways of reading—but are we still reading as attentively and thoroughly? How do our brains respond differently to onscreen text than to words on paper? Should we be worried about dividing our attention between pixels and ink or is the validity of such concerns paper-thin?
Since at least the 1980s researchers in many different fields—including psychology, computer engineering, and library and information science—have investigated such questions in more than one hundred published studies. The matter is by no means settled. Before 1992 most studies concluded that people read slower, less accurately and less comprehensively on screens than on paper. Studies published since the early 1990s, however, have produced more inconsistent results: a slight majority has confirmed earlier conclusions, but almost as many have found few significant differences in reading speed or comprehension between paper and screens. And recent surveys suggest that although most people still prefer paper—especially when reading intensively—attitudes are changing as tablets and e-reading technology improve and reading digital books for facts and fun becomes more common. In the U.S., e-books currently make up between 15 and 20 percent of all trade book sales.
Even so, evidence from laboratory experiments, polls and consumer reports indicates that modern screens and e-readers fail to adequately recreate certain tactile experiences of reading on paper that many people miss and, more importantly, prevent people from navigating long texts in an intuitive and satisfying way. In turn, such navigational difficulties may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. A parallel line of research focuses on people's attitudes toward different kinds of media. Whether they realize it or not, many people approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper.
"Technology codes our minds," he writes in the video's description. "Magazines are now useless and impossible to understand, for digital natives"—that is, for people who have been interacting with digital technologies from a very early age.
Screen Reading And Book Reading
Computer Screen Reading Versus The Science Of Paper: The Reading Brain In The Digital Era
Navigating textual landscapes: We learn from the following extended text by Ferris Jabr that:
"Understanding how reading on paper is different from reading on screens requires some explanation of how the brain interprets written language. We often think of reading as a cerebral activity concerned with the abstract—with thoughts and ideas, tone and themes, metaphors and motifs. As far as our brains are concerned, however, text is a tangible part of the physical world we inhabit. In fact, the brain essentially regards letters as physical objects because it does not really have another way of understanding them. As Wolf explains in her book Proust and the Squid, we are not born with brain circuits dedicated to reading. After all, we did not invent writing until relatively recently in our evolutionary history, around the fourth millennium B.C. So the human brain improvises a brand-new circuit for reading by weaving together various regions of neural tissue devoted to other abilities, such as spoken language, motor coordination and vision.
"Some of these repurposed brain regions are specialized for object recognition—they are networks of neurons that help us instantly distinguish an apple from an orange, for example, yet classify both as fruit. Just as we learn that certain features—roundness, a twiggy stem, smooth skin—characterize an apple, we learn to recognize each letter by its particular arrangement of lines, curves and hollow spaces. Some of the earliest forms of writing, such as Sumerian cuneiform, began as characters shaped like the objects they represented—a person's head, an ear of barley, a fish. Some researchers see traces of these origins in modern alphabets: C as crescent moon, S as snake. Especially intricate characters—such as Chinese hanzi and Japanese kanji—activate motor regions in the brain involved in forming those characters on paper: The brain literally goes through the motions of writing when reading, even if the hands are empty. Researchers recently discovered that the same thing happens in a milder way when some people read cursive.
"Beyond treating individual letters as physical objects, the human brain may also perceive a text in its entirety as a kind of physical landscape. When we read, we construct a mental representation of the text in which meaning is anchored to structure. The exact nature of such representations remains unclear, but they are likely similar to the mental maps we create of terrain—such as mountains and trails—and of man-made physical spaces, such as apartments and offices. Both anecdotally and in published studies, people report that when trying to locate a particular piece of written information they often remember where in the text it appeared. We might recall that we passed the red farmhouse near the start of the trail before we started climbing uphill through the forest; in a similar way, we remember that we read about Mr. Darcy rebuffing Elizabeth Bennett on the bottom of the left-hand page in one of the earlier chapters.
"In most cases, paper books have more obvious topography than onscreen text. An open paperback presents a reader with two clearly defined domains—the left and right pages—and a total of eight corners with which to orient oneself. A reader can focus on a single page of a paper book without losing sight of the whole text: one can see where the book begins and ends and where one page is in relation to those borders. One can even feel the thickness of the pages read in one hand and pages to be read in the other. Turning the pages of a paper book is like leaving one footprint after another on the trail—there's a rhythm to it and a visible record of how far one has traveled. All these features not only make text in a paper book easily navigable, they also make it easier to form a coherent mental map of the text.
"In contrast, most screens, e-readers, smartphones and tablets interfere with intuitive navigation of a text and inhibit people from mapping the journey in their minds. A reader of digital text might scroll through a seamless stream of words, tap forward one page at a time or use the search function to immediately locate a particular phrase—but it is difficult to see any one passage in the context of the entire text. As an analogy, imagine if Google Maps allowed people to navigate street by individual street, as well as to teleport to any specific address, but prevented them from zooming out to see a neighborhood, state or country. Although e-readers like the Kindle and tablets like the iPad re-create pagination—sometimes complete with page numbers, headers and illustrations—the screen only displays a single virtual page: it is there and then it is gone. Instead of hiking the trail yourself, the trees, rocks and moss move past you in flashes with no trace of what came before and no way to see what lies ahead.
"The implicit feel of where you are in a physical book turns out to be more important than we realized," says Abigail Sellen of Microsoft Research Cambridge in England and co-author of The Myth of the Paperless Office. "Only when you get an e-book do you start to miss it. I don't think e-book manufacturers have thought enough about how you might visualize where you are in a book."
"At least a few studies suggest that by limiting the way people navigate texts, screens impair comprehension. In a study published in January 2013 Anne Mangen of the University of Stavanger in Norway and her colleagues asked 72 10th-grade students of similar reading ability to study one narrative and one expository text, each about 1,500 words in length. Half the students read the texts on paper and half read them in pdf files on computers with 15-inch liquid-crystal display (LCD) monitors. Afterward, students completed reading-comprehension tests consisting of multiple-choice and short-answer questions, during which they had access to the texts. Students who read the texts on computers performed a little worse than students who read on paper.
"Based on observations during the study, Mangen thinks that students reading pdf files had a more difficult time finding particular information when referencing the texts. Volunteers on computers could only scroll or click through the pdfs one section at a time, whereas students reading on paper could hold the text in its entirety in their hands and quickly switch between different pages. Because of their easy navigability, paper books and documents may be better suited to absorption in a text. "The ease with which you can find out the beginning, end and everything in between and the constant connection to your path, your progress in the text, might be some way of making it less taxing cognitively, so you have more free capacity for comprehension," Mangen says.
"Supporting this research, surveys indicate that screens and e-readers interfere with two other important aspects of navigating texts: serendipity and a sense of control. People report that they enjoy flipping to a previous section of a paper book when a sentence surfaces a memory of something they read earlier, for example, or quickly scanning ahead on a whim. People also like to have as much control over a text as possible—to highlight with chemical ink, easily write notes to themselves in the margins as well as deform the paper however they choose.
"Because of these preferences—and because getting away from multipurpose screens improves concentration—people consistently say that when they really want to dive into a text, they read it on paper. In a 2011 survey of graduate students at National Taiwan University, the majority reported browsing a few paragraphs online before printing out the whole text for more in-depth reading. A 2008 survey of millennials (people born between 1980 and the early 2000s) at Salve Regina University in Rhode Island concluded that, "when it comes to reading a book, even they prefer good, old-fashioned print". And in a 2003 study conducted at the National Autonomous University of Mexico, nearly 80 percent of 687 surveyed students preferred to read text on paper as opposed to on a screen in order to "understand it with clarity".
"Surveys and consumer reports also suggest that the sensory experiences typically associated with reading—especially tactile experiences—matter to people more than one might assume. Text on a computer, an e-reader and—somewhat ironically—on any touch-screen device is far more intangible than text on paper. Whereas a paper book is made from pages of printed letters fixed in a particular arrangement, the text that appears on a screen is not part of the device's hardware—it is an ephemeral image. When reading a paper book, one can feel the paper and ink and smooth or fold a page with one's fingers; the pages make a distinctive sound when turned; and underlining or highlighting a sentence with ink permanently alters the paper's chemistry. So far, digital texts have not satisfyingly replicated this kind of tactility (although some companies are innovating, at least with keyboards).
"Paper books also have an immediately discernible size, shape and weight. We might refer to a hardcover edition of War and Peace as a hefty tome or a paperback Heart of Darkness as a slim volume. In contrast, although a digital text has a length—which is sometimes represented with a scroll or progress bar—it has no obvious shape or thickness. An e-reader always weighs the same, regardless of whether you are reading Proust's magnum opus or one of Hemingway's short stories. Some researchers have found that these discrepancies create enough "haptic dissonance" to dissuade some people from using e-readers. People expect books to look, feel and even smell a certain way; when they do not, reading sometimes becomes less enjoyable or even unpleasant. For others, the convenience of a slim portable e-reader outweighs any attachment they might have to the feel of paper books.
Steven Waldman informs us that:
“America is at a critical juncture in the history of communications. Information technology is changing our lives in ways that we cannot easily foresee.
“The digital age is creating an information and communications renaissance. But it is not serving all Americans and their local communities equally. It is not yet serving democracy fully. How we react, individually and collectively, to this democratic shortfall, will affect the quality of our lives and the very nature of our communities.”
"In most ways today’s media landscape is more vibrant than ever, offering faster and cheaper distribution networks, fewer barriers to entry, and more ways to consume information. Choice abounds. Local TV stations, newspapers and a flood of innovative web start-ups are now using a dazzling array of digital tools to improve the way they gather and disseminate the news—not just nationally or internationally but block-by-block. The digital tools that have helped topple governments abroad are providing Americans powerful new ways to consume, share and even report the news.
Yet, in part because of the digital revolution, serious problems have arisen, as well. Most significant among them: in many communities, we now face a shortage of local, professional, accountability reporting. This is likely to lead to the kinds of problems that are, not surprisingly, associated with a lack of accountability—more government waste, more local corruption, less effective schools, and other serious community problems. The independent watchdog function that the Founding Fathers envisioned for journalism—going so far as to call it crucial to a healthy democracy—is in some cases at risk at the local level.
As technology offered consumers new choices, it upended traditional news industry business models, resulting in massive job losses—including roughly 13,400 newspaper newsroom positions in just the past four years. This has created gaps in coverage that even the fast-growing digital world has yet to fill. It is difficult to know what positive changes might be just around the corner, but at this moment the media deficits in many communities are consequential. Newspapers are innovating rapidly and reaching new audiences through digital platforms but most are operating with smaller reporting staffs, and as a result are often offering less in-depth coverage of critical topics such as health, education and local government. Many local TV news broadcasts remain excellent, and, on average, they actually produce more hours of news than a few years ago—but too few are investing in more reporting on critical local issues and some have cut back staff.
Beyond that, a minority are exhibiting alarming tendencies to allow advertisers to dictate content. In most communities, commercial radio, cable, and satellite play a small role in reporting local news. Public TV does little local programming; public radio makes an effort to contribute but has limited resources. Most important, too few Internet-native local news operations have so far gained sufficient traction financially to make enough of an impact.
On close inspection, some aspects of the modern media landscape may seem surprising:
- An abundance of media outlets does not translate into an abundance of reporting. In many communities, there are now more outlets, but less local accountability reporting.
- While digital technology has empowered people in many ways, the concurrent decline in local reporting has, in other cases, shifted power away from citizens to government and other powerful institutions, which can more often set the news agenda.
- Far from being nearly-extinct dinosaurs, the traditional media players—TV stations and newspapers—have emerged as the largest providers of local news online.
- The nonprofit media sector has become far more varied, and important, than ever before. It now includes state public affairs networks, wikis, local news websites, organizations producing investigative reporting, and journalism schools as well as low-power FM stations, traditional public radio and TV, educational shows on satellite TV, and public access channels. Most of the players neither receive, nor seek, government funds.
- Rather than seeing themselves only as competitors, commercial and nonprofit media are now finding it increasingly useful to collaborate.
Attempting to convey a clear picture of the modern media landscape is like trying to draw a hurricane from within the storm. In our review of the industry’s history, we note that leaders of each medium believed that the latest new technology would doom them, yet many survived and adapted.
And yet this sense that the future is unknowable cannot be used as an excuse for failing to attempt to understand what is happening around us now—especially when history has shown that in transformative moments like this, decisions made by policymakers and industry leaders reverberate for decades. So in approaching this analysis, we faced two opposing challenges:
- Fully describing the current media landscape is impossible; failing to try is irresponsible.
Certainly there can be no doubt that the traditional media business has been significantly shaken, with potentially serious consequences for communities:
- Newspaper advertising revenue dropped 47 percent from 2005 to 2009.
- Between 2006 and 2009, daily newspapers cut their annual editorial spending by $1.6 billion, or more than a quarter, according to the Poynter Institute’s Rick Edmonds.
- Staff at daily newspapers has shrunk by more than 25 percent since 2006, with some major newspapers seeing half their staffs disappear in a matter of a few years. There are about as many journalists working today as there were before Watergate.
- Television network news staffs have declined by half from the late 1980s.
- Newsmagazine reporting staffs have dropped by almost half since 1985.
- The number of all-news local radio stations has dropped from 50 in the mid-1980s to 30, which reach a third of the country.
- Only about 20 to 30 percent of the population has access to a local all-news cable channel.
- There are 520 local TV stations that air no local news at all (258 commercial stations and 262 noncommercial stations). Considering those, along with stations that air less than 30 minutes of local news per day, 33 percent of commercial stations currently offer little or no local news.
Hyperlocal information is better than ever. Technology has allowed citizens to help create and share news on a very local level—by town, neighborhood, or even block. These sites mostly do not operate as profitable businesses, but they do not need to. This is journalism as voluntarism—a thousand points of news.
“As news is posted faster, often with little enterprise reporting added, the official version of events is becoming more important. We found official press releases often appear word for word in first accounts of events, though often not noted as such.... Government, at least in this study, initiates most of the news. In the detailed examination of six major storylines, 63 percent of the stories were initiated by government officials, led first of all by the police. Another 14 percent came from the press. Interest group figures made up most of the rest.”
As news media migrate to the Internet, and wireless becomes an increasingly common way of accessing the Internet, it follows that a flourishing wireless ecosystem is essential to the future of the news. To the extent that wireless provides an open, affordable, and innovative platform for civic discourse, this will likely be to the benefit of news producers and consumers alike.
A world without an open Internet would be one in which the very innovation we are depending on to save journalism would lose its oxygen before it had a chance to flourish. Whether training people to use the Internet for their own benefit or to shoot video as a professional skill or to become a citizen journalist who contributes occasionally, the net result will be a more robust local news and journalism ecosystem.
“The record and our economic analysis demonstrate, however, that the openness of the Internet cannot be taken for granted, and that it faces real threats. Indeed, we have seen broadband providers endanger the Internet’s openness by blocking or degrading content and applications without disclosing their practices to end users and edge providers, notwithstanding the Commission’s adoption of open Internet principles in 2005.”
"If the Internet were to evolve toward a tiered system in which preferred customers get better service, it could end up rewarding certain types of content over others—perhaps established, well-heeled companies over less-well-capitalized startups.
Splurging In The Viral Soup
A Word About Social Networking
According to Viktor Kjoss:
Social networking sites are extremely perplexing; some say they are helping us move toward a more harmonious world and will be a tool that perpetuates greater international unity and discourse, while others say they're making us take steps backwards and that they may be making our minds more infant-like. They're also fascinating due to the fact that regular people can have a strong impact on their evolution.
Carol Quinn points out that the makers of Twitter didn't create the "re-tweet" or the "RT"; the people did. "Hashtags" are also entities that were developed by Twitter users, not its makers. People can really change the way people use social networking, and it's nice to find a realm where people still have a voice and the majority is the ultimate power broker, not a small group of powerful individuals. (Wintour)
According to Kjoss:
"As of early 2010, the United States was estimated to have a population of 309,003,000 citizens, and it is estimated that 116,010,760 US residents were on Facebook. This means that about 38% of US citizens are Facebook users. To put things in a more panoramic perspective, consider this: there are 6,845,609,960 humans on the planet Earth. Of these 6,845,609,960 people, only 1,966,514,816 are Internet users. This means that about 9% of the humans on our planet are on Facebook. In the span of 8 years, Facebook has been able to seduce 9% of the world's population into joining their site.
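Kjoss's percentages can be sanity-checked with simple arithmetic. The sketch below (Python, using only the counts quoted above) recomputes the two ratios his figures support directly; note that his 9% claim refers to Facebook's worldwide membership as a share of world population, a count not quoted here, so it is not recomputed:

```python
# Counts as quoted by Kjoss (early 2010)
us_population = 309_003_000
us_facebook_users = 116_010_760
world_population = 6_845_609_960
world_internet_users = 1_966_514_816

# Share of US citizens on Facebook (Kjoss: "about 38%")
us_share = us_facebook_users / us_population
print(f"US citizens on Facebook: {us_share:.0%}")  # -> 38%

# Share of humans who are Internet users, implied by his figures
online_share = world_internet_users / world_population
print(f"Humans online: {online_share:.0%}")  # -> 29%
```

The numbers check out: roughly 38% of US citizens, and just under a third of humanity online at the time.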
The New York Times recently cited a new Pew Study that found that 50% of Americans are on social networks. In 2006, Pew found that only 5% of adults were on social networks. The study reported that 83% of people in the 18-29 age bracket said they used social networking sites every day.
Facebook's impact on society, though, can't be appraised solely by the sheer number of people who have accounts on the site; its real potency is how much time people put into the site. Humans spend over 700 billion minutes per month on Facebook, and 50% of the site's members log into the site on a daily basis. That means 375 million people log into Facebook every single day.
According to statistics compiled by Nielsen, the average US Internet user spends more time on Facebook than on Google, YouTube, Microsoft, Wikipedia and Amazon combined. As of January 2010, the average Facebook user spent more than seven hours per month on the site. American Facebook users spent an average of 421 minutes on Facebook per month, which amounts to more than 14 minutes per day. If you add together the time American Internet users spend on Google (1:35), YouTube (1:02), Microsoft/Bing (0:35), Wikipedia (0:15), and Amazon (0:22) each month, it doesn't equal the time users spend on Facebook.
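The Nielsen comparison is easy to verify: converting the quoted h:mm figures to minutes, Facebook's 421 monthly minutes exceed the five other sites combined. A minimal sketch, with the numbers taken directly from the paragraph above:

```python
# Monthly minutes per average US user, from the Nielsen figures quoted above
facebook_minutes = 421  # 7:01 per month

other_sites = {
    "Google": 1 * 60 + 35,     # 1:35
    "YouTube": 1 * 60 + 2,     # 1:02
    "Microsoft/Bing": 35,      # 0:35
    "Wikipedia": 15,           # 0:15
    "Amazon": 22,              # 0:22
}
combined = sum(other_sites.values())

print(combined)                      # -> 229 minutes
print(facebook_minutes > combined)   # -> True: Facebook alone tops the five combined
print(round(facebook_minutes / 30))  # -> about 14 minutes per day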
Some 55 percent of adult Americans now have a broadband Internet connection at home, whereas almost all have access to TV. If traditional media companies devote fewer resources to accountability journalism, it becomes more important for all Americans to have access to a full range of comparable resources online. Those that have low quality newspapers or TV and limited Internet access end up with less useful news.
We are truly immersed within a technosociety and environment. Our time spent splurging on gadgets and the Internet has disabled some of our cognitive abilities. We are being reduced to an infantile stage of total dependency on technology and its accoutrements. There is now an infinite variety of ways and means of communicating, and the Internet has made it a poor man's technology. Even though I have pointed to the lack of online affordability and access in Africa and South America, even those very remote areas have some form of contact with and use of the Internet.
Many pundits and scholars dispute McLuhan's assertion that technology, in the end, is conditioning and determining us, and they render fuzzy diatribes that really confuse people who read their works. As a user myself, although I had fancied myself a Luddite, I have been observing some quirky and unusual Internet feedback, and at times I am left puzzled. It is as if there is something that anticipates my thinking and, somehow, my action and intent.
Words pop up, and they give me what I did not expect, especially in Yahoo e-mail; I type in anything that I want or need and Google gives me feedback, be it an image, a translation or a topic on the web. I am not fond of the mobile phone, and have still not bought into that medium, but am keeping up with the Cool Medium of the house phone (echoing McLuhan there). I find some tasks very menial and easy to do in the many ways I use the Web, and over time, there is some Computer Brain that meets my most awkward needs, wants and so forth.
I still insist that we are media and technological hostages, for the computer has replaced, and is replacing, some of the mundane things we used to do as humans. For instance, many people can no longer hold other people's phone numbers in their heads; they just 'save' them, and to dial, they just press a button, scroll and click.
Mechanical conditioning all the way, and those, to me, are the effects and affects of Internet splurging in the Viral Soup and of technological technique, enabled by the merging and emerging gizmos that make one's world functional and real. Some people are leery of the introduction of Windows 10, and they caution that it amounts to hidden-in-plain-sight monitoring of their whole being, and that there are no 'privacy matters' safeguards in the incoming computer environ.
According to the statistics above regarding the visitations and usages by people in their billions, this has created an environ that is littered with all types of mobile phones, iPads, eBooks, iPods, with games, texting, Twittering, Facebook participation, pings, trills and gongs, musical texts and many other features that envelop and impregnate our present-day technological society, milieus and environments. We are totally enmeshed and immersed within all this electronic, technological cacophony and cognitive dissonance.
The technological smorgasbord we see engulfing and submerging human cognition entrenches its own cybernetics. The feedback that we, the users, send back in reaction to the sender is the cybernetic concept most relevant to the study of communication. As the statistics above make clear, this new wave and way of communicating has conditioned us: it enables us to do things very easily and to exert no effort in doing them. We have surrendered our natural ability to amass knowledge in our own brains, and have deferred all of that to the Web [the Internet].
The Age Of Digitalia...
Techniques to End Depression In The Digital Age
According to Sobiya N. Moghul, we are all attached to our media devices and social media, and this is not having a positive effect on our mental health. In fact, the use of electronic devices may erode six basic skills that can help to alleviate our device-induced depression in the modern world. Moghul's six techniques for reducing depression in the digital age are as follows.
1. Look Around
Sobiya shares that she notices the positive people in her life, including her coworkers, and tries to learn from them. Likewise, there are people around you who are doing small things like eating right or exercising, and when we pay attention to the positive qualities of others, it helps us to become motivated to make changes within ourselves. Get your head out of your computer or your mobile device for a few moments, and notice who is around you and what is really going on. Do you really want to live your life in a 6-inch computer screen and miss the richness of the physical world?
2. Practice Self Talk

Self talk is a way to walk yourself through painful situations or feelings. This technique helps you to figure out whether what you are thinking or feeling is based in reality. In addition, it can help you to make good decisions, and it can even motivate you to take action by saying positive things to yourself. Most people self talk in their own head quite frequently. Practicing self talk can help lift your mood. Once again, take a break from the chatter of Facebook, Twitter, and the world-wide-web in order to think for yourself again. Reclaim your own internal dialog, and don't just repeat your Twitter feed to your friends. What do YOU think about the world and what's happening on our planet?
3. Practice Active Listening
Sobiya recommends that you “Listen attentively when someone is talking to you. Try and learn to understand others' point of view.” She quotes Rachel Remen: “The most basic and powerful way to connect to another person is to listen. Just listen! Perhaps the most important thing we ever give each other is our attention.”
When you actively listen, you are taking the time to understand and process what the other person is saying in a conversation. You are not waiting to respond, or to inject your feelings, but genuinely listening to the other person. Are you glued to a device? If so, you are probably only nodding and saying "uh huh" to people who want you to see, hear, and understand them. Go ahead and look up, make eye contact, and have a conversation. Let go of the electronic device, and engage in a real person-to-person dialog.
4. Sing Out Loud
According to Colin Slater OAM, a former opera singer and the founder of Sing Australia, “When you sing, your posture and breathing are different to when you speak. It's like a sport. And just as in playing sport, singing promotes mental and physical fitness; it creates endorphins and it leaves you feeling uplifted and energized." Singing can elevate your mood by working with your physiological and mental well-being. Go ahead and find an online video that you can sing along with. Many have lyrics, so you can really let loose and participate in your favorite album. The digital world isn't bad; use some of its benefits to perk yourself up!
5. Cry or let it go
Crying is good for you both emotionally and physically. As Sobiya puts it, “Crying is basically relieving your body of its toxins and is an emotional outburst of the brain, which is imperative at times. Crying here doesn't mean wailing and getting swollen eyes every morning. This crying is purely relieving your mind from its worries, sadness and misfortunes.” Are you using your electronic devices to avoid your feelings? It's very easy to do, because digital mediums are filled with activities that can distract us from our emotional states. Are you holding something back?
6. Engage in Laughter
Laughing has been proven to have healing properties. Laughing is beautiful. According to the author, “So laugh even if you sound like a horse or a hyena. Need more inspiration? Well, laughter relaxes the whole body, boosts your immune system, triggers the release of endorphins and also protects the heart!” Again, the digital world isn't a bad place to be, but it needs to be used in moderation - Watch some online comedy or a great show, but then think about going out to a Live Stand-Up Comedy event. What else can you do in the digital age to keep yourself healthy and balanced? (Kim B)
In today’s society we have a plethora of electronic devices available for people of all ages, and this has created a social void. The missing link for the millennial generation seems to be the challenge of interpersonal communication.
In my very plain view and way of speaking, I believe the millennial generation, and perhaps the generations to follow, struggle, and will continue to struggle, mightily with interpersonal relationships because they do not know how to communicate without using their thumbs. My wife and I have three children. By the end of this calendar year we will have three teenagers. Our firm and steadfast rule in regards to mobile phones has been that our children must reach the age of 13 before having a mobile phone, and our youngest child is currently waiting with bated breath for her first mobile device.
However, from the point that our oldest child became 13 to now, when our youngest child will soon be 13, the difference in the available technology is light years apart. As my son so frequently reminds me, “Dad, we have a more powerful computer in our hands than they had at NASA when they put man on the moon.”
I don’t know about you, but to me that is very scary. I know that our children and grandchildren need every available opportunity to use the technology of today to be prepared for the advances in technology of tomorrow. I want my children to be on the cusp of technology advancement rather than playing catch-up like so many of us do today. I am concerned that with so little personal interaction, that does not involve a mobile electronic device, our society will forget what interpersonal communication and relationships are really about.
The isolation that comes from communicating only by means of our electronics will only further exacerbate our problems and greatly increase the number of people with symptoms of depression, if not a diagnosis of depression. This does not take into account the physical injury, disabilities, and/or death that could come about because of our negligent use of mobile electronic devices.
You may think of me as overly paranoid or perhaps even over-hyping my position. However as a parent, I have witnessed firsthand the potential for disaster while using a mobile electronic device. I am not merely referencing the all too common problem of texting while driving. I have on many occasions stopped my own teenagers from walking in front of a moving car while their heads were stuffed into their devices.
This not only saddens me, it also worries me for today and for our future.
This is what Judy McGrath had to say:
Everything is changing in the digital age. The amount of data available is really extraordinary to all of us. We work with lots of outside companies and really want to be on the frontline of being able to offer the absolutely best sort of return on innovation, if you will, to anybody who plays with us on all our platforms. We have all kinds of things, like our "Hills" engagement study, where we do things very specific to a franchise like that. Nick has something they call the "true reach meter," which is sort of a way to really measure consumption by kids and family viewers, websites and magazines. It's taking behavioral targeting, but trying to make it as simple and single-source as we can. ...
Right now we're really enjoying a good run in digital. People are spending more time on our sites. We're really at the forefront of offering a number of ways for clients to reach this consumer. We can prove our efficacy, change and move on. We just set up a Digital Fusion group which does nothing but create new ways to make those connections, whether it's promoting movies like "Speed Racer" and "Iron Man" or creating a series for T-Mobile, and we have an instant-feedback loop we share with the clients.
Is Modern Technology Killing Us?
The whole article below was penned by Erica Etelson...
The problem with technology is that most innovations have unintended consequences, and those unintended consequences are piling up, causing harm and creating dangers of existential magnitude. We turn a blind eye to those dangers and uncritically presume that, for all but the creepiest technologies (such as animal cloning), the benefits outweigh the risks and that technological innovation is humanity's highest calling.
Global monoculture rarely sees a technology it doesn't like. Working off the tacit assumption that technological innovation can and will solve the most critical threats to civilization - the collapsing environment, poverty, tyranny, disease pandemics and resource depletion - we are quick to celebrate unproven technologies and slow, oh so dangerously slow, to critically examine their safety and utility. It's as though a magical spell has pervaded our groupthink, immersing us in deluded fantasies of meeting human needs with a few swipes of a touchscreen.
If you would like to be the laughingstock of your next dinner party, challenge the cultural presumption in favor of technological progress. Other than a few head nods about how we really should unplug from our hand-held devices for a few minutes a day, you will likely be scoffed at as a backward-thinking loser whose resentment probably lies in your pathetic inability to figure out how to organize your iTunes library.
You might even be called a Luddite, because most people, liberals included, think the Luddites were knee-jerk reactionaries scared of any form of technology when, in fact, they were tradesmen and artisans engaged in a class protest against "all Machinery hurtful to Commonality" (i.e. forms of mechanization that damaged people and uprooted communities by forcing skilled workers to become wage slaves in factories). To be labeled a Luddite today is to be intellectually and culturally dismissed even by class-conscious leftists who have a blind spot when it comes to the politics of technology.
Technology is the practical application of scientific knowledge, the manipulation of elements (fire, water, rock) to create tools, methods and products. Primitive technologies like fire and spears enabled humans to meet their basic needs more easily and, hence, be fruitful and multiply. Modern technologies optimize comfort, convenience and speed, enabling humans to be very fruitful indeed, not only with respect to procreation, but in our astonishing ability to create and share ideas, literature, art and music. Still, it must be said that very rich cultures predate modern technology by centuries, and modern communications technologies may have reached a tipping point where what is authentically created and shared is overshadowed by market-driven, corporate-generated content that is sold or imposed.
Technology has its place. After all, the alphabet and the magnetic compass were innovations in their day, and I feel pretty confident in asserting that literacy and knowing which way is north are, on balance, good things. What we as progressive thinkers must do - because no one else is doing it - is acknowledge the ways in which technology can serve us, understand the ways in which many technologies have harmed us and develop some kind of rubric through which we can evaluate the merits of existing and emerging technologies. Furthermore, we must be mindful of the ways in which technologies can be used by corporate and government actors to repress and control us and question whether the benefits of using the technologies outweigh the risks.
As techno-literacy expands, eco-literacy contracts.
The Dangers of Technology
In which the author bemoans some of the unintended consequences of commonly used technologies
1. It weakens our resilience.
A resilient community is one whose people collectively possess the requisite knowledge and skills to meet their basic needs and are healthy enough in body and spirit to meet challenges and take care of less able members of the community (children, seniors and disabled people). We members of modern societies think ourselves very clever, though we lack even rudimentary knowledge of the biological and artificial life systems that support us. Lacking much authentic wisdom and knowledge, we tend to take vicarious pride in the inventions of others - I, who cannot so much as spark a flame without a match, feel intelligent by dint of my fellow human's invention of the combustion engine.
Our deluded pride in our species' intelligence blinds us to the core deception of technology - that it makes us more resilient. A species that is utterly dependent on the seamless functioning of a fabulously complex global superstructure with millions of impersonal moving parts, none of which most of us have even passing acquaintance with, is not, by any stretch of meaning, resilient. Evaluate your own resilience next time you turn on the faucet - what would you do if nothing came out? Do you even know where your water comes from? Many of us cannot imagine how we would survive without mobile phones much less indoor plumbing.
Consider the sad history of the Arctic Ihalmiut people who are the subject of Farley Mowat's The People of the Deer. Mowat chronicles the Ihalmiut's adoption of rifles and subsequent loss of the ability to hunt with bow and arrow as their forerunners had done for millennia. Intermittent access to ammunition (and squandering of shells on fox whose pelts they sporadically sold to white traders) resulted in waves of starvation that reduced their numbers to a handful. The Ihalmiut, understandably seduced by the prospect of an easier means of securing meat, lost their resilience. The same story is now playing out among young Igloolik hunters who use GPS to navigate their ancestral landscape. A recent article in The Atlantic ("The Great Forgetting," November 2013) reports on the rise of fatal hunting accidents as these hunters get lost or wander blindly onto hazardous terrain.
As techno-literacy expands, eco-literacy contracts. As the saying goes, "We know more and more about less and less." We know how to create PowerPoint presentations, but don't know a watershed from a wetland. Worse, the more tech-savvy and eco-ignorant we become, the more we delude ourselves into believing that humans are immune to the laws of nature and can magically replenish our planet's finite resources. We're unwilling to take even relatively simple, easy steps that would reduce demand for water, electricity and fuel unless a smart marketing campaign convinces us that we'll save big bucks. Modern conveniences have not only made us lazy, but have led us to assume that the arc of human betterment is inevitable, and that we are but passive observers of its triumphant mastery over nature.
Our attachment to technology often fosters hubris. And though in many cases it enhances community, it also often fuels our cultural bias toward hyper-individualism by making it all too easy to forego human community and place our bets, instead, on the techno-nanny to care for us. We think that we humans are so smart that each of us can go it alone. It's a dangerous dynamic in which we acquiesce at our peril.
Ironically, the internet and social media have made possible an unprecedented flow of cross-cultural communication and provide a means for mobility-challenged and geographically or socially isolated folks to connect and learn. Much of the re-skilling that is taking place (learning to garden, mend clothes, preserve food, etc.) is greatly facilitated by the vast electronic information storehouse. But as one homesteading educator ominously warns, "Be sure to print out a hard copy of this material, just in case . . ."
It fuels hyper-consumption.
This is fairly self-evident: Machines and fossil fuels enable humans to manufacture far more stuff, far more cheaply than we could ever produce by hand. Our stores are crammed with products that are cheap enough to buy without a second thought. Even online activity consumes more energy than most users realize, because massive data servers require ghastly quantities of electricity.
The philosopher Lewis Mumford wrote, "Every theoretic innovation, no matter how innocent in intention, automatically multiplies the number of practical products - and, more significantly, profit-making wants." In a capitalist economy, in which technological innovation is financed and controlled mostly by the private sector, this dynamic is clearly at play. And once the new product hits the shelves, it's not long before citizens all over the world clamor for it.
Technology separates us from the natural world by diverting our focus from natural to human-made wonders. Every day, we are offered a free gift of joy and serenity courtesy of Mother Nature, but we usually opt instead for artificial pleasures like video games. A vicious cycle is born, in which our separation from nature and from each other leaves us feeling empty and compels us to seek more creature comforts to fill the hole, and we then become addicted to the pleasure of consuming and spend even less time connecting with people and nature.
It accelerates environmental ruin, resource depletion and resource wars.
Where does all that hyper-consumption land us? In the dawn of the sixth mass extinction, it seems. Atmospheric carbon is approaching the dreaded tipping point Al Gore warned of nine years ago. We're poisoning our air and drinking water, poisoning ourselves and wildlife with pesticides, and quite literally trashing the oceans. We're rapidly depleting non-renewable resources like fossil fuels and rare earth metals and stripping renewable resources like forests, aquifers and fisheries faster than they can regenerate.
Thanks to mass production, it's very easy to make and consume products. But our consciousness, our ethics, lag behind our technological mastery. We're highly motivated when it comes to creating and consuming new things but relatively uninterested in conserving and reusing those things.
Technology has thus far largely shielded us from the real-life consequences of our hedonism, but ecosystem failures are starting to catch up with us faster than technology can respond. God save us when 3-D printers hit the shelves, and everyone can manufacture from their living rooms products to satisfy every fleeting fancy that passes through their heads.
Certainly, we know how to minimize if not outright abandon our use of dangerous or scarce materials; but, trapped in a wicked knot of inertia, corruption and hubris, we stay the course, and will even place our young in harm's way to secure foreign-owned supplies of the raw materials that fuel our unsustainable lifestyle.
Is there a single violent conflict in the world that is unrelated to resource scarcity or conquest, be it water, oil, natural gas, coal, metals, minerals or food? A comprehensive review of global conflict is beyond the scope of this article, but the short answer is "no."
As dire as our environmental crises have become, there's a curious lack of interest or concern outside of environmental circles. Environmental issues appear way down on the list of the public's priorities, below Medicare, crime, education, terrorism, the budget deficit, tax reform and jobs and the economy.
The environment is in our blind spot partly for lack of media attention and partly because people have a natural tendency toward tunnel vision, focusing intensely on the issues that they understand to be directly affecting them in the present moment and ignoring everything else.
But there's something more insidious going on: The fact that people do not see how environmental degradation is affecting them right now is yet another symptom of our techno-topian delusion. Having never known anything but an artificial lifestyle, we have no reason to think that the degradation of the natural world is of any consequence to us.
Sure, it's sad that the fish will all be gone in 50 years and, yes, it sure is unusually hot outside, but I can just pop my frozen lasagna in the microwave and turn up the air conditioning. This delusion that we are separate from nature is the perilous essence of the techno-topian myth. The sooner we can shatter it, the better.
It carries some seriously scary risks.
In 1999, a $125 million Mars probe crashed and burned because one team of rocket scientists did their calculations in metric units and the other in English units. Okay. We all make mistakes. But here's my point: We all make mistakes. Scientists are not infallible and we cannot expect products to be harmless just because the scientists and spin doctors say so.
Some of the most dreadful unintended consequences of technology are those whose dangers we've overlooked or downplayed. Pesticides, antibiotics, flame retardants, asbestos, food additives, plastic bags, lead in toys - the list goes on, from the moderately harmful (refined sugar) to the potentially apocalyptic (nuclear power).
There are countless consumer products in widespread use that have undergone little to no health and safety testing, including some 80,000 chemicals whose health effects have never been tested but which are - incredibly - presumed safe until proven harmful. You'd be forgiven for assuming that our regulatory scheme is grounded in the precautionary principle (unauthorized for widespread use until proven safe), but you'd be sadly mistaken.
Look at cell phones and Wi-Fi, universally adopted despite the fact that 75 percent of non-industry-sponsored studies have found that cell phones damage our DNA and that brain cancer in children has increased 1 percent a year for the past 20 years. On top of this, we bombard ourselves 24/7 with the radiation emitted from wireless networks and cell phone towers with nary a study of health effects. With cancer latency periods of up to 30 years, it will be another 20 years before we know the full extent of the harm. In the meantime, we're all subjects of the biggest radiation exposure experiment in history.
It's now common knowledge that, for decades, the tobacco industry suppressed evidence of the inherent harmfulness of cigarette smoking until there were just too many dead bodies for society to overlook. Given the propensity for industries to conceal the harmfulness of their products, our trust in the $3.1 trillion telecommunications industry's self-serving assurances is naive. As a great American president once said, "Fool me once . . ."
The most existentially threatening technology of all is playing out right now in Fukushima. If you're like most news consumers, you've probably been made to understand that the 2011 meltdown is safely behind us. You might want to sit down: In November 2013, the Tokyo Electric Power Company (TEPCO) began a high-stakes operation to remove the 1,331 damaged, spent fuel rods from Reactor 4 and entomb each rod in an underwater cask. (Fuel rods from another damaged reactor cannot be removed at all right now because they are so hot they're literally melting into the ground and allowing unknown amounts of radiation to seep into the Pacific.)
The removal procedure, which will take an uncertain number of months or years to complete, is, by all accounts, extremely dangerous. According to anti-nuclear activist Harvey Wasserman, the amount of radioactive cesium stored at Fukushima is 15,000 times the amount released at Hiroshima. The fuel rods were damaged and bent during the earthquake and there is debris floating in the cooling pool in which the rods are currently housed. During the removal process, if a rod is exposed to air or comes in contact with another rod, it could explode, which could trigger a chain reaction among some or all of the 1,331 rods. The operation, which nuclear engineer Arnie Gundersen likens to pulling cigarettes out of a crumpled pack, allows for no margin of error. Oh, and by the way, it's never been done before, not by TEPCO or anyone else.
Depending on whom you ask, the worst-case scenario if something goes amiss could be anything from a massive release of radiation necessitating the evacuation of Tokyo to an apocalyptic explosion that would force the evacuation of the West Coast of the United States . . . or worse, unthinkably worse. Japan's Nuclear Regulation Authority Commissioner has directed TEPCO to exercise "extreme caution" due to the "very large risk potential."
The safe resolution of the Fukushima crisis hinges on human infallibility. To feel confident that the procedure will be carried off without incident, we must have faith that the engineers considered every contingency, that the executives cut no corners, that the laborers do not drink or grow tired or daydream or pull the wrong lever. Or sneeze. It's come to this, our fate in the hands of a few underpaid human beings handed a responsibility of existential proportion.
Assuming we dodge the nuclear apocalypse, there are other dangerous technologies in play or on the horizon, many of which are conceived of as techno-fixes for the unforeseen consequences of previous innovations. A couple of examples: 1) "Roundup-ready" crops genetically engineered to withstand the herbicide Roundup have led to unprecedented levels of herbicide application which have given rise to herbicide-resistant superweeds. The techno-fix? Stronger (more toxic) herbicides; 2) Running out of fossil fuels? Plan B is to hydrofrack every last drop of oil and gas out of the earth and never mind the fact that we're forever contaminating scarce water supplies in drought-ridden places like California. And when we run out of shale oil and gas, don't worry, Chinese scientists have plans to install a ring of solar panels around the moon's equator. The moon. You know, that celestial sphere that controls the ocean tides and rotational axis of the Earth. What could go wrong?
It often diminishes rather than enriches our quality of life.
Technology's greatest contribution to human well-being, we have been made to believe, is the invention of machines that carry out the dangerous, exhausting and tedious tasks previously performed by humans. Thus humans are liberated to pursue more inspiring, creative pursuits. Tell that to a worker on an iPhone assembly line (assuming she hasn't yet committed suicide). As for the products we consume, how blissed out were you last time you interacted with a "customer care" voice tree when your latest gadget failed to boot up?
If rates of depression, anxiety and the disintegration of social bonds are our guide, we already have too much technology for our ancient souls to integrate. Many modern activities are, simply put, not wholesome, that is to say, the activities serve to disconnect and numb us. If the critique of unwholesomeness seems quaint, I submit that this is my entire point. Many times, in the thrall of our toys and drugs, we devalue life-affirming activities and ways of being.
Pushing buttons and swallowing pills can be so easy. And so unfulfilling. A rash of studies has shown what we could have guessed - that having everything done for us (growing and preparing food, cleaning our homes, transporting and healing ourselves) denies us our sense of agency and purpose and makes us depressed and anxious. We think we want all the conveniences modern life has to offer, but when we adopt them, a deep, ancient part of what makes us human quietly slips away.
The "Mad Men" sold Betty Draper appliances to manage her home, television to entertain her children and cigarettes and pretty dresses when suburban isolation and the boredom of empty leisure festered into clinical depression. Ever since the post-war years, we've increasingly turned to machines instead of people to help us. And when we don't need people for our survival, we become isolated from one another and distanced from the richness of human traditions, lore and knowledge.
As a parent, I'm keenly aware of how anemic my child-rearing responsibilities are now that the primal breastfeeding chapter is long behind my son and me. Parents through the ages have passed along to the next generation vast amounts of knowledge and skills to navigate the world around them - how to hunt, collect and grow food, where to find water, how to find and use healing herbs, how to build fire and shelter, how to make clothes, bowls, instruments, toys, you name it and they made it. What can I teach my son beyond how to bookmark his favorite web pages and operate the remote control? It makes me sad.
It erodes our privacy.
This bears mentioning even though it seems that many Americans under the age of 60 don't really care very much. If we did care, Edward Snowden's revelations alone would have toppled the government. What's interesting here is the unspoken Faustian bargain between the public and internet and software companies - we get to make use of their often "free" products, and they get to data mine the hell out of us and turn over the minutiae of our private lives to digital marketing and government agencies.
The erosion of privacy may not directly kill us (as the title of this article suggests), but it makes it easier for government to adopt totalitarian practices that, as we saw all too much of during the 20th century, can lead to state-sponsored repression, brutality and genocide. In fact, it's difficult to conceive how a government could control a large population without the aid of modern technology. Even if government passes up the chance to empower itself with our personal data, corporate marketers most certainly will not, and we can expect to be tracked, analyzed and served up increasingly personalized ads that will lead us to buy ever more stuff.
It deepens inequality.
University of Michigan economics professor Mark Perry reports on his blog that US manufacturing worker productivity has increased by more than eight-fold since 1947. This trend is largely the result of robotics and software. Productivity gains could and should mean higher wages and benefits and shorter working hours, but that's not what's happened. The gains have inured to the benefit of corporations and their executives and shareholders, the now notorious 1% who, by Credit Suisse's reckoning, own 46 percent of global wealth.
In a non-capitalist economy, productivity gains could theoretically benefit workers. Yet even in a democratic socialist utopia, where the profits from increased productivity might be shared more equitably, the environmental impact would be the same - more stuff, more consumption, more pollution until, eventually, we simply run out of materials to cut and mine and burn, or choke to death on soot and smog.
New economic elites are born of the technocracy, many of them libertarian and decidedly uninterested in problems such as poverty and structural inequality that cannot be solved through software hackathons. They gush about "disrupting" without much thought to what and who are the disruptees.
We are witnessing the rise of the digerati, the Jeff Bezoses of the world who are amassing enormous wealth by leveraging online efficiencies in marketing, distribution and staffing. (Witness the rise of online "crowd workforce" sites such as Bezos' Mechanical Turk and Crowdflower, where anonymous workers earn an average of two dollars an hour performing tiny tasks for high-tech companies happy to skirt wage and hour laws.) The digerati's concentration in cities like San Francisco has caused rental and real estate prices to skyrocket, meaning working class households can't make ends meet and, ultimately, simply cannot afford to live in San Francisco.
Poverty is unhealthy and often deadly. Residents of poor neighborhoods with high violence and without access to quality food and health care have shorter life spans. Technology-induced unemployment is not the sole cause of poverty, but a heavy contributor. Just because we can have machines replace people doesn't mean we should.
At the same time that technology is disrupting communities, it is also providing us with an accessible, low-cost platform for mobilizing resistance. It's hard to imagine, for example, how Occupy would have become an overnight sensation without Twitter and Facebook alerts or how anyone would have a clue as to what's really going on without sites like Truthout. Moreover, online organizing and socializing is surely of huge benefit to people with mobility issues or who live in isolated regions.
Then again, up until 20 years ago, humans managed to overthrow monarchies, strike against unfair labor practices, desegregate schools and lunch counters and more, much more, usually without even a landline, much less a smartphone. Organizers and alternative media creators and consumers clearly cannot hope to galvanize millions without use of modern communications technologies, but should be mindful that these tools do take a personal, interpersonal and environmental toll and can never wholly substitute for face-to-face, community-based organizing.
Some technologies have helped more than they've hurt, though the list is, by my reckoning, shorter than one might expect and cannot in good conscience include pillars of modern living such as nuclear and coal-fired electricity and most of the entertainment and communications devices they power, cars, air travel, processed food, GMOs and single-use products. As for antibiotics, the jury is out until we see whether drug-resistant superbugs become the nightmare scenario many are now predicting.
There are 7 billion people on this planet, many of whom have been conditioned to expect their standard of living to improve. Certainly, most of them will be disappointed to learn that it would take five planet earths to enable everyone to live like a North American. Putting aside that fantasy, the question becomes: What technologies can most safely, effectively and efficiently meet the world population's basic needs for clean water and adequate food, hygiene, shelter, clothing, medicine, birth control, education, arts and transportation? A secondary question is: What additional technologies should be used exclusively by people uniquely in need of them? Examples might include personal vehicles for elderly and disabled people and air conditioning for people with certain health conditions. And lastly: What technologies might we share as a community rather than each owning our own (e.g. computers, vehicles, phone booths - remember those)?
Appropriate technologies are those that meet human needs and enhance people's lives and communities, without exploiting or endangering workers and without damaging the environment. The best go one step further and actually restore damaged ecosystems. Examples of restorative technologies include permeable pavement that allows rainwater to replenish aquifers and mycologist Paul Stamets' remarkable use of fungi to clean up toxic spills and radiation.
The good news is: We already have most of what we need to live comfortable and healthy lives. Many of the technologies that serve us the most safely and effectively are toward the low-tech end of the spectrum and have been around for decades if not millennia - composting, gray water, indoor plumbing, fermentation food storage, greenhouses, irrigation, herbal and homeopathic medicine, acupuncture, condoms, electrolyte solutions, bicycles, trains, LED lights, solar panels, windmills, printing presses. We know how to thrive without nuclear reactors, fossil fuels, pesticides, air cargo and smartphones; we're just not doing it on a societal scale.
Take the need for human waste disposal for the billions of people who don't have flush toilets: The Bill and Melinda Gates Foundation sponsored a toilet design competition. The winning toilet is a nifty, high-maintenance $2,000 contraption involving a solar-powered electrochemical reactor and hydrogen fuel cells. That toilet might be just the ticket for one of the Gates' vacation homes but, for a rural dweller in a developing country, there are already cheap, sanitary, low-tech toilets available through organizations like SOIL. The hubristic toilet competition encourages innovation for the sake of innovation, an egotistical trap that blocks viable, existing solutions from widespread adoption.
Evaluating the appropriateness of new and existing technologies is no easy task, and it's easy to see how one's biases can distort the process. A writer, for example, might be quick to conclude that the benefits of personal computers outweigh the environmental and spiritual damage they cause. The following questions might serve as a good starting point:
- Who owns and controls the technology?
- What are the benefits? To whom do they accrue?
- Has the technology been proven safe beyond a reasonable doubt (the precautionary principle)? Who is sponsoring the research and do we trust them?
- Who are the subjects of safety tests? Have they given informed consent or are they a marginalized population being exploited to test new substances or procedures? If they are animals, do the benefits justify the animals' pain, suffering and captivity?
- What are the downsides and risks? Are there risks of catastrophic harm that can never be mitigated (e.g. nuclear accidents)?
- Does the technology bring us together or drive us further apart?
- Is there a less energy-intensive alternative?
- Does the technology deplete finite resources?
- Does manufacture or operation endanger worker or consumer health and safety?
- Can the technology be applied to weaponry or spying?
- Would widespread adoption of the technology displace and render obsolete classes of workers?
- What mental and physical capacities will atrophy when we adopt the technology?
- Can all 7 billion of us adopt it and, if we do, what are the consequences?
- Can an ethical argument be made for adoption by some subset of the global population and, if so, who, and who decides?
What About Communications Technology?
I've touched on some of the benefits and pitfalls of communications technology in this essay, but there are others, so many in fact that the exercise of evaluating these technologies is quite challenging. I don't hold myself out as the arbiter of whether smartphones and Wi-Fi and Facebook do more harm than good. I simply argue that these technologies are not exempt from the same risk-benefit analysis I suggest all technologies undergo.
What's a Neo-Luddite to Do?
It's all well and good to undertake individual assessments of the merits of various technologies, but how do we translate our formulations into law and policy? There are no easy answers here, and the path wends through terrain familiar to many progressive causes. In a crony capitalist system in which corporate interests determine which technologies to develop and bring to market, the public interest is, by and large, irrelevant. So, what's a neo-Luddite to do?
Organize. Given such unparalleled heights of corporate power, the "Move to Amend" movement to strip corporations of constitutional personhood is essential: it would open the door to draconian campaign finance reform, and the more draconian the better, given how thorough corporate political control has become. It should be noted that Move to Amend has already galvanized widespread support, thanks to the convenience of online petitions and other "netroots" tools.
There's a push for states and, ultimately, the federal government to adopt the Genuine Progress Indicator (GPI) as a more holistic and accurate measure of social and economic well-being than the GDP. The GPI would take stock of several of the risk factors technology presents, including environmental impacts, physical and mental health, divorce rates, educational achievement, etc.
Emerging technologies are approved and regulated by the Food and Drug Administration (FDA), Federal Communications Commission (FCC), Nuclear Regulatory Commission (NRC) and other government agencies run by scientists and policy makers who worked in the past - and will likely work again in the future - within the very industries they are entrusted to regulate. To pick just one example of many, Tom Wheeler served as the head of two telecommunications lobbying groups before being sworn in as FCC chairman in 2013. Until the revolving door is firmly shut, there's little hope of regulation in the public interest.
Ultimately, sound decision-making about emerging technologies may not be possible in a market economy, as opposed to a planned one. By their very nature, market economies are technologically driven, with corporate players constantly racing to be the first to bring new technologies to market. Absent a system in which accountable elected officials oversee economic development and conduct risk-benefit analyses of proposed new products, the odds will always favor new technologies.
Be skeptical. Technology certainly can address some problems, but don't assume the proposed techno-fix is the answer just because it's receiving breathless media hype. Reporters love innovators and startups and their megalomaniacal founders, sometimes going so far as to portray them as supernatural heroes (Steve Jobs). Caught up in the techno-topian dream as much as any of us, reporters tend to pen glowing odes to the latest and greatest invention or product and avoid penetrating questions about its utility, safety and environmental impact. Don't believe the hype. As Ralph Waldo Emerson forewarned, "Don't trust children with edge tools. Don't trust man, great God, with more power than he has, until he has learned to use that little power better. What a hell we should make of the world if we could do what we would."
Reject scare tactics. Putting the brakes on the rollout of new technologies will not, as the technocracy and their libertarian champions would have us believe, destroy the economy any more than any form of regulation, which is to say, not at all. Will it "disrupt" (to use a techie term) business as usual? You bet it will; that's the whole idea. In its stead, people-powered and eco-friendly technologies and businesses will arise and will hire new workers, lots of them. If technocrats find some of their wealth, power and mythical prestige diluted in the process, that is all to the good.
Opt out. If all 7 billion of us cannot live large, American-style, why should any of us? Contemplate and discuss with your family and community what you really need to live a healthy, meaningful and resilient life. Experiment with how close you can come to that standard of living voluntarily, and be prepared to accept involuntary cutbacks that are surely in store for us all, either in the form of a carbon tax, economic crisis, war, disease pandemic or ecological collapse. In the meantime, don't be so fast to download the newest app or buy the latest device. Ask yourself both what the app or device will give you and what it will take away.
Small acts of resistance and the adoption of new/old lifestyles will, eventually, coalesce into a force that can arrest our civilization's race toward the cliff's edge. Change seems to unfold slowly, but that's only during the early, underground stages. Once the shoots break through the surface, things can change very quickly. Who foresaw the dismantling of the British Empire? Who in 1965 expected cohabitation of unmarried couples and recreational pot-smoking to become commonplace within 20 years? Who predicted a black president and the acceptance of gay marriage in the first decade of the 21st century?
Once a tipping point in human consciousness is reached, once a noticeable number of people start rejecting what is cruel and destructive and begin reinventing their beliefs and lifestyles, things can change very quickly. With respect to our current predicament, they have to.