
Why Social Science Research Is Suspect

Updated on August 24, 2015

"... in the end, we often had to choose between keeping a social norm, and getting quantifiable data."

Human behavior is really, really hard to quantify.

In the early 2000s I participated in a linguistic field research project in Asia. The goal was to determine whether two local languages could be considered dialects of each other or were, in fact, mutually unintelligible. Word lists from both languages had already been taken and compared, but they fell into that grey area (about 70% similar) that made further testing necessary.


Our method was called Recorded Text Testing. To do this, you record a person telling a two- or three-minute story in one language. You take it into the other language area, play it for the people there, and see how well they understand it. Then you do the same thing in reverse.

Sounds simple, but it was astoundingly hard to prove - or, in some cases, determine - how much of the story people actually understood.

Our team had originally tried to measure comprehension by giving an oral quiz about the story. But they found that answering quiz questions about a story was not natural to the people. When asked content questions, they simply started re-telling the entire story. Obviously this meant they understood it at least somewhat, but since they had not answered the questions, comprehension had not actually been proved. So the team changed their method. We would go with what came naturally. We would ask people to retell the story.

However, this still was not good enough for a research project, because simply having someone re-tell a story does not yield a number, and for research you need numbers. You cannot say the subject re-told the story "pretty well." So, we counted up the significant details in the story and graded each person's re-telling based on how many of them they hit.
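
To make that scoring idea concrete, here is a minimal sketch in Python of grading a retelling against a checklist of significant details. The story details, the retelling, and the simple hit-counting are invented for illustration; the actual checklist and weighting our project used are not described here.

    # Hypothetical checklist of "significant details" a scorer listens for.
    SIGNIFICANT_DETAILS = {
        "man walks to the market",
        "buys a water buffalo",
        "buffalo runs away",
        "neighbors help catch it",
    }

    def score_retelling(details_mentioned):
        """Return the fraction of significant details the listener reproduced."""
        hits = SIGNIFICANT_DETAILS & set(details_mentioned)
        return len(hits) / len(SIGNIFICANT_DETAILS)

    # A listener who reproduces three of the four details scores 75%.
    retelling = ["man walks to the market", "buffalo runs away", "neighbors help catch it"]
    print(f"{score_retelling(retelling):.0%}")

Even a simple scheme like this quietly builds in judgment calls: which details count as "significant," and whether every detail should count equally.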

Even with this method, our quantification process was imperfect, to say the least. For one thing, the story was so long that people often forgot parts of it when retelling. (Was this because of language or not?) Sometimes our intuitive sense of how much the person understood conflicted with their score on our "test." Some people, instead of re-telling the story, wanted to give commentary on it or react to it with questions of their own. We came away with a decent informal sense that people understood it pretty well, but this sense was undermined by our total inability to prove this quantitatively.

To get this basically useless data, we had had to impose a great deal on those lovely people. They allowed us into their homes, and we sat there with clipboards, asked a bunch of questions in our broken trade language, and all this seemed an awful lot like either a test at school or an interview at a government office (neither one a happy association). Of course, we did our best to be gentle and respectful and explain that we were just researching these languages, that there were no wrong answers, etc., etc., but in the end, we often had to choose between keeping a social norm, and getting quantifiable data. The low point on our survey was probably when we asked each participant: "How similar would you say this language is to your own: 0 to 25%, 25 to 50%, 50 to 75% ...?"

That last ridiculous question is not too different from the methods that we often read about in various social science studies, where participants are asked to rate their own or another person's personal qualities, such as attractiveness, intelligence, likeability, etc., on a scale of one to five or one to ten. These kinds of qualities are even more abstract and subjective than whether another dialect is understandable. Though different raters generally give about the same scores, which is taken to show that such ratings are reliable, it's worth remembering that quantifying another person's character is not a task anyone is ever asked to do in everyday life. Of course we make character judgments all the time, and base decisions on them, but we don't use numbers.

I myself do not even like to be asked, as we often are at a hospital, to "rate your pain on a scale of one to ten."

"One is no pain at all, and ten is the worst thing you've ever felt."

The first time I was asked this, I was in labor. I thought, How should I know? I'm uncomfortable now, but I've been told it's going to get a lot worse. Maybe I have never felt a 10. What if I say 6 now, and later it turns out it was only a 2?


Statistical significance is weird.

For a real-life example of this quantification problem, take a recent article in The Economist (July 25, 2015). The headline is, "Science and Justice: Looks Could Kill."

According to The Economist,

[The researchers] selected 371 prisoners on death row and a further 371 who were serving life sentences. ... A group of 208 volunteers ... were then invited to rate photographs of each convict's face for trustworthiness, on a scale of one to eight, where one was "not trustworthy at all" and eight was "very trustworthy."

The results of all this work revealed that the faces of prisoners who were on death row had an average trustworthiness of 2.76 and that those serving life sentences averaged 2.87. Not a huge difference, but one that was statistically significant (it, or something larger, would have happened by chance less often than one time in 100).

... In Floridian courts, at least, it seems that your face really is your fortune. (page 64)

The difference between the two groups' average scores may be statistically significant, but it is - how to put it? - tiny. Not even a full point of difference. As a lay person, I couldn't help noticing that the death row group "scored" 2 3/4, and the lifer group scored not quite 3.

Is a difference like that really significant in real life? I'm not sure. But I'm wary.
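
For a sense of how a gap that small can still clear the bar for statistical significance, here is a rough simulation sketch in Python. Only the two group means (2.76 and 2.87) and the group sizes (371 prisoners each) come from the article; the spread of the ratings (a standard deviation of 0.7) is my own assumption, made purely for illustration.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Simulated average trustworthiness ratings for each prisoner (assumed SD of 0.7).
    death_row = rng.normal(loc=2.76, scale=0.7, size=371)
    lifers = rng.normal(loc=2.87, scale=0.7, size=371)

    t_stat, p_value = stats.ttest_ind(death_row, lifers)
    pooled_sd = np.sqrt((death_row.var(ddof=1) + lifers.var(ddof=1)) / 2)
    cohens_d = (lifers.mean() - death_row.mean()) / pooled_sd

    print(f"p-value:   {p_value:.3f}")   # often below 0.05 with 371 faces per group
    print(f"Cohen's d: {cohens_d:.2f}")  # roughly 0.1 to 0.2: "small" by the usual benchmarks

With hundreds of faces in each group, a difference of about a tenth of a point can produce an impressively small p-value, while the standardized effect size stays firmly in "small" territory.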

This hit home for me in 2009 when I read an op-ed that took evangelical Christians to task because apparently, in polls, torture (e.g. of suspected terrorists) was approved of by just over half of Evangelicals ... and just under half of the non-religious. It was something like 55% of Evangelicals approving, compared to 45% of non-evangelicals approving. That is a pretty big difference statistically. But, in real life, it means that in both groups about half of the people said they would approve of torture and about half said they wouldn't. The numbers were close enough to 50% in both groups to show that, whether Christian or not, people were deeply conflicted about the morality of torture.

Yet on the strength of this statistically-significant-yet-real-life-ambiguous difference, the author asserted that Christians don't stand up to evil, just as they didn't stand up to Hitler. (Yep, he mentioned Hitler. In his opening paragraph.) We have learned that teeny tiny differences in numbers are hugely significant, that they can even help us to see into the souls of entire groups of people. But I am not so sure.
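
For what it's worth, the arithmetic of that poll illustrates the same thing. In the sketch below, the sample sizes (400 respondents per group) are my own assumption, since the op-ed's actual numbers aren't given here; only the 55% and 45% figures come from the story above.

    from statsmodels.stats.proportion import proportions_ztest

    n_evangelical, n_secular = 400, 400              # assumed respondents per group
    approvals = [int(0.55 * n_evangelical),          # about 55% approving
                 int(0.45 * n_secular)]              # about 45% approving

    z_stat, p_value = proportions_ztest(approvals, [n_evangelical, n_secular])
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")    # p comes out well under 0.01

A highly "significant" difference, and yet both groups are still hovering within a few points of an even split.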


Social scientists will often measure something other than what they think they're measuring.

Because of the impossibility of actually measuring the things in the human mind that we really want to measure, a popular method is to measure something related. Sometimes the researchers seem to think that they actually are measuring the target thing, but they're not.

For example, you may have heard of "pregnancy brain." Many women report that they are more forgetful during pregnancy. During my childbearing years I read a blurb somewhere (probably reporting on this study) that said this idea of pregnancy brain had been disproved. It turns out that pregnant women usually do just as well on memory tests as the non-pregnant. Sorry, ladies, it looks like your excuse is gone.

Except, of course, that taking a memory test in a controlled setting is a completely different mental skill from being able to concentrate and remember all your duties and commitments in the whirlwind of actual, everyday life.

The only way to determine whether some pregnant women have a harder time with the latter task is to ask them. But this is not "hard data," so we will test something related instead.

I remember a similar study from my college years that "proved" that children under a certain age can't feel empathy. The experiment involved sitting a child down at a sand table, across from a large doll. On the sand table were three hills - one close to the child, two closer to the doll. The children were asked to "draw what the hills look like to the dolly."
They drew the three hills as they themselves saw them, one close to the viewer and two farther away, rather than as the doll would have seen them.

At the time I heard about this study, I accepted that it meant young children aren't capable of empathy. Only years later did I realize that all it proved is that they find it hard to visually rotate an object in their mind.

The study did not prove a lack of empathy at all. The kids might be unable to imagine what visual scene another person is facing, but perfectly able to read and identify with their emotions.

Social science findings often "prove" what most human beings already know.

Much as you might not suspect it from the title of this article, I enjoy reading social science books and articles. Part of what I enjoy is the mental "click" when a study confirms my sense of how things are in the real world. It's nice to feel that your common sense has been vindicated by Science. And now you can sound smart when you talk about it! But all too often, this ends up meaning that "studies" tell us what everyone already knows, as in the example above, where most of us could have guessed that defendants with shiftier faces would get harsher sentences.

Many people (myself included) also enjoy reading a study that finds the opposite of what we would expect. Then we feel that the conclusion is truly insightful, precisely because it's unexpected. And we get double smart points for knowing something from Science that most people wouldn't guess! Unfortunately, when it comes to human behavior, a scientific conclusion that sounds nonsensical is often precisely that.

When a study doesn't "prove" what the average person has already known for millennia, it may instead "prove" whatever the researcher, or the research community, knows or wants to be true. This is not to say that social science researchers are dishonest. Failing to notice things that we didn't expect to see happens to everyone, and it's called Confirmation Bias.

With the scientific method, you start with a hypothesis and set out to prove or disprove it. If the hypothesis has to do with the physical behavior of moving objects or gases or chemicals, it's usually pretty clear whether or not your hypothesis was right. But when the hypothesis has to do with human behavior, with all its ambiguities and difficulties of interpretation, often the results are inconclusive. And since the researchers themselves have to interpret the results, inconclusive results can look like a confirmation of the hypothesis, whatever it might be.

Even in the hard(er) sciences, findings tend to change with changing social values, and even with fads. Think about how many foods have been "proven" to be unhealthy in study after study, only to be rehabilitated after a couple of decades. It wasn't that all the studies were completely invalid, it was just that they didn't prove as much as was first thought. Think about how much greater this problem is when the subject of study is people.

Conclusions reached from suspect data get reported as unassailable fact because it's "Science."

To finish the rant part of this article before I propose a solution ...

We all know the great respect accorded to Science. If something has been scientifically studied, it is considered authoritative and indisputable. This works pretty well for the very hard sciences, like astrophysics, biochemistry, and so on. (Although even there, scientists lament that the way their studies get reported is often so oversimplified as to be misleading.) But this idea of scientific-studies-as-proven-fact does not work so well for fields of study that, to put it mildly, do not lend themselves to a quantifiable, laboratory approach.

The reader can probably see the danger of having magazines and other news sources report as sober fact the results of studies that, for all the reasons given above, are very difficult to conduct accurately ... to interpret ... and to guard from becoming little more than the codifying of a subculture's biases. But it's so hard to resist. When we like the implications of a "conclusion" that has been reached, we cheerfully share it. It makes us feel smart, and vindicated. Only on rare occasions does a study come along that "proves" something that we know from personal experience to be untrue. When that does happen, we might question how well the study was carried out. But we know our motives taint the questioning. So, my solution is to become skeptical of all "studies" involving people, short amounts of time, and quantifiable data.

What kind of study would get my respect?

When it comes to social behavior, detailed descriptions are better than numerical "data."

One social science researcher whom I respect deeply is Deborah Tannen. She is a Georgetown linguistics professor and the author of several books about human communication. The ones with the catchiest titles are That's Not What I Meant, about communication styles, and You Just Don't Understand, about the dynamics of communication between men and women.

What I appreciate about Tannen's books is the amount of detailed description of these tricky dynamics. There is a lot of analysis. People's actual words are quoted. So are their communication norms (this is a big theme of Tannen's) and their intentions, what they are trying to do. This is so much deeper than a misleading quantifiable study. Tannen's methods are often interviews, reading, analysis, and comparison of the results with her personal experience. She does this in a thorough way. This stuff does not tend to yield numbers very well, so Tannen does not rely much on methods like polls and people rating each other on a scale. Instead, she just describes what is going on.

That is the way to research people, and to report on it.


The best social science research is the kind that takes a long time.

In this article, I do not mean to say that there is no useful information in the reams that have been written in the field of social science. Of course there are helpful studies out there. Some are better than others, depending upon what is being studied. For example, the recorded-text testing that I described above was relatively hard to do well. Some other kinds of language tests, such as word-list comparisons and sentence-repetition tests, yield better data (though they do not measure exactly the same things).

Of course, when comparing two languages or language communities, the best picture of the situation would come from living in one or both of them, learning the language, and doing a thorough analysis of the languages' sound, grammar, and rhetoric. But that takes a very, very long time.

Similarly, with other social science research, the best data are the data that take a lot of time and expense to get. Interviews, case histories, longitudinal studies that follow the research subjects throughout their lives... And, of course, large sample sizes.

All of these very involved research methods are hard to do in a semester with graduate students on a low budget. Also, they tend to yield data that is rich, complex, and difficult to distill into a headline or even a single article.

That's why my favorite examples of good social science tend to be books like Tannen's. Particularly when those books are written by people who have been in their field for years, acquiring wisdom. These people are in a unique position to provide detailed and insightful descriptions of the human behaviors that we most want to understand. Though their main conclusions may, in a sense, only confirm what we already know, their detailed descriptions of the dynamics of how people relate can, indeed, help us understand each other better.

And the very best social science study of all? That might just be a really well-researched, honestly written memoir or history book.

Comments


    • Jennifer Mugrage (author), 2 years ago from Columbus, Ohio

      Thanks, all, for your insightful comments and for taking the time to read.

    • Rodrigo Sebidos, 2 years ago from Zone2 Brgy.Guadalupe, Baybay City, Leyte, Philippines

      That's true: many of us disregard the results if they do not suit our own personal agenda. We tend to be so biased by our own perception of the problem that the result is tailored to satisfy the targeted subjects when in fact it is not, and then we commit the grave error of accepting (affirming) the null hypothesis when in fact it is false. Many decisions (e.g. in government) are based on these results, and millions of lives are compromised. I believe social scientists are in a position to deal with these biases when the human factor is involved. A very interesting article.

    • Mel Carriere, 2 years ago from San Diego, California

      I'm sure you've heard the quote attributed to Abraham Lincoln saying that there are three kinds of lies - lies, damn lies, and statistics. Statistics shift with the wind and often miraculously side with the viewpoint of whoever is paying for the poll. I'm not saying they are invalid, I'm just agreeing with your point that they have to be analyzed carefully, which the media never does. Very well written and thought provoking hub.

    • Tamara Wilhite, 2 years ago from Fort Worth, Texas

      There have been a number of articles out on the biases in academia against conservatives. For example, 40% of the public is conservative, but, outside of engineering, less than 10% of academic professors and less than 1% of social science professors are.

      When there are half as many blacks in one area as the average, social sciences look for deliberate exclusion instead of factoring in personal choice. When there are one hundredth as many conservatives in their field, they come up with excuses that conservatives are stupid, ignorant, or off making money - and can't see their own biases in the matter.

      So you have many scientific papers supporting political agendas, papers tailored to support an agenda regardless of the data, suppression of papers supporting contrary ideas or undermining political ideologies, and denial of the biases of those in decision making positions because "everyone thinks like we do".

    • Rodrigo Sebidos, 2 years ago from Zone2 Brgy.Guadalupe, Baybay City, Leyte, Philippines

      Yes, you have a point, Ron. Moreover, the problem arises when the expected deviates from the observed due to small sample size. Oftentimes recommendations are based on these results, which do not give the true picture of the population or subjects under study. That's a good article.

    • Ronald E Franklin, 2 years ago from Mechanicsburg, PA

      Thought-provoking article. My take-away: when researchers try to quantify data that is inherently unquantifiable, they are no longer objective observers. Rather, they are imposing a preconceived notion of the categories the data should fall into, and will do whatever mental shoehorning is required to fit the data into those categories. In other words, they don't see what's there, but only what their preconceptions have prepared them to see. Anyway, I enjoyed reading this.

    • RTalloni, 2 years ago from the short journey

      How interesting…thanks much for a thoughtful read.

    • Rodrigo Sebidos, 2 years ago from Zone2 Brgy.Guadalupe, Baybay City, Leyte, Philippines

      Nice Hub! Yes, social science research is more complicated than plant or animal research because of the human factor and small sample sizes. Most often, computed statistical significance has to be verified in the field.
