The Frustration that Comes from Getting Scraped and Outranked

What is Scraped Content?

Scraped content is content that has been copied from one site and republished on another. It is typically copied by an automated script rather than by hand, since automation makes the theft far more efficient. It's not unusual for every article on a site to be scraped and republished somewhere else.

Scraped content outranking original content (pauledmondson.hubpages.com is the original)

Why Do People Scrape Content and Display it?

Crawling and indexing pages, as Google does, is a form of scraping, but that's not what we're concerned with here. We're concerned with people who take our content and place it on another site in the hope of driving search engine traffic to it. Scraping is such a low-cost way of acquiring content that all a scraper needs to make it financially viable is a trickle of traffic from search engines.

When Does Scraping Become a Major Problem for Publishers?

Scraping becomes a major problem for publishers when search engines attribute the original source incorrectly for queries that send traffic. When search engines get it right, it's not much of a problem. When they get it wrong, the scraped page ranks above the original content that the publisher spent time and money producing, cannibalizing their potential audience and undermining their work.

Why Do Search Engines Get the Original Source Wrong?

There are a few major theories about why search engines get the original source wrong. The first is that when content is newly published, a scraper picks it up and creates a copy almost instantly. Google then indexes the page on the scraper's site first, treats the first page it finds as the original, and gives it top billing in the search results.
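To see why speed matters here, the sketch below shows how quickly a copy can be picked up once a new post appears in a site's RSS feed. This is only an illustration of the mechanism, not anyone's actual scraper: it assumes the Python feedparser library, the feed URL is hypothetical, and the script only prints what it would copy.

# Minimal sketch of feed-based scraping, to illustrate how a copy can appear
# within minutes of publication. The feed URL is hypothetical and nothing is
# actually republished; the script only reports new entries it sees.
import time
import feedparser

FEED_URL = "https://example-target-site.com/feed"  # hypothetical feed URL
seen_links = set()

while True:
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link not in seen_links:
            seen_links.add(entry.link)
            # A real scraper would fetch entry.link here, extract the article
            # body, and republish it on its own domain straight away.
            print(f"New post spotted shortly after publishing: {entry.title}")
    time.sleep(60)  # poll every minute; scrapers often poll even more aggressively

Because this loop runs continuously, a copy can exist before Google has ever crawled the original page, which is exactly the race condition the first theory describes.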

The second major theory is that the site hosting the scraped page has more authority than the site it was scraped from. For example, if CNN (theoretically) scraped your article from a newly created blog and then drove a lot of traffic and links to that page, the scraped page would have significantly more juice behind it and would likely outrank the original. This can happen with legally syndicated content such as Associated Press articles.

The third is the most frustrating for everyday publishers. I'll use a recent example from my own site to illustrate it. I wrote a post on Google Panda. My site (pauledmondson.hubpages.com) is full of unique content, all original and written by me. Perhaps my writing isn't the best, but surely I don't deserve to be outranked by a scraper.

If you search Google for "what we don't know about Google Panda," you will see another site ranking #1 above mine. Why did this happen? My theory is that Google decided my site was low quality: that the combination of my Hubs on search engines, BBQ, and kids didn't fit what they saw as high quality. Over the last few Panda updates I lost a huge amount of my Google traffic. When Google punishes a site with Panda or a penalty, it suppresses the pages so much that a scraped copy of a page can outrank it. I really believe Google has it wrong with my site. Time will tell.

As a webmaster, there is no way for me to tell why Google demoted my site and is allowing scrapers to outrank my work.

What Can You Do About Scraped Content Outranking the Original?

Here are a few options.

1. Post the example in the Google Webmaster Forums. Include the query and a link to your page. Make sure the scraper is actually outranking your content for specific queries (see the sketch after this list). Hopefully, this will help Google resolve the issue.

2. File a DMCA request. You can do it via Google's wizard. You can also send a DMCA request to the other site following these guidelines.

3. If the scraper site is monetizing your content with ads, affiliate links, or other means, you can contact those partners and let them know. It's possible the monetization partner will cancel the scraper's account.
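As a rough way to check whether a scraper really does outrank you for a given query (option 1 above), the sketch below compares result positions using Google's Custom Search JSON API. This is an assumption-laden illustration rather than a definitive tool: it assumes you have created your own API key and custom search engine ID (cx), the scraper domain shown is hypothetical, and positions returned by the API can differ from what an ordinary Google search page shows.

# Rough check of which domain ranks first for a query, using Google's
# Custom Search JSON API. API_KEY and CX are placeholders you would create
# yourself; positions may differ from a normal Google results page.
import requests

API_KEY = "YOUR_API_KEY"          # placeholder
CX = "YOUR_SEARCH_ENGINE_ID"      # placeholder custom search engine ID

def first_position(query, domain, num=10):
    """Return the 1-based position of the first result from `domain`, or None."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": query, "num": num},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    for position, item in enumerate(items, start=1):
        if domain in item.get("link", ""):
            return position
    return None

query = "what we don't know about google panda"
print("original:", first_position(query, "pauledmondson.hubpages.com"))
print("scraper: ", first_position(query, "scraper-example.com"))  # hypothetical scraper domain

If the original consistently comes back with a lower (better) position than the copy, there is probably no need to escalate; if the copy wins, that query and both URLs are exactly what to post in the Webmaster Forums or cite in a DMCA request.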

TechCrunch is Scraped, but Ranks Appropriately

Example of a healthy site ranking above scrapers

Do I Need to Worry About All My Content Getting Scraped?

My recommendation is to focus on situations where your original content is being outranked. There are services that will monitor your content and notify you when it's duplicated on the web, but the sheer volume can be a bit overwhelming.

As an example, I did a guest post about Panda on TechCrunch. If you perform the search with omitted results included, you can see how many copies of the article exist. I don't think TechCrunch is too concerned, because the original page ranks first; otherwise it would be very burdensome to file a DMCA complaint for each infringing copy, since the article has been scraped extensively.
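One low-effort way to spot copies yourself is to search Google for a long, distinctive sentence from your article in quotation marks, with omitted results included. The sketch below is my own illustration of that, assuming the requests and beautifulsoup4 packages; the URL is a hypothetical example standing in for one of your own published pages.

# Build an exact-phrase Google query from a distinctive sentence in your own
# article. Assumes the requests and beautifulsoup4 packages; the URL below is
# hypothetical and should be replaced with one of your own published pages.
import requests
from bs4 import BeautifulSoup

URL = "https://pauledmondson.hubpages.com/hub/example-article"  # hypothetical URL

html = requests.get(URL, timeout=30).text
text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

# Prefer longer sentences: specific phrasing produces far fewer false matches.
sentences = [s.strip() for s in text.split(". ") if len(s.split()) >= 8]
if not sentences:
    raise SystemExit("No suitably long sentence found on the page.")
distinctive = max(sentences, key=len)

print(f'Search Google for: "{distinctive}"')
print("Include omitted results to see every indexed copy of that phrase.")

Pasting the quoted phrase into Google approximates what paid monitoring services do; if the copies it surfaces all rank below your original, they are usually not worth chasing.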

22 comments

Cardisa 3 years ago from Jamaica

It is frustrating, and what's even more frustrating is when other hubbers copy your content and they get noticed. A lot of the content taken from HP is taken by people who have registered here under some guise as hubbers. They get a feel for what is popular and copy it. Some even copy articles into their own hubs.

Paul, we need to be stricter about the publishing policies and registration of new members.


Randy Godwin 3 years ago from Southern Georgia

Personally I think Google should be penalized for placing their ads on stolen content. They should be required to identify the correct owner before making any money whatsoever from stolen content. This makes them an accessory to theft, and they should be held accountable for their actions. There's no way to stop the thieves with the present state of affairs and Google making money from them.


DigbyAdams 3 years ago

I think HubPages needs to change its policies about not getting involved. They need to fight for the integrity of this site. If they make it any harder to sign up and publish, they may find that they have a hard time keeping anyone.


Cardisa 3 years ago from Jamaica

I agree with Randy on this as well. Google needs to take more responsibility.


NateB11 3 years ago from California, United States of America

Wow, definitely frustrating. I didn't know about all that and how it occurs. It's good to learn of the options on how to handle it.


Blake Flannery 3 years ago from United States

If scrapers can immediately copy content, and it takes several hours to get content "featured" on this site (even though it is published), then wouldn't it make sense for Google to display the scraped version that can be indexed right away? Have you tried delaying publishing until after the QAP has been performed?


Will Apse 3 years ago

'My theory is that Google decided my site was low quality. That the combination of my Hubs on search engines, bbq, and kids didn't fit what they saw as high quality'

Are you sure this is true? Google has never tied Panda to niche issues as far as I can tell. I went digging today and Google has almost nothing to say about niche sites, period.

It has talked about authority sites, and Matt Cutts has dropped broad hints about a boost for anyone seen as an 'authority' in a particular field.

I have never seen any reference to disparate content being penalized simply for being disparate.

For anyone interested:

hxxp://searchengineland.com/google-authority-boost-googles-algorithm-to-determine-which-site-is-a-subject-authority-159405


Marketing Merit 3 years ago from United Kingdom

Think Google have de-indexed the cararticle entry now. However, the same article has been copied onto healthylivingbox (dot) net and is showing in second place, with the original hub article ranking first.

What concerns me is the 'pending' period attached to hubs and the associated 'no-index' tag that accompanies this. If hubs are being scraped from the site feed, which can potentially be within minutes of them being published, depending upon the cron job settings, then isn't there a danger that the copied hub could actually be indexed prior to the original hub?


Paul Edmondson 3 years ago from Burlingame, CA Author

@Will Apse

If a site gets hit by Panda, that allows scrapers to rank above it. That was the point I was trying to make....


Glenn Stok 3 years ago from Long Island, NY

Thanks, Paul, for this in-depth discussion of the issue with scraping. I have found some of my own hubs that were scraped. Some ranked higher than my original, but most ranked lower.

I found your discussion helpful. I know now that I should not worry too much about those copies that rank lower. I still will file a DMCA complaint. But I won't bother with following up as I have been. It has been too time-consuming.

As for the copies that rank higher, you have helped me realize that those are the ones I should really concentrate on. Posting in the Google Webmaster Forum, as you suggested, is a great idea as an extra step in addition to the DMCA complaint.


Writer Fox 3 years ago from the wadi near the little river

1. I don't think Google "got it wrong" when the scraped content outranked yours. I think Google chose the better article. Your article has an outbound link for the 7th and 8th words of content where keyword phrases should be and the anchor text is unrelated to the title of your Hub. The scraper didn't have a link. Your link "personal health" is totally unrelated to the subject of the webpage and, therefore, a spam link. The scraper didn't link that either. But the most important reason is that the Google cache for your Hub shows a 404 crawl error. Your Hub cannot be the best answer to the search query if Google has a problem crawling it. Google didn't get it wrong. Google got it right.

2. The scraped content outranked yours on a search for the title. Ranking really has to do with a search for keyword phrases. Since your Hub is not optimized for discernible keywords, any conclusion drawn will not be accurate for true 'ranking.' In the second example of a search query, you searched for an exact match of text in an article. If you perform the same search right now without the quotation marks, the scraping sites don't show up at all.

3. Plenty of websites cover a wide variety of topics: Wikipedia, every news site, Yahoo, etc. Two of my Hubs were scraped by the same site, but they never outranked mine for keyword phrases.

4. Scraped content can affect rankings even if it doesn't outrank the original. What happens in many cases is that both pages are given very low ranking positions, especially if they are published within days of each other. Every scraped copy of a Hub should be diligently pursued. If not, the entire HubPages domain is at risk for low rankings because it looks like a free-for-all for free website content like EzineArticles. A true syndicated article is something quite different because it comes with disclosure of the original source, such as content syndicated from Reuters.com. Scraped content from HubPages does not.


donnah75 3 years ago from Upstate New York

This is a problem that I, and so many hubbers, have been facing a lot of lately. Maybe always and I just started to notice. Either way, it is very frustrating. I appreciate your discussion here. I do continue to wonder if Hubpages can do more on behalf of the writers here when it comes to all of this blanket theft. I guess I feel that Hubpages as a company has more knowledge and contacts than I do as a single writer. I will continue to fight the good fight for my work, as I don't expect you to do it for me. However, Hubpages loses revenue from this theft too. I wonder what the solution is. Thanks for writing this and letting us know that you are fighting the scraper demons as well.


Sue Bailey 3 years ago from South Yorkshire, UK

Very interesting Paul. I intend to do some investigating because this may explain why my page views on some hubs have dropped to zero. Disheartening! Voted up, useful and interesting.


Lastheart 3 years ago from Borikén the great land of the valiant and noble Lord

My twin hub killed my hub, now it comes up in Google search instead of my hub. Very good information. Writer Fox has done a good job also.

I wish I had seen this before. Thanks for the share. I will share it just in case somebody else has my same sadness.


Paul Edmondson 3 years ago from Burlingame, CA Author

It does look like the scraper went away and my article is now outranking the copies...Not sure why the scraper is now 404ing...


Writer Fox 3 years ago from the wadi near the little river

I just put the URL in the Google search box and this Hub is not in the search results. I put the title in the search box and it wasn't in there either. I did an exact search (in quotes) for the title and it did not show. This Hub is not in the Google index. You might try "Fetch as Googlebot" from webmaster tools and see what has happened. If the crawl is successful, you can submit the URL again there.


IzzyM 3 years ago from UK

I had this problem on my account for the longest time after August 2011. My traffic dropped by 80% and has never recovered, despite Google having apparently 'lifted' its embargo - just checked and my hubs are ranking above the scrapers now. They are mostly just not ranking at all anymore. I guess I shouldn't have de-optimized them all.


pstraubie48 3 years ago from sunny Florida

This is such an important topic. It is so wrong that someone can swoop in and literally steal one's work. Hopefully some way will be devised to keep this from happening...I am not sure HOW that can happen but when it is cutting into the livelihood of the original owner of the content then it is a problem that needs to be corrected sooner than later.

Angels are on the way to you this morning ps


Easy Exercise 2 years ago from United States

An important post. Sadly, I have been scraped twice. Once I had to file a DMCA complaint, but I didn't think to research the web traffic and the money. Shame on me! I learned a lot! I have some more work to do. And it has been over 90 days now since I filed the DMCA complaint, so it is time to revisit this. Thank you!


Larry Rankin 2 years ago from Oklahoma

As someone who likes to write, I can't understand the mentality of wanting to take credit for anything that isn't your own. Yes, some folks do a better job than I do, but I'd still rather take credit for my writing, simply because it is mine. I like money like everyone else, but if there is no positive legacy in how you've acquired it, then at the end of the day you're still just a nothing.


PegCole17 2 years ago from Dallas, Texas

Anything current, Paul Edmondson? What about the copy cat sites that have the entire Hub Pages platform copied into other languages? Russian, Japanese, Chinese? Can anything be done besides us filing reams of DMCAs?


makingamark 23 months ago from London

The big difference I notice between Squidoo and HubPages is that when a scraper site identified by individual site owners started major scraping, the tech team at Squidoo would get involved if the site was clearly targeting a number of different lenses owned by different lensmasters. As a result, a number of scraper sites were taken down through collaborative action.

Here at HubPages the concerted action on major scraping activities between tech team and individual site owners just doesn't seem to happen. Maybe that explains why parts of some of my hubs have been scraped in the 4 months since they arrived at HubPages - and that very rarely happened at Squidoo.

Try putting the terms 'hubberusername' and 'HubPages' into Google and see what comes up.

I tried with the terms 'pauledmondson' and 'HubPages' and at the top of page 3 the scraped content starts.....
