
Crawl errors purposeful (see http://bit.ly/19FRgBc)

  1. janderson99 posted 3 years ago

    Google Webmaster Tools (WMT) has started reporting large numbers of crawl errors (>150), starting in May, of this type:

    Is this related to 'Related Search'?
    Is this damaging my sub?

    1. missolive posted 3 years ago in reply to this

      I have a number of these as well (xml/stats/relatedhubevents.php?aid=). Thanks for posting in the forums; I will follow along. I hope these aren't hurting us in any way.

      By the way, I also tend to get lots of WMT errors from the Topics pages on HP. These are on active (featured) hubs. Anyone else get these?

    2. Ceres Schwarz posted 3 years ago in reply to this

      I see a lot of errors like that in my Webmaster Tools account too, and going to the links just brings me to a page-not-found error. Where did these errors and links come from?

      I just click "mark as fixed", but I don't think that really does anything, because sometimes I see them again when I check my Webmaster Tools account.

  2. LCDWriter posted 3 years ago

    I'm noticing crawl errors being reported in my WMT as well (when there never used to be any). I wonder if it is related to the subdomain issue?

    1. Matthew Meyer posted 3 years ago in reply to this

      Please see Paul Edmondson's response below.

      1. janderson99 posted 3 years ago in reply to this

        I checked my sitemap and it includes an entry for every page in my sub.
        After every sitemap entry there is one of these:


        which is exactly what is appearing in the crawl errors.
        The number of indexed pages missing from the Webmaster Tools list for my sub (194) is very close to the number of crawl errors of this type (56 now, but it was 150 a few days ago).
        Hope this helps!
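        If you want to repeat this check on your own sub, a rough Python sketch like the following should do it (the sitemap URL is just a placeholder for your own subdomain's sitemap, not a real address):

          # Rough sketch: fetch a sitemap and count the entries that now
          # return error status codes, mirroring the check described above.
          import urllib.error
          import urllib.request
          import xml.etree.ElementTree as ET

          SITEMAP_URL = "http://yoursub.hubpages.com/sitemap.xml"  # placeholder
          NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

          with urllib.request.urlopen(SITEMAP_URL) as resp:
              tree = ET.parse(resp)

          error_count = 0
          for loc in tree.findall(".//sm:loc", NS):
              page_url = loc.text.strip()
              try:
                  urllib.request.urlopen(page_url).close()
              except urllib.error.HTTPError as err:
                  print(err.code, page_url)  # e.g. 403, 404, or 410
                  error_count += 1

          print("sitemap entries returning errors:", error_count)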

  3. Paul Edmondson posted 3 years ago

    We added these and some other pages to the sitemaps to get Google to crawl them and drop them, per advice from Google. The particular page you mentioned now returns a 403 (Forbidden), while other pages under /xml have been noindexed with an X-Robots-Tag header.
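    If you want to verify this for a page on your own sub, the status code and X-Robots-Tag header are easy to check with a few lines of Python; here is a minimal sketch using only the standard library (the URL is a made-up placeholder, not an actual HubPages page):

      # Minimal sketch: print a page's HTTP status and X-Robots-Tag header.
      # The URL below is a placeholder, not a real HubPages page.
      import urllib.error
      import urllib.request

      url = "http://yoursub.hubpages.com/xml/some-page"  # placeholder

      req = urllib.request.Request(url, method="HEAD")
      try:
          with urllib.request.urlopen(req) as resp:
              # A noindexed page would show something like "noindex" here.
              print(resp.status, "X-Robots-Tag:", resp.headers.get("X-Robots-Tag"))
      except urllib.error.HTTPError as err:
          print("HTTP error:", err.code)  # e.g. 403 for the forbidden page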

    They will show up as crawl errors in Webmaster Tools, but they shouldn't hurt your site. We are working on reducing the total number of HubPages pages in Google's index. It takes many months for them to fall out, but in five or six months the total number of pages indexed should be reduced by a significant percentage, including the pages that are indexed URL only. For those not familiar, URL-only pages happen when Google picks up a URL but the robots.txt file prevents Google from crawling the page, so the page is added to the index as URL only.
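    As a quick illustration of the crawl-blocking half of that, Python's standard-library robotparser can show whether a given URL is blocked for Googlebot (both URLs below are placeholders):

      # Minimal sketch: test whether robots.txt blocks Googlebot from a URL,
      # which is the situation that produces "URL only" index entries.
      # Both URLs are placeholders.
      import urllib.robotparser

      rp = urllib.robotparser.RobotFileParser()
      rp.set_url("http://yoursub.hubpages.com/robots.txt")  # placeholder
      rp.read()

      page = "http://yoursub.hubpages.com/xml/some-page"  # placeholder
      # False means crawling is blocked, so Google can only index the bare URL.
      print(rp.can_fetch("Googlebot", page))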

    We are watching this closely, and it isn't anything to worry about if you see 403, 404, or 410 crawl errors. Here is an example of the URLs we would like Google to drop:

    https://www.google.com/search?q=site%3A … mp;bih=906

    1. LCDWriter posted 3 years ago in reply to this

      I can see what you are talking about here. Thanks for the explanation, Paul.

    2. janderson99 posted 3 years ago in reply to this

      Thanks Paul,
      Your reply is very helpful and clarifies things.
      It is an old, old story, but the transition to subs will remain incomplete while the topic sitemaps and all the other site-wide listings remain (latest, hot, etc.).
      Just curious to know why the June 2011 threshold applies for hubs listed in the SERPs with an HP (topic) URL rather than a sub URL, and whether this affects rankings and ratings. Several people (e.g. Marisa W) have reported that older hubs were less affected by the recent Panda squat.
      Cheers and thanks again.

      PS: Does "so the page is added to the index as URL only" mean that the page is not ranked or rated by Google, since the bot does not crawl it? What are the consequences?