Although there are legitimate reasons why you might want to serve different content to different visitors – a conditional redirect that pushes people through to a mobile version of your website when they arrive on a mobile phone is one – cloaking is a technique that has largely been consigned to the depths of the black hat world, and it is very likely to get you banned from Google.

Cloaking can be pretty simple – here’s a quick and dirty piece of PHP that would serve Google with a nice clean piece of content but redirect everyone else to a spam site where I might be flogging viagra:

<?php
$useragent = $_SERVER['HTTP_USER_AGENT'];
// strpos() can return 0 (a match at the start of the string), so test against false explicitly
if (strpos($useragent, "Googlebot") !== false)
{ echo "here is the nice keyword rich code for Mr Google"; }
else
{ header("Location:"); /* spam site URL goes here */ exit; }

Pretty simple and quite elegant, isn’t it?

Now, there are ways and means of making your cloaking more robust. For example, rather than matching a specific user agent, I might use a list of IP addresses that I know are assigned to Google, or alternatively it might make sense to redirect based on browser choice – it’s a safe bet that most Googlers are currently running Chrome.
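To illustrate the IP-based approach, here is a minimal sketch. Rather than hard-coding a list of addresses (Google doesn’t publish a stable one), it uses the reverse-then-forward DNS check that Google itself documents for verifying Googlebot; the function name and the structure are my own illustration, not a production implementation.

```php
<?php
// Sketch: detect Googlebot by IP using a reverse DNS lookup,
// then a forward lookup to confirm the hostname wasn't spoofed.
function isGooglebotIp(string $ip): bool
{
    if (filter_var($ip, FILTER_VALIDATE_IP) === false) {
        return false;
    }
    // Reverse lookup, e.g. crawl-66-249-66-1.googlebot.com
    $host = gethostbyaddr($ip);
    if ($host === false || !preg_match('/\.(googlebot|google)\.com$/', $host)) {
        return false;
    }
    // Forward-confirm: the hostname must resolve back to the same IP
    return gethostbyname($host) === $ip;
}

if (isGooglebotIp($_SERVER['REMOTE_ADDR'] ?? '')) {
    echo "here is the nice keyword rich code for Mr Google";
} else {
    echo "everyone else sees this";
}
```

The forward-confirmation step matters: reverse DNS alone is trivially spoofable by anyone who controls their own PTR records.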

To get around the fact that well-implemented cloaking is comparatively hard to spot, Google employ an army of quality raters who surf the web checking the integrity of the search results.  Their handbook has been “accidentally” released to the public a few times over the years, and you can even apply to join the programme.

The number of people who work as quality raters is not disclosed; however, estimates put it in excess of 10,000 worldwide.  According to PotPieGirl’s post, they are expected to work around 20 hours per week and assess around 30 sites per hour.  That means they will get through more than 300 million assessments per year.
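As a back-of-the-envelope check, the figure works out from the estimates above (which are the post’s assumptions, not official numbers):

```php
<?php
// 10,000 raters x 20 hours/week x 30 sites/hour x 52 weeks
$raters       = 10000;
$hoursPerWeek = 20;
$sitesPerHour = 30;
$weeksPerYear = 52;
echo $raters * $hoursPerWeek * $sitesPerHour * $weeksPerYear; // 312000000
```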

That’s quite a lot.

So, why does cloaking still work?

Well, it doesn’t, really.  Not all of those 300 million assessments happen immediately, or at the same time.  There will always be a gap between content going live, getting crawled by the spiders, and starting to rank.  If the cloaker is also doing some mass automated link building with Xrumer or something similar, then the chances are that they’ll rank quickly, and that’s the key.  Provided you can stay one step ahead of the game, you win.

However – and this is a big however…

While it seems like there are a lot of Google raters, there aren’t many compared to the number of Google users.  Increasingly, we are all Google raters: so much user data is being collected that we are all beginning to assess Google’s results (and those of other search engines) all the time.

While, at the moment, otherwise undetectable problems need to be addressed via manual intervention, in a world where every user passively curates the search results as they interact with them, the quality rater becomes redundant, and the number of visitors who can be affected by any individual piece of cloaking decreases.

There will come a point at which the number of people who can be effectively cloaked to by each instance of a script falls to one.  That’s bad news for the current wave of black hatters, but it won’t be the end of it.  If anything, with such a low cost of entry to the web, we’ll see an explosion of hacked sites and other techniques arise.


One Response to Cloaking still works … Why

  1. Dan Clarke says:

    It’s a shame, for want of a better word, that the Googlebot is getting better at reading JS and AJAX queries. They can now parse data that is hauled over an Ajax call, which is new. But there are always new ways to cloak content/hide content/fake content, you just have to think outside of the box {^^,}
