How to reverse engineer Google algorithms – from Domain-b magazine



12 August 2011

Don’t try to be smarter than Google, or Bing, or Baidu, or Yahoo – it is a waste of time and money, says Amy Munice, president of B2B consulting firm ALM Communications

Please excuse the trick title – but the answer is that it is both impossible and a waste of time and money to try to reverse engineer Google algorithms, or any search engine algorithms, for that matter.

I have distilled the instructions of more search engine optimisation (SEO) consultants than I dare to count, and in one way or another they typically suggest that you try to do just that – reverse engineer algorithms. Following such SEO instructions may appear to help 50 per cent of websites, and leave the other half at the bottom of the heap – or, more accurately, the curve.
It’s impossible to reverse engineer search algorithms mainly because the worldwide web is a constantly changing entity. Every time someone types a query into the search bar, lands somewhere, stays there, and then moves on somewhere else, that activity adds to the growing database that ‘web crawlers’ mine for insights into how people think and search.

These web crawlers are trying to get smarter at natural language processing, a sibling of computational linguistics, and they do get smarter, every day – and not just when Google or another search engine company makes major announcements on things like the Panda algorithm.

Perhaps in the real world these web crawler “smarts” are increasingly dumb and blind to global business-to-business (B2B) companies’ online marketing efforts, but that’s another story.

If your company is presented with a static list of target “phrase depth” or “title counts” or other on-page or off-page factors – a search engine optimisation to-do list purported to affect every site on the web the same way – you need to know that this list is pure fiction.

First, it is fiction because the web is never static. Even if there were some magic formula for website optimisation yesterday, it would not fit the web today, because today’s web is different from yesterday’s, and tomorrow’s web will be different again.

Second, a static, one-size-fits-all SEO to-do list is fiction because every page and every site sits in its own unique competitive landscape on the web. What works for Tata Steel in terms of website optimisation is very different from what will work for Bharti Airtel, because each occupies its own corner of the web.

Third, even in the very small corner of the web that a ‘giant’ like Tata Steel occupies, the competitive landscape is in constant flux. And because so many factors are at play – Google, for example, has said it considers some 200 ranking factors – affecting how one or another competitor for a certain keyword shows up in any particular person’s search, the competitor rankings are not linear.

Fourth (and perhaps most germane to why static lists of SEO to-do tasks seem to work about half of the time), on the web – and in the corner of the web where your pages compete – you are always graded on a curve. How “good” does your site need to be? It only needs to be better than the other pages and sites you are competing against.

Do inbound links matter? In reality, to some web pages they matter not a whit. To others, all other SEO factors pale in comparison to the number of inbound links, the “authority” of these inbound links, the title of these links, etc.

So why do SEO consultants give you the same set of instructions that they give to all their other clients? This is rarely because the person giving you the list is trying to trick you. Rather, it is a reflection of how out-of-date and out of touch with reality many ubiquitous SEO notions are.

Take a look at the websites of some of the biggest companies in India (or any country) and what you find may surprise you, if you are up-to-date on how search engines really work.

As an example, consider “keyword metatags”. Pick, say, 10 companies for a start. Using the Mozilla Firefox “View” menu and selecting “Page Source”, you can take a quick look at the code for the web page you are visiting. Look at the homepages of the 10 companies you selected, and see what each of them lists in its keyword metatags.
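If you want to check more than a handful of homepages, the same inspection is easy to script. Below is a minimal sketch, assuming Python and nothing beyond its standard library; the example.com URL is a hypothetical placeholder for the 10 homepages you picked.

```python
# Minimal sketch (an illustration, not the article's method) of the
# keyword-metatag check described above. Standard library only.
from html.parser import HTMLParser
from urllib.request import Request, urlopen


class MetaKeywordsParser(HTMLParser):
    """Collects the content of any <meta name="keywords" ...> tag."""

    def __init__(self):
        super().__init__()
        self.keywords = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "keywords":
                self.keywords = attrs.get("content") or ""


def keyword_metatag(url):
    """Return a page's keyword metatag content, or None if it has none."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    parser = MetaKeywordsParser()
    parser.feed(html)
    return parser.keywords


if __name__ == "__main__":
    # Hypothetical placeholder: substitute the homepages you picked.
    for url in ["https://www.example.com"]:
        print(url, "->", keyword_metatag(url))
```

Running this over your list prints each homepage alongside whatever keywords it declares – often a surprisingly candid summary of that company’s SEO targets.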

Then, read up on the announcement that Google made in 2009 indicating that keyword metatags did not affect search – http://www.mattcutts.com/blog/keywords-meta-tag-in-web-search/.

Since keyword metatags do not count towards making your site visible on search engines, but they do supply competitors with quick information about your website optimisation efforts, the wiser course may be simply to take keyword metatags off your site.

In fact, your page may actually be downgraded if and when you include lists of keywords that have nothing to do with the verbiage on that page. In my experience, if I pick 10 pages at random today, a majority are likely to make this keyword metatag mistake – a good example of how out-of-date SEO practices are. A rough way to spot the mistake automatically is sketched below.
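Here is a rough sketch of that check – again a Python illustration of my own, not anything from an SEO toolkit. It compares the keywords a page declares against the visible text the page actually contains, using a crude substring match that is nevertheless good enough to flag the worst offenders.

```python
# Rough sketch (an assumption, not the article's method) of spotting
# declared keywords that never appear in the page's own text.
from html.parser import HTMLParser
from urllib.request import Request, urlopen


class PageScanner(HTMLParser):
    """Collects the keyword metatag and the page's visible text."""

    def __init__(self):
        super().__init__()
        self.keywords = ""
        self.text_parts = []
        self._skip = 0  # depth inside <script>/<style>, whose text is invisible

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "keywords":
                self.keywords = attrs.get("content") or ""

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.text_parts.append(data)


def unrelated_keywords(url):
    """Return declared keywords that never occur in the page's visible text."""
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    scanner = PageScanner()
    scanner.feed(html)
    text = " ".join(scanner.text_parts).lower()
    declared = [k.strip().lower() for k in scanner.keywords.split(",") if k.strip()]
    return [k for k in declared if k not in text]


if __name__ == "__main__":
    # Hypothetical placeholder URL; any non-empty result is a red flag.
    print(unrelated_keywords("https://www.example.com"))
```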

The bottom line is: don’t try to be smarter than Google, or Bing, or Baidu, or Yahoo. If you waste your time and money trying to reverse engineer search engine algorithms, the bigger cost is that you are not paying attention to putting the kind of keyword-rich B2B content – on and off your site – that makes the difference.


