The two sides to Search Engine Optimisation

There are two sides to Search Engine Optimisation - and Google got both of them right when they launched.

Firstly, they needed to know about as many websites as possible.

For this, they built a piece of software called a "crawler".

It loads a web page, analyses it to work out what the page is about and which words it contains, then stores that information in a big index. The crawler then finds the links within that page and "crawls" those other pages as well.
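
Here's a rough sketch of that idea in Python (standard library only, and nothing like Google's real crawler - no robots.txt, no rate limiting, no JavaScript rendering): fetch a page, pull out its words and links, store the words in an index, then queue up the links to visit next.

```python
# A minimal crawler sketch - illustrative only, not how Google does it.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class PageParser(HTMLParser):
    """Collects the words and the outgoing links found on one page."""

    def __init__(self):
        super().__init__()
        self.words = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.split())


def crawl(start_url, max_pages=10):
    """Breadth-first crawl: fetch a page, index its words, queue its links."""
    index = {}                      # url -> list of words found on that page
    queue = deque([start_url])
    seen = {start_url}

    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue                # skip pages that fail to load

        parser = PageParser()
        parser.feed(html)
        index[url] = parser.words   # store the page's words in the index

        for link in parser.links:   # then crawl the pages it links to
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

    return index
```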

Secondly, they needed to understand which were the best websites to show when someone was searching. If you type in "Adidas trainers", which site should appear first? Which sites should appear on page one? Which sites should appear on page six-hundred-and-sixty-three?

For this, they used an algorithm called PageRank. The crawler already knows which pages link to which other pages. PageRank (very simplified) counted the links pointing at each page and each site, and assumed that if lots of people link to a page, it must be worth seeing.
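
To make the "count the links" idea concrete, here's a toy example with made-up data. Real PageRank is iterative - a link from a highly ranked page counts for more than a link from an obscure one - but the core intuition is just this.

```python
# A toy version of "count the links", with hypothetical example data.
from collections import Counter

# links[page] = pages that this page links out to (made-up data)
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
    "d.com": ["c.com"],
}

# Count how many pages link *to* each page.
inbound = Counter(target for targets in links.values() for target in targets)

# Rank pages by inbound links: more links in, higher up the results.
ranking = sorted(inbound, key=inbound.get, reverse=True)
print(ranking)   # c.com ranks first - it has three inbound links
```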

This worked incredibly well.

In 1998.

But it had unexpected consequences.

Take action: Think about how easy your website is for a crawler to understand ... is all your content hidden in images, or in other formats that are hard for a computer to read? Or is it all nicely structured text?
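
If you want a quick, rough check, a script like this (standard library only, the URL is purely illustrative) shows roughly what a crawler sees: how much readable text the page exposes versus how many images it relies on, and whether those images at least have alt text.

```python
# A rough check of "what does a crawler see on this page?" - a sketch, not a tool.
from html.parser import HTMLParser
from urllib.request import urlopen


class VisibilityChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text_chars = 0
        self.image_count = 0
        self.images_with_alt = 0
        self.in_ignored = False     # inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.in_ignored = True
        elif tag == "img":
            self.image_count += 1
            if any(name == "alt" and value for name, value in attrs):
                self.images_with_alt += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.in_ignored = False

    def handle_data(self, data):
        if not self.in_ignored:
            self.text_chars += len(data.strip())


# Swap in your own site's URL here.
html = urlopen("https://example.com", timeout=5).read().decode("utf-8", errors="ignore")
checker = VisibilityChecker()
checker.feed(html)
print(f"Readable text: {checker.text_chars} characters")
print(f"Images: {checker.image_count} ({checker.images_with_alt} with alt text)")
```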
