Distributed computing moves from searching for aliens to searching the Internet

Grub is a cool new project that relies on you and me taking part in keeping the search engines updated.

The project is based on open source software and is in early beta – with clients available for a few of the most common Linux distributions and for Windows, plus source code ready to be compiled on any *nix system.

So why would you want to spend your CPU and bandwidth helping someone build a fresher web index? Well, when you run a Grub client, your own websites get indexed into the database on every scan you perform – so you can be certain that your part of the web always stays freshly updated.
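The core idea – a client repeatedly scans pages and reports which ones changed so the central index stays fresh – can be sketched roughly as below. This is a minimal illustration, not Grub's actual protocol; the function names and the change-detection-by-hash approach are my own assumptions.

```python
import hashlib

def page_fingerprint(content: bytes) -> str:
    # Hash the page body so a client can cheaply detect changes
    # without shipping the whole page back to the central server.
    return hashlib.sha256(content).hexdigest()

def crawl_once(url: str, content: bytes, last_seen: dict) -> bool:
    # One scan of one page: compare against the fingerprint from the
    # previous crawl. A real client would fetch `content` over HTTP and
    # report any change back to the central index (hypothetical flow).
    digest = page_fingerprint(content)
    changed = last_seen.get(url) != digest
    last_seen[url] = digest
    return changed

# A never-seen page counts as changed; an unmodified page does not.
seen = {}
crawl_once("http://example.com/", b"version 1", seen)   # True (new page)
crawl_once("http://example.com/", b"version 1", seen)   # False (unchanged)
crawl_once("http://example.com/", b"version 2", seen)   # True (updated)
```

Since every volunteer client only ships small fingerprints and deltas rather than full pages, the bandwidth cost per participant stays low while the shared index stays current.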

They don’t have any deals with the major search engines yet – but if everyone comes on board, that may well happen. I am also looking forward to seeing some cool statistics on when people update their websites :-)

Grub has the clients here, and the FAQ there.
