I just read this article on CNET, http://news.cnet.com/8301-10784_3-99...l?tag=nefd.top, and it's quite insane....
Google has over 10,000 servers, but look at this:
In each cluster's first year, it's typical that 1,000 individual machine failures will occur; thousands of hard drive failures will occur; one power distribution unit will fail, bringing down 500 to 1,000 machines for about 6 hours; 20 racks will fail, each time causing 40 to 80 machines to vanish from the network; 5 racks will "go wonky," with half their network packets missing in action; and the cluster will have to be rewired once, affecting 5 percent of the machines at any given moment over a 2-day span, Dean said. And there's about a 50 percent chance that the cluster will overheat, taking down most of the servers in less than 5 minutes and taking 1 to 2 days to recover.

It's crazy that things like this still happen at such a large scale in this day and age.
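Out of curiosity, here's a quick back-of-envelope sketch in Python of what those failure numbers add up to in lost machine-hours per cluster-year. The cluster size and the repair times for individual machine and rack failures aren't given in the quote, so those are my own guesses for illustration; only the event counts and the PDU/overheat figures come from Dean's numbers.

# Rough estimate of machine-hours lost per cluster-year from the quoted figures.
# Values marked "assumed" are NOT in the article; they're placeholders.

CLUSTER_SIZE = 1800  # assumed number of machines in one cluster

events = [
    # (description, machines affected, hours down, occurrences per year)
    ("individual machine failures", 1, 6, 1000),                      # 6 h per repair: assumed
    ("PDU failure", 750, 6, 1),                                       # 500-1000 machines, ~6 h (quoted)
    ("rack failures", 60, 6, 20),                                     # 40-80 machines each; 6 h assumed
    ("cluster overheat (50% chance)", 0.75 * CLUSTER_SIZE, 36, 0.5),  # most servers, 1-2 days to recover
]

total = 0.0
for name, machines, hours, per_year in events:
    lost = machines * hours * per_year
    total += lost
    print(f"{name:35s} ~{lost:>10,.0f} machine-hours/year")

availability = 1 - total / (CLUSTER_SIZE * 24 * 365)
print(f"\nTotal: ~{total:,.0f} machine-hours lost per cluster per year")
print(f"Rough per-machine availability: ~{availability:.2%}")

Even with my made-up repair times, it works out to tens of thousands of machine-hours of downtime a year per cluster, yet the per-machine availability still lands somewhere around 99.7 percent, which is presumably why they design the software to expect failure rather than trying to prevent it.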
Here's an image:
http://i.i.com.com/cnwk.1d/i/bto/200...an_400x318.jpg
Notice the fan they're using to cool the servers, haha.
Pretty interesting stuff, although the article's far too long and just bores me.