Yesterday, Facebook went down for about 2.5 hours. Thousands of sites across the web, seemingly unconnected to Facebook, went down with it.
Facebook is relied upon by many thousands of sites across the Internet, making it a single point of failure for a truly astounding portion of the web.
Is that really a good idea?
The Internet was created to be a reliable network that routes around failures. This philosophy was baked into the Internet Protocol, into how the backbone is designed, how companies set up servers in redundant configurations, and how the fundamental protocols work. For example, consider email. If the gmail.com server goes down, only its users are affected; if I’m emailing my friend @isobar.com from my @integralblue.com address, there is absolutely no impact to me.
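That failure isolation can be sketched as a toy model (the domains are the ones from the example above; the `servers` table and `deliver` function are illustrative, not real mail infrastructure):

```python
# Toy model of federated email: each domain runs its own server,
# so an outage at one domain cannot affect mail between others.
servers = {
    "gmail.com": False,         # gmail.com is down in this scenario
    "isobar.com": True,
    "integralblue.com": True,
}

def deliver(sender: str, recipient: str) -> bool:
    """Delivery depends only on the recipient's own domain being up."""
    domain = recipient.split("@")[1]
    return servers[domain]

# gmail.com being down only affects mail addressed *to* gmail.com...
assert deliver("me@integralblue.com", "friend@gmail.com") is False
# ...while mail between any other pair of domains is unaffected.
assert deliver("me@integralblue.com", "friend@isobar.com") is True
```

Contrast this with a site that embeds a Facebook widget: when Facebook goes down, every such site is degraded at once, regardless of whose server the page lives on.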
However, with the rise of Facebook, Twitter, and Google, a few critically important points are appearing in the network, and when they fail, they wreak havoc. Perhaps it’s time to start thinking about how we’re gradually eliminating the reliability and redundancy that has served the Internet so well for so long, and to start moving back toward those founding Internet principles.