Some time back, I blogged about using BIND’s DNS Views to distribute traffic and then dabbled with multiple DNS “A” records to do round-robin load balancing for my websites. It seemed to work ok, but I wanted greater control and scalability. Also, a round-robin setup would have opened up another can of worms for any truly dynamic sites that required load balancing.
So the next step was setting up something much more sophisticated. There were 2 web servers to play with, one on the east coast of the US and the other on the west coast, and quite a few hurdles, such as BGP and the basic structure of the Internet.
The theory was that I’d split the Internet’s IP space into 2 groups (using /8 network addresses), depending on which had better connectivity to each of my servers. Network response time to each server naturally varies from client to client, so I could distribute traffic between the two while reducing latency at the same time. I believe that a number of very large sites, such as Google and Hotmail, use this method (in addition to methods out of my reach, such as anycasting) to direct users to the closest server.
Deciding which /8 to put in which group was a difficult task, since any given network of that size can be further divided among operators in different parts of the world. However, since the primary goal was to divide traffic, pinpoint accuracy in response times wasn’t really a priority. I grabbed a list of IP addresses that had been accessing my websites and pinged and traced routes to them from both of the servers.
After some consolidation of the results, I had 2 small lists of /8s, each supposedly closer (in network terms) to the corresponding server. I split up the rest of the IP space with the help of this IANA address space document and the above map of the Internet’s main registries (I searched hard, but couldn’t find a more useful one). This may not be the best approach, but it’s better than randomly grouping a bunch of numbers.
Once the lists were compiled, the rest was just a matter of setting up the two ACLs below in named.conf (trimmed here) and creating a view for each of them.
The ACLs:
acl eastcoast {
    12.0.0.0/8;
    59.0.0.0/8;
    68.0.0.0/8;
    .......
    213.0.0.0/8;
    217.0.0.0/8;
};

acl westcoast {
    24.0.0.0/8;
    38.0.0.0/8;
    60.0.0.0/8;
    63.0.0.0/8;
    ......
    218.0.0.0/8;
    219.0.0.0/8;
    220.0.0.0/8;
    221.0.0.0/8;
    222.0.0.0/8;
};
The View statements:
view "eastcoast" { match-clients { eastslaves; eastcoast; }; include "/etc/bind/named.root.hints"; include "/etc/bind/named.eastcoast.zones"; }; view "westcoast" { match-clients { westslaves; westcoast; }; include "/etc/bind/named.root.hints"; include "/etc/bind/named.westcoast.zones"; };
The result? Average load on the two servers has dropped quite a lot due to the distribution and things appear to be working as expected. I’d love to implement this on a bigger network of servers with mirrors across the globe. That would really be something.
2 thoughts on “Region based load distribution”
Very interesting and useful info. Thanks for sharing. Are westslaves and eastslaves also ACLs?
Yes, they’re ACLs in the above example, but they could just as well be IP addresses (that’s the way I’ve done it).
One drawback with current BIND is that the slave DNS server needs a separate IP address for each view you define; hence the separate slave ACLs. Otherwise, the slave would only receive one view’s zones during transfers.
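To illustrate that last point, the slave ACLs might look something like this (the addresses are hypothetical, and this is only a sketch of the idea, not my exact config). The master decides which view to serve a zone transfer from by matching the slave’s source address, so the slave sends transfer requests for each view from a distinct address:

// On the master: each slave address in its own ACL
acl eastslaves { 192.0.2.10; };
acl westslaves { 192.0.2.11; };

// On the slave: each view requests transfers from the matching address,
// so the master can tell the two apart and hand over the right zones
view "eastcoast" {
    match-clients { eastcoast; };
    transfer-source 192.0.2.10;
    include "/etc/bind/named.eastcoast.zones";
};

view "westcoast" {
    match-clients { westcoast; };
    transfer-source 192.0.2.11;
    include "/etc/bind/named.westcoast.zones";
};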