Replication has its problems, especially if you have a multimaster replication system. To make matters worse, no PHP framework supports multimaster replication or handles master failover. Symfony uses Propel and only supports master-slave replication. When the master fails, it's true that you have the slaves ready to replace it, but the process of detecting the failure and acting upon it requires human intervention. Zend Framework, on the other hand, doesn't support replication at all.
I strongly believe that master failover needs to be handled on the application side. Of course, you can always use an SQL proxy or some other server-side solution, but those tend to be either limited or unreliable.
From Digg’s Blog:
The Digg database access layer is written in PHP and lives at the level of the application server. Basically, when the application decides it needs to do a read query, it passes off the query with a descriptor to a method that grabs a list of servers for the database pool that can satisfy the query, then picks one at random, submits the query, and returns the results to the calling method.
If the server picked won’t respond in a very small amount of time, the code moves on to the next server in the list. So if MySQL is down on a database machine in one of the pools, the end-user of Digg doesn’t notice. This code is extremely robust and well-tested. We worry neither that shutting down MySQL on a read slave in the Digg cluster, nor a failure in alerting on a DB slave that dies will cause site degradation.
Every few months we consider using a SQL proxy to do this database pooling, failover, and balancing, but it’s a tough sell since the code is simple and works extremely well. Furthermore, the load balancing function itself is spread across all our Apaches. Hence there is no “single point of failure” as there would be if we had a single SQL proxy.
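The pattern Digg describes, pick a server at random from the pool and move on to the next one if it fails, can be sketched in PHP along these lines. This is a minimal illustration, not Digg's actual code: the function and parameter names are made up, and the `$connect` callable stands in for a real connection attempt (e.g. `mysqli_real_connect` with a short `MYSQLI_OPT_CONNECT_TIMEOUT`).

```php
<?php
// Sketch of the pick-random-then-fail-over pattern described above.
// All names here are illustrative assumptions, not Digg's code.
function pool_query(array $servers, $sql, callable $connect)
{
    shuffle($servers);                 // spread load randomly across the pool
    foreach ($servers as $server) {
        $conn = $connect($server);     // returns false/null if the server is down
        if (!$conn) {
            continue;                  // dead server: try the next one in the list
        }
        return $conn->query($sql);     // first healthy server satisfies the query
    }
    throw new RuntimeException('No server in the pool could satisfy the query');
}
```

Because the failover loop is ordinary application code, shutting MySQL down on one slave simply makes the loop skip it, which is exactly the behaviour the Digg post describes.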
If you are building your own solution and need a sandbox to test it, I recommend using MySQL Sandbox. Also, you might find this script useful: MySQL Master-Master Replication Manager
7 responses to “The Multimaster Replication Problem”
Wouldn’t it be nice to get a peek at Digg’s code, eh? ;)
I can send you a Digg-like solution if you send me your email address. Can't find one on the blog :/
But if I'm right, Digg's solution is not a failover for when a master goes down. It selects randomly from a pool of (in-sync) slave servers, like the code I have to offer.
That's right, that's what they are doing. It's a very straightforward solution. Solar's database access layer looks good. It's simple and powerful. My e-mail is: fedecarg at gmail dot com. Thanks.
I'm quite disinclined to implement database load-balancing and high-availability in the application layer, and I think it is a sign of poor design. Yes, in the case of a database proxy (e.g. sequoia, pgpool) there appears a single-point-of-failure dilemma (actually, there are high-availability solutions for sequoia, pgpool, etc.), but, IMHO, this problem has a lower priority than the separation-of-concerns principle in such complex application frameworks as you mentioned. Neither should the application be concerned with the availability of the database system, nor should the database system be concerned with the application logic — except for determining access patterns, caching-related studies, etc.
OTOH, consider this situation: in one of your web tiers you find out that database host X is down. How will you propagate this information to the other web tiers? This is not rocket science, but it needs a considerable amount of work in the application code, which I think is quite unnecessary and brings potential critical control-flow bugs with it. (Actually, this problem can be solved using some sort of messaging infrastructure (e.g. AMQP), but that solution is almost identical to using a database proxy, completing the cycle.)
I don't agree with you, and here's why. You are saying that it's a sign of bad design if the application is concerned with the availability of a database. But that's exactly what the application does with the slaves: it checks the availability of a database before moving to the next one. This means the application is ready to handle a slave failover, and that's a sign of good design. Of course, adding support for multiple slaves instead of a single VIP is more work, but like you said, it's not rocket science, and that's why so many applications do it. The concept behind multiple slaves and multiple masters is the same. If you have 2 VIPs in front of 4 master clusters and one of the proxies goes down, your application needs to select a different VIP, otherwise your system becomes unavailable.
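The same idea applied to write endpoints — try the preferred VIP first and fall back to the next one if it's down — might look like the sketch below. The hostnames and the `$isAlive` health check are assumptions made for this example, not part of any framework mentioned above.

```php
<?php
// Illustrative sketch: ordered failover across write endpoints (VIPs).
// Unlike the read pool, writes prefer a fixed order rather than random choice.
function pick_master(array $vips, callable $isAlive)
{
    foreach ($vips as $vip) {      // preferred endpoint first
        if ($isAlive($vip)) {
            return $vip;           // first healthy VIP handles the writes
        }
    }
    throw new RuntimeException('All write endpoints are down');
}
```

The loop is the whole trick: as long as the application knows the full list of VIPs, losing one proxy just means the next iteration picks the surviving one.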
I don't want to sound like an RDBMS zealot, but from the same perspective, replication could (and should?) be integrated into the application too.
I read your reply 3-4 times, and every time sentences like "the application does …", "the application needs to …", etc. caught my attention. That's exactly what I was trying to avoid. The application should just do its application-logic job, just as the database should do its persistence job. High availability, load balancing? Let them live as intermediate layers between those components. Apples in one hand, and oranges in the other. OTOH, I don't claim such a scheme will bring less pain. But considering SoC, this obstacle will be negligible.
But then, based on what you are saying, applications like Torque, Hibernate, Propel and Doctrine violate the SoC principle because they spread work across two or more slave databases, something that a load balancer should be doing?