Database Scaling Methods for SaaS-Based Multi-Tenant Applications
Scalability is a key requirement of SaaS-based applications, since they must support users and data belonging to multiple tenants. The application should also scale to meet future demand as the SaaS provider provisions additional tenants.
SaaS providers are inclined toward a shared database, shared schema strategy to support multiple tenants because of its cost effectiveness. Adopting this approach, however, brings one major challenge: scaling the database, since a single database is shared among all the tenants the SaaS application supports.
SaaS applications adopting the shared database, shared schema approach should be designed with the expectation that the database will need to be scaled once it can no longer meet baseline performance metrics, for example when too many users access the database concurrently, or when the size of the database causes queries and updates to take too long to execute. One way to scale out a shared database is database sharding. Sharding is especially effective in this setting because rows in the shared schema are already differentiated by tenant ID, so the database can be partitioned horizontally on that column; this makes it easy to move each tenant's data to an individual partition. Database sharding provides many advantages, such as faster reads and writes, improved search response, smaller table sizes, and distribution of tables based on need.
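The horizontal partitioning described above can be sketched as a small routing function that maps a tenant ID to a shard. This is a minimal illustration, not a prescribed implementation; the shard names, shard count, and hashing scheme are all assumptions chosen for the example.

```python
import hashlib

# Illustrative shard names; a real deployment would use connection strings.
SHARDS = ["shard_0", "shard_1", "shard_2", "shard_3"]

def shard_for_tenant(tenant_id: str) -> str:
    """Map a tenant ID to a shard using a stable hash.

    All rows for a tenant carry the same tenant ID, so hashing that ID
    keeps each tenant's data together on a single shard, and the same
    tenant always routes to the same shard.
    """
    digest = hashlib.sha256(tenant_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the mapping is deterministic, every query for a given tenant can be sent to exactly one shard, which is what enables the smaller tables and faster reads and writes noted above.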
While partitioning the data of a multi-tenant SaaS application, however, we need to consider factors such as performance degradation caused by a growing number of concurrent users, or database growth caused by provisioning additional tenants, both of which can affect the performance characteristics of the existing database. Weighing these factors helps in selecting an appropriate partitioning technique based on each tenant's database size requirements or on the number of that tenant's users accessing the database concurrently.
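One way to act on per-tenant size and concurrency factors is lookup-based (directory) placement, where heavy tenants receive a dedicated shard and light tenants share one. The sketch below assumes an in-memory directory and illustrative thresholds; in practice a provider would derive these values from its own baseline performance metrics.

```python
from dataclasses import dataclass

@dataclass
class TenantProfile:
    tenant_id: str
    db_size_gb: float            # estimated data volume for the tenant
    peak_concurrent_users: int   # observed or projected concurrency

# Assumed thresholds; a provider would tune these from baseline metrics.
SIZE_THRESHOLD_GB = 100.0
USER_THRESHOLD = 500

directory = {}  # tenant_id -> shard name

def place_tenant(profile: TenantProfile) -> str:
    """Assign a shard based on the tenant's size and concurrency needs."""
    if (profile.db_size_gb > SIZE_THRESHOLD_GB
            or profile.peak_concurrent_users > USER_THRESHOLD):
        shard = f"dedicated_{profile.tenant_id}"   # isolate heavy tenants
    else:
        shard = "shared_small_tenants"             # co-locate light tenants
    directory[profile.tenant_id] = shard
    return shard
```

Compared with pure hash-based routing, a directory adds a lookup step but lets the provider rebalance or isolate individual tenants as their database size or concurrent-user count grows.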