While defining .NET architectures, we often come across scenarios where multiple server-side caching options need to be evaluated and dealt with. With .NET 4.0, it remains to be seen whether these decisions get simplified or further complicated.
Microsoft project code name "Velocity", part of .NET 4.0, is Microsoft's highly scalable in-memory cache framework for all kinds of data in distributed applications. Caching is usually employed in an application to improve performance and scalability. Good candidates for caching are static or master data, e.g. a product list. The recommended practice is to cache master, reference, or lookup data, as it hardly changes over time, but scenarios do exist where transactional or session-specific data needs to be cached for valid reasons.
Velocity provides a framework whereby the identified data can be cached in middle-tier server memory, avoiding the corresponding database round trips. Objects can be added to and retrieved from the cache using simple Put and Get calls.
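As a rough illustration, a client interaction might look like the following sketch. The type names (`CacheFactory`, `Cache`), the `System.Data.Caching` namespace, and the "default" named cache are taken from the Velocity CTP and may change in later releases; `LoadProductListFromDatabase` and `Product` are hypothetical.

```csharp
// Sketch against the Velocity CTP client API; names may change
// in later releases. "default" is an assumed named cache.
using System.Collections.Generic;
using System.Data.Caching;

CacheFactory factory = new CacheFactory();
Cache cache = factory.GetCache("default");

// Put: push the product list into the distributed cache once...
cache.Put("ProductList", LoadProductListFromDatabase());

// Get: ...and serve subsequent reads from middle-tier cache memory,
// avoiding the database round trip.
var products = (List<Product>)cache.Get("ProductList");
```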
Velocity can be hosted either within the application (embedded) or as an independent service. If hosted within the application, the distributed cache shares memory with the application. If hosted as a service, clients access the cache using the client APIs.
To ensure that every request to the cache is fulfilled and that it returns correct results quickly, the distributed cache can be partitioned, clustered, and/or replicated. Nodes can be added to or removed from the cluster dynamically to increase throughput or decrease response time. Velocity performs implicit load balancing: new data is cached on the new node, and existing data is also migrated to it.
Clients can either query the cache host directly, which in turn returns the correct cached object, or locally host the routing table, which keeps track of where the distributed cached objects reside, and query those objects directly.
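The two access styles are chosen in the client configuration. A sketch along the lines of the CTP's `dcacheClient` section is shown below; the element and attribute names follow the CTP schema and may change, and the host names are made up.

```xml
<!-- Hedged sketch of a Velocity client configuration (CTP schema).
     deployment="routing" hosts the routing table locally on the client;
     deployment="simple" forwards every request to a cache host. -->
<dcacheClient deployment="routing">
  <localCache isEnabled="false"/>
  <hosts>
    <host name="CacheServer1" cachePort="22233"
          cacheHostName="DistributedCacheService"/>
    <host name="CacheServer2" cachePort="22233"
          cacheHostName="DistributedCacheService"/>
  </hosts>
</dcacheClient>
```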
To ensure cache consistency, Velocity supports cache notifications as well as optimistic and pessimistic concurrency. Cache expiration is time based, and cache object eviction can kick in based on a least-recently-used (LRU) algorithm or when the application's high-water mark is reached.
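The two concurrency models can be sketched as follows, assuming the CTP method names (a `Get` overload returning a `CacheItemVersion`, and `GetAndLock`/`PutAndUnlock` for locking); exact signatures may differ across releases, and `Product` is a hypothetical cached type.

```csharp
// Hedged sketch of Velocity's concurrency models (CTP method names).

// Optimistic: read the item along with its version...
CacheItemVersion version;
Product p = (Product)cache.Get("Product42", out version);
p.Price = 9.99m;

// ...the Put succeeds only if the version is still current; if another
// client updated the item in between, the caller must re-read and retry.
cache.Put("Product42", p, version);

// Pessimistic: GetAndLock blocks other lock requests on the item
// until PutAndUnlock (or an explicit unlock) releases it.
LockHandle handle;
Product locked = (Product)cache.GetAndLock(
    "Product42", TimeSpan.FromSeconds(30), out handle);
cache.PutAndUnlock("Product42", locked, handle);
```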
Velocity provides a SessionStoreProvider class that plugs into the ASP.NET session-state provider model and stores session state. Using Velocity allows non-sticky session routing and ensures session data is available across the cluster.
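Wiring this up is a web.config change along these lines; the provider registration follows the standard ASP.NET custom session-state provider model, but the fully qualified type name and the `cacheName` attribute are assumptions based on the CTP.

```xml
<!-- Sketch: register Velocity's SessionStoreProvider as a custom
     ASP.NET session-state provider. Type name and cacheName are
     assumed from the CTP and may differ. -->
<sessionState mode="Custom" customProvider="Velocity" timeout="20">
  <providers>
    <add name="Velocity"
         type="System.Data.Caching.SessionStoreProvider"
         cacheName="session"/>
  </providers>
</sessionState>
```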
For application-specific data, the cache can be hosted (embedded) within the application. Data that is required enterprise wide, or that cuts across multiple applications, should be exposed using the cache service. In an enterprise architecture, Velocity can very well sit on top of master data management applications and absorb the frequent database hits from client applications.

Caching options
APIs from the System.Web.Caching namespace can be used only in web applications. The problem with using these APIs directly is that every developer deals with the cache in his or her own creative way, so it is recommended to use the Enterprise Library Caching Application Block to standardize the coding/usage pattern. With 4.0, the new System.Runtime.Caching namespace takes over specifically for in-proc caching; System.Web.Caching remains to support backward compatibility.
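A minimal sketch of the new namespace's MemoryCache, which works in any application type, not just web applications (the key names and data are illustrative):

```csharp
using System;
using System.Runtime.Caching; // new in .NET 4.0; usable outside ASP.NET

ObjectCache cache = MemoryCache.Default;

// Cache a lookup list with a ten-minute absolute expiration.
var policy = new CacheItemPolicy
{
    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
};
cache.Add("CountryList", new[] { "India", "US", "UK" }, policy);

// Get returns null once the item has expired or been evicted,
// so callers fall back to the database on a miss.
var countries = (string[])cache.Get("CountryList");
```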
In-process caching is the fastest of all the caching techniques. The Enterprise Library Caching Application Block using an in-process cache has limitations on the scalability, reliability, and availability fronts: when the ASP.NET worker process gets recycled, the data in the in-memory cache is lost. In a web garden, where multiple worker processes exist, data integrity issues surface because multiple copies of the cache must be kept synchronized. In a web farm scenario, where multiple servers exist, a standalone cache is unaware of the other servers and their respective state, which again leads to data integrity issues. An in-process cache usually cannot grow beyond what the application process can handle, whereas a distributed cache can grow in size through distribution.
The out-of-proc technique using the ASP.NET State Server service on a 32-bit machine can grow up to 2 GB of memory space, or up to a maximum of 3 GB if you use the /3GB switch. As 32-bit servers become legacy and 64-bit becomes mainstream, this memory-space limitation will not hold for long. If run with a single state server, the state service technique has a single point of failure. If the state server is clustered, it has the limitation of data synchronization across the cluster. If the state service or the machine on which it is running goes down, even a clustered state service will not be able to recover the state of the data, e.g. a ready-to-check-out shopping basket. Though the user can continue in the application, and may or may not have to log in again, selecting and adding items into the basket has to be started all over again, ultimately hampering the user experience.
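For reference, the state-server technique is enabled through the standard ASP.NET sessionState configuration element, along these lines (the host name is a placeholder; 42424 is the service's default port):

```xml
<!-- Point session state at the ASP.NET State Server service.
     "stateServerHost" is a placeholder machine name. -->
<sessionState mode="StateServer"
              stateConnectionString="tcpip=stateServerHost:42424"
              timeout="20"/>
```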
The out-of-proc technique using SQL Server resolves the above data integrity and scalability issues to some extent, but the idea behind caching is to save the database round trip, and caching the data in another SQL Server database instance defeats part of that objective. In customer scenarios where SQL Server cannot be used as the caching database, a custom provider for Oracle, or for that matter any other database, can be written, but the database round-trip limitation still holds.
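This option, too, is a sessionState configuration change (the connection string here is a placeholder):

```xml
<!-- Persist session state to a SQL Server instance prepared with
     the ASP.NET session-state schema. Connection string is a placeholder. -->
<sessionState mode="SQLServer"
              sqlConnectionString="Data Source=sqlHost;Integrated Security=SSPI"
              timeout="20"/>
```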
Till date, if an architecture needs to deal with the distributed caching problem, it has to live with the limitations of the out-of-the-box out-of-proc techniques or rely on third-party components like NCache, ScaleOut StateServer, StateMirror, etc. Having out-of-the-box support for a distributed cache in the .NET 4.0 Velocity framework would help deal with such scenarios without additional investments.
In physics, velocity is the rate of change of distance over time in a specific direction. The Microsoft framework Velocity certainly attempts to live up to this definition by making data available to the end user in quick time. :)