Performance Modeling - Implementation Know-Hows
As an extension to my previous blog, titled 'Performance modeling & Workload Modeling - Are they one and the same?', I would like to share a few insights into the implementation know-how of Performance Modeling for IT systems in this post.
Performance modeling for a software system can be implemented in the Design phase and/or in the Test phase, and the objectives in these two phases are slightly different. In the Design phase, the objective is to validate (quantitatively) whether the chosen design and architectural components meet the required SLAs for a given peak load. In the Test phase, on the other hand, the objective is to predict the performance of the system for future anticipated loads and for the production hardware infrastructure.
In either case, performance modeling can be done using 'Analytical Models' or 'Simulation techniques'. In my view, analytical models are easy to use and less expensive (with acceptable deviations) than simulation software, which needs to be either built or purchased from the marketplace. I suggest using Queuing Network Models (QNM), an analytical modeling technique, as they are time-tested and mathematically proven. QNMs are widely used in telecommunications, operations research and traffic engineering, to name a few fields.
Before we dive into the nitty-gritty of QNM, I would like to bring out a simple point: 'a software system is a network of resources/service centers, with each service center associated with a queue'. To model application performance, it is necessary to consider all such resources - CPU, Memory, DISK and Network - at each application layer, i.e. Web/App/DB/EIS.
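To make the 'service center with a queue' idea concrete, here is a minimal sketch of my own (not from the original post) that models a single resource as an M/M/1 queue using the standard operational laws. The function name and the example numbers are illustrative assumptions.

```python
def mm1_metrics(arrival_rate, service_time):
    """Metrics for one service center modeled as an M/M/1 queue.

    arrival_rate : requests per second reaching the resource
    service_time : seconds of service each request needs at the resource
    Returns (utilization, residence_time, queue_length).
    """
    utilization = arrival_rate * service_time            # Utilization Law: U = X * S
    if utilization >= 1.0:
        raise ValueError("resource saturated: arrival rate exceeds capacity")
    residence_time = service_time / (1.0 - utilization)  # M/M/1: R = S / (1 - U)
    queue_length = arrival_rate * residence_time         # Little's Law: N = X * R
    return utilization, residence_time, queue_length

# Illustrative example: a CPU serving 40 req/s, 20 ms of service per request
u, r, n = mm1_metrics(40, 0.020)
```

Note how residence time grows non-linearly as utilization approaches 1 - this queueing effect is exactly what a QNM captures and a simple linear extrapolation misses.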
The following are the primary steps in a Performance Modeling exercise using QNM/analytical models:
1. Identify the various layers of the given application that are significant from a processing standpoint. The exact layers depend on the application domain, its functionality and its end users. For instance,
• The Application Server and Database Server are the significant layers for a retail banking application
• The Web Server is the primary processing layer for a marketing campaign application
2. Identify the compute-intensive resources - CPU, Memory, DISK and Network - within each layer. For example,
• For the database layer, CPU and DISK are the most compute-intensive resources
• Application and Web servers are predominantly CPU & Memory intensive
3. Categorize the application workload as 'Open' or 'Closed' and choose the right QNM model. A system is said to be 'Open' if the incoming requests are unbounded - new requests keep arriving regardless of how many are already in the system. On the other hand, a system is said to be 'Closed' if only a finite user or transaction population is permitted. In general, Batch systems are 'Closed' in nature and OLTP systems are 'Open' in nature.
4. Capture the 'Service Demand' of each resource by running one or two simple-to-moderate load tests. Service Demand is an attribute of a resource and is constant: it does not change with different concurrent user/transaction loads.
5. Using QNM, one can predict the performance of a software system by changing the inputs. Below are the typical inputs and outputs of a Performance Model:
a. Inputs - Arrival rate of requests (user/transaction load, incoming messages etc.) and the service demands captured in step 4
b. Outputs - Residence time, wait time and utilization of each resource
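The steps above can be sketched end to end in a few lines. The following is my own illustration (not from the original post) of steps 4 and 5 for an open workload: derive each resource's service demand from one load-test measurement using the Service Demand Law (D = U / X), then predict utilization and residence times at a higher target arrival rate, treating each resource as an independent queue. The resource names and the measured numbers are hypothetical.

```python
def service_demand(measured_utilization, measured_throughput):
    """Service Demand Law: D = U / X (seconds of work per request)."""
    return measured_utilization / measured_throughput

def predict_open_qnm(demands, arrival_rate):
    """Open-model prediction: per resource, U = lambda * D and R = D / (1 - U).

    demands      : dict of resource name -> service demand (seconds/request)
    arrival_rate : target arrival rate (requests per second)
    Returns per-resource utilization and residence time, plus total response
    time (sum of residence times, assuming no think time).
    """
    results = {}
    for resource, d in demands.items():
        u = arrival_rate * d
        if u >= 1.0:
            raise ValueError(f"{resource} saturates at this arrival rate")
        results[resource] = {"utilization": u, "residence_time": d / (1.0 - u)}
    total = sum(v["residence_time"] for v in results.values())
    results["response_time"] = total
    return results

# Hypothetical load test: 50 req/s observed with 60% app-CPU and 30% DB-disk use
demands = {
    "app_cpu": service_demand(0.60, 50.0),  # 12 ms of CPU work per request
    "db_disk": service_demand(0.30, 50.0),  # 6 ms of disk work per request
}
# Predict behaviour at a future anticipated peak of 70 req/s
prediction = predict_open_qnm(demands, 70.0)
```

Because service demands are constant attributes of each resource, the same `demands` dictionary can be reused to evaluate any anticipated arrival rate, which is precisely what makes the model useful for capacity questions.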
In both the Design & Test phases, the above steps are common, but with one difference. In the Design phase, one does not have the luxury of building the entire system with all the external systems and third-party interfaces. Hence, it is suggested to develop a prototype or Proof-of-Concept (POC) of a few use cases that represent the major critical end-user actions. In the Test phase, however, data can be captured from a completely developed system, making the results more accurate. The ideal approach is to implement performance modeling in the Design phase, and fine-tune it with more accurate data points in the Test phase and thereafter.
To conclude, how performance modeling is implemented depends on the SDLC phase, the objective and the complexity of the application under consideration. In my next blog, I will share the implementation challenges and the benefits of performance modeling.