The Infosys Labs research blog tracks trends in technology with a focus on applied research in Information and Communication Technology (ICT)

Blunders in Performance life cycle within SDLC

Performance considerations are embedded into various stages of the SDLC, typically subject to the perceptions of everyone from architects to developers to deployment teams. Though "in principle" everybody is aware of the importance of performance, the actual implementation of the "ideal practices" depends on the availability of time and expertise. In practice it swings from strict implementation without weighing the criticality, to sheer ignorance, to postponement owing to constraints at various stages. Below are some points to consider while making these trade-offs across the development-to-deployment cycle. While most of them may appear "obvious", they are invariably the ones most often slipped !

(i)            Over-Planning in design phase

Extensive preparation and excessive attention to performance aspects in the planning and design phases often turn out to be less productive than expected. During this phase one needs to balance the amount of effort spent against its proportional impact on performance - very likely, a large amount of time spent has an infinitesimal effect on actual performance, since everything is still "on paper".

Over-planning typically leads to over-confidence : when you are sure that you have not left any hole in your design, it becomes hard to figure out where to start when a performance issue occurs. Projects managed with a waterfall approach - thorough designs integrated with modeling tools - foster the perception of being bullet-proof. This perception then spreads from architects and project managers to the developers and QA teams. It turns into a case of missing the forest for the trees.

(ii)          Under-Planning for Performance troubleshooting in the development stage

It is imperative to use a logging API to help locate performance degradations. Ensure that logging points are placed in line with the application flow and that they leave a clear trail, rather than a scatter and clutter of the code execution. Logging APIs invariably include a check for logging levels. While this gives developers the freedom to include logging statements generously during development, their purpose and positions should not be restricted to "functional" aspects only, which is typically the case. Every call to an external and/or downstream system must have an appropriate log statement. Any internal algorithm (a single method or a group of methods) that is likely to take longer than a few milliseconds should log at its beginning, at its end, and around any significant calls made during its execution. Most logging APIs have a configuration where log entries include the class and a timestamp - thus it is not required to create timers to quantify the length of a call.
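The placement described above can be sketched with Python's standard `logging` module (the post names no particular logging API; `fetch_quote` and the `pricing.PricingClient` logger name are hypothetical, used only for illustration):

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    # Most logging APIs can stamp each entry with its origin and a timestamp,
    # so no explicit timer is needed to quantify the length of a call.
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
log = logging.getLogger("pricing.PricingClient")

def fetch_quote(symbol):
    # Log at the start of every call to a downstream system...
    log.debug("fetch_quote(%s): calling pricing service", symbol)
    price = 101.5  # stand-in for the actual remote call
    # ...and at the end: the gap between the two timestamps measures the call.
    log.debug("fetch_quote(%s): pricing service returned %s", symbol, price)
    return price
```

Because the level check sits inside the API, these statements can stay in production code at DEBUG and be switched off by configuration, leaving the trail available whenever a degradation needs to be traced.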

All exceptions must be logged, and logged irrespective of the logging levels. This imposes a "restriction" on coding: exceptions should be used only for exceptions! An exception should not be used as a return value when the condition can be anticipated - in other words, a "catch block" should never contain business logic. Otherwise, an unnecessarily large amount of time must be spent tracking down performance issues.
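A minimal sketch of the distinction, with hypothetical names (`lookup_discount`, a `rates` dictionary) not taken from the post:

```python
# Anti-pattern: the "unknown customer" case is fully anticipated, yet it is
# handled via an exception, so business logic ends up in the catch block and
# the always-logged exception path fires during normal operation.
def lookup_discount_bad(rates, customer):
    try:
        return rates[customer]
    except KeyError:
        return 0.0  # business logic hiding in the catch block

# Better: treat the anticipated case as ordinary control flow, so that
# exceptions (which are logged unconditionally) signal only genuine faults.
def lookup_discount(rates, customer):
    if customer in rates:
        return rates[customer]
    return 0.0
```

With the second version, any exception that does appear in the logs is a real anomaly worth investigating, rather than noise from an expected branch.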

(iii)         Passing the ball around when performance issues crop up post-deployment

It's human to believe that everything one does is so well done that the problem must be somewhere else ! Make sure that when problems surface, the investigation starts with an evaluation of one's own area. Simply passing the ball from one court to the other does not lead to a solution, especially when multiple sub-teams / streams are involved. Projects typically have one team working on the web application and other teams developing service-layer APIs, and whichever team discovers a performance issue will invariably contact the other team and demand they fix the problem. What helps instead is estimating the cause by going through your own area first and making sure it really is "elsewhere". If it is a simple fix, it takes far less time to fix it than to pass it off to someone else. For complex issues, working cohesively leads to a productive solution.
