The Infosys Labs research blog tracks trends in technology with a focus on applied research in Information and Communication Technology (ICT)


Identifying Network Latency is the Key to Improving the Accuracy of System Performance Models

The network plays an important role in defining the user experience for a distributed application accessible over the internet. The majority of effort is focused on improving response times at the server; however, the time it takes for the response to reach the client cannot be neglected. Network latency is a measure of the time delay observed when a packet of data is transmitted from one designated point to another. Some usages define network latency as the time a packet spends on a complete round trip, i.e., from source to destination and back.


In an ideal network, data would be transmitted instantly from one point to another (that is, without any delay at all). In practice, however, different elements each introduce their own delay and together contribute to the overall network delay. The following are the key factors:

  • Network Interface Delays: The time the endpoints of the transfer, sender or receiver, take to convert the data to or from the physical transmission medium.
  • Network Element Delays: The delay caused by activities performed along the path by network elements such as routers, switches, or gateways. These activities can be any of the following:
    • Processing: The time these elements spend processing a received packet to determine what action needs to be taken
    • Forwarding: The time routers and switches spend switching/forwarding the data to its designated destination
    • Queuing: The time a packet spends waiting at routers and switches before it can be forwarded to the destination. (Queuing happens because a router or switch can forward only one packet at a time on a given outbound link.)
  • Network Propagation Delay: The time the signal spends traveling through the physical transmission medium.
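The factors above can be sketched as a simple sum. This is a minimal illustration, not a measurement tool: all figures (packet size, link speed, hop counts, per-hop delays, distance, signal speed) are assumed values chosen for the example.

```python
def one_way_latency_ms(packet_bytes, link_mbps, hops,
                       per_hop_processing_ms, per_hop_queuing_ms,
                       distance_km, signal_speed_km_per_ms=200.0):
    """Estimate one-way latency in milliseconds as a sum of delay components."""
    # Network interface delay: time to serialize the packet onto the wire.
    transmission_ms = (packet_bytes * 8) / (link_mbps * 1000.0)
    # Network element delays: processing + queuing at each router/switch hop.
    element_ms = hops * (per_hop_processing_ms + per_hop_queuing_ms)
    # Propagation delay: travel time through the physical medium
    # (roughly 200,000 km/s in fiber, i.e. about 200 km per millisecond).
    propagation_ms = distance_km / signal_speed_km_per_ms
    return transmission_ms + element_ms + propagation_ms

# Illustrative example: a 1500-byte packet over a 100 Mbps link,
# crossing 10 hops and 2000 km of fiber.
latency = one_way_latency_ms(1500, 100, 10, 0.05, 0.2, 2000)
```

Note how, for such a long-distance path, propagation dominates: no amount of extra bandwidth removes the distance-driven part of the delay.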

So a measurable amount of time is spent transferring data from source to destination. Given the quick responses expected from server machines today, even the slightest delay caused by a high-latency network can significantly degrade the overall application experience for the user.


Moreover, in any distributed application environment, network links also exist between the tiers, for example the web, application, and database tiers. Together, these latencies form a significant part of the overall transaction response time observed at the client side.


Network latency (NTime), along with the server processing time (Proc), forms a significant part of the overall response time observed at the client end.
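This decomposition can be sketched for a three-tier deployment. The tier names and all numbers below are illustrative assumptions, not figures from the post: per-tier processing times are the kind of values measurable during testing, while the per-link latencies are exactly the values that are hard to obtain.

```python
# Per-tier server processing times (ms), measurable in the testing phase.
proc_ms = {"web": 15.0, "app": 40.0, "db": 25.0}

# Per-link one-way network latencies (ms): client<->web, web<->app, app<->db.
ntime_ms = {"client-web": 30.0, "web-app": 2.0, "app-db": 1.5}

total_proc = sum(proc_ms.values())          # Proc: total server processing
total_ntime = 2 * sum(ntime_ms.values())    # NTime: each link traversed twice
response_time = total_proc + total_ntime    # client-observed response time
```

With these assumed numbers, NTime (67 ms) is comparable to Proc (80 ms): ignoring it, or treating it as a fixed constant, would misstate nearly half the response time the client actually observes.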

Performance engineers aim to include in the system performance model every component that contributes to response times, and thus to server utilizations and transaction throughputs. However, one tends to model only the components whose resource demands are known. So a web server, an application server, or a database server processing a task can easily be modeled, since the service demand values for those servers can be measured during the testing phase.

Network latency, however, remains comparatively complex to calculate from test results alone. It can be calculated from production data, but that requires additional monitoring data, which tends to delay the modeling exercise further; and if such data cannot be produced, it also demands additional investment in a monitoring setup. Hence the network latency factor tends to be neglected, or assumed to be a constant delay, adding to the inaccuracy of the model.
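One comparatively cheap way to estimate per-tier latency without a full monitoring setup is to time TCP connection establishment (the connect handshake) against each tier. The sketch below is a hedged illustration: it spins up a throwaway listener on the loopback interface purely so the example is self-contained; in practice the target host and port would be the actual server tier, and the observed times would be far larger than loopback's.

```python
import socket
import threading
import time

def measure_connect_rtt_ms(host, port, samples=5):
    """Return the minimum observed TCP connect time in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000.0)
    # The minimum over several samples best approximates pure latency;
    # larger samples also include transient queuing noise.
    return min(timings)

# Self-contained demo target: a throwaway listener on loopback.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(16)
port = server.getsockname()[1]

def accept_loop():
    while True:
        try:
            conn, _ = server.accept()
            conn.close()
        except OSError:
            break  # listener was closed

threading.Thread(target=accept_loop, daemon=True).start()
rtt = measure_connect_rtt_ms("127.0.0.1", port)
server.close()
```

Taking the minimum of repeated samples, rather than the mean, is a deliberate choice: the floor of the distribution tracks the fixed latency components, while the spread above it reflects queuing, which the model may want to treat separately.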

Accurately identifying these network latency values at the different server tiers, in a comparatively easy and efficient way, will definitely improve the accuracy of the overall system performance model.
