The Infosys Labs research blog tracks trends in technology with a focus on applied research in Information and Communication Technology (ICT)


October 11, 2013

Can formal requirement methods work for agile?

By Shobha Rangasamy Somasundaram & Amol Sharma

Formal methods, adapted and applied to agile, provide clear and complete requirements, which are fundamental to the successful build of any product. The product might be developed by following Methodology-A or Methodology-B, but that changes very little as far as knowing what to build goes. So we can safely state that the project team may use any development methodology, but good requirements are absolutely necessary. The manner in which we go about eliciting and gathering requirements would differ, and needless to say, this holds true for agile development too.

Continue reading "Can formal requirement methods work for agile?" »

January 23, 2013

Why don't Bees Teleconference while Building a HIVE?

Self-Organization in Teams - Learnings from Nature

J. Srinivas, Shilpi Jain, Sitangshu Supakar

What do a pack of wolves, a pride of lionesses, bees and ants have in common? What can we learn from them? What is self-organization (SO) and how does it form?

We are exploring different ways to induce this behavioral skill in team members for greater commitment, motivation and accountability to the work. Many of us may think: what is so great about it? We are self-organized and perform our daily routine without fail. But the question is whether we can perform equally well in a project, during a crisis, or with reduced resources.

NATURE has fine-tuned the self-organized system. Be it the conduct of animals, insects, or an ecosystem, nature organizes optimally. What are the attributes of self-organization derived from nature? Can project teams organize themselves the way nature does? Is it meaningful to compare the dynamism of NATURE with the dynamism that organizational teams face?

Before finding answers, let's understand with a few examples how self-organization is an adaptive attribute in animals and insects. How do packs of animals like wolves and prides of lionesses hunt? How do honey bees organize their affairs so well in their hive and devote themselves to the welfare and survival of their colony?

Wolves are known for their intelligence and social behavior. They organize themselves for the hunt and for the care of their group. The motive of the pack is to be as successful as possible, even if its members are not individually the strongest. The whole objective is to make the hunt a success so that every member gets sufficient food. Each wolf in the pack plays a role. There is always a leader in the pack, but while hunting it rarely interferes with or directs its fellow animals (Michael, Wolf., 1995-2005). Another interesting thing about wolves is their sense of communication; they follow communication protocols and communicate in many ways (body language, gesture, and expression). The choice of communication means depends heavily on the distance between two wolves: if they are close to each other the communication is non-vocal. Similarly, when they are in a large group, they do 'mob greetings'.

They share a common objective - food for the pack. They have communication protocols and established patterns for hunting, and individuals know how to respond to change to meet the objective. Even their play mirrors the hunt patterns.

Let's see how bees organize themselves and find flower nectar. Bees are deaf, hence they perform a series of movements called the 'waggle dance'. These dance steps help identify the source of nectar and teach other workers the location of a food source, even one 150 meters away from the hive. The bees have orchestrated movements for communication. When foraging for flower nectar, an experienced bee walks straight ahead, vigorously shaking its abdomen and producing a buzzing sound with the beat of its wings (Debbie, 2011). The duration and speed of this movement communicate the distance of the food site to the other bees. Another exciting aspect is the group size: a bee colony varies from 20,000 to 80,000 worker bees, and they all work in coordination with each other without much direction and guidance.

Both examples display the same ingredients of self-organization: adherence to a shared objective, a set of practices, patterns of behavior, and communication. They also show its benefits - commitment, efficiency, and self-sufficiency for the community. Members of the community organize themselves repeatedly and continuously to meet changing requirements.

Direct communication with partners and iterative processes help control conflicting interests and help teams adapt quickly to unpredictable and rapidly changing environments (Monteiro et al., 2011).

  • In research conducted by Hoda et al. (2011), it was shown that "balancing freedom and responsibility, balancing cross-functionality and specialization, balancing continuous learning and iteration pressure uphold the fundamental conditions of self-organization at certain level."

The Agile manifesto stresses self-organizing teams, and we explored what techniques help teams achieve a sense of teamness and the spontaneous adaptability that makes them work in short sprints, and what will make them work in the long run. In subsequent blogs we will look at how the concepts of self-organization can be brought in, in a structured manner, to help teams adapt in a changing environment. The resulting framework would help us recognize when SO can form, or help us create the right environment for it.

Our goal is to deconstruct the key concepts in the above examples and apply them in real teams, so that transforming into a self-organizing team becomes spontaneous and easy. Support for the concepts comes from a couple of papers we looked at.

REFERENCES

Cao, L., & Ramesh, B. (2007). Agile software development: ad hoc practices or sound principles? IEEE Computer Society.

Debbie, H. (2011). Honey Bees - Communication Within the Honey Bee Colony. Retrieved September 13, 2012, from About.com: http://insects.about.com/od/antsbeeswasps/p/honeybeecommun.htm

Hamdan, K., & Apeldoorn. (1989). How Do Bees Make Honey? Retrieved September 4, 2012, from A. Countryrubes Web site: http://www.countryrubes.com/D07529EF-066D-494F-A481-AB6EF6A257E9/FinalDownload/DownloadId-886680BD5CAAD9474A1D646219C0FAE6/D07529EF-066D-494F-A481-AB6EF6A257E9/images/How_do_bees_make_honey_update_9_09.pdf

Hoda, R., Noble, J., & Marshall, S. (2011). Developing a grounded theory to explain the practices. Empirical Software Engineering.

Karhatsu, H., Ikonen, M., Kettunen, P., Fagerholm, F., & Abrahamsson, P. (2010). Building blocks for self-organizing software development teams a framework model and empirical pilot study. International Conference on Software Technology and Engineering (ICSTE), (pp. 297-304). Helsinki, Finland.

Michael, Wolf. (1995-2005). What are Wolves. Retrieved September 4, 2012, from Wolf Ranch Foundation: http://www.wolveswolveswolves.org/WhatAreWolves.htm

Monteiro, C. V., da Silva, F. Q., dos Santos, I. R., Felipe, F., Cardozo, E. S., Andre, R. G., et al. (2011). A qualitative study of the determinants of self-managing team effectiveness in a scrum team. Proceedings of the 4th International Workshop on Cooperative and Human Aspects of Software Engineering (pp. 16-23). ACM.

[1] The image of wolves hunting is taken from the source: http://qpanimals.pbworks.com/w/page/5925166/Grey%20Wolf

[2] The image 'bees at work' is taken from the source: http://openlearn.open.ac.uk/mod/resource/view.php?id=387640


Continue reading "Why don't Bees Teleconference while Building a HIVE?" »

January 14, 2013

DynaTrace - Application Performance Management Solution

DynaTrace Software is a leading application performance management tool and is widely used. It comes with advanced features for monitoring Java and .NET applications, which helps identify bottlenecks or errors in the application easily. The PurePath technology used in DynaTrace provides end-to-end, transaction-level details - from the browser, across all tiers, down to the database. It helps uncover performance issues even at the code level, along with details of transactions invoking external services. The tool detects abnormalities in response time, transaction rate, throughput and system usage.

 

Introducing DynaTrace into performance testing and performance engineering has helped diagnose and fix many performance issues at an early stage. Its ability to dig deep, even to the code level, aids root cause analysis.

For example, provided below is the PurePath snapshot of a transaction with a high response time, i.e. > 93 seconds (SLA: 2 seconds). Just a click on the transaction name and DynaTrace drills down to the exact web service operation, checkoutItem, and displays the exact child method which consumes the time.

 

[Figure: PurePath snapshot drilling into the checkoutItem transaction]

 

DynaTrace helps in optimizing the performance of web, non-web, mobile, streaming and cloud applications. It supports VMware and EC2 based clouds, and it can be integrated with major testing tools like LoadRunner and SilkPerformer. Dashboards that can be customized to the requirement are another feature of the tool that aids reporting.

March 14, 2012

Interface testing

An enterprise application may comprise several software components, and these components need to interact with each other constantly. This is where an interface comes into the picture, to facilitate the working of the various modules as a single application. Performance testing is conducted to verify whether a system meets the performance criteria under varying workload.

Based on project experience, an overview of interface performance testing is given below. The interface under test interacted with the Order Management System and the Employee Management System. Order details and updated order status of customers were transferred between the two systems.

The messages sent between the systems were in XML format. TIBCO queues were tested during this exercise: messages from the Order Management System were pushed to a TIBCO queue, which consumed these messages. This process triggered an adapter service, which in turn invoked a web service call to update the Employee Management System.

During the load test, a large volume of messages was pushed to the queue, and the number of messages consumed and the time taken were monitored. We faced some issues during our interface testing, hence the following points need to be taken care of:

- Ensure queue receivers are up before the test.

- Each XML message should be on a single line (a simple pre-test check for this is sketched below).

- Ensure the latest deployments are done across all systems involved.
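
Because a malformed or multi-line message can fail silently once it reaches the queue, a quick pre-test validation of the message files saves debugging time during the load test. Below is a minimal sketch of such a check in Python, assuming the messages sit in a local messages/ directory; the directory layout and file names are illustrative and not part of the original setup.

# Hypothetical pre-test check: verify that each message file is well-formed XML
# and collapses to a single line, as required in this interface test setup.
import glob
import xml.dom.minidom

def validate_messages(pattern="messages/*.xml"):
    problems = []
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        try:
            xml.dom.minidom.parseString(text)      # well-formed?
        except Exception as exc:
            problems.append((path, f"not well-formed: {exc}"))
            continue
        if "\n" in text.strip():                   # spans multiple lines?
            problems.append((path, "message is not on a single line"))
    return problems

if __name__ == "__main__":
    for path, issue in validate_messages():
        print(f"{path}: {issue}")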

August 10, 2011

How important and critical is it to capture Events in business process modeling?

Let us take a simple process. Process name: Issue Visitor Pass for Entry into the Organization Campus. Goal of the process: to check the authenticity of the visitor and issue the pass (from the Security Officer's perspective). Event that triggers the process: "Arrival of Visitor to the Campus". The functions or tasks that follow are: gather details about the visitor, the purpose of the visit and contact details in a specified form; check in the system whether the contact person is an existing employee of the company; connect with the employee to cross-check the visitor information and get oral permission; update the visitor information (name, reason for visit, etc.) into the system; and issue a bar-code-based card with its validity mentioned. Result: Visitor Entry Completed and visitor allowed to enter the campus (the successful result).

From a high-level perspective, the arrival of the visitor is an occurrence (and, one could argue, also information sensed) that someone enters and says "I need to visit the campus". But the real information is gathered after this occurrence; so the trigger is the occurrence, or the state, wherein someone needs access to the campus (from the Security Officer's perspective).

The business resources used during the process are a paper-based visitor form filled in by the visitor, the application which holds the employee details (which the security officer checks), a telephone (to call the employee), a bar-code-based security card, and a magnetic card reader to write onto the card the holder's access permissions for restricted/non-restricted areas within the campus. There may be other checks related to restrictions on gadgets/materials allowed inside the campus, for which the Security Officer will cross-check with the visitor and ensure, to the best of his knowledge, that the visitor is safe to enter the premises of the company. So many business resources are used in this simple process.

Now we are clear on the "trigger", "process" and "result". This is a simple trigger/event and a simple result - the process is a standalone process - we need not worry about what happens after the result, or state change, of "Visitor Allowed To Enter Campus". But we can be sure of one thing: the trigger event "Arrival of Visitor" and the result "Visitor Allowed To Enter Campus" are not "action items" per se. They are state changes that trigger something to happen - a process, a service, or other logical information instantiation.

But a complex event or result might trigger multiple processes and lead to state changes that in turn trigger downstream processes. Another example is a process wherein a customer wants to purchase a book online. The event is an external event for the system - "customer has a need to purchase a book" - then comes the process wherein the customer fills in information online and places the order, producing the state change "Order for book placed". This result triggers subsequent processes in which different roles act: 1) verify inventory for the availability of the book, 2) initiate the invoice process, 3) initiate the shipping process, 4) initiate replenishment of the book if the stock falls below a certain limit. These tasks can happen in parallel and can be performed by various roles/departments. The order manager identifies the book and passes it to the shipping department, which then couriers the book; meanwhile the accounting team initiates the invoice process. "Order for book placed" triggers all these downstream processes, and the same result is used in all of them - this is process integration through the event/result combination. This is how one enables horizontal process integration, as per the initial discussion started here.
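
To make the 'one result, many downstream processes' idea concrete, here is a minimal sketch of event-driven dispatch using a simple in-process registry; the event name and handlers mirror the book-order example above and are purely illustrative.

# Minimal event dispatcher: one result ("order.placed") fans out to several
# downstream processes, which in practice would be owned by different roles/departments.
from collections import defaultdict

handlers = defaultdict(list)

def subscribe(event, handler):
    handlers[event].append(handler)

def publish(event, payload):
    for handler in handlers[event]:
        handler(payload)

# Downstream processes triggered by the same result (names are illustrative).
subscribe("order.placed", lambda o: print(f"Verify inventory for '{o['book']}'"))
subscribe("order.placed", lambda o: print(f"Initiate invoice for order {o['id']}"))
subscribe("order.placed", lambda o: print(f"Initiate shipping for order {o['id']}"))
subscribe("order.placed", lambda o: print(f"Check reorder level for '{o['book']}'"))

publish("order.placed", {"id": 42, "book": "Improving Performance"})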

In more complex event situations, the "annual budget cycle" - a trigger event - initiates multiple processes: business planning, resource planning, market plans, target definition, product development and technology planning. Here comes the problem: these are all high-level business processes that are further broken down to various levels. It becomes a huge task to first identify the relevant events and the associated processes and then make sure process integration exists both vertically and horizontally. It becomes like boiling the ocean when one tries to cascade and capture the same event and result everywhere across multiple levels of abstraction of business processes (for which one can use the tree, branch, and leaves analogy - top-down or functional decomposition).

From a practical business process modeling perspective, it is really difficult to elicit the right trigger event information - it is usually a generic understanding of what triggers what. One cannot make great sense of the events/results except in cases where one tries to integrate horizontally - and this, too, is left to the modeler's interpretation in most cases. But if the purpose is system development, then make sure the right event/result, or pre-condition/post-condition, is elicited and documented completely.

To summarize, process architecture definition is crucial, while event architecture definition is an add-on or nice to have!

April 1, 2011

Monitoring Load Generators during Performance Testing

Is monitoring of load generators required during performance testing? This question is generally answered with a YES, but in practice it is rarely done. The significance of understanding the load generation process is generally overlooked in performance testing. This blog discusses a case study where the load generators were not able to generate the expected load, and how it was resolved.

We were working on a Proof of Concept (PoC) to study the performance of a sample application under consistent peak load. The load tests were executed for a single business transaction with multiple loads starting from 100 users. All the load tests included a ramp-up to let the application handle the gradual increase in load. The load scenarios were created with a minimal think time of one second and were executed for a short period of 15 minutes, as the objective was to find the peak load which results in 80% CPU utilization.

The application was able to handle up to 300 users, but a lot of errors were thrown when the user load reached about 330 users during the ramp-up of the 500-user test. The test results log showed HTTP 500 Internal Server Errors and exceptions such as java.net.BindException, java.net.ConnectException and java.net.SocketException. Based on the server log analysis, it was found that the HTTP 500 errors were caused by wrong values being passed for some of the request parameters. The script was designed to capture these dynamic values from the response of the previous request and use them in the subsequent request as required. It turned out that the first request itself had failed, so the second request was not updated with parameter values, which resulted in a java.lang.NumberFormatException, and HTTP 500 errors were returned for the second request. As the primary cause of the failures was the first request failing, the analysis shifted to why the first request failed. The error responses of those requests were related to socket exceptions, but none of those socket exceptions were logged in the server logs. To understand things in more detail, the load generator was monitored to find out how socket connections were established from the testing tool to the server. This was done using the Microsoft Windows netstat command. The output of the netstat command showed too many socket connections to the server - the number was much higher than the number of users simulated by the testing tool. It was also observed that most of these sockets were in the TIME_WAIT state.
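
As a rough sketch of that kind of check, the snippet below runs netstat and tallies TCP socket states toward the system under test; the server address is a placeholder and the column positions assume the usual Windows netstat -an output, so treat it as illustrative rather than the exact procedure used in this PoC.

# Tally TCP socket states toward the application server, as reported by netstat.
import subprocess
from collections import Counter

SERVER = "10.0.0.5"  # placeholder address of the system under test

def socket_states(server=SERVER):
    out = subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout
    states = Counter()
    for line in out.splitlines():
        parts = line.split()
        # Windows format: Proto, Local Address, Foreign Address, State
        if len(parts) >= 4 and parts[0] == "TCP" and server in parts[2]:
            states[parts[3]] += 1
    return states

if __name__ == "__main__":
    for state, count in socket_states().most_common():
        print(f"{state}: {count}")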

After finding that the issue lay in how connections were established from the load generator to the server, further investigation was required into how the HTTP calls were being made. This led to analysis of the testing tool plug-in used to simulate HTTP requests, and it was found that the default plug-in caused the issue because HTTP connections were not being reused. This also explained why so many sockets were in the TIME_WAIT state. Based on the tool's documentation, the plug-in was replaced with another plug-in that supported connection reuse; the problem was not resolved until the script was also updated to use the new plug-in. The test was executed again, and the load was generated as expected without any socket issues.

This whole exercise made one thing very clear: close monitoring of load generators during performance testing (at least for basic system-level metrics) should be part of the performance testing process. This helps uncover load-generation-related issues during the testing cycles. It also helps ensure that the load generator is able to generate the load as expected, reducing the time and effort spent on unnecessary application analysis.

February 9, 2011

Analyze Basic Workload information of a Server through Web Server Logs

One of the easiest and simplest ways to analyze the workload of a server is through analysis of the web server's HTTP access logs. A web server's HTTP access log can contain information like the Date-Time stamp, Client-IP, Request-URL, Status Code, Time-Taken (processing time), cookie information, etc. (these fields vary with the logging format used).

- Date-Time: the time stamp at which the request arrives.
- Client-IP: the IP address of the machine from which the request came.
- Request-URL: the page URL requested by the client.
- Status Code: indicates whether the request was successfully served or not.
- Processing-Time: the time taken by the server to process a given URL request.
- Cookies: the cookie information.

An example for web server log generated by Apache server is given below.

##IPADDRESS## - - [29/Jun/2010:03:23:51 +0000] "GET /MyApp/SignUp.jsf HTTP/1.1" 200 661

##IPADDRESS## - - [29/Jun/2010:03:25:31 +0000] "POST /MyApp/SearchCompany.jsf HTTP/1.1" 200 14088

##IPADDRESS## - - [29/Jun/2010:03:26:28 +0000] "POST /MyApp/SelectCompany.jsf HTTP/1.1" 200 2155

##IPADDRESS## - - [29/Jun/2010:03:26:32 +0000] "POST /MyApp/CompanyDetail.jsf HTTP/1.1" 200 2355


To examine the number of requests processed by the server, the Date-Time field is required in the log file. With the help of a web log analysis tool (like Microsoft Log Parser 2.2, Sawmill, Lizard Log Parser, etc.) we can find the number of requests coming to the server over a period of time, by querying the number of log entries against the Date-Time field. These results can be further analyzed to figure out the request arrival rate across different sampling intervals (per second, minute, hour, etc.). The arrival rate clearly shows when the server is busy and when it is idle, which helps in identifying the core hours and non-core hours for the particular application.
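
As a minimal, hand-rolled alternative to a dedicated log analysis tool, the sketch below counts requests per minute from an Apache-style access log such as the one shown above; the log file name is a placeholder.

# Count requests per minute from an Apache-style access log to study the arrival rate.
import re
from collections import Counter

# Matches the bracketed timestamp, e.g. [29/Jun/2010:03:23:51 +0000]
TS = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}):\d{2} [+-]\d{4}\]")

def requests_per_minute(log_path="access.log"):
    per_minute = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = TS.search(line)
            if match:
                per_minute[match.group(1)] += 1  # key is dd/Mon/yyyy:HH:MM
    return per_minute

if __name__ == "__main__":
    for minute, count in requests_per_minute().items():
        print(f"{minute}  {count} requests")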

Thus, by analyzing the web server logs, one can easily identify the workload handled by each server.

January 31, 2011

Is your workflow becoming a bottleneck in addressing production issues on time?

In one of my interactions with a client manager, we were discussing ways in which the development team could respond faster when a performance problem occurs in a production environment.

As with any exercise, understanding and analyzing the root cause of a performance bottleneck begins with collecting all the relevant information that would assist the development team in re-creating and troubleshooting the problem scenario in the development environment.

The most trustworthy, and at times the only, source for identifying the root cause of any production problem is the set of logs generated by servers across the different tiers. For instance, a web server log can yield valuable insights on the application's workload, helping the development team re-create the production workload in the test environment. Application server logs provide insights on transactions and any exceptions that triggered the problem. Thus, it's imperative they are made available to the development team.

Now, the real challenge seems to be this - how soon can the development team gain access to the logs? In most organizations, the production environment and logs are managed by the infrastructure team. For any problem analysis, the development team needs to request logs for the required duration from the infrastructure team. The existing log management practices, security and privacy regulations, and approval channels can introduce certain unavoidable delays in delivering these logs. Nevertheless, the maturity of your processes is measured by how well these delays are optimized. How long does the development team need to wait before they get to see the logs - hours or days? Are your operational processes becoming a bottleneck here?

December 21, 2010

Why a 'guesstimate' while identifying the workload mix may invalidate the complete performance testing findings

Modeling the workload for a web application plays an important role in defining the success of the performance testing exercise. Though it does not directly impact the accuracy of the test results obtained, it does impact the accuracy of the tests carried out to verify the performance. Simply put, if the workload is not modeled accurately, the confidence with which the claims defined as the performance testing goals can be ascertained is negatively affected.

Most often it is believed that the main objective of assessing the workload for a system is to generate a manageable workload so that the system is neither overloaded nor underloaded. However, contrary to what might be thought, the focus of accurately modeling the workload is on loading the system 'appropriately', be that overloaded or underloaded, i.e. loading the system in accordance with what is expected in the production environment.

A typical performance testing exercise undertakes workload modeling as a task to identify the load to be generated on the servers. A graph is plotted, as shown below, which represents the load on the web server, the entry point to the application, against time.

[Figure: load on the web server plotted against time]

What follows is a 'guesstimation' to come up with the load to be generated on the servers. Being a peak period, time t' is considered for determining the workload, and an appropriate figure for the load, wload, is calculated. This wload comprises the total number of page hits the web server has attended to. Further, the hits to the different URLs are taken into consideration. Based on this URL analysis, the 'workload mix' is finally defined.

However what is often neglected is the accuracy of this workload mix, i.e. the distribution of various tasks to be sent to the servers. This distribution is of utmost importance as it finally governs how the different servers are going to be loaded.

To illustrate with an example, let's assume the 'wload' mentioned above can be broken into three different sets of URLs which carry out three separate tasks: task1, task2, task3 (for example Search, Add to Cart, Checkout). It is fairly simple to see that any task will generate a sequence of events at different servers in an application distributed across tiers. task1 may be a CPU-intensive task at the application tier, task2 may be memory-heavy at the application tier, and task3 may be database-intensive.

So if, during the time t' when the workload was analyzed, the distribution of the workload among these tasks is not determined accurately, the very basis of the performance testing exercise is undermined. Say, instead of loading more task1s, more task3s are loaded. This would generate more load on the database tier instead of the application tier. This is certainly not a production-like scenario, thus invalidating the complete set of results generated by these performance tests.

Hence it is of critical importance to accurately identify this distribution of workload among the mix of the different tasks, typically referred to as transactions. This calls for an analysis of the workload by looking at sets of URLs together, not just individual URLs.
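
As a minimal sketch of that grouping step, assume we already have per-URL hit counts for the peak window t'; the URL-to-transaction mapping and the counts below are purely illustrative.

# Derive the workload mix (transaction distribution) from per-URL hit counts
# by grouping related URLs into business transactions.
url_hits = {                      # hypothetical hit counts for the peak period t'
    "/shop/Search.jsf": 5200,
    "/shop/SearchResults.jsf": 4100,
    "/shop/AddToCart.jsf": 1400,
    "/shop/Checkout.jsf": 700,
}

transaction_of = {                # illustrative URL-to-transaction mapping
    "/shop/Search.jsf": "Search",
    "/shop/SearchResults.jsf": "Search",
    "/shop/AddToCart.jsf": "Add to Cart",
    "/shop/Checkout.jsf": "Checkout",
}

mix = {}
for url, hits in url_hits.items():
    name = transaction_of[url]
    mix[name] = mix.get(name, 0) + hits

total = sum(mix.values())
for name, hits in mix.items():
    print(f"{name}: {hits} hits ({100 * hits / total:.1f}% of the mix)")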

December 7, 2010

How long is 'healthy' response time - response time standards?

One of the key aspects of performance is the response time of interactions with the system. What is typically talked about, worried over and designed for is the end-user response time, or the online user experience. Throughout the stages of the SDLC this aspect is debated, and finally everyone accepts what the test results demonstrate. What the business wants is 'faster' responses - in essence, 'fast' meaning to the satisfaction of end users. The quantification of 'fast' is typically 'too fast' during the requirements phase, so is there any way of arriving at rational expectations - are there any response time standards?

(i) Standards are subjective

Note that there are no 'specified standards' when it comes to response times for user-facing transactions. The traditional figures of 3 seconds, or a maximum of 8 seconds, are 'legacy' figures; moreover, advances in technology can facilitate responses at the level of milliseconds. What matters most in determining the standards for a given system is the 'user perception' of the service being offered by the system. Rather than looking 'outside' for response time thresholds, it's important to put yourself in the place of the end user and arrive at the numbers that suit the given enterprise and application.

Business, designers and architects need to take into account the factors that affect user perception of the service being offered. Arrive at rational figures based on the variety of transactions and responses associated with the system (during the planning and requirements phases). Cascade the end-to-end responses across the components within the system. Evaluate, monitor and calibrate the responses throughout the coding and testing phases. Validate that the 'final' response times are mutually satisfactory.

(ii) Response time bands

Based on measurements of 'attention span', the following are some basic guidelines to take into consideration:

Sub-second response (less than 0.5 sec on average): These latencies are typically not registered as 'taking time' by end users - it is 'done immediately' from a human perspective. This is typically the case when users want to proceed quickly to the next action and do not expect or want to wait for the outcome of the 'previous' action. For example, users 'operating' fields and entities in the UI.

Response in seconds (less than 2 sec on average): These times make users notice the delay - users sense that the system is 'working' on the inputs provided and has returned a response fairly soon, without undue waiting. Users do not feel that the system is sluggish, and they do not lose the sense of 'smooth flow' in their journey of completing the task. For example, user operations that require providing credentials, or navigation to the 'next' step after current actions.

Extended response (more than 5 sec on average): This is typically the limit for users to keep their attention on the current task. Anything slower than 8 seconds must provide the end user with a "percent-done" indication and a clear facility to halt or interrupt the operation at whatever stage it has reached. At these response times, users should not be expected to 'remain' on the same page or task; rather, they 're-orient to the previous task' when they return to the response after doing some 'other task'. Any delay longer than 10 seconds results in a natural break in the user's current workflow.
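
These bands translate directly into a simple check that can be run over measured response times, for instance when reviewing test results against proposed SLAs. Below is a minimal sketch using the thresholds quoted above; the sample transactions and timings are made up, and the 2-5 second range is not explicitly classified in the bands.

# Classify measured response times into the attention-span bands described above.
def band(seconds):
    if seconds < 0.5:
        return "sub-second: perceived as immediate"
    if seconds < 2:
        return "response in seconds: delay noticed, flow preserved"
    if seconds <= 5:
        return "between bands (not explicitly classified above)"
    if seconds <= 8:
        return "extended: at the limit of the user's attention"
    if seconds <= 10:
        return "extended: needs a percent-done indicator and a way to interrupt"
    return "too slow: breaks the user's current workflow"

# Hypothetical 90th-percentile timings from a test run.
for name, rt in [("Login", 1.4), ("Search", 0.3), ("Checkout", 6.2), ("Report", 12.0)]:
    print(f"{name}: {rt}s -> {band(rt)}")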

 

(iii) Choose what suits you best

Based on the response time bands and attention spans above, and the service being offered, choose the SLAs that work best for you. The factors for selection are: (i) the service or business being done: banking, telecom, health, call centre, etc.; (ii) distance of the end user - direct user interaction versus users reaching the service through intermediaries, multiple users consuming the same service, the system's inner complexity, the need to get responses from 'outside' systems; (iii) factors affecting attention span - age groups of service consumers, emotions and temperament of users, professions and occupations of users, possible reasons for user attrition, different time-phased ways of accomplishing the same task; (iv) real-time criticality of the transactions under consideration; (v) current market trends and 'competitors' offering 'similar' services; (vi) the balance between architecture, technology and business.

Having irrational targets and failing to meet them, or settling for slower targets, leads to 'internal' dissatisfaction with your own system. Balancing the factors and arriving at prudent targets works better for IT as well as the business. Design the responses keeping in mind how the end user will use them - make the right choices for suitable transactions, remembering that users never accomplish 'all' the tasks in a single flow or span!

November 11, 2010

Bottleneck Analysis of J2EE Applications using Performance Management Tools

As J2EE applications are distributed in nature, interaction of components across layers is required to fulfill a single request. Also, components that behave well in isolation might show unacceptable performance when working under load. In a complex J2EE environment facing performance issues, identifying the problem layer or component is the most difficult task. In these scenarios, performance management tools can help in isolating, analyzing and resolving performance issues that surface in complex applications during the testing phase.

 

J2EE performance management tools make it possible to monitor a J2EE application under load conditions and isolate the bottleneck-causing components. These tools typically have the following capabilities:

• They have very low overhead and can be deployed in testing and production environments to identify bottlenecks.

• They provide mechanisms to integrate with load testing tools. They can monitor applications in a load testing environment, detect performance regressions inside applications that are invisible to the testing tools, and precisely isolate complex performance issues.

• They provide layer-wise and request-wise performance metrics. The tools can track a request across the different layers in the J2EE environment and report its performance metrics in each layer. For example, a tool can capture the time a request spent in the Servlet/JSP layer, the EJB layer and the JDBC layer. This is very useful in narrowing down the layer that is causing the performance problem.

• They provide the capability to drill down from the layer level to the actual method call level for more precise performance measurements.

• They capture performance issues across all variables, in the context of all or selected transactions, and correlate them with environmental influences such as virtualization, latencies, and configurations.

• They provide expert tips indicating potential bottlenecks and their possible causes.

 

Leveraging these performance management tools provides the following benefits:

• Quick and easy diagnosis of application failures and performance problems, in real time or offline, without having to reproduce problem scenarios on local workstations, in turn freeing up key development resources for building new features.

• Reduced and accelerated test cycles, by eliminating test runs that exist only to enable additional logging options for drilling into a particular problem, allowing more time to be spent on strategic activities.

 

 

October 6, 2010

Performance Extrapolation that Uses Industry Benchmarks with Performance Models

A white paper written by me and my colleague Kiran C Nair was presented at the SPECTS'10 conference (http://atc.udg.edu/SPECTS2010/program.php). This conference targets professionals involved in the performance evaluation of computer and telecommunication systems. The paper describes an approach to predicting application performance using industry-standard benchmarks and Queuing Network Models. Repeated requests to extrapolate application performance to different hardware were the key motivation for writing this paper.

Industry benchmarks like SPEC and TPC provide a standard way to compare system performance and also act as pointers for capacity planning. Some analytical methods use these benchmarks to linearly project system utilization and throughput. However, when it comes to predicting performance metrics (utilization, response time, throughput, etc.) for a multi-layered application like an OLTP application distributed across multiple resources, this approach to extrapolation does not provide holistic results.

Then there are performance modeling techniques like QNM, QPN, LQN, etc. that give enterprise architects a detailed understanding of application performance under varying scenarios. These models are created using measurements from the existing system, and any change in hardware would change these measured values and the performance model itself.
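
For a flavor of what such a model computes, the sketch below applies the standard open-queueing relationships (the Utilization Law U = X * D, and R = D / (1 - U) for a single queue) to per-tier service demands; the demand and throughput figures are made up, and a real QNM for a multi-tier application would be considerably richer.

# A toy single-class open queueing model: per-tier utilization and response time.
# U_k = X * D_k (Utilization Law); R_k = D_k / (1 - U_k) for an M/M/1-like queue.
service_demands = {   # seconds of service per transaction at each tier (illustrative)
    "web": 0.005,
    "app": 0.020,
    "db": 0.015,
}
throughput = 30.0     # transactions per second (illustrative)

total_response = 0.0
for tier, demand in service_demands.items():
    utilization = throughput * demand
    if utilization >= 1.0:
        print(f"{tier}: saturated (U = {utilization:.2f})")
        continue
    response = demand / (1.0 - utilization)
    total_response += response
    print(f"{tier}: U = {utilization:.0%}, R = {response * 1000:.1f} ms")

print(f"Predicted end-to-end response time: {total_response * 1000:.1f} ms")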

So industry benchmarks were useful for comparing hardware performance, but they could not provide much insight into performance metrics for distributed applications. For what-if analysis of such multi-tiered applications under horizontal scaling and varying workload scenarios, performance models were used. But on their own they could not predict the impact of vertical scaling. Thus, to get an overall performance prediction for hardware scaling and other changes, a hybrid of these two approaches was experimented with. The approach and the findings are detailed in the paper.

September 29, 2010

Network Latency in Performance Tests

In performance testing, the response time of web pages at the client end (browser side) is an important measure. It indicates how end users perceive the performance of the system at large. Even if an application is able to serve requests quickly, the client-side response could be impacted by network delays. For a complex, resource-intensive web page this impact is amplified, as the client may have to make multiple round trips to the server to fetch the web resources needed to load the complete page. Thus, in troubleshooting any performance problem, it is important to isolate and understand whether the problem is with the system or with the network.

Continue reading "Network Latency in Performance Tests" »

September 23, 2010

Process Modeling Series VI: What do you want to model as part of Enterprise Business Process Modeling?

As discussed in the initial blog, enterprises are organized structurally as business units/departments/divisions and virtually as functions; all these organizations exist, in turn, to enable the business processes that bring products/services to customers effectively. Michael Porter's classic Value Chain principle is a good way of looking at an enterprise structure - primary functions (inbound logistics, operations, outbound logistics, marketing & sales, services) which bring products/services to customers, and support functions (firm infrastructure, human resource management, technology development and procurement) which support the overall process of bringing value to customers. Michael Porter introduced Value Chain Analysis as a 'systematic way' of examining all the activities a firm performs and how they interact, for analyzing the sources of competitive advantage. The Value Chain disaggregates a firm into its strategically relevant activities in order to understand the behavior of costs and the existing and potential sources of differentiation. With this in mind, we can safely use Value Chain analysis as a start for process architecture definition: define the primary and support functions of the enterprise first and then drill down hierarchically to identify the major business processes. Once the major processes are listed, one can use a process modeling methodology and process modeling tool to carry out the information gathering exercise and model the business processes along with all the business resources that are part of them.

The other viewpoint is to take a 'Value Stream' view, which cuts across the structural set of business units/divisions as well as the virtual organization of business functions (CRM, Supply Chain, etc.); an enterprise can typically be divided into around 12 value streams, like order to cash, concept to design, manufacturing to distribution, recruitment to retire, etc. A classic, information-rich 'Enterprise - Value Stream or Capability Hierarchy' description can be found at the following link - http://www.enterprisebusinessarchitecture.com/model/Enterprise%20-%20Entity/Enterprise%20-%20Value%20Stream%20or%20Capability%20Hierarchy.htm (by Ralph Whittle & Conrad Myrick, authors of the book 'Enterprise Business Architecture - The formal link between strategy and results'). This gives an idea of how to define an enterprise, how to drill down to the major business processes that are part of each value stream, and from there how to define the details of the activity/task flow using a process modeling methodology and tool.

So, for enterprise-wide process modeling there are two top-down approaches to identifying and listing business processes - one is a value chain based approach and the other is a value stream based approach. Both views are quite effective for seeing the bigger picture of the enterprise in terms of business processes, and both are of interest to the top management community. If I had to provide a representative view of a value chain based approach to process modeling for a generic enterprise, I would represent the bigger picture as something like this - this is just a representation and not a complete or real-life enterprise representation:

[Figure: representative value chain based view of a generic enterprise]

The significant idea in this representation is that we have the Corporate Planning and Performance Management process - covering business planning, strategy formulation and business motivation development - as a value stream that can be represented in detail by hierarchically listing the various business processes that are part of the stream. The Primary Functions/Operations involve Production/Manufacturing & Services Design & Development and the other major business processes as represented; these value streams can be broken down into further granularity and the business processes modeled. For Secondary Functions there can be two classifications - one for business enablement and control, and the other related to human resources; these major processes/streams can likewise be represented in more detail by hierarchically breaking down the process flows. One drawback of this representation is that nowhere do we see 'customers' - so it is highly necessary to include the customer-side viewpoint, wherein the journey of how the customer reaches out to the enterprise, and how the enterprise reaches out to the customer, is modeled and analyzed for better results. Modeling "Customer Journey and Handshakes" processes is crucial for improving business processes, apart from having a wider and bigger representation of the enterprise itself through a value chain or value stream representation.

I would like to quote one important insight from the book "Improving Performance - How to manage the white space on the organization chart" by Rummler and Brache. Rummler and Brache communicate that "Many managers don't understand their businesses. Given the recent "back to basics" and "stick to the knitting" trend, they may understand their products and services. They may even understand their customer and their competition. However, they often don't understand, at a sufficient level of detail, how their businesses get products developed, made, sold and distributed. We believe that the primary reason for this lack of understanding is that most managers (and non-managers) have a fundamentally flawed view of their organizations". With this insight, they go on to communicate that there are three standard ways of viewing an enterprise - 1) The Traditional (Vertical) View of an Organization 2) The Systems (Horizontal) View of an Organization and 3) The Organization as an Adaptive System; the authors list the details of these views in the book.

From the above insight, we have a clear takeaway: "viewing the organization" with a sufficient level of understanding is essential, and I would suggest that this answers the question of this blog - what would you like to model as part of Enterprise Process Modeling? One would like to create "views" of the organization as a process architecture blueprint and from there drill down to lower levels of detail on how a product/service is made, developed, and reaches customers. It is of foremost importance to educate managers, and process modeling is a good language for doing that; if managers are aware of where they stand in the giant enterprise machinery and how they contribute to customer satisfaction, that is the first step toward process improvement. So, join the journey of Enterprise Process Modeling, travel the path from the enterprise side and from the customer side, and fix the gaps and breaks so as to improve business and customer satisfaction!

In subsequent blogs we shall discuss the operational aspects of enterprise process modeling - the complexity associated with the effort and how to attack the operational hiccups.

September 22, 2010

Identifying Network Latency is the key to improve the accuracy of the System Performance Models

The network plays an important role in defining the user experience for a distributed application accessible over the internet. The majority of effort is focused on improving response times at the server; however, the time it takes for the response to reach the client cannot be neglected. Network latency is a measure of the time delay observed when a packet of data is transmitted from one designated point to another. Some usages also define network latency as the time spent by the data on a complete round trip, i.e. from source to destination and from destination back to source.

 

In an ideal network, data would be transmitted instantly between one point and another (that is, without any delay at all). However, different elements introduce their own respective delays and in turn contribute to the overall network delay. The following are a few key factors:

  • Network Interface Delays: the time the designated point in the data transfer (sender or receiver) takes to convert the data to or from the physical transfer medium.
  • Network Element Delays: the delay caused by various activities performed along the path by network elements like routers, switches or gateways. These activities can be any of the following:
    • Processing: the time spent by these elements processing the received packets of data to determine what action needs to be taken.
    • Forwarding: the time spent by routers and switches to understand and switch/forward the data to the designated destination.
    • Queuing: the time a packet spends waiting at routers and switches before being forwarded to the destination. (This queuing happens because only a single packet can be forwarded by a router/switch at a time to a destination.)
  • Network Propagation Delay: the time spent by the data traveling through the physical transfer medium.

So a certain amount of time is spent transferring the data from source to destination. Considering how quick the responses expected from server machines are today, even the slightest delay caused by a high-latency network can significantly degrade the overall application experience for the user.

 

Moreover, in any distributed application environment, a network also exists between the different tiers, for example the web, application and database tiers. So, put together, it forms a significant part of the overall transaction response time observed at the client side.



Network Latency (NTime) forms a significant part of the overall response time observed at the client end, along with the server processing time (Proc).

Performance engineers aim to include in the system performance models every component that adds to the response times, and thus affects server utilizations and transaction throughputs. However, one tends to model only the components for which the effort demanded by a particular task is known. So a web server, an application server or a database server processing a task can easily be modeled, as the service demand values for those servers can be measured during the testing phase.

Network latency, however, remains a comparatively complex part to calculate based simply on the test results. We can calculate network latency from production data, but that requires additional monitoring data, which tends to further delay the modeling exercise; and if the data cannot be produced, it also requires an additional investment in the monitoring setup. Hence the network latency factor tends to be neglected, or assumed to be a constant delay, adding to the inaccuracy of the model.
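
One cheap approximation, when no dedicated network monitoring is available, is to time repeated TCP connection setups between the tiers and take the minimum as a latency estimate. Below is a rough sketch under that assumption; the host and port are placeholders, and a TCP connect only approximates a single round trip, so this is indicative rather than precise.

# Roughly estimate network latency between two tiers by timing TCP connects.
import socket
import time

def estimate_latency(host="10.0.0.20", port=8080, samples=20):
    """Return the minimum observed connect time in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass                      # connection established, close immediately
        timings.append((time.perf_counter() - start) * 1000.0)
        time.sleep(0.1)               # avoid hammering the target
    return min(timings)

if __name__ == "__main__":
    print(f"~{estimate_latency():.2f} ms connect latency (approx. one round trip)")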

Accurately identifying these network latency values at the different server tiers, through a comparatively easy and efficient method, will definitely improve the accuracy of the overall system performance model.

September 21, 2010

Process Modeling Series V: Value Proposition for Enterprise Process Modeling

Oops... the big question - what are the benefits and value proposition of Enterprise Process Modeling, and how can one sustain them? This is close to questions like the benefits of Enterprise Architecture, of BPM, or of SOA, for which researchers, academics and practitioners are trying to provide concrete and up-to-date answers. We need to be a little clearer here on how process modeling fits into an organization's wider programs and initiatives. There are multiple ways to fit process modeling in: it can be fitted into business architecture, which is in turn part of enterprise architecture; into a BPM initiative; into the business excellence function of the enterprise, which exists to ensure business processes are improved upon; or into other initiatives like lean management, six sigma, etc. So, wherever enterprise-wide process modeling fits, I can list a standard set of 18 value propositions that can be achieved through process modeling:

1.    Ensuring Robust Architecture - multi level architecture drill down with standard set of symbols and elements through modeling business processes

2.    End to End Process Connections - create and develop various views through modeling - enterprise value chain view, function view, business division view etc for decision making

3.    Consistent terminology/methodology - utilize a consistent approach if the enterprise is using various tools (ARIS, Casewise, Visio, etc.) so that people speak the same language

4.    Repository Control & Governance - establish high quality repository through governance for better reporting out of the process modeling tool used

5.    Process Improvement - Structure process models for improvement and enable root cause analysis and knowledge management through stakeholder involvement

6.    Best Practices Pool - develop best practices pool for modeling standards and identify quick wins for process modeling effort and improvement projects

7.    Knowledge Management using process models - collaborate, idea management, best practice management and learning aids

8.    Business Intentions and Process Models - pilot or model organization-wide upcoming scenarios through process models; develop a live scenario and represent it through models

9.    Document Customer Journey - Build process model for particular customer journey to see handshakes and communication gaps

10.  ERP Package Implementation - standardize ways of analyzing processes supported by ERP and non ERP applications

11.  Process Measurement - charting and reporting out of the process modeling tool, and for business intelligence as a whole

12.  Process Ownership - identify an owner to the process

13.  Enterprise Architecture Effort - Technology Architecture is to be supported by Business Architecture, or vice versa - model business processes to support technology decisions

14.  Simulation - Model first to simulate and identify critical paths (though the real time usage of business process simulation is less utilized in business enterprises)

15.  Impact Analysis - business impact analysis through process models and their associations

16.  Help define new processes - you need to model it to socialize it

17.  Process Reusability Scenarios - standardization and harmonization are possible through process modeling in the first place

18.  IT Requirements Modeling - surely helps produce robust requirements with minimal missing requirements, something I have experienced in practice

Without a clear picture of the value of process modeling, it is definitely going to be a turbulent journey all the way - if an organization is starting a process modeling journey, it is advisable to have structured 3-month, 6-month and 12-month goals and ways to achieve them. Without a destination, the journey can take any path and might not be useful, as the value will not be there when it is needed. Ensuring that the "motivation" aspect of the process modeling framework I talked about earlier is in place will breed success for the effort. There are various other frameworks practitioners can adopt, including the balanced scorecard, process maturity analysis frameworks, etc., to define and govern the process modeling value proposition. Understanding the business scenarios to be tackled within the enterprise - an M&A scenario, a Business Unit Transformation scenario, etc. - can further help define goals and apply process modeling, which can then speak loudly for the value proposition. If process modeling and analysis is used for Enterprise Decision Management, that is the ultimate trophy for the process modeling effort; EDM mostly utilizes the business analytics and intelligence stream for decision making.

Chart out your process modeling journey, and then ensure the value proposition is spelled out against the goals all along that journey!

September 20, 2010

Process Modeling Series IV: Process Hierarchy and Granularity Definition in Enterprise Process Modeling

Business processes are defined or modeled hierarchically so as to comprehend them easily. Processes are decomposable to multiple levels of granularity until one reaches the basic atomic task, which cannot be decomposed further and would carry no meaning on its own. Business process architecture definition usually starts with hierarchical process definitions up to a certain level of decomposition (major processes), and then business processes (minor processes) are represented as a 'flow' detailing how work/tasks flow among business roles and get accomplished. Major processes are represented hierarchically so that one can understand the value streams and the process groups that are part of the value chain - major processes that cannot be represented as "flows"; for example, Accounting to Reporting is a value stream which in turn contains major processes like accounts payable, accounts receivable, intercompany processes, reporting processes, etc. These major processes can once again be decomposed to subsequent levels, wherein one can represent them as flows detailing how the activity/task is performed by organizational roles.

There are various references in the literature regarding process architecture hierarchy representation. Here I summarize them through a comparison of process architecture hierarchy representations drawn from a literature survey:

[Figure: comparison of process architecture hierarchy representations]

1.    BP Trends (refer to the book "Business Process Change - A Manager's Guide to Improving, Redesigning and Automating Processes" by Paul Harmon): suggests a Value Chain perspective per se, and we can reasonably interpret that a 'Major Business Process' can have a minimum of three levels of sub-granularity depending on the complexity of the process.

2.    ARIS Hierarchy (refer to the book "ARIS Design Platform - Getting Started with BPM" by Rob Davis): according to the ARIS approach, a process architecture typically consists of 4 to 6 levels of process models. Besides the structure of process models, the architecture representation also includes other views of the ARIS concept (e.g., organizational diagrams, data models, objective diagrams, IT landscape model, etc.). With 6 levels of representation, it is difficult to comprehend the business process hierarchy easily.

3.    Value Creation Hierarchy (adopted from an article by Geary Rummler and Alan Ramias of Performance Design Labs, for BP Trends): one can clearly see here that processes decompose into sub-processes, further into tasks (activities, if one prefers) and then into sub-tasks (tasks, if one prefers).

4.    Generic BPM hierarchy: easy to interpret - start with Mega Processes, which decompose into Major Processes. Major Processes in turn can include three levels of granularity - sub-process, activity and task. This representation enables modelers and analysts to structure the business processes so that one can easily comprehend them.

5.    eTOM Model (refer: www.tmforum.org ): in the eTOM reference model, the first level, the Conceptual Level (Level 0), is essentially an organization view. Level 1 (Top Level) corresponds to the value chain; Level 2 (Configuration Level) and Level 3 (Process Element Level) correspond to the process and sub-process levels; and Level 4 (Implementation Level) corresponds to the activity flow, where roles become visible - more of a flow representation.

6.    SCOR Hierarchy (refer: www.supply-chain.org ): in the SCOR hierarchy, the first three levels are higher-level representations of the supply chain function; below Level 3, each process element is described by classic hierarchical process decomposition into any number of levels.

7.    APQC Hierarchy (refer: www.apqc.org ): APQC suggests four major levels - Category, Process Group, Process and Activity. Categories span the operating processes and the management and support processes - 12 in all. According to the APQC PCF, activities are then specific to the individual enterprise, and it is there that processes are differentiated to gain competitive advantage.

Based on the comparison above, it is clear that defining process hierarchy and granularity is a crucial step in process architecture definition. But this varies from enterprise to enterprise, and can also vary from function to function within an enterprise. It is important to define the process architecture terms clearly so that everyone can refer to them and avoid confusion about the hierarchy. Here is my attempt at defining the various levels in a process hierarchy - seven levels, some of which can be broken down further depending on the complexity of the process being represented in a model:

Process Modeling Blog IV_2 Revised.gif

These generic business rules help define the "granularity" of a process (or activity or process item) within the seven hierarchy levels defined here. One might not worry about this in standalone process modeling for system development or for process improvement within a single function, but when modeling is done at an enterprise level it becomes crucial to have a defined process hierarchy and business rules for granularity. Once in place, this helps enormously in comprehending business processes and makes knowledge transfer about them far easier!
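To make the granularity idea concrete, here is a rough sketch in Java of a hierarchy with a simple "a child must sit exactly one level below its parent" rule. The level names are illustrative placeholders only - the authoritative seven levels are those in the figure above - and the Accounting to Reporting example mirrors the one used earlier.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative level names only; the authoritative seven levels are those in the figure above. */
enum ProcessLevel {
    VALUE_STREAM, MAJOR_PROCESS, MINOR_PROCESS, SUB_PROCESS, ACTIVITY, TASK, SUB_TASK
}

/** A node in the enterprise process hierarchy. */
class ProcessNode {
    final String name;
    final ProcessLevel level;
    final List<ProcessNode> children = new ArrayList<>();

    ProcessNode(String name, ProcessLevel level) {
        this.name = name;
        this.level = level;
    }

    /** Granularity rule: a child must sit exactly one level below its parent. */
    ProcessNode addChild(ProcessNode child) {
        if (child.level.ordinal() != this.level.ordinal() + 1) {
            throw new IllegalArgumentException(child.name + " (" + child.level
                    + ") cannot be nested directly under " + name + " (" + level + ")");
        }
        children.add(child);
        return child;
    }
}

public class ProcessHierarchyDemo {
    public static void main(String[] args) {
        ProcessNode valueStream = new ProcessNode("Accounting to Reporting", ProcessLevel.VALUE_STREAM);
        ProcessNode payable = valueStream.addChild(
                new ProcessNode("Accounts Payable", ProcessLevel.MAJOR_PROCESS));
        payable.addChild(new ProcessNode("Invoice Verification", ProcessLevel.MINOR_PROCESS));
        System.out.println(valueStream.name + " has " + valueStream.children.size() + " major process(es)");
    }
}
```

In practice a modeling tool's semantic checks would enforce such rules; the point here is only to show how hierarchy terms and granularity constraints fit together.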

September 16, 2010

Process Modeling Series III: Composite Business Process Modeling Framework - components of enterprise wide process modeling initiative

Process modeling at an enterprise-wide level is nowadays a major initiative in many organizations. There are plenty of case studies on the APQC website (refer: www.apqc.org ) in which enterprises describe their journey towards business process management and enterprise-wide process modeling - the Boeing and Coors case studies in particular were very useful. The Process Classification Framework (PCF) from APQC is a very good starting point for any organization as a reference model for structuring and understanding its business processes (refer: http://www.apqc.org/pcf ). There are industry-specific variants of the PCF - banking, automotive, broadcasting and so on - and each of these reference models lists the high-level business processes starting from the value stream (or, at times, the primary and support business processes) and drills down through two or more levels (up to the activity level); enterprises can use them as a head start. Many process consultants also bring reference models of this kind to consulting engagements to ensure that all aspects of an industry's processes are captured. Other reference models, such as the SAP Reference Model and the TOGAF Business Architecture Content Framework/Metamodel, are also helpful.

Reference models aside, the program or initiative an enterprise undertakes for process modeling should itself have a framework, to ensure it is well structured and can be taken to a wide audience of stakeholders across the enterprise. A framework is essentially a way of organizing things for easier interpretation. For example, when we enter a home we immediately recognize the dining area, the reception area, the hall, the kitchen, the washrooms and the bedrooms; and once in the kitchen, we know where to find the fridge, the burner, the utensils and the cooking supplies. This mental framework helps us distinguish the components and, in turn, appreciate the "home" as a whole. Similarly, a process modeling framework structures the components of the initiative and the tasks within them, making it easier for stakeholders to understand and contribute. One could also think of it as a framework for setting up and sustaining a process modeling center of excellence. Please refer to the excellent article on a BPM Center of Excellence framework by Rosemann et al. on the BPTrends website (http://www.bptrends.com/publicationfiles/FOUR%2009-09-ART-Framework%20for%20BPM%20Ctr%20Excellence-Jesus%20et%20al.pdf ) - a classic example of a "framework" for BPM.

Coming now to the process modeling framework itself: in my view and experience there are seven major components that a process modeling initiative should contain, and I detail them below. I call this the Composite Process Modeling Framework for enterprise-wide process modeling:

Process-Modeling_Blog-III.jpg

 

1.    Motivation - why the enterprise has opted for an organization-wide process initiative and what it wishes to achieve. The motivation part of the framework includes the following aspects:

a.    Vision - the ultimate goal of the process modeling exercise and what the enterprise wishes to achieve in the long run

b.    Mission - the short-term goal the enterprise aims to address using process modeling, so that it can win over stakeholders with some early achievements of importance

c.    Objectives - the objectives of the process modeling exercise and the factors that determine the success of the program; these in turn help define the key performance indicators for the initiative

d.    KPIs - performance indicators for the program; define what counts as success for the process modeling exercise and the indicators that measure it.

2.    Governance - governance is about two major things: a) what decisions are to be made, and b) who will make them. One thing needs to be clear here: process modeling governance is different from process governance itself. Process modeling governance ensures that structures, patterns and policies are in place for what, where, why, who, when and how process modeling will happen. The representation of processes as models has to be useful to stakeholders, and different stakeholders may have different requirements; governance ensures that all these end requirements are met and that things stay structured. The governance part of the framework includes the following aspects:

a.    Governance framework - a framework defining how to classify decisions based on their impact on the program; based on the impact, listing the stakeholders responsible for the decisions and outcomes; and, based on the decisions, defining how they get implemented. This governance framework has to be adhered to strictly, and templates should be defined to explain the sequence and logic by which decisions are taken. Note that the governance framework is always evolving and should be revisited for changes at regular intervals.

b.    Maturity framework - a framework for classifying the state of process modeling across the enterprise. This is about creating a universal five-stage setup that lets key stakeholders review at regular intervals exactly where the program stands at a given time. The process modeling maturity stages might be the usual ones for any maturity framework - initial, repeatable, defined, managed and optimized (though I don't know why maturity models always have five stages!!). All these stages have to be defined with the enterprise's current scenario in mind.

c.    Operating model - this is very important for the success of the overall program and is crucial to define as part of the governance aspect of the framework. The operating model is about how the entire program is run, which roles are involved and what their responsibilities are. Roles and responsibilities need to be very clear in process modeling initiatives, because the work is all about collecting information and presenting it in a commonly understandable manner; who will model, how often they will model and how efficient the modeling environment will be must be spelled out. There are multiple operating model options to consider - a centralized team catering to all business functions/units; business units looking after and modeling their own processes with some training; whether process modeling can be effectively offshored; and the quality level expected of the models - all of these factors help decide the operating model.

d.    Alignment with other enterprise initiatives - the process modeling effort should align with other organization-level initiatives such as enterprise architecture, business process management, Six Sigma, lean and any other quality assurance programs. Governance should cover how process modeling is kept in line with the objectives of these other major initiatives.

3.    Modeling & Architecture Definition - this is the main part of the entire initiative: the "modeling" of business processes. Modeling at enterprise scale has its own challenges and calls for various approaches and methodologies to address the variety of stakeholder requirements. The aspects of modeling and architecture include:

a.    Process Modeling Methodology - a standard, up-to-date process modeling methodology is the first and most important thing. Various tools apply in any modeling environment, and we will talk about tools in a subsequent section, but a methodology is about what information is to be gathered, how that information is represented using a modeling approach (EPC, flowcharting, the Catalyst approach, Petri nets and so on) and the rules by which business processes are modeled. The rules include the semantics for process modeling and things like the meta-model definition, the hierarchy definition, the various model types to be represented (process model, organizational model, location model, KPI model etc.) and how to relate process information effectively. Depending on the level of information needed and available, process modelers and analysts then capture business processes in a common language, with the help of the defined methodology, and represent them as models for analysis. Beyond the information requirements, the methodology must also be flexible enough to accommodate the various end goals of process model analysis, including knowledge management, simulation analysis and system requirements.

b.    Architecture Definition - the architecture blueprint definition is one of the key aspects of enterprise-level process modeling. A top-down approach ensures that the higher-level business processes, including value streams, are represented as frozen levels; the subsequent levels of the hierarchy are then modeled as needs and information availability dictate. Without a proper classification of the enterprise process architecture blueprint, it becomes very difficult to hook process models of lower-level granularity in and out. Freezing at least the top 3 to 4 levels of the hierarchy also ensures that stakeholders can relate to process flows across the various functional and organizational boundaries of the enterprise. Please refer to my previous blog detailing how to define a process architecture blueprint: http://www.infosysblogs.com/setlabs/2009/11/process_architecture_blueprint_1.html#more

c.    Process Modeling Quality - ensuring process modeling quality through a structured approach is very important when modeling at an enterprise level. This includes enforcing the adopted methodology so that the important things are taken care of, leading to effective analysis of the process models. Basic hygiene checks include using proper verbs for activity names (verb standards), using consistent role definitions and swim-lane roles across models (role standards), using the standard hierarchical definition of processes - major processes, minor processes, activities and tasks (hierarchy standards), and using standards for representing process associations (association standards). When these quality assurance aspects are followed effectively, process models stay live longer and support better knowledge management; a minimal sketch of such a check follows below.
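As an illustration only (most modeling suites ship their own semantic-check capabilities), a minimal verb-standard and role-standard check might look like the sketch below; the approved verb and role lists are hypothetical and would normally be maintained centrally as part of the methodology.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Minimal model-quality checks for verb and role standards (lists are hypothetical). */
public class ModelQualityChecker {

    private static final Set<String> APPROVED_VERBS =
            Set.of("Create", "Validate", "Approve", "Record", "Send");
    private static final Set<String> APPROVED_ROLES =
            Set.of("Accounts Clerk", "Finance Manager", "Auditor");

    /** Returns the list of standard violations for one activity in one swim lane. */
    public static List<String> check(String activityName, String swimLaneRole) {
        List<String> findings = new ArrayList<>();
        String verb = activityName.trim().split("\\s+")[0];
        if (!APPROVED_VERBS.contains(verb)) {
            findings.add("Verb standard violated: '" + verb + "' is not an approved verb");
        }
        if (!APPROVED_ROLES.contains(swimLaneRole)) {
            findings.add("Role standard violated: '" + swimLaneRole + "' is not a defined role");
        }
        return findings;
    }

    public static void main(String[] args) {
        System.out.println(check("Checking invoice", "Accounts Clerk"));   // verb violation
        System.out.println(check("Validate invoice", "Finance Manager"));  // clean
    }
}
```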

4.    Library Management - process model library management is an important part of the framework; it is the mechanism that makes the process models available to the various stakeholders in whatever formats apply. The process model library is the repository or warehouse that stores process models in a structured way so that all of them are easily available to the right stakeholders - there are statistics suggesting that the number of process models/diagrams in an enterprise can range from a few hundred up to around 3,000. The usual mechanisms include HTML repositories, portals, team spaces, Word, PPT, Excel or PDF documents and on-demand reports. The following aspects are part of library management (a small interface sketch follows the list):

a.    List of process models - hierarchical as well as alphabetical listings

b.    Process Modeling Glossary

c.    List of models other than process models

d.    Search, View, Comment & Permission features
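Purely as a sketch of the kind of contract a process model library could expose (the operation names below are hypothetical, not those of any particular tool):

```java
import java.util.List;
import java.util.Optional;

/** Hypothetical contract for a process model library; operation names are illustrative only. */
interface ProcessModelLibrary {
    List<String> listModels(String ordering);                    // "hierarchical" or "alphabetical"
    List<String> listNonProcessModels();                         // organizational, KPI, location models etc.
    Optional<String> lookupGlossaryTerm(String term);            // process modeling glossary
    List<String> search(String keyword);                         // free-text search across the repository
    boolean canView(String userId, String modelId);              // permission check before rendering
    void addComment(String userId, String modelId, String text); // review comments against a model
}
```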

5.    Tool Administration - another critical part of enterprise-wide process modeling. There are various tools available in the market, and the tool is a crucial vehicle for modeling as well as analyzing business processes. A standard operating procedure for tool administration is essential for the success of enterprise process modeling. The critical aspects under this topic include:

a.    Tool Availability and Access

b.    Management of Users and Stakeholders

c.    Ensuring tool operations and import/export of data in the required formats

6.    Stakeholder Management - this part of the framework is the people-management part of the exercise; without people no information is gathered, and without information there is no success. Stakeholders for such a large exercise range from top management to the enterprise architect community, BPM stakeholders and the employees executing the processes in real life. Securing stakeholder buy-in for the program, ensuring stakeholder availability and ensuring stakeholder needs are met are essential for a successful process modeling exercise. There is a fair amount of literature on stakeholder management for large BPM exercises and on the issues that arise from poor stakeholder management; TOGAF 9 includes a dedicated chapter on stakeholder management, which shows the importance of this part of the framework. The critical aspects under this topic include:

a.    Buy-in from various stakeholder communities for the program

b.    Communication and engagement of stakeholders

c.    Stakeholder time and effort management

7.    Training - training enterprise employees is last but not least in the framework; proper training on the methodology concepts and on tool administration noticeably improves results. In fact, the number of training hours delivered to employees is an important KPI. Training needs vary from enterprise to enterprise depending on stakeholder interest and requirements.

This framework is generic in nature; depending on the nature of the initiative, enterprises can add further aspects to it.

September 8, 2010

Process Modeling Series II: Business Process - Sea of Glossaries/Terms; where does Process Modeling stand?

Welcome to the playground of business processes in modern organizations and enterprises!! So many terms, so many glossaries!! Even the best process consultants and gurus get stumped by the new terms used in this discipline. Basic terms of importance include business process management (BPM), business process modeling (BPm) and business process improvement (BPI); other terms leading the race include business process reengineering (BPR, as coined by management guru Michael Hammer), business process standardization, business process harmonization, business process simulation and business process monitoring. There are multiple other techniques and methods closely related to business processes, including process hierarchy definition, process granularity definition, business process design, business process identification, activity-based costing analysis, business process redesign, business process optimization and business process outsourcing. All of these terms have meanings and applicability - but my quest is to define a mental classification framework for them, so that a business process analyst or consultant can relate to their definitions and keep them close to heart. A thing that is close to the heart is never forgotten, while a thing that is only in the mind often is. Beyond these terms there are numerous techniques and methods applicable to each of them - for example, business process modeling in turn includes multiple modeling approaches, namely event-driven process chains, the Unified Modeling Language, the Catalyst approach, Petri nets and so on. So my quest is to define things at a higher level: a classification framework of business process terms that matter at a consulting level rather than an operational level.

In my view there are three important "groups of terms" that form the universal set of process terms: 1) Business Process Management, 2) Business Process Modeling and 3) Business Process Improvement. Let us take a closer look at the definitions of these three terms:

1)    Business Process Management - the way business processes are organized and managed so that they effectively provide competitive advantage in terms of cost, quality, time or flexibility, enabling enterprises to fulfill the needs of their customers through products and services. Business process management is not limited to IT-enabled workflow management or the automation of business processes, though it certainly includes them. BPM, then, is the overall management of business processes - manual as well as automated - to ensure that value is delivered to customers. There are exceptions, and BPM does not cover the whole gamut of business: strategy formulation, business policy decision making and the creation of business functions, for example, are usually outside its realm. The operational aspects of business - the action items, tasks and activities that bring value to customers - are what need to be managed, and BPM is the name for managing them. By this definition, I include business process modeling as well as business process improvement as part of BPM. BPM in turn involves the business motivation for improving processes, governance of processes, and the monitoring and alignment of processes with business strategy.

 

Process-Modeling_Blog-II.gif 

2)    Business Process Modeling - as explained in the previous blog, business process modeling is multi-dimensional: it is the visual representation of a process in a common language that the various stakeholders involved can easily interpret. It usually employs whatever modeling approach suits the end use of the models. Modeling is the first step towards business process analysis, and analysis is in turn the first step towards business process improvement. So modeling involves gathering information about and around a business process so that the process can be analyzed.

3)    Business Process Improvement - improvement is the universal growth mantra: everything has to be improved and can be improved. Business processes ensure that products and services are produced and delivered as the customer needs them, and are available when needed (at optimum cost, quality, time and flexibility). But processes fail while bringing products and services to customers, and these breaks erode the value that enterprises intend to provide. Business process analysis helps define the improvement path an enterprise can take. There are well-known improvement cultures such as Six Sigma, lean manufacturing and quality improvement programs. Process analysis involves understanding the existing processes and the associated improvement or waste-reduction opportunities that can improve the cost, quality, time or flexibility of bringing the product or service to the customer. Various techniques are involved - to name a few, as-is process analysis using the five whys, value stream analysis, activity-based cost analysis, simulation analysis and business process reengineering. Programs such as process harmonization, standardization, optimization and automation are widely used business process improvement programs.

We shall discuss each of these three groupings, and the concepts within them, in more detail in subsequent blogs.

September 7, 2010

Process Modeling Series I: Process Modeling - Art and Science for understanding business

In this series of blogs I will concentrate predominantly on process modeling - business process modeling services, tools, methodology and governance, as well as the applications of business process modeling. A number of researchers, academics and practitioners are passionate about process modeling, and there are plenty of insights to be gathered from the research already conducted (the famous line about "standing on the shoulders of giants!"); we shall try to dissect and understand this existing research as well.

Modeling means something different from "drawing". Drawing is two-dimensional: it communicates by representing details diagrammatically, and it serves two basic purposes - 1) representing things visually so that stakeholders can understand them (a visual representation is easier to grasp than written communication) and 2) making complex information easier to communicate. Modeling is multi-dimensional: beyond the two-dimensional diagrammatic representation, we gather and enrich the information with other relevant data that is part of executing the process, or that is essential for viewing the overall business scenario. Sometimes we try to compress multiple dimensions and information sets into a two-dimensional map - but thanks to the technology and tools now available, we can easily collect, relate and assimilate information about business processes.

Business process modeling (BPm) is often used loosely to mean business process mapping. In my opinion, mapping is more a representation of finite detail: it may or may not feed analysis of the process (complex analysis such as statistical or simulation analysis), and it serves the specific purpose of representing a process to communicate what is happening at present. The moment one gathers the associated business resources (the data, systems, materials, roles and so on involved) and takes the further step of analyzing the process, mapping becomes a less apt word and modeling a closer one. So we need to be aware of the interchangeable usage of the words mapping and modeling. I have not come across a very clear definition of these terms; if I do, I shall share it in this series.

Process modeling has been around for a long time, changing its name according to the situations and solutions expected of it - task analysis, flowcharting, activity analysis, value analysis, time-and-motion analysis and so on. Its applicability has grown wider: process modeling is now used for knowledge management, business intelligence, system development, product development, enterprise architecture and other emerging business functions. So process modeling has lived long and should keep growing - but how structured it will be, and how seriously organizational roles take modeling the processes they are part of, are questions worth pondering. We still need good survey results on process modeling usage, even though there are plenty of surveys on business process management (BPM).

Before we get into the huge glossary of terms associated with business processes (business process management, business process modeling, business process analysis, business process improvement, business process simulation, business process optimization, business process identification, business process design, business process monitoring etc.), let's get clear on why it is important to model and understand business processes:

·         There are products/services from businesses on one side and customers on the other; process is the mechanism that bridges the gap so that products and services reach customers. There are various ways an enterprise organizes itself, structurally as well as virtually into functions, to bridge this gap. Note too that the bridge has to be efficient in terms of cost, quality, time and flexibility in order to outsmart the competition.

·         So processes are important to know; there are gaps in this bridging, and processes can fail badly at both ends of the bridge. It is essential for a business to build this bridge effectively so that it develops its competitive priorities.

·         An understanding of business processes is nothing short of a necessity for enterprises to build their bridges (and build them architecturally efficient).

 

Process-Modeling_Blog-I.gif 

August 17, 2010

Database Scaling Methods for SaaS based Multi-tenant Applications

Scalability is one of the key requirements of SaaS-based applications, as they have to support users and data belonging to multiple tenants. They should also be able to scale to meet future requirements as the SaaS provider provisions more tenants.

 

SaaS providers are inclined to adopt a shared database, shared schema strategy to support multiple tenants because of its cost effectiveness. Adopting this approach, however, brings one major challenge around database scaling, since the database is shared among all the tenants supported by the SaaS application.

 

SaaS applications adopting the shared database, shared schema approach should be designed on the assumption that they will need to be scaled once they can no longer meet baseline performance metrics - for example, when too many users access the database concurrently, or the database grows so large that queries and updates take too long to execute. One way to scale out a shared database is database sharding. It is the most effective way to scale out, because rows in the shared schema are already differentiated per tenant by a tenant ID, so the database can easily be partitioned horizontally on that ID. This makes it easy to move each tenant's data to its own partition. Database sharding brings many advantages, such as faster reads and writes, improved search response, smaller table sizes and distribution of tables based on need.
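As a minimal sketch of the idea, assuming a simple hash of the tenant ID onto a fixed set of shards (a per-tenant lookup directory is an equally common alternative, and the JDBC URLs below are purely illustrative):

```java
/** Sketch of tenant-ID based shard routing for a shared-schema, multi-tenant database. */
public class TenantShardRouter {

    private final String[] shardJdbcUrls;   // one entry per physical shard; URLs are illustrative

    public TenantShardRouter(String[] shardJdbcUrls) {
        this.shardJdbcUrls = shardJdbcUrls;
    }

    /** Every row carries a tenant ID, so the tenant ID alone is enough to pick the shard. */
    public String shardFor(int tenantId) {
        int index = Math.floorMod(tenantId, shardJdbcUrls.length);
        return shardJdbcUrls[index];
    }

    public static void main(String[] args) {
        TenantShardRouter router = new TenantShardRouter(new String[] {
                "jdbc:postgresql://shard0.example.com/app",
                "jdbc:postgresql://shard1.example.com/app"
        });
        System.out.println("Tenant 42 -> " + router.shardFor(42));
    }
}
```

A directory-based mapping has the added benefit that a particularly large tenant can be given a dedicated shard without rehashing everyone else.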

 

While partitioning the data of a multi-tenant SaaS application, however, we need to consider factors such as performance degradation from a growing number of concurrent users, or database growth from the provisioning of additional tenants, both of which can affect the performance characteristics of the existing database. Weighing these factors helps in selecting the appropriate partitioning technique, based on the database size requirement of each tenant or the number of each tenant's users accessing the database concurrently.

 

August 13, 2010

Blunders in Performance life cycle within SDLC

Performance is embedded into the various stages of the SDLC, typically subject to the perceptions of architects, developers and deployment teams. Though "in principle" everybody is aware of the importance of performance, the actual implementation of the "ideal practices" depends on the time and expertise available. In practice it swings from strict implementation without weighing the criticality, to sheer neglect, to postponement owing to constraints at various stages. Below are some points to consider while making these "trade-offs" across the development-to-deployment cycle. Most of these points may appear "obvious" - yet they are invariably the ones that slip through!

(i)            Over-Planning in design phase

Extensive preparation and excessive attention to performance aspects in the planning and design phases often turn out to be less productive than hoped. During this phase one needs to balance the effort spent against its proportional impact on performance - very likely a large amount of time will have an infinitesimal effect on actual performance, since everything is still "on paper".

Over-planning typically leads to over-confidence: when you are sure you have not left any hole in the design phase, it becomes hard to figure out where to start when a performance issue occurs. In projects managed with a waterfall approach, thorough designs integrated with modeling tools foster the belief that the design is bullet-proof. This perception then spreads from architects and project managers to the developers and QA teams, and it turns into a case of missing the forest for the trees.

(ii)          Under-planning for performance troubleshooting in the development stage

It is imperative to use a logging API to help locate performance degradations. Ensure that the logging points follow the application flow and lead to the right trails, rather than scattering clutter across the code execution. Logging APIs invariably include a check on logging levels. While this gives developers the freedom to include logging statements generously during development, the purpose and placement of those statements should not be restricted to "functional" aspects only, which is typically the case. Every call to an external or downstream system must have an appropriate log statement. Any internal algorithm (a single method or a group of methods) that is likely to take longer than a few milliseconds should log at its beginning, at its end and around any significant calls made during its execution. Most logging APIs can be configured so that log entries include the class and a timestamp - so there is no need to create timers to quantify the length of a call.
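A minimal sketch of what such boundary logging might look like with SLF4J (any logging API with level checks works the same way; the PaymentClient class and its downstream call are made up for illustration):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative boundary logging around a downstream call; timestamps and class names
    come from the logging configuration, so no explicit timers are needed. */
public class PaymentClient {

    private static final Logger LOG = LoggerFactory.getLogger(PaymentClient.class);

    public String authorize(String orderId) {
        LOG.debug("authorize() start, orderId={}", orderId);            // beginning of the unit of work
        String result = callDownstreamGateway(orderId);                 // external/downstream system call
        LOG.debug("authorize() end, orderId={}, result={}", orderId, result);
        return result;
    }

    private String callDownstreamGateway(String orderId) {
        return "APPROVED";   // placeholder for the real gateway call
    }
}
```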

All exceptions must be logged, and logged irrespective of the logging level. This imposes a "restriction" on coding: exceptions should be used only for exceptional conditions! An exception should not be used as a return value for an outcome that can be anticipated - in other words, a catch block should never contain business logic. When it does, an unnecessary amount of time has to be spent tracking down performance issues.
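Continuing the SLF4J sketch, the fragment below illustrates the rule: anticipated outcomes are ordinary return values, and the catch block only logs (always, at ERROR) and rethrows, with no business logic. The StockChecker class and its inventory lookup are hypothetical.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Hypothetical example: anticipated outcomes are return values; the catch block only logs and rethrows. */
public class StockChecker {

    private static final Logger LOG = LoggerFactory.getLogger(StockChecker.class);

    public boolean isInStock(String sku) {
        try {
            return queryInventory(sku) > 0;   // "not in stock" is a normal return value, not an exception
        } catch (RuntimeException e) {
            LOG.error("Inventory lookup failed for sku={}", sku, e);   // logged regardless of configured level
            throw e;                          // no business logic here; just surface the failure
        }
    }

    private int queryInventory(String sku) {
        return 5;   // placeholder for the real lookup
    }
}
```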

(iii)         Passing the ball across when performance issues crop up post deployment

It is human to believe that everything one does is so well done that the problem must lie somewhere else! Make sure that when problems surface, the investigation starts with an evaluation of the area you are associated with. Simply passing the ball from one court to the other does not lead to a solution, especially when multiple sub-teams or streams are involved. Projects typically have one team working on the web application and other teams developing the service-layer APIs, and whichever team discovers a performance issue will invariably contact the other and demand they fix it. What helps instead is estimating the cause by going through your own area first and making sure the problem really is "elsewhere". If it is a simple fix, it takes far less time to fix it yourself than to pass it off to someone else; for complex issues, working cohesively leads to a productive solution.

July 19, 2010

Workload Modeling of SaaS based Multi-tenant Applications

One of the technical challenges of a SaaS-based multi-tenant application is ensuring that it addresses the performance requirements of all the tenants accessing it.

 

Major performance problems of web applications, including SaaS-based multi-tenant web applications, can only be corrected by recreating the production scenario in a controlled environment and arriving at a solution through performance testing and analysis. The key to this approach lies in accurately identifying, per tenant, parameters such as hits per second, response time per request, number of concurrent users and think time by mining the web server access logs; these parameters help recreate the production workload for load testing.

 

One of the key challenges is identifying these tenant-specific parameters from the centralized log files maintained by the web server of the multi-tenant SaaS application. With log file analysis, information not normally collected by the web server can only be recorded by modifying the URL. As long as the URL of the multi-tenant SaaS application contains a tenant identifier, we can track the above parameters per tenant. But many SaaS providers use a session mechanism to track the tenant instead of appending a tenant identifier to each URL.

 

One approach to this challenge is to capture the user name logged by the web server for each request and use the user-to-tenant mapping data maintained by the SaaS application to figure out which tenant the user belongs to. In this way we can categorize the requests into tenant-specific groups. Once requests are grouped per tenant, we can mine the data to arrive at tenant-specific parameters such as hits per second, response time per request, number of concurrent users and think time.
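A minimal sketch of this grouping step, assuming the web server writes the Common Log Format (where the authenticated user name is the third field) and that a user-to-tenant mapping has been exported from the application:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch: grouping access log lines per tenant via a user-to-tenant mapping. */
public class TenantLogGrouper {

    private final Map<String, String> userToTenant;   // exported from the SaaS application's user store

    public TenantLogGrouper(Map<String, String> userToTenant) {
        this.userToTenant = userToTenant;
    }

    /** Assumes Common Log Format, where the authenticated user name is the third space-separated field. */
    public Map<String, List<String>> groupByTenant(List<String> accessLogLines) {
        Map<String, List<String>> byTenant = new HashMap<>();
        for (String line : accessLogLines) {
            String user = line.split(" ")[2];
            String tenant = userToTenant.getOrDefault(user, "UNKNOWN");
            byTenant.computeIfAbsent(tenant, t -> new ArrayList<>()).add(line);
        }
        return byTenant;
    }
}
```

Once the requests are grouped per tenant, hits per second, response times and concurrency can be computed per group.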

 

This approach fails, however, where one user belongs to multiple tenants. Another approach is to use a page tagging technique to obtain tenant-specific parameters such as hits per second, response time per request, number of concurrent users and think time. Are there any other approaches to address this challenge?

 

 

January 22, 2010

Can BPMS expedite Application Development?

A BPMS brings the capability to model, execute and monitor processes, carrying with it the promise of flexibility, workflow automation and process management. It is finding increasing use in organizations at different levels, be it for better management of their processes or simply for integrating different applications.

 

Continue reading "Can BPMS expedite Application Development?" »

November 29, 2009

Business Collaboration - Is it a Business Function or Business Capability?

I am new to "technology-enabled collaboration in an enterprise" as a subject. Here I offer a perspective on traditional, competitive business and on how people, processes and tools/technologies have so far come together to achieve the enterprise vision.

Continue reading "Business Collaboration - Is it a Business Function or Business Capability?" »

November 23, 2009

Gartner EA Magic Quadrant 2009......what's in store.....

The latest Gartner Enterprise Architecture Magic Quadrant was released on Nov 12th. There are good signs for EA despite the economic downturn - EA tool vendors are reported to have achieved revenue growth of greater than 20% in 2008 as well as growth throughout 2009. Among tool vendor M&A, the most closely watched deal, Software AG's acquisition of IDS Scheer, is expected to be completed in 2009. EA tools are essential both to the survival of EA as a concept and to coping with the need to organize enterprise information in a structured manner - EA is a complex, mammoth documentation and business reporting effort.

 

Continue reading "Gartner EA Magic Quadrant 2009......what's in store....." »

November 19, 2009

Business Architecture Series: Who is a business architect?

Some time ago I wrote in another forum on the hot topic of 'Who is a business architect (BA)?' - what underlying ability should a consultant have in order to qualify as a business architect? These are the options I listed:

 

Continue reading "Business Architecture Series: Who is a business architect?" »

November 10, 2009

Business Architecture Series: Define Business Architecture in 140 words

I was reading through the book Re-imagine! by Tom Peters. In the foreword, he explains, short and sweet, what an enterprise is. I quote him here: "Enterprise at its best is……..an emotional, vital, innovative, joyful, creative, entrepreneurial endeavor that elicits maximum concerted human potential in pursuit of Excellence and the wholehearted provision of services to others". An amazing abstraction of what an enterprise is.

Continue reading "Business Architecture Series: Define Business Architecture in 140 words" »

November 7, 2009

Does your business strategy manifest in your processes?

There is always the million-dollar question of how to link enterprise business strategy to operational processes so that the strategy gets executed effectively. Enterprises adopt various mechanisms and programs to institutionalize strategy. Strategy is like an invisible enigma: it drives the enterprise, but it often does not manifest itself visibly. Enterprises these days try to socialize their strategy through strategy statements, but that is often not enough - certainly no better than making strategy visible by linking its elements to process execution. If such a mechanism can be built and consensus reached on it, business decision makers will have better evidence of strategy execution and can make agile course corrections.

Continue reading "Does your business strategy manifests in your processes?" »

November 5, 2009

Process Architecture Blueprint Definition - See the forest in order to see the trees......

Organizations struggle to define their process architecture blueprint, which represents the organization's portfolio of business processes. Process architecture definition is a crucial phase in the development of a BPM as well as an enterprise business architecture solution. Enterprises often take a piecemeal approach to the blueprint and jump directly into modeling task-level business processes without being able to link them effectively to the end-to-end value stream of the business. A structured approach to defining the process architecture blueprint - what it is, why it is needed, how to build it and how quickly it can be built - is essential for an enterprise's BPM or business architecture journey. The success of business process management teams depends on this crucial phase, and here I attempt to provide a framework to help enterprises embark on their process journey.

Continue reading "Process Architecture Blueprint Definition - See the forest inorder to see the trees......" »

October 21, 2009

SETLabs and Infosys at BPM 2009 Conference

BPM 2009 is the foremost academic conference on BPM today. Most of the participants and presenters were from universities, with a small presence of product vendors such as BizAgi and Signavio. Most of the keynotes were delivered by gurus of business process management such as Prof. Scheer and John Hoogland. Among the academics there were a few participants from industrial research labs, such as IBM Research and SETLabs of Infosys.

Continue reading "SETLabs and Infosys at BPM 2009 Conference" »

Process mining at the BPM 2009 Conference

Recently I attended BPM 2009, the foremost academic BPM conference today, held at Ulm in southern Germany. The conference was attended by researchers from universities across the globe. In addition, there was participation from the research labs of organizations such as IBM, and two of us attended from SETLabs at Infosys. The keynote speakers included Prof. August-Wilhelm Scheer, founder of IDS Scheer, and John Hoogland, CEO of Pallas Athena.

Continue reading "Process mining at the BPM 2009 Conference" »

October 13, 2009

SETLabs and Infosys participation in SOPOSE workshop AT Services Computing 2009 conference

Recently I chaired the SOPOSE 2009 workshop, the fourth international workshop on Service and Process Oriented Software Engineering (http://www.dsl.uow.edu.au/sopose/index.php ), co-located with SCC 2009. The workshop aimed to address software engineering issues related to constructing service-oriented architectures and business process management.

Continue reading "SETLabs and Infosys participation in SOPOSE workshop AT Services Computing 2009 conference" »

Humanworkflows in BPM - Post 5

Continuing from my earlier post, the second type of human workflow pattern in task execution is termed the asynchronous task.

* Asynchronous task with inbox: a human participant interacts with a long-running business process where the subsequent task might be time-consuming or require another human participant's involvement, e.g. any approval process. In this mode of interaction, a human participant executes a task and subsequently receives the next set of tasks via an inbox/task list.

While modeling a business process, it should be possible to define the task flow mode for individual human activities, i.e. whether the subsequent UI is provisioned immediately or provisioned later via the inbox. Also, if a task is initially declared synchronous but the human participant does not work on it immediately, it should be moved to an inbox (i.e. treated as an asynchronous task after some given duration), as sketched below.
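Real BPM suites expose this as product-specific configuration, but purely as a sketch of the rule, the hypothetical Java fragment below shows a task flow mode that is demoted from synchronous to asynchronous (inbox) delivery once a given idle duration has elapsed:

```java
import java.time.Duration;
import java.time.Instant;

/** Hypothetical task flow modes for a human activity in a BPM model. */
enum TaskFlowMode { SYNCHRONOUS, ASYNCHRONOUS }

class HumanTask {
    final String name;
    TaskFlowMode mode;
    final Instant offeredAt = Instant.now();

    HumanTask(String name, TaskFlowMode mode) {
        this.name = name;
        this.mode = mode;
    }

    /** If a synchronous task has not been picked up within the threshold, route it to the inbox. */
    void demoteIfIdle(Duration threshold) {
        if (mode == TaskFlowMode.SYNCHRONOUS
                && Duration.between(offeredAt, Instant.now()).compareTo(threshold) > 0) {
            mode = TaskFlowMode.ASYNCHRONOUS;   // now delivered via the inbox / task list
        }
    }
}
```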

Asynchronous task flows are frequently used to handle most real-life human workflows involving multiple human users, long-running processes and intermediate steps that run as scheduled or batch tasks.

Krishnendu Kunti

Krishnendu_kunti@infosys.com

October 12, 2009

Human Workflows in BPM - Post 4

The second set of human workflows can be categorized as Task execution.

Task execution covers all interactions where a human participant works with a BPM system to execute a task; the first type of task execution is the synchronous task flow:

 * Synchronous task flow: a human participant interacts with a process through a synchronous user interface (for example a web-based UI), where the user submits a request and the system immediately provisions the subsequent screen. This type of interaction is required when the same human participant interacts with the process and performs a set of consecutive tasks, e.g. creating an application for opening a bank account. It is similar to interacting with a portal, except that page provisioning and page flow are managed by a BPM engine in the background. In the traditional way of building synchronous workflows, i.e. web applications, the application is responsible for state and data maintenance; in synchronous workflows implemented with a BPM engine, the engine takes care of data and state maintenance.
 
 The following are the advantages of implementing synchronous workflows using BPM engines:
 
 - The ability to easily create or alter such workflows, and a common platform to rapidly create and deploy any synchronous workflow either standalone or in the context of another application (e.g. an account creation workflow in a core banking solution)
 
 - Business users can create workflow applications without knowing anything about creation of page flows, data maintenance, session maintenance etc.

October 5, 2009

Human Workflows in BPM - Post 3

The first set of human workflows can be categorized as process-level tasks.

Process-level tasks are used by BPM system administrators and users either to manage process life cycle events or to get visibility into the process.
 
 * Process deployment and undeployment: a human with the required credentials interacts with a process engine to deploy or undeploy a process.

 * Process staged deployment: a human participant is given authority to deploy a process in a BPM engine, but the process is not available for execution until it has been approved by another human participant or a set of human participants.

 * Process administration: a human participant interacts with a process instance or a set of process instances to manage run-time administration tasks such as initiating a process instance, monitoring process instances, terminating a process instance and altering scheduling parameters.

 * Process visibility: a human participant (administrator or business user) interacts with a process to get information on the following:

  - The overall execution status of a process, i.e. the number of pending and executed activities and their state.

  - All pending activities across multiple process instances for a human participant, and their dependent activities allocated to other human participants.

  - All pending activities for a human participant and their impact on the process SLA.

  - All activities allocated to subordinate roles and their execution status.

 Though this set of activities may not constitute a business interaction pattern, these functions are nonetheless often requested as part of the functional specifications in BPM projects.

 

September 30, 2009

Human Workflows in BPM - Post 2

Workflow patterns represent recurring requirements in workflows, where a workflow might include interactions involving both system and human participants. Patterns in workflows involving only systems are structured in nature [http://www.workflowpatterns.com/patterns/index.php ], whereas human workflow patterns are unstructured. There is good literature on system workflow patterns, but these patterns are not enough to capture human interaction in real-life workflows.

In my subsequent posts I will identify and group an exhaustive list of human workflow patterns in real-life business processes, grouped by similarity of objective across workflows. The identified patterns can be used in a number of ways, including determining platform-centric implementations, evaluating products for human workflow support and creating a catalogue of applicable human workflow patterns, to name a few. The patterns are derived from real-life process requirements, primarily from the financial services industry, where as much as 70% of workflows involve human participation [http://www.bpminstitute.org/research/research/article/research-brief-bpm-and-banking.html].

September 23, 2009

Human Workflows in BPM

In an enterprise, a significant part of the business processes are supported by human workflows. Human workflows involve interactions among human participants and information systems. Often these workflows have a fair degree of complexity, as they must support long-running interactions spanning more than one human participant and multiple information systems. With growing maturity in the BPM space there is an ever-increasing need to support collaborative development, deployment, execution, governance and run-time behavior modification of workflows involving human participants. In this blog I will be posting about patterns in human workflows and the direction of their evolution. Readers are welcome to enrich this body of knowledge by providing insights from their learnings.

Thank you

-Krishnendu Kunti

krishnendu_kunti@infosys.com 


 

August 27, 2009

SETLabs Participation in AMCIS 2009

The 15th Americas Conference on Information Systems (AMCIS) 2009 was recently held in San Francisco, themed 'The Golden Gate to the Future of IS (Information Systems)'. As far as I know, SETLabs was one of the few industry R&D groups that participated in this conference. We participated at two different levels.

A paper titled "Key Performance Indicators Framework - A Method to Track Business Objectives, Link Business Strategy to Processes and Detail Importance of Key Performance Indicators in Enterprise Business Architecture" was presented by Eswar Ganesan (Senior Associate Consultant). The paper was co-authored by Eswar and Ramesh Paturi (Senior Consultant) from the InFlux team.

From the SETLabs Web 2.0 Research Lab, Dr. Jai Ganesh and I chaired three mini-tracks at the conference. These were:

  1. Web 2.0 and Collaborative Value Creation
  2. Business Impact of Virtual Worlds and Web 2.0
  3. Web Accessibility - Challenges, Regulation and Reality

The overall conference was replete with a variety of parallel tracks, ranging from Design Theory, Analytical Modeling and Simulation, Decision Support Systems, Diffusion of IT, eBusiness and eCommerce and Enterprise Systems all the way to Social Issues of IT.

Continue reading "SETLabs Participation in AMCIS 2009" »

August 5, 2009

BPM and Compliance

Compliance is a most important and critical aspect of any business. One of the challenges is adherence to multiple standards, and BPM-based solutions can take care of compliance with them.

Continue reading "BPM and Compliance" »

June 23, 2009

Process Mining - Existing Methods and Challenges

Process mining has been gaining a lot of attention from business analysts lately and is considered a big breakthrough in the business process management paradigm. Process mining has been a research topic for a long time, and I wonder what has made it popular all of a sudden. Is it because some companies have packaged process mining as commercial tools and consultants' aids? Academics have been active in the field of process mining for quite some time and have created some useful open source tools as well.

Continue reading "Process Mining - Existing Methods and Challenges" »

June 15, 2009

Can BPM support you in your Compliance Challenge?

A BPM-based solution enables the design, analysis, optimization and automation of business processes. BPM separates process logic and rules from the execution engines, manages relationships between individuals and applications, and monitors performance.
One of the major challenges organizations face today is the growing requirement to manage compliance with various standards and frameworks. These compliance standards include both regulatory (SOX, HIPAA) and non-regulatory (COBIT, CMMI, ISO) standards. Regulatory compliance is a must: non-compliance with regulatory standards exposes the company and its senior leadership to huge liabilities. Compliance with non-regulatory standards is more strategic in nature and provides distinct advantages in accomplishing the organization's objectives.

Continue reading "Can BPM support you in your Compliance Challenge?" »