Infrastructure Services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead of the game? Read on to learn more on our Infra Matters blog.


August 7, 2015

Driving down IT and Business risk from unsecured endpoints

In the Cloud and Bring Your Own Device (CBYOD) age, securing endpoints outside the corporate network is just as important as securing those inside it. This is where Secure-Ops comes into play, i.e. the integration of secure practices within regular service operations.

 

In the past I have written about how to deal with privileged IT endpoints. Practicing sound IT Risk Management will again lead one to look at compensating controls, which is what this post deals with.

 

Consistent processes drive effective controls. Change management is unique in that it is both a process and a control. The 10 questions for enterprise change will open up key areas within IT Service Management for further analysis, and they will complement evolving trust models for effective governance.

 

The 2015 Annual GRC conference is a collaboration between the Institute of Internal Auditors (IIA) and the Information Systems Audit and Control Association (ISACA).

 

The conference is being held from Aug 17th to 19th in Phoenix, AZ and will be a great forum to learn more about emerging trends in IT compliance and controls.

 

I'm equally excited to have the opportunity to speak at my session on Aug 17th, 'Attesting IT Assets and key Configuration items as a pre-audit measure: The why and the how'.

 

More in my next post.

October 22, 2014

Developing operational controls through an ACM practice

 

In my last entry I talked about the need to have a sound Asset and Configuration Management (ACM) practice as the foundation for an effective Cyber Security strategy. So what does this start to look like? As simple as it may sound, designing, setting up and managing an ACM practice is actually a complex endeavor.

Why? ACM faces multiple ongoing and evolving challenges. Here are a few:

- Proliferation of IT devices and form factors, both fixed and mobile

- Product vendors running varied licensing models for software products

- Multiple asset "owners": almost every operational entity has an interest in the device, e.g. Audit, Access Control, Information Security, Network Operations, Change Management and Facilities

- Focus on one-time 'catch-up' inventory efforts vs. an ongoing accounting- and reconciliation-based systems approach

- Multi-sourced operational vendors building their own ACM silos for contractual and service level needs, which makes it hard to see a single picture across the organization

- Emphasis on asset depreciation and cost amortization, resulting in a 'we don't care, as long as finance has it depreciated on the books' view

Will going to the cloud make all these challenges go away? Or, even better, will the cloud make the need for an ACM practice go away? Hardly! Just ask IT Security, or better still, the external auditor. As ACM evolves within major Fortune 500 organizations, so will the need for cloud vendors to support customers' ACM efforts through sound management, accurate reporting and alerting.

So what does an organization need? Below is an attempt to list the key components that comprise an effective ACM practice:

- Discovery capabilities for internal environments

- Service provider discovery feeds for outsourced environments

- Any other manual feeds, e.g. data from a facilities walkthrough

- Direct asset input/output system feeds from procurement and asset disposal

- Automated standardization engine

- System for device normalization and accurate identification

- Reconciliation rules for comparing overlapping feeds and for comparing auto-discovery against the feeds (a short sketch follows this list)

- A dedicated Asset Management database (AMDB): asset information for a distinct set of stakeholders (procurement, IT planning and finance, DC facilities, asset receiving and asset disposal)

- A dedicated Configuration Management database (CMDB) tracking asset and attribute relationships for the requirements of specific stakeholders (change management, release management, information security, software license management, incident and problem management, capacity management, application support, enterprise architecture)

- Automated business service modeling tool

- Asset analytics platform for standard and advanced reporting

- Integration with the change management module

- Release management module integration

- Business-as-usual processes and governance mechanisms
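To make the reconciliation-rules component above concrete, here is a minimal Python sketch, assuming two hypothetical feeds (auto-discovery and procurement) keyed by serial number; the field names and sample records are illustrative only, not a specific AMDB/CMDB schema.

```python
# Minimal sketch of reconciliation between overlapping asset feeds.
# Feed names and fields (serial_number, hostname, source) are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class AssetRecord:
    serial_number: str
    hostname: str
    source: str  # e.g. "discovery", "procurement", "facilities"

def reconcile(feeds):
    """Index every record by serial number and flag mismatches."""
    by_serial = {}
    for record in feeds:
        by_serial.setdefault(record.serial_number, []).append(record)

    exceptions = []
    for serial, records in by_serial.items():
        sources = {r.source for r in records}
        hostnames = {r.hostname for r in records}
        if len(sources) == 1:
            # Seen by only one feed: either a ghost asset or a coverage gap.
            exceptions.append((serial, f"only reported by {sources.pop()}"))
        elif len(hostnames) > 1:
            # Same device, conflicting attributes: needs normalization.
            exceptions.append((serial, f"hostname conflict: {sorted(hostnames)}"))
    return exceptions

feeds = [
    AssetRecord("SN-001", "web01", "discovery"),
    AssetRecord("SN-001", "web01", "procurement"),
    AssetRecord("SN-002", "db01", "procurement"),    # never discovered
    AssetRecord("SN-003", "app01", "discovery"),
    AssetRecord("SN-003", "app-01", "procurement"),  # attribute mismatch
]
for serial, reason in reconcile(feeds):
    print(serial, "->", reason)
```

In a real practice the exception list would feed the governance mechanisms listed above rather than being printed to the console.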

Bringing these components together requires dedicated investment, time and resources, but when done, it dramatically improves the overall level of control that the organization has over its IT investments. Let's explore how that is achieved in my next note.

September 29, 2014

The foundation for effective IT Security Management

Of late, the news on the IT Security front has been dominated by mega hacks. Retailers in particular have taken the brunt of the bad press, with a large US home improvement company the latest to admit to being compromised. The cyber criminals in all these cases took away credit card data belonging to retail customers. This in turn has resulted in a chain reaction where Financial Services firms are battling the growth of credit card fraud. The resulting bad press and loss of reputation and trust have affected the companies and their businesses.

The tools and exploits in these attacks were new; however, the overall pattern is not. Cyber criminals have a vested interest in finding new ways to penetrate the enterprise, and that is really not going to go away anytime soon. What enterprises can do is lower the risk of such events happening. That seems like a simple enough view, but in reality the implementation is complex. Reactive responses to security breaches involve investigations, in collaboration with law enforcement, into the nature of the breach, the source, the type of exploit used, locations, devices, third party access, etc. But that alone does not address the issue of enterprise risk.

Yes, a comprehensive approach is required. Many pieces of Enterprise Security have to come together to work as a cohesive force to reduce the risk of future attacks. These components include Security Awareness and Training, Access Control, Application Security, Boundary Defense and Incident Response, amongst others. But effective IT Security Management is incomplete without addressing one vital element. As an enterprise, the understanding of 'what we own', 'in what state', 'where' and 'by whom' is often lost between the discussions and practices of penetration testing, discovery and audit.

These 4 elements, coupled with a fifth, 'management' on a 24x7 basis, typically sit in an area outside IT Security. They sit within IT Service Management (ITSM), in Asset & Configuration Management (ACM). The foundation for effective IT Security begins with strong collaboration and technology integration with the ACM practice. Without a capable ACM presence, IT Security Management is left to answer these questions by itself.

So why have enterprises ignored ACM or settled for a weak ACM practice? Over the last decade, there have been several reasons: technological, structural and business related. From a technology standpoint, the available solutions had partial answers, long implementation times and were not seen as robust enough. From a structural standpoint, the focus within ITSM was on 'Services', with Incident Management taking the lion's share of the budget and attention. From a business standpoint, multi-sourcing has played a huge role in the compartmentalization of the enterprise. Rightly so, service providers' focus is on the achievement of service levels and watching what they are contracted to do, and no more.

I would also argue that effective ACM is a key pillar of effective IT governance. The ability to know exactly what areas are being governed and how, from a non-strategic view, also depends on a sound ACM practice. Again, in a software-centric world there is no application software without effective Software Configuration Management (SCM) and tools like Git and Subversion. So ignoring ACM undermines the very functionality and availability of the software.

But our focus is on IT Security, so where does one start? Depending on the state of the ACM practice in the enterprise, there may be a need to fund this central function, expand its scope and bring greater emphasis on tools, technology and people. More in my next blog.

January 21, 2014

Hybrid ITSM: Evolution and Challenges

(Posted by Manu Singh)

When you compare an ITSM solution based on public cloud with an on-premise solution, there is no way to determine which one is superior. Although public cloud based ITSM solutions provide on-demand self-service, flexibility at a reduced cost is not the only factor that should be considered while choosing deployment options.

Customization has always been a major issue while deploying a cloud based ITSM solution. While every organization has its own way of handling incident, problem, change and release management, it is the business needs that determine how the IT service management solution is deployed. Cloud based ITSM solutions can be inflexible at times - a kind of one-size-fits-all proposition. Any change or customization will go through testing across the entire user base for all the clients, which leads to unnecessary delay in deploying the required functionality. In some cases, a release may not be implemented at all if a few users do not approve of the change.

In other words, using a standard application across multiple users gives limited options for changes in configuration. Organizations may face a risk as requirements continue to change as dictated by a dynamic business environment. Limited options to change configuration settings may not be the best solution in such a scenario.

Another reason organizations are unlikely to stick with a cloud-only solution is that it gets expensive as the years go by. Analysts have also predicted that SaaS based ITSM tools may not be the preferred option, as the amount of effort invested in implementing, integrating, operating and maintaining the tools would likely end up increasing actual costs rather than reducing them.

But this does not mean that the cloud based ITSM model is likely to vanish. It will still be a good bet for organizations that have limited IT skills on-site and are only looking for standardization of their processes without much customization and dynamic integration requirements.
It stands to reason that organizations would prefer to have both options - i.e. a cloud-based ITSM offering that can be rapidly deployed and a premise-based one which would support on-going customization and dynamic integration.

Hybrid ITSM combines the best of both worlds, i.e. public and on-premise/private clouds. It focuses on increasing scalability, dependability and efficiency by merging shared public resources with private dedicated resources.
However, implementing a hybrid model is not as easy as it seems, as it comes with its own set of challenges, some of which are listed below:

  • Management and visibility of resources that fall outside the scope of managed services
  • Ensuring the consistency of changes implemented between the on-premise and the cloud service provider
  • Supporting open tool access with consistency in the data / look and feel
  • Managing shared network between the on-premise data center and the cloud service provider
  • Seamless integration between on-premise and cloud infrastructure in order to share workload at peak times

Looking at the above challenges, it is clear that organizations need to do a thorough due diligence to identify:

  • Data and network security requirement (data encryption, firewall etc.)
  • Tool usage requirement, storage requirement (on-premise or cloud)
  • Robust change management in order to track and coordinate changes between the two environments
  • Fail-safe infrastructure set up so that the availability of the tool is not hampered
  • A robust asset and configuration management practice to track the assets within business control and their dependencies on assets in the public cloud
  • A framework defining governance, statutory and support requirements

Ideally, organizations need to follow an approach that incorporates the aforesaid requirements early on during the discovery and design phase.
My next post will cover the implementation approach for Hybrid ITSM along with the mitigation strategies for some of the common challenges.

November 7, 2013

DevOps: Befriending the alien

We are living in interesting times where organizations are re-thinking their commitment to value. There is more focus on the question: "Are we doing this right and creating value?"

 

Globally, production cycle times are getting shorter and time to market for any product is expected to be "quick". There is a concerted focus on automation, which has given birth to relatively new jargon such as DevOps, Orchestration, etc. Despite decades of process and policy re-engineering, we run the risk of missing out on success without automation systems to support us.


So what are we trying to bring into our lives?


DevOps is an interesting practice that has silently invaded the IT professional's sphere of influence - almost like the friendly neighborhood alien made famous by Steven Spielberg.

Let us figure out for a minute what DevOps is and why it makes a difference to us here.


DevOps is associated with a collaborative set of practices that 'influences' IT development, testing and operations/service management staff to join hands and deliver high quality IT services/applications to end-users 'more frequently and consistently'.


Based on my recent experiences with clients and prospects, I can say that nearly every global IT organization has heard about it and is interested in exploring this 'jargon' (I call it jargon since there is no official framework authoritative enough to set a global definition and a prescription for implementing DevOps).

 

I have a rather radical observation and interpretation here. Innovations in technology, mainly in the Cloud, Mobility and Social space, have taken a big lead compared to people-practice maturity levels. There are expectations now to roll out frequent application/service releases - in some cases, a daily release.


This has resulted in the need for more "people centric" development methodologies that sometimes require radical shifts in organizational philosophy. For example, how many of us have actually seen application development and IT operations members sitting together in the same room, working daily to roll out regular releases?

 

Over the next couple of years, the debate about how much local collaboration is required to make DevOps a realistic goal is likely to continue. Again, technology has moved ahead of the game here, as can be seen among the DevOps tool vendors, who aggressively claim to make this happen.


There are tools in the market that claim to automate the entire software delivery, provided the developer writes the code (often using re-usable components), the tester has uploaded the test cases at the same time the code is written, and the environment/infrastructure is readily available on demand. In a nutshell, you can check the code into a repository and go home; the rest is taken care of by the systems to make the new release available to the end-users.
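As a toy illustration of that "check in and go home" flow, here is a minimal Python sketch of a pipeline runner; the stage names and commands are assumptions for illustration, not any particular vendor's DevOps tool.

```python
# Toy sketch of an automated delivery pipeline: each stage must succeed
# before the next one runs. Stage names and commands are illustrative.

import subprocess

PIPELINE = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("test", ["python", "-m", "pytest", "-q"]),
    ("package", ["python", "-m", "build"]),
    # A real deploy stage would call an infrastructure API to provision
    # the on-demand environment and roll the release out.
    ("deploy", ["echo", "deploying release to staging"]),
]

def run_pipeline():
    for stage, command in PIPELINE:
        print(f"--- {stage} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # A failed stage stops the release instead of letting a
            # broken build reach the end-users.
            print(f"{stage} failed; release aborted")
            return False
    print("release promoted to end-users")
    return True

if __name__ == "__main__":
    run_pipeline()
```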

But reality is different. It will not be so easy to adopt this disciplined practice - it is like applying moral science lessons in a chemistry lab!


The success of this concept hinges on the level of understanding of all the stakeholders involved and befriending this concept - however alien it may sound. (In the end, even the scientist ended up liking ET!)

October 28, 2013

Traditional Data versus Machine Data: A closer look

(Posted on behalf of Pranit Prakash)

You have probably heard this one a lot - Google's search engine processes approximately 20 petabytes (1 PB=1000 TB) of data per day and Facebook scans 105 terabytes (1 TB=1000 GB) of data every 30 minutes.
Predictably, very little of this data can fit into the rows and columns of conventional databases, given its unstructured nature and volume. Data of this complexity and scale is commonly referred to as Big Data.

The question then arises - how is this type of data different from system generated data? What happens when we compare system generated data - logs, syslogs and the like - with Big Data?

We all understand that conventional data warehouses are ones where data is stored in the form of table based structures, and useful business insights can be provided on this data by employing a relational business intelligence (BI) tool. However, analysis of Big Data is not possible using conventional tools owing to the sheer volume and complexity of the data sets.
Machine or system generated data refers to the data generated from IT Operations and from infrastructure components such as server logs, syslogs, APIs, applications, firewalls etc. This data also requires special analytics tools to provide smart insights related to infrastructure uptime, performance, threats and vulnerabilities, usage patterns etc.

So how does system data differ from Big Data or traditional data sets?
1. Format: Traditional data is stored in the form of rows and columns in a relational database, whereas system data is stored as loosely structured or even unstructured text. Big Data remains highly unstructured and may contain raw data that is generally not categorized, but is partitioned in order to be indexed and stored.
2. Indexing: In traditional data sets, each record is identified by a key which is also used as the index. In machine data, each record has a unique time-stamp that is used for indexing (see the sketch after this list), unlike Big Data, where there are no fixed criteria for indexing.
3. Query type: In traditional data analysis, there are pre-defined questions and searches based on a structured query language. In system or machine data, there is a wide variety of queries, mostly based on source type, logs and time-stamps, while in Big Data there is no limit to the number of queries; it depends on how the data is configured.
4. Tools: Typical SQL and relational database tools are used to handle traditional data sets. For machine data, there are specialized log collection and analysis tools such as Splunk, Sumo Logic and eMite, which install an agent/forwarder on devices to collect data from IT applications and devices and then apply statistical algorithms to process this data. In Big Data, there are several categories of tools, ranging from storage and batch processing (such as Hadoop) to aggregation and access (such as NoSQL) to processing and analytics (such as MapReduce).
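To illustrate points 1 and 2 above, here is a small Python sketch that parses a loosely structured, syslog-style line and uses its time-stamp as the index; the log format and field names are assumed for illustration.

```python
# Machine data arrives as loosely structured text; the time-stamp, not a
# relational key, becomes the natural index. Format shown is assumed.

import re
from datetime import datetime

LOG_LINE = "2013-10-28T09:15:02Z host42 sshd[311]: Failed password for admin"

PATTERN = re.compile(
    r"(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<proc>[^:]+):\s+(?P<message>.*)"
)

def parse(line):
    match = PATTERN.match(line)
    if not match:
        return None  # unstructured lines are kept raw, not discarded
    record = match.groupdict()
    # The event time-stamp becomes the index for later queries.
    record["ts"] = datetime.strptime(record["ts"], "%Y-%m-%dT%H:%M:%SZ")
    return record

print(parse(LOG_LINE))
```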

When a user logs in to a social networking site, details such as name, age and other attributes entered by the user get stored in the form of structured data and constitute traditional data - i.e. stored in the form of neat tables. On the other hand, data that is generated automatically during a user transaction, such as the time stamp of a login, constitutes system or machine data. This data is amorphous and cannot be modified by end users.

While analysis of some of the obvious attributes - name, age etc. gives an insight into consumer patterns as evidenced by BI and Big Data analysis, system data can also yield information at the infrastructure level. For instance, server log data from internet sites is commonly analyzed by web masters to identify peak browsing hours, heat maps and the like. The same can be done for an application server as well.
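Continuing that web-server example, here is a minimal sketch of deriving peak browsing hours from access-log time-stamps; the sample timestamps are made up for illustration.

```python
# Count requests per hour from access-log time-stamps to find the peak
# browsing hour. The timestamps below are made-up sample data.

from collections import Counter
from datetime import datetime

access_times = [
    "2013-10-28 09:15:02", "2013-10-28 09:47:10",
    "2013-10-28 14:02:33", "2013-10-28 14:21:05", "2013-10-28 14:58:44",
    "2013-10-28 20:12:09",
]

hits_per_hour = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour for ts in access_times
)

peak_hour, peak_hits = hits_per_hour.most_common(1)[0]
print(f"peak browsing hour: {peak_hour}:00 with {peak_hits} requests")
```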

 

October 22, 2013

The Lean ITIL connection

(Posted on behalf of Manu Singh)

While trying to improve IT operations, the application of ITIL best practices alone does not necessarily guarantee effectiveness and efficiency in IT processes. ITIL, in fact, recognizes this, and for that reason the ITIL v3 framework defines a service lifecycle stage - Continual Service Improvement (CSI) - intended to measure and improve processes and services over time. However, the 7-step improvement process defined in CSI is perhaps too focused on improvements as opposed to reducing wasted effort.
A significant amount of effort is wasted while performing routine tasks and activities. So, any activity that does not focus on delivering value to the customer is a potential waste and needs to be removed or at least reduced.

And this is where Lean comes in.

Lean principles were originally developed to improve quality and reduce costs in manufacturing. But, over time, Lean principles have been used in the services industry as well.  Lean thinking has now evolved towards improving quality, eliminating waste, reducing lead times for implementations and, ultimately, reducing costs.

So, how do Lean principles complement IT service management?

Let me give you an example: IT organizations around the globe follow the same practices, i.e. detailing client requirements, designing the solution and getting it approved. At the next stage, they build the solution, take it live and support it. In a way, all the ITSM processes are followed; however, the extent to which these processes are detailed will depend on many factors such as the size of the organization, support requirements, geographic spread (for incorporating business rules for different countries) etc. Some of these processes may include wasteful effort that does not really add any value.

Lean helps in identifying  'fit for purpose' ITSM processes i.e. identifying the right fit based on organization requirements and removing those activities that are irrelevant for the business or which create unnecessary overheads. In this way, the correlation of Lean and ITSM principles can be seen as a natural progression towards delivering value in IT services - while Lean focuses on waste reduction in alignment to client requirements, ITSM focuses on delivering services that meet client expectations.

The best approach towards embarking on a Lean ITSM journey is to first identify what the business (internal and external stakeholders) perceives as Value and Non Value adds and then defining a "To-Be" value stream which will act as a baseline for the future improvement journey.  This "To-Be" value stream would take inputs from the corporate business strategy along with current and future business requirements.

Another important aspect is to define the change management and roll-out strategy so that the new/improved processes make sense to the process stakeholders. For this, organizations would need to focus on incremental process roll-outs by bundling them in a logical manner and involve all stakeholders in solution design, so as to reduce resistance to change, since everyone has the opportunity to contribute to the definition of the solution.

Over a period of time, the incorporation of Lean principles in IT service management has evolved towards improving support efficiency, accelerating issue management and reducing costs through better allocation and utilization of support staff and budget funds.
In the current market scenario, where IT spending is expected to slow significantly, it makes even more sense to apply Lean to gain cost advantages.

(Manu Singh is a Senior Consultant with the Service Transformation practice at Infosys. He has more than 8 years of experience in the industry and is focused on Service Transition, Program Management, IT service management, Process design and gap analysis.)

September 26, 2013

7 steps to a smarter IT Front End

 

We often praise one particular front end as compared to another. The world of graphical user interfaces has moved from PCs to Macs to smartphones. But quite often the IT department ignores the 'Front End' that the modern user expects from IT. Most are fixated on the Service Desk as that all-empowering front end. Even ITIL has prescriptive definitions. One can argue that this is not at all the case, especially from an end user perspective.

 

We often hear complaints of IT being slow, ineffective or behind on support commitments. Though there may be some truth to this, much of it has to do with ignoring perceptions that have built up over time in users' minds. So what is that 'Front end'? I would define it as a cohesive combination of resources, Service Desk response times, average speed of resolution, an automated Service Catalog and a comprehensive knowledge base.

 

So how does an organization build up that smart IT front end? Here are 7 steps to get going:

 

1) Handle all actionable Service Requests through a single service catalog - Basically, 100% of service requests should go centrally into one service catalog. Insist that the service does not exist if it does not exist on the service catalog! Obviously this requires a major change to sunset all kinds of tools and manual services, but consolidating on one clean interface is worth the time and effort.

2) Support the Service Catalog through an automated back end - All actionable service requests should flow through an automated back end, working their way through approvals, procurement, provisioning and fulfillment (see the sketch after this list). Automating all of this is the ideal and the holy grail, but make the move towards that goal and measure progress. Again, shoot for 100% of back-end processes and you will reach a high mark. Examples: new user accounts, requesting a development environment, licenses, adding application access, etc.

3) Enable Problem to Incident (P2I) conversions - Resolving a problem is not the end of the day. Confirming that Level 1 teams understand what to do if the incident rears up again is a must. Consistently enforcing this policy of P2I connection and conversion will work wonders over a defined duration, resulting in more incidents resolved faster and more efficiently at Level 1 itself.

4) 100% self-service for user-induced incidents - Set up a self-service gateway to manage all such common incidents. This will dramatically improve speed of response. Examples include account lockouts, password changes and resets, information/document uploads, profile changes, etc.

5) Set up and maintain a corporate wiki - Information discovery and ease of information consumption should play a key role in the roadmap of the IT front end. Too often we see a lack of information on how-tos, problems finding the right document, and obsolescence. An annual check on all key documents, along with the users' ability to edit and update them, will foster a sense of shared ownership within the user community. Enable access through all devices, especially smartphones. Experts will bubble up to the top and become allies of IT.

6) 100% of software installs by end users - Through the self-service capability and service catalog automation, enable users to receive a temporary download link to software they are allowed to install. In the long run, diminish the need for this install capability through adoption of Software as a Service and/or internal web applications, e.g. Office 365, SharePoint Online and Lync.

7) Periodic user engagement - IT often gets flak for not being there when it matters or simply not being around. Enabling user feedback, technology awareness sessions and periodic formal internal training can go a long way in bringing IT closer to the business community.
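As a minimal sketch of the automated back end in step 2, the snippet below walks a hypothetical service request through approval, procurement, provisioning and fulfillment; the stage names and catalog item are illustrative assumptions, not a specific catalog product.

```python
# A service request moving through an automated back end, stage by stage,
# with an audit trail. Stages and the sample request are illustrative.

REQUEST_STAGES = ("submitted", "approved", "procured", "provisioned", "fulfilled")

def advance(request):
    """Move a request to the next stage, recording an audit trail."""
    current = REQUEST_STAGES.index(request["stage"])
    if current + 1 < len(REQUEST_STAGES):
        request["stage"] = REQUEST_STAGES[current + 1]
        request["history"].append(request["stage"])
    return request

request = {
    "item": "development environment",   # hypothetical catalog item
    "requester": "jdoe",
    "stage": "submitted",
    "history": ["submitted"],
}

while request["stage"] != "fulfilled":
    advance(request)

print(request["history"])
# ['submitted', 'approved', 'procured', 'provisioned', 'fulfilled']
```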

 

The organization of tomorrow requires a smart technology front end. Transforming from now to then requires investment of time, effort and resources. These steps can get you started, and there may be more. If you have a take on additional steps, do write in.

September 24, 2013

Service Redundancy - 5 Lessons from the Nirvanix blowout

 

Earlier last week, Nirvanix, one of the most prominent cloud storage providers, gave a shocker to the tech community when it announced that it would be going out of business. Customers and partners were asked to stop replicating data to its storage infrastructure immediately and move out their data in about 2 weeks. I have been fascinated by this story, and here are the facts.

- Nirvanix pulled in about $70 Mn in venture funding during its lifetime, starting in September 2007

- Its key backers kept up 5 rounds of funding right up to May 2012

- It was rated well by industry analysts and media

- The cloud storage service was sold through several enterprise software resellers and service providers

- The service pulled in several key customers and was challenging the likes of Amazon's AWS S3 storage service

 

What is evident is that the company was burning cash faster than it was generating revenue, and it all came to an abrupt end when it could not find a buyer or execute an exit strategy. One would have thought that enough value (IP or otherwise) would have been generated for a potential buyer in 6 years of existence, but no more detail seems available. Nirvanix is by no means the first to go belly up: EMC pulled the plug on its Atmos Online service in 2010, though that was perceived as far smaller in impact.

 

From the enterprise standpoint, if an organization had been using their services, these 2 weeks are a time to scramble. Moving data out of the cloud is one tall order. The second issue is to find a new home for the data. And what if the data was being used in real time as the back end of some application? More trouble and pain. So here's my take on the top 5 areas clients and providers can address for Service Redundancy (leaning more towards cloud storage services):

 

1) Architect multiple paths into the cloud - Have a redundant storage path into the cloud, i.e. host data within two clouds at once. This also depends on the app that is using the service, geography and users, a primary/secondary configuration, communication links and costs. For example, a client could have an architecture where the primary cloud storage was on Nirvanix and the secondary on AWS. Throw in the established value from traditional in-house options and established disaster recovery providers.

2) Be prepared to move data at short notice - Based on the bandwidth available from source to target and the size of the data in question, we can easily compute how much time it would take to move data out of a cloud (see the sketch after this list). Add a factor of 50% efficiency (which could happen due to everyone trying to move data out at once) plus frequent testing, and we have a realistic estimate of how long data migration will take. Given that the 2 weeks from Nirvanix is a new benchmark, clients may choose to use this as a measure of how much data to store in one cloud; i.e. if it takes more than 2 weeks to move important data, then consider adding communications costs for better links or including a new provider in the mix.

3) Consume through a regular service provider - Utilize the service through a regular managed services provider. This has two benefits for clients: a) the ability to enter into an enterprise-type contract with the provider and ensure service levels, and b) the ability to gain financially in case of a breach of service levels. Of course, service providers in turn have to be vigilant and ensure that they have proper flow-down provisions in their contracts with cloud providers, and that there is an alternative to the service in case of issues.

4) Establish a periodic service review process - Often we see the case of buy once and then forget. A regular review of the service will often provide early warning of issues to come. For example, in the case of a cloud storage provider, tracking how much storage they are adding (new storage growth %) and the new clients signed on will give a good indication of any issues on the capacity front. This may in turn point to a lack of available investment for growth.

5) Understand provider financials - Cloud providers today focus on ease of use and new gee-whiz features; this often masks how exactly they are doing financially. And the market continues to value them on revenue growth. It does not matter whether the company is public or private: as a client and under confidentiality, there exists a right to understand the financial performance and product roadmap, even if it is at a high level.
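The arithmetic behind point 2 can be sketched in a few lines; the data volume, link speed and the 50% efficiency factor below are illustrative assumptions.

```python
# Rough estimate of how long it takes to move data out of a cloud,
# given a link speed and an efficiency factor. Numbers are illustrative.

def migration_days(data_tb, link_mbps, efficiency=0.5):
    """Rough time to move data_tb terabytes over a link_mbps link."""
    data_bits = data_tb * 8 * 10**12          # TB -> bits (decimal units)
    effective_bps = link_mbps * 10**6 * efficiency
    return data_bits / effective_bps / 86400  # seconds -> days

# Example: 100 TB over a 1 Gbps link at 50% efficiency.
print(round(migration_days(100, 1000), 1), "days")
# ~18.5 days - already beyond a 2-week exit window.
```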

 

Cloud solutions offer multiple benefits, but at the end of the day, they still serve another business as a service, and there are issues with all services - cloud based or traditional. Was Nirvanix an outlier or a warning of things to come? We don't know that yet, but as service providers and clients, this may be the 'heads-up' we need to stay focused on the essentials for effective transformation of Infrastructure and Storage Services.

July 11, 2013

Of Service Level Agreements and Non-Production environments

(Posted on behalf of Pradeep Kumar Lingamallu)

In one of my earlier engagements, I was asked to review the service level agreement (SLA) of a service provider for a financial organization. Despite the said organization having a large non-production environment, the existing SLA did not have any service levels or priorities listed for this environment. A long round of negotiations ensured that due importance was given to the non-production environment in the SLA.

This is what happens most of the time - when a service level agreement (SLA) is drafted, all the focus is on production support, resolution of production issues and availability of the production environment. Neither the non-production environment nor support for this environment is documented properly in the SLA. This may be largely due to a lack of recognition of the importance of the non-production environment, especially in the case of new application roll-outs and training.


A typical non-production environment consists of a development environment, a test environment and a training environment. Each one plays a crucial role during new deployments and rollouts of applications. However, incidents in the non-production environments are generally not given the priority that they deserve. Just as any incident in the development/testing environment will have a critical impact on the release calendar, any incident in the training environment will have a severe impact on the scheduled training. Application deployments are affected by both the release and the training of personnel. Delays in either one of these environments are bound to have an impact on the release schedule.

I have seen this happen in a number of engagements that I have been part of. In one incident, the test servers were down and testing activities were delayed. As a result the entire release calendar was affected - in this case, it was a major release, so you can imagine the impact on business.

In another case, a downtime in the training environment again resulted in a delay in the release since the personnel could not complete their training on schedule. This may appear to be a small incident from a provider perspective, but for the organization, it is a significant delay.

Any downtime in the non-production environment is bound to affect production as well - but this fact is generally ignored, at the buyer's peril. By specifying SLAs for support of non-production environments, organizations have an additional safeguard against any unplanned downtime that could affect the quality of service.

 

(Pradeep Kumar Lingamallu is a Senior Consultant at Infosys. He has over 18 years of experience in the field of IT service management including certifications in ITIL, CISA, CISM and PMP)


 

July 4, 2013

IT Operations Analytics - Tapping a goldmine of data within IT

(Posted on behalf of Pranit Prakash)

Enterprise IT faces a continuous challenge to maximize the availability of systems while optimizing the cost of operations. The need to maintain uptime while managing complex tiers of infrastructure forces organizations to spend a considerable amount of time and money identifying the root cause of failures leading to downtime.

According to Gartner's top 10 critical technology predictions for 2013, "for every 25% increase in functionality of a system, there is a 100% increase in complexity."

On the other hand, IT operations generate a massive amount of data every second of every day. As of 2012, about 2.5 exabytes of data were created every single day, and this number is said to double every 40 months.
This data remains mostly unstructured, in the form of log files, but can contain a wealth of information not only for infrastructure and operations but also in areas related to end-user experience, vendor performance and business capacity planning.

As organizations face increasing complexity in their IT operations, they are unable to decipher patterns that could actually help them improve current operations. This is because existing toolsets are largely confined to silos and are unable to analyze and consolidate details - for example, there are toolsets for IT service management, application performance management etc. that do not talk to each other. The result is a mass of unstructured data that, if analyzed in detail, could provide valuable actionable information.

And this is where IT Operations Analytics comes in.

IT Operations Analytics enables the enterprise to collect, index, parse and harness data from any source, such as system logs, network traffic, monitoring tools and custom applications. These data points are churned through a set of smart algorithms to produce meaningful insights for IT operations, leading to real-time trending, proactive monitoring that ensures higher uptime, and correlation across data sources.
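As a small sketch of that collect-parse-correlate idea, the snippet below pairs hypothetical application errors with a deployment event purely by time proximity; the feeds, field names and time window are assumptions for illustration.

```python
# Correlate events from two operational data sources by time proximity.
# The sample feeds and the 15-minute window are illustrative assumptions.

from datetime import datetime, timedelta

app_errors = [
    {"ts": datetime(2013, 7, 4, 10, 2), "msg": "HTTP 500 on /checkout"},
    {"ts": datetime(2013, 7, 4, 10, 3), "msg": "HTTP 500 on /checkout"},
]
change_events = [
    {"ts": datetime(2013, 7, 4, 10, 0), "msg": "payment-service release 4.2 deployed"},
]

def correlate(errors, changes, window_minutes=15):
    """Pair each error with any change that happened shortly before it."""
    window = timedelta(minutes=window_minutes)
    pairs = []
    for error in errors:
        for change in changes:
            if timedelta(0) <= error["ts"] - change["ts"] <= window:
                pairs.append((error["msg"], change["msg"]))
    return pairs

for error_msg, change_msg in correlate(app_errors, change_events):
    print(f"{error_msg}  <- possibly related to ->  {change_msg}")
```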

But IT Operations Analytics is not the same as traditional BI, which deals with structured data only. The real capability of this exciting field lies in two factors:

a. Comprehensiveness to include any amount of data, in any format, from anywhere, and
b. Capability to correlate the data to provide a centralized view at the finest level of detail

The possibilities of using analytics to gather actionable insights from configuration and transactional data are endless. In my next set of posts I will explore key use cases for operational analytics and how organizations can adopt this evolving technology trend to their advantage.

(Pranit is a Lead Consultant with Infosys with rich experience in IT infrastructure and consulting services. He is a certified CISA, a Certified DC Professional (CDCP) and a qualified BSI ISO 27001 Lead Auditor)

March 13, 2013

Achieving and sustaining Service Excellence

One of the most difficult challenges is identifying where the onus for driving Service Excellence lies. Should it be driven using a top-down approach or a bottom-up approach? In an IT service scenario, most of the service delivery governance or management teams are involved in 'fixing' operational defects and people management. Driving Service Excellence is largely reactive - it gets triggered only when something goes wrong.

The challenge increases manifold if the service provider embarks on the journey towards service excellence in a multi-vendor environment. The main challenge here is to align the path to Service Excellence with the client's organizational priorities and objectives. IT delivery teams that work as individual silos in the hope of achieving such objectives end up impairing the process of achieving Service Excellence itself. One has to be vigilant in identifying 'improvement opportunities' and carry out the necessary due diligence in understanding the impact. If the impact is widespread and positive for client business operations, then such desirable opportunities should be quickly exploited.

For a large program where Infosys embarked on the journey of achieving service excellence, the focus was on bringing transparency into service performance, and we emphasized the value delivered by explicitly linking the effort spent by IT delivery teams to business outcomes.

Our next set of posts will reveal more on where to focus and how to overcome challenges. Watch this space for more!

March 5, 2013

Service Excellence... What's that?

I have been contemplating writing something on the Service Excellence Office (SEO) for a while now. And finally, here it is!

At Infosys, we finished one of the most successful SEO implementations for one of our largest clients. As we went along this journey, there was tremendous learning over the last two years. From this experience, and from the queries that followed from different Infosys programs spanning industry verticals, I realized that the most logical way to de-codify SEO was to start by defining what we mean by "Service Excellence" in the first place!

Service excellence is about embedding predictability and standardization in the way we deliver services to our clients (be it one-time services or continual improvements). It is all about delighting our clients by giving them a feel of the "true value" that these services deliver.

Imagine your favorite mechanic or handyman - why trust him and not the others? It is all in predictability and quality of service delivered by him! Service excellence is all about setting and exceeding our own internal benchmarks - about persistently competing with ourselves to create a compelling service experience. In an IT setup, it is about enabling mechanisms to share best practices across various IT services and reaping the benefits together. It is about creating a healthy competition between different IT services working towards a common goal. 

Service excellence is neither Science nor Art. It is a mix of both. It is scientific in the sense that it brings in predictability through mechanisms to demonstrate measurable business value. The artistic aspect comes in with the innovation, creativity and passion presented by the teams.

In an IT organization's daily routine, there are multiple occasions where opportunities are available to increase client satisfaction, reduce TCO and demonstrate value from IT. It is here that the SEO works closely with the operations team to increase client satisfaction by driving process efficiencies, improving response and resolution statistics and remediating client pain points.

Further, the SEO supports TCO reduction by helping achieve cost savings through rationalization of teams, optimization of processes and optimum resource utilization by reducing overheads. Finally, the SEO is most useful in demonstrating IT value to the business; this is where the rubber meets the road. The SEO does this impeccably by identifying key improvement levers that positively impact efficiency and effectiveness, building agile measurement frameworks and communicating the benefits achieved to the business in a timely and impactful manner.

In the next post, we will gradually unravel the different types of challenges that come in the way of achieving Service Excellence and the kind of organizational focus that is required to meet them. Stay tuned!

October 12, 2012

Lean IT implementation for Service Improvements continue......

Continuing from my last post on Lean for IT Service improvement, I will now focus on implementation guidelines for these Lean principles.

Below are three tables that summarize how Lean principles can actively help organizations eliminate common types of 'wastes' in an IT scenario:

Continue reading "Lean IT implementation for Service Improvements continue......" »

October 1, 2012

IT Governance - a lot more than what meets the (I) eye

Imagine the best traffic system - wide roads, well-lit signals, separate lanes for each type of vehicular movement, pedestrian and bicycle path, etc. Wouldn't it seem to be the best place to drive? And doesn't it seem to be like one of the best traffic management systems too? Well, it is true.
 
But come to think of it, even if you have the best traffic management system, how would you ensure that folks keep to their lanes or that they do not jump signals? There would be CCTVs installed to monitor the functioning of this traffic management system. 

That is Governance.

 

Not only does this show the difference between Management and Governance, but also that even the best management practice needs governance in place. For more about Governance and Management, read on...
 

Continue reading "IT Governance - a lot more than what meets the (I) eye" »

August 28, 2012

Knowledge Management: Through a Consultant's Prism

Knowledge, as they say, leads up to wisdom. This sounds like an obvious transition at a spiritual level. For an organisation, however, 'knowledge' - if stored, managed, controlled and governed effectively - itself could prove more important to have than wisdom. Simply put, my view is that knowledge for today's organisations is a 'must have', and wisdom is a 'nice to have' entity...

Continue reading "Knowledge Management: Through a Consultant's Prism" »

June 20, 2012

Service Design for Cloud

The second session of the Special Interest Group (SIG) on "Cloud Service Management" from ITSMF Australia, Victoria Chapter, was conducted on 17th June 2012 at the Infosys Australia Docklands office. The topic of discussion was "Service Design for Cloud". What followed was a fantastic focused group discussion in which participants brought forward very innovative points on key aspects of Service Design for Cloud - on service catalogues, cloud services and service integration management. Here is an excerpt of a few interesting points discussed...

Continue reading "Service Design for Cloud" »

May 14, 2012

Infosys@ Knowledge12 Conference

Gumbo anyone?

This year's Knowledge12 conference, the annual user conference hosted by ServiceNow, is being held at the Hyatt Regency in New Orleans.

Continue reading "Infosys@ Knowledge12 Conference" »

March 28, 2012

What you (want to) 'Know' is what you (want to) see!

As experience always dictates, and as people with grey hair typically say, 'Knowledge is present everywhere. It is up to an individual to learn and understand.' But in today's world, knowledge is essential; knowledge is the solution to an issue, which in turn means that knowledge is customer satisfaction. The one who has more knowledge is better placed to excel. It could be as simple as possessing a Word document that everyone is searching for, or as complex as understanding a concept that others find difficult to grasp! To know more, read on.

Continue reading "What you (want to) 'Know' is what you (want to) see!" »

March 21, 2012

ITSMF Australia "Cloud Service Mgmt SIG" launched in Victoria

The ITSMF Australia SIG on "Cloud Service Management" was successfully launched on 14th March at the Infosys Docklands office in Melbourne. We had a great group of around 25 participants gathered to share their experiences on cloud service management. Topics discussed included various cloud service models (IaaS, PaaS, SaaS, BPaaS!!), their impact on Service Management functions such as financial management, compliance issues, and participants' experiences with cloud services, among others. What a session it was!

Continue reading "ITSMF Australia "Cloud Service Mgmt SIG" launched in Victoria" »

October 21, 2011

Infosys' day out at the annual itSMF AZ LIG summit

It was an awesome day today. To follow up on my previous post about our participation at the annual itSMF Arizona Local Interest Group summit, this was indeed the day for sharing best practices - a packed, day-long event filled with industry veterans and luminaries. 

Continue reading "Infosys' day out at the annual itSMF AZ LIG summit" »

September 28, 2011

PKI - Public Key Infrastructure (Part 1)

"Trust"

The word that defines the essence of business.

 

Some of you may be aware that even today it is general practice for diamond merchants in Antwerp to close deals worth millions just by a handshake and a verbal agreement, without any written documents or contracts. Millions in traditional business still run on both parties completing agreements over phone calls.

 

In the world of business, trust has played, and will always play, an essential part. In a typical business relationship, each party has a level of trust built upon face-to-face interactions, meetings, word of mouth, etc. But with the internet exploding, ecommerce becoming a major driver over the last 3 decades, and regular business activities moving to the cyber domain, there has been an ever increasing need to have business run over the internet. The end-to-end lifecycle of products and services is now transacted over the internet. B2B and B2C business, including marketing, sales and finance, is handled through the Internet.

 

Building and maintaining security, and particularly trust, over the Internet is always a challenge. How can an organization conduct business with a person or another organization in the cyber world when the confidentiality, integrity, authenticity and non-repudiation of the data cannot be guaranteed?

 

This is where Public Key Infrastructure comes into play: a trusted third party secures these security essentials for information and business transactions carried out over the Internet.

 

In this series of blogs, I will cover the basics of PKI, the components of PKI, and then the design, implementation/deployment and technology guidelines and considerations for PKI.

 

In this first blog, let me try to bring out the basics of PKI and its different components.

 

Basics of PKI:

Simply put, Public Key Infrastructure or PKI is not just a technology consisting of software and supporting hardware, but a framework covering people, processes, policies and services to ensure the confidentiality, integrity, authenticity and non-repudiation of electronic transactions using public key cryptographic technology.

 

Before we move further, it would be apt to understand how public key cryptography works, as it is the basis of PKI.

 

Public Key Cryptography -

 

Cryptography is a technique that uses keys to encrypt and decrypt data. Cryptography has been used for thousands of years and, in principle, uses a single secret (private) key to encrypt and decrypt the data. The challenge with this private key technique is the distribution of keys: the key essentially needs to be sent to the receiver beforehand or separately, and this exchange is always prone to compromise.

 

This challenge of key management was addressed in 1976 with the introduction of public key cryptography by Diffie and Hellman. This technique involves the use of a key pair - a public and a private key - which are mathematically related, but from which it is computationally infeasible to derive the private key given the public key. The private key is held secret by the user, and the public key is published and available to anyone who needs to communicate with the user. For example, if User A wants to send confidential data to User B, A encrypts it using the public key of User B, which is publicly available. This message can only be decrypted by User B, as B is the one who possesses the related private key. Similarly, integrity and non-repudiation can be addressed by using the key pair accordingly. This was a breakthrough technique that enabled secure communication over public networks, which was not feasible earlier.
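Here is a minimal sketch of the User A to User B exchange described above, using the Python 'cryptography' package; the message text is an illustrative assumption.

```python
# User B publishes the public key; only the matching private key decrypts.
# Requires the third-party "cryptography" package (pip install cryptography).

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# User B generates a key pair and shares only the public half.
private_key_b = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key_b = private_key_b.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# User A encrypts with User B's public key...
ciphertext = public_key_b.encrypt(b"confidential payment instruction", oaep)

# ...and only User B, holding the private key, can decrypt it.
plaintext = private_key_b.decrypt(ciphertext, oaep)
print(plaintext)
```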

 

This service of securing communication over a public network using public key cryptography is the basis of PKI.

 

PKI Components:

Generally speaking, PKI comprises policies for key and certificate management, operational procedures, and supporting software and hardware for key and certificate generation, distribution, management and storage.

 

Following are the Components of PKI -

 

Certification Authority (CA): The trusted third party responsible for the management and issuance of certificates

 

Registration Authority (RA): Helps the Certification Authority with the registration process and the management and signing of certificates

 

Certificates: A digital certificate issued by the CA or RA essentially validates the identity of the user for electronic transactions. It contains the serial number, the name and signature of the CA, the name and public key of the owner/user, the expiry date of the certificate, etc. (a short sketch of reading these fields follows the component list)

 

Repository / Stores: Storage for certificates and public keys, including the distribution mechanism.
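As a short sketch of the 'Certificates' component above, the snippet below reads the fields just listed (serial number, issuer, subject, validity, public key) from a PEM-encoded certificate using the Python 'cryptography' package; the file name is an assumption.

```python
# Inspect the fields of a digital certificate: serial number, the CA that
# signed it, the owner, the expiry date and the owner's public key.
# "server.pem" is a hypothetical PEM-encoded certificate file.

from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("serial number:", cert.serial_number)
print("issued by    :", cert.issuer.rfc4514_string())   # the CA's name
print("issued to    :", cert.subject.rfc4514_string())  # the owner/user
print("valid until  :", cert.not_valid_after)
print("public key   :", cert.public_key())
```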

 

In my next blog, I will cover the Design, Implementation and Technology considerations for PKI

May 18, 2011

ITSM Data Architecture - A key enabler to successfully deploy and manage ITSM tools

Organizations aspire to comply with the ITIL v3 processes and implement an ITSM tool with a holistic deployment of these processes. However, the important piece we need to look at is the type of data that flows among the various ITSM processes and how quickly we can capture the specific data elements to achieve optimum results during tool and process implementation.

Due to the numerous data flows, it takes an enormous amount of time to identify all the data elements, their integration points and their relationships, which results in the following impacts:

- Prolonged duration of ITSM tool implementation

- Less visibility into IT alignment with the business

- Ineffective data validation rules for various processes

- Difficulty identifying internal and external data relationships

Driven by customer needs, Infosys has built an 'Integrated ITSM Data Architecture Model' that is strongly positioned to enable faster ITSM tool implementations.

The 'Integrated ITSM Data Architecture Model' is a structured and holistic view to demonstrate the data elements and relationships that need to be built for successful deployment of IT Service Management processes into the respective tools and platforms. It can be used as a guiding reference to manage and govern these entity relationships during the overall ITSM tools development lifecycle. This model has an effective future proofing mechanism which will enable an organization to assess their data requirements based on their existing service management processes.

This data model was created by preparing individual data models for the key ITSM processes - Incident, Problem, Change, Release, Service Asset and Configuration, Service Level and Access Management - and then integrating them all to arrive at the comprehensive data model. Each process is identified as an entity, and its attributes are listed in the entity column to form a reference point for all other associated entities. The explicit data elements are related through the entity relationships among the various ITSM processes.
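A minimal sketch of that entity-and-relationship idea follows, assuming simplified Incident, Problem and Configuration Item entities; the attribute names are illustrative, not the actual Infosys data model.

```python
# Each ITSM process becomes an entity; explicit attributes link it to
# related entities so relationships can be traversed and governed.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ConfigurationItem:
    ci_id: str
    name: str
    ci_type: str

@dataclass
class Incident:
    incident_id: str
    summary: str
    affected_ci: ConfigurationItem          # relationship to the CMDB entity

@dataclass
class Problem:
    problem_id: str
    root_cause: str
    related_incidents: List[Incident] = field(default_factory=list)

ci = ConfigurationItem("CI-1001", "payments-db", "database")
inc = Incident("INC-42", "payments DB unreachable", affected_ci=ci)
prb = Problem("PRB-7", "failed storage controller", related_incidents=[inc])

# The relationships let one traverse from a problem to the impacted CI.
print(prb.related_incidents[0].affected_ci.name)
```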

Integrated ITSM Data Architecture Model enables the Application Architects and Process Analysts to analyze the organizational needs in terms of process relationships and the respective data elements that are required and managed throughout the information management lifecycle. The Data architecture model also accelerates the requirements elicitation process and enables faster alignment for the overall requirements. This truly reduces the overall systems requirements analysis time by 15% - 20% and also builds the foundation towards better Data Governance.

May 3, 2011

ITSM - choice matters!

<Posted on behalf of Vicrant Pradhan, Senior ITSM consultant, Infosys Technologies. Can be contacted at vicrant_pradhan@infosys.com. >

One of the first things I did when I recently relocated to a new country was buy a mobile phone, a necessity these days. I went to the electronics section in a supermarket and was instantly blown away by the range of choices. Although choices exist to give us variety and help us make easy decisions, too much choice somehow never fails to distract most of us. So, after some thought and help, I narrowed it down to a handful. I was left with some usual suspects and a few recommendations from friends.

To boil down to the final choice I decided based on a few considerations...

Continue reading "ITSM - choice matters!" »

December 3, 2010

On operations and agreements...

So I happened to meet an old friend of mine at a mall. We got chatting and she happened to refer to my last blog. She was quite interested in knowing how, in reality, an ITSM organization can integrate output from multiple vendors and look at it holistically.

She ordered some chop suey for herself from one of the stores at the food court while I got myself a cheeseburger from a burger store at the same food court. We, of course, could sit together as the entire food court was managed by the mall instead of having each store manage an area of its own. Somehow it seemed to be related to what we were discussing, didn't it?!

Continue reading "On operations and agreements..." »

October 11, 2010

Too many cooks need a good head chef!

Let us imagine!

Imagine that you own a mall with a food court in it. Imagine that there are about 50 different stores catering to (no pun intended) different cuisines and a seating space for around 1500 people. Now let's take a simple example: imagine that each store has its own set of cutlery! Now let's look at the ramifications of this simple example.

Continue reading "Too many cooks need a good head chef!" »

March 30, 2010

ITSM – Thou shall be omnipresent!

In my last blog, I had given a primer on what Cryptography is and had promised an insight into how ITSM can join hands with cryptography. Delivering on that promise, let’s go ahead and see how this can be accomplished.

The climax of the last blog was the assurance that ITSM will cater to requirements of managing cryptographic keys. So let’s look at one of the ways (definitely one that worked) to do this.

Continue reading "ITSM – Thou shall be omnipresent!" »

March 29, 2010

Cryptography can be ‘ITSMized’

I just finished working on an engagement which involved ITSM consulting. We work on so many projects which involve ITSM consulting every day. So what is so special about this one? – Well, this one was in a completely new field (for ITSM) called Cryptography. Rarely do we find ITSM being applied to cryptography. But after having worked on this one, I would vouch for the fact that cryptography and ITSM do go hand in hand.

Continue reading "Cryptography can be ‘ITSMized’" »