Infrastructure Services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead of the game? Read on to learn more on our Infra Matters blog.


October 28, 2013

Traditional Data versus Machine Data: A closer look

(Posted on behalf of Pranit Prakash)

You have probably heard this one a lot - Google's search engine processes approximately 20 petabytes (1 PB=1000 TB) of data per day and Facebook scans 105 terabytes (1 TB=1000 GB) of data every 30 minutes.
Predictably, very little of this data fits into the rows and columns of conventional databases, given its unstructured nature and sheer volume. Data of this scale and complexity is commonly referred to as Big Data.

The question then arises: how is this type of data different from system-generated data? What happens when we compare system-generated data - logs, syslogs and the like - with Big Data?

We all understand that conventional data warehouses are ones where data is stored in table-based structures, and useful business insights can be drawn from this data using a relational business intelligence (BI) tool. However, analysis of Big Data is not possible with conventional tools owing to the sheer volume and complexity of the data sets.
Machine or system-generated data refers to the data generated by IT operations and by infrastructure components such as server logs, syslogs, APIs, applications, firewalls etc. This data also requires special analytics tools to provide smart insights into infrastructure uptime, performance, threats and vulnerabilities, usage patterns etc.

So where does system data differ from Big Data or traditional data sets?
1. Format: Traditional data is stored as rows and columns in a relational database, whereas system data is stored as text that is loosely structured or even unstructured. Big Data remains highly unstructured and often raw; it is generally not categorized, but it is partitioned in order to be indexed and stored.
2. Indexing: In traditional data sets, each record is identified by a key which also serves as the index. In machine data, each record carries a unique timestamp that is used for indexing, unlike Big Data, where there is no fixed criterion for indexing.
3. Query type: Traditional data analysis runs pre-defined questions and searches expressed in a structured query language. System or machine data supports a wide variety of queries, mostly based on source type, logs and timestamps, while in Big Data there is no limit to the kinds of queries - it depends on how the data is configured.
4. Tools: Typical SQL and relational database tools handle traditional data sets. For machine data, there are specialized log collection and analysis tools such as Splunk, Sumo Logic and eMite, which install an agent/forwarder on devices to collect data from IT applications and devices and then apply statistical algorithms to process it. For Big Data, there are several categories of tools, ranging from storage and batch processing (such as Hadoop) to aggregation and access (such as NoSQL) to processing and analytics (such as MapReduce).
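The indexing contrast above can be sketched in a few lines of Python; the customer records and the log line below are made up purely for illustration:

```python
from datetime import datetime

# Traditional data: each record identified by a unique key (primary key),
# which doubles as the index for lookups.
customers = {
    1001: {"name": "Alice", "age": 34},
    1002: {"name": "Bob", "age": 29},
}

# Machine data: loosely structured text, indexed by its timestamp.
log_line = "2013-10-28 09:15:02 INFO sshd[2101]: Accepted password for alice"
timestamp = datetime.strptime(log_line[:19], "%Y-%m-%d %H:%M:%S")
message = log_line[20:]  # free-form remainder of the record

machine_index = {timestamp: message}  # the timestamp serves as the index key

print(customers[1001]["name"])    # lookup by key
print(machine_index[timestamp])   # lookup by time
```

Big Data, by contrast, would arrive with neither a key nor a guaranteed timestamp, which is why it is partitioned rather than indexed up front.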

When a user logs in to a social networking site, details such as name, age and other attributes entered by the user get stored as structured data and constitute traditional data - i.e. data stored in the form of neat tables. On the other hand, data that is generated automatically during a user transaction, such as the timestamp of a login, constitutes system or machine data. This data is amorphous and cannot be modified by end users.

While analysis of the obvious attributes - name, age etc. - gives an insight into consumer patterns, as evidenced by BI and Big Data analysis, system data can also yield information at the infrastructure level. For instance, server log data from internet sites is commonly analyzed by webmasters to identify peak browsing hours, heat maps and the like. The same can be done for an application server as well.
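As a minimal sketch of how peak browsing hours fall out of server logs, the snippet below buckets request timestamps by hour; the log lines and their format are invented for the example:

```python
from collections import Counter
from datetime import datetime

# Hypothetical access-log lines: a timestamp followed by the request.
log_lines = [
    "2013-10-28 09:15:02 GET /index.html",
    "2013-10-28 09:47:11 GET /products",
    "2013-10-28 14:03:45 GET /index.html",
    "2013-10-28 14:10:09 GET /cart",
    "2013-10-28 14:52:30 GET /checkout",
]

# Count hits per hour of day by parsing the leading timestamp.
hits_per_hour = Counter()
for line in log_lines:
    ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
    hits_per_hour[ts.hour] += 1

peak_hour, peak_hits = hits_per_hour.most_common(1)[0]
print(f"Peak hour: {peak_hour}:00 with {peak_hits} requests")
```

Dedicated tools like Splunk do essentially this at scale: extract the timestamp, index on it, and aggregate.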


October 22, 2013

The Lean ITIL connection

(Posted on behalf of Manu Singh)

While trying to improve IT operations, the application of ITIL best practices alone does not necessarily guarantee effectiveness and efficiency in IT processes. ITIL, in fact, recognizes this, and for that reason the ITIL v3 framework defines a service lifecycle stage - Continual Service Improvement (CSI) - intended to measure and improve processes and services over time. However, the seven-step improvement process defined in CSI is perhaps too focused on improvements as opposed to reducing wasted effort.
A significant amount of effort is wasted in performing routine tasks and activities. Any activity that does not deliver value to the customer is potential waste and needs to be removed, or at least reduced.

And this is where Lean comes in.

Lean principles were originally developed to improve quality and reduce costs in manufacturing, but over time they have been applied in the services industry as well. Lean thinking has now evolved towards improving quality, eliminating waste, reducing lead times for implementations and, ultimately, reducing costs.

So, how do Lean principles complement IT service management?

Let me give you an example. IT organizations around the globe follow the same broad practices: detailing client requirements, designing the solution and getting it approved. At the next stage, they build the solution, take it live and support it. In a way, all the ITSM processes are followed; however, the extent to which these processes are detailed depends on many factors, such as the size of the organization, support requirements, geographic spread (for incorporating business rules for different countries) etc. Some of these processes may include wasteful effort that does not really add any value.

Lean helps in identifying 'fit for purpose' ITSM processes - that is, identifying the right fit based on organizational requirements and removing those activities that are irrelevant to the business or that create unnecessary overheads. In this way, the correlation of Lean and ITSM principles can be seen as a natural progression towards delivering value in IT services: while Lean focuses on waste reduction in alignment with client requirements, ITSM focuses on delivering services that meet client expectations.

The best approach to embarking on a Lean ITSM journey is to first identify what the business (internal and external stakeholders) perceives as value adds and non-value adds, and then to define a "To-Be" value stream that will act as a baseline for the improvement journey ahead. This "To-Be" value stream takes inputs from the corporate business strategy along with current and future business requirements.

Another important aspect is to define the change management and roll-out strategy so that the new or improved processes make sense to the process stakeholders. For this, organizations need to focus on incremental process roll-outs, bundling them in a logical manner, and involve all stakeholders in solution design; resistance to change drops when everyone has had the opportunity to contribute to the definition of the solution.

Over time, the incorporation of Lean principles into IT service management has evolved towards improving support efficiency, accelerating issue resolution and reducing costs through better allocation and utilization of support staff and budgets.
In the current market scenario, where IT spending is expected to slow significantly, it makes even more sense to apply Lean to gain cost advantages.

(Manu Singh is a Senior Consultant with the Service Transformation practice at Infosys. He has more than 8 years of experience in the industry and is focused on Service Transition, Program Management, IT service management, Process design and gap analysis.)

October 1, 2013

Transforming Enterprise IT through the cloud

Cloud technologies offer several benefits. From solutions that are quick to adopt, always accessible and extremely scalable, to the capex-to-opex shift that CFOs like, the cloud has come of age. In the coming years, IT organizations will increasingly see the cloud as a normal way to consume software, hardware and services. Clouds are transformational, and several companies are already enjoying a competitive advantage over their rivals by being early adopters of this paradigm. Today, however, we hear similar questions on near-term and long-term adoption from IT leaders, which can be summarized as follows:

  • Where do I get started?
  • What are the quick wins?
  • What should we be doing in the next 3 years?

I have tried to address some of these common questions below. These can be thought of as basic transformational patterns and a strategy for cloud adoption within the enterprise.

  • Start with Software as a Service (SaaS): explore ready-to-use solutions in key areas such as CRM, HR and IT Service Management. SaaS solutions such as Salesforce, Workday and ServiceNow have been around for some time and deploy proven engagement models, so these are quick wins to consider. Pick a functionality, an app and a suitably sized footprint, so that the project scope can create a real organization-level impact. From a time-to-market perspective, SaaS is arguably the quickest way to attain the benefits of the cloud.
  • Explore the private cloud: take virtualization to the next level by identifying an impactful area within the organization to start a private cloud project. One example with an immediate benefit is enabling application development teams to automate requests for development and test environments. Doing this through a service catalog front end connected to a private cloud back end can cut provisioning times by 50% or more, with the added benefit of freeing resources to focus on supporting production environments. There are different product architectures to consider, with pros and cons that are beyond the scope of this note - choose one that works for the organization and get going.
  • Explore data center and infrastructure consolidation: many large Fortune 500 organizations today have to deal with equipment sprawl, which can manifest anywhere from technology rooms and closets to co-located space to entire data centers, and can span the full stack of IT equipment - servers, network switches, storage devices, even desktops. Private clouds can be used as a vehicle to consolidate, reduce footprint and increase the overall control and capacity of this infrastructure. Added benefits can include higher performance, lower energy costs and the replacement of obsolete equipment.
  • Identify specific public cloud use cases: depending on the business, some areas can benefit from adopting public clouds. For example, needing a large amount of computing horsepower for a limited duration to do data analytics is common in the pharmaceutical, healthcare and financial industries. This is a challenging use case for traditional IT, as it is capital- and resource-inefficient. Public clouds are a perfect answer to these types of workloads: the business unit pays for what it uses and is not limited by the equipment available in-house.
  • Create a multi-year roadmap for expansion: these initiatives are only the beginning. IT leaders need to create a three-year roadmap that plans how these initiatives can be expanded to their fullest potential within the organization. A strong project management practice and a proven project team will go a long way towards ensuring success during execution. Create a cost-benefit analysis for each of these areas and ensure a positive Net Present Value (NPV) case for each one. Identify what partners bring to the table that is proven, yet unique to the cloud solution at hand. Assume that, in spite of replacing some legacy tools and solutions, these cloud initiatives will continue alongside current in-house IT and management practices.
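The NPV check in the roadmap step is straightforward to compute. A minimal sketch in Python follows; the upfront cost, annual savings and discount rate are invented numbers, not figures from any real initiative:

```python
def npv(rate, cashflows):
    """Net Present Value; cashflows[0] occurs at time 0 (the upfront spend)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical 3-year cloud initiative:
# $500k upfront cost, then $220k in savings at the end of each year.
cashflows = [-500_000, 220_000, 220_000, 220_000]
value = npv(0.08, cashflows)  # assumed 8% discount rate

print(f"NPV: ${value:,.0f}")
print("Go" if value > 0 else "No-go")
```

A positive NPV at the organization's discount rate is the "go" signal the bullet above asks for; rerunning the same check annually against actuals keeps the roadmap honest.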

In summary, it is important to have a pragmatic view. The cloud is not a silver bullet that will solve all IT problems, and no organization automatically attains the promised benefits. No two organizations are alike, even within the same industry, and hence understanding what to do and which steps to take first will put IT on a course to being 'in the cloud'.