Infrastructure Services are undergoing a major transformation. How does one navigate the web of emerging technology trends and stay ahead of the game? Read on to learn more on our Infra Matters blog.

August 7, 2015

Driving down IT and Business risk from unsecured endpoints

In the Cloud and Bring Your Own Device (CBYOD) age, securing endpoints both inside and outside the corporate network is equally important. This is where Secure-Ops comes into play - i.e. the combination of secure practices integrated within regular service operations.

 

In the past I have written about how to deal with privileged IT endpoints. Again, practicing sound IT Risk Management will lead one to look at compensating controls, which is what this post deals with.

 

Consistent processes drive effective controls. Change management is unique in that it is both a process and a control. The 10 questions for Enterprise change will open key areas within IT Service Management for further analysis, and they will complement evolving Trust Models for effective governance.

 

The 2015 Annual GRC Conference is a collaboration between the Institute of Internal Auditors (IIA) and the Information Systems Audit and Control Association (ISACA).

 

The conference is being held from Aug 17th to 19th in Phoenix, AZ and will be a great forum to learn more about emerging trends in IT compliance and controls.

 

I'm equally excited to have the opportunity to speak on Aug 17th in my session, 'Attesting IT Assets and key Configuration Items as a pre-audit measure: The why and the how'.

 

More in my next post.

April 13, 2015

Hybrid ITSM - Key points to consider

IT service management (ITSM) tools play a pivotal role in managing diverse enterprise environments. There is a concerted movement towards hybrid IT, where enterprises may leverage the cloud for some workloads in addition to traditional data centers. As pointed out in my earlier post, a "one size fits all" approach does not work in the case of ITSM tool requirements. Each organization has its own requirements and faces unique challenges during implementation.

Let's look at some of the key parameters that need special attention while implementing an ITSM tool in hybrid IT environments:


While deciding on the cloud option for your ITSM deployment, integration is one of the key areas that can determine the success of the implementation. Enterprises have a wide range of tools deployed across their IT; sometimes each department may have different tools to perform similar tasks. For instance, some departments may choose to use Nagios while others may use SCOM for monitoring the infrastructure. However, no ITSM tool can exist in isolation; it must work in cohesion with the other tools.
So the key considerations here include: the availability of plug-ins for integration with other tool sets, the ability to provide an end-to-end view of services to the customer (a single source of truth), and the ability to enable effective problem management by integrating disconnected environments and platforms.
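To make the plug-in idea concrete, here is a minimal Python sketch (the field names are illustrative only, not the actual Nagios or SCOM data models) that normalizes alerts from two monitoring tools into one common incident format before it is handed to the ITSM tool:

```python
# Minimal sketch: normalize alerts from two monitoring tools into one incident
# format before they reach the ITSM tool. Field names are illustrative only,
# not the actual Nagios or SCOM data models.

def from_nagios(alert: dict) -> dict:
    """Map a Nagios-style alert onto the common incident schema."""
    return {
        "source": "nagios",
        "ci_name": alert["host"],
        "severity": {"CRITICAL": 1, "WARNING": 3, "OK": 5}.get(alert["state"], 4),
        "summary": alert["output"],
    }

def from_scom(alert: dict) -> dict:
    """Map a SCOM-style alert onto the same schema."""
    return {
        "source": "scom",
        "ci_name": alert["monitored_object"],
        "severity": alert["severity"],   # map to the common scale in a real integration
        "summary": alert["description"],
    }

def incident_feed(nagios_alerts, scom_alerts):
    """One tool-agnostic feed for the ITSM tool - the 'single source of truth'."""
    return [from_nagios(a) for a in nagios_alerts] + [from_scom(a) for a in scom_alerts]
```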


Security is another important aspect while implementing ITSM, especially if the deployment is on a cloud-based platform. In accordance with enterprise security requirements, the cloud provider's practices with respect to the enterprise's regulatory, audit and compliance requirements need to be assessed.
In addition, the security of the connection between the cloud and on-premise environments needs to be assessed, especially with respect to the ability to move workloads from the cloud to on-premise data centers as required by the business. If the data is confidential, then it is better to have it stored in an on-premise data center.

Configuration Management in hybrid IT environments is another factor that should be kept in mind while implementing ITSM tools. The cloud is known for its elasticity, multi-tenancy and ability to assign resources on demand. With such dynamic changes it becomes difficult to track configuration changes and, at the same time, assess the impact of those changes on the cloud services. So it is imperative that a robust CMDB strategy is in place to ensure cloud services don't fail due to inadvertent configuration changes. A simple way of tracking would be to have server-based agents that provide real-time, machine-level statistics, or to use monitoring tools to generate alerts across the hybrid environment. These alerts can be routed to a central repository where they can be analyzed and appropriate action can be taken.
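As a rough illustration of that agent-to-repository pattern, the Python sketch below assumes a hypothetical central collection endpoint and shows an agent reporting a small machine-level snapshot; a production agent would add authentication, scheduling and retries:

```python
# Minimal sketch of a server-based agent reporting a configuration snapshot to a
# hypothetical central repository. Endpoint, auth and scheduling are omitted
# assumptions of this example, not features of any particular product.
import json
import platform
import socket
import urllib.request

COLLECTOR_URL = "https://cmdb.example.com/api/config-events"  # hypothetical endpoint

def snapshot() -> dict:
    """Collect a few machine-level facts an agent might report."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "python": platform.python_version(),
    }

def report(event: dict) -> None:
    """Push the snapshot to the central repository for analysis and alerting."""
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # a real agent would add auth, retries and TLS checks

if __name__ == "__main__":
    report(snapshot())
```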


As enterprises move workloads across hybrid environments, process control and governance become major issues. In many cases, enterprises may have decentralized operations with multiple tools for a similar process across locations. Needless to say, this makes it difficult to visualize process outcomes at any given time. A governance layer that defines the responsibilities of each vendor, puts SLAs and OLAs in place to assign responsibilities to the relevant teams, and provides service reporting can avoid issues like delays, outages and inefficient operations.


An integrated approach towards IT service management spanning hybrid environments allows the enterprise to govern its entire IT through a single lens. Process maturity is a key consideration here.

 

Posted by Manu Singh

March 31, 2015

Driving ACM governance with Discovery Controls

 

Previously I blogged about the components that become key controls for an effective Asset and Configuration Management (ACM) program, with the Information Security group being one of the core consumers of this capability.

A first step in that journey was an integrated discovery capability. Discovery solutions need to go across the enterprise and find out what exists, report back accurately, flag exceptions requiring attention and then redo the process all over again. Discovery systems run almost daily within the enterprise and are a key control to ensure IT Asset Management governance.

Today no single tool can discover all assets entirely by itself. Certain tools are more focused and do a better job on the network side - e.g. SolarWinds or Ixia - versus others that target virtual servers, applications and databases. As a result there may be multiple tools in use within the organization. Depending on how IT is structured and on existing portfolios, these tools may be deployed across technical or functional domains, e.g. the entire network management domain covered by one or two discovery tools, or network management for a single business unit. Large organizations often have a great opportunity to bring these disparate discovery feeds into the Asset Management discovery workflow. However, due to organizational complexity and priorities, this is often not done.

Integrating discovery tools into one single feed can provide a huge benefit to Information Security practitioners within the entity. Apart from a single window view, this capability allows scan schedules to be adjusted - for example, having one tool run for a few hours and letting another carry on where the first left off. Another option is to standardize overlaps in information through a reconciliation engine and create a single integrated view of each asset, built from all the intelligence generated by the different discovery tools.
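A minimal Python sketch of that reconciliation idea, assuming each tool can export its findings as a list of records and that a MAC address (a serial number or asset tag would work the same way) can serve as the match key:

```python
# Minimal sketch of a reconciliation step: merge records from several discovery
# feeds into one view per asset, keyed on the MAC address. Feed contents are
# illustrative.

def merge_feeds(*feeds, key="mac"):
    """Reconcile overlapping discovery feeds into a single record per asset."""
    assets = {}
    for feed in feeds:
        for record in feed:
            k = record.get(key)
            if k is None:
                continue  # in practice, keyless records go to an exception queue
            merged = assets.setdefault(k, {})
            for field, value in record.items():
                # keep the first non-empty value; a real engine would apply
                # per-field precedence rules (e.g. trust the network tool for IPs)
                merged.setdefault(field, value)
    return assets

network_feed = [{"mac": "00:1A:2B:3C:4D:5E", "ip": "10.0.0.12", "switch_port": "Gi0/3"}]
server_feed = [{"mac": "00:1A:2B:3C:4D:5E", "os": "RHEL 7.9", "owner": "app-team-1"}]

print(merge_feeds(network_feed, server_feed))
# one merged record per MAC, combining network and server attributes
```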

Most discovery tools run on a pre-defined schedule. The process itself is push-based, with a central discovery server collecting device information from scans, or agents on clients pushing information back to the server. Most tools lack a real-time capability to alert on asset configuration changes. Having a single window into multiple tools can help to separate schedules so that multiple scans can be applied to a single asset. This will still not attain real-time visibility or enhance network utilization efficiency, but it will significantly improve on the outcomes of scans done by a single tool.

For discovery feeds to make sense to ACM and Info-security stakeholders, they must pass through a reconciliation engine - either one built into the tool or a rules engine outside the discovery system. Then there is the question of action. Missing fields are a common problem - e.g. retrieving the server location but not the server name, or obtaining the server IP address but not the MAC address. In our fast-paced world of virtualized instances, where machines spin up and down every few minutes, the problem is more acute due to the lack of information supplied during instance creation.

Organizational asset maintenance policies and standards should clearly state the next step to remediate missing information. However, even before this step, the discovery system should be configured to flag issues: which key Configuration Items (CIs) per asset type need to be monitored? For example, a missing server OS version may be acceptable, but a missing server location attribute may not.
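One way to express such rules is sketched below in Python; the asset types and attribute names are illustrative, not a prescribed standard:

```python
# Minimal sketch of per-asset-type monitoring rules: which CI attributes must be
# present before a discovered record is accepted. Attribute names are illustrative.

REQUIRED_ATTRIBUTES = {
    "server": ["hostname", "location", "ip_address", "mac_address"],
    "network_device": ["hostname", "location", "mgmt_ip"],
}

def flag_missing(asset_type: str, record: dict) -> list:
    """Return the required attributes missing from a discovered record."""
    return [a for a in REQUIRED_ATTRIBUTES.get(asset_type, []) if not record.get(a)]

issues = flag_missing("server", {"hostname": "app01", "ip_address": "10.0.0.12"})
print(issues)  # ['location', 'mac_address'] -> route to remediation per policy
```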

The asset data quality function can become a key partner to information security by taking on the monitoring of discovery and the flagging of exceptions. This will enable cyber security teams to focus on the areas that need more attention - e.g. a changed IP address for a server that did not come in through a formal change management request may indicate an unauthorized change that requires further investigation.

ACM can provide enormous value to cyber security programs in large entities. However, all ACM components need to work together to deliver that return on investment. The discovery policy, process and tools are key steps to enable ACM effectiveness.

October 22, 2014

Developing operational controls through an ACM practice

 

In my last entry I talked about the need to have a sound Asset and Configuration Management (ACM) practice as the foundation for an effective Cyber Security strategy. So what does this start to look like? As simple as it may sound, designing, setting up and managing an ACM practice is actually a complex endeavor.

Why? ACM faces multiple ongoing and evolving challenges. Here are a few:

- Proliferation of IT devices and form factors - both fixed and mobile

- Product vendors running varied licensing models for software products

- Multiple asset "owners" - almost every operational entity has an interest in the device, e.g. Audit, Access Control, Information Security, Network Operations, Change Management and Facilities

- Focus on one-time 'catch-up efforts' at inventory vs. an ongoing accounting and reconciliation based systems approach

- Multi-sourced operational vendors building their own ACM silos for contractual and service level needs, which makes it hard to see a single picture across the organization

- Emphasis on asset depreciation and cost amortization, resulting in a 'we don't care, as long as finance has it depreciated on the books' view

Will going to the cloud make all these challenges go away? Or, even better, will the cloud make the need for an ACM practice go away? Hardly! Just ask IT Security or, even better, the external auditor. As ACM evolves within major Fortune 500 organizations, so will the need for cloud vendors to support the customer's ACM efforts through sound management, accurate reporting and alerting.

So what does an organization need? The list below is an attempt to capture the key components that comprise an effective ACM practice:

- Discovery capabilities for internal environments

- Service provider discovery feeds for outsourced environments

- Any other manual feeds - e.g. data from a facilities walkthrough

- Direct asset input/output system feeds from procurement and asset disposal

- Automated standardization engine

- System for device normalization and accurate identification (see the sketch after this list)

- Reconciliation rules for comparing overlapping feeds, and for comparing auto-discovery against other feeds

- A dedicated Asset Management database (AMDB) - asset information for a distinct set of stakeholders (procurement, IT planning and finance, DC facilities, asset receiving and asset disposal)

- A dedicated Configuration Management database (CMDB) tracking asset and attribute relationships, and serving the requirements of specific stakeholders (change management, release management, information security, software license management, incident and problem management, capacity management, application support, enterprise architecture)

- Automated business service modeling tool

- Asset analytics platform for standard and advanced reporting

- Integration with the change management module

- Integration with the release management module

- Business-as-usual processes and governance mechanisms
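As a small illustration of the standardization and normalization components above, here is a Python sketch that maps the different spellings a manufacturer name may arrive with from different feeds onto one canonical value; the alias table is purely illustrative:

```python
# Minimal sketch of the standardization/normalization idea: map the many ways
# different feeds spell the same manufacturer onto one canonical value. The
# alias table is illustrative, not an exhaustive catalogue.

ALIASES = {
    "hewlett-packard": "HP",
    "hp inc.": "HP",
    "dell inc.": "Dell",
    "dell computer corp": "Dell",
}

def normalize_manufacturer(raw: str) -> str:
    """Return the canonical manufacturer name for a raw discovery value."""
    cleaned = raw.strip().lower()
    return ALIASES.get(cleaned, raw.strip())

print(normalize_manufacturer("  Hewlett-Packard "))  # -> 'HP'
```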

Bringing these components together requires dedicated investment, time and resources, but, when done, it dramatically improves the overall level of control that the organization has over its IT investments. Let's explore how that is achieved in my next note.

September 29, 2014

The foundation for effective IT Security Management

Of late the news on the IT Security front has been dominated by mega hacks. Retailers in particular have taken the brunt of the bad press, with a large US home improvement company the latest to admit to being compromised. The cyber criminals in all these cases took away credit card data belonging to retail customers. This in turn has resulted in a chain reaction where Financial Services firms are battling the growth of credit card fraud. The resulting bad press and loss of reputation and trust have affected the companies and their businesses.

The tools and exploits in these attacks were new; however, the overall pattern is not. Cyber criminals have a vested interest in finding new ways to penetrate the enterprise, and that is really not going to go away anytime soon. What enterprises can do is lower the risk of such events happening. That seems like a simple enough view, but in reality the implementation is complex. Reactive responses to security breaches involve investigations in collaboration with law enforcement into the nature of the breach, its source, the type of exploit used, locations, devices, third-party access, etc. But that alone does not address the issue of enterprise risk.

Yes, a comprehensive approach is required. Many pieces of Enterprise Security have to come together and work as a cohesive force to reduce the risk of future attacks. These components include Security Awareness and Training, Access Control, Application Security, Boundary Defense and Incident Response, amongst others. But effective IT Security Management is incomplete without addressing one vital element. As an enterprise, the understanding of 'what we own', 'in what state', 'where' and 'by whom' is often lost between the discussions and practices of penetration testing, discovery and audit.

These four elements, coupled with a fifth - 'management' on a 24x7 basis - typically sit in an area outside IT Security. They sit within IT Service Management (ITSM), in Asset & Configuration Management (ACM). The foundation for effective IT Security begins with strong collaboration and technology integration with the ACM practice. Without a capable ACM presence, IT Security Management is left to answer these questions by itself.

So why have enterprises ignored ACM, or settled for a weak ACM practice? Over the last decade there have been several reasons - technological, structural and business related. From a technology standpoint, the available solutions had partial answers and long implementation times, and were not seen as robust enough. From a structural standpoint, the focus within ITSM was on 'Services', with Incident Management taking the lion's share of the budget and focus. From a business standpoint, multi-sourcing has played a huge role in the compartmentalization of the enterprise. Rightly so, service providers' focus is on achieving their service levels and watching what they are contracted to do, and no more.

I would also argue that effective ACM is a key pillar of effective IT governance. The ability to know exactly what areas are being governed and how, from a non-strategic view, also depends on a sound ACM practice. Again, in a software-centric world there is no application software without effective Software Configuration Management (SCM) and tools like Git and Subversion. Ignoring ACM therefore undermines the very functionality and availability of the software.

But our focus is on IT Security, so where does one start? Depending on the state of the ACM practice in the enterprise, there may be a need to fund this central function, expand its scope and bring greater emphasis to tools, technology and people. More in my next blog...

June 24, 2014

Application Portability - Going beyond IaaS and PaaS - Part 2

Renjith Sreekumar

 

In my last post, we looked at the concept of running thousands of hardware-isolated applications on a single physical or virtual host. Applications are isolated from one another, each thinking that it has the whole virtual machine dedicated to itself. Container technology allows an application and its dependencies to be packaged in a virtual container that can run on any server running the same OS. This enables flexibility and portability in where the application can run - on-premise, public cloud, private cloud, bare metal, etc. Sharing the underlying OS and image metadata reduces application footprint and start-up time, along with improving performance and multi-tenancy.

Should we build multiple VMs to isolate applications, or build complicated abstractions and layering on a VM to achieve this? Here is where container technology can help. Docker is one such container technology: it builds on native Linux kernel capabilities such as namespaces and control groups to create isolated environments, each with its own allocation of memory, storage, CPU and network.

The base OS image is customized using Docker to create a custom image, and Docker's layered file system merges the various layers of customization on the base image together at run time. Because the container abstracts the underlying OS, it may not require a VM and hence can actually run on a bare-metal OS as well. Containers may well be the next VM!
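As a hedged illustration (assuming a local Docker Engine and the Docker SDK for Python, and using arbitrary public images and limits), the sketch below starts two applications side by side on one host, each with its own memory and CPU allocation:

```python
# A hedged sketch using the Docker SDK for Python (pip install docker), assuming a
# local Docker Engine. Two applications run side by side on one host, each with
# its own memory and CPU allocation. Image names and limits are illustrative.
import docker

client = docker.from_env()

for name, image in [("app-a", "python:3.11-slim"), ("app-b", "node:20-slim")]:
    client.containers.run(
        image,
        command="sleep 3600",   # stand-in for the real application process
        name=name,
        detach=True,
        mem_limit="256m",       # per-container memory allocation
        nano_cpus=500_000_000,  # roughly half a CPU per container
    )

print([c.name for c in client.containers.list()])
```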

Let's look at some of the practical use cases:

1) PaaS Delivery: Today most PaaS providers use sandbox methodologies for application colocation on a single OS instance. Adopting container technology makes it much easier to abstract the application environments, support multiple languages and databases, and improve manageability and security.

2) DevOps: Container-based PaaS gives app developers the flexibility to build and deploy application environments with much more ease. This reduces provisioning lead time and also alleviates worries about OS and middleware management, allowing developers to focus on just their applications.

3) Scale out and DR: While most hypervisor technologies allow moving apps around in VMs, a compatible hypervisor is needed to run those VMs. Virtual containers, however, can run on any server running the same OS - whether on-premise, public cloud, private cloud or bare metal - allowing scale-out to any type of cloud supporting the same OS.

Finally, what benefits and changes can we anticipate?
The logical boundary of the application ecosystem will move from VMs to containers, while the mobility of applications will move beyond single-hypervisor zones:
• A container holding just the application binaries reduces the complexity of provisioning and managing applications
• The coexistence of isolated apps on the same physical or virtual servers will reduce platform development and management cost
• The use of standard frameworks, instead of platform-specific (sandbox-driven) APIs, will improve user adoption
• A container-based application is entirely self-contained, making it inherently more secure

So what do you think? Are you considering container based Application isolation and delivery as well?

May 29, 2014

Application Portability - Going beyond IaaS and PaaS - Part 1

Renjith Sreekumar

The other day, I was studying economies of scale for virtual machine (VM) deployments for a client and tried to benchmark these against standard cloud deployment models, both on- and off-premise. A VM provides a dedicated run-time environment for a specific application based on its supported stack. In today's world, while VM densities are increasing thanks to processing and compute innovations at the hardware layer, application densities are not, due to the inability to support multiple stacks/runtimes at a single OS level.
I tried to analyze this scenario against use cases that we have addressed with client virtualization solutions - Citrix XenApp, App-V, etc. Application virtualization essentially achieves this kind of isolation: applications are packaged and streamed to the client OS irrespective of the local OS preference. This allows multiple versions of the same application, and applications certified on multiple stacks, to be run virtually from the same client OS.

How can this process be replicated within an enterprise? As I pondered this issue, I happened to meet an old friend and Linux geek on the subway. Our discussions led to deeper insights into the next wave of virtualization, which we believe will redefine the way applications are delivered in future - a model that enables a cost-effective PaaS with massive scale, security and portability for applications. This prompted the following questions:

  • How can we enable standard VMs to support multiple apps - each having its own run-time dependencies?
  • Today's application teams need a plethora of development frameworks and tools. Can these be supported in one "box"?
  • How are PaaS providers delivering the environments today? How do they achieve the economies of scale while supporting these needs - or do they?
  • How complicated is it to use sandboxes to isolate application runtimes?
  • Can the Dev team get a library that is not in the standard "build" that the technology team offers today? Do we need to update technology stacks such that this library can be supported?

To put it simply: do we give application teams a fully layered OS image, complete with libraries, in a VM - or would a "light-weight" container, in which the application is packaged with its dependencies (libraries and run-time needs) and isolated from the underlying OS and hardware, suffice?

Welcome to the world of Application Containers. Here is how it works:

The infrastructure team creates a container image, the application team layers it with the application's dependencies, and the image is later handed back to the infrastructure team for management. The application is now packaged as if it were running on its own host, with no conflicts with other applications that may be running on the same machine.
Instead of having 100 VMs per server, I am now talking about thousands of hardware-isolated applications running on a single physical or virtual host. Such application container technologies can package an application and its dependencies in a virtual container that can run on any virtual or physical server. This helps enable flexibility and portability in where the application can run, be it on-premise, public cloud, private cloud or otherwise.

In my next post, I will talk more about the underlying technology, its benefits, strategies around adoption and migrations.

January 27, 2014

Access management - Road to user experience nirvana?

(Posted by Praveen Vedula)

It's a bright Monday morning and today is the first day at your new job. You are excited as you are shown to your desk. After filling in all the mandatory forms, you try to get down to business... only to realize that you have to raise a multitude of requests just to get access to the necessary applications. Most of you have been there, done that, and can understand what a harrowing experience it can be.

Now consider this: it is possible to reorient this entire process in a way that is user friendly and in accordance with IT requirements; all it requires is a careful analysis of the access product life cycle and how it overlaps with the service catalogue from an ITSM point of view.

There is a thin line between role management and entitlement management. Role management deals with the administrative nature of roles, while entitlement management deals with the functional aspect of access, though both fall under the umbrella of Identity and Access Management (IAM).

Control, accountability and transparency are the central tenets of Identity and Access Management. So, how do we control or detect access violations? Most organizations depend on IT service management to provide a seamless process for ordering products through a service catalogue. However, managing the user access lifecycle remains a challenge, given the sheer volume and structure of the authorizations involved. There are several products in the market that manage authorizations, such as Axiomatics and Securent (acquired by Cisco). However, it will be a while before we have an end-to-end entitlement management product, as pointed out by Earl Perkins of Gartner Research in his blog.

Having said that, there are three key issues that need to be addressed while managing access roles and entitlements:

  • How do we present the access roles as orderable items in service catalog?
  • How do we enforce the policies and rules for the access roles while ordering them?
  • How do we update CMDB with relevant entitlement data to drive IT service management? 

One of the most important aspects of a service catalog is the ease with which it can be accessed and browsed. The key challenge here is to transform an access product into an orderable item that can be accessed by users who have the requisite rights as determined by their roles. Given the flexibility of cloud-based ITSM tools, it is quite possible to manage the search parameters on the front end while a compliance check is run by authorization tools in the back end. The governing rules of the access products can be centrally defined and managed at the application layer, making it simpler to manage them in one go. To make life easier for business users, the orderable access items can also be grouped by job level, job description or any other parameter based on the organizational structure.
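A minimal Python sketch of that front-end/back-end split, with hypothetical catalog items and role names: the catalog is filtered by the requester's roles for browsing, and the same rule is re-checked by the authorization layer before the order is fulfilled.

```python
# Minimal sketch (catalog items and roles are hypothetical): filter orderable
# access items by the requester's roles on the front end, and re-check the same
# rule in the back end before the entitlement is granted.

CATALOG = [
    {"item": "CRM - read access", "allowed_roles": {"sales", "support"}},
    {"item": "CRM - admin access", "allowed_roles": {"crm-admin"}},
    {"item": "Finance ledger view", "allowed_roles": {"finance"}},
]

def orderable_items(user_roles: set) -> list:
    """Front end: show only the items the user's roles entitle them to order."""
    return [c["item"] for c in CATALOG if c["allowed_roles"] & user_roles]

def policy_check(user_roles: set, item: str) -> bool:
    """Back end: validate the same rule again before fulfilling the request."""
    entry = next((c for c in CATALOG if c["item"] == item), None)
    return bool(entry and entry["allowed_roles"] & user_roles)

new_hire_roles = {"sales"}
print(orderable_items(new_hire_roles))                      # ['CRM - read access']
print(policy_check(new_hire_roles, "CRM - admin access"))   # False -> request denied
```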

So, going back to the first example, a new employee simply has to select the required access products from the service catalog. Based on his or her role identity, it is then easy to assign the right levels of access. In one shot, a pleasant user experience and adherence to IT policies can be achieved. This approach has been a success story at a large reinsurance firm in Europe that was recognized at the European Identity & Cloud Awards 2013 for its project on access management using cloud and authorization tools.

January 21, 2014

Hybrid ITSM: Evolution and Challenges

(Posted by Manu Singh)

When you compare an ITSM solution based on the public cloud with an on-premise solution, there is no way to determine which one is superior. Although public cloud-based ITSM solutions provide on-demand self-service, flexibility at a reduced cost is not the only factor that should be considered when choosing deployment options.

Customization has always been a major issue when deploying a cloud-based ITSM solution. While every organization has its own way of handling incident, problem, change and release management, it is the business needs that determine how the IT service management solution is deployed. Cloud-based ITSM solutions can be inflexible at times - a kind of one-size-fits-all proposition. Any change or customization must go through testing across the entire user base for all clients, which leads to unnecessary delays in deploying the required functionality. In some cases, a release may not be implemented at all if a few users do not approve of the change.

In other words, using a standard application across multiple users gives limited options for changes in configuration. Organizations may face a risk as requirements continue to change as dictated by a dynamic business environment. Limited options to change configuration settings may not be the best solution in such a scenario.

Another reason organizations are unlikely to stick with a cloud-only solution is that it gets expensive as the years go by. Analysts have also predicted that SaaS-based ITSM tools may not be the preferred option, as the amount of effort invested in implementing, integrating, operating and maintaining the tools would likely increase actual costs rather than reduce them.

But this does not mean that the cloud-based ITSM model is likely to vanish. It will still be a good bet for organizations that have limited IT skills on-site and are only looking for standardization of their processes, without much customization or dynamic integration requirements.
It stands to reason that organizations would prefer to have both options - i.e. a cloud-based ITSM offering that can be rapidly deployed and a premise-based one that supports on-going customization and dynamic integration.

Hybrid ITSM combines the best of both worlds, i.e. public and on-premise/private clouds. It focuses on increasing scalability, dependability and efficiency by merging shared public resources with private, dedicated resources.
However, implementing a hybrid model is not as easy as it seems, as it comes with its own set of challenges, some of which are listed below:

  • Management and visibility of resources that fall outside the scope of managed services
  • Ensuring the consistency of changes implemented between the on-premise and the cloud service provider
  • Supporting open tool access with consistency in the data / look and feel
  • Managing shared network between the on-premise data center and the cloud service provider
  • Seamless integration between on-premise and cloud infrastructure in order to share workload at peak times

Looking at the above challenges, it is clear that organizations need to do a thorough due diligence to identify:

  • Data and network security requirement (data encryption, firewall etc.)
  • Tool usage requirement, storage requirement (on-premise or cloud)
  • Robust change management in order to track and coordinate changes between the two environments
  • Fail-safe infrastructure set up so that the availability of the tool is not hampered
  • A robust asset and configuration management practice to track the assets within business control and their dependencies on assets in the public cloud
  • A framework defining governance, statutory and support requirements

Ideally, organizations need to follow an approach that incorporates the aforesaid requirements early on during the discovery and design phase.
My next post will cover the implementation approach for Hybrid ITSM along with the mitigation strategies for some of the common challenges.

November 7, 2013

DevOps: Befriending the alien

We are living in interesting times where organizations are re-thinking their commitment to value. There is more focus on the question: "Are we doing this right and creating value?"

 

Globally, production cycle times are getting shorter and time to market for any product is expected to be "quick". There is a concerted focus on automation, which has given birth to relatively new jargon such as DevOps, Orchestration, etc. Despite decades of process and policy re-engineering, we run the risk of missing out on success without automation systems to support us.


So what are we trying to bring into our lives?


DevOps is an interesting practice that has silently invaded the IT professional's sphere of influence - almost like the friendly neighborhood alien made famous by Steven Spielberg.

Let us figure out for a minute what DevOps is and why it makes a difference to us here.


"DevOps is associated with a collaborative set of practices that "influences" IT Development, Testing & Operations/Service Management staff to join hands and deliver high quality IT Services/applications to the end-users "more frequently & consistently".


Based on my recent experiences with clients and prospects, I can say that nearly every global IT organization has heard about it and is interested in exploring this 'jargon' (I call it jargon since there is no official framework authoritative enough to set a global definition and prescription for implementing DevOps).

 

I have a rather radical observation and interpretation here. Innovations in technology, mainly in the Cloud, Mobility & Social space, have taken a big lead over people-practice maturity levels. There are expectations now to roll out frequent application/service releases - in some cases, a daily release.


This has resulted in the need for more "people centric" development methodologies that sometimes require radical shifts in organizational philosophy. For example, how many of us have actually seen application development and IT operations members sitting together in the same room, working daily to roll out regular releases?

 

In the next couple of years, the debate about how much local collaboration is required to make DevOps a realistic goal is likely to continue. Again, technology has moved ahead of the game here, as can be seen among the DevOps tool vendors, who aggressively claim to make this happen.


There are tools in the market that claim to automate the entire software delivery process, provided the developer writes the code (often using re-usable components), the tester uploads the test cases at the same time the code is written, and the environment/infrastructure is readily available on demand. In a nutshell, you can check the code into a repository and go home; the systems take care of the rest and make the new release available to the end-users.
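To picture that claim, here is a deliberately toy Python sketch of such a pipeline; the stage commands are placeholders, and real delivery automation would live in a CI/CD toolchain rather than a script like this:

```python
# A deliberately toy sketch of the 'check in and go home' pipeline: run each
# delivery stage in order and stop at the first failure. The stage commands are
# placeholders; real delivery automation lives in a CI/CD toolchain.
import subprocess
import sys

STAGES = [
    ("checkout", ["git", "pull", "--ff-only"]),
    ("test", ["python", "-m", "pytest", "-q"]),
    ("package", ["echo", "packaging release..."]),   # stand-in for the build step
    ("deploy", ["echo", "deploying release..."]),    # stand-in for the release step
]

for stage, cmd in STAGES:
    print(f"== {stage} ==")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"{stage} failed; release halted")

print("new release available to end-users")
```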

But reality is different. It will not be so easy to adopt this disciplined practice - it is like applying moral science lessons in a chemistry lab!


The success of this concept hinges on the level of understanding of all the stakeholders involved and befriending this concept - however alien it may sound. (In the end, even the scientist ended up liking ET!)