Application Services provides a platform for IT Development and Maintenance professionals to discuss and gain insights into best practices, process innovations and emerging technologies that will shape the future of this profession.

September 16, 2016

Future of enterprise web applications: Pervasive next-generation JavaScript

Author: Arshad Sarfarz Ariff, Technology Architect

No one would have thought that a 10-day project, created at Netscape by Brendan Eich in 1995, would turn out, 20 years later, to be the frontrunner for building enterprise web applications. Today, JavaScript leads the race for building isomorphic web apps. An isomorphic application is one whose code can run both on the server and on the client. This was made possible primarily by Node.js - an open-source, cross-platform JavaScript runtime environment built on Chrome's V8 JavaScript engine, which opened the doors of JavaScript to server-side coding.

Continue reading "Future of enterprise web applications: Pervasive next-generation JavaScript" »

September 14, 2016

Minimizing risks implies investments in automation for next-gen underwriters

Author: Naveen Sankaran, Senior Technology Architect 

One of the main objectives of software is to automate work that would otherwise be done manually. This has multiple benefits, including long-term cost reduction, increases in productivity and profits, and the ability to channel human effort towards more important work.

Continue reading "Minimizing risks implies investments in automation for next-gen underwriters" »

July 18, 2016

My experience with Bare metal provisioning: OpenStack Ironic

Cloud! The name itself says a lot; it needs no explanation. But think about what came before the cloud. Yes - virtualization. The entire community was amazed by the capabilities and features that virtualization technology provided. The ease of maintaining infrastructure and the reduction in cost were truly awesome. No doubt about it.

However, as technology evolved further, the cloud emerged - and, surprisingly, it spread across the whole IT sky in a very short span of time. Now everyone talks about the cloud: what, why, how and so on. Most organizations and products are now moving to the cloud and reaping its benefits.

So, what next? When we talk about the cloud, many people raise their eyebrows and ask about computing performance - and for that I have an answer: bare metal provisioning in OpenStack, aka Ironic!

Ironic: the OpenStack bare metal hardware provisioning service

Today, I will shed light on the setup and the challenges we faced while implementing it across projects.

As you might already be aware, the main purpose of the Ironic service is to provision hardware based on a given configuration and let the guest operating system be installed on it remotely, so that end-to-end (E2E) infrastructure provisioning is done.

Components:

• Ironic has three major components:
    - Ironic API: exposes the service and talks to the Nova compute service
    - Ironic conductor: does the actual work, talking to the other OpenStack services and to the different drivers
    - Ironic DB: stores node and deployment state

Configuration:

• Make sure the authentication system is in place before executing any OpenStack command.
• You need to download the RC file from the Horizon dashboard and source it.
• Actual command: source server-openrc
• This file contains all the variables required to locate each service and its URL. It asks for a password once you enter the command; you need to enter the admin password if you are using the admin user's RC file.
• Every user has their own RC file, which contains information related to their tenant, projects, credentials, etc.
• You need to create the endpoint for the service. The service type is baremetal and the service name is ironic (see the command sketch after this list).
• The Ironic API and Ironic conductor services can be on different machines. The Ironic conductor can be installed on many machines, but their versions should be the same for everything to function properly.
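For reference, a minimal sketch of the service and endpoint registration (the controller host name and region are placeholders, and the exact openstack CLI syntax varies by release):

    source server-openrc
    # register the bare metal service with the identity service
    openstack service create --name ironic --description "Bare Metal service" baremetal
    # the Ironic API listens on port 6385 by default
    openstack endpoint create --region RegionOne baremetal public http://controller:6385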

Database:

• MySQL is used to store all the data; since the underlying server is MariaDB, the MariaDB prompt appears for all MySQL commands.
• An ironic database and an ironic user have to be created (see the sketch after this list).
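A minimal sketch of the database and user creation, following the usual OpenStack install-guide pattern (IRONIC_DBPASS is a placeholder):

    mysql -u root -p <<'EOF'
    CREATE DATABASE ironic CHARACTER SET utf8;
    GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'localhost' IDENTIFIED BY 'IRONIC_DBPASS';
    GRANT ALL PRIVILEGES ON ironic.* TO 'ironic'@'%' IDENTIFIED BY 'IRONIC_DBPASS';
    EOF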

RabbitMQ configuration:

• On the first attempt, we found that the RabbitMQ portal was not working. To fix that, we had to install the management plugin, after which it started working (see the command after this list).
• Get the RabbitMQ username and password from the Nova configuration file.
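Enabling the management plugin is a one-liner (the portal then serves on port 15672 by default; older RabbitMQ versions may need a broker restart for it to take effect):

    rabbitmq-plugins enable rabbitmq_management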

Key challenges:

• While creating the ironic database, we faced an issue with the SQL connection: the service was not able to access MySQL. The reason was that, in the /etc/ironic/ironic.conf file, the connection entry pointed to the IP of the controller where the identity service runs. Instead, it should contain the address that appears in /etc/mysql/my.cnf (see the snippet after this list).
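A sketch of the relevant section of /etc/ironic/ironic.conf (the mysql+pymysql connection-string format is the usual install-guide convention; the host and password are placeholders):

    [database]
    # must point at the address MySQL actually binds to
    # (see bind-address in /etc/mysql/my.cnf), not at the identity service host
    connection = mysql+pymysql://ironic:IRONIC_DBPASS@<db-host>/ironic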

Drivers:

• Ironic supports plenty of drivers to provision the hardware and install the OS. Various third-party providers have their own proprietary software and drivers that work with Ironic.
    - The popular one is IPMI.
    - We installed the IPMI utility.
    - We configured the service as-is and restarted it.
• It turns out that ipmitool needs IPMI controller hardware (a BMC) to be present on the machine being provisioned (see the check after this list).
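Before enrolling such a node, it is worth checking that its BMC answers over the network (the address and credentials here are placeholders):

    ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> power status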

Configuring the Compute service:

• The nova.conf file needs to be modified to add the parameters required for Ironic to work (see the sketch after this list).
• Sometimes a nova.conf file is present on both boxes - the compute node and the controller node - which is a bit confusing. The file on the node where the nova-scheduler service runs is the main one, and is responsible for all the changes related to Ironic.
• Once all the configurations are in place, restart nova-scheduler on the controller node and nova-compute on the compute node.
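A sketch of the Ironic-related nova.conf settings, roughly following the Mitaka-era install guide (option names and values changed across releases, so treat these as illustrative; controller and IRONIC_PASSWORD are placeholders):

    [DEFAULT]
    compute_driver = nova.virt.ironic.IronicDriver
    scheduler_host_manager = nova.scheduler.ironic_host_manager.IronicHostManager
    # bare metal nodes are never overcommitted
    ram_allocation_ratio = 1.0
    reserved_host_memory_mb = 0

    [ironic]
    api_endpoint = http://controller:6385/v1
    admin_username = ironic
    admin_password = IRONIC_PASSWORD
    admin_tenant_name = service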

Enrollment process:

• While enrolling any node, we need to provide the Ironic API version. Set the environment variable: export IRONIC_API_VERSION=1.11
• We need to register the node's MAC address with the Ironic service. If there are multiple NICs, get the MAC address of the NIC that is connected to the LAN.
• The node should be in the available state so that the compute service can see it and provision the hardware. If the node is in any other state, the compute service won't see it and it cannot be provisioned.
• A node cannot be moved directly from the enroll state to the available state; it must first move to the manageable state and then to the available state (see the sketch after this list).
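Putting the enrollment steps together, a minimal sketch using the 2016-era ironic CLI and the pxe_wol driver recommended below (the node name, UUID and MAC address are placeholders, and flag spellings may differ slightly between client versions):

    export IRONIC_API_VERSION=1.11
    # enroll the node with a driver, then register its LAN-facing MAC address
    ironic node-create -d pxe_wol -n node-1
    ironic port-create -n <node-uuid> -a <mac-of-lan-nic>
    # enroll -> manageable -> available, in that order
    ironic node-set-provision-state node-1 manage
    ironic node-set-provision-state node-1 provide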

To summarize, bare metal provisioning is really cool stuff when you are designing a private cloud and planning to deploy an application that requires high-end computing and is very sensitive to computing performance. "pxe_wol" is the easiest driver for learning how the Ironic service works and getting acquainted enough to understand Ironic's capabilities. As I mentioned earlier, there are plenty of other drivers; however, they need special hardware support and configuration to get working. Try "pxe_wol" first and move forward from there.

You can refer to a typical OpenStack Ironic conceptual design here:

http://docs.openstack.org/developer/ironic/deploy/user-guide.html

References:

https://wiki.openstack.org/wiki/Ironic

https://developer.rackspace.com/blog/how-we-run-ironic-and-you-can-too/

https://software.intel.com/en-us/articles/physical-server-provisioning-with-openstack

April 4, 2016

IT Transformation is Business Transformation! Why? How?

Author: Ravi Vishnubhotla, Senior Technical Architect (Insurance - FSI)


Today, IT transformation (IT strategy / application portfolio rationalization) has become synonymous with business transformation. This post discusses why this has happened and how it can be achieved.

Why is IT Transformation the same as Business Transformation today?
As part of IT transformation, businesses and clients go through a 3-5 year IT strategy to replace their existing legacy technology with newer or better technology. IT transformation is assessed in terms of people, process and technology. The sponsorship is mostly within the IT department, and implementation of the strategy is completely IT driven. However, in today's era, where growth is measured in terms of revenue, profit, customer service and SLAs, IT becomes an enabler for the business to achieve these goals. So when the business vision, mission and goals are considered, IT transformation automatically becomes the same as business transformation. Business users play an active role during this process and act as a key driver for its successful completion.

How can Business Transformation be achieved?
Business transformation can be achieved by using the following methodology. It is one of several approaches, based on my experience, that can be applied to small or medium-sized businesses, and it can vary depending on the business or industry. The key principle is to define the steps from a people, process and technology perspective.


[Image: overview of the business transformation methodology]


  • Business Vision
    - Obtain business stakeholders' vision of the future of their business: where do they expect the business to be a few years from now (generally 3-5 years, depending on the size of the business or industry)?
    - Understand the overall organization and the business
    - Understand the core services and business processes
    - Capture the key concerns / challenges being faced in the business
    - Define the key driving factors of the business
    - Create a vision document and a core stakeholder group to oversee the transformation process
  • Current State Assessment (CSA)
    - Understand the as-is business processes and business applications
    - Conduct discussion sessions with business stakeholders
    - Document all issues, manual processes and areas of pain
  • Future State
    - Define the to-be state for IT systems, infrastructure and business processes
    - Apply solution(s) to the business vision, manual processes and pain points
    - Consider modern business and IT trends, industry standards and guidelines
  • Gap Analysis
    - Define what it will take to go from the current state to the defined future state
    - Consider new business processes and new IT applications
    - Apply disruptive IT solutions, e.g. mobility, automation
  • Define IT Solution Architecture
    - Define solutions for the various gaps identified and the new processes/applications considered
    - Perform an initial product evaluation for solutions if needed; consider buy vs. build
    - Identify the logical (functional view) and physical (system view) IT solution and model
  • Develop CBA and Roadmap
    - Estimate the timelines and effort for the various solutions defined
    - Perform a cost analysis considering the price of infrastructure, IT systems (product / in-house development), hiring new people, and introducing new processes
    - Break the solutions down into various projects and assign stakeholders from either IT or the business
    - Propose a roadmap to roll out the solutions
  • Review and Finalize Strategy
    - Review the proposed transformation process as a draft, via a presentation or a document
    - Conduct sessions with the various business unit and IT stakeholders
    - Agree on the proposed solution and roadmap
    - Refine and resolve any open issues or questions
    - Baseline the strategy for CTO and CEO approval

To summarize, this is how the business transformation process will look:


[Image: summary of the business transformation process]

The steps defined here are based on my experience working with various customers and clients. The process or approach can vary and will differ depending on the business and industry. This is not a one-size-fits-all methodology, but it should give you a fair idea of what it takes to achieve business transformation from an IT perspective.

Continue reading "IT Transformation is Business Transformation! Why? How?" »

September 28, 2015

Macro to Microservice Architecture - Shrink or Sink? Part-2

Author: Archana Kamat, Product Technical Architect

In my previous blog, "Macro to Microservice Architecture - Shrink or Sink? Part 1", we explored the basic characteristics of MSA and how it differs from service-oriented architecture (SOA). While MSA enables greater service independence, it cannot be applied to all business scenarios.

Continue reading "Macro to Microservice Architecture - Shrink or Sink? Part-2" »

Macro to Microservice Architecture - Shrink or Sink? Part-1

Author: Archana Kamat, Product Technical Architect

The world of software service architecture is witnessing rapid change owing to a new paradigm named Microservice Architecture (MSA). There are several debates and questions about this newcomer. Sample these:

Continue reading "Macro to Microservice Architecture - Shrink or Sink? Part-1" »

April 28, 2015

Agile Contracts


Continue reading "Agile Contracts" »

February 13, 2015

Data Analytics in IoT/M2M

IoT is 'the' happening thing right now and is expected to stay that way as we move further towards a connected world. It held, and still holds, a top spot on most industry buzzword lists published in 2014 and 2015. But not many - including technology people and enterprises - are aware of what IoT really means to them or to the economy, or of how to monetize the immense volume of data generated by the constituents of IoT (namely sensors, devices and microchips).

Let's look at some of the use cases in diverse industries where IoT can be deployed to get a perspective:

Fleet management:

Global logistics operators, who run a wide variety of fleets across rail, road, air and sea routes, are now realizing that effective utilization of their fleets can save millions of dollars. For instance, data on speed, acceleration, braking, temperature and fuel can be collected and analysed in real time to identify areas where efficiency can be improved.

Using IoT intelligence, logistics companies can see how the choice of route and acceleration patterns affects vehicle performance and fuel usage. In addition, they can discover the impact of driver performance, and how driver behaviour affects not only fuel efficiency but also the longevity of the asset with regard to maintenance. Telematics, GPS data and local map software can also be combined to make companies aware, in real time, of routes that may be affected by traffic jams, speed traps or weather conditions.

Real estate/Building management:

Imagine working in a skyscraper that adjusts temperature and humidity to suit the number of people in your office, provides access to designated places, and keeps elevator and power outages to a minimum. Now imagine that all of this can be controlled remotely for all the buildings owned by an enterprise, or run by a building management company, across the globe.

The concept of the Internet of Things (IoT), where everyday things are connected to the Internet, presents unprecedented opportunities for the management and operation of real estate. With IoT, any part of a building can become a point from which to capture and send data. When this data is analysed and made actionable, it creates opportunities to explore, relate to, and interact with buildings in amazing new ways - to move from building management to full building automation.

So what is crucial in each of the use cases above? It's obvious: gathering and analysing data.

The IoT data gathered every fraction of a second can be complex. For ages, enterprises have not completely exploited the vast amounts of data that they gather on an ongoing basis, and now IoT will bombard them with even more.

Of all the big numbers being thrown around about IoT, here are two I picked:

$15 trillion - the economic value expected to be generated by IoT by 2030

$5+ trillion - the 30-40% of the total IoT market that can be attributed to analytics

Types of analytics:

Data collected in IoT can be processed and analysed under two different methodologies: predictive and prescriptive analytics.

Predictive analytics: Predictive analytics uses a variety of statistical, modelling, data mining and machine learning techniques to study recent and historical data, thereby allowing enterprises to make predictions about the future. Its purpose is NOT to tell you what will happen in the future, but to predict or suggest what might happen.

For example: in the fleet management use case mentioned above, predictive analytics can be used to flag routes that are likely to have traffic jams during certain periods.

Prescriptive analytics: The emerging technology of prescriptive analytics goes beyond descriptive and predictive models by recommending one or more courses of action and showing the likely outcome of each decision.

For example: in the fleet management use case mentioned above, prescriptive analytics can be used to recommend which routes the driver should avoid and which route to take based on the time of day, and to display the estimated travel time for the recommended route.

Decentralization of analytics processing:

Real-time analytics performed on the large volumes of data streaming in from connected devices - sensors, Wi-Fi connections and the like, spread across geographies - will generate tremendous value with tremendous impact. It would also be much more efficient to decentralize data storage, processing and analytics, since there may simply not be enough network bandwidth in the future to transfer all the data in real time. For instance, think about a ship in the middle of the ocean: do you really want to transfer all of the (low-value) log data from every sensor, machine, switch, etc. to a central analytics installation? The cost of transferring data from all over the world in real time to a central location is much higher than the savings from the economies of scale of a centralized solution. Furthermore, network latencies and interruptions rule out centralized solutions.

The challenge for IoT analytics vendors:

The challenge for IoT business intelligence / analytics vendors is to create new tools that not only allow companies to capitalize on their own data, but also aggregate sensor data gathered from sensor networks and public and private clouds, and provide embedded predictive and prescriptive analytics services. These services should support enterprise decision makers in crucial decision processes, reinforcing their ability to continuously improve the company's financial performance, keep costs down and improve the customer experience.

To thrive in the new environment, enterprises need solutions that use in-memory computing to harness the power of Big Data and advanced analytics, helping them draw insights from - and be more responsive to - the needs of digitally connected customers.

January 27, 2015

Striking the Balance: Waterfall and Agile - Part 4

In Part 1 of this blog series, we discussed how business and application related considerations affect the selection of an SDLC methodology:
http://www.infosysblogs.com/application-services/2013/09/striking_the_balance_waterfall.html


In Part 2, we saw the impact of the agile team's execution location, as well as the prerequisites considered while the team is formed, paving the way for an inclination towards a preferred type of methodology:
http://www.infosysblogs.com/application-services/2013/12/striking_the_balance_waterfall_Part2.html

In Part 3, we analyzed the existing situation around an organization's current/proposed technology and tool repositories and automation, along with its future investment roadmap, which helps drive methodology choices:
http://www.infosysblogs.com/application-services/2014/04/striking_the_balance_waterfall_3.html

Coming to the last part of this blog series, we will try to unfold how to balance business portfolios - within projects and programs - between discipline and control on one side and agility on the other, and explore whether agile and waterfall are really mutually exclusive approaches or whether they can share best practices and coexist.

Today's organizations maintain complex business portfolios, differentiated by customer segment, in a volatile global business environment. IT has to build applications that develop, enhance, maintain and grow business features to support business demand.

The following scenarios illustrate how organizations can adopt "agile and waterfall."

Organizations that have to continue with waterfall can still gain by adopting appropriate agile practices
• The customer's acceptance criteria can be used to redefine milestone content.
• Daily stand-ups can improve day-to-day planning and team status communication.
• Retrospectives at each milestone bring a faster feedback cycle to inspect and adapt, improving the remaining work according to the principle of 'fail early, fail often, and improve incrementally and continuously'.
• Automated testing can validate requirements for completeness, consistency, traceability, and conformance to specifications.
• Demonstrations of working software around high-risk areas can replace design reviews. Emphasis on validation over verification, and the use of simulators for early performance validation, can trigger early feedback for the team to inspect and adapt.
• Sharing the benefits achieved - whether from automation, continuous improvement, reuse or accelerators - with the team enhances morale and motivation.
• Customer collaboration and early feedback can resolve risks, issues and impediments later in the project lifecycle.
• Time-boxing meetings (planning, progress, governance) and impediment resolution helps slice and distribute work into manageable pieces, triggering early and faster feedback.

Organizations that have started the agile journey and realized the need to optimize using waterfall practices
• SDLC involves complex knowledge work, which strains the agile preference for light documentation: the team will face issues when detailed documentation is unavailable. This can pose challenges in retaining knowledge (tacit or explicit) in scenarios like transition handover, attrition or scaling. It can be tackled by managing knowledge through 'just-enough / fit-for-purpose' documentation using wikis, intranets, blogs and knowledge repositories.
• When business features are evolving and a software release will contain a large chunk of business functionality - owing to the nature of the business or to resource constraints - maintaining a detailed release plan, with baseline iteration activities for architecture and design, can give stakeholders a holistic view.
• Release planning with specific iterations can help in effective resource planning for key team members with multiple commitments.
• Using design patterns and architectural solutions, rather than the simplest design, can help accommodate evolving business requirements.
• Milestones based on a 'definition of done' for tasks, stories and iterations within the release plan help communicate project progress better to all stakeholders.
• A checklist of release and deployment activities, and planning for integration within the release plan, helps all teams avoid last-minute challenges.

It is important to note that the above illustrative scenarios will boost efficient communication and collaboration at the program level and tackle challenges arising from a portfolio mix of both agile and waterfall projects.

Organizations that have both agile and waterfall programs, dependent on each other due to constraints such as technology, shared services, application complexity, etc.
• It is expected that, during the feasibility studies and the SDLC methodology selection phase, management identified and communicated the business, technical infrastructure and multivendor dependencies to the relevant stakeholders.
• Projects were set up to adopt either the waterfall or the agile methodology based on various considerations, and management established the high dependencies between projects.
• To achieve the desired business functionality, the agile and waterfall teams require mature team-level communication and collaboration.
• Agile projects, running in time-boxed iterations, will develop faster than waterfall ones and be ready for integration sooner.
• Agile teams will also run iterations knowing full well that all the information is not available upfront but will come through as the iterations progress.
• Essentially, agile teams work with certain assumptions about dependencies and associated risks/issues, and move towards integration.
• Waterfall projects will progress with a phase-gate approach; they fall short on speed in showing an end product, and while they carry fewer assumptions along the way, there is no real-time feedback on working software until the end.
• Careful synchronization of the agile release plan and the program plan, with milestones, can help all teams succeed.
• Setting up a program release team with regular Scrum of Scrums (SoS) / SAFe / LSS / Disciplined Agile Delivery progress meetings can mitigate dependency, integration, velocity and resource risks, as all teams can validate their assumptions on a regular basis and take corrective action accordingly.

Organizations that have successfully separated agile and waterfall work into portfolios that are minimally dependent on each other
• Project and program tracks for waterfall and agile projects can run in parallel if they are found to be almost independent in terms of business, technical infrastructure and multivendor considerations.
• Both tracks can progress with minimal interaction between teams throughout the project/program lifecycle.
• The agile team can adjust release plans to align with the waterfall milestones for better integration, ensuring portfolio consistency in terms of reporting, governance and metrics monitoring.

Summary
It is impractical to treat the waterfall-or-agile selection as binary. The methodology selection dimensions discussed in this blog series can help organizations evaluate critically, keeping in mind their long-term strategic and tactical business objectives, culture and business environment against the risks, complexity and constraints of individual projects. Thus, the methodology should be chosen to suit the project, rather than forcing projects to suit the methodology.

Continue reading "Striking the Balance: Waterfall and Agile - Part 4" »

October 30, 2014

How can 'Kanban' help in Agile development?

One of the reasons many agile projects fail to achieve their release targets could be that the team signs up for too many work items without moving each item into the 'Done' state. This could be due to waiting on dependencies with other work items, items pending with the testing team, environment issues or build-break issues. During the last weeks of a sprint there is always a rush to claim the maximum story points committed for the sprint, so the team hastily starts new work items even though earlier ones are still pending. For example: the developers have completed coding for a user story and it is pending testing, but the testing team is unable to test it due to an environment issue. The developers, as discussed earlier, continue to sign up for newer user stories in order to claim maximum story points. In the end, the team might achieve the target velocity for the sprint, but there is no potentially shippable product. So velocity alone may not be sufficient to track the success of a sprint, and we need other metrics to track in order to achieve the sprint goal.

Teams can overcome this kind of scenario through Kanban. Kanban enforces continuous improvement and lean practice through metrics - WIP, cycle time and throughput - which are transparent and actionable. Transparency here means visibility into the team's progress. Let's look at the definitions of these metrics as they apply to agile methodology.

WIP (work in progress): all the tasks that are between the 'To Do' and 'Done' statuses on the sprint task board.

Cycle time: the total time elapsed for a task to move from the 'To Do' status on the sprint task board to the 'Done' status.

Throughput: the number of work items reaching 'Done' per unit of time (day, week or iteration).

These three metrics are related through Little's law, which states that:

Average cycle time = Average work in progress / Throughput
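As a quick worked example (illustrative numbers only): if a team has, on average, 8 tasks in progress and completes 2 tasks per day, then

    Average cycle time = 8 tasks / 2 tasks per day = 4 days

Halving the WIP to 4 tasks at the same throughput halves the average cycle time to 2 days.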

Thus, a change in any of these parameters results in a change in the others. If we want to decrease the cycle time of tasks, we need to decrease the WIP. Hence, to bring about a positive change we need not undertake a complex transformation; we simply control the number of things being worked on at any point in time. For example: in the scenario above, when there is an environment issue on the testing team's side, instead of signing up for newer user stories the developers should help the testing team resolve the issue - and perhaps even test the user story themselves - to help clear the backlog that piled up because of it. This is the whole idea of agile: team spirit and a self-organizing, cross-functional team whose members can switch roles when the situation demands it to achieve team goals, instead of the typical handoffs and blame games of traditional development models.
