The Infosys Labs research blog tracks trends in technology with a focus on applied research in Information and Communication Technology (ICT)

June 29, 2018

Adaptive Inspection: Hurricane Season

In 2017, Hurricanes Harvey and Irma hit the US coast with winds exceeding 130 miles per hour, leaving in their wake 103 people dead and an estimated $200 billion in damage. The double whammy of two severe hurricanes striking within two months of each other was expected to slow US GDP growth by 1%.

In Florida alone, total insured losses from Hurricane Irma were estimated at more than $5.8 billion, spread across more than 689,000 residential property claims and 51,396 commercial property claims. Insurance companies were inundated and scrambled to process the claims submitted by their customers. The frenzy was aggravated by the fact that Hurricane Harvey had hit Texas less than three weeks before Hurricane Irma hit Florida.

One of the greatest challenges that insurers faced during the 2017 hurricane season was a shortage of adjusters. The first step in processing a claim was to have an adjuster visually assess the damage and estimate the loss. Unfortunately, most adjusters were in Texas assessing damage from Hurricane Harvey, leading to a shortage of adjusters in Florida and an increase in adjuster fees in the range of 15% to 25%. The shortage was only amplified by lack of access and safety concerns.

Adaptive Inspection technologies, which combine artificial intelligence (in the form of computer vision and image analytics) with edge computing, enable insurance companies to use autonomous agents such as drones to inspect property claims more efficiently and effectively. Drones can fly close to structures and capture minuscule details in high-resolution images, providing a more thorough report than human adjusters while reducing inspection time from an hour to about 15 minutes. Edge computing capabilities enable the drones to avoid obstacles and reach the location, and the captured images feed the image analytics that analyse the damage, estimate the loss and create coverage reports.
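
As a purely illustrative sketch (not a description of any vendor's actual system), the analytics step described above might roll per-image damage classifications up into a claim-level estimate along these lines; classify_damage() is a placeholder for a trained computer-vision model, and the severity-to-cost table is an assumed example, not real actuarial data.

    # Hypothetical sketch of the image-analytics step of an adaptive inspection pipeline.
    # classify_damage() stands in for a trained computer-vision model; the
    # severity-to-cost table below is an illustrative assumption, not real data.
    from dataclasses import dataclass
    from typing import List

    SEVERITY_COST = {"none": 0, "minor": 2_000, "moderate": 10_000, "severe": 40_000}

    @dataclass
    class ImageFinding:
        image_id: str
        severity: str      # one of the SEVERITY_COST keys
        confidence: float  # model confidence, 0.0 to 1.0

    def classify_damage(image_path: str) -> ImageFinding:
        """Placeholder: score one drone image with a trained vision model."""
        raise NotImplementedError("plug a real model in here")

    def coverage_report(findings: List[ImageFinding]) -> dict:
        """Aggregate per-image findings (assumed non-empty) into a claim-level summary."""
        estimate = sum(SEVERITY_COST[f.severity] for f in findings)
        worst = max(findings, key=lambda f: SEVERITY_COST[f.severity])
        return {
            "images_reviewed": len(findings),
            "estimated_loss_usd": estimate,
            "worst_finding": worst.image_id,
        }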

This process renders the erstwhile error-prone, paper-based process redundant, speeds up claims handling and prevents adjuster injuries. The technology can also be used to assess property damage in calamity-affected areas before claim requests are received, in order to speed up the process and reduce customer grief.

Companies like USAA, AIG and Allstate have already deployed drones that let adjusters view hard-to-reach areas from a safe location and analyse the images. The technology has matured rapidly over the years and stands to change the way adjusters and insurance companies assess claims, and more broadly the way organizations all over the world inspect their physical assets.


References:

https://news.nationalgeographic.com/2017/11/2017-hurricane-season-most-expensive-us-history-spd/

https://www.cnbc.com/2017/09/15/texas-and-florida-face-economic-blow-from-hurricanes-harvey-and-irma.html

https://www.insurancejournal.com/news/southeast/2017/11/30/472582.htm

https://weather.com/storms/hurricane/news/2017-11-03-hurricane-200-billion-dollar

https://www.inc.com/will-yakowicz/storm-damage-irma-harvey-drones.html

https://www.mckinsey.com/~/media/McKinsey/Industries/Financial%20Services/Our%20Insights/Insuring%20hurricanes%20Perspectives%20gaps%20and%20opportunities%20after%202017/Insuring-hurricanes-Perspectives-gaps-and-opportunities-after-2017.ashx

http://www.govtech.com/public-safety/How-Fleets-of-Drones-Are-Helping-Assess-Damage-from-Hurricane-Harvey.html

https://www.cio.com/article/3252093/financial-it/taking-to-the-skies-using-drones-in-the-insurance-industry.html

https://www.trenchless-australasia.com/2018/03/20/melbourne-water-uses-drones-to-monitor-waterways/

http://money.cnn.com/2017/09/10/news/economy/hurricane-irma-harvey-economic-damage/index.html

June 4, 2018

Blockchain Next: Social Media and AI

The Facebook data breach saga became a global phenomenon with reports suggesting that more than 87 million user profiles were compromised. The company lost over 14% of its market capitalization with #DeleteFacebook trending on Twitter. With over 3 billion active users on various social media platforms, this data breach might just be the tip of the iceberg.

Continue reading "Blockchain Next: Social Media and AI" »

May 3, 2018

Facial biometrics going mainstream...

Recognizing someone by sight has been a building block of human interaction and, more importantly, has helped conduct commerce through the course of known history. It has helped build trust over time and eased many interactions and transactions. Of course, humans carry their very own powerful computer that instantly helps them recognize, recollect, build context and communicate effectively. In recent times, however, interactions with machines have increased substantially, bringing in the need for artificial means to establish identity - mechanisms such as cards, passwords, fingerprints etc. While these have helped to an extent, humans have had to learn new ways to interact with systems while also opening up potential loopholes for exploitation.

Continue reading "Facial biometrics going mainstream..." »

Quantum Computing- The next computing revolution

In a conference hosted by MIT's Laboratory for Computer Science in 1981, Richard Feynman proposed the concept of computers that would harness the strange characteristics of matter at the atomic level to perform calculations. Last year, IBM opened up its quantum computing platform, the IBM Q Experience, to encourage researchers and enterprises to explore various possibilities of quantum computing. Companies like Google, Microsoft and Intel are also in the race to build their own quantum computers and leverage their exceptional computing capabilities.

Continue reading "Quantum Computing- The next computing revolution" »

March 29, 2018

Cognitive System-Mimicking Human Understanding

With advancements in artificial intelligence algorithms, it is now possible for machines to mimic human understanding. They are able to analyze and interpret information, make deductions and identify patterns from information sets, analogous to the human brain. This new generation of machines is categorized as cognitive systems. These systems aggregate machine intelligence, predictive analytics, machine learning, natural language engines and image/video/text analytics to enhance human-machine interaction.

Continue reading "Cognitive System-Mimicking Human Understanding" »

March 12, 2018

Trends and Innovation in HR

"Human resources isn't just a thing we do, it's THE thing that runs our business"
- Steve Wynn, Entrepreneur

The importance of the HR department has, until recently, been overlooked. The HR department initially handled record keeping, compliance with laws and regulations, and compensation & benefits for employees. Over the past decade, with the onset and adoption of technologies and automation, the HR department has evolved remarkably. In addition to payroll process automation and streamlined on-boarding, new platforms and technologies have enhanced talent management systems, allowing more focus on personalized employee engagement.

Continue reading "Trends and Innovation in HR" »

January 31, 2018

Emerging Tech in Airports

Air travel has become the most favored and convenient mode of long-distance travel, with the 20 busiest airports in the world moving more than 700 million travelers last year.

Meanwhile, airports are becoming more than gateways for people to travel through on their way to their destinations. Today, airports offer hospitality services, duty-free shopping and dining experiences to billions of travelers who walk through their doors. Airports are leveraging technology to engage with travelers passing through their facilities and to provide customized, seamless services.


Continue reading "Emerging Tech in Airports" »

January 30, 2018

Computer Vision enabling a Retail Utopia

Computer vision (CV) is the technology that enables a machine to 'see' and 'understand' its surroundings, as well as or even better than humans. As per the British Machine Vision Association, "computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images (video). It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding." It plays a vital role in providing innovative, immersive and futuristic solutions across industries, including traffic management, surveillance, medical image analysis, payments, autonomous vehicles and quality control.

Continue reading "Computer Vision enabling a Retail Utopia" »

December 16, 2013

Data Virus Guard

Clients are, or soon will be, ingesting all sorts of data thanks to information brokerages and the Internet of Things (IoT) and processing that data in novel ways thanks to the Big Data movement and Advanced Analytics.  Decisions made through business intelligence systems require that the data being used is trusted and of good quality.  How will companies ensure that the data being ingested and acted upon is untainted?  This has been an interest of mine as I work to protect the integrity of my clients' decision making processes and systems.

Last year I shared a forward-looking concern about the concept of a data virus: data that has been purposefully manipulated to render operations on an entire data set flawed, and that perpetuates its induced error. As noted in the original What will you do about a Data Virus? blog, a tricky situation arises when data fed into the enterprise is determined to be corrupted. How do you unroll all the downstream systems that have made decisions based on the bad data? Containing this data contamination is tricky. Many legacy enterprise systems simply don't have the ability to "roll back" or "undo" decisions and/or persisted synthetic information. So, the first and obvious line of defense is blocking, or sequestering, suspect data before it enters the enterprise. Much as a network firewall blocks suspect requests to ports or machines in your network, a similar concept can be employed ... a Data Virus Guard, if you will ... in many situations as a first line of defense.

Please keep in mind that my focus has been on streaming sources of data, which are typically sensor-based (maybe a velocity reading, or temperature, or humidity, or ambient light, ...), associated with a thing (a car, train, or airplane, for example) and come in for processing in a streaming manner. What I'm sharing in this blog could be applied to other kinds of "streaming" things such as feeds from Social Web systems, for example.

What is a Data Virus Guard? 
A Data Virus Guard is a logical unit that has the responsibility of identifying, annotating, and dealing with suspicious data. 

Where should a Data Virus Guard be deployed?
A Data Virus Guard should be deployed at the initial ingestion edge of your data processing system, within the data capture construct. The data capture sub-system normally has the responsibility of filtering for missing data, tagging, and/or annotating anyway, so it is the perfect location to deploy the Data Virus Guard capability. If you identify and contain suspect data at the "edge", then you run less risk of it contaminating your enterprise.

How do you Identify a Data Virus?
This area of the Data Virus Guard is what drew my research interest: how do you go about discerning between normal data and data that has been manipulated in some way? The approach I've been taking focuses on steady-state data flows because I'm interested in a generalized solution, one that can work in most cases. If one can discern what constitutes steady state, then deviations from steady state can be used as a trigger for action. More elaborate, and case-specific, identification approaches can easily be created and slotted into the framework I'm proposing.
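
As a concrete illustration of the steady-state idea, here is a minimal sketch of my own (the window size, warm-up period and 3-sigma threshold are illustrative assumptions, not part of any framework) that tracks a rolling window of numeric sensor readings and scores how far each new reading deviates from the window's mean:

    # Minimal sketch: score readings by their deviation from a rolling "steady state".
    # Window size and sigma threshold are illustrative assumptions.
    from collections import deque
    from statistics import mean, pstdev

    class SteadyStateDetector:
        def __init__(self, window_size=500, threshold_sigmas=3.0):
            self.window = deque(maxlen=window_size)
            self.threshold = threshold_sigmas

        def score(self, value):
            """Suspicion score: deviation from the rolling mean, in sigmas."""
            if len(self.window) < 30:           # not enough history yet
                self.window.append(value)
                return 0.0
            mu, sigma = mean(self.window), pstdev(self.window)
            self.window.append(value)
            if sigma == 0:
                return 0.0 if value == mu else float("inf")
            return abs(value - mu) / sigma

        def is_suspect(self, value):
            return self.score(value) > self.threshold

A reading scoring above the threshold would not be silently dropped; it would be annotated as suspect, as described next.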

What kind of Annotation do you do?
As data enters an enterprise, ideally there is metadata that helps maintain data lineage: the source system that produced the data, the "quality" of the data, when the data was generated, when it entered the enterprise, whether it is synthetic (computed versus a sensor reading), and so on. Added to this could be an annotation indicating which Data Virus Guard algorithm was applied (model, version) and the resulting suspicion score.
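
A minimal sketch of what such an annotated record could look like follows; the field names are illustrative, not a standard:

    # Illustrative lineage-plus-suspicion annotation carried with each ingested reading.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AnnotatedReading:
        value: float
        source_system: str       # which sensor or feed produced the reading
        generated_at: datetime   # when the source produced it
        ingested_at: datetime    # when it entered the enterprise
        is_synthetic: bool       # computed value versus raw sensor reading
        guard_model: str         # e.g. "steady-state-zscore" (assumed name)
        guard_version: str       # version of the Data Virus Guard model applied
        suspicion_score: float   # score produced by the Data Virus Guard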

How would the Data Virus Guard deal with suspect data?
Based on the rules of your data policies, data judged as suspect may be set free to flow into your enterprise, discarded as if it never existed, or kept in containment ponds for further inspection and handling. In the first case, if you let it into the enterprise annotated as suspect, data scientists working with the data will see that it is suspect. If you have automated algorithms that make decisions, they could use the suspicion score to bias the thresholds of making a choice.
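
In code, that policy decision could be as simple as the sketch below; the thresholds and the three destinations mirror the paragraph above and are assumptions, not a prescribed design:

    # Route an annotated reading (e.g. the AnnotatedReading sketched earlier)
    # according to data policy: let it flow, discard it, or quarantine it.
    PASS_THRESHOLD = 2.0       # illustrative policy thresholds
    DISCARD_THRESHOLD = 6.0

    def route(reading):
        if reading.suspicion_score < PASS_THRESHOLD:
            return "enterprise"         # flows in; the annotation travels with it
        if reading.suspicion_score >= DISCARD_THRESHOLD:
            return "discard"            # treated as if it never existed
        return "containment_pond"       # held for further inspection and handling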

What are characteristics of a Data Virus Guard?
In the search for "the best ways" to guard against a data virus, a few criteria have emerged to make the system practical. Firstly, it has to work on all common types of data. To be truly useful in an enterprise setting, the Data Virus Guard can't work with only strings or only integers; it must handle all common types to provide true utility. Secondly, its determination of whether data is suspicious must be very fast. How fast? As fast as practically possible, since the half-life of data value is short. This is a classic "risk vs. reward" trade-off, however, and can be decided on a scenario-by-scenario basis. Thirdly, it must have the ability to learn and adjust on its own what constitutes normal, or non-suspicious, data. Without this last capability, I suspect enterprises would start strong with a Data Virus Guard, but it would find itself out of date as other pressing matters trump updating it with the latest data virus identification models. In summary, it must work with all types of data, it must be fast, and it must learn on its own.

How would you implement a Data Virus Guard?
Putting together a Data Virus Guard can be a straightforward endeavor. By blending a stream processing framework with a self-tuning "normal state" algorithm, it would be possible to identify, and annotate, data flows that deviate from some norm (be it values, ranges of values, patterns of values, times of arrival, etc.). One could envision a solution coming to life by using, for example, Storm, the open-source streaming technology that powers Twitter, with a frequency histogram implemented as a Storm "bolt" (the processing unit of a Storm network) to discern out-of-norm conditions.
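
The frequency-histogram idea can be sketched independently of Storm; the logic below is the kind of check that could sit inside a Storm bolt's tuple-processing method (the bucketing scheme, warm-up period and rarity threshold are illustrative assumptions):

    # Sketch of a frequency-histogram "out of norm" check: a value that falls
    # into a rarely seen bucket is flagged as suspect.
    from collections import Counter

    class HistogramGuard:
        def __init__(self, bucket_width=1.0, min_frequency=0.001):
            self.bucket_width = bucket_width
            self.min_frequency = min_frequency
            self.counts = Counter()
            self.total = 0

        def observe(self, value):
            """Update the histogram and return True if the value looks suspect."""
            bucket = int(value // self.bucket_width)
            self.counts[bucket] += 1
            self.total += 1
            if self.total < 1000:                 # warm-up period
                return False
            return self.counts[bucket] / self.total < self.min_frequency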

Admittedly, a frequency histogram would make for a weak Data Virus Guard, but it would get the framework off the ground and be easy to put in place. By using Storm as the underlying stream processing framework, swapping in a more powerful "out of norm" algorithm would be relatively easy. Do you go with a Markov chain, a Boltzmann machine, or even the very interesting Hierarchical Temporal Memory approach from Numenta? That would depend on your system, the characteristics of the data you're ingesting and the rate of false positives (and false negatives) your enterprise can withstand. Of course, you could go even further, apply all three approaches and come up with some weighted average for discerning whether a piece of data is suspicious.
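
If several detectors were run side by side, the weighted average mentioned above might be as simple as this sketch (the weights are illustrative and assumed to be positive):

    # Combine suspicion scores from several detectors into one weighted score,
    # e.g. scores from a Markov-chain, Boltzmann-machine and HTM detector.
    def combined_score(scores, weights):
        total_weight = sum(weights)
        return sum(s * w for s, w in zip(scores, weights)) / total_weight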

Summary
This is a forward-looking post about what we can expect to become an issue in enterprises as companies embrace the concepts of Big Data, Advanced Analytics, the Internet of Things, and true Business Intelligence: a data virus, and what we can do about it: a Data Virus Guard. My work in this area is still evolving and is intended to keep our clients a few steps ahead of what's coming. Bad data plagues all enterprises. It can be incomplete, malformed, incorrect, unknown, or all of these. Unfortunately, we now also have to watch for malicious data. Putting in safeguards now, before the malicious data issue becomes rampant, is a much cheaper proposition than re-hydrating your enterprise data stores once a contamination occurs. If nothing else, if you don't implement a Data Virus Guard, be sure you have data policies in place for addressing this coming issue.

October 11, 2013

Can formal requirement methods work for agile?

By Shobha Rangasamy Somasundaram & Amol Sharma

Formal methods, adapted and applied to agile, provide clear and complete requirements, which are fundamental to the successful build of any product. The product might be developed by following Methodology-A or Methodology-B, which changes very little as far as knowing what to build goes. So we could safely state that the project team could use any development methodology, but good requirements are absolutely necessary. The manner in which we go about eliciting and gathering requirements will differ, and needless to say, this holds true for agile development too.

Continue reading "Can formal requirement methods work for agile?" »