Realize business value from big data with Infosys data analytics solutions.

April 16, 2018

VISITOR/PROFILE STITCHING IN THE AGE OF GDPR

Posted by Rohan Kanungo at 10:02 AM

1) Use of cookies or similar technologies: Whenever you set cookies or similar technologies on a user's equipment for marketing purposes, you need to obtain cookie consent. Cookie consent would need to be provided by all affected consumers. This is not safeguarded if different consumers use the same device once one consumer has provided consent and the cookie settings store this choice. However, this problem is difficult to overcome in practice.


Regarding tracking/profiling on third-party websites as well: the use of a cookie to track a consumer's behavior on third-party websites before they enter your website cannot be legitimized with cookie consent alone.


2) Collection and processing of consumers' personal data: The most sensitive issue is the justification for the collection and processing of consumers' personal data (such as a consumer's browsing habits in connection with their ID, etc.).


Tracking/profiling through an account: If you track consumers through their account, we think that the profiling may be justified without explicit consent, based instead on legitimate interests. You may argue that account holders are existing customers (where the GDPR generally allows broader leeway). Aspects that, in our view, need to be considered in the balancing of interests:


  • The privacy intrusion is minimal when ads are merely shown on your website;
  • Personalization relies only on information gathered from your website (and not from third-party websites);
  • The consumer is an existing customer and is informed about the tracking via the Privacy Policy; and
  • The consumer can withdraw their cookie consent at any time to end the tracking (as is usually emphasized in the Privacy/Cookie Policy).

Tracking/profiling through device:


  • Tracking/profiling restricted to your website: If you track consumers through their device on your website only, we think the collection/processing of personal data in relation to existing customers (i.e. those with an account) can still be based on legitimate interest. In relation to consumers without an account, we do not think that the justification of legitimate interest will hold. This is a dark grey area that requires a risk assessment and discussion with your data protection team.
  • Tracking/profiling also on third-party websites: We do not think that the collection/processing of personal data on third-party websites for marketing purposes can be based on legitimate interest alone. This tracking is very sensitive and would hardly be acknowledged by data protection authorities ("DPAs") as covered by a legitimate interest that outweighs the privacy interests of the consumer. We recommend that at least the most sensitive part, the collection/processing of personal data, be covered by a proper GDPR consent.



BE GDPR-READY WITH INFOSYS

https://www.infosys.com/gdpr/

April 12, 2018

Who should drive the GDPR Program?

Posted by Rohan Kanungo at 7:15 AM

There is increasing awareness of the GDPR, and organizations are coming to terms with it. Having said that, many are grappling with how to structure and execute the program. Why is this a vexing problem? Structuring a GDPR program is not a trivial task. While past experience in delivering security and regulatory programs can provide some guidance, it cannot simply be replicated in the GDPR scenario. The primary reason is the nature of the GDPR itself. The GDPR is not a 'prescriptive' document; it does not lend itself to a 'checklist' that can be deployed. Maybe a couple of years down the line that could be possible, but not right now. The GDPR requires subjectivity and interpretation; risk management and a proportionate response in accordance with the risk threshold are built into its structure. Coupled with this is the fact that while the 'intent' of the regulation is clear, there are several grey areas when it comes to contextualizing and operationalizing it for a specific business case.

Secondly, data security and protection is in a 'Darwinian' moment. The stakes with the GDPR are high. It is being looked upon as a 'role model' among data privacy regulations and in many ways will pave the path for future action in this space. Organizations are acutely aware of this, and they are determined to make an informed and calibrated decision on how to approach the situation. The costs of a tepid initiation of GDPR work will be manifold and will set organizations back significantly.

Key Success Factors

What is required to deliver any GDPR program is a high level of management awareness, the right organization, efficient tools, employee education, and an effective implementation model. 

The key success factors for delivering a GDPR program are:

1.    Alignment to overall Business Strategy & Operations

2.    Decision Making Mandate

3.    Budgetary Control

4.    Ability to drive organization & create awareness 

5.    Ability to execute

We are of the opinion that only a combined implementation model is effective in achieving and demonstrating compliance. Combined efforts are typically required to achieve a clear mapping of regulatory requirements to the entire organization and all its operations, including IT.

We recommend that a 'GDPR Task Force' be constituted under the auspices of the Office of the CEO. This task force will be led by the CEO and will have representation from all departments of the organization, including the CXO suite and all the business functions: CFO, CIO, CDO, CSO, Legal, Marketing, Sales, HR, Procurement, etc. With its wider management focus, and with project groups across different functions such as legal, marketing, and IT, the task force will also help with strategic considerations, since it reviews what customer data is collected, how it is used, and how it could be used better to create competitive advantage. This ensures "privacy by design," as required by the GDPR rules. Privacy by design means taking data protection into account at every step of a company's processes, from R&D and business development to marketing and sales.

BE GDPR-READY WITH INFOSYS

https://www.infosys.com/gdpr/

April 10, 2018

GDPR - Managing Data in the Digital Age

Posted by Rohan Kanungo at 2:55 PM

Hallways of businesses across the world, especially in Europe, are abuzz with the newly minted regulation, the General Data Protection Regulation (GDPR). As an upgrade to the previous Directive 95/46/EC, the GDPR upholds the rights of EU citizens to protect their personal data irrespective of the location of processing. The recent fracas with Facebook and the unauthorized usage of personal data has brought data security and privacy into the public domain as never before. Today, most individuals are eager to know how their data is being used and what organizations are doing to ensure that their interests are adequately safeguarded.

 

The central theme of the GDPR is data privacy as a fundamental human right. The GDPR is unique because of this fundamental assertion: data is now central to our way of life, and therefore its treatment cannot be trivial or an afterthought. Yet the prevalent model of data usage and treatment is not holistic; it is focused not on the right way of handling this asset, but on a narrow vision of collecting data and then curating it without an overall harmonious strategy. The basic question of the times we live in then comes down to this: do we continue on the path of collecting and using data by whatever means possible? Definitely not.

 

In this digitally enabled world, data is all-pervasive. It is driving business. Unimaginable quantities and varieties of data are moving to and fro in the digital world. In this highly fungible ecosystem, it is a matter of fact that personal data and sensitive information are collected, maybe curated, and then made available for consumption. Very few organizations can confidently state that they have a complete handle on all the data elements in their organization.

 

Hence, we believe that adopting the GDPR process will make companies review their data management policies and processes, and evaluate if their data organization is aligned to the digital world and the new-age economy.

 

GDPR is not a set of isolated activities pertaining to legal, consulting, or data management, but a combination of different processes working integrally.

 

Adopting and assimilating the GDPR into the ethos of your organization will be a catalyst for taking the necessary steps to build strong digital capabilities and create a competitive advantage. Some of the key initiatives could be:

 

Data Discovery & Classification - identifying all personal data lying in fragmented or scattered systems, then categorizing it to help understand the types of data within the organization and the associated risk of exposure (see the sketch after these initiatives).

 

Data Cataloguing - enabling organizations to understand and map data relationships to various business processes, regardless of source and platform.

 

Data Standardization - cleansing and consistently formatting data coming from disparate sources before subjecting it to further transformation.

 

Data Profiling & Quality Checks - ensuring data accuracy and completeness in a holistic fashion.

 

Data Ownership - clearly specifying the data controller's rights and obligations when modifying or deleting an individual's personal information, and recording data subjects' consent for storing and processing their personal data.
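As a simple illustration of the discovery, classification, and profiling initiatives above, here is a minimal Python sketch; the table, column names, and personal-data patterns are hypothetical assumptions for illustration, not a prescribed implementation.

```python
import re
import pandas as pd

# Hypothetical patterns for common personal-data elements
PII_PATTERNS = {
    "email": r"[^@\s]+@[^@\s]+\.[^@\s]+",
    "phone": r"\+?\d[\d\s\-]{7,}\d",
}

def classify_columns(df: pd.DataFrame, sample_size: int = 100) -> dict:
    """Flag columns whose sampled values match known personal-data patterns."""
    findings = {}
    for col in df.columns:
        sample = df[col].dropna().astype(str).head(sample_size)
        for label, pattern in PII_PATTERNS.items():
            if sample.str.contains(pattern, regex=True).any():
                findings.setdefault(col, []).append(label)
    return findings

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Basic completeness and cardinality profile per column."""
    return pd.DataFrame({
        "non_null_pct": df.notna().mean() * 100,  # completeness
        "distinct_values": df.nunique(),          # cardinality
    })

# Hypothetical customer extract
customers = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "contact": ["a.smith@example.com", None],
    "notes": ["called +44 20 7946 0958", "no phone on file"],
})
print(classify_columns(customers))  # e.g. {'contact': ['email'], 'notes': ['phone']}
print(profile(customers))
```

In practice the same scan would run against database extracts or files across systems, with the findings feeding the data catalogue and risk assessment.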

 

 

GDPR reinforces what has been the industry's best-kept secret: data holds the key to competitive advantage, and treating data strategically will be the differentiator between being hugely successful and merely scratching the surface.


BE GDPR-READY WITH INFOSYS

For more information on Infosys GDPR, visit https://www.infosys.com/gdpr/  



March 28, 2018

AI Reborn

Posted by Shahnawaz Qureshi at 11:52 PM

 

During my academic days we had a dedicated subject on Artificial Intelligence (AI) in our final year. From what I can remember, it was all about algorithms and a unique, lesser-known language called LISP (List Processing), for which we had labs back then. It was one of my favorite subjects, as it was futuristic rather than routine science. But coming out of college, I believed that AI was something that would remain a research topic rather than a practical implementation. Why? Because how would you come up with an algorithm that could mimic intelligence? It is one thing to write code for chess that could analyze all the "n" possible moves and choose the one with the shortest, most probable path to victory, but it is a totally different matter to provide generic intelligence. Those were the days just after IBM Deep Blue's victories against human grandmasters. So-called intelligence in a domain like chess is quite possible because the domain has boundaries: there are limited rules and probable moves. A good set of algorithms and enough processing power would give any grandmaster a run for his money. But in a more variable domain, things looked more like fiction than reality.
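To make the chess example concrete, here is a toy Python sketch of the kind of bounded game-tree search classic chess engines rely on; the evaluate, moves, and apply_move functions are placeholders for game-specific logic, not an actual engine.

```python
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Explore possible moves to a fixed depth and pick the best-scoring line."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in legal:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, evaluate, moves, apply_move)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Because the rules and the move set are bounded, deeper search plus a better evaluation function is largely a matter of computing power, which is exactly why chess fell to machines long before open-ended intelligence did.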

Fast forward seventeen years, and AI seems far more realistic and is evolving towards realization. I get the feeling that, yes, it is possible. So, what has changed? To answer in two words: "Data Analytics". Today's AI is driven by data analytics; the algorithms being developed are focused more on data. Beyond this, most of the advancement in today's information technology landscape is aiding data analytics.

The foundation of today's AI is data. Algorithms in the fields of machine learning and data mining are focused on data and on harvesting patterns from it, and these harvested patterns form the building blocks of today's AI. Unlike in years past, data is now available in huge volumes; the digital world around us has been spewing data everywhere, and this data, in the form of Big Data, provides the mining field for patterns that usually remain unseen on the surface until the right analytics is applied. With real-time analytics, intelligence no longer needs to be harvested from historical data but can be attained and recognized as it happens.
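As a small illustration of harvesting a pattern purely from data, here is a minimal Python sketch using synthetic data and standard scikit-learn clustering; the data and the choice of algorithm are illustrative assumptions, not a specific method from the post.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for behavioural data containing two hidden groups
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=[1.0, 1.0], scale=0.2, size=(500, 2)),  # hidden pattern A
    rng.normal(loc=[4.0, 3.0], scale=0.2, size=(500, 2)),  # hidden pattern B
])

# The "harvested pattern": cluster centres discovered from the raw data alone
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)
```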

The transformation of the technology landscape has aided the evolution of AI as well. Cloud computing provides the virtually unlimited storage and computing power required to run data analytics at scale. Another crucial element contributing to the success of AI today is the "connected system" of web services and APIs. The AI of yesteryear was conceived as more of a monolith; an AI system back then was supposed to house most of its intelligence capability, but today intelligence is distributed. If you want your system to recognize human speech, you need not build it from scratch but can leverage existing speech recognition services from Microsoft, Google, or Amazon. Similarly, if you need image recognition capability, you can look to vision APIs, again from players like Microsoft, Google, and Amazon.

Just like distributed architectures, the evolution of AI has also been distributed and community-driven. The popular statistical computing language "R", widely used for data analytics, is open source. What makes R so popular for data analysis is its vast range of packages, developed and contributed by independent developers who solve specific problem domains and make their work available for reuse. This collective effort has been contributing to the advancement of data analysis and AI.

These are some of the factors providing the ecosystem that is making AI thrive and become a reality today. As hinted above, no one entity is building the whole AI ecosystem; it has been evolving gradually, in bits and pieces. Bunches of these services are combined locally to provide a specific AI implementation such as the self-driving car. Gradually, such focused implementations can be augmented with one another to give rise to more intelligent systems with broader capabilities that could someday rival humans. And perhaps one day the tables might turn, and machines might be working on making humans more intelligent and efficient so that we can serve their purpose. Scary, huh?

October 31, 2017

Deciphering the Minority Report on AI

Posted by Ramaswami Mohandoss at 5:29 PM

If Thomas Friedman decides to revise his best seller (The World is Flat) in the future, I think he would include AI/ML as an important disruptor. Per Jeff Bezos, Artificial Intelligence is in its golden age; he calls AI an enabling layer that will improve every business. At the World Economic Forum in Davos, Satya Nadella said AI could be a vital driver of growth. Mark Zuckerberg predicts AI will deliver many improvements in our lives in the next 10 years. Google co-founder Sergey Brin says he is 'surprised' by the pace of AI and calls it revolutionary.

But not everyone seems to be on the same page. Not Elon Musk for sure. Last year, he compared AI to summoning the demon. This year he went ahead and called it the biggest risk we face as a civilization. While the majority seem to feel positive about AI, it's hard to ignore the minority report, especially when it comes from one of the most respected visionaries in the current times.

So, what is the truth? Is AI really a threat to our existence? By categorizing machines into four kinds and evaluating the risks introduced by each, I have tried to find an answer to that question:

  1. The flawless clerk
  2. The expert system
  3. The invisible machine
  4. The silicon poet

For the complete perspective, please click here

June 2, 2017

Artificial Intelligence - Unconventional use cases that will soon be reality

Posted by Arav Narasimhamurthy at 9:46 PM

As our CEO, Dr. Sikka, says, AI, the pursuit of building something intelligent, is as old as humanity. It has grown by leaps and bounds since AI was first discussed in 1956.

Since its inception, when what made a machine "intelligent" was first debated, AI has evolved, overcoming the challenges of the 90s, and has now entered its golden era.

Increasing investments from Nvidia in GPUs and from Google in TPUs have made Moore's law more and more relevant in the current scenario. The availability of these powerful processing units has accelerated deep learning, automated machine learning, and artificial neural networks.

Startups have revolutionized the AI world and are providing disruptive solutions to complex business problems. These days every organization is adopting AI in some form or other, whether for something as simple as customer segmentation, social integration, personalized offers, or the supply chain, or for building complex solutions to human problems such as cancer treatment, self-driving cars, and many more.

Over the next couple of years the maturity of AI will rapidly increase, and more unconventional use cases will turn into reality.

Here is the first in a series of such use cases that, in my view, will become reality as the infrastructure, platforms, and products to implement them become available.

Emotional AI:

Deep learning is the study of artificial neural networks, machine learning models containing more than one hidden layer, loosely inspired by the human brain. Based on deep learning, AI can distinguish between dogs and cats, or between good actors and criminals. But how do you feel when you are treated by a robot? How do families feel when AI is able to identify depression in struggling students?
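To ground the "more than one hidden layer" definition, here is a minimal Python sketch using scikit-learn on synthetic data; the dataset and layer sizes are illustrative assumptions only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a feature dataset (e.g. image features for dogs vs. cats)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A network with two hidden layers, matching the definition above
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```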

Among many other things, AI can recognize faces, turn sketches into pictures, identify voices, and much more.

The day is not far off when Siri or Alexa will detect emotions from the sound of your voice and have a conversation, recommend a therapy session, or alert your loved ones to send a bunch of flowers to make you happy.

As human beings, we understand context and empathy. Not many AI models do today. Companies that can build these into their technology will have more success.

AR Chatbot:


Chatbots are a common platform these days, and everybody has a version, whether from Microsoft, Facebook, Watson, or Google, or custom-built.

People are likely to be more drawn into engaging with chatbots that have personality; a chatbot has to be a companion people can engage with.

AI integrated with augmented reality can do wonders. If bots can be integrated with AI, emotional AI, and AR, then human-robot interactions will take a huge leap forward. Humans often struggle with appropriate responses because of the complexity of emotions; if technology can decipher this, the results will be very impressive.

To be continued...

May 10, 2017

How Advanced BI Tools can add value to the Shipping and Liner Industry

Posted by Mangalika Ghosh at 6:42 AM

 

With the worldwide growing demand for faster business decisions, liner and logistics companies should definitely start trying advanced reporting tools such as Tableau or QlikView as a powerful weapon for container forecasting, demand-driven supply chains, and other trend analysis.

Some shipping giants are still using traditional BI, manual query execution, or sometimes manual graph preparation in Excel to understand trends and improve business KPIs. These methods are not only time-consuming but also effort-intensive. Who would disagree that viewing a whole lot of data as instant pictorial presentations is far better than browsing monotonous spreadsheets and comparing numbers and sums?

Take the examples below, where you can quickly get a comparative trend of payment against shipment delay per shipper.

[Samples from Tableau site]

Hovering the mouse over a specific dot gives you other details, such as shipper, shipment numbers, delay in hours, and route, in a pop-up window.

Clicking on a specific dot makes the dashboard show further details, including the average delay for that shipper.
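Behind such a drill-down sits a simple aggregation. Here is a minimal Python/pandas sketch of the per-shipper summary; the column names and figures are hypothetical, and in practice the data would come from the booking or shipment system rather than an inline table.

```python
import pandas as pd

# Hypothetical shipment extract; column names are illustrative only
shipments = pd.DataFrame({
    "shipper": ["Acme", "Acme", "Globex", "Globex"],
    "delay_hours": [4, 10, 0, 6],
    "payment_usd": [1200, 900, 1500, 1100],
})

# Per-shipper summary comparable to the dashboard drill-down
summary = (shipments.groupby("shipper")
           .agg(avg_delay_hours=("delay_hours", "mean"),
                total_payment_usd=("payment_usd", "sum"),
                shipment_count=("delay_hours", "size")))
print(summary)
```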



Generic shipping LOBs where advanced reporting tools can definitely benefit the users:

  1. Better asset utilization and optimization through a single dashboard with pixel-perfect visualization:
     • Container utilization by region/by month
     • Vessel allocation efficiency by region/by terminal

  2. Better cost management through a sharable and easily customizable dashboard:
     • Revenue per TEU by route/port etc.
     • Vessel operation budget vs. actuals by location/by month etc.
     • Balance outstanding per shipper

  3. Better productivity through a single dashboard with heat maps, scatter plots etc.:
     • Productivity by booking office etc.
     • Work order targets vs. completions by repair vendor
     • Number of load/discharge activities per terminal
     • EDI rejection analysis by partner, by region

  4. Better demand forecasting through a single dashboard with predictive analysis:
     • Booked commodities vs. actual shipments by container type/by location etc.
     • Booked empty containers vs. dispatched containers by month/by region etc.

  5. Mobility:
     • Dashboard/report/visualization interactivity on iOS/Android mobile

These high-level reports can be customized in tools like Tableau or QlikView with a few clicks and further search parameters, or drilled down to more detailed levels within seconds. Deployment is also faster than for traditional BI reports.

You can refer to comparative studies of the different advanced tools and choose the one that best suits your organization.

Finally, shipping companies that still use static reports may experience a complete revolution once they start using advanced tools. These tools help improve operational efficiency, profitability, and inventory utilization by quickly presenting trends in the most understandable manner. They are also user-friendly: simple drag-and-drop customization gives more power to users without constant dependency on developers. Users can easily transform from ordinary business analysts into data champions, and enterprises can stay well ahead of their competitors.

References:

https://www.tableau.com/solutions/gallery/shipment-analysis



October 30, 2016

Pragmatic Data Quality Approach for a Data Lake

Posted by Ketan Puri at 4:27 AM

On 26th Oct 2016, we presented our thought paper at the PPDM conference hosted at the Calgary Telus Spark science center (Calgary Data Management Symposium, Tradeshow & AGM).

http://dl.ppdm.org/dl/1830

Abstract:

With the increase in the amount of data produced from sensors, devices, interactions, and transactions, ensuring ongoing data quality is a significant task and concern for most E&P companies. As a result, most of the systems that are sources of data have deferred the task of data clean-up and quality improvement to the point of usage. Within the Big Data world, the concept of the Data Lake, which allows ingesting all types of data from source systems without worrying about their type or quality, further complicates the question of data quality, as data structure and usage are left to the consumer. Without a consistent governance framework and a set of common rules for data quality, a Data Lake may quickly end up as a Data Swamp. This paper examines the important aspects of data quality within the Upstream Big Data context and proposes a balanced approach to data quality assurance across data ingestion and data usage, to improve data confidence and readiness for downstream analytical efforts.
 

The key points/messages that we presented were,

1. Data quality is NOT about transforming or cleansing the data to fit a given perspective; instead, it is about putting the right perspective on the data.


2. Data by itself is not good or bad; it is just data, pure in its most granular form.


3. Quality is determined by the perspective through which we look at the same data


4. Take an architectural approach that abstracts the data from perspectives or standards and builds a layer of semantics to view the same data from different points of view. We don't need to populate data into models (PPDM, PODS, etc.); instead we put models on top of the existing data, promoting the paradigm of "ME and WE" where each consumer has their own viewpoint on the same data. The concept of the WELL can be viewed in reference to completion, production, exploration, etc. without duplicating the data in the data lake (a minimal sketch follows these points).


5. Deliver quick value to the business and build its trust in the data in the data lake scenario.
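To illustrate point 4, here is a minimal PySpark sketch of layering semantic views on top of a single copy of well data; the path, table, and column names are hypothetical and only indicate the pattern, not a PPDM-compliant model.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("semantic-layer-sketch").getOrCreate()

# Hypothetical raw well data, ingested once into the lake
wells = spark.read.parquet("/datalake/raw/wells")  # illustrative path
wells.createOrReplaceTempView("wells_raw")

# Two perspectives on the same rows, expressed as views rather than copies
spark.sql("""
    CREATE OR REPLACE TEMP VIEW well_completion AS
    SELECT well_id, spud_date, completion_date, formation
    FROM wells_raw
""")
spark.sql("""
    CREATE OR REPLACE TEMP VIEW well_production AS
    SELECT well_id, production_date, oil_bbl, gas_mcf
    FROM wells_raw
""")
```

Each consumer queries the view that matches their perspective, while the underlying data remains a single, unduplicated set in the lake.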


Please refer to the link below for the details:

http://dl.ppdm.org/dl/1830

July 13, 2016

Industrial Internet of Things (IIoT) - Conceptual Architecture

Posted by Ketan Puri at 6:29 AM

 

The popularity of the Internet of Things (IoT) is growing rapidly. More and more devices (things) are getting connected to the internet every day. The value potential of these connected devices is enormous, and we have witnessed just a fraction of it yet. Many startups are in the process of building data-driven products, solutions, or services that can disrupt traditional operational procedures. Major cloud vendors have also ventured in, providing IoT as a key offering in their product stacks.

Industrial IoT extends the general concept of IoT to an industrial scale. Every industry has its own set of devices and home-grown or proprietary applications with limited interfaces, and for some, even network bandwidth is a major concern. Considering these challenges and limitations, which vary from industry to industry, there is no single solution that fits all. Every industry is unique, with a varied set of use cases, and requires custom tailoring.

This article will talk about the conceptual architecture for an Industrial Internet of Things (IIoT), agnostic of technology or solution.

Below are the key components of any typical IIoT landscape.


[Figure: IIoT conceptual architecture]

a) Industrial Control Systems (ICS)

These provide a first-hand view of events across industrial systems to the field staff managing industrial operations. They are generally deployed at industrial sites and include Distributed Control Systems (DCS), Programmable Logic Controllers (PLCs), Supervisory Control and Data Acquisition (SCADA) systems, and other industry-specific control systems.

b) Devices

These are industry-specific components that interface with digital or analog systems and expose data to the outside digital world. They provide machine-to-machine and human-to-machine (and vice versa) capability for ICS to exchange information in real time or near real time, enabling other components of the IIoT landscape. They include sensors, interpreters, translators, event generators, loggers, etc.

They interface with the ICS, Transient Data Stores, Channels, and Processors.

c) Transient Store

This is a temporary, optional data store connected to a device or an ICS. Its primary purpose is to ensure data reliability during outages and system failures, including network failures. It includes attached storage, flash, disks, etc.

It generally comes as attached or shared storage for the devices.

d) Local Processors

These are low-latency data processing systems located at or near the industrial sites. They provide fast processing of small data. They include data filters, rule-based engines, event managers, data processors, algorithms, routers, signal detectors, etc.

They generally feed data into the remote applications deployed at the industrial sites. At times they are integrated with the devices themselves for data processing. A minimal sketch of such a processor is shown below.
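As an illustration only, here is a minimal Python sketch of a local processor applying a simple threshold rule to incoming sensor readings at the edge; the device names, field names, and threshold are hypothetical.

```python
from typing import Iterable, Iterator

# Hypothetical rule: forward only readings that breach a pressure threshold
PRESSURE_LIMIT_PSI = 950.0

def local_processor(readings: Iterable[dict]) -> Iterator[dict]:
    """Filter raw sensor readings at the edge and tag breaches as events."""
    for reading in readings:
        if reading.get("pressure_psi", 0.0) > PRESSURE_LIMIT_PSI:
            yield {**reading, "event": "PRESSURE_HIGH"}

# Example usage with simulated device output
raw = [
    {"device_id": "pump-7", "pressure_psi": 910.0},
    {"device_id": "pump-7", "pressure_psi": 980.5},
]
for event in local_processor(raw):
    print(event)  # would be forwarded to the site application or gateway
```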

e) Applications (Local, Remote, Visualization)

These are deployed on site or offshore to meet business-specific needs. They provide insights into and views of field operations in real time (for operators) and in real time and historically (for business users and other IT staff), enabling them to make effective and calculated decisions. They include web-based applications, tools to manipulate data, manage devices, and interact with other systems, as well as alerts, notifications, visualizations, dashboards, etc.

f) Channels

These are the mediums for data exchange between devices and the outside world. They include satellite communication, routers, network protocols (web-based or TCP), etc.

g) Gateways

These provide communication across multiple networks and protocols, enabling data interchange between distributed IIoT components. They include protocol translators, intelligent signal routers, etc.

h) Collectors

These are data gatherers that collect and aggregate data from gateways leveraging standard protocols. They can be custom-built or off-the-shelf products that vary from industry to industry: for example, OPC data, event stream management systems, application adapters, brokers, etc.

i) Processors

These are the core of any IIoT solution. Their function is primarily to cater to specific business needs. They include stream processors, complex event processing, signal detection, scoring of analytical models, data transformers, advanced analytical tools, executors for machine learning training algorithms, ingestion pipelines, etc.

j) Permanent Data Store and Application Data Store

These are the long-term data storage systems generally linked to an IIoT solution. They act as historians for the device data along with data from other sources, and they feed data into the processors for advanced analytics and model building. They include massively parallel processing (MPP) data stores, on-cloud/on-prem data repositories, and data lakes providing high-performance, seamless data access to both business and IT; for example, historians, RDBMSs, open source data stores, etc.

k) Models

There are two types of models widely used in IIoT solutions: data models and analytical models. Data models define a structure for the data, while analytical models are custom-built to cater to industry-specific use cases. Models play an important role in any IIoT solution; they provide a perspective on the data. Models are generally built by leveraging the data in the permanent data stores, human experience, and industry standards. Analytical models are trained on historical data sets or through machine-based training processes. Some examples of analytical models are clustering, regression, mathematical, and statistical models. Some examples of data models are information models, semantic models, entity-relationship mappings, JSON, XML/XSD, etc.

The models are fed back into the data stores, processors, applications, and gateways (a minimal illustration follows).
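As an illustration of the two kinds of models, here is a minimal Python sketch: a JSON-style data model giving structure to a device reading, and a simple regression as a stand-in analytical model; the field names, values, and choice of algorithm are hypothetical.

```python
import json
import numpy as np
from sklearn.linear_model import LinearRegression

# Data model (illustrative): a JSON structure for one device reading
reading = {
    "device_id": "compressor-12",
    "timestamp": "2016-07-13T06:29:00Z",
    "measurements": {"vibration_mm_s": 4.2, "temperature_c": 71.5},
}
print(json.dumps(reading, indent=2))

# Analytical model (illustrative): regression trained on historical readings
vibration_history = np.array([[2.0], [3.0], [4.0], [5.0]])    # mm/s
temperature_history = np.array([60.0, 65.0, 70.0, 76.0])      # deg C
model = LinearRegression().fit(vibration_history, temperature_history)
print("predicted temperature at 4.2 mm/s:", model.predict([[4.2]])[0])
```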

l) Security

Security is the most important aspect of any IIoT application. It runs through the entire pipeline, from the source to the end consumption. It is critical for small, medium, and large data-driven digital enterprises dealing with their data in the IIoT world. It includes data encryption, user access, authentication, authorization, user management, network security, firewalls, redaction, masking, etc.

m) Computing Environments

These vary from industry to industry depending upon their business landscape and nature of the business (Retail, Health Care, Manufacturing, Oil and Gas, Utilities etc.)

  • Fog Computing - Bringing analytics near to the devices/source

  • Cloud Computing - Scaling analytics globally across the enterprise

  • On-Prem Computing - Crunching data in existing high performance computing centers

  • Hybrid Computing - Mix of on-cloud, on-prem and fog computing optimizing operations tailored for specific industrial business needs   




 



April 11, 2016

How to make a 'Data Lake'

Posted by Ketan Puri at 2:30 PM

The Data Lake has become a buzzword these days, and we see enterprises actively investing to have their own Data Lake.

As part of the digital agenda of most enterprises, the Data Lake is one of the most prominent focus areas. Investments are happening in data acquisition, storage (cloud or on-prem), and analytics. Yet the success rate for most enterprises is dismal. The reason is not capability or technology, but the lack of the right direction and a focus on value.

The hype of having all the data in one place and thinking about its usage later has created more Data Swamps than valuable Data Lakes.

My article in the Digital Energy Journal (Issue 60, Apr/May 2016) is a first step toward giving some structure to the concept of the Data Lake.

Below is an image taken from the article, with the permission of the editor.

[Figure: Data Lake]