Testing Services provides a platform for QA professionals to discuss and gain insights into the business value delivered by testing, the best practices and processes that drive it, and the emergence of new technologies that will shape the future of this profession.


November 30, 2012

Recommended structure for an Organizational Security Assurance team

 

Security defects are sensitive by nature, are always raised as top-priority tickets, and are costlier to fix than functional and performance defects. Beyond the direct business impact, they damage the company's image, incur lost-data costs, erode end-user confidence, and lead to compliance and legal issues. With such high levels of risk associated with security defects, it is surprising that many organizations have no internal structure for security assurance.

 

Any organization needs internal security assurance to increase security awareness across the enterprise, to provide a structure for dealing with the various aspects of security compliance, and to use that structure to strengthen build and test processes. Setting clear goals, establishing a reporting structure, defining activities and listing performance measurement criteria all help the security assurance team function smoothly. To learn more about a team structure capable of providing an enterprise-wide security assurance service for Web applications, read our POV titled "3-Pillar Security Assurance Team Structure for ensuring Enterprise Wide Web Application Security" at http://www.infosys.com/IT-services/independent-validation-testing-services/white-papers/Documents/security-assurance-team.pdf.

November 28, 2012

SOA Testing and its benefits - Do we really reap them?

 

SOA is an abstract concept, so understanding its core principles is essential to succeed with it. Stories of failed SOA implementations keep doing the rounds, largely because the fundamental principles of business involvement and governance were overlooked. SOA testing is no different. The term is even more ambiguous, and any misinterpretation can cause test implementations to fail. SOA testing rests on three basic principles - early testing, optimized coverage and faster regression - and when one or more of these is violated, it is likely to fail. From my experience, here are some typical situations in which SOA testing does not deliver its intended benefits:

 

1.     Late start - Services sit in the foundation layers of the architecture and carry core business functionality. Detecting defects in that functionality before integration avoids the cost escalation caused by rework. The later in the life cycle these services are tested, the more of this benefit is lost.

 

2.     Testing of non-core services - Certain services exist just for the sake of it; they are neither reusable nor carry any core business functionality. Such services are better covered as part of system integration or end-to-end testing; testing them separately just adds overhead.

 

3.     Poor choice of SOA testing tool - Selecting an SOA testing tool has to take into account several parameters, such as its support for message formats, transport protocols, automation, virtualization and commercial considerations. For instance, a tool that does not support the required protocols, or one that cannot adapt to WSDL changes, can hurt the ROI drastically (a simple WSDL-drift check is sketched after this list). You can read my article at http://www.infosys.com/IT-services/independent-validation-testing-services/Pages/maximize-ROI-through-SOA.aspx on the important parameters for SOA testing tool selection.

 

4.     Ineffective automation - Automating frequently changing enterprise services is essential for efficient regression testing. However, having a tool does not mean everything must be automated. Automating the wrong services - those that are almost static or too application-specific, for instance - wastes effort and delays the completion of testing (a simple prioritization sketch also follows this list).

 

5.     Redundant testing - SOA testing is a methodology that improves coverage of SOA-based systems. The test strategy has to plan how the system requirements will be covered across phases such as services testing, system integration testing and end-to-end testing. However, most projects treat SOA testing as a separate track (because the team needs a different technical skill set), which leads to poor collaboration and hence redundant testing - the same test cases covered in multiple phases (a coverage-map sketch closes out the examples below).
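For point 3 above, here is a minimal sketch of a WSDL-drift check that a regression suite could run regardless of the tool chosen. It uses only the Python standard library; the WSDL file name and baseline file are illustrative assumptions.

import xml.etree.ElementTree as ET

WSDL_NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def wsdl_operations(wsdl_path):
    # Collect the operation names declared in the WSDL's portType section.
    tree = ET.parse(wsdl_path)
    return sorted(op.get("name")
                  for op in tree.findall(".//wsdl:portType/wsdl:operation", WSDL_NS))

def test_wsdl_has_not_drifted():
    # "order_service.wsdl" and "wsdl_baseline.txt" are placeholders for the
    # service contract under test and its last approved operation list.
    current = wsdl_operations("order_service.wsdl")
    with open("wsdl_baseline.txt") as f:
        baseline = sorted(line.strip() for line in f if line.strip())
    assert current == baseline, \
        "WSDL operations changed: %s" % (set(current) ^ set(baseline))

Catching interface drift this way flags changes before they silently break generated test assets or automation scripts.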

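For point 4, a small sketch of how automation candidates might be ranked, assuming the team tracks how often each service changes and how many applications consume it. The sample services, figures and threshold are illustrative, not prescriptive.

services = [
    {"name": "CustomerLookup", "changes_per_quarter": 6, "consumers": 9},
    {"name": "PaymentAuth",    "changes_per_quarter": 4, "consumers": 5},
    {"name": "PrintLabel",     "changes_per_quarter": 0, "consumers": 1},
]

def automation_priority(service):
    # Frequently changing, widely reused services gain the most from
    # automated regression; static, single-consumer services may not.
    return service["changes_per_quarter"] * service["consumers"]

for service in sorted(services, key=automation_priority, reverse=True):
    decision = "automate" if automation_priority(service) >= 10 else "defer"
    print("%s: score=%d -> %s" % (service["name"], automation_priority(service), decision))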
 
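For point 5, a sketch of the kind of coverage map that exposes redundant testing across phases; the requirement IDs, test case IDs and phase names are placeholders.

from collections import defaultdict

test_cases = [
    {"id": "TC-01", "requirement": "REQ-12", "phase": "services testing"},
    {"id": "TC-45", "requirement": "REQ-12", "phase": "system integration testing"},
    {"id": "TC-46", "requirement": "REQ-12", "phase": "end-to-end testing"},
    {"id": "TC-07", "requirement": "REQ-20", "phase": "services testing"},
]

phases_by_requirement = defaultdict(set)
for tc in test_cases:
    phases_by_requirement[tc["requirement"]].add(tc["phase"])

for requirement, phases in sorted(phases_by_requirement.items()):
    if len(phases) > 1:
        # Multi-phase coverage is not always wrong, but it should be a
        # conscious test strategy decision rather than an accident.
        print("%s is exercised in %d phases: %s - confirm each phase adds distinct coverage"
              % (requirement, len(phases), sorted(phases)))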

 

November 5, 2012

Big Data: Is the 'Developer testing' enough?

 

A lot has been said about the What, the Why and the How of Big Data. Considering its technical nature, is developer testing alone enough to make these implementations production ready? As I probe deeper into the testing requirements, it becomes clear that 'Independent Testers' have a larger role to play in testing Big Data implementations. All the arguments in favor of 'Independent Testing' hold equally true for Big Data based implementations. In addition to 'Functional Testing', the other areas where 'Independent Testing' can add real value are:

· Early Validation of Requirements

· Early Validation of Design

· Preparation of Big Test Data

· Configuration Testing

· Incremental load Testing

In this blog, I will touch upon each of these additional areas and where the focus of 'Independent Testing' should lie.

Early Validation of Requirements

In the real world, Big Data implementations are mostly integrated with existing 'Enterprise Data Warehouse (EDWH)' or 'Business Intelligence' systems, and clients want to decipher and see the business value coming out of both the data sources they already explore and those they have never explored before.

In the requirements validation stage, the tester should check whether the requirements are mapped to the right data sources and whether all data sources feasible for the customer's business have been considered in the Big Data implementation. If a data source has not been considered, this should be raised as a probable defect. It would then be resolved either as 'Not a Defect', because the data source in question offers no cost-effective way to analyze it, or as 'A New Requirement' to be implemented in a future version of the system.
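As a minimal sketch, a requirements-to-data-source mapping of this kind could be checked mechanically before sign-off; the source names and requirement IDs below are purely illustrative.

candidate_sources = {"web_clickstream", "call_center_logs", "crm_edwh", "social_feeds"}

requirement_to_sources = {
    "REQ-01 churn prediction": {"crm_edwh", "call_center_logs"},
    "REQ-02 campaign sentiment": {"social_feeds"},
    "REQ-03 site navigation analysis": set(),   # not yet mapped to any source
}

for requirement, sources in requirement_to_sources.items():
    if not sources:
        print("Probable defect: %s has no mapped data source" % requirement)
    unknown = sources - candidate_sources
    if unknown:
        print("Probable defect: %s references unknown sources %s" % (requirement, sorted(unknown)))

unused = candidate_sources - set().union(*requirement_to_sources.values())
if unused:
    # Feasible sources nobody uses may point to a gap in the requirements.
    print("Review: feasible sources not used by any requirement: %s" % sorted(unused))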

Early Validation of Design

In the context of Big Data, it is important that the implementation is 'Data based': both storage and analytics should use the right components, based on the nature of the data. For example, there is no advantage in copying structured data from the EDWH into the Hadoop Distributed File System (HDFS) and querying it with HiveQL while the data continues to reside in the EDWH. Similarly, there is no advantage in analyzing a few MBs of new structured data within HDFS.

In the design validation stage, the tester should check whether each external data source is mapped to the right internal data store. Any concerns here have to be resolved early in the development cycle, because choosing the wrong internal stores for storage and analytics defeats the goal of a cost-effective Big Data implementation.
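The two examples above can be turned into simple design-validation rules. This is only a sketch; the dataset names, sizes and the 1 GB threshold are assumptions for illustration.

SMALL_DATA_THRESHOLD_GB = 1.0

datasets = [
    {"name": "customer_master", "structured": True,  "size_gb": 120, "in_edwh": True,  "planned_store": "HDFS"},
    {"name": "web_logs",        "structured": False, "size_gb": 800, "in_edwh": False, "planned_store": "HDFS"},
    {"name": "branch_survey",   "structured": True,  "size_gb": 0.3, "in_edwh": False, "planned_store": "HDFS"},
]

def design_concerns(ds):
    # Rule 1: structured data already in the EDWH should not also be copied to HDFS.
    if ds["structured"] and ds["in_edwh"] and ds["planned_store"] == "HDFS":
        yield "already resides in the EDWH; copying it to HDFS duplicates data without benefit"
    # Rule 2: a small amount of new structured data gains little from HDFS analytics.
    if (ds["structured"] and not ds["in_edwh"]
            and ds["size_gb"] <= SMALL_DATA_THRESHOLD_GB
            and ds["planned_store"] == "HDFS"):
        yield "small structured dataset; analyzing it in HDFS adds little value"

for ds in datasets:
    for concern in design_concerns(ds):
        print("Design concern (%s): %s" % (ds["name"], concern))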

Another area where the tester can add value is checking for data duplicated between the EDWH and HDFS, and whether there is any real business benefit in duplicating it. Any miss in synchronizing the duplicated data across sources during data maintenance would produce misleading analytics and could prove detrimental to the customer's business.
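Where duplication is intentional, a synchronization check can catch drift between the copies. This sketch assumes each store can export the shared table as a delimited extract; the file names are illustrative.

import csv
import hashlib

def extract_fingerprint(path):
    # Row count plus a content hash; rows are sorted so ordering differences
    # between the two extracts do not matter.
    rows = 0
    digest = hashlib.sha256()
    with open(path, newline="") as f:
        for row in sorted(csv.reader(f)):
            digest.update("|".join(row).encode("utf-8"))
            rows += 1
    return rows, digest.hexdigest()

edwh_rows, edwh_hash = extract_fingerprint("edwh_customer_extract.csv")
hdfs_rows, hdfs_hash = extract_fingerprint("hdfs_customer_extract.csv")

if (edwh_rows, edwh_hash) != (hdfs_rows, hdfs_hash):
    print("Out of sync: EDWH extract has %d rows, HDFS extract has %d rows, "
          "or the row contents differ - downstream analytics may be misleading"
          % (edwh_rows, hdfs_rows))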

Preparation of Big Test Data

Whether it is 1000 files or 100000 files makes no difference to developers writing the MapReduce code that analyzes the data. But testing with near-production volumes of data is very important to check the inherent scalability of the scripts, any inadvertent hard-coding of paths and data sources in the scripts, the scripts' handling of erroneous data, and so on.

To ensure production confidence, the tester should intelligently replicate data files, including some with incorrect schemas and erroneous data, and ensure that the MapReduce code handles all possible variations of input data.
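A minimal sketch of such test-data preparation: a small seed set is replicated into many input files at near-production volume, with a proportion of rows deliberately malformed. File names, counts and the error rate are illustrative assumptions.

import csv
import random

SEED_ROWS = [["C-1001", "2012-11-05", "250.00"],
             ["C-1002", "2012-11-05", "75.50"]]
NUM_FILES = 1000          # scale toward production volumes
ROWS_PER_FILE = 5000
ERROR_RATE = 0.02         # roughly 2% deliberately bad rows

random.seed(42)           # reproducible test data
for i in range(NUM_FILES):
    with open("input_%05d.csv" % i, "w", newline="") as f:
        writer = csv.writer(f)
        for _ in range(ROWS_PER_FILE):
            row = list(random.choice(SEED_ROWS))
            if random.random() < ERROR_RATE:
                # Inject schema/data errors: a missing column or a non-numeric amount.
                row = row[:-1] if random.random() < 0.5 else row[:2] + ["not-a-number"]
            writer.writerow(row)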

Configuration Testing

The Hadoop ecosystem is highly configurable. A tester should identify the configurable parameters in the Hadoop ecosystem and determine the defaults and acceptable customization ranges for the Big Data implementation under test. Specific tests then have to be created for these parameters to verify the behavior of the system.
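One way to start is to assert the deployed values in Hadoop's XML property files against the ranges the design allows. The parameters shown are real Hadoop settings, but the acceptable ranges here are illustrative and would come from the implementation's design.

import xml.etree.ElementTree as ET

ACCEPTABLE_RANGES = {
    "dfs.replication":        (2, 5),            # block replication factor
    "mapreduce.task.timeout": (300000, 1200000), # milliseconds
}

def load_properties(path):
    # Hadoop config files are flat lists of <property><name>...<value>... entries.
    root = ET.parse(path).getroot()
    return {p.findtext("name"): p.findtext("value") for p in root.findall("property")}

properties = {}
for config_file in ("hdfs-site.xml", "mapred-site.xml"):
    properties.update(load_properties(config_file))

for name, (low, high) in ACCEPTABLE_RANGES.items():
    value = properties.get(name)
    if value is None:
        print("%s is not set - the Hadoop default will apply; confirm this is intended" % name)
    elif not low <= int(value) <= high:
        print("%s=%s is outside the acceptable range %d-%d" % (name, value, low, high))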

Incremental Load Testing

The tester should plan for testing with nodes or clusters (at least one) added and removed dynamically. This testing should be done along with configuration testing to ensure that the system design handles node/cluster scalability through the appropriate configuration parameters.

Conclusion

While testing the functional requirements of a Big Data implementation can be achieved to a certain extent by 'Developer Testing', overall production confidence in the system can be achieved only with focused 'Independent Testing' that addresses both 'Conformance to Requirements' and 'Fitness for Use'.

Click here to read a blog on Testing Big Data vs. Data Warehouse implementations.