- Functional testing: Extracting a subset of production data to serve as input for data-driven testing, or to populate appropriately sized test databases, means testers spend less time on operational activities and more time on actual testing.
- Performance testing: Stability, load, benchmark, and other types of sustained tests require hundreds or even thousands of records to execute performance tests over several hours. Automated test data management makes that test data readily available.
- Service virtualization: Virtualized components require realistic test data to simulate the behavior of the live service or software they are emulating. Leveraging a test data management strategy to subset production data while masking sensitive information meets these requirements.
Essential steps: Streamlined test data management
Implementing a test data management approach involves applying five best practices that help simplify the testing process before an application goes to production.
Discover and understand the test data
Aggregate data from various systems and formats.
Capture the end-to-end (E2E) business process to identify the associated data.
Create a core set of 'Happy Path' E2E tests and develop a 'GoldCopy' dataset, giving it a familiar reference name.
Suggested managed datasets could come in the following forms (a code sketch of a dataset registry follows the list):
State 1, Default state:
This dataset represents the data that exists at the initial deployment of a system or application.
Health-check and post-build tests could implement it.
State 2, Functional testing:
This dataset should lean toward maximizing feature coverage and could be quite large.
Regression and new-feature testing could implement it.
State 3, Integration testing:
This dataset should support tests that exercise the boundaries where separate applications connect.
Post-build and deployment testing could implement it.
State 4, Performance testing:
This dataset would conceivably be a concise package for exercising systems: less data, more repetitions.
State 5, Security and user management:
...
State 6, Live data capture:
Capture data from a live production environment; this is useful for testing system updates.
By updating and then committing this dataset for each release, upgrades from and to any version become possible, and bug fixes can be tested or recreated.
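To make these states concrete, here is a minimal sketch of how the named dataset states might be cataloged in code so that a test suite can resolve a familiar reference name to a snapshot. The class, dataset names, and file paths are illustrative assumptions, not part of any particular tool.

```python
from dataclasses import dataclass

@dataclass
class ManagedDataset:
    name: str            # the familiar reference name, e.g. "GoldCopy-Default"
    state: str           # which of the six states this dataset serves
    source: str          # where the snapshot lives (path, schema, dump file)
    used_by: list[str]   # test suites expected to load this dataset

# Hypothetical registry entries; names and paths are made up for illustration.
DATASETS = [
    ManagedDataset("GoldCopy-Default", "Default state", "dumps/default.sql",
                   ["health-check", "post-build"]),
    ManagedDataset("GoldCopy-Functional", "Functional testing", "dumps/functional.sql",
                   ["regression", "new-feature"]),
    ManagedDataset("GoldCopy-Perf", "Performance testing", "dumps/perf_small.sql",
                   ["load", "benchmark"]),
]

def dataset_by_name(name: str) -> ManagedDataset:
    """Resolve a familiar reference name to its dataset definition."""
    return next(d for d in DATASETS if d.name == name)
```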
Data is scattered across systems and resides in different formats. In addition, different rules may be applied to data depending on its type and location. Organizations should identify their test data requirements based on the test cases, which means they must capture the end-to-end business process and the associated data for testing. Capturing the proper test data could involve a single application or multiple applications. For example, a business may have a customer relationship management (CRM) system, an inventory management application, and a financial application that are all related and require test data.
Extract a subset of production data from multiple data sources
Extracting a subset of data is designed to ensure realistic, referentially intact test data from across a distributed data landscape without added cost or administrative challenges. In addition, the best approaches to collecting a data subset include obtaining metadata in the subset to accommodate data model changes quickly and accurately. In this way, obtaining a subset creates realistic test databases small enough to support rapid test runs but large enough to accurately reflect the variety of production data. Part of an automated subset gathering process involves creating test data to force error and boundary conditions, which includes inserting rows and editing database tables along with multilevel undo capabilities.
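As an illustration of referentially intact subsetting, the following sketch samples a parent table and keeps only the matching child rows, so that foreign-key relationships survive the subset. A pandas-based workflow is assumed, and the file and column names (prod_customers.csv, customer_id, and so on) are hypothetical.

```python
import pandas as pd

# Load production extracts (hypothetical file names, for illustration only).
customers = pd.read_csv("prod_customers.csv")
orders = pd.read_csv("prod_orders.csv")

# Sample a fraction of parent rows; a fixed seed keeps the subset reproducible.
customer_subset = customers.sample(frac=0.05, random_state=42)

# Preserve referential integrity: keep only orders whose customer survived
# the sample, so no order points at a missing customer.
order_subset = orders[orders["customer_id"].isin(customer_subset["customer_id"])]

customer_subset.to_csv("test_customers.csv", index=False)
order_subset.to_csv("test_orders.csv", index=False)
```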
Mask or de-identify sensitive test data
Masking helps secure sensitive corporate, client, and employee information and also helps ensure compliance with government and industry regulations. Capabilities for de-identifying confidential data must provide a realistic look and feel, and should consistently mask complete business objects such as customer orders across test systems.
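Hash-based pseudonymization is one simple way to get the consistency the paragraph above calls for: the same input always yields the same masked output, so a customer masked in one extract still joins to the same customer in another. This is a minimal sketch only; the salt and the CUST- prefix are illustrative, and dedicated masking tools add format-preserving options beyond this.

```python
import hashlib

def mask_value(value: str, salt: str = "per-project-secret") -> str:
    """Deterministically mask a sensitive string.

    Identical inputs always produce identical masked values, which keeps
    complete business objects (e.g. a customer and their orders) consistent
    across test systems.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return f"CUST-{digest[:10]}"

# Same input, same masked value on every run: consistent across systems.
assert mask_value("Jane Q. Customer") == mask_value("Jane Q. Customer")
```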
Automate expected and actual result comparisons
The ability to identify data anomalies and inconsistencies during testing is essential in measuring the overall quality of the application. The most efficient way to achieve this goal is by employing an automated capability for comparing the baseline test data against results from successive test runs—speed and accuracy are essential. Automating these comparisons helps save time and identify problems that might otherwise go undetected.
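The following sketch shows one way such a comparison might be automated, assuming baseline and actual results are exported as CSV files keyed by a unique column; the function name and parameters are illustrative assumptions.

```python
import csv

def compare_results(baseline_path: str, actual_path: str, key: str):
    """Compare a baseline extract against the latest test run, keyed by `key`."""
    with open(baseline_path, newline="") as f:
        baseline = {row[key]: row for row in csv.DictReader(f)}
    with open(actual_path, newline="") as f:
        actual = {row[key]: row for row in csv.DictReader(f)}

    missing = baseline.keys() - actual.keys()        # rows lost in this run
    unexpected = actual.keys() - baseline.keys()     # rows that appeared
    changed = {k: (baseline[k], actual[k])           # rows whose values drifted
               for k in baseline.keys() & actual.keys()
               if baseline[k] != actual[k]}
    return missing, unexpected, changed
```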
Refresh test data
During the testing process, test data often diverges from the baseline, which can result in a less-than-optimal test environment. Refreshing test data helps improve testing efficiencies and streamline the testing process while maintaining a consistent, manageable test environment.
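A minimal sketch of a refresh step follows, assuming the baseline 'GoldCopy' lives on disk and the active test dataset can simply be replaced before a run; the directory paths are hypothetical.

```python
import shutil
from pathlib import Path

GOLD_COPY = Path("datasets/goldcopy")   # hypothetical baseline location
TEST_ENV = Path("datasets/active")      # dataset the tests actually mutate

def refresh_test_data() -> None:
    """Discard drifted test data and restore the baseline before a test run."""
    if TEST_ENV.exists():
        shutil.rmtree(TEST_ENV)
    shutil.copytree(GOLD_COPY, TEST_ENV)
```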
Case study: The importance of test data management
Proper test data management can be an essential process for cost-effective continuous testing. Consider the following scenario in a US insurance company.1 The director of software quality was fed up because lead project managers and quality assurance (QA) staff were complaining almost daily about the amount of time they spent acquiring, validating, organizing, and protecting test data.
Complicated front-end and back-end systems in this scenario consistently caused budget overruns. Contingency plans were being built into project schedules because the team expected test data failures and reworking. Project teams added 15 percent to all test estimates to account for the effort to collect data from back-end systems, and 10 percent of all test scenarios were not executed because of missing or incomplete test data. Costly production defects were the result.
With 42 back-end systems needed to generate a full end-to-end system test, the organization in this example could not confidently launch new features. Testing in production was becoming the norm. In fact, claims could not be processed in certain states because of application defects that the teams skipped over during the testing process. Moreover, IT was consuming an increasing number of resources, yet application quality was declining rapidly.
The insurance company in this scenario clearly lacked a test data management strategy aligned to business results. Something had to change. The director of software quality assembled a cross-functional team and asked the following tough questions:
What is required to create test data?
How much does test data creation cost?
How far does the problem extend?
How is the high application defect rate affecting the business?
Finding the answers to these questions was an involved process. No one had a complete understanding of the full story.
Through the analysis process, the team in this scenario discovered that requests for test data came too late, with too many redundancies. There were no efficient processes to provide test data for all of them. Teams would use old test data because of the effort involved in getting new test data, but using old test data often resulted in a high number of defects. In addition, the security risks of exposing sensitive data during testing were rampant.
After fully analyzing the problems, the team in this example concluded that with every new USD14 million delivery, a hidden USD3 million was spent on test data management. Hidden costs were attributed to the following sources:
Labor required to move data to and from back-end systems and to identify the right data required for tests
Time spent manipulating data so it would work for various testing scenarios
Storage space for the test data
Production defects not tested because test data was not available
Masking sensitive data to protect privacy
Skipped test scenarios
After implementing a process to govern test data management, the insurance company in this scenario was able to reduce the costs of testing by USD400,000 annually. The organization also implemented IBM solutions to help deliver comprehensive test data management capabilities for creating fictionalized test databases that accurately reflect end-to-end business processes.
The insurance company in this example can today easily refresh test systems from across the organization in record time while finding defects in advance. The organization now has the enhanced ability to process claims across all 50 states cost-effectively. Testing in production is no longer the norm. In this scenario, implementing test data management not only helped the organization achieve significant cost savings, it helped reduce untested scenarios by 44 percent during a 90-day period and minimize required labor by 42 percent annually.
The insurance company in this case study scenario now has an enterprise test data process that helps reduce costs, improve predictability, and enhance testing—including enabling automation, cloud testing, mobile testing, and more. People, processes, and technologies came together to make a real change.
Test Automation
http://www.infosysblogs.com/testing-services/2012/09/warehouse_management_system_Automation.html
A warehouse management system, or WMS, is a key part of the supply chain; it primarily controls the movement and storage of materials within a warehouse and processes the associated transactions, including shipping, receiving, putaway, and picking. Most retailers use third-party products such as Manhattan and RedPrairie to handle WMS business process flows.
Automation is a key activity to consider: it reduces regression testing effort and improves test coverage, and thus quality.
The well-known automation methods are data-driven, keyword-driven, and hybrid. In a traditional data-driven approach, the input data file contains standard data consumed by the automation scripts, drawn either from environment configuration parameters or from standard lookup parameters.
Third-party WMS products expose a plethora of configurable application parameters, and these keep changing; hence a need arises to ensure that the automation scripts do not fail frequently and that the cost of maintenance stays low.
Key considerations for building automation:
- Selecting an automation framework:
- Approach: Selecting an automation framework is key. Use a modular approach: divide the test scenario into multiple components and functions, and use descriptive programming to address frequently changing objects.
- How it helps: Only the component or function affected by a change needs to be modified, and the automation scripts are ready to execute. Object names are resolved at run time to minimize the impact of changes.
- Configuring the automated WMS application parameters: Application configuration parameters such as the inventory adjustment reason code, the policy for BR code, and the policy for RR code change frequently within a distribution center (DC) or across DCs. The automation scripts must not fail when a parameter value no longer exists as an input parameter.
- Approach: Maintain the configuration parameters required by an automation script in an input file. Automate retrieval of the configuration parameters from the WMS system and compare them against the input file. Suppose an application configuration parameter is updated so that one of the available dropdown options changes or is removed: the script then refreshes the configuration parameters and updates the data sheet with appropriate test data. Executing the scenario after this refresh ensures a successful run (see the sketch after this list).
- How it helps: Reduces automation script failures and improves availability for frequent runs.
- Automated test data management approach:
- Approach: Use test data sheets as inputs to the automation scripts. Automate the retrieval of the master and transactional data required by the scripts by connecting directly to the WMS database before each run.
- How it helps: Speeds up transactional test data creation, avoids failures due to incorrect test data, and reduces the need for data architects.
- Last but not least, domain knowledge is critical for an automation expert; alternatively, bring an automation expert and a domain expert together for successful WMS system test automation.
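To illustrate the configuration parameter refresh described above, here is a minimal sketch that re-reads live parameters from the WMS database and reconciles the automation data sheet before a run. It uses sqlite3 purely as a stand-in driver (a real WMS would use its own RDBMS), and the table and column names (wms_config_params, param_name, param_value) are hypothetical.

```python
import sqlite3  # stand-in; substitute the driver for the actual WMS database

def refresh_config_parameters(db_path: str, datasheet: dict) -> dict:
    """Reconcile the data sheet with live WMS configuration before a run,
    so scripts do not fail when a parameter value changed or was removed."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT param_name, param_value FROM wms_config_params"
    ).fetchall()
    conn.close()

    live_params = {name: value for name, value in rows}
    for name, expected in list(datasheet.items()):
        live = live_params.get(name)
        if live is None:
            # Parameter no longer exists in the application: drop it so the
            # script does not feed a stale value.
            datasheet.pop(name)
        elif live != expected:
            # Value changed (e.g. a dropdown option was replaced): refresh
            # the data sheet instead of letting the script fail.
            datasheet[name] = live
    return datasheet
```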
In conclusion, a well-defined WMS automation framework that incorporates automated test data management and automated handling of application configuration parameters not only prevents script failures due to data inconsistencies but also reduces maintenance cost and improves test coverage.
Comments
Very insightful, and I agree on the WMS aspects. I would like to extend this to integration testing and automation... since most of these have to be integrated with the mainstream ERP or business applications. Any thoughts?
Posted by: Hitesh | September 24, 2012 3:00 PM
Absolutely. The approach can easily be extended to integration scenarios as well.
Posted by: Bhagi | September 26, 2012 1:24 AM
Thanks for sharing the insight about the WMS.
Which test automation tools can be used for WMS?
Posted by: pichai | September 29, 2016 8:35 AM
WMS (warehouse management system) is one of the main aspects of the SCM business. The post about system testing is quite good, and I guess I am going to take many things from it.
Posted by: Suman Verma | November 2, 2016 6:46 AM