QA: Testing Categories

 

The main groups of testing are Functional and Non-Functional. Security Testing could sit at this general level if it is not part of a specification; if Security Testing is addressed in a specification, it would be deemed part of the functional tests.

 

Functional Testing

  1. Bases test cases on the specifications of the software component under test (a minimal sketch follows this list)
  2. The identification of functions that the software is expected to perform
  3. The creation of input data based on the function's specifications
  4. The determination of expected output based on the function's specifications
  5. The execution of the test case
  6. The comparison of actual and expected outputs
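To make the steps concrete, here's a minimal sketch; the calculate_shipping function and its spec values are hypothetical stand-ins for a real component and its specification.

```python
# Hypothetical component under test: flat rate plus per-kilogram fee.
def calculate_shipping(weight_kg: float) -> float:
    return 5.00 + 2.50 * weight_kg

# Steps 3-6 from the list above: spec-derived inputs, expected outputs,
# execution, and comparison of actual vs. expected.
spec_cases = [
    (0.0, 5.00),    # spec: base rate applies even at zero weight
    (2.0, 10.00),   # spec: 5.00 + 2.50 * 2
    (10.0, 30.00),  # spec: 5.00 + 2.50 * 10
]

for weight, expected in spec_cases:
    actual = calculate_shipping(weight)
    assert actual == expected, f"weight={weight}: expected {expected}, got {actual}"
print("all functional cases pass")
```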

 

Unit Testing

  1. I feel this is a developer or 'developer-in-test' function, as today's philosophies encourage unit test creation at the time of OR BEFORE the function is actually created (a test-first sketch follows this list).
  2. In OO, this could represent either an entire class or smaller entities such as its methods.
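A minimal test-first sketch using Python's standard unittest module; the Account class is a hypothetical unit, and under a test-first philosophy the TestAccount cases would be written before Account itself.

```python
import unittest

# Hypothetical unit under test: in TDD, the tests below come first.
class Account:
    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class TestAccount(unittest.TestCase):
    # Each test targets one method-level behavior of the class.
    def test_deposit_increases_balance(self):
        acct = Account()
        acct.deposit(25.0)
        self.assertEqual(acct.balance, 25.0)

    def test_deposit_rejects_nonpositive_amounts(self):
        acct = Account()
        with self.assertRaises(ValueError):
            acct.deposit(-5.0)

if __name__ == "__main__":
    unittest.main()
```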

Integration Testing

  1. Once individual, compartmentalized pieces of functionality or modules are working on their own, this testing verifies the components play together nicely and still adhere to spec.
  2. Interfaces are critical here, as assumptions from one dev team might differ from another's approach. Good specifications go a LONG way in ensuring this goes well.
  3. I prefer to work from the bottom up, pooling components together to aid in troubleshooting issues. This approach also allows you to learn the system from components to communities, as opposed to outside-in, where many questions remain until the end. Bottom-up keeps the complexity and scope of problems in proportion to your understanding of the system (see the sketch after this list).
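A small bottom-up sketch: two hypothetical components, each assumed to be unit-tested on its own, are exercised together across their interface.

```python
class TaxCalculator:
    """Lower-level component: returns tax owed on a subtotal."""
    def tax_for(self, subtotal: float) -> float:
        return round(subtotal * 0.08, 2)

class CheckoutService:
    """Higher-level component that depends on TaxCalculator's interface."""
    def __init__(self, tax_calc: TaxCalculator):
        self.tax_calc = tax_calc

    def total(self, subtotal: float) -> float:
        return round(subtotal + self.tax_calc.tax_for(subtotal), 2)

# Integration point: verify the assembled pair still adheres to spec.
checkout = CheckoutService(TaxCalculator())
assert checkout.total(100.00) == 108.00
print("integration case passes")
```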

System Testing

  1. After integration testing is complete and the bulk of the main components are known to be working well together, testing the system as a whole comes into play.
  2. Whereas integration testing places much energy at intersections and interfaces, system testing takes a step back from these connections and attempts to analyze point-to-point functions.

Regression Testing

  1. A type of testing at which automation excels, regression testing seeks to establish a baseline, then verify new features and bugfixes don't alter the baseline. Once it works, it should always work (a baseline sketch follows this list).
  2. Exceptions to this would be where new features impact existing functionality, which can pose a real nightmare if the automation framework or test approach wasn't well thought out. Breaking 5,000 tests means the QA team must decide between dedicating resources to fixing what was already working or to ensuring new features are developed correctly.
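A minimal baseline sketch; the slugify function and its baseline values are hypothetical, and in practice the baseline would be captured from a known-good build rather than hard-coded.

```python
def slugify(title: str) -> str:
    """Function whose behavior is locked down by the baseline."""
    return title.strip().lower().replace(" ", "-")

# In practice the baseline would be captured from a known-good build and
# loaded from a checked-in file rather than hard-coded here.
baseline = {
    "Hello World": "hello-world",
    "  Padded Title ": "padded-title",
}

failures = [t for t, expected in baseline.items() if slugify(t) != expected]
assert not failures, f"regression(s) against baseline: {failures}"
print("baseline holds")
```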

(User) Acceptance Testing

  1. Determines if the requirements were met.
  2. If done by the customer, it is considered UAT, but the principle applies if testing is done internally prior to UAT by the end user.
  3. These are usually more basic in nature and tie directly to a BRD.
  4. Whereas functional testing verifies very finite pieces of functionality, UAT often involves longer 'stories' or sequences of actions - tests tend to be end to end (a story-style sketch follows this list).
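A story-style sketch: one longer sequence of user actions run end to end, mirroring a BRD line item. The Store class here is a hypothetical stand-in for the real system.

```python
class Store:
    """Hypothetical system under test."""
    def __init__(self):
        self.cart, self.orders = [], []

    def add_to_cart(self, item: str) -> None:
        self.cart.append(item)

    def checkout(self) -> int:
        order_id = len(self.orders) + 1
        self.orders.append(list(self.cart))
        self.cart.clear()
        return order_id

# Story: "A shopper can add items to the cart and complete a purchase."
store = Store()
store.add_to_cart("widget")
store.add_to_cart("gadget")
order_id = store.checkout()
assert order_id == 1 and store.orders[0] == ["widget", "gadget"] and not store.cart
print("acceptance story passes")
```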

Database Testing

  1. Considered a 'black art', database testing usually involves scaling up the scope of testing to high-volume traffic or large datasets.
  2. Security is in the mix as well, as SQL injection is a concern for outward-facing web services.
  3. Load testing, while flexing the software, can surface data pointer and lock errors as the computing systems struggle to keep up or simultaneous requests compete for resources.
  4. ACID properties should be mentioned here (an atomicity sketch follows this list):
    1. Atomicity - strives to manage requests in such a way as to be 'all or nothing', to avoid leaving requests hanging or having dependencies missing.
    2. Consistency - Atomicity aims to ensure consistency. If transactions fail, the database can be left in a corrupted state, which could crater an entire system in a cascading domino effect as bad values are propagated into later requests.
    3. Isolation - packaging requests holistically in order to avoid changes to the database before one set of actions against it is allowed to complete. This principle prevents DB inconsistency and corruption.
    4. Durability - Once data is written to the DB, failsafe mechanisms need to be in place to ensure the data is persistent - even through system crashes or communications errors to the DB servers.
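A small atomicity demonstration using Python's standard sqlite3 module; the accounts table and the simulated crash are contrived, but the rollback behavior is genuine sqlite3 transaction semantics.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # the context manager commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 "
                     "WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transaction")
        # the matching credit to bob never runs
except RuntimeError:
    pass

# Atomicity: the debit above was rolled back, so no money vanished.
rows = dict(conn.execute("SELECT name, balance FROM accounts"))
assert rows == {"alice": 100, "bob": 0}, rows
print("transaction rolled back cleanly:", rows)
```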

 

Non-Functional Testing

  1. Speed of response and aesthetics aren't always scoped out directly, so this testing is a bit more ethereal.
  2. Non-Functional Testing deals with everything else that isn't directly tied to a feature of an application. True, there will be requirements to handle high capacity or recover from error conditions, but the end user is not involved with these facets; these types of tests deal with scalability or environmental conditions, not so much system functionality.

Performance Testing

  1. Performance Testing, while possibly using the same suite of tests and tools as Load Testing, aims to measure system performance at various benchmarks (a benchmarking sketch follows this list).
  2. Ideally, performance should remain steady at all scales of use, up to and exceeding 100% to 200% of expected volume. This might be throttled back to 150% if real-world analysis is desired.
  3. Exceeding production environment demands won't necessarily play into a Test Plan, but can provide valuable feedback on where bottlenecks lie and the amount of headroom the system has compared to requirements.
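A minimal benchmarking sketch using time.perf_counter; the handle_batch workload and the response-time budget are illustrative stand-ins for the system under test and its requirements.

```python
import time

def handle_batch(n: int) -> None:
    """Stand-in workload; a real test would exercise the system under test."""
    sum(i * i for i in range(n))

BUDGET_SECONDS = 0.5  # hypothetical per-benchmark response-time budget

# Measure the workload at several benchmark volumes and flag slow runs.
for volume in (10_000, 100_000, 1_000_000):
    start = time.perf_counter()
    handle_batch(volume)
    elapsed = time.perf_counter() - start
    status = "OK" if elapsed <= BUDGET_SECONDS else "SLOW"
    print(f"volume={volume:>9,}  elapsed={elapsed:.4f}s  {status}")
```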

Load Testing

  1. Place the system in various states of demand to determine where system capacity lies and where dysfunction arises from heavy traffic/use. Does the system have a reasonable limit to its ability to handle throughput in a production environment?
  2. If concurrent users would never exceed 1,000, testing 5,000 users would only provide academic information for system architects. A more reasonable line of inquiry would be to ramp up volume from 500 to 2,000 users, verify there is no performance degradation across the expected threshold, and then exceed this threshold by a predetermined percentage, perhaps 50% (a ramp sketch follows this list).
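A concurrency ramp sketch using Python's standard concurrent.futures; simulated_request is a stand-in for real traffic, and the user counts follow the 500-to-2,000 ramp described above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(user_id: int) -> float:
    """Stand-in for a real network call; returns its own latency."""
    start = time.perf_counter()
    time.sleep(0.001)
    return time.perf_counter() - start

# Ramp from below to beyond the expected 1,000-user ceiling and watch
# for latency degradation at each step.
for users in (500, 1_000, 1_500, 2_000):
    with ThreadPoolExecutor(max_workers=100) as pool:
        latencies = list(pool.map(simulated_request, range(users)))
    print(f"users={users:>5}  avg latency={sum(latencies)/len(latencies):.4f}s")
```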

Configuration Testing

  1. Most systems provide for a variety of configurations. By performing tests under all possible configurations, errors in particular modules or hardware configurations can be rooted out and managed (a configuration-matrix sketch follows this list).
  2. In some cases, such as low memory, errors will be expected and no fixes implemented to mitigate them. In these cases, KNOWING the limitations can be of great use, and they should be noted in documentation accompanying the system. Many applications will have minimum software and hardware requirements, and these are often determined by configuration testing.
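A configuration-matrix sketch using pytest's parametrize marker; the configuration axes and the app_starts check are hypothetical.

```python
import itertools
import pytest

# Hypothetical configuration axes for the system under test.
OS_NAMES = ["windows", "linux", "macos"]
MEMORY_MB = [512, 2048, 8192]

def app_starts(os_name: str, memory_mb: int) -> bool:
    """Stand-in for launching the system under a given configuration."""
    return memory_mb >= 512  # hypothetical documented minimum

# The same test body runs once per (OS, memory) combination.
@pytest.mark.parametrize("os_name,memory_mb",
                         list(itertools.product(OS_NAMES, MEMORY_MB)))
def test_startup_under_configuration(os_name, memory_mb):
    assert app_starts(os_name, memory_mb)
```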

Recovery Testing

  1. Cause intentional system failures by hard resets, killing DB connections, taking depended-upon resources offline, or injecting bad data. The system should have a means to recover on its own, or at a minimum allow for a manual recovery via safety nets such as adequate logging and periodic data backup routines (a failure-injection sketch follows this list).
  2. Failures can and do occur in production environments; the system should be able to gracefully manage these.
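A failure-injection sketch: a hypothetical flaky dependency drops its first connections, and the client is expected to recover via retries backed by logging. The dependency and retry policy are illustrative.

```python
import logging

logging.basicConfig(level=logging.INFO)

class FlakyDatabase:
    """Stand-in dependency that fails its first two calls."""
    def __init__(self, failures: int = 2):
        self.failures = failures

    def query(self) -> str:
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("connection dropped")
        return "rows"

def query_with_recovery(db: FlakyDatabase, attempts: int = 3) -> str:
    """Retry on connection failure, logging each attempt as a safety net."""
    for attempt in range(1, attempts + 1):
        try:
            return db.query()
        except ConnectionError:
            logging.warning("attempt %d failed; retrying", attempt)
    raise RuntimeError("recovery failed after all retries")

assert query_with_recovery(FlakyDatabase()) == "rows"
print("system recovered from injected failures")
```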

Security Testing

  1. While systems whose specifications spell out security robustness should be included in a QA plan, IT or System Admin groups have a role in this as well, since security holes can be introduced by third-party and OS components as well as by routing and network hardware/software.
  2. Due to the specialized requirements and know-how of security-related issues, such as cryptography, QA groups can be limited in their ability to effectively perform this type of testing, and it might need to be outsourced for cost-effective and reliable testing and benchmarking.

Usability Testing

  1. Most of the Acceptance, Functional, and Performance testing renders this particular category moot; still, there might be a test suite dedicated to re-hashing previously run tests to have a line item reflected in reports for usability. Perhaps there are system slowdowns during report generation that don't create enough of an alarm in the midst of the rest of the successful tests, but would be made obvious when grouped and analyzed as a dedicated group.
  2. If possible, it would be more efficient to NOT create additional tests, but rather create additional QUERIES in the test case database so that the tests are REPORTED twice, not EXECUTED twice (a reporting-query sketch follows this list).
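A sketch of the report-twice-execute-once idea against a hypothetical test-results schema, using Python's standard sqlite3 module: a second query re-reports usability-tagged tests as their own group without re-running anything.

```python
import sqlite3

# Hypothetical results table as an automation framework might store it.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results
                (test_name TEXT, category TEXT, tags TEXT, passed INTEGER)""")
conn.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", [
    ("report_generation_time", "performance", "usability", 1),
    ("login_flow", "acceptance", "usability", 1),
    ("tax_rounding", "functional", "", 1),
])
conn.commit()

# The usual per-category report...
for row in conn.execute("SELECT category, COUNT(*) FROM results "
                        "GROUP BY category"):
    print("category report:", row)

# ...plus an extra query that re-REPORTS usability-tagged tests as a group,
# without EXECUTING them a second time.
for row in conn.execute("SELECT test_name, passed FROM results "
                        "WHERE tags LIKE '%usability%'"):
    print("usability report:", row)
```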