The application of computer software has spread into many different fields, and software is now an essential part of industrial, commercial, and military systems. Because software is used in many safety-critical systems, software reliability has become an important research area. Although software engineering has been among the fastest-developing technologies of the last century, there is still no complete, scientific, quantitative measure for assessing it. Software reliability testing is used as a tool to help assess these software engineering technologies.
To improve the performance of software products and the software development process, a thorough assessment of reliability is required. Reliability testing is performed to ensure that the software is reliable, satisfies the purpose for which it was made, operates for a specified amount of time in a given environment, and is capable of fault-free operation.
This testing helps discover many problems in the software design and functionality. The main purpose of reliability testing is to check whether the software meets the customer's reliability requirements.
According to ANSI, Software Reliability is defined as the probability of failure-free software operation for a specified period of time in a particular environment.
Using the following formula, the probability of failure is calculated by testing a sample of all available input states:

Probability = Number of failing cases / Total number of cases under consideration

A related measure is the mean time between failures:

Mean Time Between Failures (MTBF) = Mean Time To Failure (MTTF) + Mean Time To Repair (MTTR)
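The two measures above can be sketched directly in code. This is a minimal illustration with made-up numbers; the function names and example values are not from any standard tool.

```python
# Sketch of the two reliability measures, with hypothetical numbers.

def failure_probability(failing_cases, total_cases):
    """Probability = number of failing cases / total cases under consideration."""
    return failing_cases / total_cases

def mtbf(mttf_hours, mttr_hours):
    """Mean Time Between Failures = Mean Time To Failure + Mean Time To Repair."""
    return mttf_hours + mttr_hours

# Example: 4 failures observed across a sample of 1,000 input states.
print(failure_probability(4, 1000))  # 0.004

# Example: the system runs 480 h between failures and takes 8 h to repair.
print(mtbf(480, 8))  # 488
```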
Reliability testing is performed at several levels; complex systems are tested at the unit, assembly, subsystem, and system levels.
To verify the reliability of the software via testing:
A sufficient number of test cases should be executed for a sufficient amount of time to get a reasonable estimate of how long the software will execute without failure. Long-duration tests are needed to identify defects (such as memory leakage and buffer overflows) that take time to cause a fault or failure to occur.
The distribution of test cases should match the actual or planned operational profile of the software. The more often a function or subset of the software is executed, the greater the percentage of test cases that should be allocated to that function or subset.
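The operational-profile rule above can be sketched as a simple proportional allocation of a test-case budget. The profile fractions and function names here are hypothetical, chosen only to illustrate the idea.

```python
# Allocate a test-case budget in proportion to a (hypothetical) operational
# profile: functions used more often in the field get more test cases.

def allocate_test_cases(profile, budget):
    """profile maps each function name to its fraction of field usage
    (fractions sum to 1.0); returns test cases allocated per function."""
    return {name: int(round(fraction * budget))
            for name, fraction in profile.items()}

# Hypothetical usage profile for a web application.
profile = {"login": 0.50, "search": 0.30, "checkout": 0.15, "admin": 0.05}
print(allocate_test_cases(profile, 200))
# {'login': 100, 'search': 60, 'checkout': 30, 'admin': 10}
```

Rounding can make the totals drift slightly from the budget; a real allocator would distribute the remainder, but that detail is omitted here for brevity.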
Types of reliability testing
Software reliability testing includes feature testing, load testing, and regression testing.
Feature testing checks the features provided by the software and is conducted in the following steps:
Each operation in the software is executed once.
Interaction between two operations is reduced, and each operation is checked for proper execution.
The feature test is followed by the load test.
This test is conducted to check the performance of the software under the maximum workload. Software performs well up to a certain workload, after which its response time begins to degrade. For example, a website can be tested to see how many simultaneous users it can support without performance degradation. Load testing is especially useful for databases and application servers. Load testing also requires software performance testing, which checks how well the software performs under a given workload.
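A load test of the kind described above can be sketched as follows. The request handler here is a stand-in for a real call to the system under test; a real load test would issue network requests against the deployed service.

```python
# Minimal load-test sketch: drive a stand-in request handler with an
# increasing number of concurrent clients and record response times.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    time.sleep(0.01)  # placeholder for real server-side work
    return "ok"

def run_load(clients, requests_per_client):
    """Run the given number of concurrent clients and return all latencies."""
    latencies = []
    def client():
        for _ in range(requests_per_client):
            start = time.perf_counter()
            handle_request()
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=clients) as pool:
        for _ in range(clients):
            pool.submit(client)
    return latencies  # the with-block waits for all clients to finish

for clients in (1, 5, 10):
    lat = run_load(clients, requests_per_client=5)
    print(f"{clients:2d} clients: avg latency {sum(lat) / len(lat) * 1000:.1f} ms")
```

Plotting average latency against client count reveals the workload at which response time starts to degrade, which is the point load testing is meant to find.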
Regression testing is used to check whether any new bugs have been introduced through previous bug fixes. It is conducted after every change or update to the software's features, and is repeated periodically at intervals that depend on the size and feature set of the software.
Reliability testing is more costly than other types of testing, so proper management and planning are required. The test plan covers the testing process to be implemented, data about the test environment, the test schedule, test points, etc.
Some common problems that occur when designing test cases include:
Test cases are often designed simply by selecting only valid input values for each field in the software. When changes are made to a particular module, the previously chosen values may not actually exercise the new features introduced after the older version of the software.
There may be some critical runs in the software which are not handled by any existing test case. Therefore, it is necessary to ensure that all possible types of test cases are considered through careful test case selection.
Software Failure and Reliability Assessment Tool (SFRAT)
A recent National Academies report on enhancing defense system reliability recommends the use of reliability growth models to direct contractor design and test activities. Several tools are available that automatically apply reliability models and automate reliability test and evaluation. SFRAT is an open-source application that allows users to answer the following questions about a software system during test:
1. Is the software ready to release (has it achieved a specified reliability goal)?
2. How much more time and test effort will be required to achieve a specified goal?
3. What will be the consequences for the system's operational reliability if enough testing resources are not available?
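Question 1 can be illustrated with a deliberately simplified sketch: estimate the current MTBF from observed inter-failure times during test and compare it with a reliability goal. SFRAT itself fits full reliability-growth models; this example, with hypothetical data, only shows the release-readiness comparison.

```python
# Simplified release-readiness check: compare observed MTBF against a goal.
# (A real tool such as SFRAT fits reliability-growth models instead.)

def observed_mtbf(interfailure_times):
    """Mean of the observed times between successive failures."""
    return sum(interfailure_times) / len(interfailure_times)

def ready_to_release(interfailure_times, goal_mtbf):
    """True if the observed MTBF has reached the specified reliability goal."""
    return observed_mtbf(interfailure_times) >= goal_mtbf

# Hypothetical test data: hours between successive failures. The growing
# gaps suggest reliability growth as defects are found and fixed.
times = [12, 30, 55, 90, 160]
print(observed_mtbf(times))                    # 69.4
print(ready_to_release(times, goal_mtbf=100))  # False
```

A growth model fitted to such data could also extrapolate how much additional test time is needed to reach the goal, which is question 2 above.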