Glossary of Testing Terminology

To avoid ambiguity among the many software testing terms in use, this page collects them into a single glossary.

All the common software testing terms are included below.

List of Software Testing Terms

A)

Acceptance testing

Acceptance testing is the final stage of a testing cycle. This is when the customer or end-user of the software verifies that it is working as expected. There are various forms of acceptance testing including user acceptance testing, beta testing, and operational acceptance testing.

Ad hoc Testing

Software testing performed without formal planning or documentation.
The tester relies on their own knowledge of the application and tests it informally, without following the specifications or requirements.

Agile development

A development method that emphasizes working in short iterations. Automated testing is often used. Requirements and solutions evolve through close collaboration between team members who represent both the client and the supplier.

Alpha testing

Operational testing conducted by potential users, customers, or an independent test team at the vendor’s site. Alpha testers should not be from the group involved in the development of the system, in order to maintain their objectivity. Alpha testing is sometimes used as acceptance testing by the vendor.

Automated testing

Automated testing describes any form of testing where a computer runs the tests rather than a human. Typically, this means automated UI testing. The aim is to get the computer to replicate the test steps that a human tester would perform.

Autonomous testing

Autonomous testing is a term for any test automation system that is able to operate without human assistance. As with most things, autonomy is a spectrum. If you want to learn more, download our eBook on the levels of test autonomy.

B)

Beta testing

Test that comes after alpha tests, and is performed by people outside of the organization that built the system. Beta testing is especially valuable for finding usability flaws and configuration problems.

Black box testing

Testing in which the test object is seen as a “black box” and the tester has no knowledge of its internal structure. The opposite of white box testing.

Bottom-up integration

An integration testing strategy in which you start integrating components from the lowest level of the system architecture. Compare to big-bang integration and top-down integration.

Bug

A bug is a generic description for any issue with a piece of software that stops it from functioning as intended. Bugs can range from minor UI problems up to issues that cause the software to crash. The aim of software testing is to minimize (or preferably eliminate) bugs in your software. See also Defect and Regression.

C)

Code coverage

A generic term for analysis methods that measure the proportion of code in a system that is executed by testing. Expressed as a percentage, for example, 90% code coverage.
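
The idea behind coverage measurement can be sketched with Python's tracing hooks. This is only a rough illustration of what dedicated tools such as coverage.py do far more robustly; the function `classify` is a hypothetical example, and testing only one branch leaves part of it uncovered.

```python
import dis
import sys

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

executed = set()

def tracer(frame, event, arg):
    # record every source line executed inside classify()
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

previous = sys.gettrace()
sys.settrace(tracer)
classify(5)            # exercises only the non-negative branch
sys.settrace(previous)

# all lines that belong to classify, taken from its compiled code object
body_lines = {lineno for _, lineno in dis.findlinestarts(classify.__code__)
              if lineno is not None}
coverage = 100 * len(executed & body_lines) / len(body_lines)
```

Because the `n < 0` branch was never exercised, the computed coverage is below 100%, which is exactly the signal a coverage report gives you.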

Code review

See Review.

Code standard

Description of how a programming language should be used within an organization. See also naming standard.

Compilation

The activity of translating lines of code written in a human-readable programming language into machine code that can be executed by the computer.

Component

The smallest element of the system, such as a class or a DLL.

Component integration testing

Another term for integration testing.

Component testing

Test level that evaluates the smallest elements of the system. See also component. Also known as unit test, program test and module test.

CSS selector

A CSS selector is usually used to determine which style to apply to a given element on a web page. In Selenium and other forms of Test script, a CSS selector can also be used to locate an element on the page.

D)

Defect

A defect is a form of software Bug. Defects occur when the software fails to conform to the specification. Sometimes, this will cause the software to fail in some way, but more often a defect may be visual or may have limited functional impact.

Distributed testing

In distributed testing, your tests run across multiple systems in parallel. There are various ways that distributed testing can help with test automation. It may involve multiple end systems accessing one backend to conduct more accurate stress testing. Or it might be multiple versions of the test running in parallel on different systems. It can even just involve dividing your tests across different servers, each running a given browser.

Debugging

The process in which developers identify, diagnose, and fix errors found. See also bug and defect.

Decision table

A test design and requirements specification technique. A decision table describes the logical conditions and rules for a system. Testers use the table as the basis for creating test cases.
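
A minimal sketch of how a decision table drives test case creation, using a hypothetical login feature (both the table and the `login` function are invented for illustration):

```python
# Each rule maps a combination of conditions to the expected action.
decision_table = [
    # (valid_user, valid_password) -> expected outcome
    ((True,  True),  "grant access"),
    ((True,  False), "show error"),
    ((False, True),  "show error"),
    ((False, False), "show error"),
]

def login(valid_user, valid_password):
    # toy system under test
    return "grant access" if valid_user and valid_password else "show error"

# Each rule in the table becomes one test case.
results = []
for (valid_user, valid_password), expected in decision_table:
    actual = login(valid_user, valid_password)
    results.append(actual == expected)
```

Enumerating the rules this way makes it obvious when a combination of conditions has no test case covering it.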

Dynamic testing

Testing performed while the system is running. The execution of test cases is one example.

E)

End to End Testing

Testing the complete functionality of the system, including data flow and integration across all the modules, is called end-to-end testing.

Exploratory Testing

Exploring the application to understand its functionality, and adding or modifying existing test cases based on what is learned, is called exploratory testing.

Entry criteria

Criteria that must be met before you can initiate testing, such as that the test cases and test plans are complete.

Error

A human action that produces an incorrect result.

Error description

The section of a defect report where the tester describes the test steps performed, the actual outcome, the expected result, and any additional information that will assist in troubleshooting.

Execute

Run, conduct. When a program is executing, it means that the program is running. When you execute or conduct a test case, you can also say that you are running the test case.

F)

Factory acceptance test

Acceptance testing carried out at the supplier’s facility, as opposed to a site acceptance test, which is conducted at the client’s site.

Failure

Deviation of the component or system under test from its expected result.

Formal review

A review that proceeds according to a documented review process that may include, for example, review meetings, formal roles, required preparation steps, and goals. Inspection is an example of a formal review.

Functional integration

An integration testing strategy in which the system is integrated one function at a time. For example, all the components needed for the “search customer” function are put together and tested one by one.

Functional testing

Functional tests verify that your application does what it’s designed to do. More specifically, you are aiming to test each functional element of your software to verify that the output is correct. Functional testing covers Unit testing, Component testing, and UI testing among others.

G)

Gray-box testing

Testing that uses a combination of white box and black box techniques on a system whose internal code the tester has only partial knowledge of.

Globalization Testing (or Internationalization Testing, I18N)

Checks whether an application designed for global users supports setting and changing the language, date and time formats, currency, and so on. This is called globalization testing.

I)

IEEE 829

An international standard for test documentation published by the IEEE organization. The full name of the standard is IEEE Standard for Software Test Documentation. It includes templates for the test plan, various test reports, and handover documents.

Impact analysis

Techniques that help assess the impact of a change. Used to determine the choice and extent of regression tests needed.

Incident

A condition that is different from what is expected, such as a deviation from requirements or test cases.

Intelligent test agent

An intelligent test agent uses machine learning and other forms of artificial intelligence to help create, execute, assess, and maintain automated tests. In effect, it acts as a test engineer that never needs sleep, holidays or days off. One example of such a system is Functionize.

J)

JUnit

A framework for testing Java applications, specifically designed for automated testing of Java components.

K)

Keyword-driven framework

Keyword-driven testing frameworks are a popular approach for simplifying test creation. They use simple natural language processing coupled with fixed keywords to define tests. This can help non-technical users to create tests. The Functionize ALP™ engine understands certain keywords, but it can also understand unstructured text, making it a more powerful approach.
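
The core mechanism can be sketched in a few lines: keywords map to actions, and a test is just a readable list of steps. The `App` object and the keywords here are hypothetical stand-ins for a real UI driver.

```python
class App:
    """Toy application object standing in for a real UI driver."""
    def __init__(self):
        self.fields = {}
        self.submitted = False

app = App()

def enter(field, value):
    app.fields[field] = value

def click(button):
    if button == "submit":
        app.submitted = True

# the fixed keyword vocabulary
keywords = {"enter": enter, "click": click}

# A test is just a sequence of keyword steps - readable by non-programmers.
test = [
    ("enter", "username", "alice"),
    ("enter", "password", "secret"),
    ("click", "submit"),
]

for keyword, *args in test:
    keywords[keyword](*args)
```

Because the test itself is plain data, non-technical users can write or review it without touching the dispatch code.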

L)

Load testing

A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of concurrent users and/or numbers of transactions. Used to determine what load can be handled by the component or system. See also performance testing and stress testing.
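
A minimal sketch of the load testing idea: drive a request handler with an increasing number of concurrent users and record how long each run takes. `handle_request` is a simulated stand-in for a real system call; real load tests target an actual deployment.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    time.sleep(0.005)  # simulated per-request processing time
    return i

timings = {}
for users in (1, 5, 20):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        # one request per simulated concurrent user
        list(pool.map(handle_request, range(users)))
    timings[users] = time.perf_counter() - start
```

Plotting response time against the number of users reveals the load level at which the system starts to degrade.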

M)

Maintainability

A measure of how easy a given piece of software code is to modify in order to correct defects, improve or add functionality.

Maintenance

Activities for managing a system after it has been released in order to correct defects or to improve or add functionality. Maintenance activities include requirements management, testing, development amongst others.

Manual testing

This form of testing involves a human performing all the test steps, recording the outcome, and analyzing the results. Manual testing can be quite challenging because it often involves repeating the same set of steps many times. It is essential to always perform each Test step correctly to ensure consistency between different rounds of testing.

Monkey Testing

Testing the functionality randomly, without knowledge of the application and without test cases, is called monkey testing.

Mutation testing

This is a form of Unit testing, designed to ensure your tests will catch all error conditions. To achieve this, the source code is changed in certain controlled ways, and you check if your tests correctly identify the change as a failure.

N)

Naming standard

The standard for creating names for variables, functions, and other parts of a program. For example, strName, sName and Name are all technically valid names for a variable, but if you don’t adhere to one structure as the standard, maintenance will be very difficult.

Negative Testing

Testing a software application with invalid inputs or unexpected behavior, to check that the system correctly rejects what it is not supposed to accept, is called negative testing.
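
A small sketch of negative testing against a hypothetical age validator: invalid inputs must be rejected cleanly rather than silently accepted.

```python
def set_age(age):
    # hypothetical validator under test
    if not isinstance(age, int) or age < 0 or age > 150:
        raise ValueError("invalid age")
    return age

# positive test: valid input is accepted
assert set_age(30) == 30

# negative tests: each invalid input must raise, not pass through
rejected = []
for bad in (-1, 200, "thirty"):
    try:
        set_age(bad)
        rejected.append(False)   # no error raised: the negative test failed
    except ValueError:
        rejected.append(True)    # rejected as expected
```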

Non-functional testing

Testing of non-functional aspects of the system, such as usability, reliability, maintainability, and performance.

NUnit

An open-source framework for automated testing of components in Microsoft .Net applications.

O)

Open source

A form of licensing in which software is offered free of charge. Open-source software is frequently available via download from the internet, from www.sourceforge.net for example.

Operational testing

Tests carried out when the system has been installed in the operational environment (or simulated operational environment) and is otherwise ready to go live. Intended to test operational aspects of the system, e.g. recoverability, co-existence with other systems and resource consumption.

Outcome

The result after a test case has been executed.

P)

Pair programming

A software development approach where two developers sit together at one computer while programming a new system. While one developer codes, the other makes comments and observations, and acts as a sounding board. The technique has been shown to lead to higher quality thanks to the de facto continuous code review – bugs and errors are avoided because the team catches them as the code is written.

Pair testing

Test approach where two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, work together to find defects. Typically, they share one computer and trade control of it while testing. One tester can act as observer when the other performs tests.

Peak testing

A peak test is a form of Performance test where you check how well your system handles large transient spikes in load. These flash crowds are often the hardest thing for any system to cope with. Any public-facing system should be tested like this alongside Stress testing.

Penetration testing

Penetration testing is a form of Security testing where you employ an external agent to try and attack your system and penetrate or circumvent your security controls. This is an essential part of Acceptance testing for any modern software system.

Performance testing

Performance testing is interested in how well your system behaves under real-life conditions. This includes aspects such as application responsiveness and latency, server load, and database performance. See also Load testing and Stress testing.

Positive testing

A test aimed to show that the test object works correctly in normal situations. For example, a test to show that the process of registering a new customer functions correctly when using valid test data.

Postconditions

Environmental and state conditions that must be fulfilled after a test case or test run has been executed.

Preconditions

Environmental and state conditions that must be fulfilled before the component or system can be tested. May relate to the technical environment or the status of the test object. Also known as prerequisites or preparations.

Q)

Quality

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Quality assurance (QA)

Systematic monitoring and evaluation of various aspects of a component or system to maximize the probability that minimum standards of quality are being attained.

R)

Re-Testing

Testing the same functionality repeatedly is called re-testing.
Re-testing typically arises in two scenarios:
Testing a function with multiple inputs to confirm whether the business validations are implemented correctly.
Testing a function in a modified build to confirm whether the bug fixes were made correctly.

Regression Testing

It is the process of identifying the features in a modified build where side effects might occur, and re-testing those features.
Modifications include new functionality added to the existing system and changes made to existing functionality. It must be noted that a bug fix might itself introduce side effects, and regression testing helps to identify them.

Release

A new version of the system under test. The release can be either an internal release from developers to testers, or release of the system to the client. See also release management.

Release management

A set of activities geared to create new versions of the complete system. Each release is identified by a distinct version number. See also versioning and release.

Release testing

A type of non-exhaustive test performed when the system is installed in a new target environment, using a small set of test cases to validate critical functions without going into depth on any one of them. Also called smoke testing – a funny way to say that, as long as the system does not actually catch on fire and start smoking, it has passed the test.

Risk

A factor that could result in future negative consequences. Is usually expressed in terms of impact and likelihood.

Risk-based testing

A structured approach in which test cases are chosen based on risks. Test design techniques like boundary value analysis and equivalence partitioning are risk-based. All testing ought to be risk-based.
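
Boundary value analysis, mentioned above, can be sketched concretely: for a field that accepts a range, test just inside, on, and just outside each boundary. The 1-100 range and the validator are hypothetical.

```python
def boundary_values(low, high):
    # values just outside, on, and just inside each boundary
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid(value, low=1, high=100):
    # hypothetical input validator under test
    return low <= value <= high

candidates = boundary_values(1, 100)   # [0, 1, 2, 99, 100, 101]
expected   = [False, True, True, True, True, False]
actual     = [is_valid(v) for v in candidates]
```

Six test values cover the highest-risk points of the range, rather than testing arbitrary values in the middle.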

RUP

The Rational Unified Process; a development methodology from IBM’s Rational software division.

S)

Sanity Testing

Basic functional testing conducted by a test engineer whenever a new build is received from the development team.

Sandwich integration

An integration testing strategy in which the system is integrated both top-down and bottom-up simultaneously. Can save time, but is complex.

Scalability testing

A component of non-functional testing, used to measure the capability of software to scale up or down in terms of its non-functional characteristics.

Security testing

As the name suggests, this is about verifying that your application is secure. That means checking that your AAA system is working, verifying firewall settings, etc. A key element of this is Penetration testing, but there are other aspects too.

Selectors

A selector is used by test scripts to identify and choose objects on the screen. These are then used to perform some action such as clicking, entering text, etc. There are various forms of selectors that are commonly used. These include simple HTML IDs, CSS-selectors and XPath queries.

Smoke Testing

Basic functional testing conducted by a developer or tester before releasing a build to the next test cycle.

State transition testing

A test design technique in which a system is viewed as a series of states, valid and invalid transitions between those states, and inputs and events that cause changes in state.
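
A sketch of the technique using a hypothetical order workflow: valid transitions are listed explicitly, and a test exercises both a valid path and an invalid transition.

```python
# explicit map of (state, event) -> next state; anything else is invalid
transitions = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
}

def apply_event(state, event):
    try:
        return transitions[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} from {state!r}")

# test a valid path through the state machine
state = "new"
for event in ("pay", "ship", "deliver"):
    state = apply_event(state, event)

# test that an invalid transition is rejected
try:
    apply_event("new", "ship")
    invalid_rejected = False
except ValueError:
    invalid_rejected = True
```

Deriving test cases from the transition table ensures every valid edge is exercised and that invalid transitions fail safely.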

Static testing

Testing performed without running the system. Document review is an example of a static test.

Stress testing

Testing meant to assess how the system reacts to workloads (network, processing, data volume) that exceed the system’s specified requirements. Stress testing shows which system resource (e.g. memory or bandwidth) is first to fail.

T)

Test

A test is the specific set of steps designed to verify a particular feature. Tests exist in both manual and automated testing. Some tests are extremely simple, consisting of just a few steps. Others are more complex and may even include branches. See also Test plan and Test step.

Test case

A test case is the complete set of pre-requisites, required data, and expected outcomes for a given instance of a Test. A test case may be designed to pass or to fail. Often this depends on the data passed to the Test. Some test cases may include a set of different data (see Data-driven testing).
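
A sketch of a data-driven test case: the case bundles a name, its test data, and the expected outcome for each input. The discount function and its 10%-off rule are hypothetical.

```python
def discount(total):
    # hypothetical system under test: 10% off orders of 100 or more
    return total * 0.9 if total >= 100 else total

test_case = {
    "name": "discount applies at threshold",
    "data": [          # (input, expected outcome) pairs - the test data
        (50, 50),      # below threshold: no discount
        (100, 90.0),   # at threshold: discount applies
        (200, 180.0),  # above threshold: discount applies
    ],
}

outcomes = [discount(amount) == expected
            for amount, expected in test_case["data"]]
```

The same test logic runs once per data row, so adding coverage is a matter of adding rows rather than writing new tests.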

Test data

Information that completes the test steps in a test case with e.g. what values to input. In a test case where you add a customer to the system the test data might be customer name and address. Test data might exist in a separate test data file or in a database.

Test driven development

A development approach in which developers write test cases before writing any code.
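
The rhythm can be sketched in miniature with a hypothetical `slugify` function: the test is written first (and would initially fail, since nothing exists yet), then just enough code is written to make it pass.

```python
# Step 1: the test, written before the implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: the simplest implementation that makes the test pass.
def slugify(text):
    return "-".join(text.strip().lower().split())

test_slugify()  # passes once the implementation is in place
```

The failing test defines "done" before any code is written, which keeps the implementation minimal and fully covered.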

Test environment

The technical environment in which the tests are conducted, including hardware, software, and test tools. Documented in the test plan and/or test strategy.

Test execution

The process of running test cases on the test object.

Test level

A group of test activities organized and carried out together in order to meet stated goals. Examples of levels of testing are component, integration, system, and acceptance test.

Test log

A document that describes testing activities in chronological order.

Test manager

The person responsible for planning the test activities at a specific test level. Usually responsible for writing the test plan and test report. Often involved in writing test cases.

U)

UI testing

UI testing involves checking that all elements of your application UI are working. This can include testing all user journeys and application flows, checking the actual UI elements, and ensuring the overall UX works. For many applications, UI testing is vital. Typically, when people talk about automated testing, they mean automated UI testing.

UML

Unified Modeling Language. A standardized graphical notation for describing a system, for example in the form of use case diagrams. See also use case.

Unit testing

Unit tests are used by developers to test individual functions. This is a form of White-box testing. That is, the developer knows exactly what the function should do. Good unit tests will check both valid and invalid outputs. One way to check this is with Mutation testing.
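
A minimal unit test sketch using Python's built-in unittest module; the `parse_bool` function under test is hypothetical. Note that the tests cover both valid and invalid inputs, as the definition above recommends.

```python
import unittest

def parse_bool(text):
    # hypothetical function under test
    if text.lower() in ("true", "yes", "1"):
        return True
    if text.lower() in ("false", "no", "0"):
        return False
    raise ValueError(f"not a boolean: {text!r}")

class TestParseBool(unittest.TestCase):
    def test_valid_inputs(self):
        self.assertTrue(parse_bool("Yes"))
        self.assertFalse(parse_bool("no"))

    def test_invalid_input(self):
        # invalid input must raise, not return a default
        with self.assertRaises(ValueError):
            parse_bool("maybe")
```

Run with `python -m unittest` from the project directory; JUnit and NUnit (see those entries) play the same role for Java and .NET respectively.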

Usability testing

This is the process of checking how easy it is to actually use a piece of software. It measures the user experience (UX). One common approach is A/B testing.

User acceptance testing

User acceptance testing is one of the two main types of Acceptance testing. Here, the aim is to check that end-users are able to use the software as expected. This may be done using focused user panels, but more often is done using approaches such as Beta testing.

V)

V-model

The V-model for testing associates each stage of the classic waterfall development model with an equivalent stage of testing. It originated in hardware testing. Despite the link with waterfall, it also works for agile development.

Validation

Validation is a key aspect of Acceptance testing. It involves verifying that you are building a system that does what it is expected to. If you are building software for a client, then validation involves checking with them that you have delivered what they wanted. If you are building software for general release, then the Product Owner or Head of Product becomes the final arbiter. Contrast this with Verification.

Verification

Verification is the process of checking that software is working correctly. That is, checking that it is bug- and defect-free and that the backend works. This covers all stages of testing up to System testing, along with Load and Stress testing.

Visual testing

Visual testing relies on Object recognition, but couples it with other forms of artificial intelligence. In visual testing, screenshots are used to check whether a test step has succeeded or not. Functionize uses screenshots taken before, during, and after each test step. It compares these against all previous test runs. Any anomalies are identified and highlighted on the screen. The system knows to ignore elements that are expected to change, such as dates, and can cope with UI design changes (e.g. styling changes).

W)

Waterfall model

A sequential development approach consisting of a series of phases carried out one by one. This approach is not recommended due to a number of inherent problems.

White box testing

A type of testing in which the tester has knowledge of the internal structure of the test object. White box testers may familiarize themselves with the system by reading the program code, studying the database model, or going through the technical specifications. Contrast with black box testing.

X)

XPath query

An XPath query is an expression used to locate a specific object within an XML document. Since an HTML page can be treated as an XML-like document tree, XPath can be used to locate any object within a web page very precisely, even objects within nested DOMs. However, constructing XPath queries is quite complex, so other Selectors such as CSS Selectors are usually used instead.
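
The idea can be demonstrated with the limited XPath subset supported by Python's standard library (`xml.etree.ElementTree`); the HTML snippet is a made-up example, and real web pages would be driven through a tool such as Selenium instead.

```python
import xml.etree.ElementTree as ET

page = ET.fromstring("""
<html>
  <body>
    <div id="login">
      <input name="username"/>
      <button>Sign in</button>
    </div>
  </body>
</html>
""")

# XPath-style query: the button anywhere under the div with id="login"
button = page.find(".//div[@id='login']/button")
```

The predicate syntax (`[@id='login']`) is what gives XPath its precision, and also what makes hand-written queries harder to maintain than CSS selectors.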