ISTQB CTFL Glossary of Software Testing Terms

Acceptance testing: Testing conducted to determine whether a system satisfies its acceptance criteria and to enable the customer to decide whether to accept the system.
Accessibility testing: Testing to determine whether a product is usable by people with disabilities, for example people with visual impairments, hearing impairments, or physical disabilities.
Ad hoc testing: Testing carried out informally, without the use of a recognized test design technique or test plan, typically where there are no documented requirements or specifications.
Agile testing: Testing performed within the context of an Agile software development approach, typically involving short iterations and frequent customer involvement.
Alpha testing: Testing carried out by a customer or a testing team at the developer’s site, rather than at the customer’s site, to determine whether the software will work as expected in the customer’s environment.
Application programming interface (API): A set of routines, protocols, and tools for building software and applications, specifying how software components should interact.
Assertion: A statement in code that is expected to be true at a certain point during execution. Assertions are typically used to check the results of a computation or the state of a system.
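For example, a minimal Python sketch (the function, values, and messages are illustrative, not taken from the glossary):

```python
def average(values):
    # Precondition: the input must be non-empty for a meaningful average.
    assert len(values) > 0, "average() requires at least one value"
    return sum(values) / len(values)

result = average([2, 4, 6])
# Postcondition check: this computation is expected to yield 4.0.
assert result == 4.0, f"unexpected result: {result}"
```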
Audit: A review or inspection of a software product or process to determine its conformity with regulatory, contractual, or other specified requirements.
Automated testing: Testing in which the execution of tests is automated using specialized software tools, usually with the aim of improving the efficiency, repeatability, and consistency of the testing process.
Baseline: A point in the software development life cycle where a defined set of deliverables has been completed and approved, which serves as the basis for further development or maintenance.
Beta testing: A type of testing that is conducted by a limited number of end users or customers, in a live or simulated environment, with the purpose of collecting feedback and identifying issues before the product is released to the market.
Black box testing: A testing technique where the software is tested without any knowledge of the internal workings of the system or its implementation details, focusing solely on the externally visible behavior and functionality.
Bottom-up testing: A testing approach where testing starts at the lowest level of the software architecture and progresses towards the higher levels, with lower-level modules being tested before higher-level modules are integrated and tested.
Boundary value analysis: A technique for selecting test cases by identifying the boundary values of input variables and testing them, as these values often cause the most defects.
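As an illustration, suppose a hypothetical validator accepts ages 18 to 65 inclusive; boundary value analysis then picks values at and just beyond each boundary:

```python
def is_valid_age(age):
    # Hypothetical validator: accepts ages from 18 to 65 inclusive.
    return 18 <= age <= 65

# Test values sit at and immediately beyond each boundary.
boundary_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
]

for age, expected in boundary_cases:
    assert is_valid_age(age) == expected, f"failed for age {age}"
```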
Branch testing: A testing technique where all possible branches of the code are tested at least once, in order to ensure that every decision outcome has been exercised.
Bug: A defect or issue in the software that causes it to fail to perform its intended function, or to behave in an unintended manner.
Capability maturity model integration (CMMI): A framework for improving the maturity of an organization’s software development processes by providing a set of best practices and guidelines for process improvement across various disciplines.
Capability maturity model (CMM): A model that defines a set of best practices and guidelines for software development process improvement, organized into five maturity levels.
Cause-effect graphing: A technique for selecting test cases by identifying the cause-and-effect relationships between input and output variables, using a graphical representation of these relationships.
Change control: A process that ensures that all changes to the software and its associated documentation are controlled, tracked, and approved in a systematic manner.
Check: A review technique where the reviewer examines the work product informally, looking for defects or issues, without following a predefined process or script.
Code coverage: A measure of the amount of code that is executed by a set of tests, expressed as a percentage of the total lines or statements in the code.
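The underlying idea can be sketched with Python’s built-in tracing hook; this is a simplified illustration of line coverage, not a substitute for a real coverage tool:

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record each line of classify() as it executes.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

sys.settrace(tracer)
classify(5)            # exercises only the non-negative branch
sys.settrace(None)

# The line containing 'return "negative"' never ran, so this single
# test achieves less than 100% line coverage of classify().
print("executed line numbers:", sorted(executed))
```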
Code review: A process of examining the source code of the software with the purpose of identifying defects, improving quality, and ensuring that it conforms to coding standards and best practices.
Compatibility testing: A testing technique used to ensure that the software can operate correctly in different environments, such as different operating systems, browsers, or hardware configurations.
Complexity: The degree to which a component or system has a design or implementation that is difficult to understand, maintain, or test.
Component integration testing: Testing the interactions between software components that have been integrated into a larger system.
Condition coverage: A test coverage measure that reports whether each condition in a decision has taken on all possible outcomes at least once.
Configuration management: The process of identifying and defining the items in the system, controlling the changes to these items, and recording and reporting the status of items and change requests.
Confirmation testing: Testing that is performed to confirm that defects have been corrected and that the software still meets its specified requirements.
Coverage: A measure used to describe the degree to which a specified coverage item has been exercised by a test suite.
Cyclomatic complexity: A software metric that measures the complexity of a program by counting the number of decision points in the program’s control flow graph.
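A common way to compute it from the control flow graph, shown here as a worked example for a hypothetical function containing a single if/else:

```latex
% M = cyclomatic complexity, E = edges, N = nodes,
% P = connected components of the control flow graph.
M = E - N + 2P
% A function whose body is a single if/else has N = 4 nodes
% (decision, then-branch, else-branch, exit) and E = 4 edges, so
M = 4 - 4 + 2 \cdot 1 = 2
% matching the rule of thumb: one decision point gives complexity 2.
```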
Data definition: A specification of a data item in terms of its name, usage, and attributes.
Data flow analysis: A method of verifying the correctness of a program by examining the flow of data through the program.
Data flow testing: A white box testing technique that involves selecting test cases based on the flow of data through a program.
Debugging: The process of identifying, analyzing, and removing the causes of software defects.
Defect: Any flaw in a component or system that can cause the component or system to fail to perform its required function.
Defect density: A measure of the number of defects per unit size of a component or system.
Defect management: The process of identifying, analyzing, and managing defects that are found during testing.
Defect report: A document that describes a defect and the circumstances under which it was discovered.
Defect tracking tool: A software tool that supports the defect management process by enabling defects to be logged, tracked, and managed.
Delphi technique: A structured communication method used to obtain the opinions of experts on specific topics or issues, and to reach a consensus through a series of iterative rounds.
Design-based testing: A test design technique that uses architectural, functional, or detailed design documents as input to identify test conditions and design test cases.
Development testing: A type of testing that is performed by developers to check their own code or components, and to identify and fix defects before the software is passed to the testing team.
Documentation testing: A type of testing that verifies the correctness, completeness, and consistency of the software documentation, including user manuals, installation guides, and help files.
Domain analysis: The process of understanding the domain, business, or industry in which the software will be used, and using this understanding to identify test conditions and design test cases.
Dynamic analysis: A technique used to evaluate the behavior of software during execution, by analyzing data values and program state at run time.
End-to-end testing: A type of testing that verifies the entire system, including all components and interfaces, to ensure that they function together correctly and meet the specified requirements.
Equivalence partitioning: A test design technique that divides input values into classes or partitions and identifies representative values or test cases from each partition, to achieve maximum test coverage with minimum effort.
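For example, if a hypothetical system accepts values from 1 to 100, the input space splits into three partitions, each represented by one test value:

```python
def accepts(value):
    # Hypothetical system under test: valid inputs are 1..100.
    return 1 <= value <= 100

# One representative value is chosen from each equivalence partition.
partitions = [
    (-5, False),   # invalid partition: below the valid range
    (50, True),    # valid partition: within 1..100
    (150, False),  # invalid partition: above the valid range
]

for value, expected in partitions:
    assert accepts(value) == expected, f"failed for {value}"
```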
Error guessing: A test design technique where test cases are derived from the tester’s prior experience, knowledge, and intuition to identify defects based on errors made in the past.
Exhaustive testing: A type of testing where all possible combinations of input values and preconditions are tested to ensure that the system behaves correctly under all possible circumstances. It is practically impossible to achieve in most cases, so approaches such as risk-based or prioritized testing are usually used instead.
Experience-based testing: A testing approach where test cases are derived from the tester’s knowledge, experience, and intuition to identify defects based on what has worked, or caused problems, in the past. It includes techniques such as exploratory testing, error guessing, and checklists.
Exploratory testing: A testing approach that emphasizes the tester’s creativity, freedom, and independence in exploring the application or system under test to discover defects. Testers use their knowledge and experience to identify and investigate issues without a predefined test plan or scripts.
Failure: A deviation from the expected or desired output of a system or component.
Failure mode: The manner in which a component or system fails to function correctly or as expected.
Failure mode and effect analysis (FMEA): A systematic approach to identifying potential failure modes, evaluating their risks and impacts, and implementing actions to prevent or reduce their occurrence or severity. It is typically used in design or process engineering to identify and mitigate potential failures in a product or process.
Fault: An abnormal condition or defect in a component or system that can cause it to fail to perform its intended function. A fault is often the result of a mistake or error made during development or design.
Fault injection: A testing technique where faults are intentionally introduced into a system to evaluate the system’s ability to handle them.
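A minimal Python sketch of the idea, using a mock to inject a simulated I/O fault into a hypothetical function (real fault injection also targets hardware, memory, or network faults):

```python
import unittest.mock

def fetch_config(read_file):
    # Hypothetical component: falls back to defaults when the read fails.
    try:
        return read_file("config.ini")
    except OSError:
        return {"retries": 3}

# Inject the fault: make the dependency raise, then check graceful handling.
failing_reader = unittest.mock.Mock(side_effect=OSError("disk error"))
assert fetch_config(failing_reader) == {"retries": 3}
```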
Fault tolerance: The ability of a system or component to continue functioning despite the presence of faults.
Functional requirement: A requirement that specifies a function that a system or component must perform.
Functional testing: Testing conducted to verify that a system or component conforms to its functional requirements.
Gantt chart: A bar chart that shows the schedule of activities required to complete a project or set of tasks, with bars representing the duration of each activity.
Goodness of test: A measure of the effectiveness of a test suite in detecting faults, based on the number and severity of the faults it detects.
Gray box testing: A testing technique that combines elements of both white box and black box testing, where the tester has some knowledge of the internal workings of the system being tested.
High-level test plan: A document that outlines the approach, objectives, and scope of testing for a project or product. It typically includes the test strategy, test objectives, test schedule, test deliverables, and test environment.
Incident: An unplanned event that occurs during the testing process and requires investigation. It could be a defect, an error message, or any other unexpected behavior.
Incident report: A document that captures details about an incident, including its nature, severity, location, and the steps to reproduce it. It is used to communicate the incident to stakeholders and to facilitate its resolution.
Independence: The concept of ensuring that testing is carried out by people who are free from bias, influence, or conflict of interest. Independence can be achieved through organizational structure, reporting lines, or the use of external testers.
Input domain: The set of all possible inputs to a system or component being tested. Input domain testing involves selecting test cases from this set in order to uncover defects or faults.
Inspection: A type of static testing that involves a formal and rigorous examination of a document or code artifact to detect defects, improve quality, and ensure compliance with standards. Inspections typically involve a team of peers who examine the artifact systematically and provide feedback.
Installation testing: A type of testing that verifies that software or hardware is installed and configured correctly in the target environment. It typically involves checking system requirements, installation procedures, and compatibility with other components.
Integration testing: A type of testing that verifies that different software modules or components work together as intended, and that the interfaces between them function correctly. Integration testing can be performed at different levels of granularity, such as component integration, system integration, or end-to-end integration.
Interface testing: A type of testing that focuses on the interfaces between software components, such as APIs, web services, or user interfaces. Interface testing verifies that data and control flow correctly through the interfaces, and that data is correctly transformed and validated.
Internationalization testing: A type of testing that verifies that a software application can function correctly in any language or cultural setting.
Iterative development model: A software development model in which the development process is divided into smaller, iterative cycles.
Keyword-driven testing: A test design technique in which keywords are used to represent the actions that the test script will perform.
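A toy Python sketch of the idea (the keywords, actions, and login flow are hypothetical):

```python
# Each keyword maps to an action; a test script is just a list of keywords.
actions = {
    "open_login_page": lambda ctx: ctx.update(page="login"),
    "enter_username":  lambda ctx: ctx.update(user="alice"),
    "submit":          lambda ctx: ctx.update(logged_in=ctx.get("user") == "alice"),
}

test_script = ["open_login_page", "enter_username", "submit"]

context = {}
for keyword in test_script:
    actions[keyword](context)

assert context["logged_in"], "login flow failed"
```

The appeal of the technique is that non-programmers can compose new test scripts from existing keywords without touching the action implementations.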
Load testing: A type of performance testing that involves testing a system or application under high workload conditions.
Localization testing: A type of testing that verifies that a software application is compatible with the local language, currency, and culture.
Low-level test plan: A test plan that focuses on the testing of individual components or modules within a software application.
Maintenance testing: A type of testing performed after a software application has been deployed, to ensure that it continues to function correctly as it is modified or its environment changes.
Management reviews: A type of review in which senior management evaluates the status, progress, and quality of a software project.
Memory leak: A situation where a program fails to release memory it no longer needs, due to improper allocation or deallocation, so that its memory consumption grows over time.
Metrics: The use of measurements to quantify various aspects of software, such as the quality of the code or the efficiency of the testing process.
Model-based testing: A testing approach that uses models of the system being tested to generate test cases and assess the system’s behavior.
Modification testing: A type of testing performed to ensure that changes made to an existing system do not negatively impact its existing functionality.
Multiple condition coverage: A testing technique that involves verifying that all possible combinations of the conditions in a decision have been executed.
Mutation testing: A technique that involves modifying parts of the code to create artificial faults, and then running tests to see whether the faults are detected.
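A hand-rolled illustration in Python (real mutation testing tools generate and run mutants automatically; the function and mutant here are hypothetical):

```python
def discounted_price(price, discount):
    # Original implementation under test.
    return price - discount

def discounted_price_mutant(price, discount):
    # Artificial fault: "-" mutated to "+".
    return price + discount

def test_suite(impl):
    # Returns True if the implementation passes all tests.
    return impl(100, 20) == 80 and impl(50, 0) == 50

assert test_suite(discounted_price)             # original passes
assert not test_suite(discounted_price_mutant)  # mutant is "killed" (detected)
```

A mutant that survives the test suite points to a gap in the tests (or to an equivalent mutant that cannot change observable behavior).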
Negative testing: A testing technique that involves testing the system’s behavior when it is presented with invalid or unexpected inputs or conditions.
Non-functional requirement: A requirement that describes how the system should behave in terms of non-functional attributes, such as performance, reliability, and usability.
Non-functional testing: Testing that focuses on the non-functional aspects of a system, such as performance, security, and usability.
Object: An instance of a class or type that encapsulates data and functionality. In software testing, objects are tested using techniques such as state-based testing, behavior-based testing, and interaction-based testing.
Operational testing: Testing conducted to evaluate a system or component in its operational environment. It includes testing of system performance, reliability, availability, maintainability, and other attributes.
Orthogonal array testing: A test design technique that uses orthogonal arrays to generate a set of test cases that provides maximum coverage with a minimum number of test cases.
Pair testing: A testing technique where two team members work together to test the same system or component simultaneously. One team member acts as the driver and executes the tests while the other observes and provides feedback.
Pairwise testing: A test design technique that generates a set of test cases covering every possible pair of values for each pair of input parameters, using far fewer test cases than testing all combinations.
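For example, three hypothetical two-valued parameters would need 2 × 2 × 2 = 8 test cases for all combinations, but every value pair can be covered with 4; the sketch below verifies that claim:

```python
from itertools import combinations, product

# Three hypothetical test parameters, two values each.
params = {
    "browser": ["chrome", "firefox"],
    "os": ["linux", "windows"],
    "lang": ["en", "de"],
}

# A hand-picked pairwise set: 4 tests instead of the full 8 combinations.
tests = [
    ("chrome", "linux", "en"),
    ("chrome", "windows", "de"),
    ("firefox", "linux", "de"),
    ("firefox", "windows", "en"),
]

names = list(params)
for (i, a), (j, b) in combinations(enumerate(names), 2):
    needed = set(product(params[a], params[b]))
    covered = {(t[i], t[j]) for t in tests}
    assert needed == covered, f"missing pairs for ({a}, {b})"
print("all parameter-value pairs covered by", len(tests), "tests")
```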
Path coverage: A measure of the percentage of distinct execution paths through the code that have been exercised by tests. Path coverage is a white box coverage measure used to assess the thoroughness of testing.
Performance efficiency: A performance attribute that measures the system’s ability to perform its functions under specific conditions, using the minimum resources required.
Performance testing: Testing conducted to evaluate the performance of a system or component under specific conditions, such as load, stress, and volume. Performance testing helps to identify performance bottlenecks and to determine whether the system meets its performance requirements.
People capability maturity model (P-CMM): A model that defines the competencies and capabilities required for individuals to perform specific roles in software development organizations. The model provides a framework for assessing and improving the capabilities of individuals and teams.
Pesticide paradox: The phenomenon where repeatedly executing the same test cases leads to fewer defects being found over time, because the defects those tests can reveal have already been fixed.
Pilot testing: Testing a subset of the system, often in a production environment, to evaluate its functionality and performance.
Portability testing: Testing the ability of a system to run on different hardware, software, and operating systems.
Positive testing: Testing the system with valid input data to ensure that it behaves as expected under normal circumstances.
Process capability: A measure of how well a process can produce output that meets customer requirements.
Process improvement: A systematic approach to improving the efficiency and effectiveness of a process.
Process model: A diagram or description that shows the steps involved in a process and the relationships between those steps.
Product risk: The risk associated with a defect in the product itself, such as its impact on customer satisfaction or reputation.
Project risk: The risk associated with project management, such as delays or cost overruns.
Quality: The degree to which a system, component, or process meets specified requirements and customer or user needs and expectations.
Quality assurance: A set of planned and systematic actions aimed at providing adequate confidence that a product or service will meet given requirements for quality.
Quality control: A set of activities designed to evaluate the quality of a product, service, or process against given requirements or standards, and to identify defects or deficiencies.
Quality management system: A set of interrelated or interacting elements that organizations use to direct and control the quality of products or services provided to customers.
Random testing: Testing that involves using input values that are not suggested by the specification or another source, e.g., using a random number generator to create test data or choosing input values from the set of all possible values.
Recovery testing: Testing that involves intentionally causing a system or component to fail and verifying that it recovers to its normal state within an acceptable amount of time and with no loss or corruption of data.
Regression testing: Testing performed to verify that a change to a component or system has not adversely affected functionality that was previously working, and that no new defects have been introduced by the change.
Reliability: The ability of a software system to perform its intended functions under specified conditions for a defined period of time. It measures the likelihood that a system or component performs its required functions without failure. Reliability testing assesses the stability and dependability of a software product.
Requirements review: A process of evaluating the requirements specifications to ensure that they are complete, consistent, unambiguous, and testable. It is conducted before development starts, with the aim of identifying and correcting errors in the requirements as early as possible. Requirements reviews can be performed manually or with the support of automated tools.
Requirements testing: A process of verifying that the software product satisfies the specified requirements. It involves testing the software against the functional and non-functional requirements to ensure that the product works as expected. Requirements testing can be performed manually or using automated tools.
Review: A process of evaluating a document, code, or other deliverable to identify errors, defects, and other issues. Reviews can be performed at any stage of the software development life cycle, manually or with tool support, and help improve software quality by identifying defects early in the development process.
Risk: The potential for harm or damage to the software product, project, or organization; the likelihood of an event occurring that will have a negative impact on the project or product. In software testing, risks include the failure of a software product, the cost of testing, the impact on end users, and reputational damage to the organization.
Risk analysis: A process of identifying, assessing, and prioritizing risks to determine their likelihood and impact on the software product or project. It helps organizations develop effective risk mitigation strategies and allocate resources appropriately.
Risk identification: A process of identifying potential risks that could impact the software product or project, whether they arise from the product itself, the development process, or external factors. It helps organizations develop effective risk management strategies and mitigate risks before they occur.
Risk management: A process of identifying, assessing, prioritizing, and mitigating risks that may affect the project or the quality of the product. It involves analyzing potential risks, determining their likelihood and impact, and developing strategies to minimize or eliminate them.
Root cause analysis: A systematic approach used to identify the underlying cause(s) of defects or problems in a software product or process. It involves analyzing data and evidence to identify the root cause of a problem, rather than just treating its symptoms.
Sanity testing: A quick, shallow testing technique performed to verify whether the basic functionality of the software works as expected after minor changes. It helps determine whether a build is stable enough for further testing.
Scalability testing: A type of testing that evaluates a software application’s performance when subjected to an increased workload or number of users. The aim is to measure how well the application can scale up or down while maintaining its performance.
Security testing: A process of testing software to identify vulnerabilities and security weaknesses in the system. The aim is to evaluate the system’s ability to resist unauthorized access, theft, and destruction of data.
Severity: A measure of the impact or degree of harm caused by a defect or issue in the software. It is usually classified into levels such as high, medium, and low, based on the impact on the system and the urgency of fixing it.
Smoke testing: A quick, shallow testing technique performed to verify whether the critical functionality of the software works as expected after a new build or major changes. The aim is to identify any critical issues that would prevent further testing or release.
Software engineering: The systematic approach to the development, operation, and maintenance of software systems. It includes activities such as analysis, design, coding, testing, and maintenance.
Software life cycle models: Processes or frameworks that outline the stages involved in software development. Common models include the Waterfall, Agile, V-model, and Spiral models.
Software quality attributes: The characteristics that define the overall quality of a software system, such as reliability, usability, performance, security, maintainability, and portability.
Software quality characteristics: The measurable attributes that describe the performance of software, such as correctness, efficiency, reliability, usability, maintainability, and portability. These characteristics help ensure that the software meets users’ needs and expectations.
Software quality management: The process of defining, implementing, and maintaining a set of standards for software development and testing. It includes activities such as quality planning, quality assurance, and quality control, with the goal of ensuring that software products meet customer expectations and are delivered on time and within budget.
Software test plan: A document that outlines the objectives, scope, approach, resources, and schedule for testing a software system. It includes details about the testing environment, test cases, and expected outcomes, and helps ensure that testing is conducted systematically and thoroughly.
Software testing: The process of evaluating a software system or component to determine whether it meets specified requirements. It involves executing software with the intent of finding defects, and may include activities such as test planning, test design, test execution, and defect reporting. The goal of testing is to identify and fix defects before the software is released to users.
Software testing process: The sequence of activities involved in testing software, including planning, preparation, execution, and result analysis.
Specification-based testing: A testing technique that uses the documented requirements or specifications to design test cases and verify software behavior.
State transition testing: A testing technique that focuses on the behavior of the software when it moves from one state to another.
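A minimal Python sketch using a hypothetical two-state account that can be locked and unlocked:

```python
# Valid transitions of a hypothetical account: "active" <-> "locked".
transitions = {
    ("active", "lock"):   "locked",
    ("locked", "unlock"): "active",
}

def apply_event(state, event):
    # Invalid event/state combinations leave the state unchanged.
    return transitions.get((state, event), state)

# Test cases cover each valid transition plus one invalid event.
assert apply_event("active", "lock") == "locked"
assert apply_event("locked", "unlock") == "active"
assert apply_event("active", "unlock") == "active"  # invalid: no change
```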
Statement coverage: A measure of the code coverage achieved by a test suite, indicating the percentage of program statements executed by the tests.
Static analysis: The process of analyzing the software’s source code or other artifacts without executing the program, in order to detect defects early.
Static testing: A testing technique that involves reviewing and analyzing the software or its documentation to find defects without executing the code.
Structural testing: A testing technique that involves testing the internal structure of the software, such as its code, to verify its behavior.
System integration testing: A level of testing that verifies the interactions between different systems or components.
System testing: A level of testing that verifies the functionality, performance, and reliability of the entire system.
Test approach: The overall testing strategy that defines the scope, objectives, and methods for testing a system.
Test automation: The use of software tools to control the execution of tests and compare the actual results with expected results.
Test basis: The documentation or artifacts used as a reference to design and execute tests.
Test case: A set of inputs, execution conditions, and expected results designed to verify a specific system behavior.
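For illustration, a single test case written with Python’s unittest (the pricing logic is hypothetical):

```python
import unittest

class TestDiscount(unittest.TestCase):
    # One test case: fixed inputs, execution conditions, expected result.
    def test_ten_percent_discount(self):
        price, rate = 200.0, 0.10                # inputs
        result = price * (1 - rate)              # exercise the logic under test
        self.assertAlmostEqual(result, 180.0)    # expected result

if __name__ == "__main__":
    unittest.main()
```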
Test case design technique: A method for selecting and designing test cases based on specific criteria.
Test case specification: A document that describes a test case, including its inputs, steps, and expected results.
Test case suite: A collection of test cases that are related to each other in some way, such as by the functionality being tested or the test level.
Test closure: The activities that take place after a test phase or project has been completed, such as documenting lessons learned and archiving testware.
Test condition: A testable aspect or feature of the test object that needs to be verified or validated.
Test control: The process of initiating, monitoring, and adjusting test activities to ensure that they are on track and meeting their objectives.
Test data: The inputs that are used for testing the software under test, including valid and invalid values.
Test data preparation: The process of creating, selecting, and modifying test data to ensure that it is appropriate for use in the tests.
Test design: The process of creating and documenting a plan for testing the software based on the test objectives and test conditions.
Test design specification: A document that describes the test conditions, test cases, and test procedures that will be used to verify and validate the software.
Test driver: A software component that interacts with the component being tested and calls the test cases. It may also provide test input data and/or mock object behavior.
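A minimal Python sketch of a driver exercising a hypothetical component:

```python
# Component under test (hypothetical).
def parse_price(text):
    return float(text.strip("$"))

def driver():
    # The driver supplies test input data, calls the component,
    # and reports pass/fail for each case.
    cases = [("$10", 10.0), ("$2.50", 2.5)]
    for raw, expected in cases:
        actual = parse_price(raw)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: parse_price({raw!r}) -> {actual}")

driver()
```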
Test environment: The set of hardware, software, and/or network configurations needed to conduct testing, including any necessary data or other resources.
Test estimation: The process of predicting the effort and cost required to complete testing activities, based on available information about the project and its objectives.
Test execution: The process of running a test suite according to a test plan and recording the results.
Test execution schedule: A plan that details when and how test cases will be executed, including dependencies and prerequisites for each test case.
Test execution strategy: An approach to testing that defines the scope, objectives, and methods of testing, including the use of automated tools, resources, and techniques.
Test execution tool: A software tool used to perform some or all of the testing activities, such as test design, test execution, or test management.
Test harness: A collection of software and test data used to execute a set of test cases against a component or system.
Test incident report: A document that describes any unexpected or anomalous behavior observed during testing, including details about the issue, its impact, and the steps taken to reproduce it.
Test item: An entity or object to be tested, which may include software, hardware, documents, processes, or systems.
Test log: A document that records information about test activities, including test execution, test results, and incidents.
Test management: The planning, coordination, and control of test activities, including the selection and use of resources and the application of testing techniques and tools.
Test management tool: A software application that supports test management activities, such as test planning, test design, test execution, and test reporting.
Test maturity model integration (TMMi): A framework for assessing and improving an organization’s test process. It defines five levels of maturity and focuses on best practices for test planning, execution, and management.
Test model: A representation of the test object or system, used to guide test design and execution, which may include test cases, test scenarios, test data, and test scripts.
Test objective: A statement of the intended outcome or purpose of a test, such as identifying defects, verifying functionality, or validating requirements.
Test oracle: A mechanism for determining the expected results of a test, which may include specifications, requirements, or previous versions of the system.
Test phase: A stage in the software development life cycle in which testing is performed, such as unit testing, integration testing, system testing, or acceptance testing.
Test plan: A document that describes the objectives, scope, approach, and focus of software testing activities. It identifies the items to be tested, the testing tasks, the personnel responsible for each task, the risks associated with the plan, and the schedule and budget for test activities.
Test plan document: The deliverable that records the test plan, including the objectives, scope, approach, and focus of software testing activities, the items to be tested, the testing tasks, responsibilities, risks, schedule, and budget.
Test planning: The process of defining the objectives, scope, approach, and focus of software testing activities, including identifying the items to be tested, the testing tasks, the personnel responsible for each task, the associated risks, and the schedule and budget for test activities.
Test policy: A high-level document that describes the principles, approach, and objectives of software testing within an organization. It provides a framework for testing activities and defines the expectations for testing across the organization.
Test procedure: A document that describes the steps to be taken to execute a test, including the preconditions, input values, expected results, and postconditions.
Test procedure specification: The deliverable that records a test procedure, including the steps to be taken to execute the test, its preconditions, input values, expected results, and postconditions.
Test process: A set of interrelated activities performed to achieve a specific objective related to software testing, including test planning, test design, test execution, and test closure.
Test process improvement: The process of identifying and implementing changes to the test process to improve its effectiveness and efficiency. It involves measuring the current process, identifying areas for improvement, and implementing changes.
Test progress report: A document that provides information about the status of testing activities, including the progress made, the issues encountered, and the associated risks. It is used to communicate the status of testing to stakeholders.
Test report: A document that provides stakeholders with information about the testing activities conducted, including test results and analysis, as well as any issues that arose during testing. Test reports are used to assess product quality and make informed decisions about releasing the software.
Test repository: A centralized location where test artifacts such as test plans, test cases, test scripts, and test results are stored and managed. The repository keeps testing artifacts organized and easily accessible to the testing team, and maintains version control so that the latest versions of artifacts are used for testing.
Test result: The outcome of a test case or test suite, indicating whether the test passed or failed. Test results also include other relevant information, such as the test environment and configuration used, the date and time of testing, and any defects found.
Test script: A set of instructions that describes how to run a particular test case, including the necessary input data and expected output. Test scripts are often automated using test automation tools to ensure consistency and repeatability, and they also document the steps taken during testing, making issues easier to reproduce and debug.
Test specification: A document that describes the test approach, objectives, and test design techniques used to verify a specific software feature or requirement. Test specifications typically include a list of test cases and expected results, as well as the test environment and test data required to execute the tests.
Test strategy: A high-level document that outlines the overall approach and objectives for testing a software system, including the test types, test levels, and test design techniques used, as well as the roles and responsibilities of the testing team and the resources and timelines required.
Test summary report: A concise document that provides an overview of the testing activities conducted, including the number of test cases executed, the number of defects found, and the overall results. It is typically used to communicate testing progress to stakeholders and to provide a snapshot of the current state of the software.
Test suite: A collection of related test cases designed to verify a specific software feature or requirement. Test suites are typically organized by test type or test level and executed together to ensure that all aspects of the software have been tested. They can be automated with test automation tools to speed up testing and improve coverage.
Test target: The component or system to be tested. It is often used interchangeably with “test object.”
Test technique: A systematic approach to performing testing activities, such as black box testing or white box testing.
Test tool: A software tool that supports one or more testing activities, such as test management, test execution, or defect management.
Test tool criteria: The requirements that a test tool must meet to be selected for a particular testing task, such as compatibility with existing tools or platforms.
Test type: A category of testing activities organized and executed in a systematic way, such as functional testing or performance testing.
Testability: The degree to which a system or component can be tested effectively and efficiently, typically measured by its ease of testing, controllability, and observability.
Traceability: The ability to identify related items in different documents and to trace the history of changes to those items.
Traceability matrix: A document that traces the relationships between requirements and test cases, or between any two sets of documents that have a logical relationship.
Usability testing: Testing to evaluate the degree to which the user interface of a system or component is easy to use and understand.
Use case testing: A black box test design technique in which test cases are designed to execute user scenarios. It involves identifying the interactions between the actors and the system, defining use case scenarios, and designing test cases based on those scenarios.
User acceptance testing (UAT): Testing performed to verify that the software meets the requirements of the end users and is ready for release. It is performed by end users or subject matter experts (SMEs) and involves executing predefined test cases, exploring the system, and reporting any defects.
User story: A short, simple description of a feature or requirement written from an end-user perspective. It is used in Agile development to describe a specific functionality or requirement that is valuable to the end user, and usually follows the format “As a <user>, I want <functionality>, so that <benefit>.”
Validation: The process of evaluating software or a system to determine whether it satisfies the specified requirements and meets the user’s needs. It ensures that the software or system is fit for its intended purpose and delivers the expected results.
Verification: The process of evaluating software or a system to determine whether it meets the specified requirements and is consistent with the design and development specifications. It involves reviewing and testing the software or system to ensure that it is correct, complete, and reliable.
V-model: A software development model that emphasizes the relationship between each phase of the development life cycle and its associated testing phase, illustrating how testing activities should be integrated with development activities.
Walkthrough: A review of a software product, document, or process that is led by its author or designer. The purpose of a walkthrough is to identify defects, ensure completeness and accuracy, and gather feedback from the participants.
White box testing: A testing technique in which the tester has knowledge of the internal workings of the system or component being tested. It involves testing the system’s code, internal structures, and logic to ensure that they function as intended. It is also known as structural testing or code-based testing.