

    Testing Glossary: QA & Software Testing

    Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria—enables an end user to determine whether or not to accept the system.

    Affinity Diagram: A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

    Alpha Testing: Testing of a software product or system conducted at the developer’s site by the end user.

    Accessibility Testing: Verifying that a product is accessible to people with disabilities.

    Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well.

    Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.

    Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary forms across different system platforms and environments.

    Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

    Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

    Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the “eyes and ears” of management.

    Automated Testing: That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.

    Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.

    Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

    Black Box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.

    Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

    Bottom-up Testing: An integration testing technique that tests the low-level components first, using test drivers to stand in for the not-yet-developed higher-level components that call them.

    Boundary Testing: Testing that focuses on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).

    Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
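
    As a sketch of boundary value analysis, assume a hypothetical validator `accepts_age` that accepts ages 18 through 65 inclusive (the function and the range are illustrative, not from the glossary):

```python
def accepts_age(age: int) -> bool:
    """Hypothetical validator: valid ages are 18 through 65 inclusive."""
    return 18 <= age <= 65

# Boundary value analysis selects values at and around the data extremes:
# minimum, maximum, just inside/outside each boundary, and a typical value.
boundary_cases = {
    17: False,  # just outside the minimum
    18: True,   # minimum
    19: True,   # just inside the minimum
    40: True,   # typical value
    64: True,   # just inside the maximum
    65: True,   # maximum
    66: False,  # just outside the maximum
}

for value, expected in boundary_cases.items():
    assert accepts_age(value) == expected, value
```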

    Brainstorming: A group process for generating creative and diverse ideas.

    Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.
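
    A minimal illustration of branch coverage, using a made-up `classify` function with two decision points (four branches in total); three inputs suffice to take every branch at least once:

```python
def classify(n: int) -> str:
    # Decision point 1: two branches (n < 0 true / false)
    if n < 0:
        return "negative"
    # Decision point 2: two branches (n == 0 true / false)
    if n == 0:
        return "zero"
    return "positive"

# Branch coverage requires each branch of each decision to be taken
# at least once; these three inputs cover all four branches.
branch_tests = [(-1, "negative"), (0, "zero"), (5, "positive")]
for n, expected in branch_tests:
    assert classify(n) == expected
```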

    Branch Testing: Testing wherein all branches in the program source code are tested at least once.

    Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

    Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.

    Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.

    Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

    Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

    Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

    Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

    Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

    Coding: The generation of source code.

    Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since "white boxes" are actually opaque and do not permit visibility into the code, whereas "clear" suggests the visibility the technique relies on. This is also known as glass-box or open-box testing.

    Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

    Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

    Client: The end user that pays for the product received, and receives the benefit from the use of the product.

    Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

    Customer (end user): The individual or organization, internal or external to the producing organization that receives the product.

    Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.
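
    For a single-entry, single-exit control-flow graph, the standard formula is V(G) = E − N + 2 (edges minus nodes plus two), which for structured code equals the number of binary decision points plus one. A small sketch (the node/edge counts are illustrative):

```python
def cyclomatic_complexity(num_edges: int, num_nodes: int) -> int:
    """V(G) = E - N + 2 for a single-entry, single-exit control-flow graph."""
    return num_edges - num_nodes + 2

# A straight-line function (2 nodes, 1 edge) has complexity 1:
assert cyclomatic_complexity(1, 2) == 1

# A function with two independent `if` statements can be drawn as a
# control-flow graph with 7 nodes and 8 edges, giving V(G) = 3,
# i.e. two decision points plus one.
assert cyclomatic_complexity(8, 7) == 3
```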

    Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points of executing the source program.

    Defect: NOTE: Operationally, it is useful to work with two definitions of a defect:
    1) From the producer’s viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product.
    2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether
    in the statement of requirements or not.

    Defect Analysis: Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

    Defect Density: Ratio of the number of defects to program length (a relative number).

    Desk Checking: A form of manual static analysis usually performed by the originator. Source code documentation, etc., is visually checked against requirements and standards.

    Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

    Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet.

    Debugging: The process of finding and removing the causes of software failures.

    Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

    Depth Testing: A test that exercises a feature of a product in full detail.

    Dynamic Testing: Testing software through executing it.

    Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.

    Error: 1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and
    2) a mental mistake made by a programmer that may result in a program fault.

    Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

    Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

    Execution: The process of a computer carrying out an instruction or instructions of a computer program.

    Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

    End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

    Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

    Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

    Failure-directed Testing: Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.

    Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

    Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.

    Fault-based Testing: Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically, frequently occurring faults.

    Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

    Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.

    Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.

    Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.

    Gorilla Testing: Testing one particular module or piece of functionality heavily.

    Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

    Heuristics Testing: Another term for failure-directed testing.

    Histogram: A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

    Hybrid Testing: A combination of top-down testing combined with bottom-up testing of prioritized or available components.

    Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

    Infeasible Path: Program statement sequence that can never be executed.

    Inputs: Products, services, or information needed from suppliers to make a process work.

    Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.
    2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

    Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

    Integration: The process of combining software components or hardware components, or both, into an overall system.

    Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

    Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

    Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

    Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.

    Installation Testing: Confirms that the application under test installs correctly, is fully operational after installation, and can be upgraded or uninstalled cleanly, typically across the range of supported environments and configurations.

    IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

    Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

    Localization Testing: Testing that verifies software has been correctly adapted for a specific locale, including language, regional formats, and cultural conventions.

    Loop Testing: A white box testing technique that exercises program loops.

    Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.

    Mean: A value derived by adding several quantities and dividing the sum by the number of these quantities.

    Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained by measuring.

    Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

    Monkey Testing: Testing a system or application on the fly, i.e., running just a few tests here and there to ensure the system does not crash.

    Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
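
    A toy illustration of mutation testing: a slight variant ("mutant") of the program is generated, and a test set is thorough only if some test distinguishes ("kills") the mutant. The `area` function and the `*` → `+` mutation are invented for the example:

```python
def area(width, height):
    """Original program under test."""
    return width * height

def area_mutant(width, height):
    """Slight variant: the * operator mutated into +."""
    return width + height

# A weak test set fails to kill the mutant: on (2, 2) both versions
# return 4, so the mutant survives undetected.
weak_tests = [(2, 2)]
assert not any(area(w, h) != area_mutant(w, h) for w, h in weak_tests)

# A stronger test set kills it: on (3, 4) the original returns 12
# while the mutant returns 7.
strong_tests = [(2, 2), (3, 4)]
killed = any(area(w, h) != area_mutant(w, h) for w, h in strong_tests)
assert killed
```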

    Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.

    Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".
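
    A minimal negative-testing sketch: invalid inputs are fed in deliberately, and the expectation is a clean, controlled rejection rather than a crash or a silently wrong answer. The `parse_quantity` function and its rule are hypothetical:

```python
def parse_quantity(text: str) -> int:
    """Hypothetical function under test: quantities are positive integers."""
    value = int(text)  # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Each negative test passes only if the invalid input is rejected.
for bad in ["abc", "", "0", "-5"]:
    try:
        parse_quantity(bad)
    except ValueError:
        pass  # expected: the software refused the input cleanly
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```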

    Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.

    Operational Testing: Testing performed by the end user on software in its normal operating environment.

    Outputs: Products, services, or information supplied to meet end user needs.

    Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.

    Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.

    Peer Reviews: A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.

    Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

    Problem: Any deviation from defined standards. Same as defect.

    Procedure: The step-by-step method followed to ensure that standards are met.

    Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

    Process Improvement: To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.

    Product: The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.

    Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.

    Path Testing: Testing wherein all paths in the program source code are tested at least once.

    Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

    Positive Testing: Testing aimed at showing software works. Also known as "test to pass".

    Productivity: The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).

    Proof Checker: A program that checks formal proofs of program properties for logical correctness.

    Prototyping: Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.

    Qualification Testing: Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.

    Quality: A product is a quality product if it is defect free. To the producer a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to "quality means meets requirements." NOTE: Operationally, the word quality refers to products.

    Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

    Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

    Quality Management: That aspect of the overall management function that determines and implements the quality policy.

    Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

    Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

    Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

    Quality Control (QC): The process by which product quality is compared with applicable standards; and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function, that is, the performance of these tasks is the responsibility of the people working within the process.

    Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.

    Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
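
    A small random-testing sketch: inputs are drawn from the input domain at random and checked against a property that must hold for every input. The `absolute` function, the seed, and the input range are all illustrative choices:

```python
import random

def absolute(n: int) -> int:
    """Hypothetical function under test."""
    return -n if n < 0 else n

# Seeded so that any failure is reproducible.
rng = random.Random(42)

for _ in range(1000):
    n = rng.randint(-10**6, 10**6)
    result = absolute(n)
    # Properties that must hold for every randomly chosen input:
    assert result >= 0
    assert result in (n, -n)
```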

    Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.

    Ramp Testing: Continuously raising an input signal until the system breaks down.

    Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

    Reliability: The probability of failure-free operation for a specified period.

    Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.

    Review: A way to use the diversity and power of a group of people to point out needed improvements in a product or confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.

    Run Chart: A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.

    Scatter Plot (correlation diagram): A graph designed to show whether there is a relationship between two changing factors.

    Semantics: 1) The relationship of characters or a group of characters to their meanings, independent of the manner of their interpretation and use.
    2) The relationships between symbols and their meanings.

    Software Characteristic: An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).

    Software Feature: A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).

    Software Tool: A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.

    Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

    Standardize: Procedures are implemented to ensure that the output of a process is maintained at a desired level.

    Statement Coverage Testing: A test method satisfying coverage criteria that requires each statement be executed at least once.

    Statement of Requirements: The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.

    Static Testing: Verification performed without executing the system’s code. Also called static analysis.

    Statistical Process Control: The use of statistical techniques and tools to measure an ongoing process for change or stability.

    Structural Coverage: A coverage measure that requires each pair of module invocations to be executed at least once.

    Structural Testing: A testing method where the test data is derived solely from the program structure.

    Stub: A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.
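
    A minimal stub sketch: during top-down testing the high-level component is real, while a stub returns a canned answer in place of a called component that has not yet been integrated. The exchange-rate service and both function names are invented for the example:

```python
def exchange_rate_stub(currency: str) -> float:
    """Stub: minimally simulates a not-yet-integrated rate service."""
    return 1.25  # canned answer, regardless of currency

def price_in_usd(amount: float, currency: str, rate_lookup) -> float:
    """High-level component under test; the rate lookup is injected,
    so a stub can stand in for the real service."""
    return round(amount * rate_lookup(currency), 2)

# The high-level logic can be exercised before the real service exists.
assert price_in_usd(10.0, "GBP", exchange_rate_stub) == 12.5
```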

    Supplier: An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.

    Syntax: 1) The relationship among characters or groups of characters independent of their meanings or the manner of their interpretation and use.
    2) The structure of expressions in a language, and
    3) the rules governing the structure of the language.

    Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational.

    Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.

    Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

    Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

    Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

    Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

    Software Testing: A set of activities conducted with the intent of finding errors in software.

    Static Analysis: Analysis of a program carried out without executing the program.

    Static Analyzer: A tool that carries out static analysis.

    Static Testing: Analysis of a program carried out without executing the program.

    Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

    Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

    Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software.

    System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

    System: A collection of people, machines, and methods organized to accomplish a set of specified functions.

    System Simulation: Another name for prototyping.

    Technical Review: A review that refers to content of the technical material being reviewed.

    Test Bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component.
    2) A suite of test programs used in conducting the test of a component or system.

    Test Development: The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.

    Test Executive: Another term for test harness.

    Test Harness: A software tool that enables the testing of software components: it links in test capabilities to perform specific tests, accepts program inputs, simulates missing components, compares actual outputs with expected outputs to determine correctness, and reports discrepancies.
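
    The core of a harness can be sketched in a few lines: run each case, compare actual with expected, report discrepancies. The `run_suite` helper and `add` function here are hypothetical illustrations, not a real tool:

```python
def run_suite(func, cases):
    """Run func on each (args, expected) case; return discrepancy reports."""
    discrepancies = []
    for args, expected in cases:
        actual = func(*args)
        if actual != expected:
            discrepancies.append(
                f"{func.__name__}{args}: got {actual!r}, expected {expected!r}"
            )
    return discrepancies

def add(a, b):
    """Trivial component under test."""
    return a + b

# One passing case, one deliberately wrong expectation:
report = run_suite(add, [((1, 2), 3), ((2, 2), 5)])
assert len(report) == 1  # only the 2 + 2 != 5 case is reported
```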

    Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

    Testing: 1) The process of exercising software to verify that it satisfies specified requirements and to detect errors. 2) The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). 3) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

    Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

    Test Case: Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements testing, test steps, verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

    Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a large number of tests, often roughly as many lines of test code as production code.

    Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
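The Test Driver/Test Harness idea can be sketched in a few lines of Python: feed inputs to the component under test, compare actual with expected outputs, and report discrepancies. All names here (`add`, `run_tests`) are invented for illustration.

```python
def add(a, b):
    """Stand-in for the component under test (name invented here)."""
    return a + b

def run_tests(func, cases):
    """Feed inputs to the function under test, compare actual output
    with expected output, and collect any discrepancies."""
    discrepancies = []
    for inputs, expected in cases:
        actual = func(*inputs)
        if actual != expected:
            discrepancies.append({"inputs": inputs,
                                  "expected": expected,
                                  "actual": actual})
    return discrepancies

cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
failures = run_tests(add, cases)   # empty list: no discrepancies found
```

A real harness adds the other capabilities from the definition above, such as simulating missing components and formatted reporting, but the compare-and-report loop is the core.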

    Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

    Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
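As a minimal sketch of the test-first discipline, the unit test below is written before the code it exercises; `slugify` is a hypothetical example function, not part of any real codebase.

```python
import unittest

# Step 1: the test is written first; it specifies the behavior
# before any production code exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2: just enough production code is written to make the test pass.
def slugify(text):
    return "-".join(text.lower().split())

# Step 3: the suite is run; it now passes (it would have failed
# before slugify was implemented).
suite = unittest.TestLoader().loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```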

    Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

    Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

    Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

    Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests.

    Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

    Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

    Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

    Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

    Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.

    Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
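In code form, a traceability matrix can be as simple as a mapping from requirement IDs to the test cases that cover them; all IDs below are invented for illustration.

```python
# A traceability matrix as a mapping from requirement IDs to the
# test cases covering them (all IDs are hypothetical).
matrix = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no covering test case yet
}

# One immediate use: flag requirements with no test coverage.
uncovered = [req for req, tcs in matrix.items() if not tcs]
```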

    Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.

    Test Plan: A formal or informal plan to be followed to assure the controlled testing of the product under test.

    Unit Testing: The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.

    Usability Testing: Testing the ease with which users can learn and use a product.

    Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

    V-Diagram (V-Model): A diagram that visualizes the order of testing activities and their corresponding phases of development.

    Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

    Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

    Validation: The process of evaluating software to determine compliance with specified requirements.

    Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

    White-box Testing: Testing approaches that examine the program structure and derive test data from the program logic. This is also known as clear box testing, glass-box or open-box testing. White box testing determines if program-code structure and logic is faulty. The test is accurate only if the tester knows what the program is supposed to do. He or she can then see if the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.

    Workflow Testing: Scripted end-to-end testing which duplicates specific workflows, which are expected to be utilized by the end-user.

    What is 'Software Quality Assurance'?
    Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

    What is the 'software life cycle'?
    The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.



    Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.

    Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the “eyes and ears” of management.

    Automated Testing: That part of software testing that is assisted with software tool(s) that does not require operator input, analysis, or evaluation.

    Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.

    Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.

    Black Box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission -- places where the specification is not fulfilled.

    Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformation to an ABI specification.

    Bottom-up Testing: An integration testing technique that tests the low-level components first using test drivers for those components that have not yet been developed to call the low-level components for test.

    Boundary Testing: Testing that focuses on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests).

    Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
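A simple sketch of boundary-value selection for an integer range, assuming a hypothetical field that accepts values from 1 to 100:

```python
def boundary_values(lo, hi):
    """Classic boundary-value picks for an integer range [lo, hi]:
    just outside, at, and just inside each boundary, plus a typical
    mid-range value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# For a field that accepts 1..100, the selected test inputs are:
values = boundary_values(1, 100)   # [0, 1, 2, 50, 99, 100, 101]
```

The two values outside the range (0 and 101) are the error values; the rest exercise the minimum, maximum, their immediate neighbors, and a typical value.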

    Brainstorming: A group process for generating creative and diverse ideas.

    Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.

    Branch Testing: Testing wherein all branches in the program source code are tested at least once.
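To make branch coverage concrete, here is a hypothetical function with a single decision point; two test cases, one per branch outcome, achieve full branch coverage.

```python
def classify(n):
    # One decision point with two branches.
    if n < 0:
        return "negative"
    return "non-negative"

# Branch coverage requires each branch outcome to be taken at least
# once, so two test cases suffice here: one per branch.
branch_tests = [(-5, "negative"), (7, "non-negative")]
all_pass = all(classify(n) == expected for n, expected in branch_tests)
```

Note that a single test case (say, only `-5`) would achieve 100% statement coverage of the `if` line while leaving the fall-through branch untested, which is why branch coverage is the stronger criterion.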

    Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

    Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.

    Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.

    Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

    Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

    Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

    Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

    Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

    Coding: The generation of source code.

    Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since “white boxes” are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.

    Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

    Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

    Client: The end user that pays for the product and receives the benefit from its use.

    Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

    Customer (end user): The individual or organization, internal or external to the producing organization that receives the product.

    Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.
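McCabe's measure can be computed directly from the control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. The graph below is a hypothetical one for a function containing a single if/else.

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity for a control-flow graph:
    V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

# Hypothetical control-flow graph for a function with one if/else:
#   nodes: entry, decision, then-branch, else-branch, exit   -> N = 5
#   edges: entry->decision, decision->then, decision->else,
#          then->exit, else->exit                            -> E = 5
paths = cyclomatic_complexity(edges=5, nodes=5)   # 2 independent paths
```

The result (2) equals the number of linearly independent paths, which is also the minimum number of test cases needed for branch coverage of that function.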

    Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points of executing the source program.

    Defect: Operationally, it is useful to work with two definitions of a defect:
    1) From the producer’s viewpoint: a product requirement that has not been met or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product.
    2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.

    Defect Analysis: Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

    Defect Density: Ratio of the number of defects to program length (a relative number).

    Desk Checking: A form of manual static analysis usually performed by the originator. Source code documentation, etc., is visually checked against requirements and standards.

    Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

    Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet.
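A minimal data-driven sketch: the test logic is written once and the rows drive it. The function under test (`discount`, 10% off orders of 100 or more) is invented for this example, and the CSV is inlined here where a real suite would read a file or spreadsheet.

```python
import csv
import io

def discount(total):
    """Hypothetical function under test: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

# Test data maintained externally; inlined as CSV for this sketch.
data = io.StringIO("total,expected\n50,50\n100,90\n200,180\n")
rows = list(csv.DictReader(data))

# One generic check, parameterized by each data row.
outcomes = [discount(float(r["total"])) == float(r["expected"])
            for r in rows]
```

Adding a new scenario then means adding a row of data, not writing a new test.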

    Debugging: The process of finding and removing the causes of software failures.

    Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

    Depth Testing: A test that exercises a feature of a product in full detail.

    Dynamic Testing: Testing software through executing it.

    Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.

    Error: 1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and
    2) a mental mistake made by a programmer that may result in a program fault.

    Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

    Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

    Execution: The process of a computer carrying out an instruction or instructions of a computer program.

    Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.

    End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

    Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
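Exhaustive testing is usually infeasible, but for a deliberately tiny input domain every combination can actually be enumerated; `xor` below is an invented example function.

```python
import itertools

def xor(a, b):
    """Function under test, with a deliberately tiny input domain."""
    return a != b

# With two boolean inputs there are only 2 * 2 = 4 combinations,
# so exhaustive testing is feasible here.
all_inputs = list(itertools.product([False, True], repeat=2))
results = {inputs: xor(*inputs) for inputs in all_inputs}
```

For realistic input domains the combination count explodes (two 32-bit integers already give 2^64 cases), which is why techniques such as boundary value analysis and equivalence partitioning exist to select a manageable subset.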

    Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

    Failure-directed Testing: Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.

    Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

    Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.

    Fault-based Testing: Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically, frequently occurring faults.

    Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

    Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.

    Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.

    Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.

    Gorilla Testing: Heavily testing one particular module or functionality.

    Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

    Heuristics Testing: Another term for failure-directed testing.

    Histogram: A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.

    Hybrid Testing: A combination of top-down testing combined with bottom-up testing of prioritized or available components.

    Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

    Infeasible Path: Program statement sequence that can never be executed.

    Inputs: Products, services, or information needed from suppliers to make a process work.

    Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.
    2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

    Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

    Integration: The process of combining software components or hardware components, or both, into an overall system.

    Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

    Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

    Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

    Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested or additional processes running concurrently with software being tested on the same platform.

    Installation Testing: Confirms that the application under test installs and runs correctly under expected and unexpected conditions, such as insufficient disk space, an interrupted installation, or non-default installation options, and that it can be cleanly uninstalled.

    IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

    Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

    Localization Testing: Testing that verifies software has been correctly adapted for a specific locality, including language, regional formats, and cultural conventions.

    Loop Testing: A white box testing technique that exercises program loops.

    Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.

    Mean: A value derived by adding several quantities and dividing the sum by the number of these quantities.

    Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained by measuring.

    Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

    Monkey Testing: Testing a system or an application on the fly, i.e. running just a few tests here and there to ensure that the system or application does not crash.

    Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
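A toy illustration of the mutation idea: build a "slight variant" of a function by flipping its comparison operator, then check whether the existing test set can tell the mutant apart from the original (i.e. "kill" it). All names here are invented.

```python
import operator

def make_max(gt):
    """Build a 'max of two values' function from a comparison operator."""
    def biggest(a, b):
        return a if gt(a, b) else b
    return biggest

original = make_max(operator.gt)
mutant = make_max(operator.lt)   # slight variant: comparison flipped

tests = [((3, 1), 3), ((1, 3), 3), ((2, 2), 2)]

def kills(func):
    """A test set kills a mutant if at least one test detects it."""
    return any(func(*args) != expected for args, expected in tests)

mutant_killed = kills(mutant)       # the tests discriminate the variant
original_ok = not kills(original)   # the original passes every test
```

A test set that kills all (non-equivalent) mutants is considered thorough; surviving mutants point at behavior the tests never actually check.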

    Non-intrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.

    Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail".

    Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.

    Operational Testing: Testing performed by the end user on software in its normal operating environment.

    Outputs: Products, services, or information supplied to meet end user needs.

    Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.

    Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program is tested. Paths through the program often are grouped into a finite set of classes; one path from each class is tested.

    Peer Reviews: A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.

    Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

    Problem: Any deviation from defined standards. Same as defect.

    Procedure: The step-by-step method followed to ensure that standards are met.

    Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

    Process Improvement: To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.

    Product: The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.

    Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.

    Path Testing: Testing wherein all paths in the program source code are tested at least once.

    Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

    Positive Testing: Testing aimed at showing software works. Also known as "test to pass".

    Productivity: The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).

    Proof Checker: A program that checks formal proofs of program properties for logical correctness.

    Prototyping: Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.

    Qualification Testing: Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.

    Quality: A product is a quality product if it is defect free. To the producer a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to “quality means meets requirements.” NOTE: Operationally, the word quality refers to products.

    Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

    Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

    Quality Management: That aspect of the overall management function that determines and implements the quality policy.

    Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.

    Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

    Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

    Quality Control (QC): The process by which product quality is compared with applicable standards; and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function, that is, the performance of these tasks is the responsibility of the people working within the process.

    Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.

    Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.
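A small sketch of random testing against a property rather than hand-picked expected values; the function under test (`absolute`) and the input distribution are invented for illustration, and the generator is seeded so the run is repeatable.

```python
import random

def absolute(n):
    """Function under test."""
    return -n if n < 0 else n

rng = random.Random(42)   # seeded so the random run is repeatable

# Randomly choose a subset of all possible inputs (arbitrary
# distribution here) and check each result against a known property.
inputs = [rng.randint(-10**6, 10**6) for _ in range(1000)]
failures = [n for n in inputs if absolute(n) != abs(n)]
```

Checking a property (here, agreement with the built-in `abs`) sidesteps the need to precompute an expected value for each randomly chosen input; this is the core idea behind property-based testing tools.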

    Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.

    Ramp Testing: Continuously raising an input signal until the system breaks down.

    Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

    Reliability: The probability of failure-free operation for a specified period.

    Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.

    Review: A way to use the diversity and power of a group of people to point out needed improvements in a product or confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.

    Run Chart: A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured for the purpose of suggesting an assignable cause rather than random variation.

    Scatter Plot (correlation diagram): A graph designed to show whether there is a relationship between two changing factors.

    Semantics: 1) The relationship of characters or a group of characters to their meanings, independent of the manner of their interpretation and use.
    2) The relationships between symbols and their meanings.

    Software Characteristic: An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).

    Software Feature: A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).

    Software Tool: A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.

    Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

    Standardize: Procedures are implemented to ensure that the output of a process is maintained at a desired level.

    Statement Coverage Testing: A test method satisfying coverage criteria that requires each statement be executed at least once.

    Statement of Requirements: The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.

    Static Testing: Verification performed without executing the system’s code. Also called static analysis.

    Statistical Process Control: The use of statistical techniques and tools to measure an ongoing process for change or stability.

    Structural Coverage: Coverage of the code structure exercised by a test set; one such criterion requires that each pair of module invocations be executed at least once.

    Structural Testing: A testing method where the test data is derived solely from the program structure.

    Stub: A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.
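
    A hypothetical sketch of a stub: an `OrderService` depends on a payment gateway that has not been integrated yet, so a stub minimally simulates it during top-down testing (all names here are illustrative):

```python
class PaymentGatewayStub:
    """Minimally simulates the real gateway: always approves the charge."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderService:
    """Component under test; its real dependency is not yet available."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

# The service can be exercised before the real gateway exists:
service = OrderService(PaymentGatewayStub())
assert service.place_order(9.99) is True
```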

    Supplier: An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.

    Syntax: 1) The relationship among characters or groups of characters independent of their meanings or the manner of their interpretation and use.
    2) The structure of expressions in a language, and
    3) the rules governing the structure of the language.

    Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational.

    Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload.

    Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

    Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

    Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

    Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

    Software Testing: A set of activities conducted with the intent of finding errors in software.

    Static Analysis: Analysis of a program carried out without executing the program.

    Static Analyzer: A tool that carries out static analysis.

    Static Testing: Analysis of a program carried out without executing the program.

    Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

    Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

    Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software.

    System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

    System: A collection of people, machines, and methods organized to accomplish a set of specified functions.

    System Simulation: Another name for prototyping.

    Technical Review: A review focused on the technical content of the material being reviewed.

    Test Bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component.
    2) A suite of test programs used in conducting the test of a component or system.

    Test Development: The development of anything required to conduct testing. This may include test requirements (objectives), strategies, processes, plans, software, procedures, cases, documentation, etc.

    Test Executive: Another term for test harness.

    Test Harness: A software tool that enables the testing of software components that links test capabilities to perform specific tests, accept program inputs, simulate missing components, compare actual outputs with expected outputs to determine correctness, and report discrepancies.
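
    A minimal harness sketch (function names are hypothetical): it accepts inputs, runs the component under test, compares actual with expected outputs, and reports discrepancies:

```python
def run_harness(component, cases):
    """cases is a list of (input, expected) pairs; returns discrepancy reports."""
    failures = []
    for inp, expected in cases:
        actual = component(inp)
        if actual != expected:
            failures.append(f"{inp!r}: expected {expected!r}, got {actual!r}")
    return failures

# Hypothetical component under test:
def double(x):
    return x * 2

assert run_harness(double, [(1, 2), (3, 6)]) == []   # all cases pass
assert run_harness(double, [(2, 5)]) == ["2: expected 5, got 4"]
```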

    Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

    Testing: 1) The process of exercising software to verify that it satisfies specified requirements and to detect errors.
    2) The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
    3) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

    Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

    Test Case: A commonly used term for a specific test, usually the smallest unit of testing. A test case consists of information such as the requirement tested, test steps, verification steps, prerequisites, outputs, and test environment. More formally: a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
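
    The fields a test case typically carries can be sketched as a simple record (all names and values here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseRecord:
    """Illustrative container for the information a test case holds."""
    case_id: str
    requirement: str                    # requirement being verified
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_outcome: str = ""

tc = TestCaseRecord(
    case_id="TC-001",
    requirement="REQ-LOGIN-01",
    preconditions=["user account exists"],
    steps=["open login page", "submit credentials"],
    inputs={"username": "alice", "password": "secret"},
    expected_outcome="dashboard is displayed",
)
assert tc.case_id == "TC-001"
```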

    Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code.

    Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.

    Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

    Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
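
    A test-first sketch (the leap-year function is a hypothetical example): the unit test is written before the production code it exercises, and the code is then written to make the test pass:

```python
# Step 1: the test is written first and initially fails,
# because is_leap does not yet exist.
def test_is_leap():
    assert is_leap(2000) is True    # divisible by 400
    assert is_leap(1900) is False   # divisible by 100 but not 400
    assert is_leap(2024) is True    # divisible by 4
    assert is_leap(2023) is False   # not divisible by 4

# Step 2: only now is the production code written to pass the test.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_is_leap()  # the test now passes
```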

    Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.

    Test Procedure: A document providing detailed instructions for the execution of one or more test cases.

    Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

    Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests.

    Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

    Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.

    Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

    Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
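
    A top-down sketch (the report modules are hypothetical): the top-level component is tested first with a lower-level component simulated by a stub, and the real component replaces the stub once it exists:

```python
# Lower-level component not yet written, so a stub simulates it:
def formatter_stub(rows):
    return "<stubbed output>"

# Top-level component under test:
def generate_report(rows, formatter=formatter_stub):
    header = f"Report ({len(rows)} rows)\n"
    return header + formatter(rows)

# The top component can be tested while the real formatter is unwritten:
assert generate_report([1, 2, 3]).startswith("Report (3 rows)")

# Later, the real lower-level component replaces the stub and the
# already-tested top component is used to test it:
def real_formatter(rows):
    return "\n".join(str(r) for r in rows)

assert generate_report([1, 2], formatter=real_formatter).endswith("1\n2")
```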

    Total Quality Management: A company commitment to develop a process that achieves high-quality products and customer satisfaction.

    Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
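
    A traceability matrix can be sketched as a mapping from requirement IDs to the test cases that cover them (all IDs here are hypothetical); requirements with no linked test cases stand out immediately:

```python
# Hypothetical requirement-to-test-case traceability matrix:
matrix = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],                 # no test coverage yet
}

# Find requirements that no test case traces to:
uncovered = [req for req, cases in matrix.items() if not cases]
assert uncovered == ["REQ-003"]
```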

    Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.

    Test Plan: A formal or informal plan to be followed to assure the controlled testing of the product under test.

    Unit Testing: The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or its implemented structure matches the intended design structure.
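
    A minimal unit-testing sketch using Python's standard `unittest` module (the unit under test, `word_count`, is a hypothetical example of the smallest independently testable piece):

```python
import unittest

# Hypothetical unit under test:
def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("one two three"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```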

    Usability Testing: Testing the ease with which users can learn and use a product.

    Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end user would in their day-to-day activities.

    V-Diagram (model): A diagram that visualizes the order of testing activities and their corresponding phases of development.

    Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.

    Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

    Validation: The process of evaluating software to determine compliance with specified requirements.

    Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

    White-box Testing: Testing approaches that examine the program structure and derive test data from the program logic. Also known as clear-box, glass-box, or open-box testing. White box testing determines whether program-code structure and logic is faulty. The test is accurate only if the tester knows what the program is supposed to do; he or she can then see whether the program diverges from its intended goal. White box testing does not account for errors caused by omission, and all visible code must also be readable.

    Workflow Testing: Scripted end-to-end testing which duplicates specific workflows, which are expected to be utilized by the end-user.

    What is 'Software Quality Assurance'?
    Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

    What is the 'software life cycle'?
    The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, phase-out, and other aspects.