Types of Software Testing
Importance of Testing in SDLC & Various Kinds of Testing
Software Development Lifecycle
All software development can be characterized as a problem-solving loop in which four distinct stages are encountered:
Status quo: represents the current state of affairs.
Problem definition: identifies the specific problem to be solved.
Technical development: solves the problem through the application of some technology.
Solution integration: delivers the results (e.g., documents, programs, data, new business functions, new products) to those who requested the solution in the first place.
Waterfall Model
System/information engineering -> Analysis -> Design -> Code -> Test
The Prototyping Model
Listen to customer -> Build/revise mock-up -> Customer test-drives mock-up (the cycle then repeats)
The RAD Model
Teams #1, #2, and #3 work in parallel, each proceeding through: Business modeling -> Data modeling -> Process modeling -> Application generation -> Test and turnover
Boehm’s Spiral Model
V-Model
Each development phase on the left arm of the V produces an artifact that is verified by the corresponding test level on the right arm:
Requirements Specification (SRS) <-> System Test, Acceptance Test
System Design <-> System Integration Test (tested software)
Detailed Design (module designs) <-> Integration Test (integrated software)
Coding (code, user manual) <-> Unit Test (tested modules)
Importance of Software Testing in SDLC
It helps to verify that all the software requirements are implemented correctly.
It identifies defects and ensures they are addressed before the software is deployed. If a defect is found after deployment and must be fixed then, the cost of correction is much higher than the cost of fixing it at an earlier stage of development.
Effective testing demonstrates that software functions appear to be working according to specification and that behavioral and performance requirements appear to have been met.
Whenever a system is developed as separate components, testing helps to verify the proper integration and interaction of each component with the rest of the system.
Data collected as testing is conducted provides a good indication of software reliability and some indication of software quality as a whole.
Different Types of Testing
Dynamic v/s static testing
Development v/s independent testing
Black v/s white box testing
Behavioral v/s structural testing
Automated v/s manual testing
Sanity, acceptance and smoke testing
Regression testing
Exploratory and monkey testing
Debugging v/s bebugging
Dynamic v/s static
Static Testing: testing something that is not running, i.e. examining and reviewing it.
Dynamic Testing: what you would normally think of as testing, i.e. running and using the software.
Development v/s independent testing
Development testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake. This is in contrast to independent testing. In most cases, test execution initially occurs with the developer testing group that designed and implemented the test, but it is good practice for the developers to create their tests in such a way as to make them available to independent testing groups for execution.
Independent testing denotes the test design and implementation most appropriately performed by someone who is independent of the team of developers. You can consider this distinction a superset, which includes Independent Verification & Validation. In most cases, test execution initially occurs with the independent testing group that designed and implemented the test, but the independent testers should create their tests to make them available to the developer testing groups for execution.
Black v/s white box testing
The purpose of a black-box test is to verify the unit's specified function and observable behavior without knowledge of how the unit implements the function and behavior. Black-box tests focus on and rely upon the unit's input and output.
A white-box test approach should be taken to verify a unit's internal structure. Theoretically, you should test every possible path through the code, but that is possible only in very simple units. At the very least you should exercise every decision-to-decision path (DD-path) at least once, because you are then executing all statements at least once. A decision is typically an if-statement, and a DD-path is a path between two decisions.
Behavioral v/s structural testing
Behavioral Testing: another name commonly given to black-box testing, since you are testing the behavior of the software as it is used, without knowing the internal logic or how it is implemented.
Structural Testing: another name commonly used for white-box testing, in which you can see and use the underlying structure of the code to design and run your tests.
Automated v/s manual Automated Testing:  Software testing assisted with software tools that require no operator input, analysis, or evaluation. Manual Testing : That part of software testing that requires human input, analysis, or evaluation.
Sanity, Acceptance and Smoke testing
Sanity Testing: cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
Acceptance Testing: the final test action before deploying the software. The goal of acceptance testing is to verify that the software is ready and can be used by your end users to perform those functions and tasks for which the software was built.
Smoke Testing: non-exhaustive software testing, ascertaining that the most crucial functions of a program work, but not bothering with finer details.
Regression testing
The selective retesting of a modified software system to ensure that any bugs have been fixed, that no previously working functions have failed as a result of the modifications, and that newly added features have not created problems with previous versions of the software. Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code that may have inadvertently introduced errors. It is a quality-control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
Exploratory and monkey testing Exploratory testing involves simultaneously learning, planning, running tests, and reporting / troubleshooting results. Monkey testing- This is another name for "Ad Hoc Testing"; it comes from the joke that if you put 100 monkeys in a room with 100 typewriters, randomly punching keys, sooner or later they will type out a Shakespearean sonnet. So every time one of your ad hoc testers finds a new bug, you can toss him a banana. The use of monkey testing is to simulate how your customers will use your software in real time.
Debugging v/s bebugging Debugging :  The process of finding and removing the causes of failures in software. The role is performed by a programmer.  Bebugging:  The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program
Black Box & White Box Testing Techniques
Black-Box Testing Program viewed as a Black-box, which accepts some inputs and produces some outputs Test cases are derived solely from the specifications, without knowledge of the internal structure of the program.
Functional Test-Case Design Techniques Equivalence class partitioning Boundary value analysis Cause-effect graphing Error guessing
Equivalence Class Partitioning
Partition the program input domain into equivalence classes (classes of data which, according to the specifications, are treated identically by the program). The basis of this technique is that a test of a representative value of each class is equivalent to a test of any other value of the same class.
Identify valid as well as invalid equivalence classes. For each equivalence class, generate a test case to exercise an input representative of that class.
Example
Input condition: 0 <= x <= max
Valid equivalence class: 0 <= x <= max
Invalid equivalence classes: x < 0, x > max
=> 3 test cases
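The three test cases above can be sketched in Python. The class names and the `classify` helper are illustrative assumptions, not part of any library:

```python
def classify(x, max_value):
    """Return the equivalence class that input x falls into
    for the condition 0 <= x <= max_value."""
    if x < 0:
        return "invalid_below"   # invalid class: x < 0
    if x > max_value:
        return "invalid_above"   # invalid class: x > max
    return "valid"               # valid class: 0 <= x <= max

# One representative value per class gives the three test cases
# (values here are arbitrary picks from each class):
representatives = {"valid": 50, "invalid_below": -5, "invalid_above": 150}
```

By the partitioning assumption, any other value from the same class (say x = 7 instead of 50) is an equivalent test.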
Guidelines for Identifying Equivalence Classes

Input condition | Valid eq. classes | Invalid eq. classes
Range of values (e.g., 1-200) | one valid (a value within the range) | two invalid (one outside each end of the range)
Number N of valid values | one valid | two invalid (none; more than N)
Set of input values, each handled differently by the program (e.g., A, B, C) | one valid class for each value in the set | one (e.g., any value not in the valid input set)
Guidelines for Identifying Equivalence Classes (contd.)

Input condition | Valid eq. classes | Invalid eq. classes
"Must be" condition (e.g., an ID name must begin with a letter) | one (it is a letter) | one (it is not a letter)

If you know that elements in an equivalence class are not handled identically by the program, split the equivalence class into smaller equivalence classes.
Identifying Test Cases for Equivalence Classes
Assign a unique number to each equivalence class.
Until all valid equivalence classes have been covered by test cases, write a new test case covering as many of the uncovered valid equivalence classes as possible.
Cover each invalid equivalence class with a separate test case.
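The selection procedure for valid classes can be sketched as a greedy covering loop. The test-case names and the classes they cover below are hypothetical:

```python
def select_valid_cases(test_cases, valid_classes):
    """Greedily pick test cases (in the given order) until every
    valid equivalence class is covered."""
    uncovered = set(valid_classes)
    selected = []
    for case, covered in test_cases.items():
        if uncovered & covered:      # case covers something still uncovered
            selected.append(case)
            uncovered -= covered
        if not uncovered:
            break
    return selected

# Hypothetical classes and candidate test cases:
valid = {"in_range", "valid_count", "valid_symbol"}
cases = {"tc1": {"in_range", "valid_count"}, "tc2": {"valid_symbol"}}
```

Each invalid class would still get its own dedicated test case, so that one rejected input does not mask the handling of another.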
Boundary Value Analysis
Design test cases that exercise values that lie at the boundaries of an equivalence class, and values just beyond the ends.
Example: input condition 0 <= x <= max
Test for values: 0, max (valid inputs); -1, max+1 (invalid inputs)
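A minimal sketch of the boundary selection above, with the helper name and a concrete max = 100 assumed for illustration:

```python
def boundary_values(low, high):
    """Boundary value analysis for the condition low <= x <= high:
    values at each boundary, plus one step beyond each end."""
    valid = [low, high]             # on-boundary values
    invalid = [low - 1, high + 1]   # just beyond each end
    return valid, invalid

# For 0 <= x <= 100 this yields valid tests 0 and 100
# and invalid tests -1 and 101.
```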
Cause Effect Graphing A technique that aids in selecting test cases for combinations of input conditions in a systematic way.
Cause Effect Graphing Technique
1. Identify the causes (input conditions) and effects (output conditions) of the program under test.
2. For each effect, identify the causes that can produce that effect. Draw a cause-effect graph.
3. Generate a test case for each combination of input conditions that makes some effect true.
Example
Consider a program with the following input and output conditions:
Input conditions:
c1: command is credit
c2: command is debit
c3: A/C is valid
c4: transaction amount is valid
Output conditions (effects):
e1: print invalid command
e2: print invalid A/C
e3: print debit amount not valid
e4: debit A/C
e5: credit A/C
Example: Cause-Effect Graph
(figure: the causes C1-C4 are connected to the effects E1-E5 through and, or, and not nodes)
Example
Decision table showing the combinations of input conditions that make each effect true (summarized from the cause-effect graph). Write test cases to exercise each rule in the decision table.

Rule:   R1  R2  R3  R4  R5
c1:     0   1   -   -   1
c2:     0   -   1   1   -
c3:     -   0   1   1   1
c4:     -   -   0   1   1
Effect: E1  E2  E3  E4  E5
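The decision table can be sketched as an executable rule set in Python. The encoding is an assumption for illustration; in particular, c4 is taken here to mean "transaction amount is valid", which is how the table reads most consistently:

```python
# Each rule maps a (c1, c2, c3, c4) pattern to an effect;
# None encodes a "don't care" entry. One rule per table column.
RULES = [
    # (credit, debit, A/C valid, amount valid) -> effect
    ((False, False, None,  None),  "E1: print invalid command"),
    ((True,  None,  False, None),  "E2: print invalid A/C"),
    ((None,  True,  True,  False), "E3: print debit amount not valid"),
    ((None,  True,  True,  True),  "E4: debit A/C"),
    ((True,  None,  True,  True),  "E5: credit A/C"),
]

def evaluate(c1, c2, c3, c4):
    """Return the first effect whose rule matches the input conditions."""
    inputs = (c1, c2, c3, c4)
    for pattern, effect in RULES:
        if all(p is None or p == v for p, v in zip(pattern, inputs)):
            return effect
    return None
```

One test case per rule then exercises every effect, which is exactly the "one test case per decision-table rule" strategy the slide describes.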
Error Guessing From intuition and experience, enumerate a list of possible errors or error prone situations and then write test cases to expose those errors.
White Box Testing
White-box testing is concerned with the degree to which test cases exercise or cover the logic (source code) of the program.
White-box test-case design techniques:
Statement coverage
Decision coverage
Condition coverage
Decision-condition coverage
Multiple condition coverage
Basis path testing
Loop testing
Data flow testing
White Box Test-Case Design
Statement coverage: write enough test cases to execute every statement at least once.
TER (Test Effectiveness Ratio): TER1 = statements exercised / total statements
Example void function eval (int A, int B, int X ) { if  ( A > 1)  and ( B = 0 ) then X = X / A; if ( A = 2 ) or ( X > 1) then  X = X + 1; } Statement coverage test cases: 1)  A = 2,  B = 0, X = 3  ( X can be assigned any value)
White Box Test-Case Design
Decision coverage: write test cases to exercise the true and false outcomes of every decision.
TER2 = branches exercised / total branches
Condition coverage: write test cases such that each condition in a decision takes on all possible outcomes at least once; this may not always satisfy decision coverage.
Example
void eval(int A, int B, int X) {
    if (A > 1 && B == 0)      /* true branch a, false branch b */
        X = X / A;            /* node c */
    if (A == 2 || X > 1)      /* true branch e, false branch d */
        X = X + 1;
}
Decision coverage test cases:
1) A = 3, B = 0, X = 3 (path acd)
2) A = 2, B = 1, X = 1 (path abe)
Example
Condition coverage test cases must cover the condition outcomes:
A>1, A<=1, B=0, B!=0
A=2, A!=2, X>1, X<=1
Test cases:
1) A = 1, B = 0, X = 3 (path abe)
2) A = 2, B = 1, X = 1 (path abe)
These two cases cover all eight condition outcomes but do not satisfy decision coverage: the first decision never evaluates true and the second never evaluates false.
White Box Test-Case Design
Decision-condition coverage: write test cases such that each condition in a decision takes on all possible outcomes at least once and each decision takes on all possible outcomes at least once.
Multiple condition coverage: write test cases to exercise all possible combinations of true and false outcomes of conditions within a decision.
Example
Decision-condition coverage test cases must cover the condition outcomes:
A>1, A<=1, B=0, B!=0
A=2, A!=2, X>1, X<=1
and also both outcomes of each decision:
(A > 1 and B = 0): T, F
(A = 2 or X > 1): T, F
Test cases:
1) A = 2, B = 0, X = 4 (path ace)
2) A = 1, B = 1, X = 1 (path abd)
Example
Multiple condition coverage must cover the condition combinations:
1) A>1, B=0     5) A=2, X>1
2) A>1, B!=0    6) A=2, X<=1
3) A<=1, B=0    7) A!=2, X>1
4) A<=1, B!=0   8) A!=2, X<=1
Test cases:
1) A = 2, B = 0, X = 4 (covers 1, 5)
2) A = 2, B = 1, X = 1 (covers 2, 6)
3) A = 1, B = 0, X = 2 (covers 3, 7)
4) A = 1, B = 1, X = 1 (covers 4, 8)
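A runnable Python sketch of the eval example, instrumented to record which of the eight condition outcomes each test case exercises. The `seen` set and the eager recording of outcomes before each decision are additions for illustration:

```python
def eval_fn(a, b, x, seen):
    """Mirror of the slide's eval routine; 'seen' collects the
    condition outcomes exercised by this call."""
    seen.add("A>1" if a > 1 else "A<=1")
    seen.add("B=0" if b == 0 else "B!=0")
    if a > 1 and b == 0:
        x = x / a
    # X is observed at the point of the second decision,
    # i.e. after the possible division above.
    seen.add("A=2" if a == 2 else "A!=2")
    seen.add("X>1" if x > 1 else "X<=1")
    if a == 2 or x > 1:
        x = x + 1
    return x

seen = set()
for a, b, x in [(2, 0, 4), (2, 1, 1), (1, 0, 2), (1, 1, 1)]:
    eval_fn(a, b, x, seen)
# After the four test cases, all eight condition outcomes are covered.
```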
Basis Path Testing
1. Draw the control flow graph of the program from the detailed design or code.
2. Compute the cyclomatic complexity V(G) of the flow graph using any of the formulas:
V(G) = #edges - #nodes + 2
V(G) = #regions in the flow graph
V(G) = #predicates + 1
Example
(flow graph with 13 nodes, 17 edges, 5 predicate nodes, and 6 regions R1-R6)
V(G) = 6 regions
V(G) = #edges - #nodes + 2 = 17 - 13 + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
So there are 6 linearly independent paths.
Basis Path Testing (contd.)
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
The value of cyclomatic complexity provides an upper bound on the number of tests that must be designed to guarantee coverage of all program statements.
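The V(G) formulas can be checked with a few lines of Python, using the example's counts (17 edges, 13 nodes, 5 predicate nodes); the function names are illustrative:

```python
def cyclomatic_complexity(num_edges, num_nodes):
    """V(G) = E - N + 2 for a connected control flow graph."""
    return num_edges - num_nodes + 2

def complexity_from_predicates(num_predicates):
    """V(G) = P + 1, where P is the number of binary predicate nodes."""
    return num_predicates + 1

# Both formulas agree for the example graph: V(G) = 6,
# so a basis set needs 6 linearly independent paths.
```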
Loop Testing
Aims to expose bugs in loops.
Fundamental loop test criteria:
1) Bypass the loop altogether.
2) One pass through the loop.
3) Two passes through the loop before exiting.
4) A typical number of passes through the loop, unless covered by some other test.
Loop Testing
Nested loops:
1) Set all but one loop to a typical value and run through the single-loop cases for that loop. Repeat for all loops.
2) Do minimum values for all loops simultaneously.
3) Set all loops but one to the minimum value and repeat the test cases for that loop. Repeat for all loops.
4) Do maximum looping values for all loops simultaneously.
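The fundamental single-loop criteria can be sketched as a small generator of iteration counts to exercise; the "typical" count is a parameter the tester supplies, and the helper name is an assumption:

```python
def loop_test_counts(typical):
    """Iteration counts to test for one loop: 0 passes (bypass),
    1 pass, 2 passes, plus a typical count if not already listed."""
    counts = [0, 1, 2]
    if typical not in counts:
        counts.append(typical)
    return counts

# loop_test_counts(10) -> [0, 1, 2, 10]
```

For nested loops, these counts would be applied to one loop at a time while the others are held at typical values, as step 1 above describes.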
Data Flow Testing
Select test paths of a program based on the definition-use (DU) chains of variables in the program. Write test cases so that every DU chain is covered at least once.
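A sketch of DU-pair extraction for a tiny, hypothetical straight-line program. The statement encoding and the simplified "a redefinition kills the chain" rule are illustrative assumptions, not a full data-flow analysis:

```python
# statement number -> (variables defined, variables used)
program = {
    1: ({"x"}, set()),        # x = input()
    2: ({"y"}, {"x"}),        # y = x * 2
    3: ({"x"}, {"x"}),        # x = x + 1   (uses x, then redefines it)
    4: (set(), {"x", "y"}),   # print(x, y)
}

def du_pairs(program):
    """Pair each definition with the later uses it reaches,
    stopping a chain at the variable's next redefinition."""
    pairs = []
    for d, (defs, _) in program.items():
        for var in defs:
            for u, (defs_u, uses) in program.items():
                if u > d and var in uses:
                    pairs.append((var, d, u))
                if u > d and var in defs_u:
                    break  # redefinition kills this chain
    return pairs
```

Each resulting (variable, def, use) triple is one DU chain that a test path would need to exercise.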
Thank You… Any Questions ????


Editor's Notes

  • #4 Waterfall Model: Sometimes called the linear sequential or classic life cycle model, the waterfall model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. Figure 1 illustrates the waterfall model for software engineering. The waterfall model encompasses the following activities: System/information engineering and modeling. Because software is always part of a larger system (or business), work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. System engineering and analysis encompass requirements gathering at the system level with a small amount of top-level design and analysis. Information engineering encompasses requirements gathering at the strategic level and at the business-area level. Software requirements analysis. The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the software engineer (analyst) must understand the information domain for the software, as well as required function, behavior, performance and interface. Design. Software design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representation, and procedural (algorithmic) detail. The design process translates requirements into a representation of the software that can be assessed for quality before coding begins. Code generation. The design must be translated into a machine-readable form. The code generation step performs this task. If design is performed in a detailed manner, code generation can be accomplished mechanically. Testing. Once code has been generated, program testing begins.
The testing process focuses on the logic internals of the software, ensuring that all statement have been tested, and on the functional externals; that is, conducting test to uncover errors and ensure that defined input will produced actual results that agree with required results. Support. Software will undoubtedly undergo change after it is delivered to the customer (a person exception is embedded software). Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in external environment (changes in Operating system, peripheral devices etc), or because customer requirement, functional or performance enhancement.
  • #5 The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A “quick design” then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.
  • #6 Rapid application development (RAD) is an incremental software development process model that emphasizes an extremely short development cycle. The RAD model is a “high-speed” adaptation of the linear sequential model in which rapid development is achieved by using component-based construction. If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a “fully functional system” within very short time periods. Used primarily for information systems applications, the RAD approach encompasses the following phases: Business modeling. The information flow among business functions is modeled in a way that answers the following questions: What information drives the business process? What information is generated? Who generates it? Where does the information go? Who processes it? Data modeling. The information flow defined as part of the business-modeling phase is refined into a set of data objects that are needed to support the business. The characteristics of each object are identified, and the relationships between these objects are defined. Process modeling. The data objects defined in the data-modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object. Application generation. RAD assumes the use of fourth-generation techniques. Rather than creating software using conventional third-generation programming languages, the RAD process works to reuse existing program components or create reusable components. Testing and turnover. Since the RAD process emphasizes reuse, many of the program components have already been tested. This reduces overall testing time.
  • #11 The analogy for this is inspecting a car without running it.
  • #12 Boris Beizer gives the following explanation of the different objectives that independent testing has over developer testing: “The purpose of independent testing is to provide a different perspective and, therefore, different tests; furthermore, to conduct those tests in a richer [...] environment than is possible for the developer.” [BEI95]
  • #14 The four areas that structural or white-box testing encompasses are: directly testing low-level functions, procedures, subroutines, or libraries (in MS Windows these are called APIs); testing the software at the top level, as a completed program, but adjusting your test cases based on what you know about the software’s operation; gaining access to read variables and state information from the software to help you determine whether your tests are doing what you thought, and being able to force the software to do things that would be difficult if you tested it normally; and measuring how much of the code, and specifically what code, you hit when you run your tests, and then adjusting your tests to remove redundant test cases and add missing ones.
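  The first and third of these areas can be sketched as follows (the `Stack` class is a hypothetical example, not code from the slides): a white-box test calls a low-level method directly and reads internal state to verify behavior, rather than observing the program only from the outside.

```python
# Hypothetical Stack class used to illustrate white-box testing:
# the test directly exercises a low-level method and inspects the
# internal `_items` list to confirm the implementation's state.

class Stack:
    def __init__(self):
        self._items = []           # internal state a white-box test may inspect

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:        # edge case a white-box test can force
            raise IndexError("pop from empty stack")
        return self._items.pop()

s = Stack()
s.push(1)
s.push(2)
assert s._items == [1, 2]  # white-box: read internal state directly
assert s.pop() == 2
assert s._items == [1]     # confirm state after the pop
```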