Chapter 5: Retrospective on Functional Testing

322 235 Software Testing (การทดสอบซอฟต์แวร์ "Software Testing")
By Wararat Songpan (Rungworawut), Ph.D.
Department of Computer Science, Faculty of Science,
Khon Kaen University, Thailand
Functional Testing (Black Box)

• We have seen three main types of functional testing: BVT, EC, and DT.
• Boundary Value Testing (BVT):
  o Boundary Value Analysis (BVA) – 5 extreme values – single-fault assumption
  o Robustness Testing (RT) – 7 extreme values – single-fault assumption
  o Worst-case Testing (WT) – 5 extreme values – multiple-fault assumption
  o Robust Worst-case Testing (RWT) – 7 extreme values – multiple-fault assumption
• Equivalence Class Testing (EC):
  o Weak Normal (WN) – valid ECs (each class tried once, no repeats) → single-fault assumption
  o Strong Normal (SN) – valid ECs (every combination) → multiple-fault assumption
  o Weak Robust (WR) – valid & invalid ECs → single-fault assumption
  o Strong Robust (SR) – valid & invalid ECs → multiple-fault assumption
• Decision Table-based Testing (DT):
  o It identifies test cases that do not make sense (impossible test cases)
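For n input variables, each constrained to a min..max range, the sizes of the test sets behind these variants follow the standard counting formulas (BVA: 4n+1, RT: 6n+1, WT: 5^n, RWT: 7^n). A quick sketch (note that a BVA listing that repeats the shared nominal case per variable, as on the slide that follows, shows 5n rows rather than the 4n+1 distinct cases):

```python
# Test-case counts for the boundary-value variants, assuming n input
# variables each with a min..max range (standard counting formulas).

def bva(n):
    # 5 values per variable, others held at nominal; the all-nominal
    # case is shared across variables -> 4n + 1 distinct cases.
    return 4 * n + 1

def robustness(n):
    # Adds min- and max+ (7 values per variable) -> 6n + 1 distinct cases.
    return 6 * n + 1

def worst_case(n):
    # Multiple-fault assumption: all combinations of the 5 values.
    return 5 ** n

def robust_worst_case(n):
    # All combinations of the 7 values (valid and invalid extremes).
    return 7 ** n

for n in (1, 2, 3):
    print(n, bva(n), robustness(n), worst_case(n), robust_worst_case(n))
```

For the triangle program (n = 3) this gives 13 distinct BVA cases and 125 worst-case tests, matching the counts discussed later in this chapter.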
Test Effort (1)

• Look at questions related to testing effort, testing efficiency, and testing effectiveness across the three techniques.

[Figure: Number of test cases (high → low) versus sophistication: boundary value → equivalence class → decision table]
Test Effort (2)

[Figure: Effort to identify test cases (low → high) versus sophistication: boundary value → equivalence class → decision table]
Test Identification Effort vs. Test Execution Effort (BVT)

[Figures: number of test cases and effort to identify test cases, each plotted against sophistication: boundary value → equivalence class → decision table]

BVA: test-case generation is mechanical, and the technique is therefore easy to automate.
Test Identification Effort vs. Test Execution Effort (EC)

[Figures: number of test cases and effort to identify test cases, each plotted against sophistication: boundary value → equivalence class → decision table]

EC: The equivalence class techniques take into account data dependencies and the program logic; however, more thought and care are required to define the equivalence relation, partition the domain, and identify the test cases.
Test Identification Effort vs. Test Execution Effort (DT)

[Figures: number of test cases and effort to identify test cases, each plotted against sophistication: boundary value → equivalence class → decision table]

DT: The decision table technique is the most sophisticated, because it requires that we consider both data and logical dependencies.
Functional Test Cases: BVA (15 TCs)

Test case ID    a     b     c     Expected Results
1               100   100   1     Isosceles
2               100   100   2     Isosceles
3               100   100   100   Equilateral
4               100   100   199   Isosceles
5               100   100   200   Not a Triangle
6               100   1     100   Isosceles
7               100   2     100   Isosceles
8               100   100   100   Equilateral
9               100   199   100   Isosceles
10              100   200   100   Not a Triangle
11              1     100   100   Isosceles
12              2     100   100   Isosceles
13              100   100   100   Equilateral
14              199   100   100   Isosceles
15              200   100   100   Not a Triangle
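The 15 BVA inputs above are mechanical to produce: hold two sides at the nominal value and sweep the third through {min, min+, nom, max−, max}. A minimal sketch, assuming the triangle program's side range 1..200 with nominal 100 (as the table's values imply):

```python
# Generate the BVA test inputs for the triangle program: each side in
# turn takes the five extreme values while the other two sides stay at
# the nominal value.
MIN, MAX, NOM = 1, 200, 100
extremes = [MIN, MIN + 1, NOM, MAX - 1, MAX]  # min, min+, nom, max-, max

def bva_cases():
    cases = []
    for i in (2, 1, 0):  # vary c first, then b, then a (table order)
        for v in extremes:
            sides = [NOM, NOM, NOM]
            sides[i] = v
            cases.append(tuple(sides))
    return cases

print(len(bva_cases()))  # 15 inputs (the all-nominal triple appears three times)
```

Deduplicating the repeated nominal triple leaves the 4n + 1 = 13 distinct cases that BVA prescribes for three variables.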
Functional Test Cases: WN.EC (11 TCs)

Test case ID    a    b    c    Expected Results
1               5    5    5    Equilateral
2               5    5    3    Isosceles
3               5    3    5    Isosceles
4               3    5    5    Isosceles
5               3    4    5    Scalene
6               8    3    4    Not a Triangle
7               7    3    4    Not a Triangle
8               3    8    4    Not a Triangle
9               3    7    4    Not a Triangle
10              3    4    8    Not a Triangle
11              3    4    7    Not a Triangle
Functional Test Cases: DT (8 TCs)

Test case ID    a    b    c    Expected Results
DT1             4    1    2    Not a Triangle
DT2             1    4    2    Not a Triangle
DT3             1    2    4    Not a Triangle
DT4             5    5    5    Equilateral
DT5             ?    ?    ?    Impossible
DT6             ?    ?    ?    Impossible
DT7             2    2    3    Isosceles
DT8             ?    ?    ?    Impossible
DT9             2    3    2    Isosceles
DT10            3    2    2    Isosceles
DT11            3    4    5    Scalene

(Rules DT5, DT6, and DT8 are impossible, so the 11 decision-table rules yield 8 executable test cases.)
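The "Expected Results" column in the tables above follows the usual triangle-program specification. A sketch of that oracle, assuming the side range 1..200 used throughout this chapter:

```python
def triangle_type(a, b, c):
    """Classify three sides per the usual triangle-program specification."""
    if not all(1 <= s <= 200 for s in (a, b, c)):
        return "Out of range"
    # Triangle inequality: each side must be shorter than the sum of the
    # other two, otherwise the sides do not form a triangle.
    if a + b <= c or a + c <= b or b + c <= a:
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"

print(triangle_type(2, 2, 3))  # Isosceles (DT7)
print(triangle_type(4, 1, 2))  # Not a Triangle (DT1)
```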
The Number of Test Cases

[Figure: test cases per method for the Triangle program; methods ordered RWT, WT, RT, BVA, SR.EC, SN.EC, WR.EC, WN.EC, DT; counts range from 0 to 350]
The Number of Test Cases

[Figure: test cases per method for the NextDate problem; methods ordered RWT, WT, RT, BVA, SR.EC, SN.EC, WR.EC, WN.EC, DT; counts range from 0 to 350]
The Number of Test Cases

[Figure: test cases per method for the Commission problem; methods ordered RWT, WT, RT, BVA, SR.EC, SN.EC, WR.EC, WN.EC, DT; counts range from 0 to 350]
Test Case Trend Line: All Programs

[Figure: test cases per method across all three problems; methods ordered RWT, WT, RT, BVA, SR.EC, SN.EC, WR.EC, WN.EC, DT; counts range from 0 to 350]
Testing Efficiency (1)

• The data above reveal the fundamental limitation of functional testing: the twin possibilities of
  o gaps of untested functionality
  o redundant tests
• For example:
  o The decision table technique for the NextDate program generated 22 test cases (fairly complete).
  o Worst-case boundary analysis generated 125 cases, which are highly redundant (January 1 is checked for five different years; there are only a few February cases, none of them on February 28 or February 29; and leap years receive no significant testing).
  o Strong equivalence class testing generated 36 test cases, 11 of which are impossible.
• There are gaps and redundancy in functional test cases, and these are reduced by using more sophisticated techniques (i.e., decision tables).
Testing Efficiency (2)

• The question is how we can quantify what we mean by the term "testing efficiency".
• The intuitive notion of an efficient testing technique is one that produces a set of test cases that is "just right", that is, with no gaps and no redundancy.
• We can even develop ratios of useful test cases (i.e., not redundant and leaving no gaps) with respect to the total number of test cases generated by method A and method B.
• One way to identify redundancy is to annotate test cases with a brief purpose statement; test cases with the same purpose provide a rough measure of redundancy.
• Detecting gaps is harder: it can be done only by comparing two different methods, and even then there is no guarantee that all gaps are detected.
• Overall, the structural methods (which we will see later in the course) support interesting and useful metrics that can provide a better quantification of testing efficiency.
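The purpose-annotation idea above can be sketched directly: group test cases by their purpose string and count the extras. The purpose labels here are illustrative, not from the slides:

```python
from collections import Counter

# Hypothetical purpose annotations for a handful of test cases; any two
# cases sharing a purpose are counted as redundant with each other.
purposes = [
    "lower boundary of c",
    "lower boundary of c",   # same purpose as above -> redundant
    "equilateral nominal",
    "equilateral nominal",
    "equilateral nominal",
    "upper boundary of a",
]

counts = Counter(purposes)
useful = len(counts)                    # distinct purposes actually covered
redundant = len(purposes) - useful      # extra cases repeating a purpose
redundancy_ratio = redundant / len(purposes)
print(useful, redundant, round(redundancy_ratio, 2))  # 3 3 0.5
```

A ratio like this only measures redundancy; gaps (purposes never covered at all) still require comparing against a second method's test set.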
Guidelines

• The kinds of faults expected may provide pointers as to which testing method to use.
• If we do not know the kinds of faults that are likely to occur in the program, the attributes most helpful in choosing a functional testing method are:
  o whether the variables represent physical or logical quantities
  o whether or not there are dependencies among the variables
  o whether single or multiple faults are assumed
  o whether exception handling is prominent
Guidelines for Functional Testing Technique Selection

• The following selection guidelines can be considered:
  1. If the variables refer to physical quantities and/or are independent, domain testing (BVT) and EC testing can be considered.
  2. If the variables are dependent, EC and decision table testing can be considered.
  3. If the single-fault assumption is plausible, boundary value analysis and robustness testing can be considered.
  4. If the multiple-fault assumption is plausible, worst-case testing, robust worst-case testing, and decision table testing can be considered.
  5. If the program contains significant exception handling, robustness testing and decision table testing can be considered.
  6. If the variables refer to logical quantities, equivalence class testing and decision table testing can be considered.
Bug Life Cycle

[Figure: bug life cycle diagram]
BTT = Bug Tracking Tool / documents used for tracking bugs
Test Summary Report

[Figures: sample Test Summary Report documents, shown across three slides]
Basics of Testing Effectiveness Metrics

• Test Coverage
• Productivity
• Defect Detection Effectiveness
• Defect Acceptance Ratio
• Estimation Accuracy
Test Coverage

Definition: Percentage of requirements that are covered in test execution
Calculation: (Number of Requirements Covered in Test Execution / Number of Requirements Specified) × 100
Unit: Percentage (%)
Productivity

Definition: The number of test cases developed or executed per person-hour of effort
Calculation: Number of Test Cases Developed (or Executed) / Effort for Test Case Development (or Execution)
Unit: Test cases / hour
Defect Detection Effectiveness

Definition: Percentage of the total number of defects reported for the application that are reported during the testing stage
Calculation: (Number of Defects Reported during Testing and Accepted) / (Number of Defects Reported during Testing and Accepted + Number of Defects Reported after the Testing Stage and Accepted) × 100
Unit: Percentage (%)
Defect Acceptance Ratio

Definition: Percentage of defects reported that are accepted as valid
Calculation: (Number of Defects Accepted as Valid / Number of Defects Reported) × 100
Unit: Percentage (%)
Estimation Accuracy

Definition: Ratio of estimated effort to the actual effort for testing
Calculation: (Estimated Effort / Actual Effort) × 100
Unit: Percentage (%)