
Notes from the Rational Unified Process
Table of Contents
1. Iterative Development
2. Requirements Management
3. Component Architecture
4. Change Request Management
5. Quality Management
6. Software Testing
1. Iterative Development
What is Iterative Development?
A project using iterative development has a lifecycle consisting of several iterations. An iteration
incorporates a loosely sequential set of activities in business modeling, requirements, analysis and
design, implementation, test, and deployment, in various proportions depending on where in the
development cycle the iteration is located. Iterations in the inception and elaboration phases focus
on management, requirements, and design activities; iterations in the construction phase focus on
design, implementation, and test; and iterations in the transition phase focus on test and
deployment. Iterations should be managed in a timeboxed fashion, that is, the schedule for an
iteration should be regarded as fixed, and the scope of the iteration's content actively managed to
meet that schedule.
Why Develop Iteratively?
An initial design is likely to be flawed with respect to its key requirements. Late discovery of design
defects results in costly overruns and, in some cases, even project cancellation.
All projects have a set of risks involved. The earlier in the lifecycle you can verify that you've
avoided a risk, the more accurate you can make your plans. Many risks are not even discovered
until you've attempted to integrate the system. You will never be able to predict all risks regardless
of how experienced the development team is.
In a waterfall lifecycle, you can't verify whether you have stayed clear of a risk until late in the lifecycle.
In an iterative lifecycle, you select what increment to develop in an
iteration based on a list of key risks. Since the iteration produces a
tested executable, you can verify whether you have mitigated the
targeted risks or not.
Benefits of an Iterative Approach

An iterative approach is generally superior to a linear or waterfall approach for many different reasons:

• Risks are mitigated earlier, because elements are integrated progressively.
• Changing requirements and tactics are accommodated.
• Improving and refining the product is facilitated, resulting in a more robust product.
• Organizations can learn from this approach and improve their process.
• Reusability is increased.

A customer once said: "With the waterfall approach, everything looks fine until near the end of the project, sometimes up until the middle of integration. Then everything falls apart. With the iterative approach, it is very difficult to hide the truth for very long."
Project managers often resist the iterative approach, seeing it as endless hacking. In the Rational
Unified Process, the iterative approach is very controlled; iterations are planned in number,
duration, and objective. The tasks and responsibilities of the participants are defined. Objective
measures of progress are captured. Some rework does take place from one iteration to the next, but
this, too, is carefully controlled.
Mitigating risks

An iterative approach lets you mitigate risks earlier, because many risks are only addressed and discovered during integration. As you unroll the early iterations, you go through all disciplines, exercising many aspects of the project: tools, off-the-shelf software, people skills, and so on.
Perceived risks may prove not to be risks, and new, unsuspected risks will show up.
Integration is not one "big bang" at the end; elements are incorporated progressively. In reality, the iterative approach is an almost continuous integration. What used to be a long, uncertain, and difficult time, taking up to 40% of the total effort at the end of a project, and what was hard to plan accurately, is divided into six to nine smaller integrations that start with far fewer elements to integrate.
Accommodating changes

The iterative approach lets you take changing requirements into account, as they will normally change along the way.
Changes in requirements and requirements "creep" have always been primary sources of trouble for
a project, leading to late delivery, missed schedules, unsatisfied customers, and frustrated
developers. Twenty-five years ago, Fred Brooks wrote: "Plan to throw one away; you will, anyhow."
Users will change their mind along the way. This is human nature. Forcing users to accept the
system as they originally imagined it is wrong. They change their minds because the context is
changing: they learn more about the environment and the technology, and they see intermediate demonstrations of the product as it's being developed.
An iterative lifecycle provides management with a way of making tactical changes to the product.
For example, to compete with existing products, you may decide to release a reduced-functionality
product earlier to counter a move by a competitor, or you may adopt another vendor for a given
technology.
Iteration also allows for technological changes along the way. If a technology changes or becomes a standard as new technologies appear, the project can take advantage of it. This is particularly the case for platform changes and lower-level infrastructure changes.
Reaching higher quality
An iterative approach results in a more robust architecture because errors are corrected over several
iterations. Early flaws are detected as the product matures during the early iterations. Performance
bottlenecks are discovered and can be reduced, as opposed to being discovered on the eve of
delivery.
Developing iteratively, as opposed to running tests once toward the end of the project, results in a
more thoroughly tested product. Critical functions have had many opportunities to be tested over
several iterations, and the tests themselves, and any test software, have had time to mature.
Learning and improving

Developers can learn along the way, and the various competencies and specialties are more fully
employed during the whole lifecycle.
Rather than waiting a long time just making plans and honing their skills, testers start testing early,
technical writing starts early, and so on. The need for additional training or external help can be
detected in the early iteration assessment reviews.
The process itself can be improved and refined as it develops. The assessment at the end of an
iteration not only looks at the status of the project from a product­schedule perspective, but also
analyzes what needs to be changed in the organization and the process to perform better in the next
iteration.
Increasing reuse

An iterative lifecycle facilitates reuse. It's easier to identify common parts as they are partially
designed or implemented, compared to having to identify all commonality up front.
Identifying and developing reusable parts is difficult. Design reviews in early iterations allow
software architects to identify unsuspected, potential reuse, and subsequent iterations allow them to
further develop and mature this common code.
Using an iterative approach makes it easier to take advantage of commercial off-the-shelf products.
You have several iterations to select them, integrate them, and validate that they fit with the
architecture.
2. Requirements Management
What is Requirements Management?
Requirements management is a systematic approach to finding, documenting, organizing and
tracking the changing requirements of a system.
We define a requirement as:
A condition or capability to which the system must conform.
Our formal definition of requirements management is that it is a systematic approach to:

• eliciting, organizing, and documenting the requirements of the system, and
• establishing and maintaining agreement between the customer and the project team on the changing requirements of the system.

Keys to effective requirements management include maintaining a clear statement of the
requirements, along with applicable attributes for each requirement type and traceability to other
requirements and other project artifacts.
Collecting requirements may sound like a rather straightforward task. In real projects, however, you will run into difficulties because:

• Requirements are not always obvious, and can come from many sources.
• Requirements are not always easy to express clearly in words.
• There are many different types of requirements at different levels of detail.
• The number of requirements can become unmanageable if not controlled.
• Requirements are related to one another and also to other deliverables of the software engineering process.
• Requirements have unique properties or property values. For example, they are neither equally important nor equally easy to meet.
• There are many interested parties, which means requirements need to be managed by cross-functional groups of people.
• Requirements change.

So, what skills do you need to develop in your organization to help you manage these difficulties? We have learned that the following skills are important to master:

• Problem analysis
• Understanding stakeholder needs
• Defining the system
• Managing the scope of the project
• Refining the system definition
• Managing changing requirements

Problem Analysis
Problem analysis is done to understand problems and initial stakeholder needs, and to propose high-level solutions. It is an act of reasoning and analysis to find "the problem behind the problem". During problem analysis, agreement is gained on what the real problem(s) are and who the stakeholders are. You also define, from a business perspective, the boundaries of the solution, as well as business constraints on the solution. You should also have analyzed the business case for the project so that there is a good understanding of what return is expected on the investment made in the system being built.
Understanding Stakeholder Needs
Requirements come from many sources; examples include customers, partners, end users, and domain experts. You need to know how best to determine what the sources should be, how to get access to those sources, and how to elicit information from them. The individuals who provide the primary sources for this information are referred to as stakeholders in the project. If you're developing an information system to be used internally within your company, you may include people with end-user experience and business domain expertise in your development team. Very often you will start the discussions at a business-model level rather than a system level. If you're developing a product to be sold to a marketplace, you may make extensive use of your marketing people to better understand the needs of customers in that market.
Elicitation activities may occur using techniques such as interviews, brainstorming, conceptual
prototyping, questionnaires, and competitive analysis. The result of the elicitation would be a list of
requests or needs that are described textually and graphically, and that have been given priority
relative to one another.
Defining the System

To define the system means to translate and organize the understanding of stakeholder needs into a
meaningful description of the system to be built. Early in system definition, decisions are made on
what constitutes a requirement, documentation format, language formality, degree of requirements
specificity (how many and in what detail), request priority and estimated effort (two very different
valuations usually assigned by different people in separate exercises), technical and management
risks, and initial scope. Part of this activity may include early prototypes and design models directly
related to the most important stakeholder requests. The outcome of system definition is a description of the system expressed in both natural language and graphics.
Managing the Scope of the Project
To run a project efficiently, you need to carefully prioritize the requirements, based on input from all stakeholders, and manage the project's scope. Too many projects suffer from developers working on so-called "Easter eggs" (features the developer finds interesting and challenging), rather than focusing early on tasks that mitigate a risk in the project or stabilize the architecture of the application. To make sure that you resolve or mitigate risks in a project as early as possible, you should develop your system incrementally, carefully choosing for each increment requirements that mitigate known risks in the project. To do so, you need to negotiate the scope (of each iteration) with the
stakeholders of the project. This typically requires good skills in managing expectations of the
output from the project in its different phases. You also need to have control of the sources of
requirements, of how the deliverables of the project look, as well as the development process itself.
Refining the System Definition
The detailed definition of the system needs to be presented in such a way that your stakeholders can understand, agree to, and sign off on it. It needs to cover not only functionality, but also compliance with any legal or regulatory requirements, usability, reliability, performance, supportability, and maintainability. An error often committed is to believe that what you feel is complex to build needs to have a complex definition. This leads to difficulties in explaining the purpose of the project and the system. People may be impressed, but they will not give good input since they don't understand. You should put a lot of effort into understanding the audience for the documents you are producing to describe the system. You may often see a need to produce different kinds of descriptions for different audiences.
We have seen that the use-case methodology, often in combination with simple visual prototypes, is a very efficient way of communicating the purpose of the system and defining its details. Use cases help put requirements into a context; they tell a story of how the system will be used.

Another component of the detailed definition of the system is to state how the system should be tested. Test plans and definitions of what tests to perform tell us what system capabilities will be verified.
Managing Changing Requirements
No matter how careful you are about defining your requirements, there will always be things that
change. What makes changing requirements complex to manage is not only that a changed
requirement means that more or less time has to be spent on implementing a particular new feature,
but also that a change to one requirement may have an impact on other requirements. You need to
make sure that you give your requirements a structure that is resilient to changes, and that you use
traceability links to represent dependencies between requirements and other artifacts of the
development lifecycle. Managing change includes activities like establishing a baseline, determining
which dependencies are important to trace, establishing traceability between related items, and
change control.
Types of Requirements
Functional requirements specify actions that a system must be able to perform, without taking
physical constraints into consideration. These are often best described in a use-case model and in
use cases. Functional requirements thus specify the input and output behavior of a system.
Requirements that are not functional, such as the ones listed below, are sometimes called non-functional requirements. Many requirements are non-functional, and describe only attributes of
the system or attributes of the system environment. Although some of these may be captured in use
cases, those that cannot may be specified in Supplementary Specifications. Nonfunctional
requirements are those that address issues such as those described below.
A complete definition of the software requirements, use cases, and Supplementary Specifications
may be packaged together to define a Software Requirements Specification (SRS) for a particular
"feature" or other subsystem grouping.
Categories of Requirements

There are many different kinds of requirements. One way of categorizing them is described as the FURPS+ model, using the acronym FURPS to describe the major categories of requirements with subcategories as shown below.

• Functionality
• Usability
• Reliability
• Performance
• Supportability

The "+" in FURPS+ reminds you to include such requirements as:

• design constraints
• implementation requirements
• interface requirements
• physical requirements

Functional requirements may include:

• feature sets
• capabilities
• security

Usability requirements may include such subcategories as:

• human factors
• aesthetics
• consistency in the user interface (see Guidelines: User-Interface)
• online and context-sensitive help
• wizards and agents
• user documentation
• training materials

Reliability requirements to be considered are:

• frequency and severity of failure
• recoverability
• predictability
• accuracy
• mean time between failure (MTBF)

A performance requirement imposes conditions on functional requirements. For example, for a given action, it may specify performance parameters for:

• speed
• efficiency
• availability
• accuracy
• throughput
• response time
• recovery time
• resource usage

Supportability requirements may include:

• testability
• extensibility
• adaptability
• maintainability
• compatibility
• configurability
• serviceability
• installability
• localizability (internationalization)

A design requirement, often called a design constraint, specifies or constrains the design of a system.

An implementation requirement specifies or constrains the coding or construction of a system. Examples are:

• required standards
• implementation languages
• policies for database integrity
• resource limits
• operation environments

An interface requirement specifies:

• an external item with which a system must interact
• constraints on formats, timings, or other factors used by such an interaction

A physical requirement specifies a physical characteristic that a system must possess; for example:

• material
• shape
• size
• weight

This type of requirement can be used to represent hardware requirements, such as the physical network configurations required.

Traceability
Traceability is the ability to trace a project element to other related project elements, especially
those related to requirements. Project elements involved in traceability are called traceability
items. Typical traceability items include different types of requirements, analysis and design model
elements, test artifacts (test suites, test cases, etc.), and end-user support documentation and training material, as shown in the figure below.

(Figure: the traceability hierarchy.)
The purpose of establishing traceability is to help:

• Understand the source of requirements
• Manage the scope of the project
• Manage changes to requirements
• Assess the project impact of a change in a requirement
• Assess the impact of a failure of a test on requirements (i.e., if a test fails, the requirement may not be satisfied)
• Verify that all requirements of the system are fulfilled by the implementation
• Verify that the application does only what it was intended to do

Traceability helps you understand and manage how input to the requirements, such as Business Rules and Stakeholder Requests, is translated into a set of key stakeholder/user needs and system
features, as specified in the Vision document. The Use-Case model, in turn, outlines how these
features are translated to the functionality of the system. The details of how the system interacts
with the outside world are captured in Use Cases, with other important requirements (such as non­
functional requirements, design constraints, etc.) in the Supplementary Specifications. Traceability
allows you to also follow how these detailed specifications are translated into a design, how it is
tested, and how it is documented for the user. For a large system, Use Cases and Supplementary
Specifications may be packaged together to define a Software Requirements Specification (SRS) for
a particular "feature" or other subsystem grouping.
A key concept in helping to manage changes in requirements is that of a "suspect" traceability
link. When a requirement (or other traceability item) changes at either end of a traceability link, all
links associated with that requirement are marked as "suspect". This flags the responsible role to
review the change and determine if the associated items will need to change also. This concept also
helps in analyzing the impact of potential changes.
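To make the "suspect" mechanism concrete, here is a minimal sketch of how a tool might represent traceability items and links, marking every link suspect when the item at either end changes. The class and field names are hypothetical illustrations, not the schema of any particular requirements-management tool.

    import java.util.ArrayList;
    import java.util.List;

    /** Hypothetical traceability item: a feature, use case, test case, and so on. */
    class TraceabilityItem {
        final String id;
        final List<TraceLink> links = new ArrayList<>();

        TraceabilityItem(String id) { this.id = id; }

        /** A change to this item marks all of its links as suspect for review. */
        void change() {
            for (TraceLink link : links) {
                link.suspect = true;
            }
        }
    }

    /** A traceability link between two items, e.g., feature FEAT10 to a use case. */
    class TraceLink {
        final TraceabilityItem from;
        final TraceabilityItem to;
        boolean suspect = false;

        TraceLink(TraceabilityItem from, TraceabilityItem to) {
            this.from = from;
            this.to = to;
            from.links.add(this);
            to.links.add(this);
        }
    }

In this sketch, changing a feature flips its links to suspect, flagging the responsible role to review the use cases and tests at the other end before clearing the flags.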
Traceability links may be set up to help answer the following sample set of queries:

• Show me user needs that are not linked to product features.
• Show me the status of tests on all use cases in iteration #n.
• Show me all supplementary requirements linked to tests whose status is untested.
• Show me the results of all tests that failed, in order of criticality.
• Show me the features scheduled for this release, which user needs they satisfy, and their status.

Example:
For a Recycling Machine system, the Vision document specifies the following feature:

• FEAT10: The recycling machine will allow the addition of new bottle types.

This feature is traced to a use case "Add New Bottle Type":

• The use case Add New Bottle Type allows the Operator to teach the Recycling Machine to recognize new kinds of bottles.

This traceability helps us verify that all features have been accounted for in use cases and supplementary specifications.
3. Component Architecture
What Does Component Architecture Mean?
Components are cohesive groups of code, in source or executable form, with well­defined interfaces
and behaviors that provide strong encapsulation of their contents, and are, therefore, replaceable.
Architectures based around components tend to reduce the effective size and complexity of the
solution, and so are more robust and resilient.
Architectural Emphasis
Use cases drive the Rational Unified Process (RUP) end­to­end over the whole lifecycle, but the
design activities are centered around the notion of system architecture and, for software-intensive
systems, software architecture. The main focus of the early iterations of the process, mostly in the elaboration phase, is to produce and validate a software architecture, which in the initial development cycle takes the form of an executable architectural prototype that gradually evolves to become the final system in later iterations.
By executable architecture, we mean a partial implementation of the system built to demonstrate
selected system functions and properties, in particular those satisfying non-functional requirements.
The purpose of executable architecture is to mitigate risks related to performance, throughput,
capacity, reliability, and other "ilities", so that the complete functional capability of the system may
be added in the construction phase on a solid foundation, without fear of breakage.
The RUP provides a methodical, systematic way to design, develop, and validate an architecture.
We offer templates for architectural description around the concepts of multiple architectural views,
and for the capture of architectural style, design rules, and constraints. The Analysis and Design
discipline contains specific activities aimed at identifying architectural constraints and
architecturally significant elements, as well as guidelines on how to make architectural choices. The
management process shows how the planning of the early iterations takes into account the design of
an architecture and the resolution of the major technical risks. See the Project Management
discipline and all activities associated with the Software Architect for further information.
Architecture is important for several reasons:

• It lets you gain and retain intellectual control over the project, to manage its complexity and to maintain system integrity.

A complex system is more than the sum of its parts; more than a succession of small independent tactical decisions. It must have some unifying, coherent structure to organize those parts systematically, and it must provide precise rules on how to grow the system without having its complexity "explode" beyond human understanding.

The architecture establishes the means for improved communication and understanding throughout the project by establishing a common set of references, a common vocabulary with which to discuss design issues.

• It is an effective basis for large-scale reuse.

By clearly articulating the major components and the critical interfaces between them, an architecture lets you reason about reuse: both internal reuse, which is the identification of common parts, and external reuse, which is the incorporation of ready-made, off-the-shelf components. However, it also allows reuse on a larger scale: the reuse of the architecture itself in the context of a line of products that addresses different functionality in a common domain.

• It provides a basis for project management.

Planning and staffing are organized along the lines of major components. Fundamental structural decisions are taken by a small, cohesive architecture team; they are not distributed. Development is partitioned across a set of small teams, each responsible for one or several parts of the system.
Component-based Development
A software component can be defined as a nontrivial piece of software, a module, a package, or a subsystem, all of which fulfill a clear function, have a clear boundary, and can be integrated in a well-defined architecture. It's the physical realization of an abstraction in your design.
Components come from different places:

• In defining a very modular architecture, you identify, isolate, design, develop, and test well-formed components. These components can be individually tested and gradually integrated to form the whole system.
• Furthermore, some of these components can be developed to be reusable, especially the components that provide common solutions to a wide range of common problems. These reusable components, which may be larger than just collections of utilities or class libraries, form the basis of reuse within an organization, increasing overall software productivity and quality.
• More recently, the advent of commercially successful component infrastructures, such as CORBA, the Internet, ActiveX, and JavaBeans, triggers a whole industry of off-the-shelf components for various domains, allowing you to buy and integrate components rather than developing them all in-house.

The first point in the preceding list exploits the old concepts of modularity and encapsulation,
bringing those concepts underlying object­oriented technology a step further. The last two points in
the list shift software development from programming software a line at a time to composing
software by assembling components.
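As a small sketch of what "a clear function and a clear boundary" can look like in code, the fragment below separates a component's published interface from a replaceable implementation behind it. The names are invented for illustration, borrowing the Recycling Machine example from Section 2; the RUP does not prescribe a particular language or interface style.

    /** Published interface: the only boundary that client code may depend on. */
    interface DepositCalculator {
        int depositFor(String bottleType);
    }

    /** One replaceable implementation hidden behind the boundary. */
    class FlatRateDepositCalculator implements DepositCalculator {
        @Override
        public int depositFor(String bottleType) {
            // A trivial rule, standing in for real business logic.
            return bottleType.equals("large") ? 10 : 5;
        }
    }

Because callers see only DepositCalculator, the implementation can be swapped, for example for an off-the-shelf component, without touching the rest of the system.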
The RUP supports component-based development in these ways:

• The iterative approach allows you to progressively identify components, and decide which ones to develop, which ones to reuse, and which ones to buy.
• The focus on software architecture allows you to articulate the structure: the components and the ways in which they integrate, which include the fundamental mechanisms and patterns by which they interact.
• Concepts such as packages, subsystems, and layers are used during Analysis & Design to organize components and to specify interfaces.
• Testing is first organized around components, then gradually around larger sets of integrated components.
4. Change Request Management
Definitions
Change Request (CR) – A formally submitted artifact that is used to track all stakeholder requests
(including new features, enhancement requests, defects, changed requirements, etc.) along with
related status information throughout the project lifecycle. All change history will be maintained
with the Change Request, including all state changes along with dates and reasons for the change.
This information will be available for any repeat reviews and for final closing.
Change (or Configuration) Control Board (CCB) – The board that oversees the change process, consisting of representatives from all interested parties, including customers, developers, and users.
In a small project, a single team member, such as the project manager or software architect, may
play this role. In the Rational Unified Process, this is shown by the Change Control Manager role.
CCB Review Meeting – The function of this meeting is to review Submitted Change Requests. An
initial review of the contents of the Change Request is done in the meeting to determine if it is a
valid request. If so, then a determination is made if the change is in or out of scope for the current
release(s), based on priority, schedule, resources, level-of-effort, risk, severity and any other
relevant criteria as determined by the group. This meeting is typically held once per week. If the
Change Request volume increases substantially, or as the end of a release cycle approaches, the
meeting may be held as frequently as daily. Typical members of the CCB Review Meeting are the
Test Manager, Development Manager and a member of the Marketing Department. Additional
attendees may be deemed necessary by the members on an "as needed" basis.
Change Request Submit Form – This form is displayed when a Change Request is being
Submitted for the first time. Only the fields necessary for the submitter to complete are displayed on
the form.
Change Request Combined Form – This form is displayed when you are reviewing a Change
Request that has already been submitted. It contains all the fields necessary to describe the Change
Request.
The following outline of the Change Request process describes the states and statuses of Change
Requests through their overall process, and who needs to be notified during the lifecycle of the
Change Request.
Sample Activities for Managing Change Requests
The following example shows sample activities that a project might adopt for managing a Change Request (CR) throughout its lifecycle.
Sample Change Request Management (CRM) Process Activity Descriptions:

Submit CR (Responsibility: Submitter)
Any stakeholder on the project can submit a Change Request (CR). The Change Request is logged into the Change Request Tracking System (e.g., Rational ClearQuest) and is placed into the CCB Review Queue, by setting the Change Request State to Submitted.

Review CR (Responsibility: CCB)
The function of this activity is to review Submitted Change Requests. An initial review of the contents of the Change Request is done in the CCB Review meeting to determine if it is a valid request. If so, then a determination is made if the change is in or out of scope for the current release(s), based on priority, schedule, resources, level-of-effort, risk, severity and any other relevant criteria as determined by the group.

Confirm Duplicate or Reject (Responsibility: CCB Delegate)
If a Change Request is suspected of being a Duplicate or Rejected as an invalid request (e.g., operator error, not reproducible, the way it works, etc.), a delegate of the CCB is assigned to confirm the duplicate or rejected Change Request and to gather more information from the submitter, if necessary.

Update CR (Responsibility: Submitter)
If more information is needed (More Info) to evaluate a Change Request, or if a Change Request is rejected at any point in the process (e.g., confirmed as a Duplicate, Rejected, etc.), the submitter is notified and may update the Change Request with new information. The updated Change Request is then re-submitted to the CCB Review Queue for consideration of the new data.

Assign & Schedule Work (Responsibility: Project Manager)
Once a Change Request is Opened, the Project Manager will then assign the work to the appropriate team member, depending on the type of request (e.g., enhancement request, defect, documentation change, test defect, etc.), and make any needed updates to the project schedule.

Make Changes (Responsibility: Assigned Team Member)
The assigned team member performs the set of activities defined within the appropriate section of the process (e.g., requirements, analysis & design, implementation, produce user-support materials, design test, etc.) to make the changes requested. These activities will include all normal review and unit test activities as described within the normal development process. The Change Request will then be marked as Resolved.

Verify Changes in Test Build (Responsibility: Tester)
After the changes are Resolved by the assigned team member (analyst, developer, tester, tech writer, etc.), the changes are placed into a test queue to be assigned to a tester and Verified in a test build of the product.

Verify Changes in Release Build (Responsibility: CCB Delegate / System Integrator)
Once the resolved changes have been Verified in a test build of the product, the Change Request is placed into a release queue to be verified against a release build of the product, produce release notes, etc., and Close the Change Request.
Sample States and Transitions for a Change Request

The following example diagram shows sample states and who should be notified throughout the lifecycle of a Change Request (CR).
Sample Change Request Management (CRM) State Descriptions:

Submitted (Access Control: All Users)
This state occurs as the result of 1) a new Change Request submission, 2) update of an existing Change Request or 3) consideration of a Postponed Change Request for a new release cycle. The Change Request is placed in the CCB Review queue. No owner assignment takes place as a result of this action.

Postponed (Access Control: Admin, Project Manager)
The Change Request is determined to be valid, but "out of scope" for the current release(s). Change Requests in the Postponed state will be held and reconsidered for future releases. A target release may be assigned to indicate the timeframe in which the Change Request may be Submitted to re-enter the CCB Review queue.

Duplicate (Access Control: Admin, Project Manager, QE Manager, Development Manager)
A Change Request in this state is believed to be a duplicate of another Change Request that has already been submitted. Change Requests can be put into this state by the CCB Review Admin or by the team member assigned to resolve it. When the Change Request is placed into the Duplicate state, the Change Request number it duplicates will be recorded (on the Attachments tab in ClearQuest). A submitter should initially query the Change Request database for duplicates of a Change Request before it is submitted. This will prevent several steps of the review process and therefore save a lot of time. Submitters of duplicate Change Requests should be added to the notification list of the original Change Request for future notifications regarding resolution.

Rejected (Access Control: Admin, Project Manager, Development Manager, Test Manager)
A Change Request in this state is determined in the CCB Review Meeting or by the assigned team member to be an invalid request, or more information is needed from the submitter. If already assigned (Open), the Change Request is removed from the resolution queue and will be reviewed again. A designated authority of the CCB is assigned to confirm. No action is required from the submitter unless deemed necessary, in which case the Change Request state will be changed to More Info. The Change Request will be reviewed again in the CCB Review Meeting considering any new information. If confirmed invalid, the Change Request will be Closed by the CCB and the submitter notified.

More Info (Access Control: Admin)
Insufficient data exists to confirm the validity of a Rejected or Duplicate Change Request. The owner automatically gets changed to the submitter, who is notified to provide more data.

Opened (Access Control: Admin, Project Manager, Development Manager, QE Department)
A Change Request in this state has been determined to be "in scope" for the current release and is awaiting resolution. It has been slated for resolution before an upcoming target milestone. It is defined as being in the "assignment queue". The meeting members are the sole authority for opening a Change Request into the resolution queue. If a Change Request of priority two or higher is found, it should be brought to the immediate attention of the QE or Development Manager. At that point they may decide to convene an emergency CCB Review Meeting or simply open the Change Request into the resolution queue instantly.

Assigned (Access Control: Project Manager)
An Opened Change Request is then the responsibility of the Project Manager, who Assigns Work based on the type of Change Request and updates the schedule, if appropriate.

Resolved (Access Control: Admin, Project Manager, Development Manager, QE Manager, Development Department)
Signifies that the resolution of this Change Request is complete and it is now ready for verification. If the submitter was a member of the QE Department, the owner automatically gets changed to the submitting QE member; otherwise, it changes to the QE Manager for manual re-assignment.

Test Failed (Access Control: Admin, QE Department)
A Change Request that fails testing in either a test build or a release build will be placed in this state. The owner automatically gets changed to the team member who resolved the Change Request.

Verified (Access Control: Admin, QE Department)
A Change Request in this state has been Verified in a test build and is ready to be included in a release.

Closed (Access Control: Admin)
The Change Request no longer requires attention. This is the final state a Change Request can be assigned. Only the CCB Review Admin has the authority to close a Change Request. When a Change Request is Closed, the submitter will receive an email notification with the final disposition of the Change Request. A Change Request may be Closed: 1) after its Verified resolution is validated in a release build, 2) when its Rejected state is confirmed, or 3) when it is confirmed as a Duplicate of an existing Change Request. In the latter case, the submitter will be informed of the duplicate Change Request and will be added to that Change Request for future notifications (see the definitions of the states "Rejected" and "Duplicate" for more details). If the submitter wishes to contest a closing, the Change Request must be updated and re-Submitted for CCB review.
The state ‘tags’ provide the basis for reporting Change Request (aging, distribution or trend)
statistics.
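To illustrate how a tracking tool might enforce such a lifecycle, the sketch below models the sample states as a small state machine that rejects invalid moves. The transition table is one hypothetical reading of the process described above, not the schema of ClearQuest or any other tool.

    import java.util.EnumSet;
    import java.util.Map;

    /** The sample Change Request states from the table above. */
    enum CrState {
        SUBMITTED, POSTPONED, DUPLICATE, REJECTED, MORE_INFO,
        OPENED, ASSIGNED, RESOLVED, TEST_FAILED, VERIFIED, CLOSED
    }

    /** Hypothetical transition checker mirroring the sample CRM process. */
    final class CrLifecycle {
        private static final Map<CrState, EnumSet<CrState>> ALLOWED = Map.ofEntries(
            Map.entry(CrState.SUBMITTED, EnumSet.of(CrState.POSTPONED, CrState.DUPLICATE,
                                                    CrState.REJECTED, CrState.OPENED)),
            Map.entry(CrState.POSTPONED, EnumSet.of(CrState.SUBMITTED)),
            Map.entry(CrState.DUPLICATE, EnumSet.of(CrState.MORE_INFO, CrState.CLOSED)),
            Map.entry(CrState.REJECTED, EnumSet.of(CrState.MORE_INFO, CrState.CLOSED)),
            Map.entry(CrState.MORE_INFO, EnumSet.of(CrState.SUBMITTED)),
            Map.entry(CrState.OPENED, EnumSet.of(CrState.ASSIGNED)),
            Map.entry(CrState.ASSIGNED, EnumSet.of(CrState.RESOLVED)),
            Map.entry(CrState.RESOLVED, EnumSet.of(CrState.VERIFIED, CrState.TEST_FAILED)),
            Map.entry(CrState.TEST_FAILED, EnumSet.of(CrState.RESOLVED)),
            Map.entry(CrState.VERIFIED, EnumSet.of(CrState.CLOSED, CrState.TEST_FAILED)),
            Map.entry(CrState.CLOSED, EnumSet.noneOf(CrState.class))
        );

        /** Throws if the sample process does not allow the move. */
        static void transition(CrState from, CrState to) {
            if (!ALLOWED.get(from).contains(to)) {
                throw new IllegalStateException(from + " -> " + to + " is not a valid transition");
            }
            // A real tool would also record the date, reason, and owner here,
            // preserving the change history required for reviews and final closing.
        }
    }

Keeping the allowed moves in one table makes invalid shortcuts, say Submitted straight to Closed, fail loudly instead of silently corrupting the change history.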
5. Quality Management
Introduction
Quality is something we all strive for in our products, processes, and services. Yet when asked, "What is Quality?", everyone has a different opinion. Common responses include one or the other of these:

• "Quality ... I'm not sure how to describe it, but I'll know it when I see it."
• "... meeting requirements."

Perhaps the most frequent reference to quality, specifically related to software, is this remark
regarding its absence:
"How could they release something like this with such low quality!?"
These commonplace responses are telling, but they offer little room to rigorously examine quality
and improve upon its execution. These comments all illustrate the need to define quality in a
manner in which it can be measured and achieved.
Quality, however, is not a singular characteristic or attribute. It's multi-dimensional and can be
possessed by a product or a process. Product quality is concentrated on building the right product,
whereas process quality is focused on building the product correctly.
Definition of Quality

The definition of quality, taken from The American Heritage Dictionary of the English Language, 3rd Edition, Houghton Mifflin Co., © 1992, 1996, is:

Quality (kwol'i-te) n., pl. -ties. Abbr. qlty. 1.a. An inherent or distinguishing characteristic; a property. b. A personal trait, especially a character trait. 2. Essential character; nature. 3.a. Superiority of kind. b. Degree or grade of excellence.
As demonstrated by this definition, quality is not a single dimension, but many. To use the
definition and apply it to software development, the definition must be refined. Therefore, for the
purposes of the Rational Unified Process (RUP), quality is defined as:

"...the characteristic of having demonstrated the achievement of producing a product that meets or exceeds agreed-on requirements, as measured by agreed-on measures and criteria, and that is produced by an agreed-on process."
Achieving quality is not simply "meeting requirements", or producing a product that meets user
needs and expectations. Rather, quality also includes identifying the measures and criteria to
demonstrate the achievement of quality, and the implementation of a process to ensure that the
product created by the process has achieved the desired degree of quality, and can be repeated and
managed.
Who Owns Quality?
A common misconception is that quality is owned by, or is the responsibility of, one group. This myth is often perpetuated by creating a group, sometimes called Quality Assurance (other names include Test, Quality Control, and Quality Engineering), and giving them the charter and the responsibility for quality.
Quality is, and should be, the responsibility of everyone. Achieving quality must be integral to
almost all process activities, instead of a separate discipline, thereby making everyone responsible
for the quality of the products (or artifacts) they produce and for the implementation of the process
in which they are involved.
Each role contributes to the achievement of quality in the following ways:

• Product quality: the contribution to the overall achievement of quality in each artifact being produced.
• Process quality: the achievement of quality in the process activities in which they are involved.

Everyone shares in the responsibility and glory for achieving a high-quality product, or in the shame of a low-quality product. But only those directly involved in a specific process component are responsible for the glory, or shame, for the quality of those process components (and the artifacts).
Someone, however, must take the responsibility for managing quality; that is, providing the
supervision to ensure that quality is being managed, measured, and achieved. The role responsible
for managing quality is the Project Manager.
Common Misconceptions about Quality
There are many misconceptions regarding quality; the most common are the following.

Quality can be added to or "tested" into a product

Just as a product cannot be produced if there is no description of what it is, what it needs to do, who uses it and how it's used, and so on, quality and its achievement cannot be attained if it's not described, measured, and part of the process of creating the product.
Quality is a single dimension, attribute, or characteristic and means the same thing to
everyone
Quality is not a single dimension, attribute, or characteristic. Quality is measured in many ways—
quality metrics and criteria are established to meet the needs of project, organization, and customer.
Quality can be measured along several dimensions: some apply to process quality; some to product quality; some to both. Quality can be measured for:

• Progress: such as use cases demonstrated or milestones completed
• Variance: differences between planned and actual schedules, budgets, staffing requirements, and so forth
• Reliability: resistance to failure (crashing, hanging, memory leaks, and so on) during execution
• Function: the artifact implements and executes the required use cases as intended
• Performance: the artifact executes and responds in a timely and acceptable manner, and continues to perform acceptably when subjected to real-world operational characteristics such as load, stress, and lengthy periods of operation

Quality happens on its own
Quality cannot happen by itself. For quality to be achieved, a process must be implemented,
adhered to, and measured. The purpose of the RUP is to provide a disciplined approach to assigning
tasks and responsibilities within a development organization. Our goal is to ensure the production of
high­quality software that meets the needs of our end users, within a predictable schedule and
budget. The RUP captures many of the best practices in modern software development in a form
that can be tailored for a wide range of projects and organizations. The Environment discipline
gives you guidance about how to best configure the process to your needs.
Processes can be configured, and quality (criteria for acceptability) can be negotiated, based upon several factors. The most common factors are:

• Risk (including liability)
• Market opportunities
• Revenue requirements
• Staffing or scheduling issues
• Budgets

Changes in the process and criteria for acceptability should be identified and agreed upon at the outset of the project.
Management of Quality in the RUP
Managing quality is done for these purposes:

• To identify appropriate indicators (metrics) of acceptable quality
• To identify appropriate measures to be used in evaluating and assessing quality
• To identify and appropriately address issues affecting quality as early and effectively as possible
Managing quality is implemented throughout all disciplines, workflows, phases, and iterations in the RUP. In general, managing quality throughout the lifecycle means you implement, measure, and assess both process quality and product quality. Some of the efforts expended to manage quality in each discipline are highlighted in the following list:

• Managing quality in the Requirements discipline includes analyzing the requirements artifact set for consistency (between artifact standards and other artifacts), clarity (clearly communicates information to all shareholders, stakeholders, and other roles), and precision (the appropriate level of detail and accuracy).
• In the Analysis & Design discipline, managing quality includes assessing the design artifact set, including the consistency of the design model, its translation from the requirements artifacts, and its translation into the implementation artifacts.
• In the Implementation discipline, managing quality includes assessing the implementation artifacts and evaluating the source code or executable artifacts against the appropriate requirements, design, and test artifacts.
• The Test discipline is highly focused toward managing quality, as most of the efforts expended in this discipline address the three purposes of managing quality, identified previously.
• The Environment discipline, like the Test discipline, includes many efforts addressing the purposes of managing quality. Here you can find guidance on how to best configure your process to meet your needs.
• Managing quality in the Deployment discipline includes assessing the implementation and deployment artifacts, and evaluating the executable and deployment artifacts against the appropriate requirements, design, and test artifacts needed to deliver the product to your customer.
• The Project Management discipline includes an overview of many efforts for managing quality, including the reviews and audits required to assess the implementation, adherence, and progress of the development process.

Measuring Quality
The measurement of Quality, whether Product or Process, requires the collection and analysis of
information, usually stated in terms of measurements and metrics. Measurements are made
primarily to gain control of a project, and therefore be able to manage it. They are also used to
evaluate how close or far we are from the objectives set in the plan in terms of completion, quality,
compliance to requirements, etc.
Metrics are used to attain two goals, knowledge and change (or achievement):
Knowledge goals: they are expressed by the use of verbs like evaluate, predict, monitor.
You want to better understand your development process. For example, you may want
to assess product quality, obtain data to predict testing effort, monitor test coverage, or
track requirements changes.
Change or achievement goals: these are expressed by the use of verbs such as increase, reduce, improve, or achieve. You are usually interested in seeing how things change or improve over time, from one iteration to another, from one project to another.
Metrics for both goals are used for measuring Process and Product Quality.
All metrics require criteria to identify and determine the degree or level at which acceptable quality is attained. The level of acceptable quality is negotiable and variable, and needs to be determined and agreed upon early in the development lifecycle. For example, in the early iterations, a high number of application defects is acceptable, but not architectural ones. In late iterations, only aesthetic defects are acceptable in the application.
The acceptance criteria may be stated in many ways and may include more than one measure.
Common acceptance criteria may include the following measures:

• Defect counts and/or trends, such as the number of defects identified, fixed, or that remain open (not fixed).
• Test coverage, such as the percentage of code or use cases planned or implemented and executed (by a test). Test coverage is usually used in conjunction with the defect criteria identified above.
• Performance, such as the time required for a specified action (use case, operation, or other event) to occur. This criterion is commonly used for performance testing, failover and recovery testing, or other tests in which time criticality is essential.
• Compliance. This criterion indicates the degree to which an artifact or process activity / step must meet an agreed-upon standard or guideline.
• Acceptability or satisfaction. This criterion is usually used with subjective measures, such as usability or aesthetics.
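As a rough sketch of how such criteria might be checked mechanically, the fragment below evaluates a defect count and a use-case test-coverage percentage against agreed-on thresholds. The class name and threshold values are hypothetical illustrations; real criteria would be negotiated per project as described above.

    /** Hypothetical acceptance-criteria check combining two common measures. */
    final class AcceptanceCriteria {
        // Agreed-on thresholds; illustrative values only.
        private static final int MAX_OPEN_DEFECTS = 25;
        private static final double MIN_USE_CASE_COVERAGE = 0.80;

        /** True when open defects and test coverage both meet the agreed levels. */
        static boolean met(int openDefects, int useCasesTested, int useCasesPlanned) {
            double coverage = (double) useCasesTested / useCasesPlanned;
            return openDefects <= MAX_OPEN_DEFECTS && coverage >= MIN_USE_CASE_COVERAGE;
        }

        public static void main(String[] args) {
            // Example: 12 open defects, 41 of 50 planned use cases tested (82%).
            System.out.println(met(12, 41, 50) ? "criteria met" : "criteria not met");
        }
    }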
Measuring Product Quality
Stating the requirements in a clear, concise, and testable fashion is only part of achieving product
quality. It is also necessary to identify the measures and criteria that will be used to identify the
desired level of quality and determine if it has been achieved. Measures describe the method used to
capture the data used to assess quality, while criteria define the level or point at which the product
has achieved acceptable (or unacceptable) quality.
Measuring the product quality of an executable artifact is achieved using one or more measurement techniques, such as:

• reviews / walkthroughs
• inspection
• execution

Different metrics are used, depending upon the nature of the quality goal of the measure. For example, in reviews, walkthroughs, and inspections, the primary goal is to focus on the function and reliability quality dimensions. Defects, coverage, and compliance are the primary metrics used when these measurement techniques are used. Execution, however, may focus on function, reliability, or performance. Therefore defects, coverage, and performance are the primary metrics used. Other measures and metrics will vary based upon the nature of the requirement.
Measuring Process Quality
The measurement of Process Quality is achieved by collecting both knowledge and achievement measures:

1. The degree of adherence to the standards, guidelines, and implementation of an accepted process.
2. The status / state of the current process implementation relative to the planned implementation.
3. The quality of the artifacts produced (using the product quality measures described above).

Measuring process quality is achieved using one or more measurement techniques, such as:

• progress: such as use cases demonstrated or milestones completed
• variance: differences between planned and actual schedules, budgets, staffing requirements, etc.
• product quality measures and metrics (as described in the Measuring Product Quality section above)
To manage quality, measurements and assessments of process and product quality are performed throughout the product lifecycle. The evaluation of quality may occur when a major event occurs, such as at the end of a phase, or when an artifact is produced, such as a code walkthrough. Described below are the different evaluations that occur during the lifecycle.
Milestones and Status Assessments

Each phase and iteration in the Rational Unified Process (RUP) results in the release (internal or external) of an executable product or subset of the final product under development, at which time assessments are made for the following purposes:

• Demonstrate achievement of the requirements (and criteria)
• Synchronize expectations
• Synchronize related artifacts into a baseline
• Identify risks

Major milestones occur at the end of each of the four RUP phases and verify that the objectives of the phase have been achieved. There are four major milestones:

• Lifecycle Objectives Milestone
• Lifecycle Architecture Milestone
• Initial Operational Capability Milestone
• Product Release Milestone
Minor milestones occur at the conclusion of each iteration and focus on verifying that the objectives
of the iteration have been achieved. Status assessments are periodic efforts to assess ongoing
progress throughout an iteration and/or phase.
Inspections, Reviews, and Walkthroughs
Inspections, Reviews, and Walkthroughs are specific techniques focused on evaluating artifacts and
are a powerful method of improving the quality and productivity of the development process.
These are best conducted in a meeting format, with one role acting as a facilitator, and a second role recording notes (change requests, issues, questions, and so on).
The IEEE Standard Glossary (1990 Ed.) defines these three kinds of efforts as:

• Review: A formal meeting at which an artifact, or set of artifacts, is presented to the user, customer, or other interested party for comments and approval.

• Inspection: A formal evaluation technique in which artifacts are examined in detail by a person or group other than the author to detect errors, violations of development standards, and other problems.

• Walkthrough: A review process in which a developer leads one or more members of the development team through a segment of an artifact that he or she has written while the other members ask questions and make comments about technique, style, possible errors, violation of development standards, and other problems.
6. Software Testing
Stages of Testing
Testing is usually applied to different types of targets in different stages or levels of work effort. These stages vary in importance as the software development lifecycle unfolds, but it's important to ensure a balance of focus is retained across these different work efforts.
Developer Testing
Developer Testing denotes the aspects of the test effort that are most appropriate for the software developers to undertake. This is in contrast to the System Testing effort, which denotes the aspects of the test effort that are most appropriate for a group independent of the software developers to undertake.
Traditionally, developer testing has been thought of mainly in terms of unit testing, with occasional focus on aspects of integration and, infrequently, other aspects of testing. Following this traditional approach presents risks to software quality, because important testing concerns discovered at the boundary of these distinctions are often ignored by both work groups.
The better approach is to divide the work effort so that there is some natural overlap, the exact nature of that overlap being based on the needs of the individual project. We recommend fostering an environment where developers and independent system testers share in a single vision of quality.
Unit Test
A more traditional distinction, unit test, implemented early in the iteration, focuses on verifying the
smallest testable elements of the software. Unit testing is typically applied to components in the
implementation model to verify that control flows and data flows are covered and function as
expected. These expectations are based on how the component participates in executing a use case,
which you can derive from the sequence diagrams for that use case. The Implementer performs unit test as the
unit is developed. The details of unit test are described in the Implementation discipline.
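As an illustration only, here is a minimal sketch of such a unit test in Java with JUnit 5. The Account component and its behavior are hypothetical, invented for the example; each test verifies one expected control or data flow of the unit in isolation.

// A minimal unit-test sketch. The Account component is hypothetical; the point
// is verifying the smallest testable element of the software in isolation.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

class Account {
    private int balance;
    Account(int openingBalance) { this.balance = openingBalance; }
    void withdraw(int amount) {
        if (amount > balance) throw new IllegalArgumentException("insufficient funds");
        balance -= amount;
    }
    int balance() { return balance; }
}

class AccountTest {
    @Test
    void withdrawReducesBalance() {
        Account account = new Account(100);
        account.withdraw(40);
        assertEquals(60, account.balance());   // expected data flow
    }

    @Test
    void withdrawRejectsOverdraft() {
        Account account = new Account(10);
        assertThrows(IllegalArgumentException.class,
                () -> account.withdraw(40));   // expected control flow
    }
}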
Integration Test
A more traditional distinction, integration testing is performed to ensure that the components in the
implementation model operate properly when combined to execute a use case. The target-of-test is a
package or a set of packages in the implementation model. Often the packages being combined
come from different development organizations. Integration testing exposes incompleteness or
mistakes in the package's interface specifications.
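A hedged sketch of the same idea one level up: two hypothetical components (all names here are invented) are combined and exercised across their interface, which is where integration testing would expose an incomplete or mismatched interface specification.

// An integration-test sketch: OrderService and InMemoryOrderRepository stand in
// for packages that might come from different development organizations. The test
// exercises them in combination rather than in isolation.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.HashMap;
import java.util.Map;

interface OrderRepository {
    void save(String orderId, int quantity);
    Integer find(String orderId);
}

class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, Integer> store = new HashMap<>();
    public void save(String orderId, int quantity) { store.put(orderId, quantity); }
    public Integer find(String orderId) { return store.get(orderId); }
}

class OrderService {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    void placeOrder(String orderId, int quantity) { repository.save(orderId, quantity); }
}

class OrderIntegrationTest {
    @Test
    void placedOrderIsVisibleThroughTheRepositoryInterface() {
        OrderRepository repository = new InMemoryOrderRepository();
        new OrderService(repository).placeOrder("order-42", 3);
        // A mismatch in the interface specification would surface here.
        assertEquals(3, repository.find("order-42"));
    }
}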
System Test
System testing denotes the aspects of the test effort that are most appropriate for a group
independent of the software developers to undertake. Traditionally, system testing was done only when
the software was functioning as a whole; an iterative lifecycle allows system testing to occur much
earlier, as soon as well-formed subsets of the use-case behavior are implemented. The target is
typically the end-to-end functioning of the system.
Acceptance Test
"User" acceptance testing is typically the final test action prior to deploying the software. The goal
of acceptance testing is to verify that the software is ready and can be used by the end­users to
perform those functions and tasks the software was built to do.
Types of Test
There is much more to testing computer software than simply evaluating the functions, interface,
and response time characteristics of a target-of-test. Additional tests must focus on characteristics /
attributes of the target-of-test, such as its:
• integrity (resistance to failure)
• ability to be installed / executed on different platforms
• ability to handle many requests simultaneously
• ...

In order to achieve this, many different types of tests are implemented and executed, each test type
having a specific objective and support technique. Each technique focuses on testing one or more
characteristics or attributes of the target-of-test.
The following test types are listed based on the most obvious quality dimension:
Functionality
• Function test: Tests focused on validating that the target-of-test functions as intended,
providing the required service(s), method(s), or use case(s). This test is implemented and
executed against different targets-of-test, including units, integrated units, application(s),
and systems.
• Security test: Tests focused on ensuring that the target-of-test, data, (or systems) is accessible
only to those actors intended. This test is implemented and executed against various
targets-of-test.
• Volume test: Testing focused on verifying the target-of-test's ability to handle large
amounts of data, either as input and output or resident within the database. Volume
testing includes test strategies such as creating queries that [would] return the entire
contents of the database, or have so many restrictions that no data is returned, or data
entry of the maximum amount of data in each field.
Usability
• Usability test: Tests which focus on:
➢ human factors,
➢ aesthetics,
➢ consistency in the user interface,
➢ online and context-sensitive help,
➢ wizards and agents,
➢ user documentation, and
➢ training materials.
Reliability
• Integrity test: Tests which focus on assessing the target-of-test's robustness (resistance to
failure) and technical compliance to language, syntax, and resource usage. This test is
implemented and executed against different targets-of-test, including units and integrated
units.
• Structure test: Tests that focus on assessing the target-of-test's adherence to its design
and formation. Typically, this test is done for web-enabled applications, ensuring that all
links are connected, appropriate content is displayed, and there is no orphaned content.
• Stress test: A type of reliability test that focuses on evaluating how the system responds
under abnormal conditions. Stresses on the system may include extreme workloads,
insufficient memory, unavailable services and hardware, or limited shared resources.
These tests are often performed to gain a better understanding of how and in what areas
the system will break, so that contingency plans and upgrade maintenance can be planned
and budgeted for well in advance.
Performance
• Benchmark test: A type of performance test that compares the performance of a [new or
unknown] target-of-test to a known reference workload and system.
• Contention test: Tests focused on validating that the target-of-test can acceptably handle
multiple actor demands on the same resource (data records, memory, etc.).
• Load test: A type of performance test used to validate and assess acceptability of the
operational limits of a system under varying workloads while the system-under-test
remains constant. In some variants, the workload remains constant and the configuration
of the system-under-test is varied. Measurements are usually taken based on the
workload throughput and in-line transaction response time. The variations in workload
will usually include emulation of average and peak workloads that will occur within
normal operational tolerances.
• Performance profile: A test in which the target-of-test's timing profile is monitored,
including execution flow, data access, and function and system calls, to identify and address
performance bottlenecks and inefficient processes.
Supportability
• Configuration test: Tests focused on ensuring the target-of-test functions as intended on
different hardware and / or software configurations. This test may also be implemented as
a system performance test.
• Installation test: Tests focused on ensuring the target-of-test installs as intended on
different hardware and / or software configurations and under different conditions (such
as insufficient disk space or power interrupt). This test is implemented and executed
against application(s) and systems.
Measures of Test
The key measures of a test include coverage and quality.
Test coverage is the measurement of testing completeness, expressed either as the coverage of test
requirements and test cases, or as the coverage of executed code.
Quality is a measure of the reliability, stability, and performance of the target-of-test (system or
application-under-test). Quality is based upon the evaluation of test results and the analysis of
change requests (defects) identified during testing.
Coverage Measures
Coverage metrics provide answers to the question "How complete is the testing?" The most
commonly used coverage measures are requirements-based and code-based test coverage. In brief,
test coverage is any measure of completeness with respect to either a requirement (requirements-
based) or the code's design / implementation criterion (code-based), such as the verification of use
cases (requirements-based) or execution of all lines of code (code-based).
Any systematic testing activity is based on at least one test coverage strategy. The coverage strategy
guides the design of test cases by stating the general purpose of the testing. The statement of a
coverage strategy can be as simple as "verify all performance requirements."
If the requirements are completely cataloged, a requirements­based coverage strategy may be
sufficient for yielding a quantifiable measure of testing completeness. For example, if all
performance test requirements have been identified, then the test results can be referenced to obtain
measures such as "75 percent of the performance test requirements have been verified."
If code-based coverage is applied, test strategies are formulated in terms of how much of the source
code has been executed by tests. This type of test coverage strategy is very important for
safety-critical systems.
Both measures can be derived manually (equations given below), or may be calculated by test
automation tools.
Requirements-based test coverage
Requirements-based test coverage is measured several times during the test life cycle and provides
the identification of the test coverage at a milestone in the testing life cycle (such as the planned,
implemented, executed, and successful test coverage).

Test coverage is calculated by the following equation:

Test Coverage = T(p,i,x,s) / RfT

where:
T is the number of Tests (planned, implemented, executed, or successful),
expressed as test procedures or test cases.
RfT is the total number of Requirements for Test.

Turning the above ratio into percentages allows the following statement of requirements-based test
coverage:
x% of test cases (T(p,i,x,s) in the above equations) have been covered with a success rate
of y%
This is a meaningful statement of test coverage that can be matched against defined success
criteria. If the criteria have not been met, then the statement provides a basis for predicting how
much testing effort remains.
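As a worked illustration, a trivial helper can turn the raw counts into the percentage statement above. All of the counts in this sketch are invented for the example:

// Hypothetical illustration of requirements-based coverage. The counts below
// are invented sample data, not taken from any real project.
public class CoverageExample {
    // Test Coverage = T(p,i,x,s) / RfT, expressed as a percentage
    static double coveragePercent(int tests, int requirementsForTest) {
        return 100.0 * tests / requirementsForTest;
    }

    public static void main(String[] args) {
        int executed = 30;    // T(x): test cases executed
        int successful = 27;  // T(s): test cases that passed
        int rft = 40;         // RfT: total Requirements for Test

        // Prints: "75.0% of test cases have been covered with a success rate of 90.0%"
        System.out.printf("%.1f%% of test cases have been covered with a success rate of %.1f%%%n",
                coveragePercent(executed, rft),
                100.0 * successful / executed);
    }
}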
Code-based test coverage
Code-based test coverage measures how much code has been executed during the test, compared to
how much code remains to be executed. Code coverage can be based either on control flows
(statements, branches, or paths) or on data flows. In control-flow coverage, the aim is to test lines of
code, branch conditions, paths through the code, or other elements of the software's flow of control.
In data-flow coverage, the aim is to test that data states remain valid through the operation of the
software, for example, that a data element is defined before it is used.

Code-based test coverage is calculated by the following equation:

Test Coverage = Ie / TIic
where:
Ie is the number of items executed expressed as code statements, code branches, code
paths, data state decision points, or data element names.
TIic is the total number of items in the code.
Turning this ratio into a percentage allows the following statement of code-based test coverage:
x% of the items in the code (Ie in the above equation) have been covered with a success rate of y%
This is a meaningful statement of test coverage that can be matched against defined success
criteria. If the criteria have not been met, then the statement provides a basis for predicting how
much testing effort remains.
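A small illustration of the Ie / TIic ratio, using branch items: the classify method below is hypothetical, and the two calls in main together execute both of its branches, giving 2 / 2 = 100% branch coverage.

// A branch-coverage illustration. classify() has two branches (TIic = 2 for
// branch items); the two calls below execute both of them (Ie = 2), so
// Test Coverage = Ie / TIic = 2 / 2 = 100% branch coverage.
public class BranchCoverageExample {
    static String classify(int balance) {
        if (balance < 0) {
            return "overdrawn";   // branch 1: taken when balance < 0
        }
        return "in credit";       // branch 2: taken otherwise
    }

    public static void main(String[] args) {
        System.out.println(classify(-5)); // exercises branch 1
        System.out.println(classify(5));  // exercises branch 2
    }
}

In practice, a coverage tool (JaCoCo is one such tool for Java) instruments the code and computes these ratios automatically rather than by hand.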
Quality Measures
While the evaluation of test coverage provides the measure of testing completion, an evaluation of
defects discovered during testing provides the best indication of software quality. Quality is the
indication of how well the software meets the requirements, so in this context, defects are identified
as a type of change request in which the target-of-test failed to meet the requirements.
Defect evaluation may be based on methods that range from simple defect counts to rigorous
statistical modeling.
Rigorous evaluation uses assumptions about the arrival or discovery rates of defects during the
testing process. A common model assumes that the rate follows a Poisson distribution. The actual
data about defect rates are then fit to the model. The resulting evaluation estimates the current
software reliability and predicts how the reliability will grow if testing and defect removal continue.
This evaluation is described as software­reliability growth modeling and is an area of active study.
Due to the lack of tool support for this type of evaluation, you should carefully balance the cost of
doing it with the value it adds.
Defect analysis means analyzing the distribution of defects over the values of one or more of the
parameters associated with a defect. Defect analysis provides an indication of the reliability of the
software.
For defect analysis, there are four main defect parameters commonly used:
• Status: the current state of the defect (open, being fixed, closed, etc.).
• Priority: the relative importance of this defect having to be addressed and resolved.
• Severity: the relative impact of this defect (the impact to the end-user, an organization,
third parties, etc.).
• Source: where and what is the originating fault that results in this defect, or what component
will be fixed to eliminate the defect.

Defect counts can be reported as a function of time, creating a Defect Trend diagram or report,
and defect counts can be reported as a function of one or more defect parameters, like severity or
status, in a Defect Density report. These types of analysis provide a perspective on the trend or
distribution of defects, respectively, revealing the software's reliability.
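As a minimal sketch of the tabulation behind a Defect Density report, confirmed defects can simply be counted as a function of one parameter, such as severity. The defect records here are invented sample data:

// A defect-density sketch: count defects as a function of one parameter
// (severity). The records are invented for the example.
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DefectDensityExample {
    record Defect(String id, String status, String severity) {}

    public static void main(String[] args) {
        List<Defect> defects = List.of(
                new Defect("D-1", "open",   "critical"),
                new Defect("D-2", "open",   "minor"),
                new Defect("D-3", "closed", "critical"),
                new Defect("D-4", "open",   "major"));

        Map<String, Long> density = new TreeMap<>();
        for (Defect d : defects) {
            density.merge(d.severity(), 1L, Long::sum);  // defect count per severity
        }
        System.out.println(density);  // prints {critical=2, major=1, minor=1}
    }
}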
For example, it is expected that defect discovery rates will eventually diminish as the testing and
fixing progresses. A threshold can be established below which the software can be deployed. Defect
counts can also be reported based on their origin in the implementation model, allowing detection of
"weak modules" or "hot spots": parts of the software that keep being fixed again and again, indicating
a more fundamental design flaw.
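A hedged sketch of the trend side, with invented weekly counts, shows how such a deployment threshold might be checked mechanically:

// A defect-trend sketch: weekly counts of newly discovered defects, compared
// against an agreed deployment threshold. All numbers are invented.
public class DefectTrendExample {
    public static void main(String[] args) {
        int[] newDefectsPerWeek = {14, 11, 9, 6, 3, 2};  // diminishing as fixing progresses
        int threshold = 4;                               // agreed deployment threshold

        for (int week = 0; week < newDefectsPerWeek.length; week++) {
            boolean belowThreshold = newDefectsPerWeek[week] < threshold;
            System.out.printf("week %d: %d new defects%s%n",
                    week + 1, newDefectsPerWeek[week],
                    belowThreshold ? " (below deployment threshold)" : "");
        }
    }
}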
Defects included in an analysis of this kind have to be confirmed defects. Not all reported defects
describe an actual flaw; some may be enhancement requests, out of the scope of the project, or
duplicates of an already reported defect. However, there is value in looking at and analyzing why
many of the reported defects are either duplicates or not confirmed defects.
Defect Reports
The Rational Unified Process® recommends defect evaluation based on three categories of reports:
• Defect distribution (density) reports allow defect counts to be shown as a function of one or
two defect parameters.
• Defect age reports are a special type of defect distribution report. Defect age reports show
how long a defect has been in a particular state, such as Open. In any age category, defects
can also be sorted by another attribute, like Owner.
• Defect trend reports show defect counts, by status (new, open, or closed), as a function of
time. The trend reports can be cumulative or non-cumulative.
• Test results and progress reports show the results of test procedure execution over a number
of iterations and test cycles for the application-under-test.

Many of these reports are valuable in assessing software quality. The usual test criteria include a
statement about the allowable numbers of open defects in particular categories, such as severity
class. This criterion is easily checked with a defect distribution evaluation. By filtering or sorting on
test requirements, this evaluation can be focused on different sets of requirements.
To be effective, producing reports of this kind normally requires tool support.