BCS501 Module 1 Notes
MODULE – 1
Unit I:
Software and Software Engineering: The Nature of Software, The Unique Nature of WebApps,
Software Engineering, The Software Process, Software Engineering Practice, Software Myths
Process Models: A Generic Process Model, Process Assessment and Improvement, Prescriptive
Process Models: Waterfall model, Incremental process models, Evolutionary process models,
Concurrent models, Specialized process models. Unified Process, Personal and Team process
models.
Who does it? Software engineers build and support software, and virtually everyone in the
industrialized world uses it either directly or indirectly.
Why is it important? Software is important because it affects nearly every aspect of our
lives and has become pervasive in our commerce, our culture, and our everyday activities.
Software engineering is important because it enables us to build complex systems in a timely
manner and with high quality.
What are the steps? You build computer software like you build any successful product,
by applying an agile, adaptable process that leads to a high-quality result that meets the
needs of the people who will use the product. You apply a software engineering approach.
What is the work product? From the point of view of a software engineer, the work product
is the set of programs, content (data), and other work products that are computer software.
But from the user’s viewpoint, the work product is the resultant information that somehow
makes the user’s world better.
Today, software takes on a dual role. It is a product, and at the same time, the vehicle
for delivering a product. As a product, it delivers the computing potential embodied by
computer hardware or more broadly, by a network of computers that are accessible by
local hardware.
As the vehicle used to deliver the product, software acts as the basis for the control of the
computer (operating systems), the communication of information (networks), and the
creation and control of other programs (software tools and environments).
The role of computer software has undergone significant change over the last half-
century. Dramatic improvements in hardware performance, profound changes in
computing architectures, vast increases in memory and storage capacity, and a wide
variety of exotic input and output options, have all precipitated more sophisticated and
complex computer-based systems. Sophistication and complexity can produce dazzling
results when a system succeeds, but they can also pose huge problems for those who
must build complex systems.
Software has characteristics that are considerably different than those of hardware:
1. Software is developed or engineered; it is not manufactured in the classical sense:
Although some similarities exist between software development and hardware
manufacture, the two activities are fundamentally different.
2. Software doesn’t “wear out”: Figure 1.1 depicts failure rate as a function of time
for hardware. The relationship, often called the “bathtub curve,” indicates that hardware
exhibits relatively high failure rates early in its life; defects are corrected and the failure
rate drops to a steady-state level (hopefully, quite low) for some period of time. As time
passes, however, the failure rate rises again as hardware components suffer from the
cumulative effects of dust, vibration, abuse, temperature extremes, and many other
environmental maladies.
Stated simply, the hardware begins to wear out. Software is not susceptible to the
environmental maladies that cause hardware to wear out. In theory, therefore, the failure
rate curve for software should take the form of the “idealized curve” shown in Figure 1.2.
Undiscovered defects will cause high failure rates early in the life of a program. However,
these are corrected and the curve flattens as shown. The idealized
curve is a gross oversimplification of actual failure models for software. However, the
implication is clear—software doesn’t wear out. But it does deteriorate.
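To make the contrast concrete, the short Python sketch below models the two curves numerically. The formulas and constants are illustrative assumptions chosen only to reproduce the qualitative shapes described above; they are not empirical failure models.

import math

def hardware_failure_rate(t: float) -> float:
    """Bathtub curve: high infant mortality, a flat mid-life, then wear-out."""
    infant_mortality = 5.0 * math.exp(-t / 2.0)  # early defects, corrected over time
    steady_state = 0.5                           # low constant failure rate
    wear_out = 0.01 * t ** 2                     # cumulative environmental damage
    return infant_mortality + steady_state + wear_out

def software_failure_rate(t: float) -> float:
    """Idealized curve: early defects are fixed, then the rate stays flat."""
    undiscovered_defects = 5.0 * math.exp(-t / 2.0)
    steady_state = 0.5                           # software does not wear out
    return undiscovered_defects + steady_state

for t in range(0, 21, 5):
    print(f"t={t:2d}  hardware={hardware_failure_rate(t):5.2f}  "
          f"software={software_failure_rate(t):5.2f}")

The hardware rate eventually climbs because of the wear-out term, while the software rate simply flattens once the early defects are removed.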
New Challenges
Open-world computing—the rapid growth of wireless networking may soon lead to true
pervasive, distributed computing. The challenge for software engineers will be to develop
systems and application software that will allow mobile devices, personal computers, and
enterprise systems to communicate across vast networks.
Net sourcing—the World Wide Web is rapidly becoming a computing engine as well as
a content provider. The challenge for software engineers is to architect simple and
sophisticated applications that provide a benefit to targeted end-user markets worldwide.
Open source—a growing trend that results in distribution of source code for systems
applications so that many people can contribute to its development.
Legacy systems often evolve for one or more of the following reasons:
• The software must be adapted to meet the needs of new computing environments or
technology.
• The software must be enhanced to implement new business requirements.
• The software must be extended to make it interoperable with other more modern
systems or databases.
• The software must be re-architected to make it viable within a network environment.
The Unique Nature of WebApps: The following attributes are encountered in the vast
majority of WebApps.
Network intensiveness. A WebApp resides on a network and must serve the needs of
a diverse community of clients. The network may enable worldwide access and
communication (i.e., the Internet) or more limited access and communication (e.g., a
corporate Intranet).
Concurrency. A large number of users may access the WebApp at one time. In many
cases, the patterns of usage among end users will vary greatly.
Unpredictable load. The number of users of the WebApp may vary by orders of
magnitude from day to day. One hundred users may show up on Monday; 10,000 may
use the system on Thursday.
Performance. If a WebApp user must wait too long, he or she may decide to go
elsewhere.
Availability. Although expectation of 100 percent availability is unreasonable, users of
popular WebApps often demand access on a 24/7/365 basis. Users in Australia or Asia
might demand access during times when traditional domestic software applications in
North America might be taken off-line for maintenance.
Data driven. The primary function of many WebApps is to use hypermedia to present
text, graphics, audio, and video content to the end user. In addition, WebApps are
commonly used to access information that exists on databases that are not an integral
part of the Web-based environment.
Content sensitive. The quality and aesthetic nature of content remains an important
determinant of the quality of a WebApp.
Continuous evolution. Unlike conventional application software that evolves over a
series of planned, chronologically spaced releases, Web applications evolve
continuously. It is not unusual for some WebApps (specifically, their content) to be
updated on a minute-by-minute schedule or for content to be independently computed for
each request.
Immediacy. Although immediacy—the compelling need to get software to market
quickly—is a characteristic of many application domains, WebApps often exhibit a time-
to-market that can be a matter of a few days or weeks.
Security. Because WebApps are available via network access, it is difficult, if not
impossible, to limit the population of end users who may access the application. In order
to protect sensitive content and provide secure modes of data transmission, strong
security measures must be implemented throughout the infrastructure that supports a
WebApp and within the application itself.
Aesthetics. An undeniable part of the appeal of a WebApp is its look and feel. When an
application has been designed to market or sell products or ideas, aesthetics may have
as much to do with success as technical design.
In order to build software that is ready to meet the challenges of the twenty-first century,
you must recognize a few simple realities:
• Software has become deeply embedded in virtually every aspect of our lives, and as a
consequence, the number of people who have an interest in the features and functions
provided by a specific application has grown dramatically.
• The information technology requirements demanded by individuals, businesses, and
governments grow increasingly complex with each passing year. The complexity of these
new computer-based systems and products demands careful attention to the interactions
of all system elements. It follows that design becomes a pivotal activity.
• Individuals, businesses, and governments increasingly rely on software for strategic and
tactical decision making as well as day-to-day operations and control. If the software
fails, people and major enterprises can experience anything from minor inconvenience to
catastrophic failures.
• As the perceived value of a specific application grows, the likelihood is that its user base
and longevity will also grow. As its user base and time-in-use increase, demands for
adaptation and enhancement will also grow. It follows that software should be
maintainable.
Any engineering approach (including software engineering) must rest on an
organizational commitment to quality. Total quality management, Six Sigma, and similar
philosophies foster a continuous process improvement culture, and it is this culture that
ultimately leads to the development of increasingly more effective approaches to software
engineering. The bedrock of software engineering is a quality focus.
The foundation for software engineering is the process layer. The software engineering
process is the glue that holds the technology layers together and enables rational and
timely development of computer software. Process defines a framework that must be
established for effective delivery of software engineering technology.
The software process forms the basis for management control of software projects and
establishes the context in which technical methods are applied, work products (models,
documents, data, reports, forms, etc.) are produced, milestones are established, quality
is ensured, and change is properly managed.
Software engineering methods provide the technical how-to’s for building software.
Methods encompass a broad array of tasks that include communication, requirements
analysis, design modeling, program construction, testing, and support. Software
engineering methods rely on a set of basic principles that govern each area of the
technology and include modeling activities and other descriptive techniques.
Software engineering tools provide automated or semiautomated support for the process
and the methods. When tools are integrated so that information created by one tool can
be used by another, a system for the support of software development, called computer-
aided software engineering, is established.
A process is not a rigid prescription; rather, it is an adaptable approach that lets you
choose the appropriate set
of work actions and tasks. The intent is always to deliver software in a timely manner and
with sufficient quality to satisfy those who have sponsored its creation and those who will
use it.
The five generic framework activities (communication, planning, modeling, construction,
and deployment) can be used during the development of small,
simple programs, the creation of large Web applications, and for the engineering of large,
complex computer-based systems. The details of the software process will be quite
different in each case, but the framework activities remain the same.
That is, communication, planning, modeling, construction, and deployment are applied
repeatedly through a number of project iterations. Each project iteration produces a
software increment that provides stakeholders with a subset of overall software features
and functionality.
Understand the problem. It’s worth spending a little time to understand the problem,
answering a few simple questions:
• Who has a stake in the solution to the problem? That is, who are the stakeholders?
• What are the unknowns? What data, functions, and features are required to properly
solve the problem?
• Can the problem be compartmentalized? Is it possible to represent smaller problems
that may be easier to understand?
• Can the problem be represented graphically? Can an analysis model be created?
Plan the solution. Now you understand the problem and you can’t wait to begin coding.
Before you do, slow down just a bit and do a little design:
• Have you seen similar problems before? Are there patterns that are recognizable in a
potential solution? Is there existing software that implements the data, functions, and
features that are required?
• Has a similar problem been solved? If so, are elements of the solution reusable?
• Can sub problems be defined? If so, are solutions readily apparent for the sub
problems?
• Can you represent a solution in a manner that leads to effective implementation? Can a
design model be created?
Carry out the plan. The design you’ve created serves as a road map for the system you
want to build. Unexpected detours may occur, and it’s possible that you’ll discover an
even better route as you go, but the plan allows you to proceed without getting lost:
• Does the solution conform to the plan? Is source code traceable to the design model?
• Is each component part of the solution provably correct? Have the design and code
been reviewed, or better, have correctness proofs been applied to the algorithm?
Examine the result. You can’t be sure that your solution is perfect, but you can be sure
that you’ve designed a sufficient number of tests to uncover as many errors as possible.
• Is it possible to test each component part of the solution (see the unit-test sketch after
this list)? Has a reasonable testing strategy been implemented?
• Does the solution produce results that conform to the data, functions, and features that
are required? Has the software been validated against all stakeholder requirements?
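The first question above can be illustrated with a minimal Python unit test. The component under test, word_count, is an invented stand-in, not a function from the text.

import unittest

def word_count(text: str) -> int:
    """Component under test: count whitespace-separated words."""
    return len(text.split())

class WordCountTests(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("software never wears out"), 4)

if __name__ == "__main__":
    unittest.main()

Each component gets its own small, checkable tests; together they form the testing strategy the question asks about.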
1.5.2 General Principles: David Hooker has proposed seven principles that focus on
software engineering practice as a whole. Two of them are reproduced below.
The Fourth Principle: What You Produce, Others Will Consume
Design, keeping the implementers in mind. Code with concern for those that must
maintain and extend the system. Someone may have to debug the code you write, and
that makes them a user of your code. Making their job easier adds value to the system.
The Fifth Principle: Be Open to the Future
A system with a long lifetime has more value. Never design yourself into a corner. Always
ask “what if,” and prepare for all possible answers by creating systems that solve the
general problem, not just the specific one. This could very possibly lead to the reuse of
an entire system.
Management myths. Managers with software responsibility are often under pressure to
maintain budgets, keep schedules from slipping, and improve quality. One representative
myth:
Myth: If I decide to outsource the software project to a third party, I can just relax and let
that firm build it.
Reality: If an organization does not understand how to manage and control software
projects internally, it will invariably struggle when it outsources software projects.
Customer myths. A customer who requests computer software may be a person at the
next desk, a technical group down the hall, the marketing/sales department, or an
outside company that has requested software under contract. In many cases, the
customer believes myths about software because software managers and practitioners
do little to correct misinformation. Myths lead to false expectations and, ultimately,
dissatisfaction with the developer.
Myth: A general statement of objectives is sufficient to begin writing programs—we can
fill in the details later.
Reality: Although a comprehensive and stable statement of requirements is not always
possible, an ambiguous “statement of objectives” is a recipe for disaster. Unambiguous
requirements are developed only through effective and continuous communication
between customer and developer.
Myth: Software requirements continually change, but change can be easily
accommodated because software is flexible.
Reality: It is true that software requirements change, but the impact of change varies with
the time at which it is introduced. When requirements changes are requested early the
cost impact is relatively small. However, as time passes, the cost impact grows
rapidly—resources have been committed, a design framework has been established, and
change can cause upheaval that requires additional resources and major design
modification.
Practitioner’s myths. Myths that are still believed by software practitioners have been
fostered by over 50 years of programming culture. During the early days, programming
was viewed as an art form. Old ways and attitudes die hard.
Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that “the sooner you begin ‘writing code,’ the longer it’ll take
you to get done.” Industry data indicate that between 60 and 80 percent of all effort
expended on software will be expended after it is delivered to the customer for the first
time.
Myth: Until I get the program “running” I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied
from the inception of a project—the technical review. Software reviews are a “quality filter” that
have been found to be more effective than testing for finding certain classes of software
defects.
Myth: The only deliverable work product for a successful project is the working program.
Reality: A working program is only one part of a software configuration that includes many
elements. A variety of work products provide a foundation for successful engineering and,
more important, guidance for software support.
Myth: Software engineering will make us create voluminous and unnecessary
documentation and will invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating a
quality product. Better quality leads to reduced rework. And reduced rework results in
faster delivery times.
PROCESS MODELS
What is it? When you work to build a product or system, it’s important to go through
a series of predictable steps—a road map that helps you create a timely, high-quality
result. The road map that you follow is called a “software process.”
Who does it? Software engineers and their managers adapt the process to their needs
and then follow it. In addition, the people who have requested the software have a role
to play in the process of defining, building, and testing it.
Why is it important? Because it provides stability, control, and organization to an
activity that can, if left uncontrolled, become quite chaotic. However, a modern software
engineering approach must be “agile.” It must demand only those activities, controls,
and work products that are appropriate for the project team and the product that is to
be produced.
What are the steps? At a detailed level, the process that you adopt depends on the
software that you’re building. One process might be appropriate for creating software
for an aircraft avionics system, while an entirely different process would be indicated
for the creation of a website.
What is the work product? From the point of view of a software engineer, the work
products are the programs, documents, and data that are produced as a consequence
of the activities and tasks defined by the process.
A generic process framework for software engineering defines five framework activities—
communication, planning, modeling, construction, and deployment. In addition, a set of
umbrella activities—project tracking and control, risk management, quality assurance,
configuration management, technical reviews, and others—are applied throughout the
process.
Process flow—describes how the framework activities and the actions and tasks that
occur within each framework activity are organized with respect to sequence and time
and is illustrated in Figure 2.2.
A linear process flow executes each of the five framework activities in sequence,
beginning with communication and culminating with deployment (Figure 2.2a). An
iterative process flow repeats one or more of the activities before proceeding to the next
(Figure 2.2b). An evolutionary process flow executes the activities in a “circular” manner.
Each circuit through the five activities leads to a more complete version of the software
(Figure 2.2c). A parallel process flow (Figure 2.2d) executes one or more activities in
parallel with other activities.
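As an illustration only (this data representation is an assumption, not part of the process framework itself), three of the flows can be sketched in Python as different orderings of the same five activities.

FRAMEWORK_ACTIVITIES = [
    "communication", "planning", "modeling", "construction", "deployment",
]

def linear_flow():
    """Execute each activity once, in sequence (Figure 2.2a)."""
    for activity in FRAMEWORK_ACTIVITIES:
        yield activity

def iterative_flow(repeat):
    """Repeat selected activities before proceeding to the next (Figure 2.2b)."""
    for activity in FRAMEWORK_ACTIVITIES:
        for _ in range(repeat.get(activity, 1)):
            yield activity

def evolutionary_flow(circuits):
    """Traverse the activities in a circle; each circuit yields a more
    complete version of the software (Figure 2.2c)."""
    for n in range(1, circuits + 1):
        for activity in FRAMEWORK_ACTIVITIES:
            yield f"{activity} (increment {n})"

print(list(linear_flow()))
print(list(iterative_flow({"modeling": 2})))
print(list(evolutionary_flow(2)))

Running the sketch prints the three sequences, showing that the flows differ only in how the same five activities are ordered and repeated; a parallel flow would additionally overlap activities in time.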
1.7.1 Defining a Framework Activity: A software team would need significantly more
information before it could properly execute any one of these five activities
(Communication, Planning, Modeling, Construction, Deployment) as part of the software
process. For a small software project requested by one person (at a remote location) with
simple, straightforward requirements, the communication activity might encompass little
more than a phone call with the appropriate stakeholder. Therefore, the only necessary
action is a phone conversation, and the work tasks (the task set) that this action
encompasses are:
1. Make contact with stakeholder via telephone.
2. Discuss requirements and take notes.
3. Organize notes into a brief written statement of requirements.
4. E-mail to stakeholder for review and approval.
If the project were considerably more complex, with many stakeholders, each with
a different set of requirements, the communication activity might have six distinct actions:
inception, elicitation, elaboration, negotiation, specification, and validation. Each
of these software engineering actions would have many work tasks and a number of
distinct work products.
1.7.2 Identifying a Task Set: Each software engineering action can be represented by a
number of different task sets—each a collection of software engineering work tasks,
related work products, quality assurance points, and project milestones. You should
choose a task set that best accommodates the needs of the project and the
characteristics of your team.
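A minimal sketch of this idea follows. The simple task set paraphrases the four work tasks listed in Section 1.7.1, and the complex task set reuses the six actions named there; treating task sets as data, and the one-line selection rule, are assumptions made for illustration.

TASK_SETS = {
    "communication:simple": [
        "Make contact with stakeholder via telephone",
        "Discuss requirements and take notes",
        "Organize notes into a brief written statement of requirements",
        "E-mail to stakeholder for review and approval",
    ],
    "communication:complex": [
        "Inception", "Elicitation", "Elaboration",
        "Negotiation", "Specification", "Validation",
    ],
}

def choose_task_set(stakeholders: int) -> list:
    """Pick the task set that best fits the project's characteristics."""
    key = "communication:simple" if stakeholders <= 1 else "communication:complex"
    return TASK_SETS[key]

print(choose_task_set(stakeholders=1))   # small, one-person project
print(choose_task_set(stakeholders=12))  # complex, multi-stakeholder project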
1.9 Prescriptive Process Models: All software process models can accommodate the
generic framework activities, but each
applies a different emphasis to these activities and defines a process flow that invokes
each framework activity in a different manner.
1.9.1 The Waterfall Model: There are times when the requirements for a problem are
well understood—when work flows from communication through deployment in a
reasonably linear fashion. The waterfall model, sometimes called the classic life cycle,
suggests a systematic, sequential approach to software development that begins with
customer specification of requirements and progresses through planning, modeling,
construction, and deployment, culminating in ongoing support of the completed software
(Figure 2.3).
A variation in the representation of the waterfall model, called the V-model (Figure 2.4),
depicts the relationship of quality assurance actions to the actions associated with
communication, modeling, and early construction
activities. As a software team moves down the left side of the V, basic problem
requirements are refined into progressively more detailed and technical representations
of the problem and its solution. Once code has been generated, the team moves up the
right side of the V, essentially performing a series of tests (quality assurance actions) that
validate each of the models created as the team moved down the left side. In reality,
there is no fundamental difference between the classic life cycle and the V-model. The
V-model provides a way of visualizing how verification and validation actions are applied
to earlier engineering work.
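One conventional pairing of left-side representations with right-side tests is sketched below; exact names vary from author to author, so treat this mapping as a common presentation rather than a definition from the text.

# A common V-model mapping (presentation varies by author).
V_MODEL = [
    ("requirements modeling", "acceptance testing"),
    ("architectural design", "system testing"),
    ("component design", "integration testing"),
    ("code generation", "unit testing"),
]

for left_side, right_side in V_MODEL:
    print(f"{left_side:22} <-> validated by {right_side}")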
The waterfall model is the oldest paradigm for software engineering. The problems that
are sometimes encountered when the waterfall model is applied are:
1. Real projects rarely follow the sequential flow that the model proposes. Although the
linear model can accommodate iteration, it does so indirectly. As a result, changes can
cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The waterfall
model requires this and has difficulty accommodating the natural uncertainty that exists
at the beginning of many projects.
3. The customer must have patience. A working version of the program(s) will not be
available until late in the project time span. A major blunder, if undetected until the working
program is reviewed, can be disastrous.
There are many situations in which initial software requirements are reasonably well
defined, but the overall scope of the development effort precludes a purely linear process.
In addition, there may be a compelling need to provide a limited set of software
functionality to users quickly and then refine and expand on that functionality in later
software releases. In such cases, you can choose a process model that is designed to
produce the software in increments.
The incremental model combines elements of linear and parallel process flows.
Referring to Figure 2.5, the incremental model applies linear sequences in a staggered
fashion as calendar time progresses. Each linear sequence produces deliverable
“increments” of the software in a manner that is similar to the increments produced by
an evolutionary process flow.
When an incremental model is used, the first increment is often a core product. That is,
basic requirements are addressed but many supplementary features remain undelivered.
The core product is used by the customer. As a result of use and/or evaluation, a plan is
developed for the next increment. The plan addresses the modification of the core
product to better meet the needs of the customer and the
delivery of additional features and functionality. This process is repeated following the
delivery of each increment, until the complete product is produced.
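The sketch below plans increments for the classic word-processor example (basic functions first, advanced page layout last). Representing the plan as Python data is purely illustrative.

features = [
    ("basic file management and editing", 1),  # the core product
    ("more sophisticated editing and document production", 2),
    ("spelling and grammar checking", 3),
    ("advanced page layout", 4),
]

def plan_increments(feature_list):
    """Group features by the increment that will deliver them."""
    plan = {}
    for name, increment in feature_list:
        plan.setdefault(increment, []).append(name)
    return plan

for increment, delivered in sorted(plan_increments(features).items()):
    label = "core product" if increment == 1 else f"increment {increment}"
    print(f"{label}: {', '.join(delivered)}")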
The incremental process model focuses on the delivery of an operational product with
each increment. Early increments are stripped-down versions of the final product, but they
do provide capability that serves the user and also provide a platform for evaluation by the
user. Incremental development is particularly useful when staffing is unavailable for a
complete implementation by the business deadline that has been established for the
project.
Software, like all complex systems, evolves over a period of time. Business and product
requirements often change as development proceeds, making a straight line path to an
end product unrealistic; tight market deadlines make completion of a comprehensive
software product impossible, but a limited version must be introduced to meet
competitive or business pressure.
Evolutionary models are iterative. They are characterized in a manner that enables you
to develop increasingly more complete versions of the software. The two common
evolutionary process models are presented here.
Prototyping: Often, a customer defines a set of general objectives for software, but does
not identify detailed requirements for functions and features. In other cases, the developer
may be unsure of the efficiency of an algorithm, the adaptability of an operating system,
or the form that human-machine interaction should take. In these, and many other
situations, a prototyping paradigm may offer the best approach.
Regardless of the manner in which it is applied, the prototyping paradigm assists you and
other stakeholders to better understand what is to be built when requirements are fuzzy.
The prototyping paradigm (Figure 2.6) begins with communication. You meet with other
stakeholders to define the overall objectives for the software, identify whatever
requirements are known, and outline areas where further definition is mandatory. A
Prototyping iteration is planned quickly, and modeling occurs. A quick design focuses
on a representation of those aspects of the software that will be visible to end users.
The quick design leads to the construction of a prototype. The prototype is deployed and
evaluated by stakeholders, who provide feedback that is used to further refine
requirements. Iteration occurs as the prototype is tuned to satisfy the needs of various
stakeholders, while at the same time enabling you to better understand what needs to
be done.
The Spiral Model. Originally proposed by Barry Boehm, the spiral model is an
evolutionary software process model that couples the iterative nature of prototyping with
the controlled and systematic aspects of the waterfall model. Boehm describes the model
in the following manner:
The spiral development model is a risk-driven process model generator that is used to
guide multi-stakeholder concurrent engineering of software intensive systems. It has
two main distinguishing features. One is a cyclic approach for incrementally growing a
system’s degree of definition and implementation while decreasing its degree of risk. The
other is a set of anchor point milestones for ensuring stakeholder commitment to feasible
and mutually satisfactory system solutions.
Using the spiral model, software is developed in a series of evolutionary releases. During
early iterations, the release might be a model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced.
A spiral model is divided into a set of framework activities defined by the software
engineering team. Each of the framework activities represents one segment of the spiral
path illustrated in Figure 2.7.
As this evolutionary process begins, the software team performs activities that are implied
by a circuit around the spiral in a clockwise direction, beginning at the center. Risk is
considered as each revolution is made. Anchor point milestones—a combination of work
products and conditions that are attained along the path of the spiral—are noted for each
evolutionary pass.
The first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral might be used to develop a prototype
and then progressively more sophisticated versions of the software. Each pass through
the planning region results in adjustments to the project plan. Cost and schedule are
adjusted based on feedback derived from the customer after delivery. In addition, the
project manager adjusts the planned number of iterations required to complete the
software.
Unlike other process models that end when software is delivered, the spiral model can be
adapted to apply throughout the life of the computer software. The spiral model is a
realistic approach to the development of large-scale systems and software. Because
software evolves as the process progresses, the developer and customer better
understand and react to risks at each evolutionary level.
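A toy Python sketch of the risk-driven loop appears below. The release names, risk numbers, and the rule that risk shrinks on every circuit are all invented assumptions; Boehm's model prescribes explicit risk analysis at each revolution, not this arithmetic.

RELEASES = ["product specification", "prototype", "version 1.0", "version 2.0"]

def spiral(circuits: int, acceptable_risk: float, initial_risk: float):
    risk = initial_risk
    for n in range(min(circuits, len(RELEASES))):
        # Risk is considered as each revolution is made; anchor point
        # milestones gate whether the project continues.
        if risk > acceptable_risk:
            print(f"circuit {n + 1}: risk {risk:.2f} too high; re-plan or stop")
            return
        print(f"circuit {n + 1}: deliver {RELEASES[n]} (risk {risk:.2f})")
        risk *= 0.6  # each pass decreases the system's degree of risk

spiral(circuits=4, acceptable_risk=0.8, initial_risk=0.7)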
Concurrent Models: Figure 2.8 provides a schematic representation of one software
engineering activity within the modeling activity using a concurrent modeling approach.
The activity—modeling—may be in any one of the states noted at any given time. Similarly,
other activities, actions, or tasks (e.g., communication or construction) can be
represented in an analogous manner. All software engineering activities exist
concurrently but reside in different states.
Concurrent modeling defines a series of events that will trigger transitions from state to
state for each of the software engineering activities, actions, or tasks. For example, during
early stages of design, an inconsistency in the requirements model is uncovered. This
generates the event analysis model correction, which will trigger the requirements
analysis action from the done state into the awaiting changes state. Concurrent modeling
is applicable to all types of software development and provides an accurate picture of the
current state of a project.
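The event-and-state mechanics described above map naturally onto a small state machine. In the sketch below the state and event names follow the text, but the transition table itself is an illustrative assumption.

TRANSITIONS = {
    # (activity, current state, event) -> next state
    ("requirements analysis", "done", "analysis model correction"): "awaiting changes",
    ("requirements analysis", "awaiting changes", "changes applied"): "done",
}

state = {"requirements analysis": "done"}

def fire(activity: str, event: str) -> None:
    """Apply an event to an activity, moving it to its next state if defined."""
    key = (activity, state[activity], event)
    if key in TRANSITIONS:
        state[activity] = TRANSITIONS[key]
        print(f"{activity}: '{event}' -> {state[activity]}")

# During early design, an inconsistency in the requirements model is uncovered:
fire("requirements analysis", "analysis model correction")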
1.10.2 The Formal Methods Model: The formal methods model encompasses a set of
activities that leads to formal mathematical specification of computer software. Formal
methods enable you to specify, develop, and verify a computer-based system by applying
a rigorous, mathematical notation. A variation on this approach, called cleanroom
software engineering, is currently applied by some software development organizations.
When formal methods are used during design, they serve as a basis for program
verification and therefore enable you to discover and correct errors that might otherwise
go undetected.
Although not a mainstream approach, the formal methods model offers the promise of
defect-free software. Yet, concern about its applicability in a business environment has
been voiced:
• The development of formal models is currently quite time consuming and
expensive.
• Because few software developers have the necessary background to apply
formal methods, extensive training is required.
• It is difficult to use the models as a communication mechanism for technically
unsophisticated customers.
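True formal methods rely on rigorous mathematical notations such as Z or B, but the flavor of a precondition/postcondition specification can be hinted at with run-time assertions. The withdraw function and its conditions below are invented for illustration and are not a real formal specification.

def withdraw(balance: int, amount: int) -> int:
    # Precondition: the specification defines withdraw only for these inputs.
    assert 0 < amount <= balance, "precondition violated"
    new_balance = balance - amount
    # Postcondition: a property of the result that verification would prove.
    assert new_balance == balance - amount and new_balance >= 0
    return new_balance

print(withdraw(balance=100, amount=30))  # 70

A genuine formal method would prove the postcondition from the precondition mathematically, for all inputs, rather than checking it at run time.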
When concerns cut across multiple system functions, features, and information, they are
often referred to as crosscutting concerns. Aspectual requirements define those
crosscutting concerns that have an impact across the software architecture. Aspect-
oriented software development (AOSD), often referred to as aspect-oriented
programming (AOP), is a relatively new software engineering paradigm that provides a
process and methodological approach for defining, specifying, designing, and
constructing aspects.
1.11.1 Introduction: The Unified Process is an attempt to draw on the best features and
characteristics of traditional software process models, but characterize them in a way that
implements many of the best principles of agile software development.
The UP recognizes the important role of software architecture and “helps the architect
focus on the right goals, such as understandability, reliance to future changes, and reuse”.
1.11.2 Phases of the Unified Process: Figure 2.9 depicts the “phases” of the Unified
Process and relates them to the five basic generic framework activities.
The inception phase of the UP encompasses both customer communication and planning
activities. The elaboration phase encompasses the communication and modeling
activities of the generic process model. The construction phase of the UP is identical to
the construction activity defined for the generic software process. The transition phase of
the UP encompasses the latter stages of the generic construction activity and the first
part of the generic deployment (delivery and feedback) activity. Software is given to end
users for beta testing and user feedback reports both defects and necessary changes. In
addition, the software team creates the necessary support information (e.g., user
manuals, troubleshooting guides, installation procedures) that is required for the release.
The production phase of the UP coincides with the deployment activity of the generic
process. During this phase, the ongoing use of the software is monitored, support for the
operating environment (infrastructure) is provided, and defect reports and requests for
changes are submitted and evaluated. A software engineering workflow is distributed
across all UP phases.
In an ideal setting, you would create a process that best fits your needs, and at the same
time, meets the broader needs of the team and the organization. Alternatively, the team
itself can create its own process, and at the same time meet the narrower needs of
individuals and the broader needs of the organization.
1.12.2 Personal Software Process (PSP): Every developer uses some process to build
computer software. The Personal Software Process (PSP) emphasizes personal
measurement of both the work product that is produced and the resultant quality of the
work product. In addition, PSP makes the practitioner responsible for project planning
(e.g., estimating and scheduling) and empowers the practitioner to control the quality of
all software work products that are developed. The PSP model defines five framework
activities:
Planning. This activity isolates requirements and develops both size and resource
estimates. In addition, a defect estimate is made. All metrics are recorded on worksheets
or templates. Finally, development tasks are identified and a project schedule is created.
High-level design. External specifications for each component to be constructed are
developed and a component design is created. Prototypes are built when uncertainty
exists. All issues are recorded and tracked.
High-level design review. Formal verification methods are applied to uncover errors in
the design. Metrics are maintained for all important tasks and work results.
Development. The component-level design is refined and reviewed. Code is generated,
reviewed, compiled, and tested. Metrics are maintained for all important tasks and work
results.
Postmortem. Using the measures and metrics collected, the effectiveness of the process
is determined. Measures and metrics should provide guidance for modifying the process
to improve its effectiveness.
PSP emphasizes the need to record and analyze the types of errors you make, so that
you can develop strategies to eliminate them.
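A hypothetical personal defect log, kept in the spirit of PSP measurement, might look like the Python sketch below. The defect types and fix times are invented placeholders; actual PSP work records such data on standard forms and worksheets.

from collections import Counter

defect_log = [
    {"phase": "code", "type": "syntax", "fix_minutes": 2},
    {"phase": "code", "type": "interface", "fix_minutes": 15},
    {"phase": "design", "type": "logic", "fix_minutes": 40},
    {"phase": "code", "type": "syntax", "fix_minutes": 3},
]

# Analyze the types of errors you make, to target process improvements.
by_type = Counter(entry["type"] for entry in defect_log)
for defect_type, count in by_type.most_common():
    minutes = sum(e["fix_minutes"] for e in defect_log if e["type"] == defect_type)
    print(f"{defect_type}: {count} defects, {minutes} min spent fixing")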
1.12.3 Team Software Process (TSP): Because many industry-grade software projects
are addressed by a team of practitioners, Watts Humphrey extended the lessons learned
from the introduction of PSP and proposed the Team Software Process (TSP). The goal
of TSP is to build a “self-directed” project team that organizes itself to produce
high-quality software.
Humphrey defines the following objectives for TSP:
• Build self-directed teams that plan and track their work, establish goals, and own their
processes and plans. These can be pure software teams or integrated product teams
(IPTs) of 3 to about 20 engineers.
• Show managers how to coach and motivate their teams and how to help them sustain
peak performance.
• Accelerate software process improvement by making CMM Level 5 behavior normal and
expected.
A self-directed team has a consistent understanding of its overall goals and objectives;
defines roles and responsibilities for each team member; tracks quantitative project data;
identifies a team process that is appropriate for the project and a strategy for
implementing the process; defines local standards that are applicable to the team’s
software engineering work; continually assesses risk and reacts to it; and tracks,
manages, and reports project status.
TSP defines the following framework activities: project launch, high-level design,
implementation, integration and test, and postmortem.
TSP makes use of a wide variety of scripts, forms, and standards that serve to guide team
members in their work. TSP recognizes that the best software teams are self-directed.
Team members set project objectives, adapt the process to meet their needs, control the
project schedule, and through measurement and analysis of the metrics collected, work
continually to improve the team’s approach to software engineering.