Software Testing & SCM Guide
Testability ■ Operability—it operates cleanly ■ Observability—the results of each test case are readily observed ■ Controllability—the degree to which testing can be automated
and optimized ■ Decomposability—testing can be targeted ■ Simplicity—reduce complex architecture and logic to simplify tests ■ Stability—few changes are requested during
testing ■ Understandability—of the design
What is a “Good” Test? ■ A good test has a high probability of finding an error ■ A good test is not redundant. ■ A good test should be “best of breed” ■ A good test should be
neither too simple nor too complex.
Internal and External Views ■ Any engineered product (and most other things) can be tested in one of two ways: ■ Knowing the specified function that a product has been
designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function; ■ Knowing the
internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is, internal operations are performed according to specifications and all internal
components have been adequately exercised.
White-Box Testing ... our goal is to ensure that all statements and conditions have been executed at least once
Deriving Test Cases ■ Summarizing: ■ Using the design or code as a foundation, draw a corresponding flow graph. ■
Determine the cyclomatic complexity of the resultant flow graph. ■ Determine a basis set of linearly independent
paths. ■ Prepare test cases that will force execution of each path in the basis set.
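The steps above can be sketched in code; a minimal sketch, assuming the flow graph is stored as an adjacency list and using the standard formula V(G) = E - N + 2 for a single connected flow graph:

```python
# Cyclomatic complexity of a flow graph: V(G) = E - N + 2.
# The graph below is a hypothetical module with one if/else.
flow_graph = {
    1: [2, 3],   # node 1 is the predicate (branches)
    2: [4],      # 'then' branch
    3: [4],      # 'else' branch
    4: [],       # join / exit node
}

def cyclomatic_complexity(graph):
    nodes = len(graph)
    edges = sum(len(succ) for succ in graph.values())
    return edges - nodes + 2

v = cyclomatic_complexity(flow_graph)
print(v)  # V(G) = 2, so a basis set needs 2 linearly independent paths
```

With V(G) = 2, two test cases suffice here: one forcing the path 1-2-4 and one forcing 1-3-4.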
Graph Matrices ■ A graph matrix is a square matrix whose size (i.e., number of rows and columns) is equal to the
number of nodes on a flow graph ■ Each row and column corresponds to an identified node, and matrix entries
correspond to connections (an edge) between nodes. ■ By adding a link weight to each matrix entry, the graph
matrix can become a powerful tool for evaluating program control structure during testing
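As an illustration, the connection-matrix idea can be sketched as follows (the 4-node graph is hypothetical; a link weight of 1 simply records that an edge exists):

```python
# Graph matrix (connection matrix) for a hypothetical 4-node flow graph.
# Entry [i][j] = 1 means there is an edge from node i+1 to node j+1.
matrix = [[0, 1, 1, 0],   # node 1 -> nodes 2 and 3
          [0, 0, 0, 1],   # node 2 -> node 4
          [0, 0, 0, 1],   # node 3 -> node 4
          [0, 0, 0, 0]]   # node 4 is the exit node

# Connection-matrix rule: sum (connections - 1) over each row that has
# at least one entry, then add 1 -> cyclomatic complexity.
connections = [sum(row) for row in matrix]
v = sum(c - 1 for c in connections if c > 0) + 1
print(v)  # 2, matching V(G) = E - N + 2 for the same graph
```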
Control Structure Testing ■ Condition testing — a test case design method that exercises the logical conditions
contained in a program module ■ Data flow testing — selects test paths of a program according to the locations of
definitions and uses of variables in the program
Data Flow Testing ■ The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program. ■ Assume that each statement in a program is assigned a unique
statement number and that each function does not modify its parameters or global variables. For a statement with S as its
statement number • DEF(S) = {X | statement S contains a definition of X} • USE(S) = {X | statement S contains a use of X} ■
A definition-use (DU) chain of variable X is of the form [X,S, S'], where S and S' are statement numbers, X is in DEF(S) and
USE(S'), and the definition of X in statement S is live at statement S'
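A small sketch of these definitions, using a hypothetical three-statement fragment; the DU-chain search below assumes a single straight-line path, so "live at S'" reduces to "not redefined between S and S'":

```python
# DEF/USE sets for a hypothetical straight-line fragment:
#   S1: x = input()
#   S2: y = x + 1
#   S3: print(y)
DEF = {1: {"x"}, 2: {"y"}, 3: set()}
USE = {1: set(), 2: {"x"}, 3: {"y"}}

def du_chains(DEF, USE):
    """Return [var, S, S'] triples where the definition of var at S
    reaches a use at S' with no intervening redefinition."""
    chains = []
    for s in sorted(DEF):
        for var in DEF[s]:
            for s2 in sorted(USE):
                if s2 <= s or var not in USE[s2]:
                    continue
                # the definition is killed if var is redefined
                # strictly between s and s2
                killed = any(var in DEF[k] for k in range(s + 1, s2))
                if not killed:
                    chains.append([var, s, s2])
    return chains

print(du_chains(DEF, USE))  # [['x', 1, 2], ['y', 2, 3]]
```

A data-flow testing strategy would then require each DU chain to be covered at least once.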
Black-Box Testing ■ How is functional validity tested? ■ How is system behavior and performance tested? ■ What classes
of input will make good test cases? ■ Is the system particularly sensitive to certain input values? ■ How are the
boundaries of a data class isolated? ■ What data rates and data volume can the system tolerate? ■ What effect will
specific combinations of data have on system operation?
The “First Law” : No matter where you are in the system life cycle, the system will change, and the desire to change it will persist throughout the life cycle.
SCM: ■ Software configuration management (SCM), also called change management ■ Is a set of activities designed to
manage change.
The SCM Process: ■ How does a software team identify the discrete elements of a software configuration? ■ How does an
organization manage the many existing versions of a program (and its documentation) in a manner that will enable change
to be accommodated efficiently? ■ How does an organization control changes before and after software is released to a
customer? ■ Who has responsibility for approving and ranking changes? ■ How can we ensure that changes have been
made properly? ■ What mechanism is used to apprise others of changes that are made?
Software configuration item ■ an SCI is all or part of a work product (e.g., a document, an entire suite of test cases, a named
program component, a multimedia content asset, or a software tool) ■ In reality,
SCIs are organized to form configuration objects that may be cataloged in the
project database with a single name.
Baselines ■ The IEEE defines a baseline as: • A specification or product that has
been formally reviewed and agreed upon, that thereafter serves as the basis for
further development, and that can be changed only through formal change
control procedures. ■ a baseline is a milestone in the development of software
that is marked by the delivery of one or
more software configuration items and
the approval of these SCIs that is obtained
through a formal technical review
SCM Repository ■ The SCM repository is the
set of mechanisms and data structures
that allow a software team to manage
change in an effective manner ■ The
repository performs or precipitates the
following functions: ■ Data integrity ■
Information sharing ■ Tool integration ■
Data integration ■ Methodology
enforcement ■ Document standardization
Repository Features ■ Versioning. ■ saves all of
these versions to enable effective management of
product releases and to permit developers to go back
to previous versions ■ Dependency tracking and
change management. ■ The repository manages a
wide variety of relationships among the data
elements stored in it. ■ Requirements tracing. ■
Provides the ability to track all the design and
construction components and deliverables that result
from a specific requirement specification ■
Configuration management. ■ Keeps track of a series
of configurations representing specific project
milestones or production releases. Version
management provides the needed versions, and link
management keeps track of interdependencies. ■
Audit trails. ■ establishes additional information about when, why, and
by whom changes are made.
Continuous Integration ■ Best practices for SCM include: ■ Keep the number of code variants small. ■ Test early and
often. ■ Integrate early and often. ■ Use tools to automate testing, building, and code integration. ■ Continuous integration
(CI) is important to agile developers following the DevOps workflow. CI also adds value to SCM by ensuring that each
change is promptly integrated into the project source code, compiled, and tested automatically.
Continuous Integration ■ CI offers development teams several concrete advantages: ■ Accelerated feedback. Notifying
developers immediately when integration fails allows fixes to be made while the number of performed changes is small. ■
Increased quality. Building and integrating software whenever necessary provides confidence in the quality
of the developed product. ■ Reduced risk. Integrating components early avoids risking a long integration
phase because design failures are discovered and fixed early. ■ Improved reporting. Providing additional
information (e.g., code analysis metrics) allows for more accurate configuration status accounting.
WebApp software change ■ Class 1. A content or function change that corrects an error or enhances local
content or functionality. ■ Class 2. A content or function change that has an impact on other content objects
or functional components. ■ Class 3. A content or function change that has a broad impact across an app (e.g.,
major extension of functionality, significant enhancement or reduction in content, major required changes in
navigation). ■ Class 4. A major design change (e.g., a change in interface design or navigation approach) that
will be immediately noticeable to one or more categories of user.
Estimation ◼ Estimation of resources, cost, and schedule for a software engineering effort requires ◼
experience ◼ access to good historical information (metrics) ◼ the courage to commit to quantitative
predictions when qualitative information is all that exists ◼ Estimation carries inherent risk and this risk leads
to uncertainty.
Software scope describes ◼ the functions and features that are to be delivered to end-users ◼ the data that
are input and output ◼ the “content” that is presented to users as a consequence of using the software ◼ the
performance, constraints, interfaces, and reliability that bound the system. ◼ Scope is defined using one of
two techniques: • A narrative description of software scope is developed after communication with all
stakeholders. • A set of use-cases is developed by end-users.
Project Estimation ◼ Project scope must be understood ◼ Elaboration (decomposition) is necessary ◼ Historical metrics are very helpful ◼ At least two different techniques
should be used ◼ Uncertainty is inherent in the process
Estimation Techniques ◼ Past (similar) project experience ◼ Conventional estimation techniques ◼ task breakdown and effort estimates ◼ size (e.g., FP) estimates ◼ Empirical
models ◼ Automated tools
Estimation Accuracy Predicated on … ◼ the degree to which the planner has properly estimated the size of the product to be built ◼ the ability to translate the size estimate
into human effort, calendar time, and dollars (a function of the availability of reliable software metrics from past projects) ◼ the degree to which the project plan reflects the
abilities of the software team ◼ the stability of product requirements and the environment that supports the software engineering effort.
LOC/KLOC ◼ LOC: lines of code ◼ KLOC: kilo lines of code, or (lines of code) / 1000 ◼ Still regarded as most accurate way to measure labor costs ◼ What are some uncertainties
about measuring LOC? ◼ Should comment lines count? Or blank lines for formatting? ◼ How do we compare lines of assembly language vs. high-level language like C++ or Java?
◼ How do you know how many LOC the system will contain when it’s not implemented or even designed yet? ◼ How do you account for reuse of code?
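To make the counting-rule questions concrete, here is a minimal, assumption-laden LOC counter that excludes blank and comment-only lines (Python-style `#` comments; a different rule set would yield a different count):

```python
# A naive LOC counter: blank lines and comment-only lines are excluded.
# The rule chosen directly changes the measured "size" of the product.
def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """
# compute a total
total = 0
for n in [1, 2, 3]:
    total += n          # inline code still counts

print(total)
"""
print(count_loc(sample))  # 4 -- blank and comment-only lines were excluded
```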
Function-Oriented Metrics ◼ Mainly used in business applications ◼ The focus is on program functionality ◼ A measure of the information domain + a subjective assessment
of complexity ◼ Most common are: ◼ function points (FP) and ◼ feature points.
Function Points ◼ STEP 1: measure size in terms of the amount of functionality in a system. Function points are computed by first calculating an unadjusted function point
count (UFC). Counts are made for the following categories ◼ •External inputs – those items provided by the user that describe distinct application-oriented data (such as file
names and menu selections) ◼ •External outputs – those items provided to the user that generate distinct application-oriented data (such as reports and messages, rather than
the individual components of these)
Function Points (cont.) ◼ • External inquiries – interactive inputs requiring a response ◼ • External files – machine-readable interfaces to other systems ◼ • Internal files – logical
master files in the system
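A sketch of the UFC computation, assuming the commonly published average-complexity weights (real counts classify each item as simple, average, or complex before weighting):

```python
# Unadjusted function point count (UFC) from the five category counts.
# The weights below are the widely published "average complexity" values.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "external_files": 7,
    "internal_files": 10,
}

def ufc(counts: dict) -> int:
    return sum(counts[cat] * w for cat, w in AVG_WEIGHTS.items())

# Hypothetical system: 20 inputs, 12 outputs, 8 inquiries,
# 2 external interface files, 4 internal logical files.
counts = {"external_inputs": 20, "external_outputs": 12,
          "external_inquiries": 8, "external_files": 2,
          "internal_files": 4}
print(ufc(counts))  # 20*4 + 12*5 + 8*4 + 2*7 + 4*10 = 226
```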
Use-case points estimation (UUCW) • A simple use case indicates a simple user interface, a single database, and three or fewer transactions and five or fewer class
implementations. • An average use case indicates a more complex UI, two or three databases, and four to seven transactions with 5 to 10 classes. • Finally, a complex use case
implies a complex UI with multiple databases, using eight or more transactions and 11 or more classes. • UUCW: Each use case is assessed using these criteria, and the count of
each type is weighted by a factor of 5, 10, and 15, respectively; the sum of the weighted counts is the total unadjusted use case weight (UUCW).
Use-case points estimation (UAW) • Simple actors are automatons (another system, a machine or device) that communicate through an API. • Average actors are automatons
that communicate through a protocol or a data store. • Complex actors are humans who communicate through a GUI or other human interface. • Each actor is assessed using
these criteria, and the count of each type is weighted by a factor of 1, 2, and 3, respectively. • UAW: the total unadjusted actor weight (UAW) is the sum of all weighted counts.
Use-case points estimation (TCF and ECF) • Considering technical complexity factors (TCFs) and environment complexity factors (ECFs). • Thirteen factors contribute to an assessment
of the final TCF, and eight factors contribute to the computation of the final ECF
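The three slides above can be pulled together in a short sketch. The adjustment formulas TCF = 0.6 + 0.01 * TFactor and ECF = 1.4 - 0.03 * EFactor are Karner's published forms; the example counts below are hypothetical:

```python
# Use-case points from the weighted counts described above.
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}

def uucw(use_case_counts):
    return sum(n * USE_CASE_WEIGHTS[k] for k, n in use_case_counts.items())

def uaw(actor_counts):
    return sum(n * ACTOR_WEIGHTS[k] for k, n in actor_counts.items())

def ucp(uucw_val, uaw_val, tfactor, efactor):
    # TFactor/EFactor are the weighted sums of the 13 technical
    # and 8 environment factor ratings.
    tcf = 0.6 + 0.01 * tfactor
    ecf = 1.4 - 0.03 * efactor
    return (uucw_val + uaw_val) * tcf * ecf

u = uucw({"simple": 6, "average": 4, "complex": 2})  # 30 + 40 + 30 = 100
a = uaw({"simple": 2, "average": 1, "complex": 3})   # 2 + 2 + 9 = 13
print(ucp(u, a, tfactor=40, efactor=20))             # 113 * 1.0 * 0.8 = 90.4
```

The UCP total is then multiplied by a productivity factor (hours per use-case point) to obtain an effort estimate.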
COCOMO 81 ◼ COCOMO stands for COnstructive COst Model. COCOMO has three different models (each increasing in detail and accuracy): ◼ Basic, applied early in a
project ◼ Intermediate, applied after requirements are specified ◼ Advanced, applied after design is complete ◼ COCOMO has three different modes: ◼ Organic – “relatively
small software teams develop software in a highly familiar, in-house environment” ◼ Embedded – operate within tight constraints, product is strongly tied to “complex of
hardware, software, regulations, and operational procedures”◼ Semi-detached – intermediate stage somewhere between organic and embedded. Usually up to 300 KDSI ◼
COCOMO uses two equations to calculate effort in man-months (MM) and the number of months estimated for the project (TDEV) ◼ MM is based on the number of thousands
of delivered source instructions (KDSI) ◼ MM = a(KDSI)^b * EAF ◼ TDEV = c(MM)^d ◼ EAF is the Effort Adjustment Factor derived from the cost drivers; EAF for the basic model is
1 ◼ The values for a, b, c, and d differ depending on which mode you are using.
A simple example: Project is a flight control system (mission critical) with 319,000 DSI in embedded mode
◼ Reliability must be very high (RELY = 1.40). So we can calculate: ◼ Effort = 1.40 * 3.6 * (319)^1.20 = 5093 MM ◼ Schedule = 2.5 * (5093)^0.32 = 38.4 months ◼ Average staffing = 5093 MM / 38.4 months = 133 FSP
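The example can be checked programmatically; the coefficients below are Boehm's published embedded-mode values (a = 3.6, b = 1.20, c = 2.5, d = 0.32), with EAF carrying only the RELY driver:

```python
# Reproducing the embedded-mode COCOMO 81 example above.
a, b, c, d = 3.6, 1.20, 2.5, 0.32   # embedded-mode coefficients

def cocomo81(kdsi, eaf=1.0):
    mm = eaf * a * kdsi ** b        # effort in man-months
    tdev = c * mm ** d              # schedule in months
    return mm, tdev

effort, schedule = cocomo81(319, eaf=1.40)
print(round(effort))                # ~5093 MM
print(round(schedule, 1))           # ~38.4 months
print(round(effort / schedule))     # ~133 full-time staff on average
```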
COCOMO II ◼ Main objectives of COCOMO II: ◼ To develop a software cost and schedule estimation model tuned to the life cycle practices of the 1990’s and 2000’s ◼ To
develop software cost database and tool support capabilities for continuous model improvement.
◼ COCOMO II is actually a hierarchy of estimation models that address the following areas: • Application composition model. Used during the early stages of software
engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are
paramount. • Early design stage model. Used once requirements have been stabilized and basic software architecture has been established. • Post-architecture-stage model.
Used during the construction of the software.
COCOMO II Differences ◼ The exponent value b in the effort equation is replaced with a
variable value based on five scale factors rather than constants ◼ Size of project can be listed as
object points, function points, or source lines of code (SLOC) ◼ EAF is calculated from seventeen
cost drivers better suited to today's methods (COCOMO 81 has fifteen) ◼ A breakage rating has
been added to address the volatility of the system
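A sketch of the variable-exponent idea described above, using the COCOMO II.2000 calibration values A = 2.94 and B = 0.91; the scale-factor ratings and nominal cost drivers in the example are hypothetical:

```python
# COCOMO II post-architecture effort sketch: the exponent E is no
# longer a constant but E = B + 0.01 * sum(scale factors).
A, B = 2.94, 0.91   # COCOMO II.2000 calibration values

def cocomo2_effort(ksloc, scale_factors, effort_multipliers):
    e = B + 0.01 * sum(scale_factors)   # five scale factors
    eaf = 1.0
    for em in effort_multipliers:       # seventeen cost drivers
        eaf *= em
    return A * ksloc ** e * eaf         # person-months

# Hypothetical 100-KSLOC project: scale-factor ratings summing to 15,
# all seventeen cost drivers rated nominal (1.0).
pm = cocomo2_effort(100, scale_factors=[3, 3, 3, 3, 3],
                    effort_multipliers=[1.0] * 17)
print(round(pm))  # about 388 person-months
```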
Estimation for Agile Projects 1. Each user story is considered separately for estimation
purposes. 2. The user story is decomposed into the set of software engineering tasks that will be
required to develop it. 3a. Each task is estimated separately. Note: Estimation can be based on
historical data, an empirical model, or “experience”. 3b. Alternatively, the “volume” of the user
story can be estimated in LOC, FP, or some other volume-oriented measure (e.g., use case
count). 4a. Estimates for each task are summed to create an estimate for the user story. 4b.
Alternatively, the volume estimate for the user story is translated into effort using historical
data. 5. The effort estimates for all user stories that are to be implemented for a given software
increment are summed to develop the effort estimate for the increment.
Reactive Risk Management ◼ project team reacts to risks when they occur ◼ mitigation—plan for
additional resources in anticipation of fire fighting ◼ fix on failure—resources are found and applied
when the risk strikes ◼ crisis management—failure does not respond to applied resources and
project is in jeopardy
Proactive Risk Management ◼ formal risk analysis is performed ◼ organization corrects the root
causes of risk ◼ TQM concepts and statistical SQA ◼ examining risk sources that lie beyond the
bounds of the software ◼ developing the skill to manage change
Seven Principles ◼ Maintain a global perspective—view software risks within the context of system and
the business problem ◼ Take a forward-looking view—think about the risks that may arise in the
future; establish contingency plans ◼ Encourage open communication—if someone states a potential
risk, don’t discount it. ◼ Integrate—a consideration of risk must be integrated into the software process
◼ Emphasize a continuous process—the team must be vigilant throughout the software process,
modifying identified risks as more information is known and adding new ones as better insight is
achieved. ◼ Develop a shared product vision—if all stakeholders share the same vision of the software,
it is likely that better risk identification and assessment will occur. ◼ Encourage teamwork—the talents,
skills, and knowledge of all stakeholders should be pooled
Risk Identification ◼ Product size—risks associated with the overall size of the software to be built or
modified. ◼ Business impact—risks associated with constraints imposed by management or the
marketplace. ◼ Customer characteristics—risks associated with the sophistication of the customer and
the developer's ability to communicate with the customer in a timely manner. ◼ Process definition—
risks associated with the degree to which the software process has been defined and is followed by the
development organization. ◼ Development environment—risks associated with the availability and
quality of the tools to be used to build the product. ◼ Technology to be built—risks associated with the
complexity of the system to be built and the "newness" of the technology that is packaged by the
system. ◼ Staff size and experience—risks associated with the overall technical and project experience of the software engineers who will do the work.
Assessing Project Risk-I ◼ Have top software and customer managers formally committed to support the project? ◼ Are end-users enthusiastically committed to the project
and the system/product to be built? ◼ Are requirements fully understood by the software engineering team and their customers? ◼ Have customers been involved fully in the
definition of requirements? ◼ Do end-users have realistic expectations?
Assessing Project Risk-II ◼ Is project scope stable? ◼ Does the software engineering team have the right mix of skills? ◼ Are project requirements stable? ◼ Does the project
team have experience with the technology to be implemented? ◼ Is the number of people on the project team adequate to do the job? ◼ Do all customer/user constituencies
agree on the importance of the project and on the requirements for the system/product to be built?
Risk Components ◼ performance risk—the degree of uncertainty that the product will meet its requirements and be fit for its intended use. ◼ cost risk—the degree of
uncertainty that the project budget will be maintained. ◼ support risk—the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance. ◼
schedule risk—the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.
Risk Projection ◼ Risk projection, also called risk estimation, attempts to rate each risk in two ways ◼ the likelihood or probability that the risk is real ◼ the consequences of the
problems associated with the risk, should it occur. ◼ There are four risk projection steps: ◼ establish a scale that reflects the perceived likelihood of a risk ◼ delineate the
consequences of the risk ◼ estimate the impact of the risk on the project and the product, ◼ note the overall accuracy of the risk projection so that there will be no
misunderstandings.
Risk Mitigation, Monitoring, and Management ◼ mitigation—how can we avoid the risk? ◼
monitoring—what factors can we track that will enable us to determine if the risk is becoming
more or less likely? ◼ management—what contingency plans do we have if the risk becomes a
reality?
Building the Risk Table ◼ Estimate the probability of occurrence ◼ Estimate the impact on the
project on a scale of 1 to 5, where ◼ 1 = low impact on project success ◼ 5 = catastrophic
impact on project success ◼ sort the table by probability and impact
Risk Exposure (Impact): The overall risk exposure, RE, is determined using the following
relationship: RE = P x C where P is the probability of occurrence for a risk, and C is the cost to
the project should the risk occur
Risk Exposure Example ◼ Risk identification. Only 70 percent of the software components
scheduled for reuse will, in fact, be integrated into the application. The remaining functionality
will have to be custom developed. ◼ Risk probability. 80% (likely). ◼ Risk impact. 60 reusable software components were planned. If only 70 percent can be used, 18
components would have to be developed from scratch (in addition to other custom software that has been scheduled for development). Since the average component is 100
LOC and local data indicate that the software engineering cost for each LOC is $14.00, the overall cost (impact) to develop the components would be 18 x 100 x 14 = $25,200. ◼
Risk exposure. RE = 0.80 x 25,200 = $20,160, or roughly $20,200.
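The arithmetic of the example can be reproduced directly (values taken from the slide above):

```python
# Risk exposure: RE = P x C.
components_planned = 60
reuse_shortfall = 0.30         # 30% of planned reuse fails
loc_per_component = 100
cost_per_loc = 14.00
probability = 0.80

custom_components = round(components_planned * reuse_shortfall)  # 18
cost = custom_components * loc_per_component * cost_per_loc      # $25,200
re = probability * cost
print(re)  # 20160.0 -- roughly $20,200
```

Summing RE over all risks in the risk table gives an estimate of the contingency budget the project may need.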
Risk Due to Product Size: Attributes that affect risk: • estimated size of the product in LOC or FP? • estimated size of product in number of programs, files, transactions? •
percentage deviation in size of product from average for previous products? • size of database created or used by the product? • number of users of the product? • number of
projected changes to the requirements for the product? before delivery? after delivery? • amount of reused software?
Risk Due to Business Impact Attributes that affect risk: • effect of this product on company revenue? • visibility of this product to senior management? • reasonableness of
delivery deadline? • number of customers who will use this product • interoperability constraints • sophistication of end users? • amount and quality of product documentation
that must be produced and delivered to the customer? • governmental constraints • costs associated with late delivery? • costs associated with a defective product?
Risks Due to the Customer Questions that must be answered: • Have you worked with the customer in the past? • Does the customer have a solid idea of requirements? • Has
the customer agreed to spend time with you? • Is the customer willing to participate in reviews? • Is the customer technically sophisticated? • Is the customer willing to let your
people do their job—that is, will the customer resist looking over your shoulder during technically detailed work? • Does the customer understand the software engineering
process?
Risks Due to Process Maturity Questions that must be answered: • Have you established a common process framework? • Is it followed by project teams? • Do you have
management support for software engineering • Do you have a proactive approach to SQA? • Do you conduct formal technical reviews? • Are CASE tools used for analysis,
design and testing? • Are the tools integrated with one another? • Have document formats been established?
Technology Risks Questions that must be answered: • Is the technology new to your organization? • Are new algorithms, I/O technology required? • Is new or unproven
hardware involved? • Does the application interface with new software? • Is a specialized user interface required? • Is the application radically different? • Are you using new
software engineering methods? • Are you using unconventional software development methods, such as formal methods, AI-based approaches, artificial neural networks? • Are
there significant performance constraints? • Is there doubt the functionality requested is "do-able?"
Staff/People Risks Questions that must be answered: • Are the best people available? • Does staff have the right skills? • Are enough people available? • Are staff committed
for entire duration? • Will some people work part time? • Do staff have the right expectations? • Have staff received necessary training? • Will turnover among staff be low?
System Feasibility Study
What is Feasibility Study? • A feasibility study assesses the viability of a project or system before its implementation. • Objectives • Evaluate technical, economic, operational,
and time-related aspects. • Identify potential risks and mitigate them in early stages. • Guide informed decision-making. • Benefits • Saves resources by identifying issues early.
• Improves stakeholder confidence.
• Key Dimensions of Feasibility: ◼ Technical Feasibility • Can the existing infrastructure and technology support the system? • Availability of required technical expertise.
Example: Developing a mobile app with limited developer skills may require outsourcing. ◼ Economic Feasibility • Cost-benefit analysis. • Budget requirements versus projected
benefits. • Example: Comparing cloud hosting costs versus on-premise infrastructure. ◼ Operational Feasibility • Alignment with organizational goals. • User acceptance and
usability considerations. • Example: Testing whether employees can adapt to a new time-tracking system. ◼ Time Feasibility • Can the project meet its deadlines? • Time
allocation for development, testing, and deployment. • Example: Assessing if a holiday shopping app can be launched before peak season.
Types of Feasibility Testing: ◼ Technical Testing • Prototyping or proof-of-concept to validate technical assumptions. • Compatibility and scalability assessments. • Example:
Running a pilot server to check load capacity. ◼ Economic Testing: • Financial modeling and ROI analysis. • Simulating various economic scenarios. • Example: Calculating break-
even points for a subscription service. ◼ Operational Testing: • Engaging with stakeholders to assess user requirements. • Pilot testing with end-users to evaluate effectiveness.
• Example: User testing of a new customer service chatbot. ◼ Security Testing: • Evaluating system’s resilience against cyber threats. • Ensuring data protection and compliance
with regulations. • Example: Running penetration tests on a financial application.
1. Executive Summary: Our company intends to use advanced technologies to develop a “Smart Medical Neck Collar,” designed as a preventive and therapeutic product for neck problems. Using electrical sensors and posture-tracking technology, the smart collar helps users maintain correct neck posture and prevent long-term injury. Compared with traditional rigid collars, our product gives the user greater freedom of movement and allows the neck muscles to remain active, thereby preventing muscle atrophy.
2. Market Analysis – Industry and Target Market • The smart medical device industry is growing rapidly; the global market for smart medical devices is expected to grow at a compound annual growth rate (CAGR) of more than 10% from 2024 to 2030. Demand for tools that prevent and manage chronic pain is rising with increasing public health awareness. Our primary target markets are: • Patients with neck problems: people suffering from neck pain caused by long hours of computer work, driving, or poor posture. • Working professionals (e.g., office employees): people who remain in a fixed posture for long periods and are therefore prone to neck pain. • The elderly: as the elderly population grows, the need for neck-support devices increases. • Athletes: professional athletes need tools to prevent neck injuries.
• Compared with similar products on the market, our product has unique features: • Traditional collars: these products hold the neck completely immobile and cause muscle atrophy. • Our product: unlike competitors, it allows free head movement and prevents muscle atrophy; built-in electrical sensors use controlled pulses to help strengthen the neck muscles.
• Strengths: • innovative, patented technology • growing demand for smart medical devices • usable by both patients and healthy individuals • Weaknesses: • high initial investment required • limited user awareness of the technology's benefits • Opportunities: • rapid growth of the smart medical device market • government support for health-sector startups • opportunity to export the product to international markets • Threats: • competition with international manufacturers • regulatory issues and the need to obtain medical licenses
• Product branding: the product will be introduced as an innovative solution for neck care. • Brand name: “NeckGuard Smart” • Brand slogan: “Your neck's health, your freedom of movement”
• Digital marketing: using advertising platforms such as Google Ads, Instagram, LinkedIn, and Facebook to attract customers. • Content marketing: creating a blog and producing educational content about neck health. • Influencer marketing: partnering with health and sports influencers. • Health-industry exhibitions and events: attending health-industry trade shows.
• Online sales through a dedicated website • Partnership with medical equipment stores • Direct sales to clinics and hospitals
4. Human Resources
• CEO: responsible for high-level strategy development and financial negotiations. • CTO: responsible for product design and development and for collaboration with the engineering teams. • Marketing manager: responsible for creating marketing strategies and running advertising campaigns. • Software and hardware engineer: responsible for developing the software algorithms and optimizing the hardware. • Sales and customer support team: three people in the first year.
5. Risk Analysis
• Technical risks: risks related to the collar's performance and its software capabilities. • Legal risks: the need to obtain medical device licenses. • Financial risks: fluctuations in production costs and unexpected expenses.
6. Timeline
• Research and development (months 1–6): build and test the prototype. • Obtaining medical licenses (months 7–9): register the product and obtain the required approvals. • Marketing launch (months 10–12): start advertising campaigns and branding activities.
7. Financial Plan
• This financial plan covers a 5-year period, examining revenues, costs, net profit and loss, the break-even point, required investment, and projected return on investment (ROI). All amounts are shown in Canadian dollars (CAD).
Conclusion
• This plan shows that the smart medical neck collar offers broad opportunities for growth in domestic and international markets. The product can be positioned as a leading technology in the medical device industry.