Monday

Modularity

  • A system should be partitioned into modules so that the modules are solvable and modifiable separately.
  • It is even better if the modules are also separately compilable.
  • A system is considered modular if it consists of discrete components, so that each component can be implemented separately and a change to one component has minimal impact on other components.
  • Modularity helps in:
  1. system debugging: isolating a problem to a component is easier if the system is modular;
  2. system repair: changing a part of the system is easy, as it affects few other parts;
  3. system building: a modular system can be easily built by "putting its modules together."
  • For modularity, each module needs to support a well-defined abstraction and have a clear interface through which it can interact with other modules. Modularity is where abstraction and partitioning come together.
  • Module-Level Concepts:
  1. A module is a logically separable part of a program. It is a program unit that is discrete and identifiable with respect to compiling and loading.
  2. In terms of common programming language constructs, a module can be a macro, a function, a procedure (or subroutine), a process, or a package.
  3. In systems using functional abstraction, a module is usually a procedure or function, or a collection of these.
  4. To produce modular designs, some criteria must be used to select modules so that the modules support well-defined abstractions and are solvable and modifiable separately. 
  5. In a system using functional abstraction, coupling and cohesion are two modularization criteria, which are often used together.
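As an illustration of these module-level concepts, the sketch below (hypothetical module and function names) shows a small module whose functions all serve one well-defined abstraction and whose only interaction with callers is through a narrow interface:

```python
# Hypothetical sketch: a cohesive "statistics" module. Every function
# serves the same abstraction (high cohesion), and callers interact
# with it only through its interface (low coupling).

def mean(values):
    """Average of a non-empty sequence of numbers."""
    return sum(values) / len(values)

def variance(values):
    """Population variance, built on top of mean()."""
    m = mean(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def report(values):
    """Callers depend only on this interface, not on the internals:
    changing how variance() is computed does not affect them."""
    return {"mean": mean(values), "variance": variance(values)}
```

Because the module supports a single abstraction, a change inside one function (say, a numerically stabler variance) stays invisible to the rest of the system.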

Top-Down and Bottom-Up Strategies

  • A system consists of components, which have components of their own; indeed, a system is a hierarchy of components. The highest-level component corresponds to the total system.
  • To design such a hierarchy there are two possible approaches: top-down and bottom-up.
  • A top-down design approach starts by identifying the major components of the system, decomposing them into their lower-level components and iterating until the desired level of detail is achieved.
  • Top-down design methods often result in some form of stepwise refinement.
  • Starting from an abstract design, in each step the design is refined to a more concrete level, until we reach a level where no more refinement is needed and the design can be implemented directly.
  • Most design methodologies are based on the top-down approach.
  • A bottom-up design approach starts with designing the most basic or primitive components and proceeds to higher-level components that use these lower-level components.
  • Bottom-up methods work with layers of abstraction.
  • Starting from the very bottom, operations that provide a layer of abstraction are implemented.
  • The operations of this layer are then used to implement more powerful operations and a still higher layer of abstraction, until the stage is reached where the operations supported by the layer are those desired by the system.
  • A top-down approach is suitable only if the specifications of the system are clearly known and the system development is from scratch.
  • If a system is to be built from an existing system, a bottom-up approach is more suitable, as it starts from some existing components.
- For example, if an iterative enhancement type of process is being followed, in later iterations the bottom-up approach could be more suitable (in the first iteration a top-down approach can be used).
  • Pure top-down or pure bottom-up approaches are often not practical.
  • A combination of the two approaches is also useful for building layers of abstraction.
  • This approach is frequently used for developing systems.
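Top-down stepwise refinement can be sketched in code (the payroll task and all names here are hypothetical): the top-level component is written first in terms of lower-level operations that do not exist yet, and each operation is then refined until it can be implemented directly.

```python
# Step 1: the top-level component, expressed in terms of
# lower-level operations to be refined later.
def process_payroll(records):
    valid = validate(records)
    pay = compute_pay(valid)
    return format_report(pay)

# Step 2: each lower-level operation is refined to a concrete level.
def validate(records):
    # Keep only records with non-negative hours.
    return [r for r in records if r.get("hours", 0) >= 0]

def compute_pay(records):
    return [{"name": r["name"], "pay": r["hours"] * r["rate"]}
            for r in records]

def format_report(pay):
    return "\n".join(f"{p['name']}: {p['pay']:.2f}" for p in pay)
```

A bottom-up designer would instead write and test `validate`, `compute_pay`, and `format_report` first, then combine them into the higher-level operation.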

Abstraction

  • Abstraction is a tool that permits a designer to consider a component at an abstract level without worrying about the details of the implementation of the component.
  • An abstraction of a component describes the external behavior of that component without bothering with the internal details that produce the behavior.
  • Abstraction is an indispensable part of the design process and is essential for problem partitioning.
  • There are two common abstraction mechanisms for software systems:
        - Functional abstraction, in which a module is specified by the function it performs.
  • For example, a module to compute the log of a value can be abstractly represented by the function log.
  • Similarly, a module to sort an input array can be represented by the specification of sorting.
       - Functional abstraction is the basis of partitioning in function-oriented approaches. The decomposition of the system is in terms of functional modules.

       - Data abstraction, in which an entity is viewed in terms of the services it provides to the environment to which it belongs.

       - Data is not treated simply as objects, but is treated as objects with some predefined operations on them. The operations defined on a data object are the only operations that can be performed on those objects.

       - Data abstraction forms the basis for object-oriented design.
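For example, a stack is a classic data abstraction: the data can be manipulated only through its predefined operations, while the internal representation stays hidden. A minimal sketch:

```python
class Stack:
    """Data abstraction: the only operations on the object are
    push, pop, and is_empty; the list inside is an internal
    representation, not part of the abstraction."""

    def __init__(self):
        self._items = []  # hidden representation

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items
```

The representation could later change (say, to a linked list) without affecting any code that uses the stack through these operations.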

Problem Partitioning

  • When solving a small problem, the entire problem can be tackled at once. The complexity of large problems and the limitations of human minds do not allow large problems to be treated as huge monoliths.
  • The basic aim of problem analysis is to obtain a clear understanding of the needs of the clients and the users.
  • Frequently the client and the users do not understand or know all their needs, because the potential of the new system is often not fully appreciated.
  • The analysts have to ensure that the real needs of the clients and the users are uncovered, even if they don't know them clearly.
  • That is, the analysts are not just collecting and organizing information about the client's organization and its processes, but they also act as consultants who play an active role of helping the clients and users identify their needs.
  • For solving larger problems, the basic principle is the time-tested principle of "divide and conquer":
divide into smaller pieces, so that each piece can be conquered separately.
  • For software design, partition the problem into subproblems and then try to understand each subproblem and its relationship to other subproblems in an effort to understand the total problem.
  • That is, the goal is to divide the problem into manageably small pieces that can be solved separately, because the cost of solving the entire problem directly is more than the sum of the costs of solving all the pieces.
  • The different pieces cannot be entirely independent of each other, as they together form the system. The different pieces have to cooperate and communicate to solve the larger problem. 
  • Problem partitioning also aids design verification.
  • The concepts of state and projection can sometimes also be used effectively in the partitioning process.
  1. A state of a system represents some conditions about the system. This approach is sometimes used in real-time software or process-control software.
  2. Projection, different viewpoints of the system are defined and the system is then analyzed from these different perspectives. The different "projections" obtained are combined to form the analysis for the complete system. Analyzing the system from the different perspectives is often easier, as it limits and focuses the scope of the study.
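The "divide and conquer" principle described in this section is the same idea behind classic algorithms such as merge sort, sketched below: the problem is split into smaller pieces that are solved separately, and the pieces then cooperate (are merged) to solve the larger problem.

```python
def merge_sort(xs):
    # Divide: split the problem into two smaller subproblems.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Conquer: the separately solved pieces cooperate via merging.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```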

System Design Introduction

Introduction :

  • The design activity begins when the requirements document for the software to be developed is available.
  • Design focuses on module view.
  • The design of a system is essentially a blueprint or a plan for a solution for the system.
  • The design process for software systems often has two levels.
  1. At the first level the focus is on deciding which modules are needed for the system, and how the modules should be interconnected. This is what is called the system design or top-level design.
  2. In the second level, the internal design of the modules, or how the specifications of the module can be satisfied, is decided. This design level is often called detailed design or logic design.
  • A design methodology is a systematic approach to creating a design by applying a set of techniques and guidelines.
  • A design should clearly be verifiable, complete (implements all the specifications), traceable, and simple.
  • The two most important concerns of designers:
  1. Efficiency of any system is concerned with the proper use of scarce resources by the system.
  2. An efficient system is one that consumes less processor time and requires less memory.

Thursday

Project Planning Monitoring and Tracking

  • The main goal of monitoring is for project managers to get visibility into the project execution so that they can determine whether any action needs to be taken to ensure that the project goals are met.
  • The three main levels of monitoring are
  1. Activity-level monitoring ensures that each activity in the detailed schedule has been done properly and within time.
  2. Status reports are often prepared weekly to take stock of what has happened and what needs to be done. Status reports typically contain a summary of the activities successfully completed.
  3. The milestone analysis is done at each milestone or every few weeks.

Wednesday

Types of Metrics

  • Size—Function Points 
  1. A major problem after requirements are done is to estimate the effort and schedule for the project. For this, some metrics are needed that can be extracted from the requirements and used to estimate cost and schedule (through the use of some model).
  2. The primary factor that determines the cost (and schedule) of a software project is its size and the functionality of the system.

  •  Size-Oriented Metrics
  1. Size-oriented software metrics are derived by normalizing quality and/or productivity measures by considering the size of the software that has been produced.
  2. The size could be in number of pages, number of paragraphs, number of functional requirements, etc.

  • Function-Oriented Metrics
  • Function-oriented software metrics use a measure of the functionality (what the system performs) as the measure of system size.
  • Since ‘functionality’ cannot be measured directly, it must be derived indirectly using other direct measures.
  • Function points are derived using an empirical relationship based on countable (direct) measures of software's information domain and assessments of software complexity.
  • The system functionality is calculated in terms of the number of functions it implements, the number of inputs, the number of outputs, etc., parameters that can be obtained after requirements analysis and that are independent of the specification (and implementation) language.
  • The original formulation for computing the function points uses the count of five different parameters, namely, external input types, external output types, logical internal file types, external interface file types, and external inquiry types.
  • To account for complexity, each parameter in a type is classified as simple, average, or complex.
  • A drawback of the function point, or function-oriented, metrics approach is that the process of computing the function points involves subjective evaluation at various points, so the count may not be unique and can depend on the analyst.
  • Some of the places where subjectivity enters are:
  • (1) different interpretations of the SRS (e.g., whether something should count as an external input type or an external interface type; whether or not something constitutes a logical internal file; if two reports differ in a very minor way should they be counted as two or one);
  • (2) complexity estimation of a user function is totally subjective and depends entirely on the analyst (an analyst may classify something as complex while someone else may classify it as average)
  • The main advantage of function points over the size metric of KLOC is that the definition of DFP depends only on information available from the specifications, whereas the size in KLOC cannot be directly determined from specifications.
  • The DFP count is independent of the language in which the project is implemented.
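The count described above can be sketched as follows. The simple/average/complex weights are the commonly cited values from Albrecht's original function point formulation; treat the code as illustrative, since it computes only the unadjusted count, before any complexity adjustment factor is applied.

```python
# Commonly cited Albrecht weights: (simple, average, complex)
WEIGHTS = {
    "external_input":     (3, 4, 6),
    "external_output":    (4, 5, 7),
    "external_inquiry":   (3, 4, 6),
    "internal_file":      (7, 10, 15),
    "external_interface": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps each parameter type to a tuple
    (n_simple, n_average, n_complex); returns the weighted sum."""
    return sum(
        n * w
        for ptype, ns in counts.items()
        for n, w in zip(ns, WEIGHTS[ptype])
    )
```

For example, a system with one simple external input, one average internal file, and one complex internal file would count 3 + 10 + 15 = 28 unadjusted function points.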

  •  Quality Metrics
  1. The quality of the SRS has a direct impact on the cost of the project. Hence, it is important to ensure that the SRS is of good quality.
  2. Quality of an SRS can be assessed either directly by evaluating the quality of the document by estimating the value of one or more of the quality attributes of the SRS, or indirectly, by assessing the effectiveness of the quality control measures used in the development process during the requirements phase.
  3. Quality attributes of the SRS are generally hard to quantify, and determining their correlation with project parameters is difficult. Hence, the use of these metrics is still limited.
  4. Process-based metrics are better understood and used more widely for monitoring and controlling the requirements phase of a project.
  • Number of errors found is a process metric that is useful for assessing the quality of requirement specifications.
  • Once the number of errors of different categories found during the requirement review of the project is known, some assessment can be made about the SRS from the size of the project and historical data.
  • The error distribution during requirement reviews of a project will show a pattern similar to other projects executed following the same development process.
  • For example, if far fewer errors than expected were detected, it means that either the SRS was of very high quality or the requirement reviews were not careful.
  • Further analysis can reveal the true situation. If too many clerical errors were detected and too few omission type errors were detected, it might mean that the SRS was written poorly or that the requirements review meeting could not focus on "larger issues" and spent too much effort on "minor" issues.
  • Again, further analysis will reveal the true situation. A large number of errors that reflect ambiguities in the SRS can imply that the problem analysis has not been done properly.
  • Some project management decision to control this can then be taken (e.g., build a prototype or do further analysis).
  • Change request frequency can be used as a metric to assess the stability of the requirements and how many changes in requirements to expect during the later stages.
  • The frequency of changes can also be plotted against time. For most projects, the frequency decreases with time.
  • For a project, if the change requests are not decreasing with time, it could mean that the requirements analysis has not been done properly. Frequency of change requests can also be used to "freeze" the requirements—when the frequency goes below an acceptable threshold, the requirements can be considered frozen and the design can proceed. The threshold has to be determined based on experience and historical data.
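As a hypothetical sketch, the freeze decision described above might be automated like this; the threshold value and the three-week window are assumptions that in practice would come from experience and historical data.

```python
def can_freeze(weekly_change_requests, threshold=2):
    """Return True when the change-request frequency has dropped
    below an acceptable threshold for the last three weeks,
    suggesting the requirements can be considered frozen.

    weekly_change_requests: counts of change requests per week,
    oldest first. threshold and window are illustrative choices.
    """
    recent = weekly_change_requests[-3:]
    return len(recent) == 3 and all(n <= threshold for n in recent)
```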

Role of Metrics and Measurement in software development

  • Although the terms measure, measurement, and metrics are often used interchangeably, it is important to note the subtle differences between them. Because measure can be used either as a noun or a verb, definitions of the term can become confusing.
  • When a single data point has been collected (e.g., the number of errors uncovered in the review of a single module), a measure has been established.
  • Measurement occurs as the result of the collection of one or more data points (e.g., a number of module reviews are investigated to collect measures of the number of errors for each).
  • A software metric relates the individual measures in some way (e.g., the average number of errors found per review or the average number of errors found per person-hour expended on reviews).
  • Measurements in the physical world can be categorized in two ways: direct measures and indirect measures.
  1. Direct measures of the product include lines of code (LOC) produced, execution speed, memory size, and defects reported over some set period of time.
  2. Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, and maintainability.
  • The basic purpose of metrics at any point during a development project is to provide quantitative information to the management process so that the information can be used to effectively control the development process. Unless the metric is useful in some form to monitor or control the cost, schedule, or quality of the project, it is of little use for a project.
  • There are very few metrics that have been defined for requirements.
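The distinction between measure, measurement, and metric can be made concrete with a small example (the numbers are illustrative):

```python
# Measures: single data points, the errors found in each module review.
review_errors = [4, 7, 2, 5]

# Measurement: the result of collecting those data points.
total_errors = sum(review_errors)

# Metric: relates the individual measures in some way,
# here the average number of errors found per review.
errors_per_review = total_errors / len(review_errors)
```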

Validation

  • The development of software starts with a requirements document, which is also used to determine eventually whether or not the delivered software system is acceptable. It is therefore important that the requirements specification contains no errors and specifies the client's requirements correctly.
  • Due to the nature of the requirement specification phase, there is a lot of room for misunderstanding and committing errors, and it is quite possible that the requirements specification does not accurately represent the client's needs.
  • The basic objective of the requirements validation activity is to ensure that the SRS reflects the actual requirements accurately and clearly. A related objective is to check that the SRS document is itself of "good quality“.
  • The most common errors that occur can be classified in four types:
  1. Omission is a common error in requirements. Some user requirement is simply not included in the SRS; the omitted requirement may be related to the behavior of the system, its performance, constraints, or any other factor.
  2. Inconsistency can be due to contradictions within the requirements themselves or to incompatibility of the stated requirements with the actual requirements of the client or with the environment in which the system will operate.
  3. Incorrect fact. Errors of this type occur when some fact recorded in the SRS is not correct.
  4. Ambiguity. Errors of this type occur when there are some requirements that have multiple meanings, that is, their interpretation is not unique.
  • On average, a total of more than 250 errors were detected, distributed across these four error types.

  • Checklists are frequently used in reviews to focus the review effort and ensure that no major source of errors is overlooked by the reviewers.
  1. Are all hardware resources defined?
  2. Have the response times of functions been specified?
  3. Have all the hardware, external software, and data interfaces been defined?
  4. Have all the functions required by the client been specified?
  5. Is each requirement testable?
  6. Is the initial state of the system defined?
  7. Are the responses to exceptional conditions specified?
  8. Does the requirement contain restrictions that can be controlled by the designer?
  9. Are possible future modifications specified? 
Requirements reviews are probably the most effective means for detecting requirement errors.

Requirement Specification

  • The final output of the requirements phase is the software requirements specification (SRS) document.
  • Characteristics of a Requirements Specification:
To properly satisfy the basic goals, a requirements specification should have certain properties and should contain different types of requirements.

1. Correct
2. Complete
3. Unambiguous
4. Verifiable
5. Consistent
6. Ranked for importance and/or stability
7. Modifiable
8. Traceable

Problem Analysis

  • Informal Approach :
  1. The informal approach to analysis is one where no defined methodology is used.
  2. The information about the system is obtained by interaction with the client, end users, questionnaires, study of existing documents, brainstorming, etc.
  3. The informal approach to analysis is used widely and can be quite useful because conceptual modeling-based approaches frequently do not model all aspects of the problem and are not always well suited for all the problems.
  4. In any case, the SRS is to be validated, and the feedback from the validation activity may require further analysis or specification.
  5. Choosing an informal approach to analysis is therefore not very risky: the errors that may be introduced are not necessarily going to slip past the requirements phase. Hence such approaches may be the most practical approach to analysis in some situations.
  • Various fact finding methods are used to collect detailed information about every aspect of an existing system.
  • Shadowing : 
  1. Shadowing is a technique in which you observe a user performing the tasks in the actual work environment and ask the user any questions related to the task.
  2. You typically follow the user as the user performs tasks.
  3. The information obtained by using this technique is firsthand and in context.
  • Interviews:
  1. An interview is a one-on-one meeting between a member of the project team and a user.
  2. The quality of the information a team gathers depends on the skills of both the interviewer and the interviewee.
  3. An interviewer can learn a great deal about the difficulties and limitations of the current solution.
  4. Interviews provide the opportunity to ask a wide range of questions about topics that you cannot observe by means of shadowing.
  • Some typical interview questions are:
  1. What do you expect from the system?
  2. What are the problems you face while performing your task?
  3. What are the details that need to be maintained?
  4. Who provides the information needed to perform tasks?
  5. What information do you need to maintain for future use?
  6. What changes would make your experience more enjoyable?
  • Hence, with interviews we are able to achieve the following:
  1. Identify the types of information that are gathered, processed, and maintained.
  2. Identify the sources for information.
  3. Identify techniques required for data processing and the business rules to keep in mind to do the same.
  4. Identify end user requirements from the system.
  5. Identify the changes to be made to the system to make the experience more user-friendly.

Role of Management in Software Development

  • Software development is populated by players who can be categorized into one of five constituencies:
  1. Senior managers who define the business issues that often have significant influence on the project. 
  2. Project (technical) managers who must plan, motivate, organize, and control the practitioners who do software work.
  3. Practitioners who deliver the technical skills that are necessary to engineer a product or application. 
  4. Customers who specify the requirements for the software to be engineered and other stakeholders who have a peripheral interest in the outcome. 
  5. End-users who interact with the software once it is released for production use. 
“Management includes Senior managers, Project (technical) managers, Practitioners”
  • Senior managers
  1. Motivation. The ability to encourage (by “push or pull”) technical people to produce to their best ability.
  2. Organization. The ability to mold existing processes (or invent new ones) that will enable the initial concept to be translated into a final product.
  3. Ideas or innovation. The ability to encourage people to create even when they must work within established bounds.
  • Team Leaders: an effective project manager emphasizes four key traits:
  1. Problem solving. An effective software project manager can diagnose the technical and organizational issues, systematically structure a solution or properly motivate other practitioners to develop the solution.
  2. Managerial identity. A good project manager must take charge of the project.
  3. Achievement. To optimize the productivity of a project team, a manager must reward initiative and accomplishment and demonstrate through his own actions that controlled risk taking will not be punished.
  4. Influence and team building. An effective project manager must be able to “read” people; must be able to understand verbal and nonverbal signals and react to the needs of the people sending these signals. The manager must remain under control in high-stress situations.
  • The Software Team:
  1. The “best” team structure depends on the management style of your organization,
  2. The number of people who will populate the team and their skill levels, and the overall problem difficulty.
  3. To achieve a high-performance team: 
• Team members must have trust in one another.
• The distribution of skills must be appropriate to the problem.
• Mavericks may have to be excluded from the team, if team cohesiveness is to be maintained.

     4. Recognizing human differences is the first step toward creating teams that jell.

Four P’s

  • Effective software project management focuses on the four P's: people, product, process, and project. Of these, the "people factor" is the most important.
  • The People
- The Software Engineering Institute has developed a people management capability maturity model (PM-CMM). The PM-CMM defines the following key practice areas for software people: recruiting, performance management, training, work design, and team/culture development.
  •  The Product
-The software developer and customer must meet to define product objectives and scope.
  •  The Process
- A small number of framework activities are applicable to all software projects, regardless of their size or complexity. A number of different task sets (tasks, milestones, work products, and quality assurance points) enable the framework activities to be adapted to the characteristics of the project. Umbrella activities are independent of any one framework activity and occur throughout the process.
  • The Project
- We conduct planned and controlled software projects for one primary reason: it is the only known way to manage complexity.

Monday

Spiral Model

Spiral Model is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model.

The spiral model contains six task regions:



• Customer communication—tasks required to establish effective communication between developer and customer.
• Planning—tasks required to define resources, timelines, and other project related information.
• Risk analysis—tasks required to assess both technical and management risks.
• Engineering—tasks required to build one or more representations of the application.
• Construction and release—tasks required to construct, test, install, and provide user support (e.g., documentation and training).
• Customer evaluation—tasks required to obtain customer feedback based on evaluation of the software.

Iterative Enhancement Model

The incremental model combines elements of the linear sequential model (applied repetitively) with the iterative philosophy of prototyping. Each linear sequence produces a deliverable “increment” of the software.

Prototyping Model

Software prototyping, an activity during software development, is the creation of prototypes, i.e., incomplete versions of the software program being developed. A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation.

The conventional purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions.



Advantages of prototyping:
  1. Reduced time and costs
  2. Improved and increased user involvement.

Disadvantages of prototyping:
  1. Insufficient analysis.
  2. User confusion of prototype and finished system
  3. Developer misunderstanding of user objectives
  4. Expense and time of implementing prototyping

Waterfall model

The waterfall model is a sequential software development process, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, and Maintenance.



Problems encountered with the waterfall model:
1. Real projects rarely follow the sequential flow that the model proposes. As a result, changes can cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The linear sequential model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.

System Requirements Specification

A Software Requirements Specification (SRS) is a complete description of the behavior of the system to be developed. It includes
  • A set of use cases that describe all the interactions the users will have with the software, also known as functional requirements.
  • Non-functional (or supplementary) requirements are requirements that impose constraints on the design or implementation (such as performance engineering requirements, quality standards, or design constraints).
A Business analyst (BA), sometimes titled System analyst, is responsible for analyzing the business needs of their clients.

The goal of the requirements activity is to produce the Software Requirements Specification (SRS), which describes what the proposed software should do without describing how the software will do it.

 Need for SRS
  • An SRS establishes the basis for agreement between the client and the supplier on what the software product will do.
  • An SRS provides a reference for validation of the final product.
  • A high-quality SRS is a prerequisite to high-quality software.
  • A high quality SRS reduces the development cost.



System development life cycle

Traditional methodology for developing, maintaining, and replacing information systems

 Phases in SDLC:

  1. Planning
  2. Analysis
  3. Design
  4. Implementation
  5. Maintenance

 Computer systems have become more complex and often link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of system development life cycle (SDLC) models have been created: "waterfall," "fountain," "spiral," "build and fix," "rapid prototyping," "incremental," and "synchronize and stabilize."



Initiation/planning
To generate a high-level view of the intended project and determine the goals of the project. The feasibility study is sometimes used to present the project to upper management in an attempt to gain funding. Projects are typically evaluated in three areas of feasibility: economic, operational, and technical.

Requirements gathering and analysis
The goal of systems analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking down the system into different pieces and drawing diagrams to analyze the situation, analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined.

Design
In systems design, functions and operations are described in detail, including screen layouts, business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. The output of this stage will describe the new system as a collection of modules or subsystems.

Implementation
This phase includes building (coding) and testing.
Modular and subsystem programming code will be written during this stage. Unit testing and module testing are done in this stage by the developers before integration into the main project.
Unit, system, and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much iteration, if any, occurs. Types of testing:
Unit testing, module testing, system testing, black-box testing, white-box testing, regression testing, and user acceptance testing.
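As an illustration of unit testing, the sketch below (hypothetical function) checks a single module in isolation with Python's unittest framework, the way a developer would before integration:

```python
import unittest

def apply_discount(price, percent):
    """Module under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class TestApplyDiscount(unittest.TestCase):
    # Unit tests exercise the module alone, before integration.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

System and acceptance testing, by contrast, exercise the integrated software against the requirements rather than a single module.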

Operations and maintenance
After deployment, the system is maintained through changes and enhancements. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.


Wednesday

SOFTWARE PROCESS MODELS

A software engineer or a team of engineers must incorporate a development strategy that encompasses the process, methods, and tools layers.

This strategy is often referred to as a process model or a software engineering paradigm.

A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required.

Four distinct stages are encountered: status quo, problem definition, technical development, and solution integration. Status quo “represents the current state of affairs”

Problem definition identifies the specific problem to be solved; technical development solves the problem through the application of some technology, and solution integration delivers the results (e.g., documents, programs, data, new business function, new product) to those who requested the solution in the first place.

A Generic View of Software Engineering

Engineering is the analysis, design, construction, verification, and management of technical (or social) entities. Regardless of the entity to be engineered, the following questions must be asked and answered:
  • What is the problem to be solved?
  • What characteristics of the entity are used to solve the problem?
  • How will the entity (and the solution) be realized?
  • How will the entity be constructed?
  • What approach will be used to uncover errors that were made in the design and construction of the entity?
  • How will the entity be supported over the long term, when corrections, adaptations, and enhancements are requested by users of the entity?

Process, Methods, and Tools

Any engineering approach (including software engineering) must rest on an organizational commitment to quality.



The foundation for software engineering is the process layer. The software engineering process is the glue that holds the technology layers together.

Process defines a framework for a set of key process areas (technical methods are applied, work products (models, documents, data, reports, forms, etc.) are produced, milestones are established, quality is ensured) that must be established for effective delivery of software engineering technology.

Software engineering methods provide the technical how-to's for building software.

Methods encompass a broad array of tasks that include requirements analysis, design, program construction, testing, and support.

Software engineering tools provide automated or semi-automated support for the process and the methods. When tools are integrated so that information created by one tool can be used by another, the result is a system for the support of software development called computer-aided software engineering (CASE).

SOFTWARE ENGINEERING: A LAYERED TECHNOLOGY

Although hundreds of authors have developed personal definitions of software engineering, the IEEE [IEE93] has developed a more comprehensive definition when it states:

Software Engineering:

(1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.

(2) The study of approaches as in (1).

Tuesday

software process

When you build a product or system, it’s important to go through a series of predictable steps—a road map that helps you create a timely, high-quality result. The road map that you follow is called a ‘software process.’

 Software engineers and their managers adapt the process to their needs and then follow it. In addition, the people who have requested the software play a role in the software process.

 Process is important because it provides stability.

 At a detailed level, the process that you adopt depends on the software you’re building. One process might be appropriate for creating software for an aircraft avionics system, while an entirely different process would be indicated for the creation of a Web site.

 From the point of view of a software engineer, the work products are the programs, documents, and data  produced as a consequence of the software engineering activities.

 A number of software process assessment mechanisms enable organizations to determine the "maturity" of their software process.

 We define a software process as a framework for the tasks that are required to build high-quality software.

 A software process defines the approach that is taken as software is engineered. But software engineering also encompasses technologies that populate the process—technical methods and automated tools.

 More important, software engineering is performed by creative, knowledgeable people who should work within a defined and mature software process that is appropriate for the products they build and the demands of their marketplace.



Umbrella activities in this category include:
 
  1. Software project tracking and control
  2. Formal technical reviews
  3. Software quality assurance
  4. Software configuration management
  5. Document preparation and production
  6. Reusability management
  7. Measurement
  8. Risk management

In recent years, there has been a significant emphasis on “process maturity.”

The Software Engineering Institute's Capability Maturity Model (CMM) grades an organization's software engineering practices and establishes five process maturity levels that are defined in the following manner:

Level 1: Initial. The software process is characterized as ad hoc and occasionally even chaotic. Few processes are defined, and success depends on individual effort.

Level 2: Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

Level 3: Defined. The software process for both management and engineering activities is documented, standardized, and integrated into an organization wide software process.

Level 4: Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled using detailed measures.

Level 5: Optimizing. Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies.

SOFTWARE: A CRISIS ON THE HORIZON

Many industry observers have characterized the problems associated with software development as a "crisis."

Yet, the great successes achieved by the software industry have led many to question whether the term software crisis is still appropriate. It is true that software people succeed more often than they fail. It is also true that the software crisis predicted 30 years ago never seemed to materialize.

The word crisis is defined in Webster's Dictionary as “a turning point in the course of anything; decisive or crucial time, stage or event.”

The word crisis has another definition: "the turning point in the course of a disease, when it becomes clear whether the patient will live or die." This definition may give us a clue about the real nature of the problems that have plagued software development.

Software Applications

System software:  System software is a collection of programs written to service other programs. Some system software (e.g., compilers, editors, and file management utilities) process complex, but determinate, information structures. Other systems applications (e.g., operating system components, drivers, telecommunications processors) process largely indeterminate data.

Real-time software:  Software that monitors/analyzes/controls real-world events as they occur is called real time. Elements of real-time software include a data gathering component that collects and formats information from an external environment, an analysis component that transforms information as required by the application, a control/output component that responds to the external environment, and a monitoring component that coordinates all other components so that real-time response (typically ranging from 1 millisecond to 1 second) can be maintained.

Business software:  Business information processing is the largest single software application area. Discrete "systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS) software that accesses one or more large databases containing business information.

Engineering and scientific software:  Engineering and scientific software have been characterized by "number crunching" algorithms. Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing.

Embedded software:  Intelligent products have become commonplace in nearly every consumer and industrial market. Embedded software resides in read-only memory and is used to control products and systems for the consumer and industrial markets. Embedded software can perform very limited and esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, and braking systems).

Personal computer software:  The personal computer software market has burgeoned over the past two decades. Word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business financial applications, external network, and database access are only a few of hundreds of applications.

Web-based software:  The Web pages retrieved by a browser are software that incorporates executable instructions (e.g., CGI, HTML, Perl, or Java), and data (e.g., hypertext and a variety of visual and audio formats). In essence, the network becomes a massive computer providing an almost unlimited software resource that can be accessed by anyone with a modem.

Artificial intelligence software:  Artificial intelligence (AI) software makes use of nonnumerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Expert systems, also called knowledge-based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing are representative of applications within this category.

Software Characteristics

Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different than those of hardware:

Software is developed or engineered; it is not manufactured in the classical sense.

Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both activities, high quality is achieved through good design, and the construction of a "product" is required, but the approaches are different. Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.

Software doesn't "wear out."



The figure depicts failure rate as a function of time for hardware. The relationship, often called the "bathtub curve," indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental maladies. Stated simply, the hardware begins to wear out.


Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form of the “idealized curve”. Undiscovered defects will cause high failure rates early in the life of a program. However, these are corrected (ideally, without introducing other errors) and the curve flattens as shown. The idealized curve is a gross oversimplification of actual failure models for software. However, the implication is clear—software doesn't wear out.


 
 
As changes are made, it is likely that some new defects will be introduced, causing the failure rate curve to spike.

Before the curve can return to the original steady-state failure rate, another change is requested, causing the curve to spike again. Slowly, the minimum failure rate level begins to rise—the software is deteriorating due to change.

Although the industry is moving toward component-based assembly, most software continues to be custom built.

The design engineer draws a simple schematic of the digital circuitry, does some fundamental analysis to assure that proper function will be achieved, and then goes to the shelf where catalogs of digital components exist. As an engineering discipline evolves, a collection of standard design components is created. The reusable components have been created so that the engineer can concentrate on the truly innovative elements of a design, that is, the parts of the design that represent something new.
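The same shelf-catalog idea can be sketched in software terms: rather than hand-building a routine, the engineer reuses a well-tested standard component. The sketch below contrasts a hand-rolled word counter with Python's standard-library `collections.Counter` (the example itself is illustrative, not from the source):

```python
from collections import Counter

# Hand-rolled version: the engineer must write, test, and maintain
# this code, even though it represents nothing new.
def word_counts_custom(text):
    counts = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Reuse version: Counter is the cataloged "shelf component," freeing
# the engineer to concentrate on the truly innovative parts of a design.
def word_counts_reused(text):
    return dict(Counter(text.split()))

sample = "to be or not to be"
assert word_counts_custom(sample) == word_counts_reused(sample)
```

In the hardware analogy, `Counter` plays the role of the off-the-shelf digital component: standardized, already verified, and ready to be composed into a larger design.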

Monday

SOFTWARE DEFINITION

Software is
(1) instructions (computer programs) that when executed provide desired function and performance,
(2) data structures that enable the programs to adequately manipulate information, and
(3) documents that describe the operation and use of the programs.

THE EVOLVING ROLE OF SOFTWARE

Today, software takes on a dual role. It is a product and, at the same time, the vehicle for delivering a product. As a product, it delivers the computing potential embodied by computer hardware or, more broadly, a network of computers that are accessible by local hardware. Whether it resides within a cellular phone or operates inside a mainframe computer, software is an information transformer—producing, managing, acquiring, modifying, displaying, or transmitting information that can be as simple as a single bit or as complex as a multimedia presentation. As the vehicle used to deliver the product, software acts as the basis for the control of the computer (operating systems), the communication of information (networks), and the creation and control of other programs (software tools and environments). Software delivers the most important product of our time—information.

Software transforms personal data (e.g., an individual’s financial transactions) so that the data can be more useful in a local context; it manages business information to enhance competitiveness; it provides a gateway to worldwide information networks (e.g., Internet) and provides the means for acquiring information in all of its forms.

The role of computer software has undergone significant change over a time span of little more than 50 years. Dramatic improvements in hardware performance, profound changes in computing architectures, vast increases in memory and storage capacity, and a wide variety of exotic input and output options have all precipitated more sophisticated and complex computer-based systems.
The lone programmer of an earlier era has been replaced by a team of software specialists, each focusing on one part of the technology required to deliver a complex application.
And yet, the same questions asked of the lone programmer are being asked when modern computer-based systems are built:
1)Why does it take so long to get software finished?
2)Why are development costs so high?
3)Why can't we find all the errors before we give the software to customers?
4)Why do we continue to have difficulty in measuring progress as software is being developed?

Computer software

Computer software is the product that software engineers design and build.

It encompasses programs that execute within a computer of any size and architecture, documents that encompass hard-copy and virtual forms, and data that combine numbers and text but also includes representations of pictorial, video, and audio information.

It  is important because it affects nearly every aspect of our lives and has become pervasive in our commerce, our culture, and our everyday activities.

Computer software is built like any successful product: by applying a process that leads to a high-quality result that meets the needs of the people who will use the product. You apply a software engineering approach.

Software’s impact on our society and culture continues to be profound.

As its importance grows, the software community continually attempts to develop technologies that will make it easier, faster, and less expensive to build high-quality computer programs.

Some of these technologies are targeted at a specific application domain (e.g., Web-site design and implementation); others focus on a technology domain (e.g., object-oriented systems); and still others are broad-based (e.g., operating systems such as LINUX).

However, we have yet to develop a software technology that does it all, and the likelihood of one arising in the future is small.