
Winter 2011 Master of Computer Application (MCA) Semester 3 MC0071 Software Engineering 4 Credits (Book ID: B0808 & B0809) Assignment Set 1 (60 Marks)

Answer all Questions. Each question carries TEN marks. Book ID: B0808 1. Explain the following development models: a. Serial or Linear Sequential b. Incremental 2. Describe the Object Interface Design. 3. Explain the following testing strategies: a. Top-Down Testing b. Bottom-Up Testing c. Thread testing d. Stress testing Book ID: B0809 4. Describe the following Risk Reduction Models: a. Prototyping Model b. Spiral Model c. Clean Room Model 5. Describe the Capability Maturity Model. 6. Describe the following with respect to Software Technology: a. Exponential Growth in Capability b. Business Problem-Solving Optimization c. The E-Business Revolution d. Portability Power e. Connectivity Power

Ans. 1 (a)

Explain the following development models:

a. Serial or Linear Sequential: Also called the Classic Life Cycle, Waterfall model, or Software Life Cycle, it suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. The waterfall model derives its name from the cascading effect from one phase to the next. In this model each phase has a well-defined starting and ending point, with identifiable deliverables to the next phase: Analysis --> Design --> Coding --> Testing. Advantages: It is simple and a desirable approach when the requirements are clear and well understood at the beginning. It provides a clear-cut template for analysis, design, coding, testing, and support. It is an enforced, disciplined approach. Disadvantages: It is difficult for customers to state all requirements clearly at the beginning; there is always a certain degree of natural uncertainty at the start of each project. It is difficult and costlier to accommodate changes that occur at later stages. The customer sees a working version only at the end, so any changes suggested then are not only difficult to incorporate but also expensive; this may result in disaster if undetected problems are precipitated to this stage. Ans. 1 (b) Incremental: The incremental model is an evolution of the waterfall model, where the waterfall model is applied incrementally. The series of releases is referred to as increments, with each increment providing more functionality to the customer. After the first increment, a core product is delivered, which can already be used by the customer. Based on customer feedback, a plan is developed for the next increments, and modifications are made accordingly. This process continues, with increments being delivered, until the complete product is delivered. The incremental philosophy is also used in the agile process model. Advantages 1. After every iteration any faulty piece of software can be identified easily, as very few changes are made in each iteration. 2. It is easier to test and debug, since testing and debugging can be performed after each iteration. 3. The customer's business is not disrupted, because the core of the software the customer needs is delivered first, which helps keep the business running. 4. After establishing an overall architecture, the system is developed and delivered in increments. Disadvantages 1. If requirements that were initially thought to be stable later turn out to be unstable, the increments may have to be withdrawn and reworked. 2. The resulting cost may exceed the organization's estimates. 3. Problems may arise related to the system architecture.

Ans. 2 Describe the Object Interface Design.

The following interface design principles apply: the conceptual model is consistent with the system image; the interface should include mappings that reveal relationships among task stages; state and action alternatives are visible; the user should receive continuous feedback. Further guidelines: strive for consistency; enable short-cuts for frequent users; give informative feedback; design dialogs to yield closure; offer simple error handling; permit easy reversal of actions; support an internal locus of control; reduce the short-term memory load on the user. In computing, an object-oriented user interface (OOUI) is a type of user interface based on an object-oriented programming metaphor. In an OOUI, the user interacts explicitly with objects that represent entities in the domain that the application is concerned with. Many vector drawing applications, for example, have an OOUI - the objects being lines, circles, and canvases. The user may explicitly select an object, alter its properties (such as size or color), or invoke other actions upon it (such as move, copy, or re-align). If a business application has an OOUI, the user may be selecting and/or invoking actions on objects representing entities in the business domain, such as customers, products, or orders. Dave Collins defines an OOUI as demonstrating three characteristics: users perceive and act on objects; users can classify objects based on how they behave; and, in the context of what users are trying to do, all the user interface objects fit together into a coherent overall representation.

Ans. 3 Explain the following testing strategies:

Ans. 3 (a) Top-Down Testing: In this approach testing is conducted from the main module down to the sub-modules. If a sub-module is not yet developed, a temporary program called a STUB is used to simulate it. Advantages: - Advantageous if major flaws occur toward the top of the program. - Once the I/O functions are added, representation of test cases is easier. - An early skeletal program allows demonstrations and boosts morale. Disadvantages: - Stub modules must be produced. - Stub modules are often more complicated than they first appear to be. - Before the I/O functions are added, representation of test cases in stubs can be difficult. - Test conditions may be impossible, or very difficult, to create. - Observation of test output is more difficult. - It allows one to think that design and testing can be overlapped. - It induces one to defer completion of the testing of certain modules.
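The stub idea can be sketched as follows (the module and function names are hypothetical, chosen only for illustration): the top-level module is real, while the sub-module it depends on is replaced by a stub that returns a canned answer.

```python
# Top-down integration sketch: the main module is real, the
# sub-module it depends on is replaced by a stub.

def tax_stub(amount):
    """Stub for the not-yet-written tax sub-module: returns a canned 10%."""
    return round(amount * 0.10, 2)  # no real tax logic, fixed rate

def compute_invoice_total(amount, tax_fn=tax_stub):
    """Main (top-level) module under test; the stub is injected."""
    return amount + tax_fn(amount)

# The main module's logic can be tested before the real tax module exists.
assert compute_invoice_total(100.0) == 110.0
```

When the real tax module is ready, it replaces `tax_stub` and the same test is re-run.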

Ans. 3 (b) Bottom-Up Testing: In this approach testing is conducted from the sub-modules up to the main module. If the main module is not yet developed, a temporary program called a DRIVER is used to simulate it. Advantages: - Advantageous if major flaws occur toward the bottom of the program. - Test conditions are easier to create. - Observation of test results is easier. Disadvantages: - Driver modules must be produced. - The program as an entity does not exist until the last module is added. Ans. 3 (c) Thread testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. Ans. 3 (d) Stress testing: Stress testing is a fundamental quality assurance activity that should be part of every significant software testing effort. The key idea behind stress testing is simple: instead of running manual or automated tests under normal conditions, you run your tests under conditions of reduced machine or system resources. The resources to be stressed generally include internal memory, CPU availability, disk space, and network bandwidth. To reduce these resources for testing you can run a tool called a stressor. Basically, stress testing checks how the system behaves when a large volume of data interacts with it. The objective is to stress the application and check it before it is put into the production environment. Normal and abnormal volumes of data are processed during a specific time frame. Test data can be created either from data used in the production environment or as a set of data created on your own. We can also check whether sufficient disk space is allocated to the application, and whether any communication link failure occurs while transactions are being processed.
The people entering instructions should be the end users who will use the application once it is in production. Stress testing tries to break the system by overloading it with a large number of transactions. The disadvantage of stress testing is the amount of time and resources needed to prepare for and run the test. However, it helps in deciding whether to make the application live, based on results that reduce the risk. It is done before UAT, toward the end of the development phase.
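A minimal sketch of the overload idea (the transaction function, volume, and time budget are hypothetical assumptions, not from the book): push a large number of transactions through the system and check that it still behaves correctly within a time budget.

```python
import time

def process_transaction(record):
    """Hypothetical system under test: validates and posts one record."""
    if record["amount"] <= 0:
        raise ValueError("invalid amount")
    return record["amount"] * 1.02  # e.g. apply a 2% fee

def stress_test(n_transactions=100_000, time_budget_s=5.0):
    """Overload the function with a large volume of transactions and time it."""
    start = time.perf_counter()
    total = 0.0
    for i in range(n_transactions):
        total += process_transaction({"amount": 1.0 + (i % 100)})
    elapsed = time.perf_counter() - start
    return elapsed <= time_budget_s, elapsed

ok, elapsed = stress_test()
assert ok, f"system too slow under load: {elapsed:.2f}s"
```

A real stressor would also constrain memory, disk, or bandwidth while the load runs; this sketch covers only the volume-and-time aspect.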

Ans. 4 Describe the following Risk Reduction Models: Ans. 4 (a) Prototyping Model: The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved, from which the complete system or product can then be developed. This model works best in scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place between the developers and the users. There are several steps in the Prototyping Model: 1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the departments or aspects of the existing system. 2. A preliminary design is created for the new system. 3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product. 4. The users thoroughly evaluate the first prototype, noting its strengths and weaknesses, what needs to be added, and what should be removed. The developer collects and analyzes the remarks from the users. 5. The first prototype is modified, based on the comments supplied by the users, and a second prototype of the new system is constructed. 6. The second prototype is evaluated in the same manner as was the first prototype. 7. The preceding steps are iterated as many times as necessary, until the users are satisfied that the prototype represents the final product desired. 8. The final system is constructed, based on the final prototype. 9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime. Advantages: It could serve as the first system. The customer doesn't need to wait long as in the linear model. Feedback from customers is received periodically, so changes don't come as a last-minute surprise. Disadvantages: The customer could mistake the prototype for the working version. The developer could also make implementation compromises, applying quick fixes to the prototype and passing it off as a working version. Often clients expect that a few minor changes to the prototype will suffice for their needs; they fail to realise that no consideration was given to the overall quality of the software in the rush to develop the prototype.

Ans. 4 (b) Spiral Model: The spiral model, also known as the spiral lifecycle model, is a systems development lifecycle (SDLC) model used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is favoured for large, expensive, and complicated projects. The steps in the spiral model can be generalized as follows: 1. The new system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system. 2. A preliminary design is created for the new system. 3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system, and represents an approximation of the characteristics of the final product. 4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; (4) constructing and testing the second prototype. 5. At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product. 6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above. 7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired. 8. The final system is constructed, based on the refined prototype. 9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

Ans. 4 (c) Clean Room Model: Cleanroom software engineering is a theory-based, team-oriented process for the development and certification of high-reliability software systems under statistical quality control. A principal objective of the Cleanroom process is the development of software that exhibits zero failures in use. The Cleanroom name is borrowed from hardware cleanrooms, with their emphasis on rigorous engineering discipline and a focus on defect prevention rather than defect removal. Cleanroom combines mathematically based methods of software specification, design, and correctness verification with statistical, usage-based testing to certify software fitness for use. Cleanroom projects have reported substantial gains in quality and productivity. The Cleanroom Software Engineering Reference Model, or CRM, is expressed in terms of a set of 14 Cleanroom processes and 20 work products. It is intended as a guide for Cleanroom project management and performance, process assessment and improvement, and technology transfer and adoption.

Ans. 5 Describe the Capability Maturity Model. The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically, SEI was established to optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the software industry as a whole, SEI advocates industry-wide adoption of the CMM. The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the International Organization for Standardization (ISO). The ISO 9000 standards specify an effective quality system for manufacturing and service industries; ISO 9001 deals specifically with software development and maintenance. The main difference between the two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM establishes a framework for continuous process improvement and is more explicit than the ISO standard in defining the means to be employed to that end. CMM's Five Maturity Levels of Software Processes:

1. At the initial level, processes are disorganized, even chaotic. Success is likely to depend on individual efforts, and is not considered to be repeatable, because processes are not sufficiently defined and documented to allow them to be replicated. 2. At the repeatable level, basic project management techniques are established, and successes can be repeated, because the requisite processes have been established, defined, and documented. 3. At the defined level, an organization has developed its own standard software process through greater attention to documentation, standardization, and integration. 4. At the managed level, an organization monitors and controls its own processes through data collection and analysis. 5. At the optimizing level, processes are constantly improved by monitoring feedback from current processes and introducing innovative processes to better serve the organization's particular needs.

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0071 Software Engineering 4 Credits (Book ID: B0808 & B0809) Assignment Set 2 (60 Marks) Answer all Questions Book ID: B0808 1. Describe the following with respect to Software Design: a. The design process b. Design Methods c. Design description d. Design strategies. 2. Describe the following with respect to Software Testing: a. Control Structure Testing b. Black Box Testing c. Boundary Value Analysis d. Testing GUIs e. Testing Documentation and Help Facilities. 3. Draw possible data flow diagram of system design for the following application. Part of the electronic mail system which presents a mail form to a user, accepts the completed form and sends it to the identified destination. Book ID: B0809 4. Describe the following with respect to Software Testing: a. Open Source development Model b. Agile Software Development 5. Describe Classic Invalid assumptions in the context of Process Life Cycle models. 6. Describe the following: a. Importance of people in problem solving process b. Human driven software engineering Each question carries TEN marks

Ans. 1 Describe the following with respect to Software Design: Ans. 1 (a) The design process: Software design is a process of problem solving and planning for a software solution. After the purpose and specifications of software are determined, software developers will design or employ designers to develop a plan for a solution. It includes low-level component and algorithm implementation issues as well as the architectural view. The design concepts provide the software designer with a foundation from which more sophisticated methods can be applied. A set of fundamental design concepts has evolved. They are: 1. Abstraction - Abstraction is the process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically in order to retain only information which is relevant for a particular purpose. 2. Refinement - It is the process of elaboration. A hierarchy is developed by decomposing a macroscopic statement of function in a stepwise fashion until programming language statements are reached. In each step, one or several instructions of a given program are decomposed into more detailed instructions. Abstraction and Refinement are complementary concepts. 3. Modularity - Software architecture is divided into components called modules. 4. Software Architecture - It refers to the overall structure of the software and the ways in which that structure provides conceptual integrity for a system. A good software architecture will yield a good return on investment with respect to the desired outcome of the project, e.g. in terms of performance, quality, schedule and cost. 5. Control Hierarchy - A program structure that represents the organization of a program component and implies a hierarchy of control. 6. Structural Partitioning - The program structure can be divided both horizontally and vertically. Horizontal partitions define separate branches of modular hierarchy for each major program function. 
Vertical partitioning suggests that control and work should be distributed top-down in the program structure. 7. Data Structure - It is a representation of the logical relationship among individual elements of data. 8. Software Procedure - It focuses on the processing of each module individually. 9. Information Hiding - Modules should be specified and designed so that information contained within a module is inaccessible to other modules that have no need for such information. Ans. 1 (b) Design Methods: A more methodical approach to software design is proposed by structured methods, which are sets of notations and guidelines for software design. Budgen (1993) describes some of the most commonly used methods, such as structured design, structured systems analysis, Jackson System Development, and various approaches to object-oriented design. The use of structured methods involves producing large amounts of diagrammatic design documentation. CASE tools have been developed to support particular methods. Structured methods have been applied successfully in many large projects. They can deliver significant cost reductions because they use standard notations and ensure that standard design documentation is produced.

A mathematical method (such as the method for long division) is a strategy that will always lead to the same result irrespective of who applies it. The term structured methods suggests, therefore, that designers should normally generate similar designs from the same specification. A structured method includes a set of activities, notations, report formats, rules, and design guidelines. Structured methods often support some of the following models of a system: (1) A data-flow model, where the system is modeled using the data transformations that take place as data is processed. (2) An entity-relation model, which is used to describe the logical data structures being used. (3) A structural model, where the system components and their interactions are documented. (4) If the method is object-oriented, it will include an inheritance model of the system, a model of how objects are composed of other objects and, usually, an object-use model which shows how objects are used by other objects. Ans. 1 (c) Design description: A software design is a model system that has many participating entities and relationships. This design is used in a number of different ways. It acts as a basis for detailed implementation; it serves as a communication medium between the designers of sub-systems; it provides information to system maintainers about the original intentions of the system designers; and so on. Designs are documented in a set of design documents that describe the design for programmers and other designers. There are three main types of notation used in design documents: (1) Graphical notations: these are used to display the relationships between the components making up the design and to relate the design to the real-world system it is modeling. A graphical view of a design is an abstract view. It is most useful for giving an overall picture of the system. (2) Program description languages: these languages (PDLs) use control and structuring constructs based on programming language constructs, but also allow explanatory text and (sometimes) additional types of statement to be used. They allow the intention of the designer to be expressed, rather than the details of how the design is to be implemented. (3) Informal text: much of the information associated with a design cannot be expressed formally. Information about design rationale or non-functional considerations may be expressed using natural language text. Ans. 1 (d) Design strategies: The most commonly used software design strategy involves decomposing the design into functional components, with system state information held in a shared data area. Since the late 1980s an alternative, object-oriented design, has been widely adopted. The two design strategies are summarized as follows: (1) Functional design: the system is designed from a functional viewpoint, starting with a high-level view and progressively refining this into a more detailed design. The system state is centralized and shared between the functions operating on that state. Methods such as Jackson Structured Programming and the Warnier-Orr method are techniques of functional decomposition, where the structure of the data is used to determine the functional structure used to process that data. (2) Object-oriented design: the system is viewed as a collection of objects rather than as functions. Object-oriented design is based on the idea of information hiding and has been described by Meyer, Booch, Jacobson, and many others. JSD is a design method that falls somewhere between function-oriented and object-oriented design.
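As an illustrative sketch (the account example is hypothetical), the key difference between the two strategies is where the system state lives:

```python
# Functional design: state is centralized in a shared data area,
# and functions operate on that shared state.
shared_state = {"balance": 0}

def deposit(amount):
    shared_state["balance"] += amount

def withdraw(amount):
    shared_state["balance"] -= amount

# Object-oriented design: state is hidden inside the object
# (information hiding), reachable only through its operations.
class Account:
    def __init__(self):
        self._balance = 0  # private by convention

    def deposit(self, amount):
        self._balance += amount

    def withdraw(self, amount):
        self._balance -= amount

    def balance(self):
        return self._balance

deposit(100); withdraw(30)
assert shared_state["balance"] == 70

acct = Account()
acct.deposit(100); acct.withdraw(30)
assert acct.balance() == 70
```

In the functional version any function can touch `shared_state`; in the object-oriented version only `Account`'s own methods can reach `_balance`.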

Ans. 2 (a) Control Structure Testing: Control structure testing is a group of white-box testing methods. CONDITION TESTING

It is a test case design method that exercises the logical conditions in a program module. It involves testing of both relational expressions and arithmetic expressions. If a condition is incorrect, then at least one component of the condition is incorrect. The types of errors sought in condition testing are Boolean operator errors, Boolean variable errors, Boolean parenthesis errors, relational operator errors, and arithmetic expression errors. Simple condition: a Boolean variable or relational expression, possibly preceded by a NOT operator. Compound condition: composed of two or more simple conditions, Boolean operators, and parentheses. Boolean expression: a condition without relational expressions.
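A sketch of condition testing in practice (the discount rule is a hypothetical example): for the compound condition below, test cases are chosen so that each simple condition takes both truth values, including the relational boundary.

```python
def qualifies_for_discount(age, member):
    """Compound condition under test: (age >= 65) OR member."""
    return age >= 65 or member

# Exercise each simple condition in both outcomes,
# including the boundary of the relational operator >=.
assert qualifies_for_discount(70, False) is True   # first condition True
assert qualifies_for_discount(65, False) is True   # boundary of >=
assert qualifies_for_discount(64, False) is False  # both conditions False
assert qualifies_for_discount(30, True) is True    # second condition True
```

A relational-operator error (e.g. `>` written instead of `>=`) would be caught by the `age == 65` case; a Boolean-operator error (`and` instead of `or`) by the last case.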


DATA FLOW TESTING

The data flow testing method is effective for error detection because it is based on the relationship between statements in the program, according to the definitions and uses of variables. Test paths are selected according to the location of definitions and uses of variables in the program. It is unrealistic to assume that data flow testing will be used extensively when testing a large system; however, it can be used in a targeted fashion for areas of software that are suspect.
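A small sketch of a def-use pair (the averaging function is hypothetical): a test path is chosen so that the definition of `total` at one statement reaches its use at another.

```python
def average(values):
    total = 0                     # definition of 'total'
    for v in values:
        total += v                # use and redefinition of 'total'
    return total / len(values)    # use of 'total' (and of 'values')

# A def-use test path: the definition total = 0 must reach the division.
assert average([2, 4, 6]) == 4.0

# The empty-list path exercises a use of 'values' with no data,
# exposing the unguarded division.
try:
    average([])
except ZeroDivisionError:
    pass
```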


LOOP TESTING

The loop testing method concentrates on the validity of loop constructs. Loops are fundamental to many algorithms and need thorough testing. Loops can be classified as simple, concatenated, nested, and unstructured. In simple loops, the test cases that can be applied are: skip the loop entirely; only one or two passes through the loop; m passes through the loop, where m is less than n; and (n-1), n, and (n+1) passes through the loop, where n is the maximum number of allowed passes. In nested loops, start with the inner loop, set all other loops to minimum values, conduct simple loop testing on the inner loop, then work outwards and continue until all loops are tested. In concatenated loops, if the loops are independent, use simple loop testing; if dependent, treat them as nested loops. Unstructured loops should be redesigned to use structured loop constructs.
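The simple-loop cases can be sketched as follows (the buffer-filling function and its limit n are hypothetical):

```python
def fill_buffer(items, n=10):
    """Hypothetical loop under test: copies at most n items."""
    buf = []
    for item in items[:n]:
        buf.append(item)
    return buf

N = 10
# The simple-loop test cases: 0, 1, 2, N-1, N, and N+1 passes.
for passes in (0, 1, 2, N - 1, N, N + 1):
    out = fill_buffer(list(range(passes)), n=N)
    # The loop must never execute more than n times.
    assert len(out) == min(passes, N)
```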

Ans. 2 (b) Black Box Testing: Black-box testing treats the software as a "black box", without any knowledge of its internal implementation. Black-box testing methods include equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, exploratory testing, and specification-based testing. Specification-based testing: specification-based testing aims to test the functionality of software according to the applicable requirements. Thus, the tester inputs data into, and only sees the output from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Specification-based testing is necessary, but it is insufficient to guard against certain risks. Advantages and disadvantages: the black-box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs. Using the principle "Ask and you shall receive," black-box testers find bugs where programmers do not. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight," because the tester doesn't know how the software being tested was actually constructed. As a result, there are situations when (1) a tester writes many test cases to check something that could have been tested by only one test case, and/or (2) some parts of the back-end are not tested at all. Therefore, black-box testing has the advantage of "an unaffiliated opinion" on the one hand, and the disadvantage of "blind exploring" on the other. Ans. 2 (c) Boundary Value Analysis: Boundary value analysis is a software testing technique in which tests are designed to include representatives of boundary values. Values on the minimum and maximum edges of an equivalence partition are tested.
The values could be either input or output ranges of a software component. Since these boundaries are common locations for errors that result in software faults, they are frequently exercised in test cases. Limitations of BVA: 1) Boolean and logical variables present a problem for boundary value analysis. 2) BVA assumes the variables to be truly independent, which is not always the case. 3) BVA test cases have been found to be rudimentary because they are obtained with very little insight and imagination. Ans. 2 (d) Testing GUIs: GUIs need testing. Contrary to some opinion, the problem is not always (or even commonly) solvable by making the GUI as stupid as possible. GUIs that are sufficiently simple not to require testing are also uninteresting, so they do not play into this discussion. Any GUI of sufficient utility will have some level of complexity, and even if that complexity is limited to listeners (code responding to GUI changes) and updates (the GUI listening to code state changes), those hook-ups need to be verified. Getting developers to test is not easy, especially if the testing itself requires additional learning. Developers will not want to learn the details of specific application procedures when it has no bearing on their immediate work, nor should they have to. If I'm working on a pie-chart graph, I don't really want to know the details of connecting to the database and making queries simply to get an environment set up for testing. So the framework for testing GUIs should require no more special knowledge than you might need to use the GUI manually. That means a test should be able to: look up a component, usually by some obvious attribute like its label, and perform some user action on it, e.g. "click" or "select row".
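A sketch of that idea (the tiny look-up helper and widget model are hypothetical, not a real GUI framework): the test finds a component by its label and drives it through a user-level action, with no knowledge of the application's internals.

```python
# Hypothetical, minimal GUI-testing sketch.

class Button:
    def __init__(self, label, on_click):
        self.label = label
        self._on_click = on_click

    def click(self):
        """Simulate the user-level action."""
        self._on_click()

def find_by_label(widgets, label):
    """Look up a component by its visible label."""
    return next(w for w in widgets if w.label == label)

# Application under test: clicking "Save" flips a flag.
state = {"saved": False}
widgets = [Button("Save", lambda: state.update(saved=True)),
           Button("Cancel", lambda: None)]

# The test reads like manual GUI use: find the button, click it.
find_by_label(widgets, "Save").click()
assert state["saved"] is True
```

The point is that the test exercises the listener hook-up (button to state change) using only knowledge a manual user would have.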

Ans. 2 (e) Testing Documentation and Help Facilities: Errors in documentation can be as devastating to the acceptance of the program as errors in data or source code. You must have seen the difference between following the user guide and getting results or behaviours that do not coincide with those predicted by the document. For this reason, documentation testing should be a meaningful part of every software test plan. Documentation testing can be approached in two phases. The first phase, formal technical review, examines the document for editorial clarity. The second phase, live test, uses the documentation in conjunction with the actual program. Some of the guidelines are discussed here: Does the documentation accurately describe how to accomplish each mode of use? Is the description of each interaction sequence accurate? Are examples accurate and context based? Are terminology, menu descriptions, and system responses consistent with the actual program? Is it relatively easy to locate guidance within the documentation? Can troubleshooting be accomplished easily with the documentation? Are the document's table of contents and index accurate and complete? Is the design of the document (layout, typefaces, indentation, graphics) conducive to understanding and quick assimilation of information? Are all error messages displayed for the user described in more detail in the document? If hypertext links are used, are they accurate and complete?

Ans. 3 Draw possible data flow diagram of system design for the following application.

Part of the electronic mail system which presents a mail form to a user, accepts the completed form and sends it to the identified destination.
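The data flow the diagram must capture (present a blank form, accept the completed form, send it to the identified destination) can be sketched as executable pseudocode. All names here are hypothetical and purely illustrative; this is a companion to the diagram, not a mail implementation:

```python
# Illustrative sketch of the mail-subsystem data flow described above:
# blank form -> user completes it -> destination checked -> message sent.

def present_mail_form():
    """Process 1: present an empty mail form to the user."""
    return {"to": "", "subject": "", "body": ""}

def accept_completed_form(form, to, subject, body):
    """Process 2: accept the completed form back from the user."""
    form.update({"to": to, "subject": subject, "body": body})
    return form

def send_to_destination(form, mailboxes):
    """Process 3: route the message to the identified destination."""
    if "@" not in form["to"]:
        raise ValueError("invalid destination address")
    mailboxes.setdefault(form["to"], []).append(form)

# Usage: one pass through the data flow.
mailboxes = {}
form = present_mail_form()
form = accept_completed_form(form, "alice@example.com", "Hi", "Hello!")
send_to_destination(form, mailboxes)
assert "alice@example.com" in mailboxes
```

In the diagram, each function corresponds to a process bubble, the form dictionary to the data flowing between them, and `mailboxes` to the destination data store.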

Ans. 4 (a)

Describe the following with respect to Software Development:

Ans. (a) Open Source Development Model: The Open Source software model can be defined as a refinement of the strengths associated with existing software engineering models. Like other models, it attempts to glean the strengths of currently used software engineering models while excluding their weaknesses. This has been accomplished through open communications and the sharing of ideas between major developers of the Open Source movement. The model's structure improves on the incremental, build-and-fix, and rapid-prototype models by creating a cyclic communications path between the project maintainer, the development team, and the users or debuggers. For example, a Unified Modelling Language (UML) tool concept is developed and registered with the Open Source Development Network's SourceForge, an Internet repository for Open Source projects. After the project attracts a development team, the maintainer provides them with an initial release for testing and feature additions. The developers, in turn, inform the maintainer of enhancements, and once these have been coded into the application, a user base is identified for product testing. The user base also has the opportunity to suggest corrections for design flaws and to propose new features they would like the maintainer to incorporate into the project. The improved product is then resubmitted to the development team, and this cycle continues until the project has matured into a stable, releasable product.

Ans. 4 (b) Agile Software Development: Agile software development is a group of software development methodologies based on iterative and incremental development, in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, and a time-boxed iterative approach, and it encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle.
The Agile Manifesto [1] introduced the term in 2001.
Ans. 5 Describe Classic Invalid Assumptions in the context of Process Life Cycle models.

Four unspoken assumptions that have played an important role in the history of software development are considered next.

1. First Assumption: Internal or External Drivers: The first unspoken assumption is that software problems are primarily driven by internal software factors. Granted this supposition, the focus of problem solving will necessarily be narrowed to the software context, thereby reducing the role of people, money, knowledge, etc. in terms of their potential to influence the solution of problems.

2. Second Assumption: Software or Business Processes: A second significant unspoken assumption has been that the software development process is independent of the business processes in organizations. This assumption implied that it was possible to develop a successful software product independently of the business environment or the business goals of a firm. This led most organizations and business firms to separate software development work, people, architecture, and planning from business processes. This separation not only isolated the software-related activities, but also led to different goals, backgrounds, configurations, etc. for software as opposed to business processes.

3. Third Assumption: Processes or Projects: A third unspoken assumption was that the software project was separate from the software process. Thus, a software process was understood as reflecting an area of computer science concern, but a software project was understood as a business school interest. If one were a computer science specialist, one would view a quality software product as the outcome of a development process that involved the use of good algorithms, database design, and code. If one were an MIS specialist, one would view a successful software system as the result of effective software economics and software management.

4. Fourth Assumption: Process Centred or Architecture Centred: There are currently two broad approaches in software engineering; one is process centred and the other is architecture centred. In process-centred software engineering, the quality of the product is seen as emerging from the quality of the process. This approach reflects the concerns and interests of industrial engineering, management, and standardized or systematic quality-assurance approaches such as the Capability Maturity Model and ISO. The viewpoint is that obtaining quality in a product requires adopting and implementing a correct problem-solving approach. If a product contains an error, one should be able to attribute and trace it to an error that occurred somewhere during the application of the process by carefully examining each phase or step in the process.