
Winter 2011 Master of Computer Application (MCA) Semester 3 MC0071 Software Engineering 4 Credits Assignment Set 1 (60 Marks)

1. Explain the following development models: a. Serial or Linear Sequential This model is also called the classic life cycle or the waterfall model. The linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. Modeled after a conventional engineering cycle, the linear sequential model therefore encompasses the following activities: system/information engineering and modeling, software requirements analysis, design, code generation, testing, and support.

b. Incremental When an incremental model is used, the first increment is a core product. That is, basic requirements are addressed, but many supplementary features remain undelivered. The customer uses the core product. As a result of use and/or evaluation, a plan is developed for the next increment. The plan addresses the modification of the core product to better meet the needs of the customer, and the delivery of additional features and functionality. This process is repeated following the delivery of each increment, until the complete product is produced. The incremental process model is iterative in nature, and it focuses on the delivery of an operational product with each increment. Incremental development is particularly useful when staffing is unavailable for a complete implementation by the business deadline that has been established for the project. Early increments can be implemented with fewer people. If the core product is well received, additional staff can be added to implement the next increment. In addition, increments can be planned to manage technical risks. For example, a major system might require the availability of new hardware that is still under development and whose delivery date is uncertain. It might be possible to plan early increments in a way that avoids the use of this hardware, thereby enabling partial functionality to be delivered to end users without inordinate delay.

2. Describe the Object Interface Design. In computing, an object-oriented user interface (OOUI) is a type of user interface based on an object-oriented programming metaphor. In an OOUI, the user interacts explicitly with objects that represent entities in the domain that the application is concerned with. Many vector drawing applications, for example, have an OOUI, the objects being lines, circles and canvases. The user may explicitly select an object, alter its properties (such as size or colour), or invoke other actions upon it (such as to move, copy, or re-align it). If a business application has an OOUI, the user may be selecting and/or invoking actions on objects representing entities in the business domain such as customers, products or orders.
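As a small illustrative sketch (not from the text; the DrawingObject class and its methods are invented for this example), the following Python snippet mimics the OOUI idea of selecting an object and then altering its properties or invoking actions on it:

class DrawingObject:
    """A hypothetical domain object as it might be exposed in an OOUI."""
    def __init__(self, kind, x, y, colour="black", size=1.0):
        self.kind = kind          # e.g. "line", "circle", "canvas"
        self.x, self.y = x, y
        self.colour = colour
        self.size = size
        self.selected = False

    def select(self):
        self.selected = True      # the user explicitly picks the object

    def set_property(self, name, value):
        setattr(self, name, value)   # alter a property such as size or colour

    def move(self, dx, dy):
        self.x += dx              # invoke an action on the selected object
        self.y += dy

# The user selects a circle, recolours it and moves it:
circle = DrawingObject("circle", 10, 20)
circle.select()
circle.set_property("colour", "red")
circle.move(5, -3)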

3. Explain the following testing strategies: a. Top-Down Testing In this approach, testing is conducted from the main module down to the submodules. If a submodule is not yet developed, a temporary program called a stub is used to simulate it. Advantages: it is advantageous if major flaws occur toward the top of the program; once the I/O functions are added, representation of test cases is easier; and an early skeletal program allows demonstrations and boosts morale.
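A minimal sketch of the stub idea (module and function names are invented for illustration): payroll_main below is the module under test, while tax_stub stands in for an undeveloped tax-calculation submodule and simply returns a canned value.

def tax_stub(gross_salary):
    """Stub: simulates the not-yet-developed tax submodule with a fixed answer."""
    return 100.0                      # canned value, no real tax logic

def payroll_main(gross_salary, tax_fn=tax_stub):
    """Main module under top-down test; it calls whatever tax function it is given."""
    return gross_salary - tax_fn(gross_salary)

# Top-down test of the main module using the stub:
assert payroll_main(1000.0) == 900.0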

b. Bottom-Up Testing In this approach, testing is conducted from the submodules up to the main module. If the main module is not yet developed, a temporary program called a driver is used to simulate it (a small driver sketch is given after part d below). Advantages: it is advantageous if major flaws occur toward the bottom of the program; test conditions are easier to create; and observation of test results is easier. c. Thread Testing A variation of top-down testing in which the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. d. Stress Testing Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Stress testing may have a more specific meaning in certain industries, such as fatigue testing for materials.
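As promised above, here is a matching sketch of a driver (again with invented names): the driver plays the role of the missing main module and simply feeds test inputs to the finished submodule.

def tax_module(gross_salary):
    """The completed submodule under bottom-up test."""
    return round(gross_salary * 0.1, 2)

def test_driver():
    """Driver: simulates the missing main module by feeding test inputs."""
    cases = [(1000.0, 100.0), (2500.0, 250.0), (0.0, 0.0)]
    for gross, expected in cases:
        assert tax_module(gross) == expected
    print("all driver cases passed")

test_driver()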

4. Describe the following Risk Reduction Models: a. Prototyping Model The Prototyping Model is a systems development method (SDM) in which a prototype (an early approximation of a final system or product) is built, tested, and then reworked as necessary until an acceptable prototype is finally achieved from which the complete system or product can now be developed. This model works best in scenarios where not all of the project requirements are known in detail ahead of time. It is an iterative, trial-and-error process that takes place between the developers and the users.

b. Spiral Model The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral life-cycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects.

c. Clean Room Model The Cleanroom software engineering process is a software development process intended to produce software with a certifiable level of reliability. The Cleanroom process was originally developed by Harlan Mills and several of his colleagues, including Alan Hevner, at IBM. The focus of the Cleanroom process is on defect prevention rather than defect removal. The name Cleanroom was chosen to evoke the cleanrooms used in the electronics industry to prevent the introduction of defects during the fabrication of semiconductors.

5. Describe the Capability Maturity Model. The Capability Maturity Model (CMM) (a registered service mark of Carnegie Mellon University, CMU) is a development model that was created after study of data collected from organizations that contracted with the U.S. Department of Defense, which funded the research. This model became the foundation from which CMU created the Software Engineering Institute (SEI). The term "maturity" relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes.

6. Describe the following with respect to Software Technology: a. Exponential Growth in Capability According to Moore's law, the density of digital chips doubles approximately every 18 months while cost remains constant, thus increasing computing power but not price. This in turn fuels software technology, as software applications become increasingly powerful on ever faster hardware platforms. No other problem-solving tool exists whose power expands so rapidly yet remains so cheap. When the objective is to reduce business product development cycle time under the constraint of limited financial resources, computer technology allows solutions in less time and at lower cost. Because of this correlation with technology, the issue of the development of problem solving is coupled with technological forecasting for the computer industry. Next, the implications for business problem solving of the evolving power of computing will be considered.

b. Business Problem-Solving Optimization As people solve problems, they rely on computer hardware and software to store and retrieve data; explore solution alternatives; use communication technology to interact with others; apply perceived if-then rules to make decisions; and process data, knowledge, and techniques to implement solutions. Software technology can shorten this process, potentially translating it into a single application requiring only a single stage of inputs, with solutions delivered rapidly.

c. The E-Business Revolution Metcalfe's law observes that networks increase in value with each additional node (user) in proportion to the square of the number of users. This relationship follows because, with n nodes directly or indirectly interconnected, n(n − 1)/2 total possible interconnections are available; for example, 10 users give 45 possible connections, while 100 users give 4,950. The telephone network is a classic instance of this kind of utility behavior. When the network is small, its overall value is relatively limited. As the network encompasses more users, its benefit grows disproportionately, with the individual benefit growing linearly in the number of users, n, and the total network benefit growing quadratically in n. E-business illustrates the impact of networking power on industry. E-business led to the generation of value-chain partnerships, new ways of interacting with customers, and new services. This e-transformation introduced the concept of a virtual organization to business. One consequence is the acceleration of the decision-making process. E-transformation removed or changed the character of business boundaries, including those between the inside and outside of a company, and opened companies to partnerships from unexpected sources, including new relationships with partners, providers, and even competitors. Moreover, e-business capabilities enabled an integrated back-end/front-end architecture that allows online sales and physical activities to support each other in an almost real-time manner.

d. Portability Power One of the most notable characteristics of organizational problem solving is its frequent dependence on physical (as opposed to digital) resources: people, places, devices, connections, and work-flow documents; these extensively bind the problem-solving process to those resources. These bonds can restrict the ability of organizations to take advantage of opportunities that arise, for example, outside regular operating hours or beyond the physical location of the organization.

e. Connectivity Power Software technology facilitates communication between devices in a multimedia fashion. A computer can be attached to a digital camcorder, TV, printer, scanner, external storage device, PDA, or another networked computer and to the Internet simultaneously. The architectural strategy of integrating these capabilities within a single platform can add more than mere entertainment or aesthetic value to business exchanges. It can lead to an environment in which the cycle time and costs of business processes are reduced via an all-in-one architecture. Multimedia data can be captured immediately, edited as required, stored on an electronic portable device, or sent to a vendor, customer, or business partner in almost real time.

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0071 Software Engineering 4 Credits Assignment Set 2 (60 Marks) Answer all Questions

1. Describe the following with respect to Software Design: a. The design process A software development process, also known as a software development life cycle (SDLC), is a structure imposed on the development of a software product. Similar terms include software life cycle and software process. It is often considered a subset of the systems development life cycle. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. Some people consider a life-cycle model a more general term and a software development process a more specific term. For example, there are many specific software development processes that 'fit' the spiral life-cycle model. ISO/IEC 12207 is an international standard for software life-cycle processes. It aims to be the standard that defines all the tasks required for developing and maintaining software.

b. Design Methods The Quality Attribute Workshop (QAW) [1] collects and organizes software quality attribute requirements. The QAW collects, prioritizes, and refines scenarios that can be used to test whether the architecture will meet the requirements. The analysis proper is not part of the QAW process but can be performed as a step in some of the other methods. The QAW is defined by its inputs, outputs, and activities.

c. Design description In software engineering, a design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. A design pattern is not a finished design that can be transformed directly into code; it is a description or template for how to solve a problem that can be used in many different situations. Patterns are thus formalized best practices that you must implement yourself in your application [1]. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Many patterns imply object-orientation or, more generally, mutable state, and so may not be as applicable in functional programming languages, in which data is immutable or treated as such. Design strategies fall into several families: algorithm strategy patterns, addressing concerns related to high-level strategies that describe how to exploit application characteristics on a computing platform; computational design patterns, addressing concerns related to key computation identification; execution patterns, which address concerns related to supporting application execution, including strategies for executing streams of tasks and building blocks to support task synchronization; implementation strategy patterns, addressing concerns related to implementing source code to support (a) program organization and (b) the common data structures specific to parallel programming; and structural design patterns, addressing concerns related to the high-level structures of the applications being developed.
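To make the template idea concrete, here is a small hedged sketch of one classic object-oriented pattern (Strategy) in Python; the product data and strategy functions are invented for the example and are not taken from the text.

from typing import Callable

# Strategy pattern: the ordering policy is an interchangeable object (here, a function).
def by_price(item):
    return item["price"]

def by_name(item):
    return item["name"]

def list_products(products, key_strategy: Callable):
    """Context: delegates the ordering decision to the supplied strategy."""
    return sorted(products, key=key_strategy)

products = [{"name": "pen", "price": 2.0}, {"name": "book", "price": 9.5}]
print(list_products(products, by_price))   # ordered by price
print(list_products(products, by_name))    # ordered by name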

2. Describe the following with respect to Software Testing: a. Control Structure Testing Control structure testing is a group of white-box testing methods. CONDITION TESTING - It is a test case design method. - It exercises the logical conditions in a program module. - It involves testing of both relational expressions and arithmetic expressions. - If a condition is incorrect, then at least one component of the condition is incorrect. - Types of errors in condition testing are Boolean operator errors, Boolean variable errors, Boolean parenthesis errors, relational operator errors, and arithmetic expression errors. - Simple condition: a Boolean variable or a relational expression, possibly preceded by a NOT operator. - Compound condition: composed of two or more simple conditions, Boolean operators and parentheses. - Boolean expression: a condition without relational expressions. b. Black Box Testing Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure, and programming knowledge in general, is not required. The tester is only aware of what the software is supposed to do, but not how; i.e., when he enters a certain input, he gets a certain output, without being aware of how the output was produced in the first place [1]. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. Black-box testing uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the test object's internal structure. c. Boundary Value Analysis Boundary value analysis is a software testing technique in which tests are designed to include representatives of boundary values. Values on the minimum and maximum edges of an equivalence partition are tested. The values could be either input or output ranges of a software component. Since these boundaries are common locations for errors that result in software faults, they are frequently exercised in test cases (a small worked example is given at the end of this answer). d. Testing GUIs To generate a good set of test cases, the test designers must be certain that their suite covers all the functionality of the system and must also be sure that the suite fully exercises the GUI itself. The difficulty in accomplishing this task is twofold: one has to deal with domain size, and then one has to deal with sequences. In addition, the tester faces more difficulty when regression testing is required. e. Testing Documentation and Help Facilities The term software testing conjures images of large numbers of test cases prepared to exercise computer programs and the data that they manipulate. From the definition of software, it is important to note that testing must also extend to the third element of the software configuration: documentation. Errors in documentation can be as devastating to the acceptance of the program as errors in data or source code. Nothing is more frustrating than following a user guide or an on-line help facility exactly and getting results or behaviors that do not coincide with those predicted by the documentation. It is for this reason that documentation testing should be a meaningful part of every software test plan. Documentation testing can be approached in two phases. The first phase, review and inspection, examines the document for editorial clarity. The second phase, live test, uses the documentation in conjunction with the actual program.
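Returning to boundary value analysis (the example promised above): suppose, purely hypothetically, that a component accepts an integer age in the closed range 18 to 60. Boundary value test cases then sit on and just beyond the edges of that equivalence partition:

def accepts_age(age):
    """Hypothetical component: valid ages lie in the closed range 18..60."""
    return 18 <= age <= 60

# Boundary value test cases: the minimum and maximum edges, plus values just outside them.
boundary_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}
for age, expected in boundary_cases.items():
    assert accepts_age(age) == expected
print("boundary value cases passed")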

3. Draw possible data flow diagram of system design for the following application. Part of the electronic mail system which presents a mail form to a user, accepts the completed form and sends it to the identified destination.

4. Describe the following with respect to Software Testing: a. Open Source Development Model Open source software development can be divided into several phases. The phases specified here are derived from Sharma et al. [3]. The phases of open source software development can be displayed in a process-data diagram, along with the corresponding data elements; such a diagram is made using the meta-modeling and meta-process modeling techniques. b. Agile Software Development Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, and a time-boxed iterative approach, and it encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle.

5. Describe Classic Invalid assumptions in the context of Process Life Cycle models. First Assumption: Internal or External Drivers The first unspoken assumption is that software problems are primarily driven by internal software factors. Granted this supposition, the focus of problem solving will necessarily be narrowed to the software context, thereby reducing the role of people, money, knowledge, etc. in terms of their potential to influence the solution of problems. Excluding the people factor reduces the impact of disciplines such as management (people as managers), marketing (people as customers), and psychology (people as perceivers). Excluding the money factor reduces the impact of disciplines such as economics (software in terms of business value, cost and benefit), financial management (software in terms of risk and return), and portfolio management (software in terms of options and alternatives). Excluding the knowledge factor reduces the impact of engineering, social studies, politics, language arts, communication sciences, mathematics, statistics, and application area knowledge (accounting, manufacturing, the World Wide Web, government, etc.).

Second Assumption: Software or Business Processes A second significant unspoken assumption has been that the software development process is independent of the business processes in organizations. This assumption implied that it was possible to develop a successful software product independently of the business environment or the business goals of a firm. This led most organizations and business firms to separate software development work, people, architecture, and planning from business processes. This separation not only isolated the software-related activities, but also led to different goals, backgrounds, configurations, etc. for software as opposed to business processes. As a consequence, software processes tended to be driven by their internal purposes, which were limited to product functionality and not to product effectiveness.

Third Assumption: Processes or Projects A third unspoken assumption was that the software project was separate from the software process. Thus, a software process was understood as reflecting an area of computer science concern, but a software project was understood as a business school interest. If one were a computer science specialist,

one would view a quality software product as the outcome of a development process that involved the use of good algorithms, database design, and code. If one were an MIS specialist, one would view a successful software system as the result of effective software economics and software management.

Fourth Assumption: Process Centered or Architecture Centered There are currently two broad approaches in software engineering: one is process centered and the other is architecture centered. In process-centered software engineering, the quality of the product is seen as emerging from the quality of the process. This approach reflects the concerns and interests of industrial engineering, management, and standardized or systematic quality assurance approaches such as the Capability Maturity Model and ISO. The viewpoint is that obtaining quality in a product requires adopting and implementing a correct problem-solving approach. If a product contains an error, one should be able to attribute and trace it to an error that occurred somewhere during the application of the process by carefully examining each phase or step in the process.

6. Describe the following: a. Importance of people in problem solving process A solution represents the final output from a problem-solving process. To obtain reliable solutions, the problem-solving process must receive all the requisite inputs. The more comprehensive, carefully defined and well-established these inputs are, the more effective the solutions will be. Regardless of whether one uses a manual or computerized system to tackle a problem, the problem-solving process can properly operate only when it has sufficient relevant data, a well-defined problem, and appropriate tools.

b. Human driven software engineering The fields of Human-Computer Interaction (HCI) and Software Engineering (SE) evolved almost independently of each other until the last two decades, when it became obvious that an integrated perspective would benefit the development of interactive software applications as considered in both disciplines. Prominent researchers and practitioners have brought to light the major issues and challenges posed by this integration and have offered a variety of solutions for integrating HCI and SE.

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0072 Computer Graphics 4 Credits (Book ID: B0810) Assignment Set 1 (60 Marks) Answer all Questions 1. Describe the following with respect to development of Hardware and Software for Computer Graphics: a. Output Technology The history of output technology shows a steady development. In the early days of computing, hardcopy devices such as the teletype printer and the line printer were in use with computer-driven CRT displays. In the mid-fifties, command-and-control CRT display consoles were introduced. The display devices developed in the mid-sixties, and in common use until the mid-eighties, are called vector, stroke, line drawing or calligraphic displays. The term vector is used as a synonym for line; a stroke is a short line, and characters are made of sequences of such strokes.

b. Input Technology Input technology has also improved greatly over the years. A number of input devices were developed, among them punched cards, light pens, keyboards, tablets, mice and scanners.

c. Software Technology As with output and input technology, there has been a lot of development in software technology. In the early days only low-level software was available. Over the years, software technology moved from low-level, device-dependent routines to device-independent packages. The device-independent packages are high-level packages which can drive a wide variety of display and printer devices. To meet the need for device-independent packages, standards were created and specifications agreed upon. The first graphics specification to be officially standardized was GKS (the Graphical Kernel System). GKS supports the grouping of logically related primitives such as lines, polygons, and character strings, and their attributes, into collected form called segments. In 1988 a 3D extension of GKS became an official standard, as did a much more sophisticated but even more complex graphics system called PHIGS (Programmer's Hierarchical Interactive Graphics System).

2. Describe the following with respect to Graphics Hardware: a. Graphics Workstation A workstation specifically configured for graphics work such as image manipulation, bitmap graphics ("paint") and vector graphics ("draw") type applications. Such work requires a powerful CPU and a high-resolution display. A graphics workstation is very similar to a CAD workstation and, given the typical specifications of personal computers available in 1999, the distinctions are very blurred and are more likely to depend on the availability of specific software than on any detailed hardware requirements.

b. Raster Display System with Peripheral Display Processor A raster scan, or raster scanning, is the rectangular pattern of image capture and reconstruction in television. By analogy, the term is used for raster graphics, the pattern of image storage and transmission used in most computer bitmap image systems. The word raster comes from the Latin word rastrum (a rake), which is derived from radere (to scrape); see also rastrum, an instrument for drawing musical staff lines. The pattern left by the tines of a rake, when drawn straight, resembles the parallel lines of a raster: this line-by-line scanning is what creates a raster. It is a systematic process of covering the area progressively, one line at a time. Although often a great deal faster, it is similar in the most general sense to how one's gaze travels when one reads lines of text.

c. Color and Grayscale Levels Grayscale is a range of shades of gray without apparent color. The darkest possible shade is black, which is the total absence of transmitted or reflected light. The lightest possible shade is white, the total transmission or reflection of light at all visible wavelengths. Intermediate shades of gray are represented by equal brightness levels of the three primary colors (red, green and blue) for transmitted light, or equal amounts of the three primary pigments (cyan, magenta and yellow) for reflected light.

3. Discuss the following Raster Graphic Algorithms: a. Basic Concepts in Line Drawing Considering the assumptions made in the previous section, most line drawing algorithms use incremental methods. In these methods the line starts at the starting point; then a fixed increment is added to the current point to get the next point on the line. This is continued until the end of the line. Let us see the incremental algorithm.

Incremental Algorithm
1. CurrPosition = Start; Step = Increment
2. If (|CurrPosition − End| < Accuracy) then go to step 5 [This checks whether the current position has reached approximately the end point. If yes, line drawing is completed.] If (CurrPosition < End) then go to step 3 [Here Start < End] If (CurrPosition > End) then go to step 4 [Here Start > End]
3. CurrPosition = CurrPosition + Step; Go to step 2
4. CurrPosition = CurrPosition − Step; Go to step 2
5. Stop.

In the following sections we discuss the line rasterizing algorithms based on the incremental algorithm.
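The incremental algorithm above can be rendered directly in code; the Python sketch below is a hedged, one-dimensional transcription of steps 1 to 5 (CurrPosition, Step and Accuracy follow the names used in the text):

def incremental_walk(start, end, step, accuracy=0.5):
    """Walk from start towards end in fixed increments, following steps 1-5 above."""
    positions = []
    curr_position = start
    while abs(curr_position - end) >= accuracy:      # step 2: stop near the end point
        positions.append(curr_position)
        if curr_position < end:                      # step 3: move forward
            curr_position += step
        else:                                        # step 4: move backward
            curr_position -= step
    positions.append(curr_position)
    return positions

print(incremental_walk(0, 5, 1))   # [0, 1, 2, 3, 4, 5]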

b. Digital Differential Analyzer We know that the slope of a straight line is given as

m = Δy / Δx (1)

This relation can be used to obtain a rasterized straight line. For any given x interval Δx along the line, we can compute the corresponding y interval Δy from equation (1) as

Δy = m · Δx (2)

Similarly, we can obtain the x interval Δx corresponding to a specified Δy as

Δx = Δy / m (3)

Once the intervals are known, the values of the next x and the next y on the straight line can be obtained as follows:

xi+1 = xi + Δx (4)
yi+1 = yi + Δy (5)

Equations (4) and (5) represent a recursion relation for successive values of x and y along the required line. Such a way of rasterizing a line is called a digital differential analyzer (DDA).
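A hedged, runnable sketch of the DDA idea in Python, under the usual assumption that the larger of |dx| and |dy| is taken as the number of unit steps:

def dda_line(x1, y1, x2, y2):
    """Rasterize a line segment with the digital differential analyzer (DDA) method."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(round(x1), round(y1))]
    x_inc, y_inc = dx / steps, dy / steps       # per-step increments (equations 4 and 5)
    x, y, pixels = x1, y1, []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))     # plot the nearest pixel
        x += x_inc
        y += y_inc
    return pixels

print(dda_line(0, 0, 5, 3))   # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3)]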

c. Midpoint Line Drawing Algorithm The line is y = mx + C, where m = dy/dx and C is the y-intercept. So

y = (dy/dx) x + C
F(x, y) = dy·x − dx·y + C·dx = 0
        = ax + by + c, where a = dy, b = −dx, and c = C·dx

F(x, y) is zero on the line, positive for points below the line, and negative for points above the line. Therefore, to apply the midpoint criterion, we have only to compute F(M) = F(xp + 1, yp + 1/2) and to test its sign. The result of the computation F(M) gives the decision, hence it is denoted d (the decision variable). By definition, d = a(xp + 1) + b(yp + 1/2) + c. If d > 0, we choose pixel A; if d < 0, we choose B; and if d = 0, we can choose either A or B.

The value of d for the next grid point (di+1) depends on whether we chose A or B. If B is chosen, the midpoint M is incremented by one step in the x direction. Then

di+1 = F(xp + 2, yp + 1/2) = a(xp + 2) + b(yp + 1/2) + c

We know that di = a(xp + 1) + b(yp + 1/2) + c. Subtracting di from di+1 we get the incremental difference:

di+1 = di + a = di + dy

Similarly, if A is chosen, the midpoint M is incremented by one step in both the x and y directions. Then

di+1 = F(xp + 2, yp + 3/2) = a(xp + 2) + b(yp + 3/2) + c

Subtracting di from di+1 we get the incremental difference:

di+1 = di + a + b = di + dy − dx   (since a = dy and b = −dx)

Therefore, in the midpoint algorithm, at each step the algorithm chooses between two pixels based on the sign of the decision variable calculated in the previous iteration, and then it updates the decision variable by adding either dy (incrB) or dy − dx (incrA) to the previous value, depending on the choice of pixel.
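A hedged Python sketch of this decision-variable loop for the first octant (0 <= dy <= dx); the decision variable is scaled by 2 so that the arithmetic stays in integers, which does not change its sign:

def midpoint_line(x1, y1, x2, y2):
    """Midpoint line drawing for the first octant (0 <= dy <= dx)."""
    dx, dy = x2 - x1, y2 - y1
    d = 2 * dy - dx                 # initial decision variable, 2 * F(x1 + 1, y1 + 1/2)
    incr_b = 2 * dy                 # applied when the lower pixel B is chosen (step in x only)
    incr_a = 2 * (dy - dx)          # applied when the upper pixel A is chosen (step in x and y)
    x, y, pixels = x1, y1, [(x1, y1)]
    while x < x2:
        if d > 0:                   # midpoint lies below the line: choose A
            y += 1
            d += incr_a
        else:                       # choose B
            d += incr_b
        x += 1
        pixels.append((x, y))
    return pixels

print(midpoint_line(0, 0, 5, 3))   # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3)]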

4. Explain the Polygon Seed Filling Algorithms: a. Boundary Fill algorithm Start at a point inside the figure and paint outward with a particular color; filling continues until a boundary color is encountered.

There are two ways.

1. Four-connected fill, where we propagate left, right, up and down:

Procedure Four_Fill (x, y, fill_col, bound_col: integer);
var curr_color: integer;
begin
  curr_color := inquire_color(x, y);
  if (curr_color <> bound_col) and (curr_color <> fill_col) then
  begin
    set_pixel(x, y, fill_col);
    Four_Fill (x+1, y, fill_col, bound_col);
    Four_Fill (x-1, y, fill_col, bound_col);
    Four_Fill (x, y+1, fill_col, bound_col);
    Four_Fill (x, y-1, fill_col, bound_col);
  end
end;

There is the following problem with four_fill: a region whose parts meet only at diagonals (an eight-connected region) will not be filled completely, because the four-connected fill never propagates across a diagonal gap.


This leads to the second approach: 2. Eight-connected fill, where we test all eight adjacent pixels.

So we add the calls: eight_fill (x+1, y-1, ...), eight_fill (x+1, y+1, ...), eight_fill (x-1, y-1, ...), eight_fill (x-1, y+1, ...). Note: the above 4-fill and 8-fill algorithms involve heavy recursion, which may consume memory and time. Better algorithms are faster but more complex; they make use of pixel runs (horizontal groups of pixels).

b. Flood Fill algorithm Flood fill, also called seed fill, is an algorithm that determines the area connected to a given node in a multi-dimensional array. It is used in the "bucket" fill tool of paint programs to determine which parts of a bitmap to fill with color, and in games such as Go and Minesweeper for determining which pieces are cleared. When applied to an image to fill a particular bounded area with color, it is also known as boundary fill (a small iterative sketch follows at the end of this answer). c. Scan Line algorithm The algorithm proceeds in two phases. First, a numerical search is made to find the local maxima of the Y definition function within the desired parameter ranges. These determine when portions of the surface first become visible as the scan plane progresses down the screen. Secondly, the actual scan conversion process is performed, maintaining a list of segments of the surface intersecting the current scan plane. As the scan plane passes local maxima of the Y function, new segments are added to the list.
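As mentioned above, here is a hedged, non-recursive sketch of 4-connected flood fill on a small 2-D grid (a list of lists standing in for the bitmap), using an explicit stack instead of the deep recursion noted earlier:

def flood_fill(grid, x, y, fill_value):
    """Iterative 4-connected flood fill: recolour the region containing (x, y)."""
    target = grid[y][x]
    if target == fill_value:
        return grid
    stack = [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if 0 <= cy < len(grid) and 0 <= cx < len(grid[0]) and grid[cy][cx] == target:
            grid[cy][cx] = fill_value
            # push the four neighbours (left, right, up, down)
            stack.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])
    return grid

image = [[0, 0, 1],
         [0, 1, 1],
         [1, 1, 0]]
flood_fill(image, 0, 0, 7)    # fills the connected block of 0s in the top-left corner
print(image)                  # [[7, 7, 1], [7, 1, 1], [1, 1, 0]]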

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0072 Computer Graphics 4 Credits (Book ID: B0810) Assignment Set 2 (60 Marks) 1. Explain the following: a. Image Processing as Picture Analysis Computer graphics is the collection, combination and representation of real or imaginary objects from their computer-based models; we can therefore say that computer graphics concerns the pictorial synthesis of real or imaginary objects. However, the related field of image processing, sometimes called picture analysis, concerns the analysis of scenes, or the reconstruction of models of 2D or 3D objects from their pictures. This is exactly the reverse process.

Image processing can be classified into: image enhancement; pattern detection and recognition; and scene analysis and computer vision. b. The advantages of Interactive Graphics Today, the high-quality graphics displays of personal computers provide one of the most natural means of communicating with a computer. They provide tools for producing pictures not only of concrete, real-world objects but also of abstract, synthetic objects, such as mathematical surfaces in 4D, and of data that have no inherent geometry, such as survey results. Interactive graphics has the ability to show moving pictures, and thus it is possible to produce animations. With interactive graphics the user can also control the animation by adjusting the speed, the portion of the total scene in view, the geometric relationship of the objects in the scene to one another, the amount of detail shown, and so on. Interactive graphics provides a tool called motion dynamics: with this tool the user can move and tumble objects with respect to a stationary observer, or can make the objects stationary and the viewer move around them. A typical example is the walk-through made by a builder to show a flat's interior and the building's surroundings. In many cases it is also possible to move both the objects and the viewer. c. Representative Uses of Computer Graphics The use of computer graphics is widespread. It is used in various areas such as industry, business, government organizations, education, entertainment and, most recently, the home. Let us discuss representative uses of computer graphics in brief. User friendliness is one of the main factors underlying the success and popularity of any system. It is now a well-established fact that graphical interfaces provide an attractive and easy interaction between users and computers. The built-in graphics provided with user interfaces use visual control items such as buttons, menus, icons and scroll bars, which allow the user to interact with the computer only by mouse-click; typing is necessary only to input text to be stored and manipulated. In industry, business, government and educational organizations, computer graphics is most commonly used to create 2D and 3D graphics of mathematical, physical and economic functions in the form of histograms, bars, and pie-charts. These graphs and charts are very useful for decision making. Desktop publishing on personal computers allows the use of graphics for the creation and dissemination of information; many organizations do in-house creation and dissemination of documents. Desktop publishing allows the user to create documents which contain text, tables, graphs, and other forms of drawn or scanned images or pictures. This is one approach towards office automation.

Computer-aided drafting uses graphics to design components and systems: electrical, mechanical, electromechanical and electronic devices such as automobile bodies, structures of buildings, airplanes, ships, very large-scale integrated (VLSI) chips, optical systems and computer networks. The use of graphics in simulation makes mathematical models and mechanical systems more realistic and easier to study. Interactive graphics supported by animation software has proved its use in the production of animated movies and cartoon films. There has been a lot of development in the tools provided by computer graphics. This allows the user to create artistic pictures which express messages and attract attention; such pictures are very useful in advertising. With computers it is now possible to control various processes in industry from a remote control room. In such cases, process systems and processing parameters are shown on the computer with graphic symbols and identification, which makes it easy for the operator to monitor and control various processing parameters at a time. Computer graphics is also used to represent geographic maps, weather maps, oceanographic charts, contour maps and population density maps.

2. Explain the following in relation to the concept of Filling Rectangles and Polygons: a. Pattern filling To fill an area with a pattern: in transparent mode, perform WritePixel() with the foreground color if pattern = 1 and inhibit WritePixel() if pattern = 0; in opaque mode, perform WritePixel() with the foreground color if pattern = 1 and with the background color if pattern = 0. Where is the pattern anchored? Either at the left-most polygon vertex (this doesn't work with circles) or at the screen origin (fast, seamless connection). Patterns are defined as small M by N bitmaps. Assuming that the Pattern[0, 0] pixel is coincident with the screen origin, we can write a pattern in transparent mode with the following code: if pattern[x mod M, y mod N] then WritePixel(x, y, color); b. Thick primitives Replicating pixels: write multiple pixels at each selected pixel in the scan conversion process; thickness is inconsistent, and gaps may occur when connecting two lines. The moving pen: use a rectangular pen whose center moves along the single-pixel outline of the primitive; the line seems thicker at the endpoints. Filling areas between boundaries: a thick line is drawn as a rectangle with thickness t; a thick circle is drawn as two circles of radius R − t/2 and R + t/2. Approximation by thick polyline: decompose each primitive into rectangular pieces and draw each piece.

c. Line Style and Pen Style Sample code for a write mask of 16 booleans: if bitstring[i mod 16] then WritePixel(x, y, color); the index i is a new variable incremented in the inner loop. Thick styled lines are created as sequences of alternating solid and transparent rectangles: the line style is used to calculate the rectangle for each dash, and the pen style is used to fill each rectangle. 3. Describe the following Line Clipping Algorithms: a. Sutherland and Cohen Subdivision Line Clipping Algorithm The Cohen-Sutherland line clipping algorithm quickly detects and dispenses with two common and trivial cases. To clip a line, we need to consider only its endpoints. If both endpoints of a line lie inside the window, the entire line lies inside the window; it is trivially accepted and needs no clipping. On the other hand, if both endpoints of a line lie entirely to one side of the window, the line must lie entirely outside of the window; it is trivially rejected and needs to be neither clipped nor displayed.
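The trivial accept/reject tests are usually implemented with 4-bit region outcodes. Below is a hedged Python sketch of just that part; the window bounds xmin, ymin, xmax, ymax and the helper names are parameters of the illustration, not taken from the text:

TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1   # one bit per window edge

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Compute the Cohen-Sutherland region code of a point relative to the window."""
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def classify_line(p1, p2, xmin, ymin, xmax, ymax):
    """Return 'accept', 'reject', or 'clip' according to the trivial tests described above."""
    c1 = outcode(*p1, xmin, ymin, xmax, ymax)
    c2 = outcode(*p2, xmin, ymin, xmax, ymax)
    if c1 == 0 and c2 == 0:
        return "accept"        # both endpoints inside the window: trivially accepted
    if c1 & c2:
        return "reject"        # both endpoints on the same outside of the window: trivially rejected
    return "clip"              # otherwise the line must actually be clipped

print(classify_line((2, 2), (8, 8), 0, 0, 10, 10))    # accept
print(classify_line((-5, 2), (-1, 8), 0, 0, 10, 10))  # reject (both left of window)
print(classify_line((-5, 5), (5, 5), 0, 0, 10, 10))   # clip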

b. Generalized Clipping with Cyrus-Beck Algorithm The Cyrus-Beck algorithm is a line clipping algorithm. It was designed to be more efficient than the Sutherland-Cohen algorithm, which uses repetitive clipping. Cyrus-Beck is a general algorithm and can be used with a convex polygon clipping window, unlike Sutherland-Cohen, which can be used only on a rectangular clipping area. Here the parametric equation of a line in the view plane is: P(t) = t·p1 + (1 − t)·p0 = p0 + t(p1 − p0), where 0 <= t <= 1. Now, to find the intersection point with the clipping window, we calculate the value of a dot product. Let pE be a point on the clipping plane E, and let n be the normal of the current clipping plane. Calculate n · (P(t) − pE): if it is > 0, the vector points towards the interior; if it is = 0, the vector is parallel to the plane containing pE; if it is < 0, the vector points away from the interior.

c. Liang-Barsky Line Clipping Algorithm The ideas for clipping lines in Liang-Barsky and Cyrus-Beck are the same; the only difference is that the Liang-Barsky algorithm has been optimized for an upright rectangular clip window, so we will study only the idea of Liang-Barsky. Liang and Barsky have created an algorithm that uses floating-point arithmetic but finds the appropriate end points with at most four computations. This algorithm uses the parametric equations for a line and solves four inequalities to find the range of the parameter for which the line is in the viewport. Let P(x1, y1), Q(x2, y2) be the line which we want to study. The parametric equation of the line segment gives x-values and y-values for every point in terms of a parameter t that ranges from 0 to 1. The equations are x = x1 + (x2 − x1)·t = x1 + dx·t and y = y1 + (y2 − y1)·t = y1 + dy·t. We can see that when t = 0, the point computed is P(x1, y1); and when t = 1, the point computed is Q(x2, y2). 4. Describe the following: a. Three Dimensional Viewing Viewing in 3D involves the following considerations: we can view an object from any spatial position, e.g. in front of an object, behind the object, in the middle of a group of objects, or inside an object; 3D descriptions of objects must be projected onto the flat viewing surface of the output device; and the clipping boundaries enclose a volume of space. b. Specifying an Arbitrary 3D View The projection plane (view plane) is specified by VRP (View Reference Point), VPN (View Plane Normal) and VUP (View Up Vector). The view volume (the visible part of the world) is specified by the window on the view plane, PRP (Projection Reference Point), CW (Center of Window), COP (Center of Projection, for perspective projection; derived) and DOP (Direction of Projection, for parallel projection; derived). PRP and CW are used to determine COP and DOP: for perspective projection, COP = PRP; for parallel projection, DOP = PRP − CW. Coordinate systems: WC, World Coordinates, the normal 3-space (x, y, z); VRC, Viewing Reference Coordinates, defined by VRP, VPN and VUP above, with axes called (u, v, n). Think of the synthetic camera paradigm.

c. Projection

3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0073 System Programming 4 Credits (Book ID: B0811) Assignment Set 1 (60 Marks) 1. Discuss the following: a. Fundamentals of Language Specification A specification language is a formal language used in computer science. Unlike most programming languages, which are directly executable formal languages used to implement a system, specification languages are used during systems analysis, requirements analysis and systems design. Specification languages are generally not directly executed. They describe the system at a much higher level than a programming language. Indeed, it is considered an error if a requirement specification is cluttered with unnecessary implementation detail, because the specification is meant to describe the what, not the how. A common fundamental assumption of many specification approaches is that programs are modelled as algebraic or model-theoretic structures that include a collection of sets of data values together with functions over those sets. This level of abstraction is commensurate with the view that the correctness of the input/output behaviour of a program takes precedence over all its other properties. In the property-oriented approach to specification (taken e.g. by CASL), specifications of programs consist mainly of logical axioms, usually in a logical system in which equality has a prominent role, describing the properties that the functions are required to satisfy, often just by their interrelationship. This is in contrast to so-called model-oriented specification in frameworks like VDM and Z, which consist of a simple realization of the required behaviour. Specifications must be subject to a process of refinement (the filling-in of implementation detail) before they can actually be implemented. The result of such a refinement process is an executable algorithm, which is either formulated in a programming language, or in an executable subset of the specification language at hand. For example, Hartmann pipelines, when properly applied, may be considered a dataflow specification which is directly executable. Another example is the Actor model, which has no specific application content and must be specialized to be executable.

b. Language Processor Development Tools

Language Processor Development tools usually perform sentence detection, tokenization, POS-tagging, text chunking, lemmatisation, coreference analysis and resolution, and named-entity detection among others.

2. Discuss the following: a. Design Specification of Assembler There are six steps to be followed in the design of an assembler: 1. Specify the problem. 2. Specify the data structures. 3. Define the format of the data structures. 4. Specify the algorithm. 5. Look for modularity (the capability of one program to be subdivided into independent programming units). 6. Repeat 1 through 5 on each module. In the first step we have to specify the functions the assembler has to perform. The second step specifies the data the assembler needs for further operations; this will be stored in the form of tables, which is called the database (data structure). The assembler makes use of the information present in the database for further processing. In the third step we specify the structure, or the way data has to be stored in the database: the format of storage and the contents of the database. The fourth step gives the algorithm, which has to be converted to a program to get the result from the assembler. The fifth step divides the program into subproblems, which enables the designer to write the assembler efficiently. Finally, the same steps have to be repeated for the subproblems into which the given program has been divided. b. Design of Single Pass Assembler Single pass translation: LC processing and construction of the symbol table proceed as in two-pass translation. The problem of forward references is tackled using a process called backpatching. The operand field of an instruction containing a forward reference is left blank initially; the address of the forward-referenced symbol is put into this field when its definition is encountered. MOVER BREG, ONE can be only partially synthesized, since ONE is a forward reference. Hence only the instruction opcode and the address of BREG will be assembled, to reside in location 101. The need for inserting the second operand's address at a later stage can be indicated by adding an entry to the Table of Incomplete Instructions (TII). This entry is a pair (<instruction address>, <symbol>), e.g. (101, ONE) in this case. By the time the END statement is processed, the symbol table would contain the addresses of all symbols defined in the source program and TII would contain information describing all forward references. The assembler can now process each entry in TII to complete the concerned instruction. For example, the entry (101, ONE) would be processed by obtaining the address of ONE from the symbol table and inserting it in the operand address field of the instruction with assembled address 101. Alternatively, entries in TII

can be processed in an incremental manner. Thus, when the definition of some symbol symb is encountered, all forward references to symb can be processed.
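A toy, hedged sketch of this backpatching scheme in Python (the statement layout, opcodes and symbol names are invented for the illustration): forward references are recorded in a TII list and patched once the end of the source is reached.

def assemble_single_pass(source):
    """Minimal single-pass assembler; each statement is (label, opcode, operand)."""
    code, symtab, tii = [], {}, []          # object code, symbol table, table of incomplete instructions
    for label, opcode, operand in source:
        if label:
            symtab[label] = len(code)       # record the LC value of this statement
        if opcode == "DC":                  # declare constant: the "instruction" is just the value
            code.append(int(operand))
            continue
        if operand in symtab:
            code.append((opcode, symtab[operand]))
        else:
            tii.append((len(code), operand))        # forward reference: leave the address blank
            code.append((opcode, None))
    for addr, symbol in tii:                         # backpatch after END
        opcode, _ = code[addr]
        code[addr] = (opcode, symtab[symbol])
    return code

program = [(None, "MOVER", "ONE"),     # forward reference to ONE
           (None, "ADD",   "ONE"),
           ("ONE", "DC",    "1")]
print(assemble_single_pass(program))   # [('MOVER', 2), ('ADD', 2), 1]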

3. Discuss the following: a. Macro Parameters Macros may have any number of parameters, as long as they fit on one line. Parameter names are local symbols, which are known within the macro only; outside the macro they have no meaning. b. Nested and Recursive Macro Calls and their expansion Macro bodies may also contain macro calls, and so may the bodies of those called macros, and so forth. If a macro call is seen during the expansion of a macro, the assembler starts immediately with the expansion of the called macro: its expanded body lines are simply inserted into the expanded macro body of the calling macro, until the called macro is completely expanded. Then the expansion of the calling macro is continued with the body line following the nested macro call. c. Flow chart of Design of Macro Preprocessor Implementation
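In place of the flow chart, the toy Python sketch below shows (in a hedged way) the expansion step a macro preprocessor performs, including parameter substitution and a nested call; the macro names and body syntax are invented, and real assemblers use richer definition and argument tables.

MACROS = {
    # name: (parameter list, body lines); &NAME marks a formal parameter
    "INCR": (["&ARG"], ["ADD &ARG, 1"]),
    "DOUBLE_INCR": (["&X"], ["INCR &X", "INCR &X"]),   # body contains a nested macro call
}

def expand(line):
    """Expand one source line; nested macro calls are expanded immediately."""
    parts = line.split()
    name, args = parts[0], [a.strip(",") for a in parts[1:]]
    if name not in MACROS:
        return [line]                          # ordinary statement: copied through unchanged
    params, body = MACROS[name]
    out = []
    for body_line in body:
        for param, arg in zip(params, args):   # substitute actual for formal parameters
            body_line = body_line.replace(param, arg)
        out.extend(expand(body_line))          # recurse to expand nested calls
    return out

for expanded_line in expand("DOUBLE_INCR COUNT"):
    print(expanded_line)
# ADD COUNT, 1
# ADD COUNT, 1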

4. Discuss the following: a. Motivation for a Retargetable Loader (RL) To develop any machine code manipulation tool, understanding the BFFs/BFFt (the source BFF, and the target BFF if developing a binary translator) is a key factor of the overall development. The loader plays an important role in this, as it is the very first light bulb in the development circuit to illuminate the BFF structure. The loader can be quite simple, in that it only tells the programmer where file information is located, but it is this fundamental element that describes the various sections of the BFF (similar to the table of contents in a book) and hence provides the basis for decoding machine instructions. Traditionally, when developing a machine code manipulation tool, we need to write a decoder for every BFF we want to manipulate. For example, suppose we want to write a disassembler for an Intel x86 machine

running DOS and using the EXE binary file format. We will write a loader for (x86, DOS, EXE) and most probably write it with the disassembler as a single program. If we then decide to write another disassembler for the Windows New Executable (NE) BFF, we will need to write another loader for (x86, Windows, NE) and another disassembler, as the interface to information on the BFF will be different. So, if we have n different (M, OS, BFF) tuples, we will need to write n different loaders. Hence, for X machine architectures, Y operating systems and Z BFFs, we will need to write a total of X*Y*Z different loaders, if we want to support all those platforms. The process carried out by all loaders is similar, although the (M, OS, BFF) tuples are different. The ideal view of the above model is to unite all the n different loaders and form a single generic one: a retargetable loader, or RL.

b. Basic Loader Functions The most fundamental functions of a loader are bringing an object program into memory and starting its execution. Related designs include (1) the design of an absolute loader and (2) a simple bootstrap loader. 5. Discuss the following: a. Phases of Compilation The phases of compilation form a pipeline from program text to running code; the analysis phases build a dictionary of names, and the later phases generate code from it:

text of program (characters)
LEXICAL ANALYSIS - groups characters into words; names are entered in the dictionary
SYNTACTIC ANALYSIS - assembles words into sentences
SEMANTIC ANALYSIS - binds meanings to names; declarations are recorded in the dictionary
GLOBAL OPTIMISATION - looks for large optimisations
CODE GENERATION - assigns addresses and emits code
PEEPHOLE OPTIMISATION - looks for small optimisations
run-time debugging

b. Java Compiler and Environment Java refers to several computer software products and specifications from Sun Microsystems (which has since merged with Oracle Corporation) that together provide a system for developing application software and deploying it in a cross-platform environment. Java is used in a wide variety of computing platforms, from embedded devices and mobile phones on the low end to enterprise servers and supercomputers on the high end. While less common on desktop computers, Java applets are sometimes used to provide improved and secure functions while browsing the World Wide Web.

Writing in the Java programming language is the primary way to produce code that will be deployed as Java bytecode. There are, however, bytecode compilers available for other languages such as Ada, JavaScript, Python, and Ruby. Several new languages have been designed to run natively on the Java Virtual Machine (JVM), such as Scala, Clojure and Groovy. Java syntax borrows heavily from C and C++, but object-oriented features are modeled after Smalltalk and Objective-C. Java eliminates certain low-level constructs such as pointers and has a very simple memory model where every object is allocated on the heap and all variables of object types are references. Memory management is handled through integrated automatic garbage collection performed by the JVM. 6. Describe: a. Software Tools for Program Development A software development tool is a set of programs used by a computer programmer to write application programs. Typically, an SDT includes a visual screen builder, an editor, a compiler, a linker, and sometimes other facilities. The term is used by Microsoft, Sun Microsystems, and a number of other companies; it is sometimes seen as "software development kit". b. Programming Environments An integrated development environment (IDE) is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of: a source code editor, build automation tools, and a debugger.

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0073 System Programming 4 Credits (Book ID: B0811) Assignment Set 2 (60 Marks) 1. Explain the following: a. Data Formats Data format in information technology can refer to any one of: a data type, a constraint placed upon the interpretation of data in a type system; a signal (in electrical engineering), a format for signal data used in signal processing; a recording format, a format for encoding data for storage on a storage medium; a file format, a format for encoding data for storage in a computer file; or a container format (digital), a format for encoding data for storage by means of standardized audio/video codecs within a file format.

b. Introduction to RISC & CISC machines A reduced instruction set computer (RISC) is a type of microprocessor that recognizes a relatively limited number of instructions. Until the mid-1980s, the tendency among computer manufacturers was to build increasingly complex CPUs that had ever-larger sets of instructions. At that time, however, a number of computer manufacturers decided to reverse this trend by building CPUs capable of executing only a very limited set of instructions. One advantage of reduced instruction set computers is that they can execute their instructions very fast because the instructions are so simple. Another, perhaps more important, advantage is that RISC chips require fewer transistors, which makes them cheaper to design and produce. Since the emergence of RISC computers, conventional computers have been referred to as CISCs (complex instruction set computers). c. Addressing Modes Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how machine language instructions in that architecture identify the operand (or operands) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand

by using information held in registers and/or constants contained within a machine instruction or elsewhere. In computer programming, addressing modes are primarily of interest to compiler writers and to those who write code directly in assembly language.

2. Explain the following: a. Basic Assembler Functions The basic functions of an assembler are: translating mnemonic codes (instruction names) into opcodes; translating symbolic operands (e.g., variable names) into addresses; choosing the proper instruction format and addressing mode; converting constants (numbers) into their machine representations; and writing the output to object files and listing files.

b. Design of Multi-pass (two pass) Assemblers Implementation So far, we have presented the design and implementation of a two-pass assembler. Here, we will also present the design and implementation of a one-pass assembler, for use when avoiding a second pass over the source program is necessary or desirable.

Multi-pass assembler: allows forward references during symbol definition.

c. Examples: MASM Assembler and SPARC Assembler The SPARC is of the Reduced Instruction Set Computing (RISC) architecture. The theory is that, by having the bare minimum of instructions needed to complete a job, the resulting architecture is faster, as most instructions take only one clock cycle to decode, leading to rapid execution. This is in contrast to CISC machines, which have specialized variable-length instructions that can take multiple clock cycles to decode and execute. The SPARC architecture also prefetches instructions, fetching the next instruction while the current one is executed. This has implications for branch instructions, as the next instruction might not be executed if the branch is taken, and this must be dealt with accordingly.

The Microsoft Macro Assembler is an x86 assembler that uses the Intel syntax for Microsoft Windows. As of 2011 there was a version of the Microsoft Macro Assembler for 16-bit and 32-bit assembly sources, MASM, and a different one, ML64, for 64-bit sources only. References below to MASM include ML64 where appropriate.

3. Explain the following: a. Non-Deterministic Finite Automata A nondeterministic finite automaton differs from a deterministic one in that, for a given state and input symbol, the nondeterministic automaton can transition to more than one state; it may also make epsilon transitions, i.e. change state without consuming any input. NFA and DFA stand for nondeterministic finite automaton and deterministic finite automaton, respectively.

b. Generalized Non-Deterministic Finite Automata (GNFA) A generalized non-deterministic finite automaton (GNFA) is a 5-tuple (S, Σ, T, s, a) where:
- S is a finite set of states;
- Σ is a finite set of symbols (the alphabet);
- T : (S − {a}) × (S − {s}) → R is the transition function;
- s ∈ S is the start state;
- a ∈ S is the accept state;
and R is the collection of all regular expressions over the alphabet Σ. A DFA or NFA can easily be converted into a GNFA, and the GNFA can then be converted into a regular expression by reducing the number of states until S = {s, a}.
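A minimal sketch of simulating an NFA: because a nondeterministic automaton may move to several states on one symbol, the simulation tracks a set of current states rather than a single state. The example automaton below (states, alphabet, and transitions) is invented for illustration and accepts strings over {a, b} that end in "ab"; epsilon transitions are omitted to keep the sketch short.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>

// delta[(state, symbol)] is a SET of next states -- that is exactly where
// the nondeterminism shows up.
using StateSet = std::set<int>;

int main() {
    std::map<std::pair<int, char>, StateSet> delta = {
        {{0, 'a'}, {0, 1}},   // state 0 may stay in 0 or "guess" the final "ab"
        {{0, 'b'}, {0}},
        {{1, 'b'}, {2}}};     // state 2 is the accepting state
    const StateSet start = {0};
    const StateSet accept = {2};

    for (std::string input : {"aab", "ab", "ba"}) {
        StateSet current = start;
        for (char c : input) {
            StateSet next;
            for (int s : current) {
                auto it = delta.find({s, c});
                if (it != delta.end()) next.insert(it->second.begin(), it->second.end());
            }
            current = std::move(next);
        }
        bool accepted = false;
        for (int s : current) if (accept.count(s)) accepted = true;
        std::cout << input << (accepted ? " accepted\n" : " rejected\n");
    }
}
```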

c. Moore Machine and Mealy Machine Finite state machines are not a new technique; they have been around for a long time, and the concept of decomposition should be familiar to people with programming or design experience. There are a number of abstract modeling techniques that may help or spark understanding in the definition and design of a finite state machine; most come from the areas of design or mathematics:
- State Transition Diagram: also called a bubble diagram, shows the relationships between states and the inputs that cause state transitions.
- State-Action-Decision Diagram: simply a flow diagram with the addition of bubbles that show waiting for external inputs.
- Statechart Diagram: a form of UML notation used to show the behavior of an individual object as a number of states and the transitions between those states.
- Hierarchical Task Analysis (HTA): though it does not look at states, HTA is a task decomposition technique that looks at the way a task can be split into subtasks and the order in which they are performed.
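The list above covers general FSM modeling notations; as a minimal sketch of the distinction named in the heading, the toy machine below tracks the parity of 1-bits in a binary string. In the Moore style the output is a function of the current state only, while a Mealy machine would compute its output from the current state and the current input together (indicated by a comment in the code). The states, input string, and outputs are invented for illustration.

```cpp
#include <iostream>
#include <string>

// Moore machine: two states (Even, Odd).  The output ("even"/"odd")
// is attached to the state itself, not to the individual transition.
enum class State { Even, Odd };

static const char* mooreOutput(State s) {            // output = f(state)
    return s == State::Even ? "even" : "odd";
}

int main() {
    std::string bits = "10110";
    State s = State::Even;
    for (char b : bits) {
        // A Mealy machine would emit its output HERE, as a function of the
        // current state AND the input symbol b (output = g(state, input)).
        if (b == '1') s = (s == State::Even) ? State::Odd : State::Even;
    }
    // Moore output after consuming the string: depends on the state only.
    std::cout << "parity of 1-bits in " << bits << " is " << mooreOutput(s) << '\n';
}
```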

4. Explain the following: a. YACC Compiler-Compiler The computer program Yacc is a parser generator developed by Stephen C. Johnson at AT&T for the Unix operating system. The name is an acronym for "Yet Another Compiler Compiler." It generates a parser (the part of a compiler that tries to make syntactic sense of the source code) based on an analytic grammar written in a notation similar to BNF. Yacc used to be available as the default parser generator on most Unix systems. It has since been supplanted as the default by more recent, largely compatible, programs such as Berkeley Yacc, GNU bison, MKS Yacc and Abraxas PCYACC. An updated version of the original AT&T version is included as part of Sun's OpenSolaris project. Each offers slight improvements and additional features over the original Yacc, but the concept has remained the same. Yacc has also been rewritten for other languages, including Ratfor, ML, Ada, Pascal, Java, Python, Ruby and Common Lisp.

b. Interpreters An interpreter normally means a computer program that executes, i.e. performs, instructions written in a programming language. An interpreter may be a program that either:
1. executes the source code directly;
2. translates source code into some efficient intermediate representation (code) and immediately executes this; or
3. explicitly executes stored precompiled code made by a compiler which is part of the interpreter system. [1]

c. Compiler writing tools A compiler is a computer program (or set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language, often having a binary form known as object code). The most common reason for wanting to transform source code is to create an executable program.

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0075 Computer Networks 4 Credits (Book ID: B0813 & B0814) Assignment Set 1 (60 Marks) Book ID: B0813 1. Describe the following: a. Networks Software Networks consist of hardware, such as servers, Ethernet cables and wireless routers, and networking software. Networking software differs from software applications in that it does not perform tasks that end users can see in the way word processors and spreadsheets do. Instead, networking software operates invisibly in the background, allowing the user to access network resources without even knowing the software is operating.

b. Reference Models A reference model in systems, enterprise, and software engineering is an abstract framework or domain-specific ontology consisting of an interlinked set of clearly defined concepts produced by an expert or body of experts in order to encourage clear communication. A reference model can represent the component parts of any consistent idea, from business functions to system components, as long as it represents a complete set. This frame of reference can then be used to communicate ideas clearly among members of the same community. c. Network Standards You can't study networking and its related technologies without very quickly encountering a whole host of standards that are related to the subject, and organizations that create these standards. Network standards facilitate the interoperability of network technologies and are extremely important. It may be an exaggeration to say that networking wouldn't exist without standards, but it isn't an exaggeration to say that networking as we know it would not exist without them. Networks are literally everywhere, and every hardware device or protocol is governed by at least one standard, and usually many.

In this section I provide a brief examination of the often-overlooked subject of network standards and standards organizations. I begin with a background discussion of why standards are important, highlighting the differences between proprietary, de facto and open standards. I give an overview of networking standards in general terms, and then describe the most important international standards organizations and industry groups related to networking. I then describe the structure of the organizations responsible for Internet standards, including the registration authorities and registries that manage resources such as addresses, domain names and protocol values. I conclude with a discussion of the Request for Comments (RFC) process used for creating Internet standards.

2. Discuss the following Switching Mechanisms: a. Circuit switching Seeking out and establishing a physical copper path end-to-end [historic definition]. Circuit switching implies the need to first set up a dedicated, end-to-end path for the connection before the information transfer takes place. Once the connection is made, the only delay is propagation time.

b. Message switching A store-and-forward network where the block of transfer is a complete message. Since messages can be quite large, this can cause buffering problems and high mean delay.

c. Packet switching A store-and-forward network where the block of transfer is a complete packet. A packet is a variable length block of data with a tight upper bound. Using packets improves mean message delay.
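As a rough worked example of why packetizing reduces delay in a store-and-forward network (a sketch that ignores propagation delay, header overhead, and queuing, with made-up numbers): sending one M-bit message over h hops on R-bit/s links takes h·M/R seconds, whereas splitting it into p packets lets the hops work in a pipeline, taking (h + p − 1)·(M/p)/R seconds.

```cpp
#include <iostream>

int main() {
    // Hypothetical numbers: a 1 Mbit message, 4 hops, 1 Mbit/s links.
    const double M = 1e6;   // message size, bits
    const double R = 1e6;   // link rate, bits per second
    const int    h = 4;     // number of store-and-forward hops
    const int    p = 10;    // number of packets the message is split into

    // Message switching: the whole message is received and retransmitted
    // at every hop, so the transmission delays add up.
    double messageDelay = h * (M / R);

    // Packet switching: packets flow through the hops in a pipeline; only
    // the first packet pays the full per-hop delay, the rest follow behind.
    double packetDelay = (h + p - 1) * ((M / p) / R);

    std::cout << "message switching: " << messageDelay << " s\n";  // 4 s
    std::cout << "packet switching:  " << packetDelay  << " s\n";  // 1.3 s
}
```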

3. Explain the different classes of IP addresses with suitable examples. Given an IP address, its class can be determined from the three high-order bits. Figure 1 shows the significance of the three high-order bits and the range of addresses that fall into each class; for informational purposes, Class D and Class E addresses are also shown there.

In a Class A address, the first octet is the network portion, so the Class A range in Figure 1 is 1.0.0.0 - 127.255.255.255. Octets 2, 3, and 4 (the next 24 bits) are for the network manager to divide into subnets and hosts as he/she sees fit. Class A addresses are used for networks that have more than 65,536 hosts (actually, up to 16,777,214 hosts). In a Class B address, the first two octets are the network portion, so the Class B range in Figure 1 is 128.0.0.0 - 191.255.255.255. Octets 3 and 4 (16 bits) are for local subnets and hosts. Class B addresses are used for networks that have between 256 and 65,534 hosts. In a Class C address, the first three octets are the network portion, giving the range 192.0.0.0 - 223.255.255.255. Octet 4 (8 bits) is for local subnets and hosts, which is perfect for networks with up to 254 hosts.
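The class of an address can be read directly from the leading bits of the first octet; a small sketch follows (the sample addresses are arbitrary):

```cpp
#include <cstdint>
#include <iostream>

// Classify an IPv4 address by the high-order bits of its first octet:
// Class A starts with 0, B with 10, C with 110, D with 1110, E with 1111.
char ipClass(uint8_t firstOctet) {
    if ((firstOctet & 0x80) == 0x00) return 'A';   // 0xxxxxxx ->   0 - 127
    if ((firstOctet & 0xC0) == 0x80) return 'B';   // 10xxxxxx -> 128 - 191
    if ((firstOctet & 0xE0) == 0xC0) return 'C';   // 110xxxxx -> 192 - 223
    if ((firstOctet & 0xF0) == 0xE0) return 'D';   // 1110xxxx -> 224 - 239
    return 'E';                                    // 1111xxxx -> 240 - 255
}

int main() {
    for (int octet : {10, 172, 192, 224, 250})
        std::cout << octet << ".x.x.x is Class "
                  << ipClass(static_cast<uint8_t>(octet)) << '\n';
}
```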

4. Discuss the following with respect to Internet Control Message Protocols: a. Congested and Datagram Flow control DCCP provides a way to gain access to congestion control mechanisms without having to implement them at the Application Layer. It allows for flow-based semantics as in the Transmission Control Protocol (TCP), but does not provide reliable in-order delivery. Sequenced delivery within multiple streams, as in the Stream Control Transmission Protocol (SCTP), is not available in DCCP. b. Route change requests from routers Used only by a router to suggest a more suitable route to the originator (also called an ICMP redirect). PING sends an ICMP echo request to a remote host, which then returns an ICMP echo reply to the sender. Every TCP/IP node is supposed to implement ICMP and respond to ICMP echo requests.

c. Detecting circular or long routes When a router is unable to deliver a datagram, it can return an ICMP type 3 (destination unreachable) message with a failure code; circular or excessively long routes are caught by the time-to-live field, which triggers an ICMP time exceeded message when it is decremented to zero. The Internet header plus the first 64 bits of the original datagram are included so that the sender can identify the datagram that caused the problem.

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0075 Computer Networks 4 Credits (Book ID: B0813 & B0814) Assignment Set 2 (60 Marks)

1. Discuss the following design issues of DLL: a. Framing A common practice in telecommunications, for example in T-carrier, is to insert, in a dedicated time slot within the frame, a non-information bit or framing bit that is used for synchronization of the incoming data with the receiver. In a bit stream, framing bits indicate the beginning or end of a frame. They occur at specified positions in the frame, do not carry information, and are usually repetitive. b. Error control The data link layer provides error control (automatic repeat request, ARQ) in addition to the ARQ provided by some transport-layer protocols, the forward error correction (FEC) techniques provided on the physical layer, and the error detection and packet canceling provided at all layers, including the network layer. Data-link-layer error control (i.e. retransmission of erroneous packets) is provided in wireless networks and V.42 telephone network modems, but not in LAN protocols such as Ethernet, since bit errors are so uncommon in short wires; in that case, only error detection and canceling of erroneous packets are provided. c. Flow control The data link layer also provides flow control, in addition to the flow control provided on the transport layer. As noted above, data-link-layer error control is not used in LAN protocols such as Ethernet, but in modems and wireless networks. 2. Discuss the following with respect to Routing algorithms: a. Shortest path algorithm

An algorithm designed essentially to find a path of minimum length between two specified vertices of a connected weighted graph. A good algorithm for this problem was given by E. W. Dijkstra in 1959 (a small sketch follows below). b. Flooding A flooding algorithm is an algorithm for distributing material to every part of a graph. The name derives from the concept of inundation by a flood. Flooding algorithms are used in computer networking and graphics. Flooding algorithms are also useful for solving many mathematical problems, including maze problems and many problems in graph theory.
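A compact sketch of the shortest-path computation described in (a), using Dijkstra's algorithm on a small invented graph (the adjacency list and link costs are arbitrary):

```cpp
#include <iostream>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

int main() {
    // Small weighted graph, invented for illustration: adj[u] = {(v, cost), ...}
    const int n = 5;
    std::vector<std::vector<std::pair<int,int>>> adj(n);
    auto edge = [&](int u, int v, int w) { adj[u].push_back({v, w}); adj[v].push_back({u, w}); };
    edge(0, 1, 4); edge(0, 2, 1); edge(2, 1, 2); edge(1, 3, 5); edge(2, 3, 8); edge(3, 4, 3);

    const int src = 0;
    std::vector<int> dist(n, std::numeric_limits<int>::max());
    dist[src] = 0;

    // Min-priority queue of (distance so far, vertex).
    using Item = std::pair<int,int>;
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    pq.push({0, src});

    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;              // stale queue entry
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {        // relax edge (u, v)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    for (int v = 0; v < n; ++v)
        std::cout << "shortest cost " << src << " -> " << v << " = " << dist[v] << '\n';
}
```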

c. Distance vector routing Distance vector routing is iterative, asynchronous, and distributed. Each node X maintains a distance table D^X(Y,Z): the cost of the direct link from X to Z plus Z's currently known minimum-cost path to Y, i.e.

D^X(Y,Z) = c(X,Z) + min_w { D^Z(Y,w) }

Initialization: D^X(*,v) = infinity and D^X(v,v) = c(X,v) for each directly connected neighbor v. Node X sends min_w D^X(y,w) to each neighbor whenever it changes, and recomputes its table when a link cost c(X,V) changes or a neighbor sends an update.
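A minimal sketch of one Bellman–Ford-style distance-vector update at a single node X, applying the equation above; the link costs, neighbor names, and advertised vectors are invented for illustration:

```cpp
#include <iostream>
#include <limits>
#include <map>
#include <string>

int main() {
    const int INF = std::numeric_limits<int>::max() / 2;

    // Cost of X's direct links c(X,Z) to its neighbors (invented values).
    std::map<std::string, int> linkCost = {{"A", 1}, {"B", 4}};

    // Distance vectors most recently advertised by the neighbors:
    // neighborVector[Z][Y] = Z's current estimate of its cost to Y.
    std::map<std::string, std::map<std::string, int>> neighborVector = {
        {"A", {{"Y", 6}, {"W", 2}}},
        {"B", {{"Y", 1}, {"W", 9}}}};

    // D^X(Y) = min over neighbors Z of ( c(X,Z) + D^Z(Y) )
    for (std::string dest : {"Y", "W"}) {
        int best = INF;
        std::string via;
        for (const auto& [z, cost] : linkCost) {
            auto it = neighborVector[z].find(dest);
            if (it == neighborVector[z].end()) continue;
            if (cost + it->second < best) { best = cost + it->second; via = z; }
        }
        std::cout << "X -> " << dest << ": cost " << best << " via " << via << '\n';
    }
}
```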

3. Describe the following: a. IGP An interior gateway protocol (IGP) is a routing protocol that is used to exchange routing information within an autonomous system (AS). In contrast, an Exterior Gateway Protocol (EGP) is for determining network reachability between autonomous systems and makes use of IGPs to resolve routes within an AS.

b. OSPF OSPF is an interior gateway protocol that routes Internet Protocol (IP) packets solely within a single routing domain (autonomous system). It gathers link state information from available routers and constructs a topology map of the network. The topology determines the routing table presented to the Internet Layer, which makes routing decisions based solely on the destination IP address found in IP packets. OSPF was designed to support variable-length subnet masking (VLSM) and Classless Inter-Domain Routing (CIDR) addressing models.

c. OSPF Message formats For routing multicast IP traffic, OSPF supports the Multicast Open Shortest Path First protocol (MOSPF) as defined in RFC 1584. [5] Neither Cisco nor Juniper Networks include MOSPF in their OSPF implementations; PIM (Protocol Independent Multicast) in conjunction with OSPF or other IGPs (Interior Gateway Protocols) is widely deployed. The OSPF protocol, when running on IPv4, can operate securely between routers, optionally using a variety of authentication methods to allow only trusted routers to participate in routing. OSPFv3, running on IPv6, no longer supports protocol-internal authentication; instead, it relies on IPv6 protocol security (IPsec). OSPF version 3 introduces modifications to the IPv4 implementation of the protocol. [2] Except for virtual links, all neighbor exchanges use IPv6 link-local addressing exclusively. The IPv6 protocol runs per link, rather than based on the subnet. All IP prefix information has been removed from the link-state advertisements and from the Hello discovery packet, making OSPFv3 essentially protocol-independent. Despite the expanded IP addressing to 128 bits in IPv6, area and router identifications are still based on 32-bit values.

4. Describe the following with respect to Internet Security: a. Cryptography More generally, cryptography is about constructing and analyzing protocols that overcome the influence of adversaries and which are related to various aspects of information security such as data confidentiality, data integrity, and authentication. [4] Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. b. DES Algorithm The DES (Data Encryption Standard) algorithm is the most widely used encryption algorithm in the world. For many years, and among many people, "secret code making" and DES have been synonymous. And despite the recent coup by the Electronic Frontier Foundation in creating a $220,000 machine to crack DES-encrypted messages, DES will live on in government and banking for years to come through a life-extending version called "triple-DES."

DES is a block cipher, meaning it operates on plaintext blocks of a given size (64 bits) and returns ciphertext blocks of the same size. Thus DES results in a permutation among the 2^64 (read this as: "2 to the 64th power") possible arrangements of 64 bits, each of which may be either 0 or 1. Each block of 64 bits is divided into two blocks of 32 bits each, a left half block L and a right half R. (This division is only used in certain operations.)
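DES itself has 16 rounds, eight S-boxes and a key schedule, which is far too much to reproduce here; the sketch below shows only the Feistel structure the passage describes (splitting the 64-bit block into the 32-bit halves L and R and swapping them each round), with a toy round function and invented round keys standing in for the real DES f-function and key schedule.

```cpp
#include <cstdint>
#include <iostream>

// Toy stand-in for the DES round function f(R, K).  The real f expands R,
// XORs it with the round key, runs it through eight S-boxes and a
// permutation; here we just mix the bits a little for illustration.
static uint32_t toyF(uint32_t r, uint32_t k) {
    return ((r << 5) | (r >> 27)) ^ k;   // rotate left 5, then XOR with key
}

int main() {
    uint64_t block = 0x0123456789ABCDEFULL;             // 64-bit plaintext block
    uint32_t L = static_cast<uint32_t>(block >> 32);    // left 32-bit half
    uint32_t R = static_cast<uint32_t>(block);          // right 32-bit half

    // Hypothetical round keys; DES derives 16 of these from the 56-bit key.
    uint32_t roundKeys[4] = {0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0x96969696};

    for (uint32_t k : roundKeys) {
        // One Feistel round: L_i = R_{i-1},  R_i = L_{i-1} XOR f(R_{i-1}, K_i)
        uint32_t newL = R;
        uint32_t newR = L ^ toyF(R, k);
        L = newL;
        R = newR;
    }
    uint64_t cipherBlock = (static_cast<uint64_t>(L) << 32) | R;
    std::cout << std::hex << "toy 'ciphertext' block: " << cipherBlock << '\n';
}
```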

Winter 2011 Master of Computer Application (MCA) Semester 3 MC0074 Statistical and Numerical methods using C++ 4 Credits (Book ID: B0812) Assignment Set 1 (60 Marks)
1. A book shelf containing 20 books of which 12 are on circuit theory and 8 are on Mathematics. If three books are taken out at random, find the probability that all the three are on the same subject.
There are 20 books: 12 on circuit theory and 8 on mathematics, and 3 are picked at random. The probability that all 3 are on circuit theory is 12/20 * 11/19 * 10/18, and the probability that all 3 are on mathematics is 8/20 * 7/19 * 6/18. Since the two events are mutually exclusive, the probability that all three are on the same subject is (12/20 * 11/19 * 10/18) + (8/20 * 7/19 * 6/18) = 1320/6840 + 336/6840 = 1656/6840 = 23/95 ≈ 0.2421.

The same reasoning can be checked with smaller numbers. Assume 5 books in total: 3 on circuit theory (a, b, c) and 2 on mathematics (d, e), and pick 2 at random. The probability that both are on circuit theory is 3/5 * 2/4 = 6/20, the probability that both are on mathematics is 2/5 * 1/4 = 2/20, so the probability that both are on the same subject is 6/20 + 2/20 = 8/20. Listing the ordered pairs of distinct books gives 20 permutations: ab, ac, ad, ae, ba, bc, bd, be, ca, cb, cd, ce, da, db, dc, de, ea, eb, ec, ed. The pairs drawn entirely from the circuit theory books {a, b, c} are ab, ba, ac, ca, bc, cb (6 of 20), and the pairs drawn entirely from the mathematics books {d, e} are de and ed (2 of 20), for a total of 8/20, which matches the result derived above.
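A quick numerical check of the result, computing the same probability with binomial coefficients, C(12,3)/C(20,3) + C(8,3)/C(20,3):

```cpp
#include <iostream>

// n choose k, computed iteratively (exact for these small values).
static long long choose(int n, int k) {
    long long result = 1;
    for (int i = 1; i <= k; ++i)
        result = result * (n - k + i) / i;
    return result;
}

int main() {
    long long total      = choose(20, 3);   // ways to pick any 3 of 20 books
    long long allCircuit = choose(12, 3);   // all 3 on circuit theory
    long long allMaths   = choose(8, 3);    // all 3 on mathematics

    double p = static_cast<double>(allCircuit + allMaths) / total;
    std::cout << "P(all three on the same subject) = "
              << (allCircuit + allMaths) << "/" << total
              << " = " << p << '\n';        // 276/1140 = 23/95 ≈ 0.2421
}
```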
