
Answers to the Question Bank 2010
PART TIME MBA | 2 YEAR | 1ST SEMESTER
INTRODUCTION TO COMPUTERS | MIS
Prof. Max William DCosta | max.dcosta77@gmail.com | (m) 9821439790
[Compiled on July 15, 2010]



Q1 Using specific examples, describe how computerization would help your department perform more efficiently and effectively. Also explain how computerization will help better decision-making, analysis and planning?

Q2 In what ways will the use of IT and Internet enhance your job function as a Middle Manager? Discuss with examples with respect to either HRD Function or Marketing Function or Finance Function?

Q3 Giving suitable examples explain how and why IT and computers have contributed in increasing productivity and competitiveness of organizations in doing business.

Q4 Write a note on how IT can help an organization in gaining competitive advantage.

Q87 How does IT contribute towards increasing productivity and in doing business better? Explain with reference to any one function in your organization in detail.


(Ans 1-4, 87)

Scenario before IT:

- More paper work
- Lack of storage space
- Communication was expensive and huge telephone bills were incurred
- Redundancy of information
- Lack of information security

Scenario after IT:

- Less paper work, due to the use of computerized transactions
- Storage capacity has greatly increased, allowing huge amounts of data running into terabytes to be stored
- Streamlined communication and convergence have enabled different modes of communication, and different devices, to communicate and share information across the globe
- Communication today is less expensive and in many cases almost free, as many calls happen online using free software such as Skype and other online chat and messenger applications
- The use of databases has helped optimize data and reduce redundancy, offering easy and powerful access to information warehouses and showing different users the data they seek in different ways, wherever and whenever they need it
- Sophisticated databases have also brought in data security, leading to overall information security and safety; only authorized users can communicate with and access data from the databases
- Data analysis and the generation of metrics have helped users and organizations make their decision making more effective and accurate
- Information has become more accessible, reachable, portable and editable across mediums, devices and users







IT and the Internet have enhanced the job functions of middle managers working in the HR, Finance and Marketing functions in the following ways:

HRD Function

- Recruitment processes have been automated: software tools like CV scanners and ATS (Applicant Tracking Systems) scan and filter the appropriate CVs from thousands of applicants and then track each applicant's candidature from recruitment through selection in the organization.
- The payroll function also uses IT to automate the disbursement of salaries, claims and other financial transactions that concern the employees, the organization and its stakeholders.
- Online banking ensures that organizations can deposit employee salaries in banks and that employees can transact online without having to physically visit a bank or deal in physical cash.
- The use of an E-HRIS (Electronic Human Resource Information System) has automated employee-HR communication. It serves as a virtual HR manager that takes care of employee-related transactions online, ranging from entering daily time sheets, claiming late-sitting reimbursements, Mediclaim and LTA claims, and posting grievances online, to applying for leave online.
- Allocation of employees, and online skill matrices mapping employees to their skill sets, help the HR function use the organization's human capital optimally.
- The use of intranets has empowered HR managers to leverage this platform for internal communication within the organization.
- Today employee satisfaction and other key surveys are conducted online using the company's intranet, and the data collected is collated, analyzed and used for decision making by the HR function.
- Enterprise HR applications like SAP and Oracle PeopleSoft have enabled HR functions to work with human capital across business functions, branches and continents in a synchronized and effective manner.

Finance Function

- Software like Tally and other financial accounting packages has helped finance departments and financial accountants greatly by automating financial transactions and ensuring their accuracy and integrity.
- Today balance sheets and other financial records can be easily created, maintained and distributed online to stakeholders.
- Online banking has enabled transactions between finance departments and banks.
- The financial modules of Enterprise Resource Planning applications have empowered organizations to manage their finances effectively.
- Automated tools help finance managers make projections and predictions, thereby guiding overall organizational goals.
- E-Finance has reduced paperwork and created a revolution in which digital money and transactions have greatly impacted global electronic commerce and global transactions.

Marketing Function

- Today marketing managers use tools like e-Marketing to reach out to customers across the globe.
- Marketing has leveraged the power of IT and the Internet to bring together new markets and new customers, creating a virtual marketplace where users and organizations exchange goods, services, information, ideas etc.
- Customers can now send product-related suggestions and feedback, and even register their issues or complaints, directly with the manufacturing or product development departments using CRM (Customer Relationship Management) software.
- Middle to senior managers can use the Internet and the various communication facilities that come with it to network with their user base.
- Online banner advertisements can rake in a lot of money and generate new revenue streams.
- Traditional brick and mortar business is now being complemented, and in some cases eventually replaced, by online business and e-Commerce.
- Enterprise Resource Planning applications like SAP and Oracle have modules for Marketing and Distribution that help the materials planning, logistics, supply chain and CRM functions as well.
- Marketing managers can now IT-enable small and third-party vendors to connect to their organization's network so that the vendors can check the organization's inventory levels and replenish them continuously as and when required. This saves cost, as goods do not have to be stored unnecessarily, incurring warehousing expenses; instead the vendors can operate on a Just in Time basis.
- Marketing of services in the services industry can also be greatly enhanced by the use of IT and the Internet, as all services can now be accessed online and users can get help both online and offline.

This is how computerization and the advent of IT and the Internet have helped departments like these perform more efficiently and enhanced the role of middle-level managers in organizations.



Q5 Write a note on Operating Systems
Q6 What are the purposes of the Operating System? Explain the functions of the Operating System?
Q7 Explain the functions of the Operating System?
Q8 Enumerate the key purpose and name one example of the Operating System?

(Ans 5-8)



Definition of an Operating System:

An operating system (OS) is a set of computer programs that manages the hardware and software resources of a computer. It is system software that manages the operations of a computer; without it, a computer cannot be started.

It is the most important program that runs on a computer. Every general-purpose computer must
have an operating system to run other programs. Operating systems perform basic tasks, such as
recognizing input from the keyboard, sending output to the display screen, keeping track of files
and directories on the disk, and controlling peripheral devices such as disk drives and printers.

For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop -- it makes sure that the different programs and users running at the same time do not interfere with each other. The operating system is also responsible for security, ensuring that unauthorized users do not access the system.

Most operating systems come with an application that provides an interface to the OS-managed resources. These applications originally used command line interpreters (e.g. DOS, the Disk Operating System) as the basic user interface, but since the mid-1980s they have been implemented as graphical user interfaces (GUIs) for ease of operation. In this view the operating system itself has no user interface; its direct user is an application, not a person. The operating system forms a platform for other system software and for application software. Windows, Linux and Mac OS are some of the most popular operating systems.


Types of Operating Systems

a) Single User Operating System
b) Multi-User Operating System

Single User Operating System

- This OS is used for standalone PCs (e.g. MS DOS)
- OS/2 and Windows NT are also single-user, multi-tasking operating systems for microcomputers
- Most personal computers and workstations are single-user computer systems that run a single-user OS

Multi-User Operating System

- This OS is used for computers that have many terminals connected to them, e.g. Linux, Novell NetWare, Unix
- All mainframes and minicomputers are multi-user systems that run a multi-user OS


Operating System Techniques

Multiprogramming
- In which a single CPU works on two or more programs
- In this technique the OS keeps the CPU busy by allowing either batch
multiprogramming or timesharing multiprogramming

Multiprocessing
- Refers to a computer's ability to support more than one process (program) at the same time.
- It is also referred to as parallel processing.
- Unix is one of the most widely used multiprocessing systems.

Multitasking
- It is the computer's ability to execute more than one task at the same time.
- In multitasking only one CPU is involved, but it switches from one program to another so quickly that it gives the appearance of executing all of the programs at the same time.

Multithreading
- It is the ability of an OS to execute different parts of a program, called threads, simultaneously (a small code sketch follows this list).
- Programmers must carefully design their programs so that all the threads can run at the same time without interfering with each other.

Real Time
- It means occurring immediately.
- Real time operating systems are systems that respond to input immediately.
- They are useful for tasks like GPS, Navigation in which computers are required
to react to a steady flow of new information without interruption or delays.
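
To make multithreading concrete, here is a minimal sketch in Python (any language with threads would do; the worker function and its messages are invented for the example). Two threads run the same function and the operating system and runtime interleave their execution:

    import threading
    import time

    # Two threads run the same worker "at the same time"; the OS and the
    # runtime switch between them, which is what multithreading looks like
    # from the program's point of view.
    def worker(name, delay):
        for step in range(3):
            print(f"{name}: step {step}")
            time.sleep(delay)

    t1 = threading.Thread(target=worker, args=("thread-A", 0.1))
    t2 = threading.Thread(target=worker, args=("thread-B", 0.1))
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print("both threads finished")

Running it shows the two threads' messages interleaved rather than one thread finishing before the other starts.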


Functions of an Operating System

MET Part Time MBA Answers to Questions in the Question Bank (Class Notes 2010)

-- Author: Prof. Max William DCosta (max.dcosta77@gmail.com)

Compiled on: July 15, 2010 by Prof. Max DCosta @ MET SOM Page 7 of 138
a) Process Management
- OS helps the CPU allocate resources for executing programs / processes. A process is
a program in execution. E.g. Spooling, printing etc. The Operating System helps in the
creation, deletion, suspension, resumption and synchronization of processes.

b) Memory Management
- Memory is a large array of words and bytes each with its own address. The CPU reads
from and writes to memory. The OS keeps track of currently used memory and who is
using it. It decides which processes to load in memory when memory space becomes
available and it allocates & de-allocates memory space as needed.

c) Storage Management
- The OS deals with the allocation and reclamation of storage space when a process /
program is opened or terminated. The OS helps in reading of data from the disk to the
main memory (RAM) in order to execute processes.

d) I/O (Input / Output) System
- The OS helps the I/O devices to communicate easily as it hides the peculiarities and
device driver details from the end user.

e) File Management
- The OS provides a logical view of information storage; it maps files onto physical devices. It helps in the creation and deletion of files and directories, the manipulation of files and directories on storage, and also performs backups. It offers a user-friendly interface, such as Windows Explorer, for end users to work with files easily (see the file-handling sketch after this list).

f) Protection System
- The OS protects processes from interference of other processes and it checks for
authorization of processes and allows them to access CPU resources.

g) Networking
- Distributed computing systems require Multi-user OS for allowing processes and users
the access to shared resources on the network.

h) System and Resource Monitoring
- The OS helps monitor resource usage and provides information on system
performance.
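
For illustration, a minimal Python sketch of the file management services described in point (e); the folder and file names are made up for the example:

    import os
    from pathlib import Path

    # Application programs rely on the OS's file management services; this
    # sketch asks the OS to create, list and delete a file and a directory.
    work_dir = Path("demo_folder")
    work_dir.mkdir(exist_ok=True)              # OS creates the directory entry

    report = work_dir / "report.txt"
    report.write_text("quarterly figures...")  # OS allocates space and writes

    print(os.listdir(work_dir))                # OS reads the directory listing

    report.unlink()                            # OS reclaims the file's space
    work_dir.rmdir()                           # and removes the directory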



Q9

Explain the importance of Documentation / System Documentation.


(Ans 9)

Definition of Documentation:

Documentation can be described as the information for Historical, Operational and Reference
purposes. Documents establish and declare the performance criteria of a system. Documentation
explains the system and helps the people understand and interact with the system.

Types of Documentation:

a) Program Documentation:

This begins in the systems analysis phase and continues during systems implementation. It includes process descriptions and report layouts. Programmers provide documentation with comments that make it easier to understand and maintain the program (a short commented code sketch is given at the end of this answer). A systems analyst must verify that the program documentation is accurate and complete.

b) System Documentation:

It describes the system's functions and how they are implemented. Most systems documentation is prepared during the systems analysis and systems design phases. Documentation consists of:

- Data Dictionary Entries
- Data Flow Diagrams
- Object Models
- Screen Layouts
- Source Documents
- Initial Systems Request

c) Operations Documentation:

Typically used in a minicomputer or a mainframe environment with centralized processing
and batch job scheduling. Such documentation tells the IT Operations group how and
when to run programs. A common example is a program run sheet, which contains the information needed for processing and distributing output.

d) User Documentation:

User documentation includes the following items, namely:

- A complete System Overview
- Source document description with samples
- Menu and data entry screens
- Reports that are available with samples
- Security and Audit trail information
- Responsibility for Input / Output and Processing
- Procedure for handling bugs, changes and problems
- Examples of exceptions and error situations
- Frequently Asked Questions (FAQs)
- Explanation of Help and Updating the Manual
- Online documentation to empower users and reduce the need for direct IT
support
- Global and Contextual Help
- Interactive Tutorials
- Hints and Tips
- Hypertext and Interactive Tutorials
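
As an illustration of the program documentation mentioned under point (a), here is a small Python sketch (the function and the loan figures are hypothetical) showing the kind of docstring and inline comments that a later maintainer relies on:

    def monthly_payment(principal, annual_rate, months):
        """Return the fixed monthly instalment for a loan.

        principal   -- amount borrowed
        annual_rate -- yearly interest rate, e.g. 0.12 for 12%
        months      -- number of monthly instalments
        """
        monthly_rate = annual_rate / 12
        if monthly_rate == 0:
            return principal / months          # interest-free case
        return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)

    # Example: a loan of 100,000 at 12% per year repaid over 24 months
    print(round(monthly_payment(100_000, 0.12, 24), 2))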


Q10 What is a Graphical User Interface (GUI)?
Q11 Differentiate between Graphical User Interface and Character User Interface?

(Ans 10-11)

GUI (Graphical User Interface)

Abbreviated GUI (pronounced GOO-ee). A program interface that takes advantage of the
computer's graphics capabilities to make the program easier to use. Well-designed graphical user
interfaces can free the user from learning complex command languages.

Graphical user interfaces, such as Microsoft Windows and the one used by the Apple Macintosh,
feature the following basic components:

pointer: A symbol that appears on the display screen and that you move to select objects and commands. Usually, the pointer appears as a small angled arrow. Text-processing applications, however, use an I-beam pointer that is shaped like a capital I.
pointing device: A device, such as a mouse or trackball, that enables you to select
objects on the display screen.
icons: Small pictures that represent commands, files, or windows. By moving the pointer
to the icon and pressing a mouse button, you can execute a command or convert the
icon into a window. You can also move the icons around the display screen as if they
were real objects on your desk.
desktop: The area on the display screen where icons are grouped is often referred to as
the desktop because the icons are intended to represent real objects on a real desktop.
windows: You can divide the screen into different areas. In each window, you can run a
different program or display a different file. You can move windows around the display
screen, and change their shape and size at will.
menus: Most graphical user interfaces let you execute commands by selecting a choice
from a menu.

The first graphical user interface was designed by Xerox Corporation's Palo Alto Research
Center in the 1970s, but it was not until the 1980s and the emergence of the Apple Macintosh
that graphical user interfaces became popular. One reason for their slow acceptance was the fact
that they require considerable CPU power and a high-quality monitor, which until recently were
prohibitively expensive.

In addition to their visual components, graphical user interfaces also make it easier to move data
from one application to another. A true GUI includes standard formats for representing text and
graphics. Because the formats are well-defined, different programs that run under a common GUI
can share data. This makes it possible, for example, to copy a graph created by a spreadsheet
program into a document created by a word processor.

Many DOS programs include some features of GUIs, such as menus, but are not graphics based.
Such interfaces are sometimes called graphical character-based user interfaces to distinguish
them from true GUIs.
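
As a rough sketch of these components in code (assuming Python's standard tkinter toolkit; the window title, labels and menu entries are illustrative), the following program creates a window with a menu, a label and a button driven by the pointer:

    import tkinter as tk

    # A window with a menu bar, a label and a button; the pointer and the
    # pointing device are handled for us by the GUI toolkit's event loop.
    root = tk.Tk()
    root.title("GUI demo")

    menubar = tk.Menu(root)
    file_menu = tk.Menu(menubar, tearoff=0)
    file_menu.add_command(label="Quit", command=root.destroy)   # menu command
    menubar.add_cascade(label="File", menu=file_menu)
    root.config(menu=menubar)

    message = tk.Label(root, text="Hello from a GUI")
    message.pack(padx=20, pady=10)

    def on_click():
        message.config(text="Button clicked with the pointer")

    tk.Button(root, text="Click me", command=on_click).pack(pady=10)

    root.mainloop()   # hand control to the event loop until the window closes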


CUI (Character User Interface)

Describes programs capable of displaying only ASCII (American Standard Code for Information
Interchange) characters. Character-based programs treat a display screen as an array of boxes,
each of which can hold one character. When in text mode, for example, PC screens are typically
divided into 25 rows and 80 columns. In contrast, graphics-based programs treat the display
screen as an array of millions of pixels. Characters and other objects are formed by illuminating
patterns of pixels.

Because the IBM extended ASCII character set includes shapes for drawing pictures, character-
based programs are capable of simulating some graphics objects. For example, character-based
programs can display windows and menus, bar charts, and other shapes that consist primarily of
straight lines. However, they cannot represent more complicated objects that contain curves.

A Command Line Interface or CLI is a method of interacting with an operating system or
software using a command line interpreter. A command line interpreter is a computer program
that reads lines of text entered by a user and interprets them in the context of a given operating
system or programming language.

This requires the user to know the names of the commands and their parameters, and the syntax
of the language that is interpreted. From the 1960s onwards, user interaction with computers was
primarily by means of command line interfaces
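
A toy command line interpreter sketched in Python (the command names are invented for the example) shows the read-interpret-execute loop described above:

    # Read a line, split it into a command name and arguments, and dispatch
    # to a handler -- the essence of a command line interpreter.
    def cmd_echo(args):
        print(" ".join(args))

    def cmd_help(args):
        print("commands: echo, help, quit")

    COMMANDS = {"echo": cmd_echo, "help": cmd_help}

    while True:
        try:
            line = input("> ")
        except EOFError:
            break
        parts = line.split()
        if not parts:
            continue
        name, args = parts[0], parts[1:]
        if name == "quit":
            break
        handler = COMMANDS.get(name)
        if handler is None:
            print(f"unknown command: {name}")
        else:
            handler(args)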


Distinguish between GUI & CUI / CLI

GUI: Generally used in multimedia, eLearning, demos etc.
CUI: Generally used in programming languages.

GUI: Consists of visual / graphical control features using toolbars, buttons, icons, menus etc.
CUI: Consists of character control features such as text elements or characters.

GUI: Used to create animations or pictures.
CUI: Used to create words, sentences or syntax commands.

GUI: A variety of input devices are used to manipulate the text and images as visually displayed.
CUI: Allows users to specify options through function keys.

GUI: It is usually a graphical interface, e.g. web pages, navigation etc.
CUI: It is purely textual (commands which are understood by the computer).

GUI: Can be affected by viruses.
CUI: Proves less affected by viruses.

GUI: e.g. Windows OS, Apple Mac OS.
CUI: e.g. Disk Operating System (DOS).

All operating systems today provide a graphical user interface. Applications also use GUIs, e.g. web applications, ATM software etc. The GUI uses metaphors to help users understand the nature of activities they can perform, e.g. a Home icon which indicates the homepage, or a Lock and Key icon which indicates security. A GUI may also include audio and video. The GUI is also known as the Look and Feel.

Apple Mac OS and Windows are today's most familiar GUIs.



Q12 What is Internet?
Q16 Why Internet is called the Network of Networks? How does it work?

(Ans 12, 16)

The Birth of the Internet

While computers were not a new concept in the 1950s, there were relatively few computers in existence and the field of computer science was still in its infancy. Most of the advances in technology at the time (cryptography, radar, and battlefield communications) were due to military operations during World War II, and it was in fact government activities that led to the development of the Internet.

On October 4, 1957, the Soviets launched Sputnik, the first unmanned satellite in space, and the US Government under President Eisenhower subsequently launched an aggressive military campaign to compete with and surpass the Soviet activities. From the launch of Sputnik and the U.S.S.R. testing its first ICBM (Intercontinental Ballistic Missile), the Advanced Research Projects Agency (ARPA) was born. ARPA was the U.S. Government's research agency for all space and strategic missile research. In 1958, NASA was formed, and the activities of ARPA moved away from aeronautics and focused mainly on computer science and information processing. One of ARPA's goals was to connect mainframe computers at different universities around the country so that they would be able to communicate using a common language and a common protocol. Thus the ARPAnet, the world's first multiple-site computer network, was created in 1969.

The original ARPAnet eventually grew into the Internet. The Internet was based on the concept
that there would be multiple independent networks that began with the ARPAnet as the
pioneering packet-switching network but would soon include packet satellite networks and
ground-based radio networks.

Why the Internet is also known as the network of networks

The Internet is the publicly accessible worldwide system of interconnected computer networks that transmit data by packet switching using standardized protocols like TCP/IP (Transmission Control Protocol / Internet Protocol) and many other protocols. It is made up of thousands of smaller commercial, academic, domestic and government networks. It carries various information and services, such as electronic mail, online chat and the interlinked web pages and other documents of the World Wide Web.

Unlike online services, which are centrally controlled, the Internet is decentralized by design. Each Internet computer, called a host, is independent. Its operators can choose which Internet services to use and which local services to make available to the global Internet community. It is possible to gain access to the Internet through a commercial ISP (Internet Service Provider).
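
As a small illustration of hosts and addresses (the host name below is just an example), Python's standard socket module can ask the Internet's name system for a host's IP address:

    import socket

    # Every host on the Internet is reachable through an IP address; DNS maps
    # human-readable names such as example.com to those addresses.
    address = socket.gethostbyname("example.com")
    print("example.com resolves to", address)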


[Illustration of the Internet architecture not reproduced here]

Difference between Internet and the World Wide Web

Many people use the terms Internet and World Wide Web interchangeably, but in fact the two
terms are not the same thing. They are both separate but related concepts. The Internet is a
massive network of networks. It connects millions of computers together globally, forming a
network in which any computer can communicate with any other computer as long as they are
both connected to the Internet. Information that travels over the Internet does so via a variety of
languages known as protocols.

The World Wide Web, also known as the Web, is a way of accessing information over the medium of the Internet. It is an information-sharing model that is built on top of the Internet. The Web uses the HTTP protocol. Web services, which use HTTP to allow applications to communicate and exchange business logic, use the World Wide Web to share information. The Web also makes use of browsers, such as Internet Explorer, Netscape or Firefox, to access web documents called web pages that are linked to each other via hyperlinks. Web documents contain graphics, sound, text and video. So the Web is just a large portion of the Internet, but the two terms are not the same and should not be confused with each other.
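
To see the Web riding on top of the Internet, a minimal sketch using Python's standard library (the URL is a placeholder) fetches one web page over HTTP:

    from urllib.request import urlopen

    # HTTP is the Web's application protocol; the request and the response
    # travel across the Internet's underlying networks.
    with urlopen("http://example.com/") as response:
        page = response.read()

    print(len(page), "bytes received")
    print(page[:80])   # the first few bytes of the HTML document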


Q13 Internet Technology and Electronic Commerce has brought the manufacturers and
customers very close to each other. This should result into better Customer Relationship
Management and Supply Chain Management. Please explain.
Q29 Internet and E-Commerce has brought the manufacturers and consumers very close to
each other, resulting in better Customer Relation Management and Supply Chain
Management? Please explain?

(Ans 13, 29)

Internet has paved the way for Electronic Commerce (e-Commerce), which can be defined as the
business activities conducted using electronic data transmission technologies, such as those
used on the Internet and the World Wide Web. It involves buying, selling, transferring or
exchanging products, services, and / or information via computer networks and is a major
distribution channel for goods, services and managerial and professional jobs.

Companies today are interested in E-Commerce simply because it can help increase sales and
profits and decrease costs. Even a small firm that advertises on the web can get their message
out to potential customers in every country in the world. E-Commerce has proven to have many
benefits and advantages. E-Commerce plays a major role in global reach. It expands the
marketplace to national and international markets. E-Commerce allows firms to now reach narrow
market segments that are geographically scattered. People in third world countries are now able
to enjoy products and services that were unavailable in the past.

It is important to understand that E-Commerce benefits not only the seller or the company, but also the buyer or the customer. One main benefit of E-Commerce is that it is particularly useful in creating virtual communities that become ideal target markets for specific types of products and services. Just as E-Commerce increases the sales opportunities for the seller, it increases the purchasing opportunities for the buyer. With minimum investment, a business can use E-Commerce to easily and quickly identify the best suppliers, more customers and the most suitable business partners worldwide. Expanding the base of consumers and suppliers enables an organization to buy at cheaper rates and sell more at competitive and lower prices.

E-Commerce has also given a great boost to Supply Chain Management (SCM). Today using E-
Commerce companies can manage the integration of all activities within and between
enterprises. Some of these activities include Procurement, Inventory Management and Logistics.
The Internet and E-Commerce revolution allows you to manage your Supply Chain better by
effectively integrating a system of suppliers, partners, customers and employees. This nowadays
is done online with the help of customized Extranets, use of Virtual Private Networks, Emails and
other collaboration tools and hence today this activity can be referred to as e-SCM. The Internet
and E-Commerce has benefited the SCM process by decreasing operating costs through reduced
inventory requirements. It has improved customer satisfaction by maintaining adequate inventory
and has improved productivity and logistics.

The Internet will help you provide better customer service by delivering rich on-demand solutions.
Better customer service results in brand loyalty and this results in good revenue. Internet and E-
Commerce has helped companies support their existing customers, develop new customers and
retain profitable customers. Today customers can reach the manufacturers online and share
market research data; they can make complaints online and even send their suggestions for
product / service improvements online and even track the progress of their orders online (e.g.
FedEx allows its customers to track the status of their courier packages online.)

Thus it can be said that the use of the Internet and E-Commerce has improved margins, increased customer awareness and increased competitive advantage in today's new economy, and has
brought the manufacturers and customers closer to each other, resulting in better CRM and SCM.




Q17 Explain Protocols in Communication?
Q18 Define and explain a protocol.
Q53 What is a Protocol?


(Ans 17, 18, 53)

Protocols are a collection of rules and procedures that control the transmission of data between devices. They enable the intelligent exchange of data between otherwise incompatible devices and are needed to minimize transmission errors. A protocol is, in essence, a set of rules for sending and receiving electronic messages. Protocols split data files into small blocks or packets in order to transfer them over a network, and they use error-checking techniques to confirm that each packet is received correctly.





Transport protocols provide end-to-end data exchange in which the network devices maintain a connection or session.

Network protocols handle addressing and routing information, error control and requests for retransmission. These protocols function at the transport and network layers of the OSI reference model.

The TCP/IP protocol suite was initially designed to provide a method for interconnecting the different packet-switching networks in the ARPANET, which was the foundation of the Internet. TCP is a connection-oriented transport layer protocol that uses the connectionless services of IP to
ensure reliable delivery of data. TCP was introduced with UNIX and used in the IBM environment
and it provided file transfer, email and remote logon across large distributed client server
systems.
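
A minimal sketch of a TCP exchange using Python's standard socket module (both ends run on the local machine purely for illustration, and the port number is arbitrary):

    import socket
    import threading

    # Bind and listen first, so the client below can connect immediately.
    server = socket.create_server(("127.0.0.1", 5050))

    def handle_one_client():
        conn, _ = server.accept()
        with conn:
            conn.sendall(conn.recv(1024))          # echo the bytes back

    threading.Thread(target=handle_one_client, daemon=True).start()

    # The client opens a TCP connection (a session), sends data and reads the
    # reply; TCP ensures the bytes arrive intact and in order.
    with socket.create_connection(("127.0.0.1", 5050)) as client:
        client.sendall(b"hello over TCP")
        print(client.recv(1024))                   # b'hello over TCP'

    server.close()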

WAP (Wireless Application Protocol)

WAP utilizes wireless transmission protocols to transfer content from the Internet to WAP-enabled mobile devices. The underlying protocols in WAP technology are WCMP, WDP, WTLS and WML. Users of WAP services were charged by the time duration of the data transfer. WAP requires a WSP (Wireless Service Provider) which converts HTTP requests into WAP requests and transforms the HTML content of the web into WML content that can be viewed on mobile phone screens. WSPs are connected to WAP content providers (also known as origin servers) who provide WAP content to WAP users.

Ethernet Protocol

Ethernet refers to the family of Local Area Network products covered by the IEEE 802.3 standard, which defines the CSMA/CD (Carrier Sense Multiple Access / Collision Detection) protocol. The three data transmission rates commonly associated with Ethernet are:

10 Mbps (10 Base-T Ethernet)
100 Mbps (Fast Ethernet)
1000 Mbps (Gigabit Ethernet)


Q19

Explain Office Automation? HOMEWORK


(Ans 19)

The use of computer systems to execute a variety of office operations, such as word processing, accounting, and e-mail. Office automation almost always implies a network of computers with a variety of available programs.

The integration of office information functions, including word processing, data processing, graphics, desktop publishing and e-mail. The backbone of office automation is a LAN, which allows users to transmit data, mail and even voice across the network. All office functions, including dictation, typing, filing, copying, fax, Telex, microfilm and records management, telephone and telephone switchboard operations, fall into this category. Office automation was a popular term in the 1970s and 1980s as the desktop computer exploded onto the scene.

Automation refers to the use of computers and other automated machinery for the execution of business-related tasks. Automated machinery may range from simple sensing devices to robots and other sophisticated equipment. Automation of operations may encompass the automation of a single operation or the automation of an entire factory.
There are many different reasons to automate. Increased productivity is normally the major reason, as many companies desire a competitive advantage. Automation also offers low operational variability, and variability is directly related to quality and productivity. Other reasons to automate include the presence of a hazardous working environment and the high cost of human labor. Some businesses automate processes in order to reduce production time, increase manufacturing flexibility, reduce costs, eliminate human error, or make up for a labor shortage. Decisions associated with automation are usually concerned with some or all of these economic and social considerations.

For small business owners, weighing the pros and cons of automation can be a daunting task. "Failure to take a strategic look at where the organization wants to go and then capitalizing on the new technologies available will hand death-dealing advantages to competitors, traditional and unexpected ones."
Types of Automation
Although automation can play a major role in increasing productivity and reducing costs in service industries (as in the example of a retail store that installs bar code scanners in its checkout lanes), automation is most prevalent in manufacturing industries. In recent years, the manufacturing field has witnessed the development of major automation alternatives. Some of these types of automation include:

- Information technology (IT)
- Computer-aided manufacturing (CAM)
- Numerically controlled (NC) equipment
- Robots
- Flexible manufacturing systems (FMS)
- Computer integrated manufacturing (CIM)
MET Part Time MBA Answers to Questions in the Question Bank (Class Notes 2010)

-- Author: Prof. Max William DCosta (max.dcosta77@gmail.com)

Compiled on: July 15, 2010 by Prof. Max DCosta @ MET SOM Page 18 of 138
Information technology (IT) encompasses a broad spectrum of computer technologies used to create, store, retrieve, and disseminate information.

Computer-aided manufacturing (CAM) refers to the use of computers in the different functions of production planning and control. CAM includes the use of numerically controlled machines, robots, and other automated systems for the manufacture of products. Computer-aided manufacturing also includes computer-aided process planning (CAPP), group technology (GT), production scheduling, and manufacturing flow analysis. Computer-aided process planning (CAPP) means the use of computers to generate process plans for the manufacture of different products. Group technology (GT) is a manufacturing philosophy that aims at grouping different products and creating different manufacturing cells for the manufacture of each group.

Numerically controlled (NC) machines are programmed versions of machine tools that execute operations in sequence on parts or products. Individual machines may have their own computers for that purpose; such tools are commonly referred to as computerized numerical controlled (CNC) machines. In other cases, many machines may share the same computer; these are called direct numerical controlled machines.

Robots are a type of automated equipment that may execute different tasks that are normally handled by a human operator. In manufacturing, robots are used to handle a wide range of tasks, including assembly, welding, painting, loading and unloading of heavy or hazardous materials, inspection and testing, and finishing operations.

Flexible manufacturing systems (FMS) are comprehensive systems that may include numerically controlled machine tools, robots, and automated material handling systems in the manufacture of similar products or components using different routings among the machines.

A computer-integrated manufacturing (CIM) system is one in which many manufacturing functions are linked through an integrated computer network. These manufacturing or manufacturing-related functions include production planning and control, shop floor control, quality control, computer-aided manufacturing, computer-aided design, purchasing, marketing, and other functions. The objective of a computer-integrated manufacturing system is to allow changes in product design, to reduce costs, and to optimize production requirements.
Automation and the Small Business Owner
Understanding and making use of automation-oriented strategic alternatives is essential for manufacturing firms of all shapes and sizes. It is particularly important for smaller companies, which often enjoy inherent advantages in terms of operational nimbleness. But experts note that, whatever your company's size, automation of production processes is no longer sufficient in many industries.

"The computer has made it possible to control manufacturing more precisely and to assemble more quickly, factors which have increased competition and forced companies to move faster in today's market. Now, with the aid of the computer, companies will have to move to the next logical step in automation: the automatic analysis of data into information, which empowers employees to immediately use that information to control and run the factory as if they were running their own business."

Small business owners face challenges in several distinct areas as they prepare their enterprises for the technology-oriented environment in which the vast majority of them will operate. Three primary issues are employee training, management philosophy, and financial issues.


Challenges that come with automation
1. Employee training
2. Management Philosophy
3. Financial Issues
Employee Training:

Many business owners and managers operate under the assumption that acquisition of fancy
automated production equipment or data processing systems will instantaneously bring about
measurable improvements in company performance. But as countless consultants and industry
experts have noted, even if these systems eliminate work previously done by employees, they
ultimately function in accordance with the instructions and guidance of other employees.
Therefore, if those latter workers receive inadequate training in system operation, the business
will not be successful. All too often, wrote Lura K. Romei in Modern Office Technology, "the
information specialists who designed the software and installed the systems say that the
employees are either unfamiliar with technology or unwilling to learn. The employees' side is that
they were not instructed in how to use the system, or that the system is so sophisticated that it is
unsuited to the tasks at hand. All the managers see are systems that are not doing the job, and
senior management wonders why all that money was spent for systems that are not being used."
An essential key to automation success for small business owners, then, is to establish a quality
education program for employees, and to set up a framework in which workers can provide input
on the positive and negative aspects of new automation technology. As John Hawley commented in Quality Progress, the applications of automation technology may be growing, but the human factor still remains paramount in determining organizational effectiveness.
Management Philosophy:

Many productive business automation systems, whether in the realm of manufacturing or data
processing, call for a high degree of decision-making responsibility on the part of those who
operate the systems. As both processes and equipment become more automatically controlled, employees will be watching them to make sure they stay in control, and fine-tuning the process as needed. These enabler tools are changing the employee's job from one of adding touch labor to products to one of monitoring and supervising an entire process.
But many organizations are reluctant to empower employees to this degree, either because of
legitimate concerns about worker capabilities or a simple inability to relinquish power. In the
former instance, training and/or workforce additions may be necessary; in the latter, management
needs to recognize that such practices ultimately hinder the effectiveness of the company. "The people aspect, the education, the training, the empowerment is now the management issue," Flaig told Jasany. "Management is confronted today with the decision as to whether or not they will give up perceived power, whether they will make knowledge workers of these employees."
Financial Issues:

It is essential for small businesses to anticipate and plan for the various ways in which new
automation systems can impact on bottom-line financial figures. Factors that need to be weighed
include tax laws, long-term budgeting, and current financial health. Depreciation tax laws for
software and hardware are complex, which leads many consultants to recommend that business
owners use appropriate accounting assistance in investigating their impact. Budgeting for
automation costs can be complex as well, but as with tax matters, business owners are
encouraged to educate themselves. With the shortened life of most new technology, especially at
the desktop, it is critical that you plan on annually reinvesting in your technology. You'll also need
to decide what is an appropriate level of spending for your company, or for yourself if it's a
personal decision. Arriving at that affordable spending level requires a strategic look at your
company to assess how vital a contributor technology is to the success of your business."
Once new automation systems are in operation, business owners and managers should closely
monitor financial performance for clues about their impact on operations. "Unused technology or
underused technology is a big tipoff that something is wrong. Watch for cost overruns on new
systems, and look out when new systems are brought in predictably late."
The accelerating pace of automation in various areas of business can be dizzying. It will be a challenge for small businesses to keep pace with, or stay ahead of, such changes. But the forward-thinking business owner will plan ahead, both strategically and financially, to ensure that the ever more automated world of business does not leave him or her behind.


Q20 Explain Operational Feasibility & Economic Feasibility?
Q21 Describe the steps in the System Development Life Cycle (SDLC) clearly specifying the activities in each stage?
Q22 What are the stages in the System Development Life Cycle (SDLC)? Describe the activities in each stage?
Q23 Enumerate the stages in a SDLC. Clearly mention the inputs, outputs and activities with respect to each stage?
Q24 Enumerate the inputs, activity and output of each stage in a systems development life cycle?
Q60 Define Feasibility Studies?
Q92 (e) Explain the following terms: Feasibility Study


(Ans 20-24, 60, 92 e)

System Development Life Cycle (a.k.a. Software Development Life Cycle)

The Software Development Life Cycle (SDLC) method is an approach to developing an
Information system / software product that is characterized by a linear sequence of steps that
progress from Start to Finish without revisiting any previous step.

This methodology ensures that systems are designed and implemented in a methodical, logical and step-by-step approach. It is one of the oldest and most widely used systems development models.

It consists of the following activities, namely:
1) Preliminary Investigation
2) Determination of System Requirements (a.k.a. Requirements Analysis Phase)
3) Design of the System
4) Development of the Software
5) System Testing
6) Implementation and Evaluation
7) Review

A) Preliminary Investigation Phase:
- This phase begins with a request from the customer: either a memorandum from top management to the Director of Systems Development, a letter or an initiative from a customer to discuss a perceived problem or deficiency, or a request for something new in an existing system.
- Here the purpose of this step is not to develop a system but to verify that a problem or deficiency really exists, or to pass a judgment on the new requirement.
- Here the investigator considers the financial costs of completing the project versus the benefits of completing it.

There are three factors in this process, which is also typically called the Feasibility Study, in which the investigator tries to evaluate the Technical, Economic and Operational feasibility of the project.
a) Technical Feasibility
- This feasibility study assesses whether the current technical resources and skills
are sufficient for the new system or not.
- If they are not available, then how easy it is to obtain them, or if they exist, how
can they be upgraded to help create the new system.
- It considers the existing system and to what extent it can support the proposed
addition or enhancements.

b) Economic Feasibility
- This feasibility study weighs the benefits of the new system against its costs to determine whether the investment is acceptable (a small worked example follows the deliverables line below).
- It determines whether the time and money needed to develop the system are available.
- The costs considered include the purchase of new equipment, hardware, software etc.

c) Operational Feasibility
- This feasibility study determines the human resources available to operate it
once it is installed.
- It asks whether the system will actually be used once it is developed and implemented, or whether there will be resistance from users.
- It should be noted that users who do not want a new system may prevent it from being operationally feasible.
Deliverables for this phase: Feasibility Reports Document
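
A minimal sketch of the cost-benefit arithmetic behind an economic feasibility check (all figures are hypothetical):

    # Compare the one-time and recurring costs of a proposed system with its
    # expected annual benefits and compute a simple payback period.
    development_cost = 500_000      # one-time cost: hardware, software, build
    annual_running_cost = 60_000    # support, licences and operations per year
    annual_benefit = 220_000        # expected savings plus extra revenue per year

    net_annual_benefit = annual_benefit - annual_running_cost
    payback_years = development_cost / net_annual_benefit
    print(f"Payback period: {payback_years:.1f} years")   # about 3.1 years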


B) Requirements Analysis Phase (a.k.a. Requirements Gathering Phase)
- It is a study of the current business system and of the users' requirements and priorities for a new or improved application.
- Analysts study the domain and the problem / requirements in detail.
- The key to making this phase a success is gaining a rigorous understanding of the problem or opportunity driving the need for the new system.
- Close interaction with the employees (end users) and managers is needed to get details of the business processes and their opinions / issues.
- System analysts should not only focus on the current problems but should also closely inspect the various documents related to the operations and processes that are closely related.
- The system analyst needs to have a futuristic perspective and apply it to the problem at hand.
- The system analyst needs to help the user visualize the system.
- In this phase the system analyst usually recommends more than one alternative for the problem resolution.
- The system analyst works closely with the information architect to create the prototypes and walk users through them.
- The system analyst must have strong people skills as well as technical skills: working with clients puts his communication and interpersonal skills to the test and helps him better understand and identify problems, while his technical skills help him resolve conflicting objectives and document the requirements with process, data and network models.

Deliverables for this phase: System Requirements Specification Document (SRS)
containing Business Use Cases, Project Charter / Goals / Statement of Work (SoW),
Document containing Inputs and Outputs to the system & Process Methodology.

This phase covers all major details of the system or program to be built.

MET Part Time MBA Answers to Questions in the Question Bank (Class Notes 2010)

-- Author: Prof. Max William DCosta (max.dcosta77@gmail.com)

Compiled on: July 15, 2010 by Prof. Max DCosta @ MET SOM Page 24 of 138

C) Design of the System
- This phase produces details stating how the system will meet the requirements identified during systems analysis. This stage is known as the logical design.
- The logical design involves components like input, output, processing, files etc.
- At the end of this phase the logical design is validated with the client, and a Functional Requirements Specification (FRS) is prepared along with a rough sketch of what the User Interface (UI) will look like.
- In this phase most programs are designed by first determining the desired output of the program. If you know the desired output, you can determine the input needed to generate that output.
- In this phase the User Interface (UI) prototype usually goes through a number of iterations.
- The focus here is still not on how the solution will be built, but on capturing every requirement on paper.
- Here the User Interface designers aid the developers with HTML, style guides etc.

Deliverables for this phase: Formal Requirements Specification, Rough Sketch of the UI Prototype.

D) Development of the Software
- This phase is the most exciting time of the SDLC.
- During this phase the computer hardware is purchased, licensed software is purchased and the coding for the development of the software actually begins.
- There is adherence to the SRS (System Requirements Specification).
- Any deviations should be approved by either the Project Manager or the Client.
- This phase is usually split into Prototyping and Production Quality Application Creation.
- Developers use this stage to demo the application to the customer as another check that the final software solution answers the problem posed.

Deliverables for this phase: Revised UI Prototype, Production Quality Code.



MET Part Time MBA Answers to Questions in the Question Bank (Class Notes 2010)

-- Author: Prof. Max William DCosta (max.dcosta77@gmail.com)

Compiled on: July 15, 2010 by Prof. Max DCosta @ MET SOM Page 25 of 138
E) System Testing
- The system is used experimentally to verify that the software runs as per the specifications and in the way the customer / end user really wanted.
- Special test data is prepared and used, and the results are examined (see the sketch after this section).
- Bugs or deviations are identified and corrected before the system is given to the user for UAT (User Acceptance Testing).
- User testing helps uncover many bugs / irregularities.
- The system is tested for its performance, reliability, functionality, scalability etc.

Deliverables for this phase: Test Plan and Test Cases.
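
As an illustration of the special test data mentioned above, a small Python sketch using the standard unittest module (the function under test and its figures are hypothetical):

    import unittest

    # A hypothetical function under test and two test cases built from
    # prepared test data, of the kind used during the system testing phase.
    def invoice_total(amounts, tax_rate):
        return round(sum(amounts) * (1 + tax_rate), 2)

    class InvoiceTests(unittest.TestCase):
        def test_total_with_tax(self):
            self.assertEqual(invoice_total([100, 50], 0.10), 165.0)

        def test_empty_invoice(self):
            self.assertEqual(invoice_total([], 0.10), 0.0)

    if __name__ == "__main__":
        unittest.main()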

F) Implementation and Evaluation
- After the development phase of the SDLC is complete, the system is implemented.
- Any hardware that has been purchased will be delivered and installed.
- The software which has been designed and developed in the earlier phases will now be installed on the computers that require it.
- Users required to use the program will also be trained on how to use it in this phase.
- During the implementation phase, both the hardware and the software are tested, and the programmer will find and fix any remaining problems.
- The old system can be replaced gradually or instantly.

Evaluation
Evaluation of the system is performed to identify the strengths and weaknesses
of the new system. The actual evaluation can be any of the following:

1. Operational Evaluation
Assessment of the manner in which the system functions,
including the ease of use, response time, suitability of
information formats, overall reliability and level of utilization.

2. Organizational Impact
Identification and measurement of benefits of the organization in
such areas as financial concerns (cost, revenue and profits),
operational efficiency and competitive impact.

3. User Manager Assessment
Evaluation of the attitudes of Senior and User Managers within
the organization as well as end users.

4. Development Performance
Evaluation of the development process in accordance with such
yardsticks as overall development time and effort, conformance
to budgets and standards, other project management criteria.
Includes assessment of development methods and tools.

Deliverables for this phase: All of the above + User Guides + Training Manuals .


G) Review
- Review is important to gather information for maintenance of the system. No system is ever complete; it has to be maintained as changes are required because of internal developments, such as new users or business activities, and external developments, such as industry standards or competition.
- The implementation review provides the first source of information for maintenance requirements. The most fundamental concern during the post-implementation review is determining whether the system has met its objectives.
- The analysts assess the users' performance levels and the overall quality of the system.



Q25 Computers & communications seem to merge together seamlessly. Discuss?

Q26 Why are the computer and computer related devices networked in an organization?

Q27 What are the benefits of networking?

Q28 An integrated companywide computerization is the only way of deriving full benefits of Information
technology today. Discuss?

Q35 What is networking?

Q36 Describe the benefits of networking with examples?

Q37 Why are computers and computer related devices networked in an organization?


(Ans 25, 26, 27, 28, 35, 36, 37)

Definition of Networking

Networking can be explained as the linking of a number of devices, such as computers,
workstations, printers, and Audio Video gear into a network (system) for the purpose of sharing
resources and exchange of information between them.

A network is not just a bunch of computers with wires running between them. When properly
implemented, a network is a system that provides its users with unique capabilities, above and
beyond what the individual machines and their software applications can provide.

Most of the benefits of networking can be divided into two generic categories: connectivity and
sharing. Networks allow computers, and hence their users, to be connected together. They also
allow for the easy sharing of information and resources, and cooperation between the devices in
other ways. Since modern business depends so much on the intelligent flow and management of
information, this tells you a lot about why networking is so valuable.

Here are some of the specific advantages generally associated with networking:

a) Connectivity and Communication:

Networks connect computers and the users of those computers. Individuals within a building or
work group can be connected into local area networks (LANs); LANs in distant locations can be
interconnected into larger wide area networks (WANs). Once connected, it is possible for network
users to communicate with each other using technologies such as electronic mail. This makes the
transmission of business (or non-business) information easier, more efficient and less expensive
than it would be without the network. Today people and processes have come together via the
emergence of networks. The advent of 3G and 4G technologies has enabled audio, video, text and
rich media to converge, allowing seamless communication and interaction.

b) Data Sharing:

One of the most important uses of networking is to allow the sharing of data. Before networking
was common, an accounting employee who wanted to prepare a report for her manager would
have to produce it on her PC, put it on a floppy disk and then walk it over to the manager, who
would transfer the data to his or her PC's hard disk. Today true networking allows thousands of
employees to share data much more easily and quickly than this. More so, it makes possible
applications that rely on the ability of many people to access and share the same data, such as
databases, group software development, and much more. Intranets and extranets can be used to
distribute corporate information between sites and to business partners.

c) Hardware Sharing:

Networks facilitate the sharing of hardware devices. For example, instead of giving each of 10
employees in a department an expensive color printer, one printer can be placed on the network
for everyone to share.

d) Internet Access:

The internet is itself an enormous network, so whenever you access the Internet, you are using a
network. The significance of the Internet on modern society is hard to exaggerate, especially for
those of us in technical fields.

e) Internet Access Sharing:

Small computer networks allow multiple users to share a single Internet connection. Special
hardware devices allow the bandwidth of the connection to be easily allocated to various
individuals as they need it, and permit an organization to purchase one high-speed connection
instead of many slower ones.

f) Data Security, Data Backup and Management:

In a business environment, a network allows the administrators to much better manage the
company's critical data. Instead of having this data spread over dozens or even hundreds of small
computers in a haphazard fashion as their users create it, the data can be centralized on shared
servers, where it can also be easily backed up. This makes it easy for everyone to find the data,
makes it possible for the administrators to ensure that the data is regularly backed up, and also
allows for the implementation of security measures to control who can read or change various
pieces of critical information.

g) Performance Enhancement and Balancing:

Under some circumstances, a network can be used to enhance the overall performance of some
applications by distributing the computation tasks to various computers on the network.

h) Entertainment:

Networks facilitate many types of games and entertainment. The Internet itself offers many
sources of entertainment. Today many multi-player games exist that operate over a local area
network. Many home networks are set up for this reason, and gaming across wide area networks
(including the internet) has also become quite popular.






The ISO OSI Model for Networking



(Ans)

OSI stands for Open Systems Interconnection. An open system is one that can communicate with
any other system that follows the same specified standards, formats and semantics. Open systems
work well together because of protocols that specify how the communicating parties may communicate.

The OSI Model supports two types of Protocols namely:

a) Connection Oriented:
- Sender and receiver first establish a connection, possibly negotiate on a protocol
- Transmit the stream of data
- Release the connection when done
- E.g. Telephone connection


b) Connectionless:
- No advance setup is needed
- You can transmit the messages to the receiver irrespective of whether the receiver is
online or offline.
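
To make the two protocol styles concrete, here is a minimal Python sketch (not part of the original
notes) contrasting a connection-oriented exchange over TCP with a connectionless datagram over UDP.
It assumes outbound network access; example.com and the port numbers are placeholder values.

import socket

# Connection-oriented (TCP): establish a connection first, then stream data.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))            # establish the connection
tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
reply = tcp.recv(1024)                      # read part of the response
tcp.close()                                 # release the connection when done

# Connectionless (UDP): no advance setup; each datagram is sent independently,
# whether or not anything is listening on the other side.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("example.com", 9))     # fire-and-forget datagram
udp.close()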



Features of the OSI Model are listed below:

Consists of 7 layers. Each layer deals with a specific aspect of communication
Each layer provides services to the layer above it through a well-defined interface; in other
words, each layer builds on the services of the layer below it.
Messages are sent from the top layer (i.e. Application Layer) and are passed on to the next
lower layer until the message reaches the bottom layer (i.e. Physical Layer).
At each level / layer, a header may be added to the message. Some layers add both a
header and a trailer.
The lowest layer transmits the message over the network to the receiving machine. The
physical layer of the sender communicates with the physical layer of the receiver's
machine
Each layer then strips off its header or trailer, handles the message using the protocol
provided by that layer and passes it on to the next layer above it, until it moves right up to
the application layer of the receiver's machine.

a) The Physical Layer:

Following are the characteristics of the Physical Layer:

The physical layer is concerned with the transmission of bits
It supports one-way or two-way transmission
It follows standard protocols which deal with electrical, mechanical and signaling
interfaces

b) The Data Link Layer:

Following are the characteristics of the Data Link Layer:

Handles errors in the physical layer
Groups bits into frames and ensures their correct delivery
Adds some bits at the beginning and end of each frame plus the checksum
The Data Link Layer on the receiver's machine verifies the checksum and, if the checksum
is not correct, it asks for retransmission
Consists of two sub-layers: the Logical Link Control (LLC) sub-layer, which defines how
data is transferred over the cable and provides data link services to the higher layers, and
the Medium Access Control (MAC) sub-layer, which defines who can use the network when
multiple computers are trying to access it simultaneously.
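
As an illustration of the checksum idea described above, here is a small Python sketch (a toy
example, not a real data link protocol) in which the sender appends a simple checksum to a frame
and the receiver verifies it:

def checksum(frame: bytes) -> int:
    # A deliberately simple checksum: the sum of all bytes, modulo 256.
    return sum(frame) % 256

# Sender side: append the checksum byte to the frame before transmission.
payload = b"HELLO"
frame = payload + bytes([checksum(payload)])

# Receiver side: recompute the checksum and compare with the received value.
received_payload, received_check = frame[:-1], frame[-1]
if checksum(received_payload) == received_check:
    print("Frame accepted:", received_payload)
else:
    print("Checksum mismatch - ask the sender for retransmission")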

c) The Network Layer:

Following are the characteristics of the Network Layer:

Concerned with the transmission of packets
Chooses the best path to send a packet (routing) to ensure speedy delivery of data
It may be complex in a large network (e.g. Internet)
It uses a connection-oriented protocol called X.25 for telephone-style networks and the
connectionless Internet Protocol (IP) for connectionless networks.

d) The Transport Layer:

Following are the characteristics of the Transport Layer:

Since the network layer does not deal with lost messages, the transport layer above it
ensures reliable service
This layer breaks the message from the session layer above it into small packets,
assigns sequence numbers and sends them to the network layer below it for transmission
over the network
This layer is also supported in the Internet Protocol suite.

e) The Session Layer:

Following are the characteristics of the Session Layer:

Very few applications use this.
It is an enhanced version of the transport layer and provides dialog control and
synchronization facilities.
Not supported by the Internet Protocol suite

f) The Presentation Layer:

Following are the characteristics of the Presentation Layer:

Very few applications use it
Concerned with the semantics of the bits sent
Sender can tell the receiver the format of the data that is being sent

g) The Application Layer:

Following are the characteristics of the Application Layer:

This is the layer that users actually work with
This layer consists of applications which communicate using protocols
Email, file transfer, remote login applications use protocols like SMTP, FTP, Telnet etc.




Understanding the Network Tree

Q38 Describe similarity between bus and ring topology?
Q39 Define Topology?
Q40 Networks can be classified based on various criteria such as:
Geographical Spread / Distance
Type of Switching
Topologies
Medium of Data Communications etc.

Give two examples of types of networks for each classification mentioned above.
Q57 Explain Fiber Optics?
Q69 What are the various options available for a company to establish network /
connectivity across its offices and branches?
Q75 Explain the different transmission mediums used for networking.
Q76 (a) Distinguish between Optical Fiber and Conventional Copper Cable
Q86 A company has its head office in Mumbai. It has 4 regional offices that is Mumbai,
Delhi, Calcutta and Chennai with 4 branches in each region. Some of the branches do
not have even basic telephone connectivity. Prepare a network plan incorporating the
different technical options for connectivity.
Q93 What is networking? State any three topologies of networking.
Distinguish between LAN and WAN.


(Ans 38, 39, 40, 57, 69, 75, 76a, 93)

The following illustration outlines the concepts that belong to the world of networking and explains
their hierarchy.
























Network Tree

Medium
- Wired: Co-Axial Cable, Twisted Pair Cable, Fiber Optic Cable
- Wireless: Infrared, Bluetooth, Wi-Fi, Wi-Max

Geographical Spread / Distance (Location)
- LAN, MAN, WAN, SAN

Topology
- Bus, Ring, Star, Mesh

Switching
- Circuit, Packet
Networks > Medium: Wired > Coaxial Cable

A type of wire that consists of a center wire surrounded by insulation and then a grounded shield
of braided wire. The shield minimizes electrical and radio frequency interference.
Coaxial cabling is the primary type of cabling used by the cable television industry and is also
widely used for computer networks, such as Ethernet. Although more expensive than standard
telephone wire, it is much less susceptible to interference and can carry much more data.

Features:
Used extensively in LANs.
Has a single central conductor surrounded by a circular insulation layer and a conductive
shield.
Offers a high bandwidth of up to 400 MHz
Offers a high quality of data transmission
Offers maximum data transfer rates of 100 Mbps
Problems:
It can have signal loss when data is sent at high frequencies
Networks > Medium: Wired > Twisted Pair Cable


Features:
Extensively used in telephone circuits where several wires are insulated and put together
Offers bandwidth of around 250 KHz
It has a low signal to noise ratio (crosstalk)
It offers a low data transfer rate
It's preferred for short distance communications
It's generally used in LANs

Networks > Medium: Wired > Fiber Optic Cable


Features:
Used for applications requiring a high quality and high bandwidth of data transfer
Uses light instead of electric pulses for data transmission
It offers very high frequency ranges of around 20,000 MHz and higher
A single fiber can support over 30,000 telephone lines
It offers data transmission rates of 400 Mbps and higher
It has become very popular for LAN and MAN as well as for Intercontinental links
It has a high signal to noise ratio and is very difficult to tap into the data transmission
happening inside an optic fiber
Earlier its cost was a big drawback but today the use of pure plastic has made optic fiber
very economical and commercially feasible to use.

Networks > Medium: Wireless > Infrared
Infrared technology has been around for ages, and is something that we've all come to take for
granted in television, VCR and DVD remote control devices.
What Infrared does?
Infrared allows transmission of data over very short distances. One cannot transmit huge
amounts of data via infrared. For example, the remote control unit and the equipment share a
special signal code, which allows the remote unit to transmit a one-way signal. Just point the
remote device at whatever you want to control, and press the button.
How it works?
IR technology only works over short distances of less than 25 feet, and there can't be anything
solid, like walls, standing in the way as an obstacle. Infrared is a one-way communication and it
requires a clear line of sight between the devices.
How it is used?
Most of today's computers and printers have built-in infrared technology that allows you to print
without bulky cables. Many mobile phones have infrared built in as well, allowing their users to
transfer data with other devices by beaming addresses, notes and other information.



Networks > Medium: Wireless > Bluetooth
Bluetooth wireless technology is a short-range radio technology. Bluetooth wireless technology
makes it possible to transmit signals over short distances between telephones, computers and
other devices and thereby simplify communication and synchronization between devices.
Bluetooth is the name of a protocol for a short range (10 meter) frequency-hopping 2.4 GHz radio
link between wireless devices such as a mobile phone and a PC. The idea is to make
connections between different electronic items much easier and simpler, and without a lot of
operator intervention. Bluetooth was launched in 1998 as a joint effort between Ericsson, IBM,
Intel, Nokia and Toshiba. Over 1000 companies are now involved in the effort -- so you can see
that it has stirred a lot of interest in the wireless community.
Bluetooth is similar to infrared, but taken a step further. Instead of one-way transmissions,
Bluetooth allows multiple devices from multiple manufacturers to speak the same wireless
language without the conflicts that are found in standard infrared. The Bluetooth standard was
jointly developed by a group of key players in the technology industry to ensure compatibility
between various wireless devices. It was named after the Danish king Harald Blåtand (Harald
"Bluetooth"), who was famous for bringing together the warring tribes of the Scandinavian region
and building a strong network of allies.
What Bluetooth does?
Bluetooth operates over short distances of around 30 feet or less but, unlike infrared, it does not
require a clear line of sight between the devices. Bluetooth allows you to create your own private
wireless area network (a personal area network) in which you can connect up to eight devices
without the hassle of cables and cords. Popular day-to-day uses of Bluetooth can be found in many
of today's wireless keyboards, wireless mice, cell phone handsets etc.


How Bluetooth works?

Bluetooth operates over the unlicensed 2.4 GHz radio spectrum, which allows Bluetooth-enabled
equipment to operate anywhere in the world. Bluetooth hops among 79 different frequencies
(channels), which allows a signal to move from one frequency to another to avoid conflicts with
other devices.

How is Bluetooth used?

Bluetooth enabled PDAs such as Pocket PC, can synchronize email, documents and contact
information with a Bluetooth enabled PC without the need of cradles, cables or plugs. Bluetooth
enabled mobile phones can communicate with other Bluetooth enabled devices thereby allowing
data transfer. Bluetooth enabled wireless headsets can be used with a mobile phone to provide
hands free usage without the hassle of cords and plugs. Like Infrared, its limitation is that it is a
short range mode of communication.



Networks > Medium: Wireless > Wi-Fi

Wireless Ethernet or Wi-Fi is the latest standard for long-range wireless networking. It goes
further and faster than Infrared or Bluetooth and does not require a clear line of sight. Wireless
local area networks (WLANs) are a lot less expensive and much easier to set up than traditional
wired networks. Because they are easy and inexpensive, wireless networks have become very
popular for home and small business networks and have found a niche in hospitals and clinics
where it's important to securely connect people to shared file servers, printers, Internet
connections and other resources.

What does Wi-Fi do?

Wi-Fi is based on the IEEE (Institute of Electrical and Electronics Engineers) 802.11 specifications.
There are currently four deployed 802.11 variations, namely: 802.11(a), 802.11(b), 802.11(g) and
802.11(n). The (b) standard allows up to 11Mbps while both (a) and (g) allows up to 54 Mbps.
The new (n) specification will allow even higher speeds of up to 100 Mbps and beyond. The
802.11(a) standard works in the 5GHz frequency band, and the others work in the 2.4GHz band.

How does Wi-Fi work?

Wi-Fi technology operates using the unlicensed radio frequencies in the 2.4GHz to 5GHz range.
2.4GHz for both the 802.11(b) and 802.11(g) and the 5GHz range for 802.11(a). The primary
difference between the Wi-Fi signals and Infrared or Bluetooth is that Wi-Fi does not require the
devices to have a direct line of sight. Wi-Fi transmits data over radio signals that are sent /
received via little antennas that are connected to the devices.



How is Wi-Fi used?

Wi-Fi technology is used to create a fast, wireless, low cost network. Notebook PCs, Laptops,
Tablet PCs, desktops, handheld devices etc can now talk to each other as well as the internet
using Wi-Fi. Wi-Fi networks are springing up in airports, hotels, convention centers, hospitals and
health care centers. Even airlines such as Etihad Airways offer Wi-Fi access on board their aircraft.



Networks > Medium: Wireless > Wi-Max



The two driving forces of modern Internet are broadband and wireless. The WiMax standard
combines the two, delivering high-speed broadband Internet access over a wireless connection.
WiMax is the next generation of Wi-Fi, or wireless networking technology that will connect you to
the Internet at faster speeds and from much longer ranges than current wireless technology
allows.

WiMAX (Worldwide Interoperability for Microwave Access) is the IEEE 802.16 standards-based
wireless technology that provides MAN (Metropolitan Area Network) broadband connectivity.

WiMax is based on the IEEE 802.16 Air Interface Standard (AIS). WiMax delivers a point-to-
multipoint architecture, making it an ideal method for carriers to deliver broadband to locations
where wired connections would be difficult or too costly. It may also provide a useful solution for
delivering broadband to rural areas where high-speed lines have not yet become available. A
WiMax connection can also be bridged or routed to a standard wired or wireless Local Area
Network (LAN).

The so-called last mile of broadband is the most expensive and most difficult for broadband
providers, and WiMax provides an easy solution. Although it is a wireless technology, unlike
some other wireless technologies, it doesn't require a direct line of sight between the source and
endpoint, and it has a service range of 30 miles. It provides a shared data rate of up to 70 Mbps,
which is enough to service up to a thousand homes with high-speed access.

WiMax offers some advantages over WiFi, a similar wireless technology, in that it offers a greater
range and is more bandwidth efficient. Ultimately, WiMax may be used to provide connectivity to
entire cities, and may be incorporated into laptops to give users an added measure of mobility.

WiMax requires a tower, similar to a cell phone tower, which is connected to the Internet using a
standard wired high-speed connection, such as a T3 line. But as opposed to a traditional Internet
Service Provider (ISP), which divides that bandwidth among customers via wire, it uses a
microwave link to establish a connection.

Because WiMax does not depend on cables to connect each endpoint, deploying WiMax to an
entire high rise, community or campus can be done in a matter of a couple of days, saving
significant amounts of manpower.



Networks > Geographical Spread / Distance: Local Area Network (LAN)

Networks can be divided into three types based on geographical areas covered, namely: LANs,
MANs and WANs

LAN: Local Area Network



Features:

1) LAN typically connects computers within a single building or campus.
2) LAN was developed in 1970s
3) It's restricted in size and hence the worst case transmission time is known in
advance
4) It uses a single cable transmission technology to which all computers are
attached
5) The medium used here is optical fibers, coaxial cables, twisted pair, wireless
6) It offers low latency (delay) except during peak traffic periods
7) LANs are high speed networks with data transmission speeds ranging from 0.2 to
100Mbps
8) The LAN speeds are adequate for most of the distributed systems
9) LANs use Ethernet as their protocol




Networks > Geographical Spread / Distance: Metropolitan Area Network (MAN)

Features:

1) MAN generally covers towns and cities (50 kms)
2) Developed in 1980s
3) The Medium used in MAN are optic fibers and cables
4) MAN offers data transmission rates that are adequate for distributed computing
5) Supports Data and Voice (e.g. Local Cable TV)
6) MAN has one or two cables and no switching elements
7) MAN has a broadcast medium to which all computers are attached
8) It is a simple network design
9) MAN offers typical latencies of less than 1 msec (millisecond)
10) Message routing in a MAN is fast


Networks > Geographical Spread / Distance: Wide Area Network (WAN)



Features:

1) WAN was developed in 1960s
2) A WAN is made up of numerous cables and telephone lines each connecting to a pair of
routers
3) WAN generally covers large distances (states, countries, continents)
4) WAN uses communication circuits connected by routers as the medium of networking
5) Routers forward packets from one to another, following a route from the sender to the
receiver
6) WAN offers typical latencies of 100 msec to 500 msec
7) There can be delays in communication if the WAN uses satellites
8) The typical speeds offered by WANs range from about 20 to 2000 Kbps
9) WANs are not yet suitable for distributed computing however new web standards and
networking standards are enabling WANs to be better and more robust




Networks > Topology

The physical topology of a network refers to the configuration of cables, computers, and other
peripherals. Physical topology should not be confused with logical topology which is the method
used to pass information between workstations.

Main Types of Network Topologies
In networking, the term "topology" refers to the layout of connected devices on a network. This
article introduces the standard topologies of computer networking.
One can think of a topology as a network's virtual shape or structure. This shape does not
necessarily correspond to the actual physical layout of the devices on the network. For
example, the computers on a home LAN may be arranged in a circle in a family room, but it
would be highly unlikely to find an actual ring topology there.
Network topologies are categorized into the following basic types:
Star Topology
Ring Topology
Bus Topology
Tree Topology
Mesh Topology
Hybrid Topology
More complex networks can be built as hybrids of two or more of the above basic topologies.



Networks > Topology: BUS Topology



The Bus topology is the simplest network configuration. It uses a single transmission medium
called a bus to connect computers together. Co-axial cable is often used to connect computers
in a bus topology. It often serves as the backbone for a network. The cable, in most cases, is not
of one length, but many short strands that use T-Connectors to join the ends. T-Connectors allow
the cable to branch off in a third direction to enable a new computer to be connected to the
network.

Special hardware (a terminator) has to be used at both ends of the coaxial cable so that a signal
traveling to the end of the bus is absorbed rather than reflected back as a repeat data transmission.
Since a bus topology network uses a minimum amount of wire and minimum special hardware, it is
inexpensive and relatively easy to install. The set-up costs are also relatively low. One can simply
connect a cable and T-Connector from one computer to the next and eventually terminate the
cable at both ends. The number of computers connected to the bus is limited, as the signal loses
strength when it travels along the cable. If more computers have to be added to the network, a
repeater must be used to strengthen the signal at fixed locations along the bus. The problem with
bus topology is that if the cable breaks at any point, the segment on each side of the break loses its
termination. The loss of termination causes the signals to reflect and corrupt data on the bus.
Moreover, a bad network card may produce noisy signals on the bus, which can cause the entire
network to function improperly. Bus networks are simple, easy to use and reliable. Repeaters can
be used to boost the signal and extend the bus. Heavy network traffic can slow a bus considerably,
and each connection weakens the signal, causing distortion when there are too many connections.



Networks > Topology: Ring Topology



In a ring topology, the network has no end connection: it forms a continuous ring through which
data travels from one node to another. Ring topology allows more computers to be connected to
the network than the bus or star topologies. In a ring topology each node is able to regenerate and
amplify the data signal before sending it to the next node, so the ring introduces less signal loss as
data travels along the path. A ring-topology network is often used to cover a larger geographic
location where implementation of a star topology is difficult. The problem with ring topology is that
a break anywhere in the ring will cause network communications to stop. A backup signal path may
be implemented in this case to prevent the network from going down. Another drawback of ring
topology is that users may access the data circulating around the ring when it passes through their
computer.


Networks > Topology: Star Topology



A star network is a LAN in which all nodes are directly connected to a common central computer.
Every workstation is indirectly connected to every other through the central computer. In some
star networks, the central computer can also operate as a workstation. The star topology works
well when the workstations are located at scattered points, and it is easy to add or remove a
workstation. If the workstations lie reasonably close to the vertices of a convex polygon and the
system requirements are modest, a ring topology may serve the intended purpose at lower cost
than a star. If the workstations lie nearly along a straight line, a bus topology may be best.

In a star network, a cable failure will isolate only the workstation that is linked to the central
computer by that cable, while all other workstations will continue to function normally, except that
they will not be able to communicate with the isolated workstation. If any workstation goes down,
the other workstations are not affected, but if the central computer goes down the entire network
will suffer degraded performance or complete failure.

The star topology can have a number of different transmission mechanisms, depending on the
nature of the central hub.

Broadcast Star Network: The hub receives and resends the signal to all of the nodes on
the network.

Switched Star Network: The hub sends the message only to the destination node. (A
small sketch after this list illustrates the difference between these two behaviours.)

Active Hub (Multi-port Repeater): Regenerates the electric signal and sends it to all the
nodes connected to the hub.

Passive Hub: Does not regenerate the signal; simply passes it along to all the nodes
connected to the hub.

Hybrid Star Network: Placing another star hub where a client node might otherwise go.

Star networks are easy to modify, and one can add new nodes without disturbing the rest
of the network. Intelligent hubs provide for central monitoring and management. Often there
are facilities to use several different cable types with hubs.
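
The following short Python sketch (a toy model, not from the original notes; the node names and
frames are invented) illustrates the difference between a broadcast hub and a switched hub in a
star network:

# Each node keeps a list of the frames it has received.
nodes = {"A": [], "B": [], "C": []}

def broadcast_hub(src, dest, frame):
    # Broadcast star: the hub resends the frame to every node except the sender.
    for node in nodes:
        if node != src:
            nodes[node].append((src, frame))

def switched_hub(src, dest, frame):
    # Switched star: the hub forwards the frame only to the destination node.
    nodes[dest].append((src, frame))

broadcast_hub("A", "B", "hello")   # B and C both receive this frame
switched_hub("A", "C", "hi")       # only C receives this frame
print(nodes)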

Networks > Topology: Mesh Topology



The mesh topology has been used more frequently in recent years. Its primary attraction is its
relative immunity to bottlenecks and channel/node failures. Due to the multiplicity of paths
between nodes, traffic can easily be routed around failed or busy nodes. A mesh topology is
reliable and offers redundancy.

If one node can no longer operate, all the rest can still communicate with each other, directly or
through one or more intermediate nodes. It works well when the nodes are located at scattered
points. Although this approach is very expensive in comparison to other topologies such as star
and ring, some users will still prefer the reliability of the mesh network to that of others (especially
for networks that have only a few nodes that need to be connected together).



Networks > Switching: Circuit Switching

A type of communications in which a dedicated channel (or circuit) is established for the duration
of a transmission. The most obvious circuit-switching network is the telephone system, which
links together wire segments to create a single unbroken line for each telephone call.

The other common communications method is packet switching, which divides messages into
packets and sends each packet individually. The Internet is based on a packet switching protocol
TCP/IP.

Circuit switching systems are ideal for communications that require data to be transmitted in real-
time. Packet switching networks are more efficient if some amount of delay is acceptable.

Circuit switching networks are sometimes called connection-oriented networks. Note, however
that although packet switching is essentially connectionless, a packet switching network can be
made connection-oriented by using a higher-level protocol. TCP, for example, makes IP networks
connection-oriented.


Networks > Switching: Packet Switching

Packet Switching refers to protocols in which messages are divided into packets before they are
sent. Each packet is then transmitted individually and can even follow different routes to its
destination. Once all the packets forming a message arrive at the destination, they are
recompiled into the original message.
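
As a concrete illustration of the split-and-reassemble idea, here is a minimal Python sketch (not
part of the original notes; the message text and packet size are arbitrary):

import random

message = "Packet switching splits a message into independently routed packets."

# Sender: divide the message into small, numbered packets.
PACKET_SIZE = 10
packets = [(seq, message[i:i + PACKET_SIZE])
           for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

# Network: the packets may take different routes and arrive out of order.
random.shuffle(packets)

# Receiver: use the sequence numbers to reassemble the original message.
reassembled = "".join(chunk for _, chunk in sorted(packets))
assert reassembled == message
print(reassembled)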

Most modern Wide Area Network (WAN) protocols, including TCP/IP are based on packet-
switching technologies. In contrast, normal telephone service is based on a circuit-switching
technology, in which a dedicated line is allocated for transmission between two parties. Circuit-
switching is ideal when data must be transmitted quickly and must arrive in the same order in
which it is sent. This is the case with most real-time data, such as live audio and video. Packet
switching is more efficient and robust for data that can withstand some delays in transmission,
such as email messages and web pages.




Q30

Describe benefits of Database Management System over Conventional File Management System


Q31

What is a Database Management System? What are its functions?


Q32

Describe basic functions of Relational Database Model and discuss their importance to the users and
designers.


Q33

Describe the features and merits of DBMS over conventional file system.


Q34

Enumerate the key purpose / function and name one example of the Database Management System.


Q98


What is a DBMS, what are its functions, describe the basic functions of the relational
database model and discuss its importance to the users and designers.


(Ans 30 to 34, 98)



Illustration: Database Management System

First let us understand what the terms Database, Conventional File Management System,
Database Management System (DBMS) and Relational Database Management System
(RDBMS) stands for.

Database

A database is usually a structured collection of data usually handled by a database
engine. Examples of databases are Oracle, SQL, Sybase, DB2, Informix, SQLite,
MySQL, Microsoft Access etc.

Conventional File Management System

Conventional files are relatively easy to design and implement because they are normally
based on a single application or information system. In conventional data systems, an
organization often builds a collection of application programs created by different
programmers. The data in conventional data systems is often not centralized. Some
applications may require data to be combined from several systems. These several
systems could well have data that is redundant as well as inconsistent (that is, different
copies of the same data may have different values).

Data inconsistencies are often encountered in everyday life. For example, we have all
come across situations when a new address is communicated to an organization that we
deal with (e.g. a bank), we find that some of the communications from that organization
are received at the new address while others continue to be mailed to the old address.

A significant disadvantage of files stored in conventional file systems is their inflexibility
and non-scalability. Data stored in conventional file systems can't be easily searched, and it is
very difficult to query, migrate or integrate such data with other open systems.

Data encapsulation is difficult in a conventional file system, as no semantic interpretation
of the data is available, whereas a database encapsulates type information and logical
relationships between objects. In conventional file systems data cannot be easily
accessed, the location of the data needs to be known, data is not portable, and there are
issues with performance and scalability; in a database these issues are taken care of well.

Combining all the data in a database would involve reduction in redundancy as well as
inconsistency. It also is likely to reduce the costs for collection, storage and updating of
data. With a DBMS, data items need to be recorded only once and are available for
everyone to use.

The following points summarize the difference between the Conventional File System
and a Database Management System, namely:

1) Conventional File Systems fail to support efficient search on the data
2) Conventional File Systems fail to perform efficient modifications to small pieces of
data
3) One cannot make complex queries on the data
4) Versioning of data is not possible or inefficient in a Conventional File System
5) One cannot perform efficient access control for many users who wish to access the
data

Database Management System (DBMS)

A Database Management System (DBMS) is computer software designed for the
purpose of managing databases based on a variety of data models.

It is a complex set of software programs that controls the organization, storage,
management and retrieval of data in a database. DBMS are categorized according to
their data structures or types. Sometimes DBMS is also known as Data Base Manager. It
is a set of pre-written programs that are used to store, update and retrieve a Database.

A database management system provides the ability for many different users to share
data and process resources. But as there can be many different users, there are many
different database needs. The question now is: How can a single, unified database meet
the differing requirement of so many users?

A DBMS minimizes these problems by providing two views of the database data: a
physical view and a logical view. The physical view deals with the actual, physical
arrangement and location of data in the direct access storage devices (DASDs).
Database specialists use the physical view to make efficient use of storage and
processing resources. Users, however, may wish to see data differently from how they
are stored, and they do not want to know all the technical details of physical storage.
After all, a business user is primarily interested in using the information, not in how it is
stored. The logical view, or user's view, of a database program represents data in a format
that is meaningful to a user and to the software programs that process those data. That
is, the logical view tells the user, in user terms, what is in the database. One strength of a
DBMS is that while there is only one physical view of the data, there can be an endless
number of different logical views. This feature allows users to see database information in
a more business-related way rather than from a technical, processing viewpoint. Thus the
logical view refers to the way the user views data, and the physical view to the way the data
are physically stored and processed.

Relational Database Management System (RDBMS)

A Relational database management system (RDBMS) is a database management
system (DBMS) that is based on the relational model as introduced by E. F. Codd.
Most popular commercial and open source databases currently in use are based on the
relational model.

A short definition of an RDBMS may be a DBMS in which data is stored in the form of
tables and the relationship among the data is also stored in the form of tables. It is used
to describe database engines that are able to work with more than a group (table) of
information at the same time, thereby allowing much efficiency when dealing with most
types of data.

Almost all commercial relational DBMS employ SQL (Structured Query Language) as
their query language. A relational database is a collection of data items organized as a
set of formally-described tables from which data can be accessed or reassembled in
many different ways without having to reorganize the database tables.

The standard user and application program interface to a relational database is the
structured query language (SQL). SQL statements are used both for interactive queries
for information from a relational database and for gathering data for reports. In addition to
being relatively easy to create and access, a relational database has the important
advantage of being easy to extend. After the original database creation, a new data
category can be added without requiring that all existing applications be modified.

A relational database is a set of tables containing data fitted into predefined categories.
Each table (which is sometimes called a relation) contains one or more data categories in
columns. Each row contains a unique instance of data for the categories defined by the
columns. For example, a typical business order entry database would include a table that
described a customer with columns for name, address, phone number, and so forth.
Another table would describe an order: product, customer, date, sales price, and so forth.
A user of the database could obtain a view of the database that fitted the user's needs.
For example, a branch office manager might like a view or report on all customers that
had bought products after a certain date. A financial services manager in the same
company could, from the same tables, obtain a report on accounts that needed to be
paid.
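
The order-entry example above can be sketched in a few lines of SQL. The following Python /
sqlite3 snippet (a hypothetical illustration, not from the original notes; the table names, column
names and rows are invented) creates the customer and order tables and then runs one possible
logical view of the same physical data - customers who bought products after a certain date:

import sqlite3

conn = sqlite3.connect(":memory:")   # a throw-away, in-memory database
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER,
                           product TEXT, order_date TEXT, price REAL,
                           FOREIGN KEY (customer_id) REFERENCES customer(id));
    INSERT INTO customer VALUES (1, 'Asha Mehta', 'Mumbai'),
                                (2, 'Ravi Shah',  'Delhi');
    INSERT INTO orders   VALUES (1, 1, 'Printer', '2010-06-01', 4500.0),
                                (2, 2, 'Laptop',  '2010-07-10', 42000.0);
""")

# One logical view of the same physical data: customers who bought after a date.
rows = conn.execute("""
    SELECT c.name, o.product, o.order_date
    FROM customer c JOIN orders o ON o.customer_id = c.id
    WHERE o.order_date > '2010-06-15'
""").fetchall()
print(rows)   # [('Ravi Shah', 'Laptop', '2010-07-10')]

A different SELECT statement over the same two tables would give, say, the finance manager's
report on accounts to be paid, without reorganizing the tables in any way.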





Three main features / functions / purposes of (R)DBMS

Centralized Data Management
Data Independence
Systems Integration

Advantages / Benefits of a (R)DBMS:
1) Redundancies & Inconsistencies in data can be reduced
- Data can be shown in different views and in different combinations without
actually making different copies of the same data.
- The same data needs to be recorded only once and are available for everyone to
use.
2) Better Services to Users
- Users can interact and view data in any style or pattern or combination without
having to know programming to interact with the data in the physical database.
3) Flexibility of the system is improved
- Changes / Updates can now be easily done and their impact can be controlled or
tracked easily.
4) Cost of developing, implementing and maintaining systems is lower
5) Standards can be enforced
- Since all access to the database must be through the DBMS, standards are
easier to enforce. Standards may relate to the naming convention of data, the
format of data, the structure of data etc. so as to avoid inconsistencies.
6) Data security and integrity can be improved.


Database Design
Definition: It is the study of determining how the data will be organized in the database.
Objectives of a database design are:
a) Ease of Retrieval of Data
b) Ease of Maintenance of Data
c) Efficient Data Management
To have a good design you should organize the data in a way that makes the information easy to
retrieve and makes the maintenance of the database easy.
Within a database, data is stored in one or more tables. Efficient data management is possible by
storing data in multiple tables and by establishing relationships between the tables.
A properly designed database provides you with the data models which can be compared to the
equivalent of the real world physical models.
In a relational model, the design of a database has to do with the way data is stored and how that
data is related. The design process is performed after you determine exactly what information
needs to be stored and how it is to be retrieved. The more carefully you design, the better the
physical database will meet the user's needs.
Problems resulting from a poor database design:
a) The database or the application may malfunction
b) Data may be unreliable or inaccurate
c) Performance of the system may be degraded
d) Flexibility of the system may be lost
e) Data redundancy can take place thereby degrading the system performance.


Q41

Distinguish between Micro Computers, Mini Computers, Mainframe Computers and Super
Computers.


(Ans 41)

There are classifications of digital computer systems, namely:

Super computers
These are very fast and powerful machines.
Their internal architecture allows them to run at speeds of tens of million instructions per
second (MIPS)
Very expensive (e.g.) Cray, CDC Cyber 205, Deep Blue, PARAM
Not used for CAD Applications
In 1958, Seymour Cray built the first completely transistorized supercomputer for the
Control Data Corporation.
PARAM was a super computer developed by C-DAC (Center for Development of
Advanced Computing), Pune, which today is being used for weather forecasting, remote
sensing, drug design & molecular modeling, space program and oil and gas exploration
etc.

Mainframe Computers
Built for general computing serving needs of business and engineering.
They are a step below supercomputers but are fast and process information at about 10 MIPS.
Mainframe computer systems are located in a centralized computing center with 20-100+
workstations connected to it.
Very expensive and need a lot of space.
A mainframe is a larger type of computer and is 10-100 times faster than a microcomputer.
Mainframe computers require a controlled environment for temperature and humidity.
Examples of mainframe systems are IBM 3090, Unisys 2200.

Mini Computers
Developed in 1960s resulting from advances in Microchip technology
These are smaller and less expensive than main frames.
Run at several MIPS (Million Instructions Per Second) and can support 5-20 users.
Used for CAD due to their low cost and high performance.
E.g. DEC PDP (Digital Equipment Corp Programmed Data Processor), VAX 11 (Virtual
Address Extension).
A minicomputer is 3-25 times faster than a microcomputer.
It is physically larger than a microcomputer and has greater storage capacity.
Mini computers require a controlled environment for temperature and humidity.

Micro Computers
Invented in 1970s
Generally used for Home Computing and Dedicated Data Processing Workstations.
Development in Micro Computer Technology resulted in growth of PCs
In the 1980s, small and medium design firms used them for CAD due to their low cost and availability.
E.g. IBM, Compaq, Dell, Gateway, Apple Macintosh.

Todays average computer users use a micro computer i.e. PCs, laptops, Notebooks, Palm tops
etc.

Both mini and mainframe computers can support more workstations than a micro, and both cost
several hundred thousand dollars.



Q42

Differentiate between Computers with Multiple Processors and Computers with Parallel Processors.


(Ans 42)

In a computer with multiple processors the calculations are divided between the numerous
processors. Since each processor now has less work to do, the task can be finished more
quickly.

For example, using a computer with 2 processors one might expect up to a twofold increase in
speed, but the speed increase actually obtained greatly depends on the software used to perform
the calculations. Computers with multiple processors introduce overheads, as effort is required to
divide the calculations between the processors and later reassemble the results into a useful form.
Hence they can sometimes be slower than those with single processors. Super-linear speedup is
also possible on computers with multiple processors, because today's processors contain a piece
of high-speed memory known as cache, which helps accelerate access to frequently used data.
When a processor needs some data, it first checks to see if it is available in the cache, and thus
avoids the time-consuming process of fetching the data from the hard disk.

More processors mean more cache, and hence faster processing and greater speeds are
possible.

In computers, Parallel Processing is the processing of program instructions by dividing them
among multiple processors with the objective of running a program in less time.

In Computers with Parallel Processors there is a simultaneous use of more than one CPU to
execute a program. Ideally, parallel processing makes a program run faster because there are
more CPUs running it. In practice, it is often difficult to divide a program in such a way that
separate CPUs can execute different portions without interfering with each other.
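
A minimal Python sketch of dividing work among processors (an illustrative example only, not
from the original notes; the function and workload sizes are invented):

from multiprocessing import Pool

def heavy_calculation(n: int) -> int:
    # Stand-in for a CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    workloads = [2_000_000] * 8

    # The pool divides the eight tasks among the available CPUs, so each
    # processor has less work to do and the whole job finishes sooner.
    with Pool() as pool:
        results = pool.map(heavy_calculation, workloads)
    print(len(results), "tasks completed")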

Most computers have just one CPU, but some computers have several. There are even
computers with thousands of CPUs. With single CPU computers, it is possible to perform parallel
processing by connecting the computers in a network. However, this type of parallel processing
requires very sophisticated software called distributed processing software.

Note that parallel processing differs from multitasking, in which a single CPU executes several
programs at once. Parallel processing is also called parallel computing.



Q43

Explain Compilers & Interpreters.


Q97

Discuss the functions of an Assembler, Interpreter and Compiler.


(Ans 43, 97)

Compilers
- A compiler is a translation program that translates the instructions of a high level language
into machine language.
- A compiler merely translates the entire source program into an object program; it is not
involved in execution.
- The object code is permanently created for future use and is used every time the program
is to be executed.
- Compilers are complex programs and require large memory space.
- Execution is less time consuming: the program runs faster, as no translation is required
every time the code is executed, since it is already compiled.
- Any change in the source code requires the program to be compiled all over again for the
changes to take effect.
- Slower for debugging and testing.

Interpreters
- An interpreter is another type of translator used for translating a high level language into
machine code.
- The interpreter is involved in execution as well: translation and execution alternate,
statement by statement.
- No object code is saved for future use.
- Interpreters are easy to write and do not require large memory space.
- Execution is more time consuming: each statement requires translation every time the
source code is executed.
- No recompilation is needed when changes are made to the source code.
- Better for faster debugging and testing.
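
The practical difference can be felt even inside an interpreted language. The Python sketch below
(an analogy only, not from the original notes) translates a statement once into a code object and
reuses it, versus re-translating the source text on every execution:

import time

source = "total = sum(range(1000))"
RUNS = 10_000

# "Compiler-like": translate once into a code object, then execute it many times.
code_obj = compile(source, "<demo>", "exec")
start = time.perf_counter()
for _ in range(RUNS):
    exec(code_obj)
translate_once = time.perf_counter() - start

# "Interpreter-like": re-translate the source text on every execution.
start = time.perf_counter()
for _ in range(RUNS):
    exec(compile(source, "<demo>", "exec"))
translate_always = time.perf_counter() - start

print(f"translate once  : {translate_once:.3f} s")
print(f"translate always: {translate_always:.3f} s")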


Assemblers

An assembler is a translator that converts a program written in assembly language into machine
language. Assembly language is a programming language that is once removed from a computer's
machine language. Machine languages consist entirely of numbers and are almost impossible for
humans to read and write. Assembly languages have the same structure and set of commands as
machine languages, but they enable a programmer to use names instead of numbers.

Each type of CPU has its own machine language and assembly language, so an assembly
language program written for one type of CPU won't run on another. In the early days of
programming, all programs were written in assembly language. Now, most programs are written
in a high-level language such as FORTRAN or C. Programmers still use assembly language
when speed is essential or when they need to perform an operation that isn't possible in a high-
level language
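
As a loose analogy (Python has no assembler as such, so this is only an illustration), Python's dis
module shows the named low-level instructions behind a function, much as assembly mnemonics
stand in for numeric machine-code opcodes:

import dis

def add_numbers(a, b):
    return a + b

# The disassembly lists human-readable instruction names (e.g. LOAD_FAST)
# in place of the raw numeric opcodes the virtual machine actually executes.
dis.dis(add_numbers)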


Q44

Distinguish between Centralized Processing and Distributed Processing.


(Ans 44)

Centralized Processing
Following are the features of the Centralized Processing Systems:
Historically seen in mainframe computers used in business for data processing. In such
an architecture, several dumb terminals are attached to the central mainframe computer.
Dumb terminals are the machines which users can use to input data and see the results of
processed data. They do not do any kind of processing.
All processing is taken care of by the centralized mainframe, thereby helping the
organization to have tighter control on the main data processing machine.
In such a processing system, one or more processors handle the workload of several
distant terminals.
The central processor switches from one terminal to another and does a part of each job
in a time-sharing manner. Hence such systems are also known as Time Sharing
Systems.

Drawbacks of Centralized Processing Systems:
If the main computer fails, the whole system fails and the remote terminals stop working.
All end users have to format data based on the format prescribed by the central office, as
it controls the centralized system.

Distributed Processing
Following are the features of the Distributed Processing Systems:
It is a system of computers connected together by a communication network.
Each computer is chosen to handle its local workload, and the network is designed to
support the system as a whole. Distributed data processing systems enable the sharing of
hardware and software resources among several users located far away from each other.

Advantages / Flexibility:
Better utilization of resources
Better accessibility - even distant users can easily access data.
Lower cost of communication (telecommunication costs can be lower as the
processing can be intelligently handled locally by the computers)

Disadvantages:
Security can be breached as it is easy to tap a data communication line.
Linking of different systems: Mismatch or incompatibility issues can arise if no standards are developed.
Difficult to maintain: Due to decentralization, the resources at remote sites cannot be centrally maintained.


Q14

Q15

Q45






Q58

Q94

What is Internet and Intranet?

What is Intranet and explain its uses to an organization.

Differentiate between the following:

a) Online Processing and Batch Processing
b) Internet and Intranet
c) Main Memory and Secondary Memory
d) Single and Multi User

Explain benefits of Intranets to an organization

What is Internet? What is Intranet and Extranet?
How is Internet beneficial to a business organization?



(Answer for 14, 15, 58 and 94)

What is the Internet?
The Internet is revolutionizing and enhancing the way we as humans communicate, both locally
and around the globe. Simply put, the Internet is a network of linked computers allowing
participants to share information on those computers. You should want to be a part of it because
the Internet literally puts a world of information and a potential worldwide audience at your
fingertips.

Internet History: The Internet's roots can be traced to the 1950s with the launch of Sputnik, the
ensuing space race, the Cold War and the development of ARPAnet (Department of Defense
Advanced Research Projects Agency), but it really took off in the 1980s when the National
Science Foundation used ARPAnet to link its five regional supercomputer centers. From there
evolved a high-speed backbone of Internet access for many other types of networks, universities,
institutions, bulletin board systems and commercial online services. The end of the decade saw
the emergence of the World Wide Web, which heralded a platform-independent means of
communication enhanced with a pleasant and relatively easy-to-use graphical interface.
Internet Activity: The information superhighway is literally buzzing with activity as Internet
pipelines pump out all manner of files, movies, sounds, programs, video, e-mail, live chat, you
name it. Yet amid all this activity there are always two key players in every transaction: a server
and a client.
Servers are computers with a 24-hour Internet connection that provide access to their
files and programs. These can be but are not limited to educational institutions,
commercial companies, organizations, government or military organizations, Internet
access providers and various other computer networks of all sizes.

Clients are software programs (and the people on remote computers using the software!)
used to access files on a server (typically, a Web browsing program such as Netscape
Navigator or an e-mail program such as Microsoft Outlook, Lotus Notes, Eudora).

Servers are typically located and organized by IP address and domain.
An IP address (IP stands for Internet Protocol) is a specific set of numbers referring to a
server's exact location on a network. Most domains have their own IP address, for
instance, 192.41.20.33 is the IP address of my server at webcurrent.com. You can type
those numbers in to get there, but the domain is easier to remember. An IP address also
leaves your fingerprints wherever you "surf" on the net. Each modem connection typically
is designated a specific IP address at Internet providers (this number typically changes
dynamically as users log in), so you never really surf the net anonymously. You can be
traced to a point.

A domain is part of the server's official name on the network, an alias for the less
descriptive IP numbers. Domains are organized by type of organization (a three-letter
suffix) and by country (a two-letter suffix which defaults to the U.S. if no suffix is
specified). You can tell a lot about a server by looking at its domain name.

o Here are some typical organizational suffixes: com=commercial,
edu=educational, gov=government, int=international, mil=military, net=network,
org=organization.

o Here are some country codes: au=Australia, at=Austria, be=Belgium, br=Brazil, dk=Denmark, jp=Japan, nz=New Zealand, ru=Russian Federation, uk=United Kingdom, ch=Switzerland.
Internet Issues: Emerging technologies and especially this communications revolution we are
witnessing also bring with them new issues relevant to safety, privacy, security, decency and
netiquette. Please familiarize yourself with these issues and your responsibilities as you become
a member of the Internet society at large.
Safety
Privacy
Security
Decency, Filters, Censorship
Credit Card & Identity thefts etc.

What is an Intranet?

Intranet is the generic term for a collection of private computer networks within an organization.
An intranet uses network technologies as a tool to facilitate communication between people or
workgroups to improve the data sharing capability and overall knowledge base of an
organization's employees.

An Intranet is a network based on the internet TCP/IP open standard. An intranet belongs to an
organization, and is designed to be accessible only by the organization's members, employees,
or others with authorization. An intranet's Web site looks and acts just like any other Web site, but has a firewall surrounding it to fend off unauthorized users. Intranets are used to share information.
Secure intranets are much less expensive to build and manage than private, proprietary-standard
networks.
Intranets utilize standard network hardware and software technologies like Ethernet, Wi-Fi,
TCP/IP, Web browsers and Web servers. An organization's intranet typically includes Internet
access but is firewalled so that its computers cannot be reached directly from the outside.
A common extension to intranets, called extranets, opens this firewall to provide controlled
access to outsiders. Many schools and non-profit groups have deployed them, but an intranet is
still seen primarily as a corporate productivity tool. A simple intranet consists of an internal email
system and perhaps a message board service. More sophisticated intranets include Web sites
and databases containing company news, forms, and personnel information. Besides email and
groupware applications, an intranet generally incorporates internal Web sites, documents, and/or
databases.
The business value of intranet solutions is generally accepted in larger corporations, but their
worth has proven very difficult to quantify in terms of time saved or return on investment.
To summarize, the Intranet is accessible exclusively to an organization and its employees. It holds all the relevant data that can be shared and used for the day-to-day functioning of the organization, such as employee attendance records, leave records, leave and other application forms, daily production records, policies and company brochures, which can be used by employees of the company who access such material within the physical premises of the organization. The Intranet can also be connected to the Internet through proper security features like firewalls.


What is an Extranet?

An extranet is a private network that uses Internet protocols, network connectivity, and possibly
the public telecommunication system to securely share part of an organization's information or
operations with suppliers, vendors, partners, customers or other businesses.

An extranet can be viewed as part of a company's Intranet that is extended to users outside the
company (e.g.: normally over the Internet). Extranet refers to an intranet that is partially
accessible to authorized outsiders. Whereas an intranet resides behind a firewall and is
accessible only to people who are members of the same company or organization, an extranet
provides various levels of accessibility to outsiders. You can access an extranet only if you have
a valid username and password, and your identity determines which parts of the extranet you can
view. Extranets are becoming a very popular means for business partners to exchange
information.



(Ans 45 a)

Batch Processing versus Online Processing

Following are the distinguishing features of Online Processing & Batch Processing
Batch Processing: Applicable for high volume transactions like Payroll / Invoicing.
Online Processing: Suitable for business applications, e.g. Railway / Airline Reservation.

Batch Processing: Data is collected over a time period and processed in batches.
Online Processing: Data is entered randomly, as and when it occurs.

Batch Processing: The user does not have direct access to the system.
Online Processing: All users have direct access to the system.

Batch Processing: Files are online only when processing takes place.
Online Processing: Files are always online.


(Answer for 45 b)

Internet versus Intranet

Following are the distinguishing features of Internet & Intranet:
Internet and Intranet are two different environments, the distinction between these two
environments can be seen below:

The Internet is characterized by the following:

Slow access speeds (e.g. 56 kbps dial-up connectivity)
Different types of web browsers are used to view the website (e.g. Netscape, IE, Opera etc.)
Different types of operating systems are used to view the website (e.g. Windows and Mac OS)
Global audience (e.g. multilingual, different cultures)

Intranets are characterized by the following:

Faster access speeds (e.g. 10 Mbps LAN connectivity)
Standardized type of browser. Minimal or no compatibility issues
Standardized type of operating systems
Primarily local audience


How does one design a website for the Internet:

Designing a website for the Internet is more difficult than designing a website for the intranet
environment. In general a website designed for the Internet will work well in an Intranet
environment. Considerations for designing an Internet Website are:
Small file size for fast downloading
Designed to display and function correctly on a wide range of browsers
Avoid the use of frames, as search engines have difficulty indexing frame-based websites
Content must consider the global audience


How does one design a website for the Intranet:

Designing a website for an Intranet is much easier than designing for the Internet. The
technologies used in an Intranet setting are more standardized and controlled, unlike the Internet,
wherein websites are accessed by many different types of technology. Considerations for
designing an Intranet Website are:
The file size can be bigger due to faster access speeds
You can design for a specific type of browser
Use of frames is acceptable

Note that in global companies, Intranet Websites are connected via slow WAN connections. If
your Intranet will be viewed by your local and global offices, it would be a good idea to design the
website following the Internet criteria.



(Answer for 45 c)

Main Memory versus Secondary Memory
Following are the distinguishing features of Main Memory and Secondary Memory:
Main Memory

Main memory is used to store a variety of critical information required for processing by the CPU (Central Processing Unit).
Examples of Main Memory are RAM (Random Access Memory) and ROM (Read Only Memory).
Main memory is made up of a number of memory locations or cells.
Main memory is measured in terms of capacity and speed.
The storage capacity of main memory is limited.
Main memory is expensive.
Main memory stores program instructions and data in binary machine code.
Main memory offers temporary storage of data (i.e. storage is volatile in nature).

Secondary Memory

Secondary memory is essential to any computer system for providing backup storage.
Examples of Secondary Memory are Floppy Disks, Magnetic Disks and Tapes, CDs, Flash Disks etc.
Secondary memory is made up of sectors and tracks.
Secondary memory is measured in terms of storage space.
The storage capacity of secondary memory is huge.
Secondary memory is less expensive.
Secondary memory stores data in the form of bytes made up of bits.
Secondary memory offers permanent storage of data (i.e. storage is non-volatile in nature).



(Answer for 45 d)

Single User versus Multi User Systems

Following are the distinguishing features between Single and Multi User systems:
Single User systems allow for interaction between the user and the job. A single user workstation
is used by a single person. These systems are also known as a Stand-alone system.
For example: Using a microcomputer for playing games or for word processing.
Multi User systems also known as a Multi Access Systems allow a number of users to access a
central computer interactively. Examples of such systems are Mainframes, Super Computers and
Mini Computers.
Multi User systems are often kept in a computer room, and individual users communicate with it
by means of a terminal (often known as a dumb terminal) or a terminal emulator on a desktop
connected by a LAN or a modem.
Multi User systems are based on the Centralized Processing concept. All data and information is
stored in the central computer. Various terminals are connected to the mainframe for data input /
output purposes. All users work on the system simultaneously. This is possible via timesharing,
where every user connected to the main computer is given a time-slice of the main computer's
processing time. The switching happens so fast that every user feels that he is the only one using the
computer system.


Q46

Define Booting


(Ans 46)

Booting
In computing, booting (booting up) is a bootstrapping process that starts operating systems when
the user turns on a computer system. A boot sequence is the initial set of operations that the
computer performs when it is switched on. The boot loader typically loads the main operating
system for the computer.

Because the operating system is essential for running all other programs, it is usually the first
piece of software loaded during the boot process. Boot is a short form for Bootstrap, which in
olden days was a strap attached to the top of a boot, such that one could easily wear his boot by
pulling the strap. Similarly, bootstrap utility helps the computer get started by loading the
operating system from the hard disk into the RAM.

There are two types of booting namely cold booting and warm booting.
A cold boot is when you turn the computer on from an off position. A warm boot is when you reset
a computer that is already on.



Q47

Define Search Engine and Explain how they work?


Q82 (d)

What is Internet? With respect to Internet, explain the following terms:
d) Search Engine


(Ans 47 and 82 d)

Search Engines and how they work
Search engines are the key to finding specific information on the vast expanse of the World Wide
Web. Without the use of sophisticated search engines, it would be virtually impossible to locate
anything on the web without knowing a specific URL, especially as the Internet grows
exponentially every day. Information on the World Wide Web may consist of web pages, images,
and other types of files. Some Search Engines also mine data available in newsgroups,
databases, or open directories.

Search Engines are programs that search documents for specified keywords and return a list of
the documents where the keywords were found. Although search engine is really a general class
of programs, the term is often used to specifically describe systems like Google, Yahoo, Alta
Vista and Excite that enable users to search for documents on the World Wide Web.

Search Engines make use of programs called Spiders or Crawlers and indexers which search
and index the data found, and provide the same to the user who searches for it based on the
keyword entered in the search box.

There are basically three types of search engines. Those that are powered by crawlers or
spiders, those that are powered by human submissions, and those that are a combination of the
two.
Spider / Crawler based search engines send crawlers or spiders out into cyberspace
for fetching as many documents as possible. Spiders are programs that automatically
fetch web pages. Spiders are used to feed pages to search engines. The program is called a spider
because it crawls over the web. Another term for these programs is web crawler.

Because most web pages contain links to other pages, a spider can start almost
anywhere. As soon as it sees a link to another page, it goes off and fetches it. Large
search engines like Google, Alta Vista etc have many spiders working in parallel. These
crawlers visit a website, read the information on the actual site, read the site's meta tags
and also follow the links that the site connects to. The crawler returns all that information
back to a central repository where the data is indexed using the indexer program. The
crawler will periodically return to the sites to check for any information that has changed,
and the frequency with which this happens is determined by the administrators of the
search engine.

Human powered Search Engines rely on humans to submit information that is
subsequently indexed and catalogued. Only information that is submitted is put into the
index.

In both cases when you query a search engine to locate information, you are actually searching
through the index that the search engine has created. You are not actually searching the World
Wide Web in real time.
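
The indexing idea can be illustrated with a small Python sketch (the pages and URLs below are invented; a real search engine indexes billions of crawled pages, but the principle of answering a query from the index rather than from the live Web is the same):

# Pretend these pages were already fetched by a spider / crawler.
crawled_pages = {
    "http://example.com/a": "intranet firewall corporate network",
    "http://example.com/b": "search engine spider crawler index",
    "http://example.com/c": "router network packets forwarding",
}

# The "indexer" builds an inverted index: keyword -> pages containing it.
inverted_index = {}
for url, text in crawled_pages.items():
    for word in text.split():
        inverted_index.setdefault(word, set()).add(url)

def search(keyword):
    # A query is answered from the index, not from the live Web.
    return inverted_index.get(keyword, set())

print(search("network"))   # pages containing the keyword, straight from the index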



Q48

Define Batch Processing


(Ans 48)

Batch Processing
Batch processing is the execution of a series of programs ("jobs") on a computer without human
interaction.

Batch jobs are set up so they can be run to completion without human interaction, so all input
data is preselected through scripts or command-line parameters. This is in contrast to "online" or
interactive programs which prompt the user for such input.

Batch processing has these benefits:
It allows sharing of computer resources among many users,
It shifts the time of job processing to when the computing resources are less busy,
It avoids idling the computing resources with minute-by-minute human interaction and
supervision,
By keeping high overall rate of utilization, it better amortizes the cost of a computer,
especially an expensive one.

Batch processing has been associated with mainframe computers since the earliest days of
electronic computing in the 1950s. Because such computers were enormously costly, batch
processing was the only economically viable option for their use. In those days, interactive
sessions with either text-based computer terminal interfaces or graphical user interfaces were not
widespread. Initially, computers were not even capable of having multiple programs loaded to the
main memory.

A popular computerized batch processing procedure is printing. This normally involves the
operator selecting the documents they need printed and indicating to the batch printing software
when and where they should be output. Batch processing is also used for efficient bulk database
updates and automated transaction processing and also for high volume transactions like Payroll
or Invoicing.

In Batch processing systems the data is collected in time periods and processed in batches. The
user does not have direct access to the batch processing system. The files are online only when
the processing takes place.
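
A minimal Python sketch of the batch idea, assuming a hypothetical payroll run (all input is preselected up front and the whole job runs to completion with no user interaction):

import csv
import io

# In a real batch run the input would come from a file selected by a script
# or a scheduler; a small in-memory sample is used here for illustration.
payroll_input = io.StringIO(
    "employee,hours,rate\n"
    "A101,160,500\n"
    "A102,152,450\n"
)

def run_payroll_batch(stream):
    payslips = []
    for row in csv.DictReader(stream):                 # process every record
        gross = float(row["hours"]) * float(row["rate"])
        payslips.append((row["employee"], gross))
    return payslips

for employee, gross in run_payroll_batch(payroll_input):
    print(employee, "gross pay:", gross)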


Q49

Define Real Time Processing


(Ans 49)

Real Time Processing
Real Time is synonymous with an event occurring immediately. The term is used to describe a
number of different computer features. For example, real-time operating systems are systems
that respond to input immediately. They are used for such tasks as navigation, in which the
computer must react to a steady flow of new information without interruption. Most general
purpose operating systems are not real-time because they can take a few seconds or even
minutes to react to input.

Real time can also refer to events simulated by a computer at the same speed that they would
occur in real life. In graphics animation, for example, a real time program would display objects
moving across the screen at the same speed that they would actually move. Another example of
a real time system is the ECG (electrocardiogram) unit in a hospital which monitors the electrical
activity of the heart over time.

Real time processing is used in mission critical systems. It is very useful in medical, defense and
other areas where data computation is expected immediately in response to an input.

Examples of real time processing applications on the web are Online Chat programs, Online
Reservations for Railways, Airways etc.


Q50

Define a Server


(Ans 50)

Server
A Server can be understood as a computer or a device on a network that manages network
resources. For example, a file server is a computer and storage device dedicated to storing files.
Any user on the network can store files on the server. A Print Server is a computer that manages
one or more printers, and a Network Server is a computer that manages network traffic. A
database server is a computer system that processes database queries.

Servers are often dedicated, meaning that they perform no other tasks besides their server tasks.
On multiprocessing operating systems, however, a single computer can execute several
programs at once. A server in this case could refer to the program that is managing resources
rather than the entire computer.

A Server computer is a computer dedicated to running a server application. A server application
is a computer program that accepts connections in order to service requests by sending back
responses. Examples of server applications include Web Servers, e-Mail Servers, Database
Servers and File Servers.

Servers are strong and robust computing machines. They are used for heavy workloads and can
remain unattended for long periods of time, unlike the personal computers or workstations we
commonly use in our offices or homes. A Server computer usually has special features intended to
make it stronger and more suitable for heavy duty work. Servers contain faster processors, more
RAM, larger hard drives with huge storage capacity often running into terabytes; they are highly
reliable, they have redundant power supplies, redundant hard drives (RAID), and special cooling
solutions with multiple fans as well.

On the Internet the most popular Servers are the Web Servers which serve static content to a
web browser by loading a file from the disk and serving it across the network to a user's web
browser. This entire exchange is mediated by the browser and the server talking to each other
using the HTTP (hypertext transfer protocol).

To conclude, a server can be defined as a multiuser computer that provides a service for e.g.
database access, file transfer, remote access etc. over a network connection.
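
A minimal sketch of the request/response pattern a server follows, using Python's standard library HTTP server purely for illustration (production web servers such as Apache or IIS are far more capable, but the idea of accepting connections and sending back responses is the same):

from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The server accepts a request and sends back a response over HTTP.
        body = b"Hello from the server"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Clients (web browsers) on the network can now request http://<host>:8000/
    HTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()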


Q51

Define Time Sharing


(Ans 51)

Time Sharing
Time Sharing is an approach to interactive computing in which a single computer is used to
provide apparently simultaneous interactive general-purpose computing to multiple users by
sharing processor time.

Time Sharing refers to the concurrent use of a computer by more than one user where the
multiple users share the computers processing time. Time Sharing is synonymous with Multi
User Systems. Almost all Mainframe Computing Systems and all Mini Computers are time
sharing systems. Most personal computers and workstations are Not time sharing systems.

Because early Mainframe computers were extremely expensive, it was not possible to allow a
single user exclusive access to the machine for interactive use. But because computers in
interactive use often spend much of their time idly waiting for user input, it was suggested that
multiple users could share a machine by using one user's idle time to service the other users.
Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to
other users. In a time sharing environment, the time sharing happens so fast that each user feels
he is the only one using the system.

The first project to implement a time sharing system was initiated by John McCarthy in late 1957.
It was known as CTSS (Compatible Time Sharing System).
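
The time-slicing idea can be sketched with a toy round-robin simulation in Python (job names and work units below are invented; a real operating system switches between jobs thousands of times per second):

from collections import deque

def time_share(jobs, time_slice=2):
    # jobs: dict of job name -> units of work remaining.
    ready_queue = deque(jobs.items())
    while ready_queue:
        name, remaining = ready_queue.popleft()
        worked = min(time_slice, remaining)
        print(name, "ran", worked, "unit(s),", remaining - worked, "left")
        if remaining - worked > 0:
            ready_queue.append((name, remaining - worked))  # back of the queue

time_share({"user_A": 5, "user_B": 3, "user_C": 4})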


Q52

What is a Domain Name?


(Ans 52)

Domain Name
Domain name is a name that identifies one or more IP addresses on the World Wide Web.
For example, the domain name microsoft.com represents about a dozen IP addresses. Domain
Names are used in URLs (Uniform Resource Locators) to identify particular web pages.

For example in the URL http://www.wikipedia.com/index.html, the domain name is
wikipedia.com.

Every domain name has a suffix that indicates which top level domain (TLD) it belongs to. There
are only a limited number of such domains. For example:

gov - Government agencies
edu - Educational Institutions
org - Organizations (Non Profit)
mil - Military & Defence
com - Commercial Businesses
net - Network Organizations

(Country specific domain names)
ca - Canada
in - India
th - Thailand
cn - China

Because the Internet is based on IP addresses, not domain names, every Web Server requires a
Domain Name System (DNS) server to translate the domain names into IP addresses.
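
A minimal Python sketch of this name-to-address translation, using the standard socket library (example.com is a reserved example domain; the address returned depends on the DNS resolver your machine is configured to use):

import socket

domain = "example.com"
ip_address = socket.gethostbyname(domain)   # ask the configured DNS resolver
print(domain, "resolves to", ip_address)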


Q54

What is a Hub?


(Ans 54)

Hub
A Hub can be understood as a connection device for networks. A Hub allows multiple segments
or computers in a LAN to connect and share packets of information.

It serves as a common connection point for devices in a network. Hubs are commonly used to
connect segments of a LAN (Local Area Network). A Hub contains multiple ports. When a packet
arrives at one port, it is copied to the other ports so that all segments of the LAN can see all
packets.

Hubs can be classified as Passive Hubs and Active or Intelligent Hubs. A Passive Hub serves
simply as a carrier for the data, enabling it to go from one device (or segment) to another in a
LAN. The Active Hub or so called Intelligent Hubs include additional features that enable an
administrator to monitor the traffic passing through the hub and to configure each port in the hub.
Intelligent hubs are also called manageable hubs.

A third type of Hub called Switching Hub, actually reads the destination address of each packet
and then forwards the packet to the correct port. These hubs contribute to network security.
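
A toy Python sketch of the difference (the device names and address table below are invented): a plain hub copies every packet to all the other ports, while a switching hub looks up the destination and forwards to one port only.

ports = {1: "PC-A", 2: "PC-B", 3: "PC-C"}          # hypothetical devices
mac_to_port = {"PC-B": 2, "PC-C": 3}               # the switching hub's address table

def hub_forward(packet):
    # A passive hub copies the packet to every other port on the LAN.
    return [port for port, device in ports.items() if device != packet["src"]]

def switch_forward(packet):
    # A switching hub forwards only to the port matching the destination.
    return [mac_to_port[packet["dst"]]]

packet = {"src": "PC-A", "dst": "PC-B", "data": "hello"}
print("Hub delivers to ports:   ", hub_forward(packet))
print("Switch delivers to ports:", switch_forward(packet))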



Q55

What is a Router? What are its functions?


(Ans 55)

Router and its functions
A router is specialized computer connected to more than one network. It runs software that allows
it to move data from one network to another. Routers operate at the network layer (OSI layer 3).
The primary function of a router is to connect networks together and keep certain kinds of
broadcast traffic under control. There are several companies that make routers: Cisco, Juniper,
Nortel (Bay Networks), Redback, Lucent, 3Com, and HP just to name a few.

Routers forward data packets along networks. It is a very important device on the Internet. A
router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its
ISP's network. Routers are located at gateways, the places where two or more networks connect.
Routers use Headers and Forwarding Tables to determine the best path for forwarding the
packets on the network, and they use protocols such as ICMP (Internet Control Message
Protocol) to communicate with each other and configure the best route between any two hosts.

Very little filtering is done through routers. However in future routers would be more intelligent
and will be able to sense what data to forward and where in line with advanced security policies.
Routers may provide connectivity inside enterprises, between enterprises and the Internet, and
also inside ISPs (Internet Service Providers). The largest routers, for example the Cisco CRS-1 or
Juniper T1600, interconnect ISPs, and are also used to connect very large enterprise networks.
The smallest routers provide connectivity for small and home offices. There are also Wireless
routers that perform the functions of a router, as well as function as a Wireless Access Point.

Routers perform the following functions:
Restrict network broadcasts to the local LAN
Act as the default gateway for computers trying to connect and talk to each other across
networks.
Move data between networks
Learn and advertise loop free paths between sub-networks.

If you have a small network with few hosts, you probably do not need a router unless you are
connecting your network to another network. For example, the Internet is a very large network, so
you would need a router to connect your network to the Internet.
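
A toy sketch of a forwarding-table lookup in Python (the networks and interface names are invented): the router picks the most specific route that matches the packet's destination address, falling back to a default route.

import ipaddress

forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth0 (corporate LAN)",
    ipaddress.ip_network("10.1.2.0/24"): "eth1 (branch office)",
    ipaddress.ip_network("0.0.0.0/0"):   "ppp0 (default route to the ISP)",
}

def route(destination):
    dest = ipaddress.ip_address(destination)
    # Choose the most specific (longest) prefix that contains the destination.
    matches = [net for net in forwarding_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(route("10.1.2.7"))    # -> eth1 (branch office)
print(route("8.8.8.8"))     # -> ppp0 (default route to the ISP)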


Q56

What is a Modem?


(Ans 56)

Modem


Modem is a short form for Modulator - Demodulator. A modem is a device or program that
enables a computer to transmit data over a telephone or cable lines.

Computer information is stored digitally, whereas information transmitted over telephone lines is
transmitted in the form of analog waves. A modem converts between these two forms.
Fortunately there is one standard interface for connecting external modems to computers called
RS-232. Any external modem can be attached to any computer that has an RS-232 port, which
almost all computers have. There are also modems that come as an expansion board that you
can insert into a vacant expansion slot in your computer. These are sometimes called onboard or
internal modems.

While the modem interfaces are standardized, a number of different protocols for formatting data
to be transmitted over telephone lines exist. Some, like CCITT V.34, are official standards, while
others have been developed by private companies. Most modems have built in support for the
more common protocols. At slow data transmission speeds at least, most modems can
communicate with each other. At high transmission speeds, however the protocols are less
standardized.

Apart from the transmission protocols, modems can be distinguished from each other based on
their following characteristics, namely:

a) bps (bits per second)
How fast the modem can transmit and receive data. At slower data transmission rates,
modems are measured in terms of baud rates. The slowest rate is 300 baud. At higher
speeds, modems are measured in terms of bits per second (bps). The fastest modems
run at 57,600 bps, although they can achieve even higher data transfer rates by
compressing the data. The faster the transmission rate, the faster you can send and
receive data. However even if you have a fast modem, some telephone lines are unable
to transmit data reliably at very high rates.


b) Voice / Data
Many modems support a switch to change between voice and data modes. In data mode,
the modem acts like a regular modem. In voice mode, the modem acts like a regular
telephone. Such modems have a built-in loudspeaker and microphone for voice
communication.

c) Auto Answer
An auto-answer modem enables your computer to receive calls in your absence. This is
only necessary if you are offering some type of computer service that people can call in
to use.

d) Data Compression
Some modems perform data compression, which enables them to send data at faster
rates. However, the modem at the receiving end must be able to decompress the data
using the same compression technique.

e) Flash Memory
Some modems come with flash memory rather than conventional ROM, which means
that the communications protocols can be easily updated if necessary using auto update
features present inside the software of the modem.

f) Fax Capability
Most modern modems are fax modem, which means that they can send and receive
faxes.

In order to get the most of your modem, you should have a communications software package, a
program that simplifies the task of transferring data.


CCITT: Abbreviation for Comité Consultatif International Téléphonique et Télégraphique, an organization that sets
international communications standards. CCITT, now known as ITU (International Telecommunication Union - the parent
organization) has defined many important standards for data communications.

CCITT V.34: The standard for full-duplex modems sending and receiving data across phone lines at up to 28,800 bps.
V.34 modems automatically adjust their transmission speeds based on the quality of the lines.

Baud rate: also known as symbol rate or modulation rate; the number of distinct symbol changes (signaling events) made to
the transmission medium per second in a digitally modulated signal or a line code.



Q59

Q67

Q76 (c)

Explain the generations of Programming Languages

Explain the evolution of Programming Languages from Machine to Natural Languages?

Distinguish between the 3rd Generation Language v/s 4th Generation Language?


(Ans 59, 67 and 76 c)

Generation of Programming Languages
A programming language or computer language is a standardized communication technique for
expressing instructions to a computer. It is a set of syntactic and semantic rules used to define
computer programs. A language enables a programmer to precisely specify what data a
computer will act upon, how this data will be stored / transmitted, and what actions will be taken
under various circumstances.

There are five generations of computer programming language. They are explained below:

a) First Generation Programming Language (The Machine Language)
A first-generation programming language is a machine-level programming language. It
consists of 1s and 0s. Originally, no translator was used to compile or assemble the first-
generation language. The first-generation programming instructions were entered through
the front panel switches of the computer system.

The main benefit of programming in a first-generation programming language is that the
code a user writes can run very fast and efficiently since it is directly executed by the CPU,
but machine language is somewhat more difficult to learn than higher generational
programming languages, and it is somewhat more difficult to edit if errors occur, or for
example, if instructions need to be added to memory at some location, then all the
instructions after the insertion point need to be moved down to make room in memory to
accommodate the new instructions. Doing so on a front panel with switches can be very
difficult. Furthermore portability is significantly reduced in order to transfer the code to a
different computer; it needs to be completely rewritten since the machine language for one
computer could be significantly different from another computer. Architectural
considerations make portability difficult too.

b) Second Generation Programming Language (Assembly Language)
A second-generation programming language is a term usually used to refer to some form of
assembly language. Unlike first-generation programming languages, the code can be read
and written fairly easily by a human. But it must be converted into a machine readable form
in order to run on a computer. The conversion process is simply a mapping of the assembly
language code into binary machine code (the first-generation language). The language is
specific to a particular processor family and environment. Since it is the native language of
a processor it has significant speed advantages, but it requires more programming effort
and is difficult to use effectively for large applications.

The Assembly languages were developed to reduce the difficulties in writing language
programs. Assembly languages are known as Symbolic Languages because symbols are
used to represent operation code and storage locations. Convenient alphabetic
abbreviations called mnemonics (memory aids) and other symbols are used.

Advantages:
Alphabetic abbreviations are easier to remember and they are used in the place of
the numerical addresses of the data.
This generation of language simplified programming to an extent.

Disadvantages:
Assembly language is machine oriented because the language instructions
correspond closely to the machine language instructions of the particular computer
model used. i.e. it was computer dependant.


Tip:
Assembly language or simply assembly is a human-readable notation for the machine language
that a specific computer architecture uses. Machine language, a pattern of bits encoding machine
operations, is made readable by replacing the raw values with symbols called mnemonics.


c) Third Generation Programming Language (High Level Language)
The third-generation language is also known as compiler languages. Instructions of HLL
(high level language) are called statements and they closely resemble human language or
standard notation of mathematics. A third-generation language is a programming language
designed to be easier for humans to understand, including things like using named
variables e.g. x=b+c, where x is the variable.

Some examples of third-generation languages are BASIC, C, C++, Java, COBOL and
FORTRAN.

Advantages:
Easy to learn and understand
Have less rigid rule forms
Potential for error is reduced
It does not depend on the make or model of the machine

Disadvantages:
It is less efficient than the Assembly language program.
It requires a greater amount of time for translation into machine instructions.

d) Fourth Generation Programming Language
The fourth-generation programming language is used to describe a variety of programming
languages that are more nonprocedural and conversational than prior languages. Natural
languages are fourth-generational and are very close to English or other human languages.

While using fourth-generation languages, the programmers only need to specify the results
they want, while the computer determines the sequence of instructions that will accomplish
those results. Examples of such generation languages are FOXPRO, Oracle and Dbase.

Advantages:
Ease of use and technical sophistication
Natural query languages that impose no rigid grammar rules

Disadvantages:
Not very flexible
Difficult for an end user to override some of the pre-specified formats or
procedures of a fourth-generation language.
The machine language codes generated by 4 GL programs are less efficient
compared to earlier languages.
Unable to provide reasonable response times when faced with a large amount of
real-time transactions.
e) Fifth Generation Programming Language (Artificial Intelligence)
The fifth-generation languages are those which work on Artificial Intelligence techniques.
Artificial Intelligence is a science and technology based on disciplines such as Computer
Sciences, Biology, Psychology, Linguistics and Mathematics.

The major focus on this generation of languages is the development of computer functions
normally associated with human intelligence such as reasoning, inference, problem solving
etc. The term AI was coined by John McCarthy at MIT in 1956. The domains of AI are
Cognitive Science, Computer Science, Robotics, Natural Language etc.
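
To make the third-generation versus fourth-generation contrast concrete, the sketch below (in Python, with invented sales figures) computes the same result twice: first with explicit step-by-step instructions, then with a single declarative expression that simply states the result wanted, which is the spirit of a 4GL query.

sales = [("West", 1200), ("East", 800), ("West", 450), ("North", 300)]

# Third-generation style: explicit, procedural, step-by-step instructions.
total = 0
for region, amount in sales:
    if region == "West":
        total = total + amount
print(total)

# Fourth-generation style: a single declarative expression of the result
# wanted (comparable in spirit to: SELECT SUM(amount) WHERE region = 'West').
print(sum(amount for region, amount in sales if region == "West"))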


Q61

What are RAM, ROM and Cache Memory?


(Ans 61)

RAM (Random Access Memory)

RAM is an acronym for Random Access Memory. It is a type of computer memory that can be
accessed randomly, that is, any byte of memory can be accessed without touching the preceding
bytes.

RAM is the most common type of memory found in computers and other devices, such as
printers.

There are two basic types of RAM:
a) Dynamic RAM (DRAM)
b) Static RAM (SRAM)

The two types differ in the technology they use to hold data, with Dynamic RAM being the more
common type. Dynamic RAM needs to be refreshed thousands of times per second. Static RAM
does not need to be refreshed, which makes it faster, but it is also more expensive than dynamic
RAM. Both types of RAM are volatile, meaning that they lose their contents when the power is
turned off.

In common usage, the term RAM is synonymous with Main Memory, the memory which is
available to programs. For example, a computer with 8MB RAM has approximately 8 million bytes
of memory that programs can use. In contrast, ROM (Read Only Memory) refers to special
memory used to store programs that boot the computer and perform diagnostics. Most personal
computers have a small amount of ROM (a few thousand bytes). In fact, both types of memory
(ROM and RAM) allow random access.

RAM chips or Main Memory Chips are very expensive compared to Secondary Storage Devices.
A RAM chip of 1 GB may cost anywhere around Rs.2000 for a Laptop, while a 250 GB Hard Disk
would come for around the same cost.
ROM (Read Only Memory)

ROM is an acronym for read only memory, a computer memory on which data has been pre-
recorded. Once data has been written onto a ROM chip, it cannot be removed and can only be
read.

Unlike main memory (RAM), ROM retains its contents even when the computer is turned off.
ROM is referred to as being non-volatile, whereas RAM is volatile.

Most personal computers contain a small amount of ROM that stores critical programs such as
the program that boots the computer. In addition, ROMs are used extensively in calculators and
peripheral devices such as laser printers etc whose fonts are often stored in ROMs.

A variation of ROM is a PROM (Programmable Read Only Memory). PROMs are manufactured
as blank chips on which data can be written with a special device called PROM programmer.


Cache Memory
Pronounced as cash, Cache memory is a special high-speed storage mechanism. It can be
either a reserved section of the main memory or an independent high-speed storage device. Two
types of caching are commonly used in personal computers namely; memory caching and disk
caching.

A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of
high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used
for main memory. Memory caching is effective because most programs access the same data or
instructions over and over. By keeping as much of this information as possible in SRAM, the
computer avoids accessing the slower DRAM.

Some memory caches are built into the architecture of microprocessors. The Intel 80486
microprocessor, for example, contains 8k memory cache and the Pentium has a 16k cache. Such
internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external
cache memory called Level 2 or (L2) cache. These caches sit between the CPU and the DRAM.
Like L1 caches, L2 caches are composed of SRAM but they are much larger.

Disk caching works under the same principle as memory caching, but instead of using high-speed
SRAM, a disk cache uses conventional main memory. The most recently accessed data from the
disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access
data from the disk, it first checks the disk cache to see if the data is there. Disk caching can
dramatically improve the performance of applications, because accessing a byte of data in RAM
can be thousands of times faster than accessing a byte on a hard disk.

When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is
judged by its hit rate. Many cache systems use a technique known as smart caching, in which the
system can recognize certain types of frequently used data.
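
The cache-hit principle can be illustrated with a small software analogy in Python (this is not a hardware cache; the 0.1 second delay simply stands in for a slow disk or network access):

import time
from functools import lru_cache

@lru_cache(maxsize=128)
def read_record(key):
    time.sleep(0.1)          # stand-in for a slow disk or network access
    return "data-for-" + str(key)

start = time.perf_counter()
read_record(42)              # cache miss: pays the cost of the slow access
first = time.perf_counter() - start

start = time.perf_counter()
read_record(42)              # cache hit: answered from fast memory
second = time.perf_counter() - start

print("first access:", round(first, 3), "s, cached access:", round(second, 6), "s")
print(read_record.cache_info())   # shows hits and misses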


Q62

Explain the Generations of Computers


(Ans 62)

Generations of Computers

The history of computer development is often referred to in reference to the different generations
of computing devices. Each generation of computer is characterized by a major technological
development that fundamentally changed the way computers operate, resulting in increasingly
smaller, cheaper, more powerful and more efficient and reliable devices. Following are the five
generations of the computers, namely;

First Generation (1940 - 1956: Vacuum Tubes)
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and
were often huge in size taking up entire rooms. They were expensive to operate and they used to
consume a great amount of electricity. These machines generated lots of heat, which was often
the cause of malfunctions. First generation computers relied on machine language to perform
operations and they could only solve one problem at a time. Input was based on punched cards
and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC (Electronic Numerical Integrator and Computer) computers are
examples of first generation computing devices. The UNIVAC was the first commercial computer
delivered to a business client namely the U.S. Census Bureau in 1951.

Second Generation (1956 - 1963: Transistors)
Transistors replaced vacuum tubes in the second generation of computers. The transistor was
invented in 1947 but did not see widespread use in computers until the late 50s. The transistor
was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper,
more energy-efficient and more reliable than their first generation predecessors. Though the
transistor still generated a great deal of heat that subjected the computer to damage, it was
nevertheless a great improvement over the vacuum tube. Second generation computers still relied on
punched cards for input and printouts for output.

Second generation computers moved from cryptic binary machine language to symbolic or
assembly languages, which allowed programmers to specify instructions in words and
symbols. High level programming languages were also being developed at this time, such as
early versions of COBOL and FORTRAN. These were also the first computers that stored their
instructions in their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.

Third Generation (1964 - 1971: Integrated Circuits)
The development of the Integrated Circuit (IC) was the hallmark of the third generation of
computers. Transistors were miniaturized and placed on silicon chips, called semiconductors,
which drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers
through keyboards and monitors and interfaced with an operating system, which allowed
the device to run many different applications at one time with a central program that monitored
the memory. Computers for the first time became accessible to a mass audience because they
were smaller and cheaper than their predecessors.

Fourth Generation (1971 to Present: Microprocessors)
The microprocessor brought the fourth generation of computers, as thousands of integrated
circuits were built onto a single silicon chip. What in the first generation filled an entire room could
now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the
components of the computer from the central processing unit and memory to input / output
controls on a single chip.

In 1981, IBM introduced its first computer for the home user, and in 1984 Apple introduced the
Macintosh. Microprocessors also moved out of the realm of desktop computers and into many
areas of life as more and more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form networks,
which eventually led to the development of the Internet. Fourth generation computers also saw
the development of GUIs, the mouse and handheld devices.

Fifth Generation (Present and Beyond: Artificial Intelligence)
Fifth generation computing devices, based on artificial intelligence, are still in development and
prototype stages, though there are some very good applications such as Voice recognition and
other Biometric Applications that are being used today. The use of parallel processing and
superconductors is helping to make artificial intelligence a reality. Quantum computation and
molecular and nanotechnology will radically change the face of computers in years to come. The
goal of the fifth generation computing is to develop devices that respond to natural language input
and are capable of learning and self-organization.


Tips:
ENIAC: Acronym for Electronic Numerical Integrator and Computer, the first operational electronic digital computer in the
U.S. developed by the Army Ordnance to compute World War II ballistic firing tables. The ENIAC, weighing 30 tons, using
200 kilowatts of electrical power and consisting of 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of
resistors, capacitors and inductors, was completed in 1945. In addition to ballistics, the ENIAC's field of application
included Weather Prediction, Atomic Energy Calculations, Cosmic Ray studies, Thermal Ignition, Random Number
Studies, Wind Tunnel Design, and other scientific uses. The ENIAC soon became obsolete as the need arose for faster
computing speeds.



Q63

Define a System Analyst.
Explain his role.
Explain the technical and other attributes required for a System Analyst.


(Ans 63)

System Analyst Role, technical & other attributes

Definition: A Systems Analyst (SA) is the one who designs an Information System. His primary
responsibility is to identify the information needs of an organization and obtain a logical design of
the Information system which will meet the needs.

There are three groups of people involved in developing Information Systems for an organization,
namely; Managers, Users of the system, and computer programmers / developers who
implement the systems. The System Analyst co-ordinates the efforts of all these groups to
effectively develop and operate computer based Information systems.

The System Analyst's role consists of the following activities, namely:

a) Defining Requirements:
In this most important and difficult task, the System Analyst interviews users and finds out
what information they use in the current system and how they use it. The System Analyst
has to determine technical and performance issues with the users' existing system and
process. Here the System Analyst aims to gather as much relevant information as he can.

b) Prioritizing Requirements by Consensus:
In an organization there are many users, each having special information needs. Thus
there is a need to set priorities among the requirements of various users. This is done in
a meeting with all users by a consensus. Here the System Analyst would need to use his
good interpersonal relations and diplomacy while prioritizing the requirements.

c) Gathering Data, Facts and Opinions of Users:
Having determined the information needs and their priority, the System Analyst must
develop the logical model of the system and must validate it with all the users. The users
will be made aware of what information they will get, how it will be derived and how they
can use it. The System Analyst must consider the users' experience and expertise. He
must be like a student and capture the users' views, opinions and the organization's facts.

d) Analysis & Evaluation:
The System Analyst analyses the working of the current Information system and finds out
to what extent it meets the users' needs. He then studies the facts and opinions
gathered by him and visualizes / conceptualizes what the new system would look like.

e) Solving Problems:
The System Analyst must study the problem in depth and suggest alternate solutions to
the management. The relative difficulties must be determined so that the management
can pick the best solution.

f) Drawing up Specifications:
A key job of the System Analyst is to obtain the functional specifications of the system to
be designed in a form which can be understood by users. It should be non-technical and
the System Analyst must get the acceptance of all the users. It should be precise and
detailed and must take into account expansions envisaged in the near future.

g) Designing the System:
Once the specifications are accepted, the System Analyst designs the system. The
design must be clear to the system implementer. The design must also be modular so
that changes can be accommodated easily. The System Analyst must know the latest
design tools to assist him in his task. He must also create a system test plan.

h) Evaluating the System:
The System Analyst must critically evaluate a system after it has been in use for a
reasonable period of time. When and how it is done and how users' comments are
gathered and used must be decided by the analyst. He must accept valid criticism to
enable him to improve the system.

Attributes of a System Analyst
To be effective, a System Analyst must have several attributes. These attributes are listed below:

a) Should possess knowledge of the organization:
A System Analyst must understand the way in which various organizations function. He
must understand the management structure and the departmental relationships and their
day-to-day functioning. He must have domain knowledge of the client's business.

b) Should possess knowledge of computer systems and softwares:
The System Analyst must also know about the recent developments in computer systems
and software. He need not be an expert programmer or computer manager, but he
should have enough technical details to enable him to interact effectively with program
designers. He should be technically sound to determine feasibility studies.

c) Should possess good interpersonal relations:
A System Analyst must be able to interpret, sharpen and refine the user needs. He must
be a good listener, good diplomat and win over the users as friends. He should be able to
resolve conflicting requirements and arrive at a consensus.

d) Should possess the ability to communicate effectively:
A System Analyst is also required to orally present his design to user groups. Such oral
presentations are often made to non-technical management personnel. He must
therefore be able to give a satisfactory answer to questions raised by users.

e) Should possess an analytical mind:
A System Analyst is required to find solutions to problems. A good analyst must be able
to perceive the core of a problem and discard redundant data.

f) Should possess breadth of knowledge:
A System Analyst must understand the different users that would work with the
Information system and how they perform their day-to-day functions.



Q64

Explain File Design?


(Ans 64)

File Design

The design of files includes decisions about the nature and content of files, such as whether
the file is to be used for storing transaction details, whether it would be used to store historical
data, or whether it would contain reference information to other files.

Among the design decisions we look into which data items to include in the record format within the
file, the length of each record, and the arrangement of records within the file (i.e. the storage
structure: sequential, indexed or relative).
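
A minimal sketch of these decisions in code (Python; the field names and sizes are purely
illustrative assumptions, not taken from the notes). A fixed-length record format makes the
length of each record explicit and lets any record be located by a simple offset calculation,
which is the basis of relative (direct) file organisation:

    import struct

    # Record layout: 6-character account id, 20-character description, 8-byte amount
    RECORD_FORMAT = "=6s20sd"                       # '=' means no padding between fields
    RECORD_LENGTH = struct.calcsize(RECORD_FORMAT)  # fixed length of every record in bytes

    def pack_record(account_id, description, amount):
        # struct pads / truncates the strings to the declared field widths
        return struct.pack(RECORD_FORMAT, account_id.encode(), description.encode(), amount)

    def unpack_record(raw):
        acc, desc, amount = struct.unpack(RECORD_FORMAT, raw)
        return acc.decode().rstrip("\x00"), desc.decode().rstrip("\x00"), amount

    # Record number n starts at byte offset n * RECORD_LENGTH in the file,
    # so any record can be read directly without scanning the whole file.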



Q65

Explain User Involvement


(Ans 65)

User Involvement

The users can be understood as the managers and employees in the business, and they are highly
involved in the system development.

The important characteristic of these users is that they have accumulated experience working
with Applications developed earlier. They have better insight into what the information system
should be. If they have experienced systems failures earlier they will have ideas about avoiding
problems.

The applications developed in organizations are often highly complex; hence systems analysts
need the continual involvement of users to understand the business functions being studied. With
better system development tools emerging, users can even design and develop applications without
involving a trained systems analyst.

Hence, user involvement plays an important role throughout the entire software development life cycle.


Q66

What are System Software and Application Software?
Enumerate the various System Software which you have come across and explain its basic functions.


(Ans 66)

System Software and Application Software

System Software
Software can be classified as System Software and Application Software.
An Operating System (OS) can be classified as a part of the system software. It is a series of
programs used by the computer to manage its operation as well as allow the users to interact with
the computer through various devices. It is a set of programs that acts as a middle layer between
application software and computer hardware.

System software can be classified into operating systems and language processors. The operating
system creates an interface between the user and the system hardware. Examples of system
software are Windows operating systems like Windows NT, XP, Vista and Windows Mobile, as well as
Unix, Linux etc.

Language Processors are those which help to convert computer language (i.e. Assembly and
high level language) to machine level language. Examples of language processors are
Assemblers, Compilers and Interpreters.

Microsoft Windows: The first version of the Microsoft Windows OS was launched in 1983.
Microsoft encouraged developers to produce software applications to run on their Windows
Operating System. Today's Microsoft Windows is totally GUI based. Windows 98, NT, XP and Vista are the
most popular operating system softwares. Microsoft has even launched Windows operating
systems for mobile devices: Windows Mobile 5 and Windows Mobile 6 run today's state-of-the-art
mobile devices. It even has operating systems for Smartphones.

Unix Operating System: Developed in Bell Labs in 1969 to satisfy the following objectives.
Simple and elegant interface to allow users to interact with the computer
Written in high level language rather than assembly language
Allows for the re-use of code.

Functions of a System Software

a) Process Management
- System Software helps the CPU allocate resources for executing programs / processes.
A process is a program in execution. E.g. Spooling, printing etc. The OS helps in the
creation, deletion, suspension, resumption and synchronization of processes.

b) Memory Management
- Memory is a large array of words and bytes each with its own address. The CPU reads
from and writes to memory. The System Software keeps track of currently used memory
and who is using it. It decides which processes to load in memory when memory space
becomes available and it allocates & de-allocates memory space as needed.

c) Storage Management
- The System Software deals with the allocation and reclamation of storage space when
a process / program is opened or terminated. The OS helps in reading of data from the
disk to the main memory (RAM) in order to execute processes.

d) I/O (Input / Output) System
- The System Software helps the I/O devices to communicate easily as it hides the
peculiarities and device driver details from the end user.

e) File Management
- The System Software provides a logical view of Information Storage, it maps files on
physical devices. It helps in the creation and deletion of files and directories, the manipulation of
files and directories on the storage, and also supports backups. It offers a very user-friendly
interface, like Windows Explorer, for end users to work with files easily.

f) Protection System
- The System Software protects processes from interference of other processes and it
checks for authorization of processes and allows them to access CPU resources.

g) Networking
- Distributed computing systems require Multi-user OS for allowing processes and users
the access to shared resources on the network.

h) System and Resource Monitoring
-The System Software helps monitor resource usage and provides information on system
performance.
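
These services are normally consumed through system calls or standard library wrappers rather
than by driving the hardware directly. A minimal sketch (Python standard library only; it assumes
a desktop with Python installed) of an application asking the system software for process, file
and storage services:

    import os, sys, shutil, subprocess

    # Process management: this program itself runs as an OS-managed process
    print("Running as process id", os.getpid())

    # The OS creates (and later terminates) a child process on our behalf
    child = subprocess.run([sys.executable, "--version"], capture_output=True, text=True)
    print("Child reported:", (child.stdout or child.stderr).strip())

    # File management: listing a directory through the file system service
    print("Some files here:", os.listdir(".")[:5])

    # Storage management: how much disk space the OS says is free
    total, used, free = shutil.disk_usage(".")
    print("Free space: %d GB of %d GB" % (free // 2**30, total // 2**30))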


Application Software:

In computer science, an application is a computer program designed to help people perform a
certain type of work. An application thus differs from an operating system (which runs a
computer), a utility (which performs maintenance or general-purpose chores), and a programming
language (with which computer programs are created). Depending on the work for which it was
designed, an application can manipulate text, numbers, graphics, or a combination of these
elements. Some application packages offer considerable computing power by focusing on a
single task, such as word processing; others, called integrated software, offer somewhat less
power but include several applications, such as a word processor, a spreadsheet, and a database
program.

Application software is the software that end users work with directly to perform their tasks on the
computer. Typical examples of application software are word processors, spreadsheets, media players
etc. Multiple applications bundled together as
a package are sometimes referred to as an application suite. Microsoft Office and
OpenOffice.org, which bundle together a word processor, a spreadsheet, and several other
discrete applications, are typical examples. The separate applications in a suite usually have a
user interface that has some commonality making it easier for the user to learn and use each
application. And often they may have some capability to interact with each other in ways
beneficial to the user. For example, a spreadsheet might be able to be embedded in a word
processor document even though it had been created in the separate spreadsheet application.



Q68

Distinguish between Applications designed for the Web / Internet versus Conventional Applications?


(Ans 68)

Web / Internet Applications versus Conventional Applications

Although Web / Internet Applications (a.k.a. thin client applications) and Conventional (a.k.a. thick
client applications or client-server) Applications have basic architectural similarities such as
application servers, database servers and GUI, there are several key differences that have
increased the popularity of Web / Internet based applications over Conventional Client Server
Applications. These are listed below:

a) Client Processing
Both Web / Internet based applications as well as Conventional Client-Server
applications consist of multiple tiers, but the Client tier is very different. In a Conventional
application a large part of the processing is done on the client, which is often referred to as a thick or fat client.

Whereas in a web application the web browser is the client, the GUI is generated at the
server and delivered to the web browser via HTML and the UI is rendered on the client
side. All the business rules are executed on the server.

b) Network Protocols
Web based applications use standard Internet Protocols. The thin client or the browser
uses HTTP (Hypertext Transfer Protocol) & SSL (Secure Sockets Layer) which is
supported by the web browser and even firewalls.

Conventional Client-Server applications do not use HTTP to communicate between the
client and server. They use proprietary protocols. These applications do not execute over
the web as they do not use HTTP protocol.

c) Security
Web applications are compatible with the Internet security model and can work well with
firewalls. These applications use SSL and Digital Certificates, and they use LDAP for end user
authentication.

The Conventional Client-Server applications have not used many of these new Internet
Security Technologies.

d) Ability to work with Structured & Unstructured Data
Web based applications support both structured and unstructured data through the use of
technologies like HTML, HTTP and XML and here data can reside outside a database.

A Conventional Client-Server application deals with structured, relational data that
resides in the applications database. Such applications do not deal with data outside of
its own database.

e) Extensibility
Web applications are extensible. They can be made available from anywhere and any
amount of personalization features can be made available to the users. All the user
needs is an internet connection. Having a web based application means a much
less complicated setup from the client side as all they need is a web browser to run such
an application. No additional software, drivers, dlls (dynamic link libraries) etc. are
needed. Web based applications are becoming very popular because they are cheaper to
run on low end PCs with minimum configurations.

The disadvantages with conventional applications are that you need a license for the
client on every PC which needs to run it. This costs money and maintenance etc.

f) Speed & flexibility of updates
Conventional Client-Server Applications are faster as they run on local resources.
Whereas web based / internet applications may run slower depending on the bandwidth
and the congestion on the network.

Conventional applications have to be updated on every computer on which the
application is installed. Whereas Web based applications can automatically receive live
updates when the applications are connected to the internet.
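
As a small illustration of the protocol difference described above, the sketch below (Python
standard library; example.com is just a placeholder host) issues the kind of HTTPS request a thin
web client sends, and receives the HTML that the browser would then render on the client side:

    import http.client

    conn = http.client.HTTPSConnection("example.com", timeout=10)   # HTTP over SSL/TLS
    conn.request("GET", "/")                 # the request a web browser would send
    response = conn.getresponse()
    print("Status:", response.status, response.reason)              # e.g. 200 OK
    html = response.read().decode("utf-8", errors="replace")        # HTML generated at the server
    print(html[:200])                        # the thin client only renders what it receives
    conn.close()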



Q70

Explain Storage Devices


Q74

Describe a standard Fully Featured Desktop Configuration. Explain in one line the description of each
item in the configuration.


Q79

As a buyer how will you describe the required configuration of a typical desktop PC?
Justify the selection of components described in the configuration maintaining a balance between
cost and performance?


Q80

As a buyer how will you describe the required configuration of a typical desktop PC?
Justify the selection of the components described in the configuration.


Q90

How does one specify a typical configuration for a desktop computer system, keeping cost and
performance in mind, when it is required to be purchased for personal use? For use at office? Give
details.


Q96

A manufacturing company is computerizing its operations; you are required to train the
shop floor people about basics of computers. You are planning to teach the components of
a computer system initially, discuss the functionality of the various components of the
computer system.


(Ans 70)

Storage Devices

A standard fully featured desktop configuration has basically four types of devices,
namely:

1. Input Device
2. Output Device
3. Storage Devices
4. Memory

Storage Devices:

The purpose of storage in a computer is to hold data or information and get that data to the CPU
as quickly as possible when it is needed. Computers use disks for storage: hard disks that are
located inside the computer and floppy or compact disks that are used externally.

Hard Disks:

Your computer uses two types of memory: primary memory, which is stored on chips, located on
the motherboard, and secondary memory that is stored in the hard drive. Primary memory holds
all of the essential memory that tells your computer how to be a computer. Secondary memory
holds the information that you store in the computer.
Inside the hard disk drive case you will find circular disks (platters), typically made from polished aluminium or glass and coated with a magnetic material. On
the disks, there are many tracks or cylinders. Within the hard drive, an electronic reading/writing
device called the head passes back and forth over the cylinders, reading information from the
disk or writing information to it. Hard drives spin at 3600 or more rpm (Revolutions Per Minute) -
that means that in one minute, the hard drive spins around over 3600 times!

Today's hard drives can hold a great deal of information - sometimes over 1 terabyte!

Floppy Disks:

When you look at a floppy disk, you'll see a plastic case about 3 1/2 inches square. Inside
that case is a very thin piece of plastic that is coated with microscopic iron
particles. This disk is much like the tape inside a video or audio cassette. At one end of the
case is a small metal cover with a rectangular hole in it. That cover can
be moved aside to show the flexible disk inside. But never touch the inner disk - you could
damage the data that is stored on it. On one side of the floppy disk is a place for a label. On the
other side is a silver circle with two holes in it. When the disk is inserted into the disk drive, the
drive hooks into those holes to spin the circle. This causes the disk inside to spin at about 300
rpm! At the same time, the silver metal cover on the end is pushed aside so that the head in the
disk drive can read and write to the disk. Floppy disks are the smallest type of storage, holding
only 1.44MB.

Compact Disks:

Instead of electromagnetism, CDs use pits (microscopic indentations) and lands (flat surfaces) to
store information much the same way floppies and hard disks use magnetic and non-magnetic
storage. Inside the CD-ROM drive is a laser that reflects light off of the surface of the disk to an electric
eye. The pattern of reflected light (pit) and no reflected light (land) creates a code that represents
data. CDs usually store about 650MB. This is quite a bit more than the 1.44MB that a floppy disk
stores. A DVD or Digital Video Disk holds even more information than a CD, because the DVD
can store information on two levels, in smaller pits or sometimes on both sides. Today's DVDs
can store up to 8GB of data. Many come as high density DVDs. Sony has revolutionized the optical
disc market with the launch of its Blu-ray discs, which can store 25 GB (Single Layer) to 50
GB (Dual Layer).

Flash Drives:

These are the most popular portable storage devices and come in capacities ranging from 512 MB
to several hundred GB. There are many vendors for these devices.
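
A rough back-of-the-envelope comparison of the nominal capacities quoted above (Python; the
figures are the approximate ones used in these notes, not exact formatted capacities):

    MB = 1
    GB = 1024 * MB
    TB = 1024 * GB

    floppy, cd, dvd, blu_ray, hard_disk = 1.44 * MB, 650 * MB, 8 * GB, 25 * GB, 1 * TB

    for name, size in [("CD", cd), ("DVD", dvd), ("Blu-ray", blu_ray), ("1 TB hard disk", hard_disk)]:
        print("A %s holds roughly %d floppy disks" % (name, size / floppy))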


(Ans 74, 79, 80, 90, 96)

(Self Study)


Q71

Define Websites and Portals.
Give their characteristics with some examples?


(Ans 71)

Websites and Portals

A collection of related Web pages is called a Web site. Web sites are housed on Web servers,
Internet host servers that often store thousands of individual pages. Popular Web sites receive
millions of hits or page views every day. When you visit a Web page - that is, download a page
from the Web server to your computer for viewing - the act is commonly called hitting the Web
site.

Web sites are now used to distribute news, interactive educational services, product information
and catalogs, highway traffic reports, and live audio and video among other items. Interactive
Web sites permit readers to consult databases, order products and information, and submit
payment with a credit card or other account number.

Here are a few examples of Web sites:

Amazon http://www.amazon.com
Tata Motors http://www.tatamotors.com

A Web portal is a free, personalized start page, hosted by a Web content provider, which you can
personalize in several ways. Your personalized portal can provide various content and links that
simply cannot be found in typical corporate Web sites.

By design, a portal offers two advantages over a typical personal home page:

1. Rich, Dynamic Content:
Your portal can include many different types of information and graphics, including news, sports,
weather, entertainment news, financial information, multiple search engines, chat room access,
email and more.

2. Customization
You customize a portal page by selecting the types of information you want to view. Many portal
sites allow you to view information from specific sources, such as CNN, Time, and others. Some
portals even provide streaming multimedia content. You can also choose the hyperlinks that will
appear in your portal, making it easy to jump to other favorite sites. Most portal sites let you
change your custom selections whenever you want.

Here are a few Web sites that provide portal services:

Microsoft Network http://www.msn.com
Yahoo http://www.yahoo.com
Rediff http://www.rediff.com


Q72

What is Security and Privacy of Data?


(Ans 72)

Security and Privacy of Data

Data in an IT system is at risk from various sources: user errors, and malicious and non-
malicious attacks. Accidents can occur and attackers can gain access to the system and disrupt
services, render systems useless, or alter, delete, or steal information.


An IT system may need protection for one or more of the following aspects of data:

Confidentiality
The system contains information that requires protection from unauthorized disclosure.
Examples: Timed dissemination information (for example, crop report information),
personal information, and proprietary business information.

Integrity
The system contains information that must be protected from unauthorized,
unanticipated, or unintentional modification. Examples: Census information, economic
indicators, or financial transactions systems.

Availability
The system contains information or provides services that must be available on a timely
basis to meet mission requirements or to avoid substantial losses. Examples: Systems
critical to safety, life support, and hurricane forecasting.

Security administrators need to decide how much time, money, and effort needs to be spent in
order to develop the appropriate security policies and controls. Each organization should analyze
its specific needs and determine its resource and scheduling requirements and constraints.
Computer systems, environments, and organizational policies are different, making each
computer security services and strategy unique. However, the principles of good security remain
the same, and this document focuses on those principles.

Although a security strategy can save the organization valuable time and provide important
reminders of what needs to be done, security is not a one-time activity. It is an integral part of the
system lifecycle.
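
As a small illustration of the integrity aspect above, a cryptographic hash can be kept or sent
alongside the data so that any unauthorized or accidental modification is detected (Python
standard library; the record contents below are a made-up example):

    import hashlib

    record = b"Invoice 1042: amount = 15,000"                  # hypothetical data item
    digest = hashlib.sha256(record).hexdigest()                # stored or transmitted with the data

    tampered = b"Invoice 1042: amount = 95,000"
    print(digest == hashlib.sha256(record).hexdigest())        # True  - data unchanged
    print(digest == hashlib.sha256(tampered).hexdigest())      # False - modification detected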


Q73

Explain the contents of a User Manual.


(Ans 73)

Contents of a User Manual

A user guide, also commonly known as a User manual, is a technical communication document
intended to give assistance to people using a particular system.
It is usually written by a technical writer, although user manuals could be written by programmers,
product or project managers, or other technical staff, particularly in smaller companies.

User guides are most commonly associated with electronic goods, computer hardware and
software.

Most user manuals contain both a written guide and the associated images. In the case of
computer applications it is usual to include screenshots of how the program should look, and
hardware manuals often include clear, simplified diagrams. The language is written to match up
with the intended audience with jargon kept to a minimum or explained thoroughly.

The usual sections of a User Manual often include:

a) A Cover page
b) A title page and copyright page
c) A preface, containing details of related documents and information on how to best use
the User Manual.
d) A table of contents page
e) A guide on how to use at least the main functions of the system
f) A troubleshooting section detailing possible errors or problems that may occur along with
how to fix them.
g) A FAQ (Frequently Asked Questions) section
h) A section on where to find further help and relevant contact details
i) A glossary section

While writing a user manual, follow the methodology given below:

a) Supply the manual in web / print formats
b) Explain the problem being solved by the system
c) Clearly present the concepts and not just the features
d) Make it enjoyable to read
e) Use illustrations for better understanding


Q76 (b)

Distinguish between:
b) An Ordinary Desktop and a Professional Grade Server


(Ans 76 b)

Ordinary Desktop versus Professional Grade Server

It is very important to understand the difference between an ordinary desktop computer which sits
on our desk and a professional grade server which businesses use for their day to day running.

A server is a highly robust and powerful computer system that centrally manages resources that
are used by multiple users in a network. It's much more powerful than a desktop PC, and it
serves a very different purpose. Servers are usually dedicated, meaning that they perform no
other tasks besides their server tasks. Servers can be used to store files, manage print queues,
host the entire company's email, as well as a bunch of other specialized tasks depending on your
needs.

Using a server is advantageous because one person's computer doesn't need to be bogged
down by people repeatedly accessing their hard drive. Also, storing the most important data on
one server makes regular data backup that much easier.

Ordinary Desktop versus Professional Grade Server

Although you may be tempted to use a regular desktop PC as a server, this is never a good idea.
While the two may look the same, they're quite different in their architecture.

The parameters on which they can be distinguished are as follows:

Speed:
A server uses a faster processor - sometimes even more than one -- than your average desktop
computer. These days, most servers are equipped with Quad-Core Intel Xeon
processors.

Even if you start off with just one processor, keep your company's growth in mind. If you expect
any increase in staff, make sure the server has an additional slot to pop another processor in
when the need arises. Currently, servers can hold anywhere from 2-8 processors.

Also, you want to get the most cache for your cash. The less cache your processor has, the
harder it has to work to maintain optimum speed. Having at least 512 KB of cache will ensure that you're
getting the best performance you can.

Memory:
Servers have mega-amounts of RAM, and then room for some more. Starting off with 2GB is
perfectly reasonable. But make sure you have room to grow -- you may want space for as much
as a few gigabytes of RAM ultimately.

Hard drive:
Unlike a regular computer, a server's hard drive will be accessed constantly, so it's got to be top
of the line. And your disk space should be planned well based on what applications you'll be
running. For the best, look for no less than 300 GB and a 10,000 rpm (revolutions per minute)
access speed. Many firms are purchasing servers with as much as 1 terabyte of hard drive
space. Hard drives should also be hot-swappable, meaning they can be removed while the
computer is still running so there's no downtime.

Power and cooling:
Normal desktop power supply isn't designed to handle the multiple hard drives typically found in a
server. Plus, more hard drives means more activity within the server. And that can get hot.

Servers also typically have a more powerful cooling system. Unlike most desktops, servers also
have software that monitors how cool the machine is kept. The servers have multiple fans to keep
the system cool. Some advanced server machines also make use of liquid coolants (for example
the Alienware range of servers uses liquid coolants to keep the machines cool).

Redundancy:
Servers provide the best insurance when they're redundant -- not only with power but with
storage. While connecting a backup drive is essential to providing you with an end-of-day data
backup, it's best to set your computers up in what's known as a RAID array (Redundant Array of
Independent Disks) to afford you real-time protection throughout the day.

The most standard of these is a RAID 5 system, which means the data is spanned over all the
disks in the array. The advantage of this system is that if one drive fails, you lose no data
(although the disk should be replaced immediately, because failure of another drive would cause
loss of data).
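
The idea behind RAID 5 parity can be sketched in a few lines (Python; real controllers work on
whole blocks and rotate the parity across the drives, this only shows the principle):

    # Parity is the XOR of the data blocks held on the other disks
    disk1 = bytes([0b10101010])
    disk2 = bytes([0b11110000])
    disk3 = bytes([0b00111100])
    parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

    # If disk2 fails, its contents can be rebuilt from the survivors plus parity
    rebuilt = bytes(a ^ c ^ p for a, c, p in zip(disk1, disk3, parity))
    print(rebuilt == disk2)    # True - no data lost despite the failed drive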

Growth considerations:
Determining how many and what size servers you need depends completely on the task(s) that it
will be asked to perform, and how complex those tasks are. Factors such as the amount of RAM,
disk space, number of users, and number of processors the tasks will require will affect the size
server you need.

While it can be fine for one server to handle multiple tasks for a small office, you may need to split
these tasks among multiple servers as the load increases. If you're expecting to add more
employees, computers or Net traffic over the next two or three years, choose a scalable machine
that can easily handle additional hard disks, memory or processors.

Servers aren't cheap. Dual-processor capable servers are priced, on average, between $7000 -
$12000.

In short, the following points clearly distinguish between an Ordinary Desktop and a Professional
Grade Server:

An Ordinary Desktop:
1) Has one processor
2) Normal memory is in MB
3) Slots available for connecting devices are less
4) Used for low performance applications
5) E.g. a small mail server

A Professional Grade Server:
1) Has more than one processor (typically 2 to 8)
2) Normal memory is in GB
3) Slots available for connecting devices are more
4) Used for high performance applications
5) E.g. Database Server, Networking Server, Proxy Server etc.



Q77

What do you understand by Distributed Computing?


(Ans 77)

Distributed Computing

Distributed computing deals with hardware and software systems containing more than one
processing element or storage element, concurrent processes, or multiple programs, running
under a loosely or tightly controlled regime.

In distributed computing a program is split up into parts that run simultaneously on multiple
computers communicating over a network. Distributed computing is a form of parallel computing,
but parallel computing is most commonly used to describe program parts running simultaneously
on multiple processors in the same computer. Both types of processing require dividing a
program into parts that can run simultaneously, but distributed programs often must deal with
heterogeneous environments, network links of varying latencies, and unpredictable failures in the
network or the computers.

Distributed computing is a type of computing in which different components and objects
comprising an application can be located on different computers connected to a network. So, for
example, a word processing application might consist of an editor component on one computer, a
spell-checker object on a second computer, and a thesaurus on a third computer. In some
distributed systems, each of the three computers could even be running a different operating
system.

One of the important requirements of distributed computing is a set of standards that specify how
objects communicate with one another. There are currently two chief distributed computing
standards namely CORBA and DCOM.

CORBA stands for Common Object Request Broker Architecture, an architecture that enables
pieces of programs, called objects, to communicate with one another regardless of what
programming language they were written in or what operating system they are running on.
CORBA was developed by an industry consortium known as the Object Management Group
(OMG).

DCOM stands for Distributed Component Object Model, which is an extension of the Component
Object Model (COM) that allows COM components to communicate across network boundaries.
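
CORBA and DCOM themselves are heavyweight, but the core idea - components on different machines
invoking one another through an agreed standard - can be sketched with XML-RPC from the Python
standard library (chosen here only as a simple stand-in; the host name and the spell-check
function are hypothetical):

    from xmlrpc.server import SimpleXMLRPCServer

    def spell_check(word):
        # a hypothetical "spell-checker object" living on this machine
        return word.lower() in {"computer", "network", "system"}

    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(spell_check)
    server.serve_forever()                        # run this on the "spell-checker" machine

    # On another machine, the word-processor component would call it remotely:
    #   from xmlrpc.client import ServerProxy
    #   remote = ServerProxy("http://spellcheck-host:8000")
    #   print(remote.spell_check("computer"))     # True, computed on the other machine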


Q78

Write a note on the following:
a) Multitasking
b) Internet Security
c) Impact Printer
d) Utility Software
e) Laser Printer v/s Dot Matrix Printer
f) Browser
g) CD ROM Drive
h) Mouse
i) HTML


(Ans 78 a)

Multitasking

In computing, multitasking is a method by which multiple tasks, also known as processes, share
common processing resources such as a CPU. In the case of a computer with a single CPU, only
one task is said to be running at any point in time, meaning that the CPU is actively executing
instructions for that task. Multitasking solves the problem by scheduling which task may be the
one running at any given time, and when another waiting task gets a turn. The act of reassigning
a CPU from one task to another one is called a context switch. When context switches occur
frequently enough the illusion of parallelism is achieved. Even on computers with more than one
CPU (called multiprocessor machines), multitasking allows many more tasks to be run than there
are CPUs.

Operating systems may adopt one of many different scheduling strategies, which generally fall
into the following categories:

In multiprogramming systems, the running task keeps running until it performs an
operation that requires waiting for an external event (e.g. reading from a tape) or until the
computer's scheduler forcibly swaps the running task out of the CPU. Multiprogramming
systems are designed to maximize CPU usage.
In time-sharing systems, the running task is required to relinquish the CPU, either
voluntarily or by an external event such as a hardware interrupt. Time sharing systems
are designed to allow several programs to execute apparently simultaneously.
In real-time systems, some waiting tasks are guaranteed to be given the CPU when an
external event occurs. Real time systems are designed to control mechanical devices
such as industrial robots, which require timely processing.

The term time-sharing is no longer commonly used, having been replaced by simply multitasking.
Multitasking also improved the computer's ability to perform faster and more efficiently by
enabling multithreading, i.e. the ability to run many threads at a time (e.g. one thread gathering
input data, one thread processing the input data, and one thread writing out results to disk).
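
A minimal sketch of multithreading within one program (Python; the task names are illustrative):
the two threads take turns on the CPU, which is exactly the context switching described above.

    import threading, time

    def task(name, steps):
        for i in range(steps):
            print(name, "step", i)
            time.sleep(0.1)     # while this thread waits, the scheduler runs the other one

    t1 = threading.Thread(target=task, args=("gathering input", 3))
    t2 = threading.Thread(target=task, args=("processing data", 3))
    t1.start(); t2.start()      # both tasks are now runnable
    t1.join(); t2.join()        # wait until both threads have finished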
(Ans 78 b)

Internet Security

Many small and large business organizations and even individuals fail to protect their computers
and their computer networks from spyware, viruses, worms, hacker attacks, customer data thefts
and other security threats.

Many hackers now have software tools that constantly search the internet for unprotected
networks and computers. Once discovered, unprotected computers can be accessed and
controlled by a hacker, who can use them to launch attacks on other computers or networks.

Spyware authors are busy creating malicious programs that resist removal, perpetually mutate,
and spread across the internet in minutes. Meanwhile, blended threats which assume multiple
forms and can attack systems in many different ways, are on the rise. Small businesses without
adequate, updated security solutions can easily be victimized by these and other threats.

Security breaches don't always come from outside the company but from within, either
intentionally or unintentionally. For example, an employee may unknowingly download spyware
while playing an online game or visiting a website. Small business systems are more vulnerable
to employee tampering simply because they often lack the internal security precautions of a
larger enterprise.

Even wireless networks need extra protection as data is transmitted over radio waves, which can
be easily intercepted. This means a wireless network is inherently less secure than a wired one.

Data has to be encrypted when sent via networks across the internet. Many companies invest
reasonably to ensure their data is always protected when sent across their intranets or the
internet. To ensure Internet Security many companies and individuals are now using firewalls and
anti-virus softwares to keep their systems safe. Organizations have to prepare effective Security
Policies that govern how their employees will be using the internet and intranet in a manner best
suited to the organization's interests and online safety.

Ultimately, when your business is secure, it's stronger and more agile, and definitely more
competitive.



(Ans 78 c, 78 e)

Impact Printer

Impact printer refers to a class of printers that work by banging a head or needle against an ink
ribbon to make a mark on the paper. This includes dot-matrix printers, daisy-wheel printers
and line printers. In contrast, laser and ink-jet printers are non-impact printers. The
distinction is important because impact printers tend to be considerably noisier than non-impact
printers but are useful for multipart forms such as invoices.

Impact printers are the oldest printing technologies still in active production.

Dot-Matrix Printers:

The technology behind dot-matrix printing is quite simple. The paper is pressed against a drum (a
rubber-coated cylinder) and is intermittently pulled forward as printing progresses. The
electromagnetically-driven printhead moves across the paper and strikes the printer ribbon
situated between the paper and printhead pin. The impact of the printhead against the printer
ribbon imprints ink dots on the paper which form human-readable characters.

Dot-matrix printers vary in print resolution and overall quality with either 9 or 24-pin printheads.
The more pins per inch, the higher the print resolution. Most dot-matrix printers have a maximum
resolution of around 240 dpi (dots per inch). While this resolution is not as high as those possible
in laser or inkjet printers, there is one distinct advantage to dot-matrix (or any form of impact)
printing. Because the printhead must strike the surface of the paper with enough force to transfer
ink from a ribbon onto the page, it is ideal for environments that must produce carbon copies
through the use of special multi-part documents. These documents have carbon (or other
pressure sensitive material) on the underside and create a mark on the sheet underneath when
pressure is applied. Retailers and small businesses often use carbon copies as receipts or bills of
sale.

Daisy-wheel Printers:

If you have ever worked with a manual typewriter before, then you understand the technological
concept behind the daisy-wheel printers. These printers have printheads composed of metallic or
plastic wheels cut into petals. Each petal has the form of a letter (in capital and lower-case),
number, or punctuation mark on it. When the petal is struck against the printer ribbon, the
resulting shape forces ink onto the paper. Daisy-wheel printers are loud and slow. They cannot
print graphics, and cannot change fonts unless the print wheel is physically replaced. With the
advent of laser printers, daisy wheel printers are generally not used in modern computing
environments.

Line Printers:

Another type of impact printer somewhat similar to the daisy-wheel is the line printer. However,
instead of a print wheel, line printers have a mechanism that allows multiple characters to be
simultaneously printed on the same line. The mechanism may use a large spinning print drum or
a looped print chain. As the drum or chain is rotated over the paper's surface, electromechanical
hammers behind the paper push the paper (along with a ribbon) onto the surface of the drum or
chain, marking the paper with the shape of the character on the drum or chain.

Because of the nature of the print mechanism, the line printers are much faster than dot-matrix or
daisy-wheel printers. However, they tend to be quite loud, have limited multi-font capability, and
often produce lower print quality than more recent printing technologies.

Because line printers are used for their speed, they use special tractor-fed paper with pre-
punched holes along each side. This arrangement makes continuous unattended high-speed
printing possible, which stops only when a box of paper runs out.

Impact printers have relatively low consumable costs. Ink ribbons and paper are the primary
recurring costs for impact printers. Some Impact printers (usually the line and dot-matrix printers)
require tractor-fed paper, which can increase the costs of operation somewhat.


(Ans 78 e)

Laser Printer

Laser printers are a popular alternative to legacy impact printing. Laser printers are known for
their high volume output and low cost-per-page. Laser printers are often deployed in enterprises
as a workgroup or departmental print center, where performance, durability and output
requirements are a priority. Because laser printers service these needs so readily (and at a
reasonable cost-per-page), the technology is widely regarded as the workhorse of enterprise
printing.

Laser printers share much of the same technologies as photocopiers. Rollers pull a sheet of
paper from a paper tray and through a charge roller, which gives the paper an electrostatic
charge. At the same time, a printing drum is given the opposite charge. The surface of the drum
is then scanned by a laser, discharging the drum's surface and leaving only those points
corresponding to the desired text and image with a charge. This charge is then used to force toner
to adhere to the drum's surface.

The paper and drum are then brought into contact; their differing charges cause the toner to then
adhere to the paper. Finally, the paper travels between fusing rollers, which heat the paper and
melt the toner, fusing it onto the paper's surface.


(Ans 78 d )

Utility Software

Utility software (also known as service program, service routine, tool, or utility routine) is a type of
computer software. It is specifically designed to help manage and tune the computer hardware,
operating system or application software, and perform a single task or a small range of tasks; as
opposed to application software which tend to be software suites. Utility software has long been
integrated into most major operating systems.

Examples of utility software are:

a) Disk Defragmenters, Disk Checkers and Disk Cleaners:

A Disk defragmenter can detect computer files whose contents have been stored on the
hard disk in disjointed fragments, and move the fragments together to increase efficiency;
a Disk checker can scan the contents of a hard disk to find files or areas that are
corrupted in some way, or were not correctly saved, and eliminate them for a more
efficiently operating hard drive; a Disk cleaner can find files that are unnecessary to the
computer operation, or take up considerable amounts of space. Disk cleaner helps the
user to decide what to delete when his hard disk is full.

b) System Profilers:

A System Profiler can provide detailed information about the software installed and
hardware attached to the computer. Backup software can make a copy of all information
stored on a computer, and restore either the entire system (e.g. in an event of disk
failure) or selected files (e.g. in an event of accidental deletion). Disk compression
software can transparently compress the contents of the hard disk, in order to fit more
information to the drive.

c) Virus Scanners:

Virus Scanners scan for computer viruses among files and folders.

d) Archive Utilities:

Archive utilities like Zip, RAR, etc. output a stream or a single file when provided with a
directory or a set of files. Archive utilities help compress files to save disk space. They
also come with a password option for file security reasons (see the short sketch after this list).

e) Encryption:

Encryption utilities use a specific algorithm to produce an encrypted stream or encrypted
file when provided with a key and a plaintext.
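
A minimal sketch of what an archive utility does, using Python's standard zipfile module
("report.txt" and "data.csv" are hypothetical file names assumed to exist in the current directory):

    import zipfile

    with zipfile.ZipFile("backup.zip", "w", compression=zipfile.ZIP_DEFLATED) as archive:
        archive.write("report.txt")      # each file is compressed to save disk space
        archive.write("data.csv")

    with zipfile.ZipFile("backup.zip") as archive:
        print(archive.namelist())        # the single output file holds both inputs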


(Ans 78 f )

Browser

A Web Browser is a software application that enables a user to display and interact with text,
images, and other information typically located on a web page at a website on the World Wide
Web or a local area network. Text and images on a web page can contain hyperlinks to other web
pages at the same or different websites. Web browsers allow a user to quickly and easily access
information provided on many web pages at many websites by traversing these links.

Web browsers available for personal computers include Microsoft Internet Explorer, Mozilla
Firefox, Apple Safari, Netscape, and Opera. Web browsers are the most commonly used type of
user agents. Although browsers are typically used to access the World Wide Web, they can also
be used to access information provided by Web Servers in private networks or content in file
systems.

Web browsers communicate with web servers primarily using the HTTP (hypertext transfer
protocol) to fetch web pages. HTTP allows web browsers to submit information to web servers as
well as fetch web pages from them.

Features of today's browsers:

Different browsers can be distinguished from each other by the features they support. Modern
browsers and web pages tend to utilize many features and techniques that did not exist in the
early days of the web. As noted earlier, with the browser wars, there was a rapid and chaotic
expansion of browser and World Wide Web feature sets.

The following is a list of some of the most notable features:

a) Standards support:

- HTTP and HTTPS
- HTML, XML and XHTML
- Graphics file formats including GIF, PNG, JPG and SVG
- Cascading Style Sheets (CSS)
- JavaScript and DHTML (Dynamic HTML)
- Cookies
- Digital Certificates
- Favicons

b) Fundamental features:

Today's browsers also offer these fundamental features, namely:
- Bookmark manager
- Caching of web contents
- Support of media types via plug-ins such as Adobe Flash and QuickTime

c) Usability and Accessibility features:

Today's browsers also offer these usability features, namely:
- Auto completion of URLs and form data
- Tabbed browsing
- Spatial navigation
- Screen reader or full speech support

d) Annoyance removers:

Today's browsers also offer these annoyance-removal and safety features, namely:
- Pop-up advertisement blocker
- Advert filtering
- Phishing defenses

Finally we can conclude that browsers have positively helped us change the way we interact with
the Internet.



(Ans 78 g )

CD ROM Drive
It is a type of optical disk capable of storing large amounts of data; the most common size is
650MB (megabytes). A single CD-ROM has the storage capacity of over 450 floppy disks, enough
memory to store about 300,000 text pages.
To read a CD, you need a CD-ROM player. All CD-ROMs conform to a standard size and format,
so you can load any type of CD-ROM into any CD-ROM player. In addition, CD-ROM players are
capable of playing audio CDs, which share the same technology.
CD-ROMs are particularly well-suited to information that requires large storage capacity. This
includes large software applications that support color, graphics, sound, and especially video.
In a few short years, the Compact Disk - Read Only Memory (CD-ROM) drive has gone from
pricey luxury to inexpensive necessity on the modern PC. The CD-ROM has opened up new
computing avenues that were never possible before, due to its high capacity and broad
applicability. In many ways, the CD-ROM has replaced the floppy disk drive, but in many ways it
has allowed us to use our computers in ways that we never used them before. In fact, the
"multimedia revolution" was largely a result of the availability of cheap CD-ROM drives.
Special formatting is used to allow these disks to hold data. As CD-ROMs have come down in
price they have become almost as common in a new PC as the hard disk or floppy disk, and they
are now the method of choice for the distribution of software and data due to their combination of
high capacity and cheap and easy manufacturing. Recent advances in technology have also
improved their performance to levels approaching those of hard disks in many respects.
CD-ROM drives play a significant role in the following essential aspects of your computer system:
Software Support: The number one reason why a PC today basically must have a CD-ROM
drive is the large number of software titles that are only available on CD-ROM. At one time there
were a few titles that came on CD-ROM, and they generally came on floppy disks as well. Today,
not having a CD-ROM means losing out on a large segment of the PC software market. Also,
some CD-ROMs require a drive that meets certain minimum performance requirements.
Performance: Since so much software uses the CD-ROM drive today, the performance level of
the drive is important. It usually isn't as important as the performance of the hard drive or system
components such as the processor or system memory, but it is still important, depending on what
you use the drive for. Obviously, the more you use the CD-ROM, the more essential it is that it
performs well.



(Ans 78 h )

Mouse

Mouse is a device that controls the movement of the cursor or pointer on a display screen. A
mouse is a small object you can roll along a hard, flat surface. Its name is derived from its shape,
which looks a bit like a mouse, its connecting wire that one can imagine to be the mouse's tail,
and the fact that one must make it scurry along a surface. As you move the mouse, the pointer on
the display screen moves in the same direction. Mice contain at least one button and sometimes
as many as three, which have different functions depending on what program is running. Some
newer mice also include a scroll wheel for scrolling through long documents.
Invented by Douglas Engelbart of the Stanford Research Institute in 1963, and pioneered by Xerox in
the 1970s, the mouse is one of the great breakthroughs in computer ergonomics because it frees
the user to a large extent from using the keyboard. In particular, the mouse is important for
graphical user interfaces because you can simply point to options and objects and click a mouse
button. Such applications are often called point-and-click programs. The mouse is also useful for
graphics programs that allow you to draw pictures by using the mouse like a pen, pencil, or
paintbrush.
There are three basic types of mice:
1. Mechanical: Has a rubber or metal ball on its underside that can roll in all directions.
Mechanical sensors within the mouse detect the direction the ball is rolling and move the
screen pointer accordingly.
2. Optomechanical: Same as a mechanical mouse, but uses optical sensors to detect
motion of the ball.
3. Optical: Uses an LED or laser to detect the mouse's movement. Early optical mice had to
be moved along a special mat with a grid so that the optical mechanism had a frame of
reference; modern optical mice work on most flat surfaces. Optical mice have no mechanical
moving parts. They respond more quickly and precisely than mechanical and optomechanical
mice, but were historically more expensive.
Mice connect to PCs in one of several ways:
1. Serial mice connect directly to an RS-232C serial port on your computer. This is the
simplest type of connection.
2. PS/2 mice connect to a PS/2 port on your computer.
3. USB mice connect to a USB port on your computer.
Cordless mice aren't physically connected at all. Instead they rely on infrared or radio waves to
communicate with the computer. Cordless mice are more expensive than both serial and bus
mice, but they do eliminate the cord, which can sometimes get in the way.

(Ans 78 i )

HTML (Hypertext Markup Language)

Short for HyperText Markup Language, the authoring language used to create documents on the
World Wide Web. HTML is similar to SGML (Standard Generalized Markup Language), although
it is not a strict subset.

HTML defines the structure and layout of a Web document by using a variety of tags and
attributes. A basic HTML document starts with <HTML>, contains a <HEAD> section (which
describes what the document is about) followed by a <BODY> section, and ends with
</BODY></HTML>. All the information you'd like to include in your Web page fits in between the
<BODY> and </BODY> tags.

There are hundreds of other tags used to format and layout the information in a Web page. Tags
are also used to specify hypertext links. These allow Web developers to direct users to other Web
pages with only a click of the mouse on either an image or word(s).

The latest HTML version is HTML 5.


Q81

What do you understand by Programming?
Describe the features of Programming Language.
Name any five programming languages?


(Ans 81)

Programming Language and its features

Definition: A programming language is an artificial language that can be used to control the
behavior of a machine, particularly a computer. Programming languages, like human languages,
are defined through the use of syntactic and semantic rules, to determine structure and meaning
respectively.

Programming languages are used to facilitate communication about the task of organizing and
manipulating information, and to express algorithms precisely.

Purpose of a Programming Language:

A prominent purpose of a programming language is to provide instructions to a computer. As such,
programming languages differ from most other forms of human expression in that they require a
greater degree of precision and completeness. When using a natural language to communicate
with other people, human authors and speakers can be ambiguous and make small errors, and
still expect their intent to be understood. Computers, however, do exactly what they are told to do,
and cannot infer what the programmer intended to write.

Features of a Programming Language:

a) Data Types:

Different pieces of information in a program may have different types. For example, a
language may treat a string of characters and number in different ways. Most
programming languages have many data types.

b) Object Oriented Programming:

Many languages support object oriented programming. In OOP data and functions are
grouped together in objects (encapsulation). Objects can be re-used across applications
and have the property of inheritance.

c) Compilation:

A computer's CPU does not process a high-level language directly. The program must be
translated to machine code. Some languages must be compiled; others are interpreted.

d) Threading:

In a multi-threaded program the same code is run more than once at the same time. If
there is only a single processor, then in practice the different threads take turns to
execute. Having several threads running at once can be useful. For example, a graphical
display can remain responsive (using one thread) while the program is doing some
calculations (using another thread).
MET Part Time MBA Answers to Questions in the Question Bank (Class Notes 2010)

-- Author: Prof. Max William DCosta (max.dcosta77@gmail.com)

Compiled on: July 15, 2010 by Prof. Max DCosta @ MET SOM Page 113 of 138

e) Error handling:

When an error occurs in a program, how is it handled? Many languages implement
various forms of exception handling, where an error forces functions to return until a special
section of code is reached that deals with the error.

f) Development Environment:

Many programming languages may have their own development environment, for
example, some have a command-line based interface while others have a graphical
interface.


Some Examples of Programming Languages:

Thousands of different programming languages have been created and new ones are created
every year. Following are some of the most popular programming languages.

a) C Programming Language

Developed in the early 1970s at AT&T Bell Labs. Widely used to develop commercial applications. The Unix O/S
is written using the C programming language.

b) C++ Programming Language

Object-oriented version of C that is popular because it combines object-oriented
capability with traditional C programming syntax.

c) C# Programming Language

A Microsoft .NET language based on C++ with elements from Visual Basic and Java.

d) Java Programming Language

A programming language developed by Sun Microsystems and repositioned for web use. It is widely
used on the server side, although client-side applications are also common.

e) JavaScript

A scripting language widely used on the web. JavaScript is embedded into many HTML
pages for web page interactivity.
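
As a minimal illustration of two of the features listed earlier (data types and error handling), here is
a short sketch in Python, one of the many languages in use today; the function name and the values
used are purely illustrative:

    # A small Python sketch illustrating data types and error handling.

    def average(values):
        # 'values' is expected to be a list of numbers (a particular data type).
        if not values:
            raise ValueError("cannot average an empty list")
        return sum(values) / len(values)

    try:
        print(average([10, 20, 30]))   # prints 20.0
        print(average([]))             # raises an exception
    except ValueError as error:
        # Error handling: the exception is caught here instead of crashing the program.
        print("Error:", error)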


Q82

What is Internet? With respect to Internet, explain the following terms:
a) Homepage
b) URL
c) Email
d) Search Engine
e) IP Address


(Ans 82)

The internet is the publicly accessible worldwide system of interconnected computer networks
that transmit data by packet switching using a standardized Internet Protocol like TCP/IP (i.e.
Transmission Control Protocol / Internet Protocol) and many other protocols. It is made up of
thousands of smaller commercial, academic, domestic and government networks. It carries
various information and services, such as electronic mail, online chat and the interlinked web
pages and other documents of the World Wide Web.

Unlike online services, which are centrally controlled, the Internet is decentralized by design.
Each Internet Computer, called a host, is independent. Its operators can choose which Internet
services to use and which local services to make available to the global internet community. It is
possible to gain access to the Internet through a commercial ISP (Internet Service Provider).

a) Homepage:

It is the main page of a Web site. Typically, the home page serves as an index or table of
contents to other documents stored at the site. The homepage (often written as home page) or
main page is the URL or local file that automatically loads when a web browser starts and when
the browser's "home" button is pressed. The term is also used to refer to the front page, web
server directory index, or main web page of a website of a group, company, organization, or
individual.

b) URL (Uniform Resource Locator):

Uniform Resource Locator (URL) is a compact string of characters used to represent a resource
available on the Internet. In popular usage and many technical documents, it is a synonym for
Uniform Resource Identifier (URI).

It is the global address of documents and other resources on the World Wide Web.

The first part of the address (i.e. http:// or ftp://) is called a protocol identifier and it indicates what
protocol to use and the second part is called a resource name and it specifies the IP address or
the domain name where the resource is located (e.g. Microsoft.com). The protocol identifier and
the resource name are separated by a colon and two forward slashes.

Example: http://microsoft.com
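
As a small, purely illustrative sketch, the standard urllib.parse module in Python can be used to split
a URL into the parts described above (the example URL and path are assumed for illustration):

    # Splitting a URL into its protocol identifier and resource name (illustrative).
    from urllib.parse import urlparse

    parts = urlparse("http://microsoft.com/index.html")
    print(parts.scheme)   # 'http'           -> the protocol identifier
    print(parts.netloc)   # 'microsoft.com'  -> the domain name part of the resource name
    print(parts.path)     # '/index.html'    -> the document being requested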

c) Email:

Electronic mail, often abbreviated to e-mail, email, or simply mail, is a store-and-forward method
of writing, sending, receiving and saving messages over electronic communication systems. The
term "e-mail" applies to the Internet e-mail system based on the Simple Mail Transfer Protocol, to
network systems based on other protocols and to various mainframe, minicomputer, or intranet
systems allowing users within one organization to send messages to each other in support of
workgroup collaboration. Intranet systems may be based on proprietary protocols supported by a
particular systems vendor, or on the same protocols used on public networks. E-mail is often
used to deliver bulk unsolicited messages, or "spam", but filter programs exist which can
automatically block, quarantine or delete some or most of these, depending on the situation.

Popular web-based email services available today include Hotmail (www.hotmail.com), Yahoo
Mail (http://www.mail.yahoo.com) and Google Mail (http://www.gmail.com). The most popular
desktop email client applications are Microsoft Outlook, Microsoft Outlook Express and Lotus
Notes.
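
For illustration only, the sketch below shows how a program might hand a message to a mail server
using SMTP via Python's standard smtplib module; the server name and e-mail addresses are
hypothetical placeholders, not real systems:

    # Illustrative sketch: handing a message to an SMTP server (placeholders throughout).
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("This message is handed to an SMTP server for store-and-forward delivery.")

    with smtplib.SMTP("smtp.example.com", 25) as server:   # hypothetical mail server
        server.send_message(msg)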


d) Search Engine:

(same as Answer 47)


e) IP Address:

An IP Address (Internet Protocol Address) is a unique number that devices use in order to
identify and communicate with each other on a computer network utilizing the Internet Protocol
Standard (IP). Any participating network device, including routers, computers, servers, printers
and even some telephones, can have its own unique address.

An IP address can also be thought of as the equivalent of a street address or a phone number for
a computer or other network device on the internet. Just as each street address and phone
number uniquely identifies a building or telephone, an IP address can uniquely identify a specific
computer or other network device on a network.

Although IP addresses are stored as binary numbers, they are often displayed in more human-
readable notations, such as 192.168.100.1 (for IPv4), and 2001:db8:0:1234:0:567:1:1 (for IPv6).
The role of the IP address has been characterized as follows: "A name indicates what we seek.
An address indicates where it is. A route indicates how to get there."

Types of IP Addressing:

There are two types of IP addressing namely: Static IP Addressing and Dynamic IP
Addressing. When a computer is manually configured to use the same IP address each time it
powers up, this is known as a Static IP address. In contrast, in situations when the computer's IP
address is assigned automatically, it is known as a Dynamic IP address.

Following are the IP versions:

IP Version 4

IPv4 uses 32-bit (4 byte) addresses, which limits the address space to 4,294,967,296 (2^32)
possible unique addresses. However, many are reserved for special purposes, such as private
networks (approx 18 million addresses). This reduces the number of addresses that can be
allocated as public internet addresses, and an IPv4 address shortage appears to be inevitable in
the long run.

An illustration of an IP address (version 4), in both dot-decimal notation and binary.

IP Version 5

What would have been IPv5 existed only as an experimental real-time streaming protocol called
ST2 (i.e. Internet Stream Protocol version 2). The version number was therefore never used for a
general-purpose successor to IPv4.


IP Version 6

However, due to the enormous growth of the Internet and the resulting depletion of the IPv4
address space, a new addressing system (IPv6), using 128 bits for the address, had to be
developed. IPv6 is now being deployed across the world; in many places it coexists with the old
standard and is transmitted over the same hardware and network links.

Here the addresses are 128 bits wide, which would suffice for the foreseeable future. In theory
there would be exactly 2^128, or about 3.403 × 10^38, unique addresses. The exact number of
possible addresses would be 340,282,366,920,938,463,463,374,607,431,768,211,456.


An illustration of an IP address (version 6), in
hexadecimal and binary












The global IP address space is managed by the Internet Assigned Numbers Authority (IANA).
IANA works in cooperation with five Regional Internet Registries (RIRs) to allocate IP address
blocks to Local Internet Registries (Internet service providers) and other entities.

This magnitude of IP addresses available will be necessary in the future as mobile phones, cars
and all types of personal devices come to rely on the internet for everyday purposes.
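
As a small illustrative sketch, Python's standard ipaddress module can be used to inspect the IPv4
and IPv6 addresses mentioned above (the specific addresses are the examples used earlier):

    # Inspecting IPv4 and IPv6 addresses with the standard ipaddress module (illustrative).
    import ipaddress

    v4 = ipaddress.ip_address("192.168.100.1")
    v6 = ipaddress.ip_address("2001:db8:0:1234:0:567:1:1")

    print(v4.version, v4.is_private)   # 4 True  (192.168.x.x is a reserved private range)
    print(v6.version)                  # 6
    print(ipaddress.ip_network("192.168.0.0/16").num_addresses)   # 65536 addresses in this block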


Q83

Explain in terms of storage capacity and performance of the following types of memory:
a) CPU Register
b) Cache
c) RAM
d) Hard Disk


(Ans 83)

Present-day computers actually use a
variety of storage technologies. Each
technology is geared toward a specific
function, with speeds and capacities to
match.

These technologies are:

CPU Registers
Cache memory
RAM
Hard Drives
Off-line backup storage (tape, optical disk, etc.)

In terms of capabilities and cost, these technologies form a spectrum. For example, CPU
registers are:

Very fast (access times of a few nanoseconds)
Low capacity (usually less than 200 bytes)
Very limited expansion capabilities (a change in CPU architecture would be required)
Expensive (more than one dollar / byte)

However, at the other end of the spectrum, off-line backup storage is:

Very slow (access times far slower than those of CPU registers)
Very high capacity (tens to hundreds of gigabytes)
Essentially unlimited expansion capabilities
Very inexpensive

By using different technologies with different capabilities, it is possible to fine-tune a system
design for maximum performance at the lowest possible cost. The following sections explore
each technology in the storage spectrum.

A) CPU Registers:

Every present day CPU design includes registers for a variety of purposes, from storing
the address of the currently-executed instruction to more general-purpose data storage
and manipulation. CPU registers run at the same speed as the rest of the CPU; otherwise
they would be a serious bottleneck to overall system performance. All operations
performed by the CPU involve the registers in one way or another.

The number of CPU registers (and their uses) is strictly dependent on the architectural
design of the CPU itself. There is no way to change the number of CPU registers. For
these reasons, the number of CPU registers can be considered a constant, as they are
changeable only with great pain and expense.

B) Cache Memory:

Cache is a special high-speed storage mechanism. The purpose of Cache Memory is to
act as a buffer between the very limited, very high-speed CPU registers and the relatively
slower and much larger main system memory known as RAM. Cache memory has an
operating speed similar to the CPU itself, so when the CPU accesses the data in cache,
the CPU is not kept waiting for the data.

Cache memory is configured such that whenever data is to be read from RAM, the
system hardware first checks to determine if the desired data is in cache. If the data is in
cache, it is quickly retrieved and used by the CPU. However if the data is not in cache,
the data is read from RAM and while being transferred to the CPU, it is also placed in the
cache (just in case it is needed again later).

In terms of storage capacity, cache is much smaller than RAM. It therefore becomes
necessary to split the cache up into sections that can be used to cache different areas of
RAM at different times. Due to this mechanism, a small amount of cache can effectively
speed access to a large amount of RAM.
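
The check-then-store behaviour described above can be sketched in a few lines of code; the
following Python fragment is purely illustrative (the slow_read_from_ram function is a made-up
stand-in for the slower main memory):

    # Illustrative sketch of the cache lookup described above (names are made up).
    cache = {}

    def slow_read_from_ram(address):
        # Stand-in for the slower main-memory access.
        return "data@" + str(address)

    def read(address):
        if address in cache:              # data already in cache: fast path
            return cache[address]
        value = slow_read_from_ram(address)
        cache[address] = value            # store it in cache in case it is needed again
        return value

    print(read(100))   # first access: fetched from "RAM" and placed in the cache
    print(read(100))   # second access: served directly from the cache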

C) RAM:

RAM stands for Random Access Memory, a type of computer memory that can be
accessed randomly. RAM is the most common type of memory found in computers and
other devices such as printers. RAM is used for the storage of both data and programs
while these are in use. The speed of RAM lies between the speed of cache memory and
that of the hard drives, but it is usually closer to the speed of cache memory.

The basic operation of RAM is quite straightforward. At the lowest level, there are the
RAM chips, which are integrated circuits that do the actual remembering. These chips
have four types of connections to the outside world, namely:

1. Power connections (to operate the circuitry within the chip)
2. Data connections (to enable the transfer of data into or out of the chip)
3. Read / Write connections (to control whether data is to be stored or retrieved from the
chip)
4. Address connections (to determine where in the chip the data should be read /
written)

While these operations seem simple, they take place at very high speeds, measured in
nanoseconds.

There are two basic types of RAM, namely

1. Dynamic RAM (DRAM)
2. Static RAM (SRAM)

These two types of RAM differ in the technology they use to hold data. DRAM is more
popular. DRAM needs to be refreshed thousands of times per second. SRAM does not
need to be refreshed, which makes it faster, but it is more expensive than DRAM. Both
these types of RAM are volatile, which means that they lose their content the moment the
power is turned off.


D) Hard Disk:

Hard Drives are a non-volatile medium of storage. The data they contain remains there
even after the power is turned off. Because of this feature, Hard Disk Drives occupy a
special place in the storage spectrum. Their non-volatile nature makes them ideal for
storing programs and data for long-term use. Programs cannot be directly executed from
the Hard Disk Drive; they have to be first loaded into RAM.

Hard Disks operate at a speed much lower than the speeds of cache memory and RAM.
Hard Disk Drives can store anywhere from tens of gigabytes to hundreds of gigabytes or
more, whereas most floppies have a maximum storage of just 1.44 megabytes.

A single hard disk usually consists of several platters. Each platter requires two read /
write heads, one for each side. Each platter has the same number of tracks on which data
gets stored.

In general Hard Disk Drives are less portable than floppies, but these days we can buy
removable hard disk drives, which are getting more and more popular, compact and
inexpensive.

E) Offline Backup Storage Devices:

Off-line backup storage takes a step beyond hard drive storage in terms of capacity
(higher) and speed (slower). Here, capacities are effectively limited only by your ability to
procure and store the removable media. E.g. Removable USB devices like external USB
Hard Disks, Flash Drives etc.


Q84

Select any small system related to your work area. Explain the system in terms of the
following assuming that you were to computerize it:

a) Brief description of the existing process. (Explain the current manual process first)
b) Document sources, which will provide input to the computerized system. (Talk
about entering data online via web forms)
c) Key output reports from system. (Talk about generating reports & metrics using
tools like Crystal Reports etc which you can print or generate into PDF or simply
mail across)
d) Important screens which you can visualize. (Talk about Data Entry Screens, Admin
Screens, Output Screens etc)


(Ans 84)

Self Study




Q85

a) Identify any 5 risks / hazards / threats to data, software and computer operations.
b) For each of the above risks mention a possible solution


(Ans 85)

Data suffers mainly from the following risks, namely:

Data Confidentiality (Unauthorized people should not be allowed to view/use data)
Data Integrity (Data should not be altered and must remain genuine)
Data Availability (Data should be made available whenever required)
Data Corruption (Data should not be corrupted by manual or automatic processes)
Threat from natural disasters
Accidental Deletion of Data
SQL Injection (Malicious users can execute queries from the browser's address bar or
from unvalidated text fields on dynamic websites, and can maliciously read or even drop
tables from the database; a parameterized-query sketch follows the solutions list below)
Data Redundancy

The possible solutions to tackle Data Risks are:

a) To ensure data confidentiality, only authorized people should be allowed access to the
data / system
b) Data Storage and transmission must be encrypted so that its integrity is not compromised
at any given time.
c) To ensure data availability we must have regular backups of data taken so that data is
made available when required.
d) We must ensure that databases are isolated from unauthorized users or processes, which
helps prevent data from getting corrupted either intentionally or accidentally.
e) We must ensure proper and efficient database design to maintain data quality and
accuracy and to avoid data redundancy.
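
As an illustration of the SQL injection risk noted in the list above (and of solution (d)), the sketch
below uses Python with the built-in sqlite3 module (the table and column names are made up) to
show a parameterized query, which ensures user input is treated as data rather than as SQL:

    # Illustrative sketch: a parameterized query guards against SQL injection.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('Asha', 'asha@example.com')")

    user_input = "Asha'; DROP TABLE users; --"   # a malicious value typed into a text field

    # The '?' placeholder ensures the input is treated purely as data, never as SQL.
    rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)   # [] -- no match found, and the users table is still intact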

Software suffers from the following risks, namely:

Software Piracy (Usage of Unlicensed Software)
Abuse of End User License Agreement (EULA)
Software Cracking

Possible Solutions to tackle Software Risks are:

a) Ensure the usage of only licensed and legal software.
b) IT audits should be conducted regularly in organizations to enforce the usage of legally
procured, licensed software.
c) IT departments should ensure that cracked or pirated software is identified and
uninstalled from systems.

Computer systems as a whole are affected by the above hazards or a combination thereof. They
are prone to internal and external threats. For example an unhappy or disgruntled employee
within an organization can pose serious threats to data or software or to the computer operations
within an organization. A hacker can pose an external threat to data, software or computer
operations by taking control of it and misusing it with an intention to cause harm.


Q88

Define Macros


(Ans 88)

A macro in computer science is a rule or pattern that specifies how a certain input sequence
(often a sequence of characters) should be mapped to an output sequence (also often a
sequence of characters) according to a defined procedure. The mapping process which
instantiates a macro into a specific output sequence is known as macro expansion.

The term originated with macro-assemblers, where the idea is to make available to the
programmer a sequence of computing instructions as a single program statement, making the
programming task less tedious and less error-prone.

Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to
be transformed into other, usually more time-consuming, sequences of keystrokes and mouse
actions. In this way, frequently-used or repetitive sequences of keystrokes and mouse
movements can be automated. Separate programs for creating these macros are called macro
recorders.

In a way, macros are like simple programs or batch files. Some applications support sophisticated
macros that even allow you to use variables and flow control structures such as loops.

Macros are commonly used in applications like Microsoft Excel and Word.




Q89

State whether the following is true or false and give reasons for any four:

a) A modem converts only digital signals to analog
b) Gateways translate addresses between dissimilar networks.
c) User Involvement in a system design is not very important
d) VSAT works on a network technology
e) Real Time Processing is different from Online Processing
f) Intranet is the same as Internet


(Ans 89)

Self Study





Q91

What is EDI?


(Ans 91)

Electronic Data Interchange (EDI) is the process of using computers to exchange business
documents between companies. Previously, fax machines or traditional mail was used to
exchange documents. Mailing and faxing are still used in business, but EDI is a much quicker
way to do the same thing.

EDI is the computer-to-computer exchange of business data in standard formats. In EDI,
information is organized according to a specified format set by both parties, allowing a hands-off
computer transaction that requires no human intervention or rekeying on either end. The
information contained in an EDI transaction set is, for the most part, the same as on a
conventionally printed document.

EDI is used by a huge number of businesses. Over 100,000 businesses have replaced the more
traditional methods with EDI. This new system has a number of benefits; cost benefits is one of
them. Computer to computer exchange is much less expensive than traditional methods of
document exchange. Processing a paper-based order can cost up to 70 US dollars (USD), whereas
using EDI costs 1 USD or less.

As a computer processes the documents in EDI, there is also less chance of human error. Speed
is another benefit. A paper purchase order can take ten days to two weeks from the time the
buyer requests it to the time the shipper sends it off. Now, the same order can be processed in
less than a day.

These faster transaction times help maintain efficient inventory levels. They also contribute
to a better use of warehouse space, and fewer out-of-stock problems. This in turn leads to a
reduction in freight costs, as there should be no need for last-minute urgent delivery surcharges.

There are a few drawbacks to the process. EDI can only work if everyone the company deals with
is using the same method. If the changeover has not been made within some businesses, other
companies dealing with them may have to use EDI as well as the more traditional methods. This
can be costly and time consuming. It may involve one member of staff maintaining the mailing
process while another worker sends documents electronically.

The process of using EDI for purchasing is very simple. Once a purchase order has been written,
the document is translated into a specific format and submitted to the supplier via the Internet. It
must be ensured that the document used by both parties is in exactly the same format.

Security is an important issue for companies using EDI. Data security is controlled throughout the
process using passwords, encryption and user identification. The EDI has software that checks
and edits the documents for accuracy.

Basically, all that is needed to run this method of document exchange is a computer and access
to the Internet. There are many companies that supply no-fuss EDI software application
packages. These have their own encryption software and the installation process can be
completed in a few minutes. From a small initial outlay, companies can use EDI to improve
efficiency while saving time, money and labor.


Q92

Explain the following terms:

a) E-Commerce
b) E-CRM
c) World Wide Web
d) Data Warehousing
e) Feasibility Study
f) Semiconductor Memory
g) On-line Systems
h) Electronic Spreadsheet


(Ans 92 a)

Electronic commerce

Electronic commerce, commonly known as e-commerce or eCommerce, consists of the buying
and selling of products or services over electronic systems such as the Internet and other
computer networks. The amount of trade conducted electronically has grown dramatically since
the wide introduction of the Internet. A wide variety of commerce is conducted in this way,
including things such as electronic funds transfer, supply chain management, internet marketing,
online transaction processing, electronic data interchange (EDI), automated inventory
management systems, and automated data collection systems. Modern electronic commerce
typically uses the World Wide Web at least at some point in the transaction's lifecycle, although it
can encompass a wider range of technologies such as e-mail as well.

A small percentage of electronic commerce is conducted entirely electronically for "virtual" items
such as access to premium content on a website, but most electronic commerce eventually
involves physical items and their transportation in at least some way.

E-commerce or electronic commerce is generally considered to be the sales aspect of e-
business. The meaning of the term "electronic commerce" has changed over the last 30 years.
Originally, "electronic commerce" meant the facilitation of commercial transactions electronically,
usually using technology like Electronic Data Interchange (EDI) and Electronic Funds Transfer
(EFT), where both were introduced in the late 1970s, for example, to send commercial
documents like purchase orders or invoices electronically. The 'electronic' or 'e' in e-commerce
refers to the technology/systems; the 'commerce' refers to traditional business models. E-
commerce is the complete set of processes that support commercial business activities on a
network. In the 1970s and 1980s, this would also have involved information analysis. The growth
and acceptance of credit cards, automated teller machines (ATM) and telephone banking in the
1980s were also forms of e-commerce. However, from the 1990s onwards, this would include
enterprise resource planning systems (ERP), data mining and data warehousing.
In the dot com era, it came to include activities more precisely termed "Web commerce" -- the
purchase of goods and services over the World Wide Web, usually with secure connections
(HTTPS, i.e. HTTP over an encrypted connection that protects confidential ordering data) with
e-shopping carts and with electronic payment services, like credit card payment
authorizations.

Today, it encompasses a very wide range of business activities and processes, from e-banking to
offshore manufacturing to e-logistics. The ever growing dependence of modern industries on
electronically enabled business processes gave impetus to the growth and development of
supporting systems, including backend systems, applications and middleware. Examples are
broadband and fiber-optic networks, supply-chain management software, customer relationship
management software, inventory control systems and financial accounting software.

The emergence of e-commerce also significantly lowered barriers to entry in the selling of many
types of goods; accordingly many small home-based proprietors are able to use the internet to
sell goods. Often, small sellers use online auction sites such as eBay, or sell via large corporate
websites like Amazon.com, in order to take advantage of the exposure and setup convenience of
such sites.

Benefits of e-Commerce
No need for real estate or physical office
No boundary barriers to business
Announce your availability to the entire internet community
Easy to penetrate existing and new markets
Build buyer / seller relationships online
Improve Marketing capabilities

Potential Challenges to e-Business
Customs
Confidentiality Issues
Fraud and Phishing
Security Issues
IPR Infringement
Regulations

(Ans 92 b)

E-CRM

Today technology allows you to relate to your customers, vendors etc in so many feasible ways.
Customer Relationship Management done using electronic means can be understood as E-CRM.

As the internet is becoming more and more important in business life, many companies consider
it as an opportunity to reduce customer-service costs, tighten customer relationships and most
important, further personalize marketing messages and enable mass customization. Together
with the creation of Sales Force Automation (SFA), where electronic methods were used to
gather data and analyze customer information, the rise of the Internet can be seen as
the foundation of what we know as eCRM today.

We can define eCRM as activities to manage customer relationships by using the Internet, web
browsers or other electronic touch points. The challenge hereby is to offer communication and
information on the right topic, in the right amount, and at the right time, fitting the customer's
specific needs.

Channels through which companies can communicate with their customers are growing by the day,
and as a result, getting customers' time and attention has turned into a major challenge. One of the
reasons eCRM is so popular nowadays is that digital channels can create unique and positive
experiences, not just transactions, for customers. The internet has created a platform that allows
companies to respond directly to any customer's requests or problems, and this feature
of eCRM helps companies establish and sustain long-term customer relationships.

Furthermore, Information Technology has helped companies differentiate even further between
customers and address a personal message or service to each. One example of a tool used in eCRM
is the personalized web page, where customers are recognized and their preferences are shown
(e.g. the Amazon and Dell websites).

CRM programs should be directed towards customer value that competitors cannot match.
However, in a world where almost every company is connected to the Internet, eCRM has
become a requirement for survival, not just a competitive advantage.

Different levels of e-CRM

In defining the scope of eCRM, three different levels can be distinguished:

1. Foundational services:
This includes the minimum necessary services such as web site effectiveness and
responsiveness as well as order fulfillment.

2. Customer-centered services:
These services include order tracking, product configuration and customization as well as
security/trust.

3. Value-added services:
These are extra services such as online auctions and online training and education.

Self-services are becoming increasingly important in CRM activities. The rise of the Internet and
eCRM has boosted the options for self-service activities. A critical success factor is the
integration of such activities into traditional channels. An example was Ford's plan to sell cars
directly to customers via its Web Site, which provoked an outcry among its dealer network.
CRM activities are mainly of two different types. Reactive service is where the customer has a
problem and contacts the company. Proactive service is where the manager has decided not to
wait for the customer to contact the firm, but to contact the customer proactively in
order to establish a dialogue and solve problems.

Mobile CRM

One subset of Electronic CRM is Mobile CRM (mCRM). This is defined as services that aim at
nurturing customer relationships, acquiring or maintaining customers, supporting marketing, sales or
service processes, and using wireless networks as the medium of delivery to the customers.

However, since communication is the central aspect of customer relations activities, many opt
for the following definition of mCRM: communication, either one-way or interactive, which is
related to sales, marketing and customer service activities conducted through a mobile medium for
the purpose of building and maintaining customer relationships between a company and its
customer(s).

eCRM allows customers to access company services from more and more places, since the
Internet access points are increasing by the day. mCRM, however, takes this one step further and
allows customers or managers to access the systems for instance from a mobile phone or PDA
with internet access, resulting in high flexibility. An example of a company that implemented
mCRM is Finnair, who made it possible for their customers to check in for their flights by SMS.
Since mCRM is not able to provide a complete range of customer relationship activities, it should
be integrated into the complete CRM system.
How does e-CRM help improve customer satisfaction?
1) Support your remote sales force

Authorized personnel can retrieve or update all customer information via the internet.
They can view reports and quotations
Communications can be recorded or retrieved while away from the office

2) Share customer information

All authorized personnel have access to the same information
Information is centrally located
Information from different departments is shared regardless of location

3) Keep the customer always informed via the customer portal

Via the internet, your clients can view their own data including communications, documents and
financial transactions. This helps you work in a partnership model with your clients

4) Add value to your Resellers via a reseller portal

Provide prospects, customer information and contract data, product news, company news etc via
the reseller portal, so that your resellers can serve your end consumers well. Remember your
Resellers also represent you, so you need to empower them to reach out to the end consumer.

5) Keep track of your prospects

You can record new prospects yourself or a prospect can request information via your internet
site. The prospect info now becomes part of a workflow and can be traced via e-CRM.

6) Improve account management

Your accounts, communications, contacts, tasks and financial information are visualized in one
screen. You can track customer requests in the workflow and assign tasks. Roles and
responsibilities can now be effectively delegated and their progress tracked.

7) Reports & Statistics

Where are your customers?
What is the performance of your sales / Service Department?
How long does it take to answer customer questions / requests?
How much does it cost to convert a prospect into a customer versus the cost of servicing or
mining an existing account?

8) Security

Systems Security factors that aid CRM are Virtual Private Networks (VPNs), Virtual LANs
(VLANs), Firewalls and other network level restrictions. Role based logins contribute to
Information security and management.

All in all, the core of the E-CRM system consists of Workflow, Messaging and Security
modules.


The things that make e-CRM work for your customers are:

a) Portal
b) Customization
c) Personalization
d) Collaboration

The things that make e-CRM work for your administrator or your partners are:

a) Tracking
b) Content Management
c) Document Management
d) Collaboration
e) Analytics & Reporting

The things that make e-CRM support everyone are:

a) Call center
b) Global and contextual help
c) Wizards
d) Knowledge bases
e) Message / Bulletin Boards
f) Chat
g) FAQs etc.
(Ans 92 c)

World Wide Web
A system of Internet servers that support specially formatted documents. The documents are
formatted in a markup language called HTML (HyperText Markup Language) that supports links
to other documents, as well as graphics, audio, and video files. This means you can jump from
one document to another simply by clicking on hot spots. Not all Internet servers are part of the
World Wide Web.
There are several applications called Web browsers that make it easy to access the World Wide
Web; two of the most popular are Mozilla Firefox and Microsoft's Internet Explorer.
The World Wide Web is not synonymous with the Internet.
The difference between Internet and the World Wide Web:

Many people use the terms Internet and World Wide Web (aka. the Web) interchangeably, but in
fact the two terms are not synonymous. The Internet and the Web are two separate but related
things.
The Internet is a massive network of networks, a networking infrastructure. It connects millions of
computers together globally, forming a network in which any computer can communicate with any
other computer as long as they are both connected to the Internet. Information that travels over
the Internet does so via a variety of languages known as protocols.
The World Wide Web, or simply Web, is a way of accessing information over the medium of the
Internet. It is an information-sharing model that is built on top of the Internet. The Web uses the
HTTP protocol, only one of the languages spoken over the Internet, to transmit data. Web
services, which use HTTP to allow applications to communicate in order to exchange business
logic, use the Web to share information. The Web also utilizes browsers, such as Internet
Explorer or Firefox, to access Web documents called Web pages that are linked to each other via
hyperlinks. Web documents also contain graphics, sounds, text and video.
The Web is just one of the ways that information can be disseminated over the Internet. The
Internet, not the Web, is also used for e-mail, which relies on SMTP, Usenet news groups, instant
messaging and FTP. So the Web is just a portion of the Internet, albeit a large portion, but the
two terms are not synonymous and should not be confused.
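
As a small illustration of the Web's request-and-response model, the following Python sketch fetches
one page over HTTP using the standard urllib.request module, acting like a very small browser (the
URL is an example):

    # Sketch: fetching a single web page over HTTP (illustrative).
    from urllib.request import urlopen

    with urlopen("http://example.com/") as response:   # example URL
        html = response.read().decode("utf-8")

    print(html[:80])   # the start of the HTML document returned by the web server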
Advantages of the World Wide Web
The Web provides connectivity previously unavailable in the computer world

The Web has open and widely available standards, which allow computers with different
operating systems to communicate on a large scale. For example, PC users can communicate with
Mac users, and people using the Windows OS can communicate with Unix or Linux
systems.

All Web applications function in browsers. This eliminates the need to create programs
for multiple operating systems.

Users do not have to routinely upgrade hardware, operating systems, and
network infrastructures to use the Web. Web clients only have to upgrade browsers on
occasion.

New applications can be developed and distributed at rapid speeds over the Web.

The World Wide Web also supports different languages, and hence it enables people
from different countries who speak different languages to communicate with
ease with the rest of the world. The World Wide Web has made knowledge exchange a
lot easier.

The World Wide Web is now not confined only to a home PC; one can access its features
even through a mobile phone or a handheld device that can connect to it.



(Ans 92 d)

Data Warehousing
Data warehousing is combining data from multiple and usually varied sources into one
comprehensive and easily manipulated database. Common accessing systems of data
warehousing include queries, analysis and reporting. Because data warehousing creates one
database in the end, the number of sources can be anything you want it to be, provided that the
system can handle the volume, of course. The final result, however, is homogeneous data, which can
be more easily manipulated.
Data warehousing is commonly used by companies to analyze trends over time. In other words,
companies may very well use data warehousing to view day-to-day operations, but its primary
function is facilitating strategic planning resulting from long-term data overviews. From such
overviews, business models, forecasts, and other reports and projections can be made.
This is not to say that data warehousing involves data that is never updated. On the contrary, the data
stored in data warehouses is updated all the time. It's the reporting and the analysis that take more of
a long-term view.
Data warehousing is typically used by larger companies analyzing larger sets of data for
enterprise purposes. Smaller companies wishing to analyze just one subject, for example,
usually access data marts, which are much more specific and targeted in their storage and
reporting. Data warehousing often includes smaller amounts of data grouped into data marts.
In this way, a larger company might have at its disposal both data warehousing and data marts,
allowing users to choose the source and functionality depending on current needs.
A Data Warehouse is a relational / multidimensional database that is designed for query
and analysis rather than simply transaction processing. A data warehouse usually contains
historic data that is derived from transactional data. It separates analysis workload from
transaction workload and enables a business to consolidate data from several sources. In
addition to a relational database, a data warehouse environment often consists of an ETL
(Extraction Transformation and Loading) solution, an OLAP engine (Online Analytical
Processing), client analysis tools and other applications that manage the process of gathering
data and delivering it to the business users.
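
A minimal sketch of the ETL idea is shown below in Python with the built-in sqlite3 module; the two
in-memory databases stand in for an operational system and a warehouse, and all table and column
names are made up for illustration:

    # Illustrative ETL step: extract transactional rows, transform (summarise), load into the warehouse.
    import sqlite3

    operational = sqlite3.connect(":memory:")   # stand-in for an operational system
    warehouse = sqlite3.connect(":memory:")     # stand-in for the warehouse database

    operational.execute("CREATE TABLE orders (order_date TEXT, amount REAL)")
    operational.execute("INSERT INTO orders VALUES ('2010-07-01', 1200.0)")
    operational.execute("INSERT INTO orders VALUES ('2010-07-01', 800.0)")

    warehouse.execute("CREATE TABLE daily_sales (order_date TEXT, total_amount REAL)")

    # Extract and transform: summarise the transactional rows by day.
    for day, total in operational.execute(
            "SELECT order_date, SUM(amount) FROM orders GROUP BY order_date"):
        # Load: write the summarised (historic) figures into the warehouse table.
        warehouse.execute("INSERT INTO daily_sales VALUES (?, ?)", (day, total))

    print(warehouse.execute("SELECT * FROM daily_sales").fetchall())   # [('2010-07-01', 2000.0)]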

A data warehouse is the main repository of an organization's historical data, its corporate
memory. It contains the raw material for management's decision support system. The critical
factor leading to the use of a data warehouse is that a data analyst can perform complex queries
and analysis, such as data mining, on the information without slowing down the operational
systems.

There are three types of data warehouses:

Enterprise Data Warehouse
An enterprise data warehouse provides a central database for decision support throughout the
enterprise;

ODS (Operational Data Store)
This has a broad enterprise-wide scope, but unlike the real enterprise data warehouse, data is
refreshed in near real time and used for routine business activity.



Data Mart

A data mart is a subject-specific repository of data gathered from operational data and other
sources that is designed to serve a particular community of knowledge workers.

Data mart is a subset of data warehouse and it supports a particular region, business unit or a
business function such as Sales, Marketing or Finance etc.

A data warehouse might be used to find the day of the week on which a company sold the most
items in May 1992, or how employee sick leave the week before the winter break differed
between California and New York from 2001 to 2005.

Advantages of using a Data Warehouse

There are many advantages to using a data warehouse, some of them are:

Enhances end-user access to a wide variety of data.
Decision support system users can obtain specified trend reports, e.g. the item with the
most sales in a particular area/country within the last two years.
A data warehouse can be a significant enabler of commercial business applications, most
notably customer relationship management (CRM).

Concerns

Extracting, transforming and loading data consumes a lot of time and computational
resources.
Data warehousing project scope must be actively managed to deliver a release of defined
content and value.
Compatibility problems with systems already in place.
Security could develop into a serious issue, especially if the data warehouse is web
accessible
(Ans 92 f)

Semiconductor Memory
A semiconductor is a material that is neither a good conductor of electricity (like copper) nor a good insulator (like
rubber). The most common semiconductor materials are silicon and germanium. These materials
are then doped to create an excess or lack of electrons.

Computer chips, both for CPU and memory, are composed of semiconductor materials.
Semiconductors make it possible to miniaturize electronic components, such as transistors. Not
only does miniaturization mean that the components take up less space, it also means that they
are faster and require less energy.

Semiconductor memory is computer memory implemented on a semiconductor-based integrated
circuit. Examples of semiconductor memory include static random access memory, which relies
on transistors, and dynamic random access memory, which uses capacitors to store the bits.

Semiconductors are unique materials, solids whose electrical conductivity can be changed
deliberately, usually in a dynamic (reversible) fashion. Semiconductors are used to make
semiconductor devices, which led to the Information Age of the late 20th century. Today,
semiconductors are ubiquitous, and continue to penetrate further into our daily lives. These
devices include actuators and control systems in cars, MP3 players, cell phones, and computers
of all kinds. Semiconductors are arguably one of the most important technologies of the 20th
century, and continue to be a central aspect of developed economies. The most common
semiconductors are made of silicon, as it is relatively cheap to extract from sand, and gets the job
done. The semiconductor industry sells several hundred billion US Dollars of product per year.



Tips / Note:

Doping means adding elements to a semiconductor material during the manufacturing process to increase its conductivity. The
impurities added are called dopants. Common dopants include arsenic, antimony, bismuth and phosphorous.

The type and level of doping determines whether the semiconductor is N-type (current is conducted by excess free
electrons) or P-type (current is conducted by electron vacancies).

Transistors are a device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented
in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers. Today's
microprocessors contain tens of millions of microscopic transistors.

Capacitors are a passive electronic component that holds a charge in the form of an electrostatic field. They are often
used in combination with transistors in DRAM, acting as storage cells to hold bits.

Capacitors typically consist of conducting plates separated by thin layers of dielectric material, such as dry air or mica.
The plates on opposite sides of the dielectric material are oppositely charged and the electrical energy of the charged
system is stored in the polarized dielectric.


(Ans 92 g)

On-line Systems
On-line systems are ones that involve an authentication and authorization server (a specialized
dial-up digital cash or VISA computer, for example). Information provided by a user is compared
against information in a central database. A transaction between buyer and seller (customer and
merchant) does not take place unless the third-party server first verifies the buyer's identity (in
non-anonymous digital cash systems) or the validity of the buyer's digital cash (in both
anonymous and non-anonymous systems), and authorizes payment to the seller of the good.

Most purely software-based digital cash systems are online systems; many reservation systems
are also online systems, as are many real-time systems.

Online systems are very useful as users can access them whenever they require and from
anywhere in the world via any device that is capable of connecting to the internet.


(Ans 92 h)

Electronic Spreadsheet
A spreadsheet, also known as a worksheet, contains rows and columns and is used to record and
compare numerical or financial data. Originally, spreadsheets only existed in paper format, but
now they are most likely created and maintained through a software program that displays the
numerical information in rows and columns. Spreadsheets can be used in any area or field that
works with numbers and are commonly found in the accounting, budgeting, sales forecasting,
financial analysis, and scientific fields.

Computerized spreadsheets mimic a paper spreadsheet. The advantage of using computerized
spreadsheets is their ability to update data and perform automatic calculations extremely quickly.
On a computerized spreadsheet, the intersection of a row and a column is called a cell. Rows are
generally identified by numbers - 1, 2, 3, and so on - and columns are identified by letters, such
as A, B, C, and so on. The cell is a combination of a letter and a number to identify a particular
location within the spreadsheet, for example A3.

To maneuver around the spreadsheet, you use the mouse or the tab key. When the contents of
one cell are changed, any other affected cell is automatically recalculated according to the
formulas in use. Formulas are the calculations to be performed on the data. Formulas can be
simple, such as sum or average, or they can be very complex. Spreadsheets are also popular for
testing hypothetical scenarios.
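
The automatic-recalculation idea can be sketched in a few lines of Python; the cell names and the
single formula below are purely illustrative:

    # Tiny sketch of spreadsheet-style cells and a formula that is recalculated on change.
    cells = {"A1": 10, "A2": 20, "A3": 30}

    def recalculate():
        # B1 holds a formula: the sum of A1..A3, like =SUM(A1:A3) in a spreadsheet.
        cells["B1"] = cells["A1"] + cells["A2"] + cells["A3"]

    recalculate()
    print(cells["B1"])   # 60

    cells["A2"] = 50     # change one cell...
    recalculate()        # ...and the dependent cell is recalculated
    print(cells["B1"])   # 90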

Setting up a spreadsheet can be fairly time consuming, although templates, or sample
spreadsheets, are available with most software packages. The computerized spreadsheet can be
formatted with titles, colors, bold text, and italics for a professional look. You can also create
graphs and charts based on the data entered in your spreadsheet. Many packages have the
ability to print mailing lists or labels.

The original computerized spreadsheet software was VisiCalc, designed for use on Apple
computers. Now many commercial computerized software packages are available for Microsoft
Windows and other operating systems. Popular spreadsheet packages include Microsoft Excel
and Lotus 1-2-3.

Individuals, in addition to businesses, use computerized spreadsheet software for a variety of
tasks that involve numerical data. Teachers can store and average grades with a spreadsheet.
Individuals can use a spreadsheet to track a personal budget or store sports team statistics.
Spreadsheets are one of the most popular uses for personal computers.



(Ans 95)

Analog Computers versus Digital Computers

Analog Computer:

There are two distinct families of computing devices
available to us today: the all-pervasive digital
computer and the almost-forgotten analog computer.
These two types of computer operate on quite
different principles.

An analog computer is a form of computer that uses
continuous physical phenomena such as electrical,
mechanical, or hydraulic quantities to model the
problem being solved.

Illustration: the Polish analog computer AKAT-1

Mechanism of an Analog Computer:

In analog computers, computations are often
performed by using properties of electrical
resistance, voltages and so on.

Analog computers do not have the ability of digital computers to store data in large quantities, nor
do they have the comprehensive logical facilities afforded by programmable digital machines. And
although the arithmetic functions performed by analog computing units are more complex than those
in digital systems, the cost of the hardware required to provide a high degree of accuracy in an
analog machine is often prohibitive.

Some analog machines are designed for specific applications, but most electrical and electronic
analog computers provide a number of different computing devices which can be connected
together via a plug board to provide different methods of operation for specified problems.

The use of electrical properties in analog computers means that calculations are normally
performed in real time (or faster), at a significant fraction of the speed of light, without the
relatively large calculation delays of digital computers. This property allows certain useful
calculations that are comparatively "difficult" for digital computers to perform, for example
numerical integration. Analog computers can integrate a voltage waveform, usually by means of a
capacitor, which accumulates charge over time.
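
By contrast, a digital computer must approximate the same integral step by step. The short Python
sketch below is purely illustrative (the waveform, time step and duration are arbitrary assumptions);
it accumulates the area under a voltage waveform using a simple rectangle rule, the discrete
counterpart of a capacitor accumulating charge continuously:

import math

dt = 0.0005                                                       # time step in seconds (assumed)
samples = [math.sin(2 * math.pi * i * dt) for i in range(1000)]   # samples covering 0 to 0.5 s

# Accumulate the area under the waveform step by step.
integral = 0.0
for v in samples:
    integral += v * dt

print(round(integral, 4))   # about 0.3183; the exact half-period integral is 1/pi

The accuracy of the digital result depends on the chosen step size, whereas the analog integrator
works on the continuous waveform directly.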

Any physical process which models some computation can be interpreted as an analog
computer.

Components of an Analog Computer:

Illustration: a 1960 Newmark analogue computer, made up of five units, used to solve differential
equations and now housed at the Cambridge Museum of Technology.

Analog computers often have a complicated framework, but they have, at their core, a set of key
components which perform the calculations and which the operator manipulates through the
computer's framework.

Key hydraulic components might include pipes, valves or towers; mechanical components might
include gears and levers; key electrical components might include:

potentiometers
operational amplifiers
integrators
fixed-function generators

The core mathematical operations used in an electric analog computer are:

summation
inversion
exponentiation
logarithm
integration with respect to time
differentiation with respect to time
multiplication and division

Differentiation with respect to time is not frequently used. It corresponds in the frequency domain
to a high-pass filter, which means that high-frequency noise is amplified.
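
A quick way to see why: differentiating sin(ωt) gives ω cos(ωt), so the amplitude of each component
grows in proportion to its frequency. The Python sketch below is illustrative only (the signal, ripple
frequency and amplitudes are arbitrary assumptions); it shows a tiny high-frequency ripple that is
negligible in the original signal but dominant after numerical differentiation:

import math

dt = 0.001
t = [i * dt for i in range(2000)]   # two seconds of samples

# Slow 1 Hz signal plus a tiny fast 200 Hz ripple standing in for noise.
signal = [math.sin(2 * math.pi * 1 * x) + 0.01 * math.sin(2 * math.pi * 200 * x)
          for x in t]

# Simple finite-difference derivative.
deriv = [(signal[i + 1] - signal[i]) / dt for i in range(len(signal) - 1)]

print(round(max(abs(s) for s in signal), 2))  # about 1.01 - the ripple is negligible
print(round(max(abs(d) for d in deriv), 1))   # about 18.0 - the ripple now dominates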


Limitations of Analog Computers:

In general, analog computers are limited by real, non-ideal effects. An analog signal is composed
of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range
on these characteristics constrain analog computers. These limits include the noise floor, non-
linearities, temperature coefficients, parasitic effects within semiconductor devices, and the finite
charge of an electron. For commercially available electronic components, the ranges of these
input and output signal characteristics are quoted as figures of merit.

Analog computers, however, have been replaced by digital computers for almost all uses.

These are examples of analog computers that have been constructed or practically used:

Antikythera mechanism
astrolabe
differential analyzer
Kerrison Predictor
mechanical integrator
MONIAC Computer (hydraulic model of UK economy)
nomogram
Norden bombsight
operational amplifier
planimeter
Rangekeeper
slide rule
Torpedo Data Computer
Tide predictors
Water integrator


Digital Computer:

There are two basic data transfer and communication systems in computing technology: digital
and analog. Analog systems have continuous input and output of data, while digital systems
manipulate information in discrete chunks. Although digital devices could use any numeric system
to manipulate data, in practice they use the binary number system, consisting of ones and zeros.
Information of all types, including characters and decimal numbers, is encoded in the binary
number system before being processed by digital devices.
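
As a small illustration (the number and character below are arbitrary choices for this example), the
following Python snippet shows a decimal number and a character being turned into the binary
patterns a digital computer actually processes, and back again:

number = 77
character = "M"

print(bin(number))                      # 0b1001101 - the decimal value 77 in binary
print(format(ord(character), "08b"))    # 01001101 - 'M' encoded as 8 bits (ASCII code 77)

# Going back the other way:
print(int("1001101", 2))                # 77
print(chr(0b01001101))                  # M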

Analog devices can work on a continuous flow of input, whereas digital devices must explicitly
sample the data coming in. Determining this sample rate is an important decision affecting the
accuracy and speed of real-time systems. One thing that digital computers do more easily is
evaluate logical relationships: digital computers use Boolean arithmetic and logic, and their logical
decisions are probably as important as their numerical calculations.
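
The importance of the sample rate can be seen in a few lines of Python (the signal frequency and
sampling rate below are arbitrary assumptions for illustration): a 50 Hz waveform sampled at only
60 samples per second, well below the required rate of 100 samples per second, produces exactly
the same sample values as a 10 Hz waveform, so the computer cannot tell the two apart (a
phenomenon known as aliasing):

import math

rate = 60                  # samples per second - too low for a 50 Hz wave
n = range(12)              # a fifth of a second of samples

fast = [math.sin(2 * math.pi * 50 * i / rate) for i in n]    # true 50 Hz signal
slow = [-math.sin(2 * math.pi * 10 * i / rate) for i in n]   # a 10 Hz "alias"

# Sampled below the Nyquist rate (2 x 50 = 100 samples per second), the 50 Hz
# wave yields exactly the same sample values as a 10 Hz wave.
print(all(abs(a - b) < 1e-9 for a, b in zip(fast, slow)))    # True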

The first digital computer of the modern sort was the programmable calculator designed, but
never built, by British mathematician Charles Babbage (1791-1871). The Electronic Numerical
Integrator and Computer (ENIAC), designed by American engineers J. Presper Eckert (1919-1995)
and John W. Mauchly (1907-1980), is considered the first general purpose electronic digital
machine.

Digital computers are forgiven their small inaccuracies, such as rounding errors, because of their
speed. These speed improvements became common when the entire processor was integrated on
one chip, since data transfers between components on the chip are very fast owing to the small
distances between them. Analog-to-digital and digital-to-analog converters can also be chip-
based, so converting signals to and from analog form adds little delay.

Some digital computers are built from more than 1,000 of these processors working in parallel. This
makes possible what has been the chief strength of digital computers all along: doing what is
impossible for humans. At first, they performed calculations faster, if not better. Then they
controlled other devices, some digital themselves, some analog, almost certainly better than
people could. Finally, the size and speed of digital computers make possible the modeling of wind
flowing over a wing, the first microseconds of a thermonuclear explosion, or a supposedly
unbreakable code.














Disclaimer and source of this information

I would like to acknowledge the tremendous support of the internet, whitepapers, articles
published by many scholars and subject matter experts on renowned websites.

The copyright remains with the original authors; the information published in this document is
merely a compilation, intended solely for educational purposes.

This document is not for sale or circulation for commercial purposes.
