Prepared By:
Approved by:
UNIT I
IT Infrastructure Evolution: Introduction - Technologies - Global Internet Infrastructure - World Wide
Web and Web Services - Open-Source Movement.
Productivity Paradox and Information Technology: Productivity Paradox - Return on Technology
Investment - Information Technology Straightjacket - Consolidation - Outsourcing - Toward a Real-Time
Enterprise - Operational Excellence.
Business Value of Grid Computing: Grid Computing Business Value Analysis - Risk Analysis - Grid
Marketplace.
2 Marks
1. What is Grid Computing?
(Apr 11)(Nov 10)
Grid Computing enables virtual organizations to share geographically distributed resources as they
pursue common goals, assuming the absence of central location, central control, omniscience, and an
existing trust relationship.
2. What is High-Performance Computing?
High-performance computing generally refers to what has traditionally been called supercomputing.
There are hundreds of supercomputers deployed throughout the world. Key parallel processing
algorithms have already been developed to support execution of programs on different, but co-located
processors. High-performance computing system deployment, contrary to popular belief, is not
limited to academic or research institutions. More than half of supercomputers deployed in the world
today are in use at various corporations.
3. Explain about cluster computing.
(Nov 10)
Clusters are high-performance, massively parallel computers built primarily out of commodity
hardware components, running a free-software operating system such as Linux or FreeBSD, and
interconnected by a private high-speed network. It consists of a cluster of PCs, or workstations,
dedicated to running high-performance computing tasks. The nodes in the cluster do not sit on users'
desks, but are dedicated to running cluster jobs. A cluster is usually connected to the outside world
through only a single node.
4. List the characteristics of Grid Computing.
(April 2014)
Large scale
Geographical distribution
Heterogeneity
Resource sharing
Multiple administration
Resource coordination
Transparent access
Dependable access
Consistent access
Pervasive access
11 Marks
1. Explain briefly about IT Infrastructure Evolution?
(Nov 09)
Technology has for many years now been governed by the law of exponentials. Hyper growth has
occurred in microprocessor speeds, storage, and optical capacity. What have historically been
independent developments have now come together to form a powerful technology ensemble on
which new technologies such as grid computing can flourish. This chapter highlights the significant
developments that have occurred in the past decade.
Microprocessor Technology
In a 1965 article in Electronics magazine, Intel founder-to-be Gordon Moore predicted that circuit
densities in chips would double every 12 to 18 months. His proclamation in the article, "Cramming
More Components onto Integrated Circuits," has come to be known as Moore's Law. Although
Moore's original statement was in reference to circuit densities of semiconductors, it has more
recently come to describe the processing power of microprocessors. Over the past 37 years, Moore's
Law has successfully predicted the exponential growth of component densities, as the table below shows.
Exponential Growth of Transistors on a Single Chip

Year    Transistors
1970    2,300
1975    6,000
1980    29,000
1985    275,000
1990    5,500,000
2000    42,000,000
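To make the table concrete, the short Python sketch below (an illustrative addition, not part of the original text) computes the doubling period implied by these figures; it comes out to roughly two years per doubling.

# Minimal sketch: doubling period implied by the transistor counts above.
import math

counts = {1970: 2_300, 1975: 6_000, 1980: 29_000,
          1985: 275_000, 1990: 5_500_000, 2000: 42_000_000}

years = 2000 - 1970
growth = counts[2000] / counts[1970]          # roughly 18,000x over 30 years
doubling_period = years / math.log2(growth)   # years per doubling

print(f"Implied doubling period: {doubling_period:.1f} years")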
The exponential growth of computing power is evidenced daily through the use of our personal
computers. The personal desktop computer of today processes one billion floating-point operations
per second, giving it more computing power than some of the world's largest supercomputers circa
1990. High-performance computing systems have also seen manifold growth.
Optical Networking Technology
Today, a single strand of fiber can carry more traffic in a second than all of the traffic carried on the whole
Internet in a month in 1997. This phenomenon has been made possible by developments in fiber
optics and related technologies, which enable us to put more and more data onto optical
fiber. In the mid-1990s, George Gilder predicted that the optical capacity of a single fiber would
treble every 12 months. Therefore, it is no surprise that Gilder's Law is now used to refer to the
exponential improvements in optical capacity.
One of the key optical technologies is called dense wavelength division multiplexing, or DWDM. DWDM
increases the capacity of the fiber by efficiently utilizing the optical spectrum. Certain frequency
bands within the optical spectrum have better attenuation characteristics and have been identified as
suitable for optical data communications.
The increase in the optical capacity of a fiber occurs along two dimensions: first, the number of
wavelengths utilized within the spectrum is increased and, second, the amount of data pushed through
each wavelength is increased. A rough capacity calculation is sketched below.
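As a rough illustration of how these two dimensions combine, the Python sketch below multiplies an assumed channel count by an assumed per-wavelength data rate; both figures are assumptions chosen for illustration, not values from the text.

# Minimal sketch, with assumed figures: total DWDM fiber capacity is roughly
# the number of wavelengths times the data rate carried per wavelength.
wavelengths = 160          # assumed number of DWDM channels on one fiber
gbps_per_wavelength = 10   # assumed data rate per wavelength, in Gbps

capacity_gbps = wavelengths * gbps_per_wavelength
print(f"Capacity per fiber: {capacity_gbps} Gbps ({capacity_gbps / 1000:.1f} Tbps)")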
Storage Technology
(Nov 09)
Fujitsu Corporation recently announced that it had developed a new read/write disk head technology
that will enable hard disk drive recording densities of up to 300 gigabits per square inch. Current 2.5-inch hard disks can store around 30GB per disk platter. When Fujitsu commercializes its new disk
head technology in two to four years' time, it will lead to capacities of 180GB per platter, six times
current capacities. This growth in storage capacity is, in fact, not new. Over the last decade, disk
storage capacity has improved faster than Moore's Law of processing power. Al Shugart, founder and
former CEO of Seagate, is sometimes credited with predicting this; hence, for the sake of
consistency, we will refer to the exponential increase in storage density as Shugart's Law.
But storage capacity is not the only issue. Indeed, the rate at which data can be accessed is
becoming an important factor that may also determine the useful life span of magnetic disk-drive
technology. Although the capacity of hard-disk drives is surging by 130 percent annually, access rates
are increasing by a comparatively tame 40 percent.
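A hedged Python sketch of what this gap implies: if capacity compounds at the quoted 130 percent per year while access rates compound at only 40 percent, the time needed to read an entire disk stretches out year after year. The five-year horizon is an assumption for illustration.

# Minimal sketch: growth in full-disk read time when capacity outpaces access rates.
capacity_growth = 2.30   # capacity multiplies by 2.3x each year (+130%)
access_growth = 1.40     # access rate multiplies by 1.4x each year (+40%)

ratio = 1.0
for year in range(1, 6):
    ratio *= capacity_growth / access_growth
    print(f"After {year} year(s), reading a full disk takes ~{ratio:.1f}x longer")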
Wireless Technology
(Apr 12)
It is interesting to note that no luminary has been granted a law of exponential growth in his or her
name in the field of wireless technology. Perhaps it is because wireless technology has not seen the
exponential gains in raw capacity that other technologies have. Nonetheless, the adoption rate of
mobile wireless in particular has been no less than spectacular around the world. Many countries have
started deployment of 3G technology that promises to bring data at a rate of 1 Mbps to a wireless
device. The deployment of 3G now has more to do with the financial condition of the carriers, and
whether they can actually afford to deploy these systems, than with the robustness of the underlying
technology.
Fixed wireless, another technology that was supposed to revolutionize bandwidth access, has not
fared that well. The adoption rate of licensed spectrum fixed wireless technologies such as MMDS
(Multipoint Microwave Distribution System) and LMDS (Local Multipoint Distribution System) has
been extremely disappointing. In the United States, AT&T announced that it would not be deploying its fixed wireless service any further.
The 802.11 family of wireless LAN specifications includes the following:
802.11: applies to wireless LANs and provides 1 or 2 Mbps transmission in the 2.4 GHz
band using either frequency hopping spread spectrum (FHSS) or direct sequence spread
spectrum (DSSS).
802.11a: an extension to 802.11 that applies to wireless LANs and provides up to 54 Mbps in
the 5 GHz band. 802.11a uses an orthogonal frequency division multiplexing encoding scheme
rather than FHSS or DSSS.
802.11b (also referred to as 802.11 High Rate or Wi-Fi): an extension to 802.11 that applies
to wireless LANs and provides 11 Mbps transmission (with a fallback to 5.5, 2, and 1 Mbps) in
the 2.4 GHz band. 802.11b uses only DSSS. 802.11b was a 1999 ratification of the original
802.11 standard, allowing wireless functionality comparable to Ethernet.
802.11g: applies to wireless LANs and provides 20+ Mbps in the 2.4 GHz band.
The most widely deployed standard is 802.11b, or Wi-Fi, which allows transmission of wireless data
at speeds as fast as 11 Mbps and has a range of 300 to 800 feet. Boosters such as sophisticated
antennas can be added to improve the range of the wireless network.
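To get a rough sense of what 11 Mbps means in practice, the Python sketch below estimates the transfer time for a file; the 100 MB file size is an assumption for illustration, and real-world 802.11b throughput is lower than the nominal rate.

# Minimal sketch: nominal transfer time over 802.11b.
file_megabytes = 100                 # assumed file size
link_mbps = 11                       # nominal 802.11b rate
file_megabits = file_megabytes * 8
seconds = file_megabits / link_mbps
print(f"~{seconds:.0f} seconds to transfer {file_megabytes} MB at {link_mbps} Mbps")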
The cost of deploying a wireless LAN has come down substantially. There are now hundreds of
thousands of hot spots, or Wi-Fi-enabled locations, around the world in airports, cafés, and other
public places that provide unprecedented roaming capabilities. The benefits of 802.11b and 802.11a
are not limited to mobile users or home applications; these will be some of the key technologies that
reshape manufacturing and production operations in much the same way as the electric motor led
the transformation of the sector in the early part of the 20th century. They will play an extremely
important role in seamlessly deploying wireless networks at manufacturing plants, factory floors, and
warehouses, thus providing a real-time view of operations, production, and inventory.
Sensor Technology
Sensor technologies in one form or another have been with us for a while. However, recent
advancements in certain types of technologies have brought their price and performance capabilities
within reach for pervasive deployment. Paul Saffo of the Institute for the Future goes so far as to say
that if the 1980s was the decade of the microprocessor and the 1990s the decade of the laser, then this
is the decade of the sensor.
A key advancement in sensors has been the ability to not only collect information about their
surroundings, but also to pass the information along to upstream systems for analysis and processing
through various integrated communication channels.
a) Fiber Optic Sensors
Optical fiber sensors have been around since the 1960s, when the Fotonic Probe made its debut. But
the technology and applications of optical fibers have progressed very rapidly in recent years. Optical
fiber, being a physical medium, is subjected to perturbation of one kind or another at all times. It
therefore experiences geometrical (size, shape) and optical (refractive index, mode conversion) changes.
Global Internet Infrastructure
(April 2014)
The Internet started in the late 1960s, when the United States Defense Department's Advanced
Research Projects Agency developed ARPANET technology to link up networks of computers and,
thus, make it possible to share the information they contained. During the 1980s, the National Science
Foundation invested about $10 million per year, which motivated others (universities, states, private
sector) to spend as much as $100 million per year to develop and support a dramatic expansion of the
original ARPANET. The National Science Foundation Network (NSFNET) that resulted gave U.S.
universities a nationwide, high-speed communications backbone for exchanging information. The
network initially served academic researchers, nonprofit organizations, and government agencies. To
meet a new demand for commercial Internet services, NSF decided in 1991 to allow for-profit traffic
over the network. During the next four years, NSFNET was privatized and became the global Internet
we know today.
The privatization of the Internet in the United States and the frenzy around the World Wide Web, e-commerce, and the like coincided with another key global event: the beginning of the deregulation of
the telecommunications companies. In the United States, this started with the Telecommunications
Act of 1996. Led by the World Trade Organization, deregulation around the world started shortly
thereafter, with 90 percent of global telecommunications deregulation to be completed by 2004
and 99 percent by 2010.
The privatization of the Internet and the deregulation of the telecom industry collided with a force
that led to some of the most spectacular infrastructure investments of the century. The capital markets
and the venture capitalists poured money heavily into Internet dot-com companies. The markets,
with equal vigor, funded companies that were building the networks, the data centers, and the
equipment that would serve the growing bandwidth demands of the Internet-based economy.
Unfortunately, the demand for bandwidth and related services never materialized as the Internet
economy ran into some old economic realities, such as the importance of free cash flow and profits.
The Internet boom lasted five years and collapsed in March of 2000. The telecommunication bubble,
which started in 1996, today leaves in its wake more than 200 bankruptcies and liquidations. But it
was exciting while it lasted.
Although the impact of the dramatic meltdown of the Internet and telecommunications segments
cannot be overstated, there is clearly a silver lining in the debacle. There are three developments that
occurred during the boom period that we think are extremely encouraging.
The first, and the most significant, change that has occurred over the last few years is a deep
appreciation for open and standards-based systems. Many of the companies that were adherents of
proprietary, closed systems were forced to take a close look at open systems, and by the end of the
1990s most were converted. There are dozens of standards bodies, such as the Optical
Internetworking Forum, the Metro Ethernet Forum, the 10 Gigabit Ethernet Alliance, the W3C, and the Internet
Engineering Task Force (IETF), that were formed in this bubble period and are still around today,
doing excellent work in delivering standards. The acceptance of open, standards-based systems
will have far-reaching benefits in the years to come.
The second key benefit is the substantial research, development, and implementation of security,
security standards, and encryption for the public Internet. Although there is still a lot of work to be
done, over the past few years the public Internet has become a powerful conduit for commerce and
business.
The third important development is the deployment of an impressive global networked infrastructure
that is now ready to be harvested by new technologies, such as Grid Computing. The Internet was
successfully transformed from a playground for academics to one that can withstand the scale and
demands of global commerce.
World Wide Web and Web Services
To understand how Web Services work, let us examine their application to the travel industry. The
global travel and tourism industry generates US $4.7 trillion in revenues annually and employs 207
million people directly and indirectly. It also drives 30 percent of all e-commerce transactions.
However, one of the dominant features of travel and tourism is supply side fragmentation, extreme
2. Explain the Productivity Paradox and Information Technology.
(Apr 10)
It is a little-known fact, one that goes against all that we, as technology acolytes, have held dear to
us, that the productivity growth rate in the United States attributed to technology (also known as
Total Factor Productivity, or TFP), as defined by the keen economists at the Federal Reserve Bank,
slowed down sharply, from 1.9 percent between 1948 and 1973 to only 0.2 percent from 1973 to
1997. This period completely overlaps with the introduction and the amazing proliferation of desktop
and other computing resources in U.S. companies and homes. This became known as the "computer
paradox" after MIT professor and Nobel laureate Robert Solow made the famous 1987 quip: "We see
the computer age everywhere except in the productivity statistics."
The statistics for the last few years are a bit more heartening. Productivity growth has averaged
considerably higher, at 2.7 percent per annum. Some say that this is due to the diffusion effect: that
we are just now reaping the benefits of our earlier information technology investments. However,
others argue that it is actually due to the concentration effect, and that the entire acceleration of
productivity growth can be traced directly to the computer manufacturing industry itself. This one
industry, while just a small piece of the overall economy, has produced a genuine productivity
miracle since 1995.
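To see how much these growth rates matter once compounded, the Python sketch below applies each quoted rate over a 25-year span; the horizon is an assumption chosen for illustration.

# Minimal sketch: compounding the quoted productivity growth rates.
years = 25
for label, rate in [("1948-1973 trend (1.9%)", 0.019),
                    ("1973-1997 trend (0.2%)", 0.002),
                    ("recent trend (2.7%)", 0.027)]:
    growth = (1 + rate) ** years
    print(f"{label}: output per worker grows {growth:.2f}x over {years} years")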
Return on Technology Investment
While economists and researchers continue to study the impact of technology on productivity and
corporate profits, corporations have drastically modified their thinking on technology spending. It is
no longer sufficient to base technology purchase decisions solely on the technological merits or
glamorous features of products. A recent study of Siebel customers revealed that 60 percent of them
did not believe that they received any significant return on their investment in various customer
relationship management (CRM) software programs that they purchased from Siebel. Standish
Information Technology Straightjacket
Information technology deployments today are marked with batch or manual processes. Many
systems within the same enterprise are disconnected and exist in silos. There is limited partner
connectivity and, as a whole, corporate IT infrastructure contains a tremendous number of low-value
human touch points. In many cases, the information loses its currency as it is manually
processed.
The reality today is that information technology is a constellation of system clusters. Each cluster
contains many applications, databases, servers, networking infrastructure, and its own management
domain. Not only does the variety of applications and hardware continue to grow, but so does the
scale and number of these deployments.
Consolidation
When undertaking a storage consolidation effort, it is also important to understand how disk space is
utilized.
The three-tier services model discussed here has been proposed by Sun Microsystems.
4. What does outsourcing mean? Explain.
(April 2014)
Outsourcing IT operations can occur in many different ways. One way is through an application
service provider model. In this case, the company only pays whenever it uses a particular application.
The application itself resides outside the premises of the company and is accessed through the
Internet or through some private network.
As applications have become more complex, many Independent Software Vendors (ISVs) are realizing
that the cost of maintaining and managing applications is deterring companies from purchasing them.
In some cases, only the very large companies can afford their software. Fluent, the world's largest
computational fluid dynamics software company, has started to offer its products via the Application
Service Provider (ASP) model. Its software had an initial cost of US $150,000 and a US $28,000
per-seat licensing cost, but it is now offered for US $15.00 per CPU-hour. Now, not
only do customers have no upfront cost, they have no hardware or maintenance costs either. We see
this model of outsourcing becoming increasingly popular. Salesforce.com, which offers its sales force
management software online, has successfully competed head-to-head with Oracle and PeopleSoft.
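A hedged Python sketch of the economics: using the quoted license and ASP prices, it estimates how many CPU-hours a customer would have to consume before the pay-per-use route costs as much as the traditional license. The seat count is an assumption for illustration.

# Minimal sketch: break-even between license pricing and pay-per-use pricing.
upfront = 150_000          # initial license cost (US$), as quoted above
per_seat = 28_000          # per-seat license cost (US$), as quoted above
seats = 5                  # assumed number of seats
asp_rate = 15.00           # ASP price per CPU-hour (US$), as quoted above

license_cost = upfront + per_seat * seats
breakeven_cpu_hours = license_cost / asp_rate
print(f"License route: ${license_cost:,}")
print(f"ASP route costs the same only after {breakeven_cpu_hours:,.0f} CPU-hours")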
One variant of the ASP model is the new utility computing model of outsourcing. We will discuss
this in more detail in the later chapters of this book. However, similar to the ASP model, which
reduces the application costs on a pay-per-use basis, utility computing allows companies to pay only
for the amount of computational, storage, or network capacity they use.
Some companies have been exploring outsourcing back-office operations to international IT companies in India,
China, Costa Rica, etc., to save money. IT costs can be reduced by up to 40 to 50 percent in many
cases.
Another method of outsourcing is the complete handing over of IT departments to systems integrators
such as IBM's Global Services division. JP Morgan Chase recently outsourced its entire IT department
to IBM.
Business Value of Grid Computing
(Apr 12)
Companies rely on information technology more today than they ever have in the past. Trillions of
dollars have been spent in the last decade by companies trying to optimize all aspects of their
operations, such as financial, audit, supply chain, back office, sales and marketing, engineering,
manufacturing, and product development. Yet enterprises today stand at a crossroads; most of the
investments made in technology have yet to achieve their financial objectives or provide the
expected boost in corporate productivity. Additionally, much of the deployed infrastructure remains
hopelessly underutilized, as the table below shows.
Inefficient Technology Infrastructure

IT Resource        Average Daytime Utilization
Windows Servers    <5%
UNIX Servers       15-20%
Desktops           <5%
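The Python sketch below turns these utilization figures into an estimate of idle capacity for a hypothetical estate; the machine counts are assumptions for illustration, not figures from the text.

# Minimal sketch: idle machine-equivalents implied by the utilization table above.
estate = {                         # resource: (count, average daytime utilization)
    "Windows servers": (500, 0.05),
    "UNIX servers": (200, 0.175),  # midpoint of the 15-20% range
    "Desktops": (2000, 0.05),
}

for resource, (count, util) in estate.items():
    idle_equivalent = count * (1 - util)
    print(f"{resource}: ~{idle_equivalent:.0f} machine-equivalents sitting idle")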
Grid Computing, a rapidly maturing technology, has been called the silver bullet that will address
many of the financial and operational inefficiencies of today's information technology infrastructure.
After developing strong roots in the global academic and research communities over the last decade,
Grid Computing has successfully entered the commercial world. It is accelerating product
development, reducing infrastructure and operational costs, leveraging existing technology
investments, and increasing corporate productivity. Today, Grid Computing offers the strongest
low-cost, high-throughput solution that allows companies to optimize and leverage existing IT
infrastructure and investments.
Grid Computing Business Value Analysis
Productivity Gains
Productivity gains serve as a crucial measure in any business analysis because they have a direct
impact on the corporate bottom line. Yet in many instances of technology deployments, calculating
productivity gains is more voodoo than science. However, in the case of Grid Computing, enterprises
have found it is relatively easy to determine the reduction in processing time due to the increased
computational capacity offered by the grid and the resulting emancipation of their employees' time.
Along with the Grid Computing value elements discussed earlier in the section, productivity gains
remain a strong driver for grid deployments.
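A minimal Python sketch of such a calculation, with all job counts, runtimes, and cost rates assumed purely for illustration:

# Minimal sketch: freed-up time and its rough value when grid capacity cuts run times.
runs_per_month = 40            # assumed number of analysis jobs per month
hours_per_run_before = 6.0     # assumed runtime on a single workstation
hours_per_run_after = 0.5      # assumed runtime on the grid
engineer_rate = 60.0           # assumed fully loaded hourly cost (US$)

hours_saved = runs_per_month * (hours_per_run_before - hours_per_run_after)
value = hours_saved * engineer_rate
print(f"~{hours_saved:.0f} waiting hours removed per month, "
      f"worth roughly ${value:,.0f} in engineer time")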
Lock-in
Like most software (and hardware) vendors, Grid Computing vendors would probably prefer it if
their software locked in a customer for a recurring or future revenue stream. However, in the case of
a Grid Computing deployment, the IT manager will not be making any significant investments in
durable complementary assets, which promote lock-in. Cluster and SMP solutions, on the other hand, will
require significant investment in hardware, software, and supporting infrastructure, and may
contribute to customer lock-in. Customers should pay keen attention to which vendors are supporting
the Grid Computing standards activities at the Global Grid Forum.
11 Marks
1. Explain in detail with neat diagram complete IT infrastructure.(9) (Nov 09)
(Ref.Pg.No.6,Qn.No.1)
2. What are the different layers in the grid taxonomy, explain each one of them in detail. (May 09)
(Apr 11)(Ref.Pg.No.23, Qn.No.8)
3. Explain the concept of risk analysis with respect to grid computing. (5) (May 09)
(Ref.Pg.No.22,Qn.No.7)
4. Evaluate key risk factors and analyze the vulnerabilities of grid computing deployments and grid
computing services in the marketplace. (Apr 11) (Ref.Pg.No.22,Qn.No.7) (Ref.Pg.No.23,Qn.No.8)
5. Define optical networking spectrum. Explain in detail the growth of optical networking
technology (Nov 09) (Apr 12) (Ref.Pg.No.6,Qn.No.1)
6. Explain with neat graph how the storage costs reduce with respect to advancement in storage
technology (Nov09) (Ref.Pg.No.7,Qn.No.1)
7. What does outsourcing mean? Explain.(6)(Nov 09) (Ref.Pg.No.18,Qn.No.4)
8. Explain productivity paradox and information technology(15)(Apr10)
(Ref.Pg.No.13,Qn.No.2)
9. Explain Wireless Technology (5)(Apr 12)(Ref.Pg.No.8,Qn.No.1)
10. Describe in detail about Grid computing business value analysis.(11)(Apr 12)
(Ref.Pg.No.19,Qn.No.6)
11. Explain in detail about global internet infrastructure (11)(April 2014) (Ref.Pg.No.11,Qn.No.1)
12. What do you mean by outsourcing? Explain return on technology investment. (11)(April 2014)
(Ref.Pg.No.18,Qn.No.4)
REFERENCES
1. Ahmar Abbas, Grid Computing: A Practical Guide to Technology and Applications, Charles
River Media, 2005.
2. Joshy Joseph and Craig Fellenstein, Grid Computing, Pearson Education, 2003.
3. Ian Foster and Carl Kesselman, The Grid 2: Blueprint for a New Computing Infrastructure,
Morgan Kaufmann, 2004.