
Name: Pankaj Bhadana

Roll No.: WMP5041

Assignment:
Write up on some key emerging technologies being used in your company.
My company, Affiliated Computer Services (ACS), operates in the business process
outsourcing industry, with a strong focus on technology services as well. To stay ahead in a
highly competitive market, we have adopted several emerging technologies in the IT space
to strengthen our products in the market. Below is a brief description of each of them:

Web 2.0:

The term "Web 2.0" is commonly associated with web applications which facilitate
interactive information sharing, interoperability, user-centered design and collaboration
on the World Wide Web. Examples of Web 2.0 include web-based communities, hosted
services, web applications, social-networking sites, video-sharing sites, wikis, blogs,
mashups and folksonomies. A Web 2.0 site allows its users to interact with other users or
to change website content, in contrast to non-interactive websites where users are limited
to the passive viewing of information that is provided to them.

The term is closely associated with Tim O'Reilly because of the O'Reilly Media Web 2.0
conference in 2004. Although the term suggests a new version of the World Wide Web, it
does not refer to an update to any technical specifications, but rather to cumulative
changes in the ways software developers and end-users use the Web. Whether Web 2.0 is
qualitatively different from prior web technologies has been challenged by World Wide
Web inventor Tim Berners-Lee who called the term a "piece of jargon".

History: From Web 1.0 to 2.0

The term "Web 2.0" was coined by Darcy DiNucci in 1999. In her article "Fragmented
Future," she wrote:

The Web we know now, which loads into a browser window in essentially static
screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are
beginning to appear, and we are just starting to see how that embryo might develop. The
Web will be understood not as screenfuls of text and graphics but as a transport
mechanism, the ether through which interactivity happens. It will appear on your
computer screen, on your TV set, your car dashboard...your cell phone...hand-held game
machines...maybe even your microwave oven.

Her use of the term deals mainly with Web design and aesthetics; she argues that the Web
is "fragmenting" due to the widespread use of portable Web-ready devices. Her article is
aimed at designers, reminding them to code for an ever-increasing variety of hardware.
As such, her use of the term hints at - but does not directly relate to - the current uses of
the term.

The term did not resurface until 2003, when commentators began to focus on the concepts
now associated with it; as Scott Dietzen puts it, "the Web becomes a universal,
standards-based integration platform."
In 2004, the term began its rise in popularity when O'Reilly Media and MediaLive hosted
the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly
outlined their definition of the "Web as Platform," where software applications are built
upon the Web as opposed to upon the desktop. The unique aspect of this migration, they
argued, is that "customers are building your business for you.” They argued that the
activities of users generating content (in the form of ideas, text, videos, or pictures) could
be "harnessed" to create value.

Characteristics

[Image: Flickr, a Web 2.0 website that allows users to upload and share photos]

Web 2.0 websites allow users to do more than just retrieve information. They can build on
the interactive facilities of "Web 1.0" to provide "Network as platform" computing,
allowing users to run software applications entirely through a browser. Users can own the data on a Web
2.0 site and exercise control over that data. These sites may have an "Architecture of
participation" that encourages users to add value to the application as they use it. This
stands in contrast to traditional websites, the sort that limited visitors to viewing and
whose content only the site's owner could modify. Web 2.0 sites often feature a rich,
user-friendly interface based on Ajax and similar client-side interactivity frameworks, or
full client-server application frameworks such as OpenLaszlo, Flex, and the ZK
framework.

The concept of Web-as-participation-platform captures many of these characteristics.


Bart Decrem, a founder and former CEO of Flock, calls Web 2.0 the "participatory Web"
and regards the Web-as-information-source as Web 1.0.

Because Web 2.0 sites depend on voluntary user contributions, they face a familiar
collective-action problem: the impossibility of excluding group members who do not
contribute to the provision of goods from sharing the profits gives rise to the possibility
that rational members will prefer to withhold their contribution of effort and free-ride on
the contributions of others. This requires what is sometimes called Radical Trust on the
part of the website's management.
According to Best, the characteristics of Web 2.0 are: rich user experience, user
participation, dynamic content, metadata, web standards and scalability. Further
characteristics, such as openness, freedom and collective intelligence by way of user
participation, can also be viewed as essential attributes of Web 2.0.

How it works

The client-side/web browser technologies typically used in Web 2.0 development are
Asynchronous JavaScript and XML (Ajax), Adobe Flash, and JavaScript/Ajax
frameworks such as Yahoo! UI Library, Dojo Toolkit, MooTools, and jQuery. Ajax
programming uses JavaScript to upload and download new data from the web server
without undergoing a full page reload, hence the term "asynchronous."

The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript
Object Notation), two widely used structured data formats. Since both of these
formats are natively understood by JavaScript, a programmer can easily use them to
transmit structured data in their web application. When this data is received via Ajax, the
JavaScript program then uses the Document Object Model (DOM) to dynamically update
the web page based on the new data, allowing for a rapid and interactive user experience.
In short, using these techniques, Web designers can make their pages function like
desktop applications. For example, Google Docs uses this technique to create a Web-
based word processor.
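To make that flow concrete, here is a minimal TypeScript sketch of the pattern: an asynchronous request fetches JSON from the server, and the DOM is then used to update one fragment of the page without a reload. The endpoint URL, the element id, and the field names are assumptions made purely for illustration.

    // Minimal Ajax sketch: fetch JSON asynchronously, then update the page via the DOM.
    function refreshSummary(): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/api/document-summary"); // hypothetical endpoint
      xhr.onload = () => {
        // Parse the JSON payload returned by the server.
        const data = JSON.parse(xhr.responseText) as { title: string; wordCount: number };
        const target = document.getElementById("summary"); // hypothetical element id
        if (target) {
          // Only this fragment of the page changes; no full reload occurs.
          target.textContent = `${data.title} (${data.wordCount} words)`;
        }
      };
      xhr.send(); // asynchronous: the rest of the page stays interactive meanwhile
    }

Applications such as Google Docs apply the same basic technique at much larger scale.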

Adobe Flash is another technology often used in Web 2.0 applications. As a widely
available plug-in independent of W3C (World Wide Web Consortium, the governing
body of web standards and protocols) standards, Flash is capable of doing many things
which are not currently possible in HTML, the language used to construct web pages. Of
Flash's many capabilities, the most commonly used in Web 2.0 is its ability to play audio
and video files. This has allowed for the creation of Web 2.0 sites such as YouTube,
where video media is seamlessly integrated with standard HTML.

In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very
popular means of creating Web 2.0 sites. At their core, these frameworks do not use
technology any different from JavaScript, Ajax, and the DOM. What frameworks do is
smooth over inconsistencies between web browsers and extend the functionality available
to developers. Many of them also come with customizable, prefabricated 'widgets' that
accomplish such common tasks as picking a date from a calendar, displaying a data chart,
or making a tabbed panel.
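As a rough illustration of what such frameworks add, the following TypeScript sketch assumes jQuery and jQuery UI are already loaded on the page; the selectors and the /api/sales URL are invented for the example.

    declare const $: any; // stand-in for the jQuery global loaded via a script tag

    // One cross-browser call replaces hand-written XMLHttpRequest plumbing.
    $.getJSON("/api/sales", (rows: Array<{ month: string; total: number }>) => {
      $("#report").text(`Loaded ${rows.length} rows`);
    });

    // Prefabricated widgets: a calendar date picker and a tabbed panel,
    // each configured with a single call.
    $("#start-date").datepicker();
    $("#report-tabs").tabs();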

On the server side, Web 2.0 uses many of the same technologies as Web 1.0. Languages
such as PHP, Ruby, Perl, Python, and ASP are used by developers to dynamically output
data using information from files and databases. What has begun to change in Web 2.0 is
the way this data is formatted. In the early days of the internet, there was little need for
different websites to communicate with each other and share data. In the new
"participatory web," however, sharing data between sites has become an essential
capability. To share its data with other sites, a web site must be able to generate output in
machine-readable formats such as XML, RSS, and JSON. When a site's data is available
in one of these formats, another website can use it to integrate a portion of that site's
functionality into itself, linking the two together. When this design pattern is
implemented, it ultimately leads to data that is both easier to find and more thoroughly
categorized, a hallmark of the philosophy behind the Web 2.0 movement.
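A minimal sketch of the server side of this pattern, in TypeScript on Node.js, is shown below: the site publishes part of its data in a machine-readable JSON format at a fixed URL so that another site can fetch and integrate it. The path, the sample records, and the port are assumptions for illustration only.

    import * as http from "http";

    // Sample records standing in for data that would normally come from a database.
    const articles = [
      { id: 1, title: "Web 2.0 overview", tags: ["web", "collaboration"] },
      { id: 2, title: "Cloud computing primer", tags: ["cloud", "utility"] },
    ];

    http
      .createServer((req, res) => {
        if (req.url === "/feed.json") {
          // Machine-readable output that a partner site can consume and remix.
          res.writeHead(200, { "Content-Type": "application/json" });
          res.end(JSON.stringify(articles));
        } else {
          res.writeHead(404);
          res.end();
        }
      })
      .listen(8080);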

Usage

ACS has been using all these technologies extensively in the form of portlets integrated
into its flagship product, SyncHRO. SyncHRO uses Ajax and advanced on-demand
document-processing techniques to let users customize their experience based on
individual preferences. Advanced analytics feed data on user experience and behavior
back to the organization, which adds considerable value to the ongoing research and
development efforts for the product.

Cloud Computing

Cloud computing is the provision of dynamically scalable and often virtualized resources
as a service over the Internet on a utility basis. Users need not have knowledge of,
expertise in, or control over the technology infrastructure in the "cloud" that supports
them. Cloud computing services often provide common business applications online that
are accessed from a web browser, while the software and data are stored on the servers.

The term cloud is used as a metaphor for the Internet, based on how the Internet is
depicted in computer network diagrams and is an abstraction of the underlying
infrastructure it conceals.

A technical definition is "a computing capability that provides an abstraction between the
computing resource and its underlying technical architecture (e.g., servers, storage,
networks), enabling convenient, on-demand network access to a shared pool of
configurable computing resources that can be rapidly provisioned and released with
minimal management effort or service provider interaction." This definition states that
clouds have five essential characteristics: on-demand self-service, broad network access,
resource pooling, rapid elasticity, and measured service.

Overview

Cloud computing can be confused with:

1. Grid computing—"a form of distributed computing whereby a 'super and virtual
computer' is composed of a cluster of networked, loosely coupled computers
acting in concert to perform very large tasks";
2. Utility computing—the "packaging of computing resources, such as computation
and storage, as a metered service similar to a traditional public utility, such as
electricity"; and
3. Autonomic computing—"computer systems capable of self-management".

Indeed, many cloud computing deployments as of 2009 depend on grids, have
autonomic characteristics, and bill like utilities—but cloud computing tends to expand
what is provided by grids and utilities. Some successful cloud architectures have little or
no centralized infrastructure or billing systems whatsoever, including peer-to-peer
networks such as BitTorrent and Skype, and volunteer computing such as SETI@home.

Furthermore, many analysts are keen to stress the evolutionary, incremental pathway
between grid technology and cloud computing, tracing roots back to application service
providers (ASPs) in the 1990s and the parallels to SaaS, often referred to as applications
on the cloud. Some believe the true difference between these terms is marketing and
branding; that the technology evolution was incremental and the marketing evolution
discrete.

Characteristics

Cloud computing customers do not generally own the physical infrastructure serving as
host to the software platform in question. Instead, they avoid capital expenditure by
renting usage from a third-party provider. They consume resources as a service and pay
only for resources that they use. Many cloud-computing offerings employ the utility
computing model, which is analogous to how traditional utility services (such as
electricity) are consumed, while others bill on a subscription basis. Sharing "perishable
and intangible" computing power among multiple tenants can improve utilization rates,
as servers are not unnecessarily left idle (which can reduce costs significantly while
increasing the speed of application development). A side effect of this approach is that
overall computer usage rises dramatically, as customers do not have to engineer for peak
load limits. Additionally, "increased high-speed bandwidth" makes it possible to receive
the same response times from centralized infrastructure at other sites.

Economics
[Diagram: economics of cloud computing versus traditional IT, including capital expenditure (CapEx) and operational expenditure (OpEx)]

Cloud computing users can avoid capital expenditure (CapEx) on hardware, software, and services when they pay a
provider only for what they use. Consumption is usually billed on a utility (e.g. resources
consumed, like electricity) or subscription (e.g. time based, like a newspaper) basis with
little or no upfront cost. A few cloud providers are now beginning to offer the service for
a flat monthly fee as opposed to on a utility billing basis. Other benefits of this time
sharing style approach are low barriers to entry, shared infrastructure and costs, low
management overhead, and immediate access to a broad range of applications. Users can
generally terminate the contract at any time (thereby avoiding return on investment risk
and uncertainty) and the services are often covered by service level agreements (SLAs)
with financial penalties.
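A toy calculation, with entirely hypothetical prices chosen only to illustrate the arithmetic, shows how the utility and flat-fee models mentioned above compare:

    // Hypothetical rates chosen only to illustrate the arithmetic.
    const hourlyRate = 0.10;       // utility model: price per server-hour (assumed)
    const flatMonthlyFee = 500;    // flat-fee model: price per month (assumed)
    const hoursUsedPerMonth = 120; // e.g. an occasional batch workload

    const utilityBill = hourlyRate * hoursUsedPerMonth;  // $12.00 for this month
    const breakEvenHours = flatMonthlyFee / hourlyRate;  // 5,000 server-hours

    console.log(`Utility bill: $${utilityBill.toFixed(2)}`);
    console.log(`Flat fee only pays off above ${breakEvenHours} server-hours per month`);

Under these assumed numbers, an occasional workload is far cheaper on utility billing, which is exactly the low-commitment, low-upfront-cost property described above.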

According to Nicholas Carr, the strategic importance of information technology is
diminishing as it becomes standardized and less expensive. He argues that the cloud
computing paradigm shift is similar to the displacement of electricity generators by
electricity grids early in the 20th century.

Although companies might be able to save on upfront capital expenditures, they might
not save much and might actually pay more for operating expenses. In situations where
the capital expense would be relatively small, or where the organization has more
flexibility in their capital budget than their operating budget, the cloud model might not
make great fiscal sense. Other factors impacting the scale of any potential cost savings
include the efficiency of a company’s data center as compared to the cloud vendor’s, the
company's existing operating costs, the level of adoption of cloud computing, and the
type of functionality being hosted in the cloud.

Architecture

The majority of cloud computing infrastructure, as of 2009, consists of reliable services
delivered through data centers and built on servers with different levels of virtualization
technologies. The services are accessible anywhere that provides access to networking
infrastructure. Clouds often appear as single points of access for all consumers'
computing needs. Commercial offerings are generally expected to meet quality of service
(QoS) requirements of customers and typically offer SLAs. Open standards are critical to
the growth of cloud computing, and open source software has provided the foundation for
many cloud computing implementations.

Key characteristics

• Agility improves with users able to rapidly and inexpensively re-provision
technological infrastructure resources. The cost of overall computing is unchanged,
however, and the providers will merely absorb up-front costs and spread costs over a
longer period.
• Cost is claimed to be greatly reduced and capital expenditure is converted to
operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is
typically provided by a third party and does not need to be purchased for one-time or
infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained
with usage-based options, and fewer in-house IT skills are required for implementation.
Some would argue that, given the low cost of computing resources, the IT cost burden
merely shifts from in-house to outsourced providers. Furthermore, any cost-reduction
benefit must be weighed against a corresponding loss of control and against access and
security risks.
• Device and location independence enable users to access systems using a web
browser regardless of their location or what device they are using (e.g., PC, mobile).
As infrastructure is off-site (typically provided by a third party) and accessed via the
Internet, users can connect from anywhere.
• Multi-tenancy enables sharing of resources and costs across a large pool of users,
thus allowing for:
o Centralization of infrastructure in locations with lower costs (such as real estate,
electricity, etc.)
o Peak-load capacity increases (users need not engineer for the highest possible
load levels)
o Utilization and efficiency improvements for systems that are often only 10–20%
utilized.
• Reliability improves through the use of multiple redundant sites, which makes
cloud computing suitable for business continuity and disaster recovery.
Nonetheless, many major cloud computing services have suffered outages, and IT
and business managers can at times do little when they are affected.
• Scalability comes from dynamic ("on-demand") provisioning of resources on a fine-
grained, self-service basis in near real-time, without users having to engineer for
peak loads (see the sketch after this list). Performance is monitored, and consistent,
loosely coupled architectures are constructed using web services as the system interface.
• Security typically improves due to centralization of data, increased security-
focused resources, etc., but concerns can persist about loss of control over certain
sensitive data and the lack of security for stored kernels. Security is often as good
as or better than under traditional systems, in part because providers are able to
devote resources to solving security issues that many customers cannot afford.
Providers typically log accesses, but accessing the audit logs themselves can be
difficult or impossible.
• Sustainability comes about through improved resource utilization, more efficient
systems, and carbon neutrality. Nonetheless, computers and associated
infrastructure are major consumers of energy.
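The scalability point above can be made concrete with a small TypeScript sketch of the decision logic behind on-demand provisioning; the monitoring figures and the provisioning call are placeholders, not any particular provider's API.

    interface Pool {
      instances: number;
      utilization: number; // average utilization reported by monitoring, 0.0 - 1.0
    }

    // Size the fleet so that average utilization returns to the target level.
    function desiredInstances(pool: Pool, targetUtilization = 0.6): number {
      const needed = Math.ceil((pool.instances * pool.utilization) / targetUtilization);
      return Math.max(1, needed);
    }

    function reconcile(pool: Pool): void {
      const target = desiredInstances(pool);
      if (target > pool.instances) {
        console.log(`scale out: request ${target - pool.instances} more instance(s)`); // placeholder for a provider call
      } else if (target < pool.instances) {
        console.log(`scale in: release ${pool.instances - target} instance(s)`);
      }
    }

    reconcile({ instances: 10, utilization: 0.9 }); // a load spike triggers a scale-out to 15
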
Layers

Clients
A cloud client consists of computer hardware and/or computer software which relies on
cloud computing for application delivery, or which is specifically designed for delivery
of cloud services and which, in either case, is essentially useless without it. For example:

• Mobile (Android, iPhone, Windows Mobile)
• Thin client (CherryPal, Zonbu, gOS-based systems)
• Thick client / Web browser (Mozilla Firefox, Google Chrome, WebKit)

Application
A cloud application leverages cloud computing in software architecture, often eliminating
the need to install and run the application on the customer's own computer, thus
alleviating the burden of software maintenance, ongoing operation, and support. For
example:

• Peer-to-peer / volunteer computing (BOINC, Skype)
• Web applications (Facebook, Twitter, YouTube)
• Security as a service (MessageLabs, Purewire, ScanSafe, Zscaler)
• Software as a service (Google Apps, Salesforce, Cool Life Systems, SpringCM)
• Software plus services (Microsoft Online Services)
• Storage [Distributed]
o Content distribution (BitTorrent, Amazon CloudFront)
o Synchronisation (Live Mesh)
Platform
A cloud platform (PaaS) delivers a computing platform and/or solution stack as a service,
generally consuming cloud infrastructure and supporting cloud applications. It facilitates
deployment of applications without the cost and complexity of buying and managing the
underlying hardware and software layers. For example:

• Services
o Identity (OAuth, OpenID)
o Payments (Amazon Flexible Payments Service, Google Checkout, PayPal)
o Search (Alexa, Google Custom Search, Yahoo! BOSS)
o Real-world (Amazon Mechanical Turk)
• Solution stacks
o Java (Google App Engine)
o PHP (Rackspace Cloud Sites)
o Python Django (Google App Engine)
o Ruby on Rails (Heroku)
o .NET (Azure Services Platform, Rackspace Cloud Sites)
o Proprietary (Force.com, WorkXpress, Wolf Frameworks)
• Storage [Structured]
o Databases (Amazon SimpleDB, BigTable)
o File storage (Amazon S3, Nirvanix, Rackspace Cloud Files)
o Queues (Amazon SQS)

Infrastructure
Cloud infrastructure (IaaS) is the delivery of computer infrastructure, typically a platform
virtualization environment, as a service. For example:

• Compute (Amazon CloudWatch, RightScale, Kaavo)
o Physical machines (Softlayer)
o Virtual machines (Amazon EC2, GoGrid, Rackspace Cloud Servers,
Terremark Enterprise Cloud)
o OS-level virtualisation
• Network (Amazon VPC)
• Storage [Raw] (Amazon EBS)
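
As an illustration of consuming infrastructure as a service, the TypeScript sketch below requests a virtual machine from a hypothetical provider's REST API; the endpoint, request fields, and token are invented for the example and do not correspond to any real provider's interface.

    // Purely illustrative IaaS call; real providers such as Amazon EC2 expose their own APIs and SDKs.
    async function launchServer(): Promise<string> {
      const response = await fetch("https://iaas.example.com/v1/instances", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: "Bearer <api-token>", // placeholder credential
        },
        body: JSON.stringify({
          image: "ubuntu-9.04", // assumed machine image name
          size: "small",        // assumed instance size
          region: "us-east",    // assumed region
        }),
      });
      const instance = await response.json();
      return instance.id; // identifier assigned by the provider to the new virtual machine
    }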

Servers
The server layer consists of computer hardware and/or computer software products which
are specifically and solely designed for the delivery of cloud services. For example:

• Fabric computing (Cisco UCS)

Architecture
Cloud architecture, the systems architecture of the software systems involved in the
delivery of cloud computing, comprises hardware and software designed by a cloud
architect who typically works for a cloud integrator. It typically involves multiple cloud
components communicating with each other over application programming interfaces,
usually web services.

This closely resembles the Unix philosophy of having multiple programs each doing one
thing well and working together over universal interfaces. Complexity is controlled and
the resulting systems are more manageable than their monolithic counterparts.

Cloud architecture extends to the client, where web browsers and/or software applications
access cloud applications.

Cloud storage architecture is loosely coupled, where metadata operations are centralized
enabling the data nodes to scale into the hundreds, each independently delivering data to
applications or users.
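
A small TypeScript sketch of that loosely coupled design: a centralized metadata map records which data node holds each object, while the data nodes themselves serve content independently. The names and URLs are illustrative only.

    // Centralized metadata: object key -> URL of the data node holding the bytes.
    const metadata = new Map<string, string>();

    function putObject(key: string, nodeUrl: string): void {
      // Only this small record is centralized; the object itself lives on the data node.
      metadata.set(key, nodeUrl);
    }

    function locateObject(key: string): string | undefined {
      // Reads go straight to whichever node holds the object, so the data tier
      // can scale to hundreds of nodes delivering data independently.
      return metadata.get(key);
    }

    putObject("reports/2009-q3.pdf", "http://datanode-42.example.com");
    console.log(locateObject("reports/2009-q3.pdf"));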

Types
1. Public cloud
Public cloud or external cloud describes cloud computing in the traditional mainstream
sense, whereby resources are dynamically provisioned on a fine-grained, self-service
basis over the Internet, via web applications/web services, from an off-site third-party
provider who shares resources and bills on a fine-grained utility computing basis.

2. Hybrid cloud
A hybrid cloud environment consisting of multiple internal and/or external providers
"will be typical for most enterprises".

3. Private cloud
Private cloud and internal cloud are neologisms that some vendors have recently used to
describe offerings that emulate cloud computing on private networks. These (typically
virtualisation automation) products claim to "deliver some benefits of cloud computing
without the pitfalls", capitalizing on data security, corporate governance, and reliability
concerns. They have been criticized on the basis that users "still have to buy, build, and
manage them" and as such do not benefit from lower up-front capital costs and less
hands-on management, essentially "[lacking] the economic model that makes cloud
computing such an intriguing concept".

While an analyst predicted in 2008 that private cloud networks would be the future of
corporate IT, there is some uncertainty whether they are a reality even within the same
firm. Analysts also claim that within five years a "huge percentage" of small and medium
enterprises will get most of their computing resources from external cloud computing
providers as they "will not have economies of scale to make it worth staying in the IT
business" or be able to afford private clouds. Analysts have reported on Platform's view
that private clouds are a stepping stone to external clouds, particularly for the financial
services, and that future datacenters will look like internal clouds.

The term has also been used in the logical rather than the physical sense, for example in
reference to platform-as-a-service offerings, though such offerings, including Microsoft's
Azure Services Platform, are not available for on-premises deployment.
