
ARP Draft Proposal.

Saurabh Brajesh (GNOV10IT059) Siddhartta Srinivasan (GNOV10IT060)

Introduction
Cloud computing is location-independent computing, whereby shared servers provide resources, software, and data to computers and other devices on demand, much like the electricity grid. The term was probably influenced by the use of a cloud image to represent the Internet or some large networked environment. Cloud computing can be seen as a convergence of virtualization, SOA (service-oriented architecture) and utility computing; in essence it can be described as distributed computing, in which an application is built using multiple services from multiple locations. According to Ling Qian and Zhiguo Luo (2009), it describes a new consumption and delivery model for IT services based on the Internet, and it typically involves over-the-Internet provision of resources that are dynamically scalable and often virtualized. This frequently takes the form of web-based tools or applications that users can access through a web browser as if they were programs installed locally on their own computers.

As a concept, cloud computing dates back to the 1960s, when John McCarthy opined that "computation may someday be organized as a public utility." The term "cloud" itself has been borrowed from telephony: telecommunications companies, which until the 1990s primarily offered dedicated point-to-point data circuits, began offering Virtual Private Network (VPN) services. By switching traffic to balance utilization, they were able to use their network bandwidth more efficiently. The cloud symbol was used to demarcate the boundary between what was the responsibility of the service provider and what was the responsibility of the user. Cloud computing in the present context extends this boundary to cover servers as well as the network infrastructure. The first scholarly use of the term "cloud computing" was in a 1997 lecture by Ramnath Chellappa.

Why Security is Relevant for Cloud Computing:


Despite all the hype that has surrounded cloud computing for some time, enterprise customers are still reluctant to adopt it wholeheartedly, and the main reason is security (Gartner, 2008; IEEE Computer Society, 2009). Cloud computing is still very much a new frontier, with very little in the way of specific standards for security or data privacy. Industry verticals such as banking and capital markets, defense and other high-risk projects give far more weight to security and fail-safety than to marginal reductions in capital expenditure or scalability. Cloud computing is fraught with security risks, according to analyst firm Gartner. Smart customers are expected to ask tough questions and consider getting a security assessment from a neutral expert before giving the green light to a cloud vendor. Cloud computing has "unique attributes that require risk assessment in areas such as data integrity, recovery, and privacy", apart from an evaluation of legal issues in areas such as e-discovery, regulatory compliance, and auditing, which are universal in nature and implementable across the board.

Literature Review

yyyyyyyyyyyy

Independent variables:
1. Privileged user access - Sensitive data processed outside the enterprise brings with it an inherent level of risk, because outsourced services bypass the "physical, logical and personnel controls" of in-house operations.

2. Regulatory compliance - Customers are ultimately responsible for the security and integrity of their own data, even when it is held by a service provider. Traditional service providers are subject to external audits and security certifications. Cloud computing providers who refuse to undergo this scrutiny are "signaling that customers can only use them for the most trivial functions".

3. Data location- When you use the cloud, you probably won't know exactly where your data is hosted. In fact, you might not even know what country it will be stored in. Ask providers if they will commit to storing and processing data in specific jurisdictions.

4. Data segregation - Data in the cloud is typically in a shared environment, alongside data from other clients. Encryption is effective but is not a cure-all.

5. Recovery - Even if the customer does not know where the data is, the cloud provider should tell the customer what will happen to the data and service in case of a disaster.

6. Investigative support - Investigating inappropriate or illegal activity may be impossible in cloud computing, Gartner warns. "Cloud services are especially difficult to investigate, because logging and data for multiple customers may be co-located and may also be spread across an ever-changing set of hosts and data centers."

7. Long-term viability - It is rare for a company's cloud computing provider to go broke or get acquired (through merger and acquisition) and swallowed up by a larger company, but the customer must ensure that its data will remain available even after such an event.

Dependent variables:
Cloud computing security yyyyyyyyyyyyyyyyyyyyy

Research Problem
yyyyyyyyyyyyyyyyyyyyyy

Objectives of the Research
yyyyyyyyyyyyyyyyyyyyyyyy

Scope of the Study
yyyyyyyyyyyyyyyyyyyyyy

Research Methodology
The data sources for a research project rest on two basic pillars: primary research and secondary research. Primary research is the process by which we try to discover original data. Before doing primary research, a research plan has to be formulated that encapsulates gathering the data, analyzing the gathered data, and drawing conclusions from that analysis. There are two basic types of primary research:

Qualitative Data Collection: Qualitative data collection means collecting non-numerical data, or explanations based on the attributes of the source of data.

Quantitative Data Collection: Quantitative data collection is used to gather information dealing with numbers and anything that is measurable. Statistics, tables and graphs are often used to present the results of these methods.

Secondary Research: This is the process of referring to data gathered and recorded by someone else prior to, and for a purpose other than, the current project. It is generally historical and already assembled. Secondary research can draw on:

Internal sources: the data available within the organization.

External sources: the data available outside the organization, i.e. published data.

Research Methodology Adopted:


The research methodology which we will be adopting will be a mix of primary and secondary research. For secondary research we will be using data published by A) consulting agencies such as Gartner, B) research publications from journals such as IEEE, Computer Weekly etc., and C) trends published by companies such as IBM, HP, Amazon etc.

For primary research we will collect quantitative data. The three most common quantitative methods are face-to-face interviews, telephonic interviews and online interviews. In our research we will adopt a mix of face-to-face and online interviews, because our target respondents are people who might be based here in Singapore or in other geographical locations. For the purpose of our research we will conduct a quantitative research methodology to find out the major factors affecting the security of cloud computing.

This research methodology will help us gather information about the leading factors and their respective impact on cloud security.

Research Framework

(Framework diagram: the independent variables listed above feed into the dependent variable, cloud computing security.)

Sampling Method Adopted:


We filtered the respondents based on certain criteria. As the research is targeted at finding the potential factors affecting the security of the cloud, we targeted people who are already using applications based on cloud services, and also included individuals who are working with organizations providing cloud services. The idea behind the filter questions is to get responses from the intended respondents.

Sampling Technique:
The sample was selected using a non-probabilistic convenience sampling methodology, in which the target sample is selected based on relative ease of access. We restricted the sample size to 70 for people who are already using applications based on cloud services, and to 30 for individuals who are working with organizations providing cloud services.
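The screening-and-quota logic described above can be sketched as follows. The respondent records and field names are hypothetical, but the quota caps (70 cloud users, 30 provider staff) follow the proposal.

```python
# Quota caps for the two respondent groups, as set out in the proposal.
QUOTAS = {"cloud_user": 70, "provider_staff": 30}

def classify(respondent):
    """Map filter-question answers to a quota group, or None if screened out."""
    if respondent.get("works_for_cloud_provider"):
        return "provider_staff"
    if respondent.get("uses_cloud_apps"):
        return "cloud_user"
    return None  # fails both filter questions

def build_sample(respondents):
    """Admit respondents in arrival order (convenience sampling) until each quota fills."""
    counts = {group: 0 for group in QUOTAS}
    sample = []
    for r in respondents:
        group = classify(r)
        if group is not None and counts[group] < QUOTAS[group]:
            counts[group] += 1
            sample.append(r)
    return sample, counts

# Illustrative pool: 90 cloud users, 40 provider staff, 10 screened-out respondents.
pool = ([{"uses_cloud_apps": True}] * 90
        + [{"works_for_cloud_provider": True}] * 40
        + [{}] * 10)
sample, counts = build_sample(pool)
print(counts)   # quotas cap the sample at 70 users and 30 provider staff
```

The filter questions map directly onto `classify`, and the quota check enforces the 70/30 split regardless of how many eligible respondents arrive.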

Use of Quantitative Models


The quantitative model applicable to this research will be one that finds the correlation between the category to which data belongs and the security level that companies and people attach to it.

The null hypothesis is that there is no correlation between the category to which data belongs and the security perception that companies and people attach to it.

Ho: There is no correlation between the category to which data belongs and the security perception companies and people attach to it.

Ha: There is a correlation between the category to which data belongs and the security perception companies and people attach to it.

Given the objective, quantitative models can be divided into dependence and interdependence models. Dependence models have one or more dependent variables and a set of independent variables. Interdependence models are mostly multivariate.
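One way to test the hypothesis above is a chi-square test of independence on the survey responses. The sketch below uses illustrative counts only, not actual survey results; the data categories are hypothetical, and 5.991 is the chi-square critical value at alpha = 0.05 with 2 degrees of freedom.

```python
# Hypothetical cross-tabulation: rows = data category, columns = perceived
# security level. All counts are illustrative placeholders.
observed = {
    "Financial":   {"High": 30, "Low": 5},
    "Personal":    {"High": 18, "Low": 12},
    "Operational": {"High": 10, "Low": 25},
}

rows = list(observed)
cols = ["High", "Low"]

row_totals = {r: sum(observed[r][c] for c in cols) for r in rows}
col_totals = {c: sum(observed[r][c] for r in rows) for c in cols}
grand = sum(row_totals.values())

# Chi-square statistic: sum over cells of (O - E)^2 / E, where the expected
# count E = row_total * col_total / grand_total under Ho (no association).
chi_sq = 0.0
for r in rows:
    for c in cols:
        expected = row_totals[r] * col_totals[c] / grand
        chi_sq += (observed[r][c] - expected) ** 2 / expected

df = (len(rows) - 1) * (len(cols) - 1)  # = 2 for a 3x2 table
critical_005 = 5.991  # chi-square critical value, alpha = 0.05, df = 2

print(f"chi-square = {chi_sq:.2f}, df = {df}")
if chi_sq > critical_005:
    print("Reject Ho: data category and security perception appear associated")
else:
    print("Fail to reject Ho: no evidence of association")
```

With real survey data, the same computation decides between Ho and Ha: a statistic above the critical value rejects the null hypothesis of no correlation.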

Discussion, Analysis and Findings


Based on our research, the quantitative models used and the analysis done, we would be able to find whether there exists any correlation between the category to which data belongs and the security perception companies and people attach to it. We would therefore be able to conclude whether the observations made in the literature survey were right or not. The analysis would also help to answer whether taking companies' and people's perceptions regarding their respective data categories into consideration would help vendors providing cloud computing to achieve better industry penetration. These answers are especially useful considering the recent hurdles faced by vendors when it came to implementing cloud services in the banking and finance industry. At the same time, this could provide them insights into markets which can be more receptive towards cloud computing services.

Limitations of Research
yyyyyyyyyyyyyyyy

Summary, Recommendations and Conclusion
