
June 2013

Business Continuity
A compendium of our best recent coverage


Inside

Don't Confuse Big Data With Storage
Commentary: Moving Legacy Apps To The Cloud
VMware Users Gain Bluelock Recovery Option
How University Of Oklahoma Protects Records From Disaster
Don't Trade High Availability For Flash Performance
Rackspace Launches Global OpenStack Expansion
EMC Merges High Availability, Disaster Recovery Into One

Don't Confuse Big Data With Storage


A large part of big data management is knowing what data to analyze, what to back up and what to dump, says a disaster recovery expert.
[ By Jeff Bertolucci ]


How much big data should your organization save? And how much should you back up? Big data plays an important role in today's business world, but it's not up there with mission-critical applications that are essential to an organization's day-to-day operations. That's according to Michael de la Torre, VP of product management for SunGard Availability Services, an IT services company that provides, among other things, disaster recovery services.


"Always remember that not all data has equal value," de la Torre advises. And in most cases, big data is just another business application. "For most companies it's more of a business-critical app," de la Torre said in a phone interview. "It really doesn't need to be up all of the time, but you're going to lose business opportunities if it isn't up and running."

This isn't to say, however, that big data doesn't matter. On the contrary, de la Torre sees big data as the next generation of business analytics. "You have all this nonstructured or minimally structured data. There's a lot of it. And it's coming from different sources that you would typically think are outside of the business warehouse," he says.

As such, you need new tools and techniques to get value out of that data. And part of that decision-making process is figuring out what information needs to be saved, and what is expendable. "Don't just save everything to save everything. That makes very little sense," says de la Torre. For instance, social media streams, a classic big data example of high volume, velocity and variety, don't necessarily need to be hoarded for eternity.

But other forms of big data may provide great value many years down the line. "When you think about social [media], so much of the value of that data is that it's very time-dependent. It's very volatile, and it loses its value almost immediately," de la Torre says. "Other data, such as weather, where you're doing long-term correlations, will potentially remain viable for years."

OK, so all big data isn't created equal. But what's worth saving? One solution is to store summary data from a particular time period or event, along with a small amount of anecdotal information. That's better than saving a million logs, de la Torre advises. Do you need the summary, or do you need all the detail? Obviously, the summary data method is more cost-effective and easier to manage than the save-everything approach. It also works with sensor-generated information, a big data category that includes data from field equipment in remote locations.

Manufacturing companies figured this out a long time ago. "You don't store the data from spinning equipment," de la Torre says. "You don't want to pay for the bandwidth costs. You don't need all that data." The solution: Put an expert system in place. "And ultimately that's what big data is: an expert system that makes meaning out of data," he adds.

Like many data professionals, de la Torre believes the term big data is mostly marketing hype. "It's advanced business analytics using new sources of data," he says. "And somebody said, 'Hey, let's call it big data.'" While it may not be a mission-critical app, big data can provide a lot of value to organizations. For instance, it can help companies find interesting ways to use their proprietary data, and to create business opportunities from it, says de la Torre.
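The summary-plus-anecdote approach de la Torre describes can be illustrated with a minimal sketch. The field names and retention choices below are illustrative assumptions, not his prescription.

```python
from collections import defaultdict
from statistics import mean

def summarize_sensor_logs(events, keep_samples=3):
    """Collapse raw sensor events into per-hour summaries plus a few
    anecdotal samples, instead of retaining every reading.

    `events` is assumed to be an iterable of dicts like
    {"hour": "2013-06-01T14", "sensor": "pump-7", "value": 73.2}.
    """
    buckets = defaultdict(list)
    for e in events:
        buckets[(e["hour"], e["sensor"])].append(e["value"])

    summaries = []
    for (hour, sensor), values in buckets.items():
        summaries.append({
            "hour": hour,
            "sensor": sensor,
            "count": len(values),
            "min": min(values),
            "max": max(values),
            "mean": round(mean(values), 2),
            # keep a handful of raw readings as anecdotal evidence
            "samples": values[:keep_samples],
        })
    return summaries  # archive these; age out or discard the raw events
```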


COMMENTARY


Moving Legacy Apps To The Cloud


Can you run that old ERP system on AWS? Yes, and it just may save you money.
[ By Jonathan Feldman ]


The idea that you should wait until you're ready to update a legacy application before moving it to the cloud is frankly crazy. With portfolios of hundreds or thousands of apps, CIOs must take advantage of the economies of scale cloud computing delivers. Trust us, the startups gunning for your customers aren't being held back by a circa-1992 CRM system.

Don't get us wrong. The question isn't, "Can all legacy applications live in the pure and fluffy cloud?" Clearly, the answer is no. But can many legacy applications leverage at least some of the technology advances inherent in infrastructure-as-a-service? The answer to that is yes, certainly.

ABOUT THE AUTHOR
Jonathan Feldman is CIO for a rapidly growing North Carolina city recognized nationally and internationally for improving services to citizens and reducing expense through IT innovation. His previous work in the private sector includes providing infrastructure and security services to government, military, law enforcement, and financial services organizations.


Note that "Can this legacy application shift to SaaS?" is a different and less interesting question that hinges on whether a given software provider can revamp its business model and provide a specific application as a service. This ain't Salesforce; it's "Can SAS provide SaaS?"

One interesting finding of InformationWeek's 2013 State of Cloud Computing Survey, which asked 446 business technology pros about their cloud use, is an eight-point drop in the percentage of cloud adopters using SaaS. It turns out that many independent software vendors aren't equipped to meet the demands of today's enterprises. Their infrastructure costs tend to be high and their response times poor because they're not running IaaS; they're running multitenant, big-box, hosted infrastructure in one or two data centers. Because it's "as a service," they call it cloud, but it's really not. It's hosting. If the price point and features work for you, great. But our experience is that ISVs turned hosting providers tend to charge more and deliver less. True IaaS providers deliver savings, agility and scaling benefits.

So why do IT shops run so many applications on their own servers?

"Part of the reason you don't see apps aggressively moving to the cloud is because of the refresh cycle," says Josh Crowe, senior VP of product development at Internap, a content delivery network and cloud hosting provider. In short, IT teams have other stuff to do, and they're content to wait. But CIOs need to push back, given the benefits to be had. Here are the top five objections to moving apps to the cloud and our suggested responses.

1. Our legacy applications are too complex to move to IaaS.


Erik Sebesta, CTO of Cloud Technology Partners, a professional services firm, says he sees plenty of companies taking legacy spaghetti and making it cloud spaghetti. We would argue that most complex applications started like those 100-foot, perfectly wrapped cords you bought six months ago at Home Depot. Over time, they all end up as spaghetti, thanks to moves, adds and changes, so set your expectations accordingly. By transitioning away from in-house infrastructure, most companies can save money, but don't insist that the deployment be elegant.


How much can they save? Sebesta says his firm helped a large telecommunications organization move off expensive legacy hardware that wasn't well utilized and recoup more than $20 million annually.

2. It won't save money because we still have to run a data center.


True, you probably won't shutter your data center. However, there are three compelling scenarios from a cost perspective that CIOs who want to take advantage of sophisticated cloud capabilities can cite.

>> A combination of private and public clouds to do cloudbursting. You keep the ability to host internally and realize savings over a 100% cloud approach, while still being able to scale up for usage spikes. If you've ever paid the bill for running an app 24/7 in Amazon Web Services, you'll appreciate this model.



In fact, some n-tier architecture enterprise apps are just about ready to support cloudbursting, though customization may be needed to programmatically monitor load and request more servers from the provider when needed (a minimal sketch of that loop follows this list). Select target applications carefully. Customer-facing apps are much more likely than internally focused systems to need elasticity.

>> Business continuity and disaster recovery. Look at your budget and you will likely see one of two things: massively expensive operational contingency contracts for disaster recovery for important systems, or capital expenditures for building out and maintaining disaster recovery hardware and associated data center assets, yours or hosted. BC/DR fairly begs for the automation and pay-as-you-go capabilities offered by IaaS providers. You can spin up apps in the cloud for a fraction of the cost of your own infrastructure build-out or of using a conventional DR or SaaS vendor. We recently spoke with a CIO who was being pitched by a SaaS vendor on its costly DR capabilities: "You just have to let us know a week before you test so that we have the people in place to handle it."

Oh, yes, that's just what we want: a vendor that demands warning before testing its premium-priced ability to handle a disaster.

>> Development and testing. This use case seems piddly compared with the resources expended for business continuity and disaster recovery, but over time, costs add up. Since there's always infinite demand for a free service (or apparently free, since most app dev costs aren't well aligned with or transparent to departmental spending), business units think nothing of building multiple test environments. We've seen more than a dozen for some enterprise apps. Think about being able to tell business units: You can have as many test environments as you want for $20 per hour each. All of a sudden you're not only reducing the number of test environments from willy-nilly to what's really needed, you're also likely saving considerable staff time, since setup will be far more automated.
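Here is the minimal sketch of the cloudbursting loop mentioned in the first scenario. The `get_average_load` and `CloudClient` names are hypothetical placeholders rather than any particular provider's API, and the thresholds and polling interval are illustrative assumptions.

```python
import time

# Hypothetical stubs: in practice these would wrap your monitoring system
# and your IaaS provider's SDK.
def get_average_load(tier: str) -> float:
    """Return average utilization (0.0-1.0) across the internally hosted tier."""
    raise NotImplementedError

class CloudClient:
    def add_server(self, tier: str) -> str: ...        # returns the new server's id
    def remove_server(self, server_id: str) -> None: ...

def cloudburst_loop(cloud: CloudClient, tier: str = "web",
                    scale_up_at: float = 0.75, scale_down_at: float = 0.30,
                    poll_seconds: int = 60) -> None:
    """Rent public-cloud servers only while the internal tier is hot,
    and release them once demand drops back to normal."""
    burst_servers: list[str] = []
    while True:
        load = get_average_load(tier)
        if load > scale_up_at:
            burst_servers.append(cloud.add_server(tier))
        elif load < scale_down_at and burst_servers:
            cloud.remove_server(burst_servers.pop())
        time.sleep(poll_seconds)
```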

3. Why would we move this application outside its life cycle?


The biggest pushback will likely come from the guardians of the application life cycle.


In the data center model, applications are identified, architectures defined, and hardware and software procured; then comes the agonizing manual labor of loading and configuring, which can take a week or more. In the cloud, there's still agony in loading and configuring, but instead of grunting over individual servers, the heavy lifting is associated with building templates and scripts that will auto-build, or orchestrate, servers and apps (a sketch of such a template appears at the end of this section).

Your application life cycle guardians will remember well what an effort it was to configure the servers for that legacy system. Though they may smile politely while you unveil your big cloud plans, as soon as you reveal that those plans involve destroying their servers and rebuilding them off-site using templates and scripts, the conversation is over, in their minds. This isn't always the case, but the point is that IT leaders think, "No big deal. Just write some scripts and templates and move the app to the cloud." Application and infrastructure teams often view this quite differently and will push back that the app shouldn't be moved outside of the life cycle.

Frankly, it's hard for a CIO to argue with that, especially when teams point out the amount of effort required. Yet if we wait until the end of life for applications that may hang around for a decade, we lose much of the benefit of cloud, and for what?
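To make the templates-and-scripts effort concrete, here is a minimal sketch of a declarative server template being turned into provisioning calls. The template fields and the `IaasClient` interface are hypothetical placeholders, not RightScale, Puppet or Chef syntax.

```python
# A declarative template describing the servers a legacy app needs.
LEGACY_ERP_TEMPLATE = {
    "app": "legacy-erp",
    "servers": [
        {"role": "db",  "count": 1, "size": "large",  "image": "rhel6-oracle"},
        {"role": "app", "count": 2, "size": "medium", "image": "rhel6-appserver"},
        {"role": "web", "count": 2, "size": "small",  "image": "rhel6-apache"},
    ],
}

class IaasClient:
    """Hypothetical wrapper around a provider SDK."""
    def launch(self, image: str, size: str, tags: dict) -> str: ...

def orchestrate(client: IaasClient, template: dict) -> list[str]:
    """Auto-build every server the template calls for and return their IDs.
    Re-running the script rebuilds the same environment anywhere the
    provider operates, which is the point of template-driven builds."""
    server_ids = []
    for spec in template["servers"]:
        for i in range(spec["count"]):
            server_ids.append(client.launch(
                image=spec["image"],
                size=spec["size"],
                tags={"app": template["app"], "role": spec["role"], "index": i},
            ))
    return server_ids
```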

4. We'll end up locked into some cloud provider.


There are many ways to migrate an application to the cloud. Some involve consultants, manual processes and being melded to a cloud provider. However, there are tools to help avoid lock-in while bringing the benefits of automation. You can follow the lead of The Associated Press, CBS Interactive, Zynga and a slew of sophisticated startups: Use a platform such as RightScale, Scalr or enStratus that abstracts complexity. These systems provide management and orchestration and are typically built around components of on-the-fly provisioning technologies, such as Puppet and Chef.


Using a platform like RightScale requires a completely different way of thinking than what enterprise app teams are accustomed to. And, to be fair, while these management platforms have templates for building things like SQL Server, you won't find templates for most legacy applications. Your app team will have to build these, and, as we've said, it's just about as much effort as creating the server farm in the first place. Don't expect people to be excited about this, particularly because there are new technologies to be learned and new obstacles to overcome, such as creating a virtual private cloud or dealing with a new and borderless server architecture.

Another issue that tends to bring IT pros outside their comfort zones is when outside services require internal resources. When a customer used CliQr to deploy a .NET app into a cloud provider's infrastructure, the question of how to print on site came up. The answer was to use a CliQr feature that allows for service proxy of given services; no VPN was needed. Creativity is required.

5. It's not broken, so why the heck risk a security breach?


To paraphrase Dilbert, do you trust a vetted cloud data center with a crackerjack security team and a CSO who used to be with the FBI more, or the programmer who's been outsourcing his job and sending his physical authentication token to China via VPN?

In shops with hundreds or thousands of apps that are working just fine, apps that people like, that are reliable, that provide benefits to the business, there's no compelling reason to burn it all down and rebuild it in the cloud way. But make no mistake: Today's rich IaaS provider ecosystem means low prices and little risk of lock-in, and automation means you can quickly provision systems to meet your organization's needs. So start now, and be pragmatic with legacy apps. If you sit back until applications run their natural life cycles, you'll be waiting a long time.


VMware Users Gain Bluelock Recovery Option


As VMware prepares for the public cloud market, compatible suppliers like Bluelock add services.
[ By Charles Babcock ]



Bluelock, an early member of VMware's public cloud ecosystem, has launched recovery-as-a-service for users of VMware virtual machines, whether on premises or in a cloud service. Recovery-as-a-service is expected to one day evolve into a lower-cost replacement for disaster recovery systems, where duplicate hardware and software sit in reserve at a company's secondary or alternative data center.


Bluelock's is one of the first general-purpose services available online to VMware customers, based on familiar VMware vSphere 5 and vCloud suite products. It will compete with SunGard and other disaster recovery services that provide similar services across the board to multiple hypervisor users.

Bluelock is part of VMware's public cloud partner ecosystem, a group that is likely to find itself under increasing pressure as VMware prepares to enter the list of infrastructure-as-a-service providers. VMware is slated to announce in May how it expects to encourage and sustain its existing ecosystem, on one hand, and compete more effectively with Amazon Web Services on the other. Other primary cloud partners include Colt, SingTel, SoftBank, Dell, CSC and AT&T. In addition, hundreds of other regional providers offer vCloud suite compatibility in their public IaaS offerings. Bluelock appears unfazed by the prospect of VMware offering an alternative public cloud service.

Bluelock will supply a VMware customer with a service that creates clones of vital systems running in the enterprise data center and establishes them in a Bluelock data center. It will also provide recovery service in a second Bluelock data center.

Bluelock has data centers in Las Vegas and Salt Lake City as well as Indianapolis, where the company is headquartered. In each case, Bluelock creates a virtual data center unit, a designated set of virtual servers with well-defined software stacks, in an alternate location from the one where the primary data center is running. In the event of a malware invasion, data center power loss or natural disaster, the recovery systems are activated and preset means of feeding in up-to-date data are triggered.

Bluelock calls the recovery-as-a-service offering its 4-Series Virtual Datacenter, a name that appears to reflect the fact that it runs Tier 4-designated data centers, facilities that meet a 99.995% level of reliability, resiliency and compliance as determined by the Uptime Institute. The institute is an independent unit of the 451 Group, the company that owns 451 Research and the Yankee Group.

Pat O'Day, CTO of Bluelock, says recovery-as-a-service is easier to test than physical disaster recovery systems.


In many cases, the complexity of a physical duplicate of existing mission-critical systems makes IT staffs reluctant to stage a full-scale test, where production systems are shut down in hopes that disaster recovery systems will work as planned and pick up the tasks.

Bluelock bases its replication of existing systems and establishment of data feeds on the Zerto Virtual Replication software system. It clones virtual machines and data traffic in running systems and feeds a copy to the recovery site. The system relies on a constant, up-to-date data stream rather than snapshots. Zerto spokesmen say it does so without impacting existing application performance. Using virtualization as the basis for a recovery system reduces its cost and "provides an appropriate balance between IT disaster recovery capabilities, affordability and risk mitigation," O'Day said in the service announcement. The service is priced to represent an estimated 40% of the cost of running a second physical production system, he said in an interview.

The recovery service keeps a journal of all transactions of a running system. In the To-Cloud version, the customer goes to the vSphere control panel tab for disaster recovery and uses it to activate the backup system.

The replacement system is given a data feed that represents up-to-date data for the application. If there has been a data corruption incident, the recovery system can go back in the transaction log to a point that preceded the corruption and reactivate only good data, O'Day explained. The service includes all the commands needed to run a test of the system, which the customer may activate whenever he chooses. If he schedules the test with Bluelock or conducts it on a weekend, there are no fees for twice-a-year testing, O'Day said.
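The journal-based rollback O'Day describes can be illustrated with a minimal sketch. This is not Zerto's or Bluelock's implementation; the record format and the idea of replaying writes only up to a cutoff timestamp are illustrative assumptions about how transaction-journal recovery works in general.

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    timestamp: float   # seconds since epoch when the write was captured
    block: int         # logical block (or object) the write touched
    data: bytes        # payload of the write

def recover_to_point(journal: list[JournalEntry], corruption_time: float) -> dict[int, bytes]:
    """Rebuild the recovery copy by replaying journaled writes only up to
    the instant just before corruption was introduced; later writes are
    discarded so only good data is reactivated."""
    image: dict[int, bytes] = {}
    for entry in sorted(journal, key=lambda e: e.timestamp):
        if entry.timestamp >= corruption_time:
            break
        image[entry.block] = entry.data   # last write before the cutoff wins
    return image
```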


Testing disaster recovery systems is a sensitive point; few IT shops wish to risk bringing their systems down unintentionally in the process of DR testing. It's not uncommon for many small businesses or nonprofit organizations to have only the most rudimentary recovery system, such as a snapshot tape taken once a week or once a day.

An early adopter of the service, Ursinus College in Collegeville, Pa., didn't previously have a disaster recovery system. "We're excited to have a partner who we know will be there to help get us back on our feet in the event of a [disaster] declaration," said James Shuttleworth, director of network systems and infrastructure at the college, in the announcement.

The service for existing Bluelock cloud customers is dubbed the Bluelock Virtual Datacenter 4500. The service for on-premises VMware users looking for a cloud-based recovery site is called the Bluelock Virtual Datacenter 4000. Bluelock was clustered with GoGrid and Joyent in the challengers section of Gartner's magic quadrant on infrastructure-as-a-service suppliers.




How University Of Oklahoma Protects Records From Disaster


Enterprise content management helps the University of Oklahoma make student records safer and more accessible.
[ By David F. Carr ]


For the head of the academic advising program at the University of Oklahoma College of Arts and Sciences, digital record keeping means never having to say "I'm sorry," as in, "I'm sorry, but I don't have that record," or even, "I'm sorry, all your student records were lost in the flood."

Rhonda Kyncl, assistant dean for academic services, says she wants to make sure her department is ready when a student comes to that critical checkpoint of entering senior year and reviewing his or her records.


This is the moment when the adviser says, "Sure, just complete these courses with a grade of C or better, and you'll be all set to graduate in May," or "Sorry, looks like you're going to come up short." And it would be nice if that answer were correct, based on all the right information being available to prove, for example, that the student had permission to substitute one course for another that's normally required.

"We want to leave the student with a comprehensive record so the student doesn't have to do all that jumping around of tracking down documents from different teachers and academic departments," Kyncl says. By moving to a Laserfiche enterprise content management system, she has made the advising function digital, replacing paper folders of records with online folders of documents. In addition to keeping the records more organized, the ECM system makes the documents accessible from an iPad or any other tablet or computer. That makes it easier for busy students and advisers to meet in a coffee shop or any other convenient location and have all the necessary information instantly available, she says.

The trick is that not all the required information for a student advising session can be represented in one neat report, Kyncl says. There's always going to be the need to track down documentation for exceptions, such as a departmental memo allowing a student to substitute one course for another. The record also includes documents such as notes from previous student-adviser meetings. In the course of a meeting for an aspiring graduate, the adviser will also produce documents like a checklist of outstanding requirements the student must meet. As much as possible, the university now tries to produce these documents digitally from the outset, Kyncl says. However, some still start out on paper (for example, course add/drop forms and documentation from outside agencies such as Veterans Affairs) and get scanned in.

Kimberly Samuelson, VP of strategy at Laserfiche, says she is seeing more interest from higher education as the institutions position themselves to operate more efficiently. "They're also seeing technology as being an advantage in the way they position themselves to the student base," she says.


While Kyncl believes she is ahead of many of her peers at other colleges in installing a system like this, she admits she was driven less by forward-thinking inspiration than by fear inspired by a near disaster. "The reason we started looking into digital records is we had a flood in our building at the end of 2009. Fortunately, it didn't damage student records, but it came within about 50 feet of doing that," she says. Rather than an overflowing river, this flood was caused by a burst water pipe in the water-handling system on the roof, which flowed down into the building for four hours before the maintenance staff discovered it. Plenty of other records in the building were reduced to a useless, sodden mess, and only by luck did the student records escape. "That really would have been bad; I don't know what we would have done," Kyncl says. In some cases, duplicate records would have existed elsewhere around the university, but most of the records for the advising department itself existed only in that one place. "We would have been redoing thousands of files. It would have been a nightmare, and it certainly got us thinking," Kyncl says.

Some of the peers whom she consulted through a university technology email list had started doing digital archiving of documents to protect against such disasters. Yet as Kyncl investigated the options, she thought, "Why wouldn't I do that for my processes as well?" So she looked for a system that would manage document creation, organization and everyday access, in addition to archiving.

Kyncl was introduced to Laserfiche by the college admissions director, who was also investigating (and has since implemented) the electronic record-keeping system. Two other colleges (out of more than a dozen that make up the university) have also implemented it, she says. Although the university's central IT organization has been involved only tangentially, she says, "I think other people are watching what we're doing, since we are a large college."



Don't Trade High Availability For Flash Performance


The rise of SSDs has also resurrected the single-controller architecture, along with its single point of failure. IT shouldn't gamble on high availability just for flash's amped-up performance.
[ By Howard Marks ]


As flash-based solid-state drives revolutionize the storage industry, it might be worth taking a look at how some basic storage system architectures compare when the storage media change from spinning disks to SSDs.

The most basic storage architecture is essentially a RAID controller with a SAN or NAS target. The controller, whether custom hardware or a standard server, is a single point of failure.


As a result, unicontroller systems have been relegated to the very low end of the disk array market, where they're used by SMBs or to hold additional copies of data. The vast middle of the storage market is dominated by dual-controller, modular arrays that will fail over transparently if one controller goes down.

Amazingly enough, the move to SSD has resurrected the unicontroller design in the form of rack-mount SSDs from the likes of IBM's Texas Memory Systems and Astute Networks' ViSX, as well as more feature-rich systems like Nimbus Data's S-Class. The risk of data loss inherent in a unicontroller design might be tolerable for some applications, such as analytics or virtual desktop infrastructure with nonpersistent desktops, but for the vast majority of cases it would be hard to pay $50,000 or more for a product that doesn't offer high availability.

When asked about high availability, proponents of unicontroller systems will generally recommend a pair of appliances with synchronous replication. If the vendor has done its homework and written a failover mechanism into its arrays, a cluster of unicontrollers is available enough for most applications.

Basically, a typical dual-controller, active/passive modular storage system is what the systems guys would call a shared-disk cluster, much like a typical Windows Server cluster. A pair of unicontroller systems that replicate data is a shared-nothing cluster. In the disk era, unicontrollers were built on industry-standard servers, which offset the additional cost of a second set of disk drives. This meant that unicontroller designs, some of which provided some degree of scale-out as well, like LeftHand's iSCSI array and NexentaStor, have sold thousands of units.

The problem with unicontroller systems in the SSD era is that the flash makes up a much higher portion of the cost of a storage system than disk drives do. In fact, some all-flash unicontroller systems cost as much as competing systems that include HA. I've even heard vendors suggest that customers buy one flash device and manage HA by using host volume managers or storage virtualization appliances.


But if you mirror in your host computer's volume manager or synchronously replicate from an all-SSD array to a disk-based system to avoid the cost of two all-SSD systems, you give up the performance advantage the all-flash system has on writes. That's because writes will only be acknowledged to your applications after they've been written to both the flash and disk-based systems. This limits application performance to the write speed of the slower disk system.

If instead you asynchronously replicate data across the mixed storage systems through host- or application-level software, you've turned a simple device failure into a full-blown disaster with associated recovery point and recovery time objectives. By contrast, device failure on a true high-availability system would cause no data loss, and at worst a few seconds of failover delay. Users and senior management can accept some downtime, and even some performance loss, in the face of a disaster caused by an external event such as a tornado or hurricane. They're a lot less understanding when they are inconvenienced by a problem within the IT department, even if it was the failure of a key piece of equipment.
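A minimal sketch of why the mirrored-to-disk configuration inherits the slower tier's write latency, and of what asynchronous replication puts at risk instead; the latency and throughput figures are illustrative assumptions, not benchmark results.

```python
def synchronous_write_latency(flash_ms: float, disk_ms: float) -> float:
    """A synchronously mirrored write is acknowledged only after both
    copies land, so the application sees the slower device."""
    return max(flash_ms, disk_ms)

def asynchronous_exposure(write_rate_mb_per_s: float, lag_seconds: float) -> float:
    """Asynchronous replication acknowledges at flash speed, but data written
    during the replication lag is lost if the primary dies; that window is
    the recovery point objective, expressed here in MB."""
    return write_rate_mb_per_s * lag_seconds

# Illustrative numbers only: 0.2 ms flash write vs. 5 ms disk write.
print(synchronous_write_latency(0.2, 5.0))   # 5.0 -> writes run at disk speed
print(asynchronous_exposure(200, 10))        # 2000 MB at risk after a 10 s lag
```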

The only place I can think of where a replicating pair of unicontroller systems might be an advantage would be on a college campus. At the college where I worked, we had two data centers at opposite ends of the campus connected by a loop of 128 strands of single-mode fiber. In an environment like that, a user could put one system in each data center and get both high availability and disaster recovery with one replicating array pair, taking advantage of the lower cost of unicontroller systems.

A year or two ago, speed was the only performance factor that people cared about with all-flash systems; we were so happy with the performance we didn't care about other functionality. But as the all-flash market matures, I'm less willing to sacrifice things like high availability for speed.

Howard Marks is founder and chief scientist at DeepStorage, a storage consultancy and independent test lab concentrating on storage and data center networking.



Rackspace Launches Global OpenStack Expansion


The IaaS provider will enlist partners to add Rackspace data center locations around the world, strengthening its competitiveness with the likes of CenturyLink Savvis and Verizon Terremark.
[ By Charles Babcock ]


Rackspace, as it stands, is a set of data centers in the U.S., Hong Kong, London and Sydney, which still leaves it in the junior league compared with the facilities operated by Amazon Web Services, Google, CenturyLink Savvis or Verizon Terremark. But Rackspace is about to become a much larger, interconnected chain of cloud data centers.


It's launching an effort with telcos and other partners to build out more Rackspace- and OpenStack-branded cloud facilities. The goal isn't to add to a list of local data centers, like the ones that were quickly isolated, then knocked off the air when Hurricane Sandy hit the Middle Atlantic region last year. Rather, it plans to build a chain of interconnected data centers that can give customers backup and recovery or disaster recovery services in a different geographic area. With connections already established, it can also provide a quick migration route out of one facility and into another, if both are architected and operated similarly.

"The goal is not to just enable more individual data centers but a federated network of them," Jim Curry, senior VP and general manager of Rackspace Private Cloud, said in an interview. "We've been asking ourselves, how can we ensure a worldwide network of OpenStack cloud data centers?" he says. Rackspace was a founder of the OpenStack project with NASA and remains a major contributor to it.

As a $1.3 billion company, Rackspace is not in a position financially to build out a string of modern, new facilities, which can cost $200 million or more per site, on its own. It operates three data centers in the U.S., in Dallas, Chicago and northern Virginia, as well as three internationally. There are no additional data centers in the planning stages at the moment, Curry says.

Rackspace doesn't expect to have trouble finding partners to invest in its experience in providing OpenStack cloud services, Curry says. "We've had a constant stream of requests that we do this for the past seven years," he says. Rackspace will initially select one or two partners and aim to get new facilities in operation this year, operating under both the host company and Rackspace brands. The initial launch will be modest. "We want to make sure to get the first two or three done correctly. Next year we'll continue to scale," he adds. For a company to be in the infrastructure-as-a-service business, it's necessary to compete on a worldwide level, he added. Amazon offers a consistent experience each time it opens another data center, giving customers already familiar with it a location and facility into which to expand their own operations, Curry notes.


"We are working to build a global alliance. ... We are working to provide a consistent experience for all Rackspace users," he says. After the initial two locations come online, Rackspace will seek to expand the number that it authorizes and helps build as it gains experience working with partners.

Rackspace needs partners to address a weakness: intercity and inter-data-center communications, or networking expertise. That would suggest a telecommunications company would be among its first partners. Curry wouldn't name a target partner, but he doesn't disagree: "We can work with partners that know a lot about communications."

Rackspace may also be watching the growing competition from telcos already in the IaaS business. CenturyLink acquired the Savvis cloud service in 2011, and its executives lead CenturyLink's administration of cloud services. Savvis offers a data storage, backup and recovery service across more than one data center. Verizon acquired Terremark in 2011; Terremark executives led Verizon's entry into cloud services.

Both telcos have strings of 50 interconnected data centers around the world, though they haven't necessarily built up the level of IaaS in each one to the same point. Looking at the telcos' ability to spread services across many centers may have been a spur to the Rackspace move. Acquisition by CenturyLink gave Savvis access to CenturyLink's 207,000 miles of fiber-optic networking. Savvisdirect IaaS is hosted in data centers in Washington, D.C., and Santa Clara, Calif. It's also expanding into London. By mid-2013, it expects to offer the service from Singapore and Hong Kong. CenturyLink is the third-largest telecommunications provider in the U.S. AT&T was also early to market, offering its own Synaptic Compute Cloud from multiple centers.

"We're in discussion with a number of very large telcos," Scott Sanchez, Rackspace's director of strategy, said in an interview. "Our discussions kind of cover the globe; they're taking place on all continents. We'll do maybe two this year. Next year, the number might be more like six," he says.


EMC Merges High Availability, Disaster Recovery Into One


The company is combining products and services to turn high availability and disaster recovery into a single concept it calls Continuous Availability.
[ By David Hill ]


Service providers and large enterprises have a goal of delivering "24/7 forever" availability, particularly for mission-critical services and applications. EMC wants to help customers meet that goal with the concept of continuous availability (CA), which marries high availability (HA) and disaster recovery (DR). The CA approach is built around EMC's VPLEX product, as well as a new service offering to perform assessments and analyze costs.


The first step in delivering "24/7 forever" is to provide enough extra server and storage capacity to create an HA system. The HA system is the first line of defense against problems that threaten five-nines application availability. Different services and applications will require different levels of redundancy. For an enterprise database application, servers are typically replicated 100% for redundancy. EMC estimates, though, that for a Web farm, only about 20% more servers need to be provided, so only 20% redundancy is necessary.

The second step is to create a DR capability at a site geographically separate from the original data center. This typically requires 100% redundancy in both servers and storage. Note that the 100% holds for both enterprise databases and Web farms, because if availability is impacted it would be the whole site (otherwise it would not qualify as a disaster). Notice that the redundancy required to fully protect availability is an extra 200% in the case of databases and 120% in the case of Web farms.

An Alternative To The Conventional Architecture




With its new Continuous Availability Advisory Services offering, EMC proposes an alternative to traditional scenarios: a merger of traditional single-site HA with dual-site DR to create a continuous availability system. In a full CA architecture, transactions from the same application are processed in each of the two sites simultaneously. This is done by using global load balancing to distribute transactions to each site. Web and application farms are stretched between sites, creating active-active applications. At the data layer, for example, a local Oracle RAC cluster can be stretched between the sites to provide a locking mechanism over the databases. And then the storage layer is connected via EMC's VPLEX to provide a data coherency mechanism that syncs the data between the storage arrays deployed between the sites. The final piece of the architecture is the use of active-active data center infrastructure components, such as a shared name space and common IP addressing, which are deployed so that applications can run seamlessly in either site.


Probably the most interesting thing about EMC's approach is that the company claims the architecture can be provisioned with off-the-shelf components and that most applications can be adapted without code changes. And where an application does not fit nicely into the mold of an active-active application architecture over Oracle RAC, a near-CA architecture can be deployed, where application and database clusters run normally in one site and fail over to another site. In this near-CA architecture, the storage layer is still using VPLEX, and the applications and databases are set up in a two-site HA mode. This new paradigm that EMC is rolling out can provide many different combinations of CA and two-site HA modes at the Web (presentation), application, data and storage layers to provide a level of resiliency above what was previously achievable.

In this architecture, EMC argues, each of the two sites needs only about 60% of the original performance capabilities, for a total of 120%, which is 20% redundant. What magic does the company use to achieve this?

EMC employs an approach called fractional provisioning of the server count. Under normal circumstances, 100% is enough by definition, and if you look at CPU utilization under most day-to-day circumstances, utilization averages somewhere in the 50% to 70% range. The remaining free capacity (above the 50% to 70% mark) is headroom and is used during peak hours or heavy business usage. So, the logic goes, put the average compute of 60% of the need in each site, for a total of 120%. The 20% extra server capacity can accommodate processing needs if a few servers fail or if demand fluctuates. If one site goes down, you immediately have the average compute available. If the outage is prolonged, then you may need to run some triage and defer workloads or add capacity, which is getting easier to do in the virtual world.
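The redundancy arithmetic behind the article's figures can be laid out in a few lines; the capacity model is a simplification that assumes server count scales linearly with delivered capacity.

```python
def extra_capacity(ha_redundancy: float, dr_redundancy: float) -> float:
    """Extra capacity beyond the 100% needed to run the application."""
    return ha_redundancy + dr_redundancy

# Traditional HA plus DR, per the article's estimates:
database = extra_capacity(ha_redundancy=1.00, dr_redundancy=1.00)  # 2.00 -> 200% extra
web_farm = extra_capacity(ha_redundancy=0.20, dr_redundancy=1.00)  # 1.20 -> 120% extra

# Continuous availability: ~60% of the need at each of two active sites.
ca_total = 2 * 0.60          # 1.20 -> 120% of required capacity deployed
ca_extra = ca_total - 1.00   # 0.20 -> only 20% redundancy
print(database, web_farm, ca_extra)
```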


But how likely is this to happen? EMC reports that a study shows that, of all the events that affect availability (including not only natural disasters but also business mergers, acquisitions and data center relocations), less than 1% require a failover from one site to another. In other words, disaster recovery is a necessary but expensive insurance policy. By contrast, a CA approach provides the DR shield while allowing companies to put insurance premiums to work elsewhere, saving on the overall IT business continuity investment.

Note that the 20% redundancy offered in CA is much less than the redundancy in a standard HA-plus-DR combination. Because of this, EMC says its approach offers a potential 28% to 50% reduction in overall compute costs. The CA model requires that applications be run in parallel (or fail over, in near-CA versions) between two geographically separated sites. The synchronization of workloads at two distributed sites has to occur with low enough latencies (less than or equal to 5 milliseconds) that no one will be able to discern any difference in performance. EMC's VPLEX Metro enables users to have read and write access to the exact same information at the same time (for all practical purposes) in two separate locations. It can support that performance level at distances of up to roughly 100 kilometers and work with common third-party clustering products, notably VMware, Oracle RAC, HACMP/PowerHA, MC/ServiceGuard, MSCS and VCS.

In other words, IT doesn't necessarily have to change how it works to accommodate the applications in the EMC CA world.

But what about the distance factor? Typically, 100 kilometers isn't considered enough separation for true DR. Some organizations may be willing to take the risk of having a second site within the 100-kilometer range, with the understanding that a major disaster (think Hurricane Katrina) may still bring down a CA architecture. Others may have to take additional steps, such as bringing in a third-party site for worst-case scenarios.

A good economic case can be made for a CA approach based on a reduced server count and power savings. But from an IT perspective, the simplification inherent in EMC's CA services is a big benefit, because maintaining both HA and DR environments is a significant challenge. Moreover, in CA, the IT personnel at the remote site are not an afterthought but are fully involved in normal operations. Still, IT organizations will have to perform thorough evaluations to justify a move to CA.
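A quick sanity check shows why the roughly 100-kilometer limit fits inside a 5-millisecond synchronization budget. The fiber refractive index and the assumption that route length roughly equals straight-line distance are simplifications; real routes are longer and add equipment and protocol delay.

```python
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per millisecond
FIBER_INDEX = 1.47                  # typical refractive index of single-mode fiber

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time over fiber between two sites."""
    speed = C_KM_PER_MS / FIBER_INDEX     # ~204 km per millisecond in glass
    return 2 * distance_km / speed

print(round(round_trip_ms(100), 2))   # ~0.98 ms, leaving most of a 5 ms budget
print(round(round_trip_ms(500), 2))   # ~4.9 ms, propagation alone nearly exhausts it
```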

Mesabi Musings


The consolidation of HA and DR into one CA system seems to be a logical evolutionary step. The same benefit (24/7 forever availability) at lower cost and with greater IT simplification makes a CA approach attractive. However, it is also a major decision that IT organizations have to think through carefully. So even though technologies that can do the job are available, enterprises may still have to be convinced to go forward.

David Hill is principal of Mesabi Group LLC, which focuses on helping organizations make complex IT infrastructure decisions simpler and easier to understand. He is the author of the book Data Protection: Governance, Risk Management and Compliance. EMC is a client of David Hill and the Mesabi Group.


