
Rackspace Sales FAQs - Cloud Load Balancers

By Chris Anderson

Cloud Load Balancer is ideal for

Mission-critical web-based applications and workloads requiring high availability. Load balancing distributes workloads across two or more servers, network links, and other resources to maximize throughput, minimize response time, and avoid overload. Rackspace Cloud Load Balancers allow customers to quickly load balance multiple Cloud Servers for optimal resource utilization.

Load Balancers Overview: http://www.rackspace.com/cloud/cloud_hosting_products/loadbalancers/

High Throughput & Bandwidth

o Since the Load Balancer is connected via a 10 Gb/s network to both the public network and Rackspace's internal network, does this mean a customer can connect all backend nodes to their Load Balancer using the private network interfaces, effectively doubling the network throughput limits they would have with the public interface? And what are the limiting factors that may influence the actual throughput at a given time?

The thing to keep in mind is that this is shared infrastructure: while we can support a high level of throughput, if a customer is going to require a dedicated level, we'll want to have a more in-depth conversation with them. For most customers, however, the throughput limitation is purely a factor of the aggregate ServiceNet traffic limitation imposed on the Cloud Servers. Specifically, two Cloud Servers behind a load balancer, each with a 20 Mbps bandwidth cap, are capable of 40 Mbps of (theoretical) throughput through the load balancer.

Load Balancer Tech page: http://www.rackspace.com/cloud/cloud_hosting_products/loadbalancers/technology/
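The aggregate-throughput reasoning above can be sketched as a quick calculation (the per-node cap and node count are the example figures from the answer; real-world numbers will be lower because the infrastructure is shared):

```python
# Theoretical aggregate throughput through the load balancer is bounded by
# the sum of the per-node ServiceNet bandwidth caps.

def aggregate_throughput_mbps(node_caps_mbps):
    """Upper bound on load-balanced throughput given each node's cap."""
    return sum(node_caps_mbps)

# Two Cloud Servers, each capped at 20 Mbps, as in the FAQ example:
print(aggregate_throughput_mbps([20, 20]))  # -> 40
```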

o And if they do use the private interfaces to connect their backend nodes to the Load Balancer, does this mean they will only be billed for the in/outbound bandwidth usage of the Load Balancer's public traffic (assuming the backend nodes' public interfaces are closed)?

If customers are load balancing Rackspace Cloud Servers using the private IP address, customers will be billed for Cloud Load Balancers bandwidth; however, there are no additional charges for Cloud Servers bandwidth.

Load Balancer Pricing: http://www.rackspace.com/cloud/cloud_hosting_products/loadbalancers/pricing/

Zeus Technology & the API w/ Load Balancer

o When a customer uses the Load Balancers API, will they be making calls directly to the Zeus infrastructure? Or is it run against a custom-built shell on top of Zeus that then translates onto the Zeus infrastructure?

We leverage Zeus Traffic Manager (http://www.zeus.com/products/trafficmanager). Rackspace built the control system, infrastructure, and API (everything that is customer facing). Not all Zeus features have been implemented in Cloud Load Balancers yet (e.g. SSL, caching).

Zeus Traffic Manager Brochure PDF: http://www.zeus.com/sites/default/files/files/products/brochures/zeustrafficmanagerbrochure.pdf

o What are the main advantages of using our Cloud Load Balancer (leveraging Zeus technology) over using HAProxy or other open-source traffic managers?

If customers use HAProxy, they have to manage another cloud server, which includes configuration, management, patching, etc. HAProxy makes load balancing far more difficult and less cost effective. For example, with network throughput, you have to scale up your Cloud Server to get a larger network cap. With Cloud Load Balancers, you don't have to worry about managing an additional operating system or the network capacity of a cloud server.

Built-in High Availability & Scalability

o Our Cloud Load Balancer solution has high-availability functionality built in. You only need to buy one cloud load balancer and you get high availability at no additional charge. What happens if a Load Balancer goes down?

In the event of a Load Balancer failure, the system fails over to a partner device. In this event, the failover should result in less than 30 seconds of disruption.

o Is there a limit to the number of backend nodes connected to a Load Balancer?

By default, no more than 25 backend nodes can be connected to a single Load Balancer instance.

o Can users configure auto-scaling on Cloud LB services?
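As a rough illustration of the customer-facing API Rackspace built on top of Zeus, creating a load balancer is a JSON request POSTed to the service. The sketch below only assembles the request body; the field names follow the historical Cloud Load Balancers v1.0 request format, and the node addresses are hypothetical:

```python
import json

def build_create_lb_request(name, node_addresses, protocol="HTTP", port=80):
    """Assemble the JSON body for a 'create load balancer' API call.

    The real call would be an authenticated POST to
    /v1.0/{accountId}/loadbalancers on the regional endpoint.
    """
    return json.dumps({
        "loadBalancer": {
            "name": name,
            "protocol": protocol,
            "port": port,
            "virtualIps": [{"type": "PUBLIC"}],
            # Backend nodes are typically added by their ServiceNet
            # (private) IPs so their traffic is not billed as Cloud
            # Servers bandwidth.
            "nodes": [
                {"address": addr, "port": port, "condition": "ENABLED"}
                for addr in node_addresses
            ],
        }
    }, indent=2)

# Hypothetical private addresses for two backend Cloud Servers:
print(build_create_lb_request("web-pool", ["10.180.1.10", "10.180.1.11"]))
```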
(i.e., set parameters at the LB level stating that when nodes 1, 2, and 3 reach X load, deploy an additional cloned node (node 4) to automatically scale out the load from 3 nodes to 4.)

We give our customers all the elements needed to auto-scale, but they would need to develop the code/scripts to implement it themselves.

Advanced HTTP Health Monitoring

o What is the process for alerts/escalations, from the customer's perspective, once a backend node fails and is removed from the LB rotation?

No alerting process currently exists or is available as part of the product offering. The alerts are accessible via an Atom feed (an XML-based format), so any system could consume them. Once we have monitoring as an integrated product, it will make sense to streamline this for customers.
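The do-it-yourself auto-scaling described above amounts to a polling loop the customer writes. A minimal sketch of the decision logic, where `clone_node` and `add_node_to_lb` are hypothetical stand-ins for calls the customer would implement against the Cloud Servers and Cloud Load Balancers APIs:

```python
# Sketch of a customer-written auto-scaling step. Only the decision
# logic is real; the two callables are hypothetical API wrappers.

LOAD_THRESHOLD = 0.8  # scale out when every node exceeds 80% load

def should_scale_out(node_loads, threshold=LOAD_THRESHOLD):
    """Decide whether to add a node: True if every current node is busy."""
    return all(load > threshold for load in node_loads)

def autoscale_step(node_loads, clone_node, add_node_to_lb):
    """One polling iteration: clone a server and attach it if needed."""
    if should_scale_out(node_loads):
        new_node = clone_node()        # e.g. build a Cloud Server from an image
        add_node_to_lb(new_node)       # e.g. POST the new node to the LB
        return new_node
    return None
```

A customer would run this on a schedule (cron, a worker loop), feeding it per-node load figures gathered however they like.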

Health-monitoring probes are executed at configured intervals; in the event of a failure, the node's status changes to OFFLINE and the node will not receive traffic. If a subsequent probe detects that the node has recovered, the node's status is changed back to ONLINE and it is again capable of servicing requests.
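Because the node alerts are exposed as an Atom feed, any tooling can consume them. A minimal sketch using Python's standard library; the sample feed contents and entry titles are hypothetical, shaped like an OFFLINE/ONLINE status stream:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def parse_alert_feed(feed_xml):
    """Extract (title, updated) pairs from an Atom alert feed document."""
    root = ET.fromstring(feed_xml)
    return [
        (entry.findtext(f"{ATOM_NS}title"), entry.findtext(f"{ATOM_NS}updated"))
        for entry in root.iter(f"{ATOM_NS}entry")
    ]

# A hypothetical two-entry feed for one backend node:
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Node 10.180.1.10 is OFFLINE</title>
         <updated>2011-06-01T12:00:00Z</updated></entry>
  <entry><title>Node 10.180.1.10 is ONLINE</title>
         <updated>2011-06-01T12:05:00Z</updated></entry>
</feed>"""
print(parse_alert_feed(sample))
```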

SSL Termination at the Load Balancer

o What, specifically, does this mean to the customer? And what advantage is there to terminating SSL at the Load Balancer?

This is really more about convenience than anything else. Otherwise, SSL private keys, certificates, and intermediate certificates must be maintained on all backend nodes. Terminating on the load balancer allows these items to be maintained in one location, simplifying management. Note that at this time, SSL termination at the Load Balancer is not yet available.

Static IP Addresses

o Is it possible for a customer to continue using the public IP from a current Cloud Server with the LB?

Unfortunately, IP addresses are not portable between products. I would recommend that any user in this situation set up a new load balancer, test it using the virtual IP issued by the service, and update their DNS settings to point at the Cloud Load Balancer's virtual IP. The original setup can be removed after DNS caches have cleared.

Session Persistence

o What type of session persistence do we provide with Cloud Load Balancers?

If you are load balancing HTTP traffic, the session persistence feature uses an HTTP cookie to ensure subsequent requests are directed to the same node in your load balancer pool. We do not offer source-IP-based persistence. Source-IP-based persistence (aka "sticky source IP") means the load balancer keeps track of the IP address of a user coming in from the web, so that every time that user returns they are directed to the same server they originally visited.

Connection Throttling

o Is there a limit to the number of concurrent connections with Cloud Load Balancers?

There is a default cap of 150,000 concurrent connections per Cloud Load Balancer.
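Cookie-based session persistence, as described above, can be illustrated with a toy routing function. The cookie name and node addresses are hypothetical; the real load balancer implements this internally:

```python
import random

NODES = ["10.180.1.10", "10.180.1.11", "10.180.1.12"]  # hypothetical backends
COOKIE = "lb-node"  # hypothetical persistence cookie name

def pick_node(request_cookies):
    """Route a request: honor the persistence cookie if present,
    otherwise choose a node and record it in the returned cookies."""
    node = request_cookies.get(COOKIE)
    if node in NODES:
        return node, request_cookies               # sticky: same node again
    node = random.choice(NODES)                    # first visit: pick any node
    return node, {**request_cookies, COOKIE: node}

# First request carries no cookie; the reply sets one, and the second
# request with that cookie lands on the same backend node.
node1, cookies = pick_node({})
node2, _ = pick_node(cookies)
assert node1 == node2
```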
