
Riverbed Certified Solutions Professional (RCSP)

Study Guide

Exam 199-01 for RiOS v3.0

April, 2008
Version 1.0.13

COPYRIGHT © 2007-2008 Riverbed Technology, Inc.


ALL RIGHTS RESERVED
All content in this manual, including text, graphics, logos, icons, and images, is the exclusive property of Riverbed
Technology, Inc. (“Riverbed”) and is protected by U.S. and international copyright laws. The compilation (meaning
the collection, arrangement, and assembly) of all content in this manual is the exclusive property of Riverbed and is
also protected by U.S. and international copyright laws. The content in this manual may be used as a resource. Any
other use, including the reproduction, modification, distribution, transmission, republication, display, or
performance, of the content in this manual is strictly prohibited.
TRADEMARKS
RIVERBED TECHNOLOGY, RIVERBED, STEELHEAD, RiOS, INTERCEPTOR, and the Riverbed logo are
trademarks or registered trademarks of Riverbed. All other trademarks mentioned in this manual are the property of
their respective owners. The trademarks and logos displayed in this manual may not be used without the prior
written consent of Riverbed or their respective owners.
PATENTS
Portions, features and/or functionality of Riverbed's products are protected under Riverbed patents, as well as
patents pending.
DISCLAIMER
THIS MANUAL IS PROVIDED BY RIVERBED ON AN "AS IS" BASIS. RIVERBED MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE
INFORMATION, CONTENT, MATERIALS, OR PRODUCTS INCLUDED OR REFERENCED IN THE
MANUAL. TO THE FULL EXTENT PERMISSIBLE BY APPLICABLE LAW, RIVERBED DISCLAIMS ALL
WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
Although Riverbed has attempted to provide accurate information in this manual, Riverbed assumes no
responsibility for the accuracy or completeness of the information. Riverbed may change the programs or products
mentioned in this manual at any time without notice, but Riverbed makes no commitment to update the programs or
products mentioned in this manual in any respect. Mention of non-Riverbed products or services is for information
purposes only and constitutes neither an endorsement nor a recommendation.
RIVERBED WILL NOT BE LIABLE UNDER ANY THEORY OF LAW, FOR ANY INDIRECT, INCIDENTAL,
PUNITIVE OR CONSEQUENTIAL DAMAGES, INCLUDING, BUT NOT LIMITED TO, LOSS OF PROFITS,
BUSINESS INTERRUPTION, LOSS OF INFORMATION OR DATA OR COSTS OF REPLACEMENT GOODS,
ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL OR ANY RIVERBED PRODUCT OR
RESULTING FROM USE OF OR RELIANCE ON THE INFORMATION PRESENT, EVEN IF RIVERBED
MAY HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
CONFIDENTIAL INFORMATION
The information in this manual is considered Confidential Information (as defined in the Reseller Agreement entered
with Riverbed or in the Riverbed License Agreement currently available at www.riverbed.com/license, as
applicable).

Table of Contents
Preface
Certification Overview
Benefits of Certification
Exam Information
Certification Checklist
Recommended Resources for Study
RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE
I. General Knowledge
  Optimizations Performed by RiOS
  TCP/IP
  Common Ports
  RiOS Auto-discovery Process
  Connection Pooling
  In-path Rules
  Peering Rules
  Steelhead Appliance Models and Capabilities
II. Deployment
  In-path
  Virtual In-path
  PBR
  WCCP Deployments
  Advanced WCCP Configuration
  Server-Side Out-of-Path Deployments
  Asymmetric Route Detection
  Connection Forwarding
  Simplified Routing
  Datastore Synchronization
  Authentication and Authorization
  Central Management Console (CMC)
III. Features
  Feature Licensing
  HighSpeed TCP (HSTCP)
  Quality of Service
  PFS (Proxy File Service) Deployments
  NetFlow
  IPSec
  Operation on VLAN Tagged Links
IV. Troubleshooting
  Common Deployment Issues
  Reporting and Monitoring
  Troubleshooting Best Practices
V. Exam Questions
  Types of Questions
  Sample Questions
VI. Appendix
  Acronyms and Abbreviations

Preface
This Riverbed Certification Study Guide is aimed at anyone who wants to become certified on the Riverbed Steelhead products and the Riverbed Optimization System (RiOS). The Riverbed Certified Solutions Professional (RCSP) program is designed to validate the skills required of technical professionals who work on the implementation of Riverbed Steelhead products.
This study guide covers the combination of theory and practical knowledge needed for a general understanding of the subject matter. It also provides sample questions that will help in the
evaluation of personal progress and provide familiarity with the types of questions that will be
encountered in the exam.
This publication does not replace practical experience, nor is it designed to be a stand-alone
guide for any subject. Instead, it is an effective tool that, when combined with education
activities and experience, can be a very useful preparation guide for the exam.
Certification Overview
The Riverbed Certified Solutions Professional certificate is granted to individuals who
demonstrate advanced knowledge and experience with the RiOS product suite. The typical RCSP
will have taken a Riverbed approved training class such as the Steelhead Appliance Deployment
& Management course in addition to having hands-on experience in performing deployment,
troubleshooting, and maintenance of RiOS products in small, medium, and large organizations.
While there are no set requirements prior to taking the exam, candidates who have taken a
Riverbed training class and have at least six months of hands-on experience with RiOS products
have a significantly higher chance of receiving the accreditation. We would like to emphasize
that solely taking the class will not adequately prepare you for the exam.
To obtain the RCSP certification, you are required to pass a computerized exam available at any
Pearson VUE testing center worldwide.
Benefits of Certification
1. Establishes your credibility as a knowledgeable and capable individual in regard to
Riverbed's products and services.
2. Helps improve your career advancement potential.
3. Qualifies you for discounts and benefits for Riverbed sponsored events and training.
4. Entitles you to use the Riverbed certification logo on your business card.
Exam Information
Exam Specifications
• Exam Number: 199-01
• Exam Name: Riverbed Certified Solutions Professional
• Version of RiOS: Up to RiOS version 3.x
• Number of Questions: 65
• Total Time: 75 minutes for exam, 15 minutes for Survey and Tutorial (90 minutes total)
• Exam Provider: Pearson VUE
• Exam Language: English Only. Riverbed allows a 30-minute time extension for English
exams taken in non-English speaking countries for candidates who request it. English speaking
countries are Australia, Bermuda, Canada, Great Britain, Ireland, New Zealand, Scotland,
South Africa, and the United States. A form will need to be completed by the candidate and
submitted to Pearson VUE.
• Special Accommodations: Yes (must submit written request to Pearson VUE for ESL or
ADA accommodations; includes time extensions and/or a reader)
• Offered Locations: Worldwide (4200 testing locations worldwide)
• Pre-requisites: None (although taking a Riverbed training class is highly recommended)
• Available to: Everyone (partners, customers, employees, etc)
• Passing Score: 700 out of 1000 (70%)
• Certification Expires: Every 2 years (must recertify every 2 years, no grace period)
• Wait Between Failed Attempts: 72 hours. No retakes allowed on passed exams.
• Cost: $150.00 (USD)
• Number of Attempts Allowed: Unlimited (though statistics are kept)
Certification Checklist
As the RCSP exam is geared towards individuals who have both the theoretical knowledge and hands-on experience with the RiOS product suite, ensuring proficiency in both areas is crucial to passing the exam. For individuals starting out with the process, we recommend the
following steps to guide you along the way:
1. Building Theoretical Knowledge
The easiest way to become knowledgeable in deploying, maintaining, and troubleshooting
the RiOS product suite is to take a Riverbed sanctioned training class. To ensure the greatest
possibility of passing the exam, it is recommended that you review the RCSP Study Guide
and ensure your familiarity with all topics listed, prior to any examination attempts.
2. Gaining Hands-on Experience
While the theoretical knowledge will get you halfway there, it's the hands-on knowledge that
can get you over the top and allow you to pass the exam. Since all deployments are different,
providing an exact amount of experience required is difficult. Generally, we recommend that
resellers and partners perform at least five deployments in a variety of technologies prior to
attempting the exam. For customers, and alternatively for resellers and partners, starting from
the design and deployment phase and having at least six months of experience in a
production environment would be beneficial.
3. Taking the Exam
The final step in becoming an RCSP is to take the exam at a Pearson VUE authorized testing
center. To register for any Riverbed Certification exam, please visit
http://www.pearsonvue.com/riverbed.
Recommended Resources for Study
Riverbed Training Courses
Information on Riverbed Training can be found at: http://www.riverbed.com/support/training/.
• Steelhead Appliance Deployment and Management
Publications
Recommended Reading (In No Particular Order)
• This study guide
• Riverbed documentation
o Steelhead Management Console User's Guide
4 © 2007-2008 Riverbed Technology, Inc. All rights reserved.
RCSP Study Guide

o Steelhead Command-Line Interface Reference Guide


o Steelhead Deployment Guide
o Steelhead Installation Guide
o Central Management Console User's Guide
Other Reading (URLs Subject to Change)
• http://www.ietf.org/rfc.html
o RFC 793 (Original TCP RFC)
o RFC 1323 (TCP Extensions for High Performance)
o RFC 3649 (HighSpeed TCP for Large Congestion Windows)
o RFC 3742 (Limited Slow-Start for TCP with Large Congestion Windows)
o RFC 2474 (Differentiated Services Code Point)
• http://www.caida.org/tools/utilities/flowscan/arch.xml (NetFlow Protocol and Record
Headers)
• http://ubiqx.org/cifs/Intro.html (CIFS)
• Microsoft Windows 2000 Server Administrator’s Companion by Charlie Russell and Sharon
Crawford (Microsoft Press, 2000)
• Common Internet File System (CIFS) Technical Reference by the Storage Networking
Industry Association (Storage Networking Industry Association, 2002)
• TCP/IP Illustrated, Volume I, The Protocols by W. R. Stevens (Addison-Wesley, 1994)
• Internet Routing Architectures (2nd Edition) by Bassam Halabi (Cisco Press, 2000)

RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE


The Riverbed Certified Solutions Professional exam, and therefore this study guide, covers the
Riverbed products and technologies through RiOS version 3.0 only.
I. General Knowledge
Optimizations Performed by RiOS
Optimization is the process of increasing data throughput and network performance over the
WAN using Steelhead appliances. An optimized connection exhibits bandwidth reduction as it
traverses the WAN.
Transaction Acceleration (TA)
TA is composed of the following optimization mechanisms:
• A connection bandwidth-reducing mechanism called Scalable Data Referencing (SDR)
• A Virtual TCP Window Expansion (VWE) mechanism that repacks TCP payloads with
references that represent arbitrary amounts of data
• A latency reduction and avoidance mechanism called Transaction Prediction (TP).
SDR and TP can work independently or in conjunction with one another depending on the
characteristics and workload of the data sent across the network. The results of the optimization
vary, but typically result in throughput improvements in the range of 10 to 100 times over
unaccelerated links.
Scalable Data Referencing (SDR)
Bandwidth optimization is delivered through SDR. SDR uses a proprietary algorithm to break up
TCP data streams into data chunks that are stored in the hard disk (data store) of the Steelhead
appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the
peer Steelhead appliance across the WAN. If the same byte sequence is seen again in the TCP
data stream, then the reference is sent across the WAN instead of the raw data chunk. The peer
Steelhead appliance uses this reference to reconstruct the original data chunk and the TCP data
stream. Data and references are maintained in persistent storage in the data store within each
Steelhead appliance. There are no consistency issues even in the presence of replicated data.
How Does SDR Work?
When data is sent for the first time across a network (no commonality with any file ever sent
before), all data and references are new and are sent to the Steelhead appliance on the far side of
the network. This new data and the accompanying references are compressed using conventional
algorithms so as to improve performance, even on the first transfer.
When data is changed, new data and references are created. Thereafter, whenever new requests
are sent across the network, the references created are compared with those that already exist in
the local data store. Any data that the Steelhead appliance determines already exists on the far side of the network is not sent—only the references are sent across the network.
As files are copied, edited, renamed, and otherwise changed or moved, the Steelhead appliance
continually builds the data store to include more and more data and references. References can
be shared by different files and by files in different applications if the underlying bits are
common to both. Since SDR can operate on all TCP-based protocols, data commonality across
protocols can be leveraged so long as the binary representation of that data does not change
between the protocols. For example, when a file transferred via FTP is then transferred using
WFS (Windows File System), the binary representation of the file is basically the same and thus
references can be sent for that file.
Lempel-Ziv (LZ) Compression
SDR and compression are two different features and can be switched on and off separately.
However, LZ compression is the primary form of data reduction for cold transfers.
The Lempel-Ziv compression methods are among the most popular algorithms for lossless
storage. Compression is turned on by default. In-path rules can be used to define which
optimization features will be used for which set of traffic flowing through the Steelhead
appliance.
TCP Optimizations & Virtual Window Expansion (VWE)
As Steelhead appliances are designed to optimize data transfers across wide area networks, they
make extensive use of standards-based enhancements to the TCP protocol that may not be
present in the TCP stack of many desktop and server operating systems. This includes improved
transport capability for high bandwidth delay product networks via the use of HighSpeed TCP,
TCP Vegas for lower bandwidth links, partial acknowledgements, and other more obscure but
throughput enhancing and latency reducing features.
VWE allows Steelhead appliances to repack TCP payloads with references that represent
arbitrary amounts of data. This is possible because unlike other compression products, Steelhead
appliances operate at the Application Layer and terminate TCP, which gives them more
flexibility in the way they optimize WAN traffic.
Essentially, the amount of data carried within a TCP window is increased from its normal size to an arbitrarily large amount. Because of this increased payload, a given application that relies on TCP performance
(for example, HTTP or FTP) takes fewer trips across the WAN to accomplish the same task. For
example, consider a client-to-server connection that may have a 64KB TCP window. In the event
that there is 256KB of data to transfer, it would take several TCP windows to accomplish this in
a network with high latency. With SDR however, that 256KB of data can be potentially reduced
to fit inside a single TCP window, removing the need to wait for acknowledgements to be sent
prior to sending the next window, thus speeding the transfer.
Transaction Prediction (TP)
Latency optimization is delivered through TP. TP leverages an intimate understanding of
protocol semantics to reduce the chattiness that would normally occur over the WAN. By acting
on foreknowledge of specific protocol request-response mechanisms, Steelhead appliances
streamline the delivery of data that would normally be delivered in small increments through
large numbers of interactions between the client and server over the WAN. As transactions are
executed between the client and server, the Steelhead appliances intercept each transaction,
compare it to the database of past transactions, and make decisions about the probability of
future events.
Based on this model, if a Steelhead appliance determines there is a high likelihood of a future
transaction occurring, it performs that transaction, rather than waiting for the response from the
server to propagate back to the client and then back to the server. Dramatic performance
improvements result from the time saved by not waiting for each serial transaction to arrive prior
to making the next request. Instead, the transactions are pipelined one right after the other.
Of course, transactions are executed by Steelhead appliances ahead of the client only when it is
safe to do so. To ensure data integrity, Steelhead appliances are designed with knowledge of the
underlying protocols to know when it is safe to do so. Fortunately, a wide range of common
applications have very predictable behaviors and, consequently, TP can enhance WAN
performance significantly. When combined with SDR, TP improves overall WAN performance
up to 100 times.
Common Internet File System (CIFS) Optimization
CIFS is a proposed standard protocol that lets programs make requests for files and services on
remote computers on the Internet. CIFS uses the client/server programming model. A client
program makes a request of a server program (usually in another computer) for access to a file or
to pass a message to a program that runs in the server computer. The server takes the requested
action and returns a response. CIFS is a public or open variation of the Server Message Block
Protocol developed and used by Microsoft.
In the Steelhead appliance, CIFS optimization is enabled by default. Typically, you only disable
CIFS optimization to troubleshoot the system.
Overlapping Opens
Due to the way certain applications handle the opening of files, file locks are not properly
granted to the application in such a way that would allow a Steelhead appliance to optimize
access to that file using TP. To prevent any compromise to data integrity, the Steelhead
appliance only optimizes data to which exclusive access is available (in other words, when locks
are granted). When an oplock is not available, the Steelhead appliance does not perform
application-level latency optimizations but still performs SDR and compression on the data as
well as TCP optimizations. The CIFS overlapping opens feature remedies this problem by having
the server-side Steelhead handle file locking operations on behalf of the requesting application. If
you disable this feature, the Steelhead appliance will still increase WAN performance, but not as
effectively.
Enabling this feature on applications that perform multiple opens of the same file to complete an
operation will result in a performance improvement (for example, CAD applications):
• Optimize the Following Extensions. Specify a list of extensions you want to optimize using
overlapping opens. The default values are: doc, pdf, ppt, sldasm, slddrw, slddwg, sldprt, txt,
vsd, xls.
• Do Not Optimize the Following Extensions. Specify a list of extensions you do not want to
optimize using overlapping opens. The default values are: ldb, mdb.
NOTE: If a remote user opens a file that is optimized using the overlapping opens feature and a second user opens the same file, they might receive an error if the file fails to go through a v3.x
Steelhead appliance or if it does not go through a Steelhead appliance (for example, certain
applications that are sent over the LAN). If this occurs, you should disable overlapping opens for
those applications.
Messaging Application Programming Interface (MAPI) Optimization
MAPI optimization is enabled by default. Only uncheck this box if you want to disable MAPI
optimization. Typically, you disable MAPI optimization to troubleshoot problems with the
system. For example, if you are experiencing problems with Microsoft Outlook clients
connecting to Exchange, you can disable MAPI latency acceleration (while continuing to
optimize with SDR for MAPI).
• Read ahead on attachments
• Read ahead on large emails
• Write behind on attachments
• Write behind on large emails


• MAPI latency optimization fails if user authentication is set too high (it downgrades to SDR/TCP acceleration only, with no TP)
MAPI Prepopulation
Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation
the TCP sessions are broken. With MAPI prepopulation, the Steelhead appliance can start acting
as if it is the mail client. If the client closes the connection, the client-side Steelhead appliance
will keep an open connection to the server-side Steelhead appliance and the server-side
Steelhead appliance will keep the connection open to the server. This allows for data to be
pushed through the data store before the user logs on to the server again. The default timer is set to 96 hours; after that, the connection is reset.
• Optimized MAPI connections held open after client exit (acts like the client left the PC on);
think of it as virtual client
• Keep reading mail until timeout
• No one is ever reconnected to the prepopulation session (including the original user)
• No need for more CALs; no agents to deploy
• Can configure frequency check and timeout or to disable it
• Enables transmission during off times even in consolidated environments
• You can disable the feature
Microsoft® SQL Optimization
Steelhead appliance MS SQL protocol support includes the ability to perform prefetching and
synthetic pre-acknowledgement of queries on database applications. By default, rules that
increase optimization for Microsoft Project Enterprise Edition ship with the unit. This
optimization is not enabled by default, and enabling MS SQL optimization without adding
specific rules will rarely have an effect on any other applications. MS SQL packets must be
carried in TDS (Tabular Data Stream) format for a Steelhead appliance to be able to perform
optimization.
Enable MS SQL protocol support in the Management Console Setup: Optimization Service -
Protocol: MS-SQL page.
You can also use MS SQL protocol optimization to optimize other database applications, but you
must define SQL rules to obtain maximum optimization. If you are interested in enabling the MS
SQL feature for other database applications, contact Riverbed Professional Services. When
contacting Riverbed Professional Services, you will be asked for packet traces of the application.
It is critical that the traces include full packet captures (using the -s 0 option when running tcpdump).
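For example, a full packet capture of MS SQL traffic could be taken from the Steelhead appliance CLI with a command along the following lines. This is only an illustrative sketch: the interface name, capture file, server address, and the assumption that TDS is running on its default TCP port 1433 are placeholders to adapt to your environment.

HOSTNAME # tcpdump -i lan0_0 -s 0 -w /tmp/mssql-trace.cap host 10.1.1.20 and port 1433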
NFS Optimization
You can configure Steelhead appliances to use TP to perform application-level latency
optimization on NFS. Application-level latency optimization improves NFS performance over
high latency WANs.
NFS latency optimization optimizes TCP connections and is only supported for NFS v3.
You can configure NFS settings globally for all servers and volumes, or you can configure NFS
settings that are specific to particular servers or volumes. When you configure NFS settings for a
server, the settings are applied to all volumes on that server unless you override settings for
specific volumes.
• Read-ahead and read caching (checks freshness with modify date)
• Write-behind
• Metadata prefetching and caching
• Convert multiple requests into one larger request
• Special symbolic link handling
TCP/IP
General Operation
Steelhead appliances are typically placed on two ends of the WAN as close to the client and
server as possible (no additional WAN links between the end node and the Steelhead appliance).
By placing Steelhead appliances in the network, the TCP session between client and server can
be intercepted, therefore a level of control over the TCP session can be obtained. TCP sessions
have to be intercepted in order to be optimized; therefore the Steelhead appliances must see all
traffic from source to destination and back. For any given optimized session, there are three
distinct sessions. There is a TCP connection between the client and the client-side Steelhead
appliance, between the server and the server-side Steelhead appliance, and finally a connection
between the two Steelhead appliances.
Common Ports
Ports Used by RiOS
Port Type
7744 Datastore Sync port
7800 In-path port
7801 NAT port
7810 Out-of-path port
7820 Failover port
7830 MAPI Exchange 2003 port
7840 NSPI port
7850 Connection forwarding (neighbor) port

Interactive Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)
Port Type
7 TCP ECHO
23 Telnet
37 UDP/Time
107 Remote Telnet Service
179 Border Gateway Protocol
513 Remote Login
514 Shell
1494, 2598 Citrix
3389 MS WBT, TS/Remote Desktop
5631 PC Anywhere
5900 - 5903 VNC
6000 - 6003 X11

Secure Ports Commonly Passed through by Default on Steelhead Appliances (Partial List)
Port Type
22/TCP ssh
49/TCP tacacs
443/TCP https
465/TCP smtps
563/TCP nntps
585/TCP imap4-ssl
614/TCP sshell
636/TCP ldaps
989/TCP ftps-data
990/TCP ftps
992/TCP telnets
993/TCP imaps
995/TCP pop3s
1701/TCP l2tp
1723/TCP pptp
3713/TCP tftp over tls

RiOS Auto-discovery Process


Auto-discovery is the process by which the Steelhead appliance automatically intercepts and
optimizes traffic on all IP addresses and ports. By default, auto-discovery is applied to all IP
addresses and the ports which are not secure, interactive, or Riverbed well-known ports.
Packet Flow
The sequence below shows the first-connection packet flow for traffic that is classified for optimization. The TCP SYN sent by the client is intercepted by the Steelhead appliance. A TCP
option is attached in the TCP header; this allows the remote Steelhead appliance to recognize
that there is a Steelhead appliance on the other side of the network. When the server-side
Steelhead appliance sees the option (also known as a TCP probe) it responds to the option by
sending a TCP SYN/ACK back. After auto-discovery has taken place, the Steelhead appliances
continue to set up the TCP inner session and the TCP outer sessions.
Packet flow between the client, the client-side Steelhead appliance (CSH), the server-side Steelhead appliance (SSH), and the server:
1. Client → Server: IP(C)→IP(S) SYN (intercepted by the client-side Steelhead appliance)
2. CSH → Server: IP(C)→IP(S) SYN + Probe
3. SSH → Client: IP(S)→IP(C) SYN/ACK + Probe response (the probe result is cached for 10 seconds)
4. CSH → SSH: SYN (inner connection; the SSH is listening on port 7800)
5. SSH → CSH: SYN/ACK
6. CSH → SSH: ACK, followed by setup information
7. SSH → Server: IP(C)→IP(S) SYN (server-side outer connection)
8. Server → SSH: IP(S)→IP(C) SYN/ACK
9. SSH → CSH: connect result
10. SSH → Server: IP(C)→IP(S) ACK
11. CSH → Client: IP(S)→IP(C) SYN/ACK
12. Client → CSH: IP(C)→IP(S) ACK
The connect result is cached until failure.

TCP Option
The TCP option used for auto-discovery is 0x4C which is 76 in decimal format. The client-side
Steelhead appliance attaches a 10 byte option to the TCP header; the server-side Steelhead
appliance attaches a 14 byte option in return. Note that this is only done in the initial discovery
process and not during connection setup between the Steelhead appliances and the outer TCP
sessions.
Connection Pooling
General Operation
By default, all auto-discovered Steelhead appliance peers will have a default connection pool of
20. The connection pool is a user configurable value which can be configured for each Steelhead
appliance peer. The purpose of connection pooling is to avoid the TCP handshake for the inner
session between the Steelhead appliances across the high latency WAN. By pre-creating these
sessions between peer Steelhead appliances, when a new connection request is made by a client,
the client-side Steelhead appliance can simply use the connections in the pool. Once a
connection is pulled from the pool, a new connection is created to take its place so as to maintain
the specified number of connections.
In-path Rules
General Operation
In-path rules allow a client-side Steelhead appliance to determine what action to perform when
intercepting a new client connection (the first TCP SYN packet for a connection). The action
taken depends on the type of in-path rule selected and is outlined in detail below. It is important
to note that the rules are matched based on source/destination IP information, destination port,
and/or VLAN, and are processed from the first rule in the list to the last (top down). The rules
processing stops when the first rule matching the parameters specified is reached, at which point
the action selected by the rule is taken. In version 3.x, Steelhead appliances have three
passthrough rules by default, and a fourth implicit rule to auto-discover remote Steelhead appliances. Traffic that does not match the first three rules is optimized via auto-discovery. The three default passthrough rules cover port groupings matching interactive traffic (for example, Telnet, VNC, RDP), secure traffic (for example, ssh, https), and Riverbed protocol ports (for example, 7800, 7810).
Different Types and Their Function
• Passthrough. Passthrough rules identify traffic that is passed through the network
unoptimized. For example, you may define passthrough rules to exclude subnets from
optimization. Traffic is also passed through when the Steelhead appliance is in bypass mode.
(Passthrough might occur because of in-path rules, because the connection was established
before the Steelhead appliance was put in place, or before the Steelhead service was
enabled.)
• Fixed Target. Fixed-target rules specify out-of-path Steelhead appliances near the target
server that you want to optimize. Determine which servers you want the Steelhead appliance
to optimize (and, optionally which ports), and add rules to specify the network of servers,
ports, port labels, and out-of-path Steelhead appliances to use.
• Auto-discovery. Auto-discovery is the process by which the Steelhead appliance
automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-
discovery is applied to all IP addresses and the ports which are not secure, interactive, or
default Riverbed ports. Defining in-path rules modifies this default setting.
• Discard. Packets for the connection that match the rule are dropped silently. The Steelhead
appliance filters out traffic that matches the discard rules. This process is similar to how
routers and firewalls drop disallowed packets; the connection-initiating device has no
knowledge of the fact that its packets were dropped until the connection times out.
• Deny. When packets for connections match the deny rule, the Steelhead appliance actively
tries to reset the connection. With deny rules, the Steelhead appliance actively tries to reset
the TCP connection being attempted. Using an active reset process rather than a silent
discard allows the connection initiator to know that its connection is disallowed.
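As an illustrative sketch only (option names and defaults vary by RiOS version and should be verified against the Steelhead Command-Line Interface Reference Guide), in-path rules can be added from the CLI in a form similar to the following, where the subnets, port, and rule numbers are placeholder values:

HOSTNAME (config) # in-path rule pass-through dstaddr 192.168.50.0/24 rulenum 4
HOSTNAME (config) # in-path rule auto-discover dstaddr 10.10.0.0/16 dstport 80 rulenum 5

Because rules are processed top down, the placement chosen with rulenum determines which traffic each rule actually sees.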
Peering Rules
Applicability and Conditions of Use
Peering Rules
Configuring peering rules defines what to do when a Steelhead appliance receives an auto-
discovery probe from another Steelhead appliance. As such, the scope of a peering rule is limited
to a server-side Steelhead appliance (the one receiving the probe). Note that peering rules on an
intermediary Steelhead appliance (or server-side) will have no effect in preventing optimization
with a client-side Steelhead appliance if it is using a fixed target rule designating the
intermediary Steelhead appliance as its destination (since there is no auto-discovery probe in a
fixed target rule).

Topology: Site A contains the client and Steelhead1; WAN 1 connects Site A to Site B, which contains Steelhead2 and Server1; WAN 2 connects Site B to Site C, which contains Steelhead3 and Server2.

Server1 is on the same LAN as Steelhead2 so connections from the client to Server1 should be
optimized between Steelhead1 and Steelhead2. Concurrently, Server2 is on the same LAN as
Steelhead3 and connections from the client to Server2 should be optimized between Steelhead1
and Steelhead3.
• You do not need to define any rules on Steelhead1 or Steelhead3.
• Add peering rules on Steelhead2 to process connections normally going to Server1 and to
pass through all other connections so that connections to Server2 are not optimized by
Steelhead2.
• A rule to pass through inner connections between Steelhead1 and Steelhead3 is already in place by default (by default, connections to destination port 7800 are covered by the port label “RBT-Proto”).
This configuration causes connections going to Server1 to be intercepted by Steelhead2, and
connections going to anywhere else to be intercepted by another Steelhead appliance (for
example, Steelhead3 for Server2).
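As a hedged sketch of the Steelhead2 configuration described above (the Server1 address is a placeholder and the exact option names should be checked in the CLI reference), the peering rules might look roughly like this:

HOSTNAME (config) # in-path peering rule accept dest 10.2.2.10/32 rulenum 1
HOSTNAME (config) # in-path peering rule pass rulenum 2

The first rule accepts auto-discovery probes for connections destined to Server1; the second passes through all other probes so that they can be answered by a Steelhead appliance further along the path (for example, Steelhead3).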
Overcoming Peering Issues Using Fixed-Target Rules
If you do not enable automatic peering or define peering rules as described in the previous
sections, you must define:
• A fixed-target rule on Steelhead1 to go to Steelhead3 for connections to Server2 (a sketch of this rule follows the list).
• A fixed-target rule on Steelhead3 to go to Steelhead1 for connections to servers in the same
site as Steelhead1.
• If you have multiple branches that go through Steelhead2, you must add a fixed-target rule
for each of them on Steelhead1 and Steelhead3.
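For example, the fixed-target rule on Steelhead1 for connections to Server2 might be entered roughly as follows, where 10.3.3.5 stands in for the in-path address of Steelhead3 and 10.3.3.0/24 for the Server2 subnet (illustrative only; confirm the syntax in the CLI reference):

HOSTNAME (config) # in-path rule fixed-target target-addr 10.3.3.5 target-port 7800 dstaddr 10.3.3.0/24 rulenum 1

Because the rule names its peer explicitly, no auto-discovery probe is sent for matching connections, which is why peering rules on intermediary appliances have no effect on it.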

Steelhead Appliance Models and Capabilities


Model Specifications

Steelhead Appliance Ports


A Steelhead appliance has Console, Aux, Primary, WAN, and LAN ports.
• The Primary and Aux ports cannot share the same network subnet.
• The Primary and in-path interfaces can share the same network subnet.
• You must use the Primary port on the server-side for out-of-path deployment.
• You cannot use the Auxiliary port for anything other than management.
• If the Steelhead appliance is deployed between two switches, both the LAN and WAN ports
must be connected with straight-through cables.
Interface Naming Conventions
The interface names for the bypass cards are a combination of the port type, the slot number, and the pair number (lan<slot>_<pair>, wan<slot>_<pair>). For example, if a four-port bypass card is located in slot 0 of your appliance, the interface names are lan0_0, wan0_0, lan0_1, and wan0_1 respectively. Alternatively, if the bypass card is located in slot 1 of your appliance, the interface names are lan1_0, wan1_0, lan1_1, and wan1_1 respectively. The maximum number of pairs is six (three four-port bypass cards).
Console Emulation Settings
• 9600 bps
• 8 Data bits
• No Parity
• 1 Stop bit
• VT100 Emulation
• No flow control

II. Deployment
Deployment Methods
Physical In-path
In a physical in-path deployment, the Steelhead appliance is physically in the direct path between
clients and servers. The clients and servers continue to see client and server IP addresses and the
Steelhead appliance bridges traffic from its LAN-facing side to its WAN-facing side (and vice versa). Physical in-path configurations are suitable for any location where the total bandwidth is
within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances. It
is generally one of the simplest deployment options and among the easiest to maintain.
Logical In-path
In a logical in-path deployment, the Steelhead appliance is logically in the path between clients
and servers. In a logical in-path deployment, clients and servers continue to see client and server
IP addresses. This deployment differs from a physical in-path deployment in that a packet
redirection mechanism is used to direct packets to Steelhead appliances that are not in the
physical path of the client or server.
Commonly used technologies for redirection are: Layer-4 switches, Web Cache Communication
Protocol (WCCP), and Policy Based Routing (PBR).
Server-Side Out-of-Path
A server-side out-of-path deployment is a network configuration in which the Steelhead
appliance is not in the direct or logical path between the client and the server. Instead, the server-
side Steelhead appliance is connected through the primary interface and listens on port 7810 to
connections coming from client-side Steelhead appliances. The Steelhead appliance acts as a proxy; unlike in-path deployments, where no NAT is performed and the server sees the original client IP address, a server-side out-of-path deployment source-NATs connections to the primary interface address of the server-side Steelhead appliance. A server-side out-of-path configuration is suitable for data center locations when
physical in-path or logical in-path configurations are not possible or when certain forms of NAT
are done between Steelhead appliances. With server-side out-of-path, client IP visibility is no
longer available to the server (due to the NAT) and optimization initiated from the server side is
not possible (since there is no redirection of packets to the Steelhead appliance).
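A rough sketch of the two sides of such a deployment follows. It assumes the server-side out-of-path service is turned on with a command along the lines of out-of-path enable and that the client side uses a fixed-target rule aimed at the server-side primary interface on port 7810; the addresses and subnet are placeholders, and the commands should be verified against the CLI reference.

On the server-side Steelhead appliance (primary interface 10.5.5.10):
HOSTNAME (config) # out-of-path enable

On the client-side Steelhead appliance:
HOSTNAME (config) # in-path rule fixed-target target-addr 10.5.5.10 target-port 7810 dstaddr 10.5.0.0/16 rulenum 1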
Physical Device Cabling
Steelhead appliances have multiple physical and virtual interfaces. The primary interface is
typically used for management purposes, data store synchronization (if applicable), and for
server-side out-of-path configurations. The primary interface can be assigned an IP address and
connected to a switch. You would use a straight-through cable for this configuration.
The LAN and WAN interfaces are purely L1/L2. No IP addresses can be assigned. Instead, a
logical L3 interface is created. This is the “in-path” interface and it is designated a name on a per
slot and port basis (in LAN/WAN pairs). A bypass card (or in-path card) in slot0 with just one
LAN and one WAN interface will have a logical interface called inpath0_0. In-path interfaces
for a 4-port card in slot1 will get inpath1_0 and inpath1_1, representing each pair of LAN/WAN
ports respectively.
Inpath1_0 will represent LAN1_0 and WAN1_0. Inpath1_1 will represent LAN1_1 and
WAN1_1.

For a physical in-path deployment, when connecting the LAN and WAN interfaces to the network, treat each of them as you would a router port. When connecting to a router, host, or firewall, use a crossover cable. When connecting to a switch, use a straight-through cable. The Steelhead appliance supports auto-MDIX (medium dependent interface crossover); however, if you use the wrong cables you run the risk of breaking the connection between the components the Steelhead appliance is placed between, especially in bypass, because those components may not support auto-MDIX.
For a virtual in-path deployment the WAN interface needs to be connected. The LAN interface
does not need to be connected and will be shut down automatically as soon as the virtual in-path
option is enabled in the Steelhead appliance's configuration.
For server-side out-of-path deployments only the primary interface needs to be connected.
In-path
In-path Networks
Physical in-path configurations are suitable for locations where the total bandwidth is within the
limits of the installed Steelhead appliance or serial cluster of Steelhead appliances.
The Steelhead appliance can be physically connected to both access ports and trunks. When the Steelhead appliance is placed on a trunk, the in-path interface has to be able to tag its traffic with the correct VLAN number. The supported trunking protocol is 802.1Q (Dot1Q). A tag can be assigned via
the GUI or the CLI. The CLI command for this is:
HOSTNAME (config) # in-path interface inpathx_x vlan xxx
There are several variations of the in-path deployment. Steelhead appliances could be placed in
series to be redundant. Peering rules based on a peer IP address will have to be applied to both
Steelhead appliances to avoid peering between each other. When using 4-port cards, and thus
multiple in-path IP addresses, all addresses will have to be defined to avoid peering.
A serial cluster is a failover design that can be used to mitigate the risk of network instabilities and outages caused by a single Steelhead appliance failure (typically the result of excess WAN bandwidth consumption once data reduction is no longer occurring). When the maximum number
of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new
connections. This allows the next Steelhead appliance in the cluster the opportunity to intercept
the new connections, if it has not reached its maximum number of connections. The in-path
peering rules and in-path rules are used so that the Steelhead appliances in the cluster know not
to intercept connections between themselves.
Appliances in a failover deployment process the peering rules you specify in a spill-over fashion.
A keepalive method is used between two Steelhead appliances to monitor each other's status and
set a master and backup state for both Steelhead appliances. It is recommended to assign the
LAN-side Steelhead appliance to be the master due to the amount of passthrough traffic from
Steelhead to client or server. Optionally, data stores can be synchronized to ensure warm
performance in case of a failure.
If the Steelhead appliances are deployed in parallel with each other, measures need to be taken to prevent asymmetric traffic from being passed through without optimization. This usually
occurs when two or more routing points in the network exist where traffic is spread over the
links simultaneously. Connection forwarding can be used to exchange flow information between
the Steelhead appliances in the parallel deployment. Multiple Steelhead appliances can be
bundled together.
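As a hedged example, connection forwarding between two parallel appliances is typically enabled on each appliance by turning on neighbor communication and naming the other appliance's in-path IP address, along the following lines (the neighbor name and address are placeholders; confirm the syntax in the CLI reference):

HOSTNAME (config) # steelhead communication enable
HOSTNAME (config) # steelhead name branch-sh2 main-ip 10.1.1.6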

Virtual In-path
Introduction to Virtual In-path Deployments
In a virtual in-path deployment, the Steelhead is virtually in the path between clients and servers.
Traffic moves in and out of the same WAN interface. This deployment differs from a physical
in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead
appliances that are not in the physical path of the client or server.
Redirection mechanisms:
• Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you
have multiple Steelhead appliances in your network to manage large bandwidth
requirements.
• PBR. Policy-Based Routing (PBR) enables you to redirect traffic to a Steelhead appliance
that is configured as virtual in-path device. PBR allows you to define policies to redirect
packets instead of relying on routing protocols. You define policies to redirect traffic to the
Steelhead appliance and policies to avoid loop-back.
• WCCP. WCCP (Web Cache Communication Protocol) was originally implemented on Cisco
routers, multi-layer switches, and web caches to redirect HTTP requests to local web caches
(version 1). Version 2, which is supported on Steelhead appliances, can redirect any type of
connection from multiple routers or web caches and different ports.
PBR
Introduction to PBR
PBR is a router configuration that allows you to define policies to route packets instead of
relying on routing protocols. It is enabled on an interface basis and packets coming into a PBR-
enabled interface are checked to see if they match the defined policies. If they do match, the
packets are routed according to the rule defined for the policy. If they do not match, packets are
routed based on the usual routing table. The rules can redirect the packets to a specific IP
address.
To avoid an infinite loop, PBR must be enabled on the interfaces where the client traffic is
arriving and disabled on the interfaces corresponding to the Steelhead appliance. The common
best practice is to place the Steelhead appliance on a separate subnet.
One of the major issues with PBR is that it can black hole traffic (drop all TCP connections to a
destination) if the device it is redirecting to fails. To avoid black holing traffic, PBR must have a
way of tracking whether the PBR next hop is available. You can enable this tracking feature in a
route map with the following Cisco router command:
set ip next-hop verify-availability
With this command, PBR attempts to verify the availability of the next hop using information
from CDP. If that next hop is unavailable, it skips the actions specified in the route map. PBR
checks availability in the following manner:
1. When PBR first attempts to send to a PBR next hop, it checks the CDP neighbor table to see
if the IP address of the next hop appears to be available. If so, it sends an Address Resolution
Protocol (ARP) request for the address, resolves it, and begins redirecting traffic to the next
hop (the Steelhead appliance).

2. After PBR has verified the next hop, it continues to send to the next hop as long as it obtains
answers from the ARP request for the next hop IP address. If the ARP request fails to obtain
an answer, it then rechecks the CDP table. If there is no entry in the CDP table, it no longer
uses the route map to send traffic. This verification provides a failover mechanism.
In more recent versions of the Cisco IOS software, there is a feature called PBR with Multiple
Tracking Options. In addition to the old method of using CDP information, it allows methods
such as HTTP and ping to be used to determine whether the PBR next hop is available. Using
CDP allows you to run with older IOS 12.x versions.
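Putting these pieces together, a minimal Cisco IOS PBR configuration that redirects TCP traffic arriving on a client-facing interface to a Steelhead appliance at 10.1.1.5 (the address, ACL number, and interface are placeholders for illustration) and verifies next-hop availability might look like this:

access-list 100 permit tcp any any
!
route-map STEELHEAD_REDIRECT permit 10
 match ip address 100
 set ip next-hop verify-availability
 set ip next-hop 10.1.1.5
!
interface FastEthernet0/0
 ip policy route-map STEELHEAD_REDIRECT

Remember that PBR must not be applied to the interface facing the Steelhead appliance itself, or traffic returning from the appliance would be redirected again in a loop.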
WCCP Deployments
Introduction to WCCP
WCCP is a stateful protocol that the router and Steelhead appliance use to redirect traffic to the Steelhead appliance so that it can be optimized. Several functions have to be covered to make the protocol stateful and scalable: failover, load distribution, and negotiation of connection parameters all have to be communicated throughout the cluster that the Steelhead appliances and router form upon successful negotiation. The protocol has four messages to encompass all of these functions:
• Here I am. Sent by Steelhead appliances to announce themselves.
• I see you. Sent by WCCP enabled routers to respond to announcements.
• Redirect Assign. Sent by the designated Steelhead appliance to determine flow distribution.
• Removal Query. Sent by router to check a Steelhead appliance after missed “here I am”
messages.
When you configure WCCP on a Steelhead appliance:
1. Routers and Steelhead appliances are added to the same service group.
2. Steelhead appliances announce themselves to the routers.
3. Routers respond with their view of the service group.
4. One Steelhead appliance becomes the designated cache engine (CE) and tells the routers how to redirect traffic among
the Steelhead appliances in the service group.
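A simplified sketch of both sides of a WCCP service group follows. The service group number (61), addresses, and interfaces are arbitrary examples, and the RiOS commands shown are approximate—check the Steelhead Command-Line Interface Reference Guide for the exact options supported by your version.

On the Cisco router or Layer-3 switch:
ip wccp 61
interface FastEthernet0/0
 ip wccp 61 redirect in
interface Serial0/1
 ip wccp 61 redirect in

On the Steelhead appliance (virtual in-path, WAN interface connected):
HOSTNAME (config) # in-path enable
HOSTNAME (config) # wccp enable
HOSTNAME (config) # wccp service-group 61 routers 10.1.1.1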
How Steelhead Appliances Communicate With Routers
Steelhead appliances can use one of the following methods to communicate with routers:
• Unicast UDP. The Steelhead appliance is configured with the IP address of each router. If
additional routers are added to the service group, they must be added on each Steelhead
appliance.
• Multicast UDP. The Steelhead appliance is configured with a multicast group. If additional
routers are added, you do not need to add or change configuration settings on the Steelhead
appliances.
Redirection
By default, all TCP traffic is redirected. Optionally, a redirect-list can be defined so that only traffic matching the redirect-list is redirected. In a WCCP configuration, a redirect-list refers to an ACL configured on the router that selects the traffic to be redirected.
Traffic is redirected using one of the following schemes:
• GRE (Generic Routing Encapsulation). Each data packet is encapsulated in a GRE packet
with the Steelhead appliance IP address configured as the destination. This scheme is
applicable to any network.
• L2 (Layer-2). Each packet MAC address is rewritten with a Steelhead appliance MAC
address. This scheme is possible only if the Steelhead appliance is connected to a router at
Layer-2.

• Either. The either value uses L2 (Layer-2) first—if Layer-2 is not supported, GRE is used.
This is the default setting.
You can configure your Steelhead appliance to not encapsulate return packets. This allows your
WCCP Steelhead appliance to negotiate with the router or switch as if it were going to send gre-
return packets, but to actually send l2-return packets. This configuration is optional but
recommended when connected at L2 directly. The command to override WCCP packet return
negotiation is wccp l2-return enable. Be sure the network design permits this.
Load Balancing and Failover
WCCP supports unequal load balancing. Traffic is redirected based on a hashing scheme and the
weight of the Steelhead appliances. Each router uses a 256-bucket Redirection Hash Table to
distribute traffic for a Service Group across the member Steelhead appliances. It is the
responsibility of the Service Group's designated Steelhead appliance to assign each router's
Redirection Hash Table. The designated Steelhead appliance uses a
WCCP2_REDIRECT_ASSIGNMENT message to assign the routers' Redirection Hash Tables.
This message is generated following a change in Service Group membership and is sent to the
same set of addresses to which the Steelhead appliance sends WCCP2_HERE_I_AM messages.
A router will flush its Redirection Hash Table if a WCCP2_REDIRECT_ASSIGNMENT is not
received within five HERE_I_AM_T seconds of a Service Group membership change. The
hash algorithm can use several different input fields to produce an 8-bit output (the bucket value). The default input fields are the source and destination IP addresses of the redirected packet; the source and destination TCP ports, or any combination of these fields, can also be used.
The weight determines the percentage of traffic a Steelhead appliance in a cluster receives, while the hashing algorithm determines which flow is redirected to which Steelhead appliance. The default
weight is based on the Steelhead appliance model number. The weight is heavier for models that
support more connections. You can modify the default weight if desired.
Using weights, you can also create an active/passive cluster by assigning a weight of 0 to the passive Steelhead appliance. The passive appliance receives traffic only when the active Steelhead appliance fails.
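As a purely illustrative sketch, and assuming the weight keyword is supported as part of the wccp service-group command on your RiOS version (verify with the CLI help before use), an active/passive pair might be configured along these lines:
Active Steelhead (config) # wccp service-group 90 routers 10.1.1.1 weight 100
Passive Steelhead (config) # wccp service-group 90 routers 10.1.1.1 weight 0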
Advanced WCCP Configuration
Using Multicast Groups
If you add multiple routers and Steelhead appliances to a service group, you can configure them
to exchange WCCP protocol messages through a multicast group. Configuring a multicast group
is advantageous because if a new router is added, it does not need to be explicitly added on each
Steelhead appliance.
Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Router
On the router, at the system prompt, enter the following set of commands:
Router> enable
Router# configure terminal
Router(config)# ip wccp 90 group-address 224.0.0.3
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in
Router(config-if)# ip wccp 90 group-listen
Router(config-if)# end
Router# write memory
NOTE: Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Steelhead Appliance
On the WCCP Steelhead appliance, at the system prompt, enter the following set of commands:
WCCP Steelhead > enable
WCCP Steelhead # configure terminal
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp mcast-ttl 10
WCCP Steelhead (config) # wccp service-group 90 routers 224.0.0.3
WCCP Steelhead (config) # write memory
WCCP Steelhead (config) # exit
Limiting Redirection by TCP Port
By default all TCP ports are redirected, but you can configure the WCCP Steelhead appliance to
tell the router to redirect only certain TCP source or destination ports. You can specify a maximum of seven ports per service group.
Using Access Lists for Specific Traffic Redirection
If redirection is based on traffic characteristics other than ports, you can use ACLs on the router
to define what traffic is redirected.
ACL considerations:
• ACLs are processed in order, from top to bottom. As soon as a particular packet matches a
statement, it is processed according to that statement and the packet is not evaluated against
subsequent statements. Therefore, the order of your access-list statements is very important.
• If no port information is explicitly defined, all ports are assumed.
• By default, all lists include an implicit deny all entry at the end, which ensures that traffic that is not explicitly permitted is denied. You cannot change or delete this implicit entry.
Access Lists: Best Practice
To avoid requiring the router to do extra work, Riverbed recommends that you create an ACL that redirects only TCP traffic to the Steelhead appliance. When a WCCP-configured Steelhead appliance receives UDP, GRE, ICMP, or other non-TCP traffic, it returns the traffic to the router.
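For example, the following sketch applies a redirect-list so that only TCP traffic is redirected to service group 90 (the ACL number, interface, and direction are illustrative):
Router(config)# access-list 120 permit tcp any any
Router(config)# ip wccp 90 redirect-list 120
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in
The implicit deny at the end of ACL 120 keeps all non-TCP traffic from being redirected.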
Verifying and Troubleshooting WCCP Configuration
Checking the Router Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip wccp
Router#show ip wccp 90 detail
Router#show ip wccp 90 view
Verifying WCCP Configuration on an Interface
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip interface
Look for WCCP status messages near the end of the output.
You can trace WCCP packets and events on the router.
Checking the Access List Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show access-lists <access_list_number>
Tracing WCCP Packets and Events on the Router
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#debug ip wccp events
WCCP events debugging is on
Router#debug ip wccp packets
WCCP packet info debugging is on
Router#term mon

Server-Side Out-of-Path Deployments


Out-of-path Networks
An out-of-path deployment is a network configuration in which the Steelhead appliance is not in
the direct physical or logical path between the client and the server. In an out-of-path
deployment, the Steelhead appliance acts as a proxy. An out-of-path configuration is suitable for
data center locations where physical in-path or virtual in-path configurations are not possible.
In an out-of-path deployment, the client-side Steelhead appliance is configured as an in-path
device, and the server-side Steelhead appliance is configured as an out-of-path device.
The command to enable server-side out-of-path is:
HOSTNAME (config) # out-of-path enable

A fixed-target rule is applied on the client-side Steelhead appliance to make sure the TCP session is intercepted and statically sent to the out-of-path Steelhead appliance on the server side. When out-of-path is enabled on the server-side Steelhead appliance, it starts listening on port 7810 for incoming connections from a client-side Steelhead appliance.
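As a purely hypothetical sketch (the parameter names and values shown here are illustrative and should be verified against the CLI help for your RiOS version), a client-side fixed-target rule pointing at the server-side appliance might look similar to:
Client Steelhead (config) # in-path rule fixed-target target-addr 10.0.1.5 target-port 7810 dstaddr 10.0.2.0/24 rulenum 1
A second, backup target appliance can be specified in the same rule for the failover deployment described later.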

Because the Steelhead appliance is not in the path between the client and the server, it cannot present the client's IP address to the server. The server sees the IP address of the Steelhead appliance as the source of the connection, so the packets are returned to the Steelhead appliance instead of the client. This is necessary to make sure that the bidirectional traffic is seen by the Steelhead appliance. Also keep in mind that optimization only occurs when the TCP connection is initiated by the client.
[Figure: Server-side out-of-path connection setup. The client sends a SYN to the server (SEQ1); the client-side Steelhead appliance intercepts it and opens an inner connection to the server-side Steelhead appliance, which is listening on port 7810, and exchanges setup information. The server-side Steelhead appliance then opens a new connection to the server (SEQ2), returns the connect result, and a SYN/ACK carrying the server's IP address is returned to the client to complete the handshake. The connect result is cached until failure.]

Out-of-Path, Failover Deployment


An out-of-path, failover deployment serves networks where an in-path deployment is not an
option. This deployment is cost effective, simple to manage, and provides redundancy.
When both Steelhead appliances are functioning properly, the connections traverse the master
appliance. If the master Steelhead appliance fails, subsequent connections traverse the backup
Steelhead appliance. When the master Steelhead appliance is restored, the next connection
traverses the master Steelhead appliance. If both Steelhead appliances fail, the connection is
passed through unoptimized to the server. To configure this failover behavior, specify multiple target appliances in the fixed-target in-path rule on the client-side Steelhead appliance.

Hybrid: In-Path and Server-Side Out-of-Path Deployment


A hybrid deployment serves offices that have a single WAN routing point and local users, and where the Steelhead appliance must be referenced from remote sites as an out-of-path device (for example, to avoid mistaken auto-discovery or to bypass intermediary Steelhead appliances).
The following figure illustrates the client-side of the network where the Steelhead appliance is
configured as both an in-path and server-side out-of-path device.

[Figure: Client-side network for the hybrid deployment, showing a firewall/VPN device, the Steelhead appliance's Primary (PRI) interface, and a DMZ containing an FTP server and a web server.]

In this hybrid design, a client-side Steelhead appliance (not shown) would use a typical auto-
discovery process to optimize any data going to or coming from the clients shown. If, however, a remote user wants optimization to the DMZ shown above, the standard auto-discovery process does not work because the packet flow prevents the auto-discovery probe from ever reaching the Steelhead appliance. To remedy this, a fixed-target rule matching the destination address of the DMZ and targeting the Primary (PRI) interface of the Steelhead appliance ensures that the traffic reaches the Steelhead appliance and, due to the server-side out-of-path NAT process, returns to it for optimization on the return path.
Asymmetric Route Detection
Asymmetric auto-detection enables Steelhead appliances to detect the presence of asymmetry
within the network. Asymmetry is detected by the client-side Steelhead appliances. Once
detected, the Steelhead appliance passes asymmetric traffic through unoptimized, allowing the
TCP connections to continue to work. The first TCP connection for a pair of addresses might be
dropped because during the detection process the Steelhead appliances have no way of knowing
that the connection is asymmetric.
If asymmetric routing is detected, an entry is placed in the asymmetric routing table and any
subsequent connections from that IP address pair will be passed through unoptimized. Further
connections between these hosts are not optimized until that particular asymmetric routing cache
entry times out.
Type: Complete Asymmetry
Description: Packets traverse both Steelhead appliances going from the client to the server but bypass both Steelhead appliances on the return path.
• Asymmetric Routing Table: bad RST
• Log: Sep 5 11:16:38 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.111.19 and 10.11.25.23 detected (bad RST)

Type: Server-Side Asymmetry
Description: Packets traverse both Steelhead appliances going from the client to the server but bypass the server-side Steelhead appliance on the return path.
• Asymmetric Routing Table: bad SYN/ACK
• Log: Sep 7 16:17:25 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.25.23:5001 and 10.11.111.19:33261 detected (bad SYN/ACK)

Type: Client-Side Asymmetry
Description: Packets traverse both Steelhead appliances going from the client to the server but bypass the client-side Steelhead appliance on the return path.
• Asymmetric Routing Table: no SYN/ACK
• Log: Sep 7 16:41:45 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.111.19:33262 and 10.11.25.23:5001 detected (no SYN/ACK)

Type: Multi-SYN Retransmit
Description: There are two types of Multi-SYN Retransmit. Probe-filtered occurs when the client-side Steelhead appliance sends out multiple SYN+ frames and does not get a response. SYN-rexmit occurs when the client-side Steelhead appliance receives multiple SYN retransmits from a client and does not see a SYN/ACK packet from the destination server.
• Asymmetric Routing Table: probe-filtered(not-AR)
• Log: Sep 13 20:59:16 gen-sh102 kernel: [intercept.WARN] it appears as though probes from 10.11.111.19 to 10.11.25.23 are being filtered. Passing through connections between these two hosts.

Connection Forwarding
In asymmetric networks, a client request traverses a different network path from the server
response. Although the packets traverse different paths, to optimize a connection, packets
traveling in both directions must pass through the same client and server Steelhead appliances.
If you have one path (through Steelhead-2) from the client to the server and a different path
(through Steelhead-3) from the server to the client, you need to enable in-path connection
forwarding and configure the Steelhead appliances to communicate with each other. These
Steelhead appliances are called neighbors and exchange connection information to redirect
packets to each other.
You can configure multiple neighbors for a Steelhead appliance. Neighbors can be placed in the
same physical site or in different sites, but the latency between them should be small because the
packets traveling between them are not optimized.
When a SYN arrives on Steelhead-2, it sends a message on port 7850 to Steelhead-3 indicating that it is expecting packets for that connection. Steelhead-3 acknowledges, and once Steelhead-2 receives that confirmation it continues with the SYN+ out to the WAN. When the SYN/ACK+ comes back, if it arrives at Steelhead-3, Steelhead-3 encapsulates the packet and forwards it back to Steelhead-2. Once the connection has been established, there is no more encapsulation between the two Steelhead appliances for that flow.

If a subsequent packet arrives on Steelhead-3, it will perform the destination IP/port rewrite. The
Steelhead appliance simply changes the destination IP of the packet to that of the neighbor
Steelhead appliance. No encapsulation is involved later on in the flow.
In WCCP deployments, connection forwarding can also be used to prevent outages whenever the cluster membership and the redirection table change. The default behavior of connection forwarding is that when a neighbor is lost, the Steelhead appliance that lost the neighbor also passes connections through, because it assumes that traffic is routed asymmetrically. In WCCP deployments this assumption does not hold, so this behavior has to be avoided. The command in-path neighbor allow-failure overrides the default behavior and allows the Steelhead appliances to continue optimizing. Be sure you understand the implications of this command before configuring it in a production environment.
Commands to enable connection forwarding:
in-path neighbor enable
in-path neighbor ip address x.x.x.x
in-path neighbor allow-failure {optional}
For neighbors with multiple in-path interfaces, only the IP address of the first in-path interface has to be specified.
Simplified Routing
Simplified routing collects IP address to next-hop MAC address mappings from the packets the Steelhead appliance receives and uses them to address traffic. Enabling simplified routing eliminates the need to add static
routes when the Steelhead appliance is in a different subnet from the client and the server.
Without simplified routing, if a Steelhead appliance is installed in a different subnet from the
client or server, you must define one router as the default gateway and optionally define static
routes for the other subnets.
Without static routes or other forms of routing intelligence, packets can end up flowing through the Steelhead appliance twice, causing packet ricochet. This can lead to
broken QoS models, firewalls blocking packets, and a performance decrease. Enabling simplified
routing eliminates these issues.
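Simplified routing is enabled from the CLI with a command along the following lines; the option keyword (for example, all) is an assumption and varies by RiOS version, so verify it with the CLI help before use:
HOSTNAME (config) # in-path simplified routing all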
Datastore Synchronization
In a serial failover scenario the data stores are not synchronized by default. When the master
Steelhead appliance fails, the backup Steelhead appliance will take over but users will experience
cold performance again. Datastore synchronization can be turned on to exchange data store
content. This can be done via either the primary or the auxiliary interface. The synchronization process runs on port 7744, and the reconnect timer is set to 30 seconds. Datastore synchronization can only occur between Steelhead appliances of the same model and can only be used in pairs.
The commands to enable automatic datastore synchronization are:
HOSTNAME (config) #datastore sync peer-ip "x.x.x.x"
HOSTNAME (config) #datastore sync port "7744"
HOSTNAME (config) #datastore sync reconnect "30"
HOSTNAME (config) #datastore sync master
HOSTNAME (config) #datastore sync enable

If you have not deployed datastore synchronization it is also possible to manually send the data
from one Steelhead appliance to another. The receiving Steelhead appliance will have to start a
listening process on the primary/auxiliary interface. The sending Steelhead appliance will have
to push the data to the IP address of the primary interface.
The commands to start this are:
HOSTNAME (config) # datastore receive port xxxx
HOSTNAME (config) # datastore send addr x.x.x.x port xxxxx

Authentication and Authorization


Authentication
The Steelhead appliance can use a RADIUS or TACACS+ authentication system for logging in
administrative and monitor users. The following methods for user authentication are provided
with the Steelhead appliance:
• local
• RADIUS
• TACACS+
The order in which authentication is attempted is based on the order specified in the AAA
method list. The local value must always be specified in the method list.
The authentication methods list provides backup methods should a method fail to authenticate a
user. If a method denies a user or is not reachable, the next method in the list is tried. If a method has multiple servers (assuming the method contacts authentication servers) and a server time-out is encountered, the next server in the list is tried. If the server being contacted issues an authentication reject, no other servers for that method are tried and the next authentication method in the list is attempted. If no method validates the user, the user is not allowed access to the appliance.
The Steelhead appliance does not have the ability to set a per interface authentication policy. The
same default authentication method list is used for all interfaces. You cannot configure
authentication methods with subsets of the RADIUS or TACACS+ servers specified (that is,
there are no server groups).
When configuring the authentication server, it is important to specify the service rbt-exec along
with the appropriate custom attributes for authorization. Authorization can be based on either the
admin account or the monitor user account by using local-user-name=admin or local-user-
name=monitor, respectively.
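On the Steelhead appliance side, an illustrative sketch using the commands listed in the table below (the server address, key, and method order are examples only, and the authorization mapping shown is an assumption) might be:
HOSTNAME (config) # radius-server host 10.0.0.10
HOSTNAME (config) # radius-server key examplekey
HOSTNAME (config) # aaa authentication login default radius local
HOSTNAME (config) # aaa authorization map default-user monitor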
The following CLI commands are available for RADIUS and TACACS+ authentication:

Authentication:
• aaa authentication login default
• aaa authorization map default-user
• aaa authorization map order
• show aaa

RADIUS Configuration:
• radius-server host
• radius-server key
• radius-server retransmit
• radius-server timeout

TACACS+ Configuration:
• tacacs-server host
• tacacs-server key
• tacacs-server retransmit
• tacacs-server timeout
• show tacacs

User Accounts:
• username privilege
• username nopassword
• username password
• username password 0
• username password 7
• username password cleartext
• username password encrypted
• username disable

Central Management Console (CMC)


Introduction
The CMC facilitates the essential administration tasks for the Riverbed system:
• Configuration. The CMC enables you to automatically configure new Steelhead appliances
or to send configuration settings to appliances in remote offices. The CMC utilizes
configuration objects (profiles and groups) to facilitate centralized configuration and
reporting.
• Monitoring. The CMC provides both high-level status and detailed statistics of the
performance of Steelhead appliances and enables you to configure event notification for
managed Steelhead appliances.
• Management. The CMC enables you to start, stop, restart, and reboot remote Steelhead
appliances. You can also schedule jobs to send software upgrades and configuration changes
to remote appliances or to collect logs from remote Steelhead appliances.
CMC Configuration Objects
The CMC utilizes appliance common profiles and appliance groups to facilitate centralized
configuration and reporting. You use profiles as configuration templates to ensure proper
configuration for groups of Steelhead appliances that must have the same configuration for
common network settings, routing rules, and feature settings. You apply one or more common
profiles to Steelhead appliance groups to ensure that all appliances in the group share the
common configuration. For example, you might create a basic configuration profile (Base) for
all appliances that includes in-path settings, event and failure notification settings, and logging
options; and separate profiles that define protocol support that enables CIFS or MAPI features
(CIFS, MAPI). You would apply the basic common profile to all groups and the protocol-
specific common profile only to the group of appliances that require the protocol-specific
settings.
All Steelhead appliances belong to the default group all. Appliances can belong to multiple
groups. Appliance groups can contain zero, one, or more appliances. You use groups to
coordinate mass configuration (configuration push) and aggregated reporting. For example, you
might create many configuration-oriented groups that are related to the profile settings (Base,
CIFS, MAPI); and many reporting groups in addition to the group all, perhaps based again on
protocol support (CIFS, MAPI) or on geographic location (Asia, Europe). Note that grouping is entirely defined by the administrator; the CMC has no built-in notion of groups based on location, business division, or any other parameter. Whether a group is used for reporting, configuration, or both is entirely the administrator's decision, and the CMC does not enforce any such distinction.
The following figure illustrates the relationship between profiles, groups, and appliances.
[Figure: Relationship between profiles, groups, and appliances. Common profiles (Base, CIFS, MAPI) are applied to configuration groups of the same names; appliances also belong to report groups (Europe, all, Asia) and location groups (Asia, Europe).]

Steelhead Appliance Auto-Registration


Steelhead appliances must be registered with the CMC so that you can monitor and manage them
with CMC tools. Steelhead appliances are programmed to send a registration request periodically
to the CMC—either to an IP address or host name you specify when you run the Steelhead
appliance installation wizard, or to a default CMC host name. In order for registration with the
default host name to work, you must configure your DNS server to map riverbedcmc to the IP
address of the CMC.
Assuming you install the CMC before you connect the Steelhead appliances:
1. Set up a DHCP server to assign IP addresses in your network.
2. Install the CMC.
3. Use the CMC to complete the registration entries for remote appliances. Specify:
• The serial number of the appliance.
• The user name and password of the account through which the configuration must be
performed (defaults are admin and password).
• An initial group assignment (optional).

4. Use the CMC to create the profile and group configuration objects you will use to manage
the Steelhead appliances in your system.
5. When you have completed the appliance configuration, display the Appliance Details page
and set the Configuration Ready check box.
6. Set up a DNS server to map the host name riverbedcmc to the IP address for the CMC.
7. Connect the remote Steelhead appliance primary network interface to the network and power
it on.
During startup you are asked if you want to configure using the CMC. Select Yes to confirm.
The next question is which CMC you wish to use. By default the name riverbedcmc is used; if desired, you can change this to the correct DNS entry for the CMC you want to use.
When the Steelhead appliance contacts the CMC, the CMC sends the configuration to the remote Steelhead appliance, the appliance is registered with the CMC, and the CMC begins collecting performance metrics for it. If no group was assigned during registration, the Steelhead appliance is placed only in the default group all.
All Steelhead appliances belong to the group all and can be assigned to more groups as desired.
Steelhead Profiles
Two types of profiles exist in the CMC: appliance-specific profiles and common profiles. Appliance-specific profiles contain configuration parameters that are unique to a Steelhead appliance, such as the hostname, IP address information, port settings, and in-
path settings.
Common profiles are profiles that can exist on multiple Steelhead appliances. These profiles
contain information such as optimization settings and in-path peering rules. Common profiles
can be pushed out to all Steelhead appliances registered in the CMC.

III. Features
Feature Licensing
Certain features on Steelhead appliances require a license for operation. In version 3.x, licenses for all features, including platform-specific licenses, are included with the purchase of a Steelhead appliance. These licenses are factory installed; however, licenses can also be installed by the user via the CLI or web Management Console. Version 3.x requires three licenses for the base system, CIFS application acceleration, and MAPI application acceleration to function: the Scalable Data Referencing license (base), the Windows File Servers license (CIFS), and the Microsoft Exchange license (EXCH). Additional licensed features that are automatically included when the base license is activated, and that do not require a separate license key, are the Microsoft SQL optimization module and the NFS optimization module. Starting in version 3.x, HighSpeed TCP no longer requires a license and is included as a standard feature on data center sized Steelhead appliances (models 5010 and higher). All licensed features, with the exception of the Microsoft SQL optimization module, are enabled by default.
HighSpeed TCP (HSTCP)
Applicability and Considerations
To better utilize links that have high bandwidth and high latency, such as in GigE WANs,
OCx/STMx, or any other link that may be classified as a large BDP (bandwidth delay product)
link, enabling HSTCP should be considered. HSTCP is a feature you can enable on Steelhead
appliances to help reduce WAN data transfer inefficiencies caused by limitations of regular TCP. Enabling the HSTCP feature allows for more complete utilization of these “long fat pipes.” HSTCP is an IETF-defined standard (RFC 3649 and RFC 3742) and has
been shown to provide significant performance improvements in networks with high BDP
values.
As a basis for determining the applicability of HSTCP for a given network, the following formulas and their interpretation are provided below.
For any given TCP Cwnd (congestion window) size and network latency, the maximum
throughput can be calculated by dividing the window size by the latency (64KB/.1s=640KB/s).
End nodes that are limited to window sizes of 64KB or less (nodes that do not support TCP
window scaling as defined in RFC 1323) will prove inefficient in transferring data across links with bandwidth exceeding the Cwnd/RTT limitation. Although HSTCP did not introduce TCP window scaling, it typically makes use of it, because links with high BDP values require a large TCP window size. For a given transfer, the TCP window size should be
no less than the BDP in order to ensure that the full bandwidth of the link is used by that session.
By the same token, having a TCP window that exceeds the BDP may cause the receiving host, or
devices in between, to exhaust their resources and potentially cause severe bandwidth
degradation.
Additional considerations with HSTCP relate to how the Cwnd changes in size during a transfer.
For most non-HSTCP implementations, after a short period of exponential Cwnd growth (Slow
Start), the window size continues to grow at a rate of 1 MSS/RTT. Most operating systems use a value of 1460 bytes as their MSS, meaning that for each successful round trip (ACK received) the window increases by 1460 bytes. In the case of small BDP and thus small Cwnd sizes, 1460 bytes per RTT represents a moderate growth rate that can peak within a few seconds. In the case of a large BDP value, however, 1460 bytes per RTT means a significant amount
of time before the Cwnd would extend to the full BDP value. The problem of increasing the
Cwnd size at the rate prescribed by standard TCP is further compounded by considering that a
packet loss event causes TCP to “back off” by reducing the current Cwnd size by half. This
reduction is vital in allowing TCP to “play nicely” with other sessions sharing link bandwidth; however, in the case of high BDP links, the time to recover from such a loss event at standard Cwnd growth rates represents a very ineffective use of the available bandwidth.
For example, for a standard TCP connection with 1500-byte packets and a 100ms round-trip
time, achieving a steady-state throughput of 10Gbps would require an average congestion
window of 83,333 segments, and a packet drop rate of at most one congestion event every
5,000,000,000 packets (or equivalently, at most one congestion event every 1 2/3 hours). Clearly
this is not a likely possibility in real world networks, and is the basis for which HSTCP was
developed. HSTCP solves problems related to the rate at which to grow the Cwnd, as well as
how to respond when loss events occur and the Cwnd needs to be reduced. Further information
as to how this is achieved is explained in the RFCs referenced above.
The following table and graph show how a Long Fat Network (OC-12) can be filled.
Test Scenario A (622 Mbps bandwidth, 15 ms RTT latency):
• Baseline: 36 Mbps throughput
• With Steelhead appliances: 600+ Mbps throughput

Test Scenario B (622 Mbps bandwidth, 100 ms RTT latency):
• Baseline: 5 Mbps throughput
• With Steelhead appliances: 600+ Mbps throughput

[Graph: Sample FTP Transfers (3 GB file). WAN utilization in millions of bits per second (0-700) plotted against time in seconds; legend entries include “w/ Steelhead HSTCP, 15 ms RTT” and “HSTCP, 15 ms RTT.”]

Operation and Configuration


To display HSTCP settings, use the CLI command show tcp highspeed, or navigate to the
“Protocol: HSTCP” page under the Setup tab in the Management Console.
Configuring HSTCP can be done via the CLI or the Management Console, with the key steps
involving enabling HSTCP, and configuring the appropriate buffer sizes for the LAN and WAN
interfaces. When adjusting the buffer sizes, it is important to configure them in accordance with
the specification of the link. More information about calculating the correct buffer values can be
found in the “BDP Calculations and Buffer Adjustments” section below.
To enable HSTCP, use the CLI command tcp highspeed enable. Alternatively, you can enable
HSTCP in the Management Console by clicking on the Enable High Speed TCP check box and
then clicking on Apply. Note that a service restart is required with either method.
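For example, a minimal sketch using the documented commands (remember to restart the optimization service afterward):
HOSTNAME (config) # tcp highspeed enable
HOSTNAME # show tcp highspeed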

BDP Calculations and Buffer Adjustments


In order to achieve the maximum throughput possible for a given link with TCP, it is important
to set the send and receive buffers to a proper size. Using buffers that are too small may not
allow the Cwnd to fully open, while using buffers that are too large may overrun the receiver and
break the flow control process. When configuring the send and receive WAN buffers on a
Steelhead, it is recommended that they be set to two times the Bandwidth Delay Product.
As an example, a 45 Mb/s point-to-point connection with 100 ms of latency should have a buffer size of 1,125,000 bytes set on the WAN send buffer of the sending Steelhead, and the same value on the WAN receive buffer of the receiving Steelhead ((45,000,000 bits / 8 x 0.1 s) x 2).
For a point-to-point connection such as this one, the send and receive buffers would typically be
the same value.
Additionally, it is recommended that buffers on WAN routers be set to accommodate the packet
influx by allocating at least one BDP worth of packets. As an example, for the 45 Mb/s connection above with 100 ms of latency, and given a packet size of 1500 bytes, the router buffer needs to be at least 375 packets deep [(45,000,000 / 8 x 0.1) / 1500].
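In summary, the calculations from the two examples above can be written as:
BDP = (bandwidth in bits per second / 8) x RTT = (45,000,000 / 8) x 0.1 s = 562,500 bytes
Steelhead WAN send/receive buffer = 2 x BDP = 1,125,000 bytes
Router buffer = BDP / packet size = 562,500 / 1,500 = 375 packets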
Quality of Service
QoS Concepts
You can configure QoS on Steelhead appliances to control the prioritization of different types of
network traffic and to ensure that Steelhead appliances give certain network traffic (for instance,
VoIP) higher priority than other network traffic.
QoS allows you to specify priorities for various classes of traffic and properly distributes excess
bandwidth among classes.
NOTE: QoS enforcement is available only in physical in-path deployments.
Steelhead appliances use HFSC (Hierarchical Fair Service Curve) QoS operations to
simultaneously control bandwidth and latency for each QoS class. For each class, you can set a:
• Priority level
• Minimum guaranteed bandwidth level, which specifies the minimum amount of
bandwidth a QoS class is guaranteed to receive when there is bandwidth contention. If
unused bandwidth is available, a QoS class receives more than its minimum guaranteed
bandwidth level. The percentage of excess bandwidth each QoS class receives is relative to
the percentage of minimum guaranteed bandwidth it has been allocated. The total minimum
guaranteed bandwidth level of all QoS classes must be less than or equal to 100%.
• Maximum bandwidth level, which specifies the maximum amount of bandwidth a QoS
class is allowed to use, regardless of the available excess bandwidth.
• Connection limit, which specifies a maximum number of connections the specified QoS
class will optimize. Connections over this limit are passed through. The connection limit can
only be set in the CLI.
Once you have defined a QoS class, you can create one or more QoS rules to apply traffic to it.
QoS rules define source subnet or port, destination subnet or port, protocol, traffic type, and
VLAN and DSCP filters for a QoS class.
IMPORTANT: Familiarity with QoS classes and rules from the command line is required for
the RCSP exam.

About QoS Class Priorities


There are five QoS priorities for Steelhead appliances. You assign a class priority when you
create a QoS class. Once you have created a QoS class, you can modify its class priority. In
descending order, class priorities are:
• Realtime
• Interactive
• Business Critical
• Normal Priority
• Low Priority
Priority levels are minimum priority guarantees. If higher priority service is available, a QoS
class will receive it even if the class has been assigned a lower priority level. For example, if a
QoS class is assigned the priority level Low Priority, and QoS classes that are assigned higher
priority levels are not active, the low priority QoS class adjusts to the highest possible priority
for the current traffic patterns.
Maximum Allowable QoS Classes and Rules
The number of QoS classes and rules you can create on a Steelhead appliance depends on the
appliance model number.
• 2xx and lower: 20 QoS classes, 60 QoS rules
• 5x0, 1xx0: 60 QoS classes, 180 QoS rules
• 2xx0: 80 QoS classes, 240 QoS rules
• 3xx0: 120 QoS classes, 360 QoS rules
• 5xx0 and higher: 200 QoS classes, 600 QoS rules

Riverbed QoS Implementation


Steelhead appliances make use of the HFSC QoS scheduling algorithm. Most traditional
algorithms allow you to define either the priority of a packet or the amount of bandwidth that
should be allocated for specific packet types (priority queuing, custom queuing). These methods
each suffer from problems such as starvation of lower priority queues, or they do not allow latency sensitive traffic in low bandwidth queues to leave the device sooner than larger packets with more bandwidth allocated to them. Newer scheduling methods allow for a blend of
a priority queue for latency sensitive traffic, while other traffic would be placed into a general
purpose queue with bandwidth allocations specified by the administrator for each traffic type
(LLQ (Low Latency Queuing) uses this method).
Problems of having a single priority queue, or even multiple priority queues (of the same
priority, as is the case with LLQ), stem from the fact that today most networks carry traffic types
that cannot be classified with such a binary system (priority queue, or general queue). VoIP
traffic, which is typically very latency sensitive, should clearly be placed in a high priority queue. However, traffic such as stock quotes, video conferencing, and remote PC control (for example, Remote Desktop Protocol or PC Anywhere) is also latency sensitive, and placing it into either the same priority queue or a separate priority queue with a different bandwidth allocation still causes the same problem: two or more queues of the same priority will give latency
preference to packets in the queue that has more bandwidth allocated to it. As an example,
consider a case of LLQ where two priority queues are created, one for voice traffic, and one for
video traffic. The voice queue is allocated 10% of the bandwidth, and the video queue, which is also latency sensitive, is allocated 40% of the bandwidth. Since the router has no ability to
differentiate that the small voice packets should generally be allowed out before the larger video
packets (up to the bandwidth limit), you will experience a case where small voice packets may
get stuck behind several larger video packets despite not fully utilizing their 10% bandwidth
allocation.
HFSC solves these problems by logically separating the latency element of queuing from the bandwidth element. As such, you can define multiple queues, each with a different priority relative to the other queues, and be assured that even if more bandwidth is allocated to lower priority queues, the higher priority queues are still serviced preferentially from a latency perspective, up to the amount of bandwidth specified for each queue. Steelhead appliances implement five queues, from “Realtime” down to “Low Priority,” with each successive queue having a lower latency priority than the one before it (Realtime having the highest). The strategy imposed by HFSC lends itself particularly well to “bursty” traffic, as is the case with most networks.
Enforcing QoS for Active/Passive FTP
Active/Passive FTP Operation
To configure optimization policies for the FTP data channel, define an in-path rule with the
destination port 20 and set its optimization policy. Setting QoS for destination port 20 on the
client-side Steelhead appliance affects passive FTP, while setting the QoS for destination port 20
on the server-side Steelhead appliance affects active FTP.
In the case of an active FTP session, data connections originate on a server sourced on port 20
and destined to a random port specified by the client. As such, specifying a QoS rule on the
server-side Steelhead with a destination port of 20 is appropriate. With passive FTP however,
data connections initiate on the client from a random port, and are destined to a server on a
random port; as such, there is no seemingly simple way to apply a QoS rule based on the Layer 4
port information. To help solve this problem, the Steelhead allows you to define a client-side
QoS rule with a destination port of 20 to tell it that you would like to apply this QoS rule to a
passive FTP data connection. The Steelhead will intelligently identify the actual ports used for
the passive FTP data transfer, and apply the QoS logic set forth by the class where the rule has
been applied.
Converting between DSCP, IP Precedence, ToS
For the RCSP exam, you are expected to know how to convert various packet marking types.
This is important because the Steelhead appliances only understand DSCP (Differentiated
Services Code Point) values, while other network devices may support a different method of
marking or matching traffic. Various methods of converting to and from DSCP values are
defined by RFC 2474.
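For example, the DSCP field occupies the upper six bits of the IP ToS byte and IP Precedence occupies the upper three bits, so (with the low-order ECN bits set to zero):
ToS byte value = DSCP value x 4
IP Precedence = DSCP value / 8 (integer division)
DSCP EF = 46 (binary 101110) -> ToS byte 184 -> IP Precedence 5
DSCP AF31 = 26 (binary 011010) -> ToS byte 104 -> IP Precedence 3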
Interpreting and Converting Common Router Policies
In addition to being able to convert to and from DSCP values for proper marking and matching
between Steelhead appliances and other network nodes on the RCSP exam, understanding how
to convert simple QoS configurations from Cisco and other popular routing platforms is required.
Generally, some familiarity with QoS configuration on routers and an understanding of how
Steelhead appliances implement QoS (see “Riverbed QoS Implementation” section), should
make the process of converting configurations a more simple task.

PFS (Proxy File Service) Deployments


Introduction to PFS
PFS is an integrated virtual file server that allows you to store copies of files on the Steelhead
appliance with Windows file access, creating several options for transmitting data between
remote offices and centralized locations with improved performance. Data is configured into file
shares and the shares are periodically synchronized transparently in the background over the
optimized connection of the Steelhead appliance.
PFS leverages the integrated disk capacity of the Steelhead appliance to store file-based data in a
format that allows it to be retrieved by NAS clients.
PFS is supported only on models 520, 1010, 1020, 1520, 2020, 3010, 3020, 5010, and 5020.
PFS Terms
• Proxy File Server. A virtual file server resident on the Steelhead appliance, providing Windows file access (with ACLs) capability at a branch office on the LAN, populated over an optimized WAN connection with data from the origin server.
• Origin File Server. The server located in the data center which hosts the origin data volumes. IMPORTANT: The PFS share and the origin-server share name cannot contain Unicode characters. The Management Console does not support Unicode characters.
• Domain Mode. In Domain mode you join the Windows domain of which the Steelhead appliance will be a member. Typically, this is the same domain as your company's domain.
• Domain Controller. The domain controller name; the host that provides user login service in the domain. (Typically, with Windows Active Directory Service domains, given a domain name, the system automatically retrieves the domain controller name.)
• Local Workgroup Mode. In Local Workgroup mode you define a workgroup and add the individual users that will have access to the PFS shares on the Steelhead appliance.
• Global Share. The data volume exported from the origin server to the remote Steelhead appliance.
• Local Name. The name that you assign to a share on the Steelhead appliance. This is the name by which users identify and map a share.
• Remote Path. The path to the data on the origin server, or the Universal Naming Convention (UNC) path of a share that you want to make available to PFS.
• Share Synchronization. Synchronization runs periodically in the background, ensuring that the data on the proxy file server is synchronized with the origin server. You can have the Steelhead appliance refresh the data automatically at an interval you set, in minutes, or manually at any time.

When to Use PFS


Before you configure PFS, evaluate whether it is suitable for your network needs. Advantages of
using PFS are:
• LAN access to data residing across the WAN. File access performance is improved
between central and remote locations. PFS creates an integrated file server, enabling clients
to access data directly from the proxy filer on the LAN as opposed to the WAN.
Transparently in the background, data on the proxy filer is synchronized with data from the
origin file server over the WAN.
• Continuous access to files in the event of WAN disruption. PFS provides support for
disconnected operations. In the event of a network disruption that prevents access over the
WAN to the origin server, files can still be accessed on the local Steelhead appliance.
• Simple branch infrastructure and backup architectures. PFS consolidates file servers and
local tape backup from the branch into the data center. PFS enables a reduction in number
and size of backup windows running in complex backup architectures.
• Automatic content distribution. PFS provides a means for automatically distributing new
and changed content throughout a network.
If any of these advantages can benefit your environment, then enabling PFS in the Steelhead
appliance is appropriate. However, PFS requires pre-identification of files and is not appropriate
in environments in which there is concurrent read-write access to data from multiple sites.
• Pre-identification of PFS files. PFS requires that files accessed over the WAN are identified
in advance. If the data set accessed by the remote users is larger than the specified capacity of
your Steelhead appliance model or if it cannot be identified in advance, then you should have
end-users access the origin server directly through the Steelhead appliance without PFS.
(This configuration is also known as Global mode.)
• Concurrent read-write data access from multiple sites. In a network environment where
users from multiple branch offices update a common set of centralized files and records over
the WAN, the Steelhead appliance without PFS is the most appropriate solution because file
locking is directed between the client and the server. The Steelhead appliance always
consults the origin server in response to a client request; it never provides a proxy response
or data from its data store without consulting the origin server.
Prerequisites and Tips
This section describes prerequisites and tips for using PFS:
• Before you enable PFS, configure the Steelhead appliance to use NTP to synchronize the
time. To use PFS, the Steelhead appliance and DC clocks must be synchronized.
• The PFS Steelhead appliance must run the same version of the Steelhead appliance software
as the server-side Steelhead appliance.
• PFS traffic to and from the Steelhead appliance travels through the primary interface. PFS
requires that traffic originating from the primary interface flows through both Steelhead appliances. For physical in-path deployments, the traffic from the primary interface has to flow through the LAN interface of the same Steelhead appliance. For virtual in-path deployments, this traffic has to be redirected to the same Steelhead appliance.
• The PFS share and origin-server share names cannot contain Unicode characters because the
Management Console does not support Unicode characters.

Enabling PFS does not reduce the amount of data store allocated for the SDR process performed
by a Steelhead appliance.
Version 2 vs Version 3 Setup
Version 2. Specify the server name and remote path for the share folder on the origin file server.
With version 2.x, you must have the RCU service running on a Windows server—this can be
the origin file server or a separate server.
Riverbed recommends you upgrade your v2.x shares to 3.x shares so that you do not have to run
the RCU on a server.
Version 3. Specify the login, password, and remote path used to access the share folder on the
origin file server. With Version 3, the RCU runs on the Steelhead appliance—you do not need to
install the RCU service on a Windows server.
Upgrading V2.x PFS Shares
By default, when you configure PFS shares with Steelhead appliance software versions 3.x and
higher, you create v3.x PFS shares. PFS shares configured with Steelhead appliance software
v2.x are v2.x shares. V2.x shares are not upgraded when you upgrade Steelhead appliance
software.
If you have shares created with v2.x software, Riverbed recommends that you upgrade them to
v3.x shares in the Management Console. If you upgrade any v2.x shares, you must upgrade all of
them. Once you have upgraded shares to v3.x, you should only create v3.x shares.
If you do not upgrade your v2.x shares:
• You should not create v3.x shares.
• You must install and start the RCU on the origin server or on a separate Windows host with
write-access to the data PFS uses. The account that starts the RCU must have write
permissions to the folder on the origin file server that contains the data PFS uses.
NOTE: In Steelhead appliance software version 3.x and higher, you do not need to install the
RCU service on the server for synchronization purposes. All RCU functionality has been
moved to the Steelhead appliance.
• You must configure domain, not workgroup, settings. Domain mode supports v2.x PFS
shares but Workgroup mode does not.
Domain and Local Workgroup Settings
If using your Steelhead appliance for PFS, configure either the domain or local workgroup
settings.
Domain Mode
In Domain mode, you configure the PFS Steelhead appliance to join a Windows domain
(typically, your company’s domain). When you configure the Steelhead appliance to join a
Windows domain, you do not have to manage local accounts in the branch office as you do in
Local Workgroup mode.
Domain mode allows a DC to authenticate users accessing its file shares. The DC can be located
at the remote site or over the WAN at the main data center. The Steelhead appliance must be
configured as a Member Server in the Windows 2000 or later ADS domain. Domain users are
allowed to access the PFS shares based on the access permission settings provided for each user.
Data volumes at the data center are configured explicitly on the proxy file server and are served
locally by the Steelhead appliance. As part of the configuration, the data volume and ACLs from
the origin server are copied to the Steelhead appliance. PFS allocates a portion of the Steelhead
appliance data store for users to access as a network file system.
Before you enable Domain mode in PFS, make sure you:
• Configure the Steelhead appliance to use NTP to synchronize the time.
• Configure the DNS server correctly.
• Set the owner of all files and folders in all remote paths to a domain account and not a local
account.
• Create a DNS entry for the Steelhead appliance primary interface.
NOTE: PFS only supports domain accounts on the origin file server; PFS does not support local
accounts on the origin file server. During an initial copy from the origin file server to the PFS
Steelhead appliance, if PFS encounters a file or folder with permissions for both domain and
local accounts, only the domain account permissions are preserved on the Steelhead appliance.
Local Workgroup Mode
In Local Workgroup mode you define a workgroup and add individual users that will have
access to the PFS shares on the Steelhead appliance.
Use Local Workgroup mode in environments where you do not want the Steelhead appliance to
be a part of a Windows domain. Creating a workgroup eliminates the need to join a Windows
domain and vastly simplifies the PFS configuration process.
NOTE: If you use Local Workgroup mode, you must manage the accounts and permissions for
the branch office on the Steelhead appliance. The local workgroup account permissions might
not match the permissions on the origin file server.
PFS Share Operating Modes
PFS provides Windows file service in the Steelhead appliance at a remote site. When you
configure PFS, you specify an operating mode for each individual file share on the Steelhead
appliance. The proxy file server can export data volumes in Local mode, Broadcast mode, and
Stand-Alone mode. After the Steelhead appliance receives the initial copy of the data and ACLs,
shares can be made available to local clients. In Broadcast and Local mode only, shares on the
Steelhead appliance are periodically synchronized with the origin server at intervals you specify,
or manually if you choose. During the synchronization process the Steelhead appliance optimizes
this traffic across the WAN.
• Broadcast Mode. Use Broadcast mode for environments seeking to broadcast a set of read-
only files to many users at different sites. Broadcast mode quickly transmits a read-only copy
of the files from the origin server to your remote offices. The PFS share on the Steelhead
appliance contains read-only copies of files on the origin server. The PFS share is
synchronized from the origin server according to parameters you specify when you configure
it. However, files deleted on the origin server are not deleted on the Steelhead appliance until
you perform a full synchronization. Additionally, if, on the origin server, you perform
directory moves (for example, move .\dir1\dir2 .\dir3\dir2) regularly, incremental
synchronization will not reflect these directory changes. You must perform a full
synchronization frequently to keep the PFS shares in synchronization with the origin server.
• Local Mode. Use Local mode for environments that need to efficiently and transparently
copy data created at a remote site to a central data center, perhaps where tape archival
resources are available to back up the data. Local mode enables read-write access at remote
offices to update files on the origin file server.
After the PFS share on the Steelhead appliance receives the initial copy from the origin
server, the PFS share copy of the data becomes the master copy. New data generated by
clients is synchronized from the Steelhead appliance copy to the origin server based on
parameters you specify when you configure the share. The folder on the origin server
essentially becomes a back-up folder of the share on the Steelhead appliance. If you use
Local mode, users must not directly write to the corresponding folder on the origin server.
NOTE: In Local mode, the Steelhead appliance copy of the data is the master copy; do not make
changes to the shared files from the origin server while in Local mode. Changes are propagated
from the remote office hosting the share to the origin server.
Riverbed recommends that you do not use Windows file shortcuts if you use PFS.
• Stand-Alone Mode. Use Stand-Alone mode for network environments where it is more
effective to maintain a separate copy of files that are accessed locally by the clients at the
remote site. The PFS share also provides additional storage space at the branch office. The
PFS share on the Steelhead appliance is a one-time, working copy of data mapped from the
origin server. You can specify a remote path to a directory on the origin server, creating a
copy at the branch office. Users at the branch office can read from and write to stand-alone
shares, but there is no synchronization back to the origin server; a stand-alone share performs
only an initial, one-time synchronization.
Lock Files
When you configure a v3.x Local mode share or any v2.x share (except a Stand-Alone share in
which you do not specify a remote path to a directory on the origin server), a text file
(._rbt_share_lock.txt) that keeps track of which Steelhead appliance owns the share is created
on the origin server. Do not remove this file.
If you remove the ._rbt_share_lock.txt file on the origin file server, PFS will not function
properly. (v3.x Broadcast and Stand-Alone shares do not create these files.)
Notes:
• To join a domain, the Windows domain account must have the correct privileges to perform a
join domain operation (see the example following these notes).
• The PFS share and the origin-server share name cannot contain Unicode characters. The
Management Console does not support Unicode characters.
• If you have shares that were created with RiOS v2.x, the account that starts the RCU must
have write permissions to the folder on the origin file server. Also, the logon user for the
RCU server must be a member of the Administrators group either locally on the file server
or globally in the domain.
• Make sure the users are members of the Administrators group on the remote share server,
either locally on the file server (the local Administrators group) or globally in the domain
(the Domain Admins group).
• Riverbed recommends that you do not run a mixed system of PFS shares, that is, v2.x shares
and v3.0 shares.
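As an example of the domain join referred to in the notes above, the following is a sketch only;
the domain name and account are hypothetical, and the exact parameter names should be verified
against the RiOS CLI reference for your version:
domain join domain-name EXAMPLE.COM login administrator password <password>
The account supplied must have sufficient rights in the domain to add the Steelhead appliance's
machine account.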
NetFlow
Operation and Implementation
Starting with version 3.x, Steelhead appliances support the export of NetFlow v5 data. NetFlow
can play an important role in an organization's network by providing detailed accounting
between hosts. This information can be used for purposes such as billing, identifying top talkers,
and capacity planning. It can also assist in troubleshooting denial-of-service attacks.
It is common to configure NetFlow on the WAN routers in order to monitor the traffic traversing
the WAN. However, when Steelhead appliances are in place, the WAN routers see only the inner
Steelhead TCP session traffic and not the real IP addresses and ports of the client and server.
Because the Steelhead appliance itself supports NetFlow v5, this is no longer an issue. In fact, it
is possible to have only the Steelhead appliance, rather than the router, export the NetFlow data
without losing any functionality, which frees the router to spend more CPU cycles on its core
job: routing and switching packets.
Similar to configuring NetFlow on the routers, NetFlow statistics are collected on the ingress
interfaces of the Steelhead appliance. Therefore, to see a complete flow or conversation between
the server and client, it is necessary to configure NetFlow on both the client-side and the
server-side Steelhead appliance. For example, to determine the amount of CIFS traffic on the
LAN between a server and client, configure NetFlow to collect on the following interfaces:
• Client-side Steelhead LAN interface (this will show pre-optimized traffic going from client
to server).
• Server-side Steelhead LAN interface (this will show pre-optimized traffic going from server
to client).
Similarly, to determine the amount of CIFS traffic on the WAN between a server and client,
configure NetFlow to collect on the following interfaces:
• Client-side Steelhead WAN interface (this will show optimized traffic going from server to
client).
• Server-side Steelhead WAN interface (this will show optimized traffic going from client to
server).
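The following is a minimal sketch of enabling NetFlow export to a collector from the Steelhead
CLI; the collector address and port are hypothetical, and the command forms shown are
assumptions to verify against the RiOS CLI reference for your version:
ip flow-export destination 10.1.1.50 2055
ip flow-export enable
With export enabled, records for the interfaces described above are sent to the collector at
10.1.1.50 on UDP port 2055.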
NetFlow Protocol Header and Record Header
NetFlow version 5 supports the ordering of NetFlow packets by way of a sequence number
transmitted in each packet. The information carried in a NetFlow packet is shown in the
supported fields of the flow entry and includes common items such as IP addresses, interfaces,
packet counts, and other data related to the transfer. Flow information is available for both
optimized and passthrough data.
NetFlow Version 5 Flow Header: (field-layout table not reproduced)
NetFlow Version 5 Flow Entry: (field-layout table not reproduced)
Adjusting NetFlow Timers
By default, the Steelhead appliance will export active flows every 30 minutes and inactive flows
every 15 seconds. An inactive flow is defined as a flow where no traffic has been sent in the last
15 seconds. Terminated flows (either with a FIN or RST packet) will be exported immediately.
Some NetFlow collectors provide real-time reporting, and the 30-minute export interval may be
too long. In this case, you can use the following hidden CLI commands to change the timeouts
(values are in seconds):
ip flow-setting active_to <seconds>
ip flow-setting inactive_to 60
However, bear in mind that more frequent exports could impact the performance of the Steelhead
appliance and more network bandwidth will be required to transmit the extra data.
IPSec
You configure IPSec encryption to allow data to be communicated securely between peer
Steelhead appliances. Enabling IPSec encryption makes it difficult for a third party to view your
data or pose as a machine you expect to receive data from. To enable IPSec, you must specify at
least one encryption algorithm (DES and NULL are supported in 3.x) and one authentication
algorithm. Only optimized data is protected; passthrough traffic is not.
IMPORTANT: You must set IPSec support on each peer Steelhead appliance in your network
for which you want to establish a secure connection. You must also specify a shared secret on
each peer Steelhead appliance.
NOTE: If you NAT traffic between Steelhead appliances, you cannot use the IPSec channel
between the Steelhead appliances because the NAT changes the packet headers causing IPSec to
reject them.
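A minimal configuration sketch, assuming a pre-shared secret and a single peer; the peer address
is hypothetical and the command forms are approximations to confirm against the RiOS CLI
reference:
ip security shared secret <secret>
ip security peer ip 10.1.1.2
ip security enable
Configure the same shared secret and the corresponding peer entry on the remote Steelhead
appliance as well.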
Operation on VLAN Tagged Links
A Steelhead appliance can be placed on trunk (802.1q) links. The difference between a trunk and
a non-trunk link is that there are multiple VLANs flowing over a single link. To ensure traffic
can be processed correctly, the traffic is tagged with a VLAN number with the exception of
traffic in the native VLAN. When a packet enters a trunk, a tag is attached, and when a packet
exits the trunk the tag is removed again. Without the tag, there is no way of knowing which
VLAN the packet belongs to. With a Steelhead appliance physically on the trunk, the Steelhead
appliance has to be able to read the tags attached by the trunking devices. Since the Steelhead
appliance is intercepting and originating traffic, it needs IP connectivity to the network and
therefore also needs the ability to write tags for the traffic originated by the Steelhead appliance
leaving on the inpath interface.
The command to write tags is:
in-path interface inpathx_x vlan [nr]
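For example, to tag traffic that the appliance originates from its first in-path interface with
VLAN 10 (an illustrative VLAN ID):
in-path interface inpath0_0 vlan 10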
When you specify the VLAN Tag ID for the in-path interface, all packets originating from the
Steelhead appliance (in-path interface) are tagged with that VLAN number. The VLAN configured
on an in-path interface identifies the subnet the appliance uses to set up its inner channel with
other Steelhead appliances in your network. For passthrough traffic, the same VLAN tag is applied
to the packet when it exits the opposite interface from the one it entered on. For example, if a
passthrough packet enters the LAN interface on VLAN 10, it leaves the WAN interface on VLAN 10
as well. For optimized traffic, however, a packet may enter the LAN interface on VLAN 10, but
after auto-discovery the inner connection uses the VLAN configured on the in-path interface.
Traffic returned to the Steelhead appliance
from another appliance via the inner TCP session will be placed on the correct VLAN upon
return. The VLAN Tag ID might be the same value or a different value than the VLAN tag used
on the client. A zero (0) value specifies non-tagged (or native) VLAN.
When considering the use of a Steelhead appliance on a trunk link, routing is often a point of
concern because many networks may be reachable through the trunk. While static in-path routes
can be used, simplified routing commonly allows for an easier deployment.
NOTE: When the Steelhead appliance communicates with a client or a server it uses the same
VLAN tag as the client or the server. If the Steelhead appliance cannot determine which VLAN
the client or server is in, it uses its own VLAN until it is able to determine that information.
IV. Troubleshooting
Common Deployment Issues
Speed and Duplex
Some symptoms of a speed or duplex mismatch are:
• Access does not speed up.
• Interface counters show errors (counters on the Steelhead appliance sometimes stay low
while they increase on neighboring network gear).
• Alarm and log messages report error counts.
• Packet traces show many retransmissions. In Ethereal use:
o tcp.analysis.retransmission
o tcp.analysis.fast_retransmission
o tcp.analysis.lost_segment
o tcp.analysis.duplicate_ack
A likely problem is that the router is set to 100/Full (fixed) whereas the Steelhead appliance is set
to Auto. In this case, test with a flood ping, for example ping -f -I <in-path IP> -s 1400 <client IP>,
or from the server-side Steelhead appliance to the server. Do not perform the flood ping across the
WAN. Change the interface speed and duplex settings to match.
NOTE: Ideally the WAN and LAN have the same duplex settings, otherwise the devices around
the Steelhead appliance will have a duplex mismatch when in bypass.
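A sketch of manually setting an interface to 100 Mbps full duplex to match a fixed-speed switch
or router port; the interface name and values are examples, and the command forms are
assumptions to verify against the RiOS CLI reference:
interface lan0_0 speed 100
interface lan0_0 duplex full
Apply matching settings to the WAN interface and to the connected router and switch ports.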
SMB (Server Message Block) Signing
SMB signing is a protocol add-on that protects permission distribution. It adds a cryptographic
signature to CIFS packets and authenticates the endpoints to prevent man-in-the-middle attacks
(and, as a side effect, optimization).
A symptom is that file access either does not speed up or does not speed up as much as expected.
You should see a log message about signed connections. Check the logs for
error=SMB_SHUTDOWN_ERR_SEC_SIG_REQUIRED messages.
A likely cause is that both the server and the client have SMB signing enabled (1.x only), or the
server has SMB signing set to required and the client has it set to enabled. In the latter case,
change the server setting so that signing is not required. If both sides merely have signing
enabled (enable:enable), you can enable the CIFS secure-signature optimization on the Steelhead
appliance:
protocol cifs secure-sig-opt enable
Packet Ricochet
If network connections fail on their first attempt but succeed on subsequent attempts, it could be
due to packet ricochet. Suspect packet ricochet if:
• The Steelhead appliance on one or both sides of the network has an in-path interface on a
subnet different from that of the local hosts.
• In-path routes are needed but none are defined in your network.
• You experience the packet ricochet symptoms below.
Symptoms of packet ricochet are:
• Connections between the Steelhead appliance and the clients or server are routed through the
WAN interface to a WAN gateway, and then they are routed through a Steelhead appliance
to the next-hop LAN gateway.
• The WAN router drops SYN packets from the Steelhead appliance before it issues an ICMP
redirect. Note that some routers might not be able to send ICMP redirect packets, or might be
configured not to send them. ICMP redirects are on by default on most routers and are sent
whenever the router has to send a packet back out the same interface it arrived on in order to
route it toward the destination, and the next hop is on the same subnet as the source IP
address. ICMP redirect information is stored for five minutes on the Steelhead appliance.
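One remedy is to define a static in-path route toward the LAN-side gateway so that LAN-bound
traffic no longer bounces off the WAN router. A minimal sketch with hypothetical addresses
(confirm the exact form against the RiOS CLI reference):
ip in-path route inpath0_0 10.10.0.0 255.255.0.0 192.168.1.1
You can then verify the route with the show ip in-path route command described later in this
guide.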
Opportunistic Locks (Oplocks)
Windows (CIFS) uses opportunistic locking (oplock) to determine the level of safety the
OS/application has in working with a file.
Types of Oplocks
The following list describes the types of oplock that a client may hold:
• Level II oplock. Informs a client that there are multiple concurrent clients of a file, and none
have yet modified it. It allows the client to perform read operations and file attribute fetches
using cached or read-ahead local information, but all other requests have to be sent to the
server.
• Exclusive oplock. Informs a client that it is the only one to have a file open. It allows the
client to perform all file operations using cached or read-ahead local information until it
closes the file, at which time the server has to be updated with any changes made to the state
of the file (contents and attributes).
• Batch oplock. Informs a client that it is the only one to have a file open. It allows the client
to perform all file operations on cached or read-ahead local information (including open and
close operations).
Losing an oplock can pose a problem for several reasons, including interactions with anti-virus
programs. The oplock controls the consistency of optimizations such as read-ahead. Oplock levels
are reduced when conflicting opens are made to a file. The Steelhead appliance maintains this
safety: to preserve correctness, it reduces optimization when a client has shared rather than
exclusive access to a file.
Asymmetric Routing (AR)
AR occurs when the transmit path is different from the return path for packets. For a Steelhead
appliance to optimize traffic it must see the flow bi-directionally. Traffic can flow
asymmetrically everywhere else in the network (for example, on the WAN between the Steelhead
appliances).
Detecting Asymmetric Routing
A client-side Steelhead appliance detects AR by looking for conditions such as:
• A RST packet from the client with an invalid SYN number while the connection is in the
SYN_SENT state
• A SYN/ACK packet from the server with an invalid ACK number while the connection is in
the SYN_SENT state
• An unusually high number of SYN retransmits from the client
• An ACK packet from the client while the connection is in the SYN_SENT state
Asymmetric Route Passthrough
Asymmetric route passthrough allows connections to be passed through and an entry to be placed
into the AR table. The entry is placed in the table for a default of 24 hours. For SYN
retransmissions, an entry is first placed in the AR table for 10 seconds. If AR is confirmed, the
timeout is increased to the default (24 hours) and the reason code is updated to SYN Rexmit
(confirmed AR). If a SYN/ACK is received after probing has stopped, the entry is placed in the
table for 5 minutes with a reason code of probe filtered (not AR). If AR passthrough is disabled
and AR is detected, the connection is not passed through; a warning message is still placed in the
log, but the alarm is not raised and no email notifications are sent.
Normal behavior would be to send a probe on each new connection attempt; however, adding a
passthrough entry for 24 hours is a better approach because it avoids the overhead of
retransmitting probes that will not succeed.
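To review which hosts have been flagged, you can display the asymmetric routing table from the
CLI; the command name below is believed to be correct for RiOS 3.x but should be confirmed
against the CLI reference:
show in-path asym-route-tab
Each entry shows the reason code and remaining timeout described above.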
Reporting and Monitoring
Logging
Eight logging levels:
• Emergency
• Alert
• Critical
• Error
• Warning
• Notice
• Info
• Debug
logging local <log level> (default: none)
logging trap <log level> (default: none)
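For example, to record messages at severity info and above in the local system log (the level
shown is illustrative):
logging local info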
Alarm Definitions and Resolution
Viewing Alarm Status Reports
The Health-Alarm Status report provides status for the Steelhead appliance alarms.
The Health-Alarm Status report contains a table that summarizes the current status of each
Steelhead appliance alarm.
• To refresh your report every 15 seconds, click 15 Seconds.
• To refresh your report every 30 seconds, click 30 Seconds.
• To turn off refresh, click Off.
Alarms
Admission Control - Whether the system connection limit has been reached. Additional
connections are passed through unoptimized. The alarm clears when the Steelhead appliance
moves out of this condition.
Asymmetric Routing - Indicates OK if the system is not experiencing asymmetric traffic. If the
system does experience asymmetric traffic, this condition is detected and reported here. In
addition, the traffic is passed through, and the route appears in the Asymmetric Routing table.
Central Processing Unit (CPU) Utilization - Whether the system has reached the CPU
threshold for any of the CPUs in the Steelhead appliance. If the system has reached the CPU
threshold, check your settings. If your alarm thresholds are correct, reboot the Steelhead
appliance.
NOTE: If more than 100 MB of data is moved through a Steelhead appliance, while performing
PFS synchronization, the CPU utilization might become high and result in a CPU alarm. This
CPU alarm should not be cause for concern.
Data Store - Whether the data store is corrupt. To clear the data store of data, restart the
Steelhead service and click Clear Data Store on Next Restart.
Fan Error - Whether the system has detected a problem with the fans. Fans in 3U systems can
be replaced.
IPMI - Whether the system has encountered an Intelligent Platform Management Interface
(IPMI) error. The system will display a blinking amber LED. To clear the alarm, run the clear
hardware error-log command.
Licensing - Whether your licenses are current.
Link State - Whether the system has detected a link that is down. You are notified via SNMP
traps, email, and alarm status.
Memory Error - Whether the system has encountered a memory error.
Memory Paging - Whether the system has reached the memory paging threshold. If 100 pages
are swapped approximately every two hours the Steelhead appliance is functioning properly. If
thousands of pages are swapped every few minutes, then reboot the Steelhead appliance. If
rebooting does not solve the problem, contact Riverbed Technical Support.
Neighbor Incompatibility - Whether the system has encountered an error in reaching a
Steelhead appliance configured for connection forwarding.
Network Bypass - Whether the system is in bypass mode. If the Steelhead appliance is in bypass
mode, restart the Steelhead service. If restarting the service does not resolve the problem, reboot
the Steelhead appliance. If rebooting does not resolve the problem, shut down and restart the
Steelhead appliance.
NFS V2/V4 Alarm (If NFS enabled and V2/V4 used) - Whether the system has triggered a v2
or v4 NFS alarm.
Optimization Service - Whether the system has detected a software error in the Steelhead
service. The Steelhead service continues to function, but an error message appears in the logs
that you should investigate.
Prepopulation or Proxy File Service Configuration Error - Whether there has been a PFS or
prepopulation operation error. If an operation error is detected, restart the Steelhead service and
PFS.
Prepopulation or Proxy File Service Operation Failed - Whether a synchronization operation
has failed. If an operation failure is detected, attempt the operation again.
Proxy File Service Partition Full - Whether the PFS partition is full.
RAID - Whether the system has encountered RAID errors (for example, missing drives, pulled
drives, drive failures, and drive rebuilds). For drive rebuilds, if a drive is removed and then
reinserted, the alarm continues to be triggered until the rebuild is complete.
IMPORTANT: Rebuilding a disk drive can take 4-6 hours.
NOTE: RAID status applies only to Steelhead appliance Series 3000, 5000, and 6000 models.
Software Version Mismatch - Whether there is a mismatch between software versions in your
network. If a software mismatch is detected, resolve the mismatch by upgrading or reverting to a
previous version of the software.
NOTE: If a software version mismatch occurs and you are running v1.2 and client-side v2.1
Steelhead appliances, you must set the correct version of the Steelhead service protocol on the
client-side v2.1 appliances using the Steelhead CLI:
sh> peer <addr> version min 5
sh> peer <addr> version max 5
SSL Alarms - Whether an error has been detected in your SSL configuration.
System Disk Full - Whether the system partitions (not the data store) are almost full; for
example, /var, which is used to hold logs, statistics, system dumps, TCP dumps, and so forth.
Temperature - Whether the CPU temperature has exceeded the critical threshold. The default
value for the rising threshold temperature is 80° C; the default reset threshold temperature is
70° C.
System Dumps
A system dump file contains data that can help Riverbed Technical Support diagnose problems.
From the CLI:
debug generate dump
To view system dump files:
show files debug-dump
To upload the file to a remote host:
file debug-dump upload <filename> <URL>
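For example, to upload a dump over SCP (the filename, host, and credentials are placeholders):
file debug-dump upload sysdump-sh1.tgz scp://admin:password@10.1.1.100/tmp/
Use the filename exactly as it appears in the show files debug-dump output.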
TCPDump
tcpdump <options>
The tcpdump command takes the standard Linux options:
• -a Attempt to convert network and broadcast addresses to names.
• -c Exit after receiving count packets.
• -e Print the link-level header on each dump line.
• -E Use algo:secret for decrypting IPSec ESP packets.
• -f Print foreign internet addresses numerically rather than symbolically.
• -i Listen on interface. If unspecified, tcpdump searches the system interface list for the
lowest numbered, configured up interface.
• -n Do not convert addresses (that is, host addresses, port numbers, and so forth) to names.
• -m Load SMI MIB module definitions from file module. This option can be used several
times to load several MIB modules into tcpdump.
• -q Quiet output. Print less protocol information so output lines are shorter.
• -r Read packets from file (which was created with the -w option).
• -S Print absolute, rather than relative, TCP sequence numbers.
• -s Snarf snaplen bytes of data from each packet rather than the default of 68. 68 bytes is
adequate for IP, ICMP, TCP and UDP but may truncate protocol information from name
server and NFS packets. Packets truncated because of a limited snapshot are indicated in the
output with “[|proto]”, where proto is the name of the protocol level at which the truncation
has occurred.
• -v (Slightly more) verbose output. For example, the time to live, identification, total length
and options in an IP packet are printed. Also enables additional packet integrity checks such
as verifying the IP and ICMP header checksum.
• -w Write the raw packets to file rather than parsing and printing them out. They can later be
printed with the -r option. Standard output is used if file is -.
• -x Print each packet (minus its link level header) in hex. The smaller of the entire packet or
snaplen bytes will be printed.
• -X When printing hex, print ASCII too. Thus if -x is also set, the packet is printed in
hex/ascii. This option enables you to analyze new protocols.
To delete or upload a tcpdump file from the CLI type:
file tcpdump {delete <filename> | upload <filename> <URL or
scp://username:password@hostname/path/filename>}
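For example, to capture full packets on the LAN interface for traffic to or from a single host and
write them to a file for later analysis (the address and filename are illustrative):
tcpdump -i lan0_0 -s 0 -w lan_capture.cap host 10.1.1.25
The resulting file can then be uploaded with the file tcpdump upload command above and opened
in a protocol analyzer.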
Troubleshooting Best Practices
Physical Environment
• Cables. Make sure you have connected your cables properly.
• Straight-through cables. Primary and LAN ports on the appliance to the LAN switch.
• Cross-over cable. WAN port on the appliance to the WAN router.
• Speed and duplex settings. Do not assume network auto-sensing is functioning properly.
Make sure your speed and duplex settings match on the Steelhead appliance and the router or
switch. Use a ping flood to test duplex settings.
• WAN/LAN connections. Ensure the WAN interface is connected toward the WAN (router)
side and the LAN interface toward the LAN (switch) side.
Appliance Configuration
IP addresses. To verify the IP address has been configured correctly:
• Ensure the Steelhead appliances are reachable via the IP address. For instance, use the
Steelhead CLI command ping.
• Verify that the server-side Steelhead appliance is visible to the client-side Steelhead
appliance. For example, at the system prompt, enter the CLI command:
tproxytrace -i inpath0_0 server:port
• Verify that the client-side Steelhead appliance is visible to the server-side Steelhead
appliance. For example, at the system prompt, enter the CLI command:
tproxytrace -i inpath0_0 client:port
In-path rules. Verify that in-path rules are configured correctly. For example, at the system
prompt, enter the CLI command:
show in-path rules
In-path routes. Verify that in-path routes are configured correctly. For example, at the system
prompt, enter the CLI command:
sh ip in-path route <interface-name>
Steelhead service. If necessary, enable the Steelhead service. For example, at the system
prompt, enter the CLI command:
service enable
In-path support. If necessary, enable in-path support. For example, at the system prompt, enter
the CLI command:
in-path enable
In-path client out-of-path support. If necessary, disable in-path client out-of-path support. For
example, at the system prompt, enter the CLI command:
no in-path oop all-port enable
Network (LAN/WAN) Topology
Packet traversal. Physically draw out both sides of the entire network and make sure that
packets traverse the same client-side and server-side Steelhead appliances in both directions
(from the client to the server and from the server to the client). Verify the path by running a
traceroute from the client to the server and from the server to the client.
Bi-directional continuity. Make sure there is bi-directional continuity between the client and the
client-side Steelhead appliance, and the server Steelhead appliance and the network server.
Auto-discovery. If the auto-discovery mechanism is failing, try implementing a fixed-target rule.
You can define fixed-target rules using the Management Console or the CLI.
Auto-discovery can fail due to devices dropping TCP options, which sometimes occurs with
certain satellite links and firewalls. To fix this problem, create fixed-target rules that point to the
remote Steelhead appliance’s in-path interface on port 7800.
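A sketch of such a fixed-target rule from the CLI, using a hypothetical remote in-path address;
the parameter names shown are assumptions to confirm against the RiOS CLI reference:
in-path rule fixed-target target-addr 10.2.2.5 target-port 7800 dstaddr 10.2.0.0/16 rulenum end
Matching connections are then directed to the remote Steelhead appliance's in-path interface on
port 7800 without relying on auto-discovery.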
LAN/WAN bandwidth and reliability. Check if there are any client and server duplex issues or
VoIP traffic that may be clogging the T1 lines.
Protocol optimization. Are all protocols that you expect to optimize actually optimized in both
directions? If no protocols are optimized, only some of the expected protocols are optimized, or
expected protocols are not optimized in both directions, check:
• That connections have been successfully established.
• That Steelhead appliances on the other side of a connection are turned on.
• For secure or interactive ports that are preventing protocol optimization.
• For any passthrough rules that could be causing some protocols to pass through Steelhead
appliances unoptimized.
• That the LAN and WAN cables are not inadvertently swapped.
V. Exam Questions
Types of Questions
The RCSP exam includes a variety of question types including true/false, single-answer multiple
choice, multiple-answer multiple choice, and fill in the blank. The question distribution is
heavily weighted toward multiple choice; however, fill-in-the-blank questions are used where the
command is believed to be an important part of everyday Steelhead appliance operation.
Regardless of the type of question, selecting the best answer(s) in response to the questions will
yield the best score.
Sample Questions
1. How do you view the full configuration in the CLI? (one answer)
a. SH > show con
b. SH > show configuration
c. SH > show config all
d. SH # show config full
e. SH (config) # show con
2. Under what circumstances will the NetFlow cache entries flush (be sent to the collector)?
(Multiple answers)
a. When inactive flows have remained for 15 seconds.
b. When inactive flows have remained for 30 minutes.
c. When active flows have remained for 30 minutes.
d. When the TCP URG bit is set.
e. When the TCP FIN bit is set.
3. The auto-discovery probe uses which TCP option number?
a. 0x4e (76 decimal)
b. 0x4c (76 decimal)
c. 0x42 (66 decimal)
d. Auto-discovery does not use TCP options
4. In order to achieve optimization using auto-discovery for traffic coming from site C and
destined to site A in the exhibit, which configuration below would be required?
a. In-path fixed target rule on site B Steelhead pointing to Site A Steelhead
b. Peering rule on site B Steelhead passing through probes from site C
c. Peering rule on site B Steelhead passing through probe responses from site A
d. Both A and C
e. Both B and C
5. You are configuring HighSpeed TCP in an environment with an OC-12 (622Mbit/s) and 60
milliseconds of round-trip latency. The WAN router queue length is set to BDP for the link.
Assuming 1500 byte packets, the queue length for this link would be closest to:
a. 3,110 packets
b. 6,220 packets
c. 775 packets
d. 150 packets
e. 10,000 packets
6. Which of the following correctly describe the combination of cable types used in a fail-to-
wire scenario for the interconnected devices shown in the accompanying figure? Assume
Auto-MDIX is not enabled on any device.
a. Cable 1: Cross-over, Cable 2: Cross-over
b. Cable 1: Straight-through, Cable 2: Straight-through
c. Cable 1: Cross-over, Cable 2: Straight-through
d. Cable 1: Straight-through, Cable 2: Cross-over
(Figure: port f0 connects via Cable 1 to wan0_0, and lan0_0 connects via Cable 2 to port f0/1.)
7. In the accompanying figure, on which interfaces would you capture the NetFlow export data
for active FTP data packets when a client performs a GET operation? (Assume you are not
interested in client response packets such as acknowledgements.) (One answer)
a. A and B
b. B and D
c. C and D
d. B and C
e. A and C
(Figure: the FTP Server connects through an L3 Switch to SH3, and the FTP Client connects
through an L2 Switch to SH4; the two sites are joined across the WAN by routers (f0/s0
interfaces). Capture points: A on the SH3 LAN side, B on the SH3 WAN side, C on the SH4 WAN
side, D on the SH4 LAN side.)
8. Which of the following control messages are NOT used by WCCP? (Single answer)
a. HERE_I_AM
b. I_SEE_YOU
c. REDIRECT_ASSIGN
d. REMOVAL_QUERY
e. KEEPALIVE
9. A customer wants to mark the DSCP value for active FTP data connection as AF22. Which
of the following are true?
a. Specify qos dscp rule at client-side Steelhead with a dest-port of 21 and with a DSCP
value of 22.
b. Specify qos dscp rule at client-side Steelhead with a src-port of 20 and with a DSCP
value of 20.
c. Specify qos dscp rule at server-side Steelhead with a dest-port of 21 and with a DSCP
value of 22
d. Specify qos dscp rule at client-side Steelhead with a dest-port of 20 and with a DSCP
value of 22.
e. Specify qos dscp rule at server-side Steelhead with a dest-port of 20 and with a DSCP
value of 20.
10. Type in the command used to show information regarding the current health (status) of a
Steelhead, the current version, the uptime, and the model number. (fill in the blank)
_______________
Answers
1d, 2ace, 3b, 4b, 5a, 6c, 7e, 8e, 9e, 10 show info
VI. Appendix
Acronyms and Abbreviations
Acronym/Abbreviation Definition
AAA Authentication, Authorization, and Accounting
ACL Access Control List
ACS (Cisco) Access Control Server
AD Active Directory
ADS Active Directory Services
AR Asymmetric Routing
ARP Address Resolution Protocol
BDP Bandwidth-Delay Product
BW Bandwidth
CA Certificate Authority
CAD Computer Aided Design
CDP Cisco Discovery Protocol
CHD Computed Historical Data
CIFS Common Internet File System
CLI Command-Line Interface
CMC Central Management Console
CPU Central Processing Unit
CSR Certificate Signing Request
CSV Comma-Separated Value
DC Domain Controller
DER Distinguished Encoding Rules
DHCP Dynamic Host Configuration Protocol
DNS Domain Name Service
DSA Digital Signature Algorithm
DSCP Differentiated Services Code Point
ECC Error-Correcting Code
ESD Electrostatic Discharge
FDDI Fiber Distributed Data Interface
FIFO First in First Out
FSID File System ID
FTP File Transfer Protocol
GB Gigabytes
GMT Greenwich Mean Time
GRE Generic Routing Encapsulation
GUI Graphical User Interface
HFSC Hierarchical Fair Service Curve
HSRP Hot Standby Routing Protocol
HSTCP High-Speed Transmission Control Protocol
HTTP HyperText Transport Protocol
HTTPS HyperText Transport Protocol Secure
ICMP Internet Control Message Protocol
ID Identification number
IGP Interior Gateway Protocol
IOS (Cisco) Internetwork Operating System
IKE Internet Key Exchange
IP Internet Protocol
IPSec Internet Protocol Security Protocol
ISL InterSwitch Link
L2 Layer-2
L4 Layer-4
LAN Local Area Network
LED Light-Emitting Diode
LZ Lempel-Ziv
MAC Media Access Control
MAPI Messaging Application Programming Interface
MEISI Microsoft Exchange Information Store Interface
MIB Management Information Base
MOTD Message of the Day
MS SQL Microsoft Structured Query Language
MSFC Multilayer Switch Feature Card
MTU Maximum Transmission Unit
MX-TCP Max-Speed TCP
NAS Network Attached Storage
NAT Network Address Translation
NFS Network File System
NIS Network Information Services
NSPI Name Service Provider Interface
NTLM Windows NT LAN Manager
NTP Network Time Protocol
OSI Open System Interconnection
OSPF Open Shortest Path First
PAP Password Authentication Protocol
PBR Policy-Based Routing
PCI Peripheral Component Interconnect
PEM Privacy Enhanced Mail
PFS Proxy File Service
PKCS12 Public Key Cryptography Standard #12
PRTG Paessler Router Traffic Grapher
QoS Quality of Service
RADIUS Remote Authentication Dial-In User Service
RAID Redundant Array of Independent Disks
RCU Riverbed Copy Utility
ROFS Read-Only File System
RSA Rivest-Shamir-Adleman encryption method by RSA Security
SA Security Association
SDR Scalable Data Referencing
SFQ Stochastic Fairness Queuing
SH Riverbed Steelhead Appliance
SMB Server Message Block
SMI Structure of Management Information
SMTP Simple Mail Transfer Protocol
SNMP Simple Network Management Protocol
SQL Structured Query Language
SSH Secure Shell or server-side Steelhead
SSL Secure Sockets Layer
TA Transaction Acceleration
TACACS+ Terminal Access Controller Access Control System
TCP Transmission Control Protocol
TCP/IP Transmission Control Protocol/Internet Protocol
TP Transaction Prediction
TTL Time to Live
ToS Type of Service
U Unit
UDP User Datagram Protocol
UNC Universal Naming Convention
URL Uniform Resource Locator
UTC Coordinated Universal Time
VGA Video Graphics Array
VLAN Virtual Local Area Network
VoIP Voice over IP
VWE Virtual Window Expansion
WAAS Wide-Area Application Services
WAFS Wide-Area File Services
WAN Wide Area Network
WCCP Web Cache Communication Protocol