
RHINO (PRODUCTION) GETTING STARTED GUIDE

Copyright and Disclaimers


This is "live" documentation, designed to be read and maintained online. In the printed or PDF format, there may be:
- links that are hidden
- references to "clicking" or other screen elements that don't display or work in hard copy
- information that has been updated since the PDF was generated or the document printed.
Please see the OpenCloud Documentation Portal for the latest version of this and other Rhino documentation.

Copyright 2009 OpenCloud Limited. All rights reserved. OpenCloud is a trademark of OpenCloud Limited. Level 4, 54-56 Cambridge Terrace, Wellington, New Zealand DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. The information in this document is confidential and meant for use only by the intended recipient and only in connection with and subject to the terms of its contractual relationship with OpenCloud. Acceptance and/or use of any of the information contained in this document indicates agreement not to disclose or otherwise make available to any person who is not an employee of the intended recipient, or to any other entity, any of the information contained herein. This documentation has the sole purpose of providing information regarding OpenCloud software products and/or services and shall be disclosed only to those individuals who have a need to know. Any entity or person with access to this information shall be subject to this confidentiality statement. No part of this publication may be reproduced or transmitted in any form or by any means for any purpose without the express written permission of OpenCloud. 3GPP is a trademark or registered trademark of ETSI. GSM is a trademark of the GSM Association. Java and all Java based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries.

Rhino (Production) Getting Started Guide


March 31, 2009

Contents
1 Preparing to Install Rhino
  1.1 Check Hardware & OS Prerequisites
  1.2 Install Required Software
  1.3 Configure Network Features
2 Installing Rhino
  2.1 Unpack and Gather Information
  2.2 Install Nodes
    2.2.1 Install Interactively
    2.2.2 Install Unattended
    2.2.3 Transfer Installations
  2.3 Create New Nodes
  2.4 Initialise the Database
3 Running Rhino
  3.1 Start Rhino
  3.2 Stop Rhino
Appendixes
  A. Configuring the Installation
  B. Installed Files
  C. Runtime Files
  D. Uninstalling

developer.opencloud.com

Commercial in Confidence



Rhino (Production) Getting Started Guide


This document is the first place to go to get started using the production version of Rhino 2.1. It includes hardware and software requirements, installation instructions, and the basic steps for starting and stopping a Rhino SLEE.

Topics
This document includes the following topics:
- Preparing to Install Rhino: checking hardware and operating system requirements, installing Java and PostgreSQL, and configuring the network (IP addresses, host names and firewall rules).
- Installing Rhino: unpacking and installing the Rhino base, creating cluster nodes, and initialising the database.
- Running Rhino: creating the primary component, starting nodes, starting the SLEE, and stopping nodes and clusters.
- Appendixes: optional configuration, installed files, runtime files, and procedures for uninstalling.

Audience and Scope

Intended Audience


This document is for administrators, and anyone else responsible for installing and getting started with the production version of Rhino 2.1. It assumes a basic knowledge of core Rhino and JAIN SLEE concepts (see Rhino Overview and Concepts).

Scope
This document provides a basic overview of the steps needed to install and run the production version of Rhino 2.1. (For the SDK version, see the Rhino (SDK) Getting Started Guide.) For more information, including administration and deployment, see the main Rhino Documentation page. For troubleshooting and other topics, see the How-to Guides and Knowledgebase on OpenCloud's Developer Portal.


1 Preparing to Install Rhino


Before installing Rhino, you need to:
1.1 Check Hardware & OS Prerequisites
1.2 Install Required Software
1.3 Configure Network Features


1.1 Check Hardware & OS Prerequisites

What are the requirements of a production system?


Operating System
The production version of Rhino is officially supported on Solaris 10 and RedHat AS4. It has also been successfully deployed on other versions of Linux (and, for the SDK version, on Windows). For help with other operating systems, please contact OpenCloud Sales.

Hardware
The general hardware requirements below are for a Rhino 2.1 production system, used for:
- performance testing, to validate whether or not the combination of Rhino, resource adaptors and applications exceeds performance requirements
- failure testing, to validate whether or not the combination of Rhino, resource adaptors and applications displays appropriate characteristics in failure conditions
- and (ultimately) live deployment.

Type of machine
  Minimum: current commodity hardware and CPUs

Number of machines (one cluster node per machine)
  Minimum: 2; Recommended: 2 or more

Number of CPU cores per machine
  Minimum: 2; Recommended: 16

RAM requirements per machine
  Minimum: 1 GB; Recommended: 2+ GB

Network interface
  Switched ethernet

Network interface requirements per machine
  Minimum: 2 interfaces at 100MB (one interface for cluster communication)
  Recommended: 2 or more interfaces at 1GB (one interface for cluster communication)

Performance measures and targets vary, based on the application. The requirements above are for configurations without load-generation hardware or SS7 connectivity. For more information on how to configure Rhino as a two-node cluster, please refer to the Rhino Administration and Deployment Guide, section 8 (Cluster Membership). If you would like help sizing Rhino for production deployments, please contact OpenCloud Professional Services.


1.2 Install Required Software

Install Java and PostgreSQL


Before installing Rhino, you need to install and configure the following software:

Install Java
Rhino 2.1 requires Sun Java SE, version 5 or 6. OpenCloud strongly recommends using the most recent Sun Java 6 release, which can be downloaded and installed from http://java.sun.com. For Solaris installations, also apply the patches for Java available from the Java Downloads page of Sun's website; these patches are required for correct operation of the JVM. Please contact your sales representative or OpenCloud support for details on the particular releases of Sun Java 5 and Sun Java 6 that OpenCloud is using and testing.

Check the Java version

Some machines may have old versions of Sun Java, or versions of Java from different vendors. To check the version of Java installed in the system path, run java -version, for example:
$ java -version
java version "1.6.0_11"
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) Server VM (build 11.0-b16, mixed mode)

If the version reported is not in the Sun 1.5 or 1.6 series, please download a supported version from http://java.sun.com and install it before continuing.
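The version check above can be scripted; a minimal sketch, keying on the quoted version string printed by java -version (the check_java_version helper is an assumption, not part of the Rhino distribution):

```shell
# check_java_version: accept only Sun Java 1.5.x / 1.6.x version strings,
# as printed between quotes by `java -version`. A hypothetical helper,
# not part of the Rhino distribution.
check_java_version() {
  case "$1" in
    1.5.*|1.6.*) return 0 ;;
    *) return 1 ;;
  esac
}

# To test the running JVM, extract the quoted version string first, e.g.:
# ver=$(java -version 2>&1 | sed -n 's/.*"\(.*\)".*/\1/p' | head -n 1)
check_java_version "1.6.0_11" && echo "1.6.0_11 is supported"
```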

Install PostgreSQL
The Rhino SLEE requires a PostgreSQL RDBMS for persisting its main working memory to non-volatile storage. (The main working memory in Rhino contains the runtime state, deployments, profiles, resource adaptor entity configuration state, etc.) The Rhino SLEE remains available whether or not the PostgreSQL database is available, and the database does not affect or limit how Rhino SLEE applications are written or operate. The PostgreSQL database provides a backup of the working memory only, to restore a cluster if that cluster has entirely failed and needs to be restarted.

The Rhino SLEE has been tested on PostgreSQL versions 7.4.*, 8.1.* and 8.3.*. Please contact your sales representative or OpenCloud support for details on the particular releases of PostgreSQL OpenCloud is using and supporting.

The PostgreSQL database can be installed on any network-reachable host, although it is typically installed on the same local host as the Rhino SLEE. Only a single PostgreSQL database is required for the entire Rhino SLEE cluster. (The Rhino SLEE can replicate the main working memory across multiple PostgreSQL servers.)

To install and configure PostgreSQL for Rhino, you need to:

1 Download and install PostgreSQL
See http://www.postgresql.org/download. PostgreSQL is included by default in Solaris 10; Solaris/SPARC users can also get binary PostgreSQL server packages from http://www.blastwave.org or http://www.sun.com/software/solaris/freeware. PostgreSQL is usually also available as a package on the various Linux distributions.


2 Create a user for the SLEE
Once you have installed PostgreSQL, the next step is to create or assign a database user for the Rhino SLEE. This user needs permission to create databases, but does not need permission to create users. To create a new user for the database, use the createuser script supplied with PostgreSQL, as follows:
[postgres]$ createuser
Enter name of user to add: rhino
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) n
CREATE USER

3 Configure for TCP/IP
The PostgreSQL server needs to be configured to accept TCP/IP connections so that it can be used with the Rhino SLEE. Instructions for configuring TCP/IP differ depending on the version of PostgreSQL you are using:

PostgreSQL 8.0 or newer: no configuration needed. As of version 8.0 of PostgreSQL, the parameter for configuring TCP/IP support is no longer required, and the database accepts TCP/IP socket connections by default.

Older versions of PostgreSQL: manually enable TCP/IP support by editing the tcpip_socket parameter in the $PGDATA/postgresql.conf file, as follows:

tcpip_socket = 1

4 Configure access-control rules
Instructions for configuring access-control rules differ depending on whether the Rhino SLEE and PostgreSQL are on the same host or separate hosts:

Rhino SLEE and PostgreSQL on the same host The default installation of PostgreSQL trusts connections from the local host. If the Rhino SLEE and PostgreSQL are installed on the same host, the access control for the default configuration is sufficient. A sample access control configuration is shown below, from the file $PGDATA/pg_hba.conf:
#TYPE  DATABASE  USER  IP-ADDRESS  IP-MASK          METHOD
local  all       all                                trust
host   all       all   127.0.0.1   255.255.255.255  trust

Rhino SLEE and PostgreSQL on separate hosts When the Rhino SLEE and PostgreSQL need to be installed on separate hosts (or when a stricter security policy is needed), the access control rules in $PGDATA/pg_hba.conf will need to be tailored to allow connections from Rhino to the database manager. For example, to allow connections from a Rhino instance on another host:
#TYPE  DATABASE  USER      IP-ADDRESS   IP-MASK          METHOD
local  all       all                                     trust
host   all       all       127.0.0.1    255.255.255.255  trust
host   rhino     postgres  192.168.0.5  255.255.255.0    password
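Before restarting the server, it can help to grep the edited pg_hba.conf for a rule that could cover the Rhino database. A rough sketch (simplified on purpose: real pg_hba.conf evaluation is positional and first-match-wins, so treat this as a hint only; has_host_rule is a hypothetical helper):

```shell
# has_host_rule: rough check that a pg_hba.conf contains a "host" rule
# whose database field could cover the given database. Simplified: it
# ignores user, address and order, which the real server does not.
has_host_rule() {  # usage: has_host_rule /path/to/pg_hba.conf dbname
  awk -v db="$2" '$1 == "host" && ($2 == db || $2 == "all") { found = 1 }
                  END { exit !found }' "$1"
}
```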

5 Restart the server
Once these changes have been made, you must completely restart the PostgreSQL server.


Telling the server to reload the configuration file does not cause it to enable TCP/IP networking; TCP/IP is only initialised when the database is started. To restart PostgreSQL, use one of the following:
- the command supplied by the package (for example, /etc/init.d/postgresql restart)
- the pg_ctl restart command provided with PostgreSQL.


1.3 Configure Network Features

Configure IP addresses, host names, and multicast addresses


Before installing Rhino, please configure the following network features:

IP address:
  Make sure the system has an IP address and is visible on the network.

Host names:
  Make sure that:
  - the system can resolve localhost to the loopback interface
  - the host name of the machine resolves to an external IP address, not a loopback address.

Multicast addresses (firewall rules):
  If the local system has a firewall installed, modify its rules to allow multicast UDP traffic. Multicast addresses are, by definition, in the range 224.0.0.0/4 (224.0.0.0-239.255.255.255). (This range is separate from the unicast address range that machines use for their host addresses.) Rhino SLEE uses multicast UDP to distribute main working memory between cluster members; during the install it asks for a range of multicast addresses to use. By default, the required port numbers are 45601, 45602 and 46700-46800. All nodes in the cluster must use the same multicast addresses; this is how they see each other. Ensure that the firewall is configured to allow multicast messages through on the multicast ranges/ports that are configured during installation.

System clock:
  As with most system services, it is not a good idea to make sudden changes to the system clock. The Rhino SLEE assumes that time will only ever go forwards, and that time increments are less than a few seconds. Set the network time protocol (NTP) service to gradually slew the system clock to the correct time. It is vitally important that the system time is only ever gradually slewed when it is being set to the correct time. If the system clock is suddenly set to a time in the past, the Rhino SLEE may exhibit unpredictable behaviour. If the system clock is set to a value more than 8 seconds forward from the current time, nodes in the cluster will assume that they are no longer part of the quorum of nodes and will leave the cluster. Use extreme care when manually setting the time on any machine that will host a Rhino node. Before starting any nodes on the machine, manually set the system clock to approximately the correct time and configure ntp in skew mode to correct any inaccuracies or clock drift. Manually setting the system clock should not be performed while a node is running on the machine.
  When using a cluster of nodes, ntpd is useful to keep the system clocks on all nodes synchronised, and to have all nodes configured to use the same timezone. This helps, for example, with keeping timestamps in logging output from all nodes synchronised.


Refer to the tzselect or tzconfig commands on Solaris or Linux for instructions on how to configure the timezone.
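A quick way to sanity-check a candidate multicast address (for firewall rules, or for the address-pool questions the installer asks) is by its first octet; a minimal sketch, assuming IPv4 dotted-quad input (the is_multicast helper is hypothetical, not part of Rhino):

```shell
# is_multicast: true when an IPv4 dotted-quad address falls inside
# 224.0.0.0/4, i.e. its first octet is in 224-239.
is_multicast() {
  first=${1%%.*}
  case "$first" in
    ''|*[!0-9]*) return 1 ;;   # empty or non-numeric first octet
  esac
  [ "$first" -ge 224 ] && [ "$first" -le 239 ]
}

# The installer's default address pool start falls inside the range:
is_multicast 224.0.24.1 && echo "224.0.24.1 is multicast"
```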


2 Installing Rhino
To install Rhino:
2.1 Unpack and Gather Information
2.2 Install Nodes
2.3 Create New Nodes
2.4 Initialise the Database


2.1 Unpack and Gather Information

Unpack and review


To begin the Rhino installation:

1 Unpack Rhino-2.1.tar
The Rhino SLEE is delivered as an uncompressed tar file named Rhino-2.1.tar. Unpack it using the tar command, as follows:
$ tar xvf Rhino-2.1.tar
$ cd rhino-install

This creates the distribution directory, rhino-install, in the current working directory. The rhino-install distribution directory is temporary; you create the actual program directory as part of the installation. You may remove rhino-install after the installation is complete, if it is no longer required.

2 Read the release notes
Be sure to read any last-minute instructions included with your release of Rhino. The rhino-install distribution directory includes:
- README: any last-minute information
- doc/CHANGELOG: what's changed between previous versions of Rhino and this one.

3 Gather information required for installation
The Rhino installation will prompt for the following information:
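Before running the installer, a quick pre-flight check that the unpacked directory is complete can save a failed install later; a sketch, covering only the files named above plus the installer script (check_dist is a hypothetical helper, not part of the distribution):

```shell
# check_dist: confirm an unpacked Rhino distribution directory contains
# the files mentioned above (README, doc/CHANGELOG) and the installer
# script. A hypothetical pre-flight helper.
check_dist() {
  for f in README doc/CHANGELOG rhino-install.sh; do
    [ -e "$1/$f" ] || { echo "missing $1/$f" >&2; return 1; }
  done
}

# Example: check_dist rhino-install
```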

Directory (default: ~/rhino)
  Where to install Rhino (~ = the user's home directory on Linux/Unix).

Postgres host (default: localhost)

Postgres port (default: 5432)

Postgres user (default: user)

Postgres password

Database name (default: rhino)
  Will be created in your Postgres server and configured with the default tables for Rhino SLEE support.

Management interface remote method invocation (RMI) registry port (default: 1199)
  Used for accessing the Management MBeans from a Java RMI client, such as the Rhino command-line utilities.

Management interface RMI object port (default: 1200)
  Used for accessing the Management MBeans from a Java RMI client, such as the Rhino command-line utilities.


JMX remote service port (default: 1202)
  Used for accessing the JMX remote server. The Rhino Web Console uses this for remote management.

Secure web console HTTPS port (default: 8443)
  Used for the Web Console (Jetty) server, which provides the remote management user interface. This is a secure port (TLS).

Java installation directory (default: value of the JAVA_HOME environment variable, if it is set)
  Location of your Java J2SE/JDK installation. Must be at least version 1.5.0 (see 1.2 Install Required Software).

Java heap size (default: 512)
  An argument passed to the JVM to specify the amount of main memory (in megabytes) which should be allocated for running Rhino. To prevent extensive disk swapping, this should be set to less than the total physical memory installed in the machine.

Cluster ID (default: 100)
  An integer ID that uniquely identifies this cluster.

Address Pool Start (default: 224.0.24.1)
  A pool of multicast addresses that will be used for group communication by Rhino services.

Address Pool End (default: 224.0.24.8)

Location of psql client (default: as per the current environment PATH, if found)
  The Rhino installation needs to use the PostgreSQL interactive client, psql. Enter the full path to your local psql client here. If you do not have a psql client installed (for example, if postgres is running on a remote host and not installed on this one), then enter '-' to skip this question. You will still need to initialise the database on the remote host using init-management-db.sh.
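Since the installer asks for the full path to psql, it can help to resolve that path beforehand; a small sketch using the shell's command -v, falling back to the '-' value the installer accepts when no client is present:

```shell
# Resolve the psql client path ahead of the install. `command -v` prints
# the full path when psql is on the PATH; otherwise fall back to '-',
# the value the installer accepts to skip the question.
psql_path=$(command -v psql || echo '-')
echo "psql client: $psql_path"
```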


2.2 Install Nodes

Installing Rhino on multiple machines


Multiple nodes in a cluster provide resiliency against software error, and multiple nodes on separate machines add resiliency against hardware failure. A typical and basic "safe default" configuration for a Rhino SLEE cluster is to use three machines, each hosting one node. Multiple nodes on separate machines in a cluster must be configured exactly the same (except for the node ID). To do this, you can:

Install Interactively
  Description: Install each node from the distribution .tar file, and be very careful to answer each question with exactly the same answer.
  Pros/Cons: Does not require any special installer options or files to be copied during installation. Allows different nodes to be installed at different filesystem locations on each machine. Error-prone. Must copy keystores after installation.

Install Unattended
  Description: Install each node from the distribution .tar file, but create an "answers" file from the first installation and use it in the subsequent ones.
  Pros/Cons: Avoids typos entering the configuration. Requires use of special installer options and copying a file during installation. Still requires keystores to be copied after installation. The Rhino directory must be in the same location on the filesystem on each machine, or configuration files must be edited.

Transfer Installations
  Description: Install one node, then copy the entire base directory to each machine. This method is recommended for machines using the same configuration (program directory, JAVA_HOME, etc.).
  Pros/Cons: Avoids typos. Saves having to find the keystores or answer files to copy. The Rhino directory must be in the same location on the filesystem on each machine, or configuration files must be edited.


2.2.1 Install Interactively

Run rhino-install.sh and answer prompts


To install Rhino onto a machine (to use as a cluster node):
1. From within the distribution directory (rhino-install), run the rhino-install.sh script. If the installer detects a previous installation, it will ask if it should first delete it.
2. Answer each prompt with information about your installation (see 2.1 Unpack and Gather Information). The default values are normally satisfactory for a working installation; following the installation you can always edit configuration values as needed.
3. After the installation, set the $RHINO_HOME environment variable to the program directory where Rhino is installed (see A. Configuring the Installation).
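Setting $RHINO_HOME can be done as follows; a sketch assuming the default install directory ~/rhino was accepted (add the same line to your shell profile to make it permanent):

```shell
# Set RHINO_HOME for the current shell, assuming the default install
# directory (~/rhino) was accepted during installation.
export RHINO_HOME=$HOME/rhino
```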

Manually copy keystores for multiple nodes


Each time you run the rhino-install.sh script, it generates a matching set of server and client keys for authenticating SSL connections. For a client to connect to a server, their keystores must match; so if you have multiple nodes in a cluster, you will need to copy their keys over before different clients can connect to different nodes. The keys are stored in:
- rhino-private.keystore: contains a key entry for the SSL server and a trust entry for the SSL client
- rhino-public.keystore: contains a key entry for the SSL client and a trust entry for the SSL server
- web-console.keystore: needed to run the HTTPS listener for the standalone web console; not used to connect to Rhino, and doesn't need to match another keystore.
To allow a single Rhino client to connect to multiple Rhino nodes, copy rhino-private.keystore from the Rhino base directory of the node on which rhino-install.sh was run with that client, to the Rhino base directory on the other nodes to which you want that client to be able to connect. To view the keys in each keystore, and to check that the keyEntry in one keystore matches the trustCertEntry in another, use the commands keytool -keystore rhino-private.keystore -list and keytool -keystore rhino-public.keystore -list.


2.2.2 Install Unattended

Use -r, -a and -d switches


When you need to automate or repeat installations, the installer can perform a non-interactive installation based on an answer file, which it can create automatically from the answers you give during an interactive installation. The install script has the following options:
$ ./rhino-install.sh -h
Usage: ./rhino-install.sh [options]
Command line options:
  -h, --help - Print this usage message.
  -a         - Perform an automated install. This will perform a non-interactive
               install using the installation defaults.
  -r <file>  - Reads in the properties from <file> before starting the install.
               This will set the installation defaults to the values contained
               in the properties file.
  -d <file>  - Outputs a properties file containing the selections made during
               install (suitable for use with -r).

You'll use:
- -d to create the answer file
- -r to read the answer file
- -a to install in non-interactive mode.
For example, to create the answer file:
$ ./rhino-install.sh -d answer.config

And then to install, unattended, based on that answer file:


$ ./rhino-install.sh -r answer.config -a

After installing multiple nodes for a cluster unattended, you must manually copy the keystores between them, so the clients can connect.

Sample "answer" file


Below is an example of an answer file:
DEFAULT_RHINO_HOME=/home/rhino/rhino
DEFAULT_RHINO_BASE=/home/rhino/rhino
DEFAULT_RHINO_WORK_DIR=/home/rhino/rhino/work
DEFAULT_JAVA_HOME=/usr/local/java
DEFAULT_JVM_ARCH=32
DEFAULT_FILE_URL=file:
DEFAULT_MANAGEMENT_DATABASE_NAME=rhino
DEFAULT_MANAGEMENT_DATABASE_HOST=localhost
DEFAULT_MANAGEMENT_DATABASE_PORT=5432
DEFAULT_MANAGEMENT_DATABASE_USER=rhino
DEFAULT_MANAGEMENT_DATABASE_PASSWORD=password
DEFAULT_PSQL_CLIENT=/usr/bin/psql
DEFAULT_RMI_MBEAN_REGISTRY_PORT=1199
DEFAULT_JMX_SERVICE_PORT=1202
DEFAULT_SNAPSHOT_BASEPORT=42000
DEFAULT_HEAP_SIZE=512m
DEFAULT_RHINO_PUBLIC_STORE_PASS=changeit
DEFAULT_RHINO_PRIVATE_STORE_PASS=changeit
DEFAULT_RHINO_PUBLIC_KEY_PASS=changeit
DEFAULT_RHINO_PRIVATE_KEY_PASS=changeit
DEFAULT_WEB_CONSOLE_HTTP_PORT=8066
DEFAULT_WEB_CONSOLE_HTTPS_PORT=8443
DEFAULT_WEB_CONSOLE_KEY_PASS=changeit


DEFAULT_WEB_CONSOLE_STORE_PASS=changeit
DEFAULT_WEB_CONSOLE_HOSTNAME=rhino.opencloud.co.nz
DEFAULT_RHINO_PASSWORD=password
DEFAULT_RHINO_USERNAME=admin
DEFAULT_LOCALIPS="[fe80:0:0:0:230:1bff:febc:1f29%2] 192.168.0.1 [0:0:0:0:0:0:0:1%1] 127.0.0.1"
DEFAULT_RHINO_WATCHDOG_STUCK_INTERVAL=45000
DEFAULT_SAVANNA_CLUSTER_ID=100
DEFAULT_SAVANNA_CLUSTER_ADDR=224.0.50.1
DEFAULT_SAVANNA_MCAST_START=224.0.50.1
DEFAULT_SAVANNA_MCAST_END=224.0.50.8
DEFAULT_RHINO_WATCHDOG_THREADS_THRESHOLD=50
DEFAULT_LICENSE=-
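Before running an unattended install against an answer file like the one above, a quick sanity check that the expected properties are present can catch a truncated file; a minimal grep-based sketch (has_prop is a hypothetical helper, and the property names are those from the sample file):

```shell
# has_prop: true if an installer answer file sets the given property.
# A hypothetical helper; property names are as in the sample file above.
has_prop() {  # usage: has_prop answer.config PROPERTY_NAME
  grep -q "^$2=" "$1"
}

# Example: has_prop answer.config DEFAULT_RHINO_HOME || echo "incomplete"
```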


2.2.3 Transfer Installations

Tar up an installation and transfer it


To transfer an existing Rhino installation from one host to another:

1 Copy the cluster configuration
Issue the following commands on the local host:
$ cd /tmp
$ tar cvf rhino-cluster.tar $RHINO_HOME

2 Copy the tar file to the target host.

3 On the target host, issue the following commands (in the example, the tarball has been copied to /tmp):
$ cd /tmp
$ tar xvf rhino-cluster.tar $RHINO_HOME

4 Once the cluster configuration has been transferred to the target host, it is important to edit the config_variables file in the config directory to reflect the new machine's local environment:
- Put the IP addresses of the local machine into the LOCAL_IPS value.
- Update RHINO_HOME and JAVA_HOME to reflect their respective locations.
- Update any other variables required to accurately reflect the system environment on the machine.
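Step 4's edits can be scripted when transferring to many hosts; a sketch assuming config_variables uses simple NAME=value lines (update_var is a hypothetical helper; verify the file's actual format before relying on it):

```shell
# update_var: rewrite a NAME=value line in a configuration file in place.
# A hypothetical helper; it assumes the file uses simple NAME=value lines.
update_var() {  # usage: update_var FILE NAME VALUE
  sed "s|^$2=.*|$2=$3|" "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Example (paths are illustrative):
# update_var config_variables JAVA_HOME /usr/local/java
```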


2.3 Create New Nodes

Run $RHINO_HOME/create-node.sh
After installing Rhino on a machine, you can create a new node by executing the $RHINO_HOME/create-node.sh shell script. When a node-NNN directory is created, the default configuration for the node is copied from $RHINO_HOME/etc/default. Ideally, any configuration changes should be made in the etc/default directory before creating new node directories (and made at the same time in any existing node-NNN directories). See also A. Configuring the Installation. Once a node has been created, its configuration cannot be transferred to another machine; it must be created on the host on which it will run. In the following example, node 101 is created:
$ /home/user/rhino/create-node.sh
Chose a Node ID (integer 1..255)
Node ID [101]: 101
Creating new node /home/user/rhino/node-101
Deferring database creation. This should be performed before starting Rhino for the first time.
Run the "/home/user/rhino/node-101/init-management-db.sh" script to create the database.
Created Rhino node in /home/user/rhino/node-101.

You can also use a node-id argument with create-node.sh, for example:
$ /home/user/rhino/create-node.sh 101
Creating new node /home/user/rhino/node-101
Deferring database creation. This should be performed before starting Rhino for the first time.
Run the "/home/user/rhino/node-101/init-management-db.sh" script to create the database.
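The Node ID prompt accepts integers in 1..255; when scripting node creation with the node-id argument form, that range can be validated first. A sketch (valid_node_id is a hypothetical helper, not part of Rhino):

```shell
# valid_node_id: node IDs must be integers in the range 1..255, matching
# the create-node.sh prompt above. A hypothetical validation helper.
valid_node_id() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty or non-numeric
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 255 ]
}

# Example: valid_node_id 101 && echo "101 is a valid node ID"
```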


2.4 Initialise the Database

Run init-management-db.sh
Rhino uses a PostgreSQL database to keep a backup of the current state of the SLEE. Before you can use Rhino, you must initialise this database. You create and initialise the database by executing the init-management-db.sh shell script from a node directory (see 2.3 Create New Nodes), for example:
$ cd $RHINO_HOME/node-101
$ ./init-management-db.sh

This script only needs to be run once for the entire cluster. The SLEE administrator can also use this script to wipe all state held within the SLEE. The init-management-db.sh script produces the following console output:
$ ./init-management-db.sh
CREATE DATABASE
You are now connected to database "rhino".
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "versioning_pkey" for table "versioning"
CREATE TABLE
COMMENT
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "keyspaces_pkey" for table "keyspaces"
CREATE TABLE
COMMENT
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "timestamps_pkey" for table "timestamps"
CREATE TABLE
COMMENT
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "registrations_pkey" for table "registrations"
CREATE TABLE
COMMENT


3 Running Rhino
This section includes the following topics:
3.1 Start Rhino
3.2 Stop Rhino
See also section 1 (Operational State) of the Rhino Administration and Deployment Guide.


3.1 Start Rhino

Start the primary component, other nodes, and the SLEE


The sections below summarise the startup phase of the cluster lifecycle, which includes:
- creating and starting the primary component
- starting other nodes
- then starting the SLEE.

Starting a node
To start a node, run the start-rhino.sh shell script ($RHINO_HOME/node-NNN/start-rhino.sh), which causes the following sequence of events:
1. The host launches a Java Virtual Machine process.
2. The node generates and reads its configuration.
3. The node checks to see if it should become part of the primary component. If it was previously part of the primary component, or the -p option was specified on startup, it tries to join the primary component.
4. The node waits to enter the primary component of the cluster.
5. The node connects to PostgreSQL and synchronises state with the rest of the cluster. Only one node in the cluster connects to Postgres to load and store the persistent state; once that data is loaded into memory, all other nodes obtain their copies from the in-memory state, not from Postgres.
6. The node starts per-node m-lets (management agents), or per-machine m-lets if they have not already been started by another node in the Rhino cluster running on the same machine.
7. The node becomes ready to receive management commands.
For more information on cluster lifecycle management, see the Rhino Administration and Deployment Guide. See Startup options below for information on creating the primary component, starting as a quorum node, setting automatic restart, and transitioning the SLEE to RUNNING.

Startup options
The start-rhino.sh script supports the following arguments:

Argument     Description
-p           create the primary component
-q           start as a quorum node
-k           set the node to automatically restart
-d           delete per-node activation state from the starting node. Any installed services and resource adaptor entities will revert to the INACTIVE state on this node. The SLEE will also revert to the STOPPED state, unless the -s option is also specified.
-c <nodeid>  copy per-node activation state from the given node to the starting node. The starting node will assume the same activation state for installed services, resource adaptor entities, and the SLEE, as the given node.
-s           transition the SLEE to the RUNNING state on the node after bootup is complete.
-x           force the SLEE to remain in the STOPPED state on the node after bootup is complete. This can be useful if the node was previously in the RUNNING state but administrative tasks need to be performed on the node before event processing functions are restarted.

The -s, -x, -d, and -c options cannot be used in conjunction with the -q option. The -s and -x options are also mutually exclusive and cannot be used together. The -d and -c options must be used together if the starting node already has per-node activation state and you want that state replaced with the state from another node.
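The exclusion rules above can be sketched as a small validation function. This is a hypothetical helper for illustration only, not part of the Rhino scripts:

```shell
#!/bin/sh
# Sketch of the start-rhino.sh option rules described above:
#   -s, -x, -d and -c cannot be combined with -q
#   -s and -x are mutually exclusive
validate_options() {
    q=0; s=0; x=0; d=0; c=0
    for opt in "$@"; do
        case "$opt" in
            -q) q=1 ;;
            -s) s=1 ;;
            -x) x=1 ;;
            -d) d=1 ;;
            -c) c=1 ;;
        esac
    done
    if [ "$q" = 1 ] && [ $((s + x + d + c)) -gt 0 ]; then
        echo "invalid: -s/-x/-d/-c cannot be used with -q"; return 1
    fi
    if [ "$s" = 1 ] && [ "$x" = 1 ]; then
        echo "invalid: -s and -x are mutually exclusive"; return 1
    fi
    echo "ok"
}

validate_options -p -s    # ok
validate_options -q -s    # invalid: -s/-x/-d/-c cannot be used with -q
validate_options -s -x    # invalid: -s and -x are mutually exclusive
```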

Primary component
The primary component is the set of nodes which know the authoritative state of the cluster. A node will not accept management commands or perform work until it is in the primary component, and a node which is no longer in the primary component will shut itself down.

At least one node in the cluster must be told to create the primary component, typically only once, the first time the cluster is started. The primary component is created when a node is started with the -p option. When a node is restarted, it remembers whether it was part of the primary component, without the need to specify the -p option, by looking at configuration written to the work directory. If the primary component configuration already exists in the work directory, the node will refuse to start if the -p option is specified.

The following command starts a node and creates the primary component. The SLEE on the node will transition into the state it was previously in, or the STOPPED state if there is no existing persistent state for the node.
$ cd node-101
$ ./start-rhino.sh -p

Quorum node
Quorum nodes are lightweight nodes that do not perform any event processing, nor do they participate in management-level operations. They are intended to be used strictly for determining which parts of the cluster remain in the primary component, in the event of node failures. To run a node as a quorum node, specify the -q option with the start-rhino.sh shell script, as follows:
$ cd node-101
$ ./start-rhino.sh -q

Auto-restart
To set a node to automatically restart in the event of failure (such as a JVM crash), use the -k option with start-rhino.sh. This option works by checking for a $RHINO_HOME/work/halt_file file after the node exits. Rhino writes the halt file if the node:

fails to start (because it has been incorrectly configured)
is manually shut down (using the relevant management commands)
is killed (using the stop-rhino.sh script).

If Rhino does not find the halt file, start-rhino.sh assumes that the node exited unexpectedly and restarts it after 30 seconds. If the node was originally started with the -p, -s or -x options, Rhino restarts it without any of these options, to avoid changing the cluster state. For more information on Rhino startup options, see the Rhino Administration and Deployment Guide.
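A minimal sketch of this restart logic, assuming the halt file lives at work/halt_file as described above. run_with_restart and RESTART_DELAY are illustrative names, not the actual start-rhino.sh internals:

```shell
#!/bin/sh
# Sketch: run a command, and restart it unless a halt file indicates
# that the exit was deliberate (clean shutdown, failed start, or kill).
RESTART_DELAY=30

run_with_restart() {
    while true; do
        "$@"                          # run the node process (here: any command)
        if [ -f work/halt_file ]; then
            echo "halt file found: clean shutdown, not restarting"
            break
        fi
        echo "unexpected exit: restarting in $RESTART_DELAY seconds"
        sleep "$RESTART_DELAY"
    done
}
```

For example, `run_with_restart sh -c 'touch work/halt_file'` exits immediately because the command leaves a halt file behind.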

Starting the SLEE


You can start and stop SLEE event-routing functions on each individual cluster node.


To transition the SLEE on a node to the RUNNING state, either:

Use the -s option when starting the node with the start-rhino.sh command. For example:
$ cd $RHINO_HOME/node-101
$ ./start-rhino.sh -s

Or invoke the start operation after the node has booted, once connected through the web console or command console (see the Rhino Administration and Deployment Guide). For example, to start the SLEE on all nodes currently in the primary component:
$ cd $RHINO_HOME
$ ./client/bin/rhino-console start

To start only selected nodes:


$ cd $RHINO_HOME
$ ./client/bin/rhino-console start -nodes 101,102

Typical startup sequence


To start a cluster for the first time and create the primary component, the system administrator typically starts the first node with the -p option and all nodes with the -s option, as follows. On the first machine:
$ cd node-101
$ ./start-rhino.sh -p -s

On the second machine:


$ cd node-102
$ ./start-rhino.sh -s

On the last machine:


$ cd node-103
$ ./start-rhino.sh -s
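Assuming one node directory per machine and remote shell access, the sequence above can be expressed as a dry-run script that prints the commands to run on each machine. The host names are hypothetical; adapt to your environment:

```shell
#!/bin/sh
# Dry-run sketch of the typical startup sequence: the first node creates
# the primary component (-p), and every node starts its SLEE (-s).
FIRST_NODE=101
OTHER_NODES="102 103"

print_startup_commands() {
    echo "ssh host-$FIRST_NODE \"cd node-$FIRST_NODE && ./start-rhino.sh -p -s\""
    for n in $OTHER_NODES; do
        echo "ssh host-$n \"cd node-$n && ./start-rhino.sh -s\""
    done
}

print_startup_commands
```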


3.2 Stop Rhino

Stop nodes or stop the cluster


The following tabs summarise the steps for stopping Rhino.

Stop a node
You can stop a node using the $RHINO_HOME/node-NNN/stop-rhino.sh shell script. This script has the following options:
$ cd node-101
$ ./stop-rhino.sh --help
Usage: stop-rhino.sh (--cluster|--node|--kill) [node-id]
Terminates either a node or the entire Rhino cluster.
Options:
  --cluster          - Performs a cluster wide shutdown.
  --node <node-id>   - Cleanly removes the node with the given node ID from the cluster.
  --kill             - Terminates this node's JVM.

For example:
$ cd node-101
$ ./stop-rhino.sh --node 101
Shutting down node 101.
Shutdown complete.

This terminates the node process, while leaving the remainder of the cluster running.

Stop the cluster


Use the following command to stop and shut down the cluster. The Rhino SLEE will transition to the STOPPED state on each node, and then every node will terminate.
$ cd node-101
$ ./stop-rhino.sh --cluster
Shutting down cluster.
Stopping SLEE on node(s) 101,102,103.
Waiting for SLEE to enter STOPPED state on node(s) 101,102,103.
Shutting down SLEE.
Shutdown complete.


Appendixes
This document includes the following appendixes:

A. Configuring the Installation
B. Installed Files
C. Runtime Files
D. Uninstalling


A. Configuring the Installation

Configure your Rhino installation


For existing node directories, modify additional variables and restart

If you have already created a node directory (using create-node.sh), editing the configuration files in etc/defaults/config alone is not enough. When you create a node directory, the system copies files from etc/defaults/config to node-NNN/config. If the environment changes, you should always modify $RHINO_HOME/etc/defaults/config/config_variables; and if node-NNN directories already exist, apply the same changes to the node-NNN/config/config_variables file (for all NNN).

Note also that a Rhino node only reads its configuration files when it starts, so if you change the configuration, the node must be restarted for the changes to take effect.

Follow the instructions below to configure: default variables, ports, usernames and passwords, the web console, and the watchdog.
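Applying the same change to the defaults and to every existing node directory can be sketched as follows. update_config_variables is a hypothetical helper, and the JAVA_HOME example is an assumption; adapt the key and value to your change, and run it from $RHINO_HOME:

```shell
#!/bin/sh
# Sketch: update one NAME=value setting in etc/defaults/config/config_variables
# and in every node-NNN/config/config_variables, as described above.
update_config_variables() {
    key=$1
    value=$2
    for f in etc/defaults/config/config_variables node-*/config/config_variables; do
        [ -f "$f" ] || continue
        # rewrite the matching line, leaving all other settings untouched
        sed "s|^$key=.*|$key=$value|" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        echo "updated $f"
    done
}
```

For example, `update_config_variables JAVA_HOME /opt/jdk1.5.0` updates the setting everywhere; each affected node must still be restarted.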

Default configuration variables


After installation, you can modify the default configuration variables if needed, for each node, by editing node-NNN/config/config_variables. This file includes the following entries:

Entry                          Description
RHINO_HOME                     Absolute path to your installation.
JAVA_HOME                      Absolute path to the Sun JDK.
JVM_ARCH                       Whether to use the default 32-bit JVM (JVM_ARCH=32), or the 64-bit JVM on platforms with 64-bit CPUs (JVM_ARCH=64).
MANAGEMENT_DATABASE_NAME       Name of the PostgreSQL database where the SLEE stores its state.
MANAGEMENT_DATABASE_HOST       TCP/IP host where the database resides.
MANAGEMENT_DATABASE_PORT       TCP/IP port that the database listens on.
MANAGEMENT_DATABASE_USER       Username used to connect to the database.
MANAGEMENT_DATABASE_PASSWORD   Password used to connect to the database, in plaintext.
HEAP_SIZE                      Maximum heap size that the JVM may occupy in the local computer's memory.
LOCAL_IPS                      List of IP addresses (delimited by white space) that refer to the local host. IPv6 addresses are expressed in square brackets.
SAVANNA_CLUSTER_ID             Integer identifying the cluster; it must be the same for every node in this cluster. Several clusters sharing the same multicast address ranges can co-exist on the same physical network, provided they have unique cluster IDs.
SAVANNA_MCAST_START            Start of the address range that this cluster uses to communicate with other cluster nodes. Every node in this cluster must have the same settings for SAVANNA_MCAST_START and SAVANNA_MCAST_END.
SAVANNA_MCAST_END              End of the address range that this cluster uses to communicate with other cluster nodes.
NODE_ID                        Unique integer identifier, in the range 0 to 255, that refers to this node. Each node in a cluster must have a unique node ID.

Typically, these values should not need to be changed unless the environment changes, for example:

If a new JVM is installed, JAVA_HOME will need to be updated to reflect that change.
If the IP addresses of the local host change, or if a node is moved to a new machine, LOCAL_IPS must be updated.
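For illustration, a config_variables fragment using the entries above might look like the following. Every value shown here is an example only, not a recommended or default setting:

```shell
# Illustrative config_variables fragment; all values are examples.
RHINO_HOME=/home/rhino/rhino
JAVA_HOME=/usr/java/jdk1.5.0
JVM_ARCH=32
MANAGEMENT_DATABASE_NAME=rhino
MANAGEMENT_DATABASE_HOST=localhost
MANAGEMENT_DATABASE_PORT=5432
MANAGEMENT_DATABASE_USER=rhino
LOCAL_IPS="192.168.0.10"
SAVANNA_CLUSTER_ID=100
NODE_ID=101
```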

Configure ports
The ports chosen at installation time can be changed later by editing the file $RHINO_HOME/etc/defaults/config/config_variables. See the Default configuration variables tab.

Configure usernames and passwords


The default usernames and passwords for remote JMX access can be changed by editing the file $RHINO_HOME/etc/defaults/config/rhino.passwd. For example,
@RHINO_USERNAME@:@RHINO_PASSWORD@:admin

# the web console users for the web-console realm
#rhino:rhino:admin,view,invoke
#invoke:invoke:invoke,view
#view:view:view

# the jmx delegate for the jmxr-adaptor realm
jmx-remote-username:jmx-remote-password:jmx-remote

For more on usernames and passwords, see the Rhino Administration and Deployment Guide.
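Each rhino.passwd line follows the username:password:role[,role...] format shown above. Splitting such a line can be sketched as follows; parse_passwd_line is a hypothetical helper for illustration only:

```shell
#!/bin/sh
# Sketch: split a rhino.passwd entry into its three colon-separated fields.
parse_passwd_line() {
    line=$1
    user=${line%%:*}          # text before the first colon
    rest=${line#*:}           # text after the first colon
    pass=${rest%%:*}          # second field
    roles=${rest#*:}          # comma-separated role list
    echo "user=$user roles=$roles"
}

parse_passwd_line "jmx-remote-username:jmx-remote-password:jmx-remote"
# user=jmx-remote-username roles=jmx-remote
```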

Configure web console


The Rhino SLEE has two ways of running the web console: embedded and external. The embedded web console is enabled by default, to allow simpler administration of the Rhino SLEE. In a CPU-sensitive environment such as a production cluster, it is recommended that the embedded web console be disabled and an external web console be run on another host.

To stop the embedded web console, edit the file $RHINO_HOME/etc/defaults/config/permachine-mlet.conf and set enabled="false". For example:
<mlet enabled="false">
  <classpath>
    <jar-url>@FILE_URL@@RHINO_BASE@/client/lib/web-console-jmx.jar</jar-url>
    <jar-url>@FILE_URL@@RHINO_BASE@/client/lib/web-console.war</jar-url>
    <jar-url>@FILE_URL@@RHINO_BASE@/client/lib/javax.servlet.jar</jar-url>
    ...
  </classpath>
  ...
  <class>com.opencloud.slee.mlet.web.WebConsole</class>
  ...
</mlet>

To start up an external web console on another host, execute $RHINO_HOME/client/bin/web-console start on that remote host. A web browser can then be directed to https://remotehost:8443.


Configure watchdog
The watchdog thread is a lightweight thread which monitors the Rhino SLEE for undesirable behaviour. Currently, the only user-configurable settings for the watchdog thread relate to its behaviour when dealing with stuck worker threads. A stuck worker thread is a thread which has taken more than a reasonable period of time to execute the service logic associated with an event. The cause may be faulty service logic, or service logic which blocks while waiting on an external resource (such as a database).

The period (in milliseconds) after which a worker thread is presumed to be stuck can be configured by editing the RHINO_WATCHDOG_STUCK_INTERVAL variable in $RHINO_HOME/etc/defaults/config/config_variables, for example:
RHINO_WATCHDOG_STUCK_INTERVAL=45000

If too many worker threads become stuck, there can be a performance impact on the Rhino SLEE, and in extreme cases all future event processing can be prevented entirely. The watchdog thread can be configured to terminate a node when a certain percentage of its worker threads have become stuck, by modifying the following variable in $RHINO_HOME/etc/defaults/config/config_variables:
RHINO_WATCHDOG_THREADS_THRESHOLD=50

The value of RHINO_WATCHDOG_THREADS_THRESHOLD is the percentage of worker threads which must remain alive (unstuck) for the node to keep running; if the percentage of alive worker threads drops below this value, the node terminates itself. If RHINO_WATCHDOG_THREADS_THRESHOLD is set to 100, the node will terminate itself if any of its worker threads become stuck. If it is set to 0, the node will never terminate itself due to stuck worker threads. This provides a mechanism for cluster nodes which have stuck worker threads to free up those threads by terminating the JVM and restarting (assuming the nodes have been configured to restart automatically). By default, the watchdog thread will kill a node in which less than half (50%) of the worker threads are still alive. See also the Default configuration variables tab.
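The threshold semantics can be sketched as follows. should_terminate is a hypothetical helper illustrating the decision, not actual watchdog code:

```shell
#!/bin/sh
# Sketch: a node self-terminates when the percentage of alive (unstuck)
# worker threads drops below the threshold; a threshold of 0 disables
# self-termination entirely.
should_terminate() {
    alive_percent=$1
    threshold=$2
    if [ "$threshold" -gt 0 ] && [ "$alive_percent" -lt "$threshold" ]; then
        echo "terminate"
    else
        echo "keep running"
    fi
}

should_terminate 40 50    # terminate (less than half the threads alive)
should_terminate 60 50    # keep running
should_terminate 10 0     # keep running (threshold 0 disables self-termination)
```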


B. Installed Files

Rhino files and directories


A typical Rhino installation includes the following files.

File or directory - Description
./client - Directory containing remote Rhino management clients.
./client/bin - Directory containing all remote management client scripts.
./client/bin/ant - Script for starting the bundled version of Ant.
./client/bin/cascade-uninstall - Script for undeploying a component and all components that depend on that component.
./client/bin/generate-client-configuration - Script for generating a web console configuration for standalone deployment.
./client/bin/rhino-console - Script for starting the command-line client.
./client/bin/rhino-export - Script for exporting Rhino configuration to disk.
./client/bin/rhino-import - Script for importing a previous Rhino configuration export.
./client/bin/rhino-passwd - Script for generating a password hash for rhino.passwd.
./client/bin/rhino-snapshot - Script for quickly generating a snapshot of deployed profiles.
./client/bin/rhino-stats - Script for starting the Rhino statistics and monitoring client.
./client/bin/snapshot-decode - Script for converting a profile snapshot into a .csv file.
./client/bin/snapshot-to-export - Script for converting a profile snapshot into a Rhino configuration export.
./client/bin/web-console - Script for starting the web console as a standalone management client.
./client/etc - Directory containing configuration for remote management clients.
./client/etc/client.policy - Security policy for Rhino management clients.
./client/etc/client.properties - Configuration settings common to all Rhino management clients.
./client/etc/common.xml - Ant task definitions used for remote deployments using Ant.
./client/etc/dtd/* - Client-related DTDs.
./client/etc/jdk.logging.properties - log4j logging configuration used by the JMX Remote implementation.
./client/etc/jetty-file-auth.xml - Jetty configuration for the standalone Rhino web console, using file-based authentication.
./client/etc/jetty-jmx-auth.xml - Jetty configuration for the standalone Rhino web console, using jmx-remote authentication.


./client/etc/jetty.policy - Security policy for the standalone Rhino web console.
./client/etc/rhino-client-common - Contains script functions common to multiple scripts.
./client/etc/rhino-common - Contains script functions common to multiple scripts.
./client/etc/rhino-console-log4j.properties - Log4j configuration for the command-line management client.
./client/etc/templates/* - Templates used by generate-client-configuration to populate the client/etc/ directory.
./client/etc/web-console-log4j.properties - Log4j properties for the web console.
./client/etc/web-console.jaas - Configuration for web console login contexts.
./client/etc/web-console.passwd - Usernames, passwords and roles for the file-based login context.
./client/etc/web-console.properties - Configuration settings specific to the web console.
./client/etc/webapps - Web console configuration files.
./client/etc/webapps/web-console.xml - Web console configuration files.
./client/etc/webdefault.xml - Web console configuration files.
./client/lib/* - Java libraries used by the remote management clients.
./client/log - Directory used for log4j output from the remote management clients.
./client/rhino-public.keystore - Keystore used to secure connections.
./client/web-console.keystore - Keystore used to secure connections.
./client/work - Temporary working directory.
./create-node.sh - Script for generating new Rhino node directories from the templates stored in etc/defaults/.
./doc - Rhino documentation.
./doc/CHANGELOG - Release notes.
./doc/dtd/* - Rhino and SLEE related DTDs.
./doc/README - Documentation README.
./etc - Directory containing Rhino configuration.
./etc/defaults - Directory containing configuration defaults used by create-node.sh.
./etc/defaults/config - Directory containing default node configuration files.


./etc/defaults/config/config_variables - Contains configuration of various Rhino settings.
./etc/defaults/config/defaults.xml - Default Rhino configuration used when starting Rhino for the first time.
./etc/defaults/config/jetty.xml - Jetty configuration for the embedded web console.
./etc/defaults/config/permachine-mlet-jmx1.conf - Mlet configuration.
./etc/defaults/config/permachine-mlet.conf - Mlet configuration.
./etc/defaults/config/pernode-mlet.conf - Mlet configuration.
./etc/defaults/config/rhino-config.xml - Configuration file for settings not covered elsewhere.
./etc/defaults/config/rhino.jaas - Configuration for web console login contexts.
./etc/defaults/config/rhino.passwd - Usernames, passwords, and roles for the file-based login context.
./etc/defaults/config/rhino.policy - Rhino security policy.
./etc/defaults/config/rmissl.jmxr-adaptor.properties - Secure RMI configuration.
./etc/defaults/config/savanna/* - Internal clustering configuration.
./etc/defaults/config/webapps - Internal configuration used by the web console.
./etc/defaults/config/webapps/diameter-docs.xml - Internal configuration used by the web console.
./etc/defaults/config/webapps/sip-docs.xml - Internal configuration used by the web console.
./etc/defaults/config/webapps/web-console.xml - Internal configuration used by the web console.
./etc/defaults/dumpthreads.sh - Script for sending a SIGQUIT to Rhino to cause a thread dump.
./etc/defaults/generate-configuration - Script used internally to populate a node's working directory with templated configuration files.
./etc/defaults/generate-system-report.sh - Script used to produce an archive containing useful debugging information.
./etc/defaults/init-management-db.sh - Script for reinitializing the Rhino postgres database.
./etc/defaults/read-config-variables - Script used internally for performing templating operations.


./etc/defaults/README.postgres - Postgres database setup information.
./etc/defaults/rhino-common - Contains script functions common to multiple scripts.
./etc/defaults/run-compiler.sh - Script used by Rhino to compile dynamically generated code.
./etc/defaults/run-jar.sh - Script used by Rhino to run the external 'jar' application.
./etc/defaults/start-rhino.sh - Script used to start Rhino.
./etc/defaults/stop-rhino.sh - Script used to stop Rhino.
./examples/* - Example services.
./lib/* - Libraries used by Rhino.
./licenses/* - Third-party software licenses.
./README - Rhino README.
./rhino-common - Contains script functions common to multiple scripts.
./rhino-private.keystore - JKS keystore used for secure connections from management clients.
./rhino-public.keystore - Keystore used to secure connections.
./web-console.keystore - JKS keystore used to sign the web console application.


C. Runtime Files

Node and log files


A Rhino installation includes the following runtime files.

Node directory
Creating a new Rhino node (by running the create-node.sh script) involves making a directory for that node. This directory contains the following files, which that node uses to store state, including configuration, logs and temporary files. The following table summarises the files for a node with id 101.

File or directory - Description
node-101 - Instantiated Rhino node.
node-101/config/* - Directory containing a set of configuration files, which Rhino uses when a node starts (or restarts). Once the node joins the cluster, it stores and retrieves settings from the in-memory database ("MemDB"). The Rhino SLEE can overwrite files in the config/ directory; for example, if the administrator changes the SLEE's logging configuration (using management tools), the SLEE updates each node's logging.xml file at runtime. Before a node can join the cluster, Rhino needs to load the logging configuration from logging.xml and then load the rest of the cluster's configuration from the database.
node-101/dumpthreads.sh - Script for sending a SIGQUIT to Rhino to cause a thread dump.
node-101/generate-configuration - Script used internally to populate a node's working directory with templated configuration files.
node-101/generate-system-report.sh - Script used to produce an archive containing useful debugging information.
node-101/init-management-db.sh - Script for reinitializing the Rhino postgres database.
node-101/read-config-variables - Script used internally for performing templating operations.
node-101/README.postgres - Postgres database setup information.
node-101/rhino-common - Contains script functions common to multiple scripts.
node-101/run-compiler.sh - Script used by Rhino to compile dynamically generated code.
node-101/run-jar.sh - Script used by Rhino to run the external 'jar' application.
node-101/start-rhino.sh - Script used to start Rhino.
node-101/stop-rhino.sh - Script used to stop Rhino.
node-101/work - Rhino working directory.
node-101/work/deployments - Directory that stores deployable units, component jars, code generated as a result of deployment actions, and any other deployment-related information Rhino requires.


node-101/work/log - Directory containing a set of log files. These constantly change and rotate, as the Rhino SLEE continually outputs logging information. Rhino automatically manages the total size of this directory (to keep it from getting too big).
node-101/work/log/audit.log - Log containing licensing auditing.
node-101/work/log/config.log - Log containing useful configuration information, written on startup.
node-101/work/log/encrypted.audit.log - Encrypted audit log.
node-101/work/log/rhino.log - Log combining all Rhino logs.
node-101/work/start-rhino.sh - Temporary directory. Used when starting the Rhino SLEE: the system copies files in the config directory here, and then makes all variable substitutions (replacing all @variables@ in the configuration files with their values from the config_variables file).
node-101/work/start-rhino.sh/config/* - Working set of configuration files in use by an active Rhino node.
node-101/work/state - Savanna primary component runtime state.
node-101/work/tmp/* - Temporary directory.

The tmp/, deployments/ and start-rhino.sh/ directories are temporary directories. However, nothing in the work directory should be deleted while the node is running (except for tmp/, as long as no deployment action is in progress, and any old logs in log/).
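The @variable@ substitution performed at startup can be sketched as follows. substitute_vars is a hypothetical helper for illustration, not the actual Rhino templating script, and it assumes values contain no '|' characters:

```shell
#!/bin/sh
# Sketch: replace @NAME@ tokens in a template file with values taken
# from a NAME=value file (the role config_variables plays at startup).
substitute_vars() {
    template=$1        # file containing @NAME@ tokens
    vars=$2            # file of NAME=value lines
    script=""
    while IFS='=' read -r name value; do
        script="$script s|@$name@|$value|g;"
    done < "$vars"
    sed "$script" "$template"
}
```

For example, with `RHINO_HOME=/opt/rhino` in the variables file, the token `@RHINO_HOME@` in a template becomes `/opt/rhino` in the output.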

Logging output
The Rhino SLEE uses the Apache log4j libraries for logging. In the default configuration, it sends logging output both to the standard error stream (the user's console) and to the following log files in the work/log directory:

rhino.log - all logs Rhino has output
config.log - just changes to Rhino's configuration
audit.log - auditing information (there is also an encrypted version of this file, for use by OpenCloud support staff).

For more on the Rhino SLEE's logging system and how to configure it, see the Rhino Administration and Deployment Guide.

Log File Format

Each statement in the log file has a particular structure. Here is an example:
2005-12-13 17:02:33.019 INFO [rhino.alarm.manager] <Thread-4> Alarm 56875825090732034 (Node 101, 13-Dec-05 13:31:54.373): Major [rhino.license] License with serial '107baa31c0e' has expired.

This includes:

Current date (2005-12-13 17:02:33.019): The 13th of December, 2005 at 5:02pm, 33 seconds and 19 milliseconds. The milliseconds value is often useful for determining if log messages are related; if they occur within a few milliseconds of each other, then they probably have a causal relationship. Also, if there is a timeout in the software somewhere, that timeout may often be found by looking at this timestamp.

Log level (INFO): INFO is standard. This can also be WARN for more serious happenings in the SLEE, or DEBUG if debug messages are enabled.

Logger name ([rhino.alarm.manager]): Every log message has a key, and this shows what part of Rhino this log message came from. The verbosity of each logger key can be controlled, as discussed in the Rhino Administration and Deployment Guide.

Thread identifier (<Thread-4>): The name of the thread that output this message.

Actual log message (Alarm 56875825090732034 (Node 101, 13-Dec-05 13:31:54.373): Major [rhino.license] License with serial '107baa31c0e' has expired.): In this case, an alarm message.
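The fields above can be extracted from a log statement with standard tools. A rough sketch, using an abbreviated alarm message:

```shell
#!/bin/sh
# Sketch: split a Rhino log statement into date, time, level, logger,
# thread, and message, based on the whitespace-separated layout above.
line='2005-12-13 17:02:33.019 INFO [rhino.alarm.manager] <Thread-4> License has expired.'

date=$(echo "$line" | awk '{print $1}')
time=$(echo "$line" | awk '{print $2}')
level=$(echo "$line" | awk '{print $3}')
logger=$(echo "$line" | awk '{print $4}')
thread=$(echo "$line" | awk '{print $5}')
message=$(echo "$line" | cut -d' ' -f6-)

echo "level=$level logger=$logger thread=$thread"
# level=INFO logger=[rhino.alarm.manager] thread=<Thread-4>
```

Note that real messages may themselves contain whitespace, so only the first five fields split reliably this way.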


D. Uninstalling

Stop the SLEE, remove the database, delete the directory


To uninstall the Rhino SLEE:

1. Stop the Rhino SLEE.
2. Remove the database that the Rhino SLEE was using (see below).
3. Delete the directory into which the Rhino SLEE was installed.

The Rhino SLEE keeps all of its files in the same directory, and does not store data elsewhere on the system except for the state kept in the PostgreSQL database.

Run psql to remove the database

The psql utility can be used to remove the database that the Rhino SLEE was configured to use. The name of the database is stored in the file node-NNN/config/config_variables as the setting for MANAGEMENT_DATABASE_NAME. To remove the database, do the following (substituting MANAGEMENT_DATABASE_NAME with the value from config_variables):
$ psql -d template1
Welcome to psql 8.0.7, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

template1=# drop database MANAGEMENT_DATABASE_NAME;
DROP DATABASE
template1=#
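The lookup of MANAGEMENT_DATABASE_NAME can be scripted. drop_db_command below is a hypothetical helper that reads the setting from config_variables and prints the corresponding psql command; printing rather than executing keeps it a dry run:

```shell
#!/bin/sh
# Sketch: build the database-removal command from config_variables.
drop_db_command() {
    config=$1
    # extract the value of MANAGEMENT_DATABASE_NAME from the config file
    db=$(sed -n 's/^MANAGEMENT_DATABASE_NAME=//p' "$config")
    echo "psql -d template1 -c \"DROP DATABASE $db;\""
}
```

For example, with MANAGEMENT_DATABASE_NAME=rhino, this prints: psql -d template1 -c "DROP DATABASE rhino;"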
