DRAFT (stable/diablo)
Table of Contents
1. Installing OpenStack Walk-through
       Compute and Image System Requirements
       Compute Network Planning
       Preconfiguring with Managed IT Packages
       Installing Network Time Protocol (NTP)
2. Installing OpenStack Identity Service
       Installing and Configuring the Identity Service
       Defining Roles and Users in the Identity Service (Keystone)
           Define Services and Endpoints
       Verifying the Identity Service Installation
3. Installing OpenStack Compute and Image Service
       Installing and Configuring the Image Service
           Configuring the Image Service
       Defining Compute and Image Service Credentials
       Verifying the Image Service Installation
       Packaged Compute Installation with the Image and Identity Services
       Pre-configuring the network
       Configuring the SQL Database (MySQL) on the Cloud Controller
       Installing the Cloud Controller
       Configuring OpenStack Compute
       Configuring the Database for Compute
       Creating the Network for Compute VMs
       Verifying the Compute Installation
4. Uploading Images
5. Installing the OpenStack Dashboard
       About the Dashboard
       System Requirements for the Dashboard
       Installing the OpenStack Dashboard
       Configuring the Dashboard
       Validating the Dashboard Install
A. Appendix: Configuration File Examples
       glance-registry.conf
       glance-api.conf
       glance-scrubber.conf
       nova.conf
       api-paste.ini
       Credentials (openrc)
       Dashboard configuration
List of Tables
1.1. Hardware Recommendations
Table 1.1. Hardware Recommendations

Server: Cloud Controller node (runs network, volume, API, scheduler, and image services)
    Recommended Hardware: Processor: 64-bit x86; Memory: 12 GB RAM; Disk space: 30 GB (SATA, SAS, or SSD); Volume storage: two disks with 2 TB (SATA) for volumes attached to the compute nodes; Network: one 1 GB Network Interface Card (NIC)
    Notes: Two NICs are recommended but not required. A quad core server with 12 GB RAM would be more than sufficient for a cloud controller node. 32-bit processors will work for the cloud controller node, but the package repositories referred to in this guide do not contain i386 packages.

Server: Compute node (runs virtual instances)
    Recommended Hardware: Processor: 64-bit x86; Memory: 32 GB RAM; Disk space: 30 GB (SATA); Network: two 1 GB NICs
    Notes: Note that you cannot run 64-bit VM instances on a 32-bit compute node. A 64-bit compute node can run either 32-bit or 64-bit VMs, however. With 2 GB RAM you can run one m1.small instance or three m1.tiny instances on a node without memory swapping, so 2 GB RAM would be a minimum for a test-environment compute node. As an example, Rackspace Cloud Builders use 96 GB RAM for compute nodes in OpenStack deployments. Specifically for virtualization on certain hypervisors on the node or nodes running nova-compute, you need an x86 machine with an AMD processor with SVM extensions (also called AMD-V) or an Intel processor with VT (Virtualization Technology) extensions. For Xen-based hypervisors, the Xen wiki contains a list of compatible processors on the HVM Compatible Processors page. For XenServer-compatible Intel processors, refer to the Intel Virtualization Technology List. For LXC, the VT extensions are not required. The packages referred to in this guide do not contain i386 packages.
Operating System: OpenStack currently has packages for the following distributions: Ubuntu, RHEL, SUSE, Debian, and Fedora. These packages are maintained by community members; refer to http://wiki.openstack.org/Packaging for additional links. This guide refers to packages from two community sources: Rackspace Cloud Builder packages for Ubuntu Maverick, Natty, or Oneiric, and Managed IT packages for Oneiric.
Note
While certain parts of OpenStack are known to work on various operating systems, currently the only feature-complete, production-supported host environment is Linux.

Database: For OpenStack Compute, you need access to either a PostgreSQL or MySQL database, or you can install one as part of the OpenStack Compute installation process. For Object Storage, the container and account servers use SQLite, and you can install it as part of the installation process.

Permissions: You can install OpenStack Compute, the Image Service, or Object Storage either as root or as a user with sudo permissions if you configure the sudoers file to enable all the permissions.

Network Time Protocol: You must install a time synchronization program such as NTP. For Compute, time synchronization keeps your cloud controller and compute nodes talking to the same time server to avoid problems scheduling VM launches on compute nodes. For Object Storage, time synchronization ensures that object replication accurately updates objects when needed so that the freshest content is served.
Now ensure that these packages are used instead of those bundled with Ubuntu:
sudo apt-get install -y managedit-openstack-pin
If you don't want to go through the manual install process, refer to https://github.com/managedit/openstack-setup for scripts and template files for the configuration files.
Set up the NTP server on your controller node by modifying the ntp.conf file and restarting the service; the lines added below let the node fall back to its local clock if the upstream time server is unreachable.
sudo sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
sudo service ntp restart
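If you want to preview what that substitution produces before touching the live file, you can run the same sed expression against a scratch copy; the path and one-line file below are purely a demo:

```shell
# Run the guide's ntp.conf edit against a throwaway copy (demo path only).
# GNU sed expands \n in the replacement into real newlines.
printf 'server ntp.ubuntu.com\n' > /tmp/ntp.conf.demo
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /tmp/ntp.conf.demo
cat /tmp/ntp.conf.demo
```

The server 127.127.1.0 and fudge lines point NTP at the local clock at a low-priority stratum, so the controller keeps serving time to compute nodes even when ntp.ubuntu.com cannot be reached.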
After installing, you need to delete the sqlite database it creates, then change the configuration to point to the mysql database. Delete the keystone.db file created in the /var/lib/keystone/ directory.
sudo rm /var/lib/keystone/keystone.db
Configure the production-ready backend data store. For Compute you must use a SQLAlchemy-compatible database, such as MySQL or PostgreSQL. This example shows MySQL. First, install MySQL with:
sudo apt-get install python-mysqldb mysql-server
During the install, you'll be prompted for the mysql root password. Enter a password of your choice and verify it. Edit /etc/mysql/my.cnf to change "bind-address" from localhost (127.0.0.1) to any (0.0.0.0) and restart the mysql service:
sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
sudo service mysql restart
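Note that the substitution is global: every occurrence of 127.0.0.1 in my.cnf is rewritten, not just the bind-address line. A scratch-file sketch of the effect (the demo path and one-line contents are assumptions, not your real my.cnf):

```shell
# Demonstrate the global replace on a one-line stand-in for my.cnf.
printf 'bind-address = 127.0.0.1\n' > /tmp/my.cnf.demo
sed -i 's/127.0.0.1/0.0.0.0/g' /tmp/my.cnf.demo
cat /tmp/my.cnf.demo   # → bind-address = 0.0.0.0
```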
For MySQL, create a MySQL database named "keystone" and a MySQL user named "keystone". Grant the "keystone" user full access to the "keystone" MySQL database. Start the mysql command line client by running:
mysql -u root -p
Enter the mysql root user's password when prompted. To configure the MySQL database, create the keystone database.
mysql> CREATE DATABASE keystone;
Create a MySQL user for the newly-created keystone database that has full control of the database.
mysql> GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'yourpassword';
Edit /etc/keystone/keystone.conf to include the --sql_connection to use the backend data store you just created. Ensure that it is owned by keystone and protect the MySQL password stored in it.
sudo nano /etc/keystone/keystone.conf
sudo chown keystone /etc/keystone/keystone.conf
sudo chmod 0640 /etc/keystone/keystone.conf
Edit /etc/keystone/keystone.conf to use the IP address and ports for your environment. Here is an example keystone.conf. Ensure that the ports for keystone are correct, since the default keystone auth port changed from 5001 to 35357 and the packages install a conf file with 5001 for the auth_port setting.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = False

# Show debugging output in logs (sets DEBUG log level output)
debug = False

# Which backend store should Keystone use by default.
# Default: 'sqlite'
# Available choices are 'sqlite' [future will include LDAP, PAM, etc]
default_store = sqlite

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_dir = /var/log/keystone
log_file = keystone.log

# List of backends to be configured
backends = keystone.backends.sqlalchemy
#For LDAP support, add: ,keystone.backends.ldap

# Dictionary Maps every service to a header. Missing services would get header
# X_(SERVICE_NAME) Key => Service Name, Value => Header Name
service-header-mappings = {
    'nova' : 'X-Server-Management-Url',
    'swift' : 'X-Storage-Url',
    'cdn' : 'X-CDN-Management-Url'}

# Address to bind the API server
# TODO Properties defined within app not available via pipeline.
service_host = 0.0.0.0

# Port the bind the API server to
service_port = 5000
# Address to bind the Admin API server
admin_host = 0.0.0.0

# Port the bind the Admin API server to
admin_port = 35357

#Role that allows to perform admin operations.
keystone-admin-role = Admin

#Role that allows to perform service admin operations.
keystone-service-admin-role = KeystoneServiceAdmin

#Tells whether password user need to be hashed in the backend
hash-password = True

[keystone.backends.sqlalchemy]
# SQLAlchemy connection string for the reference implementation registry
# server. Any valid SQLAlchemy connection string is fine.
# See: http://bit.ly/ideIpI
sql_connection = mysql://keystone:yourpassword@192.168.206.130/keystone
backend_entities = ['UserRoleAssociation', 'Endpoints', 'Role', 'Tenant',
                    'User', 'Credentials', 'EndpointTemplates', 'Token',
                    'Service']

# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
sql_idle_timeout = 30

[pipeline:admin]
pipeline = urlrewritefilter d5_compat admin_api

[pipeline:keystone-legacy-auth]
pipeline = urlrewritefilter legacy_auth d5_compat service_api

[app:service_api]
paste.app_factory = keystone.server:service_app_factory

[app:admin_api]
paste.app_factory = keystone.server:admin_app_factory

[filter:urlrewritefilter]
paste.filter_factory = keystone.middleware.url:filter_factory

[filter:legacy_auth]
paste.filter_factory = keystone.frontends.legacy_token_auth:filter_factory

[filter:d5_compat]
paste.filter_factory = keystone.frontends.d5_compat:filter_factory

[filter:debug]
paste.filter_factory = keystone.common.wsgi:debug_filter_factory
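The sql_connection value shown in keystone.conf follows the SQLAlchemy URL pattern mysql://USER:PASSWORD@HOST/DBNAME. A small sketch of assembling it from its parts, using this guide's example values:

```shell
# Build the connection string from its components (example values).
DB_USER=keystone
DB_PASS=yourpassword
DB_HOST=192.168.206.130
DB_NAME=keystone
echo "sql_connection = mysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"
```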
Next, you configure the Identity Service by defining roles and users.
In return for these commands, you should see a SUCCESS message, such as:
SUCCESS: Tenant openstackDemo created.
Now add an invented token (any combination of numbers and letters will do) to the adminUser user for the adminTenant tenant, and ensure an expiration date is assigned. This one expires in about four years.
sudo keystone-manage token add 11121314151617181920 adminUser adminTenant 2015-02-05T00:00:00
If you see an error like "Creating a token requires a token id, user, tenant, and expiration", it's possible you're missing the expiration date.

Create the Admin role and the Member role.
sudo keystone-manage role add Admin
sudo keystone-manage role add Member
Grant the Admin role to the admin user and then grant the Member role to the demo user.
sudo keystone-manage role grant Admin adminUser
sudo keystone-manage role grant Member demoUser
Grant the Admin role to the adminUser user for the openstackDemo and adminTenant tenant. Grant the Member role to the demoUser for the openstackDemo tenant.
sudo keystone-manage role grant Admin adminUser openstackDemo
sudo keystone-manage role grant Admin adminUser adminTenant
sudo keystone-manage role grant Member demoUser openstackDemo
Now add endpoint templates for each of these now-named services; an endpoint template puts together the IP address, port, and API version number that make up an entire endpoint.
sudo keystone-manage endpointTemplates add RegionOne nova \
    http://192.168.206.130:8774/v1.1/%tenant_id% \
    http://192.168.206.130:8774/v1.1/%tenant_id% \
    http://192.168.206.130:8774/v1.1/%tenant_id% 1 1
sudo keystone-manage endpointTemplates add RegionOne glance \
    http://192.168.206.130:9292/v1 \
    http://192.168.206.130:9292/v1 \
    http://192.168.206.130:9292/v1 1 1
sudo keystone-manage endpointTemplates add RegionOne keystone \
    http://192.168.206.130:5000/v2.0 \
    http://192.168.206.130:35357/v2.0 \
    http://192.168.206.130:5000/v2.0 1 1
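The three URLs in each command are the public, admin, and internal endpoints, and Keystone substitutes each tenant's id for the %tenant_id% placeholder when it builds the service catalog. A sketch of that substitution (tenant id "1" here is just an example):

```shell
# How a catalog URL is produced from an endpoint template.
TEMPLATE='http://192.168.206.130:8774/v1.1/%tenant_id%'
TENANT_ID=1
echo "$TEMPLATE" | sed "s/%tenant_id%/$TENANT_ID/"   # → http://192.168.206.130:8774/v1.1/1
```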
Optionally, you can add EC2 credentials to the Identity Service for any commands you want to run with euca2ools.
sudo keystone-manage credentials add adminUser:adminTenant EC2 'secretword' adminTenant
sudo keystone-manage credentials add demoUser:openstackDemo EC2 'secretword' openstackDemo
The Identity Service, Keystone, is now configured and ready to accept requests.
In return, you should receive the token you created for the adminUser user.
...
"token": {
    "expires": "2015-02-05T00:00:00",
    "id": "11121314151617181920",
    "tenant": {
        "id": "2",
        "name": "adminTenant"
    }
}
You can also get a token that expires in 24 hours using the adminUser account:
curl -d '{"auth": {"tenantName": "openstackDemo", "passwordCredentials": {"username": "adminUser", "password": "secretword"}}}' -H "Content-type: application/json" http://192.168.206.130:35357/v2.0/tokens | python -mjson.tool
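If you want to script against the response rather than read it, the token id can be pulled out with a small sed filter; the saved sample below (a demo file shaped like the token output shown earlier) stands in for a live Keystone response:

```shell
# Extract the token id from a saved sample response (demo file path).
cat > /tmp/token.json <<'EOF'
{"auth": {"token": {"expires": "2015-02-05T00:00:00", "id": "11121314151617181920"}}}
EOF
sed -n 's/.*"id": "\([^"]*\)".*/\1/p' /tmp/token.json   # → 11121314151617181920
```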
        "id": "2",
        "name": "demoUser",
        "roles": [
            {
                "id": "2",
                "name": "Member",
                "tenantId": "1"
            },
            {
                "id": "2",
                "name": "Member"
            }
        ]
    }
}
}
After installing, you need to delete the sqlite database it creates, then change the configuration to point to the mysql database. Delete the glance.sqlite file created in the /var/lib/glance/ directory.
sudo rm /var/lib/glance/glance.sqlite
Enter the mysql root user's password when prompted. To configure the MySQL database, create the glance database.
mysql> CREATE DATABASE glance;
Create a MySQL user for the newly-created glance database that has full control of the database.
mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'yourpassword';
Edit /etc/glance/glance-registry.conf:
sudo nano /etc/glance/glance-registry.conf
An example glance-registry.conf file is included in the Appendix. At the bottom of the glance-registry.conf file, change the admin_token to the one you created with Keystone previously.
Ensure that the auth sections match the ones in the included glance-api.conf example file found in the Appendix. Restart the glance services after changing the settings in glance-registry.conf and glance-api.conf. If you change the database, also update glance-scrubber.conf to point to the MySQL database.
sudo nano /etc/glance/glance-scrubber.conf
Any time you change the .conf files, restart the corresponding service:
sudo restart glance-registry sudo restart glance-api
The database tables are created when you restart glance-registry, so there is no need to run a database sync command.
Next, source your credentials file so these settings are used in your environment. If you see 401 Not Authorized errors on commands using tokens, ensure that you have properly sourced your credentials and that all the pipelines are accurate in the configuration files.
source ~/creds/openrc
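The exact contents of openrc are not shown at this point in the guide; here is a minimal sketch, assuming the example user, tenant, and endpoint used throughout, with variable names conventional for this release's nova and glance clients (confirm them against your client's documentation):

```shell
# Hypothetical ~/creds/openrc contents: exported client credentials.
export NOVA_USERNAME=adminUser
export NOVA_API_KEY=secretword
export NOVA_PROJECT_ID=openstackDemo
export NOVA_URL=http://192.168.206.130:5000/v2.0/
export NOVA_VERSION=1.1
```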
Another way to verify that the Glance installation works with Keystone is to issue a glance details command with a Keystone token assigned to the adminUser. The token below is just an example; you use a curl command to get a token for the adminUser. If you get no errors (and no response), the token is accepted by Keystone and no images are yet loaded into Glance. If you get a "NotAuthorized" error, check your configuration files, check your environment variables, and ensure the Keystone endpoints are accurate with "sudo keystone-manage endpointTemplates list".
glance details -A d1819479-be8b-451d-8682-82c654502ddb
Ensure that you set up the bridge, although if you use --flat_network_bridge=br100 in your nova.conf file, nova sets up the bridge for you when you run the nova-manage network command.
sudo brctl addbr br100
Lastly, restart networking to have these changes take effect. (This method is deprecated but "restart networking" doesn't always work.)
sudo /etc/init.d/networking restart
Enter the mysql root user's password when prompted. To configure the MySQL database, create the nova database.
Create a MySQL user for the newly-created nova database that has full control of the database.
mysql> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'yourpassword';
The database will be populated after running the nova-manage db sync command.
Install the required nova packages; their dependencies are installed automatically.

sudo apt-get install nova-compute nova-volume nova-vncproxy nova-api nova-ajax-console-proxy nova-doc nova-scheduler nova-network
The hypervisor is set either by editing /etc/nova/nova.conf or by referring to nova-compute.conf in the nova.conf file. The hypervisor defaults to "kvm", but if you are working within a VM already, switch to "qemu" on the --libvirt_type= line.

Ensure the database connection defines your backend data store by adding a --sql_connection line to nova.conf: "--sql_connection=mysql://[user]:[pass]@[primary IP]/[db name]", such as --sql_connection=mysql://nova:yourpassword@192.168.206.130/nova.

Add these settings to /etc/nova/nova.conf for the network configuration assumptions made for this installation scenario. You can place comments in the nova.conf file by entering a new line with a # sign at the beginning of the line. To see a listing of all possible flag settings, see the output of running /bin/nova-api --help.
--network_manager=nova.network.manager.FlatDHCPManager
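A FlatDHCP setup typically also names the bridge and the instance address range. The flags below are a sketch with placeholder values; the interface name and ranges depend on your network and are not prescribed by this guide (only --flat_network_bridge=br100 is mentioned elsewhere in this walkthrough):

```
--flat_network_bridge=br100
--flat_interface=eth0
--fixed_range=192.168.100.0/24
--network_size=256
```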
You also need to configure the api-paste.ini file to enable Keystone as the Identity service. An example api-paste.ini file is included in the Appendix. Update the /etc/nova/api-paste.ini file according to the sample file. Restart the nova- services prior to running db sync.
If you see any response, you can look in /var/log/nova/nova-manage.log to see the problem. No response means the command completed correctly and your nova database is now populated. On both nodes, restart all services in total, just to cover the entire spectrum:
sudo restart nova-api
sudo restart nova-compute
sudo restart nova-network
sudo restart nova-scheduler
sudo restart nova-vncproxy
sudo restart libvirt-bin
sudo /etc/init.d/rabbitmq-server restart
All nova services are now installed and started. If the "restart" command doesn't work, your services may not be running correctly (or not at all). Review the logs in /var/log/nova.
In return you should see "smiley faces" rather than three X symbols. Here's an example.
Binary           Host    Zone  Status   State  Updated_At
nova-scheduler   ubuntu  nova  enabled  :-)    2011-12-23 17:43:56.492606
nova-vncproxy    ubuntu  nova  enabled  :-)    2011-12-23 17:43:55.559948
nova-network     ubuntu  nova  enabled  :-)    2011-12-23 17:43:58.159612
nova-compute     ubuntu  nova  enabled  :-)    2011-12-23 17:43:58.313404
You can find the version of the installation by using the nova-manage command:
sudo nova-manage version list
The version number 2011.3 corresponds with the Diablo release of Compute.
2011.3 (2011.3-LOCALBRANCH:LOCALREVISION)
4. Uploading Images
First, download some images that are known to work with OpenStack. The smallest test image is a tty testing image with a username/password of root/password. You can get it from http://images.ansolabs.com/tty.tgz. This is the image we'll use for this walkthrough. Create a directory to house your image files.
sudo mkdir stackimages
You can get a token in order to upload images using this curl command.
curl -d '{"auth":{"passwordCredentials":{"username": "adminUser", "password": "secretword"}}}' -H "Content-type: application/json" http://192.168.206.130:35357/v2.0/tokens | python -mjson.tool
Now add the kernel image to the Image Service with glance add commands. In this case we're hard-coding the kernel_id and ramdisk_id values in the third command, but for additional images, you need to get the return values for the first two commands to enter into the third command.
glance add -A d1819479-be8b-451d-8682-82c654502ddb name="tty-kernel" is_public=true container_format=aki disk_format=aki < stackimages/aki-tty/image
glance add -A d1819479-be8b-451d-8682-82c654502ddb name="tty-ramdisk" is_public=true container_format=ari disk_format=ari < stackimages/ari-tty/image
glance add -A d1819479-be8b-451d-8682-82c654502ddb name="tty" is_public=true container_format=ami disk_format=ami kernel_id=1 ramdisk_id=2 < stackimages/ami-tty/image
For other images, you can get Ubuntu Oneiric (11.10) at http://cloud-images.ubuntu.com/oneiric/current/oneiric-server-cloudimg-amd64-disk1.img. You can ensure that your Image service has indexed these images by using the glance CLI with an authorization token:
glance -A c9640d83-ef4f-4718-a0c2-32437a931196 index
[Truncated glance index output: the listing shows the uploaded images with their aki, ari, and ami disk and container formats.]
If you see an ECONNREFUSED error in return, it means your glance-registry.conf file is incorrectly configured. Double-check and compare to the file in the Appendix. If you get a 401 Unauthorized error, check your environment variables.
sudo apt-get install -y libapache2-mod-wsgi
sudo apt-get install -y openstack-dashboard openstackx python-sqlite
Enter the mysql root user's password when prompted. To configure the MySQL database, create the dash database.
mysql> CREATE DATABASE dash;
Create a MySQL user for the newly-created dash database that has full control of the database.
mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'yourpassword';
Enter quit at the mysql> prompt to exit MySQL. After configuring local_settings.py as shown below, you can run the syncdb command to populate this newly-created database.

Now you can configure the dashboard application by modifying the local_settings.py file. With the Cloud Builders packages, an example is provided that you can copy to local_settings.py and then modify for your environment. If you are using Cloud Builders packages, copy the /var/lib/dash/local/local_settings.py.example file to local_settings.py. If you are using Managed IT packages, edit the /etc/openstack-dashboard/local_settings.py file. A full example local_settings.py file is included in the Appendix.

In local_settings.py, change these options:

DATABASES: Change the database section to point to the MySQL database named dash:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'yourpassword',
        'HOST': 'localhost',
        'default-character-set': 'utf8',
    },
}
SWIFT_ENABLED: If an Object Storage (Swift) endpoint is available and configured in the Identity service catalog, set SWIFT_ENABLED = True.

QUANTUM_ENABLED: For now, the Network Connection (Quantum) service is enabled in local_settings.py, but the project is still in incubation for Diablo and should be available in the Essex release. You can set QUANTUM_ENABLED = False.
As a result, you should see the following at the end of the output:
Installing custom SQL ...
Installing indexes ...
DEBUG:django.db.backends:(0.008) CREATE INDEX `django_session_c25c2c28` ON `django_session` (`expire_date`);; args=()
No fixtures found.
If you want to avoid a warning when restarting apache2, create a blackhole directory in the dashboard directory like so:
sudo mkdir -p /var/lib/dash/.blackhole
Restart Apache to pick up the default site and symbolic link settings.
sudo /etc/init.d/apache2 restart
Restart the nova-api service to ensure the API server can connect to the Dashboard and to avoid an error displayed in the dashboard.
sudo restart nova-api
glance-registry.conf
The Image service's registry, which stores the metadata about images, is configured in /etc/glance/glance-registry.conf. This file needs to be modified after installing the packages.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = False

# Address to bind the registry server
bind_host = 0.0.0.0

# Port the bind the registry server to
bind_port = 9191

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/registry.log

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
#sql_connection = sqlite:////var/lib/glance/glance.sqlite
sql_connection = mysql://glance:yourpassword@192.168.206.130/glance

# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600

# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
api_limit_max = 1000

# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
limit_param_default = 25
[pipeline:glance-registry]
# pipeline = context registryapp
# NOTE: use the following pipeline for keystone
# pipeline = authtoken keystone_shim context registryapp
pipeline = authtoken keystone_shim registryapp

[app:registryapp]
paste.app_factory = glance.registry.server:app_factory

[filter:context]
paste.filter_factory = glance.common.context:filter_factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 11121314151617181920

[filter:keystone_shim]
context_class = glance.registry.context.RequestContext
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory
glance-api.conf
The configuration file for the Image service API is found in /etc/glance/glance-api.conf. You need to change this file to look like this example after installing from packages.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = False

# Which backend store should Glance use by default if one is not specified
# in a request to add a new image to Glance? Default: 'file'
# Available choices are 'file', 'swift', and 's3'
default_store = file

# Address to bind the API server
bind_host = 0.0.0.0

# Port to bind the API server to
bind_port = 9292

# Address to find the registry server
registry_host = 0.0.0.0

# Port the registry server is listening on
registry_port = 9191

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/api.log
# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# ============ Notification System Options =====================

# Notifications can be sent when images are created, updated or deleted.
# There are three methods of sending notifications, logging (via the
# log_file directive), rabbit (via a rabbitmq queue) or noop (no
# notifications sent, the default)
notifier_strategy = noop

# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_topic = glance_notifications

# ============ Filesystem Store Options ========================

# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /var/lib/glance/images/

# ============ Swift Store Options =============================

# Address where the Swift authentication service lives
swift_store_auth_address = 127.0.0.1:8080/v1.0/

# User to authenticate against the Swift authentication service
swift_store_user = jdoe

# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key = a86850deb2742ec3cb41518e26aa2d89

# Container within the account that the account should use
# for storing images in Swift
swift_store_container = glance

# Do we create the container if it does not exist?
swift_store_create_container_on_put = False

# What size, in MB, should Glance start chunking image files
# and do a large object manifest in Swift? By default, this is
# the maximum object size in Swift, which is 5GB
swift_store_large_object_size = 5120

# When doing a large object manifest, what size, in MB, should
# Glance write chunks to Swift? This amount of data is written
# to a temporary disk buffer during the process of chunking
# the image file, and the default is 200MB
swift_store_large_object_chunk_size = 200

# Whether to use ServiceNET to communicate with the Swift storage servers.
# (If you aren't RACKSPACE, leave this False!)
#
# To use ServiceNET for authentication, prefix hostname of
# `swift_store_auth_address` with 'snet-'.
# Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/
swift_enable_snet = False

# ============ S3 Store Options =============================

# Address where the S3 authentication service lives
s3_store_host = 127.0.0.1:8080/v1.0/

# User to authenticate against the S3 authentication service
s3_store_access_key = <20-char AWS access key>

# Auth key for the user authenticating against the
# S3 authentication service
s3_store_secret_key = <40-char AWS secret key>

# Container within the account that the account should use
# for storing images in S3. Note that S3 has a flat namespace,
# so you need a unique bucket name for your glance images. An
# easy way to do this is append your AWS access key to "glance".
# S3 buckets in AWS *must* be lowercased, so remember to lowercase
# your AWS access key if you use it in your bucket name below!
s3_store_bucket = <lowercased 20-char aws access key>glance

# Do we create the bucket if it does not exist?
s3_store_create_bucket_on_put = False

# ============ Image Cache Options ========================

image_cache_enabled = False

# Directory that the Image Cache writes data to
# Make sure this is also set in glance-pruner.conf
image_cache_datadir = /var/lib/glance/image-cache/

# Number of seconds after which we should consider an incomplete image to be
# stalled and eligible for reaping
image_cache_stall_timeout = 86400

# ============ Delayed Delete Options =============================

# Turn on/off delayed delete
delayed_delete = False

[pipeline:glance-api]
# pipeline = versionnegotiation context apiv1app
# NOTE: use the following pipeline for keystone
# pipeline = versionnegotiation authtoken context apiv1app
# To enable Image Cache Management API replace pipeline with below:
# pipeline = versionnegotiation context imagecache apiv1app
# NOTE: use the following pipeline for keystone auth (with caching)
# pipeline = versionnegotiation authtoken context imagecache apiv1app
pipeline = versionnegotiation authtoken keystone_shim apiv1app

[pipeline:versions]
pipeline = versionsapp
[app:versionsapp]
paste.app_factory = glance.api.versions:app_factory

[app:apiv1app]
paste.app_factory = glance.api.v1:app_factory

[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:filter_factory

[filter:imagecache]
paste.filter_factory = glance.api.middleware.image_cache:filter_factory

[filter:context]
paste.filter_factory = glance.common.context:filter_factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
auth_uri = http://127.0.0.1:5000/v2.0
admin_token = 11121314151617181920

[filter:keystone_shim]
context_class = glance.registry.context.RequestContext
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory
glance-scrubber.conf
An additional configuration file for the Image service is found in /etc/glance/glance-scrubber.conf. You need to ensure this configuration file points to the MySQL database.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = False

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/scrubber.log

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# Delayed delete time in seconds
scrub_time = 43200

# Should we run our own loop or rely on cron/scheduler to run us
daemon = False

# Loop time between checking the db for new items to schedule for delete
wakeup_time = 300
# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#$
sql_connection = mysql://glance:yourpassword@192.168.206.130/glance

# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600

[app:glance-scrubber]
paste.app_factory = glance.store.scrubber:app_factory
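The sql_connection value is a standard SQLAlchemy database URL of the form mysql://USER:PASSWORD@HOST/DBNAME. As a quick sanity check before pasting it into several files, the example value used throughout this guide decomposes as follows (a sketch using only the Python standard library):

```python
# Sketch: decompose the SQLAlchemy URL used throughout this guide.
# The shape is mysql://USER:PASSWORD@HOST/DBNAME.
from urllib.parse import urlparse

url = "mysql://glance:yourpassword@192.168.206.130/glance"
parts = urlparse(url)

print(parts.scheme)            # mysql
print(parts.username)          # glance
print(parts.hostname)          # 192.168.206.130
print(parts.path.lstrip("/"))  # glance (the database name)
```

The user, password, host, and database name here must match what you created in the "Configuring the SQL Database (MySQL) on the Cloud Controller" step.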
nova.conf
The configuration file for Compute (nova) settings is stored in /etc/nova/nova.conf. To see a list of all possible flags for this file, you can run nova-<service> --help from the command line, for example, nova-api --help.
# DATABASE
--sql_connection=mysql://nova:yourpassword@192.168.206.130/nova

# LOGS/STATE
--verbose
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova

# RABBITMQ
--rabbit_password=guest
--rabbit_port=5672
--rabbit_host=192.168.206.130

# SCHEDULER
--scheduler_driver=nova.scheduler.simple.SimpleScheduler

# NETWORK
--network_manager=nova.network.manager.FlatDHCPManager
--fixed_range=192.168.100.0/24
--flat_network_dhcp_start=192.168.100.2
--public_interface=eth0
--flat_interface=eth0
--flat_network_bridge=br100

# GLANCE
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=192.168.206.130:9292

# COMPUTE
--compute_manager=nova.compute.manager.ComputeManager
--libvirt_type=qemu
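Unlike the INI-style glance files above, nova.conf in Diablo is a flag file: one --key=value per line, where a bare flag such as --verbose acts as a boolean switch. A minimal illustrative parser for that format (a sketch, not part of nova itself):

```python
# Sketch: parse the Diablo-era nova.conf flag-file format.
# One --key=value per line; a bare flag (e.g. --verbose) is a boolean switch.
def parse_flagfile(text):
    flags = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and section comments
        key, sep, value = line.lstrip("-").partition("=")
        flags[key] = value if sep else True
    return flags

sample = """# NETWORK
--verbose
--network_manager=nova.network.manager.FlatDHCPManager
--fixed_range=192.168.100.0/24
"""
flags = parse_flagfile(sample)
print(flags["network_manager"])  # nova.network.manager.FlatDHCPManager
print(flags["verbose"])          # True
```

Keeping the section comments (# DATABASE, # NETWORK, and so on) is harmless, since the services skip comment lines just as this sketch does.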
api-paste.ini
The configuration file for Compute (Nova) for the EC2 API and OpenStack Compute API is stored in /etc/nova/api-paste.ini. You should check the admin token in this file to ensure it matches the one you created when setting up the Identity service. You should also add all the Keystone configuration specified in the sample file below.
#######
# EC2 #
#######

[composite:ec2]
use = egg:Paste#urlmap
/: ec2versions
/services/Cloud: ec2cloud
/services/Admin: ec2admin
/latest: ec2metadata
/2007-01-19: ec2metadata
/2007-03-01: ec2metadata
/2007-08-29: ec2metadata
/2007-10-10: ec2metadata
/2007-12-15: ec2metadata
/2008-02-01: ec2metadata
/2008-09-01: ec2metadata
/2009-04-04: ec2metadata
/1.0: ec2metadata

[pipeline:ec2cloud]
# pipeline = logrequest authenticate cloudrequest authorizer ec2executor
# pipeline = logrequest ec2lockout authenticate cloudrequest authorizer ec2executor
pipeline = logrequest totoken authtoken keystonecontext cloudrequest authorizer ec2executor

[pipeline:ec2admin]
pipeline = logrequest totoken authtoken keystonecontext adminrequest authorizer ec2executor

[pipeline:ec2metadata]
pipeline = logrequest ec2md

[pipeline:ec2versions]
pipeline = logrequest ec2ver

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory
[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:totoken]
paste.filter_factory = keystone.middleware.ec2_token:EC2Token.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:authenticate]
paste.filter_factory = nova.api.ec2:Authenticate.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:adminrequest]
controller = nova.api.ec2.admin.AdminController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

[app:ec2ver]
paste.app_factory = nova.api.ec2:Versions.factory

[app:ec2md]
paste.app_factory = nova.api.ec2.metadatarequesthandler:MetadataRequestHandler.factory

#############
# Openstack #
#############

[composite:osapi]
use = egg:Paste#urlmap
/: osversions
/v1.0: openstackapi10
/v1.1: openstackapi11

[pipeline:openstackapi10]
# pipeline = faultwrap noauth ratelimit osapiapp10
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = faultwrap auth ratelimit osapiapp10
# NOTE(vish): use the following pipeline for keystone
pipeline = faultwrap authtoken keystonecontext ratelimit osapiapp10

[pipeline:openstackapi11]
# pipeline = faultwrap noauth ratelimit extensions osapiapp11
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = faultwrap auth ratelimit extensions osapiapp11
# NOTE(vish): use the following pipeline for keystone
pipeline = faultwrap authtoken keystonecontext ratelimit extensions osapiapp11

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:auth]
paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.limits:RateLimitingMiddleware.factory

[filter:extensions]
paste.filter_factory = nova.api.openstack.extensions:ExtensionMiddleware.factory

[app:osapiapp10]
paste.app_factory = nova.api.openstack:APIRouterV10.factory

[app:osapiapp11]
paste.app_factory = nova.api.openstack:APIRouterV11.factory

[pipeline:osversions]
pipeline = faultwrap osversionapp

[app:osversionapp]
paste.app_factory = nova.api.openstack.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.206.130
service_port = 5000
auth_host = 192.168.206.130
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.206.130:5000/v2.0/
admin_token = 11121314151617181920
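Because the same admin_token must appear in every [filter:authtoken] section, here and in the glance configuration files, a quick cross-check after editing can catch mismatches before they surface as authentication failures. A sketch using Python's standard configparser (the two file bodies below are abbreviated stand-ins for the real files):

```python
# Sketch: verify that admin_token matches across every [filter:authtoken]
# section.  In practice you would read the real files from /etc.
import configparser
import io

nova_paste = io.StringIO("""
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
admin_token = 11121314151617181920
""")
glance_api = io.StringIO("""
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
admin_token = 11121314151617181920
""")

tokens = set()
for conf in (nova_paste, glance_api):
    cp = configparser.ConfigParser()
    cp.read_file(conf)
    tokens.add(cp.get("filter:authtoken", "admin_token"))

# All services must share one admin token
assert len(tokens) == 1, "admin_token differs between services"
```

If the assertion fires, re-check each file against the token you created when setting up the Identity service.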
Credentials (openrc)
This file contains the credentials used by the Compute, Image, and Identity services; you can optionally store it in /home/openrc. The important point, to avoid errors, is to ensure that it is sourced in the environment from which you issue commands. Run "env | grep OS_" or "env | grep NOVA_" to view what is being used in your environment.
export NOVA_USERNAME=adminUser
export NOVA_PROJECT_ID=openstackDemo
export NOVA_PASSWORD=secretword
export NOVA_API_KEY=${NOVA_PASSWORD}
export NOVA_URL=http://192.168.206.130:5000/v2.0/
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=RegionOne
export OS_AUTH_USER=${NOVA_USERNAME}
export OS_AUTH_KEY=${NOVA_PASSWORD}
export OS_AUTH_TENANT=${NOVA_PROJECT_ID}
export OS_AUTH_URL=${NOVA_URL}
export OS_AUTH_STRATEGY=keystone
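Note that the OS_* variables simply mirror the NOVA_* values for tools that use the newer names. The mapping, and the "env | grep" check suggested above, can be illustrated in Python (a sketch; the values are the examples from this guide, not live credentials):

```python
# Sketch: mimic sourcing openrc, then list what the clients will see,
# as "env | grep OS_" / "env | grep NOVA_" would.
import os

os.environ["NOVA_USERNAME"] = "adminUser"
os.environ["NOVA_PASSWORD"] = "secretword"
os.environ["NOVA_PROJECT_ID"] = "openstackDemo"

# The OS_* names mirror the NOVA_* values
os.environ["OS_AUTH_USER"] = os.environ["NOVA_USERNAME"]
os.environ["OS_AUTH_KEY"] = os.environ["NOVA_PASSWORD"]
os.environ["OS_AUTH_TENANT"] = os.environ["NOVA_PROJECT_ID"]
os.environ["OS_AUTH_STRATEGY"] = "keystone"

for name in sorted(os.environ):
    if name.startswith(("OS_", "NOVA_")):
        print(f"{name}={os.environ[name]}")
```

If the OS_* values printed do not match their NOVA_* counterparts, the file was edited inconsistently or was not re-sourced after a change.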
Dashboard configuration
This file contains the database and configuration settings for the OpenStack Dashboard.
import os

DEBUG = True
TEMPLATE_DEBUG = DEBUG
PROD = False
USE_SSL = False

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

#DATABASES = {
#    'default': {
#        'ENGINE': 'django.db.backends.sqlite3',
#        'NAME': os.path.join(LOCAL_PATH, 'dashboard_openstack.sqlite3'),
#    },
#}

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'yourpassword',
        'HOST': 'localhost',
        'default-character-set': 'utf8'
    },
}

CACHE_BACKEND = 'dummy://'
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'

# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'

# django-mailer uses a different settings attribute
MAILER_EMAIL_BACKEND = EMAIL_BACKEND

# Configure these for your outgoing email host
# EMAIL_HOST = 'smtp.my-company.com'
# EMAIL_PORT = 25
# EMAIL_HOST_USER = 'djangomail'
# EMAIL_HOST_PASSWORD = 'top-secret!'
OPENSTACK_KEYSTONE_URL = "http://localhost:5000/v2.0/"
# FIXME: this is only needed until keystone fixes its GET /tenants call
# so that it doesn't return everything for admins
OPENSTACK_KEYSTONE_ADMIN_URL = "http://localhost:35357/v2.0"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

# NOTE(tres): Available services should come from the service
# catalog in Keystone.
SWIFT_ENABLED = False

# Configure quantum connection details for networking
QUANTUM_ENABLED = False
QUANTUM_URL = '127.0.0.1'
QUANTUM_PORT = '9696'
QUANTUM_TENANT = '1234'
QUANTUM_CLIENT_VERSION='0.1'

# If you have external monitoring links
EXTERNAL_MONITORING = [
    ['Nagios', 'http://foo.com'],
    ['Ganglia', 'http://bar.com'],
]

# If you do not have external monitoring links
# EXTERNAL_MONITORING = []

# Uncomment the following segment to silence most logging
# django.db and boto DEBUG logging is extremely verbose.
#LOGGING = {
#    'version': 1,
#    # set to True will disable all logging except that specified, unless
#    # nothing is specified except that django.db.backends will still log,
#    # even when set to True, so disable explicitly
#    'disable_existing_loggers': False,
#    'handlers': {
#        'null': {
#            'level': 'DEBUG',
#            'class': 'django.utils.log.NullHandler',
#        },
#        'console': {
#            'level': 'DEBUG',
#            'class': 'logging.StreamHandler',
#        },
#    },
#    'loggers': {
#        # Comment or Uncomment these to turn on/off logging output
#        'django.db.backends': {
#            'handlers': ['null'],
#            'propagate': False,
#        },
#        'django_openstack': {
#            'handlers': ['null'],
#            'propagate': False,
#        },
#    }
#}

# How much ram on each compute host?
COMPUTE_HOST_RAM_GB = 16