
Solaris LDOMs virtualization : setup guide

Sun Logical Domains (LDoms) is a full virtual machine technology: each logical domain runs an independent operating system instance and contains virtualized CPU, memory, storage, console, and cryptographic devices. This technology allows you to allocate a system's resources into logical groupings and create multiple, discrete systems, each with its own operating system, resources, and identity, within a single physical computer. You can run a variety of application software in different logical domains and keep them independent of each other for performance and security purposes. The LDoms environment can help achieve greater resource usage, better scaling, and increased security and isolation.


Logical & Control domain: The control domain communicates with the hypervisor to create and manage all logical domain configurations within a server platform. The Logical Domains Manager is used to create and manage logical domains, and it maps logical domains to physical resources. Without access to the Logical Domains Manager, all logical domain resource levels remain static. The initial domain created when the Logical Domains software is installed is a control domain, and it is named primary.

Image from : http://www.sun.com/blueprints/0207/820-0832.pdf

You can download the Logical Domains Manager from http://sun.com/ldoms . Please read the release notes for the system firmware and patch requirements. By default, the LDoms software is installed under /opt/SUNWldm/. Make sure the command below works - that confirms the Logical Domains Manager is running.

solfoo23# /opt/SUNWldm/bin/ldm list
Name      State    Flags   Cons   VCPU   Memory   Util   Uptime
primary   active   -t-cv   SP     32     16128M   49%    90m
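Since the ldm command lives under /opt/SUNWldm/bin, it is convenient to add that directory to root's PATH on the control domain, and a quick version query (ldm -V) confirms that the manager daemon is responding. A minimal sketch for a Bourne-style shell (the man page path is an assumption; adjust to your installation):

solfoo23# PATH=$PATH:/opt/SUNWldm/bin; export PATH
solfoo23# MANPATH=$MANPATH:/opt/SUNWldm/man; export MANPATH
solfoo23# ldm -V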

Creating default services: You need to create the default virtual services that the control domain uses to provide disk services, console access and networking. The commands below create them.
1. Create the Virtual Disk Server (vds): The virtual disk server allows virtual disks to be imported into a logical domain from the control domain.

solfoo23# ldm add-vds primary-vds0 primary

2. Create the Virtual Console Concentrator Server (vcc): The virtual console concentrator server provides terminal services for logical domain consoles.

solfoo23# ldm add-vcc port-range=5000-5100 primary-vcc0 primary


3. Create the Virtual Switch (vsw): The virtual switch enables networking between virtual network devices in logical domains.

solfoo23# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary


4. List the default services created

solfoo23# ldm list-services primary
VDS
    NAME          VOLUME    OPTIONS   DEVICE
    primary-vds0

VCC
    NAME          PORT-RANGE
    primary-vcc0  5000-5100

VSW
    NAME          MAC                NET-DEV   DEVICE     MODE
    primary-vsw0  00:11:5a:12:dc:fc  e1000g1   switch@0   prog,promisc

Control Domain Creation: The next step is to perform the initial setup of the primary domain, which will act as the control domain. You specify the resources that the primary domain will use; the rest are released for use by other guest domains. In this document, we are creating the control domain with 2 CPUs and 1 GB of RAM.

solfoo23# ldm set-mau 0 primary
solfoo23# ldm set-vcpu 2 primary
solfoo23# ldm set-memory 1024M primary

Now make these changes permanent by saving the configuration to the service processor: add-spconfig saves a configuration and list-spconfig lists the stored ones.

solfoo23# ldm list-spconfig
factory-default [current]

solfoo23# ldm add-spconfig initial

solfoo23# ldm list-spconfig
factory-default [current]
initial [next]

Reboot the server and it will come up with the initial configuration.
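After the reboot you can verify that the saved configuration is the one actually in use; the initial entry should now be reported as [current], along the lines of:

solfoo23# ldm list-spconfig
factory-default
initial [current]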

Networking between domains: Networking between the control/service domain and other domains is disabled by default. To enable it, the virtual switch device must be configured as a network device. Log in on the server console and perform the following network configuration steps.
1. Plumb the virtual switch (vsw0)

solfoo23# ifconfig vsw0 plumb


2. Bring down the primary interface

solfoo23# ifconfig e1000g1 down unplumb

3. Configure Virtual switch with the primary interface details

solfoo23# ifconfig vsw0 <ip> netmask <netmask> broadcast + up


4. Modify the hostname file to make this configuration permanent

solfoo23# mv /etc/hostname.e1000g1 /etc/hostname.vsw0


5. Enable Virtual Network terminal server daemon

solfoo23# svcadm enable vntsd

Now the setup is done. Run "ldm list-bindings primary" and make sure the bindings look correct.

Logical Domain Creation: Now that the system is ready, prepare and plan for the logical domain configuration. In this document, we are creating a logical domain named "domfoo" with 2 CPUs and 1 GB of memory.

solfoo23# ldm add-domain domfoo
solfoo23# ldm add-vcpu 2 domfoo
solfoo23# ldm add-memory 1G domfoo
solfoo23# ldm add-vnet vnet1 primary-vsw0 domfoo
solfoo23# ldm add-vdsdev /dev/dsk/c1t2d0s2 vol1@primary-vds0
solfoo23# ldm add-vdisk vdisk1 vol1@primary-vds0 domfoo
solfoo23# ldm bind domfoo
solfoo23# ldm set-var auto-boot\?=false domfoo
solfoo23# ldm start-domain domfoo

You will be able to see the domain using "ldm list-domain".

solfoo23# ldm list-domain
NAME      STATE      FLAGS   CONS   VCPU   MEMORY   UTIL   UPTIME
primary   active     -n-cv   SP     2      2G       0.2%   3h 4m
domfoo    inactive   -----          2      1G

Connect to the logical domain console by telnetting to the virtual console port.

solfoo23# telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Connecting to console "domfoo" in group "domfoo" ....
Press ~? for control options ..

{0} ok

Your LDom is up! You can install it using JumpStart. Your LDoms environment is ready!
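As a rough illustration of the JumpStart step (assuming the guest's OBP device alias matches the vnet name vnet1 used above and that a JumpStart server has already been set up for this client), the network install can be started from the guest's ok prompt:

{0} ok boot vnet1 - install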

What's up with LDoms: Part 2 - Creating a first, simple guest


By Stefan Hinker on Jun 29, 2012

Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain. We saw how resources were put aside for guest systems and what infrastructure we need for them. With that, we are now ready to create a first, very simple guest domain.

In this first example, we'll keep things very simple. Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO as well as security. For now, let's start with this very simple guest. It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port. (If this were a T4 system, we'd not have to assign the crypto units. Since this is T3, it makes lots of sense to do so.) CPU and RAM are easy. The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain. This is very much like plugging a cable into a computer system on one end and a network switch on the other.

For the boot disk, we'll need two things: a physical piece of storage to hold the data - this is called the backend device in LDoms speak - and a mapping between that storage and the guest domain, giving it access to that virtual disk. For this example, we'll use a ZFS volume for the backend. We'll discuss what other options there are for this and how to choose the right one in a later article. Here we go:
root@sun # ldm create mars
root@sun # ldm set-vcpu 8 mars
root@sun # ldm set-mau 1 mars
root@sun # ldm set-memory 8g mars
root@sun # zfs create rpool/guests
root@sun # zfs create -V 32g rpool/guests/mars.bootdisk
root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \
           mars.root@primary-vds
root@sun # ldm add-vdisk root mars.root@primary-vds mars
root@sun # ldm add-vnet net0 switch-primary mars

That's all, mars is now ready to power on. There are just three commands between us and the OK prompt of mars: We have to "bind" the domain, start it and connect to its console. Binding is the process where the hypervisor actually puts all the pieces that we've configured together. If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else). Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP. By default, the domain will then try to boot right away. If we don't want that, we can set "auto-boot?" to false. Finally, we'll use telnet to connect to the console of our newly created guest. The output of "ldm list" shows us what port has been assigned to mars. By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here.
root@sun # ldm set-variable auto-boot\?=false mars
root@sun # ldm bind mars
root@sun # ldm start mars
root@sun # ldm list
NAME      STATE     FLAGS    CONS   VCPU   MEMORY   UTIL   UPTIME
primary   active    -n-cv-   UART   8      7680M    0.5%   1d 4h 30m
mars      active    -t----   5000   8      8G       12%    1s

root@sun # telnet localhost 5000

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Connecting to console "mars" in group "mars" ....
Press ~? for control options ..
{0} ok banner

SPARC T3-4, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131.
Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50.

{0} ok

We're done, mars is ready to install Solaris, preferably using AI, of course ;-) But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here:
{0} ok printenv auto-boot?
auto-boot? =           false
{0} ok printenv boot-device
boot-device =          disk net
{0} ok devalias
root                   /virtual-devices@100/channel-devices@200/disk@0
net0                   /virtual-devices@100/channel-devices@200/network@0
net                    /virtual-devices@100/channel-devices@200/network@0
disk                   /virtual-devices@100/channel-devices@200/disk@0
virtual-console        /virtual-devices/console@1
name                   aliases

We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked. Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started. The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image.

The actual devices these aliases point to are shown with the command "devalias". Here, we have one line for both "disk" and "net". The device paths speak for themselves. Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device. These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk". Remember this, as it is very useful once you have several dozen disk devices...

To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity. This should be enough to get you going. I will cover all the more advanced features and a little more theoretical background in several follow-on articles. For some background reading, I'd recommend the following links:

LDoms 2.2 Admin Guide: Setting up Guest Domains

Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session.

OpenBoot 4.x command reference - All the things you can do at the ok prompt.
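As a small aside to the alias discussion above, the same names can be used when changing the boot order from the control domain. For example, to have mars boot explicitly from the disk we named "root" (a sketch, not part of the original article):

root@sun # ldm set-variable boot-device=root mars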

Creating a base LDOM image


From Peter Pap's Technowiki

Now that the control domain is set up, you can start creating LDOMs at will. The idea here is to create a base LDOM image that can be cloned to quickly provision new servers.

1. Create a ZFS file system to store the disk image for our LDOM
zfs create storage/base

This will create a ZFS file system called storage/base that will be mounted at /storage/base.

2. Create a ZFS disk image to hold the LDOM's data
zfs create -V 36gb storage/base/disk0

This will create a 36Gb disk image. You can see the results in the zfs list output:
# zfs list
NAME                       USED   AVAIL  REFER  MOUNTPOINT
rpool                      25.4G  109G   93K    /rpool
rpool/ROOT                 5.36G  109G   18K    legacy
rpool/ROOT/s10s_u7wos_08   5.36G  109G   5.36G  /
rpool/dump                 10.0G  109G   10.0G  -
rpool/swap                 10G    118G   16.4M  -
storage                    37.1G  630G   36.5K  /storage
storage/base               37.1G  630G   34.9K  /storage/base
storage/base/disk0         37.1G  667G   26.6K  -

3. Create the new LDOM


ldm add-domain base

4. Assign virtual CPU's to the new LDOM

ldm add-vcpu 8 base

5. Assign memory to the new LDOM


ldm add-memory 2G base

6. Create a virtual disk device and assign it to the new LDOM


ldm add-vdsdev /dev/zvol/dsk/storage/base/disk0 base-vol1@primary-vds0
ldm add-vdisk vdisk1 base-vol1@primary-vds0 base

This creates a virtual disk device called base-vol1@primary-vds0 that is then assigned to the LDOM base. ldm list-services should now look like this:
# ldm list-services
VCC
    NAME          LDOM     PORT-RANGE
    primary-vcc0  primary  5000-5100

VSW
    NAME          LDOM     MAC                NET-DEV  ID  DEVICE    LINKPROP  DEFAULT-VLAN-ID  PVID  VID  MTU   MODE
    primary-vsw0  primary  00:14:4f:fa:40:c8  e1000g0  0   switch@0            1                1          1500

VDS
    NAME          LDOM     VOLUME     OPTIONS  MPGROUP  DEVICE
    primary-vds0  primary  base-vol1                    /dev/zvol/dsk/storage/base/disk0

7. Add a virtual network interface to the new LDOM


ldm add-vnet vnet1 primary-vsw0 base

8. Adjust the boot parameters of the LDOM so that it will automatically boot when the host system starts up and so that it knows that the vdisk1 device is its boot device:

# ldm set-var auto-boot\?=true base
# ldm set-var boot-device=vdisk1 base

You can also change other boot parameters as you see fit
# ldm set-var boot-file="-v -mverbose" base

9. Bind the new LDOM


ldm bind base

10. Check the parameters associated with your LDOM

# ldm list-bindings base
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
base             bound      ------  5000    8     2G

UUID
    16277b70-4c8c-c240-da5b-f73075ba2a89

MAC
    00:14:4f:f9:33:a5

HOSTID
    0x84f933a5

CONTROL
    failure-policy=ignore

DEPENDENCY
    master=

CORE
    CID    CPUSET
    1      (8, 9, 10, 11, 12, 13, 14, 15)

VCPU
    VID    PID    CID    UTIL STRAND
    0      8      1           100%
    1      9      1           100%
    2      10     1           100%
    3      11     1           100%
    4      12     1           100%
    5      13     1           100%
    6      14     1           100%
    7      15     1           100%

MEMORY
    RA               PA               SIZE
    0x8000000        0x88000000       2G

VARIABLES
    auto-boot?=true
    boot-device=vdisk1
    boot-file=-v -mverbose

NETWORK
    NAME   SERVICE            ID   DEVICE     MAC               MODE  PVID  VID  MTU   LINKPROP
    vnet1  prod-vsw0@primary  0    network@0  00:14:4f:f8:78:9c       1          1500
        PEER               MAC               MODE  PVID  VID  MTU   LINKPROP
        prod-vsw0@primary  00:14:4f:fa:40:c8       1          1500

DISK
    NAME    VOLUME                  TOUT  ID  DEVICE   SERVER   MPGROUP
    vdisk1  base-vol1@primary-vds0        0   disk@0   primary

VCONS
    NAME   SERVICE               PORT
    base   primary-vcc0@primary  5000

This shows, amongst other valuable information, that the console port for telnet is 5000 and that the MAC address of the virtual NIC is 00:14:4f:f8:78:9c :-)

11. Start the new LDOM
ldm start-domain base

12. Log on to the console of the new LDOM. It should be waiting at the OK prompt for you to do a Jumpstart install. Use the MAC address you got from the previous step and Jumpstart install an OS on the LDOM over the network.
telnet localhost 5000
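If you need to double-check the MAC address of vnet1 from the control domain instead of scrolling back through the bindings output, something like the following should print just the network section of the domain listing (assuming your ldm version supports the -o output filter):

ldm list -o network base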

NOTE: The MAC address that you get from the OBP banner command is NOT the MAC address of the vnet1 interface! You will need the MAC address you got from the previous step!

13. Once you've built the LDOM OS and have customized it to your liking, you will need to create a sysidcfg file. However, in this instance make the file /etc/sysidcfg.bk. The reasons will become evident shortly. You can find details of how to create a sysidcfg file using man sysidcfg. My sysidcfg file looks like this:
system_locale=en_US.UTF-8
timezone=Australia/Victoria
timeserver=localhost
terminal=vt100
name_service=DNS{domain_name=mydomain.com.au name_server=192.168.1.101 search=mydomain.com.au}
network_interface=PRIMARY {default_route=192.168.1.1 netmask=255.255.255.0 protocol_ipv6=no}
security_policy=NONE
root_password=my_password_hash
nfs4_domain=dynamic

14. Create the script /etc/rc0.d/K99mksysidcfg with the following contents


#!/bin/sh
/usr/bin/cp -p /etc/sysidcfg.bk /etc/sysidcfg
/usr/bin/rm /etc/rc0.d/K99mksysidcfg

Make sure it is executable

chmod 744 K99mksysidcfg

15. Create the script /etc/rc3.d/S99ldomreconfig to add host file entries and sendmail configs. Make sure it is executable
chmod 744 S99ldomreconfig
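The contents of S99ldomreconfig are site specific and are not listed here. A purely hypothetical sketch (the host entries and the sendmail restart below are assumptions - adjust or discard to suit your site):

#!/bin/sh
# Hypothetical example only: re-create site-specific /etc/hosts entries on
# the first boot of a clone, refresh sendmail, then remove this script so
# that it runs only once.
cat >> /etc/hosts <<EOF
192.168.1.101   nameserver1
192.168.1.110   mailhost
EOF
svcadm restart svc:/network/smtp:sendmail
/usr/bin/rm /etc/rc3.d/S99ldomreconfig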

16. We now unconfigure the LDOM to remove its identity. This will remove its hostname, IP address and host file entries - hence the reason for steps 13 and 14! To do this:
# sys-unconfig
WARNING

This program will unconfigure your system. It will cause it to revert to a
"blank" system - it will not have a name or know about other systems or
networks.

This program will also halt the system.

Do you want to continue (y/n) ? y

The LDOM will now shut down and will have no identity!

17. Exit out of the telnet console, then stop and unbind the LDOM
ldm stop-domain base
ldm unbind base

The LDOM is now ready to clone!!!
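To give an idea of what the cloning itself might look like, here is a sketch of one possible approach using a ZFS snapshot and clone of the base disk image (the guest1 names are placeholders, and the new domain is assumed to have already been created with ldm add-domain guest1 as in step 3):

# zfs snapshot storage/base/disk0@golden
# zfs create storage/guest1
# zfs clone storage/base/disk0@golden storage/guest1/disk0
# ldm add-vdsdev /dev/zvol/dsk/storage/guest1/disk0 guest1-vol1@primary-vds0
# ldm add-vdisk vdisk1 guest1-vol1@primary-vds0 guest1

From there, the remaining steps (vnet, boot variables, bind and start) are the same as for the base LDOM.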
