
SPARC Logical Domains: Alternate Service Domains Part 3

In Part One of this series, we went through the initial configuration of our Logical Domain hypervisor and took some time to explain the process of mapping out the PCI Root Complexes, so that we would be able to effectively split them between the primary and an alternate domain.

In Part Two of this series, we used the information from Part One to split out our PCI Root Complexes, then configured and installed an alternate domain. We were also able to reboot the primary domain without impacting the operation of the alternate domain.

In Part Three (this article), we will create redundant virtual services as well as some guests that use those services, and we will go through some testing to see the capabilities of this architecture. By the end of this article, we will be able to reboot either the primary or the alternate domain without impacting any of the running guests.

Create Redundant Virtual Services

At this point, we have a fully independent I/O Domain named alternate. This is great for some use cases; however, if we don't also enable it as a Service Domain, we won't be able to extend that independence to our Guest Domains. That means creating virtual services for the critical resources a guest consumes: virtual disk, virtual network, and virtual console.

We previously created a primary-vds0, and that will suit us just fine; however, we will also need an alternate-vds0.

# ldm add-vdiskserver primary-vds0 primary
# ldm add-vdiskserver alternate-vds0 alternate
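
If you would like to confirm that both virtual disk services now exist, you can list the services from the control domain; the exact output depends on your configuration, but you should see a VDS entry under both primary and alternate.

# ldm list-services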

We did not provision any Virtual Switches previously, as we had no need for them: we had handed out physical NICs directly to primary and alternate. Here we will create both primary-vsw0 and alternate-vsw0.

# ldm add-vswitch net-dev=net0 primary-vsw0 primary
# ldm add-vswitch net-dev=net0 alternate-vsw0 alternate
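
One thing worth double-checking is the net-dev value: it must be a datalink that actually exists in the domain hosting the switch, and net0 here is simply the example device carried over from the earlier parts of this series. You can verify what is available by running the following in both the primary and the alternate domain.

# dladm show-phys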

To connect to the console of logical domains we must have a virtual console concentrator. This should have been set up previously in order to install the alternate domain.

# ldm add-vconscon port-range=5000-5100 primary-vcc0 primary

Now let’s save our settings, since we have set up the services.

# ldm add-config redundant-virt-services
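
If you want to verify that the configuration was stored on the service processor, you can list the saved configurations; the new name should appear in the output.

# ldm list-config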

With our progress saved we can move on.

Creating Multipath Storage Devices

In order to utilize the redundancy of LDM, we will need to create redundant virtual disk devices. The key difference here is that we will need to specify an mpgroup.

# ldm add-vdsdev mpgroup=san01-fc primary-backend ldm1-disk0@primary-vds0

And now the same device, using the alternate domain.

# ldm add-vdsdev mpgroup=san01-fc alternate-backend ldm1-disk0@alternate-vds0

Another thing to notice: when using multiple protocols on the same SAN, it is important to use a different mpgroup for each protocol, because a failure in one interconnect layer does not necessarily affect the others. Case in point, a failure of the FC fabric would not affect the availability of NFS services, so those failures need to be monitored separately. The jury is still out on where the line should be drawn in terms of what goes into a single mpgroup. While testing live migration, it seemed more effective to use the VM and the protocol as the boundary, since live migration checks the mpgroup membership on both sides as part of its validation. So, in this case, it might be ldm1-fc and ldm1-nfs (an illustration follows the NFS devices below).

# ldm add-vdsdev mpgroup=san01-nfs primary-backend ldm1-disk1@primary-vds0

Again the same device for the alternate domain.

# ldm add-vdsdev mpgroup=san01-nfs alternate-backend ldm1-disk1@alternate-vds0
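
To make the per-VM, per-protocol boundary discussed above concrete, the same four back-ends could instead be grouped per guest like this. The mpgroup names are purely illustrative; you would pick one naming scheme or the other, not both.

# ldm add-vdsdev mpgroup=ldm1-fc primary-backend ldm1-disk0@primary-vds0
# ldm add-vdsdev mpgroup=ldm1-fc alternate-backend ldm1-disk0@alternate-vds0
# ldm add-vdsdev mpgroup=ldm1-nfs primary-backend ldm1-disk1@primary-vds0
# ldm add-vdsdev mpgroup=ldm1-nfs alternate-backend ldm1-disk1@alternate-vds0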

Now we are ready to support the domain. Next, we will create the domain and assign the disk resources. It is important to note that we do not assign both disk resources, only the one served by primary-vds0; the mpgroup takes care of the redundancy.

# ldm add-domain ldm1
# ldm set-vcpu 16 ldm1
# ldm set-memory 16G ldm1
# ldm add-vdisk disk0 ldm1-disk0@primary-vds0 ldm1
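
If you want to sanity-check the disk assignment, you can ask ldm for the disk resources of the new domain; the output should show disk0 pointing at ldm1-disk0@primary-vds0.

# ldm list -o disk ldm1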

In the next section we will create some redundant network interfaces.

Creating Redundant Guest Networking

Redundant networking is really no different than non-redundant networking: we simply create two virtual network devices, one on primary-vsw0 and the other on alternate-vsw0. Once provisioned, we create an IPMP interface inside the guest. In theory you could use DLMP as well, though I haven’t tested that option.

# ldm add-vnet vnet0 primary-vsw0 ldm1
# ldm add-vnet vnet1 alternate-vsw0 ldm1
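
You can confirm that the guest now has one virtual network device on each switch before moving on; the network resources for the domain should show vnet0 on primary-vsw0 and vnet1 on alternate-vsw0.

# ldm list -o network ldm1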

With the guest fully defined, we now need to bind it, start it, and install it.

# ldm bind ldm1
# ldm start ldm1
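
Once the domain is started, you can find its console port and connect to it from the control domain for the install; the port number below is just an example, so use whatever CONS value ldm reports for ldm1.

# ldm list ldm1
# telnet localhost 5000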

I am assuming that you know how to install Solaris, as you will already have done so at least twice to get to this point. Now it is time to configure networking. If you need help with configuring networking, see the following articles.

Solaris 11: Network Configuration Basics

Solaris 11: Network Configuration Advanced

ldm1# ipadm create-ip net0
ldm1# ipadm create-ip net1
ldm1# ipadm create-ipmp -i net0 -i net1 ipmp0
ldm1# ipadm create-addr -T static -a 192.168.1.11/24 ipmp0/v4
ldm1# route -p add default 192.168.1.1
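
Before doing any failure testing, it is worth confirming from inside the guest that the IPMP group is healthy; both interfaces should show up as active members of ipmp0.

ldm1# ipmpstat -g
ldm1# ipmpstat -i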

At this point, you have all the pieces in place for redundant guests. Now it is time to do some rolling reboots of the primary and alternate domains and ensure your VM stays up and running. Inside the guest, the only thing that will look amiss is the IPMP members going into a failed state and then coming back up as the services are restored.
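
A simple way to run that test is to reboot one service domain at a time while watching the IPMP state from inside the guest. For example, reboot the primary domain from within primary, then watch the interface backed by primary-vsw0 fail and recover in the guest; repeat the exercise for alternate.

# init 6
ldm1# ipmpstat -i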

One final note: from the ILOM, issuing -> stop /SYS will shut down the physical hardware, which takes down both service domains and all guests.

News & Insights