
Aug 15, 2011

Adding a KVM hypervisor to OpenNebula (II)

Once I have finished configuring the KVM computing node in Adding a KVM hypervisor to OpenNebula (I), today I am going to conclude this series of technical articles about OpenNebula by setting up a new instance on kvm01.

First of all, for my testing I am going to use a ttylinux image, downloaded directly from the OpenNebula website. This sort of Linux distribution is designed to consume fewer resources than a typical operating system such as Debian or CentOS.

oneadmin@frontend01:/tmp$ wget http://dev.opennebula.org/attachments/download/170/ttylinux.tar.gz

oneadmin@frontend01:/tmp$ tar xvzf ttylinux.tar.gz ; cd ~/templates

The next step is to define an image template in order to register the image in OpenNebula.

oneadmin@frontend01:~/templates$ cat ttylinux.img
NAME        = "ttylinux"
PATH        = /tmp/ttylinux.img
DESCRIPTION = "Very small Linux distribution based on a 2.6 kernel"

oneadmin@frontend01:~/templates$ oneimage register ttylinux.img

oneadmin@frontend01:~/templates$ oneimage list
ID     USER                 NAME TYPE              REGTIME PUB PER STAT  #VMS
 0 oneadmin   Ubuntu Server 8.04   OS   Jul 02, 2011 10:34  No  No  rdy     0
 1 oneadmin             ttylinux   OS   Aug 07, 2011 18:30  No  No  rdy     0

Now we have a virtual image ready to be used on our KVM nodes, in this case kvm01.

oneadmin@frontend01:~/templates$ ls -lh ../var/images/8625d68b699fd30e64360471eb2c38fed47fcfb6
-rw-rw---- 1 oneadmin cloud 40M 2011-08-07 20:30 var/images/8625d68b699fd30e64360471eb2c38fed47fcfb6

oneadmin@frontend01:~/templates$ file ../var/images/8625d68b699fd30e64360471eb2c38fed47fcfb6
var/images/8625d68b699fd30e64360471eb2c38fed47fcfb6: x86 boot sector, LInux i386 boot LOader; partition 1: ID=0x83, starthead 1, startsector 63, 81585 sectors, code offset 0xeb

Then we have to create a virtual network which will be used by all the virtual machines built on our KVM computing node. Note that the key element of this network is the bridge created in the previous article.

oneadmin@frontend01:~/templates$ cat kvm.net
NAME            = "KVM Network"
TYPE            = RANGED
PUBLIC          = NO
BRIDGE          = br0
NETWORK_ADDRESS = 192.168.1.160
NETWORK_SIZE    = 16
NETMASK         = 255.255.255.0
GATEWAY         = 192.168.1.1
DNS             = 194.30.0.1

oneadmin@frontend01:~/templates$ onevnet create kvm.net

oneadmin@frontend01:~/templates$ onevnet list
ID USER     NAME              TYPE BRIDGE P #LEASES
 0 oneadmin KVM Network     Ranged    br0 N       0

And lastly, we just have to set up an instance template describing the characteristics of our virtual machine, so that we can run it on kvm01.

oneadmin@frontend01:~/templates$ cat ttylinux01.vm
NAME   = ttylinux01
CPU    = 1
MEMORY = 128

DISK   = [ SOURCE = "/srv/cloud/one/var/images/8625d68b699fd30e64360471eb2c38fed47fcfb6",
           TARGET = "hda" ]

NIC    = [ NETWORK = "KVM Network" ]

oneadmin@frontend01:~/templates$ onevm create ttylinux01.vm

oneadmin@frontend01:~/templates$ onevm list
ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
 0 oneadmin ttylinux runn   0      0K           kvm01 00 00:01:03
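
By the way, if you also want console access to the instance, the VM template accepts a GRAPHICS section. The following is only a sketch (the VNC port is an arbitrary choice, and I have not used it in this series):

GRAPHICS = [ TYPE   = "vnc",
             LISTEN = "0.0.0.0",
             PORT   = "5901" ]

With something like this added to ttylinux01.vm, a VNC client pointed at kvm01 on that port should reach the console of the instance.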


Aug 9, 2011

Adding a KVM hypervisor to OpenNebula (I)

After finishing the series about how to add a VMware ESXi hypervisor to OpenNebula, it is now time to configure a KVM node in our OpenNebula cloud infrastructure.

To begin with, we are going to create a network bridge on kvm01. For this purpose, we must put the NIC into manual mode and attach it to the bridge (br0). Remember that this new interface must also have an IP address belonging to the same subnet.

root@kvm01:~# cat /etc/network/interfaces
...
auto eth0
   iface eth0 inet manual

auto br0
   iface br0 inet static
   address 192.168.1.12
   netmask 255.255.255.0
   network 192.168.1.0
   broadcast 192.168.1.255
   gateway 192.168.1.1
   dns-nameservers 194.30.0.1
   dns-search opennebula.local
   bridge_ports eth0
   bridge_fd 9
   bridge_hello 2
   bridge_maxage 12
   bridge_stp off

root@kvm01:~# /etc/init.d/networking restart
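
To verify that the bridge has come up correctly, the brctl tool from the bridge-utils package can be used. A quick sketch (the bridge id shown below is just a placeholder):

root@kvm01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000000000000       no              eth0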

The reason for creating a bridge is clear: to be able to reach the virtual machines built on this node. Otherwise, we would never be able to connect to them.

Then we have to install the corresponding packages to be able to virtualize machines through KVM. The ruby package will be used to manage the node from OpenNebula, and nfs-common to mount the shared area exported by storage01. As you can see below, the libvirtd daemon must be put into listening mode without authentication.

root@kvm01:~# aptitude install kvm libvirt-bin ruby nfs-common

root@kvm01:~# cat /etc/libvirt/libvirtd.conf
...
listen_tls = 0
listen_tcp = 1
auth_tcp   = "none"

root@kvm01:~# cat /etc/libvirt/qemu.conf
...
dynamic_ownership = 0

root@kvm01:~# cat /etc/init/libvirt-bin.conf
...
env libvirtd_opts="-d -l"

root@kvm01:~# restart libvirt-bin

In addition, it is necessary to uncomment the line which says "dynamic_ownership = 1" (libvirt dynamically changes file ownership to match the configured user/group) and set it to 0. Otherwise, you would get an error like the following.

oneadmin@frontend01:~/templates$ tail -f ../var/oned.log
...
Sat Aug 13 20:32:11 2011 [TM][D]: Message received: TRANSFER SUCCESS 1 -
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 Command execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/deploy" ]; then /var/tmp/one/vmm/kvm/deploy /srv/cloud/one/var//1/images/deployment.0; else                              exit 42; fi'
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 STDERR follows.
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 error: Failed to create domain from /srv/cloud/one/var//1/images/deployment.0
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 error: unable to set user and group to '104:112' on '/srv/cloud/one/var//1/images/disk.0': Invalid argument
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 ExitCode: 255
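
Once libvirtd is listening on its TCP socket without authentication, a quick sanity check can be run from frontend01 (or from any host with a libvirt client installed) by opening a remote connection; an empty domain list simply means that the connection works:

oneadmin@frontend01:~$ virsh -c qemu+tcp://kvm01/system list --all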

The next step is to add a new user called oneadmin (with UID 1001, the same as on the rest of the machines). I prefer to set a password for this user because later, you have to copy frontend01's public key to this machine.

root@kvm01:~# mkdir -p /srv/cloud/one/var

root@kvm01:~# groupadd --gid 1001 cloud

root@kvm01:~# useradd --uid 1001 -s /bin/bash -d /srv/cloud/one -g cloud -G kvm,libvirtd oneadmin

root@kvm01:~# passwd oneadmin

root@kvm01:~# chown -R oneadmin:cloud /srv/cloud

root@kvm01:~# id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud),112(kvm),113(libvirtd)

root@kvm01:~# cat /etc/fstab
...
storage01:/srv/cloud/one/var /srv/cloud/one/var      nfs4    _netdev,auto    0       0

root@kvm01:~# mount -a

In addition, the node's clock must be kept synchronized with all the machines of the cluster.

root@kvm01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org

And finally, we have to copy the public key from frontend01, so that this computer can be remotely managed by OpenNebula.

oneadmin@frontend01:~$ ssh-copy-id -i .ssh/id_rsa.pub oneadmin@kvm01

In order to check the installation, we can execute the following command from frontend01.

oneadmin@frontend01:~$ lib/remotes/im/run_probes kvm kvm01
ARCH=x86_64 MODELNAME="Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz"

Now we are ready to use the new KVM node in our cloud computing architecture.

oneadmin@frontend01:~$ onehost create kvm01 im_kvm vmm_kvm tm_nfs

oneadmin@frontend01:~$ onehost list
ID NAME              CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
 0 kvm01             default    0    100    100    100      2G    1.9G   on


Jul 17, 2011

Adding a VMware ESXi hypervisor to OpenNebula (III)

This is the last article about Adding a VMware ESXi hypervisor to OpenNebula. In this post, we are going to build a virtual machine on the esxi01 node.

To begin with, I am going to download a virtual image from the Virtual Appliances Marketplace, specifically an Ubuntu Server 8.04 LTS distribution (I will place it in the /tmp directory).
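
The exact URL and file name depend on the appliance you pick, so treat the following as a sketch with a placeholder URL; the important point is that the unpacked directory contains the .vmx, .vmdk and .nvram files listed below:

oneadmin@frontend01:/tmp$ wget http://example.com/ubuntu-server-8.04.1-i386.tar.gz
oneadmin@frontend01:/tmp$ tar xvzf ubuntu-server-8.04.1-i386.tar.gz ; cd ubuntu-server-8.04.1-i386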

oneadmin@frontend01:/tmp/ubuntu-server-8.04.1-i386$ ls -lh
total 529M
-rw-rw-r-- 1 oneadmin cloud  269 2008-07-05 17:59 README-vmware-image.txt
-rw------- 1 oneadmin cloud 8.5K 2008-07-05 17:59 ubuntu-server-8.04.1-i386.nvram
-rw------- 1 oneadmin cloud 144M 2008-07-05 17:59 ubuntu-server-8.04.1-i386-s001.vmdk
-rw------- 1 oneadmin cloud 207M 2008-07-05 17:59 ubuntu-server-8.04.1-i386-s002.vmdk
-rw------- 1 oneadmin cloud 177M 2008-07-05 17:59 ubuntu-server-8.04.1-i386-s003.vmdk
-rw------- 1 oneadmin cloud 1.8M 2008-07-05 17:59 ubuntu-server-8.04.1-i386-s004.vmdk
-rw------- 1 oneadmin cloud  64K 2008-07-05 17:59 ubuntu-server-8.04.1-i386-s005.vmdk
-rw------- 1 oneadmin cloud  592 2008-07-05 17:59 ubuntu-server-8.04.1-i386.vmdk
-rw------- 1 oneadmin cloud    0 2008-07-05 17:59 ubuntu-server-8.04.1-i386.vmsd
-rwxr-xr-x 1 oneadmin cloud 1.1K 2008-07-05 17:59 ubuntu-server-8.04.1-i386.vmx

Then I am going to register the image in OpenNebula. For this purpose, it is necessary to define an image template.

oneadmin@frontend01:/tmp/ubuntu-server-8.04.1-i386$ cat ubuntu-server-8.04.img
NAME        = "Ubuntu Server 8.04"
DESCRIPTION = "Ubuntu Server 8.04 LTS (32 bits)"

oneadmin@frontend01:/tmp/ubuntu-server-8.04.1-i386$ onevmware register --disk-vmdk ubuntu-server-8.04.1-i386.vmdk --disk-flat ubuntu-server-8.04.1-i386-s001.vmdk,ubuntu-server-8.04.1-i386-s002.vmdk,ubuntu-server-8.04.1-i386-s003.vmdk,ubuntu-server-8.04.1-i386-s004.vmdk,ubuntu-server-8.04.1-i386-s005.vmdk ubuntu-server-8.04.img

What happens now? We have a virtual image ready to be used. This image has been stored in the images directory.

oneadmin@frontend01:~$ oneimage list
ID     USER                 NAME TYPE              REGTIME PUB PER STAT  #VMS
 0 oneadmin   Ubuntu Server 8.04   OS   Jul 02, 2011 10:34  No  No  rdy     0

oneadmin@frontend01:~$ ls -l var/images/0ffb8867916a29e279e5ac2374833faa84fe5193/
total 540740
-rw-rw---- 1 oneadmin cloud       592 2011-07-02 12:34 disk.vmdk
-rw-rw---- 1 oneadmin cloud 150339584 2011-07-02 12:34 ubuntu-server-8.04.1-i386-s001.vmdk
-rw-rw---- 1 oneadmin cloud 216268800 2011-07-02 12:35 ubuntu-server-8.04.1-i386-s002.vmdk
-rw-rw---- 1 oneadmin cloud 185204736 2011-07-02 12:36 ubuntu-server-8.04.1-i386-s003.vmdk
-rw-rw---- 1 oneadmin cloud   1835008 2011-07-02 12:37 ubuntu-server-8.04.1-i386-s004.vmdk
-rw-rw---- 1 oneadmin cloud     65536 2011-07-02 12:37 ubuntu-server-8.04.1-i386-s005.vmdk

What is the next step? Easy: to create a virtual network so that we can connect our future virtual machine to it. In my example, I have created a simple ranged network called ESXi Network. You can review the meaning of the different parameters in the OpenNebula virtual network documentation.

oneadmin@frontend01:~$ mkdir templates ; cd templates

oneadmin@frontend01:~/templates$ cat esxi.net
NAME            = "ESXi Network"
TYPE            = RANGED
PUBLIC          = NO
BRIDGE          = "VM Network"
NETWORK_ADDRESS = 192.168.1.160
NETWORK_SIZE    = 16
NETMASK         = 255.255.255.0
GATEWAY         = 192.168.1.1
DNS             = 194.30.0.1

oneadmin@frontend01:~/templates$ onevnet create esxi.net

oneadmin@frontend01:~/templates$ onevnet list
ID USER     NAME              TYPE BRIDGE P #LEASES
 0 oneadmin ESXi Network    Ranged VM Net N       0

And finally, we just have to set up an instance template in order to declare the features of our virtual machine. Note that a virtual machine is also known as an instance.

oneadmin@frontend01:~/templates$ cat ubuntu-server01.vm
NAME   = "UbuntuServer-01"
CPU    = 1
MEMORY = 512

DISK   = [ IMAGE  = "Ubuntu Server 8.04",
           TARGET = hda ]

NIC    = [ NETWORK = "ESXi Network" ]

oneadmin@frontend01:~/templates$ onevm create ubuntu-server01.vm

oneadmin@frontend01:~/templates$ onevm list
ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
 0 oneadmin UbuntuSe pend   0      0K                 00 00:00:07

We can see in the output of the preceding onevm list command that the state of the instance is pend (pending), that is to say, it is waiting to be deployed on a hypervisor, in my case esxi01. So let's go.

oneadmin@frontend01:~/templates$ onevm deploy 0 0

oneadmin@frontend01:~/templates$ onevm list
ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
 0 oneadmin UbuntuSe prol   0      0K          esxi01 00 00:00:36

oneadmin@frontend01:~/templates$ onevm list
ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
 0 oneadmin UbuntuSe boot   0      0K          esxi01 00 00:00:50

oneadmin@frontend01:~/templates$ onevm list
ID     USER     NAME STAT CPU     MEM        HOSTNAME        TIME
 0 oneadmin UbuntuSe runn   0      0K          esxi01 00 00:01:07

When we deploy the virtual machine on the node, its first state is prol (prolog), then it reaches the boot state (booting), and lastly runn (running), once the instance has started up.
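
If you need more detail than this summary, each instance can also be inspected individually with onevm show (output omitted here):

oneadmin@frontend01:~/templates$ onevm show 0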


Jul 9, 2011

Adding a VMware ESXi hypervisor to OpenNebula (II)

Let's go ahead with the development of the cloud computing infrastructure based on OpenNebula by adding a VMware ESXi hypervisor. In the previous article, we saw how to configure a VMware vSphere node and now we are going to set up the required part on frontend01.

In order to manage the VMware node from OpenNebula, it is necessary to install libvirt on frontend01, and this software must be compiled with ESX support. For that reason, you cannot use the corresponding package located in the Ubuntu repositories. I am going to use version 0.9.2 of libvirt.

root@frontend01:~# aptitude install libgnutls-dev libdevmapper-dev libcurl4-gnutls-dev python-dev libnl-dev libapparmor-dev

root@frontend01:/tmp# wget http://libvirt.org/sources/libvirt-0.9.2.tar.gz

root@frontend01:/tmp# tar xvzf libvirt-0.9.2.tar.gz ; cd libvirt-0.9.2

root@frontend01:/tmp/libvirt-0.9.2# ./configure --with-esx --with-apparmor --sysconfdir=/etc --libdir=/usr/lib --sbindir=/usr/sbin --datarootdir=/usr/share --localstatedir=/var --libexecdir=/usr/lib/libvirt

root@frontend01:/tmp/libvirt-0.9.2# make ; make install

I have also compiled the source code with AppArmor support, so I have had to copy the required files into place.

root@frontend01:/tmp/libvirt-0.9.2# mkdir -p /etc/apparmor.d/libvirt

root@frontend01:/tmp/libvirt-0.9.2# cp -a examples/apparmor/usr.* /etc/apparmor.d/
root@frontend01:/tmp/libvirt-0.9.2# cp -a examples/apparmor/TEMPLATE /etc/apparmor.d/libvirt/
root@frontend01:/tmp/libvirt-0.9.2# cp -a examples/apparmor/libvirt-qemu /etc/apparmor.d/abstractions/

root@frontend01:/tmp/libvirt-0.9.2# cat /etc/apparmor.d/usr.sbin.libvirtd
...
owner /srv/cloud/one/var/** rw,
}

root@frontend01:/tmp/libvirt-0.9.2# /etc/init.d/apparmor restart
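
Before moving on, it may be worth checking that the freshly built libvirt really includes the ESX driver, for instance by opening a connection against the hypervisor (the same kind of URI appears elsewhere in this series; an empty domain list means the connection works):

oneadmin@frontend01:~$ virsh -c esx://esxi01/?no_verify=1 list --all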

The next step is to download and install the VMware Drivers Addon (version 2.2.0 in my case). This wrapper enables the communication between OpenNebula and VMware ESXi through libvirt. This operation must be performed as the oneadmin user.

When you run the install.sh script, if everything goes well, you will get a message with a configuration snippet that you have to add to the oned.conf file.

oneadmin@frontend01:/tmp$ wget http://dev.opennebula.org/attachments/download/350/vmware-2.2.0.tar.gz

oneadmin@frontend01:/tmp$ tar xvzf vmware-2.2.0.tar.gz ; cd vmware-2.2.0

oneadmin@frontend01:/tmp/vmware-2.2.0$ ./install.sh
VMWare Drivers Addon successfully installed

# After the installation, please add the following to your oned.conf file
# and restart OpenNebula to activate the VMware Drivers Addon

#-------------------------------------------------------------------------------
#  VMware Driver Addon Virtualization Driver Manager Configuration
#-------------------------------------------------------------------------------
VM_MAD = [
name       = "vmm_vmware",
executable = "one_vmm_sh",
arguments  = "vmware",
default    = "vmm_sh/vmm_sh_vmware.conf",
type       = "vmware" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
#  VMware Driver Addon Information Driver Manager Configuration
#-------------------------------------------------------------------------------
IM_MAD = [
name       = "im_vmware",
executable = "one_im_sh",
arguments  = "vmware" ]
#-------------------------------------------------------------------------------

#-------------------------------------------------------------------------------
# VMware Driver Addon Transfer Manager Driver Configuration
#-------------------------------------------------------------------------------
TM_MAD = [
name       = "tm_vmware",
executable = "one_tm",
arguments  = "tm_vmware/tm_vmware.conf" ]
#-------------------------------------------------------------------------------

Because there is a mistake in the install.sh script, you must manually execute the following two commands.

oneadmin@frontend01:/tmp/vmware-2.2.0$ mkdir -p $ONE_LOCATION/var/remotes/im/vmware.d && cp -r im/remotes/* $ONE_LOCATION/var/remotes/im/vmware.d

oneadmin@frontend01:/tmp/vmware-2.2.0$ mkdir -p $ONE_LOCATION/var/remotes/vmm/vmware && cp -r vmm/remotes/* $ONE_LOCATION/var/remotes/vmm/vmware

This problem prevents the VMware node from starting up, and the log file will show the following error.

oneadmin@frontend01:~$ cat var/oned.log
...
[ONE][E]: syntax error, unexpected $end, expecting VARIABLE at line 2, columns 1:2

Before restarting OpenNebula, you must set the username and password used to access esxi01 and add a line to the sudoers file, so that OpenNebula can properly set some permissions.

oneadmin@frontend01:~$ cat etc/vmwarerc
...
USERNAME      = "oneadmin"
PASSWORD      = "xxxxxx"

root@frontend01:~# cat /etc/sudoers
...
oneadmin ALL=NOPASSWD:/srv/cloud/one/share/hooks/fix_owner_perms.sh ""

root@frontend01:~# /etc/init.d/opennebula restart

In order to check that the installation is correct, run the following command. This way, you will be able to obtain the physical resources of the managed node.

oneadmin@frontend01:~$ lib/remotes/im/run_probes vmware esxi01
/srv/cloud/one/lib/ruby/vmwarelib.rb:26: warning: already initialized constant ONE_LOCATION
/srv/cloud/one/lib/ruby/vmwarelib.rb:32: warning: already initialized constant RUBY_LIB_LOCATION
HYPERVISOR=vmware TOTALCPU=100 CPUSPEED=3001 TOTALMEMORY=2096460 FREEMEMORY=1677168

Now, if you want to add the configured VMware vSphere node to OpenNebula, you can execute the following command.

oneadmin@frontend01:~$ onehost create esxi01 im_vmware vmm_vmware tm_vmware

oneadmin@frontend01:~$ onehost list
ID NAME              CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
0  esxi01            default    0    100    100    100      2G    1.6G   on


Jul 2, 2011

Adding a VMware ESXi hypervisor to OpenNebula (I)

This is the first article about how to add a VMware vSphere hypervisor to OpenNebula. In the two previous entries (I and II), I carried out the installation of OpenNebula on Ubuntu.

First of all, it is very important to underline that OpenNebula can only work with the VMware vSphere hypervisor if it has an evaluation license (60 days) or a Standard, Advanced, Enterprise or Enterprise Plus license. The free license lacks many remote commands and only supports the read-only API.




For instance, if you try to remotely define a virtual machine, you will get an error like the following.

oneadmin@frontend01:~$ virsh -c esx://esxi01/?no_verify=1
Enter username for esxi01 [root]: oneadmin
Enter oneadmin's password for esxi01:
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
     'quit' to quit

virsh #  define /srv/cloud/one/var/0/deployment.0
error: Failed to define domain from /srv/cloud/one/var/0/deployment.0
error: internal error HTTP response code 500 for call to 'RegisterVM_Task'. Fault: ServerFaultCode - fault.RestrictedVersion.summary

After this explanation, let's get started by configuring the NTP daemon so that this hypervisor is synchronized with the rest of the nodes which make up the cloud computing infrastructure. Remember that the VMware vSphere ESXi version which I am going to use for my tests is 4.1.

In order to set up an NTP server (in my case, a public NTP server such as pool.ntp.org), we must go to Configuration, Time Configuration and click the Properties link.




The next step is to add a new group named cloud, with GID 1001, by right-clicking on Local Users & Groups, selecting the Groups tab and choosing the Add command.




Then you have to create a new user named oneadmin by performing the same operation, but this time on the Users tab. This user has to belong to the cloud group and also have UID 1001.




In addition, this new user must have full privileges on the node. Therefore, we have to right-click on our hypervisor, choose the Add Permission option and fill in the fields that you can see in the following figure.




And finally, we have to mount the shared storage exported by the storage01 machine. In order to carry out this task, you must go to Configuration, Storage and click the Add Storage link. A wizard will pop up which allows us to mount the remote storage.




In the first screen, you must select the Network File System storage type and then fill in the fields shown in the preceding image. Note that we are only importing the /srv/cloud/one/var directory, because the virtual machines will be stored in this folder. Also note that I have named the datastore images.
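
If you prefer the command line, the same NFS datastore can probably be added from the ESXi Tech Support Mode console (or with the equivalent vicfg-nas command of the vSphere CLI). This is only a sketch, not something I have used in this series:

~ # esxcfg-nas -a -o storage01 -s /srv/cloud/one/var images
~ # esxcfg-nas -l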


Jun 19, 2011

OpenNebula installation on Ubuntu (II)

Once we have set up the shared storage, the database and the required users, we are going to continue the article about OpenNebula installation on Ubuntu by installing the dependencies of OpenNebula.

root@frontend01:~# aptitude install build-essential ruby libxmlrpc-c3-dev scons libopenssl-ruby libssl-dev flex bison ruby-dev rake rubygems libxml-parser-ruby libxslt1-dev libnokogiri-ruby libsqlite3-dev

Now we are ready to download the source code of OpenNebula, compile it (with the MySQL option activated) and install it into the /srv/cloud/one directory.

root@frontend01:~# su - oneadmin ; cd /tmp

oneadmin@frontend01:/tmp$ wget http://dev.opennebula.org/attachments/download/395/opennebula-2.2.1.tar.gz

oneadmin@frontend01:/tmp$ tar xvzf opennebula-2.2.1.tar.gz ; cd opennebula-2.2.1

oneadmin@frontend01:/tmp/opennebula-2.2.1$ cat src/vmm/LibVirtDriverVMware.cc
...
        if ( emulator != "vmware" )
        {
                file << "\t\t\t<driver name='";

                if ( !driver.empty() )
                {
                        file << driver << "'/>" << endl;
                }
                else
                {
                        file << default_driver << "'/>" << endl;
                }
        }
...

oneadmin@frontend01:/tmp/opennebula-2.2.1$ scons mysql=yes parsers=yes

oneadmin@frontend01:/tmp/opennebula-2.2.1$ ./install.sh -d /srv/cloud/one

Before compiling OpenNebula, it is necessary to fix a serious bug in a file from the source code (LibVirtDriverVMware.cc), as shown above. This bug prevents virtual machines from being deployed on VMware ESXi, since when OpenNebula builds the deployment templates, it includes the raw format as the default DRIVER.

oneadmin@frontend01:~/templates$ virsh -c esx://esxi01/?no_verify=1
Enter username for esxi01 [root]: oneadmin
Enter oneadmin's password for esxi01:
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # define /srv/cloud/one/var/0/deployment.0
error: Failed to define domain from /srv/cloud/one/var/0/deployment.0
error: internal error Unknown driver name 'raw'

The last step before starting OpenNebula is to set up the MySQL connection parameters within the oned.conf file (remember that in this case, you have to comment out the line related to SQLite).

oneadmin@frontend01:/tmp/opennebula-2.2.1$ cat /srv/cloud/one/etc/oned.conf
...
# DB = [ backend = "sqlite" ]

DB = [ backend = "mysql",
   server  = "localhost",
   port    = 0,
   user    = "oneadmin",
   passwd  = "xxxxxx",
   db_name = "opennebula" ]
...

By means of the one script (located in $ONE_LOCATION/bin), we can start and stop the OpenNebula daemon (oned) and the scheduler (mm_sched). Note that the log files are located in $ONE_LOCATION/var.

oneadmin@frontend01:~$ one start

oneadmin@frontend01:~$ one stop

And finally, if we want the operating system to start OpenNebula automatically during boot, we must create an LSB init script for this purpose.

root@frontend01:~# cat /etc/init.d/opennebula
#!/bin/bash

### BEGIN INIT INFO
# Provides:          OpenNebula
# Required-Start:    $remote_fs $syslog $network
# Required-Stop:     $remote_fs $syslog $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$ONE_LOCATION/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH

RETVAL=0

start()
{
     su oneadmin -s /bin/bash -c '$ONE_LOCATION/bin/one start' ; RETVAL=$?
     return $RETVAL
}

stop()
{
     su oneadmin -s /bin/bash -c '$ONE_LOCATION/bin/one stop' ; RETVAL=$?
}

case "$1" in
     start)
             sleep 5
             start
             ;;
     stop)
             stop
             ;;
     restart)
             stop
             start
             ;;
     *)
             echo $"Usage: service opennebula {start|stop|restart}"
esac

exit $RETVAL


root@frontend01:~# chmod +x /etc/init.d/opennebula

root@frontend01:~# update-rc.d opennebula start 90 2 3 4 5 . stop 10 0 1 6 .


Jun 14, 2011

OpenNebula installation on Ubuntu (I)

After presenting the OpenNebula cloud computing architecture, let's start by indicating the versions which will be used.

All the Linux machines (frontend01, storage01 and kvm01) have Ubuntu Server 11.04 (64-bit) installed on them, and esxi01 runs a VMware ESXi 4.1 hypervisor. The OpenNebula version employed for the tests is 2.2.1, and it will be compiled and installed directly from its source code.

First of all, we are going to set up the shared storage on storage01 by means of the NFS protocol. We also need an OpenNebula administrator user (oneadmin) which must be added on all machines (for this purpose, its UID and GID have to be the same on all of them, 1001 in my case). The group of this user will be cloud.

root@storage01:~# mkdir -p /srv/cloud/one

root@storage01:~# groupadd --gid 1001 cloud
root@storage01:~# useradd --uid 1001 -g cloud -s /bin/bash -d /srv/cloud/one oneadmin
root@storage01:~# chown -R oneadmin:cloud /srv/cloud

root@storage01:~# aptitude install nfs-kernel-server

root@storage01:~# cat /etc/exports
/srv/cloud      192.168.1.0/255.255.255.0(rw,anonuid=1001,anongid=1001)

root@storage01:~# /etc/init.d/nfs-kernel-server restart

Now I can export the cloud directory to any machine belonging to the subnet (192.168.1.0/24).
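
A quick way to confirm that the export is visible is to query the NFS server with showmount (it comes with the nfs-common package pulled in by nfs-kernel-server); the output below is roughly what should appear:

root@storage01:~# showmount -e localhost
Export list for localhost:
/srv/cloud 192.168.1.0/255.255.255.0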

Afterwards, we must mount that share on frontend01.

root@frontend01:~# aptitude install nfs-common ; modprobe nfs

root@frontend01:~# mkdir -p /srv/cloud

root@frontend01:~# cat /etc/fstab
...
storage01:/srv/cloud /srv/cloud      nfs4    _netdev,auto    0       0

root@frontend01:~# mount -a

Next step is to create an OpenNebula administrator user on the system.

root@frontend01:~# groupadd cloud

root@frontend01:~# useradd -s /bin/bash -d /srv/cloud/one -g cloud oneadmin

root@frontend01:~# id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)

Any OpenNebula account that we add to the system must have the following environment variables set.

root@frontend01:~# su - oneadmin

oneadmin@frontend01:~$ cat .bashrc
# ~/.bashrc

if [ -f /etc/bash.bashrc ]; then
  . /etc/bash.bashrc
fi

export ONE_AUTH=$HOME/.one/one_auth
export ONE_LOCATION=/srv/cloud/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH

oneadmin@frontend01:~$ cat .profile
# ~/.profile

if [ -n "$BASH_VERSION" ]; then
  if [ -f "$HOME/.bashrc" ]; then
          . "$HOME/.bashrc"
  fi
fi

OpenNebula is started using the credentials stored in the ONE_AUTH file.

oneadmin@frontend01:~$ mkdir .one

oneadmin@frontend01:~$ cat .one/one_auth
oneadmin:xxxxxx
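
Since this file stores the password in plain text, it is sensible to restrict its permissions to the oneadmin user:

oneadmin@frontend01:~$ chmod 600 .one/one_auth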

We need to generate SSH keys for the oneadmin user, in order to be able to connect to the rest of the servers without typing a password. By means of the .hushlogin file, we avoid the SSH welcome banner, and through the StrictHostKeyChecking directive, the SSH client does not ask about adding hosts to the known_hosts file.

oneadmin@frontend01:~$ ssh-keygen

oneadmin@frontend01:~$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys

oneadmin@frontend01:~$ cat .ssh/config
Host *
    StrictHostKeyChecking no

oneadmin@frontend01:~$ touch .hushlogin

And finally, we are going to install MySQL (and the dependencies necessary for the OpenNebula compilation) and set up a database called opennebula. It will be managed by OpenNebula so as to store its data.

root@frontend01:~# aptitude install mysql-server libmysql++-dev libxml2-dev

root@frontend01:~# mysql_secure_installation

root@frontend01:~# mysql -u root -p
...
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'xxxxxx';
Query OK, 0 rows affected (0.00 sec)
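
To double-check that the new MySQL account works, you can log in as oneadmin and list its grants:

root@frontend01:~# mysql -u oneadmin -p -e "SHOW GRANTS;"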

Note that, as in any cluster, all the nodes have to be synchronized.

root@frontend01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org

root@storage01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org


Jun 7, 2011

Cloud computing with OpenNebula

OpenNebula is an open source IaaS (Infrastructure as a Service) solution which allows you to build private, public and hybrid clouds. It has been designed to be able to integrate with any kind of network or storage system, and it supports the main types of hypervisors: KVM, VMware vSphere (ESXi) and Xen.

The next figure shows a typical schema of an OpenNebula infrastructure, which I am going to develop in future articles.




The server named frontend01 runs OpenNebula and the cluster services. The principal components of OpenNebula are the daemon (which manages the life cycle of virtual machines, networks, storage and hypervisors), the scheduler (which manages the deployment of virtual machines) and the drivers (which manage the hypervisor interfaces - VMM -, monitoring - IM - and virtual machine transfers - TM -).

OpenNebula also needs a database to store its information. We have two options: SQLite and MySQL. In my architecture, I will use MySQL, and it will be installed on frontend01.

With respect to storage, OpenNebula works with three possibilities: shared - NFS (there is a shared data area accessible by the OpenNebula server and the compute nodes), non-shared - SSH (there is no shared area, so live migrations cannot be used) and LVM (there must be a block device available on all nodes).
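
For reference, the transfer driver is selected through a TM_MAD block in oned.conf. The following is a minimal sketch of what the NFS variant looks like, matching as far as I recall the default configuration shipped with OpenNebula 2.2 (the VMware equivalent appears elsewhere in these articles):

TM_MAD = [
    name       = "tm_nfs",
    executable = "one_tm",
    arguments  = "tm_nfs/tm_nfs.conf" ]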

In the articles that I will write about it, I will configure an NFS share on the storage01 server. It is also common to find architectures where the storage is set up inside the front-end. Note as well that the storage is used to keep the virtual images and machines.

As in any classical IaaS solution, we require the computing nodes (also known as worker nodes), which supply the raw computing power and where the virtual machines are run. In this example, I will employ two hypervisors: KVM (kvm01) and VMware vSphere (esxi01). OpenNebula must be able to start, control and monitor the virtual machines. The communication between OpenNebula and the nodes will be carried out through the drivers previously mentioned.

We can appreciate that OpenNebula is a fully scalable system, since we can add more compute nodes or storage servers based on our needs.

Other features are portability and interoperability, since we can use most of the existing hardware to set up the clusters.

And finally, we are dealing with an open and standard architecture model which, in addition, can operate with other public clouds such as Amazon.