To begin with, we are going to set up a network bridge on kvm01. For this purpose, we must put the NIC into manual mode and attach it to the bridge (br0). Remember that this new interface must also have an IP address belonging to the same subnetwork.
root@kvm01:~# cat /etc/network/interfaces
...
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.1.12
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    gateway 192.168.1.1
    dns-nameservers 188.8.131.52
    dns-search opennebula.local
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp off

root@kvm01:~# /etc/init.d/networking restart
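Before moving on, it does not hurt to verify that the bridge has come up and that eth0 is attached to it. A quick sanity check, assuming the bridge-utils package is installed (the output below is only indicative), could be:

root@kvm01:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.xxxxxxxxxxxx       no              eth0
root@kvm01:~# ip addr show br0
...
    inet 192.168.1.12/24 brd 192.168.1.255 scope global br0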
The reason for creating the bridge is clear: it is what makes the virtual machines running on this node reachable from the network. Without it, we would never be able to connect to them.
Then we have to install the packages needed to virtualize machines with KVM. The ruby package will be used by OpenNebula to manage the node, and nfs-common to mount the shared area exported by storage01. As you can see below, the libvirtd daemon must be put into listening mode without authentication.
root@kvm01:~# aptitude install kvm libvirt-bin ruby nfs-common

root@kvm01:~# cat /etc/libvirt/libvirtd.conf
...
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

root@kvm01:~# cat /etc/libvirt/qemu.conf
...
dynamic_ownership = 0

root@kvm01:~# cat /etc/init/libvirt-bin.conf
...
env libvirtd_opts="-d -l"

root@kvm01:~# restart libvirt-bin
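To make sure libvirtd is really listening on TCP without authentication after the restart, a quick connection test with virsh can be made; this is only a sanity check, not something OpenNebula requires you to run:

root@kvm01:~# virsh -c qemu+tcp://localhost/system list --all
 Id Name                 State
----------------------------------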
In addition, it is necessary to uncomment the line "dynamic_ownership = 1" in /etc/libvirt/qemu.conf (which tells libvirt to dynamically change file ownership to match the configured user and group) and set it to 0. Otherwise, you would get an error like the following.
oneadmin@frontend01:~/templates$ tail -f ../var/oned.log
...
Sat Aug 13 20:32:11 2011 [TM][D]: Message received: TRANSFER SUCCESS 1 -
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 Command execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/deploy" ]; then /var/tmp/one/vmm/kvm/deploy /srv/cloud/one/var//1/images/deployment.0; else exit 42; fi'
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 STDERR follows.
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 error: Failed to create domain from /srv/cloud/one/var//1/images/deployment.0
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 error: unable to set user and group to '104:112' on '/srv/cloud/one/var//1/images/disk.0': Invalid argument
Sat Aug 13 20:32:12 2011 [VMM][D]: Message received: LOG - 1 ExitCode: 255
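If you run into this error, you can confirm the value actually in effect on the node with a quick grep (and remember to restart libvirt-bin after changing it):

root@kvm01:~# grep ^dynamic_ownership /etc/libvirt/qemu.conf
dynamic_ownership = 0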
The next step is to add a new user called oneadmin (with UID 1001, the same as on the rest of the machines). I prefer to set a password for this user because later you have to copy frontend01's public key to this machine.
root@kvm01:~# mkdir -p /srv/cloud/one/var
root@kvm01:~# groupadd --gid 1001 cloud
root@kvm01:~# useradd --uid 1001 -s /bin/bash -d /srv/cloud/one -g cloud -G kvm,libvirtd oneadmin
root@kvm01:~# passwd oneadmin
root@kvm01:~# chown -R oneadmin:cloud /srv/cloud

root@kvm01:~# id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud),112(kvm),113(libvirtd)

root@kvm01:~# cat /etc/fstab
...
storage01:/srv/cloud/one/var /srv/cloud/one/var nfs4 _netdev,auto 0 0

root@kvm01:~# mount -a
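It is worth checking that the NFS export is really mounted and that oneadmin can write to it, since OpenNebula will deploy the VM images there. A simple test (the test file name is arbitrary) could be:

root@kvm01:~# mount | grep /srv/cloud/one/var
storage01:/srv/cloud/one/var on /srv/cloud/one/var type nfs4 (...)
root@kvm01:~# su - oneadmin -c "touch /srv/cloud/one/var/test && rm /srv/cloud/one/var/test"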
In addition, the node's clock must be kept synchronized with the rest of the machines in the cluster.
root@kvm01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org
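Since the cron entry only fires once an hour, the clock can also be adjusted by hand the first time; ntpdate -q merely queries the offset without changing anything, so it is a harmless way to see how far off the node is:

root@kvm01:~# ntpdate -q pool.ntp.org
...
root@kvm01:~# ntpdate pool.ntp.org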
And finally, we have to copy the public key from frontend01, so that this computer can be managed remotely by OpenNebula over passwordless SSH.
oneadmin@frontend01:~$ ssh-copy-id -i .ssh/id_rsa.pub oneadmin@kvm01
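A quick way to confirm the key was copied correctly is to log in from frontend01 and check that no password is requested:

oneadmin@frontend01:~$ ssh oneadmin@kvm01 hostname
kvm01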
To check the installation, we can run the following command from frontend01.
oneadmin@frontend01:~$ lib/remotes/im/run_probes kvm kvm01
ARCH=x86_64
MODELNAME="Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz"
Now we are ready to use the new KVM node in our cloud computing architecture.
oneadmin@frontend01:~$ onehost create kvm01 im_kvm vmm_kvm tm_nfs

oneadmin@frontend01:~$ onehost list
  ID NAME               CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 kvm01              default    0    100    100    100      2G    1.9G   on
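If you want more detail than the summary table, onehost show prints the full monitoring information gathered from the node (the exact fields depend on the OpenNebula version in use):

oneadmin@frontend01:~$ onehost show 0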