After presenting the OpenNebula cloud computing architecture, let's begin by noting the software versions that will be used.
All the Linux machines (frontend01, storage01 and kvm01) run Ubuntu Server 11.04 (64-bit), and esxi01 runs the VMware ESXi 4.1 hypervisor. The OpenNebula version employed for the tests is 2.2.1, and it will be compiled and installed directly from its source code.
First of all, we are going to set up the shared storage on storage01 by means of the NFS protocol. We also need an OpenNebula administrator user (oneadmin), which must be added on all machines; its UID and GID have to be the same on all of them (1001 in my case). The primary group of this user will be cloud.
root@storage01:~# mkdir -p /srv/cloud/one
root@storage01:~# groupadd --gid 1001 cloud
root@storage01:~# useradd --uid 1001 -g cloud -s /bin/bash -d /srv/cloud/one oneadmin
root@storage01:~# chown -R oneadmin:cloud /srv/cloud
root@storage01:~# aptitude install nfs-kernel-server
root@storage01:~# cat /etc/exports
/srv/cloud 192.168.1.0/255.255.255.0(rw,anonuid=1001,anongid=1001)
root@storage01:~# /etc/init.d/nfs-kernel-server restart
Now the cloud directory is exported to any machine belonging to the subnet (192.168.1.0/24).
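Before restarting the NFS server, it can be worth double-checking the exports entry. The snippet below is only an illustrative sketch (not an NFS tool): it hard-codes the line from /etc/exports above and verifies that anonymous users are mapped to oneadmin's UID/GID, so files written over NFS end up owned by oneadmin.

```shell
# Sanity-check sketch: confirm the exports entry maps anonymous users
# to oneadmin's UID and GID (1001 in this setup).
exports_line='/srv/cloud 192.168.1.0/255.255.255.0(rw,anonuid=1001,anongid=1001)'
case "$exports_line" in
  *anonuid=1001*anongid=1001*) anon_ok=yes ;;
  *) anon_ok=no ;;
esac
echo "anon mapping present: $anon_ok"
```

On the real server, `exportfs -v` after the restart shows the active exports and their options.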
Afterwards, we must mount that share on frontend01.
root@frontend01:~# aptitude install nfs-common ; modprobe nfs
root@frontend01:~# mkdir -p /srv/cloud
root@frontend01:~# cat /etc/fstab
...
storage01:/srv/cloud /srv/cloud nfs4 _netdev,auto 0 0
root@frontend01:~# mount -a
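A typo in /etc/fstab is easy to miss, and a malformed line will make mount -a fail. As a purely illustrative check (not part of any standard tool), the sketch below verifies that the entry added above has the six whitespace-separated fields an fstab line requires.

```shell
# Illustrative check: an /etc/fstab entry needs exactly six fields
# (device, mount point, type, options, dump, pass).
fstab_line='storage01:/srv/cloud /srv/cloud nfs4 _netdev,auto 0 0'
set -- $fstab_line
nfields=$#
echo "fstab entry has $nfields fields"
```

On the real frontend, `df -h /srv/cloud` after `mount -a` confirms that the NFS share is actually mounted.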
The next step is to create the OpenNebula administrator user on this system as well.
root@frontend01:~# groupadd cloud
root@frontend01:~# useradd --uid 1001 -s /bin/bash -d /srv/cloud/one -g cloud oneadmin
root@frontend01:~# id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)
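The id output must be identical on every node, otherwise file ownership over NFS will not line up. The sketch below illustrates the comparison with the outputs pasted in by hand; on a real cluster you would collect them over SSH (e.g. `ssh storage01 id oneadmin`).

```shell
# Sketch: compare the 'id' output for oneadmin gathered from two nodes.
# The values below are hard-coded for illustration.
frontend_id='uid=1001(oneadmin) gid=1001(cloud)'
storage_id='uid=1001(oneadmin) gid=1001(cloud)'
if [ "$frontend_id" = "$storage_id" ]; then
  echo "UID/GID consistent across nodes"
else
  echo "MISMATCH: fix the groupadd/useradd IDs"
fi
```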
Any OpenNebula account that we add to the system must have the following environment variables set.
root@frontend01:~# su - oneadmin
oneadmin@frontend01:~$ cat .bashrc
# ~/.bashrc
if [ -f /etc/bash.bashrc ]; then
. /etc/bash.bashrc
fi
export ONE_AUTH=$HOME/.one/one_auth
export ONE_LOCATION=/srv/cloud/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
oneadmin@frontend01:~$ cat .profile
# ~/.profile
if [ -n "$BASH_VERSION" ]; then
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
fi
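A quick way to confirm the environment took effect after logging in again is to check that $ONE_LOCATION/bin ended up in the PATH; this sketch mirrors the exports from the .bashrc above.

```shell
# Sketch: reproduce the relevant exports from ~/.bashrc and verify that
# the OpenNebula bin directory is on the PATH.
ONE_LOCATION=/srv/cloud/one
PATH=$ONE_LOCATION/bin:$PATH
export ONE_LOCATION PATH
case ":$PATH:" in
  *":$ONE_LOCATION/bin:"*) path_ok=yes ;;
  *) path_ok=no ;;
esac
echo "ONE_LOCATION/bin in PATH: $path_ok"
```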
When OpenNebula is started, it authenticates using the credentials stored in the file pointed to by ONE_AUTH.
oneadmin@frontend01:~$ mkdir .one
oneadmin@frontend01:~$ cat .one/one_auth
oneadmin:xxxxxx
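Since one_auth stores the password in clear text, it is worth restricting the file to its owner. The sketch below uses a temporary directory to stand in for ~/.one; on the real frontend you would simply `chmod 600 ~/.one/one_auth`.

```shell
# Sketch: create a one_auth file (placeholder password, as above) and
# restrict it to the owner, then read back the resulting mode.
one_dir=$(mktemp -d)
printf 'oneadmin:xxxxxx\n' > "$one_dir/one_auth"
chmod 600 "$one_dir/one_auth"
perms=$(ls -l "$one_dir/one_auth" | cut -c1-10)
echo "one_auth permissions: $perms"
```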
We need to generate SSH keys for the oneadmin user in order to be able to connect to the rest of the servers without typing a password. The .hushlogin file suppresses the SSH welcome banner, and the StrictHostKeyChecking directive keeps the SSH client from asking before adding hosts to the known_hosts file.
oneadmin@frontend01:~$ ssh-keygen
oneadmin@frontend01:~$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
oneadmin@frontend01:~$ cat .ssh/config
Host *
StrictHostKeyChecking no
oneadmin@frontend01:~$ touch .hushlogin
And finally, we are going to install MySQL (along with the dependencies required to compile OpenNebula) and set up a database called opennebula, which OpenNebula will use to store its data.
root@frontend01:~# aptitude install mysql-server libmysql++-dev libxml2-dev
root@frontend01:~# mysql_secure_installation
root@frontend01:~# mysql -u root -p
...
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'xxxxxx';
Query OK, 0 rows affected (0.00 sec)
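Once the database account exists, OpenNebula has to be told to use it. As a rough sketch (check the oned.conf shipped with your version for the exact syntax), the DB section of $ONE_LOCATION/etc/oned.conf would look something like this for the account granted above:

```
# Sketch of the DB section in $ONE_LOCATION/etc/oned.conf; the password
# is the same placeholder used in the GRANT statement above.
DB = [ backend = "mysql",
       server  = "localhost",
       port    = 0,
       user    = "oneadmin",
       passwd  = "xxxxxx",
       db_name = "opennebula" ]
```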
Also note that, as in any cluster, all the nodes have to be kept time-synchronized.
root@frontend01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org
root@storage01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org