All the Linux machines (frontend01, storage01 and kvm01) run Ubuntu Server 11.04 (64-bit), and esxi01 runs the VMware ESXi 4.1 hypervisor. The OpenNebula version used for the tests is 2.2.1, and it will be compiled and installed directly from source.
First of all, we are going to set up the shared storage on storage01 by means of the NFS protocol. We also need an OpenNebula administrator user (oneadmin), which must be added on all machines; for this purpose, its UID and GID have to be the same on all of them (1001 in my case). The group of this user will be cloud.
root@storage01:~# mkdir -p /srv/cloud/one
root@storage01:~# groupadd --gid 1001 cloud
root@storage01:~# useradd --uid 1001 -g cloud -s /bin/bash -d /srv/cloud/one oneadmin
root@storage01:~# chown -R oneadmin:cloud /srv/cloud
root@storage01:~# aptitude install nfs-kernel-server
root@storage01:~# cat /etc/exports
/srv/cloud 192.168.1.0/255.255.255.0(rw,anonuid=1001,anongid=1001)
root@storage01:~# /etc/init.d/nfs-kernel-server restart
Now the cloud directory can be exported to any machine belonging to the subnet (192.168.1.0/24).
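As a quick sanity check (optional), exportfs can be run on storage01 to confirm that the directory is really being exported with the options defined above:
root@storage01:~# exportfs -v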
Afterwards, we must mount that share on frontend01.
root@frontend01:~# aptitude install nfs-common ; modprobe nfs
root@frontend01:~# mkdir -p /srv/cloud
root@frontend01:~# cat /etc/fstab
...
storage01:/srv/cloud /srv/cloud nfs4 _netdev,auto 0 0
root@frontend01:~# mount -a
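If everything went fine, the share will now show up among the mounted filesystems:
root@frontend01:~# mount | grep /srv/cloud
root@frontend01:~# df -h /srv/cloud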
The next step is to create the OpenNebula administrator user on this system.
root@frontend01:~# groupadd --gid 1001 cloud
root@frontend01:~# useradd --uid 1001 -s /bin/bash -d /srv/cloud/one -g cloud oneadmin
root@frontend01:~# id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)
Any OpenNebula account that we add to the system must have the following environment variables set.
root@frontend01:~# su - oneadmin
oneadmin@frontend01:~$ cat .bashrc
# ~/.bashrc
if [ -f /etc/bash.bashrc ]; then
. /etc/bash.bashrc
fi
export ONE_AUTH=$HOME/.one/one_auth
export ONE_LOCATION=/srv/cloud/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH
oneadmin@frontend01:~$ cat .profile
# ~/.profile
if [ -n "$BASH_VERSION" ]; then
if [ -f "$HOME/.bashrc" ]; then
. "$HOME/.bashrc"
fi
fi
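A quick way to verify that the variables are correctly picked up is to open a new login session and print one of them out:
oneadmin@frontend01:~$ echo $ONE_LOCATION
/srv/cloud/one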
OpenNebula is started using the credentials stored in the ONE_AUTH file, which contains the oneadmin username and password:
oneadmin@frontend01:~$ mkdir .one
oneadmin@frontend01:~$ cat .one/one_auth
oneadmin:xxxxxx
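Since one_auth keeps the password in plain text, it is advisable to restrict the permissions of the directory and the file, so that only oneadmin can read them:
oneadmin@frontend01:~$ chmod 700 .one
oneadmin@frontend01:~$ chmod 600 .one/one_auth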
We need to generate SSH keys for the oneadmin user, in order to be able to connect to the rest of the servers without typing a password. By means of the .hushlogin file we avoid the SSH welcome banner, and through the StrictHostKeyChecking directive we prevent the SSH client from asking about adding hosts to the known_hosts file.
oneadmin@frontend01:~$ ssh-keygen
oneadmin@frontend01:~$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
oneadmin@frontend01:~$ cat .ssh/config
Host *
StrictHostKeyChecking no
oneadmin@frontend01:~$ touch .hushlogin
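We can test the passwordless access right away. Because oneadmin's home directory lives on the NFS share, the authorized_keys file is already visible from storage01, so the following command should print the remote hostname without prompting for a password (assuming sshd is running there):
oneadmin@frontend01:~$ ssh storage01 hostname
storage01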
And finally, we are going to install MySQL (along with the dependencies needed to compile OpenNebula) and set up a database called opennebula. It will be managed by OpenNebula so as to store its data.
root@frontend01:~# aptitude install mysql-server libmysql++-dev libxml2-dev
root@frontend01:~# mysql_secure_installation
root@frontend01:~# mysql -u root -p
...
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'xxxxxx';
Query OK, 0 rows affected (0.00 sec)
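To confirm that the privileges were applied, we can connect to MySQL as oneadmin and list its grants; the output should include the ALL PRIVILEGES line on opennebula.*:
root@frontend01:~# mysql -u oneadmin -p -e "SHOW GRANTS"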
Finally, note that, as in any cluster, all the nodes have to keep their clocks synchronized (mind the full path to ntpdate below, since cron's default PATH does not include /usr/sbin).
root@frontend01:~# crontab -e
...
0 * * * * /usr/sbin/ntpdate pool.ntp.org
root@storage01:~# crontab -e
...
0 * * * * /usr/sbin/ntpdate pool.ntp.org
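If ntpdate is not already present on a node, it can be installed from the repositories; the -q option just queries the time servers without setting the clock, which is handy to test the cron entry beforehand:
root@frontend01:~# aptitude install ntpdate
root@frontend01:~# ntpdate -q pool.ntp.org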
I have had to add the parameters "anonuid=1001,anongid=1001" to the exports file, because when a VMware vSphere hypervisor works against the shared storage, there are some operations which are not carried out by the oneadmin user.
Otherwise the virtual machines will not start up. Below you can see a typical error message shown by VMware in this case:
Reason: Insufficient permission to access file.
Cannot open the disk '/vmfs/volumes/258f88a2-e0eb18e5/14/images/disk.0/disk.vmdk' or one of the snapshot disks it depends on.
The owner of disk.vmdk is oneadmin, that's right, but it is possible that when VMware carries out certain tasks, it uses another user such as root.
In addition, if you take a look at the directory where the virtual machine is deployed, there is a moment when several files are created (one-1.*) whose owner is nobody. This is because these files have been created by a user other than oneadmin.
oneadmin@frontend01:~$ ls -l var/1/images/disk.0/
total 540748
-rw-r----- 1 oneadmin cloud 592 2011-07-03 14:00 disk.vmdk
-rw-r--r-- 1 nobody nogroup 0 2011-07-03 14:03 one-1.vmsd
-rw-r--r-- 1 nobody nogroup 665 2011-07-03 14:03 one-1.vmx
-rw-r--r-- 1 nobody nogroup 260 2011-07-03 14:03 one-1.vmxf
-rw-r----- 1 oneadmin cloud 150339584 2011-07-03 14:00 ubuntu-server-8.04.1-i386-s001.vmdk
-rw-r----- 1 oneadmin cloud 216268800 2011-07-03 14:01 ubuntu-server-8.04.1-i386-s002.vmdk
-rw-r----- 1 oneadmin cloud 185204736 2011-07-03 14:02 ubuntu-server-8.04.1-i386-s003.vmdk
-rw-r----- 1 oneadmin cloud 1835008 2011-07-03 14:00 ubuntu-server-8.04.1-i386-s004.vmdk
-rw-r----- 1 oneadmin cloud 65536 2011-07-03 14:00 ubuntu-server-8.04.1-i386-s005.vmdk
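If the anonuid/anongid parameters are added to /etc/exports after the share is already in use, as in my case, remember to re-export it on storage01 so that the new mapping takes effect; from that point on, files created by squashed users will belong to oneadmin:cloud instead of nobody:nogroup:
root@storage01:~# exportfs -ra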