Jun 28, 2011

Resolving and installing dependencies with gdebi

It may seem incredible, but after many years of using Linux I have just discovered a great application for installing deb packages together with their dependencies: gdebi.

Whenever I have had to install something, I have generally done it either from the corresponding repository or from the source code (working out the dependencies by hand in the latter case).

But what happens if we want to install a deb package directly? Typically we use the dpkg command.

javi@kubuntu:~$ sudo dpkg -i package.deb
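
If the package has unmet dependencies, dpkg reports them and leaves the installation half-configured; the usual manual fix is to let APT pull in the missing packages afterwards:

javi@kubuntu:~$ sudo apt-get -f install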

gdebi spares us that extra step: it resolves the dependencies and then installs the package in a single run.

javi@kubuntu:~$ sudo gdebi package.deb

Finally, note that this tool is available for Debian systems and their derivatives (Ubuntu, Kubuntu, Knoppix, etc.).


Jun 19, 2011

OpenNebula installation on Ubuntu (II)

Once the shared storage, the database and the required users are in place, we continue the article about OpenNebula installation on Ubuntu by installing the OpenNebula dependencies.

root@frontend01:~# aptitude install build-essential ruby libxmlrpc-c3-dev scons libopenssl-ruby libssl-dev flex bison ruby-dev rake rubygems libxml-parser-ruby libxslt1-dev libnokogiri-ruby libsqlite3-dev

Now we are ready to download the OpenNebula source code, compile it (with the MySQL option enabled) and install it into the /srv/cloud/one directory.

root@frontend01:~# su - oneadmin

oneadmin@frontend01:~$ cd /tmp

oneadmin@frontend01:/tmp$ wget http://dev.opennebula.org/attachments/download/395/opennebula-2.2.1.tar.gz

oneadmin@frontend01:/tmp$ tar xvzf opennebula-2.2.1.tar.gz ; cd opennebula-2.2.1

oneadmin@frontend01:/tmp/opennebula-2.2.1$ cat src/vmm/LibVirtDriverVMware.cc
...
        if ( emulator != "vmware" )
        {
                file << "\t\t\t<driver name='";

                if ( !driver.empty() )
                {
                        file << driver << "'/>" << endl;
                }
                else
                {
                        file << default_driver << "'/>" << endl;
                }
        }
...

oneadmin@frontend01:/tmp/opennebula-2.2.1$ scons mysql=yes parsers=yes

oneadmin@frontend01:/tmp/opennebula-2.2.1$ ./install.sh -d /srv/cloud/one

Before compiling OpenNebula, it is necessary to fix a serious bug in one of the source files (LibVirtDriverVMware.cc), as shown in the cat output above. Without this fix it is impossible to deploy virtual machines on VMware ESXi, because OpenNebula includes raw as the default DRIVER when it generates the deployment templates, and ESXi rejects it:

oneadmin@frontend01:~/templates$ virsh -c esx://esxi01/?no_verify=1
Enter username for esxi01 [root]: oneadmin
Enter oneadmin's password for esxi01:
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # define /srv/cloud/one/var/0/deployment.0
error: Failed to define domain from /srv/cloud/one/var/0/deployment.0
error: internal error Unknown driver name 'raw'

The last step before starting OpenNebula is to set up the MySQL connection parameters in the oned.conf file (remember that in this case, you have to comment out the line related to SQLite).

oneadmin@frontend01:/tmp/opennebula-2.2.1$ cat /srv/cloud/one/etc/oned.conf
...
# DB = [ backend = "sqlite" ]

DB = [ backend = "mysql",
   server  = "localhost",
   port    = 0,
   user    = "oneadmin",
   passwd  = "xxxxxx",
   db_name = "opennebula" ]
...

By means of the one script (located in $ONE_LOCATION/bin), we can start and stop the OpenNebula daemon (oned) and the scheduler (mm_sched). Note that the log files are located in $ONE_LOCATION/var.

oneadmin@frontend01:~$ one start

oneadmin@frontend01:~$ one stop
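
For instance, to follow the main daemon log while OpenNebula comes up (assuming the standard oned.log file name under $ONE_LOCATION/var):

oneadmin@frontend01:~$ tail -f $ONE_LOCATION/var/oned.log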

And finally, if we want the operating system to start OpenNebula automatically at boot time, we must create an LSB init script for this purpose.

root@frontend01:~# cat /etc/init.d/opennebula
#!/bin/bash

### BEGIN INIT INFO
# Provides:          OpenNebula
# Required-Start:    $remote_fs $syslog $network
# Required-Stop:     $remote_fs $syslog $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$ONE_LOCATION/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH

RETVAL=0

start()
{
     su oneadmin -s /bin/bash -c '$ONE_LOCATION/bin/one start' ; RETVAL=$?
     return $RETVAL
}

stop()
{
     su oneadmin -s /bin/bash -c '$ONE_LOCATION/bin/one stop' ; RETVAL=$?
}

case "$1" in
     start)
             sleep 5
             start
             ;;
     stop)
             stop
             ;;
     restart)
             stop
             start
             ;;
     *)
             echo $"Usage: service opennebula {start stop restart}"
esac

exit $RETVAL


root@frontend01:~# chmod +x /etc/init.d/opennebula

root@frontend01:~# update-rc.d opennebula start 90 2 3 4 5 . stop 10 0 1 6 .
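
To check that the runlevel links were created (with the priorities above, the start link should appear as S90opennebula in the default runlevels):

root@frontend01:~# ls /etc/rc2.d/ | grep -i opennebula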


Jun 14, 2011

OpenNebula installation on Ubuntu (I)

After presenting the OpenNebula cloud computing architecture, let's start by indicating the versions that will be used.

All the Linux machines (frontend01, storage01 and kvm01) run Ubuntu Server 11.04 (64-bit), and esxi01 runs a VMware ESXi 4.1 hypervisor. The OpenNebula version used for the tests is 2.2.1, and it will be compiled and installed directly from source.

First of all, we are going to set up the shared storage on storage01 by means of the NFS protocol. We also need an OpenNebula administrator user (oneadmin), which must be added on all machines (its UID and GID have to be the same on all of them - 1001 in my case). The group of this user will be cloud.

root@storage01:~# mkdir -p /srv/cloud/one

root@storage01:~# groupadd --gid 1001 cloud
root@storage01:~# useradd --uid 1001 -g cloud -s /bin/bash -d /srv/cloud/one oneadmin
root@storage01:~# chown -R oneadmin:cloud /srv/cloud

root@storage01:~# aptitude install nfs-kernel-server

root@storage01:~# cat /etc/exports
/srv/cloud      192.168.1.0/255.255.255.0(rw,anonuid=1001,anongid=1001)

root@storage01:~# /etc/init.d/nfs-kernel-server restart

This exports the cloud directory to any machine belonging to the 192.168.1.0/24 subnet.
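
We can verify the active exports with exportfs:

root@storage01:~# exportfs -v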

Afterwards we must mount that share on frontend01.

root@frontend01:~# aptitude install nfs-common ; modprobe nfs

root@frontend01:~# mkdir -p /srv/cloud

root@frontend01:~# cat /etc/fstab
...
storage01:/srv/cloud /srv/cloud      nfs4    _netdev,auto    0       0

root@frontend01:~# mount -a
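
A quick check that the share is actually mounted:

root@frontend01:~# mount | grep /srv/cloud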

The next step is to create the OpenNebula administrator user on this system.

root@frontend01:~# groupadd cloud

root@frontend01:~# useradd -s /bin/bash -d /srv/cloud/one -g cloud oneadmin

root@frontend01:~# id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)

Any OpenNebula account that we add to the system must have the following environment variables set.

root@frontend01:~# su - oneadmin

oneadmin@frontend01:~$ cat .bashrc
# ~/.bashrc

if [ -f /etc/bash.bashrc ]; then
  . /etc/bash.bashrc
fi

export ONE_AUTH=$HOME/.one/one_auth
export ONE_LOCATION=/srv/cloud/one
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH

oneadmin@frontend01:~$ cat .profile
# ~/.profile

if [ -n "$BASH_VERSION" ]; then
  if [ -f "$HOME/.bashrc" ]; then
          . "$HOME/.bashrc"
  fi
fi

OpenNebula is started using the credentials stored in the ONE_AUTH file.

oneadmin@frontend01:~$ mkdir .one

oneadmin@frontend01:~$ cat .one/one_auth
oneadmin:xxxxxx
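
Since this file contains the oneadmin password in plain text, it is a good idea to make it readable only by its owner:

oneadmin@frontend01:~$ chmod 600 .one/one_auth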

We need to generate SSH keys for the oneadmin user in order to be able to connect to the rest of the servers without typing a password. By means of the .hushlogin file we suppress the login banner, and with the StrictHostKeyChecking directive the SSH client will not ask before adding hosts to the known_hosts file.

oneadmin@frontend01:~$ ssh-keygen

oneadmin@frontend01:~$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys

oneadmin@frontend01:~$ cat .ssh/config
Host *
    StrictHostKeyChecking no

oneadmin@frontend01:~$ touch .hushlogin
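
A simple way to test that the key-based access and the client settings behave as expected (no password prompt, no host-key question, no banner) is to connect back to the local machine:

oneadmin@frontend01:~$ ssh localhost hostname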

And finally, we are going to install MySQL (along with the dependencies needed to compile OpenNebula against it) and set up a database called opennebula, which OpenNebula will use to store its data.

root@frontend01:~# aptitude install mysql-server libmysql++-dev libxml2-dev

root@frontend01:~# mysql_secure_installation

root@frontend01:~# mysql -u root -p
...
mysql> GRANT ALL PRIVILEGES ON opennebula.* TO 'oneadmin' IDENTIFIED BY 'xxxxxx';
Query OK, 0 rows affected (0.00 sec)
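
If the opennebula database does not exist yet, it can be created in the same mysql session before granting the privileges:

mysql> CREATE DATABASE opennebula;
Query OK, 1 row affected (0.00 sec)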

Also note that, as in any cluster, all nodes have to keep their clocks synchronized.

root@frontend01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org

root@storage01:~# crontab -e
...
0 * * * * ntpdate pool.ntp.org
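
If ntpdate is not installed on a node, it can be pulled in beforehand:

root@frontend01:~# aptitude install ntpdate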


Jun 7, 2011

Cloud computing with OpenNebula

OpenNebula is an open source IaaS (Infrastructure as a Service) solution for building private, public and hybrid clouds. It has been designed to integrate with any kind of network or storage system and supports the main hypervisors: KVM, VMware vSphere (ESXi) and Xen.

The next figure shows a typical schema of an OpenNebula infrastructure, which I am going to develop in future articles.




The server named frontend01 runs OpenNebula and the cluster services. The main components of OpenNebula are the daemon (which manages the life cycle of virtual machines, networking, storage and hypervisors), the scheduler (which manages the deployment of virtual machines) and the drivers (which handle the hypervisor interfaces - VMM -, monitoring - IM - and virtual machine transfers - TM -).

OpenNebula also needs a database to store its information. We have two options: SQLite and MySQL. In my architecture I will use MySQL, installed on frontend01.

With respect to storage, OpenNebula offers three possibilities: shared - NFS (a shared data area accessible by the OpenNebula server and the computing nodes), non-shared - SSH (no shared area, so live migrations cannot be used) and LVM (a block device must be available on all nodes).

In the articles that I will write about it, I will configure an NFS share on the storage01 server. It is also common to find architectures where the storage lives on the front-end itself. Note that this storage is used to keep the virtual machine images and the files of the deployed virtual machines.

As in any classical IaaS solution, we need computing nodes (also known as worker nodes), which supply the raw computing power and where the virtual machines run. In this example I will employ two hypervisors: KVM (kvm01) and VMware vSphere (esxi01). OpenNebula must be able to start, control and monitor the virtual machines; the communication between OpenNebula and the nodes is carried out through the drivers mentioned above.

OpenNebula is a fully scalable system, since we can add more computing nodes or storage servers as our needs grow.

Other features are portability and interoperability, since we can use most existing hardware to set up the clusters.

And finally, it offers an open, standards-based architecture and can also interoperate with public clouds such as Amazon EC2.


Jun 1, 2011

Zabbix server installation on Ubuntu (II)

We are going to conclude the Zabbix server installation on Ubuntu by setting up a new Apache site for Zabbix.

root@ubuntu-server:~# cat /etc/apache2/sites-available/zabbix
<VirtualHost *:80>
Alias /zabbix /usr/share/zabbix
ErrorLog /var/log/apache2/zabbix-error.log
CustomLog /var/log/apache2/zabbix-access.log common
</VirtualHost>


root@ubuntu-server:~# a2dissite default

root@ubuntu-server:~# a2ensite zabbix

It is also necessary to modify the PHP configuration file to meet the Zabbix requirements.

root@ubuntu-server:~# cat /etc/php5/apache2/php.ini
...
memory_limit = 256M

post_max_size = 32M

upload_max_filesize = 16M

max_execution_time = 600

max_input_time = 600

date.timezone = Europe/Madrid
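
For the new site definition and the php.ini changes to take effect, Apache must be reloaded or restarted:

root@ubuntu-server:~# /etc/init.d/apache2 restart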

Finally, we have to open a web browser, point it at the Zabbix URL (http://ubuntu-server/zabbix in my case) and complete the wizard. On the first screen, Zabbix checks the prerequisites and warns us if something is wrong.




In the fourth step (Configure DB connection), we have to enter the configuration parameters for the database connection.




At the end of the wizard, we must download the Zabbix PHP configuration file (zabbix.conf.php) by clicking on the Save configuration file button.




Then we have to copy that file into the /usr/share/zabbix/conf directory and set the appropriate permissions on it.
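
For example, assuming the browser saved the file as /root/zabbix.conf.php (the actual path depends on where it was downloaded), copying it into place would be:

root@ubuntu-server:~# cp /root/zabbix.conf.php /usr/share/zabbix/conf/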

root@ubuntu-server:~# chmod 600 /usr/share/zabbix/conf/zabbix.conf.php

root@ubuntu-server:~# chown www-data:www-data /usr/share/zabbix/conf/zabbix.conf.php