Oct 30, 2010

Patching VMware vSphere (ESXi) with vCLI

I think it is very important to keep our VMware vSphere (ESXi) systems up to date, since VMware periodically releases patches that fix bugs and security holes in them.

For this purpose, we can use either the vSphere Host Update Utility (a graphical tool included in the VMware vSphere package) or the vihostupdate command, which belongs to the vCLI (vSphere Command-Line Interface) environment and allows us to perform several tasks (managing virtual machines, files, storage, users, etc.) remotely on VMware vSphere.

When I attempted to use the vSphere Host Update Utility at work, I ran into problems because my PC is behind a proxy... In theory, you can set the ProxyServer tag in the settings.config file, but it does not work properly.

So we are going to see how to apply a patch to VMware vSphere with vCLI. The tests will be performed on a Windows XP system, but do not worry: the commands are the same on Linux systems.

We must have the vCLI utility installed on our PC. I recommend installing vSphere CLI 4.1, because that way we will be able to manage both VMware vSphere 4.0 and 4.1. If we try to use vCLI 4.0 Update 1 or earlier against VMware vSphere 4.1, we will get the following error: "This operation is NOT supported on 4.1.0 platform".

In this example, I am going to apply the latest patch available (ESXi400-201009001) to a VMware vSphere 4.0 (Update 2) host, addressed by its IP.

First, we must open the web page with the available patches for VMware products in a browser: Download Patches. We can use the search tool to find them. In our case, we pick the ESXi (Embedded and Installable) 4.0.0 product.

We then reach a new screen with all the patches released for our product, where we can see that they are ordered by date and version; in addition, we can take a look at their description, bulletins and classification.

In general, a patch includes one or more bulletins, and it is important to know that patches are cumulative, that is to say, the current patch contains all the fixes from previous releases.

The next step is to download the patch. I usually drop it into the same directory as the vCLI Perl scripts.

C:\Archivos de programa\VMware\VMware vSphere CLI\bin>dir
 Volume in drive C has no label.
 Volume Serial Number is A01F-3A26

 Directory of C:\Archivos de programa\VMware\VMware vSphere CLI\bin

28/10/2010  10:48    <DIR>          .
28/10/2010  10:48    <DIR>          ..
09/02/2010  15:59                49 .directory
20/04/2009  20:54             7.638 esxcfg-advcfg.pl
20/04/2009  20:54             8.214 esxcfg-cfgbackup.pl
20/04/2009  20:54             7.892 esxcfg-dns.pl
20/09/2010  04:14       184.519.878 ESXi400-201009001.zip

We must check which bulletins included in the patch can be applied to our VMware vSphere host. In this case, we find that there are two bulletins available.

C:\Archivos de programa\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server --scan --bundle ESXi400-201009001.zip
Enter username: root
Enter password:
The bulletins which apply to but are not yet installed on this ESX host are listed.

---------Bulletin ID---------   ----------------Summary-----------------
ESXi400-201009401-BG            Updates Firmware
ESXi400-201009402-BG            Updates VMware Tools

Now we are ready to run the command that will apply the updates. Before installing the patch, VMware vSphere must be put into maintenance mode. For that, we have to use the vSphere Client and press the Enter Maintenance Mode option in the Summary tab.

C:\Archivos de programa\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server --install --bundle ESXi400-201009001.zip --bulletin ESXi400-201009401-BG,ESXi400-201009402-BG
Enter username: root
Enter password:
The update completed successfully, but the system needs to be rebooted for the changes to be effective.

The last step, after rebooting the machine, is to make sure that the patch has been applied correctly.

C:\Archivos de programa\VMware\VMware vSphere CLI\bin>vihostupdate.pl --server --query
Enter username: root
Enter password:
---------Bulletin ID--------- -----Installed----- ----------------Summary-----------------
ESXi400-Update02              2010-10-25T14:47:24 VMware ESXi 4.0 Update 2
ESXi400-201006203-UG          2010-10-25T14:47:24 VI Client update for 4.0 U2 release
ESXi400-201009401-BG          2010-10-25T15:18:23 Updates Firmware
ESXi400-201009402-BG          2010-10-25T15:18:23 Updates VMware Tools

Do not forget to exit maintenance mode, and remember that we can also use this utility to upgrade a VMware vSphere host from version 4.0 to 4.1.

Oct 24, 2010

MySQL optimization (IV)

This is the last article about MySQL tuning, and we are going to present the way to change the parameters mentioned so far. The previous issue was MySQL optimization (III).

These parameters were set in the MySQL configuration file (my.cnf). Such modifications will not take effect until the mysqld service is restarted. But there may be cases where we cannot restart the service, for example because the computer is in a production environment.

In this situation, we must know that MySQL has several dynamic variables which can be modified at runtime. In order to see all the system variables, we can run the following command:

mysql> show global variables;
| Variable_name                   | Value                                                      |
| auto_increment_increment        | 1                                                          |
| auto_increment_offset           | 1                                                          |
| automatic_sp_privileges         | ON                                                         |
| back_log                        | 50                                                         |
| basedir                         | /usr/                                                      |
| bdb_cache_size                  | 8384512                                                    |
| bdb_home                        | /var/lib/mysql/                                            |
| bdb_log_buffer_size             | 262144                                                     |

To show the value of a specific variable:

mysql> show global variables like 'table_cache';
| Variable_name | Value |
| table_cache   | 64    |
1 row in set (0.00 sec)

And if we want to modify its value at runtime (it should be changed globally whenever possible):

mysql> set global table_cache=1024;
Query OK, 0 rows affected (0.00 sec)

Oct 18, 2010

Kubuntu 10.10 Maverick Meerkat

I have been testing the new Canonical distribution, Kubuntu 10.10 Maverick Meerkat, since October 10, the day it was launched.

There are not many new features in this release, but I think that Kubuntu keeps on improving. Now we have a new kernel, 2.6.35, and a new version of KDE, 4.5.1.

On the desktop, we can make out a new style for the system tray and the new Ubuntu font enabled by default.

KPackageKit has been improved too, and now we can filter applications by category, such as Accessibility, Developer Tools, Internet, etc.; for example, inside Internet we can choose Chat, File Sharing, Mail and Web Browsers.

We can also find a new web browser, Rekonq, which reminds me of Google Chrome.

When I review a new release of Kubuntu, I like to measure the boot time of the operating system on my laptop. In Kubuntu 9.10 Karmic Koala, it was 12 seconds for power on (up to the KDM screen) and for shutdown (from the desktop). Now it is around 9 seconds for both.

Finally, note also that I had a little problem at work with the upgrade process... My computer is behind a proxy and I assumed that configuring the proxy URL (the ProxyHTTP parameter) in the KPackageKit configuration file (PackageKit.conf) would be enough... I was wrong: there must be a bug in that KPackageKit version (0.5.4), so it does not work, and therefore you cannot upgrade to Kubuntu 10.10 Maverick Meerkat from 10.04 Lucid Lynx from behind a proxy.

The solution was to install the update-manager package and use this program to launch the upgrade task (it handles the http_proxy environment variable correctly).

Oct 10, 2010

MySQL optimization (III)

Let's carry on with the series of articles about MySQL tuning. Remember that in the previous issue, MySQL optimization (II), we started to break down the suggestions provided by the Tuning Primer Script.

Now we are going to continue with the query cache, since the script tells us that it is disabled (Query cache is supported but not enabled).

When a query is executed, the database engine always performs the same tasks: it processes the query, determines how to run it, loads the information from disk and returns the result to the client. Through this cache, MySQL saves the result of a particular query in memory, so that in many cases system performance can be significantly improved.

In order to display the query cache status, we can run the following command:

mysql> show status like 'qcache%';
| Variable_name           | Value |
| Qcache_free_blocks      | 0     |
| Qcache_free_memory      | 0     |
| Qcache_hits             | 0     |
| Qcache_inserts          | 0     |
| Qcache_lowmem_prunes    | 0     |
| Qcache_not_cached       | 0     |
| Qcache_queries_in_cache | 0     |
| Qcache_total_blocks     | 0     |
8 rows in set (0.00 sec)

The most important values are Qcache_free_memory (free cache memory), Qcache_inserts (insertions performed into the cache), Qcache_hits (queries answered from the cache) and Qcache_lowmem_prunes (number of times the cache ran out of memory and had to be pruned).

The result of the Qcache_inserts/Qcache_hits division is known as the loss percentage. If the value of this ratio is, for example, 20%, it roughly means that 80% of the queries are served from the cache.
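The arithmetic above can be sketched as a small helper; this is a minimal illustration (the function name is mine, not part of MySQL) that takes the two counters from show status like 'qcache%':

```python
def qcache_loss_percentage(qcache_inserts, qcache_hits):
    """Loss percentage as defined above: insertions into the cache
    relative to the queries answered from it."""
    if qcache_hits == 0:
        return 100.0  # nothing served from the cache yet
    return 100.0 * qcache_inserts / qcache_hits

# E.g. 2,000 insertions against 10,000 hits gives the 20% case
# mentioned above (so roughly 80% of queries came from the cache).
print(qcache_loss_percentage(2000, 10000))  # → 20.0
```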

Another important parameter is Qcache_free_blocks, which tells us that the memory is fragmented when it has a high value. To defragment the non-contiguous memory blocks, we can run the following command (in fact, a cron job should be set up to run it every 4 or 8 hours).

[root@centos ~]# mysql -u root -p -e "flush query cache"

[root@centos ~]# crontab -e
0 */4 * * * mysql -u root -pxxxxxx -e "flush query cache"

The status variable which guides us in adjusting the cache size is Qcache_lowmem_prunes, since the larger it is, the more often the cache has to be pruned. In order to fit the query cache size, we must set a value for the query_cache_size parameter inside the MySQL configuration file.

[root@centos ~]# cat /etc/my.cnf
query_cache_size  = 128M
query_cache_limit = 4M
query_cache_type  = 1

The query_cache_limit parameter establishes the maximum size of an individual result set stored in the query cache. If we want a query to be cached whenever possible, we must activate the query_cache_type variable.

Another recommendation is related to the table cache (You should probably increase your table_cache). The table_cache variable indicates how many tables can be open simultaneously. Each table is represented by one disk file (descriptor), and it must be opened before being read.

In order to adjust this parameter, we must take a look at the Open_tables (currently open tables) and Opened_tables (tables which have been opened) variables.

mysql> show global status like 'open%tables';
| Variable_name | Value |
| Open_tables   | 64    |
| Opened_tables | 30    |
2 rows in set (0.00 sec)

If Opened_tables grows very quickly, it means that tables are being opened and closed for lack of descriptors. In that case, we should increase the table_cache value.
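One way to quantify "grows very quickly" is to sample Opened_tables twice and look at the growth rate. A minimal sketch, where the function name and the threshold of one open per second are my own arbitrary choices, not MySQL values:

```python
def needs_bigger_table_cache(opened_before, opened_after,
                             interval_s, threshold_per_s=1.0):
    """Compare two samples of the Opened_tables counter taken
    interval_s seconds apart; a fast-growing counter suggests
    raising table_cache."""
    rate = (opened_after - opened_before) / interval_s
    return rate > threshold_per_s

# 600 new table opens in 60 seconds is 10 opens/s, well above
# the (arbitrary) threshold, so table_cache looks too small.
print(needs_bigger_table_cache(30, 630, 60))  # → True
```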

To modify this value in MySQL, we have to edit its configuration file. We also have to take into account that this variable always has to be less than open_files_limit; otherwise, we must change the latter.

Besides, we can also set the table_definition_cache variable, which represents the number of table definitions that can be stored in the definition cache (normally it should be the same as table_cache); unlike table_cache, it does not use file descriptors.

[root@centos ~]# cat /etc/my.cnf
table_cache = 512
table_definition_cache = 512

open_files_limit = 1024

The script also shows us that around 30% of the temporary tables are created on disk, so we could increase the tmp_table_size (if a temporary table in memory exceeds this size, it is automatically moved to disk) and/or max_heap_table_size (maximum size that tables can grow to in memory) variables.

In order to set these values correctly, you can analyze the Created_tmp_disk_tables (number of temporary tables created on disk) and Created_tmp_tables (number of temporary tables created in memory) counters.

mysql> show status like 'created_tmp%';
| Variable_name           | Value |
| Created_tmp_disk_tables | 0     |
| Created_tmp_files       | 5     |
| Created_tmp_tables      | 1     |
3 rows in set (0.00 sec)
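The on-disk percentage the script reports can be reproduced from these two counters. A minimal sketch, assuming (as described above) that Created_tmp_tables counts only in-memory tables; the function name is mine:

```python
def tmp_tables_on_disk_pct(created_tmp_disk_tables, created_tmp_tables):
    """Percentage of temporary tables that ended up on disk,
    out of all temporary tables created."""
    total = created_tmp_disk_tables + created_tmp_tables
    if total == 0:
        return 0.0
    return 100.0 * created_tmp_disk_tables / total

# Values from the output above: no temporary table went to disk.
print(tmp_tables_on_disk_pct(0, 1))  # → 0.0
# A hypothetical 3 disk tables out of 10 total would be 30%.
print(tmp_tables_on_disk_pct(3, 7))  # → 30.0
```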

As you can see in the previous results, no table is created on disk, so this situation does not match the 30% figure provided by the script. This is because the script does not check the real value of temporary tables created on disk: looking at its code, we can confirm that it runs a benchmark which generates 5000 random records and measures its performance ("show /*!50000 global */ status like...").

In the MySQL configuration file, we can change the tmp_table_size and max_heap_table_size values.

[root@centos ~]# cat /etc/my.cnf
tmp_table_size = 64M
max_heap_table_size = 32M

Oct 3, 2010

MySQL optimization (II)

Continuing with the previous article about MySQL optimization (I), we are going to start with one of the suggestions provided by the tuning-primer.sh script: The slow query log is NOT enabled.

Queries which consume a lot of CPU (their running time is very high, for example more than 5 seconds) are called slow queries, and it is advisable to log them so that they can be optimized by the developers.

Another good measure can be to activate the logging of those queries which do not use indexes, since this kind of query increases resource consumption because more time is needed to loop through the tables. This sort of query should be treated too.

[root@centos ~]# cat /etc/my.cnf
log-slow-queries
long_query_time = 5
log-queries-not-using-indexes

Another variable shown is related to the thread cache (thread_cache_size), which the script reports as fine.

The size of this parameter depends on the speed with which new threads are created (Threads_created). In the case we are discussing (a Zabbix database), many threads are not generated quickly, so we will enable this cache for safety and set a low value, such as 32.

[root@centos ~]# cat /etc/my.cnf
thread_cache_size = 32

In order to display the thread state, we can run the following command:

mysql> show status like 'threads%';
| Variable_name     | Value |
| Threads_cached    | 0     |
| Threads_connected | 15    |
| Threads_created   | 23428 |
| Threads_running   | 1     |
4 rows in set (0.00 sec)
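A common way to judge the thread cache is to relate Threads_created to the total number of connection attempts. This is a minimal sketch, with a caveat: the Connections counter used here is a real MySQL status variable but is not shown in the output above, so the figures in the example are invented for illustration:

```python
def thread_cache_hit_pct(threads_created, connections):
    """Percentage of connection attempts (Connections counter) that
    reused a cached thread instead of creating a new one
    (Threads_created). Higher is better."""
    if connections == 0:
        return 0.0
    return 100.0 * (connections - threads_created) / connections

# Hypothetical example: 23428 threads created over 100000 connections.
print(thread_cache_hit_pct(23428, 100000))  # → 76.572
```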

Another parameter reported by the script which also seems to be properly configured is the maximum number of allowed connections (Your max_connections variable seems to be fine). In order to see the maximum number of connections that have been used, we can run the following command:

mysql> show status like 'max_used_connections';
| Variable_name        | Value |
| Max_used_connections | 21    |
1 row in set (0.00 sec)
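From this counter we can estimate how close we are to the configured limit. A minimal sketch (the function name is mine), using the value shown above against the default limit of 100:

```python
def connection_usage_pct(max_used_connections, max_connections=100):
    """Percentage of the configured connection limit that has actually
    been reached; values near 100 suggest raising max_connections."""
    return 100.0 * max_used_connections / max_connections

# 21 used connections against the default limit of 100: plenty of
# headroom, matching the script's "seems to be fine" verdict.
print(connection_usage_pct(21))  # → 21.0
```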

If we wanted to increase the maximum number of allowed connections (100 by default), we could edit the max_connections parameter in the MySQL configuration file:

[root@centos ~]# cat /etc/my.cnf
max_connections = 200    # example value

wait_timeout = 10
max_connect_errors = 100

Two other parameters to consider are wait_timeout (when this time is exceeded by an idle connection, it is closed) and max_connect_errors (maximum number of times a connection can abort or fail, 10 by default).

Another recommendation given by the script, regarding the InnoDB storage engine, is to set the innodb_buffer_pool_size variable to around 60-70% of total system memory. For the Zabbix installation, we will allocate 1024 MB because the computer has 2 GB.

[root@centos ~]# cat /etc/my.cnf
innodb_buffer_pool_size = 1024M

In the case of tables created with the MyISAM engine, the key parameter is key_buffer_size, which is already correctly adjusted (Your key_buffer_size seems to be fine) because the Zabbix database does not use this kind of table.

For databases which use this storage engine for their tables, it is recommended to set this parameter to around 25% of total system memory.

Another way to adjust it is to consult the Key_read_requests and Key_reads values. The first indicates the number of requests served from the index buffer (memory) and the second, the number of reads made directly from disk. It is clearly desirable for Key_reads to be as low as possible and Key_read_requests as high as possible.

mysql> show status like '%key_read%';
| Variable_name     | Value  |
| Key_read_requests | 242148 |
| Key_reads         | 35618  |
2 rows in set (0.00 sec)

An optimal ratio should be around 1% (for each disk read performed, 100 requests are served from the buffer in memory).
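Plugging in the values shown above makes this concrete; a minimal sketch (the function name is mine):

```python
def key_cache_miss_pct(key_reads, key_read_requests):
    """Percentage of index read requests that had to go to disk;
    the article suggests keeping it around 1%."""
    if key_read_requests == 0:
        return 0.0
    return 100.0 * key_reads / key_read_requests

# Values from the output above: about 14.7%, well above the 1%
# target, which suggests increasing key_buffer_size.
print(round(key_cache_miss_pct(35618, 242148), 1))  # → 14.7
```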

If we want to adjust this variable, we have to set its value in the my.cnf file.

[root@centos ~]# cat /etc/my.cnf
key_buffer_size = 32M