Recently in Perl Category

Updated mailgrey graylisting policy scripts

Updated versions of the mailgrey scripts for my dual (My)SQL-server Postfix graylisting policy service, mentioned in this blog entry about the two-node redundant SQL Postfix graylisting service, are available:


You will need a MySQL database version 5 or greater (for InnoDB performance), Perl with DBD::mysql and Digest::MD5, and a Postfix capable of using policy servers.

With this update, changes to the whitelist databases (generated with postmap) are detected automatically. The script listed verbatim in the earlier blog article has been updated as well.
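
The detection works roughly like the following sketch (a minimal illustration of the idea, not the actual mailgrey code; the file name, hash name and helper function are made up): re-tie the DB_File table whenever the mtime of the postmap-generated .db file changes.

#!/usr/bin/perl
# Hedged sketch of the auto-detection idea only - not the actual
# mailgrey code. $whitelist_db, %wl and check_whitelist() are
# illustrative names.
use strict;
use warnings;
use Fcntl;
use DB_File;

my $whitelist_db = '/etc/postfix/whitelist.db';
my $wl_mtime = 0;
my %wl;

# Call before every lookup: re-tie the table whenever postmap
# has rewritten the .db file (detected via its mtime).
sub check_whitelist {
    my $mtime = (stat $whitelist_db)[9] or return;
    return if $mtime == $wl_mtime;
    untie %wl if $wl_mtime;
    tie %wl, 'DB_File', $whitelist_db, O_RDONLY, 0400, $DB_HASH
        or die "cannot open $whitelist_db: $!";
    $wl_mtime = $mtime;
}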

Graylisting stats after 19 days

As mentioned before, I set up a graylisting combo on two incoming mail servers using Postfix, MySQL and Perl (on Solaris 10). This solution has now been running for 19 days, and I took the time to analyze the database a little. Here's a plot of the number of positive graylisted entries since the beginning (the X-axis represents the number of days since the start):

[Plot: cumulative number of positive graylisting entries over 19 days]

The number of new entries per day is slowly going down, as expected.
These servers deliver mail for approximately 12,000 users, so at the moment each user statistically receives mail from about 40 correspondents.

The two SQL databases are running fine, and every night between 2.5 and 6 million rows are deleted (inactive graylisting entries which did not become active within 48 hours):

Jul 14    Inactive deleted:  2646153    Active deleted:        0
Jul 15    Inactive deleted:  4268527    Active deleted:        0
Jul 16    Inactive deleted:  5531953    Active deleted:        0
Jul 17    Inactive deleted:  4406925    Active deleted:        0
Jul 18    Inactive deleted:  3413663    Active deleted:        0
Jul 19    Inactive deleted:  3422004    Active deleted:        0
Jul 20    Inactive deleted:  2864347    Active deleted:        0
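
The cleanup itself boils down to a plain SQL DELETE run nightly from cron. A minimal Perl/DBI sketch of such a job could look like this (table and column names, as well as the DSN and credentials, are illustrative assumptions, not the real mailgrey schema):

#!/usr/bin/perl
# Hedged sketch of a nightly cleanup job - names are illustrative.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('DBI:mysql:database=postfix;host=localhost',
                       'policy', 'secret', { RaiseError => 1 });

# Entries which never became active within 48 hours
my $inactive = $dbh->do(
    'DELETE FROM graylist
      WHERE active = 0 AND first_seen < NOW() - INTERVAL 48 HOUR');

printf "Inactive deleted: %8d\n", $inactive;
$dbh->disconnect;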


knetstat - network i/o statistics

I wanted a tool to show network I/O statistics - just like iostat does for disk access. I used Sun::Solaris::Kstat in Perl, available in the Solaris 10 package SUNWperl584core.

The only parameter my script accepts for now is the time delay between two measurements.

Example:

-bash-3.00$ knetstat 2
            network i/o statistics
       r/s        t/s        kr/s        kt/s  interface
            network i/o statistics
       r/s        t/s        kr/s        kt/s  interface
      1086       1916       91.75     2400.10  e1000g0
      45.5       48.5        6.78        8.51  e1000g1
            network i/o statistics
       r/s        t/s        kr/s        kt/s  interface
    1129.5       1921       99.30     2396.48  e1000g0
      17.5         18        2.46        2.59  e1000g1
            network i/o statistics
       r/s        t/s        kr/s        kt/s  interface
     706.5       1063       79.56     1199.68  e1000g0
        16         18        2.43        2.63  e1000g1
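
For the curious, the underlying approach can be sketched in a few lines (a minimal illustration only, not the actual knetstat script): find all kstat entries that look like a NIC via their "rbytes64"/"obytes64" statistics (see also the update note below) and compute rates from the difference of two snapshots.

#!/usr/bin/perl
# Hedged sketch of the approach only - not the actual knetstat script.
use strict;
use warnings;
use Sun::Solaris::Kstat;

my $delay = shift || 5;     # seconds between two measurements
my $k = Sun::Solaris::Kstat->new()
    or die "No kernel statistics module available.";

my %prev;
while (1) {
    $k->update();
    for my $module (keys %$k) {
        for my $instance (keys %{$k->{$module}}) {
            for my $name (keys %{$k->{$module}{$instance}}) {
                my $st = $k->{$module}{$instance}{$name};
                next unless ref $st && exists $st->{obytes64}
                                    && exists $st->{rbytes64};
                if (my $p = $prev{$name}) {
                    printf "%-10s %10.2f kr/s %10.2f kt/s\n", $name,
                        ($st->{rbytes64} - $p->[0]) / 1024 / $delay,
                        ($st->{obytes64} - $p->[1]) / 1024 / $delay;
                }
                $prev{$name} = [ $st->{rbytes64}, $st->{obytes64} ];
            }
        }
    }
    print "----------\n";
    sleep $delay;
}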


You may download the tiny Perl script here.

Update, July 12th, 2008: Version 1.1 corrects a problem on some SPARC machines whose integrated network cards have no kstat class "mac"; the update now derives the NIC instances by checking for "obytes64" instead.

Accessing Postfix dbm and hash tables from Perl

The other day, I wanted to access Postfix dbm: and hash: tables, created by postmap, from Perl. I am setting up a graylisting system, and my whitelist should be a Postfix table, so I won't have to use yet another database format.

I used this as a test table:

test1   myentry
test2   yourentry
test3   funny


I saved it as "testmap". After that, I used:

postmap testmap

Result:

-rw-r--r-- 1 pascal users    42 2008-06-16 10:14 testmap
-rw-r--r-- 1 pascal users 12288 2008-06-16 10:14 testmap.db


You may access this hash-type Postfix DB just by using DB_File:

#!/usr/bin/perl

use strict;
use warnings;
use Fcntl;
use DB_File;

my %tab;
my $null = chr(0);

# Open the postmap-generated hash table read-only
tie %tab, 'DB_File', 'testmap.db', O_RDONLY, 0400, $DB_HASH
    or die "Cannot open testmap.db: $!";

# Sample query: postmap stores keys with a trailing null byte
my $key = 'test2';

my $value = $tab{$key.$null};
chop $value;  # chop trailing null byte from the value

print $key." = ".$value."\n";


Result:

test2 = yourentry

As you can see, the key must be terminated by a null byte, and the result itself is also null-terminated.
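
If you want to walk the whole table, the same rule applies to the stored keys; a minimal sketch, reusing the tied %tab from the example above:

# Dump every entry of the postmap hash table, stripping the
# trailing null bytes from both keys and values.
while (my ($k, $v) = each %tab) {
    chop $k;   # strip trailing null byte from the key
    chop $v;   # ... and from the value
    print "$k => $v\n";
}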

In case you use the dbm: format, postmap produces:

-rw-r--r--   1 root     root          42 Jun 16 11:30 testmap
-rw-r--r--   1 root     root           0 Jun 16 11:30 testmap.dir
-rw-r--r--   1 root     root        1024 Jun 16 11:30 testmap.pag


In Perl, just use NDBM_File instead, with the filename given without the .dir or .pag suffix:

#!/usr/bin/perl

use strict;
use warnings;
use Fcntl;
use NDBM_File;

my %tab;
my $null = chr(0);

# Open the postmap-generated dbm table read-only
tie %tab, 'NDBM_File', 'testmap', O_RDONLY, 0400
    or die "Cannot open testmap: $!";

# Sample query: keys are stored with a trailing null byte
my $key = 'test2';

my $value = $tab{$key.$null};
chop $value;  # chop trailing null byte from the value

print $key." = ".$value."\n";


Keys and values are also null-terminated in this case.

The result is the same as with our hash: Postfix table:

test2 = yourentry



Kernel memory and the ZFS ARC

The other day, I wanted to know how much memory my system is really using for its own purposes. The modular debugger mdb has a nifty macro for that: it is called memstat - really straightforward.

This is the result:

# mdb -k
Loading modules: [ unix krtld genunix specfs dtrace cpu.AuthenticAMD.15 uppc pcplusmp ufs mpt ip hook neti sctp arp usba fcp fctl qlc lofs fcip cpc random crypto zfs logindmux ptm nfs ]
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    4111973             16062   78%
Anon                       251805               983    5%
Exec and libs                6346                24    0%
Page cache                  37719               147    1%
Free (cachelist)           285302              1114    5%
Free (freelist)            547571              2138   10%

Total                     5240716             20471
Physical                  5121357             20005


Weird - a 16 GB kernel? Yes, that is normal, as we are using ZFS as a filesystem, and its cache (the ARC) is stored in kernel memory. You may use the famous arcstat.pl Perl program from Neelakanth Nadgir to get detailed ARC statistics, but to understand things a little, you may also start with the Sun::Solaris Perl modules shipped with Solaris 10.

For our ARC statistics we have to use Sun::Solaris::Kstat:

#!/usr/bin/perl

use strict;
use warnings;
use Sun::Solaris::Kstat;

my $k = Sun::Solaris::Kstat->new()
    or die "No kernel statistics module available.";

while (1)
{
  $k->update();                          # refresh the kstat snapshot
  my $kstats = $k->{zfs}{0}{arcstats};   # zfs:0:arcstats
  my %khash  = %$kstats;

  foreach my $key (keys %khash)
  {
    printf "%-25s = %-20s\n", $key, $khash{$key};
  }
  print "----------\n";
  sleep 5;
}


This example will print out something like this every 5 seconds:

mru_ghost_hits            = 31005
crtime                    = 134.940576581
demand_metadata_hits      = 7307803
c_min                     = 670811648
mru_hits                  = 4479479
demand_data_misses        = 1616108
hash_elements_max         = 1059239
c_max                     = 10737418240
size                      = 10737420288
prefetch_metadata_misses  = 0
hits                      = 14405090
hash_elements             = 940483
mfu_hits                  = 9925611
prefetch_data_hits        = 0
prefetch_metadata_hits    = 0
hash_collisions           = 2486320
demand_data_hits          = 7097287
hash_chains               = 280319
deleted                   = 1301979
misses                    = 2263351
demand_metadata_misses    = 647243
evict_skip                = 47
p                         = 10211474432
c                         = 10737418240
prefetch_data_misses      = 0
recycle_miss              = 595519
hash_chain_max            = 11
class                     = misc
snaptime                  = 12168.15032689
mutex_miss                = 13682
mfu_ghost_hits            = 139332


You can see fields for the current size ("size") and the maximum size ("c_max") - the latter is set in my case to 10 GB (0x280000000 bytes = 10,737,418,240 bytes, matching the "c_max" value above) via

set zfs:zfs_arc_max=0x280000000

in /etc/system.

You also see the counters for hits, misses, metadata misses, and so on. To get values per time unit, just take the differences between two measurements and format them - or just use arcstat.pl, which yields output like this:

# /var/home/pascal/bin/arcstat.pl
    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
10:45:21   16M    2M     13    2M   13     0    0  653K    8    10G   10G
10:45:22   512   196     38   196   38     0    0    96   57    10G   10G
10:45:23   736   219     29   219   29     0    0    76   27    10G   10G
10:45:24   647   210     32   210   32     0    0    74   39    10G   10G
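
If you want to roll your own, the "difference two snapshots" idea fits in a few lines of Perl. A minimal sketch (not arcstat.pl itself), printing read and miss rates per second from the counters shown above:

#!/usr/bin/perl
# Hedged sketch of the snapshot-differencing idea - not arcstat.pl.
use strict;
use warnings;
use Sun::Solaris::Kstat;

my $k = Sun::Solaris::Kstat->new()
    or die "No kernel statistics module available.";
my ($prev_hits, $prev_misses);

while (1) {
    $k->update();
    my $arc = $k->{zfs}{0}{arcstats};
    if (defined $prev_hits) {
        my $miss = $arc->{misses} - $prev_misses;
        my $read = $arc->{hits} - $prev_hits + $miss;
        printf "read/s %8d  miss/s %8d  miss%% %5.1f  arcsz %7.1f MB\n",
            $read, $miss, $read ? 100 * $miss / $read : 0,
            $arc->{size} / 1048576;
    }
    ($prev_hits, $prev_misses) = ($arc->{hits}, $arc->{misses});
    sleep 1;
}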



So - at the end - we know that 10 GB of the 16 GB kernel memory is used for the ZFS cache.

Editor note, July 16th, 2008: Yes, yes, yes, you are right: there is a "kstat" command available (/usr/bin/kstat), and you can write:

kstat zfs:0:arcstats:size

to get the actual ARC cache size.
Just take a look at the kstat program: it is written in Perl and uses... Sun::Solaris::Kstat to retrieve the values... :)

Fun with DTrace and ZFS mirrors (Solaris 10)


[Update Apr 11th, 2009:]

All ZFS movies on this site:

ZFS vs UFS
ZFS as a movie actor
ZFS scrub

and my Youtube Playlist:

http://www.youtube.com/view_play_list?p=3D4F9C2AD1EF1282

and my Channel:

http://www.youtube.com/pascalgienger

[Animated GIF frame: block access heat map of the two mirror devices]
As a little DTrace exercise, I wanted to show how often disk blocks are read and written on a ZFS volume. Moreover, it should be possible to watch ZFS' write mechanism (more or less sequential writes to forthcoming unused blocks).

To begin, I started with a DTrace script like this:


#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Fires whenever an i/o request has completed */
io:::done
{
  /* timestamp (ms), minor device, logical block, write flag, bytes */
  printf("%i,%i,%i,%i,%i\n",(timestamp/1000000),args[1]->dev_minor,
      args[0]->b_lblkno,args[0]->b_flags & B_WRITE,args[0]->b_bcount);
}

This resulted in a file beginning like this:

4668607882,448,2405944908,256,51200
4668607882,448,2405945042,256,7680
4668607883,512,2300900368,0,36864
4668607883,512,1865201616,0,20480
4668607884,448,2405945344,256,38912
4668607884,512,2358096048,256,38912
4668607884,512,2358096184,256,17408

[.... continued ...]

The columns are: timestamp (in milliseconds), minor device node of the device, logical block number, read/write flag (read = 0, write = 256), and the number of bytes read or written starting at the given logical block.

This data file was then processed with a nifty Perl script (using GD and the libgd graphics library), which maps each pixel to a block range of the device. Each read request makes the corresponding dot greener; each write makes it more red. For every 10 seconds of data I rendered one frame - at the end I had an animated GIF which is fun to watch. You see growing "red" areas (writes) and more or less random read I/O. And you see the spool area, where files are written and read back shortly afterwards (the middle of the picture) - this is the heavily loaded Postfix spool queue.
The left side of the graphic shows device 448, the right side device 512. Both are the two halves of a zpool mirror - hence the rather identical access patterns.
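
The rendering idea can be sketched like this (a minimal illustration only, not the actual script; the device selection, image size and assumed block range are made-up values). It produces a single frame which could later be assembled into an animated GIF, e.g. with gifsicle:

#!/usr/bin/perl
# Hedged sketch of the rendering idea - not the actual script.
# Input lines: timestamp_ms,minor,lblkno,write_flag,bytecount
use strict;
use warnings;
use GD;

my ($width, $height) = (512, 512);
my $max_blk = 2 ** 32;                         # assumed size in blocks
my $blocks_per_px = $max_blk / ($width * $height);

my (%red, %green);
while (<>) {
    chomp;
    my ($ts, $minor, $blk, $wflag, $bytes) = split /,/;
    next unless $minor == 448;                 # one mirror side only
    my $px = int($blk / $blocks_per_px);
    $wflag ? $red{$px}++ : $green{$px}++;      # write = red, read = green
}

my $img = GD::Image->new($width, $height, 1);  # truecolor image
$img->fill(0, 0, $img->colorResolve(0, 0, 0));

my %seen = map { ($_ => 1) } (keys %red, keys %green);
for my $px (keys %seen) {
    my $r = $red{$px}   || 0; $r = 255 if $r > 255;
    my $g = $green{$px} || 0; $g = 255 if $g > 255;
    $img->setPixel($px % $width, int($px / $width),
                   $img->colorResolve($r, $g, 0));
}

open my $fh, '>', 'frame.gif' or die "frame.gif: $!";
binmode $fh;
print $fh $img->gif;                           # needs GD with GIF support
close $fh;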

View the animated GIF, but be warned: the file is 5 MB in size, and it depends on your browser whether it can handle such large animated GIFs - or not.

