Re: Improving my EFA Performance
Posted: 01 Apr 2015 17:04
Try running sa-learn with --no-sync and report back please.
shawniverson wrote: Try running sa-learn with --no-sync and report back please.
Like so?
Code: Select all
sa-learn --no-sync --{ham|spam} -f /var/spool/MailScanner/quarantine/<date>/{spam|nonspam}/<queuefile>
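For bulk retraining, the one-off command above can be wrapped in a small loop so the Bayes journal is merged only once at the end instead of after every message. A sketch, assuming the stock MailScanner quarantine layout shown above; the `SA_LEARN` variable and `train_day` helper are illustrative names, not part of EFA:

```shell
# Sketch: batch-train Bayes on one day's quarantine, syncing the journal
# once at the end rather than per message.
SA_LEARN=${SA_LEARN:-/usr/local/bin/sa-learn}   # overridable for testing
PREFS=/etc/MailScanner/spam.assassin.prefs.conf

train_day() {   # usage: train_day 20150402
    qdir=/var/spool/MailScanner/quarantine/$1
    for msg in "$qdir"/spam/*;    do "$SA_LEARN" -p "$PREFS" --spam --no-sync "$msg"; done
    for msg in "$qdir"/nonspam/*; do "$SA_LEARN" -p "$PREFS" --ham  --no-sync "$msg"; done
    "$SA_LEARN" -p "$PREFS" --sync  # merge the journal into the Bayes DB once
}
```

Run as the same user that owns the Bayes database (apache here), otherwise sa-learn builds a second, unused database under the wrong home directory.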
Code: Select all
diff -c functions.php functions.php.org
*** functions.php 2015-04-02 15:59:20.040362566 +0800
--- functions.php.org 2015-04-02 16:00:15.688371543 +0800
***************
*** 2760,2766 ****
$status = array();
if (!$rpc_only && is_local($list[0]['host'])) {
foreach ($num as $key => $val) {
- audit_log('learning started on message ' . $list[$val]['msgid'] . ' as ' . $type);
$use_spamassassin = false;
switch ($type) {
case "ham":
--- 2760,2765 ----
Code: Select all
02/04/15 15:58:11 <username> <ipaddr> SpamAssassin was trained on message 52BAB181625.AF91B as ham
02/04/15 15:56:32 <username> <ipaddr> learning started on message 52BAB181625.AF91B as ham
Code: Select all
# su - apache
-bash-4.1$ time /usr/local/bin/sa-learn -p /etc/MailScanner/spam.assassin.prefs.conf --ham --file /var/spool/MailScanner/quarantine/20150402/nonspam/07A1B181E17.A009C
Learned tokens from 1 message(s) (1 message(s) examined)
real 1m7.466s
user 0m2.987s
sys 0m0.143s
-bash-4.1$ time /usr/local/bin/sa-learn -p /etc/MailScanner/spam.assassin.prefs.conf --spam --file /var/spool/MailScanner/quarantine/20150402/nonspam/07A1B181E17.A009C
Learned tokens from 1 message(s) (1 message(s) examined)
real 1m18.832s
user 0m4.140s
sys 0m0.140s
-bash-4.1$ time /usr/local/bin/sa-learn -p /etc/MailScanner/spam.assassin.prefs.conf --ham --no-sync --file /var/spool/MailScanner/quarantine/20150402/nonspam/07A1B181E17.A009C
Learned tokens from 1 message(s) (1 message(s) examined)
real 1m42.449s
user 0m3.865s
sys 0m0.160s
-bash-4.1$ time /usr/local/bin/sa-learn -p /etc/MailScanner/spam.assassin.prefs.conf --spam --no-sync --file /var/spool/MailScanner/quarantine/20150402/nonspam/07A1B181E17.A009C
Learned tokens from 1 message(s) (1 message(s) examined)
real 1m6.208s
user 0m2.833s
sys 0m0.118s
Code: Select all
$ dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.57153 s, 301 MB/s
Code: Select all
$ dd if=/dev/zero of=1000MB.bin bs=10k count=102400
102400+0 records in
102400+0 records out
1048576000 bytes (1.0 GB) copied, 0.77117 s, 1.4 GB/s
Code: Select all
dd if=/dev/zero of=1000MB.bin bs=1k count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 1.46977 s, 713 MB/s
Code: Select all
$ dd if=/dev/urandom of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 93.2176 s, 11.5 MB/s
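One caveat on these numbers: writing /dev/zero through the page cache largely benchmarks RAM, while the /dev/urandom run is CPU-bound generating the data, so neither is a clean disk figure. A fairer sketch forces a flush before dd reports (GNU dd on Linux; path and size are arbitrary):

```shell
# Flush to disk before dd reports, so the figure reflects the device
# rather than the page cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync
```

Remove /tmp/ddtest.bin afterwards; `oflag=direct` bypasses the cache entirely if the filesystem supports it.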
Code: Select all
[itsupport@efa tmp]$ dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 12.7931 s, 83.9 MB/s
[itsupport@efa tmp]$ dd if=/dev/zero of=1000MB.bin bs=10k count=102400
102400+0 records in
102400+0 records out
1048576000 bytes (1.0 GB) copied, 7.64498 s, 137 MB/s
[itsupport@efa tmp]$ dd if=/dev/zero of=1000MB.bin bs=1k count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 10.9982 s, 95.3 MB/s
[itsupport@efa tmp]$ dd if=/dev/urandom of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 175.187 s, 6.1 MB/s
Code: Select all
[root@kvm1 disk2]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.47506 s, 728 MB/s
[root@kvm1 disk2]# dd if=/dev/zero of=1000MB.bin bs=10k count=102400
102400+0 records in
102400+0 records out
1048576000 bytes (1.0 GB) copied, 4.97657 s, 211 MB/s
[root@kvm1 disk2]# dd if=/dev/zero of=1000MB.bin bs=1k count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 7.76032 s, 135 MB/s
[root@kvm1 disk2]# dd if=/dev/urandom of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 166.427 s, 6.5 MB/s
Code: Select all
[root@kvm1 tmp]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.43541 s, 748 MB/s
[root@kvm1 tmp]# dd if=/dev/zero of=1000MB.bin bs=10k count=102400
102400+0 records in
102400+0 records out
1048576000 bytes (1.0 GB) copied, 14.0923 s, 74.4 MB/s
[root@kvm1 tmp]# dd if=/dev/zero of=1000MB.bin bs=1k count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 15.1012 s, 69.4 MB/s
[root@kvm1 tmp]# dd if=/dev/urandom of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 171.508 s, 6.3 MB/s
Code: Select all
[root@splitter ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.56707 s, 685 MB/s
[root@splitter ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.73333 s, 393 MB/s
[root@splitter ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.727354 s, 1.5 GB/s
[root@splitter ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.704008 s, 1.5 GB/s
[root@splitter ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 0.745681 s, 1.4 GB/s
[root@splitter ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2.57086 s, 418 MB/s
[root@splitter ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.32075 s, 813 MB/s
Code: Select all
[postmaster@efa ~]$ dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.91086 s, 562 MB/s
[postmaster@efa ~]$ dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.5845 s, 300 MB/s
[postmaster@efa ~]$ dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.30461 s, 249 MB/s
[postmaster@efa ~]$ dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.09217 s, 262 MB/s
Code: Select all
echo deadline > /sys/block/vda/queue/scheduler
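Two caveats on the one-liner above, assuming an EL6-era guest with legacy grub: the change does not survive a reboot, and it is worth confirming which scheduler is actually active before benchmarking. A sketch (device name and kernel line are examples, not taken from this system):

```shell
# The bracketed entry is the active scheduler:
#   $ cat /sys/block/vda/queue/scheduler
#   noop anticipatory [deadline] cfq
#
# The echo above is lost on reboot; to persist it, append
# elevator=deadline to the kernel line in /boot/grub/grub.conf
# (legacy grub), e.g.:
#   kernel /vmlinuz-2.6.32-... ro root=... elevator=deadline
```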
Code: Select all
[root@hormel ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.29689 s, 250 MB/s
[root@hormel ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.09777 s, 211 MB/s
[root@hormel ~]# dd if=/dev/zero of=1000MB.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.78092 s, 225 MB/s
[root@hormel ~]#
pdwalker wrote: Nope, already using the virtio drivers and have the scheduler set. That's not it.
I am not noticing a difference either; I have tried noop and deadline in the past on about 40 nodes. I'd be hard pressed to find it now, but I read an IBM case study and went with it. Also, how is the disk cache set for your container? And my final thought: how is the VM set up, LVM or qcow2? I have noticed much better I/O with the cache mode set to "none" in libvirt while using qcow2 images.
Also, you're better off using the deadline scheduler on the host, and the noop scheduler in the vm.
cdburgess75 wrote: Once I used the postfix forwarder on pfSense (add-on) in front of EFA and I had a resource problem similar to the one you are describing. SQLgrey and the postfix forwarder get very confused; it breaks greylisting too.
That is expected behaviour. If you place something in front of EFA, such as a pfSense forwarder or a load balancer, then EFA sees all incoming mail as coming from a single host. That renders a number of EFA's checks useless (RBL checking, greylisting, Razor, Pyzor, etc.), making your spam filter less effective.