Fixed Versions: Linux SACK Attack – Denial of Service

The recently published CVE-2019-11477 and CVE-2019-11478 vulnerabilities allow an attacker who can reach a TCP port on your server (which is nearly everyone, including anyone running a web or mail server) to either:

  1. Slow it down severely
  2. Cause a kernel crash

See the NIST publication for more detail:

https://nvd.nist.gov/vuln/detail/CVE-2019-11477

Upstream distributions have released fixes for these as follows. CVE-2019-11478 is an issue as well, but CVE-2019-11477 has the higher impact, so that is the one we focus on here. As far as I have seen, the fix for both is in the same package version, so you only need to reference the -11477 articles below.

Mitigation

You can mitigate this attack with iptables. If you are using fwtree, our latest release for el6 and el7 includes the mitigation (version 1.0.1-70 or newer). Of course it is best to update your kernel, but this provides a quick fix without rebooting:

# [ -d /etc/fwtree.d ] && yum install -y fwtree && systemctl reload fwtree && iptables-save | grep MITIGATIONS

You can also do it directly with iptables:

# iptables -I INPUT -p tcp --tcp-flags SYN SYN -m tcpmss --mss 1:500 -j DROP
# ip6tables -I INPUT -p tcp --tcp-flags SYN SYN -m tcpmss --mss 1:500 -j DROP
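
These rules are not persistent across a reboot. If you are using the plain iptables service rather than firewalld (an assumption about your setup), you can save them like this:

# iptables-save > /etc/sysconfig/iptables
# ip6tables-save > /etc/sysconfig/ip6tables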

You can also disable TCP selective acks in sysctl:

# Add this to /etc/sysctl.conf
net.ipv4.tcp_sack=0
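
To apply the setting immediately without rebooting (note that disabling SACK can reduce TCP throughput on lossy links), you can also set it live:

# sysctl -w net.ipv4.tcp_sack=0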

Red Hat / CentOS / Scientific Linux

Vendor security article: https://access.redhat.com/security/cve/cve-2019-11477

Fixed Versions

  • el5: not vulnerable (and EOL, so upgrade already!)
  • el6: kernel-2.6.32-754.15.3.el6
  • el7:  kernel-3.10.0-957.21.3.el7

Ubuntu

Vendor security article: https://usn.ubuntu.com/4017-1/

Fixed Versions

  • Ubuntu 19.04
    • 5.0.0.1008.8
  • Ubuntu 18.10
    • 4.18.0.22.23
  • Ubuntu 18.04 LTS
    • 4.15.0-52.56
  • Ubuntu 16.04 LTS
    • 4.15.0-52.56~16.04.1
    • 4.4.0-151.178

Debian

Vendor security article: https://security-tracker.debian.org/tracker/CVE-2019-11477

Fixed Versions

  • jessie
    • 3.16.68-2
    • 4.9.168-1+deb9u3~deb8u1
  • stretch
    • 4.9.168-1+deb9u3
  • sid
    • 4.19.37-4

SuSE

Vendor security article: https://www.suse.com/security/cve/CVE-2019-11477/

Fixed Versions

For SuSE, there are too many minor version releases to list them all here. To generalize, if you are running a newer kernel than these, you are probably okay, but double-check the vendor security article for your specific release and use case:

  • Pre SLES-15:
    • 3.12.61-52.154.1
    • 4.4.121-92.114.1
    • 4.4.180-94.97.1
    • 4.12.14-95.19.1
  • SLES 15
    • 4.12.14-150.22.1
  • Leap 15
    • 4.12.14-lp150.12.64.1

Vanilla Upstream Kernel (kernel.org)

Security patch: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git/commit/?id=3b4929f65b0d8249f19a50245cd88ed1a2f78cff

Fixed Versions

  • 5.1.11
  • 4.19.52
  • 4.14.122
  • 4.9.182
  • 4.4.182
  • 3.16.69

[FIXED] Libvirtd QEMU / KVM monitor unexpectedly closed – failed to create chardev during live migration or virsh start

Fixing Libvirt/QEMU KVM Permission Errors in RHEL 7/CentOS 7

If you get errors like these while trying to live-migrate a virtual machine or run `virsh start`, then there is a simple fix. For a live migration, the fix probably needs to be applied to the destination, but applying it on both sides is a good idea.

libvirtd: error : qemuMonitorIORead:610 : Unable to read from monitor: Connection reset by peer
libvirtd: error : qemuProcessReportLogError:1912 : internal error: qemu unexpectedly closed the monitor: qemu-kvm: -chardev pty,id=charserial0: Failed to create chardev
libvirtd: error : qemuMonitorIO:719 : internal error: End of file from qemu monitor

A Simple Fix

Just add this line to /etc/fstab:

devpts     /dev/pts devpts     gid=5,mode=620     0 0

then remount:

mount -o remount,rw /dev/pts
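
You can check the active mount options before retrying the migration or `virsh start`:

mount | grep devpts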

-Eric

Server Security Update Best Practices

Securing the Server with Update Patches

In any environment it is important to keep systems up to date with security patches that fix vulnerabilities. In large deployments with many use cases, you may have application requirements that depend on certain versions of packages being installed and upgrading those packages could create undesired side effects. There are 3 basic ways to manage updates in this case in order to balance security patching with usability.

Before you start

In all cases it is a good idea to prioritize based on severity. Vendors typically publish how important the vulnerability is and how broad its exposure may be, and by reviewing the security notes for the update you can decide whether or not the vulnerability affects your implementation.

Always test deployment of the update in an environment intended to replicate your application requirements before applying them to production systems. If there are any problems, solving them in the test environment will make the production rollout easier and minimize downtime. It is also a good idea to schedule a maintenance window so you do not surprise end users with a service interruption.

Keep a complete backup of your operating system or use snapshots to roll back to an earlier version in case something breaks during an update.

Update daily and follow the latest release

Updating all packages is the easiest approach. Unfortunately, it can cause problems if specific software versions are required, so it could break functionality.
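
On el7, for example, one common way to automate daily updates is yum-cron (a minimal sketch; review /etc/yum/yum-cron.conf and set apply_updates = yes before relying on it):

# yum install -y yum-cron
# systemctl enable yum-cron
# systemctl start yum-cron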

Exclude critical packages from being updated and install all updates

This assumes that you know which packages are critical and should not be updated. By excluding those packages from the update process, the rest of the system can remain up to date. However, excluding a package can break dependency resolution; if that happens, it is likely that no packages will update at all, so you will need to watch your update logs to make sure updates succeed, or alert on failure.
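
With yum, for example, you can exclude packages for a single run (the package globs below are placeholders for whatever you consider critical):

# yum update --exclude="kernel*" --exclude="php*"

or make the exclusion permanent by adding a line like this to /etc/yum.conf:

exclude=kernel* php*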

Review release information and only install packages that require security updates

This requires additional administrative overhead, but allows you to precisely update the packages that you need to maintain security while keeping all other packages at their current version. Sometimes an updated package requires a dependency; dependency chains can be quite long if the updated package is from a newer minor release of the operating system (for example, applying a version 7.6 patch to a 7.4 system). In these cases, it is sometimes possible to download the .src.rpm package and rebuild it on the earlier release platform (7.4) so that the newer rebuilt package will install cleanly in an older environment.
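
On el7, for example, the security-only workflow might look like this (a sketch; el6 needs yum-plugin-security installed first, and the .src.rpm name is only a placeholder):

# List outstanding security advisories
yum updateinfo list security

# Apply only packages flagged as security updates
yum update --security

# If a needed update will not install cleanly on the older minor release,
# rebuild its source package locally so the dependencies match:
rpmbuild --rebuild somepackage-1.2.3-4.el7.src.rpm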


Scan Servers For SSH Password Authentication

Make Sure Password Authentication is Off

Sometimes it is useful to find out which hosts on your network allow password authentication so that you can turn it off. Here is a simple script to facilitate that. Create a file called “iplist” containing each IP that you wish to test, then run the script below. Optionally, you can set ‘password’ to something you expect to work and it will tell you whether it authenticated, or whether it only asked for a password but failed to authenticate:

#!/bin/bash

# Optional: set this to a password you expect to work to test authentication
password=mypassword

while read HOST; do
    # Write a self-deleting SSH_ASKPASS helper that reports which host prompted
    echo "echo '$password'; echo >&2 '$HOST asked for password'; rm \$0" > /dev/shm/askpass
    chmod 755 /dev/shm/askpass
    export SSH_ASKPASS="/dev/shm/askpass"
    export DISPLAY=x
    # setsid detaches from the terminal so ssh uses SSH_ASKPASS instead of prompting
    setsid ssh -o PreferredAuthentications=password,keyboard-interactive -o StrictHostKeyChecking=no -o ConnectTimeout=3 -o GSSAPIAuthentication=no $HOST "echo $HOST logged in" < /dev/null
done < iplist

-Eric

Fix LVM Thin Can’t create snapshot, Failed to suspend vg01/pool0 with queued messages

Fix LVM Thin Snapshot Creation Errors

From time to time you might see errors like the following:

~]# lvcreate -s -n foo-snap data/foo
Can't create snapshot bar-snap as origin bar is not suspended.
Failed to suspend vg01/pool0 with queued messages.

You will note that foo and bar have nothing to do with each other, yet the error prevents creating additional thin volumes. While the cause is unknown, the fix is easy. Something caused LVM to attempt a logical volume creation that it could not complete, so it leaves a queued message like this in its metadata:

message1 {
create = "bar-snap"
}

The Fix

  1. deactivate the thinpool
  2. dump the VG metadata
  3. backup the file
  4. remove the message1 section
  5. restore the metadata.

The Procedure

  • vgcfgbackup -f /tmp/pool0-current vg01
  • cp /tmp/pool0-current /tmp/pool0-current-orig # backup the file before making changes
  • vim /tmp/pool0-current # remove the message1 section in vg01 -> logical_volumes -> pool0
  • vgcfgrestore -f /tmp/pool0-current vg01 --force
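
Putting it all together, the whole sequence looks roughly like this (a sketch using the vg01/pool0 names from the example above, and including the deactivation step from the fix list):

# Deactivate the thin pool so the metadata can be restored
lvchange -an vg01/pool0

# Dump the current VG metadata and keep an untouched copy
vgcfgbackup -f /tmp/pool0-current vg01
cp /tmp/pool0-current /tmp/pool0-current-orig

# Remove the message1 { ... } block under vg01 -> logical_volumes -> pool0
vim /tmp/pool0-current

# Restore the edited metadata and reactivate the pool
vgcfgrestore -f /tmp/pool0-current vg01 --force
lvchange -ay vg01/pool0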

Hopefully this works for you, and hopefully whatever causes this gets fixed upstream.

Dropbox Workaround For Pre-glibc 2.19 Support

Make Dropbox Run On CentOS 7

You may have noticed that Dropbox has dropped support for operating systems with glibc earlier than 2.19, and they are also requiring that everyone use ext4. In reality, they just need xattr support, which many filesystems provide. I am guessing they just don’t want to deal with possible bugs on other filesystems, so they are only certifying ext4. At this time, they do not appear to be using any glibc 2.19-specific functionality, so we can use an LD_PRELOAD shim to tell Dropbox that it is running under glibc 2.19 on an ext4 filesystem.

We have this working and tested at Stanford University’s Physics department for KIPAC and their terabytes of scientific data.

Important: This is a workaround and is not supported by Dropbox. We have no affiliation with Dropbox and it is possible that this fix will stop working if Dropbox starts requiring functionality from glibc 2.19 that is not available in your glibc release. WE ARE NOT RESPONSIBLE FOR ANY DATA LOSS. This is provided WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Now that you have agreed to the legal disclaimer, you may try it for yourself: https://www.linuxglobal.com/src/unbox/unbox.c

To build, do this:
gcc -shared -fPIC unbox.c -o unbox.so -ldl

You can then test it by running it under LD_PRELOAD:

]# LD_PRELOAD=`pwd`/unbox.so df -Th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs ext4 2.0G 0 2.0G 0% /dev
tmpfs ext4 2.0G 8.0K 2.0G 1% /dev/shm
tmpfs ext4 2.0G 209M 1.8G 11% /run
tmpfs ext4 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root ext4 66G 63G 966M 99% /
/dev/vda1 ext4 477M 326M 127M 73% /boot
/dev/mapper/centos-var ext4 2.0G 1.4G 532M 72% /var
/dev/mapper/centos-tmp ext4 969M 4.4M 898M 1% /tmp
tmpfs ext4 396M 0 396M 0% /run/user/0


Note that all of the filesystems are listed as ext4, and Dropbox will see it this way, too. To run Dropbox, you will need to put the LD_PRELOAD in front of it like so:

LD_PRELOAD=/path/to/unbox.so exec /usr/local/dropbox-2.10.0/bin/dropbox "$@"

The `dropbox` file being referenced above is a Python script provided by Dropbox which you can get from here: https://www.dropbox.com/install-linux

Note that because Dropbox now requires xattrs, NFS will not work (even with this work-around) until NFS supports xattrs. Conceivably, one could extend Unbox to implement xattrs in userspace using something like sqlite and linking them by inode. If anyone is interested in doing this, contact us and we may be able to help you develop that.

I hope this helps; feel free to forward this link to others who may benefit!

-Eric

LSI Megaraid Storage Manager Does Nothing

Installing Broadcom MSM for LSI Megaraid Cards

On a minimal CentOS install I found that MSM would refuse to load when I ran “/usr/local/MegaRAID\ Storage\ Manager/startupui.sh”. It would just exit without an error. If you cat the script you will notice that java’s output is redirected to /dev/null, hiding any useful errors, so remove the redirect! At least then we can see the error.

Since this was a minimal install, I was missing some of the X libraries that MSM wanted.  This fixed it:

yum install libXrender libXtst

-Eric


Redirect Directory Trailing Slash (/) with Restricted Access

Securing Apache and Maintaining Usability

First, you should avoid .htaccess where possible and treat it as a last resort. Still, this example holds whether or not you are using .htaccess.

Let’s say you have a directory you wish to secure so that only the index and one specific file (test.txt) are available. All other content in the directory should be denied. For example:

These links should load:

  • www.example.com/foo
  • www.example.com/foo/
  • www.example.com/foo/test.txt

In addition, the link without the trailing / should redirect to the link with the trailing / (from /foo to /foo/) for ease of access for your users.

These links should give a 403:

  • www.example.com/foo/bar
  • www.example.com/foo/letmein.txt

To accomplish this, you might write a .htaccess as follows:

Apache 2.2

Order allow,deny
<Files ~ ^$|^index.html$|^test.txt$>
     Order deny,allow
</Files>

Apache 2.4

Require all denied
<Files ~ ^$|^index.html$|^test.txt$>
     Require all granted
</Files>

However, you will run into a problem: the link without a trailing / (www.example.com/foo) will not work, because permissions are evaluated before the mod_dir module’s DirectorySlash functionality determines whether the request refers to a directory. While not intuitive, we must also add the directory name itself as an allowed “file”, as follows:

Apache 2.2

Order allow,deny
<Files ~ ^foo$|^$|^index.html$|^test.txt$>
     Order deny,allow
</Files>

Apache 2.4

Require all denied
<Files ~ ^foo$|^$|^index.html$|^test.txt$>
     Require all granted
</Files>
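
To check the behavior, you can hit each URL with curl and compare status codes (using the example hostname from above; expect a 301 for /foo, 200 for /foo/ and /foo/test.txt, and 403 for everything else):

for path in /foo /foo/ /foo/test.txt /foo/bar /foo/letmein.txt; do
    echo -n "$path: "
    curl -s -o /dev/null -w '%{http_code}\n' "http://www.example.com$path"
done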

Hopefully this will help anyone else dealing with a similar issue because it took us a lot of troubleshooting to pin this down. Here are some search terms you might try to find this post:

  • Apache 403 does not add trailing /
  • Apache does not add trailing slash
  • .htaccess deny all breaks trailing directory slash
  • .htaccess Require all denied breaks trailing directory slash

-Eric


FIPS: Failed to start Cryptography Setup

Enabling FIPS mode for CentOS 7 changes the way that the kernel initramfs loads crypto modules. If you simply enable FIPS mode with fips=1 on the kernel command line, then it will fail to boot with an error message like the following:

[FAILED] Failed to start Cryptography Setup for luks-....

(Screenshot: setting fips=1 on the kernel commandline causes this error.)

After digging a little bit deeper in the logs, you might find the following:

Libgcrypt error: integrity check using `/lib64/.libgcrypt.so.11.hmac' failed: No such file or directory

This is because Dracut is not packaging the .hmac file when it builds the initramfs, so you have to yum install dracut-fips-aesni and then rebuild the initramfs with dracut --force. Be sure you are running the kernel version you intend to boot, because by default Dracut builds the initramfs for the currently running kernel; if a newer kernel is installed but not yet booted, its initramfs will still be missing the .hmac file when you reboot into it.
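
As a quick sketch, the sequence on el7 looks roughly like this:

# Install the FIPS Dracut module (see below if you lack AES-NI support)
yum install -y dracut-fips-aesni

# Rebuild the initramfs for the currently running kernel so the .hmac files are included
dracut --force

# Confirm the running kernel is the one you will boot into
uname -r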

If you do not have hardware AES support, then you can omit -aesni and install dracut-fips. Even if you do not have hardware support, however, installing the aesni version should still work, but without the performance boost.

After enabling FIPS mode, we discovered on a CentOS 7 install that the drbg kernel module was not loaded, which prevents aes-xts-plain64 formatted LUKS volumes (and possibly others) from being activated by cryptsetup. To fix this, add the following to your kernel commandline: rd.driver.post=drbg. This problem is evident if you see the error “error allocating crypto tfm” at boot time or in the Dracut journal.

Finally, it is common for /boot to live on a separate partition. If this is the case, then Dracut in FIPS mode will require the .hmac for vmlinuz and may give an error like the following: /boot/.vmlinuz-3.10.0-693.21.1-el7.x86_64.hmac does not exist. To fix this, specify the boot partition with one of these options so that Dracut will mount it before validation:

boot=<boot device>
    Specify the device where /boot is located, e.g.:
    boot=/dev/sda1
    boot=/dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:1:0-part1
    boot=UUID=<uuid>
    boot=LABEL=<label>

Red Hat has documentation about this problem here: https://bugzilla.redhat.com/show_bug.cgi?id=1014527#c7

I hope this helps. When you are done, you will have the following added to your kernel commandline:

fips=1 rd.driver.post=drbg boot=/dev/sdaX

You will of course need to specify the correct boot volume for your machine.
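
Rather than editing the GRUB configuration by hand, one way to add these arguments to every installed kernel on el7 is grubby (a sketch; /dev/sda1 is a placeholder for your real boot device):

# Append the FIPS-related arguments to all kernel entries
grubby --update-kernel=ALL --args="fips=1 rd.driver.post=drbg boot=/dev/sda1"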

-Eric

Choosing Linux Server Hardware

Hardware Options

Let’s assume that you are trying to decide between two different servers:

  • Server 1: 8 cores / 16 threads at 2.1GHz
  • Server 2: 4 cores / 8 threads at 3.8GHz

Considering Your Options

Choosing a server will depend on your workload. If you know you will be running lots of simultaneous connections and that each individual connection does not have a low latency requirement, then the first server would work best because it can handle more parallel processing. On the other hand, if you have fewer concurrent connections and each connection must complete quickly, then the second server is better.

We tend to prefer a higher clock speed over a higher core count because individual operations complete faster. For example, PHP pages will load almost 2x faster on the higher-clocked server when its processors are not saturated. If load grows to somewhere between 150% and 180% of its capacity, it will run at about the same effective speed as server 1, based on the ratio of the clock speeds and an assumed 10% context-switch overhead.

We always try to build with redundancy in mind for long-term stability, so these are some considerations when thinking about your deployment:

  • You might get two identical servers. We could then configure them to be redundant so that either server can take over if one fails.
  • You could get your own routable network block so you can have a DMZ separate from your provider’s shared public network. This will reduce your clients’ security exposure to man-in-the-middle attacks.
  • If you get a routable network, then firewalls can be configured as separate virtual instances on the server(s) that will automatically recover if one of them has a problem.


Choosing the right hardware is important whether you are moving an existing server or deploying a new one, so please let us know if we can help you with your server planning.

-Eric