Fixing Libvirt/QEMU KVM Permission Errors in RHEL 7/CentOS 7
If you get errors like these while trying to live-migrate a virtual machine or run `virsh start`, there is a simple fix. For a live migration, the fix probably needs to be applied on the destination host, but updating both sides is a good idea.
```
libvirtd: error : qemuMonitorIORead:610 : Unable to read from monitor: Connection reset by peer
libvirtd: error : qemuProcessReportLogError:1912 : internal error: qemu unexpectedly closed the monitor: qemu-kvm: -chardev pty,id=charserial0: Failed to create chardev
libvirtd: error : qemuMonitorIO:719 : internal error: End of file from qemu monitor
```
In any environment it is important to keep systems up to date with security patches that fix vulnerabilities. In large deployments with many use cases, you may have application requirements that depend on specific package versions, and upgrading those packages could create undesired side effects. There are three basic ways to manage updates in this situation in order to balance security patching with usability.
Before you start
In all cases it is a good idea to prioritize based on severity. Vendors typically publish how important the vulnerability is and how broad its exposure may be, and by reviewing the security notes for the update you can decide whether or not the vulnerability affects your implementation.
Always test deployment of the update in an environment intended to replicate your application requirements before applying it to production systems. If there are any problems, solving them in the test environment will make the production rollout easier and minimize downtime. It is a good idea to schedule a maintenance window so that you do not surprise end-users with a service interruption.
Keep a complete backup of your operating system or use snapshots to roll back to an earlier version in case something breaks during an update.
Update daily and follow the latest release
Updating all packages is the easiest approach. Unfortunately, if your applications depend on specific versions of software, a blanket update can break functionality.
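On CentOS 7, the packaged way to follow the latest release automatically is yum-cron; note that actually applying (not just downloading) updates must be switched on in its configuration:

```shell
# Install the automatic-update service
yum install -y yum-cron
# Set "apply_updates = yes" in /etc/yum/yum-cron.conf so updates are installed,
# not merely downloaded, then enable the daily job:
systemctl enable --now yum-cron
```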
Exclude critical packages from being updated and install all updates
This assumes that you know which packages are critical and should not be updated. By excluding those packages from the update process, the rest of the system can remain up to date. However, excluding a package from updates can break package dependencies. If this happens, it is likely that no packages will install at all because their dependencies cannot be resolved, so you will need to watch your update logs to make sure updates succeed, or alert on failure.
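With yum, the exclusion can be set per run or made permanent in the main configuration (the package globs below are placeholders for whatever you consider critical):

```shell
# One-off: update everything except the pinned packages
yum update --exclude='mysql*' --exclude='php*'

# Permanent: add an exclude line to the [main] section of /etc/yum.conf:
#   exclude=mysql* php*
```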
Review release information and only install packages that require security updates
This requires additional administrative overhead, but allows you to precisely update the packages that you need to maintain security while keeping all other packages at their current versions. Sometimes an updated package pulls in a dependency, and dependency chains can be quite long if the updated package comes from a newer minor release of the operating system (for example, applying a version 7.6 patch to a 7.4 system). In these cases, it is sometimes possible to download the .src.rpm package and rebuild it on the earlier release platform (7.4) so that the rebuilt package installs cleanly in the older environment.
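A minimal sketch of that rebuild approach, run on a build host that is on the older release (the package name here is a placeholder):

```shell
# Fetch the source RPM for the patched package (yumdownloader is in yum-utils)
yumdownloader --source httpd
# Rebuild it against the libraries present on this older release
rpmbuild --rebuild httpd-*.src.rpm
# The rebuilt binary RPMs land under ~/rpmbuild/RPMS/ and can be installed normally
```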
Sometimes it is useful to find out which hosts on your network allow password authentication so that you can turn it off. Here is a simple script to facilitate that. Create a file called "iplist" containing each IP that you wish to test, then run the script below. Optionally, you can set 'password' to something you expect to work and it will tell you whether it authenticated, or only asked for a password but failed to authenticate:
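A minimal sketch of such a script, covering only the detection part (not the optional password check). The trick is that offering sshd no authentication methods at all makes it reply with the list of methods it would accept; the `probe` user name and 5-second timeout are arbitrary choices:

```shell
#!/bin/bash
# Probe each host in "iplist": a denial like
#   "Permission denied (publickey,password)."
# means the server would accept password authentication.

check_methods() {
    # Succeed if the server's denial message lists password authentication.
    case "$1" in
        *password*) return 0 ;;
        *)          return 1 ;;
    esac
}

scan() {
    while read -r ip; do
        reply=$(ssh -o BatchMode=yes \
                    -o PreferredAuthentications=none \
                    -o StrictHostKeyChecking=no \
                    -o ConnectTimeout=5 \
                    "probe@$ip" 2>&1)
        if check_methods "$reply"; then
            echo "$ip: password authentication ENABLED"
        else
            echo "$ip: password authentication disabled (or host unreachable)"
        fi
    done < "$1"
}

if [ -f iplist ]; then
    scan iplist
fi
```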
From time to time you might see errors like the following:
```
~]# lvcreate -s -n foo-snap data/foo
  Can't create snapshot bar-snap as origin bar is not suspended.
  Failed to suspend vg01/pool0 with queued messages.
```
You will note that foo and bar have nothing to do with each other, yet the error prevents creating additional thin volumes. While the root cause is unknown, the fix is easy: something caused LVM to queue an operation that it was unable to complete, leaving this in its metadata:
```
create = "bar-snap"
```
1. Deactivate the thin pool
2. Dump the VG metadata
3. Back up the file
4. Remove the `message1` section
5. Restore the metadata
```shell
lvchange -an vg01/pool0                       # deactivate the thin pool
vgcfgbackup -f /tmp/pool0-current vg01        # dump the VG metadata
cp /tmp/pool0-current /tmp/pool0-current-orig # backup the file before making changes
vim /tmp/pool0-current                        # remove the message1 section in vg01 -> logical_volumes -> pool0
vgcfgrestore -f /tmp/pool0-current vg01 --force
```
Hopefully this works for you, and hopefully whatever causes this gets fixed upstream.
You may have noticed that Dropbox has dropped support for operating systems with glibc earlier than 2.19, and they now also require ext4. In reality, they just need xattr support, which many filesystems provide; presumably they do not want to deal with possible bugs on other filesystems, so they are only certifying ext4. At this time, they do not appear to use any glibc 2.19-specific functionality, so we can use an LD_PRELOAD shim to tell Dropbox that it is running under glibc 2.19 on an ext4 filesystem.
We have this working and tested at Stanford University’s Physics department for KIPAC and their terabytes of scientific data.
Important: This is a workaround and is not supported by Dropbox. We have no affiliation with Dropbox and it is possible that this fix will stop working if Dropbox starts requiring functionality from glibc 2.19 that is not available in your glibc release. WE ARE NOT RESPONSIBLE FOR ANY DATA LOSS. This is provided WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Note that because Dropbox now requires xattrs, NFS will not work (even with this workaround) until NFS supports xattrs. Conceivably, one could extend Unbox to implement xattrs in userspace, using something like sqlite and keying them by inode. If anyone is interested in doing this, contact us and we may be able to help you develop it.
I hope this helps, feel free to forward this link to others that may benefit!
On a minimal CentOS install I found that MSM would refuse to load when I ran "/usr/local/MegaRAID\ Storage\ Manager/startupui.sh"; it would just exit without an error. If you cat the script you will notice that java's output is redirected to /dev/null, hiding useful errors, so remove the redirect. At least then we can see the error.
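A quick way to locate the offending line (the path is from above; the exact form of the redirect is an assumption):

```shell
# Find where the launcher discards output so the redirect can be removed
grep -n '/dev/null' '/usr/local/MegaRAID Storage Manager/startupui.sh'
```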
Since this was a minimal install, I was missing some of the X libraries that MSM wanted; installing them through yum fixed it.
First, you should always avoid .htaccess where possible and use it only as a last resort. Still, this example holds whether or not you are using .htaccess.
Let's say you have a directory you wish to secure so that only the index and some file (test.txt) are available. All other content in the directory should be denied. For example:
These links should load:
In addition, the link without the trailing / should redirect to the link with the trailing / (from /foo to /foo/) for ease of access for your users.
These links should give a 403:
To accomplish this, you might write a .htaccess as follows:
```
<Files ~ "^.*$">
    Require all denied
</Files>
<Files ~ "^$|^index.html$|^test.txt$">
    Require all granted
</Files>
```
However, you will run into a problem: the link without a trailing slash (www.example.com/foo) will not work, because permissions are evaluated before the mod_dir module's DirectorySlash functionality determines whether the request is a directory. While not intuitive, we must also allow the directory name itself as if it were a file:
```
<Files ~ "^.*$">
    Require all denied
</Files>
<Files ~ "^foo$|^$|^index.html$|^test.txt$">
    Require all granted
</Files>
```
Hopefully this will help anyone else dealing with a similar issue because it took us a lot of troubleshooting to pin this down. Here are some search terms you might try to find this post:
Apache 403 does not add trailing /
Apache does not add trailing slash
.htaccess deny all breaks trailing directory slash
.htaccess Require all denied breaks trailing directory slash
Enabling FIPS mode on CentOS 7 changes the way the kernel's initramfs loads crypto modules. If you simply enable FIPS mode with `fips=1` on the kernel command line, the system will fail to boot with an error message like the following:
```
[FAILED] Failed to start Cryptography Setup for luks-....
```
After digging a little bit deeper in the logs, you might find the following:
```
Libgcrypt error: integrity check using `/lib64/.libgcrypt.so.11.hmac' failed: No such file or directory
```
This is because Dracut does not package the .hmac file when it builds the initramfs, so you have to `yum install dracut-fips-aesni` and then rebuild the initramfs with `dracut --force`. Be sure you are running the latest installed kernel, because by default Dracut builds the initramfs for the kernel that is currently running; if a newer kernel is installed, rebooting will load its initramfs, which was built without the .hmac file.
If you do not have hardware AES support, you can install dracut-fips instead of dracut-fips-aesni. Even without hardware support, the aesni version should still work, just without the performance boost.
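The steps above, for a machine with AES-NI support:

```shell
# Install the FIPS dracut module (use dracut-fips instead on CPUs without AES-NI)
yum install -y dracut-fips-aesni
# Rebuild the initramfs for the currently running kernel
dracut --force
```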
After enabling FIPS mode, we discovered on a CentOS 7 install that the drbg kernel module was not loaded, which prevents aes-xts-plain64-formatted LUKS volumes (and possibly others) from being activated by cryptsetup. To fix this, add `rd.driver.post=drbg` to your kernel command line. This problem is evident if you see the error `error allocating crypto tfm` at boot time or in the Dracut journal.
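One way to append the option persistently on CentOS 7, where grubby edits the bootloader entries for you:

```shell
# Add rd.driver.post=drbg to every installed kernel's command line
grubby --update-kernel=ALL --args='rd.driver.post=drbg'
```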
Finally, it is common to mount the boot partition on a separate volume. If this is the case, Dracut in FIPS mode will require the .hmac for vmlinuz and may give an error like `/boot/.vmlinuz-3.10.0-693.21.1-el7.x86_64.hmac does not exist`. To fix this, specify the boot partition on the kernel command line so that Dracut will mount it before validation, e.g. `boot=/dev/sda1` or `boot=UUID=<uuid of /boot>`.
Let’s assume that you are trying to decide between two different servers:
Server 1: 8 cores / 16 threads at 2.1GHz
Server 2: 4 cores / 8 threads at 3.8GHz
Considering Your Options
Choosing a server depends on your workload. If you know you will be running many simultaneous connections and no individual connection has a strict latency requirement, the first server works best because it can handle more parallel processing. On the other hand, if you have fewer concurrent connections and each connection must complete quickly, the second server is better.
We tend to prefer higher clock speed over higher core count because individual operations complete faster. For example, PHP pages will load almost twice as fast on Server 2 as long as its processors are not saturated. If Server 2 saturates to somewhere between 150% and 180% of load, it will run at about the same speed as Server 1, based on the ratio of clock speeds (3.8GHz / 2.1GHz ≈ 1.8) and a 10% guess at context-switch overhead.
We always try to build with redundancy in mind for long-term stability, so these are some considerations when thinking about your deployment:
You might get two identical servers; we could then configure them to be redundant so that either server can take over if the other fails.
You could get your own routable network block so you can have a DMZ separate from your provider’s shared public network. This will reduce your clients’ security exposure to man-in-the-middle attacks.
If you get a routable network, then firewalls can be configured as separate virtual instances on the server(s) that will automatically recover if one of them has a problem.
Choosing the right hardware is important whether you are moving an existing server or deploying a new one, so please let us know if we can help you with your server planning.