Linux Security for Business
We make security for your infrastructure as easy as possible: we custom-harden your server to fit your needs!
We cover the whole spectrum of Linux Security for Red Hat (RHEL), CentOS, Scientific Linux, Fedora, Debian, Ubuntu, and other distributions:
- Implement firewall policy (inbound and outbound)
- Harden your filesystem components (who can write to your files?)
- Configure security best practices for all services connected to the network
- Disable unused services to increase performance and reduce overhead
- Security monitoring
- … and much more!
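As a small illustration of disabling unused services, on a systemd-based distribution the enabled services can be enumerated and unneeded ones switched off (the service name below is an example only):

```shell
# List every unit enabled to start at boot
systemctl list-unit-files --state=enabled

# Disable and immediately stop a service that is not needed
# (cups is an example; audit your own list before disabling anything)
systemctl disable --now cups.service
```

On SysV-style systems the equivalent tooling is `chkconfig` or `update-rc.d`.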
Linux Server Security Check
We can review your Linux server no matter what distribution you are using (CentOS, RedHat, Debian, Ubuntu, others…). Let us review your server against security best practices and discover how it can be even more secure!
We have cleaned hundreds of websites of malware intrusion. We do our best to clean up compromises without requiring a server reload. The tools that we have developed to detect compromised hosts often find intrusions that automated scanners do not find. If you think that your server has been compromised, then the sooner you call, the sooner we can fix it!
Security Hardening Methodology
Once a system has been cleared and is known to be free of intrusion (but not necessarily free of vulnerability), we begin the process of attack surface and vector analysis. Imagine that you wish to protect a building. If you envision a bubble around the entire building, and the imaginary bubble prevents anything from exiting or entering, this is the simplest conceptualization of an attack surface. We then assume that the attacker will attempt to enter at any point on the surface (an attack vector). Now collapse this bubble onto the surface of the building itself.
Consider the range of difficulty for piercing the attack surface, from the easiest entry points to the most difficult. Certainly unlocked doors and windows are much easier to enter than tunneling underneath the building; however, we mustn’t exclude the possibility of entering through the ventilation shaft. We then take this analogy and apply it to a single Linux system (the same analysis may be performed for groups of systems).
In order to determine the possible attack vectors for any system installation, we must have a full understanding of the server’s role. The primary attack vector for network-based applications is the network. Similarly, the primary attack vector for systems where the application is primarily non-network-based is usually through the file system or via inter-process communication of some form.
The most common tool used for hardening the network layer is a firewall with a policy of default-deny for both inbound and outbound access. A common mistake in firewall configuration on the host is to filter only inbound traffic. A well-configured firewall will default-deny both inbound and outbound traffic, with strict exceptions made which exactly fit the application. As we discuss below, an attacker must typically download a toolkit to proceed effectively at compromising the system; thus, when properly configured, outbound filtering will block most attempts at toolkit retrieval.
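A default-deny policy of this kind can be sketched with iptables as follows (nftables is analogous); the two exceptions shown are illustrative and must be tailored to the application:

```shell
# Default-deny in both directions
iptables -P INPUT   DROP
iptables -P OUTPUT  DROP
iptables -P FORWARD DROP

# Permit traffic belonging to connections that were explicitly allowed
iptables -A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Example exceptions: inbound HTTPS to a web server, outbound DNS lookups
iptables -A INPUT  -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -p udp --dport 53  -j ACCEPT
```

With this policy in place, an intruder’s attempt to fetch a toolkit over an arbitrary outbound port is dropped by default rather than requiring an explicit block rule.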
Network security hardening for a specific application extends into the process space, such that we can allow or deny access down to a per-user granularity. For example, one user may need FTP access, while another requires email access; we can restrict each user to only the access they require for proper functionality (formally, this is known as separation of duty).
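Per-user egress control of this sort can be sketched with the iptables `owner` match, which applies to locally generated (OUTPUT) traffic; the usernames and ports below are hypothetical examples:

```shell
# Allow one user outbound FTP control-channel access only
iptables -A OUTPUT -m owner --uid-owner ftpuser  -p tcp --dport 21  -j ACCEPT

# Allow another user outbound mail submission only
iptables -A OUTPUT -m owner --uid-owner mailuser -p tcp --dport 587 -j ACCEPT

# Anything else these users originate is caught by the default-deny policy
```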
Not only do we need to implement a firewall that protects the system, but we must do our best to secure the system even if the firewall fails or is disabled. This is the principle of failing safe. Imagine a door that is unlocked electronically. Such a door should fail safe, such that it remains locked even if power is removed.
Applications that listen on the network waiting for a connection (such as a database server) can be configured to listen for connections on any network interface. It is common for systems to have at least two network interfaces: private and public. There is a third, lesser-known interface present on all systems: the internal loopback interface. A public webserver would listen on the internet-facing network interface, whereas a database server would most likely listen on the private network interface; the webserver and database server would communicate over that private link.
Now assume for the moment that the web application must query a caching service which is local to the server. Neither public users nor the database server would ever need this service, thus it should be bound to the loopback interface. While the example above is slightly technical, it demonstrates that the application’s infrastructure is secure even in the event of firewall failure, because services are bound only to the appropriate interface(s). Ideally, the firewall is simply a countermeasure; an isolated application should be secure without it.
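The interface bindings in the scenario above come down to one configuration directive per service; the file paths and addresses here are hypothetical examples, and the final command verifies what is actually listening:

```shell
# Database bound to the private interface only
# (e.g. in MySQL's my.cnf):
#     bind-address = 10.0.0.5

# Local caching service bound to loopback only
# (e.g. in Redis's redis.conf):
#     bind 127.0.0.1

# Verify which address each listening socket is actually bound to
ss -tlnp
```

If `ss` shows a service listening on `0.0.0.0` (all interfaces) that only needs loopback or the private network, its binding should be tightened.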
Linux Security From the Inside
Internal hardening generally lends itself to applications where users are allowed to execute programs within the system (such as a shared hosting environment). However, it also facilitates further hardening for network applications by assuming that the network application is vulnerable in a way that network hardening cannot mitigate. This is an intentionally pessimistic view of network security in order to limit the power that an intruder could wield against the system.
In the broadest sense, an intrusion has taken place when an attacker has leveraged some facility of the server for their own use. For example, an attacker may have the ability to upload files into the server, and this functionality may legitimately be used by the web application. The problem arises when the attacker can use the uploaded content for their own purpose. One very common attack vector used in the wild is a form of upload and execute. An attacker uploads some small segment of code in hopes of executing it. We can prevent this entire attack vector simply by denying execution rights to locations that are writable by the application.
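Denying execution rights to application-writable locations can be sketched with `noexec` mount options; the paths and device names below are illustrative:

```shell
# Remount an upload directory so nothing in it can be executed,
# no device nodes are honored, and setuid bits are ignored
mount -o remount,noexec,nosuid,nodev /var/www/uploads

# Make the setting persistent via /etc/fstab:
#   /dev/vg0/uploads  /var/www/uploads  ext4  defaults,noexec,nosuid,nodev  0 2
```

With this in place, an uploaded script or binary may land on disk, but the kernel will refuse to execute it from that location.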
The example above notwithstanding, let us assume that an attacker has successfully downloaded their toolkit into the application server and is capable of executing arbitrary commands on the server. Because of the vast number of different Linux server builds and architectures, many binary applications will not run unless they are built specifically for that server. Thus, it is common for an attacker to upload their toolkit and build it on the host. For the class of toolkits which require binary execution, we can prevent an attacker from building their toolkit by removing development libraries and compilers.
If an attacker cannot build their toolkit they may be unable to continue their attack. Even if they can build their toolkit, the only location to which they can write the output-binary is non-executable, and it would not run.
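Removing build tooling from a production host is a package-manager operation; package names vary by distribution, and the lists below are typical examples rather than an exhaustive inventory:

```shell
# RHEL/CentOS family
yum remove gcc gcc-c++ make

# Debian/Ubuntu family
apt-get remove --purge gcc g++ make

# Verify no compiler or build tool remains on the PATH
command -v gcc cc make || echo "no build tools found"
```

Development packages should only be removed after confirming the application itself does not build anything at runtime.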
In the examples above, you can begin to see how denying entire attack vectors defeats many attacks and attack combinations. To summarize, we prevent an attacker from running a toolkit, building a toolkit, and uploading (retrieving) a toolkit. In many cases, the result of an attempt is opaque to the attacker: they will know that their attempt failed, but a properly secured server will provide no indication of why. This keeps the attacker guessing, and provided all of their attempts fall within the class of attack vectors that have been defeated, the attacker cannot succeed.
Trusted Third Party
Let us assume that all prior countermeasures have failed. In this scenario we assume the strongest possible intruder: an intruder with access to make any change and hide any presence from the system. Once such an attacker has acquired access to the system, there is nothing you can trust about it, because the attacker has every means at their disposal to conceal their presence. They can delete log entries, change file modification dates, hide files, eavesdrop on network activity, and so on.
As system architects, it is our job to design the system such that any changes they make can be made known through means which are external to the system. At this point we are growing the security model to extend beyond a single system such that this one system can be managed by a separate inaccessible trusted system. One such example is remote logging. If the logs are transmitted to a system which cannot be accessed remotely, then an attacker cannot modify the logs. This is one such example of external control from a trusted system. Other technologies exist, including backups, snapshots, network anomaly detection, file system replication, container encapsulation, and even detecting hidden processes through virtual machine introspection.
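Remote logging is typically a one-line forwarding rule; the log host name below is hypothetical, and the log host itself should accept log traffic while refusing all remote administrative access:

```shell
# Forward all log messages to a hardened, inaccessible log host
# (rsyslog; place in /etc/rsyslog.d/remote.conf)
#     *.*  @@loghost.internal:514
# "@@" forwards over TCP; a single "@" would use UDP.

# Apply the change
systemctl restart rsyslog
```

Because the copies on the log host are written before any compromise of the application server, an intruder who scrubs local logs cannot rewrite that external record.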
When we know that a server is compromised our immediate concern is discovering the method of entry, the intention of the attack, and the depth of access gained by the attacker. Discovering the method of entry allows us to prevent future intrusions of the same form. It may also shed light on possible attack vectors for which we can protect against unknown vulnerabilities of the same attack class.
Attackers have many different intentions. If the attacker’s intention was not to directly target your organization, then it may be reasonable to assume that your system was compromised using a known vulnerability and was therefore one target among many. While uncommon, we must also consider the possibility of an advanced persistent threat.
Advanced persistent threats are generally associated with governments, corporate espionage, and other well-funded entities. However, if the attacker is directly targeting your infrastructure, we must consider them an advanced persistent threat and use every means available to discover their motive, thus protecting specifically against their end goal.
When an attacker compromises a system, we consider them an intruder. An intruder brings with them a toolkit to further their agenda. By analyzing their toolkit we can understand their intentions and often their entry mechanism.
Depth of Access
The intruder’s toolkit may indicate their depth of access, or at least their intended depth of access if they did not fully compromise the system. Generally speaking, Linux systems have two access levels: restricted “user-level” access, and system administrative “root-level” access.
If the intruder has gained root access it is best to reload the server. An intruder may install software known as a “rootkit” which is capable of hiding files, network activity, and processes. If there is any evidence of the intruder gaining root access, we must assume that a rootkit was installed. While we have successfully isolated and removed many rootkits, it would be unwise to assume that all rootkits are detectable or even removable. Although time-consuming, it is best to be conservative and reload the system from scratch.
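Common baseline checks that can hint at (but never rule out) a rootkit include the dedicated scanners and package-manager verification shown below; the tools must be installed from trusted media, since an intruder may have tampered with local copies:

```shell
# Run the rkhunter and chkrootkit scanners non-interactively
rkhunter --check --skip-keypress
chkrootkit

# On RPM-based systems, compare installed files against the
# package database; "5" in the third column flags a checksum mismatch
rpm -Va | grep '^..5'
```

A clean result from these checks is necessary but not sufficient; as noted above, a suspected root-level compromise still warrants a full reload.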
Security hardening is never complete. There are always other options to make compromising a system more difficult, and there are always emerging technologies providing faster recovery mechanisms for compromised systems. Once we have finished one round of server hardening, we start back at the top, review the state of the machine, and begin devising plans to thwart more difficult attack scenarios akin to tunneling under a building. With thorough reevaluation a system can be hardened to the point where a casual attacker will pass it by and the advanced persistent threat is mitigated by continuous iteration of attack surface and vector analysis.
Is your server secure? Are you sure?
Let us make certain!
Wouldn’t it be great to have enough time to review all of your system logs? Our security monitoring service uses both heuristic scanning and human review of anomalous log entries. System logs provide excellent early warning of issues before they become problems, as well as early warning of intrusions or security weaknesses. Our service also checks a number of system configuration settings to make sure that they conform to industry best practices.