LVE (Lightweight Virtual Environment) is a kernel-level technology developed by the CloudLinux team. It has roots in common with container-based virtualization and, in its latest incarnation, is built on cgroups. It integrates at the server, PAM (Pluggable Authentication Modules), and database levels to prevent abuse while keeping overhead as low as possible. It is lightweight and transparent.
Without LVE, a single site can bring a shared server to a halt by consuming all of its CPU, memory, and IO resources. CloudLinux's proprietary LVE technology prevents that by letting hosts set individual resource limits, ensuring that no tenant can ever use more resources than they are allocated and that no single website can bring down the whole shared server.
LVE Manager gives you fine-grained control over the resources any single account can use, including CPU, IO, memory, inodes, number of processes, and connections. You can rein in abusers while letting good customers use what they need.
With LVE Manager, you can:
- Limit resources per single account
- Create and apply default packages
- View usage history per account
- Identify abusers and take corrective actions
- Identify top users and upsell to higher-end plans
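Since modern LVE builds on cgroups, the per-account limits above map naturally onto the cgroup v2 controller interface. Below is a minimal, hypothetical sketch (not the CloudLinux tooling) that translates a hosting package into cgroup v2 settings; the file names (`cpu.max`, `memory.max`, `io.max`, `pids.max`) are the standard kernel interface, while the package format and helper name are illustrative assumptions:

```python
# Sketch: map a per-account hosting package onto cgroup v2 files.
# Hypothetical helper; only the cgroup v2 file names are standard.

def package_to_cgroup_v2(package):
    """Translate per-account limits into cgroup v2 file contents."""
    period_us = 100_000  # conventional 100 ms CPU accounting period
    quota_us = int(period_us * package["cpu_percent"] / 100)
    return {
        "cpu.max": f"{quota_us} {period_us}",           # CPU bandwidth
        "memory.max": str(package["memory_bytes"]),     # hard memory cap
        "pids.max": str(package["max_processes"]),      # process count cap
        "io.max": (f"{package['io_device']} "
                   f"rbps={package['io_bytes_per_sec']} "
                   f"wbps={package['io_bytes_per_sec']}"),
    }

basic = {
    "cpu_percent": 50,                 # half of one core
    "memory_bytes": 512 * 1024**2,     # 512 MiB
    "max_processes": 100,
    "io_device": "8:0",                # major:minor of the backing disk
    "io_bytes_per_sec": 1024 * 1024,   # 1 MiB/s in each direction
}

limits = package_to_cgroup_v2(basic)
print(limits["cpu.max"])  # → 50000 100000
```

In a real deployment these values would be written into the account's cgroup directory by privileged tooling; the sketch only shows the mapping.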
Memory Limits
Memory limits control the amount of memory each customer can use. CloudLinux identifies, in real time, the amount of memory actually used by an end customer's processes. Physical memory limits are especially effective in preventing out-of-memory (OOM) conditions and ballooning memory usage, which destroys caches and overloads the server.
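The accounting idea can be sketched as a per-account counter with a hard cap: an allocation that would exceed the limit is refused for that account alone, rather than triggering a server-wide OOM. The class and method names here are illustrative, not the LVE API:

```python
# Sketch: per-account physical memory accounting with a hard cap.
# Illustrative names; not the actual LVE implementation.

class MemoryLimitExceeded(Exception):
    pass

class MemoryAccount:
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.used = 0

    def charge(self, nbytes):
        # Refuse the allocation instead of letting one tenant's
        # ballooning usage overload the whole shared server.
        if self.used + nbytes > self.limit:
            raise MemoryLimitExceeded(
                f"{self.used + nbytes} bytes > limit {self.limit}")
        self.used += nbytes

    def uncharge(self, nbytes):
        self.used = max(0, self.used - nbytes)

acct = MemoryAccount(limit_bytes=512 * 1024**2)
acct.charge(300 * 1024**2)       # within the 512 MiB limit
try:
    acct.charge(300 * 1024**2)   # would exceed the limit
except MemoryLimitExceeded:
    print("allocation denied")   # only this account is affected
```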
IO Limits
IO limits restrict a customer's data throughput. They are measured in KB/s. When the limit is reached, processes are throttled (put to sleep). Because IO is one of the scarcest resources in shared hosting, the ability to cap customer use is vital.
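The throttling arithmetic is simple: once a process has moved more data than its KB/s budget allows for the elapsed time, it sleeps until its average rate is back under the limit. A small sketch of that calculation, with an illustrative helper name:

```python
# Sketch: how long a process must sleep so its average IO rate
# stays at or below a KB/s limit. Helper name is illustrative.

def throttle_delay(bytes_done, elapsed_s, limit_kb_s):
    """Seconds of sleep needed to bring the average rate under the limit."""
    allowed_bytes = limit_kb_s * 1024 * elapsed_s
    if bytes_done <= allowed_bytes:
        return 0.0
    # Time at which bytes_done would have been within budget:
    required_s = bytes_done / (limit_kb_s * 1024)
    return required_s - elapsed_s

# A process wrote 2048 KB in 1 s under a 1024 KB/s limit,
# so it must sleep one more second:
print(throttle_delay(2048 * 1024, 1.0, 1024))  # → 1.0
```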
CPU Limits
CPU limits establish the maximum share of CPU resources that an account can use. When a user hits the CPU limit, the processes within it are slowed down. CPU limits are crucial in preventing CPU usage spikes, which can make servers slow and unresponsive.
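"Slowed down" has a precise consequence: under an N% limit an account gets at most N% of a core, so a job needing C CPU-seconds takes at least C / (N/100) seconds of wall time. A one-line sketch of that lower bound (illustrative helper name):

```python
# Sketch: the slowdown a CPU limit imposes on an account's work.
# Illustrative helper; not part of any real API.

def min_wall_time(cpu_seconds, limit_percent):
    """Minimum wall-clock time for a job under a CPU percentage limit."""
    return cpu_seconds / (limit_percent / 100)

# A request needing 0.5 CPU-seconds under a 25% limit takes at
# least 2 seconds of wall time -- slower, but the server stays up:
print(min_wall_time(0.5, 25))  # → 2.0
```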
Number of Processes
Number-of-processes limits control the total number of processes within an LVE. Once the limit is reached, no new process can be created until another one finishes. This effectively prevents fork bombs and similar denial-of-service attacks.
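The mechanism can be simulated with a simple counter: a fork bomb tries to multiply without bound, but every fork attempt past the cap fails. The class name is illustrative:

```python
# Sketch: a per-LVE process-count cap defeating a fork bomb.
# Illustrative names; a real kernel would fail fork() with EAGAIN.

class ProcessTable:
    def __init__(self, nproc_limit):
        self.limit = nproc_limit
        self.count = 0

    def fork(self):
        if self.count >= self.limit:
            return False  # fork refused: limit reached
        self.count += 1
        return True

    def exit(self):
        self.count = max(0, self.count - 1)

lve = ProcessTable(nproc_limit=100)
# A fork bomb attempts 10,000 forks, but growth stops at the limit:
spawned = sum(1 for _ in range(10_000) if lve.fork())
print(spawned)  # → 100
```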
Entry Processes
Entry-process limits control the number of entries into an LVE. The best way to think about this limit is as the number of web scripts that can be executed in parallel by visitors to a site. These limits are important in preventing a single site from hogging all Apache slots and making Apache unresponsive.
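Conceptually this is a concurrency gate: each incoming request must acquire a slot before its script runs, and requests over the limit are rejected instead of occupying an Apache worker. A sketch using a standard-library semaphore (the class name and rejection behavior are illustrative):

```python
# Sketch: an entry-process limit as a non-blocking concurrency gate
# on requests entering an account. Illustrative names.

import threading

class EntryGate:
    def __init__(self, max_entries):
        self.sem = threading.BoundedSemaphore(max_entries)

    def try_enter(self):
        # Non-blocking: a request over the limit is refused outright
        # rather than tying up a web-server worker while it waits.
        return self.sem.acquire(blocking=False)

    def leave(self):
        self.sem.release()

gate = EntryGate(max_entries=20)
# 50 visitors hit the site at once; only 20 scripts run in parallel:
accepted = sum(1 for _ in range(50) if gate.try_enter())
print(accepted)  # → 20
```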
Inodes
An inode is a data structure on a file system that holds information about a file or a folder. The number of inodes an account uses therefore equals its number of files and folders. Inode limits work at the disk-quota level.
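A quota check of this kind amounts to counting files and directories under the account's home directory and comparing the total against the limit. A small sketch over a throwaway directory tree (helper name is illustrative):

```python
# Sketch: counting an account's inodes (files + folders) the way a
# disk-quota check would. Illustrative helper; uses a temp tree as
# a stand-in for the account's home directory.

import os
import tempfile

def inode_count(root):
    """Number of files and directories under root, including root itself."""
    total = 1  # the root directory itself
    for _, dirs, files in os.walk(root):
        total += len(dirs) + len(files)
    return total

with tempfile.TemporaryDirectory() as home:
    os.makedirs(os.path.join(home, "public_html"))
    for name in ("index.html", "style.css"):
        open(os.path.join(home, "public_html", name), "w").close()
    used = inode_count(home)
    print(used)  # → 4: home, public_html, and the two files

inode_limit = 100_000
print(used <= inode_limit)  # → True: the account is within quota
```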