

IIS has many features that are specific to IIS, and it also offers built-in integration with other Microsoft services, such as the Microsoft FTP service.

Apache, on the other hand, is the default web server platform installed on Linux CentOS systems running in a cPanel environment.
#Litespeed web server windows
If you found your way to this article, then there is a good chance that your website(s) have been experiencing some performance issues, or maybe you just want to see what other options are available.

Before we get started, let's explain what LiteSpeed Web Server is, as well as what other options are available.

The web server is what responds to website requests, such as http:// and https:// requests. A web server listens for this traffic and is configured to process different types of services, like PHP, ColdFusion, and ASP. Without a web server, the browser would not be able to complete the request and would essentially throw an error that a connection to the website could not be made.

There are many different web server platforms out there; we'll outline the most popular below. There are many others, though these are the most popular by far, and each has its own unique benefits and reasons to use it. For example, if you require a Windows operating system, then the default web server platform is Microsoft IIS.

NGINX has struggled with high bandwidth, so we decided to see how it would fare with low bandwidth instead. We used a shallow queue, setting netem's limit parameter to seven. With the shallow queue on the path, we found that LiteSpeed's performance decreased by around 50 percent, while NGINX's performance degraded by more than 100 percent. We believe the contrast in LiteSpeed's and NGINX's respective performance was especially pronounced at this speed. So, LiteSpeed was found to be substantially faster than NGINX in both situations.
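For reference, this kind of shallow-queue, low-bandwidth path can be emulated with Linux's tc/netem queueing discipline. The following is only a sketch: the limit of seven comes from the text above, but the interface name, rate, and delay values are illustrative assumptions.

```shell
# Emulate a constrained path with a shallow queue (requires root).
# "eth0", the rate, and the delay are placeholders; the article only
# states that netem's limit parameter was set to seven.
tc qdisc add dev eth0 root netem rate 100mbit delay 100ms limit 7

# Verify the emulation, then remove it once the test run is done.
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

With a queue limit of seven packets, bursts beyond the queue depth are dropped rather than buffered, which is what stresses each server's congestion handling.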

#Litespeed web server download
We attempted to download a 1GB file from NGINX at 1Gbps; however, we became tired of waiting for the download to complete. It's highly likely that NGINX saturating its CPU is the cause of this weaker performance.
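As a hedged illustration (not necessarily the authors' tooling), a single large download like this can be timed with curl, assuming a build with HTTP/3 support; the URL is a placeholder:

```shell
# Time one large-file fetch; total time and average speed are reported.
# The URL is a placeholder, and --http3 requires an HTTP/3-enabled curl build.
curl --http3 -k -s -o /dev/null \
  -w 'total: %{time_total}s  avg speed: %{speed_download} bytes/s\n' \
  https://localhost/1gb.bin
```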

h2load is easy to build using the supplied Dockerfile.

We ran each LiteSpeed and NGINX test three times and took the median value to obtain each number (the requests per second, or the resource fetch time). This testing involved fetching the 163 byte index page in numerous ways, under multiple network conditions. The relevant load-generator options are:

- -n: Overall number of requests to be sent
- -m: Number of concurrent requests for each connection

At 100ms RTT, LiteSpeed is quite a bit faster, and its speed increases substantially at 20ms and 10ms RTT. It was fascinating to find in this initial test that NGINX used 100 percent CPU while OpenLiteSpeed utilized only around 45 percent. This is why NGINX's numbers tend not to improve even as the RTT drops. Still, OpenLiteSpeed doesn't reach 100 percent CPU even at 20 and 10ms RTTs.

The next test was a longer run: every connection now sends 1000 requests. We had to set NGINX's http3_max_requests parameter to 10000 (from the default value of 1000) to make sure it could issue more than 1000 requests per connection. At this point, LiteSpeed was also using 100 percent CPU. NGINX allocated in excess of 1GB of memory during this test, while LiteSpeed remained below 28MB. That equates to more than four times the performance for roughly 1/37th of the memory cost.

In the next test, we fetched a single file under alternative network conditions and measured the length of time required to download it. It's clear that NGINX is a little slower here, and it also utilizes three to four times more CPU than LiteSpeed did in the tests above.

Across all three of the NGINX vs LiteSpeed benchmarks, NGINX utilized 100 percent CPU.
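The -n and -m options above belong to h2load, the load generator used for these tests. A sketch of such a run follows; the URL is a placeholder and the -m value is an assumed concurrency setting, not a number taken from the article:

```shell
# Sketch of an h2load run: 1,000,000 total requests (-n) spread over
# 100 connections (-c), with -m concurrent requests per connection.
# The URL and the -m value are illustrative placeholders.
h2load -n 1000000 -c 100 -m 100 https://localhost/
```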

Both the servers and the load tool ran on the same VM: an Ubuntu 14 machine featuring a 20-core Intel Xeon E7-4870 and 32GB of RAM. Netem was used to modify the bandwidth and RTT.

We leveraged OpenLiteSpeed, the open-source version of LiteSpeed Web Server (specifically, version 1.6.4). For NGINX, we utilized 1.16.1 with Cloudflare's Quiche patch. OpenLiteSpeed and NGINX were configured to utilize a single worker process, and NGINX's maximum-requests setting was boosted to 10,000. This enabled us to issue 1,000,000 requests over 100 connections.

The test content comprised a simple collection of static files, including a 163 byte index file and files of varied sizes (1MB, 10MB, 100MB, 1GB). Load was generated with h2load with HTTP/3 support.
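To make the configuration concrete, here is a minimal sketch of what the NGINX side of this setup might look like. Only worker_processes 1 and http3_max_requests 10000 are taken from the text above; the QUIC listen syntax and all file paths are assumptions based on Cloudflare's quiche patch and may differ from the authors' actual config.

```
# Sketch only -- not the authors' actual configuration.
worker_processes 1;           # single worker, as in the article

events {
    worker_connections 1024;
}

http {
    server {
        # QUIC/HTTP3 listener syntax from Cloudflare's quiche patch (assumption)
        listen 443 quic reuseport;
        listen 443 ssl;

        ssl_certificate     /etc/ssl/test-cert.pem;  # placeholder paths
        ssl_certificate_key /etc/ssl/test-key.pem;

        # Raised from the 1000 default so each connection can issue
        # more than 1000 requests (article value: 10000)
        http3_max_requests 10000;

        root /var/www/static;  # the 163B index plus the 1MB-1GB test files
    }
}
```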
