I ditched OpenLiteSpeed and went back to good ol’ Nginx
Since 2017, in what spare time I have (ha!), I’ve helped my colleague Eric Berger host his Houston-area weather forecasting site, Space City Weather. It’s an interesting hosting challenge—on a typical day, SCW does maybe 20,000–30,000 page views to 10,000–15,000 unique visitors, which is a relatively easy load to handle with minimal work. But when severe weather events happen—especially in the summer, when hurricanes lurk in the Gulf of Mexico—the site’s traffic can spike to more than a million page views in 12 hours. That level of traffic requires a bit more prep to handle.
For a long time, I ran SCW on a backend stack made up of HAProxy for SSL termination, Varnish Cache for on-box caching, and Nginx for the actual web server application—all fronted by Cloudflare to absorb the majority of the load. (I wrote about this setup at length on Ars a few years ago for folks who want some more in-depth details.) This stack was thoroughly battle-tested and able to eat whatever traffic we threw at it, but it was also annoyingly complex, with multiple cache layers to deal with, and that complexity made troubleshooting issues more difficult than I’d have liked.
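For readers who want a concrete picture of that layering, here is a minimal sketch of how the three pieces chained together. All hostnames, ports, and file paths are illustrative assumptions, not the actual SCW configuration:

```
# --- haproxy.cfg: SSL termination, then hand off to Varnish ---
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    default_backend varnish_cache

backend varnish_cache
    server cache1 127.0.0.1:6081

# --- default.vcl: Varnish serves hits, passes misses to Nginx ---
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

# --- nginx.conf (server block): the actual web server for WordPress ---
server {
    listen 127.0.0.1:8080;
    root   /var/www/wordpress;
    index  index.php;
    # ...PHP-FPM handoff, etc.
}
```

Every request thus traversed three daemons (plus Cloudflare in front), each with its own config, logs, and cache semantics—which is exactly the complexity described above.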
So during some winter downtime two years ago, I took the opportunity to jettison some complexity and reduce the hosting stack down to a single monolithic web server application: OpenLiteSpeed.
Out with the old, in with the new
I didn’t know too much about OpenLiteSpeed (“OLS” to its friends) other than that it’s talked about a bunch in discussions about WordPress hosting—and since SCW runs WordPress, I started to get curious. OLS seemed to get a lot of praise for its built-in caching, especially when WordPress was involved; it was supposed to be quite fast compared to Nginx; and, frankly, after five-ish years of admining the same stack, I was interested in changing things up. OpenLiteSpeed it was!
The first significant adjustment to deal with was that OLS is primarily configured through an actual GUI, with all the annoying potential issues that brings with it (another port to secure, another password to manage, another public point of entry into the backend, more PHP resources dedicated just to the admin interface). But the GUI was fast, and it mostly exposed the settings that needed exposing. Translating the existing Nginx WordPress configuration into OLS-speak was a good acclimation exercise, and I eventually settled on Cloudflare tunnels as an acceptable method for keeping the admin console hidden away and notionally secure.
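A tunnel setup along these lines would look roughly like the following `cloudflared` ingress config. The hostname is a hypothetical placeholder, and the details assume OLS’s default admin port of 7080 with its out-of-the-box self-signed certificate—adjust to taste:

```
# ~/.cloudflared/config.yml — sketch, not the actual SCW config
tunnel: <tunnel-uuid>
credentials-file: /root/.cloudflared/<tunnel-uuid>.json

ingress:
  # Route a private hostname to the OLS admin console on localhost,
  # so port 7080 never needs to be exposed to the public internet.
  - hostname: ols-admin.example.com
    service: https://localhost:7080
    originRequest:
      noTLSVerify: true   # OLS admin ships with a self-signed cert
  # Everything else gets a 404.
  - service: http_status:404
```

With a Cloudflare Access policy on that hostname, the admin console is reachable only through authenticated tunnel traffic rather than sitting on an open port.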
The other major adjustment was the OLS LiteSpeed Cache plugin for WordPress, which is the primary tool one uses to configure how WordPress itself interacts with OLS and its built-in cache. It’s an enormous plugin with pages and pages of configurable options, many of which are concerned with driving usage of the Quic.Cloud CDN service (which is operated by LiteSpeed Technology, the company that created OpenLiteSpeed and its for-pay sibling, LiteSpeed).
Getting the most out of WordPress on OLS meant spending some time in the plugin, figuring out which of the options would help and which would hurt. (Perhaps unsurprisingly, there are plenty of ways in there to get oneself into foolish amounts of trouble by being too aggressive with caching.) Fortunately, Space City Weather provides a great testing ground for web servers, being a nicely active site with a very cache-friendly workload, and so I hammered out a starting configuration with which I was reasonably happy and, while speaking the ancient holy words of ritual, flipped the cutover switch. HAProxy, Varnish, and Nginx went silent, and OLS took up the load.