I was hoping you guys could help me with a somewhat out-of-the-ordinary situation. My elderly father, who has very little technical knowledge, owns a local news outlet and is modernizing the whole website and its infrastructure. He is in talks with a local developer (just one guy), who has maintained everything for the past 5 years, about moving everything to a new dedicated server and making some much-needed software and design changes. Everything currently runs on an older Hetzner dedicated server, which we plan to upgrade very soon to the Hetzner AX102 (Ryzen 9 7950X3D, 128 GB DDR5 ECC, 2 × 1.92 TB NVMe SSD Datacenter Edition, and a 1 Gbit/s port with unlimited bandwidth).

He has asked me to help him reach a favorable outcome: he is aware that, given his lack of technical knowledge, he could be taken advantage of, or at the very least the developer might only do the bare minimum because nobody checks his work, even though this process is not exactly cheap, at least by our country's standards.

I only have a basic understanding of what hosting such a site optimally on a dedicated server entails, as this is not my area of expertise, but I'm willing to learn in order to help my father, at least to the point where we don't get scammed and we can take full advantage of the new hardware to make the site load instantly.

More context:

  • The site is based on WordPress, and we plan to keep it that way when we make the transfer. The developer told me he would strongly prefer running AlmaLinux 10 with NGINX for our particular context and will likely use Bricks as a page builder. I would prefer not to change these, since it would likely create unneeded friction with him.
  • There are about 150k–250k average monthly users according to Google Analytics, depending on the time of year and different events, most of them from our area.
  • About 80% of readers are using smartphones.
  • There are a few writers who publish multiple articles daily (20–25 in a 24-hour window). The articles always contain at least text and some images. There’s a strong dependency on Facebook, as most of the readers access those articles from our Facebook page. This might be relevant for caching strategies and other settings.

For the caching side, I asked Gemini to analyze my requirements, and it recommended a tiered “in-memory” caching strategy to handle high traffic without a CDN. Since I'm quite skeptical of AI output, could you tell me whether these specific recommendations are sound?

  1. Page Cache: It suggests storing the Nginx FastCGI cache on a RAM-backed tmpfs mount, using ngx_cache_purge with the Nginx Helper plugin to instantly invalidate only the homepage and category pages on publish, and stripping tracking parameters (e.g., fbclid) to prevent cache fragmentation (see the first sketch after this list).
  2. Object Cache: It proposes Valkey (server-side) paired with the Redis Object Cache plugin, connected via a Unix socket instead of TCP for the lowest possible latency (see the second sketch after this list).
  3. PHP Layer: It recommends PHP 8.5 with OPcache and JIT (tracing mode) enabled, optimized to keep the runtime entirely in memory.
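For what it's worth, here is a minimal Nginx sketch of point 1, mostly so we can compare it against whatever the developer delivers. It assumes PHP-FPM listening on a local Unix socket and /var/cache/nginx mounted as tmpfs; the domain, paths, sizes, and TTLs are placeholders, TLS is omitted, and the purge block only works if Nginx is built with the third-party ngx_cache_purge module (the stock package likely won't include it):

```nginx
# Sketch only. Goes in the http{} context, e.g. /etc/nginx/conf.d/wordpress.conf (hypothetical path).
# /var/cache/nginx is assumed to be a tmpfs mount so cached pages live in RAM.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m max_size=4g;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;                        # TLS termination omitted from this sketch
    server_name example.com;          # placeholder domain
    root /var/www/html;
    index index.php;

    set $skip_cache 0;

    # Never serve cached pages for POSTs, logged-in users, or the admin area.
    if ($request_method = POST) { set $skip_cache 1; }
    if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") { set $skip_cache 1; }
    if ($request_uri ~* "/wp-admin/|/wp-login.php") { set $skip_cache 1; }

    # Blunt version of "strip fbclid": redirect share links to the clean URL so
    # every Facebook click hits the same cache entry. A map-based rewrite can
    # preserve other query parameters if you ever need them.
    if ($args ~* "(^|&)(fbclid|gclid|utm_[a-z]+)=") {
        rewrite ^ $uri? permanent;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php-fpm/www.sock;   # adjust to the real PHP-FPM socket

        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 200 301 302 10m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        add_header X-FastCGI-Cache $upstream_cache_status;   # lets you verify HIT/MISS from a browser
    }

    # Purge endpoint the Nginx Helper plugin calls; requires ngx_cache_purge.
    location ~ ^/purge(/.*) {
        allow 127.0.0.1;
        deny all;
        fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }
}
```

The X-FastCGI-Cache header is the simplest acceptance test I can run myself: open an article twice in a private window and check that the second response says HIT.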
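Point 2 is mostly a wp-config.php change once Valkey is installed. A sketch, assuming the Redis Object Cache plugin (which should also work against Valkey, since Valkey speaks the Redis protocol); the constant names are the plugin's documented ones, while the socket path is a placeholder that has to match the unixsocket line in valkey.conf:

```php
// Excerpt to add to wp-config.php, above the "That's all, stop editing!" line.
define( 'WP_REDIS_SCHEME', 'unix' );
define( 'WP_REDIS_PATH', '/run/valkey/valkey.sock' );  // placeholder; must match "unixsocket" in valkey.conf
define( 'WP_REDIS_DATABASE', 0 );
define( 'WP_REDIS_PREFIX', 'news_' );                  // keeps keys separate if the instance is ever shared
define( 'WP_REDIS_MAXTTL', 86400 );                    // cap cached objects at one day
```

Point 3, as I understand it, is plain php.ini tuning (opcache.enable=1, opcache.jit=tracing, a non-zero opcache.jit_buffer_size, and a generous opcache.memory_consumption), so there seems to be less to get wrong there; the two layers above matter more because they decide whether PHP and the database run at all for an anonymous page view.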

**I’d appreciate any thoughts or advice you might have on the overall situation, not just the caching side of things. Caching is just what I've managed to study so far, since the AI insisted it was particularly important for this setup.** 😊

clifmo@programming.dev 2 points 3 hours ago

OpenLiteSpeed https://openlitespeed.org/

Host-specific guides (but none for Hetzner):

https://docs.litespeedtech.com/cloud/images/wordpress/

Very easy, robust, fast.

You can definitely roll your own server and solution, but WordPress needs a lot of help. As other commenters said, you need to bypass both the database and PHP as much as possible via caching.

While a simple Redis or Valkey store solves that, you're relying on some integration through the PHP layer to make it happen, usually a plugin.

Serving files or otherwise caching directly through the web server is what's going to make it really fast.

Then there's the question of database writes. Who is writing to your database, where, and how often?

Edit: I see you have editors updating content once or twice an hour. They should re-warm the caches on each update so they're the only ones paying the DB latency cost.
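A minimal sketch of that idea as a WordPress mu-plugin, assuming the Nginx Helper plugin already purges on publish; the hooks and functions are standard WordPress, everything else (file name, which URLs to warm) is just an illustration:

```php
<?php
/**
 * Plugin Name: Prewarm cache on publish (sketch)
 * Drop into wp-content/mu-plugins/. Re-requests the pages readers hit hardest
 * right after a purge, so editors pay the database cost instead of the first visitor.
 */
add_action( 'transition_post_status', function ( $new_status, $old_status, $post ) {
    if ( 'publish' !== $new_status ) {
        return;
    }

    // Pages worth re-warming: the new article, the homepage, and its category archives.
    $urls = array( get_permalink( $post ), home_url( '/' ) );
    foreach ( wp_get_post_categories( $post->ID ) as $cat_id ) {
        $urls[] = get_category_link( $cat_id );
    }

    foreach ( $urls as $url ) {
        // Fire-and-forget GET so saving the post is not slowed down.
        wp_remote_get( $url, array( 'blocking' => false, 'timeout' => 2 ) );
    }
}, 10, 3 );
```

In practice you'd probably delay the warm-up a few seconds (e.g. with wp_schedule_single_event) so it runs after the purge instead of racing it.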