I was hoping you guys could help me with a bit of an out-of-the-ordinary situation. My father, who is getting older and has very little technical knowledge, owns a local news outlet and is in the process of modernizing the whole website and its infrastructure. He is in talks with a local developer (a one-man operation) who has been maintaining everything for the past 5 years to move everything to a new dedicated server and make some much-needed software and design changes. Everything currently runs on an older Hetzner dedicated server, which we decided to upgrade very soon to the Hetzner AX102 (Ryzen 9 7950X3D, 128 GB DDR5 ECC, 2 × 1.92 TB NVMe SSD Datacenter Edition, and a 1 Gbit/s port with unlimited bandwidth).

He has asked me to help him get a favorable outcome, because he is aware that, due to his lack of technical knowledge, he might be taken advantage of, or at the very least the developer will only do the bare minimum because no one will check his work, even though this process is not exactly cheap, at least by our country’s standards.

I have only a basic understanding of what hosting such a site optimally on a dedicated server entails, as this is not my area of expertise. But I am willing to learn in order to help my father, at least to the point where we don’t get scammed and we can take full advantage of the new hardware to make the site load instantly.

More context:

  • The site is based on WordPress, and we plan to keep it that way when we make the transfer. The developer told me he would strongly prefer running AlmaLinux 10 with NGINX for our particular context and will likely use Bricks as a page builder. I would prefer not to change these, since it would likely create unneeded friction with him.
  • There are about 150k–250k average monthly users according to Google Analytics, depending on the time of year and different events, most of them from our area.
  • About 80% of readers are using smartphones.
  • There are a few writers who publish multiple articles daily (20–25 in a 24-hour window). The articles always contain at least text and some images. There’s a strong dependency on Facebook, as most of the readers access those articles from our Facebook page. This might be relevant for caching strategies and other settings.

As for caching, Gemini analyzed my requirements and recommended a tiered “in-memory” caching strategy to handle high traffic at optimal speed without a CDN. Could you validate whether these specific recommendations are sound? I am highly skeptical of AI output.

  1. Page Cache: It suggests mapping Nginx FastCGI Cache directly to RAM (tmpfs). It recommends using ngx_cache_purge with the Nginx Helper plugin to instantly invalidate only the homepage and category pages upon publishing. It also advises stripping tracking parameters (e.g., fbclid) to prevent cache fragmentation (see the Nginx sketch after this list).

  2. Object Cache: It proposes using Valkey (server-side) paired with the Redis Object Cache plugin. The specific advice is to connect them via a Unix socket (instead of TCP) for the lowest possible latency.
  3. PHP Layer: It recommends PHP 8.5 with OPcache and JIT (tracing mode) enabled, optimized to keep the runtime entirely in memory.
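To make the page-cache item concrete, here is roughly how I currently understand it in Nginx terms. This is only a sketch based on the AI's description: the tmpfs path, cache-zone name, PHP-FPM socket, cookie list, and the fbclid regex are all my assumptions, not a validated config.

```nginx
# http {} context. Sketch only: assumes /var/cache/nginx is a tmpfs
# mount and PHP-FPM listens on /run/php/php-fpm.sock (placeholders).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:100m
                   inactive=60m max_size=512m;

# Drop fbclid from the cache key so each Facebook click doesn't create
# its own cache entry (simplified regex, my assumption).
map $args $cache_args {
    default                                    $args;
    "~(?<pre>.*)(^|&)fbclid=[^&]*(?<post>.*)"  $pre$post;
}

server {
    # ... listen, server_name, root, etc. ...

    # Never serve cached pages to logged-in users or comment authors.
    set $skip_cache 0;
    if ($http_cookie ~* "wordpress_logged_in|wp-postpass|comment_author") {
        set $skip_cache 1;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;

        fastcgi_cache wpcache;
        fastcgi_cache_key "$scheme$request_method$host$uri$cache_args";
        fastcgi_cache_valid 200 301 10m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
    }
}
```

(The object-cache and PHP pieces would live in wp-config.php and php.ini, so I haven't tried to sketch those.)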

**I’d appreciate any thoughts or advice you might have on the overall situation, not just the caching side of things. The caching is just what I’ve managed to study so far, since the AI insisted it was particularly important for this setup.** 😊

dan@upvote.au 5 points 3 hours ago (last edited 3 hours ago)

Use a page caching plugin that writes HTML files to disk. I don't do a lot with WordPress any more, but my preferred one was WP Super Cache. Then, you need to configure Nginx to serve pages directly from disk if they exist. By doing this, page loads don't need to hit PHP and you effectively get the same performance as if it were a static site.
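The Nginx side of that is basically a single try_files rule. A rough sketch, assuming WP Super Cache's default supercache directory (check where the plugin actually writes on your install):

```nginx
# Inside the server {} block: serve the pre-built HTML straight from
# disk and only fall back to WordPress/PHP on a cache miss.
location / {
    try_files /wp-content/cache/supercache/$http_host/$uri/index.html
              $uri $uri/ /index.php?$args;
}
```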

See how you go with just that, with no other changes. You shouldn't need FastCGI caching. If you can get most page loads hitting static HTML files, you likely won't need any other optimizations.

One issue you'll hit is highly dynamic content that's generated on the server. You'll need to use JavaScript to load any dynamic bits on the client instead. Normal article editing is fine, as WordPress will automatically clear the related caches on publish.

For the server, make sure it's located near the region where the majority of your users are located. For 200k monthly hits, I doubt you'd need a machine as powerful as the Hetzner one you mentioned. What are you using currently?

rimu@piefed.social 3 points 2 hours ago (last edited 2 hours ago)

This is good advice; listen to dan. WP Super Cache is amazing, although getting it working just right can take some tweaking.

The Redis Object Cache plugin is worth a try. It'll only take a minute to set up.

Is it 200k users or 200k page loads? Those are really different as each user will load multiple pages in a month. If it's 200k page loads then that server is way way too powerful (and expensive). Don't let a crappy developer hide their lack of optimization skills by throwing your money at the problem.

Andres4NY@social.ridetrans.it 1 point 3 hours ago

@dan @goldensw Yes, this. I did almost exactly this (taking over maintenance of an older WordPress site used by a local news org), and it was in rough shape. The config is a bit crotchety (like most things WordPress these days), but we're using WP Fastest Cache to create static HTML pages and a custom Nginx configuration that reads directly off those static pages (without hitting the PHP interpreter) for non-logged-in users. Basically try_files /.../cache/$uri, which falls back to PHP.
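Sketched out, it's something like the below. Not our exact config: the cache path is a guess at WP Fastest Cache's default layout and the cookie list is the usual WordPress set, so treat both as placeholders to verify.

```nginx
# Inside the server {} block. Anonymous visitors get the static file;
# any WordPress auth/comment cookie forces the PHP fallback.
set $cache_uri $uri;
if ($http_cookie ~* "wordpress_logged_in|wp-postpass|comment_author") {
    set $cache_uri "null-cache";   # a path that never exists on disk
}

location / {
    try_files /wp-content/cache/all/$cache_uri/index.html
              $uri $uri/ /index.php?$args;
}
```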

Andres4NY@social.ridetrans.it 1 point 3 hours ago

@dan @goldensw The vast majority of traffic is going to be the first day or week that a new article is published, social media or whatever driving lots of traffic to that same article over and over. Loading the php interpreter each time, even if it's reading cached data, *will* make the site fall over. Static files will not.

Though nowadays there are stupid AI bots doing pathological stuff, so that may become an issue as well and require some further adjustments.
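If that happens, Nginx's built-in per-IP rate limiting is one possible first adjustment. A sketch, where the zone size and rate are arbitrary starting points rather than tuned values:

```nginx
# http {} context: track request rates per client IP.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # Allow short bursts, then throttle aggressive crawlers.
        limit_req zone=perip burst=20 nodelay;
        # ... existing try_files / cache rules ...
    }
}
```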