this post was submitted on 05 Sep 2025
85 points (96.7% liked)

Technology

74831 readers
3021 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 36 comments
sorted by: hot top controversial new old
[–] DontNoodles@discuss.tchncs.de 3 points 16 hours ago (1 children)

While this is indeed a noble cause, I wonder whether the internet in Antarctica really is slow. A large number of data-receiving stations for polar satellites are located in Antarctica, and they send data to other continents through high-speed fiber lines which are also used for internet.

[–] frongt@lemmy.zip 5 points 15 hours ago (2 children)

It is quite real. The satellite links are like 10 Mbps. Go far enough south and you can't even hit the satellite, because it's below the horizon. There aren't any high-speed polar satellites; companies don't send their satellites that far south because there are too few customers to justify the cost.

That's changing with Starlink, though, since those satellites are in polar orbits.

[–] CheeseNoodle@lemmy.world 3 points 13 hours ago (1 children)

10 Mbps is like average Scotland internet unless you're in a major city.

[–] frongt@lemmy.zip 3 points 13 hours ago

For a household? Yeah that's tolerable. For a couple dozen people living and working, it's tighter.

[–] DontNoodles@discuss.tchncs.de 2 points 14 hours ago (1 children)

My point is that Antarctica is well connected by fiber. Am I mistaken?

[–] frongt@lemmy.zip 3 points 13 hours ago (1 children)

Yes. There are no fiber links to Antarctica. Nor copper. It's all satellite. https://www.submarinecablemap.com/

[–] DontNoodles@discuss.tchncs.de 1 points 12 hours ago (1 children)

I stand corrected. The data from remote sensing satellites downloaded at Antarctic downlink stations is sent back to other countries over geostationary satellite links.

[–] frongt@lemmy.zip 1 points 12 hours ago

Yes. Or, if it's a lot of data, hand-carried on a hard drive.

[–] mesamunefire@piefed.social 23 points 1 day ago* (last edited 1 day ago) (5 children)

I've had this thought for a while.

If humans ever go to other planets, it's going to be VERY hard to keep software up to date without some serious thought and relay stations. The speed of light is a hard restriction.

Lots of devices are designed only for "always on" operation. What happens when it's near impossible to "phone home"?

[–] frongt@lemmy.zip 13 points 1 day ago (1 children)

Local mirrors and caching proxies.

I've worked in an environment like this. We had a local server for Windows and Mac updates, and direct updates were blocked. It's a solved problem; you just need developers to participate.
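
For Linux machines on a network like that, a caching package proxy is one common pattern. A minimal sketch, assuming a Debian-style setup with apt-cacher-ng (the hostname cache.station.local is just a placeholder):

# On the one machine with upstream (slow) connectivity: install the
# caching proxy; apt-cacher-ng listens on port 3142 by default.
sudo apt-get install apt-cacher-ng

# On every client: route apt traffic through the cache, so each package
# crosses the slow link at most once and is served locally afterwards.
echo 'Acquire::http::Proxy "http://cache.station.local:3142";' | \
  sudo tee /etc/apt/apt.conf.d/00local-cache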

[–] mesamunefire@piefed.social 3 points 1 day ago* (last edited 1 day ago) (1 children)

Interesting. The article seems to claim otherwise? How would Paul have fixed the issues he was running into? If you had to do, say, Windows/Mac updates with the current architecture in a remote place, how would that work? I thought updates were required by MS/Apple?

[–] frongt@lemmy.zip 4 points 1 day ago

They're not required if you disable or block them. In an enterprise environment, you deploy a local update server, like I said.

As far as your personal devices are concerned, though, you're on your own. If your iPhone refuses to do something because it wants an update, you'll just have to wait to do that thing until you get home. We don't have the bandwidth to spare.

[–] shortwavesurfer@lemmy.zip 9 points 1 day ago

This is what IPFS is for. Instead of linking to a location that could be far away off-planet, it links to the content, which could very well be cached on-planet or on a closer relay station. Sure, one person has to pull it down from the incredibly far-away place, but once it's been pulled down at least once, everybody else pulls it from the more local copy that that person has. Timeouts will need to be increased, though. Maybe not to some insanely large amount, but they will need to be increased somewhat.
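
For anyone curious, the content-addressed flow with the stock IPFS (Kubo) CLI looks roughly like this; the filename and CID below are placeholders:

# One-time setup on each node, then leave the daemon running.
ipfs init
ipfs daemon &

# Publisher: add a file; IPFS prints its content identifier (CID).
ipfs add big-update.tar.gz

# Any other node: fetch by CID. If a nearby node already has the blocks,
# they are pulled from there instead of from the distant origin.
ipfs cat QmExampleCid > big-update.tar.gz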

[–] Imgonnatrythis@sh.itjust.works 6 points 1 day ago (1 children)

Maybe Microsoft will let us have local accounts again?

[–] ArsonButCute@lemmy.dbzer0.com 4 points 1 day ago (1 children)

Microsoft will absolutely not be making it on the interplanetary scene.

2880 is the year of the Linux desktop?

[–] thatonecoder@lemmy.ca 5 points 1 day ago

This is one of the reasons why I preach against Electron apps and the “storage is cheap” argument. Additionally, it may also be really expensive for people in 3rd world countries to buy storage.

[–] Rentlar@lemmy.ca 2 points 1 day ago

Instead of sneaker-net it will be rocket-net... and at a certain point you need an ~~on-prem~~ on-planet support team to just figure things out.

[–] undefined@lemmy.hogru.ch 13 points 1 day ago (1 children)

This very much bothers me as a web developer. I go hard on conditional GET request support and compression, as well as using HTTP/2+. I’m tired of using websites (outside of work) that need to load a fuckton of assets (even after I block 99% of advertising and tracking domains).

macOS and iOS actually allow updates to be cached locally on the network, and if I remember correctly Windows has some sort of peer-to-peer mechanism for updates too (I can’t remember if that works over the LAN though; I don’t use Windows).

The part I struggle with is caching HTTP. It used to be easy pre-HTTPS but now it’s practically impossible. I do think other types of apps do a poor job of caching things though too.
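
For illustration, the conditional GET idea in the first paragraph can be seen with plain curl, which has built-in ETag helpers in recent versions (the URL is a placeholder):

# First request: save the body and the ETag the server sent with it.
curl --compressed --http2 --etag-save app.js.etag -o app.js https://example.com/app.js

# Later requests: send If-None-Match with the saved ETag; if the asset
# is unchanged the server replies 304 Not Modified and sends no body.
curl --compressed --http2 --etag-compare app.js.etag -o app.js https://example.com/app.js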

[–] frongt@lemmy.zip 6 points 1 day ago (1 children)

Yes, Windows peer to peer update downloads work over LAN. (In theory, I've never verified it.)

HTTP caching still works fine if your proxy performs SSL termination and re-encryption. In an enterprise environment that's fine; for individuals it's a non-starter. In this case, you'd want to have a local CDN mirror.

[–] undefined@lemmy.hogru.ch 1 points 1 day ago (1 children)

I couldn’t get SSL bumping working in Squid on Alpine Linux about a year ago, but I’m willing to give it another shot.

My home router is also a mini PC running Alpine Linux. I do transparent caching of plain HTTP (it’s minimal but it works), but with others using the router I feel uneasy about SSL bumping, not to mention some apps (banks) are a lot stricter about it.
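
For reference, the Squid side of SSL bumping boils down to a handful of directives. A sketch only; the helper path and certificate handling vary by distro, clients have to trust the generated CA, and strict apps are better spliced (passed through) than bumped:

# /etc/squid/squid.conf (excerpt)
http_port 3128 ssl-bump \
    tls-cert=/etc/squid/bump-ca.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB

# Helper that mints per-site certificates signed by the local CA.
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/cache/squid/ssl_db -M 4MB

# Peek at the TLS hello, splice (don't decrypt) picky domains, bump the rest.
acl step1 at_step SslBump1
acl no_bump ssl::server_name .bank.example
ssl_bump peek step1
ssl_bump splice no_bump
ssl_bump bump all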

[–] frongt@lemmy.zip 2 points 1 day ago (1 children)

Yeah, you'll have to have a bypass list for some sites.

Honestly, unless you're actually on a very limited connection, you probably won't see any actual value from it. Even if you do cache everything, each site hosts their own copy of jQuery or whatever the kids use these days, and your proxy isn't going to cache that any better than the client already does.

[–] undefined@lemmy.hogru.ch 1 points 1 day ago* (last edited 1 day ago)

For my personal setup I’ve been wanting to do it on a VPS I have. I route my traffic through a bundle of VPNs from the US to Switzerland and I end up needing to clear browser cache often (web developer testing JavaScript, etc).

each site hosts their own copy of jQuery or whatever the kids use these days

I do this in my projects (Hotwire), but I wish I could say the same for other websites. I still run into websites that break because they try to import jQuery from Google, for example. That would be another nice thing to have cached.

[–] tal@lemmy.today 7 points 1 day ago* (last edited 1 day ago) (2 children)

This low bandwidth scenario led to highly aggravating scenarios, such as when a web app would time out on [Paul] while downloading a 20 MB JavaScript file, simply because things were going too slow.

Two major applications I've used that don't deal well with slow cell links:

  • Lemmyverse.net runs an index of all Threadiverse instances and all communities on all instances, and presently is an irreplaceable resource for a user on here who wants to search for a given community. It loads an enormous amount of data for the communities page, and has some sort of short timeout. Whatever it's pulling down internally (I didn't look) either isn't cached or is a single file, so reloading the page restarts from the start. The net result is that it won't work over a slow connection.

  • This may have been fixed, but git had a serious period of time where it would smash into timeouts and not work on slow links, at least to GitHub. This made it impossible to clone larger repositories; I remember failing to clone the Cataclysm: Dark Days Ahead repository, where one couldn't even manage a shallow clone. This was greatly exacerbated by the fact that git does not presently have the ability to resume an interrupted download. I've generally wound up working around this by git cloning to a machine on a fast connection, then using rsync to pull the repository over to the machine on the slow link (roughly the workflow sketched below), which, frankly, is a little embarrassing when one considers that git really is the premier distributed VCS out there in 2025 and shouldn't need to rely on that sort of workaround.
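
The workaround from the second bullet, roughly; the fast intermediate host ("fastbox") is a placeholder:

# On a machine with a fast connection, clone normally.
ssh fastbox 'git clone https://github.com/CleverRaven/Cataclysm-DDA.git'

# From the machine on the slow link, pull the repository over with rsync.
# --partial keeps partially transferred files, so the copy can resume after
# a dropped connection, which git clone cannot do.
rsync -az --partial --info=progress2 fastbox:Cataclysm-DDA/ Cataclysm-DDA/
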
[–] mesamunefire@piefed.social 3 points 1 day ago (1 children)

I remember there are some timeout flags you can set for curl that you can use in conjunction with git... but it's been nearly a decade since I've done anything of the sort. Modern-day GitHub is fast-ish... but yeah, bigger stuff still hits some big git issues.

Good points! Didn't know about Lemmyverse.net!
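
On the timeout knobs: git's HTTP transport does expose low-speed cutoff settings that can be relaxed for flaky links. A sketch only; the values are arbitrary, and as the discussion further down suggests, the real problem may have been on GitHub's side:

# Only abort if the transfer stays below 1 KB/s for ten minutes.
git config --global http.lowSpeedLimit 1000
git config --global http.lowSpeedTime 600

# The same thing for a single clone, without touching global config.
git -c http.lowSpeedLimit=1000 -c http.lowSpeedTime=600 clone \
    https://github.com/CleverRaven/Cataclysm-DDA.git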

[–] rimu@piefed.social 5 points 1 day ago* (last edited 1 day ago) (1 children)

Didn't know about Lemmyverse.net!

As a PieFed user, soon you won't need to - PieFed instances will automatically subscribe to every community in newcommunities@lemmy.world, so the local communities finder will always have everything you'll ever need.

Coming in v1.2.

Every third-party site hanging around the fringes of Lemmy is a crutch for missing features in Lemmy and an opportunity for PieFed to incorporate its functionality.

[–] mesamunefire@piefed.social 2 points 1 day ago

Cool! That's great. Thanks again!

[–] tal@lemmy.today 2 points 1 day ago* (last edited 1 day ago) (1 children)

A bit of banging away later (I haven't touched Linux traffic shaping in some years), I've got a quick-and-dirty script to set a machine up to temporarily simulate a slow inbound interface for testing.

slow.sh test script

#!/bin/bash
# Linux traffic-shaping occurs on the outbound traffic.  This script
# sets up a virtual interface and places inbound traffic on that virtual
# interface so that it may be rate-limited to simulate a network with a slow inbound connection.
# Removes induced slow-down prior to exiting.  Needs to run as root.

# Physical interface to slow; set as appropriate
oif="wlp2s0"

modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev $oif handle ffff: ingress
tc filter add dev $oif parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit
tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 1mbit

echo "Rate-limiting active.  Hit Control-D to exit."
cat

# shut down rate-limiting
tc qdisc delete dev $oif ingress
tc qdisc delete dev ifb0 root
ip link set dev ifb0 down
rmmod ifb

I'm going to see whether I can still reproduce that git failure for Cataclysm on git 2.47.2, which is what's in Debian trixie. As I recall, it got a fair bit of the way into the download before bailing out. I'm including the script here since I think the article makes a good point that there should probably be more slow-network testing, and maybe someone else wants to test something themselves on a slow network.

It'd probably be better to have something a little fancier that only slows traffic for one particular application (maybe create a "slow Podman container" and match on traffic going to that?), but this is good enough for a quick-and-dirty test.

[–] mesamunefire@piefed.social 3 points 1 day ago (1 children)

Nice! Scientific data!

Also, it looks like it's still an issue with GH in slower countries (https://github.com/orgs/community/discussions/135808). So yeah, never mind, it's still a huge issue even today.

[–] tal@lemmy.today 1 points 1 day ago* (last edited 1 day ago) (1 children)

Thanks. Yeah, I'm pretty sure that that was what I was hitting. Hmm. Okay, that's actually good: it's not a git bug, then, but something problematic in GitHub's infrastructure.

EDIT: On that bug, they say that they fixed it a couple months ago:

This seems to have been fixed at some point during the last days leading up to today (2025-03-21), thanks in part to @MarinoJurisic 's tireless efforts to convince Github support to revisit this problem!!! 🎉

So hopefully it's dead even specifically for GitHub. Excellent. Man, that was obnoxious.

[–] mesamunefire@piefed.social 2 points 1 day ago* (last edited 1 day ago) (1 children)

I wonder if there is a retry option or something in git? I know there is if you write a basic bash script, but we can assume someone else has hit the same issue, right?

I did see some depth=1 option or something like that to only get a certain depth of git commits, but that's about it.

I can't find the curl workaround I used a long time ago. It might have been just pulling the code as a zip or something, like some GH repos let you do.

[–] tal@lemmy.today 2 points 1 day ago

I did see some depth=1 option or something like that to only get a certain depth of git commits, but that's about it.

Yeah, that's a shallow clone. That reduces what it pulls down, and I did try that (you most likely want a bit more, probably to also ask to only pull down data from a single branch), but back when I was crashing into it, that wasn't enough for the Cataclysm repo.

It looks like it's fixed as of early this year; I updated my comment above.
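
For reference, a minimal-download clone along those lines looks something like this (a sketch; --depth already implies a single branch on current git):

# Fetch only the tip of the default branch, with no history.
git clone --depth 1 --single-branch https://github.com/CleverRaven/Cataclysm-DDA.git

# Alternatively, a "blobless" partial clone: full history, but file contents
# are fetched on demand as they are needed.
git clone --filter=blob:none https://github.com/CleverRaven/Cataclysm-DDA.git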

[–] shortwavesurfer@lemmy.zip 3 points 1 day ago

Meshtastic LongFast is a blazing 1.09 kbps, and even ShortFast is ~10 kbps. Wi-Fi 802.11ah (HaLow) can do 4 MHz channels and 16 Mbps max.

[–] Kolanaki@pawb.social 3 points 1 day ago (2 children)

Have they tried sending their packets using pigeons?

[–] owenfromcanada@lemmy.ca 8 points 1 day ago

Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.
-- Andrew S. Tanenbaum

[–] mesamunefire@piefed.social 2 points 1 day ago

Heh! That's such a funny/cool protocol.