this post was submitted on 05 May 2026
991 points (97.8% liked)

Technology

[–] T156@lemmy.world 15 points 1 day ago (1 children)

It's also cheaper, if they can offload a portion to the user's computer.

[–] adespoton@lemmy.ca 5 points 1 day ago (1 children)

Cheaper for them, that is.

What I want to see is throttleable models, kind of like progressive JPEG: the default model is a "nano" one, with a watcher that estimates whether a given task needs more tokens and scales up as needed. If the required resources exceed what the device can handle, it offloads to the cloud (with explicit permission), but only if (and always if) needed. Over time, as the technology improves, larger models would move to the endpoint.

And then people could have a basic slider: on-device only, cloud only, or somewhere in between, based on their preferences.
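The escalation policy described above could be sketched roughly like this (a hypothetical illustration; all names, thresholds, and the complexity/budget scale are made up, not any real API):

```python
def route(task_complexity: float, device_budget: float, slider: float) -> str:
    """Decide where to run a task.

    task_complexity: estimated cost of the task (0.0 to 1.0, hypothetical scale)
    device_budget:   what the device can comfortably handle (same scale)
    slider:          user preference, 0.0 = on-device only, 1.0 = cloud only
    """
    if slider >= 1.0:
        # User wants everything in the cloud.
        return "cloud"
    if task_complexity <= device_budget:
        # The nano/on-device model is enough; no need to escalate.
        return "on-device"
    # Task exceeds the device's capacity: offload only if the user's
    # slider permits any cloud use at all, otherwise degrade gracefully.
    return "cloud" if slider > 0.0 else "on-device (degraded)"
```

The key property is that the cloud is a fallback gated by user preference, not the default.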

[–] T156@lemmy.world 1 points 1 day ago

That's basically model routing, and it has existed for a while. OpenAI's GPT-5 and llama-swap do it, for example: if the task is simple, the router uses a smaller, less intensive model, and only uses the slower, larger one if the task is more complex.

Though most routers tend to pick between models on the same device or service, rather than offloading to a model running elsewhere.
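The routing idea itself is simple enough to sketch (a hypothetical toy, not how GPT-5 or llama-swap actually score prompts; the heuristic, threshold, and model names are all made up):

```python
SMALL, LARGE = "small-model", "large-model"

def estimate_complexity(prompt: str) -> float:
    """Crude, cheap proxy for task difficulty (hypothetical heuristic)."""
    score = min(len(prompt) / 2000, 1.0)  # longer prompts tend to be harder
    # Markers that often signal code or reasoning-heavy requests.
    if any(marker in prompt for marker in ("```", "prove", "derive", "refactor")):
        score += 0.5
    return score

def pick_model(prompt: str, threshold: float = 0.5) -> str:
    """Dispatch simple prompts to the small model, hard ones to the large one."""
    return LARGE if estimate_complexity(prompt) >= threshold else SMALL
```

Real routers typically use a small classifier model rather than string heuristics, but the dispatch structure is the same.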