The newer CPU generations come with a dedicated unit optimized for this kind of workload (referred to as an NPU). It actually seems to work fairly well for the kind of model you'd run locally.
Barring that, a typical laptop dGPU will also work, although not super efficiently, since those cards often don't exceed 8 GB of VRAM and thus can't run most models without partially offloading them to the CPU.
Of course, a laptop with both a dGPU and an NPU will make the offloading less painful. So yeah, workable for most reasonably-sized models.
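To put rough numbers on that 8 GB limit: at ~4-bit quantization, a model's weights take about half a byte per parameter, so a 7B model fits comfortably, a 13B is tight once you add the KV cache, and a 34B has no chance without offloading. A back-of-envelope sketch:

```python
# Rough VRAM needed for quantized model weights (back-of-envelope only;
# runtimes also need headroom for the KV cache and activations).
VRAM_GB = 8
for params_b in (7, 13, 34):
    weights_gb = params_b * 0.5  # ~0.5 bytes/param at 4-bit (Q4) quantization
    verdict = "fits" if weights_gb < VRAM_GB else "needs partial CPU offload"
    print(f"{params_b}B model: ~{weights_gb:.1f} GB of weights -> {verdict}")
```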
Inference runtimes can split a model across a discrete GPU and CPU/RAM.
It's not as fast as when you can load it all into the GPU, but it gives you more options. It's been quite common for a long time.
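For example, llama.cpp exposes this as a simple layer count: you pick how many transformer layers to keep in VRAM, and the rest run on the CPU out of system RAM. A minimal sketch using the llama-cpp-python bindings (the model path is just a placeholder; tune `n_gpu_layers` to whatever fits your card):

```python
from llama_cpp import Llama

# Keep 20 transformer layers on the GPU; the remaining layers are
# evaluated on the CPU from system RAM (partial offloading).
llm = Llama(
    model_path="./models/example.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=20,  # raise until you run out of VRAM, or -1 for all layers
    n_ctx=2048,
)

out = llm("Q: Why offload model layers to the GPU? A:", max_tokens=64)
print(out["choices"][0]["text"])
```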
Yeah, that's what I was referring to by "offloading". Depending on the model and runtime it might be a bit fiddly, but it usually works fine.
I'm apparently just bad at reading the whole message.