Not even remotely? RSS/ATOM is in no way federated, nor does it have anything to do with the Fediverse.
All this means that government news will be accessible in a standardised format.
"See, no matter how much I'm trying to force this sewing machine to be a racecar, it just can't do it, it's a piece of shit"
Just because there are surface similarities doesn't mean an LLM will perform well when you misuse it. You have to treat it as a tool with a specific purpose. In the case of LLMs, that purpose is to take a bunch of input tokens, analyse them, and output the tokens that are statistically the "best response". The intelligence is in putting that together, not in "understanding tic-tac-toe". Mind you, you can tie in other ML frameworks for tasks they're better suited to - e.g. you can hook up a chess engine (or a tic-tac-toe engine), and it will beat you every single time.
Or an even better example... instead of asking the LLM to play tic-tac-toe with you, ask it to write a Bash/Python/JavaScript tic-tac-toe game, and try playing against that. You'll be surprised.
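To make the "engine beats you every time" point concrete, here's a minimal sketch of what such an engine looks like - a plain minimax search over the tic-tac-toe game tree, written in Python. This is an illustration, not any specific library; all the function names here are made up for the example.

```python
# Perfect-play tic-tac-toe via minimax: exhaustively search the game tree
# and pick the move with the best guaranteed outcome. X maximises, O minimises.
# The board is a flat list of 9 cells: 'X', 'O', or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) with `player` to move; +1 = X wins, -1 = O wins."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        board[m] = player                                   # try the move
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '                                      # undo it
        if best is None or (player == 'X' and score > best[0]) \
                        or (player == 'O' and score < best[0]):
            best = (score, m)
    return best

def best_move(board, player):
    """Convenience wrapper: just return the index of the best move."""
    return minimax(list(board), player)[1]

# X has 0 and 1, so the winning move is square 2:
print(best_move(list("XX OO    "), 'X'))  # 2
```

Because the full tree is searched, this never loses - exactly the kind of narrow, purpose-built tool the comment describes, as opposed to a general token predictor.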
Local inference isn't really the issue. Relatively low-power hardware can already achieve passable tokens per second on medium-to-large models (40b to 270b). Of course it won't compare to an AWS Bedrock instance, but it's passable.
The reason you won't get local AI systems - at least not completely - is the restrictive nature of the best models. Most genuinely good models are not open source. At best you'll get a locally runnable GGUF, but not the original open weights, meaning re-training potential is lost. Not to mention that most of the good, usable solutions tend to be complex interconnected systems, so you're not just talking to an LLM but to a series of models chained together.
But that doesn't mean that local (not hyperlocal, aka "always on your device", but local to your LAN) inference is impossible or hard. I have a £400 node running 3-4b models at lightning speed, at sub-100W (really sub-60W) power usage. For around £1500-2000 you can get a node with similar performance on 32-40b models. For about £4000, you can get one that does the same with 120b models. Mind you, I'm talking about lightning-fast performance here, not merely passable.
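A quick sanity check on why model size dominates those price tiers: decode speed on local hardware is usually memory-bandwidth bound, so a common rule of thumb is tokens/sec ≈ memory bandwidth ÷ bytes of weights read per token. The figures below (quantisation width, bandwidth numbers) are illustrative assumptions, not benchmarks of any specific machine.

```python
# Back-of-the-envelope local inference speed, assuming the decode phase is
# memory-bandwidth bound and the whole model is read once per token.

def est_tokens_per_sec(params_billions, bytes_per_param, bandwidth_gb_s):
    """Rough tokens/sec ~= bandwidth / model size in GB (both assumptions)."""
    model_size_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_size_gb

# A 4b model quantised to ~4 bits (0.5 bytes/param) on ~100 GB/s hardware:
print(round(est_tokens_per_sec(4, 0.5, 100)))   # 50 tok/s - "lightning fast"
# A 40b model at the same quantisation needs ~4x the bandwidth for 20 tok/s:
print(round(est_tokens_per_sec(40, 0.5, 400)))  # 20 tok/s
```

This is why each jump in model class (4b → 40b → 120b) roughly tracks a jump in hardware cost: you're mostly paying for RAM capacity and memory bandwidth, not raw compute.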
And some of the most intelligent people ARE arrogant twits, unfortunately.
Even that won't be anywhere close to the efficiency of neurons.
And actual neurons aren't comparable to transistors at all. For starters, their behaviour is completely different - closer to complex logic gates built from many transistors - and they're multi-pathway and nowhere near as binary as transistors are.
Which is why AI technology needs so much power. We're basically virtualising a badly understood version of our own brains. Think of it like, say, PlayStation 4 emulation: it kinda works, but most details are unknown and therefore don't work well, or at best get a "close enough" approximation of the behaviour, at the cost of more resource usage. And virtualisation will always be costly.
Or, I guess, a better example would be one of the many currently trending translation layers (e.g. SteamOS's Proton, macOS' Rosetta, whatever Microsoft was cooking up for Windows for the same purpose, and kinda FEX and Box86/Box64), versus virtual machines. The latter is an approximation of how AI relates to our brains (and by AI here I mean neural-network-based AI applications, not just LLMs).
Oh no. How will France survive now!
Well, the rich didn't get rich by giving away their money.
To them it's a status symbol - like how Apple products used to be status symbols for the plebeians (aka us).
Having multimillion-dollar supercars, villas, mansions, seven-bedroom apartments, etc., works well, but you're only truly rich if you can show off how you didn't actually lose any value to get those benefits - how you didn't need to spend a single penny of your wealth to live the rich life.
I found most Japanese whisky to be too sweet for my taste. GJ has that balance of smoothness while keeping the sweetness at bay.
Plus I'm not that big of a fan of non-scotch whisky anyway.
By the way, if you want something truly smooth and not too sweet, the cheaper Mackinlay's Shackleton remake (usually around £20 a bottle, which should be around $25-30 in the US) is an amazing choice.
The idea is that "rich people spend their money" - which, let's be honest, is blatantly untrue.
The rich don't spend their own money. They trade favours. Hell, if you have $1mil in hand, you can easily walk into a "rich people bank", open an account, and get free shit every month on top of the very generous interest rates: free concert and theatre tickets, and such. It's like a casino hotel trying to keep you staying with freebies, but on an "I have a few million dollars in your bank" level.
There are two Jack Daniel's products that I do appreciate:
Except every single MacBook you can buy right now (directly from Apple, not second hand) beats pretty much every other device in its price range - unless you go super crazy with the specs and want 128GB RAM with an M5 Max and 8TB of storage.
So it's hardly overpriced.
Sounds like you're the kind of person who needs the "don't put your fucking pets in the microwave" warnings.