Posted to Technology on 17 Dec 2025 · 574 points (97.8% liked)
Am engineer. Know zero professional people in the engineering community who use AI browsers, and very few who even touch AI for anything aside from docs or stats.
In my personal life I know zero people who use these browsers. I think this is just panic from the higher ups at Mozilla who have no idea what in the fuck the company should be doing or is about, even.
Start making tools to give to people to combat this bullshit from the EU. Build a USABLE and decentralized chat app that people can actually use FFS. Build something like Proton and ACTUALLY BECOME SELF-SUFFICIENT.
Others have eaten your lunch because of this exact thing. Do better.
There’s another possibility I don’t see anyone talking about. It could just be the higher ups at Mozilla doing the old performative “we’re doing AI” dance for their shareholders and the investment community. Everyone assumes they are 100% sincere about embracing AI but this could simply be them paying the AI tax that all companies seem required to pay right now.
If this is plausible, then we should just wait for it to manifest as actual feature changes and then judge. Right now this is just high level messaging and PR.
If you've been paying attention to their other random products, this seems unlikely.
They just jump from random thing to random thing and collect money along the way, draining the coffers with their C-level titles. Absolute bullshit.
As someone who started their career as a volunteer at Mozilla and was fortunate enough to become an employee (though I no longer am), I can say with a fair amount of confidence that this has been their standard operating mode for over a decade. Nothing I've seen from them since I was let go has shown me they're operating any differently.
I still support Firefox because I oppose a browser monoculture owned by Google, and the advocacy work the Foundation does is vitally important. The Corporation lost the plot ages ago though, and does more harm to Mozilla's mission than any other player out there. No amount of re-orgs or pivots can fix this.
I hope, someday, for Firefox to be freed from the Corporation as a sustainable, community-run project (like Debian), with infrastructure sponsored by the Foundation and others who want to see it continue. Unfortunately the Corporation will never let Firefox go, because it's existential for them, and it will be stuck in this panic cycle for as long as Google keeps them on life support.
Anyway, still using Firefox and pruning all the weeds from it each release, but it’s become exhausting.
The main use for AI that I've seen in my circles is a search engine replacement. Not because AI is a good search engine, but because search engines have largely become useless.
If Mozilla wants to cement their place, create a better search engine. It's how Google came to control a huge portion of the internet, and there's now a huge vacuum waiting for someone to replace what we lost.
Exact same thing with anyone I know who uses it. You used to be able to type questions into search engines, now it picks one word from that question and gives you slop results.
Why, you didn't want all of the top results to be scams?
AI search is useless for the same reason search engines are useless. But at least search engines force you to look at the source information and the context around it. So AI search is even more useless.
Making a better search engine solves nothing. There are several dozen of them already but Google remains on the top for a variety of reasons, including continued anticompetitive behavior and overwhelming consumer apathy. Most of the other ones aren't sustainable without using the same shady advertising Google is using. Kagi being the exception. Mozilla could definitely offer a similar paid solution.
I feel stupid for asking but what is an AI agentic browser even supposed to do? Search things based on your query? Well search bars have been a thing since forever. 🤷
Not even translation? That’s probably the biggest browser AI feature.
Similarly, translating between html/QML or js/py/rust is handy.
It's still a pain because even good models like Opus are hit or miss. The code still has to be reviewed and adapted. Can save time though.
They are also very useful for mocking up a quick proof of concept.
Is X doable? Will Y potentially solve the problems my clients need me to solve? Mock it up in two seconds with a few prompts and a language model, and you don't have to take a stroll down a garden path.
The actual work I still have to do but that's why I'm paid to do it.
Translation is my main use. Yes, the caveat that AI is 50/50 wrong is still there but at least I don't have to pester friends that know the language for everything. I only use it for unimportant things.
To be fair, it's way better than 50/50, but of course no guarantees still.
It gets the job done well enough for me to understand the gist, yeah. But mostly I only do short posts. A language like Japanese makes it a lot harder, from what I understand from friends who learned the language. IIRC it's because the language relies on unspoken context, and of course its grammar makes machine translation trip up.
From what I know, Kanji can help with removing some ambiguity.
The key to responsible AI use. Of course, in the grand scheme, few things are all that important.
If the marginal cost of being wrong about something is essentially zero, AI is a very helpful resource due to its speed and ubiquity.
I'd like to run a local LLM with ethical models some day, if such models exist.
It's not panic, it's a consequence of networking and a very specific culture that has formed among CEOs and such.
A bit like Silicon Valley tech bros, they think they are the chosen ones, leading the charge and able to make decisions for all of us, a sort of aristocracy.
So in their circles it's the fashion now to play at this "AI" thing.
And the mechanisms to remove those fools from places they don't belong and make them clean streets have rotted.
Usable and decentralized - well, you'll need some beyond-the-horizon planning for how the development of that will go. Because the 90s Web was kinda normal too, except there were future stages ahead of it.
You need something that's usable almost from the beginning, but that is also usable for everything you haven't yet thought about. Something that allows any use, but doesn't rule out any task, even one needed only by a handful of people.
You need universal open infrastructure. Something that lets you pool public tracker services, storage services, relay services, notification services, key services, and search services, but ties them into specific applications on the client. Different applications, over a common high-level medium (of authors and messages and groups, for example; perhaps subscriptions). And you need it to be untrusted, and backed up by DHT and sneakernet as perfectly functional alternative ways of running the same system. You need them all.
And you need means of development with a higher common baseline. You need something like HyperCard on the clients, so that development in this "alternative Web" is accessible in its full power, with "cards" shared like messages. That'd be similar to how we fetch different websites.
Messages and people and groups would have global identifiers, tied to cryptography. One could have sort of "permission rule" messages, to be interpreted by clients to decide, during "replaying" a group with its messages, which actions were valid and which weren't, and what this specific user can do to the group at this specific moment.
There could be different types of messages, perhaps with references to "interpreter" messages containing scripts.
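To make that a bit more concrete, here's a rough TypeScript sketch of the kind of thing I mean: signed messages with global identifiers, and a replay step where "permission rule" messages decide which later messages count. All of the names and the toy rule format below are made up for illustration, this isn't any existing protocol.
```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// A message's global identifier is just a hash of its signed payload,
// so any client can derive it regardless of where the message came from.
interface Message {
  id: string;          // sha256 over payload + signature
  author: string;      // author's public key (PEM), doubling as their global identity
  group: string;       // global identifier of the group this message belongs to
  kind: "post" | "permission-rule" | "interpreter"; // message types as described above
  body: string;        // content, or a rule/script for the non-"post" kinds
  signature: string;   // signature over { group, kind, body }
}

function makeMessage(
  keys: { publicKey: string; privateKey: string },
  group: string,
  kind: Message["kind"],
  body: string,
): Message {
  const payload = Buffer.from(JSON.stringify({ group, kind, body }));
  const signature = sign(null, payload, keys.privateKey).toString("base64");
  const id = createHash("sha256").update(payload).update(signature).digest("hex");
  return { id, author: keys.publicKey, group, kind, body, signature };
}

// "Replaying" a group: walk its messages in order and let permission-rule
// messages decide which later actions are valid. The rule format here is a
// toy placeholder (a comma-separated allow-list of author keys).
function replay(messages: Message[]): Message[] {
  let allowed: Set<string> | null = null; // null means everyone is allowed
  const accepted: Message[] = [];
  for (const m of messages) {
    const payload = Buffer.from(JSON.stringify({ group: m.group, kind: m.kind, body: m.body }));
    const validSignature = verify(null, payload, m.author, Buffer.from(m.signature, "base64"));
    const permitted = allowed === null || allowed.has(m.author);
    if (!validSignature || !permitted) continue; // invalid or unauthorized: dropped on replay
    accepted.push(m);
    if (m.kind === "permission-rule") allowed = new Set(m.body.split(","));
  }
  return accepted;
}

// Example: generate an identity and post to a hypothetical group.
const keys = generateKeyPairSync("ed25519", {
  publicKeyEncoding: { type: "spki", format: "pem" },
  privateKeyEncoding: { type: "pkcs8", format: "pem" },
});
const post = makeMessage(keys, "group:example", "post", "hello");
console.log(replay([post]).length); // 1
```
The group's state is then just whatever falls out of replaying its messages under the rules that were in force at each point.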
OK. That's just a pet dream of mine, but I don't yet have a full picture in my mind.
Clients could then be overwhelmed with masses of invalid messages from bad actors.
I don't know how safe that could be, but deltachat does something like that.
It could be a shared responsibility to filter these out, similarly to email spam.
Both by users and by relay/storage services.
I miss HyperCard.
What about all those ladder climbers who want to sound like they're tapped into the pulse of cutting-edge technology to the bosses? I work with engineers and it seems to be pretty split between full adoption and full rejection.
LLMs aren't going to make you good at your job.
If you lacked the skills coming in and relied on this bullshit, you'll suck even more going out, when they figure out you can't have a conversation about the thing you were hired to be an expert on, buddy.
Good luck to you.
I'm genuinely confused by your reply. I wasn't referring to ladder climbers in a positive light. I see them shoehorning AI into pointless projects that dazzle the bosses, because they don't know any better or because they want to dazzle their own bosses with more mumbo jumbo derived from their own reports.
Small LLMs could be useful in-browser for automating actions - e.g. rejecting all cookie/tracking popups. Consent-o-matic only works for half the sites I encounter and doesn't support mobile.
Security, however, is another rabbit hole.
Yeah, no. LLMs are known to be untrustworthy, so they need a validation step, which means they aren't a great fit for any automation you don't look at... unless you don't really care about the outcome.
What would work here is a browser API for cookie settings: you set your preferences in the browser and sites check the browser for them. I don't think this is likely to happen, because people with influence and money in tech wouldn't be able to point at how annoying the modals are and say "Look, X government is doing something we don't like, so you should be angry and not trust them."
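Hypothetically it could be as simple as the sketch below. To be clear, no such API exists and every name in it is made up; the closest real signals are Do Not Track and Global Privacy Control, which only carry a single opt-out bit.
```typescript
// Hypothetical browser API: the user configures this once in browser settings,
// and sites read it instead of showing a consent modal. Nothing like this is
// standardized today; navigator.cookiePreferences is invented for this sketch.
interface CookiePreferences {
  necessary: boolean;   // always true; required for the site to function
  functional: boolean;
  analytics: boolean;
  advertising: boolean;
}

declare global {
  interface Navigator {
    cookiePreferences?: CookiePreferences; // made-up property, only for this sketch
  }
}

// Site-side code: respect the browser-level preference if present,
// and only fall back to a modal when the signal is missing.
function shouldLoadAnalytics(): boolean {
  const prefs = navigator.cookiePreferences;
  if (prefs) return prefs.analytics;
  return askUserWithModal(); // legacy path: the banner everyone hates
}

function askUserWithModal(): boolean {
  // Placeholder for the existing consent-banner flow.
  return false;
}

export {};
```
The whole point is that the preference lives in one place the user actually controls, instead of being re-asked on every site.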
Consent-o-matic does too support Firefox mobile! What makes you think it doesn't?
The mobile sites I visit don't have the cookie banners auto-dismissed.
Curious. There are certain ones it doesn't work on, both on desktop and mobile, but works as normal other than that. Maybe check your settings?
LLMs are useful for summarization. That is it.
How often are you needing a summary of the thing that you're browsing at the moment?
You could try Super Agent on Firefox. Though they only give you 40 free pop-ups before you have to pay, either a subscription or a one-time payment.
It worked really well for me, and I didn't realize it was doing its thing until I quickly hit the 40 pop-up limit.
"Am engineer". This is reddit level cringe stuff. There are tons of engineers, we're not special and most of us are equally dumb. Its funny you mention proton when they've made pro-***** statements and then trying to stay neutral in the blowback. "AI" has its uses like you said, in docs and stats. Firefox will NEVER be self-sufficient because they exist on funding from Google to exist as their only competition to not be a browser monopoly. As much as we hate it, there is a complicated line to be towed here. Mozilla isn't perfect, but they're far from an enemy here. The Firefox forks we love so much won't exist without this