Tiresia

joined 1 year ago
[–] Tiresia@slrpnk.net 3 points 3 weeks ago

He is cunning to a tee.

"Hey let's livestream me playing Path of Exile after saying I'm the best in the world, with uncensored live chat from thousands of pseudonymous gamers with actual experience."

He's good at creating the illusion that he's a genius on a subject for the duration of an informal conversation. Steering away from topics he doesn't understand, forging signals of deep understanding by mimicking the speech patterns of an expert who struggles to put things in layman's terms while namedropping memorized keywords, etc.

If you look at Path of Exile and the Cybertruck, it's clear that Elon doesn't know when his promises are unrealistic in a way that will make him look like an idiot. I think he has handlers, not just at SpaceX but everywhere, and those handlers are the real talent. Those handlers know how to cultivate experts that are actually good at their jobs to quietly do the work that Elon takes credit for and how to coach them to make Elon feel good about this arrangement most of the time.

[–] Tiresia@slrpnk.net 2 points 3 weeks ago

If so, the Democrats could act like it by showing what happens when they try to say what they aren't allowed to say.

At which point you could say that the Democrats are owned by the far right, at which point "far right" becomes an impractical phrase to use to distinguish between the likes of AOC and Mamdani and the likes of Trump.

So no, the news media aren't owned by the far right. They are owned by the same people that own the Democrats and Republicans, which have a diverse range of right wing opinions none of which include stopping fascists that got elected through the system that they rely on for their wealth and power.

If the DNC wanted to hammer the Republicans on this, then by the same token the news media would want to let them. But the DNC doesn't want to encourage opposition too much because they know they and their owners would lose massive amounts of money if there was any kind of structural reform.

[–] Tiresia@slrpnk.net 2 points 3 weeks ago

And things ended up this way because in 1776 they had little idea of how their rules were going to play out but they had to choose something to get started and they hoped it would get fixed with time.

[–] Tiresia@slrpnk.net 1 points 3 weeks ago

A divided congress representing a divided people? Say it ain't so.

[–] Tiresia@slrpnk.net 0 points 4 weeks ago

It would be easier to have a satellite in orbit that fires a shotgun at them.

You would need some fancy orbital calculations and precise aiming to make sure the shotgun pellets actually intercept the mirrors, and it would take some engineering to make a shotgun that fires the pellets in a narrow enough cone at high enough velocity to be able to get on an intercept course with most satellites, but you could probably fit it on a Starlink-sized payload. The main issue would be bribing a launch provider to send it up there, but once it's there you could direct it from the ground without it being traceable to you, and you could have it thrust randomly to dodge anti-satellite weaponry until it runs out of shells.
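The "fancy orbital calculations" above start from one number: how fast things move in low earth orbit. A back-of-the-envelope sketch, with all figures illustrative (a Starlink-like shell at roughly 550 km is assumed, nothing here models real intercept geometry):

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371_000.0      # m, mean Earth radius

def circular_orbit_speed(altitude_m: float) -> float:
    """Speed of a circular orbit at a given altitude: v = sqrt(mu / r)."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# A Starlink-like shell at ~550 km moves at roughly 7.6 km/s.
v = circular_orbit_speed(550_000)

# For two circular orbits at the same altitude whose planes cross at 90 degrees,
# the relative (closing) speed is v * sqrt(2), roughly 10.7 km/s - which is why
# pellets only need a small nudge onto a crossing path, not a huge extra velocity.
closing = v * math.sqrt(2)
```

This is why the scheme is cheap in principle: the launch provider supplies nearly all of the kinetic energy, and the "shotgun" only has to place slow pellets into the target's path.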

At some point this would create enough space debris that it could trigger Kessler syndrome, with the debris from destroyed satellites hitting other satellites faster than it de-orbits, until all satellites in low earth orbit are reduced to powder that falls down to earth over a couple of years.

Apart from bribing a launch provider to get the satellite up there, you could probably do either of these - targeted strikes or triggering Kessler syndrome - for under $10 million, most of it R&D. Much cheaper than developing your own surface-to-space missiles.

[–] Tiresia@slrpnk.net 23 points 1 month ago

Oh honey, that hasn't been true since 2008.

The government will bail out companies that get too big to fail. So investors loan money to companies in the hope that those companies become too big to fail, and then when those investors "collect on their debt with interest", the government pays them.

They funded Uber, which lost 33 billion dollars over the course of 7 years before ever turning a profit, but by driving taxi companies out of business and lobbying that public transit is unnecessary, Uber made itself an indispensable part of society, so investors will get their dues.

They funded Elon Musk, whose companies are the primary means of communication between politicians and the public, are replacing NASA as the US government's primary space launch provider for both civilian and military missions, and whose prestige got a bunch of governments to defund public transit to feed continued dependence on car companies. So investors will get their dues through military contracts and through being able to threaten politicians with a media blackout.

And so they fund AI, which they're trying to have replace so many essential functions that society can't run without it, and which muddies the waters of anonymous interaction to the point that people have no choice but to rely only on information that has been vetted by institutions - usually corporations like for-profit news.

The point of AI is not to make itself so desirable that people want to give AI companies money to have it in their life. The point of AI is to make people more dependent on AI and on other corporations that the AI company's owners own.

[–] Tiresia@slrpnk.net 1 points 1 month ago (4 children)

They could stick to unpoisoned datasets for next token prediction by simply not including data collected after the public release of ChatGPT.

But the real progress they can make is that LLMs can be subjected to reinforcement learning, the same process that produced superhuman results in Go, StarCraft, and other games. The difficulty is getting a training signal that can guide them past human-level performance.

And this is why they are pushing to include ChatGPT in everything. Every conversation is a datapoint that can be used to evaluate ChatGPT's performance. This doesn't get poisoned by the public adoption of AI because even if ChatGPT is speaking to an AI, the RL training algorithm evaluates ChatGPT's behavior, treating the AI as just another possible thing-in-the-world it can interact with.

As AI chatbots proliferate, more and more opportunities arise for A/B testing - for example, two different AI chatbots writing two different comments on the same Reddit post, with the goal of getting the most upvotes. While it's not quite the same as the billions of games played against each other in a vacuum that made AlphaGo and AlphaStar better than humans, there is definitely opportunity for training data.
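The upvote-as-reward idea above is essentially a multi-armed bandit. A toy sketch of that loop - every name, parameter, and the reward function are invented for illustration, and this resembles no real training pipeline:

```python
import random

def epsilon_greedy_ab(candidates, get_upvotes, rounds=1000, eps=0.1, seed=0):
    """Toy bandit: repeatedly 'post' one of the candidate comments and
    treat its upvote count as a scalar reward, as in the A/B scheme above.
    With probability eps it explores at random; otherwise it exploits the
    candidate with the best average reward so far."""
    rng = random.Random(seed)
    totals = [0.0] * len(candidates)
    counts = [0] * len(candidates)
    for _ in range(rounds):
        if rng.random() < eps or 0 in counts:
            i = rng.randrange(len(candidates))  # explore (or warm-start)
        else:
            i = max(range(len(candidates)),
                    key=lambda k: totals[k] / counts[k])  # exploit
        totals[i] += get_upvotes(candidates[i])
        counts[i] += 1
    return counts  # the better-received comment accumulates more trials

# Simulated environment: "comment B" reliably earns more upvotes, so the
# bandit learns to post it far more often than "comment A".
counts = epsilon_greedy_ab(
    ["comment A", "comment B"],
    get_upvotes=lambda c: 5 if c == "comment B" else 3,
)
```

The real systems described above would be vastly more complicated (the "arms" are generated on the fly by the model, and the reward feeds a gradient update rather than a counter), but the core signal - audience feedback as reward - is the same.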

And at some point they could find a way to play AI against each other to reach greater heights, some test that is easy to evaluate despite being based on complicated next-token-prediction. They've got over a trillion dollars of funding and plenty of researchers doing their best, and I don't see a physical reason why it couldn't happen.


But beyond any theoretical explanation, there is the simple big-picture argument: for the past 10 years I've heard people say that AI could never do the next thing, with increasing desperation as AI swallows up more and more of the internet. They have all had reasons about as credible-sounding as yours. Sure, it's possible that at some point the naysayers will be right and the technology will taper off, but we don't have the luxury of assuming we live in the easiest of all possible worlds.

It may be true that 3 years from now all digital communication is swallowed up by AI that we can't distinguish from humans, that try to feed us information optimized to convert us to fascism on behalf of the AI's fascist owners. It may be true that there will be mass-produced drones that are as good at maneuvering around obstacles and firing weapons as humans, and these drones will be applied against anyone who resists the fascist order.

We may be only years away from resistance to fascism becoming impossible. We can bet that we have longer, but only if we get something that is worth the wait.

[–] Tiresia@slrpnk.net 1 points 4 months ago* (last edited 4 months ago)

Fair, but social media shows that enshittification doesn't have to result in them charging money. Advertising and control over the zeitgeist are plenty valuable. Even if people don't have money to pay for AI, AI companies can use the enshittified AI to get people to spend their food stamps on slurry made by the highest bidder.

And even if companies have conglomerated into a technofeudal dystopia so advertisement is unnecessary, AI companies can use enshittified AI to make people feel confused and isolated when they try to think through political actions that would threaten the system but connected and empowered when they try to think through subjugating themselves or 'resisting' in an unproductive way.