Powderhorn


It's no secret that much of social media has become profoundly dysfunctional. Rather than bringing us together into one utopian public square and fostering a healthy exchange of ideas, these platforms too often create filter bubbles or echo chambers. A small number of high-profile users garner the lion's share of attention and influence, and the algorithms designed to maximize engagement end up merely amplifying outrage and conflict, ensuring the dominance of the loudest and most extreme users—thereby increasing polarization even more.

Numerous platform-level intervention strategies have been proposed to combat these issues, but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it's not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media. So we're probably doomed to endless toxic feedback loops unless someone hits upon a brilliant fundamental redesign that manages to change those dynamics.

Co-authors Petter Törnberg and Maik Larooij of the University of Amsterdam wanted to learn more about the mechanisms that give rise to the worst aspects of social media: the partisan echo chambers, the concentration of influence among a small group of elite users (attention inequality), and the amplification of the most extreme divisive voices. So they combined standard agent-based modeling with large language models (LLMs), essentially creating little AI personas to simulate online social media behavior. "What we found is that we didn't need to put any algorithms in, we didn't need to massage the model," Törnberg told Ars. "It just came out of the baseline model, all of these dynamics."
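The preprint itself isn't reproduced here, but the setup is easy to picture. Below is a minimal, purely illustrative sketch of that kind of simulation in Python: a handful of agents with an assumed persona trait posting into a plain reverse-chronological feed, with a decide() stub standing in for the LLM call the researchers actually made. The trait, the probabilities, and the measured quantities are my assumptions, not the authors' model; the point is only to show the shape of an LLM-persona agent-based simulation with no ranking algorithm in the loop.

```
# Purely illustrative sketch of an LLM-persona agent-based feed simulation.
# The persona trait, probabilities, and decide() stub (standing in for a real
# LLM call) are assumptions for illustration, not the authors' actual model.
import random
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    leaning: float                       # assumed persona trait, -1.0 to 1.0
    following: set = field(default_factory=set)

def decide(reader, post):
    """Stand-in for prompting an LLM with the reader's persona and the post.
    Returns (repost?, follow author?). Toy heuristic: agreement with the post
    and the post's extremity both make a repost more likely."""
    agreement = 1.0 - abs(reader.leaning - post["leaning"]) / 2.0
    extremity = abs(post["leaning"])
    repost = random.random() < 0.5 * agreement + 0.3 * extremity
    follow = repost and random.random() < agreement
    return repost, follow

def simulate(n_agents=50, steps=400, readers_per_post=10, seed=1):
    random.seed(seed)
    agents = [Agent(f"user{i}", random.uniform(-1, 1)) for i in range(n_agents)]
    reposts = Counter()
    for _ in range(steps):
        author = random.choice(agents)
        post = {"author": author.name, "leaning": author.leaning}
        # Plain reverse-chronological delivery: no ranking algorithm anywhere.
        for reader in random.sample(agents, readers_per_post):
            if reader.name == author.name:
                continue
            repost, follow = decide(reader, post)
            if repost:
                reposts[author.name] += 1
            if follow:
                reader.following.add(author.name)
    return agents, reposts

if __name__ == "__main__":
    agents, reposts = simulate()
    total = sum(reposts.values())
    top_decile = sum(c for _, c in reposts.most_common(len(agents) // 10))
    print("Most-reposted accounts:", reposts.most_common(5))
    print(f"Share of all reposts held by the top 10% of accounts: {top_decile / max(total, 1):.0%}")
```

The final print is just a crude attention-inequality readout: what share of all reposts ends up with the most-reposted tenth of accounts, the kind of concentration the article describes emerging without any engagement algorithm.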

 

We already live in a world where pretty much every public act - online or in the real world - leaves a mark in a database somewhere. But how far back does that record extend? I recently learned it goes back much further than I'd ever seriously imagined.

On my recent tour of the United States (making it through immigration checks in record time, thanks to facial recognition), I caught that bug, the same one that brought the world to a halt half a decade ago. But I caught it early, so I knew that I could probably get some treatment.

That led to a quick trip to an 'Urgent Care' - the frontline medical center for most Americans. At the check-in counter, the nurse asked to see some ID, so I handed over my Australian driver's license. The nurse looked at the license, typed some of the info from it into a computer, then looked up at me and asked: "Are you the same Mark Pesce who lived at...?" and proceeded to recite an address I resided at more than half a century ago.

Dumbstruck, I said, "Yes...? And how did you know that? I haven't lived there in nearly 50 years. I've never been in here before - I've barely ever been in this town before. Where did that come from?"

"Oh," they replied. "We share our patient data records with Massachusetts General Hospital. It's probably from them?"

I remembered having a bit of minor surgery as an 11-year-old, conducted at that facility. 51 years ago. That's the only time I'd ever been a patient at Massachusetts General Hospital.

Good thing we're paying for all these data centers!

 

I was already familiar with the concept, but this is by far the best behind-the-scenes explanation I've seen.

 

The job market is queasy, and since you're reading this, you need to upgrade your CV. It's going to require some work to game the poorly trained AIs now doing so much of the heavy lifting. I know you don't want to, but it's best to think of this as dealing with a buggy lump of undocumented code, because frankly that's what stands between you and your next job.

A big reason so many AIs are biased is that they are trained on the way things are, not the way we'd like them to be. And because they are just expensively trained statistics, your new CV needs to give them the words most commonly associated with the job you want, not merely the correct ones.

That's going to take some research and a rewrite to make your CV look like the ones the AI was trained to match. You need to add synonyms and dependencies, because the AIs lack any model of how we actually do IT; they only see correlations between words. One would hope a network engineer knows how to configure routers, but if you just say "Cisco", the AI won't give it as much weight as when you mention both "Cisco" and "router". Nor can you assume it will work out that you actually did anything to the router, database or code, so you need to say explicitly what you did.

Fortunately, your CV does not have to be easy to read out loud, so there is mileage in spelling out the full names of the more relevant tools you've mastered. Awful phrases like "configured Fortinet FortiGate firewall" are helpful if you use them once, as is working all three F-words in elsewhere. This works well against the old-fashioned simple buzzword matching that is still widely used, roughly sketched below.
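To see why this helps, it's worth remembering how crude that buzzword-matching stage usually is: at its simplest, the screener just checks whether each of the job ad's keywords literally appears in the CV text. A toy sketch in Python follows; the keyword list, example sentences, and scoring are invented for illustration and don't describe any real screening product.

```
# Toy illustration of simple buzzword matching: count how many of the job
# ad's keywords literally appear in the CV text. The keyword list and the
# example CV sentences below are invented for this example.
def keyword_score(cv_text, keywords):
    text = cv_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

job_keywords = ["Cisco", "router", "Fortinet", "FortiGate", "firewall"]

cv_a = "Managed the Cisco estate for a mid-sized ISP."
cv_b = "Configured Cisco routers and Fortinet FortiGate firewalls for a mid-sized ISP."

print(keyword_score(cv_a, job_keywords))   # 1 -- 'Cisco' alone
print(keyword_score(cv_b, job_keywords))   # 5 -- names the kit and what was done to it
```

The second sentence scores on every keyword only because it spells out the product names and says what was actually done to them, which is exactly the rewrite being recommended.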

This is all so fucked.

 

As always with Zitron, grab a beverage before settling in.