It's a shame that I assume this article was written by an LLM, prompted and edited by a person, and thus I have little will to even read it.
You could use an AI to summarize it....
I'll see myself out.
To be fair, a lot of the people who believe that have no concept of "shame" in the first place.
It is lazy. It will be sloppy, shoddily made garbage.
The shame is entirely on the one who chose to use the slop machine in the first place.
I laugh at all these desperate "AI good!" articles. Maybe the bubble will pop sooner than I thought.
It's gonna suck. Because of course they're gonna get bailed out. It's gonna be "too big to fail" all over again.
Because "national security" or some such nonsense.
But not all of them, just those who started the whole thing.
It's a working mechanism for clearing the field of potential competition once every decade or two.
Let's admit it, normies investing don't have to know it's a bubble or a scam. They have a right to expect not to be scammed, honestly. The scammers are the ones responsible for the scam, and the majority of those jumping on the hype train are scammed normies. They could have been decent market participants had this hype been shot down earlier. Instead they'll burn. And the big fish to be bailed out are actually interested in this happening, so that they're still around while their competition is not. So those making the bubble will remain. A negative selection.
It honestly seems like a very slow power takeover, done by economic means: concentrate such amounts of power that when it comes out into the open, nobody can do anything. Then it'll be a game with different rules, for which those people might not be well enough prepared. Let's hope Digital Heaven: Global Starvation, a sequel to the esteemed Khmer Rouge: Rice Fields Bitch, this time with smartphones, is not how it plays out.
The way I see it, the usefulness of straight LLM-generated text is inversely proportional to the importance of the work. If someone is asking for text for the sake of text and can't be convinced otherwise, give 'em slop.
But I also feel that properly trained and prompted LLM-generated text is a force multiplier when combined with revision and fact-checking, with that multiplier also varying inversely with your experience and familiarity with the topic.
valid.
If it's not shameful, why not disclose it?
Regardless, I see its use in providing structure for those who have issues expressing themselves competently, but not in providing content. And you should always check all the sources the LLM quotes to make sure it's not just nonsense. Basically, if someone else (or even you yourself, with a bit more time) could've written it, I guess it's "okay".
If it’s not shameful, why not disclose it?
https://en.wikipedia.org/wiki/Threshold_of_originality
If you don't disclose it, you can claim copyright even if you have no right to. LLM-generated code is either plagiarism (though lawmakers have shown they don't care about enforcing copyright on training data, which has funny implications) or public domain, because machine generation is not creative human work.
Copyright and patent laws need to die.
At least copyright is dying because of AI and few people seem to care. You can ask any of the big AI bots to recite Harry Potter. You need to be a bit creative with the questions but entire copyrighted works are in the database. You can bet your ass Windows is being developed these days using Linux code. Not because the developers are copying and pasting the code but because Copilot has been trained on Linux code and absolutely nobody is seeking GPL enforcement.
If that task is offloaded to spicy autocomplete, any and all learning of that skill is avoided, so it's not mega useful.
That presumes that is how people are using AI. I use it all the time, but AI never replaces my own judgement or voice. It's useful. It's not life-changing.
Uh, yes. Yes it is.
Back to in-class essays!
It’s going to be plagiarism so yes, it is.
I’ve asked Copilot at work for word help. I’ll ask it something like: what’s a good word that sounds more professional than some other word? And it’ll give me a few choices and I’ll pick one. But that’s about it.
They’re useful, but I won’t let them do my work for me, or give them anything they can use (we have a corporate policy against that, and yet IT leaves Copilot installed/doesn’t switch to something like Linux).
You could also just use a literal thesaurus. That way, you're using your own mind to choose a synonym rather than plagiarizing an LLM's word choice.
By their nature, LLMs are truly excellent as thesauruses. It's one of the few tasks they're really designed to be good at.
Ha, fuck yeah it is.
Imagine if the AI bots learn to target and prioritize content not generated by AI (if they aren’t already). Labeling your content as organic makes it so much more appetizing for bots.
Bruh, if you write this poorly, maybe do use it? But yes, of course you should acknowledge using it. Readers want to know if they are reading rehashed garbage or original material. Your writing is very poor and AI writing is uninteresting, so either way I guess I wouldn't worry about it too much. If you want to write and be read, work on improving your writing; doing so will go much further than trying to squeeze copy out of an LLM.