So, I've been, in the last few hours, basically uh... distilling or refining the prompt.
Main goal was just to explicitly capture more... basically very specific rules that define, like: this method is deprecated, use this one now, here is a generalized example, here's the old syntax, here's the new syntax, etc.
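To give a concrete example of the kind of old-vs-new rule I mean (this one is the well-known Godot 3.x to 4.x signal change, just a stand-in, not one of the actual rules from my prompt):

```gdscript
extends Area2D

func _ready() -> void:
    # Old, deprecated syntax (Godot 3.x style):
    # connect("body_entered", self, "_on_body_entered")

    # Current syntax (Godot 4.x style):
    body_entered.connect(_on_body_entered)

func _on_body_entered(body: Node2D) -> void:
    print("touched by ", body.name)
```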
Keep running through scripts, keep asking it to do stuff; if it produces things with syntax errors, more or less work through why that is happening, and rework or add to the prompt.
But something I did not expect to make a significant difference was to add a blurb to the top of the prompt that basically says 'be concise and non verbose unless asked to explain a concept or brainstorm a bigger picture, conceptual approach type question'.
So that resulted in it still being able to be conversational and contemplative, but when you're just feeding it code, errors, simple commands?
Well, that makes it stop generating extraneous paragraphs telling you how wonderful your idea is, how happy it is to restate your request, how it's going to do it, etc. etc.
Having it not do that actually significantly sped all this up, haha, as it... apparently devotes a lot of 'brain power' to figuring out how to write dumb fluff intros and outros.
And on a low power rig, it's pretty impactful to strip out the BS.
But that's not an answer to your question, lol.
I am just using the basic Qwen3 model, the 8B variant.
If you close Steam and don't have a browser with 10+ tabs open, this will work, it will run, hasn't blown up (yet).
I did have to, uh, roughly 6x the context size in the settings; that seems to be about optimal for not stalling out on actually reading the entirety of larger inputs, but also not blowing past the actual hardware limits.
So anyway, Qwen3 is a generalized model, and it actually just already understands GDScript circa roughly Godot 4.1, which seems to be around when its training data set was finalized.
I started this all with something like 'tell me everything you know about modern GDScript for Godot' and it basically wrote a structured mini report, which served as an initial template to tweak, update, and revise.
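Just to illustrate what I mean by 'modern' 4.x-style GDScript it already knows out of the box (a made-up sketch, not anything from the actual report):

```gdscript
extends CharacterBody2D

@export var speed: float = 200.0           # annotations instead of the old `export` keyword
@onready var sprite: Sprite2D = $Sprite2D  # likewise `@onready` instead of `onready` (assumes a Sprite2D child)

signal died(cause: String)

func _physics_process(_delta: float) -> void:
    velocity.x = Input.get_axis("ui_left", "ui_right") * speed
    move_and_slide()                            # no longer takes velocity as an argument

func take_hit() -> void:
    sprite.modulate = Color.RED
    died.emit("ouch")                           # instead of `emit_signal("died", "ouch")`
    await get_tree().create_timer(0.5).timeout  # `await` instead of `yield`
    queue_free()
```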
So, my process has been:
It's now 2026, Godot 4.6 is out now (basically); this other online LLM said here are a bunch of things that have changed since 4.1, so make a prompt that basically updates yourself.
As I tried to describe... it's basically a trial-and-error process of 'refining' or maybe 'distilling' the prompt, to get it to be comprehensive but not too wordy, to focus it in without missing critical details.
Thank you so much! This is such great information. I know I could look it up online, but I appreciate your insights. I'll give this a try and see how that goes.
I promised you that was the last question, but I still have one more. Sorry....! Does your LLM modify your code by itself? What do you use for this? Codex or something? (I'm not a total noob in this field, but I certainly don't have enough experience.)
Uh... I copy and paste the code in, after either a simple command, or a question, or something like that, then do:
code
... and then return to comments or questions or directions if I have more.
Or maybe if the code is throwing either syntax or runtime errors, give it the errors.
Then it generates some output with a code block; I examine it, copy and paste it back to wherever, see if it throws syntax errors, see if it runs, see if it broke something, see if the new thing I'm trying to make actually works right, etc.
much more rambling thoughts of someone who hopefully does not have a form of AI psychosis
I don't even know what Codex is.
I've been writing code for decades and I... I still don't really 'get' IDEs, most of the time? They almost always seem like more trouble than they are worth.
I'm just used to much more lightweight editors, or as with the case of Godot, it has a pretty decent code editor / manager window thingy.
This is all an exercise in... optimizing laziness, lol.
But uh yeah, Alpaca is a kind of containerized way of handling LLMs.
The whole idea is that it is self contained, sandboxed.
I am extremely hesitant... to try and like, build my own version of Copilot, that is... insanely potentially dangerous.
So yeah, there are walls between the LLM and the rest of the system, that's the point.
I could try and, like, build an automated workflow, but... it makes mistakes too often, and frankly, it's... basically kinda like partner coding.
I point out mistakes it makes, it points out mistakes I make.
It is a communicative collaborative process, if you're doing anything remotely conceptually complicated.
And you can just tell the thing 'do a sanity check on this code' or ... describe an idea and ask it to critique it, or ask it to ask you questions that it might have.
I basically just treat it as a fellow programmer, you know, another autist, sort of like another normie human, sort of not, lol.
Often, it will kind of... conceptually pigeonhole itself into a particular way of trying to solve a problem.
We'll spend time trying to get this to work, it doesn't, I get frustrated, engage my own actual brain for a bit, realize what it is trying to do is fundamentally nonsensical, propose a different approach.
Sometimes, nope, my idea isn't compatible, and it says as much.
Sometimes it basically has a facepalm moment and says wow, that's a much simpler way to do this, and then we figure it out in like the next 10 minutes.
... This is very much like real coding with other real people, at least in my experience, lol.
Brains together strong, theoretically.
No worries! Sorry, I was thinking of Cursor, not Codex. It essentially opens a side panel inside VS Code from which you can interact with your LLM. I haven't tried it myself, but I have tried Gemini Code Assist and Claude Code.
They'll modify the code for you. Some people are riskier than me and will let them go wild. I, on the other hand, make them write a development plan beforehand, and I review it. If everything looks good, then I'll give the go-ahead.
But I digress. I wouldn't mind manually applying the changes and copy/paste back and forth. That's how I did it previously anyway.
And I get it. I also have decades of coding (since the 80s!), and I resisted for many months the idea of using AI to assist me with something that I enjoy doing most of the time.
Oh! Ok, that makes more sense, yes, I've poked around with Cursor... at least once, at some point?
But yeah, I'll also very often ask the LLM to... draw up some kind of plan, before it makes some larger scale modification, or if we're trying to add something that would have to span across multiple scripts, etc.
You've also been coding longer than I have, hah!
I wasn't even alive for most of the 80s... but I do remember having to actually remember phone numbers, hahaha!
They used to have cords! You were fancy if you could take a phone from the kitchen to the couch, whoah, cordless!
But anyway, yeah, I wouldn't have even considered trying this if it would have wound up being totally reliant on an internet connection, with someone else's computer doing the actual work.
That, quite literally, is how they getcha.