artwork

joined 1 month ago
[–] artwork@lemmy.world -1 points 2 days ago* (last edited 2 days ago)

If you want to realize just how much room for misunderstanding there is in the current conversation, and in shell scripting in general, please do consider joining #bash at Libera IRC. Please do also mention the word "throwaway" in the rooms! There is still literally no understanding of what you mean, sorry. It does not feel like you have a deep enough understanding of the subjects raised.

For a very simple example, there is literally no documentation for certain cases you will hit even in Bash's built-ins, such as the read built-in, unless you actually run into them or learn them from Bash's very source code. Not to mention the shenanigans in shell logic around inter-process communication (IPC), file descriptors, environment variables like PWD, exported functions' BASH_FUNC_ variables, pipes, etc.
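
To make that concrete, here is a minimal sketch of my own (not from any of the threads above, just an illustration) of two such behaviours: read without -r silently eating backslashes, and an exported function landing in the environment under a mangled BASH_FUNC_ name.

```bash
#!/usr/bin/env bash

# read without -r treats backslashes as escape characters and drops them.
printf 'a\\tb\n' | { read line;    echo "without -r: $line"; }   # -> without -r: atb
printf 'a\\tb\n' | { read -r line; echo "with -r:    $line"; }   # -> with -r:    a\tb

# An exported function is passed to child processes as an environment
# variable whose name is mangled (BASH_FUNC_<name>%% in recent bash versions)
# and whose value begins with "() {".
greet() { echo hello; }
export -f greet
env | grep '^BASH_FUNC_greet'   # e.g. BASH_FUNC_greet%%=() {  echo hello
```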

[–] artwork@lemmy.world 0 points 2 days ago* (last edited 2 days ago) (4 children)

I still don't get what you mean, sorry. And why Bash and not another shell?
Why not Korn, Ash, Dash, Zsh, Fish, or anything with a REPL, including PHP, Perl, Node, Python, etc.?

Should we consider "throwaway" anything that supports an interactive mode in whatever daily-driver shell you chose for your default terminal prompt?
What does "throwaway" code mean in the first place?

[–] artwork@lemmy.world 4 points 3 days ago* (last edited 3 days ago) (6 children)

I am sorry, but I am not sure what makes you so certain about how Bash "was designed" or not. Perhaps you haven't written anything serious in Bash yet...
Have you at least checked out Bash Pitfalls at Wooledge?
Bash, like most shells, including plain POSIX sh, or even Perl, is among the easiest languages out there to make a mistake in... there is no compiler to protect you, and readline, legendary as it is, can send the whole terminal flying, depending on the terminal/terminfo involved...
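
For a concrete taste of what that page catalogues, here is a minimal sketch of my own (not from the conversation) of the single most common pitfall: an unquoted variable expansion is word-split and glob-expanded before the command ever sees it.

```bash
#!/usr/bin/env bash
file='my song.mp3'              # an ordinary filename containing a space

# Unquoted: word splitting turns one filename into two separate arguments.
printf '<%s>\n' $file           # prints <my> and <song.mp3>

# Quoted: one argument, as intended.
printf '<%s>\n' "$file"         # prints <my song.mp3>
```

Now imagine the command is rm instead of printf, and there is no compiler or type system standing between you and the mistake.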

No, sorry. I absolutely disagree with your stance on "shell" being a "bugless" "huge deal" in "real cases".

[–] artwork@lemmy.world 21 points 4 days ago* (last edited 4 days ago)

It's rather sad when such extensions don't link to their source code in their descriptions.
Meanwhile, Google Chrome extensions are updated automatically.
Sorry, I won't install it, since I have no trust in its source, and I value my time too much to re-check it on every update.

[–] artwork@lemmy.world 0 points 4 days ago* (last edited 4 days ago) (1 children)

No, thank you. Sorry, never.

Not only that, but the sheer probability of mistakes is deafening. The last time I used an LLM was in 2023, when someone recommended one for a paperwork task, and I got a literal headache within 10 minutes... Since then I have sworn never to use that sorrow for anything other than black-box pentesting, or the kind of experimental, unverified generated data you might find in isolated medical or military solutions.

That deafening feeling that every single bit of output from that LLM, that void machine, may contain a mistake no soul is accountable for and no one can be asked about... A generated fragment of someone's work you simply cannot verify, since neither a source nor a human is available... How would you trace the rationale that produced the output you are shown?

Faster? Is that so... Doesn't verifying every output take even more time: to test it until you can consider it stable, to prove it correct, to stay accountable for the knowledge and the actions you take as a developer, artist, researcher... a human?

Your mind is meant to be trained to research and to remember, not to depend on someone else's service to the point of predominance or replacement.
The effort, passion, creativity, empathy, and love you carry, in turn, are what sustain you in the long term.

You may not care now, and that's fine, you do you. But it is your own mind and memory you are developing.

[–] artwork@lemmy.world -1 points 4 days ago

Thank you! You do you.

[–] artwork@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (1 children)

Thank you, but I do disagree. You cannot know whether that LLM's "result" includes all the required context, and you will not ask for clarification, because the output has already left out what was relevant; in the end you miss the knowledge and waste the time, too.

How can you be sure the output includes what is relevant? Would you ever re-submit the question to an algorithm without even knowing a re-submission is required, since there is no indication of it? That is, the LLM simply did not include what you needed, did not include the important context surrounding it, and did not even tell you which authors to question further - no attribution, no accountability, no sense, sorry.

[–] artwork@lemmy.world 0 points 1 week ago

An unbearable pain and sorrow starts taking over my head, my heart, even my mind and soul... whenever I so much as think of the moment of realizing that something I had focused on, trying to imagine and grasp the idea the artist wanted to express, actually had no human behind it at all, only void...

This void is not a reason I dignify with discovery, nor will I ever want to... it is not the reason I live... Art is something between humans, passing ideas and love through the universe, I trust...

Thank you... dear Artists... from the very depths of my heart and soul... for creating miracles... for believing in the infinitely, ineffably magnificent... in purpose... in humanity... in art...

The waves are coming clean...
Crash your breach and roll the tide that dreams...
The tide that dreams tomorrow...
Carry on and cover me...
The storm will come and it brings relief...
~ Devil's Eyes - Calisus

---

- The most important thing in communication is hearing what isn’t said. ~ Peter Drucker
- The art of conversation is the art of hearing as well as of being heard. ~ William Hazlitt
- The purpose of art is the lifelong construction of a state of wonder. ~ Glenn Gould
- The purpose of art is the fight for freedom. ~ Ai Weiwei
- The purpose of art is to stop time. ~ Bob Dylan
- The purpose of art is mystery. ~ Rene Magritte

---

Sorry. Just paralyzed by the indescribable beauty of the cosmos. We'll get to work. ~ Daniel Suarez

Related: Chinese room (...argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave...)

[–] artwork@lemmy.world 11 points 1 week ago* (last edited 1 week ago) (5 children)

Please no. Absolutely not. An LLM is absolutely not "nice for dealing with confusion"; it is the very opposite.
Please do consider people's effort, their articles and attributions, and actually learning and organizing your own knowledge. Please do train your mind, and your self-confidence.

[–] artwork@lemmy.world 21 points 1 week ago* (last edited 1 week ago) (1 children)

Thank you, but I am sorry, I will not read the output of the LLM. I'll re-check the grammar manually.
