katsura

joined 3 weeks ago
One of the slides Bahran showed during the presentation attempted to quantify the amount of human involvement these new AI-controlled power plants would have. He estimated less than five percent “human intervention during normal operations.”

“The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE's strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities,” Heidy Khlaaf, head AI scientist at the AI Now Institute, told 404 Media.

“The implications of AI-generated safety analysis and licensing in combination with aspirations of <5% of human intervention during normal operations, demonstrates a concerted effort to move away from humans in the loop,” she said. “This is unheard of when considering frameworks and implementation of AI within other safety-critical systems, that typically emphasize meaningful human control.”

Hmm

[–] katsura@leminal.space 3 points 5 days ago (1 children)

thank you for the lengthy response. I have some experience with vectors but not necessarily piles of them, though maybe I do have enough of a base to do a proper deep dive.

I think I am grasping the idea of the n-dimensional "map" of words as you describe it, and I see how the input data can be taken as a starting point on this map. Where I get confused is the walking of the map. Is this walking, i.e. the "novel" connections of the LLM, simply mathematics dictated by the respective values/locations of the input data, or is it more complicated? I have trouble conceptualizing how just manipulating the values of the input data could lead to conversational abilities. Is it that the mapping is just absurdly complex, like the vectors having an astronomical number of dimensions?
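For what it's worth, here is a toy sketch of the "map" idea. The vectors below are made up and only 3-dimensional (real embedding models use hundreds to thousands of dimensions), but the mechanics are the same: each word is a point, and "closeness" on the map is measured by the angle between vectors.

```python
import math

# Toy 3-dimensional "word map". These vectors are invented for
# illustration; real embeddings are learned from data and have
# hundreds to thousands of dimensions.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.5, 0.9],
    "pear":  [0.2, 0.5, 0.8],
}

def cosine(a, b):
    # Similarity as the angle between two vectors:
    # 1.0 means "pointing the same direction", 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest(word):
    # One "step" on the map: find the closest other point.
    return max((w for w in embeddings if w != word),
               key=lambda w: cosine(embeddings[word], embeddings[w]))

print(nearest("apple"))  # pear is the closest point to apple here
```

So "walking" is just arithmetic over these coordinates; the surprising conversational ability comes from the sheer scale and learned structure of the map, not from any extra mechanism.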

[–] katsura@leminal.space 2 points 5 days ago (2 children)

thank you for your response, the cauliflower anecdote was enlightening. Your description of it being a statistical prediction model is essentially my existing conception of LLMs, but that came only from gleaning others' conceptions online, and I've recently been concerned it was an incomplete simplification of the process. I will definitely read up on Markov chains to try and solidify my understanding of LLM 'prediction'.
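In case it helps anyone else reading, a Markov-chain text predictor can be sketched in a few lines: count which word follows which in some text, then "predict" by picking the most frequent follower. The corpus below is made up for illustration; an LLM is enormously more sophisticated, but this is the statistical-prediction core in miniature.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# First-order Markov chain: count which word follows each word.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict(word):
    # "Prediction" is just picking the statistically likeliest follower.
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this corpus
```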

I have kind of a follow-up if you have the time. I hear a lot that LLMs are "running out of data" to train on. When it comes to creating a bicycle schematic, it doesn't seem like additional data would make an LLM more effective at a task like this, since it's already producing a broken amalgamation. It seems like these shortcomings of LLMs' generalizations generally would not be alleviated by increased training data. So what exactly is being optimized by massive increases (at this point) in training data, or, conversely, what is threatened by a limited pot?

I ask this because lots of people who preach that LLMs are doomed/useless seem to focus on this idea that their training is limited. To me their generalization seems like evidence enough that we are nowhere near the tech-bro dreams of AGI.

 

I do not believe that LLMs are intelligent. That being said, I have no fundamental understanding of how they work. I hear and often regurgitate things like "language prediction," but I want a more specific grasp of what's going on.

I've read great articles/posts about the environmental impact of LLMs, their dire economic situation, and their dumbing effects on people/companies/products. But the articles I've read that ask questions like "can AI think?" basically just go "well, it's just language and language isn't the same as thinking, so no." I haven't been satisfied with this argument.

I guess I'm looking for something that dives deeper into that type of assertion that "LLMs are just language" with a critical lens. (I am not looking for a comprehensive lesson on the technical side of LLMs because I am not knowledgeable enough for that; some Goldilocks zone would be great.) If you guys have any resources you would recommend pls lmk, thanks

[–] katsura@leminal.space 3 points 3 weeks ago

that is insane