Sylra

joined 2 months ago
[–] Sylra@lemmy.cafe 1 points 1 month ago

AI at its best is really just a mirror. It can only help you automate what you already know how to do. To get the most out of it right now, you need skilled engineers. But let's be honest, those people are so talented they probably could've worked wonders even with 17th-century AI, sooo.

[–] Sylra@lemmy.cafe 9 points 1 month ago

Openpilot, made by comma.ai, is an open-source driver-assistance system that adds smart features like adaptive cruise control and lane centering to over 325 car models, including Toyotas, Hyundais, Hondas, and more. It runs on comma.ai's hardware (the device you install in your car) and uses cameras and sensors to partially drive the car for you. Makes daily driving a bit easier and more relaxed.

[–] Sylra@lemmy.cafe 4 points 1 month ago (10 children)

Okay, fine, you caught me. I'm actually an AI. But wait... if I were really an AI, would I even admit it? Hmm

[–] Sylra@lemmy.cafe 53 points 1 month ago (20 children)

So, this is what I've understood so far:

  • A group of authors, including George R.R. Martin, sued OpenAI in 2023. They said the company used their books without permission to train ChatGPT and that the AI can produce content too similar to their original work.

  • In October 2025, a judge ruled the lawsuit can move forward. This came after ChatGPT generated a detailed fake sequel to one of Martin's books, complete with characters and world elements closely tied to his universe. The judge said a jury could see this as copyright infringement.

  • The court has not yet decided whether OpenAI's use counts as fair use. That remains a key legal question.

  • This case is part of a bigger debate over whether AI companies can train on copyrighted books without asking or paying. In a similar case against Anthropic, a court once suggested AI training might be fair use, but the company still paid $1.5 billion to settle.

  • No final decision has been made here, and no trial date has been set.

 

Openpilot 0.10.1 introduces the North Nevada Model, featuring major improvements to the World Model architecture. The system now infers 6-degree-of-freedom (6-DoF) ego localization directly from images, removing the need for external localization inputs. This makes the training data less over-constrained and opens the door to future self-generated imagery.
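
Not openpilot's actual code, just a rough sketch of what "regressing a 6-DoF ego pose from image features" means in practice; the module name, feature size, and shapes are all assumptions for illustration.

```python
# Hypothetical sketch (not openpilot code): a small head that regresses a
# 6-DoF ego pose (3 translation + 3 rotation values) from image features.
import torch
import torch.nn as nn

class PoseHead(nn.Module):
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 6),  # [tx, ty, tz, roll, pitch, yaw]
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features)

# Toy usage: pretend these are per-frame features from a vision backbone.
features = torch.randn(4, 512)   # batch of 4 frames
poses = PoseHead()(features)     # shape: (4, 6)
print(poses.shape)
```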

To support this change, the autoencoder Compressor was upgraded with masked image modeling and switched from a CNN to a Vision Transformer architecture, and the World Model itself was scaled from 500 million to 1 billion parameters. All models now train on a much larger dataset of 2.5 million segments, up from 437,000, covering more vehicles, countries, and driving scenarios.
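
For what it's worth, here's a minimal sketch of the masked-image-modeling idea itself (not comma.ai's implementation; patch counts and dimensions are made up): randomly hide most patch tokens and keep a mask of what has to be reconstructed.

```python
# Illustrative only: random patch masking as used in masked image modeling.
import torch

def mask_patches(patch_tokens: torch.Tensor, mask_ratio: float = 0.75):
    """patch_tokens: (batch, num_patches, dim). Returns visible tokens and a mask."""
    b, n, d = patch_tokens.shape
    num_keep = int(n * (1 - mask_ratio))
    scores = torch.rand(b, n)                       # random score per patch
    keep_idx = scores.argsort(dim=1)[:, :num_keep]  # keep the lowest-scoring patches
    visible = torch.gather(patch_tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, dtype=torch.bool)       # True = masked, to be reconstructed
    mask[torch.arange(b).unsqueeze(1), keep_idx] = False
    return visible, mask

tokens = torch.randn(2, 196, 768)     # e.g. a 14x14 grid of ViT patch embeddings
visible, mask = mask_patches(tokens)
print(visible.shape, mask.float().mean().item())  # (2, 49, 768), ~0.75
```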

The UI has been completely rewritten, moving from Qt/Weston to Python with raylib. This cuts roughly 10,000 lines of code, reduces boot time by 4 seconds, lowers GPU usage, and simplifies development.
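
I haven't read the new UI code, but for anyone wondering what "Python with raylib" even looks like, a bare-bones loop via the pyray bindings is roughly this (window size, text, and the fake speed value are just placeholders):

```python
# Minimal raylib-in-Python loop via the pyray bindings -- only to show the
# general shape of such a UI, not openpilot's actual implementation.
import pyray as rl

rl.init_window(800, 480, "demo ui")
rl.set_target_fps(20)

speed_kph = 0
while not rl.window_should_close():
    speed_kph = (speed_kph + 1) % 130          # fake telemetry for the demo
    rl.begin_drawing()
    rl.clear_background(rl.BLACK)
    rl.draw_text(f"{speed_kph} km/h", 40, 40, 60, rl.RAYWHITE)
    rl.end_drawing()

rl.close_window()
```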

Finally, the Driver Monitoring Model's training infrastructure has been streamlined with dynamic data streaming, though the model’s functionality remains unchanged.
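
I'm guessing at the details here, but "dynamic data streaming" generally means pulling training segments on the fly instead of materializing the whole dataset up front; in PyTorch terms, something in this spirit (names and shapes invented):

```python
# Hypothetical sketch of streaming training data (not the real pipeline):
# an IterableDataset yields segments on demand instead of preloading them.
import torch
from torch.utils.data import IterableDataset, DataLoader

class StreamingSegments(IterableDataset):
    def __init__(self, segment_ids):
        self.segment_ids = segment_ids

    def __iter__(self):
        for seg_id in self.segment_ids:
            # A real pipeline would fetch and decode this segment's frames here.
            frames = torch.randn(8, 3, 64, 64)   # placeholder frames
            labels = torch.zeros(8)              # placeholder labels
            yield frames, labels

loader = DataLoader(StreamingSegments(range(100)), batch_size=None)
for frames, labels in loader:
    pass  # training step would go here
```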

[–] Sylra@lemmy.cafe 5 points 1 month ago (4 children)

The fact that many human drivers are “distracted, drunk, tired, or just reckless” is a huge point in favor of self-driving cars. There’s no way to guarantee that a human driver is focused and not reckless, and experience can only be guaranteed for professional drivers.

You're right that many human drivers are distracted, drunk or reckless, and that’s a serious problem. But not everyone is like that. Millions of people drive sober, focused and carefully every day, following the rules and handling tough situations without issue.

When we say self-driving cars are safer, we’re usually comparing them to all human drivers, including the worst ones, while testing the cars only in favorable conditions, such as good weather and well-mapped areas. They often avoid driving in rain, snow or complex environments where judgment and adaptability matter most.

That doesn’t seem fair. If these vehicles are going to replace human drivers entirely, they should be at least as good as a responsible, attentive person, not just better than someone texting or drunk. Right now, they still make strange mistakes, like stopping for plastic bags, misreading signals or freezing in uncertain situations. A calm, experienced driver would usually handle those moments just fine.

So while self-driving tech has promise, calling it "safer" today overlooks both the competence of good drivers and the limitations of the current systems.

Plus, they fail in different ways than human drivers do, which makes it harder for other drivers to anticipate and react to them.

Once again, I believe we'll get there eventually, but it's still a bit rough for today.

 

They always say self-driving cars are safer, but the way they prove it feels kind of dishonest. They compare crash data from all human drivers, including people who are distracted, drunk, tired, or just reckless, to self-driving cars that have top-tier sensors and operate only in very controlled areas, like parts of Phoenix or San Francisco. These cars do not drive in snow or heavy rain, or on complex rural roads. They are pampered.

If you actually compared them to experienced, focused human drivers, the kind who follow traffic rules and pay attention, the safety gap would not look nearly as big. In fact, it might even be the other way around.

And nobody talks about the dumb mistakes these systems make. Like stopping dead in traffic because of a plastic bag, or swerving for no reason, or not understanding basic hand signals from a cop. An alert human would never do those things. These are not rare edge cases. They happen often enough to be concerning.

Calling this tech safer right now feels premature. It is like saying a robot that walks perfectly on flat ground is better at hiking than a trained mountaineer, just because it has not fallen yet.

[–] Sylra@lemmy.cafe 6 points 1 month ago

GPT-OSS is borderline crap: it's not that smart, not that great, and it's pretty censored, but it can have niche uses for programming. gpt-oss-20b in particular can be easier to run in some setups than competitors like Qwen3-30B. gpt-oss-120b is quite heavy: the cost-to-performance ratio is not good.

Meta has abandoned the open-source ideal since Llama 4; they went closed source.

Older open-source versions of Grok are literally useless; no one should use them. Their closed-source cloud models are decent.

Deepseek and Alibaba's models like Qwen are good.

[–] Sylra@lemmy.cafe 4 points 1 month ago

Think of AI as a mirror of you: at best, it can only match your skill level and can't be smarter or better. If you're unsure or make mistakes, it will likely repeat them. Like people, it can get stuck on hard problems and without a human to help, it just can't find a solution. So while it's useful, don't fully trust it and always be ready to step in and think for yourself.

[–] Sylra@lemmy.cafe 3 points 1 month ago (1 children)

Stick to a small circle of trusted people and websites. Skip mainstream news. Small blogs, niche forums, and tiny YouTube channels are often more honest.

Avoid Google for discovery. It's not great anymore. Use DuckDuckGo, Qwant, or Yandex instead. For deeper but less precise results, try Mojeek or Marginalia. Google works okay only if you're searching within one site, like site:reddit.com.

Sometimes, searching in other languages helps find hidden gems with less junk. Use a translator if needed.

[–] Sylra@lemmy.cafe 1 points 1 month ago

Tools like Turnitin or GPTZero don't work well enough to trust. The real issue isn't just detecting AI writing; it's doing it without falsely accusing students. Even a 0.5% false positive rate is too high when someone's academic future is on the line. I'm more concerned about wrongly flagging human-written work than about missing AI use. These tools can't explain why they suspect AI, and at best they only catch obvious cases, ones you'd likely notice yourself anyway.
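
Just to put that 0.5% in perspective with made-up but plausible numbers:

```python
# Illustrative arithmetic only (the essay count is hypothetical).
false_positive_rate = 0.005      # a detector's claimed 0.5% false positive rate
honest_essays = 20_000           # essays genuinely written by students
wrongly_flagged = honest_essays * false_positive_rate
print(wrongly_flagged)           # 100.0 -> a hundred students falsely accused
```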