This post was submitted on 29 Jul 2025 to the Technology community (1 point, 100.0% liked)

top 35 comments
[–] Ulrich@feddit.org 1 points 2 weeks ago (4 children)

I think it's generally a brilliant solution but there are a couple of problems here:

  1. The scanner seems to flag fucking everything and charge for minor damage that a human would probably write off as normal wear.
  2. No one is allowed to correct the scanner:

Perturbed by the apparent mistake, the user tried to speak to employees and managers at the Hertz counter, but none were able to help, and all "pointed fingers at the 'AI scanner.'" They were told to contact customer support — but even that proved futile after representatives claimed they "can’t do anything."

Sounds to me like they're just trying to replace those employees. That's why they won't let them interfere.

[–] Lizardking13@lemmy.world 1 points 2 weeks ago (1 children)

The really funny thing here is that software that does this stuff already exists, and has for quite a while. I personally know a software engineer who works at a company that makes it; it's sold to insurance companies. Hertz's version must just totally suck.

[–] phutatorius@lemmy.zip 1 points 2 weeks ago

It's designed to suck.

[–] tiramichu@sh.itjust.works 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I'm not sure how you can make the points you make, and still call it a "generally brilliant solution".

The entire point of this system - like anything a giant company like Hertz does - is not to be fair to the customer. The point is to screw the customer over to make money.

Not allowing human employees to challenge the incorrect AI decision is very intentional, because it defers your complaint to a later time when you have to phone customer support.

This means you no longer have the persuasion power of being there in person at the time of the assessment, with the car still there too, and means you have to muster the time and effort to call customer services - which they are hoping you won't bother doing. Even if you do call, CS hold all the cards at that point and can easily swerve you over the phone.

It's all part of the business strategy.

[–] Ulrich@feddit.org 1 points 2 weeks ago (1 children)

I'm not sure how you can make the points you make, and still call it a "generally brilliant solution".

Because the technology itself is not the problem, it's the application. Not complicated.

[–] Trouble@lemmy.blahaj.zone -1 points 2 weeks ago (2 children)

The technology is literally the problem, as it's not working.

[–] phutatorius@lemmy.zip 1 points 2 weeks ago

It works as Hertz intended. And that's the problem.

[–] Ulrich@feddit.org 1 points 2 weeks ago (1 children)

There's literally nothing wrong with the technology. The problem is the application.

[–] Trouble@lemmy.blahaj.zone -1 points 2 weeks ago (2 children)

The technology is NOT DOING WHAT IT'S MEANT TO DO - it is IDENTIFYING DAMAGE WHERE THERE IS NONE - the TECHNOLOGY is NOT working as it should.

[–] elephantium@lemmy.world 1 points 2 weeks ago

The technology isn't there to accurately assess damage. It's there to give Hertz an excuse to charge you extra money. It's working exactly as the ghouls in the C-suite like.

[–] Ulrich@feddit.org 0 points 2 weeks ago (1 children)

Just because THE TECHNOLOGY IS NOT PERFECT does not mean it is NOT DOING WHAT IT'S intended to do. Sorry I'm having trouble controlling THE VOLUME OF MY VOICE.

[–] Trouble@lemmy.blahaj.zone -1 points 2 weeks ago

There's literally nothing wrong with the technology.

Pick a lane, troll.

[–] captain_aggravated@sh.itjust.works 0 points 2 weeks ago (1 children)

Sounds like they want to lose those customers.

[–] Ulrich@feddit.org 1 points 2 weeks ago* (last edited 2 weeks ago)

Companies have been fucking consumers since the beginning of time and consumers, time and time again, bend over and ask for more. Just look at all of the most successful companies in the world and ask yourself, are they constantly trying to deliver the most amazing service possible for their customers or are they trying to find new ways to fuck them at every available opportunity?

[–] CyprianSceptre@feddit.uk 0 points 2 weeks ago (1 children)

You are spot on here. AI is great for sensitivity (noticing potential issues) but terrible for specificity (it produces a lot of false positives).

The issue is how the AI is used, not the AI itself. They don't have a human in the checking process. They should use the AI scanner to check the car. If it's fine, then you have saved the employee from manually checking, which is a time-consuming process prone to error.

If the AI spots something, then get an employee to look at the issues highlighted. If it's just a water drop or another false positive, it should be a one-click 'ignore', and the customer goes on their way without charge. If it is genuine, then show the evidence to the customer and discuss charges in person. The company still saves time over a manual check and gets much improved accuracy and evidence collection.
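Something like this sketch, in Python; the finding fields, labels, and threshold are all hypothetical, not anything from Hertz's or UVEye's actual system:

```python
# A rough sketch of the triage flow described above. All names and the
# threshold are invented for illustration; this is not the real product.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    NO_CHARGE = "clean scan, customer goes on their way"
    EMPLOYEE_REVIEW = "employee reviews the flagged evidence"


@dataclass
class Finding:
    label: str          # e.g. "scratch", "dent", "water droplet"
    confidence: float   # model confidence, 0.0 to 1.0


def triage(findings: list[Finding], review_threshold: float = 0.6) -> Decision:
    """Auto-clear clean scans; route anything flagged to a human instead of auto-billing."""
    flagged = [f for f in findings if f.confidence >= review_threshold]
    if not flagged:
        return Decision.NO_CHARGE
    # A human looks at the highlighted frames and can one-click 'ignore'
    # false positives (water drops, existing wear) before any charge is raised.
    return Decision.EMPLOYEE_REVIEW


if __name__ == "__main__":
    scan = [Finding("water droplet", 0.72), Finding("scratch", 0.31)]
    print(triage(scan))  # Decision.EMPLOYEE_REVIEW
```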

They are being greedy by trying to eliminate the employee altogether. This probably doesn't actually save any money; if anything, it costs more in dealing with complaints, not to mention the lost sales from building a poor image.

[–] phutatorius@lemmy.zip 1 points 2 weeks ago

AI is great for sensitivity (noticing potential issues) but terrible for specificity (it produces a lot of false positives).

AI is not uniquely prone to false positives; in this case, it's being used deliberately to produce them.
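For anyone unfamiliar with the two terms, a quick illustration with made-up numbers of how a scanner can have high sensitivity and still bury customers in false positives:

```python
# Hypothetical numbers, purely to illustrate sensitivity vs. specificity.
def sensitivity(tp: int, fn: int) -> float:
    """Share of genuinely damaged cars the scanner catches."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """Share of undamaged cars the scanner correctly clears."""
    return tn / (tn + fp)


# Suppose 1000 returns, of which 50 cars really are damaged.
tp, fn = 49, 1      # the scanner catches almost all real damage...
tn, fp = 760, 190   # ...but also flags 190 perfectly fine cars.

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.98
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.80
print(f"customers wrongly flagged per 1000 returns: {fp}")
```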

[–] AlecSadler@lemmy.blahaj.zone 1 points 2 weeks ago

Okay so... in the rare event I need to rent a car, any suggestions on who to use that isn't Hertz or its sister companies?

[–] bcgm3@lemmy.world 1 points 2 weeks ago

Oh, so Hertz has gotten wise to... every online platform that exists: Outsourcing all responsibility for their user-hostile bullshit to some vague "system" that cannot be held accountable.

I'm so sorry but the advertised cost has doubled because... Computer says so! No, sir, there's nothing I can do, sir, you see it's the system.

And you can't go anywhere else, because everyone else is doing it (or soon will be) too!

[–] ininewcrow@lemmy.ca 0 points 2 weeks ago (1 children)

I'd ask for the stupid AI scanning system to scan the car before I agree to rent it. Once they sign off on the 'all clear' notification from their AI scanner before the rental, then I'd consider renting it... but after reading this headline, I'd probably just tell them I'm spending a few hundred dollars more to rent a car from someone else.

[–] Clasm@ttrpg.network 1 points 2 weeks ago

Just spitballing here, but they probably tune the AI to different thresholds between return and rent-out so they can rake in damage fees for things that "weren't there" during the first AI scan.
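A toy illustration of that speculation; the thresholds and findings below are invented, not anything Hertz has disclosed:

```python
# Pure speculation, mirroring the comment above: if the scanner is less sensitive
# at rent-out than at return, pre-existing marks show up as "new" damage.
RENT_OUT_THRESHOLD = 0.90   # only very obvious damage gets recorded at pick-up
RETURN_THRESHOLD = 0.40     # far more sensitive at drop-off

findings = [
    ("faint scratch, rear door", 0.55),
    ("stone chip, hood", 0.48),
]

recorded_at_pickup = {label for label, conf in findings if conf >= RENT_OUT_THRESHOLD}
flagged_at_return = {label for label, conf in findings if conf >= RETURN_THRESHOLD}

# Anything flagged at return but not recorded at pick-up looks like new, billable
# damage, even though the underlying marks never changed.
billable = flagged_at_return - recorded_at_pickup
print(billable)  # {'faint scratch, rear door', 'stone chip, hood'}
```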

[–] GaMEChld@lemmy.world 0 points 2 weeks ago (1 children)

I wonder what a credit card dispute would result in here. Underutilized feature when businesses pull shady shit. Think I've had 6 or so disputes over the years, never failed.

[–] TeddE@lemmy.world 0 points 2 weeks ago (1 children)

Too many people these days don't use or have access to credit cards for services like this. Many people I know only use bank debit cards, or worse, preloaded debit cards issued by their employer's payroll service provider.

Credit cards motivate banks to help you, because if you won't pay, and the business doesn't pay, the bank has to take the hit.

Debit cards will work as well if your bank values its reputation, but not all banks do.

And I would not trust a preloaded card provider to assist. You are neither their business partner nor their customer, and that puts your interests at the bottom of a very long list. You have to hope some law is on your side, or that your issue is so trivial that resolving it is more cost-effective than dealing with you.

Credit cards are also an instrument of christofascist pedophiles who want to ban all pornography and 'pornography' (they consider the existence of queer people to be porn)

[–] naught101@lemmy.world 0 points 2 weeks ago (1 children)

Sounds like that shit with the dodgy smoking detection in a hotel from last week.

[–] BackgrndNoize@lemmy.world 0 points 2 weeks ago (1 children)

Yup, intentionally using dodgy tools to extract more money from people under false pretenses. At this point I'm boycotting any company that claims to use AI. Fuck 'em all.

[–] RagingRobot@lemmy.world 1 points 2 weeks ago

Good luck trying to boycott a car rental company; as far as I can tell, they're all actually the same company with 5 different "brands". You rent from one, but when you show up they send you to another one that has the car. It's crazy.

[–] flop_leash_973@lemmy.world -1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

You mean an LLM that doesn't have the ability to understand context fails to make decisions that require context to do properly? Shocking /s

[–] coach_cheese@lemmy.world 1 points 2 weeks ago (2 children)

Except they are using computer vision, not an LLM

[–] Auli@lemmy.ca 0 points 2 weeks ago (1 children)

And what is processing that information?

[–] coach_cheese@lemmy.world 1 points 2 weeks ago (1 children)

Computer vision commonly uses convolutional neural networks on the input, which is different from the transformer networks used in LLMs. If you have more info indicating LLMs are used here, please share.
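For anyone curious what that distinction looks like in code, here is a toy convolutional classifier; it is only an assumption-laden sketch of a CNN, not UVEye's actual model:

```python
# A minimal convolutional image classifier in PyTorch, to show what "CNN-based
# computer vision" means here, as opposed to a transformer-based LLM.
# The architecture and class labels are invented for illustration.
import torch
import torch.nn as nn


class TinyDamageCNN(nn.Module):
    def __init__(self, num_classes: int = 2):  # e.g. "damage" vs "no damage"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutions over pixels,
            nn.ReLU(),                                    # not attention over text tokens
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


# One fake 224x224 RGB image of a car panel -> class logits.
logits = TinyDamageCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```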

[–] mojofrododojo@lemmy.world -1 points 2 weeks ago (1 children)

If you have more info indicating LLMs are used here please share

Two seconds of research would reveal LLMs are ALL OVER COMPUTER VISION. Are convolutional networks used? Yes. Are LLMs used? Yes. And MLLMs.

Tell you what, sparky: you find me a source that says ONLY CNNs are used, then you can act like a subject matter expert.

https://arxiv.org/abs/2311.16673

https://techcommunity.microsoft.com/blog/educatordeveloperblog/its-not-just-words-llms-in-computer-vision/3927912

https://medium.com/@tenyks_blogger/multimodal-large-language-models-mllms-transforming-computer-vision-76d3c5dd267f

https://github.com/OpenGVLab/VisionLLM

https://www.chooch.com/blog/how-to-integrate-large-language-models-with-computer-vision/

[–] coach_cheese@lemmy.world 1 points 2 weeks ago (1 children)

I was actually referring to UVEye which was referenced in the article. I looked into UVEye and nowhere did it say it used LLMs with their computer vision. That’s why I asked if anyone had any info on them using it. The comment I replied to assumed LLMs were used but supplied no evidence. None of the links you shared have anything to do with UVEye either.

[–] mojofrododojo@lemmy.world -1 points 2 weeks ago (1 children)

Computer vision commonly uses convolutional neural networks on the input,

Nowhere do you specify UVEye.

You could admit they're all over, but instead you double down on how I assumed, lol.

[–] coach_cheese@lemmy.world 1 points 2 weeks ago

Except they are using computer vision, not an LLM

That’s what I initially said, referring to the article. If you have nothing to say regarding the technology in this article that’s fine, but don’t just assume that since there is research of incorporating LLMs into computer vision means it was used in this specific case.