I Asked AI to Count My Carbs 27,000 Times. It Couldn’t Give Me the Same Answer Twice.
(www.diabettech.com)
If you supplied humans with the same image and asked for the same estimate, I'd be curious to know the difference in results.
Mine would be: "I have no idea" - an answer LLMs generally refuse to give by their nature (when they do decline to answer, it's usually because something in the context indicates that refusal is the appropriate text to produce).
If you really pressed people, they'd probably google each item and sum the results, so the estimates would be about as consistent as the first Google results.
LLMs have a tendency to emit a plausible answer without regard for the facts one way or the other. We try to steer the output by stuffing the context with facts drawn from traditional, measured sources, but if the context doesn't contain factual data to anchor the output, the model falls back on narrative consistency rather than data consistency. Sometimes it does that even when the context does contain factual content.
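That inconsistency is easy to check for yourself: ask the identical question repeatedly and look at the spread of the numbers, which is essentially what the article did at scale. A minimal sketch, assuming an OpenAI-compatible chat API via the official Python client - the model name and prompt here are placeholders, not the article's actual setup:

```python
import re
import statistics

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Estimate the total carbohydrates, in grams, for: one bagel with "
    "cream cheese and a medium banana. Reply with a single number only."
)

def one_estimate() -> float | None:
    """Ask once and pull the first number out of the reply, if any."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = resp.choices[0].message.content or ""
    match = re.search(r"\d+(?:\.\d+)?", text)
    return float(match.group()) if match else None

# Repeat the identical question and summarize the spread.
estimates = [e for e in (one_estimate() for _ in range(30)) if e is not None]
print(
    f"n={len(estimates)}  mean={statistics.mean(estimates):.1f} g  "
    f"stdev={statistics.stdev(estimates):.1f} g  "
    f"range={min(estimates):.0f}-{max(estimates):.0f} g"
)
```

If the standard deviation stays well above zero across runs, the model is generating a fresh plausible number each time rather than retrieving a stored fact.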