this post was submitted on 24 Apr 2026
544 points (93.3% liked)
memes
you are viewing a single comment's thread
What I'll defend, however, is fractional measurements when precision matters.
With decimal measurements, precision can't be nearly as granular. If your measurement is precise to 1/8 of a unit, how do you represent that in decimal? 0.625 implies your measurement is precise to the nearest thousandth, but rounding it to one decimal place (0.6) throws precision away. 5/8, however, tells you the measurement AND the precision.
With fractional measurements, you can specify precision by changing the denominator to any number, whereas decimal is essentially a fractional system with the denominator fixed at powers of 10. For instance, for a half-unit measurement with a level of precision somewhere between 0.1 and 0.01, a fraction can be 6/12, 7/14, 8/16, 9/18, 10/20, 24/48, etc. Decimal can't specify that precision without essentially writing a sentence.
What's simpler to record? "24/48" or "0.5 ± 0.0208333…"?
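This denominator-carries-precision convention can be sketched with Python's `fractions` module. Note the "± half the smallest step" rule is this thread's convention, not anything built into the library:

```python
from fractions import Fraction

def reading(numerator: int, denominator: int):
    """Interpret a/b as a value plus an implied tolerance of
    half the smallest step, i.e. +/- 1/(2*b)."""
    return Fraction(numerator, denominator), Fraction(1, 2 * denominator)

# 6/12, 8/16 and 24/48 all equal one half, but each raw
# denominator encodes a different measurement precision.
for num, den in [(6, 12), (8, 16), (24, 48)]:
    value, tol = reading(num, den)
    print(f"{num}/{den} = {value} +/- {tol}")  # e.g. 24/48 = 1/2 +/- 1/96
```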
That is not a flaw of decimals. It is a flaw of you not knowing how precision is encoded in decimals.
0,7583 means 0,7583 ± 0,00005.
0,758300 means 0,758300 ± 0,0000005.
0,76 means 0,76 ± 0,005.
That is why when in a store an item costs 7,5€, we don't say 7,5€. We say 7,50€. Because it is precise to a hundredth of a €, not a tenth of a €.
When precision matters, that precision is considered in the measurements. You would never write 0.5 ± 0.0208333…; you would express it as 0.50 ± 0.02. The error value is just the standard deviation of the measurements, and it doesn't make sense to use more than 2 significant digits for it.
Another example would be measuring large distances using a ruler with centimeter precision. In that case, a measurement would be expressed as 250 +- 1 cm. Converting the measurement from cm to mm, it is 2500 +- 10 mm. This is much more cumbersome with inches or feet as changing units means updating the precision, possibly reducing it.
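Python's standard `decimal` module actually models this: `Decimal` preserves trailing zeros, so significance survives in a way floats can't. A small illustration (the cm-to-mm conversion mirrors the 250 ± 1 cm example above):

```python
from decimal import Decimal

# Decimal keeps trailing zeros, so "0.50" carries hundredths
# precision while "0.5" only carries tenths.
print(Decimal("0.5"))                     # 0.5
print(Decimal("0.50"))                    # 0.50
print(Decimal("0.5") == Decimal("0.50"))  # True: same value, different significance

# Converting 250 +/- 1 cm to mm scales value and uncertainty together.
value_cm, err_cm = Decimal("250"), Decimal("1")
print(f"{value_cm * 10} +/- {err_cm * 10} mm")  # 2500 +/- 10 mm
```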
Did I defend using imperial units?
I'm defending recording precision without having to add a qualifying statement, because otherwise you can only increase precision by orders of magnitude in decimal.
That does make sense when you need absolute precision like when doing abstract math. Otherwise you can just use whichever unit and number of significant digits you need and be precise to that amount. That's what you do with imperial/American customary units as well; a 5/32" screw isn't going to be manufactured to the precision of a Planck length; manufacturers specify their sizes to three significant digits of an inch.
Let's say you have a machining project and your tools are precise to 0.1 mm. So you plan things out at a precision of 0.1 mm. It doesn't matter that a distance is 17/38 cm exactly. It doesn't matter that it's 4.473684210526315789... mm. You can't set the tool to anything better than 4.5 mm anyway.
Also note that the metric system doesn't prevent you from using fractions. You're perfectly free to work with fractions where useful. That's just not how people talk about lengths because those fractions have no meaning outside your specific use case.
But that 5/32 screw has its precision built into the measurement. Sig figs and error ranges aren't required for fractional, because both are built into the denominator.
If your 5/32 measurement is super precise you can record it as 160/1024ths, because the denominator has "+/- 1/2048" built into the measurement.
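That "same value, finer tolerance" claim is easy to check mechanically, with one caveat: `fractions.Fraction` normalises 160/1024 straight back down to 5/32, so if the raw denominator is carrying precision (this thread's convention, not a library feature), it has to be tracked separately:

```python
from fractions import Fraction

# Same number either way -- Fraction normalises 160/1024 to 5/32.
assert Fraction(5, 32) == Fraction(160, 1024)
print(Fraction(160, 1024))    # 5/32

# Under the thread's convention, a/b is good to +/- 1/(2*b),
# so the raw denominator is what distinguishes the two readings.
print(Fraction(1, 2 * 32))    # 1/64, implied by writing "5/32"
print(Fraction(1, 2 * 1024))  # 1/2048, implied by writing "160/1024"
```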
As I said in another (larger) comment, you just don't know how precision is encoded in decimals, which doesn't mean that it isn't. In fact, precision is encoded in decimals, just like with fractions.
0,7 is 0,7 ± 0,05
0,7000 is 0,7000 ± 0,00005
It does. If it were precise to less than that, you'd say 0.62 or 0.6 to indicate hundredths or tenths. Why would you say 0.625 if you're not precise to thousandths? You'd say 0.62500 if you wanted to indicate precision to hundred-thousandths.
But what if your precision is finer than 1/10 of a unit but not a full ten times finer (1/100)?
If you have a measurement that is more precise than 0,7 implies but less precise than 0,70 implies, you can just say 0,7 ± 0,02.
My metric measurements are precise to 1/10th of a unit. Like 22.7°C or 34.7cm.
What if you get a new ruler that's four times as precise as the one you have that measures to 0.1cm? You don't want to record a reading as 0.70cm, because that implies more precision than your measurement has. But you could record it in 40ths with fractions.
Another way to look at it is that decimal is already a fractional system (1/10, 1/100, 1/1000) that doesn't allow you to use 90% of possible fractions.
If there's a technical need, you can have your scale divided into whatever you want. There's nothing preventing you from dividing your scale every 0.25mm to get quarter-millimetre precision. It's very rarely done because there's no need, but it's absolutely possible.
Thermometers sometimes have divisions every 0.5°C instead of 1°C.
Yes, but how do you record that precision without needing a qualifying statement? When precision matters, "0.25" represents a measurement that is known to be closer to 0.25 than it is to either 0.24 or 0.26. Something that is only precise to 1/4 of a unit isn't that precise. The decimal way to record a precision of 1/4 is "0.25 +/- 0.125".
The thing to understand about decimals and precision is that you're still recording a fractional measurement, but your denominator is fixed to powers of 10. 0.1 is 1/10. 0.01 is 1/100. So increasing precision by less than a factor of 10 is difficult to represent.
This matters a lot for things like digital calipers, where a cheap set will show the same measurement as a nice set that's more precise because the good ones aren't 10 times as precise. But if they have a fractional setting, the nicer ones will read more precisely because that increased precision can be represented on the display.
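A hypothetical sketch of what such a fractional display mode does: snap the internal decimal reading to the nearest 64th (a common caliper step; the function name and step size here are illustrative assumptions, not any real caliper's firmware):

```python
from fractions import Fraction

def to_64ths(reading_inches: float) -> Fraction:
    """Snap a decimal reading to the nearest 1/64, roughly what a
    caliper's fractional display mode shows."""
    return Fraction(round(reading_inches * 64), 64)

print(to_64ths(0.1573))  # 5/32  (10/64, reduced by Fraction)
print(to_64ths(0.17))    # 11/64
```

Note that `Fraction` reduces 10/64 to 5/32, so the displayed denominator alone no longer encodes the 1/64 step; a real display would keep the raw 64ths.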
This hurts my brain. Why do we care about all the weird fractions? +/- 0.1 is just another way of saying 1/10. You can still do that if you want without having to do fraction math in random denominators.
The fraction allows you to communicate length and tolerance in a single number. A decimal implies precision to the last digit, while a fraction's denominator shows its granularity: 1/16 is more granular than 1/8. 1/8 of a cm is less precise than a mm, but if you wrote 1.125 cm, you would now be implying sub-mm precision.
This matters because the level of precision needed in building generally doesn't line up with 1/10 measurements. For example, if a brick wall had 1 cm height differences between bricks in a row, it would be extremely obvious and look terrible. A 1 mm height difference would be impossible to notice, but is also overkill. The ideal is about 5/8 cm, or 6.25 mm, of difference over 3 meters of wall. The fractional measure often ends up easier to work with in practice.
I don't see how that isn't true of decimals, too. 0.1 indicates a precision of 1 digit, 0.12 indicates a precision of 2, 0.120 indicates a precision of three.
Exactly like my example above. 1/8 implies ± 1/16, while 0.125 implies ± 0.0005, but it was only measured to ± 0.0625, which is two orders of magnitude different.
How do you account for doubling precision? Decimal only records 10-fold steps.
In any context where it's important, you'd note it with +/-. Not really a problem.
I guess there's nothing wrong with saying 1/8th metre, 1/8th centimetre, 15/16th metre either. Just as some people might use 0.356 inches.
I'd be a big fan of fractional metric.
Although if we really wanted to go crazy (this will never happen), we'd ditch base-10. It's a stupid base that we only use because of our fingers. Base 12 is superior and is actually the strongest defense of feet and inches (though yards can fuck right off). It has 6 divisors whereas 10 only has 4.
Base 60 is also cool (divisible by 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60), but that would also be significantly more difficult to teach children - it takes them long enough to learn the order of 26 letters.
And being a geographer, I adore 360 because it's fucking awesome to work with, and you don't get a better composite until 2520, which is just too much to deal with.
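The divisor counts behind this argument are easy to verify with a few lines of Python (plain trial division, nothing clever):

```python
def divisor_count(n: int) -> int:
    """Count the divisors of n by trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# 10 has 4 divisors, 12 has 6, 60 has 12, 360 has 24, 2520 has 48.
for base in (10, 12, 60, 360, 2520):
    print(base, divisor_count(base))
```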
If you are drawing maps, a precision of meters is enough. If you are building a house, cm it is. If you are making furniture, mm. If you are working with metal, µm (micrometers).
If I want to build something and I want it to be 23/48" ± 1/24" how would I write that? Because the way I understand it x/48" would imply a tolerance of ± 1/48".
If your tolerance is 1/24, your precision isn't fine enough to record 23/48.
23/48 has a built in tolerance of +/- 1/96, because outside of that range the measurement would read as either 22/48 or 24/48.