A truly logical system would be designed entirely around a base-12 number system. But we were born with an imperfect set of 10 fingers, and that doomed us.
Those aliens have 6 fingers. It’s an absolutely ironic twist that their discussion of measuring systems is super illogical for them, and yet “logical” is the verbiage they use.
Care to elaborate on how base 12 would be better than base 10 in this case?
Basically it’s because 12 is more divisible than 10. The factors of 10 are 1, 2, 5 and 10; 12 has 1, 2, 3, 4, 6 and 12. This gives more flexibility when discussing numbers. Our clocks are effectively base 12 (and base 60), which is why we can say “quarter past 4” and it works out to a whole number of minutes. That’s the argument I’ve heard, anyway.
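A quick Python sketch (mine, not from the comment above) makes the divisibility difference concrete:

```python
# List the divisors of 10 and 12: base 12 splits cleanly into halves, thirds and quarters.
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(10))  # [1, 2, 5, 10]
print(divisors(12))  # [1, 2, 3, 4, 6, 12]
```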
I believe this is also why we have 360 “degrees” in a circle, and not 365. The ancients hated that a year was close to, but not exactly, 365 days. They chalked it up to the imperfection of Earth relative to the heavens. But a “perfect” year would be 360 days, because 360 is divisible by every single-digit number except 7.
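Quick sanity check of that divisibility claim (a throwaway Python line of my own):

```python
# 360 divides cleanly by every single-digit number except 7.
print([d for d in range(1, 10) if 360 % d == 0])  # [1, 2, 3, 4, 5, 6, 8, 9]
```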
On the matter of days in a year, there’s also the idea of splitting the year into 13 months of 28 days each, for a total of 364 days, more closely matching the lunar cycle (and women’s bodies). Every date would always fall on the same day of the week.
Then the extra day? It’s World Day, a global holiday for celebrating the new year, and it doesn’t belong to any weekday. Sometimes we’d also need a leap day (just like February 29th now), and that would simply be a second World Day.
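Here’s a rough sketch of that layout (my own illustration; the weekday names and the exact World Day handling are assumptions, loosely following the International Fixed Calendar linked below):

```python
# Illustrative only: map a day-of-year (1..365) to (month, day, weekday)
# in a 13-month, 28-day calendar with a trailing "World Day" outside the week.
WEEKDAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]  # assumed week start

def fixed_calendar(day_of_year):
    if day_of_year == 365:                   # leap day handling omitted in this sketch
        return ("World Day", None, None)     # belongs to no month and no weekday
    month, day = divmod(day_of_year - 1, 28)
    return (month + 1, day + 1, WEEKDAYS[day % 7])

print(fixed_calendar(1))    # (1, 1, 'Sun')  -- every month starts on the same weekday
print(fixed_calendar(364))  # (13, 28, 'Sat')
print(fixed_calendar(365))  # ('World Day', None, None)
```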
https://en.m.wikipedia.org/wiki/International_Fixed_Calendar
Check for pros and cons.
I’m very much in favor of the 13 month system. So hard to change such things now that we don’t have emperors.*
*I’m also very much in favor of not having emperors though…
The ancients actually used a 360 day calendar and a bonus week of 5-6 days at the end before the new year started.
Because 12 is more than 10 and more is better.
7.62mm is more than 5.56mm but 'muricans (fuck yeah) still chose AR-15s because freedum. Where is your God now? /s
I’m american and chose 7.62 three times in the forms of SKS, AK-47, and AK-104. Big bullet go boom.
I’ve heard before it’s because 1/3 of it can be represented as a whole number.
Just like feet, which can have 12 inches. But if we want to get more precise we start cutting inches into eighths for some reason 😅
old school carpenters’ squares also have inches divided into twelfths.
I always use decimal inches wherever possible, personally. Makes so much more sense to me than “3/64” or some crap like that
A base 12 metric system is the best of all worlds. 1/3 of a cm is 0.4 cm, exactly 4 twelfths, with no repeating digits.
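A tiny sketch (mine) showing why a third terminates in base 12 but never in base 10:

```python
# Expand a fraction digit by digit: multiply the remainder by the base and take a digit.
def frac_digits(num, den, base, places):
    digits = []
    for _ in range(places):
        num *= base
        digit, num = divmod(num, den)
        digits.append(digit)
    return digits

print(frac_digits(1, 3, 12, 4))  # [4, 0, 0, 0] -> 0.4 exactly in base 12
print(frac_digits(1, 3, 10, 4))  # [3, 3, 3, 3] -> 0.3333... forever in base 10
```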
Your way seems like the worst of all worlds. What is 0.1 feet in inches? shrugs. If you’re going to use a different system to the people around you, why not use normal metric?
I said decimal inches, not decimal feet. Also, I use them personally with my own projects, not when giving measurements to other people. 4.25 inches makes more sense to me than 4 1/4 inches. I could use cm but I’m more used to inches since I live in the US. If I were to give my measurements to someone else I’d use fractions, since that is the standard here.
Base 6 however is perfect for 2 hands with 5 fingers each. You can easily represent the six possible digits 0 1 2 3 4 5 on each hand, and can therefore comfortably count to 55 (decimal 35) with two hands, using our familiar place-value numeral system.
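Here’s a quick sketch (my own) of that two-hand scheme, arbitrarily taking the left hand as the sixes place and the right hand as the ones place:

```python
# Each hand shows a digit 0-5; the left hand is the sixes place, the right hand the ones.
def two_hand_value(left_fingers, right_fingers):
    assert 0 <= left_fingers <= 5 and 0 <= right_fingers <= 5
    return left_fingers * 6 + right_fingers

print(two_hand_value(5, 5))  # 35 -- written "55" in base 6
print(int("55", 6))          # 35, the same thing via Python's base parsing
```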
I like the idea of base 12, counting the segments of your fingers with your thumb. Though it’s less intuitive.
Base 10 is the easiest to scale: you just move the comma and add zeros. Base 12 doesn’t allow that as easily.
A base 12 number system would have two extra symbols. Twelve would be written 10 and be called ten, and the number 144 would be written 100 and be called one hundred.
Everything you may think is inherent to base 10 largely isn’t. The quirky rules of 9’s multiplication table would apply to 11’s. Pi and e would still be irrational, and would stay that way no matter which base N you choose. Long division would work the same. Etc.
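To make the “two extra symbols” idea concrete, here’s a sketch of mine using “X” for ten and “E” for eleven (just one of several conventions people use):

```python
# Write a number in base 12, using 'X' for ten and 'E' for eleven (an assumed convention).
DIGITS = "0123456789XE"

def to_base12(n):
    if n == 0:
        return "0"
    out = ""
    while n:
        n, r = divmod(n, 12)
        out = DIGITS[r] + out
    return out

print(to_base12(12))   # '10'  -- "ten"
print(to_base12(144))  # '100' -- "one hundred"
print(to_base12(35))   # '2E'  -- 2 * 12 + 11
```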
What if I choose base Pi? Then pi = 10
Checkmate.
Oh nvm, you did say base of N, but that’s boring.
I bet you love radians.
Can’t spell radians without rad 😎
You can just assign digits to ten and eleven?
Yep. In computer science you sometimes need to calculate with hexadecimal numbers, where 10–15 are the letters A–F. You just use a different factor (16 instead of 10) for scaling “easily”.
In hexadecimal, 10 is 16 in decimal. So if you do C * 10 you get C0, but that is 192 in decimal (12 * 16; remember the base is 16).
What’s cool though is that (all hexadecimal):
10 / 2 = 8
10 is 2 to the power of 4, which means 10 is divisible by 2 four times.
Similarly (and arguably even cooler) with a base 12 system 10 is divisible by 2 AND 3!
10 / 3 = 4
10 / 2 = 6
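A quick Python sanity check of the arithmetic above, both the hex and the base-12 claims (my own snippet):

```python
# Hex: C * 10 = C0, i.e. 12 * 16 = 192, and hex 10 (sixteen) is 2**4.
print(0xC * 0x10, hex(0xC * 0x10))  # 192 0xc0
print(0x10 // 2, 0x10 == 2 ** 4)    # 8 True

# Base 12: "10" means twelve, which divides cleanly by both 2 and 3.
print(int("10", 12) // 3)           # 4
print(int("10", 12) // 2)           # 6
```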
You can count your 12 finger segments with your thumb; once you go over 12 on one hand, go back to 1 and count one more on the other hand.
Have fun counting on one hand, writing with the other, or counting to 100 dozenal on just two hands!
I’ll also defend fractional measurements over decimal to my dying breath. Decimal measurements can’t express precision very well at all. You can only increase or decrease precision by a power of 10.
If your measurement is precise to a quarter of a unit, how do you express that in decimal? “.25” implies that your measurement is precise to 1/100th, misrepresenting the precision by a factor of 25.
Meanwhile with fractions it’s easy. 1/4. Oh, your measurement of 1/4 meter is actually super duper precise? Great! Just don’t reduce the fraction.
928/3712 is the same number as 1/4 or .25, but now you know exactly how precise the measurement is. Whereas with a decimal measurement you either have to say it’s precise to 1/1000th (0.250), which is massively understating the precision, or 1/10000th (0.2500), which is massively overstating it.
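One way to make that concrete (my own sketch; note that Python’s Fraction auto-reduces, so the denominator has to be kept separately to carry the claimed precision):

```python
# Treat a non-reduced fraction as "value + implied precision":
# 928/3712 equals 1/4, but the denominator implies a granularity of 1/3712 of a unit.
from fractions import Fraction

num, den = 928, 3712
value = Fraction(num, den)         # auto-reduces to Fraction(1, 4)
implied_precision = Fraction(1, den)

print(value, float(value))  # 1/4 0.25
print(implied_precision)    # 1/3712 -- the claimed granularity of the measurement
```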
Fractional measurements are awesome.
This is one of the dumbest fucking trolls I’ve ever seen.
Congratulations? I guess?
Honestly, I don’t give a shit either way. Wish us 'mericans were on the same wavelength as the rest of the world, but we’re awful in so many ways it doesn’t even register.
However, this troll is gold and I think you’re all sleeping on his genius
No measured value will be perfectly precise, so it doesn’t make sense to use that as a criterion for a system of measurement. You’re never going to be able to cut a board to exactly 1/3 of a foot, so it doesn’t matter that the metric value will be rounded a bit.
Not “a bit”. You can have a 9x difference in precision and be unable to record it.
i’ve never heard of anyone using non-reduced fractions to measure precision. if you go into a machine shop and ask for a part to be milled to 16/64″, they will ask you what precision you need; they would never assume that means 16/64″ ± 1/128″.
if you need custom precision in any case, you can always specify that by hand, fractional or decimal.
But you can’t specify it with decimal. That’s my point. How do you tell the machine operator it needs to be precise to the 64th in decimal? “0.015625” implies precision over 15,000x as precise as 1/64th. The difference between 1/10 and 1/100 is massive, and decimal has no way of expressing it with significant figures.
sure you can, you say “i need a hole with diameter 0.25″ ± 0.015625″”. it doesn’t matter that you have more sig figs when you state your precision
but regardless, that’s probably not the precision you care about. there’s a good chance that you actually want something totally different, like 0.25 ± 0.1″. with decimal, it’s exceptionally clear what that means, even for complicated/very small decimals. doing the same thing fractionally has to be written as 1/4 ± 1/10″, meaning you have to figure out what that range of values is (3/20″ to 7/20″)
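for what it’s worth, the range is easy to check either way (a quick Fraction sketch of mine):

```python
# 1/4" +/- 1/10": compute the fractional range and its decimal equivalent.
from fractions import Fraction

nominal = Fraction(1, 4)
tolerance = Fraction(1, 10)
print(nominal - tolerance, nominal + tolerance)                # 3/20 7/20
print(float(nominal - tolerance), float(nominal + tolerance))  # 0.15 0.35
```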
Having to provide a “+/-” for a measurement is a silly alternative to using a measurement that already includes precision. You’re just so used to doing things a stupid way that you don’t see it.
providing an arbitrarily non-reduced fraction is an even sillier alternative. the same fundamental issue arises either way, and it’s much clearer to use obvious semantics that everyone can understand
It’s not the same issue at all.
How do you represent 1/64th in decimal without implying greater or lesser precision? Or 1/3rd? Or 1/2 or literally anything that isn’t a power of 10?
You’re defending the practice of saying “this number, but maybe not, because we can’t actually measure that precisely, so here are some more numbers you can use to figure out how precise our measurements are”.
How is that a more elegant solution than simply having the precision recorded in a single rational measurement?
This feels like such a niche reason to prefer fractional measurements.
I’m scratching my head, wondering why all the downvotes.
I’ve always sucked at math tbh, but fractional measurements are my jam. It goes faster in my head and I can visualize things better.