THEORY:
Linear characterization of nonlinear functionality at best convolutes calculations and at worst corrupts capability and understanding, greatly increasing the probability of error and misprojection when that linear data is used in subsequent operations.
REAL-WORLD EXAMPLE*: In adjusting the amount of solvent to add to an existing solution to decrease the concentration of the dissolved solids -- a nonlinear zero-sum dynamic quite similar to odds-and-probs -- the following algorithm holds for concentrations expressed as percentages:

G = (A·B/100) · [ (100/E - 1) - (100/A - 1) ]

where G is the required amount of added solvent (grams), A is the starting concentration (%, linear), B is the starting batch mass or weight (grams), and E is the goal concentration (%).
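As a sanity check, here is that percentage-based dilution as a minimal Python sketch -- variable names follow the definitions above, and the worked example is mine:

```python
def solvent_to_add_pct(a_pct, b_grams, e_pct):
    """Grams of solvent to add to dilute from A% solids to E% solids."""
    solids_g = a_pct * b_grams / 100.0   # dissolved-solids mass never changes
    # the two "inverse minus-ones": solvent/solids ratio at each concentration
    ratio_goal = 100.0 / e_pct - 1.0
    ratio_start = 100.0 / a_pct - 1.0
    return solids_g * (ratio_goal - ratio_start)

# 1000 g of a 50%-solids solution diluted to 25% solids: solids = 500 g,
# final weight = 500 / 0.25 = 2000 g, so 1000 g of solvent must be added.
print(solvent_to_add_pct(50.0, 1000.0, 25.0))  # -> 1000.0
```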
This nicely symmetric formula is a bit of a hassle to calculate by hand, especially if done often, so a handy Excel spreadsheet is generally utilized. (By hand, I hate those inverse minus-ones.)
That same solvent/solids mixture can also be characterized as the nonlinear ratio of those two components: the solvent and the dissolved solids. The simplified algorithm for that characterization becomes:

G = B · (E - A) / (1 + A)

where G is the required amount of added solvent (grams), A is the starting concentration ratio (solvent/solids, nonlinear), B is the starting batch mass or weight (grams), and E is the goal concentration ratio.
Hey, it’s a back-of-a-napkin calculation ..!!!
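The ratio-based version, sketched the same way -- A and E as solvent/solids mass ratios per the definitions above, with my own worked example (the same batch as before, restated in ratio terms):

```python
def solvent_to_add_ratio(a_ratio, b_grams, e_ratio):
    """Grams of solvent to add, with concentrations as solvent/solids ratios."""
    solids_g = b_grams / (1.0 + a_ratio)   # batch = solids * (1 + solvent/solids)
    # each unit of ratio increase means one solids-mass worth of added solvent
    return solids_g * (e_ratio - a_ratio)

# 1000 g at ratio 1 (i.e. 50% solids) diluted to ratio 3 (25% solids):
# solids = 500 g, so add 500 * (3 - 1) = 1000 g of solvent.
print(solvent_to_add_ratio(1.0, 1000.0, 3.0))  # -> 1000.0
```

Note how the difference (E - A) does the work directly, with no inverse-minus-one terms in sight.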
* The odds-to-probs dead-end -- originally reviewed above -- was kind of a metaphysical pursuit, not really ‘material world’.
--------[ WAIT ... THERE'S MORE!!! ]-------
BACKGROUND: On occasion, I get to play with math vocationally, beyond just the usual straightforward ratio stuff, the grist of engineering.
Recently, we needed to dilute a batch of product with solvent to a goal solids level, which is a key parameter in many industries, from adhesives to pharma to food, not only in the final product but also in its upstream processing. To assist, a ‘moisture meter’ determines how much solvent is in the product by continuously weighing a specimen as it gradually evaporates at a set temperature; once the readings equilibrate, the sample is considered ‘dry’. The meter spits out its reading as %-moisture lost from the incoming sample during the test; from that, the initial %-solids is determined (100% minus %-moisture).
What needs to be calculated here is the amount of solvent required to dilute a batch of a given weight to a final desired solids level. The series of functions for this number-crunch is all very simple stuff, where Excel works great. On this specific Excel spreadsheet, there were three input columns: A: starting %-moisture, B: starting weight (grams), and E: goal %-moisture. The output/answer, the required amount of solvent to be added (in grams), is column G. Columns C, D, F and H were very simple intermediate calculations (H was the final weight of the diluted batch).
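Here is a sketch of that column chain in Python. The intermediates C, D and F are my guesses at plausible steps (the post only says they are "very simple"); H is, as stated, the final weight of the diluted batch:

```python
def column_g(a_moisture_pct, b_weight_g, e_goal_moisture_pct):
    """Grams of solvent to add to dilute the batch to the goal %-moisture."""
    c_solids_pct = 100.0 - a_moisture_pct              # C: starting %-solids
    d_solids_g = b_weight_g * c_solids_pct / 100.0     # D: solids mass (grams)
    f_goal_solids_pct = 100.0 - e_goal_moisture_pct    # F: goal %-solids
    h_final_weight_g = d_solids_g * 100.0 / f_goal_solids_pct  # H: final batch weight (grams)
    return h_final_weight_g - b_weight_g               # G: solvent to add (grams)

# Example (mine): a 1000 g batch at 40% moisture, diluted to 60% moisture.
# Solids = 600 g, final weight = 600 / 0.40 = 1500 g, so add 500 g of solvent.
print(column_g(40.0, 1000.0, 60.0))  # -> 500.0
```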
For kicks, I recently combined all four columnar calculations into a single equation -- knowing A, B and E and solving for G -- and then simplified the overarching algorithm for determining G, the required amount of solvent to be added to reach the goal %-solids, shown in the elegant and symmetric formula above.
Those pesky inverse minus-one functions really make hand-calculating tough, and doing it in your head impossible, at least for me. They are only an artifact of forcing a linear scale (percentage) onto a natural system -- in this case, simply solids in a solvent. This wouldn’t happen if a simple nonlinear functional ratio -- specifically, the solids/solvent mass ratio -- had been reported by the moisture meter, instead of the percentage of the total incoming mass removed with vaporization of one of the constituents. (If I could change the settings on my cheesy halogen-bulb-heated moisture meter, I would … but the skimpy manual looks like it was written in Shenzhen, with Google Translate.)
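For what it’s worth, the round trip between the meter’s linear reading and the nonlinear ratio is exactly one of those inverse minus-ones -- a minimal sketch, with function names and the example values my own:

```python
def moisture_pct_to_solids_ratio(m_pct):
    """Solids/solvent mass ratio from a %-moisture reading."""
    return 100.0 / m_pct - 1.0      # the inverse minus-one

def solids_ratio_to_moisture_pct(r):
    """%-moisture reading from a solids/solvent mass ratio."""
    return 100.0 / (r + 1.0)        # the inverse transform

# A 40%-moisture reading means 60 g solids per 40 g solvent, ratio 1.5:
print(moisture_pct_to_solids_ratio(40.0))  # -> 1.5
print(solids_ratio_to_moisture_pct(1.5))   # -> 40.0
```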
Sure, automation in computing can do lots of great things. But when the practice is applied to systems that have simple nonlinear functionality but linear characterization (%), and that practice then becomes institutionalized and relied on -- generally unknowingly, perhaps lazily -- it corrupts the knowledge base, and inherent capability degrades, regardless of the awesome computing power that can be applied. The study and practical applications of Chemistry offer a good example: there is a vast difference between really understanding the Periodic Table and its inherent functionality … and just memorizing it.
On a more whimsical note, the use of percentage to characterize a natural system could be deemed ‘irrational’, in that it avoids expressing a functional nonlinear relationship as a ratio ... literally, a 'rational number'.
This thread is continued here and here and here.
--------[ WAIT ... THERE'S MORE!!! ]-------
Along with other species coexisting with us in our nonlinear world,
humans have innate quantal cognition (logarithmic), but must learn
numerical cognition (arithmetic), which is just a social construct:
What seems innate and shared between humans and other animals is not this sense that the differences between 2 and 3 and between 152 and 153 are equivalent (a notion central to the concept of number) but, rather, a distinction based on relative difference, which relates to the ratio of the two quantities. It seems we never lose that instinctive basis of comparison. ‘Despite abundant experience with number through life, and formal training of number and mathematics at school, the ability to discriminate number remains ratio-dependent’...
What this means … is that the brain’s natural capacity relates not to number but to the cruder concept of quantity.
…
Some researchers have argued that the default way that humans quantify things is not arithmetically – one more, then another one – but logarithmically.
Here’s another species that thinks nonlinearly, but prefers order, not like some species…