
Talk:Entropy: Difference between revisions

:: [[User:CRGreathouse|CRGreathouse]] ([[User talk:CRGreathouse|talk]]) 00:55, 18 April 2013 (UTC)
 
: Yes, it was supposed to be bits/character and I've corrected it. Shannon called N the number of "symbols" and I call the number of different symbols "characters" (your number of c) to distinguish them from the N "symbols". See my comments below. The entropy in bits (technically called shannons to indicate not only that base 2 was used but that the function N*H was applied) is S<sub>2</sub> = NH<sub>2</sub> = 100*2.9758 = '''297 shannons'''. H is intensive or specific entropy, S is extensive or total entropy. The normalized specific entropy/symbol (the degree to which the symbols were equally frequently used, from 0 to 1) is H<sub>8</sub> = H<sub>n</sub> = H<sub>2</sub>*ln(2)/ln(8) = 0.992 in "octals/symbol" or "normalized entropy/symbol"; the factor ln(2)/ln(8) converts from base 2 to base 8. The same distribution of characters with a different number n of characters would give the same H<sub>n</sub>. The most objective "total entropy" is then S<sub>n</sub> = H<sub>n</sub>*N = 99.2. There are no units because this is a pure statistical measure of the distribution of the ratios times the number of symbols. 99.2 shows that only 1 of the 100 symbols was not perfectly "random" from an H-function perspective (which is VERY limited and basic because it does not attempt any compression, i.e. it does not look for patterns beyond character frequency). But the last digit had to be chosen randomly without trying to make H=1. S<sub>n</sub> is the total "information" in the data if the observer of the data was not able to compress the data further. Compression has no perfect universal function, so information depends on the observer's view, or his access to compression tools like gzip. So for general computer work, the compressed file is the amount of information in it. See also my comments in a section below.
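: A minimal sketch of the arithmetic above (assuming, as the figures imply, a 100-symbol string over 8 distinct characters; the actual string from the earlier discussion is not reproduced here, and the function name <code>order0_entropies</code> is only illustrative):
<syntaxhighlight lang="python">
from collections import Counter
from math import log2, log

def order0_entropies(data):
    """Frequency-only (order-0) Shannon entropy measures for a symbol sequence."""
    N = len(data)                       # Shannon's N: total number of symbols
    counts = Counter(data)              # frequency of each distinct character
    n = len(counts)                     # number of distinct characters
    H2 = -sum((c / N) * log2(c / N) for c in counts.values())  # bits/character
    S2 = N * H2                         # extensive/total entropy in shannons (N*H)
    Hn = H2 * log(2) / log(n)           # normalized entropy/symbol, 0 to 1
    Sn = N * Hn                         # unitless "total entropy"
    return H2, S2, Hn, Sn
</syntaxhighlight>
: With a string whose character frequencies give H<sub>2</sub> = 2.9758 bits/character, this returns S<sub>2</sub> ≈ 297.6 shannons, H<sub>n</sub> ≈ 0.992 and S<sub>n</sub> ≈ 99.2, matching the figures above.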
 
:Physics is our best known (most efficient) set of non-lossy compression functions for the data observed in the world, which is a way of saying Occam's razor was applied. Other sets of compression functions like religion + democracy + free markets are more intelligent than physics' functions if they result in more profit for the computing device implementing the functions (algorithms), in keeping with Marcus Hutter's definition of intelligence. Deciding the most efficient way to move the symbols around with electrons in CPUs, ions in brains, votes in government, reputation in open source, or money in markets allows the computing device (CPUs implementing real A.I., brains, government, open source, markets) to most efficiently move the much larger real-world objects that the symbols represent, so that the computing device can more efficiently make copies of itself (seek lower entropy). This is how entropy relates to information, and how information relates to intelligence and evolution, which are the result of least-action dynamics seeking lower physical entropy in the distribution of N atoms on Earth, as the excess entropy is sloughed off to space with 17 low-energy un-directed photons per incoming sun-directed photon, so that the Universe can expand (entropy and energy per comoving volume of the universe are constant). The result is higher-energy bonds (lower entropy due to smaller volume) of the N atoms on Earth of n types, which is why machines that utilize metal and metalloid (e.g. silicon) atoms with their previously attached oxygen atoms removed (resulting in carbon-carbon, metal-metal, and silicon-silicon bonds) are replacing biology: they are more efficient due to the higher-energy, lower-entropy bonds. This is why entropy is important. [[User:Zawy|Zawy]] ([[User talk:Zawy|talk]]) 10:33, 23 January 2016 (UTC)