Talk:Heronian triangles

The Python part is badly formatted and does not show up in the index; someone who knows wiki-formatting should fix it. --Zorro1024 (talk) 14:21, 22 March 2015 (UTC)

Fixed. The problem was Smoe's R entry, which was not terminated properly (and which should have gone after the Python entry rather than before it). --Rdm (talk) 16:27, 22 March 2015 (UTC)
I wonder if a spark of static or noise entered the Python version at some point in its editing history? On my system the current draft overgenerates triangles, giving a different output from that shown (it seems to find 1383 rather than 517 triangles). If not an editing glitch, then possibly an artefact of changing Python versions? I am running Python 2.7.10 on OS X 10.11. Hout (talk) 00:12, 25 October 2015 (UTC)
The solution originally only worked for Python 3. I've added the necessary __future__ import. --Andreas Perstinger (talk) 07:00, 25 October 2015 (UTC)
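For context, the import in question is presumably from __future__ import division: under Python 2, Heron's formula with s = (a + b + c) / 2 truncates the semi-perimeter whenever the perimeter is odd, which is exactly the kind of change that would alter the triangle count. A minimal sketch of the pitfall (the hero_area name is illustrative, not necessarily the task entry's):

    from __future__ import division  # makes / behave as true division on Python 2
    from math import sqrt

    def hero_area(a, b, c):
        # Heron's formula; without the import, Python 2 would compute
        # s with integer division and silently corrupt the area for
        # any triple whose perimeter a + b + c is odd.
        s = (a + b + c) / 2
        return sqrt(s * (s - a) * (s - b) * (s - c))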
Thanks – that was fast.
On the topic of imports, I wonder if it might make good pedagogic (and perhaps engineering) sense to drop the import of product from itertools, and let the list comprehension itself generate the Cartesian product?
The fact that list monads and list comprehensions yield Cartesian products unassisted is one of their most interesting (and arguably central) properties. Perhaps we can demonstrate that more clearly by leaving the condition as it is, while rewriting the first (generating) half of that comprehension as h = [(a, b, c) for a in range(1, last) for b in range(a, last) for c in range(b, last)
(where last is maxside+1)
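Concretely, a sketch of the whole rewrite (the is_heronian predicate below is an assumed stand-in for the task entry's actual condition, namely a positive integer area and primitive sides):

    from fractions import gcd  # Python 2.7; on Python 3 use math.gcd
    from math import sqrt

    def is_heronian(a, b, c):
        # Assumed condition: positive integer area (Heron's formula)
        # and primitive sides (gcd of all three sides is 1).
        s = (a + b + c) / 2.0
        area2 = s * (s - a) * (s - b) * (s - c)
        if area2 <= 0:
            return False  # degenerate or impossible triple
        return sqrt(area2).is_integer() and gcd(gcd(a, b), c) == 1

    maxside = 200
    last = maxside + 1
    h = [(a, b, c)
         for a in range(1, last)
         for b in range(a, last)   # b >= a
         for c in range(b, last)   # c >= b
         if is_heronian(a, b, c)]  # filtering interleaved with generation

With maxside = 200 this should reproduce the 517 triangles mentioned above.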
Advantages:
  1. The filtering happens earlier. Rather than first generating 8 million candidate tuples and only then starting to filter, we begin filtering inside the innermost for clause (or innermost call to concatMap) of the process that generates the Cartesian product, so the full oversized set is never created in the first place.
  2. More trivially, by starting b at a and c at b, we also skip 200³ − (200 × 199 × 198) = 119,600 redundant duplicates (see the sketch after this list).
  3. Apart from a probable space improvement, there also seems to be a time improvement of roughly 50% (at least on this system with Python 2.7).
Hout (talk) 10:11, 25 October 2015 (UTC)
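A quick harness for the counts in points 1 and 2 (illustrative only; the numbers are independent of any particular Heronian condition):

    n = 200
    print(n ** 3)                          # 8000000 tuples in the full product
    print(n ** 3 - n * (n - 1) * (n - 2))  # 119600, as computed above

    # Tuples the ordered comprehension actually visits (a <= b <= c):
    visited = sum(1 for a in range(1, n + 1)
                    for b in range(a, n + 1)
                    for c in range(b, n + 1))
    print(visited)                         # 1353400, about a sixth of the full product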