Talk:Word wheel

== Algorithms ==
Nice. --[[User:Paddy3118|Paddy3118]] ([[User talk:Paddy3118|talk]]) 15:30, 4 July 2020 (UTC)
 
: It seems a waste of memory and CPU time to generate all possible conforming strings instead of writing some simple filters that validate the words against the Rosetta Code task and grid (word wheel) constraints. The REXX solution spends almost all of its (smallish, sub-second) CPU time just reading in the dictionary. The filters used for the REXX solution eliminate over half of the 25,105 words in the UNIXDICT file; about <sup>1</sup>/<sub>4</sub> of the filtering time went to detecting duplicate words (there are none, however), and only a very small fraction to validating that each letter (by count) is represented in the grid. I wonder what the CPU consumption would be if the number of words (entries) in the dictionary were an order of magnitude larger. My "personal" dictionary has over 915,000 words in it. -- [[User:Gerard Schildberger|Gerard Schildberger]] ([[User talk:Gerard Schildberger|talk]]) 21:34, 4 July 2020 (UTC)
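: ''(A minimal illustrative sketch of such a letter-count filter, in Python rather than REXX, and not taken from the REXX solution itself. It assumes the task's <code>unixdict.txt</code> dictionary, a minimum word length of 3, and the task's grid written as the string <code>ndeokgelw</code> with the centre letter <code>k</code> at index 4.)''
<pre>
from collections import Counter

def word_wheel(grid, words, min_len=3):
    """Keep dictionary words that fit the word-wheel constraints:
    at least min_len letters, contain the centre letter, and use
    each grid letter no more often than it appears in the grid."""
    centre = grid[4]              # middle cell of the 3x3 grid
    grid_counts = Counter(grid)
    hits = []
    for w in words:
        if len(w) < min_len or centre not in w:
            continue
        # every letter of w must fit within the grid's letter counts
        if all(grid_counts[ch] >= n for ch, n in Counter(w).items()):
            hits.append(w)
    return hits

# unixdict.txt is the dictionary the task specifies
words = open('unixdict.txt').read().split()
print(word_wheel('ndeokgelw', words))
</pre>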
 
::Hi Gerard, there is no waste of memory in the Julia case, as nested loops generate word candidates one at a time, and each candidate is quickly checked for membership in the set of dictionary words. That is probably hundreds of thousands of lookups, which is fine for today's laptops. As for dictionary size, the task ''specifies'' a ''particular'' dictionary to use; going so far outside of that may be interesting, but it is outside the task boundary.
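::''(A sketch of that generate-and-test scheme, again in Python rather than Julia, and only an approximation of the Julia solution: it permutes the nine grid positions for every length from 3 to 9 and tests each candidate against a set built from the dictionary. That comes to just under a million set lookups, consistent with the "hundreds of thousands" estimate above. The grid string and dictionary file are the same assumptions as in the earlier sketch.)''
<pre>
from itertools import permutations

def word_wheel_generate(grid, dictionary, min_len=3):
    """Enumerate candidate strings drawn from the grid letters and
    keep those that appear in the dictionary set."""
    words = set(dictionary)
    centre = grid[4]
    found = set()
    for k in range(min_len, len(grid) + 1):
        # permute grid cells (not distinct letters) so duplicates,
        # such as the two e's in the task grid, are handled correctly
        for perm in permutations(grid, k):
            cand = ''.join(perm)
            if centre in cand and cand in words:
                found.add(cand)
    return sorted(found)

print(word_wheel_generate('ndeokgelw', open('unixdict.txt').read().split()))
</pre>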