Anagrams
=={{header|11l}}==
{{trans|Python}}
<syntaxhighlight lang=11l>DefaultDict[String, Array[String]] anagram
L(word) File(‘unixdict.txt’).read().split("\n")
anagram[sorted(word).join(‘’)].append(word)
L(ana) anagram.values()
I ana.len == count
print(ana)</syntaxhighlight>
{{out}}
<pre>
 
=={{header|8th}}==
<syntaxhighlight lang=8th>
\
\ anagrams.8th
bye
;
</syntaxhighlight>
=={{header|AArch64 Assembly}}==
{{works with|as|Raspberry Pi 3B version Buster 64 bits <br> or android 64 bits with application Termux }}
<syntaxhighlight lang=AArch64 Assembly>
/* ARM assembly AARCH64 Raspberry PI 3B */
/* program anagram64.s */
/* for this file see task include a file in language AArch64 assembly */
.include "../includeARM64.inc"
</syntaxhighlight>
<pre>
~/.../rosetta/asm1 $ anagram64
</pre>
=={{header|ABAP}}==
<syntaxhighlight lang=ABAP>report zz_anagrams no standard page heading.
define update_progress.
call function 'SAPGUI_PROGRESS_INDICATOR'
return.
endif.
endform.</syntaxhighlight>
{{out}}
<pre>[ angel , angle , galen , glean , lange ]
 
=={{header|Ada}}==
<syntaxhighlight lang=ada>with Ada.Text_IO; use Ada.Text_IO;
 
with Ada.Containers.Indefinite_Ordered_Maps;
Iterate (Result, Put'Access);
Close (File);
end Words_Of_Equal_Characters;</syntaxhighlight>
{{out}}
<pre>
=={{header|ALGOL 68}}==
{{works with|ALGOL 68G|Any - tested with release 2.8.3.win32}} Uses the "read" PRAGMA of Algol 68 G to include the associative array code from the [[Associative_array/Iteration]] task.
<syntaxhighlight lang=algol68># find longest list(s) of words that are anagrams in a list of words #
# use the associative array in the Associate array/iteration task #
PR read "aArray.a68" PR
e := NEXT words
OD
FI</syntaxhighlight>
{{out}}
<pre>
</pre>

=={{header|APL}}==
This is a rough translation of the J version; intermediate values are kept and verb trains are not used, for clarity of data flow.
 
<syntaxhighlight lang=APL>
anagrams←{
tie←⍵ ⎕NTIE 0
({~' '∊¨(⊃/¯1↑[2]⍵)}ana)⌿ana ⋄ ⎕NUNTIE
}
</syntaxhighlight>
On a Unix system we can assume wget exists and can use it from Dyalog to download the file.
 
 
'''Example:'''
<syntaxhighlight lang=APL>
⎕SH'wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
]display anagrams 'unixdict.txt'
</syntaxhighlight>
'''Output:'''
<pre>
=={{header|AppleScript}}==
 
<syntaxhighlight lang=applescript>use AppleScript version "2.3.1" -- OS X 10.9 (Mavericks) or later — for these 'use' commands!
-- Uses the customisable AppleScript-coded sort shown at <https://macscripter.net/viewtopic.php?pid=194430#p194430>.
-- It's assumed scripters will know how and where to install it as a library.
set wordFile to ((path to desktop as text) & "unixdict.txt") as «class furl»
set wordList to paragraphs of (read wordFile as «class utf8»)
return largestAnagramGroups(wordList)</syntaxhighlight>
 
{{output}}
<syntaxhighlight lang=applescript>{{"abel", "able", "bale", "bela", "elba"}, {"alger", "glare", "lager", "large", "regal"}, {"angel", "angle", "galen", "glean", "lange"}, {"caret", "carte", "cater", "crate", "trace"}, {"elan", "lane", "lean", "lena", "neal"}, {"evil", "levi", "live", "veil", "vile"}}</syntaxhighlight>
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi <br> or android 32 bits with application Termux}}
<syntaxhighlight lang=ARM Assembly>
/* ARM assembly Raspberry PI */
/* program anagram.s */
/***************************************************/
.include "../affichage.inc"
</syntaxhighlight>
<pre>
bale able bela abel elba
=={{header|Arturo}}==
 
<syntaxhighlight lang=rebol>wordset: map read.lines relative "unixdict.txt" => strip
 
anagrams: #[]
 
loop select values anagrams 'x [5 =< size x] 'words ->
print join.with:", " words</syntaxhighlight>
 
{{out}}
=={{header|AutoHotkey}}==
The following code should work for AHK 1.0.* and 1.1.* versions:
<syntaxhighlight lang=AutoHotkey>FileRead, Contents, unixdict.txt
Loop, Parse, Contents, % "`n", % "`r"
{ ; parsing each line of the file we just read
Else ; output only those sets of letters that scored the maximum amount of common words
Break
MsgBox, % ClipBoard := SubStr(var_Output,2) ; the result is also copied to the clipboard</syntaxhighlight>
{{out}}
<pre>
 
=={{header|AWK}}==
<syntaxhighlight lang=AWK># JUMBLEA.AWK - words with the most duplicate spellings
# syntax: GAWK -f JUMBLEA.AWK UNIXDICT.TXT
{ for (i=1; i<=NF; i++) {
}
return(str)
}</syntaxhighlight>
{{out}}
<pre>
Alternatively, non-POSIX version:
{{works with|gawk}}
<syntaxhighlight lang=awk>#!/bin/gawk -f
 
{ patsplit($0, chars, ".")
if (count[i] == countMax)
print substr(accum[i], 2)
}</syntaxhighlight>
 
=={{header|BaCon}}==
<syntaxhighlight lang=freebasic>OPTION COLLAPSE TRUE
 
DECLARE idx$ ASSOC STRING
FOR y = 0 TO x-1
IF MaxCount = AMOUNT(idx$(n$[y])) THEN PRINT n$[y], ": ", idx$(n$[y])
NEXT</syntaxhighlight>
{{out}}
<pre>
=={{header|BBC BASIC}}==
{{works with|BBC BASIC for Windows}}
<syntaxhighlight lang=bbcbasic> INSTALL @lib$+"SORTLIB"
sort% = FN_sortinit(0,0)
C% = LEN(word$)
CALL sort%, char&(0)
= $$^char&(0)</syntaxhighlight>
{{out}}
<pre>
=={{header|BQN}}==
 
<syntaxhighlight lang=bqn>words ← •FLines "unixdict.txt"
•Show¨{𝕩/˜(⊢=⌈´)≠¨𝕩} (⊐∧¨)⊸⊔ words</syntaxhighlight>
<syntaxhighlight lang=bqn>⟨ "abel" "able" "bale" "bela" "elba" ⟩
⟨ "alger" "glare" "lager" "large" "regal" ⟩
⟨ "angel" "angle" "galen" "glean" "lange" ⟩
⟨ "caret" "carte" "cater" "crate" "trace" ⟩
⟨ "elan" "lane" "lean" "lena" "neal" ⟩
⟨ "evil" "levi" "live" "veil" "vile" ⟩</syntaxhighlight>
 
Assumes that <code>unixdict.txt</code> is in the same folder. The [[mlochbaum/BQN|JS implementation]] must be run in Node.js to have access to the filesystem.

=={{header|Bracmat}}==
This solution makes extensive use of Bracmat's computer algebra mechanisms. A trick is needed to handle words that are merely repetitions of a single letter, such as <code>iii</code>. That's why the variable <code>sum</code> isn't initialised with <code>0</code>, but with a non-number, in this case the empty string. Also the correct handling of the characters 0-9 needs a trick so that they are not numerically added: they are prepended with a non-digit, an <code>N</code> in this case. After completely traversing the word list, the program writes a file <code>product.txt</code> that can be visually inspected.
The program is not fast (minutes rather than seconds).
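The core idea here—giving each word an order-independent representation of its letters so that anagrams collide on the same key—can be sketched outside Bracmat too. A minimal Python illustration (an editorial sketch, not part of the Bracmat entry):
<syntaxhighlight lang=python>from collections import Counter

def multiset_key(word):
    # An order-independent view of the letters, analogous to the commutative sum in the Bracmat solution.
    return frozenset(Counter(word).items())

print(multiset_key("abel") == multiset_key("bale"))   # True: anagrams share a key
print(multiset_key("iii"))                            # words made of one repeated letter need no special trick here
</syntaxhighlight>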
<syntaxhighlight lang=bracmat>( get$("unixdict.txt",STR):?list
& 1:?product
& whl
| out$!group
)
);</syntaxhighlight>
{{out}}
<pre> abel+able+bale+bela+elba
 
=={{header|C}}==
<syntaxhighlight lang=c>#include <stdio.h>
#include <stdlib.h>
#include <string.h>
fclose(f1);
return 0;
}</syntaxhighlight>
{{out}} (less than 1 second on old P500)
<pre>5:vile, veil, live, levi, evil,
</pre>
A much shorter version with no fancy data structures:
<syntaxhighlight lang=c>#include <stdio.h>
#include <stdlib.h>
#include <string.h>
close(fd);
return 0;
}</syntaxhighlight>
{{out}}
<pre>
 
=={{header|C sharp|C#}}==
<syntaxhighlight lang=csharp>using System;
using System.IO;
using System.Linq;
}
}
}</syntaxhighlight>
{{out}}
<pre>
 
=={{header|C++}}==
<syntaxhighlight lang=cpp>#include <iostream>
#include <fstream>
#include <string>
}
return 0;
}</syntaxhighlight>
{{out}}
abel, able, bale, bela, elba,
=={{header|Clojure}}==
Assume ''wordfile'' is the path of the local file containing the words. This code makes a map (''groups'') whose keys are sorted letters and values are lists of the key's anagrams. It then determines the length of the longest list, and prints out all the lists of that length.
<syntaxhighlight lang=clojure>(require '[clojure.java.io :as io])
 
(def groups
maxlength (count (first wordlists))]
(doseq [wordlist (take-while #(= (count %) maxlength) wordlists)]
(println wordlist))</syntaxhighlight>
 
<syntaxhighlight lang=clojure>
(->> (slurp "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
clojure.string/split-lines
;; ["evil" "levi" "live" "veil" "vile"]
;; ["abel" "able" "bale" "bela" "elba"])
</syntaxhighlight>
 
=={{header|CLU}}==
<syntaxhighlight lang=clu>% Keep a list of anagrams
anagrams = cluster is new, add, largest_size, sets
anagram_set = struct[letters: string, words: array[string]]
stream$putl(po, "")
end
end start_up</syntaxhighlight>
{{out}}
<pre>Largest amount of anagrams per set: 5
</pre>

=={{header|COBOL}}==
Tested with GnuCOBOL 2.0. ALLWORDS output display trimmed for width.
 
<syntaxhighlight lang=COBOL> *> TECTONICS
*> wget http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
*> or visit https://sourceforge.net/projects/souptonuts/files
.
 
end program anagrams.</syntaxhighlight>
 
{{out}}
 
=={{header|CoffeeScript}}==
<syntaxhighlight lang=coffeescript>http = require 'http'
 
show_large_anagram_sets = (word_lst) ->
req.end()
get_word_list show_large_anagram_sets</syntaxhighlight>
{{out}}
<syntaxhighlight lang=coffeescript>> coffee anagrams.coffee
[ 'abel', 'able', 'bale', 'bela', 'elba' ]
[ 'alger', 'glare', 'lager', 'large', 'regal' ]
[ 'caret', 'carte', 'cater', 'crate', 'trace' ]
[ 'elan', 'lane', 'lean', 'lena', 'neal' ]
[ 'evil', 'levi', 'live', 'veil', 'vile' ]</syntaxhighlight>
 
=={{header|Common Lisp}}==
{{libheader|DRAKMA}} to retrieve the wordlist.
<syntaxhighlight lang=lisp>(defun anagrams (&optional (url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"))
(let ((words (drakma:http-request url :want-stream t))
(wordsets (make-hash-table :test 'equalp)))
else if (eql (car pair) maxcount)
do (push (cdr pair) maxwordsets)
finally (return (values maxwordsets maxcount)))))</syntaxhighlight>
Evaluating
<syntaxhighlight lang=lisp>(multiple-value-bind (wordsets count) (anagrams)
(pprint wordsets)
(print count))</syntaxhighlight>
{{out}}
<pre>(("vile" "veil" "live" "levi" "evil")
5</pre>
Another method, assuming file is local:
<syntaxhighlight lang=lisp>(defun read-words (file)
(with-open-file (stream file)
(loop with w = "" while w collect (setf w (read-line stream nil)))))
longest))
 
(format t "~{~{~a ~}~^~%~}" (anagram "unixdict.txt"))</syntaxhighlight>
{{out}}
<pre>elba bela bale able abel
=={{header|Component Pascal}}==
BlackBox Component Builder
<syntaxhighlight lang=oberon2>
MODULE BbtAnagrams;
IMPORT StdLog,Files,Strings,Args;
END BbtAnagrams.
</syntaxhighlight>
Execute:^Q BbtAnagrams.DoProcess unixdict.txt~<br/>
{{out}}
=={{header|Crystal}}==
{{trans|Ruby}}
<syntaxhighlight lang=ruby>require "http/client"
 
response = HTTP::Client.get("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
anagram.each_value { |ana| puts ana if ana.size >= count }
end
</syntaxhighlight>
 
{{out}}
=={{header|D}}==
===Short Functional Version===
<syntaxhighlight lang=d>import std.stdio, std.algorithm, std.string, std.exception, std.file;
 
void main() {
immutable m = an.byValue.map!q{ a.length }.reduce!max;
writefln("%(%s\n%)", an.byValue.filter!(ws => ws.length == m));
}</syntaxhighlight>
{{out}}
<pre>["caret", "carte", "cater", "crate", "trace"]
===Faster Version===
Less safe, same output.
<syntaxhighlight lang=d>void main() {
import std.stdio, std.algorithm, std.file, std.string;
 
immutable m = anags.byValue.map!q{ a.length }.reduce!max;
writefln("%(%-(%s %)\n%)", anags.byValue.filter!(ws => ws.length == m));
}</syntaxhighlight>
Runtime: about 0.06 seconds.
 

=={{header|Delphi}}==
{{libheader| System.Classes}}
{{libheader| System.Diagnostics}}
<syntaxhighlight lang=Delphi>
program AnagramsTest;
 
end.
 
</syntaxhighlight>
 
{{out}}
 
=={{header|E}}==
<syntaxhighlight lang=e>println("Downloading...")
when (def wordText := <http://wiki.puzzlers.org/pub/wordlists/unixdict.txt> <- getText()) -> {
def words := wordText.split("\n")
println(anagramGroup.snapshot())
}
}</syntaxhighlight>
 
=={{header|EchoLisp}}==
For a change, we will use the french dictionary - '''(lib 'dico.fr)''' - delivered within EchoLisp.
<syntaxhighlight lang=scheme>
(require 'struct)
(require 'hash)
(cdr h))
))
</syntaxhighlight>
{{out}}
<syntaxhighlight lang=scheme>
(length mots-français)
→ 209315
→ { alisen enlias enlisa ensila islaen islean laines lianes salien saline selina }
 
</syntaxhighlight>
 
=={{header|Eiffel}}==
<syntaxhighlight lang=Eiffel>
class
ANAGRAMS
 
end
</syntaxhighlight>
{{out}}
<pre>
=={{header|Ela}}==
{{trans|Haskell}}
<syntaxhighlight lang=ela>open monad io list string
 
groupon f x y = f x == f y
let wix = groupBy (groupon fst) << sort $ zip (map sort words) words
let mxl = maximum $ map length wix
mapM_ (putLn << map snd) << filter ((==mxl) << length) $ wix</syntaxhighlight>
 
{{out}}<pre>["vile","veil","live","levi","evil"]
=={{header|Elena}}==
ELENA 5.0:
<syntaxhighlight lang=elena>import system'routines;
import system'calendar;
import system'io;
console.readChar()
}</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Elixir}}==
<syntaxhighlight lang=Elixir>defmodule Anagrams do
def find(file) do
File.read!(file)
end
 
Anagrams.find("unixdict.txt")</syntaxhighlight>
 
{{out}}
 
The same output, using <code>File.Stream!</code> to generate <code>tuples</code> containing the word and its sorted value as <code>strings</code>.
<syntaxhighlight lang=Elixir>File.stream!("unixdict.txt")
|> Stream.map(&String.strip &1)
|> Enum.group_by(&String.codepoints(&1) |> Enum.sort)
|> Enum.max
|> elem(1)
|> Enum.each(fn n -> Enum.sort(n) |> Enum.join(" ") |> IO.puts end)</syntaxhighlight>
 
{{out}}
=={{header|Erlang}}==
The function fetch/2 is used to solve [[Anagrams/Deranged_anagrams]]. Please keep backwards compatibility when editing. Or update the other module, too.
<syntaxhighlight lang=erlang>-module(anagrams).
-compile(export_all).
 
get_value([], _, _, L) ->
L.
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Euphoria}}==
<syntaxhighlight lang=euphoria>include sort.e
 
function compare_keys(sequence a, sequence b)
puts(1,"\n")
end if
end for</syntaxhighlight>
{{out}}
<pre>abel bela bale elba able
=={{header|F Sharp|F#}}==
Read the lines in the dictionary, group by the sorted letters in each word, find the length of the longest sets of anagrams, extract the longest sequences of words sharing the same letters (i.e. anagrams):
<syntaxhighlight lang=fsharp>let xss = Seq.groupBy (Array.ofSeq >> Array.sort) (System.IO.File.ReadAllLines "unixdict.txt")
Seq.map snd xss |> Seq.filter (Seq.length >> ( = ) (Seq.map (snd >> Seq.length) xss |> Seq.max))</syntaxhighlight>
Note that it is necessary to convert the sorted letters in each word from sequences to arrays because the groupBy function uses the default comparison and sequences do not compare structurally (but arrays do in F#).
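A loose Python parallel of the same pitfall (an illustrative sketch, not part of the F# entry): a mutable list cannot serve as a dictionary key, so the sorted letters are first converted to a hashable value, much as the sorted sequence is converted to an array here.
<syntaxhighlight lang=python>groups = {}
for word in ["alger", "glare", "lager", "large", "regal", "angel"]:
    key = tuple(sorted(word))        # a list is unhashable; a tuple (or a joined string) works as a key
    groups.setdefault(key, []).append(word)

print(groups[tuple(sorted("large"))])   # ['alger', 'glare', 'lager', 'large', 'regal']
</syntaxhighlight>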
 
Takes 0.8s to return:
<syntaxhighlight lang=fsharp>val it : string seq seq =
seq
[seq ["abel"; "able"; "bale"; "bela"; "elba"];
seq ["caret"; "carte"; "cater"; "crate"; "trace"];
seq ["elan"; "lane"; "lean"; "lena"; "neal"];
seq ["evil"; "levi"; "live"; "veil"; "vile"]]</syntaxhighlight>
 
=={{header|Fantom}}==
<syntaxhighlight lang=fantom>class Main
{
// take given word and return a string rearranging characters in order
}
}
}</syntaxhighlight>
 
{{out}}
=={{header|Fortran}}==
This program:
<syntaxhighlight lang=fortran>!***************************************************************************************
module anagram_routines
!***************************************************************************************
!***************************************************************************************
end program main
!***************************************************************************************</syntaxhighlight>
 
{{out}}
=={{header|FBSL}}==
'''A little bit of cheating: literatim re-implementation of C solution in FBSL's Dynamic C layer.'''
<syntaxhighlight lang=C>#APPTYPE CONSOLE
 
DIM gtc = GetTickCount()
fclose(f1);
}
END DYNC</syntaxhighlight>
{{out}} (2.2GHz Intel Core2 Duo)
<pre>25104 words in dictionary max ana=5
 
=={{header|Factor}}==
<syntaxhighlight lang=factor> "resource:unixdict.txt" utf8 file-lines
[ [ natural-sort >string ] keep ] { } map>assoc sort-keys
[ [ first ] compare +eq+ = ] monotonic-split
dup 0 [ length max ] reduce '[ length _ = ] filter [ values ] map .</syntaxhighlight>
<syntaxhighlight lang=factor>{
{ "abel" "able" "bale" "bela" "elba" }
{ "caret" "carte" "cater" "crate" "trace" }
{ "elan" "lane" "lean" "lena" "neal" }
{ "evil" "levi" "live" "veil" "vile" }
}</syntaxhighlight>
 
=={{header|FreeBASIC}}==
<syntaxhighlight lang=freebasic>' FB 1.05.0 Win64
 
Type IndexedWord
Print
Print "Press any key to quit"
Sleep</syntaxhighlight>
 
{{out}}
 
=={{header|Frink}}==
<syntaxhighlight lang=frink>
d = new dict
for w = lines["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"]
i = i + 1
}
</syntaxhighlight>
 
=={{header|FutureBasic}}==
 
This first example is a hybrid using FB's native dynamic global array combined with Core Foundation functions:
<syntaxhighlight lang=futurebasic>
include "ConsoleWindow"
 
fn FindAnagrams( "resistance" )
fn FindAnagrams( "mountaineer" )
</syntaxhighlight>
Output:
<pre>
 
=={{header|GAP}}==
<syntaxhighlight lang=gap>Anagrams := function(name)
local f, p, L, line, word, words, swords, res, cur, r;
words := [ ];
# [ "alger", "glare", "lager", "large", "regal" ],
# [ "elan", "lane", "lean", "lena", "neal" ],
# [ "evil", "levi", "live", "veil", "vile" ] ]</syntaxhighlight>
 
=={{header|Go}}==
<syntaxhighlight lang=go>package main
 
import (
func (b byteSlice) Len() int { return len(b) }
func (b byteSlice) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
func (b byteSlice) Less(i, j int) bool { return b[i] < b[j] }</syntaxhighlight>
{{out}}
<pre>
=={{header|Groovy}}==
This program:
<syntaxhighlight lang=groovy>def words = new URL('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').text.readLines()
def groups = words.groupBy{ it.toList().sort() }
def bigGroupSize = groups.collect{ it.value.size() }.max()
def isBigAnagram = { it.value.size() == bigGroupSize }
println groups.findAll(isBigAnagram).collect{ it.value }.collect{ it.join(' ') }.join('\n')</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Haskell}}==
<syntaxhighlight lang=haskell>import Data.List
 
groupon f x y = f x == f y
wix = groupBy (groupon fst) . sort $ zip (map sort words) words
mxl = maximum $ map length wix
mapM_ (print . map snd) . filter ((==mxl).length) $ wix</syntaxhighlight>
{{out}}
<syntaxhighlight lang=haskell>*Main> main
["abel","able","bale","bela","elba"]
["caret","carte","cater","crate","trace"]
["alger","glare","lager","large","regal"]
["elan","lane","lean","lena","neal"]
["evil","levi","live","veil","vile"]</syntaxhighlight>
 
and we can noticeably speed up the second stage sorting and grouping by packing the String lists of Chars to the Text type:
 
<syntaxhighlight lang=haskell>import Data.List (groupBy, maximumBy, sort)
import Data.Ord (comparing)
import Data.Function (on)
mapM_
(print . fmap snd)
(filter ((length (maximumBy (comparing length) ws) ==) . length) ws)</syntaxhighlight>
{{Out}}
<pre>["abel","able","bale","bela","elba"]
 
=={{header|Icon}} and {{header|Unicon}}==
<syntaxhighlight lang=icon>procedure main(args)
every writeSet(!getLongestAnagramSets())
end
every (s := "") ||:= (find(c := !cset(w),w),c)
return s
end</syntaxhighlight>
Sample run:
<pre>->an <unixdict.txt
=={{header|J}}==
If the unixdict file has been retrieved and saved in the current directory (for example, using wget):
<syntaxhighlight lang=j> (#~ a: ~: {:"1) (]/.~ /:~&>) <;._2 ] 1!:1 <'unixdict.txt'
+-----+-----+-----+-----+-----+
|abel |able |bale |bela |elba |
+-----+-----+-----+-----+-----+
|evil |levi |live |veil |vile |
+-----+-----+-----+-----+-----+</syntaxhighlight>
Explanation:
<syntaxhighlight lang=J> <;._2 ] 1!:1 <'unixdict.txt'</syntaxhighlight>
This reads in the dictionary and produces a list of boxes. Each box contains one line (one word) from the dictionary.
<syntaxhighlight lang=J> (]/.~ /:~&>)</syntaxhighlight>
This groups the words into rows where anagram equivalents appear in the same row. In other words, it creates a copy of the original list where the characters contained in each box have been sorted, then organizes the contents of the original list into rows, with each new row keyed by the values in the new list.
<syntaxhighlight lang=J> (#~ a: ~: {:"1)</syntaxhighlight>
This selects rows whose last element is not an empty box.<br>
(In the previous step we created an array of rows of boxes. The short rows were automatically padded with empty boxes so that all rows would be the same length.)

=={{header|Java}}==
The key to this algorithm is the sorting of the characters in each word from the dictionary. The line <tt>Arrays.sort(chars);</tt> sorts all of the letters in the word in ascending order using a built-in [[quicksort]], so all of the words in the first group in the result end up under the key "aegln" in the anagrams map.
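The key construction can be shown in a couple of lines of Python (an illustration only, not part of the Java entry):
<syntaxhighlight lang=python># The grouping key is simply the word's letters in sorted order.
def key(word):
    return "".join(sorted(word))

print(key("angel"), key("glean"), key("lange"))   # aegln aegln aegln
</syntaxhighlight>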
{{works with|Java|1.5+}}
<syntaxhighlight lang=java5>import java.net.*;
import java.io.*;
import java.util.*;
System.out.println(ana);
}
}</syntaxhighlight>
{{works with|Java|1.8+}}
<syntaxhighlight lang=java5>import java.net.*;
import java.io.*;
import java.util.*;
;
}
}</syntaxhighlight>
{{out}}
[angel, angle, galen, glean, lange]

=={{header|JavaScript}}==
===ES5===
{{Works with|Node.js}}
<syntaxhighlight lang=javascript>var fs = require('fs');
var words = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
 
}
}
}</syntaxhighlight>
 
{{Out}}
 
Alternative using reduce:
<syntaxhighlight lang=javascript>var fs = require('fs');
var dictionary = fs.readFileSync('unixdict.txt', 'UTF-8').split('\n');
 
keysSortedByFrequency.slice(0, 10).forEach(function (key) {
console.log(sortedDict[key].join(' '));
});</syntaxhighlight>
 
 
Using JavaScript for Automation
(A JavaScriptCore interpreter on macOS with an Automation library).
<syntaxhighlight lang=javascript>(() => {
'use strict';
 
// MAIN ---
return main();
})();</syntaxhighlight>
{{Out}}
<pre>[
 
=={{header|jq}}==
<syntaxhighlight lang=jq>def anagrams:
(reduce .[] as $word (
{table: {}, max: 0}; # state
# The task:
split("\n") | anagrams
</syntaxhighlight>
{{Out}}
<syntaxhighlight lang=sh>
$ jq -M -s -c -R -f anagrams.jq unixdict.txt
["abel","able","bale","bela","elba"]
["elan","lane","lean","lena","neal"]
["evil","levi","live","veil","vile"]
</syntaxhighlight>
 
=={{header|Jsish}}==
From Javascript, nodejs entry.
<syntaxhighlight lang=javascript>/* Anagrams, in Jsish */
var datafile = 'unixdict.txt';
if (console.args[0] == '-more' && Interp.conf('maxArrayList') > 500000)
evil levi live veil vile
=!EXPECTEND!=
*/</syntaxhighlight>
 
{{out}}
=={{header|Julia}}==
{{works with|Julia|1.6}}
<syntaxhighlight lang=julia>url = "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
wordlist = open(readlines, download(url))
 
end
 
println.(anagram(wordlist))</syntaxhighlight>
 
{{out}}
 
=={{header|K}}==
<syntaxhighlight lang=k>{x@&a=|/a:#:'x}{x g@&1<#:'g:={x@<x}'x}0::`unixdict.txt</syntaxhighlight>
 
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang=scala>import java.io.BufferedReader
import java.io.InputStreamReader
import java.net.URL
.filter { it.size == count }
.forEach { println(it) }
}</syntaxhighlight>
 
{{out}}
 
=={{header|Lasso}}==
<syntaxhighlight lang=lasso>local(
anagrams = map,
words = include_url('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt')->split('\n'),
 
#findings -> join('<br />\n')
</syntaxhighlight>
{{out}}
<pre>abel, able, bale, bela, elba
 
=={{header|Liberty BASIC}}==
<syntaxhighlight lang=lb>' count the word list
open "unixdict.txt" for input as #1
while not(eof(#1))
sorted$=sorted$+chrSort$(chr)
next
end function</syntaxhighlight>
 
=={{header|LiveCode}}==
LiveCode could definitely use a sort characters command. As it is this code converts the letters into items and then sorts that. I wrote a merge sort for characters, but the conversion to items, built-in-sort, conversion back to string is about 10% faster, and certainly easier to write.
 
<syntaxhighlight lang=LiveCode>on mouseUp
put mostCommonAnagrams(url "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
end mouseUp
replace comma with empty in X
return X
end itemsToChars</syntaxhighlight>
{{out}}
<pre>abel,able,bale,bela,elba
=={{header|Lua}}==
Lua's core library is very small and does not include built-in network functionality. If a networking library were imported, the local file in the following script could be replaced with the remote dictionary file.
<syntaxhighlight lang=lua>function sort(word)
local bytes = {word:byte(1, -1)}
table.sort(bytes)
print('') -- Finish with a newline.
end
end</syntaxhighlight>
{{out}}
<pre>abel able bale bela elba
 
=={{header|M4}}==
<syntaxhighlight lang=M4>divert(-1)
changequote(`[',`]')
define([for],
_max
for([x],1,_n,[ifelse(_get([count],x),_max,[_get([list],x)
])])</syntaxhighlight>
 
Memory limitations keep this program from working on the full-sized dictionary.

=={{header|Maple}}==
The convert call discards the hashes, which have done their job, and leaves us with a list L of anagram sets.
Finally, we just note the size of the largest sets of anagrams, and pick those off.
<syntaxhighlight lang=Maple>
words := HTTP:-Get( "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt" )[2]: # ignore errors
use StringTools, ListTools in
m := max( map( nops, L ) ); # what is the largest set?
A := select( s -> evalb( nops( s ) = m ), L ); # get the maximal sets of anagrams
</syntaxhighlight>
The result of running this code is
<syntaxhighlight lang=Maple>
A := [{"abel", "able", "bale", "bela", "elba"}, {"angel", "angle", "galen",
"glean", "lange"}, {"alger", "glare", "lager", "large", "regal"}, {"evil",
"levi", "live", "veil", "vile"}, {"caret", "carte", "cater", "crate", "trace"}
, {"elan", "lane", "lean", "lena", "neal"}];
</syntaxhighlight>
 
=={{header|Mathematica}}/{{header|Wolfram Language}}==
Download the dictionary, split the lines, split each word into characters and sort them. Now sort by those sorted letter strings, and find sequences of equal 'letter-hashes'. Return the longest sequences:
<syntaxhighlight lang=Mathematica>list=Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt","Lines"];
text={#,StringJoin@@Sort[Characters[#]]}&/@list;
text=SortBy[text,#[[2]]&];
splits=Split[text,#1[[2]]==#2[[2]]&][[All,All,1]];
maxlen=Max[Length/@splits];
Select[splits,Length[#]==maxlen&]</syntaxhighlight>
gives back:
<syntaxhighlight lang=Mathematica>{{abel,able,bale,bela,elba},{caret,carte,cater,crate,trace},{angel,angle,galen,glean,lange},{alger,glare,lager,large,regal},{elan,lane,lean,lena,neal},{evil,levi,live,veil,vile}}</syntaxhighlight>
An alternative is faster, but requires version 7 (for <code>Gather</code>):
<syntaxhighlight lang=Mathematica>splits = Gather[list, Sort[Characters[#]] == Sort[Characters[#2]] &];
maxlen = Max[Length /@ splits];
Select[splits, Length[#] == maxlen &]</syntaxhighlight>
 
Or, using built-in functions for sorting and gathering elements in lists, it can be implemented as:
<syntaxhighlight lang=Mathematica>anagramGroups = GatherBy[SortBy[GatherBy[list,Sort[Characters[#]] &],Length],Length];
anagramGroups[[-1]]</syntaxhighlight>
Also, Mathematica's own word list is available; replacing the list definition with <code>list = WordData[];</code> and forcing <code>maxlen</code> to 5 yields instead this result:
 
 
Also, with Mathematica 10 it gets really concise:
<syntaxhighlight lang=Mathematica>list=Import["http://wiki.puzzlers.org/pub/wordlists/unixdict.txt","Lines"];
MaximalBy[GatherBy[list, Sort@*Characters], Length]</syntaxhighlight>
 
=={{header|Maxima}}==
<syntaxhighlight lang=maxima>read_file(name) := block([file, s, L], file: openr(name), L: [],
while stringp(s: readline(file)) do L: cons(s, L), close(file), L)$
 
["angel", "angle", "galen", "glean", "lange"],
["caret", "carte", "cater", "crate", "trace"],
["abel", "able", "bale", "bela", "elba"]] */</syntaxhighlight>
 
=={{header|MUMPS}}==
<syntaxhighlight lang=MUMPS>Anagrams New ii,file,longest,most,sorted,word
Set file="unixdict.txt"
Open file:"r" Use file
Quit
 
Do Anagrams</syntaxhighlight>
<pre>
The anagrams with the most variations:
</pre>

=={{header|NetRexx}}==
===Java&ndash;Like===
{{trans|Java}}
<syntaxhighlight lang=NetRexx>/* NetRexx */
options replace format comments java crossref symbols nobinary
 
 
return
</syntaxhighlight>
{{out}}
<pre>
===Rexx&ndash;Like===
Implemented with more NetRexx idioms such as indexed strings, <tt>PARSE</tt> and the NetRexx &quot;built&ndash;in functions&quot;.
<syntaxhighlight lang=NetRexx>/* NetRexx */
options replace format comments java crossref symbols nobinary
 
 
Return
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|NewLisp}}==
<syntaxhighlight lang=NewLisp>
;;; Get the words as a list, splitting at newline
(setq data
;;; Print out only groups of more than 4 words
(map println (filter (fn(x) (> (length x) 4)) (group-by-key)))
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Nim}}==
<syntaxhighlight lang=nim>
import tables, strutils, algorithm
 
 
main()
</syntaxhighlight>
{{out}}
<pre>
=={{header|Oberon-2}}==
Oxford Oberon-2
<syntaxhighlight lang=oberon2>
MODULE Anagrams;
IMPORT Files,Out,In,Strings;
DoProcess("unixdict.txt");
END Anagrams.
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Objeck}}==
<syntaxhighlight lang=objeck>use HTTP;
use Collection;
 
}
}
</syntaxhighlight>
{{out}}
<pre>[abel,able,bale,bela,elba]
 
=={{header|OCaml}}==
<syntaxhighlight lang=ocaml>let explode str =
let l = ref [] in
let n = String.length str in
( List.iter (Printf.printf " %s") lw;
print_newline () )
) h</syntaxhighlight>
 
=={{header|Oforth}}==
 
<syntaxhighlight lang=Oforth>import: mapping
import: collect
import: quicksort
filter( #[ second size m == ] )
apply ( #[ second .cr ] )
;</syntaxhighlight>
 
{{out}}

=={{header|ooRexx}}==
Two versions of this, using different collection classes.
===Version 1: Directory of arrays===
<syntaxhighlight lang=ooRexx>
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
say letters":" list~makestring("l", ", ")
end
</syntaxhighlight>
===Version 2: Using the relation class===
This version appears to be the fastest.
<syntaxhighlight lang=ooRexx>
-- This assumes you've already downloaded the following file and placed it
-- in the current directory: http://wiki.puzzlers.org/pub/wordlists/unixdict.txt
say letters":" words~makestring("l", ", ")
end
</syntaxhighlight>
Timings taken on my laptop:
<pre>
 
=={{header|Oz}}==
<syntaxhighlight lang=oz>declare
%% Helper function
fun {ReadLines Filename}
%% Display result (make sure strings are shown as string, not as number lists)
{Inspector.object configureEntry(widgetShowStrings true)}
{Inspect LargestSets}</syntaxhighlight>
 
=={{header|Pascal}}==
<syntaxhighlight lang=pascal>Program Anagrams;
 
// assumes a local file
AnagramList[i].Destroy;
 
end.</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Perl}}==
<syntaxhighlight lang=perl>use List::Util 'max';
 
my @words = split "\n", do { local( @ARGV, $/ ) = ( 'unixdict.txt' ); <> };
for my $ana (values %anagram) {
print "@$ana\n" if @$ana == $count;
}</syntaxhighlight>
If we calculate <code>$max</code>, then we don't need the CPAN module:
<syntaxhighlight lang=perl>push @{$anagram{ join '' => sort split '' }}, $_ for @words;
$max > @$_ or $max = @$_ for values %anagram;
@$_ == $max and print "@$_\n" for values %anagram;</syntaxhighlight>
{{out}}
alger glare lager large regal
=={{header|Phix}}==
copied from Euphoria and cleaned up slightly
<!--<syntaxhighlight lang=Phix>-->
<span style="color: #004080;">integer</span> <span style="color: #000000;">fn</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">open</span><span style="color: #0000FF;">(</span><span style="color: #008000;">"demo/unixdict.txt"</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"r"</span><span style="color: #0000FF;">)</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">words</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">anagrams</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{},</span> <span style="color: #000000;">last</span><span style="color: #0000FF;">=</span><span style="color: #008000;">""</span><span style="color: #0000FF;">,</span> <span style="color: #000000;">letters</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">for</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
 
=={{header|Phixmonti}}==
<syntaxhighlight lang=Phixmonti>include ..\Utilitys.pmt
 
"unixdict.txt" "r" fopen var f
len for
get len maxlen == if ? else drop endif
endfor</syntaxhighlight>
 
Another solution:
 
<syntaxhighlight lang=Phixmonti>include ..\Utilitys.pmt
 
( )
len for
get len maxlen == if ? else drop endif
endfor</syntaxhighlight>
 
{{out}}<pre>["abel", "able", "bale", "bela", "elba"]
 
=={{header|PHP}}==
<syntaxhighlight lang=php><?php
$words = explode("\n", file_get_contents('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'));
foreach ($words as $word) {
if (count($ana) == $best)
print_r($ana);
?></syntaxhighlight>
 
=={{header|Picat}}==
Using foreach loop:
<syntaxhighlight lang=Picat>go =>
Dict = new_map(),
foreach(Line in read_file_lines("unixdict.txt"))
println(Value)
end,
nl.</syntaxhighlight>
 
{{out}}
 
Same idea, but shorter version by (mis)using list comprehensions.
<syntaxhighlight lang=Picat>go2 =>
M = new_map(),
_ = [_:W in read_file_lines("unixdict.txt"),S=sort(W),M.put(S,M.get(S,"")++[W])],
X = max([V.len : _K=V in M]),
println(maxLen=X),
[V : _=V in M, V.len=X].println.</syntaxhighlight>
 
{{out}}
=={{header|PicoLisp}}==
A straight-forward implementation using 'group' takes 48 seconds on a 1.7 GHz Pentium:
<syntaxhighlight lang=PicoLisp>(flip
(by length sort
(by '((L) (sort (copy L))) group
(in "unixdict.txt" (make (while (line) (link @)))) ) ) )</syntaxhighlight>
Using a binary tree with the 'idx' function, it takes only 0.42 seconds on the same machine, a factor of 100 faster:
<syntaxhighlight lang=PicoLisp>(let Words NIL
(in "unixdict.txt"
(while (line)
(push (car @) Word)
(set Key (list Word)) ) ) ) )
(flip (by length sort (mapcar val (idx 'Words)))) )</syntaxhighlight>
{{out}}
<pre>-> (("vile" "veil" "live" "levi" "evil") ("trace" "crate" "cater" "carte" "caret
 
=={{header|PL/I}}==
<syntaxhighlight lang=PL/I>/* Search a list of words, finding those having the same letters. */
 
word_test: proc options (main);
end is_anagram;
 
end word_test;</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Pointless}}==
<syntaxhighlight lang=pointless>output =
readFileLines("unixdict.txt")
|> reduce(logWord, {})
getMax(groups) =
groups |> filter(g => length(g) == maxLength)
where maxLength = groups |> map(length) |> maximum</syntaxhighlight>
 
{{out}}
=={{header|PowerShell}}==
{{works with|PowerShell|2}}
<syntaxhighlight lang=powershell>$c = New-Object Net.WebClient
$words = -split ($c.DownloadString('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'))
$top_anagrams = $words `
| Select-Object -First 1
 
$top_anagrams.Group | ForEach-Object { $_.Group -join ', ' }</syntaxhighlight>
{{out}}
<pre>abel, able, bale, bela, elba
evil, levi, live, veil, vile</pre>
Another way with more .Net methods is quite a different style, but drops the runtime from 2 minutes to 1.5 seconds:
<syntaxhighlight lang=powershell>$Timer = [System.Diagnostics.Stopwatch]::StartNew()
 
$uri = 'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'
[string]::join('', $entry.Value)
}
}</syntaxhighlight>
 
=={{header|Processing}}==
<syntaxhighlight lang=processing>import java.util.Map;
 
void setup() {
}
}
}</syntaxhighlight>
 
{{out}}
=={{header|Prolog}}==
{{works with|SWI-Prolog|5.10.0}}
<syntaxhighlight lang=Prolog>:- use_module(library( http/http_open )).
 
anagrams:-
length(V1, L1),
length(V2, L2),
( L1 < L2 -> R = >; L1 > L2 -> R = <; compare(R, K1, K2)).</syntaxhighlight>
The result is
<pre>[abel,able,bale,bela,elba]
=={{header|PureBasic}}==
{{works with|PureBasic|4.4}}
<syntaxhighlight lang=PureBasic>InitNetwork() ;
OpenConsole()
PrintN("Press any key"): Repeat: Until Inkey() <> ""
EndIf
EndIf</syntaxhighlight>
{{out}}
<pre>evil, levi, live, veil, vile
</pre>

=={{header|Python}}==
===Python 3.X Using defaultdict===
Python 3.2 shell input (IDLE)
<syntaxhighlight lang=python>>>> import urllib.request
>>> from collections import defaultdict
>>> words = urllib.request.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
>>> for ana in anagram.values():
if len(ana) >= count:
print ([x.decode() for x in ana])</syntaxhighlight>
 
===Python 2.7 version===
Python 2.7 shell input (IDLE)
<syntaxhighlight lang=python>>>> import urllib
>>> from collections import defaultdict
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
>>> count
5
>>></syntaxhighlight>
 
===Python: Using groupby===
{{trans|Haskell}}
{{works with|Python|2.6}} sort and then group using groupby()
<syntaxhighlight lang=python>>>> import urllib, itertools
>>> words = urllib.urlopen('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt').read().split()
>>> len(words)
>>> count
5
>>></syntaxhighlight>
 
 
Or, disaggregating, speeding up a bit by avoiding the slightly expensive use of ''sorted'' as a key, updating for Python 3, and using a local ''unixdict.txt'':
{{Works with|Python|3.7}}
<syntaxhighlight lang=python>'''Largest anagram groups found in list of words.'''
 
from os.path import expanduser
# MAIN ---
if __name__ == '__main__':
main()</syntaxhighlight>
{{Out}}
<pre>caret carte cater crate creat creta react recta trace
 
=={{header|QB64}}==
<syntaxhighlight lang=QB64>
$CHECKING:OFF
' Warning: Keep the above line commented out until you know your newly edited code works.
IF i < Finish THEN QSort i, Finish
END SUB
</syntaxhighlight>
 
'''2nd solution (by Steve McNeill):'''
<syntaxhighlight lang=QB64>
$CHECKING:OFF
SCREEN _NEWIMAGE(640, 480, 32)
LOOP UNTIL gap = 1 AND swapped = 0
END SUB
</syntaxhighlight>
 
'''Output:'''
<syntaxhighlight lang=QB64>
LOOPER: 7134 executions from start to finish, in one second.
Note, this is including disk access for new data each time.
caret, trace, crate, carte, cater
bale, abel, able, elba, bela
</syntaxhighlight>
 
=={{header|Quackery}}==
 
<syntaxhighlight lang=Quackery> $ "rosetta/unixdict.txt" sharefile drop nest$
[] swap witheach
[ dup sort
else drop ]
drop cr ]
drop</syntaxhighlight>
 
{{out}}
 
=={{header|R}}==
<syntaxhighlight lang=R>words <- readLines("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
word_group <- sapply(
strsplit(words, split=""), # this will split all words to single letters...
"angel, angle, galen, glean, lange" "alger, glare, lager, large, regal"
aeln eilv
"elan, lane, lean, lena, neal" "evil, levi, live, veil, vile" </syntaxhighlight>
 
=={{header|Racket}}==
<syntaxhighlight lang=racket>
#lang racket
 
 
(get-maxes (hash-words (get-lines "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")))
</syntaxhighlight>
{{out}}
<pre>
Line 7,389:
 
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang=perl6>my @anagrams = 'unixdict.txt'.IO.words.classify(*.comb.sort.join).values;
my $max = @anagrams».elems.max;
 
.put for @anagrams.grep(*.elems == $max);</syntaxhighlight>
 
{{out}}
 
{{works with|Rakudo|2016.08}}
<syntaxhighlight lang=perl6>.put for # print each element of the array made this way:
'unixdict.txt'.IO.words # load words from file
.classify(*.comb.sort.join) # group by common anagram
.classify(*.value.elems) # group by number of anagrams in a group
.max(*.key).value # get the group with highest number of anagrams
.map(*.value) # get all groups of anagrams in the group just selected</syntaxhighlight>
 
=={{header|RapidQ}}==
<syntaxhighlight lang=vb>
dim x as integer, y as integer
dim SortX as integer
End
 
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Rascal}}==
<syntaxhighlight lang=rascal>import Prelude;
 
list[str] OrderedRep(str word){
longest = max([size(group) | group <- range(AnagramMap)]);
return [AnagramMap[rep]| rep <- AnagramMap, size(AnagramMap[rep]) == longest];
}</syntaxhighlight>
Returns:
<syntaxhighlight lang=rascal>value: [
{"glean","galen","lange","angle","angel"},
{"glare","lager","regal","large","alger"},
{"able","bale","abel","bela","elba"},
{"levi","live","vile","evil","veil"}
]</syntaxhighlight>
 
=={{header|Red}}==
<syntaxhighlight lang=Red>Red []
 
m: make map! [] 25000
]
foreach v values-of m [ if maxx = length? v [print v] ]
</syntaxhighlight>
{{out}}
<pre>abel able bale bela elba
</pre>

=={{header|REXX}}==
This version doesn't assume that the dictionary is in alphabetical order, &nbsp; nor does it assume the
<br>words are in any specific case &nbsp; (lower/upper/mixed).
<syntaxhighlight lang=rexx>/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
/*reassemble word with sorted letters. */
return @.a || @.b || @.c || @.d || @.e || @.f||@.g||@.h||@.i||@.j||@.k||@.l||@.m||,
@.n || @.o || @.p || @.q || @.r || @.s||@.t||@.u||@.v||@.w||@.x||@.y||@.z</syntaxhighlight>
Programming note: &nbsp; the long (wide) assignment for &nbsp; &nbsp; '''return @.a||'''... &nbsp; &nbsp; could've been coded as an elegant &nbsp; '''do''' &nbsp; loop instead of hardcoding 26 letters,<br>but since the dictionary (word list) is rather large, a rather expanded method was used for speed.
 
===version 1.2, optimized===
This optimized version eliminates the &nbsp; '''sortA''' &nbsp; subroutine and puts that subroutine's code in-line.
<syntaxhighlight lang=rexx>/*REXX program finds words with the largest set of anagrams (of the same size). */
iFID= 'unixdict.txt' /*the dictionary input File IDentifier.*/
$=; !.=; ww=0; uw=0; most=0 /*initialize a bunch of REXX variables.*/
/*reassemble word with sorted letters. */
return @.a || @.b || @.c || @.d || @.e || @.f||@.g||@.h||@.i||@.j||@.k||@.l||@.m||,
@.n || @.o || @.p || @.q || @.r || @.s||@.t||@.u||@.v||@.w||@.x||@.y||@.z</syntaxhighlight>
{{out|output|text=&nbsp; is the same as REXX version 1.1}}
 
===annotated version using &nbsp; PARSE===
(This algorithm actually utilizes a &nbsp; ''bin'' &nbsp; sort, &nbsp; one bin for each Latin letter.)
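The bin sort can be sketched in Python: one bin per Latin letter, emptied back out in alphabetical order (an illustration only, not part of the REXX entry):
<syntaxhighlight lang=python>def bin_sorted(word):
    bins = dict.fromkeys("abcdefghijklmnopqrstuvwxyz", 0)
    for ch in word.lower():
        bins[ch] += 1                              # drop each letter into its bin
    return "".join(c * bins[c] for c in sorted(bins))

print(bin_sorted("Halloween"))                     # aeehllnow
</syntaxhighlight>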
<syntaxhighlight lang=rexx>u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
/*another: u = translate(u) */
/*Note: the ? is prefixed to the letter to avoid */
/*collisions with other REXX one-character variables.*/
say 'z=' z</syntaxhighlight>
{{out|output|:}}
<pre>
 
===annotated version using a &nbsp; DO &nbsp; loop===
<syntaxhighlight lang=rexx>u= 'Halloween' /*word to be sorted by (Latin) letter.*/
upper u /*fast method to uppercase a variable. */
L=length(u) /*get the length of the word (in bytes)*/
_.?n||_.?o||_.?p||_.?q||_.?r||_.?s||_.?t||_.?u||_.?v||_.?w||_.?x||_.?y||_.?z
 
say 'z=' z</syntaxhighlight>
{{out|output|:}}
<pre>
 
===version 2===
<syntaxhighlight lang=rexx>/*REXX program finds words with the largest set of anagrams (same size)
* 07.08.2013 Walter Pachl
* sorta for word compression courtesy Gerard Schildberger,
End
Return c.a||c.b||c.c||c.d||c.e||c.f||c.g||c.h||c.i||c.j||c.k||c.l||,
c.m||c.n||c.o||c.p||c.q||c.r||c.s||c.t||c.u||c.v||c.w||c.x||c.y||c.z</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Ring}}==
<syntaxhighlight lang=ring>
# Project : Anagrams
 
end
return cnt
</syntaxhighlight>
Output:
<pre>
 
=={{header|Ruby}}==
<syntaxhighlight lang=ruby>require 'open-uri'
 
anagram = Hash.new {|hash, key| hash[key] = []} # map sorted chars to anagrams
p ana
end
end</syntaxhighlight>
{{out}}
<pre>
 
Short version (with lexically ordered result).
<syntaxhighlight lang=ruby>require 'open-uri'
 
anagrams = open('http://wiki.puzzlers.org/pub/wordlists/unixdict.txt'){|f| f.read.split.group_by{|w| w.each_char.sort} }
anagrams.values.group_by(&:size).max.last.each{|group| puts group.join(", ") }
</syntaxhighlight>
{{Out}}
<pre>
 
=={{header|Run BASIC}}==
<syntaxhighlight lang=runbasic>sqliteconnect #mem, ":memory:"
mem$ = "CREATE TABLE anti(gram,ordr);
CREATE INDEX ord ON anti(ordr)"
print
next i
end</syntaxhighlight>
<pre>
abel able bale bela elba
</pre>

=={{header|Rust}}==
Unicode is hard so the solution depends on what you consider to be an anagram: two strings that have the same bytes, the same codepoints, or the same graphemes. The first two are easily accomplished in Rust proper, but the latter requires an external library. Graphemes are probably the most correct way, but it is also the least efficient since graphemes are variable size and thus require a heap allocation per grapheme.
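A small illustration of why the choice matters (a Python sketch, not part of the Rust entry): the same visible text can be one codepoint or two, so a codepoint-level anagram key can disagree with a grapheme-level one.
<syntaxhighlight lang=python>import unicodedata

nfc = unicodedata.normalize("NFC", "café")   # 'é' as a single codepoint
nfd = unicodedata.normalize("NFD", "café")   # 'e' followed by a combining accent

print(len(nfc), len(nfd))                    # 4 5
print(sorted(nfc) == sorted(nfd))            # False: codepoint-level keys differ for identical graphemes
</syntaxhighlight>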
 
<syntaxhighlight lang=rust>use std::collections::HashMap;
use std::fs::File;
use std::io::{BufRead,BufReader};
}
}
}</syntaxhighlight>
{{out}}
<pre>
If we assume an ASCII string, we can map each character to a prime number and multiply these together to create a number which uniquely maps to each anagram.
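For instance, with 'a'→2, 'b'→3, 'c'→5 and so on, every permutation of the same letters multiplies out to the same number (a Python sketch of the idea, not part of the Rust entry):
<syntaxhighlight lang=python>from math import prod
from string import ascii_lowercase

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,
          47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101]
PRIME_OF = dict(zip(ascii_lowercase, PRIMES))

def key(word):
    return prod(PRIME_OF[c] for c in word)   # multiplication is commutative, so letter order is irrelevant

print(key("abel") == key("bale") == key("elba"))   # True
</syntaxhighlight>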
 
<syntaxhighlight lang=rust>use std::collections::HashMap;
use std::path::Path;
use std::io::{self, BufRead, BufReader};
}
Ok(map.into_iter().map(|(_, entry)| entry).collect())
}</syntaxhighlight>
 
=={{header|Scala}}==
<syntaxhighlight lang=scala>val src = io.Source fromURL "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt"
val vls = src.getLines.toList.groupBy(_.sorted).values
val max = vls.map(_.size).max
vls filter (_.size == max) map (_ mkString " ") mkString "\n"</syntaxhighlight>
{{out}}
<pre>
----
Another take:
<syntaxhighlight lang=scala>Source
.fromURL("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt").getLines.toList
.groupBy(_.sorted).values
.groupBy(_.size).maxBy(_._1)._2
.map(_.mkString("\t"))
.foreach(println)</syntaxhighlight>
{{out}}
<pre>
</pre>

=={{header|Scheme}}==
Uses two SRFI libraries: SRFI 125 for hash tables and SRFI 132 for sorting.
 
<syntaxhighlight lang=scheme>
(import (scheme base)
(scheme char)
(map (lambda (grp) (list-sort string<? grp))
(largest-groups (read-groups)))))
</syntaxhighlight>
 
{{out}}
 
=={{header|Seed7}}==
<syntaxhighlight lang=seed7>$ include "seed7_05.s7i";
include "gethttp.s7i";
include "strifile.s7i";
end if;
end for;
end func;</syntaxhighlight>
 
{{out}}
 
=={{header|SETL}}==
<syntaxhighlight lang=SETL>h := open('unixdict.txt', "r");
anagrams := {};
while not eof(h) loop
end loop;
return A;
end procedure;</syntaxhighlight>
{{out}}
<pre>{abel able bale bela elba}
 
=={{header|Sidef}}==
<syntaxhighlight lang=ruby>func main(file) {
file.open_r(\var fh, \var err) ->
|| die "Can't open file `#{file}' for reading: #{err}\n";
}
 
main(%f'/tmp/unixdict.txt');</syntaxhighlight>
{{out}}
<pre>alger glare lager large regal
 
=={{header|Simula}}==
<syntaxhighlight lang=simula>COMMENT COMPILE WITH
$ cim -m64 anagrams-hashmap.sim
;
 
END
</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Smalltalk}}==
<syntaxhighlight lang=Smalltalk>list:= (FillInTheBlank request: 'myMessageBoxTitle') subStrings: String crlf.
dict:= Dictionary new.
list do: [:val|
add: val.
].
sorted:=dict asSortedCollection: [:a :b| a size > b size].</syntaxhighlight>
Documentation:
<pre>
{{works with|Smalltalk/X}}
Instead of asking for the strings, read the file:
<syntaxhighlight lang=smalltalk>d := Dictionary new.
'unixdict.txt' asFilename
readingLinesDo:[:eachWord |
sortBySelector:#size)
reverse
do:[:s | s printCR]</syntaxhighlight>
{{out}}
<pre>
...</pre>
not sure if getting the dictionary via http is part of the task; if so, replace the file-reading with:
<syntaxhighlight lang=smalltalk>'http://wiki.puzzlers.org/pub/wordlists/unixdict.txt' asURI contents asCollectionOfLines do:[:eachWord | ...</syntaxhighlight>
 
=={{header|SNOBOL4}}==
{{works with|Macro Spitbol}}
Note: unixdict.txt is passed in locally via STDIN. Newlines must be converted for Win/DOS environment.
<syntaxhighlight lang=SNOBOL4>* # Sort letters of word
define('sortw(str)a,i,j') :(sortw_end)
sortw a = array(size(str))
L3 j = j + 1; key = kv<j,1>; val = kv<j,2> :f(end)
output = eq(countw(val),max) key ': ' val :(L3)
end</syntaxhighlight>
{{out}}
<pre>abel: abel able bale bela elba
 
=={{header|Stata}}==
<syntaxhighlight lang=stata>import delimited http://wiki.puzzlers.org/pub/wordlists/unixdict.txt, clear
mata
a=st_sdata(.,.)
reshape wide v1, i(k) j(group) string
drop k
list, noobs noheader</syntaxhighlight>
 
'''Output'''
 
=={{header|SuperCollider}}==
<syntaxhighlight lang=SuperCollider>(
var text, words, sorted, dict = IdentityDictionary.new, findMax;
File.use("unixdict.txt".resolveRelative, "r", { |f| text = f.readAllString });
};
findMax.(dict)
)</syntaxhighlight>
 
Answers:
<syntaxhighlight lang=SuperCollider>[ [ angel, angle, galen, glean, lange ], [ caret, carte, cater, crate, trace ], [ elan, lane, lean, lena, neal ], [ evil, levi, live, veil, vile ], [ alger, glare, lager, large, regal ] ]</syntaxhighlight>
 
=={{header|Swift}}==
{{works with|Swift 2.0}}
 
<syntaxhighlight lang=swift>import Foundation
 
let wordsURL = NSURL(string: "http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")!
print("set \(i): \(thislist.sort())")
}
</syntaxhighlight>
 
{{out}}
 
=={{header|Tcl}}==
<syntaxhighlight lang=tcl>package require Tcl 8.5
package require http
 
puts $anagrams($key)
}
}</syntaxhighlight>
{{out}}
<pre>evil levi live veil vile
</pre>

=={{header|Transd}}==
Works with Transd v0.43.
 
<syntaxhighlight lang=scheme>#lang transd
 
MainModule: {
)
))
}</syntaxhighlight>{{out}}
<pre>
[[abel, able, bale, bela, elba],
 
=={{header|TUSCRIPT}}==
<syntaxhighlight lang=tuscript>$$ MODE TUSCRIPT,{}
requestdata = REQUEST ("http://wiki.puzzlers.org/pub/wordlists/unixdict.txt")
 
PRINT cs," ",f,": ",a
ENDLOOP
ENDCOMPILE</syntaxhighlight>
{{out}}
<pre>
Line 8,886:
Process substitutions eliminate the need for command pipelines.
 
<syntaxhighlight lang=bash>http_get_body() {
local host=$1
local uri=$2
done
 
printf "%s\n" "${maxwords[@]}"</syntaxhighlight>
 
{{output}}

=={{header|Ursala}}==
The algorithm is to group the words together that are made from the same unordered lists of letters, then collect the groups together that have the same number of words in
them, and then show the collection associated with the highest number.
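The described pipeline reads much the same way in Python (a sketch assuming a local unixdict.txt, not part of the Ursala entry): group the words by their unordered letters, bucket those groups by size, and show the bucket for the largest size.
<syntaxhighlight lang=python>from collections import defaultdict

by_letters = defaultdict(list)
with open("unixdict.txt") as f:
    for word in f.read().split():
        by_letters["".join(sorted(word))].append(word)

by_size = defaultdict(list)                  # collect the anagram groups by how many words they hold
for group in by_letters.values():
    by_size[len(group)].append(group)

for group in by_size[max(by_size)]:
    print(" ".join(group))
</syntaxhighlight>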
<syntaxhighlight lang=Ursala>#import std
 
#show+
 
anagrams = mat` * leql$^&h eql|=@rK2tFlSS ^(~&,-<&)* unixdict_dot_txt</syntaxhighlight>
{{out}}
<pre>
 
=={{header|VBA}}==
<syntaxhighlight lang=vb>
Option Explicit
 
If (mini < j) Then Call SortTwoDimArray(myArr, mini, j, Colonne)
If (i < Maxi) Then Call SortTwoDimArray(myArr, i, Maxi, Colonne)
End Sub</syntaxhighlight>
{{out}}
<pre>25104 words, in the dictionary
=={{header|VBScript}}==
A little convoluted, uses a dictionary and a recordset...
<syntaxhighlight lang=vb>
Const adInteger = 3
Const adVarChar = 200
wend
rs.close
</syntaxhighlight>
The output:
<pre>
</pre>

=={{header|Vedit macro language}}==
 
The word list is expected to be in the same directory as the script.
<syntaxhighlight lang=vedit>File_Open("|(PATH_ONLY)\unixdict.txt")
 
Repeat(ALL) {
Ins_Char(#8, OVERWRITE)
}
return</syntaxhighlight>
{{out}}
<pre>
 
=={{header|Visual Basic .NET}}==
<syntaxhighlight lang=vbnet>Imports System.IO
Imports System.Collections.ObjectModel
 
End Function
 
End Module</syntaxhighlight>
{{out}}
<PRE>
=={{header|Vlang}}==
{{trans|Wren}}
<syntaxhighlight lang=vlang>import os
 
fn main(){
}
}
}</syntaxhighlight>
 
{{out}}
=={{header|Wren}}==
{{libheader|Wren-sort}}
<syntaxhighlight lang=ecmascript>import "io" for File
import "/sort" for Sort
 
for (key in wordMap.keys) {
if (wordMap[key].count == most) System.print(wordMap[key])
}</syntaxhighlight>
 
{{out}}
 
=={{header|Yabasic}}==
<syntaxhighlight lang=Yabasic>filename$ = "unixdict.txt"
maxw = 0 : c = 0 : dimens(c)
i = 0
d(j,p) = c
end if
end sub</syntaxhighlight>
 
=={{header|zkl}}==
<syntaxhighlight lang=zkl>File("unixdict.txt").read(*) // dictionary file to blob, copied from web
// blob to dictionary: key is word "fuzzed", values are anagram words
.pump(Void,T(fcn(w,d){
"%d:%s: %s".fmt(v.len(),zz.strip(),
v.apply("strip").concat(","))
});</syntaxhighlight>
{{out}}
<pre>
</pre>
In the case where it is desirable to get the dictionary from the web, use this code:
<syntaxhighlight lang=zkl>URL:="http://wiki.puzzlers.org/pub/wordlists/unixdict.txt";
var ZC=Import("zklCurl");
unixdict:=ZC().get(URL); //--> T(Data,bytes of header, bytes of trailer)
unixdict=unixdict[0].del(0,unixdict[1]); // remove HTTP header
File("unixdict.txt","w").write(unixdict);</syntaxhighlight>
 
{{omit from|6502 Assembly|unixdict.txt is much larger than the CPU's address space.}}