S-expressions

 
The reader should be able to read the following input
<langsyntaxhighlight lang="lisp">((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))</langsyntaxhighlight>
and turn it into a native data structure (see the [[#Pike|Pike]], [[#Python|Python]] and [[#Ruby|Ruby]] implementations for examples of native data structures).
 
Let the writer produce pretty printed output with indenting and line-breaks.
<br><br>
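
As a purely editorial illustration of the round trip described above (it is not one of the task entries below, and it does no escape handling), here is a minimal Python sketch that reads the sample input into nested lists and writes the structure back out with indentation. The <code>Symbol</code> wrapper is a hypothetical helper used only to keep unquoted atoms distinct from quoted strings.

<syntaxhighlight lang="python">import re

class Symbol(str):
    """Bare (unquoted) atom; kept distinct from a quoted string."""
    def __repr__(self):
        return str(self)

# quoted string | parenthesis | any other run of non-delimiter characters
TOKEN = re.compile(r'"[^"]*"|[()]|[^\s()"]+')

def parse(text):
    """Read one s-expression from text into nested Python lists."""
    tokens = TOKEN.findall(text)
    pos = 0

    def atom(tok):
        if tok.startswith('"'):
            return tok[1:-1]          # quoted string (escapes not handled)
        for cast in (int, float):
            try:
                return cast(tok)
            except ValueError:
                pass
        return Symbol(tok)            # anything else is a bare symbol

    def expr():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok != '(':
            return atom(tok)
        items = []
        while tokens[pos] != ')':
            items.append(expr())
        pos += 1                      # consume the closing ')'
        return items

    return expr()

def write(node, indent=0):
    """Pretty-print the native structure back out as an s-expression."""
    pad = ' ' * indent
    if isinstance(node, list):
        inner = '\n'.join(write(x, indent + 2) for x in node)
        return f'{pad}(\n{inner}\n{pad})'
    if isinstance(node, Symbol) or not isinstance(node, str):
        return f'{pad}{node}'
    return f'{pad}"{node}"'

sample = '((data "quoted data" 123 4.5)\n (data (!@# (4.5) "(more" "data)")))'
tree = parse(sample)
print(tree)         # [[data, 'quoted data', 123, 4.5], [data, [!@#, [4.5], '(more', 'data)']]]
print(write(tree))</syntaxhighlight>

The nested lists this sketch produces are the kind of "native data structure" the task asks for; each full entry below builds an equivalent structure in its own language.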
 
=={{header|11l}}==
{{trans|Nim}}
 
<syntaxhighlight lang="11l">T Token
T.enum Kind
INT
FLOAT
STRING
IDENT
LPAR
RPAR
END
 
Kind kind
String val
 
F (kind, val = ‘’)
.kind = kind
.val = val
 
F lex(input_str)
[Token] result
V pos = 0
 
F current()
R I @pos < @input_str.len {@input_str[@pos]} E Char("\0")
 
L pos < input_str.len
V ch = input_str[pos]
I ch == ‘(’
pos++
result.append(Token(Token.Kind.LPAR))
E I ch == ‘)’
pos++
result.append(Token(Token.Kind.RPAR))
E I ch C ‘0’..‘9’
V num = ‘’
V kind = Token.Kind.INT
L current() C ‘0’..‘9’
num ‘’= current()
pos++
I current() == ‘.’
num ‘’= current()
kind = FLOAT
pos++
L current() C ‘0’..‘9’
num ‘’= current()
pos++
result.append(Token(kind, num))
E I ch C (‘ ’, "\t", "\n", "\r")
pos++
E I ch == ‘"’
V str = ‘’
pos++
L current() != ‘"’
str ‘’= current()
pos++
pos++
result.append(Token(Token.Kind.STRING, str))
E
V BannedChars = Set([‘ ’, "\t", ‘"’, ‘(’, ‘)’, ‘;’])
V ident = ‘’
L current() !C BannedChars
ident ‘’= current()
pos++
result.append(Token(Token.Kind.IDENT, ident))
 
result.append(Token(Token.Kind.END))
R result
 
F indent(s, count)
R (count * ‘ ’)‘’s.replace("\n", "\n"(count * ‘ ’))
 
T SExpr
T.enum Kind
INT
FLOAT
STRING
IDENT
LIST
 
Kind kind
String val
[SExpr] children
 
F (kind, val = ‘’)
.kind = kind
.val = val
 
F to_str()
I .kind C (SExpr.Kind.INT, SExpr.Kind.FLOAT, SExpr.Kind.IDENT)
R .val
E I .kind == STRING
R ‘"’(.val)‘"’
E I .kind == LIST
V result = ‘(’
L(i, ex) enumerate(.children)
I ex.kind == LIST & ex.children.len > 1
result ‘’= "\n"
result ‘’= indent(ex.to_str(), 2)
E
I i > 0
result ‘’= ‘ ’
result ‘’= ex.to_str()
R result‘)’
assert(0B)
 
V input_str = ‘
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
V tokens = lex(input_str)
V pos = 0
 
F current()
R I :pos < :tokens.len {:tokens[:pos]} E Token(Token.Kind.END)
 
F parse() -> SExpr
V token = current()
:pos++
I token.kind == INT
R SExpr(SExpr.Kind.INT, token.val)
E I token.kind == FLOAT
R SExpr(SExpr.Kind.FLOAT, token.val)
E I token.kind == STRING
R SExpr(SExpr.Kind.STRING, token.val)
E I token.kind == IDENT
R SExpr(SExpr.Kind.IDENT, token.val)
E I token.kind == LPAR
V result = SExpr(SExpr.Kind.LIST)
L current().kind !C (Token.Kind.RPAR, Token.Kind.END)
result.children.append(parse())
assert(current().kind != END, ‘Missing right paren ')'’)
:pos++
R result
assert(0B)
 
print(parse().to_str())</syntaxhighlight>
 
{{out}}
<pre>
(
  (data "quoted data" 123 4.5)
  (data
    (!@# (4.5) "(more" "data)")))
</pre>
 
=={{header|Ada}}==
Specification of package S_Expr:
 
<langsyntaxhighlight Adalang="ada">with Ada.Strings.Unbounded;
private with Ada.Containers.Indefinite_Vectors;
 
end record;
 
end S_Expr;</syntaxhighlight>
 
The implementation of S_Expr:
 
<langsyntaxhighlight Adalang="ada">with Ada.Integer_Text_IO, Ada.Float_Text_IO;
 
package body S_Expr is
end Print;
 
end S_Expr;</syntaxhighlight>
 
Specification and Implementation of S_Expr.Parser (a child package of S_Expr):
 
<langsyntaxhighlight Adalang="ada">generic -- child of a generic package must be a generic unit
package S_Expr.Parser is
 
-- the result of a parse process is always a list of expressions
 
end S_Expr.Parser;</syntaxhighlight>
 
<langsyntaxhighlight Adalang="ada">with Ada.Integer_Text_IO, Ada.Float_Text_IO;
 
package body S_Expr.Parser is
end Parse;
 
end S_Expr.Parser;</syntaxhighlight>
 
The main program Test_S_Expr:
 
<langsyntaxhighlight Adalang="ada">with S_Expr.Parser, Ada.Text_IO;
 
procedure Test_S_Expr is
Expression_List.First.Print(Indention => 0);
-- Parse will output a list of S-Expressions. We need the first Expression.
end Test_S_Expr;</syntaxhighlight>
 
{{out}}
 
=={{header|ALGOL 68}}==
<langsyntaxhighlight lang="algol68"># S-Expressions #
CHAR nl = REPR 10;
# mode representing an S-expression #
+ nl
)
)</syntaxhighlight>
{{out}}
<pre>
)
</pre>
 
=={{header|APL}}==
{{works with|Dyalog APL}}
 
These are two functions: <code>sexp</code> parses an S-expression, and
<code>pretty</code> pretty-prints a parsed expression.
 
The S-expression is represented as a nested APL vector, where every
item is a tuple consisting of a number representing the type, and
the value of the item.
 
The simple types are string (1), number (2), and atom (3); in these
cases the value is a character vector (for string and atom) or a number.
Type 0 is a list, and the value is a vector of items.
 
As an example, this is how a list may be defined in APL itself:
 
<pre>
pretty⊂(0((3 'Hi')(3 'Bye')(1 'A string')(0((3 'Depth')(2 42)))))
(
Hi
Bye
"A string"
(
Depth
42
)
)
</pre>
 
The following is the result of parsing and then prettyprinting the given input:
 
<pre>
⍴r
65
r
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
sexp r
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│┌─┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┐│
││0│┌──────────────────────────────────────────────┬──────────────────────────────────────────────────────────────┐││
││ ││┌─┬──────────────────────────────────────────┐│┌─┬──────────────────────────────────────────────────────────┐│││
││ │││0│┌────────┬───────────────┬───────┬───────┐│││0│┌────────┬───────────────────────────────────────────────┐││││
││ │││ ││┌─┬────┐│┌─┬───────────┐│┌─┬───┐│┌─┬───┐││││ ││┌─┬────┐│┌─┬───────────────────────────────────────────┐│││││
││ │││ │││3│data│││1│quoted data│││2│123│││2│4.5│││││ │││3│data│││0│┌───────┬─────────────┬─────────┬─────────┐││││││
││ │││ ││└─┴────┘│└─┴───────────┘│└─┴───┘│└─┴───┘││││ ││└─┴────┘││ ││┌─┬───┐│┌─┬─────────┐│┌─┬─────┐│┌─┬─────┐│││││││
││ │││ │└────────┴───────────────┴───────┴───────┘│││ ││ ││ │││3│!@#│││0│┌───────┐│││1│(more│││1│data)││││││││
││ ││└─┴──────────────────────────────────────────┘││ ││ ││ ││└─┴───┘││ ││┌─┬───┐│││└─┴─────┘│└─┴─────┘│││││││
││ ││ ││ ││ ││ ││ ││ │││2│4.5││││ │ │││││││
││ ││ ││ ││ ││ ││ ││ ││└─┴───┘│││ │ │││││││
││ ││ ││ ││ ││ ││ ││ │└───────┘││ │ │││││││
││ ││ ││ ││ ││ ││ │└─┴─────────┘│ │ │││││││
││ ││ ││ ││ ││ │└───────┴─────────────┴─────────┴─────────┘││││││
││ ││ ││ ││ │└─┴───────────────────────────────────────────┘│││││
││ ││ ││ │└────────┴───────────────────────────────────────────────┘││││
││ ││ │└─┴──────────────────────────────────────────────────────────┘│││
││ │└──────────────────────────────────────────────┴──────────────────────────────────────────────────────────────┘││
│└─┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┘│
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
pretty sexp r
(
(
data
"quoted data"
123
4.5
)
(
data
(
!@#
(
4.5
)
"(more"
"data)"
)
)
)
</pre>
 
 
 
<syntaxhighlight lang="apl">sexp←{
wspace←' ',⎕TC ⍝ whitespace is space, tab, cr, lf
 
⍝ turn string into number if possible
num←{
0=≢⍵:⍬ ⍝ empty = nope
(⊃⍵)∊'-¯':-∇1↓⍵ ⍝ negative?
(1≥⍵+.='.')∧⍵∧.∊⊂⎕D,'.':⍎⍵ ⍝ number: all digits and 0 or 1 points
⍬ ⍝ otherwise, nope.
}
 
⍝ tokenize (0=brackets, 1=strings, 2=atoms)
tok←{
d←(~∧\⍵∊wspace)/⍵ ⍝ ignore leading whitespace
d≡'':d ⍝ empty input = empty output
s←1↑d ⋄ r←1↓d ⍝ start and rest
s∊'()':(⊂0,⊂s),∇r ⍝ brackets: just the bracket
sb←∧\~('"'=r)∧'\'≠¯1⌽r ⍝ strings: up to first " not preceded by \
sd←(1⌽sd≠'"')/sd←sb/r ⍝ without escape characters
s='"':(⊂1,⊂sd),∇1↓(~sb)/r
atm←∧\~d∊wspace,'()"' ⍝ atom: up to next whitespace, () or "
(⊂2,⊂atm/d),∇(~atm)/d
}
 
⍝ build structure from tokens
build←{
⍺←⍬
0=≢⍵:⍺ ⍬ ⍝ empty input = done
typ tok←⊃⍵ ⍝ current token and type
rst←1↓⍵ ⍝ rest of tokens
tok≡,'(':(⍺,⊂0 l)∇r⊣l r←∇rst ⍝ open bracket: go down a level
tok≡,')':⍺ rst ⍝ close bracket: go up a level
typ=1:(⍺,⊂1 tok)∇rst ⍝ string: type 1
0≠≢n←num tok:(⍺,⊂2(,n))∇rst ⍝ number: type 2
(⍺,⊂3 tok)∇rst ⍝ symbol: type 3
}
 
⍝ check that a string was passed in
(''≢0↑⍵)∨1≠⍴⍴⍵:⎕SIGNAL⊂('EN'11)('Message' 'Input must be a char vector')
 
⍝ check that all strings are closed
quot←('"'=⍵)∧'\'≠¯1⌽⍵
0≠2|+/quot:⎕SIGNAL⊂('EN'11)('Message' 'Open string')
 
⍝ check that all brackets match (except those in strings)
nest←+\+⌿1 ¯1×[1]'()'∘.=(~2|+\quot)/⍵
(0≠¯1↑nest)∨0<.∨nest:⎕SIGNAL⊂('EN'11)('Message' 'Mismatched parentheses')
 
⊃build tok ⍵
}
 
pretty←{
⍝ Prettyprinter for parsed S-expressions
NL←⎕tc[2]
∊∇{
typ itm←⍵
typ=3:itm,NL ⍝ Atom
typ=2:(⍕itm),NL ⍝ Number
typ=1:('"',('"'⎕R'\\"'⊢itm),'"'),NL ⍝ String
typ=0:'(',NL,('^'⎕R' '⊢⍺⍺ itm),')',NL ⍝ List
}¨⍵
}
 
</syntaxhighlight>
 
=={{header|Arturo}}==
<syntaxhighlight lang="rebol">code: {
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
}
 
s: first to :block code
inspect.muted s
print as.code s</syntaxhighlight>
 
{{out}}
 
<pre>[ :inline
[ :inline
data :word
quoted data :string
123 :integer
4.5 :floating
]
[ :inline
data :word
[ :inline
! :symbol
@ :symbol
# :symbol
[ :inline
4.5 :floating
]
(more :string
data) :string
]
]
]
((data "quoted data" 123 4.5) (data (! @ # (4.5) "(more" "data)")))</pre>
 
=={{header|AutoHotkey}}==
<syntaxhighlight lang="autohotkey">S_Expressions(Str){
Str := RegExReplace(Str, "s)(?<![\\])"".*?[^\\]""(*SKIP)(*F)|((?<![\\])[)(]|\s)", "`n$0`n")
Str := RegExReplace(Str, "`am)^\s*\v+") , Cnt := 0
Res .= "`t"
return Res
}</syntaxhighlight>
Examples:<syntaxhighlight lang="autohotkey">Str =
(
((data da\(\)ta "quot\\ed data" 123 4.5)
("data" (!@# (4.5) "(mo\"re" "data)")))
)
MsgBox, 262144, , % S_Expressions(Str)</syntaxhighlight>
{{out}}
<pre>(
</pre>
 
=={{header|C}}==
<langsyntaxhighlight lang="c">#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
print_expr(x, 0);
return 0;
}</syntaxhighlight>
{{out}}<syntaxhighlight lang="text">input is:
((data da\(\)ta "quot\\ed data" 123 4.5)
("data" (!@# (4.5) "(mo\"re" "data)")))
)
)
)</syntaxhighlight>
 
=={{header|C sharp|C#}}==
Implementation of an S-expression serializer and deserializer in C# 6.0.

A Git repository with the code and tests can be found here: https://github.com/ichensky/SExpression/tree/rosettacode
 
<syntaxhighlight lang="csharp">
using System;
using System.Collections.Generic;
using System.Text;
 
public class SNode
{
private List<SNode> _items;
public string Name { get; set; }
public IReadOnlyCollection<SNode> Items { get { return _items.AsReadOnly(); } }
public SNode()
{
this._items = new List<SNode>();
}
public SNode(string name):this()
{
this.Name=name;
}
public void AddNode(SNode node)
{
this._items.Add(node);
}
}
 
public class SNodeFull : SNode
{
private bool _isLeaf;
public bool IsLeaf { get => _isLeaf; }
public SNodeFull(bool isLeaf) : base()
{
this._isLeaf = isLeaf;
}
 
public SNodeFull(string name, bool isLeaf) : base(name)
{
this._isLeaf = isLeaf;
}
 
public SNodeFull RootNode { get; set; }
 
public void AddNode(SNodeFull node)
{
base.AddNode(node);
node.RootNode = this;
}
}
 
</syntaxhighlight>
<syntaxhighlight lang="csharp">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
 
namespace SExpression
{
public partial class SExpression
{
public const string ErrorStrNotValidFormat = "Not valid format.";
}
public partial class SExpression : ISExpression
{
public String Serialize(SNode root)
{
if (root == null)
{
throw new ArgumentNullException();
}
var sb = new StringBuilder();
Serialize(root, sb);
return sb.ToString();
}
private void Serialize(SNode node, StringBuilder sb)
{
sb.Append('(');
 
if (node.Items.Count > 0)
{
int x = 0;
foreach (var item in node.Items)
{
if (x>0)
{
sb.Append(' ');
}
else
{
x++;
}
if (item.Items.Count > 0)
{
Serialize(item, sb);
}
else
{
SerializeItem(item, sb);
}
}
}
 
sb.Append(')');
}
private void SerializeItem(SNode node, StringBuilder sb)
{
if (node.Name == null)
{
sb.Append("()");
return;
}
node.Name = node.Name.Replace("\"", "\\\"");
if (node.Name.IndexOfAny(new char[] { ' ', '"', '(', ')' }) != -1 || node.Name == string.Empty)
{
sb.Append('"').Append(node.Name).Append('"');
return;
}
sb.Append(node.Name);
}
}
public partial class SExpression
{
public SNode Deserialize(string st)
{
if (st==null)
{
return null;
}
st = st.Trim();
if (string.IsNullOrEmpty(st))
{
return null;
}
 
var begin = st.IndexOf('(');
if (begin != 0)
{
throw new Exception();
}
var end = st.LastIndexOf(')');
if (end != st.Length - 1)
{
throw new Exception(ErrorStrNotValidFormat);
}
st = st.Remove(st.Length-1).Remove(0, 1).ToString();
var node = new SNodeFull(false);
Deserialize(ref st, node);
return node;
}
 
private void Deserialize(ref string st, SNodeFull root)
{
st = st.Trim();
if (string.IsNullOrEmpty(st))
{
return;
}
 
SNodeFull node = null;
SNodeFull r = root;
do
{
while (st[0] == ')')
{
st = st.Remove(0, 1).Trim();
if (st.Length==0)
{
return;
}
r = root.RootNode;
if (r==null)
{
throw new Exception(ErrorStrNotValidFormat);
}
}
node = DeserializeItem(ref st);
st = st.Trim();
 
r.AddNode(node);
if (!node.IsLeaf)
{
Deserialize(ref st,node);
}
}
while (st.Length > 0);
}
 
private SNodeFull DeserializeItem(ref string st)
{
if (st[0] == '(')
{
st = st.Remove(0, 1);
return new SNodeFull(false);
}
 
var x = 0;
var esc = 0;
for (int i = 0; i < st.Length; i++)
{
if (st[i] == '"')
{
if (esc == 0)
{
esc = 1;
}
else if(esc == 1 && (i> 0 && st[i - 1] == '\\'))
{
throw new Exception(ErrorStrNotValidFormat);
}
else
{
esc = 2;
break;
}
}
else if (esc==0 && " ()".Contains(st[i]))
{
break;
}
 
x++;
}
if (esc == 1)
{
throw new Exception(ErrorStrNotValidFormat);
}
 
var head = esc==0? st.Substring(0, x): st.Substring(1,x-1);
st = st.Remove(0, esc ==0 ? x: x + 2);
return new SNodeFull(head, true);
}
}
}
</syntaxhighlight>
<syntaxhighlight lang="csharp">
using System;
using System.Collections.Generic;
 
namespace SExpression.Test
{
class Program
{
static void Main(string[] args)
{
var str =
@"((data ""quoted data"" 123 4.5)
(data(!@# (4.5) ""(more"" ""data)"")))";
 
var se = new SExpression();
var node = se.Deserialize(str);
var result = se.Serialize(node);
Console.WriteLine(result);
}
}
}
</syntaxhighlight>
{{out}}
<pre>
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))
</pre>
 
=={{header|C++}}==
The function s_expr::parse parses an object from an input stream. An object may be a
quoted string, symbol (i.e. unquoted string), a number (integer or float) or a list
of objects. The "\" character is used for escaping special characters.
C++ 17 is required. As it stands, you can't really do anything with the parsed object
apart from writing it out, which fulfils this task's requirements. With some more work
this code might actually be useful.
<syntaxhighlight lang="cpp">#include <cctype>
#include <iomanip>
#include <iostream>
#include <list>
#include <memory>
#include <sstream>
#include <string>
#include <variant>
 
namespace s_expr {
 
enum class token_type { none, left_paren, right_paren, symbol, string, number };
enum class char_type { left_paren, right_paren, quote, escape, space, other };
enum class parse_state { init, quote, symbol };
 
struct token {
token_type type = token_type::none;
std::variant<std::string, double> data;
};
 
char_type get_char_type(char ch) {
switch (ch) {
case '(':
return char_type::left_paren;
case ')':
return char_type::right_paren;
case '"':
return char_type::quote;
case '\\':
return char_type::escape;
}
if (isspace(static_cast<unsigned char>(ch)))
return char_type::space;
return char_type::other;
}
 
bool parse_number(const std::string& str, token& tok) {
try {
size_t pos = 0;
double num = std::stod(str, &pos);
if (pos == str.size()) {
tok.type = token_type::number;
tok.data = num;
return true;
}
} catch (const std::exception&) {
}
return false;
}
 
bool get_token(std::istream& in, token& tok) {
char ch;
parse_state state = parse_state::init;
bool escape = false;
std::string str;
token_type type = token_type::none;
while (in.get(ch)) {
char_type ctype = get_char_type(ch);
if (escape) {
ctype = char_type::other;
escape = false;
} else if (ctype == char_type::escape) {
escape = true;
continue;
}
if (state == parse_state::quote) {
if (ctype == char_type::quote) {
type = token_type::string;
break;
}
else
str += ch;
} else if (state == parse_state::symbol) {
if (ctype == char_type::space)
break;
if (ctype != char_type::other) {
in.putback(ch);
break;
}
str += ch;
} else if (ctype == char_type::quote) {
state = parse_state::quote;
} else if (ctype == char_type::other) {
state = parse_state::symbol;
type = token_type::symbol;
str = ch;
} else if (ctype == char_type::left_paren) {
type = token_type::left_paren;
break;
} else if (ctype == char_type::right_paren) {
type = token_type::right_paren;
break;
}
}
if (type == token_type::none) {
if (state == parse_state::quote)
throw std::runtime_error("syntax error: missing quote");
return false;
}
tok.type = type;
if (type == token_type::string)
tok.data = str;
else if (type == token_type::symbol) {
if (!parse_number(str, tok))
tok.data = str;
}
return true;
}
 
void indent(std::ostream& out, int level) {
for (int i = 0; i < level; ++i)
out << " ";
}
 
class object {
public:
virtual ~object() {}
virtual void write(std::ostream&) const = 0;
virtual void write_indented(std::ostream& out, int level) const {
indent(out, level);
write(out);
}
};
 
class string : public object {
public:
explicit string(const std::string& str) : string_(str) {}
void write(std::ostream& out) const { out << std::quoted(string_); }
private:
std::string string_;
};
 
class symbol : public object {
public:
explicit symbol(const std::string& str) : string_(str) {}
void write(std::ostream& out) const {
for (char ch : string_) {
if (get_char_type(ch) != char_type::other)
out << '\\';
out << ch;
}
}
private:
std::string string_;
};
 
class number : public object {
public:
explicit number(double num) : number_(num) {}
void write(std::ostream& out) const { out << number_; }
private:
double number_;
};
 
class list : public object {
public:
void write(std::ostream& out) const;
void write_indented(std::ostream&, int) const;
void append(const std::shared_ptr<object>& ptr) {
list_.push_back(ptr);
}
private:
std::list<std::shared_ptr<object>> list_;
};
 
void list::write(std::ostream& out) const {
out << "(";
if (!list_.empty()) {
auto i = list_.begin();
(*i)->write(out);
while (++i != list_.end()) {
out << ' ';
(*i)->write(out);
}
}
out << ")";
}
 
void list::write_indented(std::ostream& out, int level) const {
indent(out, level);
out << "(\n";
if (!list_.empty()) {
for (auto i = list_.begin(); i != list_.end(); ++i) {
(*i)->write_indented(out, level + 1);
out << '\n';
}
}
indent(out, level);
out << ")";
}
 
class tokenizer {
public:
tokenizer(std::istream& in) : in_(in) {}
bool next() {
if (putback_) {
putback_ = false;
return true;
}
return get_token(in_, current_);
}
const token& current() const {
return current_;
}
void putback() {
putback_ = true;
}
private:
std::istream& in_;
bool putback_ = false;
token current_;
};
 
std::shared_ptr<object> parse(tokenizer&);
 
std::shared_ptr<list> parse_list(tokenizer& tok) {
std::shared_ptr<list> lst = std::make_shared<list>();
while (tok.next()) {
if (tok.current().type == token_type::right_paren)
return lst;
else
tok.putback();
lst->append(parse(tok));
}
throw std::runtime_error("syntax error: unclosed list");
}
 
std::shared_ptr<object> parse(tokenizer& tokenizer) {
if (!tokenizer.next())
return nullptr;
const token& tok = tokenizer.current();
switch (tok.type) {
case token_type::string:
return std::make_shared<string>(std::get<std::string>(tok.data));
case token_type::symbol:
return std::make_shared<symbol>(std::get<std::string>(tok.data));
case token_type::number:
return std::make_shared<number>(std::get<double>(tok.data));
case token_type::left_paren:
return parse_list(tokenizer);
default:
break;
}
throw std::runtime_error("syntax error: unexpected token");
}
 
} // namespace s_expr
 
void parse_string(const std::string& str) {
std::istringstream in(str);
s_expr::tokenizer tokenizer(in);
auto exp = s_expr::parse(tokenizer);
if (exp != nullptr) {
exp->write_indented(std::cout, 0);
std::cout << '\n';
}
}
 
int main(int argc, char** argv) {
std::string test_string =
"((data \"quoted data\" 123 4.5)\n"
" (data (!@# (4.5) \"(more\" \"data)\")))";
if (argc == 2)
test_string = argv[1];
try {
parse_string(test_string);
} catch (const std::exception& ex) {
std::cerr << ex.what() << '\n';
}
return 0;
}</syntaxhighlight>
 
{{out}}
<pre>
(
  (
    data
    "quoted data"
    123
    4.5
  )
  (
    data
    (
      !@#
      (
        4.5
      )
      "(more"
      "data)"
    )
  )
)
</pre>
 
=={{header|Ceylon}}==
<langsyntaxhighlight lang="ceylon">class Symbol(symbol) {
shared String symbol;
string => symbol;
prettyPrint(tree);
}
</syntaxhighlight>
{{out}}
<pre>
</pre>
=={{header|CoffeeScript}}==
{{improve|CoffeeScript|This solution does not reproduce unquoted strings as per task description}}
<langsyntaxhighlight lang="coffeescript">
# This code works with Lisp-like s-expressions.
#
console.log "output:\n#{pp output}\n"
console.log "round trip:\n#{sexp output}\n"
</syntaxhighlight>
{{out}}
<syntaxhighlight lang="text">
> coffee sexp.coffee
input:
round trip:
(("data" "quoted data with escaped \"" 123 4.5 "14") ("data" ("!@#" (4.5) "(more" "data)")))
</syntaxhighlight>
 
=={{header|Common Lisp}}==
Unfortunately, our pointy-haired boss has asked you to write a parser for an unusual s-expression syntax that uses square brackets instead of parenthesis. In most programming languages, this would necessitate writing an entire parser. Fortunately, the Common Lisp reader can be modified through the use of macro-characters to accomplish this task. When the reader parses a macro-character token, a function associated with the macro-character is called. As evidenced below, modifying the behavior of the Lisp reader by setting macro-character functions to handle additional sytax requires far less work than writing a complete parser from scratch.
 
<langsyntaxhighlight lang="lisp">(defun lsquare-reader (stream char)
(declare (ignore char))
(read-delimited-list #\] stream t))
(set-macro-character #\[ #'lsquare-reader) ;;Call the lsquare-reader function when a '[' token is parsed
(set-macro-character #\] (get-macro-character #\) nil)) ;;Do the same thing as ')' when a ']' token is parsed</syntaxhighlight>
Unit test code:
<langsyntaxhighlight lang="lisp">;;A list of unit tests. Each test is a cons in which the car (left side) contains the
;;test string and the cdr (right side) the expected result of reading the S-Exp.
(setf unit-tests
(dolist (test unit-tests)
(format t "String: ~23s Expected: ~23s Actual: ~s~%"
(car test) (cdr test) (read-from-string (car test)))))</syntaxhighlight>
{{out| Unit test output}}
<pre>CL-USER> (run-tests)
</pre>
===Writing S-Expressions===
The next step in this task is to write a standard Lisp s-expression in the square bracket notation.
<langsyntaxhighlight lang="lisp">(defun write-sexp (sexp)
"Writes a Lisp s-expression in square bracket notation."
(labels ((parse (sexp)
(subseq str 0 last-char)
str)))))))
(concatenate 'string "[" (fix-spacing (parse sexp)) "]")))</syntaxhighlight>
Unit test code:
<langsyntaxhighlight lang="lisp">(setf unit-tests '(((1 2) (3 4)) (1 2 3 4) ("ab(cd" "mn)op")
(1 (2 (3 (4)))) ((1) (2) (3)) ()))
 
(dolist (test unit-tests)
(format t "Before: ~18s After: ~s~%"
test (write-sexp test))))</syntaxhighlight>
{{out|Unit test output}}
<pre>CL-USER> (run-tests)
CL-USER> </pre>
 
=={{header|Cowgol}}==
This parser allows both the use of quotes in symbols (e.g. <code>abc"def</code>),
as well as escaped quotes in quoted strings (e.g. <code>"Hello \" world"</code>).
Integer numbers are recognized and stored as such, but since Cowgol does not have
a native floating point type, floating point numbers are not.

<syntaxhighlight lang="cowgol">include "cowgol.coh";
<lang csharp>
include "strings.coh";
using System;
include "malloc.coh";
using System.Collections.Generic;
using System.Text;
 
const MAXDEPTH := 256; # Maximum depth (used for stack sizes)
public class SNode
const MAXSTR := 256; # Maximum string length
{
private List<SNode> _items;
public string Name { get; set; }
public IReadOnlyCollection<SNode> Items { get { return _items.AsReadOnly(); } }
public SNode()
{
this._items = new List<SNode>();
}
public SNode(string name):this()
{
this.Name=name;
}
public void AddNode(SNode node)
{
this._items.Add(node);
}
}
 
# Type markers
public class SNodeFull : SNode
const T_ATOM := 1;
{
const T_STRING := 2;
private bool _isLeaf;
const T_NUMBER := 3;
public bool IsLeaf { get => _isLeaf; }
const T_LIST := 4;
public SNodeFull(bool isLeaf) : base()
{
this._isLeaf = isLeaf;
}
 
# Value union
public SNodeFull(string name, bool isLeaf) : base(name)
record SVal is
{
number @at(0): int32;
this._isLeaf = isLeaf;
string @at(0): [uint8]; # also used for atoms
}
list @at(0): [SExp];
end record;
 
# Holds a linked list of items
public SNodeFull RootNode { get; set; }
record SExp is
type: uint8;
next: [SExp];
val: SVal;
end record;
 
# Free an S-Expression
public void AddNode(SNodeFull node)
sub FreeSExp(exp: [SExp]) is
{
var stack: [SExp][MAXDEPTH];
base.AddNode(node);
stack[0] := exp;
node.RootNode = this;
var sp: @indexof stack }:= 1;
}
while sp > 0 loop
sp := sp - 1;
exp := stack[sp];
while exp != 0 as [SExp] loop
var next := exp.next;
case exp.type is
when T_ATOM:
Free(exp.val.string);
when T_STRING:
Free(exp.val.string);
when T_LIST:
stack[sp] := exp.val.list;
sp := sp + 1;
end case;
Free(exp as [uint8]);
exp := next;
end loop;
end loop;
end sub;
 
# Build an S-Expression
</lang>
sub ParseSExp(in: [uint8]): (out: [SExp]) is
<lang csharp>
out := 0 as [SExp];
using System;
using System.Collections.Generic;
sub SkipSpace() is
using System.Linq;
while ([in] != 0) and ([in] <= 32) loop
using System.Text;
in := @next in;
end loop;
end sub;
sub AtomEnd(): (space: [uint8]) is
space := in;
while ([space] > 32)
and ([space] != '(')
and ([space] != ')') loop
space := @next space;
end loop;
end sub;
record Stk is
start: [SExp];
cur: [SExp];
end record;
 
var strbuf: uint8[MAXSTR];
namespace SExpression
var stridx: @indexof strbuf := 0;
{
var item: [SExp];
public partial class SExpression
var stack: Stk[MAXDEPTH];
{
var sp: @indexof stack := 0;
public const string ErrorStrNotValidFormat = "Not valid format.";
stack[0].start := 0 as [SExp];
}
stack[0].cur := 0 as [SExp];
public partial class SExpression : ISExpression
{
sub Store(item: [SExp]) is
public String Serialize(SNode root)
if stack[sp].start == 0 as [SExp] then
{
ifstack[sp].start (root =:= null)item;
end {if;
if stack[sp].cur != 0 as [SExp] then
throw new ArgumentNullException();
}stack[sp].cur.next := item;
end var sb = new StringBuilder()if;
stack[sp].cur := Serialize(root, sb)item;
end sub;
return sb.ToString();
}
# called on error to clean up memory
private void Serialize(SNode node, StringBuilder sb)
sub FreeAll() {is
loop sb.Append('(');
FreeSExp(stack[sp].start);
 
if (node.Itemsstack[sp].Countstart >:= 0);
{if sp == 0 then break; end if;
sp := sp - int x = 01;
end loop;
foreach (var item in node.Items)
end {sub;
if (x>0)
loop
{
sb.AppendSkipSpace(' ');
case [in] }is
when 0: break;
when '"':
var escape: uint8 := 0;
stridx := 0;
loop
in := @next in;
if [in] == 0 then break;
elseif escape == 1 then
strbuf[stridx] := [in];
stridx := stridx + 1;
escape := 0;
elseif [in] == '\\' then escape := 1;
elseif [in] == '"' then break;
else
{ strbuf[stridx] := [in];
x+stridx := stridx + 1;
}end if;
end if (item.Items.Count > 0)loop;
{
if [in] == 0 Serialize(item, sb);then
}# missing _"_
elseFreeAll();
{return;
end SerializeItem(item, sb)if;
in := @next }in;
}strbuf[stridx] := 0;
} stridx := stridx + 1;
item := Alloc(@bytesof SExp) as [SExp];
item.type := T_STRING;
item.val.string := Alloc(stridx as intptr);
CopyString(&strbuf[0], item.val.string);
Store(item);
when '(':
in := @next in;
sp := sp + 1;
stack[sp].start := 0 as [SExp];
stack[sp].cur := 0 as [SExp];
when ')':
in := @next in;
if sp == 0 then
# stack underflow, error
FreeAll();
return;
else
item := Alloc(@bytesof SExp) as [SExp];
item.type := T_LIST;
item.val.list := stack[sp].start;
sp := sp - 1;
Store(item);
end if;
when else:
var aend := AtomEnd();
item := Alloc(@bytesof SExp) as [SExp];
# if this is a valid integer number then store as number
var ptr: [uint8];
(item.val.number, ptr) := AToI(in);
if ptr == aend then
# a number was parsed and the whole atom consumed
item.type := T_NUMBER;
else
# not a valid integer number, store as atom
item.type := T_ATOM;
var length := aend - in;
item.val.string := Alloc(length + 1);
MemCopy(in, length, item.val.string);
[item.val.string + length] := 0;
end if;
in := aend;
Store(item);
end case;
end loop;
if sp != 0 then
# unterminated list!
FreeAll();
return;
else
# return start of list
out := stack[0].start;
end if;
end sub;
 
# Prettyprint an S-Expression with types
sb.Append(')');
sub prettyprint(sexp: [SExp]) is
}
sub PrintNum(n: int32) is
private void SerializeItem(SNode node, StringBuilder sb)
{var buf: uint8[16];
[IToA(n, 10, &buf[0])] if (node.Name =:= null)0;
{print(&buf[0]);
end sub;
sb.Append("()");
return;
sub PrintQuoteStr(s: [uint8]) is
}
node.Name = node.Name.Replaceprint_char('"\"", "\\\""');
while [s] != 0 loop
if (node.Name.IndexOfAny(new char[] { ' ', '"', '(', ')' }) != -1 || node.Name == string.Empty)
{if [s] == '"' or [s] == '\\' then
sb.Appendprint_char('"').Append(node.Name).Append('"\\');
end returnif;
}print_char([s]);
sb.Append(node.Name)s := @next s;
}end loop;
print_char('"');
}
end sub;
public partial class SExpression
{
var stack: [SExp][MAXDEPTH];
public SNode Deserialize(string st)
var sp: @indexof stack {:= 1;
stack[0] := sexp;
if (st==null)
{
sub Indent(n: @indexof stack) is
return null;
while n > 0 }loop
stprint(" = st.Trim( ");
ifn (string.IsNullOrEmpty(st)):= n - 1;
end {loop;
end sub;
return null;
}
loop
sp := sp - 1;
while stack[sp] != 0 as [SExp] loop
Indent(sp);
case stack[sp].type is
when T_ATOM:
print(stack[sp].val.string);
print(" :: Atom");
stack[sp] := stack[sp].next;
when T_STRING:
PrintQuoteStr(stack[sp].val.string);
print(" :: String");
stack[sp] := stack[sp].next;
when T_NUMBER:
PrintNum(stack[sp].val.number);
print(" :: Number");
stack[sp] := stack[sp].next;
when T_LIST:
print_char('(');
sp := sp + 1;
stack[sp] := stack[sp-1].val.list;
stack[sp-1] := stack[sp-1].next;
end case;
print_nl();
end loop;
if sp == 0 then
break;
end if;
Indent(sp-1);
print_char(')');
print_nl();
end loop;
end sub;
 
var str := "((data \"quoted data\" 123 4.5)\n (data (!@# (4.5) \"(more\" \"data)\")))";
var begin = st.IndexOf('(');
if (begin != 0)
{
throw new Exception();
}
var end = st.LastIndexOf(')');
if (end != st.Length - 1)
{
throw new Exception(ErrorStrNotValidFormat);
}
st = st.Remove(st.Length-1).Remove(0, 1).ToString();
var node = new SNodeFull(false);
Deserialize(ref st, node);
return node;
}
 
print("Input:\n");
private void Deserialize(ref string st, SNodeFull root)
print(str);
{
print_nl();
st = st.Trim();
if (string.IsNullOrEmpty(st))
{
return;
}
 
print("Parsed:\n");
SNodeFull node = null;
prettyprint(ParseSExp(str));
SNodeFull r = root;
print_nl();</syntaxhighlight>
{{out}}
 
<pre>Input:
((data "quoted data" 123 4.5)
 (data (!@# (4.5) "(more" "data)")))
Parsed:
(
 (
  data :: Atom
  "quoted data" :: String
  123 :: Number
  4.5 :: Atom
 )
 (
  data :: Atom
  (
   !@# :: Atom
   (
    4.5 :: Atom
   )
   "(more" :: String
   "data)" :: String
  )
 )
)</pre>
 
=={{header|D}}==
<langsyntaxhighlight lang="d">import std.stdio, std.conv, std.algorithm, std.variant, std.uni,
std.functional, std.string;
 
"Printed: ".write;
pTest.writeSexp;
}</syntaxhighlight>
{{out}}
<pre>Parsed: [[data, quoted data, 123, 4.5], [data, [!@#, [4.5], (more, data)]]]
</pre>
=={{header|EchoLisp}}==
The '''(read-from-string input-string)''' function parses a string into an s-expression, which is the native representation of program/data in EchoLisp and the majority of Lisps.
<langsyntaxhighlight lang="lisp">
(define input-string #'((data "quoted data" 123 4.5)\n(data (!@# (4.5) "(more" "data)")))'#)
 
(first(rest s-expr))
→ (data (!@# (4.5) "(more" "data)"))
</syntaxhighlight>
 
=={{header|F_Sharp|F#}}==
 
Implementation of an S-expression parser in F# 4.7.

Visual Studio Community 2019 Edition - Version 16.4.5.
 
Learn more about F# at https://fsharp.org
 
 
The file <code>SExpr.fs</code> containing the implementation:
 
<syntaxhighlight lang="fsharp">
module SExpr
(* This module is a very simple port of the OCaml version to F# (F-Sharp) *)
(* The original OCaml setatment is comment out and the F# statement(s) follow *)
(* Port performed by Bob Elward 23 Feb 2020 *)
 
(* The .Net standard would use "+" and not "^" for string concatenation *)
(* I kept the "^" to be compatable with the "ml" standard *)
(* The line below eliminates the warning/suggestion to use "+" *)
#nowarn "62"
 
(** This module is a very simple parsing library for S-expressions. *)
(* Copyright (C) 2009 Florent Monnier, released under MIT license. *)
(* modified to match the task description *)
 
(* Code obtained from: https://rosettacode.org/wiki/S-Expressions#OCaml *)
(* Note: The type below defines the grammar for this S-Expression (S-Expr).
An S-Expr is either an Atom or an S-Expr
*)
 
open System.Text
open System.IO
open System
 
type sexpr = Atom of string | Expr of sexpr list
 
type state =
| Parse_root of sexpr list
| Parse_content of sexpr list
| Parse_word of StringBuilder * sexpr list
| Parse_string of bool * StringBuilder * sexpr list
 
let parse pop_char =
let rec aux st =
match pop_char() with
| None ->
begin match st with
| Parse_root sl -> (List.rev sl)
| Parse_content _
| Parse_word _
| Parse_string _ ->
failwith "Parsing error: content not closed by parenthesis"
end
| Some c ->
match c with
| '(' ->
begin match st with
| Parse_root sl ->
let this = aux(Parse_content []) in
aux(Parse_root((Expr this)::sl))
| Parse_content sl ->
let this = aux(Parse_content []) in
aux(Parse_content((Expr this)::sl))
| Parse_word(w, sl) ->
let this = aux(Parse_content []) in
(* aux(Parse_content((Expr this)::Atom(Buffer.contents w)::sl)) *)
aux(Parse_content((Expr this)::Atom(w.ToString())::sl))
| Parse_string(_, s, sl) ->
//Buffer.add_char s c;
s.Append(c) |> ignore;
aux(Parse_string(false, s, sl))
end
| ')' ->
begin match st with
| Parse_root sl ->
failwith "Parsing error: closing parenthesis without openning"
| Parse_content sl -> (List.rev sl)
(* | Parse_word(w, sl) -> List.rev(Atom(Buffer.contents w)::sl) *)
| Parse_word(w, sl) -> List.rev(Atom(w.ToString())::sl)
| Parse_string(_, s, sl) ->
(* Buffer.add_char s c; *)
s.Append(c) |> ignore;
aux(Parse_string(false, s, sl))
end
| ' ' | '\n' | '\r' | '\t' ->
begin match st with
| Parse_root sl -> aux(Parse_root sl)
| Parse_content sl -> aux(Parse_content sl)
(* | Parse_word(w, sl) -> aux(Parse_content(Atom(Buffer.contents w)::sl)) *)
| Parse_word(w, sl) -> aux(Parse_content(Atom(w.ToString())::sl))
| Parse_string(_, s, sl) ->
//Buffer.add_char s c;
s.Append(c) |> ignore;
aux(Parse_string(false, s, sl))
end
| '"' ->
(* '"' *)
begin match st with
| Parse_root _ -> failwith "Parse error: double quote at root level"
| Parse_content sl ->
(* let s = Buffer.create 74 in *)
let s = StringBuilder(74) in
aux(Parse_string(false, s, sl))
| Parse_word(w, sl) ->
(* let s = Buffer.create 74 in *)
let s = StringBuilder(74) in
(* aux(Parse_string(false, s, Atom(Buffer.contents w)::sl)) *)
aux(Parse_string(false, s, Atom(w.ToString())::sl))
| Parse_string(true, s, sl) ->
(* Buffer.add_char s c; *)
s.Append(c) |> ignore;
aux(Parse_string(false, s, sl))
| Parse_string(false, s, sl) ->
(* aux(Parse_content(Atom(Buffer.contents s)::sl)) *)
aux(Parse_content(Atom(s.ToString())::sl))
end
| '\\' ->
begin match st with
| Parse_string(true, s, sl) ->
(* Buffer.add_char s c; *)
s.Append(c) |> ignore;
aux(Parse_string(false, s, sl))
| Parse_string(false, s, sl) ->
aux(Parse_string(true, s, sl))
| _ ->
failwith "Parsing error: escape character in wrong place"
end
| _ ->
begin match st with
| Parse_root _ ->
failwith(Printf.sprintf "Parsing error: char '%c' at root level" c)
| Parse_content sl ->
(* let w = Buffer.create 16 in *)
let w = StringBuilder(16) in
(* Buffer.add_char w c; *)
w.Append(c) |> ignore;
aux(Parse_word(w, sl))
| Parse_word(w, sl) ->
(* Buffer.add_char w c; *)
w.Append(c) |> ignore;
aux(Parse_word(w, sl))
| Parse_string(_, s, sl) ->
(* Buffer.add_char s c; *)
s.Append(c) |> ignore;
aux(Parse_string(false, s, sl))
end
in
aux (Parse_root [])
 
 
let string_pop_char str =
let len = String.length str in
let i = ref(-1) in
(function () -> incr i; if !i >= len then None else Some(str.[!i]))
 
 
let parse_string str =
parse (string_pop_char str)
 
(*
let ic_pop_char ic =
(function () ->
try Some(input_char ic)
with End_of_file -> (None))
*)
 
let ic_pop_char (ic:TextReader) =
(function () -> try Some(Convert.ToChar(ic.Read()))
with _End_of_file -> (None)
)
 
 
let parse_ic ic =
parse (ic_pop_char ic)
 
 
let parse_file filename =
(* let ic = open_in filename in *)
let ic = File.OpenText filename in
let res = parse_ic ic in
(* close_in ic; *)
ic.Close();
(res)
 
 
let quote s =
"\"" ^ s ^ "\""
 
let needs_quote s =
(* List.exists (String.contains s) [' '; '\n'; '\r'; '\t'; '('; ')'] *)
List.exists (fun elem -> (String.exists (fun c -> c = elem) s)) [' '; '\n'; '\r'; '\t'; '('; ')']
 
let protect s =
(* There is no need to "escape" .Net strings the framework takes care of this *)
(* let s = String.escaped s in *)
if needs_quote s then quote s else s
 
 
let string_of_sexpr s =
let rec aux acc = function
| (Atom tag)::tl -> aux ((protect tag)::acc) tl
| (Expr e)::tl ->
let s =
"(" ^
(String.concat " " (aux [] e))
^ ")"
in
aux (s::acc) tl
| [] -> (List.rev acc)
in
String.concat " " (aux [] s)
 
 
let print_sexpr s =
(* print_endline (string_of_sexpr s) *)
printfn "%s" (string_of_sexpr s)
 
 
let string_of_sexpr_indent s =
let rec aux i acc = function
| (Atom tag)::tl -> aux i ((protect tag)::acc) tl
| (Expr e)::tl ->
let s =
(*
"\n" ^ (String.make i ' ') ^ "(" ^
(String.concat " " (aux (succ i) [] e))
^ ")"
*)
"\n" ^ (String.replicate i " ") ^ "(" ^
(String.concat " " (aux (i + 1) [] e))
^ ")"
in
aux i (s::acc) tl
| [] -> (List.rev acc)
in
String.concat "\n" (aux 0 [] s)
 
 
let print_sexpr_indent s =
(* print_endline (string_of_sexpr_indent s) *)
printfn "%s" (string_of_sexpr_indent s)
</syntaxhighlight>
 
 
The file <code>Program.fs</code> containing the main module to
call the S-Expression parser and printer. Two options are shown:
Read the experession from a file of preset it in the code.
 
<syntaxhighlight lang="fsharp">
module Program
(* Learn more about F# at https://fsharp.org *)
 
open System
 
[<EntryPoint>]
let main argv =
let sexpr =
(* Data from file supplied at runtime or a preset string? *)
if argv.Length > 0 then
(* Data from file supplied at runtime *)
begin
(* Get the file to parse *)
let name = argv.[0] in
(* parse the program file *)
SExpr.parse_file name
end
else
(* Data from a preset string *)
begin
(* preset the string *)
let data= "((data \"quoted data\" 123 4.5) (data (!@# (4.5) \"(more\" \"data)\")))" in
 
(* parse the program file *)
SExpr.parse_string data
end
(* Print the parsed program token list *)
(printf "\nSExpr: \n");
SExpr.print_sexpr sexpr;
(printf "\nSExpr - Indented: \n");
SExpr.print_sexpr_indent sexpr;
(* return an integer exit code *)
0
</syntaxhighlight>
 
{{out}}
<pre>
 
SExpr:
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))
 
SExpr - Indented:
 
(
(data "quoted data" 123 4.5)
(data
(!@#
(4.5) "(more" "data)")))
</pre>
 
=={{header|Factor}}==
Factor's nested sequences are close enough to s-expressions that in most cases we can simply <code>eval</code> s-expression strings after some minor whitespace/bracket/parenthesis transformations. However, if we wish to support symbols, this approach becomes complicated because symbols need to be declared before use. This means we need to go into the string and identify them, so we may as well parse the s-expression properly while we're there.
 
We have a nice tool at our disposal for doing this. In its standard library, Factor contains a domain-specific language for defining [[wp:extended Backus-Naur form|extended Backus-Naur form]] (EBNF) grammars. EBNF is a convenient, declarative way to describe different parts of a grammar where later rules build on earlier ones until the final rule defines the entire grammar. Upon calling the word defined by <code>EBNF:</code>, an input string will be tokenized according to the declared rules and stored in an abstract syntax tree.
 
To get an idea of how this works, look at the final rule. It declares that an s-expression is any number of objects (comprised of numbers, floats, strings, and symbols) and s-expressions (the rule is recursive, allowing for nested s-expressions) surrounded by parenthesis which are in turn surrounded by any amount of whitespace. This implementation of EBNF allows us to define actions: the quotation after the <code>=></code> is called on the rule token just before being added to the abstract syntax tree. This is convenient for our use case where we need to parse different types of objects into our sequence structure.
 
Factor has a comprehensive prettyprinter which can print any Factor object in a readable way. Not only can we leverage it to easily print our native data structure, but we can also call <code>unparse</code> to convert it to a string. This leaves us with a string reminiscent of the original input, and we are able to take it the rest of the way with two simple regular expressions.
 
<syntaxhighlight lang="factor">USING: formatting kernel math.parser multiline peg peg.ebnf
regexp sequences prettyprint words ;
IN: rosetta-code.s-expressions
 
STRING: input
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
;
 
EBNF: sexp>seq [=[
ws = [\n\t\r ]* => [[ drop ignore ]]
digits = [0-9]+
number = digits => [[ string>number ]]
float = digits:a "." digits:b => [[ a b "." glue string>number ]]
string = '"'~ [^"]* '"'~ => [[ "" like ]]
symbol = [!#-'*-~]+ => [[ "" like <uninterned-word> ]]
object = ws ( float | number | string | symbol ) ws
sexp = ws "("~ ( object | sexp )* ")"~ ws => [[ { } like ]]
]=]
 
: seq>sexp ( seq -- str )
unparse R/ {\s+/ "(" R/ \s+}/ ")" [ re-replace ] 2bi@ ;
input [ "Input:\n%s\n\n" printf ] [
sexp>seq dup seq>sexp
"Native:\n%u\n\nRound trip:\n%s\n" printf
] bi</syntaxhighlight>
{{out}}
<pre>
Input:
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
 
Native:
{
{ data "quoted data" 123 4.5 }
{ data { !@# { 4.5 } "(more" "data)" } }
}
 
Round trip:
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
</pre>
 
=={{header|Go}}==
<langsyntaxhighlight lang="go">package main
 
import (
fmt.Println(s.i)
}
}</syntaxhighlight>
{{out}}
<pre>
</pre>
 
=={{header|Haskell}}==
<langsyntaxhighlight lang="haskell">import qualified Data.Functor.Identity as F
import qualified Text.Parsec.Prim as Prim
import Text.Parsec ((<|>), (<?>), many, many1, char, try, parse, sepBy, choice, between)
import Text.Parsec
((<|>), (<?>), many, many1, char, try, parse, sepBy, choice,
between)
import Text.Parsec.Token
(integer, float, whiteSpace, stringLiteral, makeTokenParser)
import Text.Parsec.Char (noneOf)
import Text.Parsec.Token (integer, float, whiteSpace, stringLiteral, makeTokenParser)
import Text.Parsec.Language (haskell)
 
data Val = Int Integer
= Int Integer
| Float Double
| StringFloat StringDouble
| SymbolString String
| Symbol String
| List [Val] deriving (Eq, Show)
| List [Val]
deriving (Eq, Show)
 
tProg :: Prim.ParsecT String a F.Identity [Val]
tProg = many tExpr <?> "program"
where
where tExpr = between ws ws (tList <|> tAtom) <?> "expression"
tExpr = between ws ws (tList <|> tAtom) <?> "expression"
ws = whiteSpace haskell
ws = whiteSpace haskell
tAtom = (try (Float <$> float haskell) <?> "floating point number")
tAtom =
<|> (try (Int <$> integer haskell) <?> "integer")
<|>(try (StringFloat <$> stringLiteralfloat haskell) <?> "stringfloating point number") <|>
<|>(try (SymbolInt <$> many1integer (noneOf "()\"\t\n\r"haskell) <?> "symbolinteger") <|>
(String <$> stringLiteral haskell <?> "atomic expressionstring") <|>
tList = List(Symbol <$> betweenmany1 (charnoneOf '"(')\"\t\n\r (char ')'") (many<?> tExpr"symbol") <?> "list"
"atomic expression"
tList = List <$> between (char '(') (char ')') (many tExpr) <?> "list"
 
p :: String -> IO ()
p = either print (putStrLn . unwords . map show) . parse tProg ""
 
main :: IO ()
main = do
let expr =
let expr = "((data \"quoted data\" 123 4.5)\n (data (!@# (4.5) \"(more\" \"data)\")))"
"((data \"quoted data\" 123 4.5)\n (data (!@# (4.5) \"(more\" \"data)\")))"
putStrLn ("The input:\n" ++ expr ++ "\nParsed as:")
putStrLn ("The input:\n" ++ expr ++ "\n\nParsed as:")
p expr</lang>
p expr</syntaxhighlight>
{{Out}}
<pre>The input:
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
 
Parsed as:
List [List [Symbol "data",String "quoted data",Int 123,Float 4.5],List [Symbol "data",List [Symbol "!@#",List [Float 4.5],String "(more",String "data)"]]]</pre>
 
 
Or, parsing by hand (rather than with a parser combinator library) and printing a parse tree diagram:
<syntaxhighlight lang="haskell">{-# LANGUAGE TupleSections #-}
import Data.Bifunctor (bimap)
import Data.List (mapAccumL)
import Data.List.Split (splitOn)
import Data.Maybe (catMaybes, fromMaybe, listToMaybe)
import Data.Tree (Forest, Tree (..), drawForest)
------------------------ DATA TYPE -----------------------
data Val
= Int Integer
| Float Double
| String String
| Symbol String
| List [Val]
deriving (Eq, Show, Read)
instance Semigroup Val where
List a <> List b = List (a <> b)
instance Monoid Val where
mempty = List []
--------------------------- MAIN -------------------------
main :: IO ()
main = do
let expr =
unlines
[ "((data \"quoted data\" 123 4.5)",
" (data (!@# (4.5) \"(more\" \"data)\")))"
]
parse = fst (parseExpr (tokenized expr))
putStrLn $ treeDiagram $ forestFromVal parse
putStrLn "Serialized from the parse tree:\n"
putStrLn $ litVal parse
------------------- S-EXPRESSION PARSER ------------------
parseExpr :: [String] -> (Val, [String])
parseExpr = until finished parseToken . (mempty,)
finished :: (Val, [String]) -> Bool
finished (_, []) = True
finished (_, token : _) = ")" == token
parseToken :: (Val, [String]) -> (Val, [String])
parseToken (v, "(" : rest) =
bimap
((v <>) . List . return)
tail
(parseExpr rest)
parseToken (v, ")" : rest) = (v, rest)
parseToken (v, t : rest) = (v <> List [atom t], rest)
----------------------- TOKEN PARSER ---------------------
atom :: String -> Val
atom [] = mempty
atom s@('"' : _) =
fromMaybe mempty (maybeRead ("String " <> s))
atom s =
headDef (Symbol s) $
catMaybes $
maybeRead . (<> (' ' : s)) <$> ["Int", "Float"]
maybeRead :: String -> Maybe Val
maybeRead = fmap fst . listToMaybe . reads
----------------------- TOKENIZATION ---------------------
tokenized :: String -> [String]
tokenized s = quoteTokens '"' s >>= go
where
go [] = []
go token@('"' : _) = [token]
go s = words $ spacedBrackets s
quoteTokens :: Char -> String -> [String]
quoteTokens q s = snd $ mapAccumL go False (splitOn [q] s)
where
go b s
| b = (False, '"' : s <> "\"")
| otherwise = (True, s)
spacedBrackets :: String -> String
spacedBrackets [] = []
spacedBrackets (c : cs)
| c `elem` "()" = ' ' : c : " " <> spacedBrackets cs
| otherwise = c : spacedBrackets cs
------------------------- DIAGRAMS -----------------------
treeDiagram :: Forest Val -> String
treeDiagram = drawForest . fmap (fmap show)
forestFromVal :: Val -> Forest Val
forestFromVal (List xs) = treeFromVal <$> xs
treeFromVal :: Val -> Tree Val
treeFromVal (List xs) =
Node (Symbol "List") (treeFromVal <$> xs)
treeFromVal v = Node v []
---------------------- SERIALISATION ---------------------
litVal (Symbol x) = x
litVal (Int x) = show x
litVal (Float x) = show x
litVal (String x) = '"' : x <> "\""
litVal (List [List xs]) = litVal (List xs)
litVal (List xs) = '(' : (unwords (litVal <$> xs) <> ")")
------------------------- GENERIC ------------------------
headDef :: a -> [a] -> a
headDef d [] = d
headDef _ (x : _) = x</syntaxhighlight>
{{Out}}
<pre>Symbol "List"
|
+- Symbol "List"
| |
| +- Symbol "data"
| |
| +- String "quoted data"
| |
| +- Int 123
| |
| `- Float 4.5
|
`- Symbol "List"
|
+- Symbol "data"
|
`- Symbol "List"
|
+- Symbol "!@#"
|
+- Symbol "List"
| |
| `- Float 4.5
|
+- String "(more"
|
`- String "data)"
 
 
Serialized from the parse tree:
 
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))</pre>
 
=={{header|Icon}} and {{header|Unicon}}==
The example accepts both single and double quotes. <br>
Single quotes were used instead of double quotes in the input.
<langsyntaxhighlight Iconlang="icon">link ximage
 
procedure main()
}
return T
end</syntaxhighlight>
 
{{libheader|Icon Programming Library}}

=={{header|J}}==
This implementation does not support escape characters. If escape characters were added, we would need additional support in the tokenizer (an extra character class, and in the state table an extra column and two extra rows, or almost double the number of state transitions: 35 instead of 20), and additional support in the data language (unfmt would need to strip out escape characters and fmt would need to insert escape characters -- so each of these routines would also perhaps double in size.) And that's a lot of bulk for serialize/deserialize mechanism which, by design, cannot represent frequently used data elements (such as matrices and gerunds).
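
As an aside (illustrative only, in Python rather than J, and not part of the J entry), the kind of state-table growth described above can be sketched with a small table-driven scanner. This simplified example uses hypothetical names and adds one escape character class plus one "just saw a backslash" state to a quote scanner; the J table discussed above would grow more because it also tracks parenthesis, whitespace, and word-forming states.

<syntaxhighlight lang="python"># Character classes: 0 paren, 1 quote, 2 escape, 3 everything else
def char_class(c):
    return {'(': 0, ')': 0, '"': 1, '\\': 2}.get(c, 3)

# States: 0 outside a string, 1 inside a string, 2 inside a string after '\'
# NEXT[state][class] -> next state.  The escape column and the escaped-state
# row are the additions that escape support costs over a quote-only scanner.
NEXT = [
    # paren quote escape other
    [0,     1,    0,     0],   # 0: outside a string
    [1,     0,    2,     1],   # 1: inside a string
    [1,     1,    1,     1],   # 2: escaped: take the next character literally
]

def string_spans(text):
    """Yield (start, end) slices of quoted strings, honouring backslash escapes."""
    state, start = 0, None
    for i, c in enumerate(text):
        new = NEXT[state][char_class(c)]
        if state == 0 and new == 1:
            start = i
        elif state != 0 and new == 0:
            yield (start, i + 1)
        state = new

print(list(string_spans(r'(a "b \" c" d)')))   # [(3, 11)]</syntaxhighlight>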
 
<langsyntaxhighlight lang="j">NB. character classes: 0: paren, 1: quote, 2: whitespace, 3: wordforming (default)
chrMap=: '()';'"';' ',LF,TAB,CR
 
 
readSexpr=: fmt L:0 @rdSexpr :.writeSexpr
writeSexpr=: wrSexpr @(unfmt L:0) :.readSexpr</syntaxhighlight>
 
 
Example use:
 
<langsyntaxhighlight lang="j"> readSexpr '((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))'
┌───────────────────────────┬────────────────────────────────┐
│┌─────┬───────────┬───┬───┐│┌─────┬────────────────────────┐│
└───────────────────────────┴────────────────────────────────┘
writeSexpr readSexpr '((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))'
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))</langsyntaxhighlight>
 
=={{header|Java}}==
 
====LispTokenizer.java====
<langsyntaxhighlight lang="java">package jfkbits;
 
import java.io.BufferedReader;
{
}
}</syntaxhighlight>
 
====Token.java====
<langsyntaxhighlight lang="java">package jfkbits;
import java.io.StreamTokenizer;
 
}
}
}</syntaxhighlight>
 
====Atom.java====
<langsyntaxhighlight lang="java">package jfkbits;
 
import jfkbits.LispParser.Expr;
}
 
}</syntaxhighlight>
 
====StringAtom.java====
<langsyntaxhighlight lang="java">package jfkbits;
 
public class StringAtom extends Atom
}
}
</syntaxhighlight>
 
====ExprList.java====
<langsyntaxhighlight lang="java">package jfkbits;
 
import java.util.AbstractCollection;
}
 
}</syntaxhighlight>
 
====LispParser.java====
<langsyntaxhighlight lang="java">package jfkbits;
 
 
 
}
</syntaxhighlight>
 
====LispParserDemo.java====
<langsyntaxhighlight lang="java">import jfkbits.ExprList;
import jfkbits.LispParser;
import jfkbits.LispParser.ParseException;
}
}
}</syntaxhighlight>
 
=={{header|JavaScript}}==
(for a '''bug-fix''' concerning \" and \n in strings see the [[Talk:S-expressions#JavaScript_version_bugfix_for_%5C%22_and_%5Cn_in_strings|Discussion]])
===Procedural===
<syntaxhighlight lang="javascript">String.prototype.parseSexpr = function() {
var t = this.match(/\s*("[^"]*"|\(|\)|"|[^\s()"]+)/g)
for (var o, c=0, i=t.length-1; i>=0; i--) {
document.write('Invalid s-expr!', '<br>')
else
document.write('s-expr:<br>', sexpr, '<br><br>', sexpr.constructor != Array ? '' : 'pretty print:<br>' + sexpr.toPretty())</syntaxhighlight>
{{out}}
<pre>text:
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
)
)
)</pre>
 
 
===Functional===
Showing the parse tree in an indented JSON format, and writing out a reserialization:
<syntaxhighlight lang="javascript">(() => {
"use strict";
 
// ------------------ S-EXPRESSIONS ------------------
const main = () => {
const expr = [
"((data \"quoted data\" 123 4.5)",
" (data (!@# (4.5) \"(more\" \"data)\")))"
]
.join("\n");
 
const [parse, residue] = parseExpr(
tokenized(expr)
);
 
return 0 < residue.length
? `Unparsed tokens: ${JSON.stringify(residue)}`
: 0 < parse.length
? [
JSON.stringify(parse, null, 2),
"Reserialized from parse:",
parse.map(serialized).join(" ")
]
.join("\n\n")
: "Could not be parsed";
};
 
// ---------------- EXPRESSION PARSER ----------------
 
// parseExpr [String] -> ([Expr], [String])
const parseExpr = tokens =>
// A tuple of (parsed trees, residual tokens)
// derived from a list of tokens.
until(finished)(readToken)([
[], tokens
]);
 
 
// finished :: ([Expr], [String]) -> Bool
const finished = ([, tokens]) =>
// True if no tokens remain, or the next
// closes a sub-expression.
0 === tokens.length || ")" === tokens[0];
 
 
// readToken :: ([Expr], [String]) -> ([Expr], [String])
const readToken = ([xs, tokens]) => {
// A tuple of enriched expressions and
// depleted tokens.
const [token, ...ts] = tokens;
 
// An open bracket introduces recursion over
// a sub-expression to define a sub-list.
return "(" === token
? (() => {
const [expr, rest] = parseExpr(ts);
 
return [xs.concat([expr]), rest.slice(1)];
})()
: ")" === token
? [xs, token]
: [xs.concat(atom(token)), ts];
};
 
// ------------------- ATOM PARSER -------------------
 
// atom :: String -> Expr
const atom = s =>
0 < s.length
? isNaN(s)
? "\"'".includes(s[0])
? s.slice(1, -1)
: {name: s}
: parseFloat(s, 10)
: "";
 
 
// ------------------ TOKENIZATION -------------------
 
// tokenized :: String -> [String]
const tokenized = s =>
// Brackets and quoted or unquoted atomic strings.
quoteTokens("\"")(s).flatMap(
segment => "\"" !== segment[0]
? segment.replace(/([()])/gu, " $1 ")
.split(/\s+/u)
.filter(Boolean)
: [segment]
);
 
 
// quoteTokens :: Char -> String -> [String]
const quoteTokens = q =>
// Alternating unquoted and quoted segments.
s => s.split(q).flatMap(
(k, i) => even(i)
? 0 < k.length
? [k]
: []
: [`${q}${k}${q}`]
);
 
// ------------------ SERIALIZATION ------------------
 
// serialized :: Expr -> String
const serialized = e => {
const t = typeof e;
 
return "number" === t
? `${e}`
: "string" === t
? `"${e}"`
: "object" === t
? Array.isArray(e)
? `(${e.map(serialized).join(" ")})`
: e.name
: "?";
};
 
 
// --------------------- GENERIC ---------------------
 
// even :: Int -> Bool
const even = n =>
// True if 2 is a factor of n.
0 === n % 2;
 
 
// until :: (a -> Bool) -> (a -> a) -> a -> a
const until = p =>
// The value resulting from repeated applications
// of f to the seed value x, terminating when
// that result returns true for the predicate p.
f => {
const go = x =>
p(x)
? x
: go(f(x));
 
return go;
};
 
return main();
})();</syntaxhighlight>
{{Out}}
<pre>[
[
[
{
"name": "data"
},
"quoted data",
123,
4.5
],
[
{
"name": "data"
},
[
{
"name": "!@#"
},
[
4.5
],
"(more",
"data)"
]
]
]
]
 
Reserialized from parse:
 
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))</pre>
 
=={{header|jq}}==
{{works with|jq}}
'''Also works with gojq, the Go implementation of jq'''
[[Category:PEG]]
This entry is based on a Parsing Expression Grammar (PEG) for S-expressions.
The idea is to pass a JSON object `{remainder:_, result:_ }` through a
jq pipeline corresponding to a PEG for S-expressions, consuming the
text in `.remainder` and building up `.result`.
For further details about this approach, see e.g.
[https://github.com/stedolan/jq/wiki/Parsing-Expression-Grammars jq as a PEG Engine].
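For orientation, here is a rough Python sketch (illustrative only, not part of the jq entry itself) of the same state-threading idea: each step consumes a match from the front of <code>remainder</code> and appends it to <code>result</code>, or fails.
<syntaxhighlight lang="python"># Rough Python analogue of threading a {remainder, result} state through
# parsing steps; a failed step returns None, playing the role of jq's `empty`.
import re

def consume(state, pattern):
    m = re.match(pattern, state["remainder"])
    if m is None:
        return None
    return {"remainder": state["remainder"][m.end():],
            "result": state["result"] + [m.group(0)]}

state = {"remainder": "(data 123)", "result": []}
for pat in (r"\(", r"\w+", r"\s*", r"\d+", r"\)"):
    state = consume(state, pat)
print(state)   # {'remainder': '', 'result': ['(', 'data', ' ', '123', ')']}</syntaxhighlight>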
<syntaxhighlight lang=jq>
# PEG infrastructure
def star(E): ((E | star(E)) // .) ;
 
### Helper functions:
# Consume a regular expression rooted at the start of .remainder, or emit empty;
# on success, update .remainder and set .match but do NOT update .result
def consume($re):
# on failure, match yields empty
(.remainder | match("^" + $re)) as $match
| .remainder |= .[$match.length :]
| .match = $match.string;
 
def parse($re):
consume($re)
| .result = .result + [.match] ;
 
def parseNumber($re):
consume($re)
| .result = .result + [.match|tonumber] ;
 
def eos: select(.remainder == "");
 
# whitespace
def ws: consume("[ \t\r\n]*");
 
def box(E):
((.result = null) | E) as $e
| .remainder = $e.remainder
| .result += [$e.result] # the magic sauce
;
 
# S-expressions
 
# Input: a string
# Output: an array representation of the input if it is an S-expression
def SExpression:
def string: consume("\"") | parse("[^\"]") | consume("\"");
def identifier: parse("[^ \t\n\r()]+");
def decimal: parseNumber("[0-9]+([.][0-9]*)?");
def hex: parse("0x[0-9A-Fa-f]+") ;
def number: hex // decimal;
def atom: ws | (string // number // identifier);
 
def SExpr: ws | consume("[(]") | ws | box(star(atom // SExpr)) | consume("[)]");
 
{remainder: .} | SExpr | ws | eos | .result;
 
SExpression
</syntaxhighlight>
'''Invocation:'''
<pre>
cat << EOF |
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
EOF
jq -Rsc -f s-expression.jq
</pre>
{{output}}
<pre>
[[["data","\"quoted","data\"",123,4.5],["data",["!@#",[4.5],"\"",["more\"","\"data"],"\""]]]]
</pre>
 
=={{header|Julia}}==
<syntaxhighlight lang="julia">
function rewritequotedparen(s)
segments = split(s, "\"")
println("The processed native structure is:\n", nat)
println("The reconstructed string is:\n"), printAny(nat)
</syntaxhighlight>
{{output}}<pre>
The input string is:
=={{header|Kotlin}}==
{{trans|JavaScript}}
<syntaxhighlight lang="groovy">// version 1.2.31
 
const val INDENT = 2
tokens2.prettyPrint()
}
}</syntaxhighlight>
 
{{out}}
</pre>
 
=={{header|Lua}}==
 
This uses LPeg, a parsing expression grammar library written by one of the authors of Lua.
Tested with Lua 5.3.5 and LPeg 1.0.2-1.
 
<syntaxhighlight lang="lua">lpeg = require 'lpeg' -- see http://www.inf.puc-rio.br/~roberto/lpeg/
 
imports = 'P R S C V match'
for w in imports:gmatch('%a+') do _G[w] = lpeg[w] end -- make e.g. 'lpeg.P' function available as 'P'
 
function tosymbol(s) return s end
function tolist(x, ...) return {...} end -- ignore the first capture, the whole sexpr
ws = S' \t\n'^0 -- whitespace, 0 or more
 
digits = R'09'^1 -- digits, 1 or more
Tnumber = C(digits * (P'.' * digits)^-1) * ws / tonumber -- ^-1 => at most 1
 
Tstring = C(P'"' * (P(1) - P'"')^0 * P'"') * ws
 
sep = S'()" \t\n'
symstart = (P(1) - (R'09' + sep))
symchar = (P(1) - sep)
Tsymbol = C(symstart * symchar^0) * ws / tosymbol
 
atom = Tnumber + Tstring + Tsymbol
lpar = P'(' * ws
rpar = P')' * ws
sexpr = P{ -- defines a recursive pattern
'S';
S = ws * lpar * C((atom + V'S')^0) * rpar / tolist
}</syntaxhighlight>
 
Now to use the <i>sexpr</i> pattern:
 
<syntaxhighlight lang="lua">eg_input = [[
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
]]
 
eg_produced = match(sexpr, eg_input)
 
eg_expected = { -- expected Lua data structure of the reader (lpeg.match)
{'data', '"quoted data"', 123, 4.5},
{'data', {'!@#', {4.5}, '"(more"', '"data)"'}}
}
 
function check(produced, expected)
assert(type(produced) == type(expected))
if type(expected) == 'table' then -- i.e. a list
assert(#produced == #expected)
for i = 1, #expected do check(produced[i], expected[i]) end
else
assert(produced == expected)
end
end
 
check(eg_produced, eg_expected)
print("checks out!") -- won't get here if any <i>check()</i> assertion fails
</syntaxhighlight>
 
And here's the pretty printer, whose output looks like all the others:
 
<syntaxhighlight lang="lua">function pprint(expr, indent)
local function prindent(fmt, expr)
io.write(indent) -- no line break
print(string.format(fmt, expr))
end
if type(expr) == 'table' then
if #expr == 0 then
prindent('()')
else
prindent('(')
local indentmore = ' ' .. indent
for i= 1,#expr do pprint(expr[i], indentmore) end
prindent(')')
end
elseif type(expr) == 'string' then
if expr:sub(1,1) == '"' then
prindent("%q", expr:sub(2,-2)) -- print as a Lua string
else
prindent("%s", expr) -- print as a symbol
end
else
prindent("%s", expr)
end
end
 
pprint(eg_expected, '')</syntaxhighlight>
 
=={{header|Nim}}==
 
<syntaxhighlight lang="nim">import strutils
 
const Input = """
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
"""
 
type
TokenKind = enum
tokInt, tokFloat, tokString, tokIdent
tokLPar, tokRPar
tokEnd
Token = object
case kind: TokenKind
of tokString: stringVal: string
of tokInt: intVal: int
of tokFloat: floatVal: float
of tokIdent: ident: string
else: discard
 
proc lex(input: string): seq[Token] =
var pos = 0
 
template current: char =
if pos < input.len: input[pos]
else: '\x00'
 
while pos < input.len:
case current
of ';':
inc(pos)
while current notin {'\r', '\n'}:
inc(pos)
if current == '\r': inc(pos)
if current == '\n': inc(pos)
of '(': inc(pos); result.add(Token(kind: tokLPar))
of ')': inc(pos); result.add(Token(kind: tokRPar))
of '0'..'9':
var
num = ""
isFloat = false
while current in Digits:
num.add(current)
inc(pos)
if current == '.':
num.add(current)
isFloat = true
inc(pos)
while current in Digits:
num.add(current)
inc(pos)
result.add(if isFloat: Token(kind: tokFloat, floatVal: parseFloat(num))
else: Token(kind: tokInt, intVal: parseInt(num)))
of ' ', '\t', '\n', '\r': inc(pos)
of '"':
var str = ""
inc(pos)
while current != '"':
str.add(current)
inc(pos)
inc(pos)
result.add(Token(kind: tokString, stringVal: str))
else:
const BannedChars = {' ', '\t', '"', '(', ')', ';'}
var ident = ""
while current notin BannedChars:
ident.add(current)
inc(pos)
result.add(Token(kind: tokIdent, ident: ident))
result.add(Token(kind: tokEnd))
 
type
SExprKind = enum
seInt, seFloat, seString, seIdent, seList
SExpr = ref object
case kind: SExprKind
of seInt: intVal: int
of seFloat: floatVal: float
of seString: stringVal: string
of seIdent: ident: string
of seList: children: seq[SExpr]
ParseError = object of CatchableError
 
proc `$`*(se: SExpr): string =
case se.kind
of seInt: result = $se.intVal
of seFloat: result = $se.floatVal
of seString: result = '"' & se.stringVal & '"'
of seIdent: result = se.ident
of seList:
result = "("
for i, ex in se.children:
if ex.kind == seList and ex.children.len > 1:
result.add("\n")
result.add(indent($ex, 2))
else:
if i > 0:
result.add(" ")
result.add($ex)
result.add(")")
 
var
tokens = lex(Input)
pos = 0
 
template current: Token =
if pos < tokens.len: tokens[pos]
else: Token(kind: tokEnd)
 
proc parseInt(token: Token): SExpr =
result = SExpr(kind: seInt, intVal: token.intVal)
 
proc parseFloat(token: Token): SExpr =
result = SExpr(kind: seFloat, floatVal: token.floatVal)
 
proc parseString(token: Token): SExpr =
result = SExpr(kind: seString, stringVal: token.stringVal)
 
proc parseIdent(token: Token): SExpr =
result = SExpr(kind: seIdent, ident: token.ident)
 
proc parse(): SExpr
 
proc parseList(): SExpr =
result = SExpr(kind: seList)
while current.kind notin {tokRPar, tokEnd}:
result.children.add(parse())
if current.kind == tokEnd:
raise newException(ParseError, "Missing right paren ')'")
else:
inc(pos)
 
proc parse(): SExpr =
var token = current
inc(pos)
result =
case token.kind
of tokInt: parseInt(token)
of tokFloat: parseFloat(token)
of tokString: parseString(token)
of tokIdent: parseIdent(token)
of tokLPar: parseList()
else: nil
 
echo parse()</syntaxhighlight>
 
{{out}}
 
<pre>
(
(data "quoted data" 123 4.5)
(data
(!@# (4.5) "(more" "data)")))
</pre>
 
=={{header|OCaml}}==
You may be interested in this [https://dev.realworldocaml.org/data-serialization.html chapter of the book Real World OCaml].
 
The file <code>SExpr.mli</code> containing the interface:
 
<syntaxhighlight lang="ocaml">(** This module is a very simple parsing library for S-expressions. *)
(* Copyright (C) 2009 Florent Monnier, released under MIT license. *)
 
 
val string_of_sexpr_indent : sexpr list -> string
(** same as [string_of_sexpr] but with indentation *)</syntaxhighlight>
 
The file <code>SExpr.ml</code> containing the implementation:
 
<syntaxhighlight lang="ocaml">(** This module is a very simple parsing library for S-expressions. *)
(* Copyright (C) 2009 Florent Monnier, released under MIT license. *)
(* modified to match the task description *)
 
let print_sexpr_indent s =
  print_endline (string_of_sexpr_indent s)</syntaxhighlight>
 
Then we compile this small module and test it in the interactive loop:
 
=={{header|Perl}}==
<syntaxhighlight lang="perl">#!/usr/bin/perl -w
use strict;
use warnings;
ref($_) eq 'ARRAY' ? sexpr2txt($_) : $$_
} @{$_[0]} ]})}
}</syntaxhighlight>
Check:
<syntaxhighlight lang="perl">my $s = sexpr(q{
 
((data "quoted data" 123 4.5)
 
# Convert back
print sexpr2txt($s)."\n";</syntaxhighlight>
Output:
<pre>$VAR1 = [
]
];
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))</pre>
 
=={{header|Phix}}==
Note that 4e-5 and 4-e5 may appear similar on the display, but the latter is probably a parse failure. It may be more sensible for get_term() to raise an error if the scanf fails than to assume it is a symbol, as it does now.
Also, pp_StrFmt -3 (a combination of the existing -1 and -2 behaviour) was added specifically for this task.
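As a purely illustrative aside (Python, not part of the Phix entry), the distinction is the one an ordinary float parser already makes:
<syntaxhighlight lang="python"># "4e-5" is a valid float literal; "4-e5" is not, so a reader has to choose
# between raising an error and silently keeping it as a symbol.
for tok in ("4e-5", "4-e5"):
    try:
        print(tok, "->", float(tok))
    except ValueError:
        print(tok, "-> not a number")</syntaxhighlight>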
<!--<syntaxhighlight lang="phix">(phixonline)-->
<span style="color: #008080;">with</span> <span style="color: #008080;">javascript_semantics</span>
<span style="color: #008080;">constant</span> <span style="color: #000000;">s_expr_str</span> <span style="color: #0000FF;">=</span> <span style="color: #008000;">"""
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))"""</span>
 
<span style="color: #008080;">function</span> <span style="color: #000000;">skip_spaces</span><span style="color: #0000FF;">(</span><span style="color: #004080;">string</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">,</span> <span style="color: #004080;">integer</span> <span style="color: #000000;">sidx</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">while</span> <span style="color: #000000;">sidx</span><span style="color: #0000FF;"><=</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">and</span> <span style="color: #7060A8;">find</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">],{</span><span style="color: #008000;">' '</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\t'</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\r'</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\n'</span><span style="color: #0000FF;">})</span> <span style="color: #008080;">do</span> <span style="color: #000000;">sidx</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span> <span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #008080;">return</span> <span style="color: #000000;">sidx</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">function</span>
 
<span style="color: #008080;">function</span> <span style="color: #000000;">get_term</span><span style="color: #0000FF;">(</span><span style="color: #004080;">string</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">,</span> <span style="color: #004080;">integer</span> <span style="color: #000000;">sidx</span><span style="color: #0000FF;">)</span>
<span style="color: #000080;font-style:italic;">-- get a single quoted string, symbol, or number.</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">ch</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">]</span>
<span style="color: #004080;">string</span> <span style="color: #000000;">res</span> <span style="color: #0000FF;">=</span> <span style="color: #008000;">""</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">ch</span><span style="color: #0000FF;">=</span><span style="color: #008000;">'\"'</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">res</span> <span style="color: #0000FF;">&=</span> <span style="color: #000000;">ch</span>
<span style="color: #008080;">while</span> <span style="color: #000000;">1</span> <span style="color: #008080;">do</span>
<span style="color: #000000;">sidx</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span>
<span style="color: #000000;">ch</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">]</span>
<span style="color: #000000;">res</span> <span style="color: #0000FF;">&=</span> <span style="color: #000000;">ch</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">ch</span><span style="color: #0000FF;">=</span><span style="color: #008000;">'\\'</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">sidx</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span>
<span style="color: #000000;">ch</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">]</span>
<span style="color: #000000;">res</span> <span style="color: #0000FF;">&=</span> <span style="color: #000000;">ch</span>
<span style="color: #008080;">elsif</span> <span style="color: #000000;">ch</span><span style="color: #0000FF;">=</span><span style="color: #008000;">'\"'</span> <span style="color: #008080;">then</span>
<span style="color: #000000;">sidx</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span>
<span style="color: #008080;">exit</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #008080;">else</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">asnumber</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">(</span><span style="color: #000000;">ch</span><span style="color: #0000FF;">>=</span><span style="color: #008000;">'0'</span> <span style="color: #008080;">and</span> <span style="color: #000000;">ch</span><span style="color: #0000FF;"><=</span><span style="color: #008000;">'9'</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">while</span> <span style="color: #008080;">not</span> <span style="color: #7060A8;">find</span><span style="color: #0000FF;">(</span><span style="color: #000000;">ch</span><span style="color: #0000FF;">,{</span><span style="color: #008000;">')'</span><span style="color: #0000FF;">,</span><span style="color: #008000;">' '</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\t'</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\r'</span><span style="color: #0000FF;">,</span><span style="color: #008000;">'\n'</span><span style="color: #0000FF;">})</span> <span style="color: #008080;">do</span>
<span style="color: #000000;">res</span> <span style="color: #0000FF;">&=</span> <span style="color: #000000;">ch</span>
<span style="color: #000000;">sidx</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">sidx</span><span style="color: #0000FF;">></span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span> <span style="color: #008080;">exit</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000000;">ch</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">]</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">asnumber</span> <span style="color: #008080;">then</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">scanres</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">scanf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">res</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"%f"</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">if</span> <span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">scanres</span><span style="color: #0000FF;">)=</span><span style="color: #000000;">1</span> <span style="color: #008080;">then</span> <span style="color: #008080;">return</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">scanres</span><span style="color: #0000FF;">[</span><span style="color: #000000;">1</span><span style="color: #0000FF;">][</span><span style="color: #000000;">1</span><span style="color: #0000FF;">],</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">}</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000080;font-style:italic;">-- error? (failed to parse number)</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">return</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">res</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">}</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">function</span>
 
<span style="color: #008080;">function</span> <span style="color: #000000;">parse_s_expr</span><span style="color: #0000FF;">(</span><span style="color: #004080;">string</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">,</span> <span style="color: #004080;">integer</span> <span style="color: #000000;">sidx</span><span style="color: #0000FF;">)</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">ch</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">]</span>
<span style="color: #004080;">sequence</span> <span style="color: #000000;">res</span> <span style="color: #0000FF;">=</span> <span style="color: #0000FF;">{}</span>
<span style="color: #004080;">object</span> <span style="color: #000000;">element</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">ch</span><span style="color: #0000FF;">!=</span><span style="color: #008000;">'('</span> <span style="color: #008080;">then</span> <span style="color: #0000FF;">?</span><span style="color: #000000;">9</span><span style="color: #0000FF;">/</span><span style="color: #000000;">0</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000000;">sidx</span> <span style="color: #0000FF;">+=</span> <span style="color: #000000;">1</span>
<span style="color: #008080;">while</span> <span style="color: #000000;">1</span> <span style="color: #008080;">do</span>
<span style="color: #000000;">sidx</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">skip_spaces</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">)</span>
<span style="color: #000080;font-style:italic;">-- error? (if past end of string/missing ')')</span>
<span style="color: #000000;">ch</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">s</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">]</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">ch</span><span style="color: #0000FF;">=</span><span style="color: #008000;">')'</span> <span style="color: #008080;">then</span> <span style="color: #008080;">exit</span> <span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">ch</span><span style="color: #0000FF;">=</span><span style="color: #008000;">'('</span> <span style="color: #008080;">then</span>
<span style="color: #0000FF;">{</span><span style="color: #000000;">element</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">}</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">parse_s_expr</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">else</span>
<span style="color: #0000FF;">{</span><span style="color: #000000;">element</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">}</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">get_term</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
<span style="color: #000000;">res</span> <span style="color: #0000FF;">=</span> <span style="color: #7060A8;">append</span><span style="color: #0000FF;">(</span><span style="color: #000000;">res</span><span style="color: #0000FF;">,</span><span style="color: #000000;">element</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">while</span>
<span style="color: #000000;">sidx</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">skip_spaces</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">+</span><span style="color: #000000;">1</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">return</span> <span style="color: #0000FF;">{</span><span style="color: #000000;">res</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">}</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">function</span>
 
<span style="color: #004080;">sequence</span> <span style="color: #000000;">s_expr</span>
<span style="color: #004080;">integer</span> <span style="color: #000000;">sidx</span>
<span style="color: #0000FF;">{</span><span style="color: #000000;">s_expr</span><span style="color: #0000FF;">,</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">}</span> <span style="color: #0000FF;">=</span> <span style="color: #000000;">parse_s_expr</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s_expr_str</span><span style="color: #0000FF;">,</span><span style="color: #000000;">1</span><span style="color: #0000FF;">)</span>
<span style="color: #008080;">if</span> <span style="color: #000000;">sidx</span><span style="color: #0000FF;"><=</span><span style="color: #7060A8;">length</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s_expr_str</span><span style="color: #0000FF;">)</span> <span style="color: #008080;">then</span>
<span style="color: #7060A8;">printf</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"incomplete parse(\"%s\")\n"</span><span style="color: #0000FF;">,{</span><span style="color: #000000;">s_expr_str</span><span style="color: #0000FF;">[</span><span style="color: #000000;">sidx</span><span style="color: #0000FF;">..$]})</span>
<span style="color: #008080;">end</span> <span style="color: #008080;">if</span>
 
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"\nThe string:\n"</span><span style="color: #0000FF;">)</span>
<span style="color: #0000FF;">?</span><span style="color: #000000;">s_expr_str</span>
 
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"\nDefault pretty printing:\n"</span><span style="color: #0000FF;">)</span>
<span style="color: #000080;font-style:italic;">--?s_expr</span>
<span style="color: #7060A8;">pp</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s_expr</span><span style="color: #0000FF;">)</span>
 
<span style="color: #7060A8;">puts</span><span style="color: #0000FF;">(</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"\nBespoke pretty printing:\n"</span><span style="color: #0000FF;">)</span>
<span style="color: #000080;font-style:italic;">--ppEx(s_expr,{pp_Nest,1,pp_StrFmt,-3,pp_Brkt,"()"})</span>
<span style="color: #7060A8;">ppEx</span><span style="color: #0000FF;">(</span><span style="color: #000000;">s_expr</span><span style="color: #0000FF;">,{</span><span style="color: #004600;">pp_Nest</span><span style="color: #0000FF;">,</span><span style="color: #000000;">4</span><span style="color: #0000FF;">,</span><span style="color: #004600;">pp_StrFmt</span><span style="color: #0000FF;">,-</span><span style="color: #000000;">1</span><span style="color: #0000FF;">,</span><span style="color: #004600;">pp_IntCh</span><span style="color: #0000FF;">,</span><span style="color: #004600;">false</span><span style="color: #0000FF;">,</span><span style="color: #004600;">pp_Brkt</span><span style="color: #0000FF;">,</span><span style="color: #008000;">"()"</span><span style="color: #0000FF;">})</span>
<!--</syntaxhighlight>-->
{{out}}
<pre>
=={{header|PicoLisp}}==
The '[http://software-lab.de/doc/refA.html#any any]' function parses an s-expression from a string (identical to the way '[http://software-lab.de/doc/refR.html#read read]' does this from an input stream).
<syntaxhighlight lang="picolisp">: (any "((data \"quoted data\" 123 4.5) (data (!@# (4.5) \"(more\" \"data)\")))")
-> ((data "quoted data" 123 5) (data (!@# (5) "(more" "data)")))
 
+-- "(more"
|
+-- "data)"</syntaxhighlight>
Implementing a subset of 'any' explicitly:
<syntaxhighlight lang="picolisp">(de readSexpr ()
(case (skip)
("(" (char) (readList))
(until (or (sp? (peek)) (member (peek) '("(" ")")))
(link (char)) ) )
(or (format X) (intern (pack X))) ) )</syntaxhighlight>
It can be used in a pipe to read from a string:
<syntaxhighlight lang="picolisp">: (pipe (prin "((data \"quoted data\" 123 4.5) (data (!@# (4.5) \"(more\" \"data)\")))") (readSexpr))
-> ((data "quoted data" 123 5) (data (!@# (5) "(more" "data)")))</syntaxhighlight>
'[http://software-lab.de/doc/refS.html#sym sym]' does the reverse (i.e. builds a symbol (string) from an expression).
<syntaxhighlight lang="picolisp">: (sym @@)
-> "((data \"quoted data\" 123 5) (data (!@# (5) \"(more\" \"data)\")))"</syntaxhighlight>
Implementing a subset of the built-in printer:
<syntaxhighlight lang="picolisp">(de printSexpr (Expr Fun)
(cond
((pair Expr)
(mapc Fun (chop Expr))
(Fun "\"") )
(T (mapc Fun (chop Expr))) ) )</syntaxhighlight>
This can be used for plain printing
<syntaxhighlight lang="picolisp">: (printSexpr
'((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))
prin )
((data "quoted data" 123 5) (data (!@# (5) "(more" "data)")))</syntaxhighlight>
or to collect the characters into a string:
<syntaxhighlight lang="picolisp">: (pack
(make
(printSexpr
'((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))
link ) ) )
-> "((data \"quoted data\" 123 5) (data (!@# (5) \"(more\" \"data)\")))"</syntaxhighlight>
 
=={{header|Pike}}==
<syntaxhighlight lang="pike">class Symbol(string name)
{
string _sprintf(int type)
string input = "((data \"quoted data\" 123 4.5)\n (data (!@# (4.5) \"(more\" \"data)\")))";
array data = group(tokenizer(input))[0];
string output = sexp(data);</syntaxhighlight>
 
Output:
=={{header|Potion}}==
How values are stored: tuples for lists, integers for integers, floats for floats, strings for symbols, quoted strings for strings. This implementation is not the most elegant or succinct, and it is trusting (it has no real error handling).
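As an illustrative sketch only (written in Python rather than Potion), the task input under that mapping would look like this:
<syntaxhighlight lang="python"># Tuples stand in for lists, ints/floats for numbers, bare strings for
# symbols, and strings that keep their quotes for string literals.
parsed = (("data", '"quoted data"', 123, 4.5),
          ("data", ("!@#", (4.5,), '"(more"', '"data)"')))
print(parsed)</syntaxhighlight>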
<syntaxhighlight lang="potion">isdigit = (c): 47 < c ord and c ord < 58.
iswhitespace = (c): c ord == 10 or c ord == 13 or c == " ".
 
parsesexpr("((data \"quoted data\" 123 4.5)
(data (!@# (4.5) \"(more\" \"data)\")))") string print
"\n" print</syntaxhighlight>
 
=={{header|Python}}==
===Procedural===
<syntaxhighlight lang="python">import re
 
dbg = False
print("\nParsed to Python:", parsed)
 
print("\nThen back to: '%s'" % print_sexp(parsed))</syntaxhighlight>
 
;Output:
;Simpler parser:
Note that in the example above the parser also recognises and changes the type of some tokens as well as generating a nested list. If that functionality is not needed, or better done elsewhere, then the parse function can be achieved more simply by just applying the regexp:
<syntaxhighlight lang="python">>>> from pprint import pprint as pp
>>> x = [[(t,v) for t,v in termtypes.groupdict().items() if v][0] for termtypes in re.finditer(term_regex, sexp)]
>>> pp(x)
('brackr', ')'),
('brackr', ')')]
>>> </syntaxhighlight>
 
===Functional===
Composing functionally, and writing out a tree diagram and a serialization of the parse.
<syntaxhighlight lang="python">'''S-expressions'''
 
from itertools import chain, repeat
import re
 
 
def main():
'''Sample s-expression parsed, diagrammed,
and reserialized from the parse tree.
'''
expr = "((data \"quoted data\" 123 4.5)\n" + (
" (data (!@# (4.5) \"(more\" \"data)\")))"
)
parse = parseExpr(tokenized(expr))[0]
print(
drawForest([
fmapTree(str)(tree) for tree
in forestFromExprs(parse)
])
)
print(
f'\nReserialized from parse:\n\n{serialized(parse)}'
)
 
 
# ----------------- S-EXPRESSION PARSER ------------------
 
# parseExpr :: [String] -> ([Expr], [String]
def parseExpr(tokens):
'''A tuple of a nested list with any
unparsed tokens that remain.
'''
return until(finished)(parseToken)(
([], tokens)
)
 
 
# finished :: ([Expr], [String]) -> Bool
def finished(xr):
'''True if no tokens remain,
or the next token is a closing bracket.
'''
r = xr[1]
return (not r) or (r[0] == ")")
 
 
# parseToken :: ([Expr], [String]) -> ([Expr], [String])
def parseToken(xsr):
'''A tuple of an expanded expression list
and a reduced token list.
'''
xs, r = xsr
h, *t = r
if "(" == h:
expr, rest = parseExpr(t)
return xs + [expr], rest[1:]
else:
return (xs, t) if ")" == h else (
xs + [atom(h)], t
)
 
# --------------------- ATOM PARSER ----------------------
 
# atom :: String -> Expr
def atom(s):
'''A Symbol, String, Float, or Int derived from s.
Symbol is represented as a dict with a 'name' key.
'''
def n(k):
return float(k) if '.' in k else int(k)
 
return s if '"' == s[0] else (
n(s) if s.replace('.', '', 1).isdigit() else {
"name": s
}
)
 
 
# --------------------- TOKENIZATION ---------------------
 
# tokenized :: String -> [String]
def tokenized(s):
'''A list of the tokens in s.
'''
return list(chain.from_iterable(map(
lambda token: [token] if '"' == token[0] else (
x for x in re.split(
r'\s+',
re.sub(r"([()])", r" \1 ", token)
) if x
) if token else [], (
x if (0 == i % 2) else f'"{x}"'
for (i, x) in enumerate(s.split('"'))
)
)))
 
 
# -------------------- SERIALIZATION ---------------------
 
# serialized :: Expr -> String
def serialized(e):
'''An s-expression written out from the parse tree.
'''
k = typename(e)
 
return str(e) if k in ['int', 'float', 'str'] else (
(
f'({" ".join([serialized(x) for x in e])})' if (
(1 < len(e)) or ('list' != typename(e[0]))
) else serialized(e[0])
) if 'list' == k else (
e.get("name") if 'dict' == k else "?"
)
)
 
 
# typename :: a -> String
def typename(x):
'''Name property of the type of a value.'''
return type(x).__name__
 
 
# ------------------- TREE DIAGRAMMING -------------------
 
# Node :: a -> [Tree a] -> Tree a
def Node(v):
'''Constructor for a Tree node which connects a
value of some kind to a list of zero or
more child trees.
'''
return lambda xs: {'type': 'Tree', 'root': v, 'nest': xs}
 
 
# append :: [a] -> [a] -> [a]
def append(a, b):
'''Concatenation.'''
return a + b
 
 
# draw :: Tree a -> [String]
def draw(node):
'''List of the lines of an ASCII
diagram of a tree.
'''
def shift_(h, other, xs):
return list(map(
append,
chain(
[h], (
repeat(other, len(xs) - 1)
)
),
xs
))
 
def drawSubTrees(xs):
return (
(
['|'] + shift_(
'├─ ', '│ ', draw(xs[0])
) + drawSubTrees(xs[1:])
) if 1 < len(xs) else ['|'] + shift_(
'└─ ', ' ', draw(xs[0])
)
) if xs else []
 
return (root(node)).splitlines() + (
drawSubTrees(nest(node))
)
 
 
# drawForest :: [Tree String] -> String
def drawForest(trees):
'''A simple unicode character representation of
a list of trees.
'''
return '\n'.join(map(drawTree, trees))
 
 
# drawTree :: Tree a -> String
def drawTree(tree):
'''ASCII diagram of a tree.'''
return '\n'.join(draw(tree))
 
 
# fmapTree :: (a -> b) -> Tree a -> Tree b
def fmapTree(f):
'''A new tree holding the results of
an application of f to each root in
the existing tree.
'''
def go(x):
return Node(
f(root(x))
)([go(v) for v in nest(x)])
return go
 
 
# forestFromExprs :: [Expr] -> [Tree Expr]
def forestFromExprs(es):
'''A list of expressions rewritten as a forest.
'''
return [treeFromExpr(x) for x in es]
 
 
# nest :: Tree a -> [Tree a]
def nest(t):
'''Accessor function for children of tree node.'''
return t.get('nest')
 
 
# root :: Tree a -> a
def root(t):
'''Accessor function for data of tree node.'''
return t.get('root')
 
 
# treeFromExprs :: Expr -> Tree Expr
def treeFromExpr(e):
'''An expression rewritten as a tree.
'''
return (
Node({"name": "List"})(forestFromExprs(e))
) if type(e) is list else (
Node(e)([])
)
 
 
# ----------------------- GENERIC ------------------------
 
# until :: (a -> Bool) -> (a -> a) -> a -> a
def until(p):
'''The result of repeatedly applying f until p holds.
The initial seed value is x.
'''
def go(f):
def loop(x):
v = x
while not p(v):
v = f(v)
return v
return loop
return go
 
 
# MAIN ---
if __name__ == '__main__':
main()</syntaxhighlight>
{{Out}}
<pre>{'name': 'List'}
|
├─ {'name': 'List'}
│ |
│ ├─ {'name': 'data'}
│ |
│ ├─ "quoted data"
│ |
│ ├─ 123
│ |
│ └─ 4.5
|
└─ {'name': 'List'}
|
├─ {'name': 'data'}
|
└─ {'name': 'List'}
|
├─ {'name': '!@#'}
|
├─ {'name': 'List'}
│ |
│ └─ 4.5
|
├─ "(more"
|
└─ "data)"
 
Reserialized from parse:
 
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))</pre>
 
=={{header|Racket}}==
 
Racket has built-in support for S-expressions in the form of the <code>read</code> function.
<syntaxhighlight lang="racket">
#lang racket
(define input
 
(read (open-input-string input))
</syntaxhighlight>
Output:
<pre>
'((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))
</pre>
 
=={{header|Raku}}==
(formerly Perl 6)
{{works with|Rakudo|2020.02}}
 
This parses the task, but it isn't really a good lisp parser, because it always wants whitespace between lists, so <code>(()())</code> will fail (<code>(() ())</code> won't).
 
<syntaxhighlight lang="raku" line>grammar S-Exp {
rule TOP {^ <s-list> $};
 
token s-list { '(' ~ ')' [ <in_list>+ % [\s+] | '' ] }
token in_list { <s-token> | <s-list> }
proto token s-token {*}
token s-token:sym<Num> {\d*\.?\d+}
token s-token:sym<String> {'"' ['\"' |<-[\\"]>]*? '"'} #'
token s-token:sym<Atom> {<-[()\s]>+}
}
# The Actions class, for each syntactic rule there is a method
# that stores some data in the abstract syntax tree with make
class S-Exp::ACTIONS {
method TOP ($/) {make $<s-list>.ast}
method s-list ($/) {make [$<in_list>».ast]}
method in_list ($/) {make $/.values[0].ast}
method s-token:sym<Num> ($/){make +$/}
method s-token:sym<String> ($/){make ~$/.substr(1,*-1)}
method s-token:sym<Atom> ($/){make ~$/}
}
multi s-exp_writer (Positional $ary) {'(' ~ $ary.map(&s-exp_writer).join(' ') ~ ')'}
multi s-exp_writer (Numeric $num) {~$num}
multi s-exp_writer (Str $str) {
return $str unless $str ~~ /<[(")]>|\s/;
return '()' if $str eq '()';
'"' ~ $str.subst('"', '\"' ) ~ '"';
}
my $s-exp = '((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))';
my $actions = S-Exp::ACTIONS.new();
my $raku_array = (S-Exp.parse($s-exp, :$actions)).ast;
say "the expression:\n$s-exp\n";
say "the Raku expression:\n{$raku_array.raku}\n";
say "and back:\n{s-exp_writer($raku_array)}";</syntaxhighlight>
 
{{out}}
<pre>the expression:
((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))
 
the Raku expression:
[["data", "quoted data", "123", 9/2], ["data", ["!\@#", [9/2], "(more", "data)"]]]
 
and back:
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))</pre>
 
=={{header|REXX}}==
<br>Also added were two more separators &nbsp; (a comma and semicolon).
<br>Separators that could be added are more whitespace characters (vertical/horizontal tabs, line feed, form feed, tab char, etc).
 
It would normally be considered improper, but the literal string delimiters were left intact, making it much easier to understand what is/was being parsed.
<syntaxhighlight lang="rexx">/*REXX program parses an S-expression and displays the results to the terminal. */
input= '((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))'
say center('input', length(input), "═") /*display the header title to terminal.*/
end /*n*/ /* [↑] handle a bunch of grouping syms*/
literals=
do k=1 while q.k\==''; literals= literals || q.k /*add literal delimiters*/
end /*k*/
!=; literalStart=
do j=1 to length(input); $= substr(input, j, 1) /* ◄■■■■■text parsing*/
/* ◄■■■■■text parsing*/
if quoted then do; !=! || $; if $==literalStart then quoted= 0 /* ◄■■■■■text parsing*/
iterate /* ◄■■■■■text parsing*/
end /* [↑] handle running quoted string. */ /* ◄■■■■■text parsing*/
/* ◄■■■■■text parsing*/
if pos($, literals)\==0 then do; literalStart= $; != ! || $; quoted= 1 /* ◄■■■■■text parsing*/
iterate /* ◄■■■■■text parsing*/
end /* [↑] handle start of quoted string. */ /* ◄■■■■■text parsing*/
/* ◄■■■■■text parsing*/
if pos($, atoms)==0 then do; != ! || $ ; iterate; end /*is an atom?*/ /* ◄■■■■■text parsing*/
else do; call add!; != $; end /*isn't an atom?*/ /* ◄■■■■■text parsing*/
/* ◄■■■■■text parsing*/
if pos($, literals)==0 then do; if pos($, grpU)\==0 then level= level + 1 /* ◄■■■■■text parsing*/
call add! /* ◄■■■■■text parsing*/
if pos($, grpD)\==0 then level= level - 1 /* ◄■■■■■text parsing*/
if level<0 then say 'error, mismatched' $ /* ◄■■■■■text parsing*/
end /* ◄■■■■■text parsing*/
end /*j*/ /* ◄■■■■■text parsing*/
/* ◄■■■■■text parsing*/
call add! /*process any residual tokens.*/ /* ◄■■■■■text parsing*/
if level\==0 then say 'error, mismatched grouping symbol' /* ◄■■■■■text parsing*/
if quoted then say 'error, no end of quoted literal' literalStart /* ◄■■■■■text parsing*/
 
do m=1 for #; say @.m /*display the tokens ───► terminal. */
end /*m*/
exit 0 /*stick a fork in it, we're all done. */
/*──────────────────────────────────────────────────────────────────────────────────────*/
add!: if !='' then return; #=#+1; @.#=left("", max(0, tabs*(level-1)))!; !=; return</syntaxhighlight>
{{out|output|text=&nbsp; when using the default input:}}
<pre>
=={{header|Ruby}}==
{{works with|Ruby|1.9}}
<syntaxhighlight lang="ruby">class SExpr
def initialize(str)
@original = str
puts "original sexpr:\n#{sexpr.original}"
puts "\nruby data structure:\n#{sexpr.data}"
puts "\nand back to S-Expr:\n#{sexpr.to_sexpr}"</syntaxhighlight>
 
{{out}}
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))
</pre>
 
=={{header|Rust}}==
lib.rs:
<syntaxhighlight lang="rust">
//! This implementation isn't based on anything in particular, although it's probably informed by a
//! lot of Rust's JSON encoding code. It should be very fast (both encoding and decoding the toy
//! example here takes under a microsecond on my machine) and tries to avoid unnecessary allocation.
//!
//! In a real implementation, most of this would be private, with only a few visible functions, and
//! there would be somewhat nicer signatures (in particular, the fact that `ParseContext` has to be
//! mutable would get annoying in real code pretty quickly, so it would probably be split out).
//!
//! It supports the ability to read individual atoms, not just lists, although whether this is
//! useful is questionable.
//!
//! Caveats: Does not support symbols vs. non-symbols (it wouldn't be hard, but it would greatly
//! complicate setting up our test structure since we'd have to force it to go through functions
//! that checked to make sure `Symbol`s couldn't have spaces, or slow down our parser by checking
//! for this information each time, which is obnoxious). Does not support string escaping, because
//! the decoding technique doesn't allocate extra space for strings. Does support numbers, but
//! only float types (supporting more types is possible but would complicate the code
//! significantly).
 
extern crate typed_arena;
 
use typed_arena::Arena;
 
use self::Error::*;
use self::SExp::*;
use self::Token::*;
use std::io;
use std::num::FpCategory;
use std::str::FromStr;
 
/// The actual `SExp` structure. Supports `f64`s, lists, and string literals. Note that it takes
/// everything by reference, rather than owning it--this is mostly done just so we can allocate
/// `SExp`s statically (since we don't have to call `Vec`). It does complicate the code a bit,
/// requiring us to have a `ParseContext` that holds an arena where lists are actually allocated.
#[derive(PartialEq, Debug)]
pub enum SExp<'a> {
/// Float literal: 0.5
F64(f64),
 
/// List of SExps: ( a b c)
List(&'a [SExp<'a>]),
 
/// Plain old string literal: "abc"
Str(&'a str),
}
 
/// Errors that can be thrown by the parser.
#[derive(PartialEq, Debug)]
pub enum Error {
/// If the float is `NaN`, `Infinity`, etc.
NoReprForFloat,
 
/// Missing an end double quote during string parsing
UnterminatedStringLiteral,
 
/// Some other kind of I/O error
Io,
 
/// ) appeared where it shouldn't (usually as the first token)
IncorrectCloseDelimiter,
 
/// Usually means a missing ), but could also mean there were no tokens at all.
UnexpectedEOF,
 
/// More tokens after the list is finished, or after a literal if there is no list.
ExpectedEOF,
}
 
impl From<io::Error> for Error {
fn from(_err: io::Error) -> Error {
Error::Io
}
}
 
/// Tokens returned from the token stream.
#[derive(PartialEq, Debug)]
enum Token<'a> {
/// Left parenthesis
ListStart,
 
/// Right parenthesis
ListEnd,
 
/// String or float literal, quotes removed.
Literal(SExp<'a>),
 
/// Stream is out of tokens.
Eof,
}
 
/// An iterator over a string that yields a stream of Tokens.
///
/// Implementation note: it probably seems weird to store first, rest, AND string, since they should
/// all be derivable from string. But see below.
#[derive(Copy, Clone, Debug)]
struct Tokens<'a> {
/// The part of the string that still needs to be parsed
string: &'a str,
 
/// The first character to parse
first: Option<char>,
 
/// The rest of the string after the first character
rest: &'a str,
}
 
impl<'a> Tokens<'a> {
/// Initialize a token stream for a given string.
fn new(string: &str) -> Tokens {
let mut chars = string.chars();
 
match chars.next() {
Some(ch) => Tokens {
string,
first: Some(ch),
rest: chars.as_str(),
},
None => Tokens {
string,
first: None,
rest: string,
},
}
}
 
/// Utility function to update information in the iterator. It might not be performant to keep
/// rest cached, but there are times where we don't know exactly what string is (at least, not
/// in a way that we can *safely* reconstruct it without allocating), so we keep both here.
/// With some unsafe code we could probably get rid of one of them (and maybe first, too).
fn update(&mut self, string: &'a str) {
self.string = string;
let mut chars = self.string.chars();
 
if let Some(ch) = chars.next() {
self.first = Some(ch);
self.rest = chars.as_str();
} else {
self.first = None;
};
}
 
/// This is where the lexing happens. Note that it does not handle string escaping.
fn next_token(&mut self) -> Result<Token<'a>, Error> {
loop {
match self.first {
// List start
Some('(') => {
self.update(self.rest);
return Ok(ListStart);
}
// List end
Some(')') => {
self.update(self.rest);
return Ok(ListEnd);
}
// Quoted literal start
Some('"') => {
// Split the string at most once. This lets us get a
// reference to the next piece of the string without having
// to loop through the string again.
let mut iter = self.rest.splitn(2, '"');
// The first time splitn is run it will never return None, so this is safe.
let str = iter.next().unwrap();
match iter.next() {
// Extract the interior of the string without allocating. If we want to
// handle string escaping, we would have to allocate at some point though.
Some(s) => {
self.update(s);
return Ok(Literal(Str(str)));
}
None => return Err(UnterminatedStringLiteral),
}
}
// Plain old literal start
Some(c) => {
// Skip whitespace. This could probably be made more efficient.
if c.is_whitespace() {
self.update(self.rest);
continue;
}
// Since we've exhausted all other possibilities, this must be a real literal.
// Unlike the quoted case, it's not an error to encounter EOF before whitespace.
let mut end_ch = None;
let str = {
let mut iter = self.string.splitn(2, |ch: char| {
let term = ch == ')' || ch == '(';
if term {
end_ch = Some(ch)
}
term || ch.is_whitespace()
});
// The first time splitn is run it will never return None, so this is safe.
let str = iter.next().unwrap();
self.rest = iter.next().unwrap_or("");
str
};
match end_ch {
// self.string will be incorrect in the Some(_) case. The only reason it's
// okay is because the next time next() is called in this case, we know it
// will be '(' or ')', so it will never reach any code that actually looks
// at self.string. In a real implementation this would be enforced by
// visibility rules.
Some(_) => self.first = end_ch,
None => self.update(self.rest),
}
return Ok(Literal(parse_literal(str)));
}
None => return Ok(Eof),
}
}
}
}
 
/// This is not the most efficient way to do this, because we end up going over numeric literals
/// twice, but it avoids having to write our own number parsing logic.
fn parse_literal(literal: &str) -> SExp {
match literal.bytes().next() {
Some(b'0'..=b'9') | Some(b'-') => match f64::from_str(literal) {
Ok(f) => F64(f),
Err(_) => Str(literal),
},
_ => Str(literal),
}
}
 
/// Parse context, holds information required by the parser (and owns any allocations it makes)
pub struct ParseContext<'a> {
/// The string being parsed. Not required, but convenient.
string: &'a str,
 
/// Arena holding any allocations made by the parser.
arena: Option<Arena<Vec<SExp<'a>>>>,
 
/// Stored in the parse context so it can be reused once allocated.
stack: Vec<Vec<SExp<'a>>>,
}
 
impl<'a> ParseContext<'a> {
/// Create a new parse context from a given string
pub fn new(string: &'a str) -> ParseContext<'a> {
ParseContext {
string,
arena: None,
stack: Vec::new(),
}
}
}
 
impl<'a> SExp<'a> {
/// Serialize a SExp.
fn encode<T: io::Write>(&self, writer: &mut T) -> Result<(), Error> {
match *self {
F64(f) => {
match f.classify() {
// We don't want to identify NaN, Infinity, etc. as floats.
FpCategory::Normal | FpCategory::Zero => {
write!(writer, "{}", f)?;
Ok(())
}
_ => Err(Error::NoReprForFloat),
}
}
List(l) => {
// Writing a list is very straightforward--write a left parenthesis, then
// recursively call encode on each member, and then write a right parenthesis. The
// only reason the logic is as long as it is is to make sure we don't write
// unnecessary spaces between parentheses in the zero or one element cases.
write!(writer, "(")?;
let mut iter = l.iter();
if let Some(sexp) = iter.next() {
sexp.encode(writer)?;
for sexp in iter {
write!(writer, " ")?;
sexp.encode(writer)?;
}
}
write!(writer, ")")?;
Ok(())
}
Str(s) => {
write!(writer, "\"{}\"", s)?;
Ok(())
}
}
}
 
/// Deserialize a SExp.
pub fn parse(ctx: &'a mut ParseContext<'a>) -> Result<SExp<'a>, Error> {
ctx.arena = Some(Arena::new());
// Hopefully this unreachable! gets optimized out, because it should literally be
// unreachable.
let arena = match ctx.arena {
Some(ref mut arena) => arena,
None => unreachable!(),
};
let ParseContext {
string,
ref mut stack,
..
} = *ctx;
// Make sure the stack is cleared--we keep it in the context to avoid unnecessary
// reallocation between parses (if you need to remember old parse information for a new
// list, you can pass in a new context).
stack.clear();
let mut tokens = Tokens::new(string);
// First, we check the very first token to see if we're parsing a full list. It
// simplifies parsing a lot in the subsequent code if we can assume that.
let next = tokens.next_token();
let mut list = match next? {
ListStart => Vec::new(),
Literal(s) => {
return if tokens.next_token()? == Eof {
Ok(s)
} else {
Err(ExpectedEOF)
};
}
ListEnd => return Err(IncorrectCloseDelimiter),
Eof => return Err(UnexpectedEOF),
};
 
// We know we're in a list if we got this far.
loop {
let tok = tokens.next_token();
match tok? {
ListStart => {
// We push the previous context onto our stack when we start reading a new list.
stack.push(list);
list = Vec::new()
}
Literal(s) => list.push(s), // Plain old literal, push it onto the current list
ListEnd => {
match stack.pop() {
// Pop the old context off the stack on list end.
Some(mut l) => {
// We allocate a slot for the current list in our parse context (needed
// for safety) before pushing it onto its parent list.
l.push(List(&*arena.alloc(list)));
// Now reset the current list to the parent list
list = l;
}
// There was nothing on the stack, so we're at the end of the topmost list.
// The check to make sure there are no more tokens is required for
// correctness.
None => {
return match tokens.next_token()? {
Eof => Ok(List(&*arena.alloc(list))),
_ => Err(ExpectedEOF),
};
}
}
}
// We encountered an EOF before the list ended--that's an error.
Eof => return Err(UnexpectedEOF),
}
}
}
 
/// Convenience method for the common case where you just want to encode a SExp as a String.
pub fn buffer_encode(&self) -> Result<String, Error> {
let mut m = Vec::new();
self.encode(&mut m)?;
        // Because encode() only ever writes valid UTF-8, we can safely skip the validation we
        // normally have to do when converting from Vec<u8> to String. If we didn't know that
        // the buffer was already UTF-8, we'd want to go through String::from_utf8 here and
        // handle the possible error.
unsafe { Ok(String::from_utf8_unchecked(m)) }
}
}
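
// A checked counterpart to buffer_encode(), shown only as a sketch (the name
// and function are not part of the original entry): it pays for the UTF-8
// validation that the unsafe version above deliberately skips.
#[allow(dead_code)]
fn buffer_encode_checked(sexp: &SExp) -> Result<String, Error> {
    let mut buf = Vec::new();
    sexp.encode(&mut buf)?;
    Ok(String::from_utf8(buf).expect("encode() only writes valid UTF-8"))
}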
 
pub const SEXP_STRUCT: SExp<'static> = List(&[
List(&[Str("data"), Str("quoted data"), F64(123.), F64(4.5)]),
List(&[
Str("data"),
List(&[Str("!@#"), List(&[F64(4.5)]), Str("(more"), Str("data)")]),
]),
]);
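
// A minimal round-trip sketch (not part of the original entry): parse the
// sample text below and re-encode it. The expected output mirrors what
// encode() prints for SEXP_STRUCT -- every atom comes back as a quoted string.
#[cfg(test)]
mod round_trip {
    use super::*;

    #[test]
    fn parse_then_encode() {
        let ctx = &mut ParseContext::new(SEXP_STRING_IN);
        let parsed = SExp::parse(ctx).expect("the sample input should parse");
        let text = parsed.buffer_encode().expect("no NaN or infinite floats here");
        assert_eq!(
            text,
            r#"(("data" "quoted data" 123 4.5) ("data" ("!@#" (4.5) "(more" "data)")))"#
        );
    }
}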
 
pub const SEXP_STRING_IN: &str = r#"((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))"#;
 
 
</syntaxhighlight>

and main.rs:

<syntaxhighlight lang="rust">use s_expressions::{ParseContext, SExp, SEXP_STRING_IN, SEXP_STRUCT};
 
fn main() {
println!("{:?}", SEXP_STRUCT.buffer_encode());
let ctx = &mut ParseContext::new(SEXP_STRING_IN);
println!("{:?}", SExp::parse(ctx));
}
</syntaxhighlight>{{out}}
<pre>
Ok("((\"data\" \"quoted data\" 123 4.5) (\"data\" (\"!@#\" (4.5) \"(more\" \"data)\")))")
Ok(List([List([Str("data"), Str("quoted data"), F64(123.0), F64(4.5)]), List([Str("data"), List([Str("!@#"), List([F64(4.5)]), Str("(more"), Str("data)")])])]))
</pre>
 
 
=={{header|Scheme}}==
Line 3,829 ⟶ 6,478:
Using Guile Scheme 2.0.11
 
<syntaxhighlight lang="scheme">(define (sexpr-read port)
(define (help port)
(let ((char (read-char port)))
Line 3,872 ⟶ 6,521:
 
(format-sexpr (sexpr-read
(open-input-string "((data \"quoted data\" 123 4.5) (data (!@# (4.5) \"(more\" \"data)\")))")))</syntaxhighlight>
 
Output:
Line 3,897 ⟶ 6,546:
=={{header|Sidef}}==
{{trans|Perl}}
<syntaxhighlight lang="ruby">func sexpr(txt) {
    txt.trim!

    if (txt.match(/^\((.*)\)$/s)) {|m|
        txt = m[0]
    }
    else {
        die "Invalid: <<#{txt}>>"
    }

    var w
    var ret = []
    while (!txt.is_empty) {
        given (txt.first) {
            when('(') {
                (w, txt) = txt.extract_bracketed('()')
                w = sexpr(w);
            }
            when ('"') {
                (w, txt) = txt.extract_delimited('"')
                w.sub!(/^"(.*)"/, {|s1| s1 });
            }
            else {
                txt.sub!(/^(\S+)/, {|s1| w = s1; '' });
            }
        }
        ret << w;
        txt.trim_beg!;
    }
    return ret;
}

func sexpr2txt(String e) {
    e ~~ /[\s"\(\)]/ ? do { e.gsub!('"', '\\"'); %Q("#{e}") } : e;
}

func sexpr2txt(expr) {
    '(' + expr.map {|e| sexpr2txt(e) }.join(' ') + ')';
}
 
Line 3,940 ⟶ 6,592:
(data (!@# (4.5) "(more" "data)")))
 
});
 
say s; # dump structure
say sexpr2txt(s); # convert back</syntaxhighlight>
{{out}}
<pre>
Line 3,952 ⟶ 6,604:
=={{header|Tcl}}==
Note that because Tcl doesn't expose a type system (well, not in a conventional sense), the parts of the parsed-out data structure are tagged lists; the first element is one of “<tt>string</tt>”, “<tt>int</tt>”, “<tt>real</tt>” and “<tt>atom</tt>” to indicate a leaf token, or “<tt>list</tt>” to indicate a sublist. A “native” data structure could also be generated, but that would turn some leaf tokens into lists that do not exist in the original.
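Purely as an illustration of that tagged shape (a sketch in Rust rather than Tcl, with invented names, and no part of the entry below), the representation amounts to a small variant type:
<syntaxhighlight lang="rust">// Hypothetical sketch of the tagged representation described above.
#[derive(Debug)]
enum Tagged {
    Str(String),       // the "string" tag
    Int(i64),          // "int"
    Real(f64),         // "real"
    Atom(String),      // "atom"
    List(Vec<Tagged>), // "list" marks a sublist
}

fn main() {
    // (data "quoted data" 123 4.5) in that representation:
    let branch = Tagged::List(vec![
        Tagged::Atom("data".into()),
        Tagged::Str("quoted data".into()),
        Tagged::Int(123),
        Tagged::Real(4.5),
    ]);
    println!("{:?}", branch);
}</syntaxhighlight>
The Tcl code itself: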
<syntaxhighlight lang="tcl">package require Tcl 8.5
 
proc fromSexp {str} {
Line 4,000 ⟶ 6,652:
return [lindex $content 0]
}
}</syntaxhighlight>
Demonstrating with the sample data:
<syntaxhighlight lang="tcl">set sample {((data "quoted data" 123 4.5)
(data (!@# (4.5) "(more" "data)")))}
set parsed [fromSexp $sample]
puts "sample: $sample"
puts "parsed: $parsed"
puts "regen: [toSexp $parsed]"</syntaxhighlight>
Output:
<pre>
Line 4,053 ⟶ 6,705:
Code:
 
<syntaxhighlight lang="txr">@(define float (f))@\
@(local (tok))@\
@(cases)@\
Line 4,096 ⟶ 6,748:
expr: @(format nil "~s" e)
junk: @junk
@(end)</syntaxhighlight>
 
Run:
Line 4,131 ⟶ 6,783:
Explanation of most confusing line:
 
<syntaxhighlight lang="txr"> @/\s*\(\s*/@(coll :vars (e))@(expr e)@/\s*/@(last))@(end)</syntaxhighlight>
 
First, we match an open parenthesis that can be embedded in whitespace. Then we have a <code>@(coll)</code> construct which terminates with <code>@(end)</code>. This is a repetition construct for collecting zero or more items. The <code>:vars (e)</code> argument makes the collect strict: each repetition must bind the variable <code>e</code>. More importantly, in this case, if nothing is
collected, then <code>e</code> gets bound to <code>nil</code> (the empty list). The collect construct does not look at context beyond itself. To terminate the collect at the closing parenthesis we use <code>@(last))</code>. The second closing parenthesis here is literal text to be matched, not TXR syntax. This special clause establishes the terminating context without which the collect will munge all input. When the last clause matches, whatever it matches is consumed and the collect ends. (There is a related <code>@(until)</code> clause which terminates the collect, but leaves its own match unconsumed.)
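
As a rough cross-language analogy (not TXR, and not part of this entry), the difference between a terminator that is matched and consumed, as with <code>@(last)</code>, and one that is only recognised and left in the input, as with <code>@(until)</code>, can be sketched like this:
<syntaxhighlight lang="rust">// Hypothetical sketch only; names are invented and unrelated to TXR's internals.
use std::iter::Peekable;

// "last"-style: the terminator is matched and consumed.
fn collect_consuming(tokens: &mut impl Iterator<Item = char>) -> Vec<char> {
    let mut out = Vec::new();
    for t in tokens {
        if t == ')' {
            break; // ')' has been taken out of the input
        }
        out.push(t);
    }
    out
}

// "until"-style: the terminator is recognised but left for the caller.
fn collect_until<I: Iterator<Item = char>>(tokens: &mut Peekable<I>) -> Vec<char> {
    let mut out = Vec::new();
    while let Some(&t) = tokens.peek() {
        if t == ')' {
            break; // ')' is still the next token
        }
        tokens.next();
        out.push(t);
    }
    out
}

fn main() {
    let mut a = "xy)z".chars();
    let taken = collect_consuming(&mut a);
    println!("{:?}, rest {:?}", taken, a.collect::<String>()); // ['x', 'y'], rest "z"

    let mut b = "xy)z".chars().peekable();
    let taken = collect_until(&mut b);
    println!("{:?}, rest {:?}", taken, b.collect::<String>()); // ['x', 'y'], rest ")z"
}</syntaxhighlight>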
 
=={{header|Wren}}==
{{trans|Kotlin}}
{{libheader|Wren-pattern}}
{{libheader|Wren-fmt}}
<syntaxhighlight lang="wren">import "./pattern" for Pattern
import "./fmt" for Fmt
 
var INDENT = 2
 
var parseSExpr = Fn.new { |str|
var ipat = " \t\n\f\v\r()\""
var p = Pattern.new("""+0/s["+0^""|(|)|"|+1/I]""", Pattern.within, ipat)
var t = p.findAll(str).map { |m| m.text }.toList
if (t.count == 0) return null
var o = false
var c = 0
for (i in t.count-1..0) {
var ti = t[i].trim()
var nd = Num.fromString(ti)
if (ti == "\"") return null
if (ti == "(") {
t[i] = "["
c = c + 1
} else if (ti == ")") {
t[i] = "]"
c = c - 1
} else if (nd) {
var ni = Num.fromString(ti)
t[i] = ni ? ni.toString : nd.toString
} else if (ti.startsWith("\"")) { // escape embedded double quotes
var temp = ti[1...-1]
t[i] = "\"" + temp.replace("\"", "\\\"") + "\""
}
if (i > 0 && t[i] != "]" && t[i - 1].trim() != "(") t.insert(i, ", ")
if (c == 0) {
if (!o) o = true else return null
}
}
return (c != 0) ? null : t
}
 
var toSExpr = Fn.new { |tokens|
for (i in 0...tokens.count) {
if (tokens[i] == "[") {
tokens[i] = "("
} else if (tokens[i] == "]") {
tokens[i] = ")"
} else if (tokens[i] == ", ") {
tokens[i] = " "
} else if (tokens[i].startsWith("\"")) { // unescape embedded quotes
var temp = tokens[i][1...-1]
tokens[i] = "\"" + temp.replace("\\\"", "\"") + "\""
}
}
return tokens.join()
}
 
var prettyPrint = Fn.new { |tokens|
var level = 0
for (t in tokens) {
var n
if (t == ", " || t == " ") {
continue
} else if (t == "[" || t == "(") {
n = level * INDENT + 1
level = level + 1
} else if (t == "]" || t == ")") {
level = level - 1
n = level * INDENT + 1
} else {
n = level * INDENT + t.count
}
Fmt.print("$*s", n, t)
}
}
 
var str = """((data "quoted data" 123 4.5)""" + "\n" +
""" (data (!@# (4.5) "(more" "data)")))"""
var tokens = parseSExpr.call(str)
if (!tokens) {
System.print("Invalid s-expr!")
} else {
System.print("Native data structure:")
System.print(tokens.join())
System.print("\nNative data structure (pretty print):")
prettyPrint.call(tokens)
 
System.print("\nRecovered S-Expression:")
System.print(toSExpr.call(tokens))
System.print("\nRecovered S-Expression (pretty print):")
prettyPrint.call(tokens)
}</syntaxhighlight>
 
{{out}}
<pre>
Native data structure:
[[data, "quoted data", 123, 4.5], [data, [!@#, [4.5], "(more", "data)"]]]
 
Native data structure (pretty print):
[
[
data
"quoted data"
123
4.5
]
[
data
[
!@#
[
4.5
]
"(more"
"data)"
]
]
]
 
Recovered S-Expression:
((data "quoted data" 123 4.5) (data (!@# (4.5) "(more" "data)")))
 
Recovered S-Expression (pretty print):
(
(
data
"quoted data"
123
4.5
)
(
data
(
!@#
(
4.5
)
"(more"
"data)"
)
)
)
</pre>
 
{{omit from|Brlcad}}