Geometric algebra

Geometric algebra is a draft programming task. It is not yet considered ready to be promoted as a complete task, for reasons that should be found in its talk page.

Geometric algebra is another name for Clifford algebra. It is basically an algebra containing a vector space V and obeying the following axioms:

  • (ab)c = a(bc)
  • a(b + c) = ab + ac
  • (a + b)c = ac + bc
  • x² is a real number for any vector x in V

The product operation in this algebra is called the geometric product. The elements are called multivectors; multivectors lying in V are just called vectors. If the dimension of V is n, then the dimension of the algebra is 2^n.

The purpose of this task is to implement such an algebra for an arbitrarily large dimension n, but just take n = 5 if that's easier to implement in your language (for instance if support for large integers is not easy). For this task we will also focus on a simple version of this algebra where the quadratic form is positive definite, i.e. ei² = 1 for the basis vectors, but you can also implement a general version for extra credit.

To demonstrate your solution, you will use it to implement quaternions:

  • define the inner product of two vectors: a·b = (ab + ba)/2.

Since ab + ba = (a + b)² - a² - b², the fourth axiom tells us that the inner product is a real number, and since it is also clearly symmetric and bilinear, it defines a scalar product on V.

  • define a function e(n) which allows for the creation of an orthonormal basis of V for that scalar product:

e(i)·e(j) = 1 if i = j, 0 otherwise,

  • verify that the square of an arbitrary vector is real. Use this example: (e(0) - e(1) + 2e(2) + 3e(3) - 2e(4))² = 19.
  • verify the orthonormality e(i)·e(j) = δij for i, j in {0, 1, 2, 3}.
  • define the following three constants: i = e(0)e(1), j = e(1)e(2), k = e(0)e(2).
  • verify that i² = j² = k² = ijk = -1.
  • to verify that your implementation does not just reuse an ad-hoc implementation of quaternions, also define the following three constants: I = e(1)e(2), J = e(2)e(3), K = e(1)e(3).
  • verify that I² = J² = K² = IJK = -1.
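
These identities are forced by the axioms: orthonormality gives e(0)e(1) + e(1)e(0) = 0, so i² = e(0)e(1)e(0)e(1) = -e(0)e(0)e(1)e(1) = -1, and the same cancellation gives j² = k² = -1 as well as ijk = (e(0)e(1))(e(1)e(2))(e(0)e(2)) = (e(0)e(2))(e(0)e(2)) = -1; the I, J, K identities follow in exactly the same way.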


EchoLisp

We build a CGA based upon a generating quadratic form in R^n. The implementation is general enough that ei*ei = +/-1, and is not restricted to 1. The 5-dimension limit comes from the use of 32-bit numbers to generate all bit permutations 101..., but this could be improved. The multivector multiplication is based on a 2^n * 2^n multiplication table, generated once and for all.
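
To make the table-driven design easier to follow for readers who don't read EchoLisp, here is a minimal illustrative sketch in Python (an outline only, with made-up names; it is not part of the EchoLisp entry below): precompute the 2^n by 2^n tables of result blades and signs from a signature ei*ei = +/-1, after which every multivector product is reduced to table lookups.

<lang python># Illustrative sketch only; the EchoLisp entry below is the actual implementation.
# Blades are bit masks: bit i set means the basis vector e(i) is present.

def make_tables(signature):
    """signature[i] is e(i)*e(i), i.e. +1 or -1."""
    n = len(signature)
    dim = 1 << n
    blade = [[a ^ b for b in range(dim)] for a in range(dim)]   # blade of the product
    sign = [[1] * dim for _ in range(dim)]                      # sign of the product
    for a in range(dim):
        for b in range(dim):
            swaps = 0
            for j in range(n):
                if b & (1 << j):
                    swaps += bin(a >> (j + 1)).count("1")   # reordering swaps
                    if a & (1 << j) and signature[j] < 0:   # e(j)*e(j) = -1
                        swaps += 1
            if swaps & 1:
                sign[a][b] = -1
    return blade, sign

def multi_mult(x, y, blade, sign):
    """x and y are length-2^n coefficient lists; returns their geometric product."""
    out = [0] * len(x)
    for a, xa in enumerate(x):
        if xa:
            for b, yb in enumerate(y):
                if yb:
                    out[blade[a][b]] += sign[a][b] * xa * yb
    return out

# Example: the algebra A(2 0); the blade e1e2 sits at index 3.
blade, sign = make_tables([1, 1])
e1 = [0, 1, 0, 0]
e2 = [0, 0, 1, 0]
print(multi_mult(e1, e2, blade, sign))   # [0, 0, 0, 1]   i.e. +1*e1e2
print(multi_mult(e2, e1, blade, sign))   # [0, 0, 0, -1]  i.e. -1*e1e2</lang>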

<lang scheme>
(define e-bits (build-vector 32 (lambda(i) (arithmetic-shift 1 i)))) ;; 1,2,4,..
(define (e-index i) ;; index of ei in native vector
    (if (zero? i) 0 (arithmetic-shift 1 (1- i))))

(define DIM 0) ;; 2^N
(define N 0)
(define MultTable null) ;; multiplication table eijk * el.. = exyz..
(define SignTable null) ;; sign of products
(define Signature null) ;; input quadratic form

;; return "eijk"
(define (e-print E sign)
    (string-append
        (cond ((= sign 1) " ") ((= sign -1) "- ") (else ""))
        (if (zero? E) "1"
            (for/string ((i N))
                #:continue (zero? (bitwise-and E (vector-ref e-bits i)))
                (string-append "e" (1+ i))))))

;; returns a string a*e1 + b*e2 + .. z*eijk + ..
(define (multi-print V (x))
    (for/string ((i DIM))
        (set! x (vector-ref V i))
        #:continue (zero? x)
        (string-append " " (if (> x 0) "+" "") x "*" (e-print i 0))))

;; generates the multiplication table e_i e_k .. * e_j e_l .. ==> e_u e_v ..
;; E_I and E_J are sets of indices >= 1, in increasing order, represented by a 32-bit number
(define (make-mult-table (verbose #f) (result) (swaps) (ej))
    (when verbose (writeln 'N= N 'DIM= DIM 'Q= Signature))
    (for* ((E_I (in-range 1 DIM)) (E_J (in-range 1 DIM)))
        (set! result E_I)
        (set! swaps 0)
        (for ((j DIM)) ;; each bit# in E_J
            (set! ej (vector-ref e-bits j))
            #:continue (zero? (bitwise-and ej E_J))
            (for ((s (in-range (1- N) j -1))) ;; count swaps
                (when (!zero? (bitwise-and E_I (vector-ref e-bits s)))
                    (set! swaps (1+ swaps))))
            (if (zero? (bitwise-and E_I ej)) ;; e_i * e_j
                (set! result (bitwise-ior result ej))
                (begin ;; else e_i * e_i
                    (set! result (bitwise-xor result ej))
                    (when (= -1 (vector-ref Signature ej)) (set! swaps (1+ swaps))))))) ;; j loop
        (when verbose
            (writeln (e-print E_I 0) '* (e-print E_J 0) '= (e-print result (if (even? swaps) 1 -1))))
        (matrix-set! MultTable E_I E_J result)
        (matrix-set! SignTable E_I E_J (if (even? swaps) 1 -1))))

;; multivector operations
;; addition is standard vector addition
;; multiplication a b -> c
(define (multi-mult a b)
    (define c (make-vector DIM 0))
    (for* ((i DIM) (j DIM))
        #:continue (zero? (vector-ref a i))
        #:continue (zero? (vector-ref b j))
        (vector-set! c (array-ref MultTable i j)
            (+ (* (array-ref SignTable i j) (vector-ref a i) (vector-ref b j))
               (vector-ref c (array-ref MultTable i j)))))
    c)

;; pretty print a • b or a • b • c
(define (• a b (c #f))
    (multi-print (if c (multi-mult a (multi-mult b c)) (multi-mult a b))))

;; (Eij i j) -> return multi-vector eiej , 0 <= i <= n
(define (Eij i j (coeff 1))
    (define Eij (make-vector DIM))
    (vector-set! Eij (array-ref MultTable (e-index i) (e-index j)) coeff)
    Eij)

;; Reference : https://en.wikipedia.org/wiki/Clifford_algebra#Real_numbers
;; (make-cga m p [verbose]) => Algebra A(m p)
;; Input : a quadratic form Q(x) = x1*x1 + .. + xm*xm - xm+1*xm+1 - .. - xm+p*xm+p
;;         n = m + p = dimension of vector space R^n
;; generates an algebra A(m p) of dimension DIM = 2^n
;; Ex : A(n 0) = use R^n dot product as quadratic form : ei*ei = 1
;; Ex : A(0 1) = Complex , e1*e1 = -1 ; A(0 2) => quaternions , ei*ei = -1
;; Implementation limitation : n <= 5
;; multivectors of A(m p) will be mapped on vectors V of dimension 2^n
;; V[0] is the scalar part of a multivector
;; Blade of vectors of R^n : V[2^(i-1)] = 1 , 0 elsewhere , i in [1 .. n]
(define (make-cga m p (verbose #f))
    (string-delimiter "")
    (set! N (+ m p))
    (set! DIM (expt 2 N))
    (set! MultTable (build-array DIM DIM (lambda(i j) (cond ((zero? i) j) ((zero? j) i) (else 0)))))
    (set! SignTable (make-array DIM DIM 1))
    (set! Signature (make-vector DIM 1)) ;; Q polynomial
    (for ((j (in-range m N))) (vector-set! Signature (vector-ref e-bits j) -1))
    (make-mult-table verbose)
    DIM)


</lang>

Output:

<lang scheme>

;; we use indices (1 ... n) in conformity with the Wikipedia reference

;; dimension 2
;; (1 + e1e2) • (e1 - 2e2) = -e1 - 3e2
(make-cga 2 0)
(define u #(1 0 0 1))
(define v #(0 1 -2 0))

(multi-print u) → +1*1 +1*e1e2
(multi-print v) → +1*e1 -2*e2
(• u v) → -1*e1 -3*e2

;; task
(make-cga 5 0)
(define X #(0 1 -1 0 2 0 0 0 -3 0 0 0 0 0 0 0 -2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ))
(multi-print X) → +1*e1 -1*e2 +2*e3 -3*e4 -2*e5
(• X X) → +19*1

;; with another polynomial
(make-cga 0 5)
Signature → #( 1 -1 -1 1 -1 1 1 1 -1 1 1 1 1 1 1 1 -1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1)
(• X X) → -19*1

(make-cga 4 0)
(define i (Eij 1 2))
(define j (Eij 2 3))
(define k (Eij 1 3))

(multi-print i) → +1*e1e2
(multi-print j) → +1*e2e3
(multi-print k) → +1*e1e3
(• i i) → -1*1
(• j j) → -1*1
(• k k) → -1*1
(• i j k) → -1*1

(define I (Eij 2 3)) 😖️ error: define : cannot redefine : I (used in Complex) ;; use II instead

(define II (Eij 2 3)) → +1*e2e3
(define J (Eij 3 4)) → +1*e3e4
(define K (Eij 2 4)) → +1*e2e4

(• II II) → -1*1
(• J J) → -1*1
(• K K) → -1*1
(• II J K) → -1*1
</lang>

Multiplication table for A(3 0)

N= 3 DIM= 8 Q= #( 1 1 1 1 1 1 1 1)
e1 * e1 = 1
e1 * e2 = e1e2
e1 * e1e2 = e2
e1 * e3 = e1e3
e1 * e1e3 = e3
e1 * e2e3 = e1e2e3
e1 * e1e2e3 = e2e3
e2 * e1 = - e1e2
e2 * e2 = 1
e2 * e1e2 = - e1
e2 * e3 = e2e3
e2 * e1e3 = - e1e2e3
e2 * e2e3 = e3
e2 * e1e2e3 = - e1e3
e1e2 * e1 = - e2
e1e2 * e2 = e1
e1e2 * e1e2 = - 1
e1e2 * e3 = e1e2e3
e1e2 * e1e3 = - e2e3
e1e2 * e2e3 = e1e3
e1e2 * e1e2e3 = - e3
e3 * e1 = - e1e3
e3 * e2 = - e2e3
e3 * e1e2 = e1e2e3
e3 * e3 = 1
e3 * e1e3 = - e1
e3 * e2e3 = - e2
e3 * e1e2e3 = e1e2
e1e3 * e1 = - e3
e1e3 * e2 = - e1e2e3
e1e3 * e1e2 = e2e3
e1e3 * e3 = e1
e1e3 * e1e3 = - 1
e1e3 * e2e3 = - e1e2
e1e3 * e1e2e3 = e2
e2e3 * e1 = e1e2e3
e2e3 * e2 = - e3
e2e3 * e1e2 = - e1e3
e2e3 * e3 = e2
e2e3 * e1e3 = e1e2
e2e3 * e2e3 = - 1
e2e3 * e1e2e3 = - e1
e1e2e3 * e1 = e2e3
e1e2e3 * e2 = - e1e3
e1e2e3 * e1e2 = - e3
e1e2e3 * e3 = e1e2
e1e2e3 * e1e3 = e2
e1e2e3 * e2e3 = - e1
e1e2e3 * e1e2e3 = - 1

J

Based on the quaternion implementation at Quaternion type, but implementing a Clifford algebra which specifically corresponds to the task requirements. If performance mattered, we might want to use sparse arrays here.

Implementation:

<lang J>dim=.32

T=.3 :0 dim

 s=: _1^~:/@,@(* 0,_1}.~:/\)/@#:@,"0/~x: i. y
 s (<@([ , #.@:(~:/)@#:@, , ])"0/~i.y) } (3#y)$0

)

domain=: 32&{. :.(#~ +./\.@:(0&~:))"1

ip=: +/ .*
gmul=: (ip T ip ])&.domain
gadd=: +/@,:&.domain
gdot=: (gmul gadd gmul~)&.domain % 2:

e=: {&((2^i.2^.dim){=i. dim)</lang>

See the alternate implementation for more detail on what we are doing here.

In this case, though, we are using a non-sparse representation: 32-element multivectors and thus five-element vectors. This approach is slow, but the code is a bit more concise because we do not have to deal with the details of the sparse implementation.

Use:

<lang J> NB. test arbitrary vector being real

  gmul~ gadd/ (e 0 1 2 3 4) * 1 _1 2 3 _2 

19

  NB. required orthogonality
  gdot&e&.>/~i.4

┌─┬─┬─┬─┐
│1│ │ │ │
├─┼─┼─┼─┤
│ │1│ │ │
├─┼─┼─┼─┤
│ │ │1│ │
├─┼─┼─┼─┤
│ │ │ │1│
└─┴─┴─┴─┘

  NB. i j k
  i=: 0 gmul&e 1
  j=: 1 gmul&e 2
  k=: 0 gmul&e 2
  
  i gmul i

_1

  j gmul j

_1

  k gmul k

_1

  i gmul j gmul k

_1

  NB. I J K
  I=: 1 gmul&e 2
  J=: 2 gmul&e 3
  K=: 1 gmul&e 3
  
  I gmul I

_1

  J gmul J

_1

  K gmul K

_1

  I gmul J gmul K

_1</lang>

J: alternate implementation

Sparse arrays give better performance for this task, and support relatively arbitrary dimensions, but can be a bit quirky because current implementations of J do not support some features for sparse arrays. We add vectors x and y using x + y. Also, the first element of one of these vectors represents the "real valued" or "scalar component" of the vector.

Implementation:

<lang J>NB. indices are signed machine integers
vzero=: 1 $.2^31+IF64*32
odim=. 2^.#vzero

ndx01=:1 :0

 NB. indexed update of numeric rank 1 sparse y
 NB. creating rank 2 sparse result 
 NB. using scalar values from x and scalar inds from m
 NB. where x, m are rank 0 or 1
 NB. (this works around a spurious error in sparse handling)
 n=. #x,.m
 x ((i.n),&.> m)} n#,:y

)

NB. specify that all axes are sparse, for better display
clean=: (2;i.@#@$) $. ]

gmul=:4 :0"1

 xj=. ,4$.x
 yj=. ,4$.y
 if. 0= xj *&# yj do. vzero return. end.
 b=. (-##:>./0,xj,yj)&{.@|:@:#:
 xb=. b xj
 yb=. b yj
 rj=. #.|:xb~:yb
 s=. _1^ ~:/ yb * 0,}: ~:/\ xb
 vzero (~.rj)}~ rj +//. s*(xj{x)*yj{y 

)

gdot=: (gmul + gmul~) % 2:

obasis=: 1 (2^i.odim)ndx01 vzero
e=: {&obasis</lang>

Explanation:

We work with sparse vectors of length 2147483647 on 32-bit machines and of length 9223372036854775807 on 64-bit machines. These are the largest representable vector lengths in current J implementations: J allows negative indices and uses an index value equal to the length of the list to mean "not found" when searching, so, for performance and correctness reasons, current J implementations use signed machine integers for indices, and these are the largest vector lengths we can use.

Except, for the purpose of this task, we must pretend that these are "multivectors" instead of vectors - task vectors have a length of log2 of the multivector length. So technically speaking, we can only represent vectors of 30 or fewer elements on 32-bit J and of 62 or fewer elements on 64-bit J. (Actually, we can represent 31-element vectors on 32-bit J and 63-element vectors on 64-bit J, but there are hypothetical operations involving the last element which perhaps would be hindered by the fact that the multivector length is not 2147483648 or 9223372036854775808 -- but fortunately, none of this is actually relevant.)

Since these multivectors are silly large, and almost all zeros, we use a sparse representation and only concern ourselves with non-zero values, and their indices.

For addition on multivectors we use J's + (and "multivectors" includes "task vectors").

For "geometric multiplication" (whose relationship to geometry seems obscure, at best) we define something analogous to an inner product. More specifically, we look at the base 2 representation of the argument indices to find the result index and to find whether we negate the product of those two values.

For each pair of values in the left and right argument we use the base 2 representation of the indices to determine what to do with the product of the corresponding two non-zero values:

  • The result index for that product is the bit-wise exclusive-or of the two indices (the corresponding bit in the result index is 1 when the corresponding bits in the argument indices are different, 0 otherwise).
  • We negate the product depending on the bits of the left and right argument index. For each 1 bit of the right argument index, we count the number of more significant bits in the left argument index. If this total is odd, we negate the product (but not if it's even). Note that we don't actually have to form a sum to do this - exclusive-or is an adequate replacement for sum if we only need to know whether it's odd or even.

Where we have more than one non-zero value pair contributing to the same product index we sum those values. (Of course, this would apply to value pairs whose result is zero, but since typically there's something approaching 85070591730234615847396907784232501249 of those, we don't actually compute all those zeros...)
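
For readers who don't read J, here is a hedged restatement of those rules as a short Python sketch (dictionaries stand in for J's sparse arrays; the names are made up for the illustration and this is not the J code): the result index is the bit-wise XOR of the argument indices, the sign comes from the parity of the swap count, and products landing on the same index are summed.

<lang python># Python restatement of the multiplication rules described above (not the J code).
# A multivector is a dict mapping a blade's index (a bit mask) to its coefficient,
# so only the non-zero entries are ever stored or visited.

def sign(i, j):
    # For each set bit of the right index j, count the more significant set bits
    # of the left index i; an odd total negates the product.
    i >>= 1
    total = 0
    while i:
        total += bin(i & j).count("1")
        i >>= 1
    return -1 if total & 1 else 1

def gmul(x, y):
    out = {}
    for i, xi in x.items():
        for j, yj in y.items():
            k = i ^ j                                   # result index: bit-wise XOR
            out[k] = out.get(k, 0) + sign(i, j) * xi * yj
    return {k: v for k, v in out.items() if v}          # keep only non-zero entries

def gadd(x, y):
    out = dict(x)
    for j, yj in y.items():
        out[j] = out.get(j, 0) + yj
    return out

def e(m):
    # basis vector: a 1 at index 2^m; index 0 is the "real" or "scalar" part
    return {1 << m: 1}

v = {1: 1, 2: -1, 4: 2, 8: 3, 16: -2}    # e(0) - e(1) + 2 e(2) + 3 e(3) - 2 e(4)
print(gmul(v, v))                        # {0: 19}</lang>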

[It's unclear whether there are other ways of satisfying all of the task requirements. No other approach seems simpler, at least...]

Task examples:

<lang J> NB. test arbitrary vector being real (and having the specified result)

  clean gmul~ +/ (e 0 1 2 3 4) gmul 1 _1 2 3 _2 (0 ndx01) vzero 

0 │ 19

  NB. required orthogonality
  clean gdot&e&>/~i.4

0 0 0 │ 1
1 1 0 │ 1
2 2 0 │ 1
3 3 0 │ 1

  NB. i j k
  i=: 0 gmul&e 1
  j=: 1 gmul&e 2
  k=: 0 gmul&e 2
   
  i gmul i

0 │ _1

  j gmul j

0 │ _1

  k gmul k

0 │ _1

  i gmul j gmul k

0 │ _1

  NB. I J K
  I=: 1 gmul&e 2
  J=: 2 gmul&e 3
  K=: 1 gmul&e 3
   
  I gmul I

0 │ _1

  J gmul J

0 │ _1

  K gmul K

0 │ _1

  I gmul J gmul K

0 │ _1</lang>

Note that sparse arrays display as indices | value for elements which are not the default element (0).

So, for example, in the part where we check for orthogonality, we are forming a 4 by 4 by 9223372036854775807 array. The first two dimensions correspond to arguments to e and the final dimension is the multivector dimension. A 0 in the multivector dimension means that that's the "real" or "scalar" part of the multivector. And, since the pairs of e arguments whose "dot" product is 1 are exactly those with identical indices, we know we have an identity matrix.

JavaScript

<lang javascript>var CGA = function () {

   // parts of this comes from https://github.com/weshoke/versor.js/
   function e(n) {

        var result = [];
        result[1 << n] = 1;
        return result;

   }
   function cdot(a, b) { return CGA.mul([0.5], CGA.add(CGA.mul(a, b), CGA.mul(b, a))) }
   function neg(x) { return multiply([-1], x) }
   function bitCount(i) {

        // Note that unsigned shifting (>>>) is not required.
        i = i - ((i >> 1) & 0x55555555);
        i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
        i = (i + (i >> 4)) & 0x0F0F0F0F;
        i = i + (i >> 8);
        i = i + (i >> 16);
        return i & 0x0000003F;

   }
   function reorderingSign(a, b) {

        a >>= 1;
        var sum = 0;
        while (a != 0) {
            sum += bitCount(a & b);
            a >>= 1;
        }
        return (sum & 1) == 0 ? 1 : -1;

   }
   function add(a, b) {

        var result = a.slice(0);
        for (var i in b) {
            if (result[i]) { result[i] += b[i]; }
            else { result[i] = b[i]; }
        }
        return result;

   }
   function multiply(a, b)
   {

        var result = [];
        for (var i in a) {
            if (a[i]) {
                for (var j in b) {
                    if (b[j]) {
                        var s = reorderingSign(i, j) * a[i] * b[j];
                        // if (i == 1 && j == 1) { s *= -1 } // e0*e0 == -1
                        var k = i ^ j;
                        if (result[k]) { result[k] += s; } else { result[k] = s; }
                    }
                }
            }
        }
        return result;

   }
   return {

e  : e, cdot : cdot, neg : neg, add : add, mul : multiply

   };

}();</lang>

And then, from the console:

<lang javascript>var e = CGA.e; function cdot(a, b) { return CGA.mul([0.5], CGA.add(CGA.mul(a, b), CGA.mul(b, a))) }

for (var i = 0; i < 4; i++) {

   for (var j = 0; j < 4; j++) {
       if (i < j) {
           if (cdot(e(i), e(j))[0]) { console.log("unexpected non-nul scalar product"); }
       } else if (i === j) {
           if (!cdot(e(i), e(j))[0]) { console.log("unexpected nul scalar product"); }
       }
   }

}

var v = e(0);
v = CGA.add(v, CGA.mul([-1], e(1)));
v = CGA.add(v, CGA.mul([2], e(2)));
v = CGA.add(v, CGA.mul([3], e(3)));
v = CGA.add(v, CGA.mul([-2], e(4)));

console.log(CGA.mul(v, v)); // [19, 3: 0, 5: 0, 6: 0, 9: 0, 10: 0, 12: 0, 17: 0, 18: 0, 20: 0, 24: 0]

var i = CGA.mul(e(0), e(1)); var j = CGA.mul(e(1), e(2)); var k = CGA.mul(e(0), e(2));

console.log(CGA.mul(i, i)); // [-1]
console.log(CGA.mul(j, j)); // [-1]
console.log(CGA.mul(k, k)); // [-1]
console.log(CGA.mul(CGA.mul(i, j), k)); // [-1]

var I = CGA.mul(e(1), e(2)); var J = CGA.mul(e(2), e(3)); var K = CGA.mul(e(1), e(3));

console.log(CGA.mul(I, I)); // [-1]
console.log(CGA.mul(J, J)); // [-1]
console.log(CGA.mul(K, K)); // [-1]
console.log(CGA.mul(CGA.mul(I, J), K)); // [-1]</lang>

Perl 6

<lang perl6>unit class MultiVector;
has Real %.blades{UInt};
method clean { for %!blades { %!blades{.key} :delete unless .value; } }
method narrow {

   for %!blades { return self if .key > 0 && .value !== 0; }
   return %!blades{0} // 0;

}

sub e(UInt $n?) returns MultiVector is export {

   $n.defined ?? MultiVector.new(:blades(my Real %{UInt} = (1 +< $n) => 1)) !! MultiVector.new

}

my sub order(UInt:D $i is copy, UInt:D $j) is cached {

   my $n = 0;
   repeat {

$i +>= 1; $n += [+] ($i +& $j).base(2).comb;

   } until $i == 0;
   return $n +& 1 ?? -1 !! 1;

}

multi infix:<+>(MultiVector $A, MultiVector $B) returns MultiVector is export {

   my Real %blades{UInt} = $A.blades.clone;
   for $B.blades {

%blades{.key} += .value; %blades{.key} :delete unless %blades{.key};

   }
   return MultiVector.new: :%blades;

}
multi infix:<+>(Real $s, MultiVector $A) returns MultiVector is export {

   my Real %blades{UInt} = $A.blades.clone;
   %blades{0} += $s;
   %blades{0} :delete unless %blades{0};
   return MultiVector.new: :%blades;

}
multi infix:<+>(MultiVector $A, Real $s) returns MultiVector is export { $s + $A }
multi infix:<*>(MultiVector $A, MultiVector $B) returns MultiVector is export {

   my Real %blades{UInt};
   for $A.blades -> $a {

        for $B.blades -> $b {
            my $c = $a.key +^ $b.key;
            %blades{$c} += $a.value * $b.value * order($a.key, $b.key);
            %blades{$c} :delete unless %blades{$c};
        }

   }
   return MultiVector.new: :%blades;

}
multi infix:<**>(MultiVector $ , 0) returns MultiVector is export { MultiVector.new }
multi infix:<**>(MultiVector $A, 1) returns MultiVector is export { $A }
multi infix:<**>(MultiVector $A, 2) returns MultiVector is export { $A * $A }
multi infix:<**>(MultiVector $A, UInt $n where $n %% 2) returns MultiVector is export { ($A ** ($n div 2)) ** 2 }
multi infix:<**>(MultiVector $A, UInt $n) returns MultiVector is export { $A * ($A ** ($n div 2)) ** 2 }

multi infix:<*>(MultiVector $, 0) returns MultiVector is export { MultiVector.new }
multi infix:<*>(MultiVector $A, 1) returns MultiVector is export { $A }
multi infix:<*>(MultiVector $A, Real $s) returns MultiVector is export {

   return MultiVector.new: :blades(my Real %{UInt} = map { .key => $s * .value }, $A.blades);

}
multi infix:<*>(Real $s, MultiVector $A) returns MultiVector is export { $A * $s }
multi infix:</>(MultiVector $A, Real $s) returns MultiVector is export { $A * (1/$s) }
multi prefix:<->(MultiVector $A) returns MultiVector is export { return -1 * $A }
multi infix:<->(MultiVector $A, MultiVector $B) returns MultiVector is export { $A + -$B }
multi infix:<->(MultiVector $A, Real $s) returns MultiVector is export { $A + -$s }
multi infix:<->(Real $s, MultiVector $A) returns MultiVector is export { $s + -$A }

multi infix:<==>(MultiVector $A, MultiVector $B) returns Bool is export { $A - $B == 0 }
multi infix:<==>(Real $x, MultiVector $A) returns Bool is export { $A == $x }
multi infix:<==>(MultiVector $A, Real $x) returns Bool is export {

   my $narrowed = $A.narrow;
   $narrowed ~~ Real and $narrowed == $x;

}</lang>

And here is the code implementing and verifying quaternions:

<lang perl6>use MultiVector; use Test;

plan 13;

sub infix:<cdot>($a, $b) { ($a*$b + $b*$a) / 2 }

for ^4 X ^4 -> ($i, $j) {

   next if $i > $j;
   ok e($i) cdot e($j) == +($i == $j);

}

my $v = e(0) - e(1) + 2*e(2) + 3*e(3) - 2*e(4);

ok $v**2 == 19, 'v² = 19';

my constant i = e(0)*e(1); my constant j = e(1)*e(2); my constant k = e(0)*e(2);

ok i**2 == j**2 == k**2 == i*j*k == -1, "i² = j² = k² = ijk = -1";

my constant I = e(1)*e(2); my constant J = e(2)*e(3); my constant K = e(1)*e(3);

ok I**2 == J**2 == K**2 == I*J*K == -1, "I² = J² = K² = IJK = -1";</lang>

Output:
1..13
ok 1 - 
ok 2 - 
ok 3 - 
ok 4 - 
ok 5 - 
ok 6 - 
ok 7 - 
ok 8 - 
ok 9 - 
ok 10 - 
ok 11 - v² = 19
ok 12 - i² = j² = k² = ijk = -1
ok 13 - I² = J² = K² = IJK = -1