Perceptron: Difference between revisions

From Rosetta Code
(added Scheme example)

Revision as of 17:21, 25 February 2017

Perceptron is a draft programming task. It is not yet considered ready to be promoted as a complete task, for reasons that should be found in its talk page.

A perceptron is an algorithm used in machine learning. It is the simplest of all neural networks, consisting of only one neuron, and is typically used for pattern recognition.

A perceptron attempts to separate its input into a positive and a negative class with the aid of a linear function. Each input is multiplied by a weight (the weights are random at first) and the products are summed; a decision is then made based on the sign of the sum.
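The decision step just described can be sketched in a few lines (Python, purely illustrative and not one of the entries below; the trailing 1 is the customary constant bias input):

```python
def predict(weights, inputs):
    """Multiply each input (plus a constant bias input of 1) by its weight,
    sum the products, and decide by the sign of the sum."""
    total = sum(w * x for w, x in zip(weights, list(inputs) + [1]))
    return 1 if total > 0 else -1
```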

In order for the perceptron to make the right decision, it needs to be trained with input for which the correct outcome is known, so that the weights can gradually be adjusted until they start producing the desired results.
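The training loop can likewise be sketched (Python again, purely illustrative; the learning rate and epoch count here are arbitrary choices, not taken from any entry below):

```python
import random

def train_perceptron(data, num_inputs, rate=0.01, epochs=20):
    """data: a list of (inputs, target) pairs, with target in {1, -1}."""
    # start from random weights in [-1, 1]; one extra weight for the bias input
    w = [random.uniform(-1, 1) for _ in range(num_inputs + 1)]
    for _ in range(epochs):
        for inputs, target in data:
            x = list(inputs) + [1]  # append the constant bias input
            guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
            error = target - guess  # 0 when correct, otherwise +2 or -2
            # nudge each weight in proportion to its input and the error
            w = [wi + rate * error * xi for wi, xi in zip(w, x)]
    return w
```

Correct guesses leave the weights untouched; each wrong guess shifts the separating line a little toward the misclassified point.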


Task

The website The Nature of Code demonstrates a perceptron by making it perform a very simple task: determine whether a randomly chosen point (x, y) is above or below a line:

 y = mx + b

Implement this perceptron and display an image of the result.



Java

Works with: Java version 8

<lang java>import java.awt.*;
import java.awt.event.ActionEvent;
import java.util.*;
import javax.swing.*;
import javax.swing.Timer;

public class Perceptron extends JPanel {

   class Trainer {
       double[] inputs;
       int answer;
       Trainer(double x, double y, int a) {
           inputs = new double[]{x, y, 1};
           answer = a;
       }
   }
   Trainer[] training = new Trainer[2000];
   double[] weights;
   double c = 0.00001;
   int count;
   public Perceptron(int n) {
       Random r = new Random();
       Dimension dim = new Dimension(640, 360);
       setPreferredSize(dim);
       setBackground(Color.white);
       weights = new double[n];
       for (int i = 0; i < weights.length; i++) {
           weights[i] = r.nextDouble() * 2 - 1;
       }
       for (int i = 0; i < training.length; i++) {
           double x = r.nextDouble() * dim.width;
           double y = r.nextDouble() * dim.height;
           int answer = y < f(x) ? -1 : 1;
           training[i] = new Trainer(x, y, answer);
       }
       new Timer(10, (ActionEvent e) -> {
           repaint();
       }).start();
   }
   private double f(double x) {
       return x * 0.7 + 40;
   }
   int feedForward(double[] inputs) {
       assert inputs.length == weights.length : "weights and input length mismatch";
       double sum = 0;
       for (int i = 0; i < weights.length; i++) {
           sum += inputs[i] * weights[i];
       }
       return activate(sum);
   }
   int activate(double s) {
       return s > 0 ? 1 : -1;
   }
   void train(double[] inputs, int desired) {
       int guess = feedForward(inputs);
       double error = desired - guess;
       for (int i = 0; i < weights.length; i++) {
           weights[i] += c * error * inputs[i];
       }
   }
   @Override
   public void paintComponent(Graphics gg) {
       super.paintComponent(gg);
       Graphics2D g = (Graphics2D) gg;
       g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
               RenderingHints.VALUE_ANTIALIAS_ON);
       // we're drawing upside down
       int x = getWidth();
       int y = (int) f(x);
       g.setStroke(new BasicStroke(2));
       g.setColor(Color.orange);
       g.drawLine(0, (int) f(0), x, y);
       train(training[count].inputs, training[count].answer);
       count = (count + 1) % training.length;
       g.setStroke(new BasicStroke(1));
       g.setColor(Color.black);
       for (int i = 0; i < count; i++) {
           int guess = feedForward(training[i].inputs);
           x = (int) training[i].inputs[0] - 4;
           y = (int) training[i].inputs[1] - 4;
           if (guess > 0)
               g.drawOval(x, y, 8, 8);
           else
               g.fillOval(x, y, 8, 8);
       }
   }
   public static void main(String[] args) {
       SwingUtilities.invokeLater(() -> {
           JFrame f = new JFrame();
           f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
           f.setTitle("Perceptron");
           f.setResizable(false);
           f.add(new Perceptron(3), BorderLayout.CENTER);
           f.pack();
           f.setLocationRelativeTo(null);
           f.setVisible(true);
       });
   }

}</lang>

JavaScript

Uses the P5 library.

<lang javascript>const EPOCH = 1500, TRAINING = 1, TRANSITION = 2, SHOW = 3;

var perceptron;
var counter = 0;
var learnRate = 0.02;
var state = TRAINING;

function setup() {

   createCanvas( 800, 600 );
   clearBack();
   perceptron = new Perceptron( 2 );

}

function draw() {

   switch( state ) {
       case TRAINING: training(); break;
       case TRANSITION: transition(); break;
       case SHOW: show(); break;
   }

}

function clearBack() {

   background( 0 );
   stroke( 255 );
   strokeWeight( 4 );
   var x = width;
   line( 0, 0, x, lineDef( x ) );

}

function transition() {

   clearBack();
   state = SHOW;

}

function lineDef( x ) {

   return .75 * x;

}

function training() {

   var a = random( width ),
       b = random( height ),
       lDef = lineDef( a ) > b ? -1 : 1;
   perceptron.setInput( [a, b] );
   perceptron.feedForward();
   var pRes = perceptron.getOutput();
   var match = (pRes == lDef);
   var clr;
   if( !match ) {
       var err = ( pRes - lDef ) * learnRate;
       perceptron.adjustWeights( err );
       clr = color( 255, 0, 0 );
   } else {
       clr = color( 0, 255, 0 );
   }
   noStroke();
   fill( clr );
   ellipse( a, b, 4, 4 );
   if( ++counter == EPOCH ) state = TRANSITION;

}

function show() {

   var a = random( width ),
       b = random( height ),
       clr;
   perceptron.setInput( [a, b] );
   perceptron.feedForward();
   var pRes = perceptron.getOutput();
   if( pRes < 0 )
       clr = color( 255, 0, 0 );
   else 
       clr = color( 0, 255, 0 );
   noStroke();
   fill( clr );
   ellipse( a, b, 4, 4 );

}

function Perceptron( inNumber ) {

   this.inputs = [];
   this.weights = [];
   this.output;
   this.bias = 1;
   
   // one more weight for bias
   for( var i = 0; i < inNumber + 1; i++ ) {
       this.weights.push( Math.random() );
   };
   this.activation = function( a ) {
       return( Math.tanh( a ) < .5 ? 1 : -1 );
   }
   this.feedForward = function() {
       var sum = 0;
       for( var i = 0; i < this.inputs.length; i++ ) {
           sum += this.inputs[i] * this.weights[i];
       }
       sum += this.bias * this.weights[this.weights.length - 1];
       this.output = this.activation( sum );
   }
   this.getOutput = function() {
       return this.output;
   }
   this.setInput= function( inputs ) {
       this.inputs = [];
       for( var i = 0; i < inputs.length; i++ ) {
           this.inputs.push( inputs[i] );
       }
   }
   this.adjustWeights = function( err ) {
       for( var i = 0; i < this.weights.length - 1; i++ ) {
           this.weights[i] += err * this.inputs[i];
       }
   }

} </lang>

Well, it seems I cannot upload an image :(

Pascal

This is a text-based implementation, using a 20x20 grid (just like the original Mark 1 Perceptron had). The rate of improvement drops quite markedly as you increase the number of training runs.

<lang pascal>program Perceptron;

(*
 * implements a version of the algorithm set out at
 * http://natureofcode.com/book/chapter-10-neural-networks/ ,
 * but without graphics
 *)

function targetOutput( a, b : integer ) : integer;
(* the function the perceptron will be learning is f(x) = 2x + 1 *)
begin

   if a * 2 + 1 < b then
       targetOutput := 1
   else
       targetOutput := -1

end;

procedure showTargetOutput;
var x, y : integer;
begin

   for y := 10 downto -9 do
   begin
       for x := -9 to 10 do
           if targetOutput( x, y ) = 1 then
               write( '#' )
           else
               write( 'O' );
       writeln
   end;
   writeln

end;

procedure randomWeights( var ws : array of real );
(* start with random weights -- NB pass by reference *)
var i : integer;
begin

   randomize; (* seed random-number generator *)
   for i := 0 to 2 do
       ws[i] := random * 2 - 1

end;

function feedForward( ins : array of integer; ws : array of real ) : integer;
(* the perceptron outputs 1 if the sum of its inputs multiplied by its
   input weights is positive, otherwise -1 *)
var sum : real;
    i : integer;
begin

   sum := 0;
   for i := 0 to 2 do
       sum := sum + ins[i] * ws[i];
   if sum > 0 then
       feedForward := 1
   else
       feedForward := -1

end;

procedure showOutput( ws : array of real );
var inputs : array[0..2] of integer;
    x, y : integer;
begin

   inputs[2] := 1; (* bias *)
   for y := 10 downto -9 do
   begin
       for x := -9 to 10 do
       begin
           inputs[0] := x;
           inputs[1] := y;
           if feedForward( inputs, ws ) = 1 then
               write( '#' )
           else
               write( 'O' )
       end;
       writeln
   end;
   writeln

end;

procedure train( var ws : array of real; runs : integer );
(* pass the array of weights by reference so it can be modified *)
var inputs : array[0..2] of integer;
    error : real;
    x, y, i, j : integer;
begin

   inputs[2] := 1; (* bias *)
   for i := 1 to runs do
   begin
       for y := 10 downto -9 do
       begin
           for x := -9 to 10 do
           begin
               inputs[0] := x;
               inputs[1] := y;
               error := targetOutput( x, y ) - feedForward( inputs, ws );
               for j := 0 to 2 do
                   ws[j] := ws[j] + error * inputs[j] * 0.01;
                   (* 0.01 is the learning constant *)
           end;
       end;
   end;

end;

var weights : array[0..2] of real;

begin

   writeln( 'Target output for the function f(x) = 2x + 1:' );
   showTargetOutput;
   randomWeights( weights );
   writeln( 'Output from untrained perceptron:' );
   showOutput( weights );
   train( weights, 1 );
   writeln( 'Output from perceptron after 1 training run:' );
   showOutput( weights );
   train( weights, 4 );
   writeln( 'Output from perceptron after 5 training runs:' );
   showOutput( weights )

end.</lang>

Output:
Target output for the function f(x) = 2x + 1:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO

Output from untrained perceptron:
OOO#################
OOOO################
OOOOO###############
OOOOO###############
OOOOOO##############
OOOOOO##############
OOOOOOO#############
OOOOOOOO############
OOOOOOOO############
OOOOOOOOO###########
OOOOOOOOO###########
OOOOOOOOOO##########
OOOOOOOOOOO#########
OOOOOOOOOOO#########
OOOOOOOOOOOO########
OOOOOOOOOOOOO#######
OOOOOOOOOOOOO#######
OOOOOOOOOOOOOO######
OOOOOOOOOOOOOO######
OOOOOOOOOOOOOOO#####

Output from perceptron after 1 training run:
###############OOOOO
###############OOOOO
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO

Output from perceptron after 5 training runs:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO

Racket

Translation of: Java

<lang racket>#lang racket
(require 2htdp/universe
         2htdp/image)

(define (activate s) (if (positive? s) 1 -1))

;; ------------------------------------------------------------------------
;; PERCEPTRON

(define perceptron%
  (class object%
   (super-new)
   (init-field n)
   
   (field [weights (build-vector n (λ (i) (- (* (random) 2) 1)))])
   
   (define c 0.001)
   (define/public (feed-forward inputs)
     (unless (= (vector-length inputs) (vector-length weights))
       (error 'feed-forward "weights and inputs lengths mismatch"))
     (activate (for/sum ((i (in-vector inputs)) (w (in-vector weights))) (* i w))))
   (define/public (train! inputs desired)
     (let ((error (- desired (feed-forward inputs))))
       (set! weights (vector-map (λ (w i) (+ w (* c error i))) weights inputs))))))
;; ------------------------------------------------------------------------
;; TRAINING

(struct training-data (inputs answer))

(define (make-training-data x y f)
  (training-data (vector x y 1) (activate (- (f x) y))))
;; ------------------------------------------------------------------------
;; DEMO

(define (demo)
  (struct demonstration (p w h f i))
 (define (draw-classification-space p w h scl n)
   (for/fold ((scn (place-image (text (~a (get-field weights p)) 12 "red")
                                (* scl (/ w 2))
                                (* scl (/ h 2))
                                (empty-scene (* w scl) (* h scl)))))
             ((_ (in-range n)))
     (let* ((x (* (random) w))
            (y (* (random) h))
            (guess+? (positive? (send p feed-forward (vector x y 1)))))          
       (place-image (rectangle 4 4 (if guess+? 'solid 'outline) (if guess+? 'red 'black))
                    (- (* scl x) 2) (- (* scl (- h y)) 2)
                    scn))))
 (define the-demo
   (let ((w 640/100) (h 360/100) (f (λ (x) (+ (* x 0.7) 0.8))))
     (demonstration (new perceptron% [n 3]) w h f 0)))
 (define (demo-train p w h f)
   (let ((td (make-training-data (* (random) w) (* (random) h) f)))
     (send p train! (training-data-inputs td) (training-data-answer td))))
 (define tick-handler
   (match-lambda
     [(and d (demonstration p w h f i))
      (for ((_ (in-range 100))) (demo-train p w h f))
      (struct-copy demonstration d [i (+ 100 i)])]))
 (define draw-demo (match-lambda
                     [(demonstration p w h f i)
                      (let ((scl 100))
                        (scene+line (place-image (text (~a i) 24 "magenta")
                                                 (* scl (/ w 2))
                                                 (* scl (/ h 3))
                                                 (draw-classification-space p w h scl 1000))
                                    0 (* scl (- h (f 0))) (* scl w) (* scl (- h (f w))) "red"))]))
 
 (big-bang the-demo (to-draw draw-demo) (on-tick tick-handler)))
                     

(module+ main (demo))</lang>

Run it and see the image for yourself, I can't get it onto RC!

Scheme

<lang scheme>
(import (scheme base)
        (scheme case-lambda)
        (scheme write)
        (srfi 27))      ; for random numbers

(random-source-randomize! default-random-source)

;;; Function to create a perceptron
;;
;; num-inputs: size of input data
;; learning-rate: small number, to give rate of learning
;; returns perceptron as a function
;;   accepting 'train data -> trains on given list of data
;;             'test data -> returns percent correct on given list of data
;;             'show -> displays the perceptron weights
;; classes assumed to be 1, -1

(define (create-perceptron num-inputs learning-rate)
  (define (make-rnd-vector n) ; rnd vector, values in [-1,1]
   (let ((result (make-vector n)))
     (do ((i 0 (+ 1 i)))
       ((= i n) result)
       (vector-set! result i (- (* 2 (random-real)) 1)))))
 (define (extended input) ; add a 1 to end of vector
   (let* ((n (vector-length input))
          (result (make-vector (+ 1 n))))
     (do ((i 0 (+ 1 i)))
       ((= i n) (vector-set! result i 1)
                result)
       (vector-set! result i (vector-ref input i)))))
 (define (predict weights extended-input)
   (let ((sum 0))
     (vector-for-each (lambda (w i) (set! sum (+ sum (* w i))))
                      weights extended-input)
     (if (positive? sum) 1 -1)))
 ;
 (let ((weights (make-rnd-vector (+ 1 num-inputs))))
   (case-lambda ; defines a function for the perceptron
     ((key)
      (when (eq? key 'show)
        (display weights) (newline)))
     ((action data) 
      (case action
        ((train) 
         (for-each 
           (lambda (datum)
             (let* ((extended-input (extended (car datum)))
                    (error (- (cdr datum) (predict weights extended-input))))
               (set! weights (vector-map (lambda (w i) (+ w (* learning-rate error i)))
                                         weights
                                         extended-input))))
           data))
        ((test) 
         (let ((count 0))
           (for-each 
             (lambda (datum) (when (= (cdr datum) (predict weights (extended (car datum))))
                               (set! count (+ 1 count))))
             data)
           (inexact (* 100 (/ count (length data)))))))))))
;; create data: list of n ( #(input values) . target ) pairs
;; using formula y = mx + b, target based on if input above / below line

(define (create-data m b n)
  (define (target x y)
   (let ((fx (+ b (* m x)))) 
     (if (< fx y) 1 -1)))
 (define (create-datum)
   (let ((x (random-real))
         (y (random-real)))
     (cons (vector x y) (target x y))))
 ;
 (do ((data '() (cons (create-datum) data)))
   ((= n (length data)) data)))
;; train on 5000 points, show weights and result on 1000 test points

(let* ((m 0.7)
       (b 0.2)
      (perceptron (create-perceptron 2 0.001)))
 (perceptron 'train (create-data m b 5000))
 (perceptron 'show)
 (display "Percent correct on test set: ")
 (display (perceptron 'test (create-data m b 1000)))
 (newline))
;; show performance along training stages

(let* ((m 0.7) ; gradient of target line
       (b 0.2) ; y-intercept of target line
      (train-step 1000)  ; step in training set size
      (train-stop 20000) ; largest training set size
      (test-set (create-data m b 1000)) ; create a fixed test set
      (perceptron (create-perceptron 2 0.001)))
 (do ((i train-step (+ i train-step)))
   ((> i train-stop) )
   (perceptron 'train (create-data m b train-step))
   (display (string-append "Trained on " (number->string i)
                           ", percent correct is " 
                           (number->string (perceptron 'test test-set))
                           "\n"))))

</lang>

Output:
#(-0.5914540100624854 1.073343782042039 -0.29780862758499393)
Percent correct on test set: 95.4
Trained on 1000, percent correct is 18.1
Trained on 2000, percent correct is 91.1
Trained on 3000, percent correct is 98.0
Trained on 4000, percent correct is 92.5
Trained on 5000, percent correct is 98.6
Trained on 6000, percent correct is 98.6
Trained on 7000, percent correct is 98.8
Trained on 8000, percent correct is 97.8
Trained on 9000, percent correct is 99.1
Trained on 10000, percent correct is 96.0
Trained on 11000, percent correct is 98.6
Trained on 12000, percent correct is 98.2
Trained on 13000, percent correct is 99.2
Trained on 14000, percent correct is 99.4
Trained on 15000, percent correct is 99.0
Trained on 16000, percent correct is 98.8
Trained on 17000, percent correct is 97.5
Trained on 18000, percent correct is 99.8
Trained on 19000, percent correct is 99.2
Trained on 20000, percent correct is 100.0

XLISP

Like the Pascal example, this is a text-based program using a 20x20 grid. It is slightly more general, however, because it allows the function that is to be learnt and the perceptron's bias and learning constant to be passed as arguments to the trainer and perceptron objects.

<lang scheme>(define-class perceptron
    (instance-variables weights bias learning-constant) )

(define-method (perceptron 'initialize b lc)

   (defun random-weights (n)
       (if (> n 0)
           (cons (- (/ (random 20000) 10000) 1) (random-weights (- n 1))) ) )
   (setq weights (random-weights 3))
   (setq bias b)
   (setq learning-constant lc)
   self )

(define-method (perceptron 'value x y)

   (if (> (+ (* x (car weights)) (* y (cadr weights)) (* bias (caddr weights))) 0)
   1
   -1 ) )

(define-method (perceptron 'print-grid)

   (print-row self 10) )

(define-method (perceptron 'learn source runs)

   (defun learn-row (row)
       (defun learn-cell (cell)
           (define inputs `(,cell ,row ,bias))
           (define error (- (source 'value cell row) (self 'value cell row)))
           (defun reweight (ins ws)
               (if (car ins)
                   (cons (+ (car ws) (* error (car ins) learning-constant)) (reweight (cdr ins) (cdr ws))) ) )
           (setq weights (reweight inputs weights))
           (if (< cell 10)
               (learn-cell (+ cell 1)) ) )
       (learn-cell -9)
       (if (> row -9)
           (learn-row (- row 1)) ) )
   (do ((i 1 (+ i 1))) ((> i runs))
       (learn-row 10) ) )

(define-class trainer
    (instance-variables fn) )

(define-method (trainer 'initialize function)

   (setq fn function)
   self )

(define-method (trainer 'print-grid)

   (print-row self 10) )

(define-method (trainer 'value x y)

   (if (apply fn `(,x ,y))
       1
       -1 ) )

(defun print-row (obj row)

   (defun print-cell (cell)
       (if (= (obj 'value cell row) 1)
           (display "#")
           (display "O") )
       (if (< cell 10)
           (print-cell (+ cell 1))
           (newline) ) )
   (print-cell -9)
   (if (> row -9)
       (print-row obj (- row 1))
       (newline) ) )

(define ptron (perceptron 'new 1 0.01))

(define training (trainer 'new
    (lambda (x y) (> y (+ (* x 2) 1))) ) )

(newline)
(display "Target output for y = 2x + 1:") (newline)
(training 'print-grid)
(display "Output from untrained perceptron:") (newline)
(ptron 'print-grid)
(display "Output from perceptron after 1 training run:") (newline)
(ptron 'learn training 1)
(ptron 'print-grid)
(display "Output from perceptron after 5 training runs:") (newline)
(ptron 'learn training 4)
(ptron 'print-grid)</lang>

Output:
Target output for y = 2x + 1:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO

Output from untrained perceptron:
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
############OOOOOOOO
############OOOOOOOO
############OOOOOOOO

Output from perceptron after 1 training run:
###############OOOOO
###############OOOOO
##############OOOOOO
##############OOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO

Output from perceptron after 5 training runs:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO

zkl

Translation of: Java

Uses the PPM class from http://rosettacode.org/wiki/Bitmap/Bresenham%27s_line_algorithm#zkl

<lang zkl>class Perceptron{

  const c=0.00001;
  var [const] W=640, H=350;

   fcn init(n){
      r:=(0.0).random.fp(1); // r()-->[0..1)
      var weights=n.pump(List(),'wrap(){ r()*2 - 1 }), // Float[n]
          training=(2000).pump(List,'wrap(){           // (x,y,1,answer)[2000]
             x,y,answer:=r()*W, r()*H, (if(y<f(x)) -1 or 1);
             return(x,y,1,answer) });
   }
  fcn f(x){ 0.7*x + 40 }    // a line
  fcn feedForward(xy1a){
     sum:=0.0;
     foreach i in (weights.len()){ sum+=xy1a[i]*weights[i] }
     (sum<0) and -1 or 1   // activate(sum)
  }
  fcn train(xy1a){ 
     guess,error:=feedForward(xy1a), xy1a[-1] - guess;
     foreach i in (weights.len()){ weights[i]+=c*error*xy1a[i] }
  }

}</lang>

<lang zkl>p:=Perceptron(3); p.training.apply2(p.train);

PPM:=Import("ppm.zkl").PPM; pixmap:=PPM(p.W+20,p.H+20,0xFF|FF|FF);

foreach xy1a in (p.training){
   guess,x,y:=p.feedForward(xy1a), 8 + xy1a[0], 8 + xy1a[1];
   color:=(if(guess>0) 0 else 0xFF|00|00);  // black or red
   pixmap.circle(x,y,8,color);
}
pixmap.writeJPGFile("perceptron.zkl.jpg");</lang>
