Perceptron

* [https://youtu.be/dXuNAkHsos4?t=16m44s Machine Learning - Perceptrons (youtube)]
<br><br>
=={{header|11l}}==
{{trans|Python}}
 
<syntaxhighlight lang="11l">V TRAINING_LENGTH = 2000

T Perceptron
   c = .01
   [Float] weights

   F (n)
      .weights = (0 .< n).map(_ -> random:(-1.0 .. 1.0))

   F feed_forward(inputs)
      [Float] vars
      L(i) 0 .< inputs.len
         vars.append(inputs[i] * .weights[i])
      R .activate(sum(vars))

   F activate(value)
      R I value > 0 {1} E -1

   F train(inputs, desired)
      V guess = .feed_forward(inputs)
      V error = desired - guess
      L(i) 0 .< inputs.len
         .weights[i] += .c * error * inputs[i]

T Trainer
   [Float] inputs
   Int answer

   F (x, y, a)
      .inputs = [x, y, 1.0]
      .answer = a

F f(x)
   R 2 * x + 1

V ptron = Perceptron(3)
[Trainer] training
L(i) 0 .< TRAINING_LENGTH
   V x = random:(-10.0 .. 10.0)
   V y = random:(-10.0 .. 10.0)
   V answer = 1
   I y < f(x)
      answer = -1
   training.append(Trainer(x, y, answer))
[[Char]] result
L(y) -10 .< 10
   [Char] temp
   L(x) -10 .< 10
      I ptron.feed_forward([x, y, 1]) == 1
         temp.append(Char(‘^’))
      E
         temp.append(Char(‘.’))
   result.append(temp)

print(‘Untrained’)
L(row) result
   print(row.join(‘’))

L(t) training
   ptron.train(t.inputs, t.answer)

result.clear()
L(y) -10 .< 10
   [Char] temp
   L(x) -10 .< 10
      I ptron.feed_forward([x, y, 1]) == 1
         temp.append(Char(‘^’))
      E
         temp.append(Char(‘.’))
   result.append(temp)

print(‘Trained’)
L(row) result
   print(row.join(‘’))</syntaxhighlight>
 
{{out}}
<pre>
Untrained
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^.
^^^^^^^^^^^^^^^^^^..
^^^^^^^^^^^^^^^^^^..
^^^^^^^^^^^^^^^^^...
^^^^^^^^^^^^^^^^....
^^^^^^^^^^^^^^^.....
^^^^^^^^^^^^^^......
^^^^^^^^^^^^^.......
^^^^^^^^^^^^^.......
^^^^^^^^^^^^........
^^^^^^^^^^^.........
^^^^^^^^^^..........
^^^^^^^^^...........
^^^^^^^^............
^^^^^^^^............
^^^^^^^.............
^^^^^^..............
^^^^^...............
^^^^................
Trained
^^^.................
^^^^................
^^^^^...............
^^^^^...............
^^^^^^..............
^^^^^^^.............
^^^^^^^.............
^^^^^^^^............
^^^^^^^^^...........
^^^^^^^^^...........
^^^^^^^^^^..........
^^^^^^^^^^^.........
^^^^^^^^^^^.........
^^^^^^^^^^^^........
^^^^^^^^^^^^^.......
^^^^^^^^^^^^^^......
^^^^^^^^^^^^^^......
^^^^^^^^^^^^^^^.....
^^^^^^^^^^^^^^^^....
^^^^^^^^^^^^^^^^....
</pre>
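The 11l entry above is marked as a translation of the Python solution. For readers without that original at hand, the same feed-forward/train cycle can be sketched in standalone Python; this is a minimal sketch, not the task's official Python entry, and the names and accuracy printout are illustrative:

```python
import random

class Perceptron:
    """n weights (the last input is a constant 1.0 bias), learning constant c."""
    def __init__(self, n, c=0.01):
        self.c = c
        self.weights = [random.uniform(-1.0, 1.0) for _ in range(n)]

    def feed_forward(self, inputs):
        # weighted sum followed by the sign activation
        s = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if s > 0 else -1

    def train(self, inputs, desired):
        error = desired - self.feed_forward(inputs)
        for i, x in enumerate(inputs):
            self.weights[i] += self.c * error * x

def f(x):
    """The line being learned: y = 2x + 1."""
    return 2 * x + 1

ptron = Perceptron(3)
training = []
for _ in range(2000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    answer = -1 if y < f(x) else 1
    training.append(([x, y, 1.0], answer))

for inputs, answer in training:
    ptron.train(inputs, answer)

good = sum(ptron.feed_forward(i) == a for i, a in training)
print(f"accuracy after one training pass: {good / len(training):.2%}")
```

One pass over 2000 linearly separable points is normally enough for the sign classifier to track the line closely, which is why the "Trained" ASCII grid above hugs the diagonal.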
=={{header|ARM Assembly}}==
{{works with|as|Raspberry Pi <br> or 32-bit Android with the Termux application}}
<syntaxhighlight lang="arm assembly">
/* ARM assembly Raspberry Pi or Android with Termux */
/* program perceptron3.s */
 
/* compile with as */
/* link with gcc and options -lX11 -L/usr/lpp/X11/lib */
/* REMARK 1 : this program runs on a 32-bit Android smartphone with Termux
   and an X11 server. The memory addresses are relocatable and
   can be simplified for the Raspberry Pi. */
 
/* REMARK 2 : this program uses routines from an include file.
   See the task "Include a file" (ARM assembly)
   for the routines affichageMess and conversion10;
   see the .include instruction at the end of this program. */
/* for constants, see the task "Include a file" in ARM assembly */
/************************************/
/* Constants */
/************************************/
.include "../constantes.inc"
 
/********************************************/
/* Constants */
/********************************************/
.equ STDOUT, 1 @ Linux output console
.equ EXIT, 1 @ Linux syscall
.equ WRITE, 4 @ Linux syscall
/* X11 constants */
.equ KeyPressed, 2
.equ ButtonPress, 4
.equ MotionNotify, 6
.equ EnterNotify, 7
.equ LeaveNotify, 8
.equ Expose, 12
.equ ClientMessage, 33
.equ KeyPressMask, 1
.equ ButtonPressMask, 4
.equ ButtonReleaseMask, 8
.equ ExposureMask, 1<<15
.equ StructureNotifyMask, 1<<17
.equ EnterWindowMask, 1<<4
.equ LeaveWindowMask, 1<<5
.equ ConfigureNotify, 22
 
.equ GCForeground, 1<<2
 
/* perceptron constants */
.equ WINDOWWIDTH, 600 @ windows size
.equ WINDOWHEIGHT, 600
.equ NBENTREES, 2 @ entry number
.equ NBENTRAI, 4000 @ training number
.equ NBPOINTS, 500 @ display points number
/************************************/
/* Structures */
/************************************/
/* training data */
.struct 0
entrai_entrees:
.struct entrai_entrees + 4 * NBENTREES
entrai_entrees_biais:
.struct entrai_entrees_biais + 4
entrai_reponse:
.struct entrai_reponse +4
entrai_fin:
 
/*******************************************/
/* INITIALIZED DATA */
/*******************************************/
.data
szWindowName: .asciz "Windows Raspberry"
szRetourligne: .asciz "\n"
szMessDebutPgm: .asciz "Program start. \n"
szMessErreur: .asciz "Server X not found.\n"
szMessErrfen: .asciz "Can not create window.\n"
szMessErreurX11: .asciz "Error call function X11. \n"
szMessErrGc: .asciz "Can not create graphics context.\n"
szTitreFenRed: .asciz "Pi"
 
szLibDW: .asciz "WM_DELETE_WINDOW" @ special label for correct close error
 
 
.align 4
tbfEntrees: .float 10.0,0.0,1.0 @ entries for tests
.float 10.0,20.0,1.0
.float 10.0,40.0,1.0
.float 10.0,60.0,1.0
.float 10.0,80.0,1.0
.float 20.0,50.0,1.0
.float 40.0,50.0,1.0
.float 60.0,50.0,1.0
.float 80.0,50.0,1.0
.float 100.0,50.0,1.0
.float 10.0,50.0,1.0
.equ NBPOINTDIS, (. - tbfEntrees) / 12
stXGCValues: .int 0,0,0x00FF0000,0,0,0,0,0,0,0,0,0 @ for foreground color red
stXGCValues1: .int 0,0,0x00FFFFFF,0,0,0,0,0,0,0,0,0 @ for foreground color white
stXGCValues2: .int 0,0,0x0000FF00,0,0,0,0,0,0,0,0,0 @ for foreground color green
iGraine: .int 1234567
/*******************************************/
/* UNINITIALIZED DATA */
/*******************************************/
.bss
.align 4
ptDisplay: .skip 4 @ pointer display
ptEcranDef: .skip 4 @ pointer screen default
ptFenetre: .skip 4 @ pointer window
ptGC: .skip 4 @ pointer graphic context
ptGC1: .skip 4 @ pointer graphic context
key: .skip 4 @ key code
wmDeleteMessage: .skip 8 @ ident close message
event: .skip 400 @ TODO: event size ??
PrpNomFenetre: .skip 100 @ window name property
buffer: .skip 500
iWhite: .skip 4 @ rgb code for white pixel
iBlack: .skip 4 @ rgb code for black pixel
stEnt: .skip entrai_fin * NBENTRAI
tbfPoids: .skip 4 * (NBENTREES + 1)
/**********************************************/
/* -- Code section */
/**********************************************/
.text
.global main
iOfWhite: .int iWhite - .
iOfBlack: .int iBlack - .
iOfszMessDebutPgm: .int szMessDebutPgm - .
main: @ entry of program
adr r0,iOfszMessDebutPgm @ Start message
ldr r1,[r0]
add r0,r1
bl affichageMess
/* attention r6 pointer display*/
/* attention r8 pointer graphic context */
/* attention r9 ident window */
/*****************************/
/* OPEN SERVER X11 */
/*****************************/
mov r0,#0
bl XOpenDisplay @ open X server
cmp r0,#0 @ error ?
beq erreurServeur
adr r2,iOfptDisplay
ldr r1,[r2]
add r1,r2
str r0,[r1] @ store display address
 
mov r6,r0 @ and in register r6
ldr r2,[r0,#+132] @ load default_screen
adr r1,iOfptEcranDef
ldr r3,[r1]
add r1,r3
str r2,[r1] @ store default_screen
mov r2,r0
ldr r0,[r2,#+140] @ load pointer screen list
ldr r5,[r0,#+52] @ load value white pixel
adr r4,iOfWhite @ and store in memory
ldr r3,[r4]
add r4,r3
str r5,[r4]
ldr r3,[r0,#+56] @ load value black pixel
adr r4,iOfBlack @ and store in memory
ldr r5,[r4]
add r4,r5
str r3,[r4]
ldr r4,[r0,#+28] @ load bits par pixel
ldr r1,[r0,#+8] @ load root windows
/**************************/
/* CREATE WINDOW */
/**************************/
mov r0,r6 @ address display
mov r2,#0 @ window position X
mov r3,#0 @ window position Y
mov r8,#0 @ for stack alignement
push {r8}
push {r3} @ background = black pixel
push {r5} @ border = white pixel
mov r8,#2 @ border size
push {r8}
mov r8,#WINDOWHEIGHT @ height
push {r8}
mov r8,#WINDOWWIDTH @ width
push {r8}
bl XCreateSimpleWindow
add sp,#24 @ stack alignement 6 push (4 bytes * 6)
cmp r0,#0 @ error ?
beq erreurF
 
adr r1,iOfptFenetre
ldr r3,[r1]
add r1,r3
str r0,[r1] @ store window address in memory
mov r9,r0 @ and in register r9
/*****************************/
/* add window property */
/*****************************/
mov r0,r6 @ display address
mov r1,r9 @ window address
adr r2,iOfszWindowName @ window name
ldr r5,[r2]
add r2,r5
adr r3,iOfszTitreFenRed @ window name reduced
ldr r5,[r3]
add r3,r5
mov r4,#0
push {r4} @ parameters not use
push {r4}
push {r4}
push {r4}
bl XSetStandardProperties
add sp,sp,#16 @ stack alignement for 4 push
/**************************************/
/* for correction window close error */
/**************************************/
mov r0,r6 @ display address
adr r1,iOfszLibDW @ atom address
ldr r5,[r1]
add r1,r5
mov r2,#1 @ create the atom if it does not exist
bl XInternAtom
cmp r0,#0 @ error X11 ?
blt erreurX11 @ April 2022 change for Android (use ble for Raspberry Pi)
adr r1,iOfwmDeleteMessage @ recept address
ldr r5,[r1]
add r1,r5
str r0,[r1]
mov r2,r1 @ return address
mov r0,r6 @ display address
mov r1,r9 @ window address
mov r3,#1 @ number of protocols
bl XSetWMProtocols
cmp r0,#0 @ error X11 ?
ble erreurX11
/**********************************/
/* create graphic context */
/**********************************/
mov r0,r6 @ display address
mov r1,r9 @ window address
mov r2,#GCForeground @
adr r3,iOfstXGCValues2 @ green color in foreground
ldr r5,[r3]
add r3,r5
bl XCreateGC
cmp r0,#0 @ error ?
beq erreurGC
adr r1,iOfptGC
ldr r5,[r1]
add r1,r5
str r0,[r1] @ store address graphic context
mov r8,r0 @ and in r8
/**********************************/
/* create 2 graphic context */
/**********************************/
mov r0,r6 @ display address
mov r1,r9 @ window address
mov r2,#GCForeground @ red color in Foreground
adr r3,iOfstXGCValues
ldr r5,[r3]
add r3,r5
bl XCreateGC
cmp r0,#0 @ error ?
beq erreurGC
adr r1,iOfptGC1
ldr r5,[r1]
add r1,r5
str r0,[r1] @ store address graphic context
mov r10,r0 @ and in r10
/**********************************/
/* create 2 graphic context */
/**********************************/
mov r0,r6 @ display address
mov r1,r9 @ window address
mov r2,#GCForeground @ white color in Foreground
adr r3,iOfstXGCValues1
ldr r5,[r3]
add r3,r5
bl XCreateGC
cmp r0,#0 @ error ?
beq erreurGC
mov r11,r0 @ address GC2 in r11
/****************************/
/* modif window background */
/****************************/
mov r0,r6 @ display address
mov r1,r9 @ window address
ldr r2,iGris1 @ background color
bl XSetWindowBackground
cmp r0,#0 @ error ?
ble erreurX11
/***************************/
/* OUF!! window display */
/***************************/
mov r0,r6 @ display address
mov r1,r9 @ window address
bl XMapWindow
 
/* init perceptron */
bl initPerceptron
/* draw line */
mov r0,r6 @ display
mov r1,r9 @ windows
mov r2,r11 @ graphic context
bl draw_line_Function
mov r5,#0
0: @ loop to write point
mov r0,r6 @ display
mov r1,r9 @ windows
mov r2,r8 @ GC0
mov r3,r10 @ GC1
bl writePoint
add r5,#1
cmp r5,#NBPOINTS @ maxi ?
blt 0b @ no -> loop
 
/****************************/
/* Autorisations */
/****************************/
mov r0,r6 @ display address
mov r1,r9 @ window address
ldr r2,iFenetreMask @ autorisation mask
bl XSelectInput
cmp r0,#0 @ error ?
ble erreurX11
/****************************/
/* Events loop */
/****************************/
1:
mov r0,r6 @ display address
adr r1,iOfevent @ events address
ldr r5,[r1]
add r1,r5
bl XNextEvent @ event ?
adr r0,iOfevent
ldr r5,[r0]
add r0,r5
ldr r0,[r0] @ code event
cmp r0,#KeyPressed @ key ?
bne 2f
adr r0,iOfevent @ yes read key in buffer
ldr r5,[r0]
add r0,r5
adr r1,iOfbuffer
ldr r5,[r1]
add r1,r5
mov r2,#255
adr r3,iOfkey
ldr r5,[r3]
add r3,r5
mov r4,#0
push {r4} @ stack alignement
push {r4}
bl XLookupString
add sp,#8 @ stack alignement 2 push
cmp r0,#1 @ is character key ?
bne 2f
adr r0,iOfbuffer @ yes -> load first buffer character
ldr r5,[r0]
add r0,r5
ldrb r0,[r0]
cmp r0,#0x71 @ character q for quit
beq 5f @ yes -> end
b 4f
2:
/************************************/
/* for example a mouse button click */
/************************************/
cmp r0,#ButtonPress @ mouse button click ?
bne 3f
adr r0,iOfevent
ldr r5,[r0]
add r0,r5
ldr r1,[r0,#+32] @ position X mouse clic
ldr r2,[r0,#+36] @ position Y
@ etc. for possible future use
b 4f
3:
cmp r0,#ClientMessage @ code for closing the window without error
bne 4f
adr r0,iOfevent
ldr r5,[r0]
add r0,r5
ldr r1,[r0,#+28] @ code message address
adr r2,iOfwmDeleteMessage @ equal to the created window's atom ?
ldr r5,[r2]
add r2,r5
ldr r2,[r2]
cmp r1,r2
beq 5f @ yes -> end window
 
4: @ loop for other event
b 1b
/***********************************/
/* Close window -> free ressources */
/***********************************/
5:
mov r0,r6 @ display address
adr r1,iOfptGC
ldr r5,[r1]
add r1,r5
ldr r1,[r1] @ load context graphic address
bl XFreeGC
mov r0,r6 @ display address
adr r1,iOfptGC1
ldr r5,[r1]
add r1,r5
ldr r1,[r1] @ load context graphic address
bl XFreeGC
cmp r0,#0
blt erreurX11
mov r0,r6 @ display address
mov r1,r9 @ window address
bl XDestroyWindow
cmp r0,#0
blt erreurX11
mov r0,r6 @ display address
bl XCloseDisplay
cmp r0,#0
blt erreurX11
mov r0,#0 @ return code OK
b 100f
iOfptDisplay: .int ptDisplay - .
iOfptEcranDef: .int ptEcranDef - .
erreurF: @ window-creation error, possibly unnecessary: the server displays the error itself
adr r1,iOfszMessErrfen
ldr r5,[r1]
add r1,r5
bl displayError
mov r0,#1 @ return error code
b 100f
erreurGC: @ error create graphic context
adr r1,iOfszMessErrGc
ldr r5,[r1]
add r1,r5
bl displayError
mov r0,#1
b 100f
erreurX11: @ erreur X11
adr r1,iOfszMessErreurX11
ldr r5,[r1]
add r1,r5
bl displayError
mov r0,#1
b 100f
erreurServeur: @ X11 server not found; see PuTTY and Xming documentation
adr r1,iOfszMessErreur
ldr r5,[r1]
add r1,r5
bl displayError
mov r0,#1
b 100f
 
100: @ standard end of the program
mov r7, #EXIT
svc 0
iOfptFenetre: .int ptFenetre - .
iOfptGC: .int ptGC - .
iOfptGC1: .int ptGC1 - .
iOfevent: .int event - .
iOfbuffer: .int buffer - .
iOfkey: .int key - .
iOfszLibDW: .int szLibDW - .
iOfszMessErreurX11: .int szMessErreurX11 - .
iOfszMessErrGc: .int szMessErrGc - .
iOfszMessErreur: .int szMessErreur - .
iOfszMessErrfen: .int szMessErrfen - .
iOfszWindowName: .int szWindowName - .
iOfszTitreFenRed: .int szTitreFenRed - .
iOfPrpNomFenetre: .int PrpNomFenetre - .
iOfwmDeleteMessage: .int wmDeleteMessage - .
iOfstXGCValues: .int stXGCValues - .
iOfstXGCValues1: .int stXGCValues1 - .
iOfstXGCValues2: .int stXGCValues2 - .
iFenetreMask: .int KeyPressMask|ButtonPressMask|StructureNotifyMask
iGris1: .int 0xFFA0A0A0
/******************************************************************/
/* perceptron initialisation */
/******************************************************************/
/* */
initPerceptron: @ INFO: initPerceptron
push {r1-r6,lr}
mov r1,#0
adr r2,iOftbfPoids
ldr r5,[r2]
add r2,r5
1: @ create random weights
mov r0,#10000
bl genereraleasFloat
lsl r3,r1,#2 @ compute offset
add r3,r2 @ compute weight address
vstr s0,[r3] @ and store random weight
add r1,#1
cmp r1,#NBENTREES + 1 @ + bias entry
blt 1b
mov r1,#0 @ training index
mov r6,#entrai_fin @ size of one training element
adr r2,iOfstEnt @ training address
ldr r5,[r2]
add r2,r5
ldr r4,fUn @ bias value = 1.0
vldr s5,fConst3 @
vldr s6,fConst4
2: @ loop training value
mla r3,r1,r6,r2
mov r0,#WINDOWWIDTH
bl genereraleasFloat @ value x
vmul.f32 s0,s0,s6
vstr s0,[r3]
vmov s2,s0 @ save x
mov r0,#WINDOWHEIGHT
bl genereraleasFloat @ value y
vmul.f32 s0,s0,s5
vstr s0,[r3,#4] @ save y
str r4,[r3,#entrai_entrees_biais] @ store bias
vldr s3,fConst1
vmul.f32 s4,s3,s2 @ x * 0.7
vldr s3,fConst2
vadd.f32 s4,s3 @ + 40
vcmp.f32 s0,s4 @ compare y with the result
vmrs APSR_nzcv,FPSCR @ move float flags in standard flags
movlt r0,#-1 @ -1 if smaller
movge r0,#1 @ +1 else
str r0,[r3,#entrai_reponse] @ store in reply
add r1,#1
cmp r1,#NBENTRAI @ other training ?
blt 2b
 
bl entrainerPerceptron
100:
pop {r1-r6,pc}
iOftbfPoids: .int tbfPoids - .
iOfstEnt: .int stEnt - .
fUn: .float 1.0
fConst3: .float 1000.0
fConst4: .float 1000.0
/***************************************************/
/* perceptron training */
/***************************************************/
/* */
entrainerPerceptron: @ INFO: entrainerPerceptron
push {r1-r8,lr}
mov r4,#0 @ training indice
adr r5,iOfstEnt @ entry address
ldr r6,[r5]
add r5,r6
adr r6,iOftbfPoids @ weight address
ldr r7,[r6]
add r6,r7
mov r7,#entrai_fin @ size one entry
1:
mul r0,r7,r4
add r0,r5 @ training element address
ldr r1,[r0,#entrai_reponse] @ desired reply
mov r8,r0
bl feedforward @ compute reply
sub r0,r1,r0 @ error
vmov s3,r0
vcvt.f32.s32 s3,s3 @ float conversion
mov r2,#0 @ indice weight
2:
add r3,r6,r2,lsl #2 @ compute weight address
vldr s5,[r3] @ load weight
add r1,r8,r2,lsl #2 @ compute entry address
vldr s1,[r1] @ load input[n]
vldr s2,fConstC @ constante C
vmul.f32 s4,s2,s3 @ compute new weight = C * error
vmul.f32 s4,s4,s1 @ * input[n]
vadd.f32 s5,s5,s4 @ + weight precedent
vstr s5,[r3] @ store new weight
 
add r2,#1
cmp r2,#NBENTREES + 1
blt 2b
add r4,#1
cmp r4,#NBENTRAI
blt 1b
100:
pop {r1-r8,pc}
fConstC: .float 0.01 @ adjust to suit the problem
fConst1: .float 0.7 @ coefficient
fConst2: .float 40.0
/***************************************************/
/* compute perceptron reply */
/***************************************************/
/* r0 entry address */
/* r0 returns the result */
feedforward: @ INFO: feedforward:
push {r1-r5,lr}
mov r4,r0 @ entry address
mov r0,#0
vmov s2,r0
vcvt.f32.u32 s2,s2 @ convert zero to float
vmov s3,s2 @ and save
mov r1,#0 @ indice weight
adr r2,iOftbfPoids @ weight address
ldr r5,[r2]
add r2,r5
1:
lsl r3,r1,#2
add r5,r3,r2 @ compute weight address
vldr s0,[r5] @ load weight
add r5,r3,r4 @ compute entry address
vldr s1,[r5] @ load entry
vmul.f32 s0,s1,s0 @ multiply entry by weight
vadd.f32 s2,s0 @ and add to sum
add r1,#1
cmp r1,#NBENTREES + 1
blt 1b
 
vcmp.f32 s2,s3 @ compare sum to zero
vmrs APSR_nzcv,FPSCR @ move float flags to standard flags
movlt r0,#-1 @ -1 if smaller
movge r0,#1 @ +1 else
100:
pop {r1-r5,pc}
/***************************************************/
/* Generate a random number in float format */
/***************************************************/
/* r0 contains the range */
/* s0 returns (random integer in [0,r0)) / r0, i.e. a float in [0,1) */
genereraleasFloat: @ INFO: genereraleasFloat
push {r1-r5,lr} @ save registers
mov r4,r0 @ save range
adr r0,iOfiGraine1 @ load seed
ldr r5,[r0]
add r0,r5
ldr r0,[r0]
ldr r1,iNombre1
mul r0,r1
add r0,#1
adr r1,iOfiGraine1
ldr r5,[r1]
add r1,r5
str r0,[r1] @ store new seed
ldr r1,m @ divisor for 32 bits register
bl division
mov r0,r3 @ remainder
ldr r1,m1 @ divisor 10000
bl division
mul r0,r2,r4 @ multiply quotient for range
ldr r1,m1 @
bl division @
mov r0,r2 @ quotient = random integer
vmov s0,r4
vcvt.f32.u32 s0,s0 @ convert range to float
vmov s1,r0
vcvt.f32.u32 s1,s1 @ convert random integer to float
vdiv.f32 s0,s1,s0 @ division
100:
pop {r1-r5,pc} @ restore registers
iOfiGraine1: .int iGraine - .
iNombre1: .int 31415821
m1: .int 10000
m: .int 100000000
/******************************************************************/
/* draw points */
/******************************************************************/
/* r0 contains display */
/* r1 contains windows */
/* r2 contains context graphic (color point) */
/* r3 contains context graphic 1 */
writePoint: @ INFO: writePoint
push {r1-r11,lr} @ save registers
mov r6,r0 @ save display
adr r4,iOftbfEntrees
ldr r5,[r4]
add r4,r5
mov r0,#WINDOWWIDTH @
bl genereraleasFloat @ random float X
mov r0,#WINDOWWIDTH
vmov s1,r0
vcvt.f32.u32 s1,s1 @ convert to float
vmul.f32 s0,s1 @ scale X
vstr s0,[r4]
mov r0,#WINDOWHEIGHT
bl genereraleasFloat @ random float Y
mov r0,#WINDOWHEIGHT
vmov s1,r0
vcvt.f32.u32 s1,s1 @ convert to float
vmul.f32 s0,s1 @ scale Y
vstr s0,[r4,#4]
mov r0,r4
bl feedforward @ request perceptron
cmp r0,#0
movgt r2,r10 @ if result positive, use graphic context 1
mov r8,r2
mov r7,r1
mov r0,r6
vldr s0,[r4] @ load X
vcvt.s32.f32 s0,s0 @ convert to integer
vmov r3,s0 @ position x
mov r9,r3
sub sp,sp,#4 @ stack alignement
vldr s1,[r4,#4] @ Load Y
vcvt.s32.f32 s1,s1 @ convert to integer
vmov r4,s1 @ position y
rsb r4,r4,#WINDOWHEIGHT
sub r4,r4,#50 @ correction system bar
push {r4} @ on the stack
bl XDrawPoint
add sp,sp,#8 @ stack alignement 1 push and 1 stack alignement
mov r0,r6
mov r1,r7
mov r2,r8
add r9,#1
mov r3,r9
sub sp,sp,#4 @ stack alignement
push {r4} @ on the stack
bl XDrawPoint
add sp,sp,#8 @ stack alignement 1 push and 1 stack alignement
mov r0,r6
mov r1,r7
mov r2,r8
sub r9,#2
mov r3,r9
sub sp,sp,#4 @ stack alignement
push {r4} @ on the stack
bl XDrawPoint
add sp,sp,#8 @ stack alignement 1 push and 1 stack alignement
mov r0,r6
mov r1,r7
mov r2,r8
add r9,#1
mov r3,r9
sub sp,sp,#4 @ stack alignement
add r4,#1
push {r4} @ on the stack
bl XDrawPoint
add sp,sp,#8 @ stack alignement 1 push and 1 stack alignement
100:
pop {r1-r11,pc} @ restore registers
iOftbfEntrees: .int tbfEntrees - .
/******************************************************************/
/* draw points */
/******************************************************************/
/* r0 contains display */
/* r1 contains windows */
/* r2 contains context graphic (color line) */
/* r3 contains X position */
/* r4 contains Y position */
draw_points: @ INFO: draw_points
push {r0-r12,lr} @ save registers
sub sp,sp,#4 @ stack alignment
push {r4} @ on the stack
bl XDrawPoint
add sp,sp,#8 @ stack alignment: 1 push and 1 alignment word
100:
pop {r0-r12,pc} @ restore registers
/******************************************************************/
/* draw line function; */
/******************************************************************/
/* r0 contains display */
/* r1 contains windows */
/* r2 contains context graphic (color line) */
 
draw_line_Function: @ INFO: draw_line_function
push {r1-r6,lr} @ save registers
 
@ compute begin y for x = 0
vldr s1,fConst2
vcvt.s32.f32 s1,s1 @ convert to integer
vmov r3,s1
rsb r4,r3,#WINDOWHEIGHT @ y = window height - 40
@ compute end y for x = WINDOWWIDTH
mov r5,#WINDOWWIDTH @ window width = x1
vmov s2,r5
vcvt.f32.s32 s2,s2 @ convert to float
vldr s1,fConst1
vmul.f32 s0,s2,s1 @ * 0.7
vldr s2,fConst2
vadd.f32 s0,s2 @ add constant (40)
vcvt.s32.f32 s0,s0 @ convert to integer
vmov r3,s0
rsb r6,r3,#WINDOWHEIGHT @ = y1
mov r3,#0 @ position x
sub sp,sp,#4 @ stack alignement
push {r6} @ position y1
push {r5} @ position x1
push {r4} @ position y
bl XDrawLine
add sp,sp,#16 @ for 4 push
 
100:
pop {r1-r6,pc} @ restore registers
 
/***************************************************/
/* ROUTINES INCLUDE */
/***************************************************/
.include "../affichage.inc"
</syntaxhighlight>
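The listing's `genereraleasFloat` routine draws from a linear congruential generator (seed ← seed × 31415821 + 1, wrapping in a 32-bit register) and scales the result into [0,1). The arithmetic can be checked with a Python rendering; the function name here is mine, and the 32-bit wrap-around plus unsigned division are my reading of the assembly:

```python
# Constants taken from the listing's data section
A = 31415821      # multiplier (iNombre1)
M = 100000000     # first divisor (m)
D = 10000         # second divisor (m1)

def genere_aleas_float(state, rng):
    """One generator step; returns (new_state, float in [0,1))."""
    state = (state * A + 1) & 0xFFFFFFFF  # 32-bit register wrap-around
    r = state % M                         # remainder of division by 10^8
    alea = (r // D) * rng // D            # integer in [0, rng)
    return state, alea / rng

state = 1234567                           # iGraine, the initial seed
for _ in range(3):
    state, v = genere_aleas_float(state, 600)
    print(v)                              # always in [0, 1)
```

Because the quotient `r // D` lies in [0, 9999], the scaled integer is strictly below `rng`, so the returned float never reaches 1.0.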
=={{header|Delphi}}==
{{libheader| System.SysUtils}}
{{libheader| System.Classes}}
{{libheader| Vcl.Graphics}}
{{libheader| Vcl.Forms}}
{{libheader| Vcl.ExtCtrls}}
{{libheader| System.UITypes}}
{{Trans|Java}}
<syntaxhighlight lang="delphi">
unit main;

interface

uses
  System.SysUtils, System.Classes, Vcl.Graphics, Vcl.Forms, Vcl.ExtCtrls,
  System.UITypes;

type
  TTrainer = class
    inputs: TArray<Double>;
    answer: Integer;
    constructor Create(x, y: Double; a: Integer);
  end;

  TForm1 = class(TForm)
    tmr1: TTimer;
    procedure FormCreate(Sender: TObject);
    procedure FormPaint(Sender: TObject);
    procedure tmr1Timer(Sender: TObject);
  private
    procedure Perceptron(n: Integer);
    function FeedForward(inputs: TArray<Double>): Integer;
    procedure Train(inputs: TArray<Double>; desired: Integer);
  end;

var
  Form1: TForm1;
  Training: TArray<TTrainer>;
  weights: TArray<Double>;
  c: Double = 0.00001;
  count: Integer = 0;

implementation

{$R *.dfm}

{ TTrainer }

constructor TTrainer.Create(x, y: Double; a: Integer);
begin
  inputs := [x, y, 1];
  answer := a;
end;

function f(x: Double): Double;
begin
  Result := x * 0.7 + 40;
end;

function activateFn(s: Double): Integer;
begin
  if (s > 0) then
    Result := 1
  else
    Result := -1;
end;

procedure TForm1.FormPaint(Sender: TObject);
const
  DotColor: array[Boolean] of TColor = (clRed, clBlue);
var
  i, x, y, guess: Integer;
begin
  with Canvas do
  begin
    Brush.Color := TColors.Whitesmoke;
    FillRect(ClipRect);

    x := ClientWidth;
    y := Trunc(f(x));
    Pen.Width := 3;
    Pen.Color := TColors.Orange;
    Pen.Style := TPenStyle.psSolid;
    MoveTo(0, Trunc(f(0)));
    LineTo(x, y);
    Train(Training[count].inputs, Training[count].answer);
    count := (count + 1) mod Length(Training);

    Pen.Width := 1;
    Pen.Color := TColors.Black;

    for i := 0 to count do
    begin
      guess := FeedForward(Training[i].inputs);
      x := Trunc(Training[i].inputs[0] - 4);
      y := Trunc(Training[i].inputs[1] - 4);

      Brush.Style := TBrushStyle.bsSolid;
      Pen.Style := TPenStyle.psClear;

      Brush.Color := DotColor[guess > 0];
      Ellipse(Rect(x, y, x + 8, y + 8));
    end;
  end;
end;

procedure TForm1.Perceptron(n: Integer);
const
  answers: array[Boolean] of Integer = (-1, 1);
var
  i, x, y, answer: Integer;
begin
  SetLength(weights, n);
  for i := 0 to High(weights) do
    weights[i] := Random * 2 - 1;

  for i := 0 to High(Training) do
  begin
    x := Trunc(Random() * ClientWidth);
    y := Trunc(Random() * ClientHeight);

    answer := answers[y < f(x)];

    Training[i] := TTrainer.Create(x, y, answer);
  end;
  tmr1.Enabled := True;
end;

procedure TForm1.tmr1Timer(Sender: TObject);
begin
  Invalidate;
end;

function TForm1.FeedForward(inputs: TArray<Double>): Integer;
var
  sum: Double;
  i: Integer;
begin
  Assert(Length(inputs) = Length(weights), 'weights and input length mismatch');
  sum := 0;
  for i := 0 to High(weights) do
    sum := sum + inputs[i] * weights[i];
  Result := activateFn(sum);
end;

procedure TForm1.Train(inputs: TArray<Double>; desired: Integer);
var
  guess: Integer;
  error: Double;
  i: Integer;
begin
  guess := FeedForward(inputs);
  error := desired - guess;
  for i := 0 to Length(weights) - 1 do
    weights[i] := weights[i] + c * error * inputs[i];
end;

procedure TForm1.FormCreate(Sender: TObject);
begin
  SetLength(Training, 2000);
  Perceptron(3);
end;

end.</syntaxhighlight>
Form settings (main.dfm)
<syntaxhighlight lang="delphi">
object Form1: TForm1
  ClientHeight = 360
  ClientWidth = 640
  DoubleBuffered = True
  OnCreate = FormCreate
  OnPaint = FormPaint
  object tmr1: TTimer
    Enabled = False
    Interval = 10
    OnTimer = tmr1Timer
  end
end
</syntaxhighlight>
{{out}}
[https://ibb.co/pX7QHLS]
 
=={{header|Forth}}==
{{works with|GNU Forth}}
Note: where the code below displays <code>[email protected]</code>, it should read <code>f&#64;</code>; that is an artifact of the wiki's e-mail obfuscation.
<syntaxhighlight lang="forth">require random.fs
here seed !

warnings off

( THE PERCEPTRON )

: randomWeight 2000 random 1000 - s>f 1000e f/ ;
: createPerceptron create dup , 0 ?DO randomWeight f, LOOP ;

variable arity
variable ^weights
variable ^inputs

: perceptron! dup @ arity ! cell+ ^weights ! ;
: inputs! ^inputs ! ;

0.0001e fconstant learningConstant
: activate 0e f> IF 1e ELSE -1e THEN ;

: feedForward
  ^weights @ ^inputs @ 0e
  arity @ 0 ?DO
    dup f@ float + swap
    dup f@ float + swap
    f* f+
  LOOP 2drop activate ;

: train
  feedForward f- learningConstant f*
  ^weights @ ^inputs @
  arity @ 0 ?DO
    fdup dup f@ f* float + swap
    dup f@ f+ dup f! float + swap
  LOOP 2drop fdrop ;

( THE TRAINER )

create point 0e f, 0e f, 1e f, \ x y bias

: x point ;
: y point float + ;
: randomX 640 random s>f ;
: randomY 360 random s>f ;

\ y = Ax + B
2e fconstant A
1e fconstant B

: randomizePoint
  randomY fdup y f!
  randomX fdup x f!
  A f* B f+ f< IF -1e ELSE 1e THEN ;

3 createPerceptron myPerceptron
variable trainings
10000 constant #rounds

: setup 0 ; \ success counter
: calculate s>f #rounds s>f f/ 100e f* ;
: report ." After " trainings @ . ." trainings: "
  calculate f. ." % accurate" cr ;
: check learningConstant f~ IF 1+ THEN ;
: evaluate randomizePoint feedForward check ;
: evaluate setup #rounds 0 ?DO evaluate LOOP report ;

: tally 1 trainings +! ;
: timesTrain 0 ?DO randomizePoint train tally LOOP ;

: initialize
  myPerceptron perceptron!
  point inputs!
  0 trainings ! ;
: go
  initialize evaluate
  1 timesTrain evaluate
  1 timesTrain evaluate
  1 timesTrain evaluate
  1 timesTrain evaluate
  1 timesTrain evaluate
  5 timesTrain evaluate
  10 timesTrain evaluate
  30 timesTrain evaluate
  50 timesTrain evaluate
  100 timesTrain evaluate
  300 timesTrain evaluate
  500 timesTrain evaluate ;

go bye</syntaxhighlight>
Example output:
<pre>After 0 trainings: 10.16 % accurate
After 1 trainings: 7.43 % accurate
After 2 trainings: 7.71 % accurate
After 3 trainings: 4.93 % accurate
After 4 trainings: 3.11 % accurate
After 5 trainings: 0.6 % accurate
After 10 trainings: 48.72 % accurate
After 20 trainings: 85.55 % accurate
After 50 trainings: 86.36 % accurate
After 100 trainings: 98.59 % accurate
After 200 trainings: 98.84 % accurate
After 500 trainings: 95.86 % accurate
After 1000 trainings: 99.8 % accurate</pre>
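The Forth entry's train-then-measure pattern (random points against the line y = Ax + B over a 640×360 area, learning constant 0.0001) can be sketched in Python as follows; the helper names and batch sizes here are illustrative, not taken from the Forth program:

```python
import random

LEARN_C = 0.0001
A, B = 2.0, 1.0                 # the target line y = Ax + B

weights = [random.uniform(-1.0, 1.0) for _ in range(3)]

def classify(pt):
    s = sum(w * x for w, x in zip(weights, pt))
    return 1 if s > 0 else -1

def random_point():
    x, y = random.uniform(0, 640), random.uniform(0, 360)
    return [x, y, 1.0], (1 if y >= A * x + B else -1)

def train_once():
    pt, answer = random_point()
    err = answer - classify(pt)
    for i in range(3):
        weights[i] += LEARN_C * err * pt[i]

def accuracy(rounds=10000):
    hits = sum(classify(p) == a for p, a in (random_point() for _ in range(rounds)))
    return 100.0 * hits / rounds

trainings = 0
for batch in (0, 5, 50, 500):
    for _ in range(batch):
        train_once()
    trainings += batch
    print(f"After {trainings} trainings: {accuracy():.2f} % accurate")
```

As in the Forth output, accuracy typically starts near chance and climbs as the cumulative number of training points grows; exact figures vary with the random weights and points.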
 
 
=={{header|FreeBASIC}}==
The code is by D.J.Peters (https://freebasic.net/forum/viewtopic.php?t=24778); I have just transcribed it.
<syntaxhighlight lang="freebasic">
Function rnd2 As Single
    Return Rnd()-Rnd()
End Function

Type Perceptron
    Declare Constructor(Byval n As Integer)
    Declare Function feedforward(Byval in As Single Ptr) As Integer
    Declare Function activate(Byval sum As Single) As Integer
    Declare Sub train(Byval in As Single Ptr, Byval uit As Integer)
    As Integer lastItem
    As Single Ptr weights
    As Single c = 0.01
End Type

Constructor Perceptron(Byval n As Integer)
    lastItem = n-1
    weights = New Single[n]
    For i As Integer = 0 To lastItem
        weights[i] = rnd2()
    Next i
End Constructor

Function Perceptron.feedforward(Byval in As Single Ptr) As Integer
    Dim As Single sum
    For i As Integer = 0 To lastItem
        sum += in[i] * weights[i]
    Next
    Return activate(sum)
End Function

Function Perceptron.activate(Byval sum As Single) As Integer
    Return Iif(sum>0, 1, -1)
End Function

Sub Perceptron.train(Byval in As Single Ptr, Byval uit As Integer)
    Dim As Integer gues = feedforward(in)
    Dim As Single error_ = uit - gues
    For i As Integer = 0 To lastItem
        weights[i] += c * error_ * in[i]
    Next
End Sub

Type Trainer
    Declare Constructor (Byval x As Single, Byval y As Single, Byval a As Integer)
    As Single inputs(2)
    As Integer answer
End Type

Constructor Trainer(Byval x As Single, Byval y As Single, Byval a As Integer)
    inputs(0) = x
    inputs(1) = y
    inputs(2) = 1.0
    answer = a
End Constructor

Function f(Byval x As Single) As Single
    Return 2 * x + 1
End Function

Const As Integer NTRAINERS = 2000
Const As Integer NWIDTH = 640
Const As Integer NHEIGHT = 360
Dim Shared As Perceptron Ptr ptron
Dim Shared As Trainer Ptr training(NTRAINERS-1)
Dim Shared As Integer count

Sub setup()
    count = 0
    Screenres NWIDTH, NHEIGHT
    ptron = New Perceptron(3)
    For i As Integer = 0 To NTRAINERS-1
        Dim As Single x = rnd2() * NWIDTH /2
        Dim As Single y = rnd2() * NHEIGHT/2
        Dim As Integer answer = 1
        If (y < f(x)) Then answer = -1
        training(i) = New Trainer(x , y , answer)
    Next i
End Sub

Sub drawit()
    ptron -> train(@training(count)->inputs(0), training(count)->answer)
    count = (count + 1) Mod NTRAINERS
    For i As Integer = 0 To count
        Dim As Integer gues = ptron->feedforward(@training(i)->inputs(0))
        If (gues > 0) Then
            Circle(NWIDTH/2+training(i)->inputs(0),NHEIGHT/2+training(i)->inputs(1)),8,8
        Else
            Circle(NWIDTH/2+training(i)->inputs(0),NHEIGHT/2+training(i)->inputs(1)),8,8,,,,f
        End If
    Next i
End Sub

setup()
While Inkey() = ""
    drawit()
    Sleep 100
Wend
</syntaxhighlight>
 
=={{header|Go}}==
<br>
This is based on the Java entry but just outputs the final image (as a .png file) rather than displaying its gradual build up. It also uses a different color scheme - blue and red circles with a black dividing line.
<syntaxhighlight lang="go">package main
 
import (
Line 127 ⟶ 1,442:
perc.draw(dc, 2000)
dc.SavePNG("perceptron.png")
}</syntaxhighlight>
 
=={{header|Java}}==
{{works with|Java|8}}
<syntaxhighlight lang="java">import java.awt.*;
import java.awt.event.ActionEvent;
import java.util.*;
Line 249 ⟶ 1,564:
});
}
}</syntaxhighlight>
 
=={{header|JavaScript}}==
Uses P5 lib.
<syntaxhighlight lang="javascript">
const EPOCH = 1500, TRAINING = 1, TRANSITION = 2, SHOW = 3;
 
Line 384 ⟶ 1,699:
}
}
</syntaxhighlight>
[[File:perceptronJS.png]]
 
Well, it seems I cannot upload an image :(
 
=={{header|jq}}==
'''Adapted from [[#Pascal|Pascal]] and [[#Wren|Wren]]'''
 
'''Works with jq, gojq, and jaq - the C, Go, and Rust implementations of jq'''
 
Since jq does not have a PRNG, the following uses an external source of entropy
and can be run in a bash or similar environment by:
<pre>
< /dev/urandom tr -cd '0-9' | fold -w 1 | JQ -Rrnc -f perceptron.jq
</pre>
 
where JQ represents one of the jq executables, and perceptron.jq is the following program.
 
To check the program, a set of random weights was generated
and used both for jq and for Wren. The results were the same, and
show that, at least in that specific case, the number of training runs (i.e. 5) is sufficient for
the perceptron to approximate the target function within the resolution
of the ASCII graphics.
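As an aside, the fixed-weight cross-check described above is easy to reproduce in other languages. The following Python sketch (an illustration only, not part of the jq entry; the weight vector is the one quoted below) evaluates the untrained perceptron on the same 20x20 grid:

```python
# Cross-check sketch: evaluate a perceptron with the fixed weight vector
# used to compare the jq and Wren outputs (quoted later in this entry).
weights = [0.49215609849927, 0.80317011428771, 0.7062026506222]

def feed_forward(inputs, ws):
    # the perceptron outputs 1 if the inner product is positive, else -1
    return 1 if sum(i * w for i, w in zip(inputs, ws)) > 0 else -1

# build the same 20x20 ASCII grid as showOutput (bias input is 1)
grid = "\n".join(
    "".join("#" if feed_forward([x, y, 1], weights) == 1 else "O"
            for x in range(-9, 11))
    for y in range(10, -10, -1))
print(grid)
```

Any implementation initialised with the same weights should print an identical grid before training.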
<syntaxhighlight lang="jq">
# The following can be omitted if using the C or Go implementations:
def range(a; b; c):
if a < b and c > 0 then a | while(. < b; .+c)
elif a > b and c < 0 then a | while(. > b; . + c)
else empty
end;
 
# Output: a prn in range(0;$n) where $n is `.`
def prn:
if . == 1 then 0
else . as $n
| ([1, (($n-1)|tostring|length)]|max) as $w
| [limit($w; inputs)] | join("") | tonumber
| if . < $n then . else ($n | prn) end
end;
 
def randFloat: 1000 | prn / 999;
 
def inner_product($x; $y):
if ($x|length) != ($y|length) then "inner_product" | error else . end
| reduce range(0; $x|length) as $i (0; . + $x[$i] * $y[$i]);
 
# the function being learned is f(x) = 2x + 1
def targetOutput(a; b):
if (a * 2 + 1 < b) then 1 else -1 end;
 
def showTargetOutput:
reduce range(10; -10; -1) as $y ("";
reduce range(-9; 11) as $x (.;
if targetOutput($x; $y) == 1
then . + "#"
else . + "O"
end )
| . + "\n" );
 
# output: an array of weights
def randomWeights($n):
reduce range(0; $n) as $i ([]; .[$i] = randFloat * 2 - 1 )
# Or, for testing:
# [0.49215609849927, 0.80317011428771, 0.7062026506222]
;
 
# The perceptron outputs 1 if the inner product of the
# two arrays is positive, else -1
def feedForward($inputs; $weights):
if inner_product($inputs; $weights) > 0 then 1 else -1 end;
def showOutput($ws):
reduce range(10; -10; -1) as $y ("";
reduce range(-9; 11) as $x (.;
# bias is 1
if feedForward([$x, $y, 1]; $ws) == 1
then . + "#"
else . + "O"
end )
| . + "\n" );
 
# input: {weights}
# output: updated weights
def train(runs):
(.weights|length) as $nw
| .inputs = [range(0; $nw)|0]
| .inputs[-1] = 1 # bias is 1
| reduce range(0; runs) as $i (.;
reduce range(10; -10; -1) as $y (.;
.inputs[1] = $y
| reduce range(-9; 11) as $x (.;
.inputs[0] = $x
| (targetOutput($x; $y) - feedForward(.inputs; .weights)) as $error
| reduce range(0; $nw) as $j (.;
# 0.01 is the learning constant
.weights[$j] += $error * .inputs[$j] * 0.01 ) ) ) ) ;
 
def task:
"Target output for the function f(x) = 2x + 1:",
showTargetOutput,
"Output from untrained perceptron:",
({weights: randomWeights(3)}
| showOutput(.weights),
(train(1)
| "Output from perceptron after 1 training run:",
showOutput(.weights),
(train(99)
| "Output from perceptron after 5 training runs:",
showOutput(.weights) ) ) ) ;
 
task
</syntaxhighlight>
{{out}}
The output is the same as Wren's when using random weights equal to:
<pre>
[0.49215609849927, 0.80317011428771, 0.7062026506222]
</pre>
 
=={{header|Julia}}==
<syntaxhighlight lang="julia"># file module.jl
 
module SimplePerceptrons
 
# default activation function
step(x) = x > 0 ? 1 : -1
 
mutable struct Perceptron{T, F}
weights::Vector{T}
lr::T
activate::F
end
 
Perceptron{T}(n::Integer, lr = 0.01, f::Function = step) where T =
Perceptron{T, typeof(f)}(2 .* rand(n + 1) .- 1, lr, f)
Perceptron(args...) = Perceptron{Float64}(args...)
 
@views predict(p::Perceptron, x::AbstractVector) = p.activate(p.weights[1] + x' * p.weights[2:end])
@views predict(p::Perceptron, X::AbstractMatrix) = p.activate.(p.weights[1] .+ X * p.weights[2:end])
 
function train!(p::Perceptron, X::AbstractMatrix, y::AbstractVector; epochs::Integer = 100)
for _ in Base.OneTo(epochs)
yhat = predict(p, X)
err = y .- yhat
ΔX = p.lr .* err .* X
for ind in axes(ΔX, 1)
p.weights[1] += err[ind]
p.weights[2:end] .+= ΔX[ind, :]
end
end
return p
end
 
accuracy(p, X::AbstractMatrix, y::AbstractVector) = count(y .== predict(p, X)) / length(y)
 
end # module SimplePerceptrons
</syntaxhighlight>
 
<syntaxhighlight lang="julia"># file _.jl
 
const SP = include("module.jl")
 
p = SP.Perceptron(2, 0.1)
 
a, b = 0.5, 1
X = rand(1000, 2)
y = map(x -> x[2] > a + b * x[1] ? 1 : -1, eachrow(X))
 
# Accuracy
@show SP.accuracy(p, X, y)
 
# Train
SP.train!(p, X, y, epochs = 1000)
 
ahat, bhat = p.weights[1] / p.weights[2], -p.weights[3] / p.weights[2]
 
using Plots
 
scatter(X[:, 1], X[:, 2], markercolor = map(x -> x == 1 ? :red : :blue, y))
Plots.abline!(b, a, label = "real line", linecolor = :red, linewidth = 2)
 
SP.train!(p, X, y, epochs = 1000)
ahat, bhat = p.weights[1] / p.weights[2], -p.weights[3] / p.weights[2]
Plots.abline!(bhat, ahat, label = "predicted line")
</syntaxhighlight>
 
=={{header|Kotlin}}==
{{trans|Java}}
<syntaxhighlight lang="scala">// version 1.1.4-3
 
import java.awt.*
}
}
}</syntaxhighlight>
 
=={{header|Lua}}==
Simple implementation allowing for any number of inputs (in this case, just 1), testing of the Perceptron, and training.
<syntaxhighlight lang="lua">local Perceptron = {}
Perceptron.__index = Perceptron
 
print(i..":", node:test({i}))
end
</syntaxhighlight>
{{out}}
<pre>Untrained results:
2: 5
</pre>
 
=={{header|Nim}}==
{{trans|Pascal}}
<syntaxhighlight lang="nim">import random
 
type
IntArray = array[0..2, int]
FloatArray = array[0..2, float]
 
func targetOutput(a, b: int): int =
## The function the perceptron will be learning is f(x) = 2x + 1.
if a * 2 + 1 < b: 1 else: -1
 
proc showTargetOutput =
for y in countdown(10, - 9):
for x in countup(-9, 10):
stdout.write if targetOutput(x, y) == 1: '#' else: 'O'
echo()
echo()
 
proc randomWeights(ws: var FloatArray) =
## Start with random weights.
randomize()
for w in ws.mitems:
w = rand(1.0) * 2 - 1
 
func feedForward(ins: IntArray; ws: FloatArray): int =
## The perceptron outputs 1 if the sum of its inputs multiplied by
## its input weights is positive, otherwise -1.
var sum = 0.0
for i in 0..ins.high:
sum += ins[i].toFloat * ws[i]
result = if sum > 0: 1 else: -1
 
proc showOutput(ws: FloatArray) =
var inputs: IntArray
inputs[2] = 1 # bias.
for y in countdown(10, -9):
inputs[1] = y
for x in countup(-9, 10):
inputs[0] = x
stdout.write if feedForward(inputs, ws) == 1: '#' else: 'O'
echo()
echo()
 
proc train(ws: var FloatArray; runs: int) =
var inputs: IntArray
inputs[2] = 1 # bias.
for _ in 1..runs:
for y in countdown(10, -9):
inputs[1] = y
for x in countup(-9, 10):
inputs[0] = x
let error = targetOutput(x, y) - feedForward(inputs, ws)
for i in 0..2:
ws[i] += float(error * inputs[i]) * 0.01 # 0.01 is the learning constant.
 
when isMainModule:
var weights: FloatArray
echo "Target output for the function f(x) = 2x + 1:"
showTargetOutput()
randomWeights(weights)
echo "Output from untrained perceptron:"
showOutput(weights)
train(weights, 1)
echo "Output from perceptron after 1 training run:"
showOutput(weights)
train(weights, 4)
echo "Output from perceptron after 5 training runs:"
showOutput(weights)</syntaxhighlight>
 
{{out}}
<pre>Target output for the function f(x) = 2x + 1:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
 
Output from untrained perceptron:
OOOO################
OOOO################
OOOOO###############
OOOOO###############
OOOOOO##############
OOOOOO##############
OOOOOOO#############
OOOOOOO#############
OOOOOOOO############
OOOOOOOO############
OOOOOOOOO###########
OOOOOOOOO###########
OOOOOOOOOO##########
OOOOOOOOOO##########
OOOOOOOOOOO#########
OOOOOOOOOOO#########
OOOOOOOOOOOO########
OOOOOOOOOOOOO#######
OOOOOOOOOOOOO#######
OOOOOOOOOOOOOO######
 
Output from perceptron after 1 training run:
####################
###################O
##################OO
#################OOO
#################OOO
################OOOO
###############OOOOO
##############OOOOOO
#############OOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
 
Output from perceptron after 5 training runs:
################OOOO
################OOOO
###############OOOOO
##############OOOOOO
##############OOOOOO
#############OOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
###OOOOOOOOOOOOOOOOO</pre>
 
=={{header|Pascal}}==
This is a text-based implementation, using a 20x20 grid (just like the original Mark 1 Perceptron had). The rate of improvement drops quite markedly as you increase the number of training runs.
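The diminishing returns noted above can be measured directly. This Python sketch (an illustration, not part of the Pascal entry; the seed is arbitrary) uses the same 20x20 grid, target function f(x) = 2x + 1 and learning constant 0.01, and reports grid accuracy after 1 and after 5 cumulative training runs:

```python
import random

random.seed(42)  # arbitrary fixed seed so the run is repeatable

def target(x, y):
    # the function being learned is f(x) = 2x + 1
    return 1 if 2 * x + 1 < y else -1

def feed_forward(ins, ws):
    # outputs 1 if the weighted sum of the inputs is positive, else -1
    return 1 if sum(i * w for i, w in zip(ins, ws)) > 0 else -1

def train(ws, runs, c=0.01):  # c is the learning constant
    for _ in range(runs):
        for y in range(10, -10, -1):
            for x in range(-9, 11):
                ins = [x, y, 1]  # bias input is 1
                err = target(x, y) - feed_forward(ins, ws)
                for j in range(3):
                    ws[j] += err * ins[j] * c

def accuracy(ws):
    # fraction of the 20x20 grid classified correctly
    pts = [(x, y) for y in range(10, -10, -1) for x in range(-9, 11)]
    hits = sum(feed_forward([x, y, 1], ws) == target(x, y) for x, y in pts)
    return hits / len(pts)

ws = [random.uniform(-1, 1) for _ in range(3)]
train(ws, 1)
a1 = accuracy(ws)   # after 1 training run
train(ws, 4)
a5 = accuracy(ws)   # after 5 training runs in total
print(f"after 1 run: {a1:.1%}, after 5 runs: {a5:.1%}")
```

Typically the jump from random guessing to the first run is large, while the gain from run 1 to run 5 is much smaller.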
<syntaxhighlight lang="pascal">program Perceptron;
 
(*
writeln( 'Output from perceptron after 5 training runs:' );
showOutput( weights )
end.</syntaxhighlight>
{{out}}
<pre>Target output for the function f(x) = 2x + 1:
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO</pre>
 
=={{header|Phix}}==
{{libheader|Phix/pGUI}}
Interactive GUI version. Select one of five lines, set the number of points, learning constant,
learning rate, and max iterations. Plots accuracy vs. iterations and displays the training data
in blue/black=above/incorrect and green/red=below/incorrect [all blue/green = 100% accurate].
<syntaxhighlight lang="phix">-- demo\rosetta\Perceptron.exw
--
-- The learning curve turned out more haphazard than I imagined, and adding a
-- non-linear line to f() (case 5) was perhaps not such a great idea given how
-- much it sometimes struggles with some of the other straight lines anyway.
--
include pGUI.e
--#withtype Ihandle
--#withtype Ihandles
--#withtype cdCanvas
 
constant help_txt = """
A perceptron is the simplest possible neural network, consisting of just one neuron
that we train to recognise whether a point is above or below a given straight line.
NB: It would probably be unwise to overly assume that this could easily be adapted
to anything more complex, or actually useful. It is just a basic introduction, but
you have to start somewhere. What is interesting is that ultimately the neuron is
just three numbers, plus a bucket-load of training gumpf.
 
The left hand panel allows settings to be changed, in the middle we plot the rate of
learning, and on the right we show the training data colour coded as above/below and
correct/incorrect (blue/black=above/incorrect, green/red=below/incorrect). What you
want to see is all blue/green, with no black/red.
 
You can change the line algorithm (four straight and one curved that it is not meant
to be able to cope with), the number of points (size of training data), the learning
constant, learning rate (iterations/second) and the maximum number of iterations.
Note that training automatically stops once 100% accuracy is reached (since the error
is then always zero, no further changes would ever occur). Also note that a restart
is triggered when any setting is changed, not just when the restart button is pressed.
 
The learning curve was expected to start at 50% (random chance of being right) and
gradually improve towards 100%, except when the non-linear line was selected. It
turned out far more haphazard than I thought it would. Originally it allowed up to
10,000,000 iterations, but it rarely improved much beyond 1,000,000."""
 
function help_cb(Ihandln /*help*/)
IupMessage("Perceptron",help_txt)
return IUP_DEFAULT
end function
 
Ihandle dlg, plot, canvas, timer,
iteration, accuracy, w1, w2, w3
cdCanvas cddbuffer, cdcanvas
 
integer line_alg = 1
integer points = 2000,
learning_rate = 10000,
max_iterations = 1_000_000,
total_iterations = 0
atom learning_constant = 0.00001
 
enum WEIGHTS, -- The actual neuron (just 3 numbers)
TRAINING -- training data/results, variable length
enum INPUTS, ANSWER -- contents of [TRAINING]
-- note that length(inputs[i]) must = length(weights)
 
sequence perceptron = {},
last_wh -- (recreate "" on resize)
 
function activate(atom t)
return iff(t>0?+1:-1)
end function
 
function f(atom x)
switch line_alg
case 1: return x*0.7+40
case 2: return 300-0.3*x
case 3: return x*0.75
case 4: return 2*x+1
case 5: return x/2+sin(x/100)*100+100 -- (fail)
end switch
end function
 
procedure new_perceptron(integer n)
sequence weights := repeat(0, n)
for i=1 to n do
weights[i] = rnd()*2 - 1
end for
sequence training := repeat(0,points)
integer {w,h} = last_wh
for i=1 to points do
integer x := rand(w),
y := rand(h),
answer := activate(y-f(x))
sequence inputs = {x, y, 1}
-- aside: inputs is {x,y,1}, rather than {x,y} because an
-- input of {0,0} could only ever yield 0, whereas
-- {0,0,1} can yield a non-zero guess: weights[3].
training[i] = {inputs, answer} -- {INPUTS, ANSWER}
end for
perceptron = {weights, training} -- {WEIGHTS, TRAINING}
end procedure
function feed_forward(sequence inputs)
if length(inputs)!=length(perceptron[WEIGHTS]) then
throw("weights and input length mismatch, program terminated")
end if
atom total := 0.0
for i=1 to length(inputs) do
total += inputs[i] * perceptron[WEIGHTS][i]
end for
return activate(total)
end function
procedure train(sequence inputs, integer desired)
integer guess := feed_forward(inputs),
error := desired - guess
for i=1 to length(perceptron[WEIGHTS]) do
perceptron[WEIGHTS][i] += learning_constant * error * inputs[i]
end for
end procedure
function draw(bool bDraw=true)
-- (if bDraw is false, we just want the "correct" count)
integer correct = 0
atom x, y
for i=1 to points do
{sequence inputs, integer answer} = perceptron[TRAINING][i]
integer guess := feed_forward(inputs)
correct += (guess=answer)
if bDraw then
{x,y} = inputs
-- blue/black=above/incorrect, green/red=below/incorrect
integer clr = iff(guess=answer?iff(guess>0?CD_BLUE:CD_GREEN)
:iff(guess>0?CD_BLACK:CD_RED))
cdCanvasSetForeground(cddbuffer, clr)
cdCanvasCircle(cddbuffer, x, y, 8)
end if
end for
if bDraw then
cdCanvasSetForeground(cddbuffer, CD_BLACK)
x := last_wh[1]
y := f(x)
if line_alg=5 then
-- non-linear so (crudely) draw in little segments
for i=0 to x by 20 do
cdCanvasLine(cddbuffer,i,f(i),i+20,f(i+20))
end for
else
cdCanvasLine(cddbuffer,0,f(0),x,y)
end if
end if
return correct
end function
bool re_plot = true
atom plot0
sequence plotx = repeat(0,19),
ploty = repeat(0,19)
integer imod = 1, -- keep every 1, then 10, then 100, ...
pidx = 1
 
function restart_cb(Ihandln /*restart*/)
last_wh = IupGetIntInt(canvas, "DRAWSIZE")
new_perceptron(3)
imod = 1
pidx = 1
total_iterations = 0
plot0 = (draw(false)/points)*100
re_plot = true
IupSetInt(timer,"RUN",1)
return IUP_DEFAULT
end function
 
function redraw_cb(Ihandle /*ih*/, integer /*posx*/, integer /*posy*/)
if perceptron={}
or last_wh!=IupGetIntInt(canvas, "DRAWSIZE") then
{} = restart_cb(NULL)
end if
cdCanvasActivate(cddbuffer)
cdCanvasClear(cddbuffer)
integer correct = draw()
cdCanvasFlush(cddbuffer)
 
if re_plot then
re_plot = false
IupSetAttribute(plot, "CLEAR", NULL)
IupPlotBegin(plot)
IupPlotAdd(plot, 0, plot0)
for i=1 to pidx-1 do
IupPlotAdd(plot, plotx[i], ploty[i])
end for
{} = IupPlotEnd(plot)
IupSetAttribute(plot, "REDRAW", NULL)
end if
IupSetStrAttribute(iteration,"TITLE","iteration: %d",{total_iterations})
IupSetStrAttribute(w1,"TITLE","%+f",{perceptron[WEIGHTS][1]})
IupSetStrAttribute(w2,"TITLE","%+f",{perceptron[WEIGHTS][2]})
IupSetStrAttribute(w3,"TITLE","%+f",{perceptron[WEIGHTS][3]})
IupSetStrAttribute(accuracy,"TITLE","accuracy: %.4g%%",{(correct/points)*100})
IupRefresh({iteration,w1,w2,w3,accuracy}) -- (force label resize)
if correct=points then
IupSetInt(timer,"RUN",0) -- stop at 100%
end if
return IUP_DEFAULT
end function
 
function map_cb(Ihandle ih)
cdcanvas = cdCreateCanvas(CD_IUP, ih)
cddbuffer = cdCreateCanvas(CD_DBUFFER, cdcanvas)
cdCanvasSetBackground(cddbuffer, CD_PARCHMENT)
return IUP_DEFAULT
end function
 
function valuechanged_cb(Ihandle ih)
string name = IupGetAttribute(ih, "NAME")
integer v = IupGetInt(ih, "VALUE")
switch name
case "line": line_alg = v
case "points": points = power(10,v)
case "learn": learning_constant = power(10,-v)
case "rate": learning_rate = power(10,v-1)
case "max": max_iterations = power(10,v)
end switch
{} = restart_cb(NULL)
return IUP_DEFAULT
end function
 
function timer_cb(Ihandle /*timer*/)
for i=1 to min(learning_rate,max_iterations) do
total_iterations += 1
integer c = mod(total_iterations,points)+1
train(perceptron[TRAINING][c][INPUTS], perceptron[TRAINING][c][ANSWER])
if mod(total_iterations,imod)=0 then
-- save 1,2..10, then 20,30,..100, then 200,300,..1000, etc
re_plot = true
plotx[pidx] = total_iterations
ploty[pidx] = (draw(false)/points)*100
if pidx=10 or pidx=19 then
if pidx=19 then
-- drop (eg) 1,2,..9, replace with 10,20,..90,
-- next time replace 10,20..90 with 100,200..900, etc
plotx[1..10] = plotx[10..19]
ploty[1..10] = ploty[10..19]
end if
imod *= 10
pidx = 11
else
pidx += 1
end if
end if
end for
if total_iterations>=max_iterations then
IupSetInt(timer,"RUN",0)
end if
IupUpdate(canvas)
return IUP_IGNORE
end function
 
function esc_close(Ihandle /*ih*/, atom c)
if c=K_ESC then return IUP_CLOSE end if
if c=K_F1 then return help_cb(NULL) end if
if c=K_F5 then return restart_cb(NULL) end if
return IUP_CONTINUE
end function
 
function settings(string lname, name, sequence opts, integer v=1)
Ihandle lbl = IupLabel(lname,"PADDING=0x4"),
list = IupList("NAME=%s, DROPDOWN=YES",{name}),
hbox = IupHbox({lbl,IupFill(),list})
for i=1 to length(opts) do
IupSetAttributeId(list,"",i,opts[i])
end for
IupSetInt(list,"VISIBLEITEMS",length(opts)+1)
IupSetInt(list,"VALUE",v)
IupSetCallback(list, "VALUECHANGED_CB", Icallback("valuechanged_cb"));
return hbox
end function
 
function sep()
return IupLabel("","SEPARATOR=HORIZONTAL")
end function
 
procedure main()
IupOpen()
IupControlsOpen()
 
Ihandle settings_lbl = IupHbox({IupFill(),IupLabel("Settings"),IupFill()}),
line = settings("line","line",{"x*0.7 + 40","300 - 0.3*x","x*0.75","2*x + 1","x/2+sin(x/100)*100+100"}),
points = settings("number of points","points",{"10","100","1000","10000"},3),
learn = settings("learning constant","learn",{"0.1","0.01","0.001","0.0001","0.00001"},5),
rate = settings("learning rate","rate",{"1/s","10/s","100/s","1000/s","10000/s"},5),
maxiter = settings("max iterations","max",{"10","100","1000","10,000","100,000","1,000,000"},6),
restart = IupButton("Restart (F5)", "ACTION", Icallback("restart_cb")),
helpbtn = IupButton("Help (F1)", "ACTION", Icallback("help_cb")),
buttons = IupHbox({restart,IupFill(),helpbtn})
 
iteration = IupLabel("iteration: 1")
w1 = IupLabel("1")
w2 = IupLabel("2")
w3 = IupLabel("3")
Ihandle weights = IupHbox({IupLabel("weights: ","PADDING=0x4"),IupVbox({w1,w2,w3})})
accuracy = IupLabel("accuracy: 12.34%")
 
Ihandle vbox = IupVbox({settings_lbl, sep(),
line, sep(), points, sep(), learn, sep(),
rate, sep(), maxiter, sep(), buttons, sep(),
IupHbox({iteration}), weights, IupHbox({accuracy})})
IupSetAttribute(vbox, "GAP", "4");
 
plot = IupPlot("MENUITEMPROPERTIES=Yes")
IupSetAttribute(plot, "TITLE", "Learning Curve");
IupSetAttribute(plot, "TITLEFONTSIZE", "10");
IupSetAttribute(plot, "TITLEFONTSTYLE", "ITALIC");
IupSetAttribute(plot, "GRIDLINESTYLE", "DOTTED");
IupSetAttribute(plot, "GRID", "YES");
IupSetAttribute(plot, "AXS_XLABEL", "iterations");
IupSetAttribute(plot, "AXS_YLABEL", "% correct");
IupSetAttribute(plot, "AXS_XFONTSTYLE", "ITALIC");
IupSetAttribute(plot, "AXS_YFONTSTYLE", "ITALIC");
IupSetAttribute(plot, "AXS_XTICKNUMBER", "No");
IupSetAttribute(plot, "AXS_YAUTOMIN", "No");
IupSetAttribute(plot, "AXS_YAUTOMAX", "No");
IupSetInt(plot, "AXS_YMIN", 0)
IupSetInt(plot, "AXS_YMAX", 100)
 
canvas = IupCanvas(NULL)
IupSetAttribute(canvas, "RASTERSIZE", "640x360") -- initial size
IupSetCallback(canvas, "MAP_CB", Icallback("map_cb"))
IupSetCallback(canvas, "ACTION", Icallback("redraw_cb"))
 
Ihandle hbox = IupHbox({vbox, plot, canvas},"MARGIN=4x4, GAP=10")
dlg = IupDialog(hbox);
IupSetCallback(dlg, "K_ANY", Icallback("esc_close"))
IupSetAttribute(dlg, "TITLE", "Perceptron")
IupMap(dlg)
IupSetAttribute(canvas, "RASTERSIZE", NULL) -- release limitation
IupShowXY(dlg,IUP_CENTER,IUP_CENTER)
timer = IupTimer(Icallback("timer_cb"), 100) -- (was 1 sec, now 0.1s)
IupMainLoop()
IupClose()
end procedure
main()</syntaxhighlight>
 
=={{header|Python}}==
{{works with|Python|3}}
<syntaxhighlight lang="python">import random
 
TRAINING_LENGTH = 2000
 
class Perceptron:
'''Simple one neuron simulated neural network'''
def __init__(self,n):
self.c = .01
self.weights = [random.uniform(-1.0, 1.0) for _ in range(n)]
 
def feed_forward(self, inputs):
weighted_inputs = []
for i in range(len(inputs)):
weighted_inputs.append(inputs[i] * self.weights[i])
return self.activate(sum(weighted_inputs))
 
def activate(self, value):
return 1 if value > 0 else -1
 
def train(self, inputs, desired):
guess = self.feed_forward(inputs)
error = desired - guess
for i in range(len(inputs)):
self.weights[i] += self.c * error * inputs[i]
class Trainer():
''' '''
def __init__(self, x, y, a):
self.inputs = [x, y, 1]
self.answer = a
 
def F(x):
return 2 * x + 1
 
if __name__ == "__main__":
ptron = Perceptron(3)
training = []
for i in range(TRAINING_LENGTH):
x = random.uniform(-10,10)
y = random.uniform(-10,10)
answer = 1
if y < F(x): answer = -1
training.append(Trainer(x,y,answer))
result = []
for y in range(-10,10):
temp = []
for x in range(-10,10):
if ptron.feed_forward([x,y,1]) == 1:
temp.append('^')
else:
temp.append('.')
result.append(temp)
print('Untrained')
for row in result:
print(''.join(v for v in row))
 
for t in training:
ptron.train(t.inputs, t.answer)
result = []
for y in range(-10,10):
temp = []
for x in range(-10,10):
if ptron.feed_forward([x,y,1]) == 1:
temp.append('^')
else:
temp.append('.')
result.append(temp)
print('Trained')
for row in result:
print(''.join(v for v in row))</syntaxhighlight>
{{out}}
<pre>
Untrained
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^...
^^^^^^^^^^^^^.......
^^^^^^^^............
^^^^................
....................
....................
....................
....................
....................
....................
....................
....................
Trained
^^^^^...............
^^^^^...............
^^^^^^..............
^^^^^^..............
^^^^^^^.............
^^^^^^^.............
^^^^^^^^............
^^^^^^^^............
^^^^^^^^^...........
^^^^^^^^^^..........
^^^^^^^^^^..........
^^^^^^^^^^^.........
^^^^^^^^^^^.........
^^^^^^^^^^^^........
^^^^^^^^^^^^........
^^^^^^^^^^^^^.......
^^^^^^^^^^^^^.......
^^^^^^^^^^^^^^......
^^^^^^^^^^^^^^......
^^^^^^^^^^^^^^^.....</pre>
 
=={{header|Racket}}==
{{trans|Java}}
<syntaxhighlight lang="racket">#lang racket
(require 2htdp/universe
2htdp/image)
(big-bang the-demo (to-draw draw-demo) (on-tick tick-handler)))
(module+ main (demo))</syntaxhighlight>
 
Run it and see the image for yourself, I can't get it onto RC!
 
=={{header|Raku}}==
{{trans|Go}}
<syntaxhighlight lang="raku" line># 20201116 Raku programming solution
 
use MagickWand;
 
our ( \c, \runs ) = 0.00001, 2000 ;
 
class Trainer { has ( @.inputs, $.answer ) is rw }
 
sub linear(\x) { return x*0.7 + 40 }
 
class Perceptron {
has ( @.weights, Trainer @.training ) is rw ;
 
submethod BUILD(:n($n), :w($w), :h($h)) {
@!weights = [ rand*2-1 xx ^$n ];
@!training = (^runs).map: {
my (\x,\y) = rand*$w , rand*$h ;
my \a = y < linear(x) ?? 1 !! -1;
Trainer.new: inputs => (x,y,1), answer => a
}
}
 
method feedForward(@inputs) {
die "weights and input length mismatch" if +@inputs != +self.weights;
return ( sum( @inputs »*« self.weights ) > 0 ) ?? 1 !! -1
}
 
method train(@inputs, \desired) {
self.weights »+«= @inputs »*» (c*(desired - self.feedForward(@inputs)))
}
 
method draw(\img) {
for ^runs { self.train(self.training[$_].inputs, self.training[$_].answer) }
my $y = linear(my $x = img.width) ;
img».&{ .stroke-width(3) or .stroke('black') or .fill('none') } # C returns
img.draw-line(0.0, linear(0), $x, $y);
img.stroke-width( 1 );
for ^runs {
my $guess = self.feedForward(self.training[$_].inputs);
($x, $y) = self.training[$_].inputs[0,1] »-» 4;
$guess > 0 ?? img.stroke( 'blue' ) !! img.stroke( 'red' );
img.circle( $x, $y, $x+8, $y );
}
}
}
 
my ($w, $h) = 640, 360;
my $perc = Perceptron.new: n => 3, w => $w, h => $h;
my $o = MagickWand.new or die;
$o.create( $w, $h, "white" );
$perc.draw($o);
$o.write('./perceptron.png') or die</syntaxhighlight>
 
=={{header|REXX}}==
{{trans|Java}}
<syntaxhighlight lang="rexx">/* REXX */
Call init
Call time 'R'
y.i=nextDouble()*height
End
Return</syntaxhighlight>
{{out}}
<pre>Point x f(x) r y ff ok zz
=={{header|Scala}}==
===Java Swing Interoperability===
<syntaxhighlight lang="scala">import java.awt._
import java.awt.event.ActionEvent
 
})
 
}</syntaxhighlight>
 
=={{header|Scheme}}==
<syntaxhighlight lang="scheme">(import (scheme base)
(scheme case-lambda)
(scheme write)
", percent correct is "
(number->string (perceptron 'test test-set))
"\n"))))</syntaxhighlight>
{{out}}
<pre>#(-0.5914540100624854 1.073343782042039 -0.29780862758499393)
Trained on 19000, percent correct is 99.2
Trained on 20000, percent correct is 100.0</pre>
 
=={{header|Smalltalk}}==
{{works with|GNU Smalltalk}}
<syntaxhighlight lang="smalltalk">Number extend [
 
activate
[^self > 0 ifTrue: [1] ifFalse: [-1]]
]
 
Object subclass: Perceptron [
 
| weights |
 
feedForward: inputArray
[^(self sumOfWeighted: inputArray) activate]
 
train: inputArray desire: expected
[| actual error |
actual := self feedForward: inputArray.
error := 0.0001 * (expected - actual).
weights := weights
with: inputArray
collect: [:weight :input | weight + (error * input)]]
 
sumOfWeighted: inputArray
[^(self weighted: inputArray)
inject: 0
into: [:each :sum | each + sum]]
 
weighted: inputArray
[^weights
with: inputArray
collect: [:weight :input | weight * input]]
 
Perceptron class >> new: arity
[^self basicNew
initialize: arity;
yourself]
 
initialize: arity
[weights := 1
to: arity
collect: [:x | self randomWeight]]
 
randomWeight
[^(Random between: -1000 and: 1000) / 1000]
]
 
Perceptron class extend [
 
| perceptron trainings input expected actual |
 
evaluationSamples := 100000.
 
initializeTest
[perceptron := self new: 3.
input := Array new: 3.
trainings := 0.
input at: 1 put: 1. "Bias"]
 
randomizeSample
[| x y |
x := Random between: 0 and: 640-1.
y := Random between: 0 and: 360-1.
expected := (y >= (2*x+1)) ifTrue: [1] ifFalse: [-1].
input at: 2 put: x.
input at: 3 put: y]
 
test
[self
initializeTest; evaluate;
train: 1; evaluate;
train: 1; evaluate;
train: 1; evaluate;
train: 1; evaluate;
train: 1; evaluate;
train: 5; evaluate;
train: 10; evaluate;
train: 30; evaluate;
train: 50; evaluate;
train: 100; evaluate;
train: 300; evaluate;
train: 500; evaluate]
 
evaluate
[| hits |
hits := 0.
evaluationSamples timesRepeat:
[self randomizeSample.
expected = (perceptron feedForward: input)
ifTrue: [hits := hits + 1]].
Transcript
display: 'After ';
display: trainings;
display: ' trainings: ';
display: (hits / evaluationSamples * 100) asFloat;
display: ' % accuracy';
nl]
 
train: anInteger
[anInteger timesRepeat:
[self randomizeSample.
perceptron
train: input
desire: expected.
trainings := trainings + 1]]
]
 
Perceptron test.</syntaxhighlight>
Example output:
<pre>After 0 trainings: 14.158 % accuracy
After 1 trainings: 14.018 % accuracy
After 2 trainings: 14.19 % accuracy
After 3 trainings: 14.049 % accuracy
After 4 trainings: 14.029 % accuracy
After 5 trainings: 14.105 % accuracy
After 10 trainings: 20.39 % accuracy
After 20 trainings: 57.08 % accuracy
After 50 trainings: 92.998 % accuracy
After 100 trainings: 98.988 % accuracy
After 200 trainings: 98.055 % accuracy
After 500 trainings: 99.777 % accuracy
After 1000 trainings: 98.523 % accuracy</pre>
 
=={{header|Wren}}==
{{trans|Pascal}}
<syntaxhighlight lang="wren">import "random" for Random
 
var rand = Random.new()
 
// the function being learned is f(x) = 2x + 1
var targetOutput = Fn.new { |a, b| (a * 2 + 1 < b) ? 1 : -1 }
 
var showTargetOutput = Fn.new {
for (y in 10..-9) {
for (x in -9..10) {
if (targetOutput.call(x, y) == 1) {
System.write("#")
} else {
System.write("O")
}
}
System.print()
}
System.print()
}
 
var randomWeights = Fn.new { |ws|
for (i in 0..2) ws[i] = rand.float() * 2 - 1
}
 
var feedForward = Fn.new { |ins, ws|
// the perceptron outputs 1 if the sum of its inputs multiplied by
// its input weights is positive, otherwise -1
var sum = 0
for (i in 0..2) sum = sum + ins[i] * ws[i]
return (sum > 0) ? 1 : -1
}
 
var showOutput = Fn.new { |ws|
var inputs = List.filled(3, 0)
inputs[2] = 1 // bias
for (y in 10..-9) {
for (x in -9..10) {
inputs[0] = x
inputs[1] = y
if (feedForward.call(inputs, ws) == 1) {
System.write("#")
} else {
System.write("O")
}
}
System.print()
}
System.print()
}
 
var train = Fn.new { |ws, runs|
var inputs = List.filled(3, 0)
inputs[2] = 1 // bias
for (i in 1..runs) {
for (y in 10..-9) {
for (x in -9..10) {
inputs[0] = x
inputs[1] = y
var error = targetOutput.call(x, y) - feedForward.call(inputs, ws)
for (j in 0..2) {
ws[j] = ws[j] + error * inputs[j] * 0.01 // 0.01 is the learning constant
}
}
}
}
}
 
var weights = List.filled(3, 0)
System.print("Target output for the function f(x) = 2x + 1:")
showTargetOutput.call()
randomWeights.call(weights)
System.print("Output from untrained perceptron:")
showOutput.call(weights)
train.call(weights, 1)
System.print("Output from perceptron after 1 training run:")
showOutput.call(weights)
train.call(weights, 4)
System.print("Output from perceptron after 5 training runs:")
showOutput.call(weights)</syntaxhighlight>
 
{{out}}
<pre>
Target output for the function f(x) = 2x + 1:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
 
Output from untrained perceptron:
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
 
Output from perceptron after 1 training run:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
 
Output from perceptron after 5 training runs:
##############OOOOOO
#############OOOOOOO
#############OOOOOOO
############OOOOOOOO
############OOOOOOOO
###########OOOOOOOOO
###########OOOOOOOOO
##########OOOOOOOOOO
##########OOOOOOOOOO
#########OOOOOOOOOOO
#########OOOOOOOOOOO
########OOOOOOOOOOOO
########OOOOOOOOOOOO
#######OOOOOOOOOOOOO
#######OOOOOOOOOOOOO
######OOOOOOOOOOOOOO
######OOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
#####OOOOOOOOOOOOOOO
####OOOOOOOOOOOOOOOO
</pre>
 
=={{header|XLISP}}==
Like the Pascal example, this is a text-based program using a 20x20 grid. It is slightly more general, however, because it allows the function that is to be learnt and the perceptron's bias and learning constant to be passed as arguments to the <tt>trainer</tt> and <tt>perceptron</tt> objects.
<syntaxhighlight lang="scheme">(define-class perceptron
(instance-variables weights bias learning-constant) )
(define-method (perceptron 'initialize b lc)
(newline)
(ptron 'learn training 4)
(ptron 'print-grid)</syntaxhighlight>
{{out}}
<pre>Target output for y = 2x + 1:
{{trans|Java}}
Uses the PPM class from http://rosettacode.org/wiki/Bitmap/Bresenham%27s_line_algorithm#zkl
<syntaxhighlight lang="zkl">class Perceptron{
const c=0.00001;
var [const] W=640, H=350;
foreach i in (weights.len()){ weights[i]+=c*error*xy1a[i] }
}
}</syntaxhighlight>
<syntaxhighlight lang="zkl">p:=Perceptron(3);
p.training.apply2(p.train);
 
pixmap.circle(x,y,8,color);
}
pixmap.writeJPGFile("perceptron.zkl.jpg");</syntaxhighlight>
{{out}}
[[File:Perceptron.zkl.jpg]]