Sandbox: Difference between revisions
=={{header|J}}==
{{works with|Jsoftware|>9.0.1}}
{{libheader|math/calculus}}
In earlier versions, D. can be used instead of pderiv.
<syntaxhighlight lang="J">
load 'math/calculus'
coinsert 'jcalculus'
NB. In earlier versions D. can be used instead of pderiv
NB. ========== Precision
pps =: 9!:11
pps 18
NB. ========== Function definition: f(x,y) = (1-x)^2 * e^(-y^2) + y*(y+2) * e^(-2*x^2)
func=: monad define
'xp yp' =: y
((1-xp)*(1-xp) * ] ^-(yp)^2) + yp*(yp+2)* ] ^ _2 * xp^2
)
NB. ========== Gradient descent
gd =: monad define
learn=: 0.03
gold=: 1 1
precision=: 1e_10
flagg=:0
attempts =: 1000
i=:0
go=: 0.1 _1
while. (i < attempts) *. (flagg = 0) do.
go=: go-learn * func pderiv 1 ] go
i=: >: i
if. (precision > | (func go) - (func gold)) do.
flagg=:1
end.
gold=: go
smoutput i,go
end.
smoutput 'x';({.go);'y';({:go);'f(x,y)';func go
)
</syntaxhighlight>
{{out}}
<pre>
 1 0.181030990251029783 _1.17878944691407384
 2 0.106526139628704192 _1.19653062777180708
 3 0.114463555353394503 _1.21817798934176125
 4 0.107500175480679167 _1.22061411764084937
 5 0.108281552776878703 _1.22275948119432343
 6 0.107616591357617614 _1.22300417912290982
 7 0.107689377135022829 _1.2232109060917864
 8 0.107626033138308236 _1.22323510540100955
 9 0.107632763421305816 _1.2232549721759014
10 0.107626730913479213 _1.22325735959949355

┌─┬────────────────────┬─┬────────────────────┬──────┬─────────────────────┐
│x│0.107626730913479213│y│_1.22325735959949355│f(x,y)│_0.750063420544188286│
└─┴────────────────────┴─┴────────────────────┴──────┴─────────────────────┘
</pre>
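As a cross-check of the J result, the same descent can be sketched in Python, with a central-difference quotient standing in for pderiv. The function, start point 0.1 _1, learning rate 0.03, stopping tolerance, and iteration cap all mirror the J verb above; the step size h of the numeric gradient is an assumption of this sketch.

```python
import math

def f(x, y):
    # f(x, y) = (1 - x)^2 * e^(-y^2) + y*(y + 2) * e^(-2*x^2)
    return (1 - x) ** 2 * math.exp(-y ** 2) + y * (y + 2) * math.exp(-2 * x ** 2)

def grad(x, y, h=1e-8):
    # central-difference approximation of both partial derivatives
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

def gradient_descent(x, y, learn=0.03, precision=1e-10, attempts=1000):
    # stop when the change in f between steps drops below precision,
    # or after the iteration cap, whichever comes first
    prev = f(x, y)
    for _ in range(attempts):
        gx, gy = grad(x, y)
        x, y = x - learn * gx, y - learn * gy
        cur = f(x, y)
        if abs(cur - prev) < precision:
            break
        prev = cur
    return x, y, f(x, y)

x, y, fmin = gradient_descent(0.1, -1.0)
print(x, y, fmin)
```

With these settings the run lands on the same local minimum the J output reports: x ≈ 0.1076, y ≈ −1.2233, f(x,y) ≈ −0.7501.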
Revision as of 14:36, 15 January 2023