Talk:Gradient descent

From Rosetta Code

Needs more information

This needs a description, purpose and preferably, a given function to solve for so that different implementations can be compared. Changed to draft status until that is supplied. --Thundergnat (talk) 12:30, 1 July 2019 (UTC)

Luckily, I managed to find a freely available book excerpt (from Google Books) which contained the C# code from which the first TypeScript example had been translated, together with some explanation of what was being done here.
I've therefore added a rudimentary task description and a Go translation to start the ball rolling. --PureFox (talk) 17:48, 8 July 2019 (UTC)
The differences between the results from the different samples are worrying - can they really be due to minor differences in the different languages' sqrt and exp functions? How much accuracy should be expected (is it worth printing 16 digits)? It looks like the answer is somewhere around 0.107, -1.22?
How was the initial guess derived? If the initial guess is changed, the results are different.
I tried a variant using 32 bit floats instead of 64 bit and the results are similar (but different, of course). I also found that (with 32 bit) delG can become 0 before b is set to alpha / delG - this presumably should be tested for? --Tigerofdarkness (talk) 19:07, 2 September 2020 (UTC)
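The guard suggested above is easy to sketch. This is a hypothetical helper, not the task's actual code - the names delG, alpha and b are borrowed from the samples, and the idea is simply to test for a vanished gradient before dividing:

```go
package main

import "fmt"

// safeStep illustrates the guard: in 32-bit floats delG (the gradient
// magnitude) can underflow to 0, so check before computing b = alpha / delG.
func safeStep(alpha, delG float64) (b float64, ok bool) {
	if delG == 0 {
		return 0, false // gradient vanished; caller should stop iterating
	}
	return alpha / delG, true
}

func main() {
	if b, ok := safeStep(0.1, 0.5); ok {
		fmt.Println("step size:", b)
	}
	if _, ok := safeStep(0.1, 0); !ok {
		fmt.Println("delG is zero: terminating descent")
	}
}
```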
More worrying still is the fact that the Go example, on which a lot of the other examples are based, no longer gives the same results as it did a year ago. When I ran it on my current setup (Ubuntu 18.04, Go version 1.14.7 amd64) the results were: x[0] = 0.10725956484848279, x[1] = -1.2235527984213825 which is 'miles' away from what they were before!
Just to make sure, I ran it again on last year's setup (Ubuntu 16.04, Go version 1.12.5 amd64) and the results agreed with those previously posted: x[0] = 0.10764302056464771, x[1] = -1.223351901171944.
Go has, of course, moved on a couple of versions in the interim and a possible reason for the discrepancy is that FMA instructions are now being supported (from v1.14), which will mean that an FP operation of the form x * y + z will be computed with only one rounding. So in theory results should be more accurate than before.
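The single-rounding behaviour can be demonstrated directly with math.FMA (added in Go 1.14). A small sketch - the constants here are just chosen so that x*x picks up a tail below half an ulp, which separate rounding discards but a fused multiply-add keeps:

```go
package main

import (
	"fmt"
	"math"
)

// fmaDemo contrasts two roundings with one. With x = 1 + 2^-27,
// x*x = 1 + 2^-26 + 2^-54 exactly; the 2^-54 tail is lost when the
// product is rounded to float64 before the subtraction.
func fmaDemo() (naive, fused float64) {
	x := 1 + 0x1p-27
	c := 1 + 0x1p-26
	// The float64 conversion forces the product to round before the
	// subtraction (explicit conversions round, per the Go spec).
	naive = float64(x*x) - c
	// math.FMA computes x*x - c exactly, then rounds once.
	fused = math.FMA(x, x, -c)
	return
}

func main() {
	naive, fused := fmaDemo()
	fmt.Println(naive, fused) // 0 with two roundings, 2^-54 with one
}
```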
I first noticed that there was a discrepancy a couple of days back when I was trying to add a Wren example. My first attempt was a straight translation of the Go code which gave results of: x[0] = 0.10781894131876, x[1] = -1.2231932529554.
I then decided to switch horses and use zkl's 'tweaked' gradG function which gave results very close to zkl itself so I posted that. Incidentally, I wasn't surprised that there was a small discrepancy here as I'm using a rather crude Math.exp function (basically I apply the power function to e = 2.71828182845904523536) pending the inclusion of a more accurate one in the next version of Wren's standard library which will call the C library function exp().
So I don't know where all this leaves us. There are doubtless several factors at work here and, as you say, changing the initial guess leads to different results. Something else which leads to different results is whether one allows gradG to mutate 'x'. As the Go code stands it copies 'x' to 'y' and so doesn't mutate the former. However, it looks to me as though some translations may be indirectly mutating 'x' (depending on whether arrays are reference or value types in those languages) by simply assigning 'x' to 'y'. If I make this change in the Go code, the results are: x[0] = 0.10773473656605767, x[1] = -1.2231782829927973 and in the Wren code: x[0] = 0.10757894411096, x[1] = -1.2230849416131 so it does make quite a difference. --PureFox (talk) 10:11, 3 September 2020 (UTC)
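For anyone checking their translation, the aliasing issue is easy to demonstrate in Go, where slices behave like reference types (a plain assignment shares the backing array) and copy gives the independent 'y' the sample intends:

```go
package main

import "fmt"

func main() {
	x := []float64{0.1, -1.0}

	// Plain assignment aliases the same backing array, so writes to y
	// mutate x as well - the accidental behaviour some translations have.
	y := x
	y[0] = 99
	fmt.Println(x[0]) // 99: x was mutated through y

	// An explicit copy keeps x intact, as in the Go sample's gradG.
	x = []float64{0.1, -1.0}
	z := make([]float64, len(x))
	copy(z, x)
	z[0] = 99
	fmt.Println(x[0]) // 0.1: x is unchanged
}
```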
Interesting.
I looked at the Go sample's gradG (which, as you say, a lot of the others use). I'm not sufficiently au fait with the mathematics to say how good an approximation the gradG function is, but I see it involves dividing by h, which starts out set to the tolerance and then gets halved on each iteration. It must be something like the actual gradient as the samples sort-of agree. I hadn't noticed the possibility of the mutation of x - that's a good point.
I substituted the actual gradient function (as used in the Fortran sample) and removed h and again, I get the same results as Fortran and Julia (to 6 places). That the original Algol 68 sample agreed with those is possibly a coincidence but I am now more confident that the result is in the region of the Julia/Fortran results.
I suspect that Julia is also using the actual gradient function as it is (I presume) using a built-in minimising function that uses the actual gradient function.
--Tigerofdarkness (talk) 12:08, 3 September 2020 (UTC)
Yes, to get consistent results, the answer does seem to be to use Fortran's gradient function.
I just substituted that in the Go code and obtained results of: x[0] = 0.10762682432948055, x[1] = -1.2232596548816101 which now agrees to 6 decimal places with the Fortran, Julia and your Algol 68 and Algol W solutions. So I'm going to update the Go example on the main page and suggest that those who've previously translated it update their translations accordingly. Thanks for your efforts here. --PureFox (talk) 13:14, 3 September 2020 (UTC)
Thought I'd just add that Wren is now falling into line with updated results of: x[0] = 0.10762682432948, x[1] = -1.2232596548816. Perhaps my Math.exp function isn't so bad after all :) --PureFox (talk) 13:59, 3 September 2020 (UTC)

I thought it was normal for a task to be a draft task until at least four (or so) examples have been entered, and also wait a week or so before promoting it.   I know there are no hard and fast rules.     -- Gerard Schildberger (talk) 19:21, 1 July 2019 (UTC)

In general, I wait for a minimum of 3 months and 20 implementations before I promote one of my tasks out of draft. That way there is plenty of opportunity for discussion and tweaks if necessary. As far as I'm concerned, this doesn't even rise to the level of a draft yet, let alone a full task. Reverted back to draft (again). --Thundergnat (talk) 20:57, 1 July 2019 (UTC)


An easier-to-read, simpler expression of the bi-variate function used for this task

Use this algorithm to search for minimum values of the bi-variate function:

                   ƒ(x, y)  =  (x-1)^2 e^(-y^2)  +  y(y+2) e^(-2x^2) 

─or eliding the negatives in the exponents─

                   ƒ(x, y)  =  (x-1)^2 ÷ e^(y^2)  +  y(y+2) ÷ e^(2x^2) 


A bigger font was used to clearly show an exponent used in the exponent of   e.     -- Gerard Schildberger (talk) 15:09, 5 September 2020 (UTC)