
Talk:Kahan summation

 
Thanks Sonia. I am still digesting your suggestion, but just on first reading (and little cogitation, I admit), the solution is not "self-scaling" in any way for different precisions. The current task's floating-point examples might self-scale in, say, Python, but do not in J because of its "fuzzy comparison" (a problem with the task description and not a slur on J, by the way). --[[User:Paddy3118|Paddy3118]] ([[User talk:Paddy3118|talk]]) 05:18, 14 December 2014 (UTC)
 
==Task 2==
I suggest rewriting this task from scratch: give 7 or 10 64-bit floating-point numbers in the task description (or more, if generated with a portable random generator), show the result of a basic summation loop versus Kahan summation, and demonstrate in some way that the latter loses less precision. Nothing more.
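For illustration only, here is a minimal sketch of what such a comparison might look like in Python. The data (10 000 copies of 0.1) and the use of <code>math.fsum</code> as a correctly rounded reference are my own assumptions, not part of the proposal; any data that accumulates rounding error would do.

<syntaxhighlight lang="python">
import math

def naive_sum(xs):
    """Plain left-to-right summation."""
    total = 0.0
    for x in xs:
        total += x
    return total

def kahan_sum(xs):
    """Kahan (compensated) summation."""
    total = 0.0
    c = 0.0                    # running compensation for lost low-order bits
    for x in xs:
        y = x - c              # apply the correction to the incoming term
        t = total + y          # low-order bits of y may be lost in this add
        c = (t - total) - y    # (t - total) is what was actually added;
                               # subtracting y leaves the (negated) lost part
        total = t
    return total

data = [0.1] * 10_000                 # assumed test data: any error-prone values work
reference = math.fsum(data)           # correctly rounded sum, used as the reference
print("naive error:", abs(naive_sum(data) - reference))
print("kahan error:", abs(kahan_sum(data) - reference))
</syntaxhighlight>

With 64-bit floats the naive loop typically ends up around 1e-10 away from the reference here, while the compensated loop stays within an ulp or two, which would show the precision difference the rewritten task is after.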