User talk:Rdm

From Rosetta Code

Seed Discussion for this talk page

Please fill out your user page with the help of the mylang templates (e.g., below) so that the wiki software can link you in to everything else nicely. Thanks! —Donal Fellows 19:59, 3 September 2009 (UTC)

{{mylangbegin}}
{{mylang|J|Wrote it}}
{{mylangend}}
Ok, done... though I am not sure what guidelines I should use to judge how familiar I am with a language, so I likely could be more accurate if I knew how to judge myself. Rdm 03:50, 4 September 2009 (UTC)
Best advice when judging yourself: don't overestimate your capabilities. And... better to put your own nose to languages you know well; leave D alone, OK? ex-pert.... User:Vincent
This user is banned. See his talk page for the reasons. Rdm, I'm sorry for the abuse you and others have had to put up with from him. --Michael Mol 13:48, 6 May 2011 (UTC)
It is indeed quite likely that the D implementation of the multi-split algorithm I posted could have been replaced with a more efficient and/or more concise and/or more elegant implementation. I am sad that it's no longer possible to have that discussion with Vincent, though I understand that it's also possible that it may never have been possible to have that conversation with him. Anyways, it's really too bad, and I hope you never have to do anything like this again. But, I understand, and I hope it's not too hard on you, either. And, I am thankful that you are here for something like this when we need it. --Rdm 14:29, 6 May 2011 (UTC)
The terms for coming back are stated in his ban; he can come back if/when he's willing to have a discussion with me, and he can convince me he'll be able to have discussions like the one you wanted. (Convincing me of that kind of thing isn't hard; I have a pretty hard bias toward believing people are good. But if he convinces me and then proves otherwise, it's back to being banned.) --Michael Mol 15:08, 6 May 2011 (UTC)
It's entirely self-subjective, whatever descriptives you think best define your familiarity. See my page for comparison. (Though I don't think anyone else uses my particular approach.) --Michael Mol 15:47, 4 September 2009 (UTC)
Ok, that works for me, thank you (there are so many dimensions to consider here, it's almost ridiculous). And I like the simplicity of Active vs. Rusty. Rdm 16:04, 4 September 2009 (UTC)

Ex-expert

Ex-ex-pert. Double negative. You're a pert. ;-) -- Eriksiers 22:26, 6 October 2009 (UTC)

The world needs more perts? Rdm 16:34, 16 October 2009 (UTC)
When doesn't the world need more perts? -- Eriksiers 16:02, 19 October 2009 (UTC)

Filling out Rosetta Code:Add a Task

Could I get you, Dkf and Paddy3118 to give Rosetta Code:Add a Task a thorough treatment of examination, debate and filling? Of the cross section of current users, I think you three are probably the most likely to be familiar with the general pattern and concerns of creating tasks. I added a bunch of my own thoughts in HTML comments in-line, and left a note in the talk page. --Michael Mol 17:15, 21 September 2010 (UTC)

I had read it, and superficially it seemed good, though I was planning on giving myself a few days to digest it. But the best test would be to try and create some new tasks based on its recommendations -- if these new tasks need work, which was not covered by the task description, we could go from there. Anyways, maybe I will try and add a task based on it -- having never done so before, I think I might make a decent test subject. --Rdm 17:26, 21 September 2010 (UTC)

Rule 90

I decided to move our discussion about Rule 90 here so Comps doesn't keep getting emails.

On "end states": the task doesn't have anything to do with checking for end states, so any work that you do there is extra credit and doesn't really apply. On other similar tasks: tasks in the sorting category usually do fundamentally different operations. Also, lots of those algorithms are the subject of discussion in academics for early programmers. There is more of a demand for separate tasks for each of those algorithms (though there really isn't much demand at all for the silly ones, but those came after the fact). You may be right about prime decomposition and counting in factors. In any case, the next generation in the Rule 90/104 processes can be implemented as such (at least in the languages I program in; J seems to be different in ways I don't expect to understand):

For each character in this generation
  Count the number of neighbors
  If the number of neighbors = x and this character is y, then
    the character in this position in the next generation is z
  ...more ifs for whatever rules there are...
return the next generation

Basically a few numbers change for x, y, and z for all the rules. The setup of this generation and counting the neighbors doesn't really need to change. Counting the neighbors doesn't need to change at all (unless you want to add more characters, but then it's not really a Rule x game). Forest fire also does a similar operation to Conway's Game of Life, but it has extra stuff added in like random numbers and extra types of cells. Wireworld expands on that still by adding another type of cell. The change between Rule 90 and Rule 104 amounts to using a different lookup table. --Mwn3d 17:31, 19 April 2011 (UTC)
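For concreteness, here is a minimal lookup-table sketch of that idea in J (a hedged illustration only: it assumes cyclic boundaries, and the verb name step is invented for this note rather than taken from any task):

 NB. x step y: next generation of the elementary cellular automaton with rule number x
 NB. y is a boolean vector; the boundary is treated as cyclic (an assumption)
 step =: 4 : 0
   rule =. |. (8#2) #: x                NB. rule bits, indexed by neighbourhood value 0..7
   rule {~ 2 #. |: _1 0 1 |."0 1 ] y    NB. neighbourhood value = 4*left + 2*self + 1*right
 )

For example, 90 step 0 0 0 1 0 0 0 gives 0 0 1 0 1 0 0, and switching to Rule 104 (or any other rule number) changes only the lookup table, as described above.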

I had actually said pretty much what I had to say on this topic. And, I added a "Rule n" approach for J, on the One-dimensional cellular automata page, for just-in-case. --Rdm 18:15, 19 April 2011 (UTC)

202

Saw your 202x202. Made me chuckle, I admit defeat - my study is warm enough so I'll not compute a larger one :-)
--Paddy3118 15:25, 30 May 2011 (UTC)

J writeups

Your J writeups are appreciated. I don't always read them, and I don't even know J. Still, I see them as useful and valuable. In fact, for some time, I've been thinking that the more appropriate place for them might be right next to the code they describe. --Michael Mol 20:44, 5 July 2011 (UTC)

Thank you -- I am glad that I have something of an audience even if it's small. That said, I am not always certain of the quality of my comments (especially when the code drifts and my explanation was based on an older revision -- but also I sometimes write from an overly narrow point of view, where I do not treat important issues and instead go into excessive detail on narrow technical issues -- but these are fixable problems). That said, I think my comments tend to be a bit too bulky to be part of the main page (which is why I have been dumping them on the talk page). But perhaps we/I should be linking them from the main page? --Rdm 21:03, 5 July 2011 (UTC)
The talk page is OK, but it might get crowded if other language communities start to imitate. You could also put them on a separate page and link to it from the example. Something like "Task/J/Explanation" or "Task/J/In-depth_explanation" might work. "Writeup" might feel too academic for some language communities (but it's less academic than "essay" which I see in the J community a lot), so a more general term might be best. Also it'd be nice to pick a standard name now in case we want to add this to a template or edit form as an optional input. --Mwn3d 21:09, 5 July 2011 (UTC)
I sort of like Task/J/Explanation, now I just need some round tuits. --Rdm 21:30, 5 July 2011 (UTC)
I too welcome the writeups. There are pros and cons to putting them in the talk page - currently, I am more likely to read them as I tend to also follow talk pages to find issues with tasks. Keeping such explanations there might also promote inter-language discussions of the kind "I essentially copied this apart from that, where my language has this ...". --Paddy3118 06:17, 6 July 2011 (UTC)
Ok! And, since I currently am low on the tuits, I am going to approach Short Circuit's suggestion this way: after other languages start including writeups (explanations of intermediate code goals, rationales for choices of data representation, explanations of atypical language features, whatever else), and the talk pages start getting crowded, I will happily help move J writeups to Task/J/Explanation pages and leave behind links. Until then, I'll be waiting on other people, and occasionally adding links from the task page to the existing writeups. Does this sound good? --Rdm 13:14, 6 July 2011 (UTC)
I'll note that the Perl 6 examples tend to have descriptions on the task page with their examples. Not sure I really like a lack of in-page presence (such as simply being on the page, or being transcluded from another page, or being retrieved and displayed via a little JavaScript magic), but I'm sure something will be ultimately worked out. As long as it's around, somewhere, and links are better than nothing. :) --Michael Mol 15:44, 6 July 2011 (UTC)
Sure, and J will also have descriptions on the page itself. The talk page writeups have typically been for either (a) detailed descriptions of issues which I hope would be obvious to someone that can read J and that has a copy handy for testing against, and/or sometimes (b) algorithmic concepts which are not at all J specific (issues which are almost worthy of inclusion in the task description). Mostly, though, I use the talk page for added-depth coverage of issues which require a lot of words and which are nothing like the treatment of the task by other languages. And, ok, quite possibly some of the writeup text could be migrated to the task page -- if you see examples of that, I think you should feel free to copy the text over yourself or (if your comfort level is not that high) to call out the issue on the talk page. But my overall feeling is that a lot of those writeups are outside the scope of the task page. --Rdm 16:07, 6 July 2011 (UTC)
P.S. If anyone wants to move essay treatment from a talk page to a task page, please feel free to do so. Or add a comment on the talk page suggesting that that particular essay should migrate? It doesn't have to be me that moves things and I would feel bad if someone was waiting on me to improve things and that caused bad problems. Another approach would be to leave an email message on J's chat forum with a general suggestion. As long as we are not losing something important, I'm comfortable with other people making improvements. --Rdm (talk) 01:51, 18 March 2014 (UTC)

Non-continuous subsequences

Your first C example in Non-continuous subsequences is fine, while the second one using GMP is quite, well, nuts. I'd be tempted to replace it with something more general (not limited by bits in int) without the nuttiness, although the value of generating all subsequences of a sequence longer than 32 is questionable. --Ledrug 05:57, 25 July 2011 (UTC)

I think someone else wrote the GMP example. --Rdm 10:41, 25 July 2011 (UTC)
Ah I see it now, never mind then. Sorry about that. --Ledrug 18:53, 25 July 2011 (UTC)

ISINs

Thanks for your comments on Talk:Calculate_International_Securities_Identification_Number. I've added some clarification and would welcome your thoughts. --TheWombat (talk) 00:08, 27 February 2015 (UTC)

Redirecting spam pages

Hi Rdm, thanks for the thought, but I found it initially confusing, and it takes more clicks to delete those redirected spam pages. I guess I am a little set in my ways :-)
Maybe you could apply to Michael Mol for the ability to help clean spam?
--Paddy3118 (talk) 05:08, 2 April 2015 (UTC)

I would avoid setting up redirects for spam pages. That results in the server replying with HTTP 301, whereas simple deletion of pages results in the server replying with HTTP 404. A 404 lends no credence to a URL, whereas a 301 may... --Michael Mol (talk) 22:19, 4 April 2015 (UTC)


Anti-spam

Click on Admin links on the left. o/ --Michael Mol (talk) 22:19, 4 April 2015 (UTC)

I see no admin links on the left. Maybe I just need to wait a while...? --Rdm (talk) 23:32, 4 April 2015 (UTC)

So, Ok, I have been deleting a variety of "spam pages". These pages seem to be designed to pretend to be sociable and friendly, but invariably contribute nothing to Rosetta Code, seem to always include some [irrelevant] offsite link, and actually get to be a bit annoying after a time. The sad thing, also, is that they tend to increase the likelihood that we might be rejecting, or making difficult, real positive contributions to the site, and quite probably make the site less useful than it might otherwise be to a variety of people. Still, the persistent nature of these spamming mechanisms does not seem to leave us with any better options. Sometimes there just are no good solutions? --Rdm (talk) 13:32, 22 April 2015 (UTC)

Seems about the right thing to do, Rdm. A couple of years ago, when Wikipedia decided to delete the Rosetta Code page, one of the reasons given was the amount of spam?! (Together with an argument that there were not enough references from books/papers/journals). Since then I think we've deleted a lot of spam and appeared in more journals/books. :-)
--Paddy3118 (talk) 14:18, 22 April 2015 (UTC)

use of base 10 (wording)

Why change the use of (in base 10)   (for the Rosetta Code task of EMIRP primes)?   This is what OEIS, MathWorld, and for the most part, Rosetta Code use.   Also, I don't understand why you added (5+5).   Not all bases are base 10.   There is base 2, base 4, ... base 16, ...   -- Gerard Schildberger (talk) 23:51, 3 July 2015 (UTC)

Because that was entirely circular. If the convention is that numbers are understood to be decimal, base 10 carries no additional information. --Rdm (talk) 02:02, 4 July 2015 (UTC)

It may be an understood convention that the primes be expressed in base ten, but primes are not decimal by nature, they are just generally expressed in base ten.   Primes are primes no matter which base they are expressed in.   However, emirp primes are only emirp primes when expressed in base ten since the requirements are written such that the prime must be expressed in base ten (for the rules to be applicable).   Normally, for the most part, most   kinds/types   of primes are primes in any base,   7(10)   is prime,   so is   111(2),   1916(17),   etc.   It doesn't matter what base the primes are expressed in.   However, there are certain primes that depend upon them being expressed in base 10 (or some other specific base):   truncatable (left or right truncatable) primes, repunit (rep unit) primes, apocalypse (or apocalyptic) primes, centrist primes, Smarandache (or Smarandache-Wellin) primes, tetradic primes, circular primes, Eprimes (or e primes), golden (or long) primes, invertible primes, palindromic primes, x (or eXtra-ordinary) primes, primary (or prime digit) primes, eban primes (which is equivalent to the large list of even primes), and I'm sure, many others.   I know of these fore-mentioned particular primes because I included them in my handy-dandy calculator and am familiar with their construction.   -- Gerard Schildberger (talk) 03:33, 4 July 2015 (UTC)
I agree - emirp primes use decimal representation as a part of their selection process. But consider, for example, a language where all numbers are expressed in octal. --Rdm (talk) 05:32, 4 July 2015 (UTC)
··· or in binary, for that matter.     However, that language (or those languages) would probably have to somehow express (or conceptualize) numbers in base ten and then apply the applicable rules.   I can imagine there might be other programming mechanisms to find solutions (as per the rules).   The REXX computer language, for example, only expresses/stores REXX numbers in decimal digits (with optional decimal exponentiation), but that doesn't stop that language from expressing/handling numbers in other bases.   Note that I find a difference between   all numbers being expressed in octal   versus   how the numbers are stored/represented (internally).   -- Gerard Schildberger (talk) 05:57, 4 July 2015 (UTC)
By the way, is there such an animal?   That is, a language where all numbers are expressed in octal?   -- Gerard Schildberger (talk) 05:57, 4 July 2015 (UTC)
Well, for example, in pdp-8 assembly language, all numbers were octal unless they contained an 8 or 9, or used a d suffix. But given how many thousands of languages there are out there I can't really say how many other languages might have that property. --Rdm (talk) 07:37, 4 July 2015 (UTC)
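For what it's worth, here is how that base-ten dependence looks when made explicit in J (a small hedged sketch; the verb name isemirp is invented for this note, and nothing here is taken from the task):

 NB. isemirp y: 1 if y is prime and its decimal-digit reversal is a different prime
 isemirp =: 3 : 0
   d =. 10 #.^:_1 y     NB. decimal digits of y -- this is the step where base ten enters
   r =. 10 #. |. d      NB. the digit-reversed number
   (1 p: y) *. (1 p: r) *. y ~: r
 )

So (#~ isemirp"0) i. 100 gives 13 17 31 37 71 73 79 97, and replacing the two 10s with another base would give the analogous property in that base.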

Addition-chain exponentiation

Hello, I'm glad to hear you see no problem. The Go solution starts with the sentence "A non-optimal solution". The task states: "Note: Binary exponentiation does not usually produce the best solution. Provide only optimal solutions.". No problem, indeed!

Actually, the program states in the comments "the techniques here work only with 'star' chains". But star chains are known not to be optimal.

The answer is thus wrong, sorry.

Arbautjc (talk) 21:14, 20 July 2015 (UTC)

Star chains produce correct values for the task. All algorithmic implementations have limits - resource limits if nothing else. If you want the task to eliminate star chains you'll need to modify the task with some appropriate test cases - you can't just rely on comments to extend the scope of the task. --Rdm (talk) 21:32, 20 July 2015 (UTC)
In short, how do you know "Star chains produce correct values for the task."?
Star chains are proved to give suboptimal solutions in general (that is, there are many cases where they fail to give an optimal solution). Using star chains to find a solution that happens to be optimal is utterly useless: you have to prove it's indeed optimal by other means. Or you prove mathematically that it must be so, but I don't see such a proof here. I could as well write a program that just prints the optimal solution; it would be just as useless.
Arbautjc (talk) 21:46, 20 July 2015 (UTC)
Ok, so it should be easy for you to find a test case which illustrates this problem. --Rdm (talk) 21:51, 20 July 2015 (UTC)
It's hilarious. You seem to think that if I'm not able to prove the contrary, then star chains must be enough. That's insane mathematics. You have to prove it's enough, otherwise it does not answer the "find an optimal solution" question. Arbautjc (talk) 22:01, 20 July 2015 (UTC)
You seem to be missing the point, which is that all computers are finite, and thus - by your reasoning - flawed and unacceptable. --Rdm (talk) 22:05, 20 July 2015 (UTC)
I have never claimed that computers are unacceptable. And indeed, you can find an optimal solution with a finite computer. Actually, given enough time, there is a very easy algorithm. I wrote one in Fortran 77, thus not even recursive, but it's too slow to find the solution for the values of the task (it would take months on my machine, and I don't want to wait that long, but notice people have found solutions to other problems using much more power for a much longer time: this is not the problem here). And that my program is slow does not mean I have the best algorithm available (and I'm sure it's not optimal, regarding speed). On the other hand, I can prove it gives only optimal solutions (it looks for all chains). See the difference? Computers are perfectly acceptable, granted you use them correctly. Answering the question "find only optimal solution" by using an algorithm that is not proved to work (and actually, proved to not work on infinitely many cases) is not a correct way to use a computer. Here the problem is not the computer, it's your logic. Arbautjc (talk) 22:15, 20 July 2015 (UTC)
What you seem to be saying is that a non-answer which takes an unacceptable amount of time is correct, while a definite answer which is correct is not, because the procedure which produces the definite answer has been shown to be less than perfect for a case which your correct implementation could not hope to solve.
So, ok, sure, I can also show that an implementation which produces a few canned results which are correct and which otherwise sits in a while loop which never terminates produces only correct answers. And that is completely correct, logically speaking.
If you like, I suppose I could post a solution of that form to the Addition-chain_exponentiation task. It would be logically correct. --Rdm (talk) 22:25, 20 July 2015 (UTC)
"a few canned results which are correct and which otherwise sits in a while loop which never terminates": either your anger makes you write rubbish, either you have not the slightest idea of what computer science is all about. Notice that run time is often perfectly predictable, at least within bounds. Your irony perfectly mimicks ignorance. Of course, if you want to write such useless programs in RC, I'll mark them as incorrect. I hope your vanity will accommodate with this.
Regarding garbage, it's your problem (and if you think logic is garbage, it's hopeless).
Regarding the task: it's asked to give an optimal solution, the program does not provably give an optimal solution, period. Your suggestion of a neverending silly program would be equally suited: that is, equally wrong.
Sorry to be rude, but you really have to understand you are damn wrong.
Arbautjc (talk) 22:45, 20 July 2015 (UTC)
I would, in fact, prefer that you not be rude.
Still, the problem with logic is that - by itself - it's meaningless. Only if your underlying axioms are relevant can the results of logic also be relevant.
Which gets us back to the flaw of addition star chains and the related flaw in fixed-width integers. These are both threshold problems. Below the threshold, everything works fine. Above the threshold, the results can be logically invalid.
You seem to have the idea that you can reject one threshold issue while entirely ignoring another threshold issue.
Frankly, I do not see the logic in that. At least, not based on any set of axioms which seem consistent with your objections here to my use of irony. --Rdm (talk) 22:54, 20 July 2015 (UTC)
It's not a threshold issue. Maybe you will see the logic in this then: "A Brauer-based algorithm will fail the first time at N = 12509." (if this site is not enough for you, I'll try to give you the reference in Knuth's TAOCP then, as I'm sure to have read this in one of the volumes). It's far below the values of the task, which is a big problem for me. And the article linked in the Go program does not claim to give an optimal solution, on the contrary: "Even though minimal-length cf-chains are not optimal, they have the nice property of being easy to compute [...]".
From an engineering point of view, it would be perfectly fine to accept suboptimal solutions that are easier to compute. But it's not the task. The task clearly states that you have to find an optimal solution, not a fast suboptimal solution. No question of threshold: I know there are ways to compute this solution that are "unreasonably" long (but, with massively parallel computations and months of computer time, it's not that unreasonable). It can be done, really. Is it possible to give a much faster program? Maybe. Your mission, should you choose to accept it.
Arbautjc (talk) 23:06, 20 July 2015 (UTC)
If that is the case (I've not had time to check for myself), then 12509 would be the star chain threshold, and adding 12509 to the task requirements would be sufficient. It's possible, also, that the C implementation would fail for that value - I did not study it long enough to determine which algorithm it used.
Please feel free to add 12509 to the task requirements.
Oh, wait, reading that reference you cited, 12509 is the chain length where it fails, not the exponent. Well, since wikipedia provided a threshold with a chain about half that length, I think I might see a problem in the reference you cited.
So, anyways: it's a threshold issue. --Rdm (talk) 23:11, 20 July 2015 (UTC)
Why not, but actually, any value above this is a priori a problem, and the values in the task are both above. I didn't check the C program either. It's partially wrong, for a stupid reason (it does not compute a matrix power). And I didn't check 12509 by myself, since I computed optimal addition chains only up to 2048 till now. But I'm pretty sure there are much better algorithms, since optimal values have been computed for much larger values.
Following your edit: well, that kind of threshold, yes. But it's still possible to compute this correctly, without relying on an algorithm with unknown correctness (an unproven program that gives the correct solution by chance is still not a correct program, and is only useful if you can check the solution by other means, like what happens with integer factorization). It will just take more (possibly much more) time. Arbautjc (talk) 23:19, 20 July 2015 (UTC)

Correctness is in the answers. The algorithm is correct if it computes the correct answers. And, once again, any 64 bit integer implementation will have problems long before the Brauer based algorithm has problems. I agree with you that an unproven algorithm can be a problem, but it seems to me that the Brauer algorithm is no more unproven than signed 2s complement arithmetic is unproven. Keep in mind, also, that the task only asks for optimal solutions for a couple of exponents. It does not ask for a solution for an exponent of 10^20, let alone for exponents larger than 10^1838. So, that is the issue as I see it - you have indicated that you would accept an implementation which we know would fail for an exponent in the rough vicinity of 10^19 but at the same time, you have indicated that you would not accept a different implementation because it would fail for an exponent in the rough vicinity of 10^1838. And then as proof of the soundness of your reasoning you give me a reference to a writeup which indicates a failure for some unknown exponent which is much greater than 10^1838. Uh.... --Rdm (talk) 23:32, 20 July 2015 (UTC)

I don't understand what you mean:
  • The Brauer chains (or star chains) algorithm is not "unproven", it's proved to fail to give an optimal answer in general (precisely, for infinitely many, mostly unknown, values). 2s complement arithmetic is just a way to compute, it's perfectly proved, in the sense that you can prove some usual laws of addition. How can you compare?
  • Brauer chains fail for 12509; what has your 10^20 or 10^1838 to do here? The real problem is that you can't accept a program that is unproven for 31415 and 27182, and known to fail for 12509 and for infinitely many other higher values, unless you can prove by some means that it indeed works for 31415 and 27182. It could, but how do you know? It could as well fail: without further investigation, you can't rely on Brauer chains above and including 12509.
  • The answer is not always enough: if you prove your algorithm gives the correct answer, then you know it's correct, period (or the computer has a bug, it happened once with the Pentium, or the compiler has a bug, etc., but these are other matters). However, if you have no proof that your algorithm gives the correct answer, then the answer alone is useless. Actually, no, here it's not completely useless: it gives an upper bound of the optimal value. But it's not enough if you want to be sure you have the optimal value. Thus "The algorithm is correct if it computes the correct answers." is true only if you know the correct answer. Without external knowledge, or a way to check the answer, you have a value that may be right or wrong, you don't know: useless. In some situations, there are indeed checks: for instance, for integer factorization, there is a fast way to check that an answer is correct (you just have to multiply the factors). For addition chains, there is no such quick and guaranteed check, only some known lower and upper bounds.
Arbautjc (talk) 23:41, 20 July 2015 (UTC)
Yes, http://wwwhomes.uni-bielefeld.de/achim/addition_chain.html suggests that an exponent of 12509 causes the Brauer algorithm to fail to produce an optimal value. So I am confused - why didn't you just update the task by adding this as a task requirement? And why did you refer me to that page which told us that the length of the chain was 12509? --Rdm (talk) 00:14, 21 July 2015 (UTC)
I have updated the task and commented in the Talk page about this.
Why do I refer you to a page that states that Brauer chains fail to be optimal for 12509? Maybe so that you can read that 10^1838 is not the least value for which Brauer chains fail?
Okay, is this just some troll? It's not very funny.
Now I'm done with this: the task is changed so that there is no doubt that Brauer chains are not enough. You have all the information to understand the problem; it's up to you.
Good night.
Arbautjc (talk) 00:23, 21 July 2015 (UTC)
ad hominem statements are not logical.
Good night. --Rdm (talk) 04:57, 21 July 2015 (UTC)
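(For reference, and without taking sides in the above: the exhaustive approach mentioned earlier can be sketched quite briefly. The following J is a hedged illustration only - it enumerates ascending addition chains breadth-first, so it is exponential and only practical for small targets, and the names extend and chains are invented for this note.)

 NB. extend one ascending chain y by every pairwise sum larger than its last element
 extend =: 3 : 'y ,"1 0 ~. (#~ ({: y) < ]) ,/ +/~ y'

 NB. all shortest ascending addition chains reaching the target y (small y only)
 chains =: 3 : 0
   c =. ,: ,1
   while. -. y e. , c do. c =. ; <@extend"1 c end.
   (y e."1 c) # c
 )

For example, chains 15 includes 1 2 3 6 12 15 (five additions), whereas binary exponentiation needs six multiplications for an exponent of 15.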

italicized fonts overlaying text

I could use another pair of eyes on this.   (I know, I know, like you don't have enough work to do ···)

In the   [Magic squares of odd order],   can you see what I'm seeing under the   Task   heading, in the first sentence?

The italicized   N   seems to be overlaying part of the (superscript)   2     (or is it the other way around?).   This is happening in two different places in that task's text.   Incidentally, this phenomenon disappears when the text (web-page) is made very large.

I've noticed that when using italicized characters (via using two apostrophes), part of the last italicized character "leans over" into the next glyph, as it then appears to "overlay" the next blank (if there is one), making it appear that the blank isn't there, so the italicized text just blends (or bleeds) into the following text.   For this reason, I normally insert a blank before and after every italicized text to preserve the integrity of the text following the italicized characters.   The leading blank is needed to make the italicized characters appear "balanced" as far as inter-word spacing.

If you're seeing what I'm seeing, then I'll correct the (overlaying) text.   I'm the author of that Rosetta Code task, but it was "tidied" up by another person without any discussion, so I have to tread carefully here, lest there be an editing war, and some people don't like to have their authority questioned.   The expression was fine when I entered it originally.   I have a much different opinion on what "tidying up" means.   -- Gerard Schildberger (talk) 21:42, 12 September 2015 (UTC)

Nah, I never have too much to do... and if I just had enough time, I might be able to accomplish even a fraction of what I need to get done.
Anyways, I took a look at Magic squares of odd order (and I think you mean the first sentence of the second paragraph?) and it has the N touching the 2 when I look at it using Internet Explorer. They do not touch (though they look close) when I look at it using Chrome, Firefox and Safari. I also took a look at the history, and I guess your N2 works well enough - though the 2 is pretty close to the N there, also.
Personally, I do not have a strong opinion on the issue. I don't know if that helps... --Rdm (talk) 22:13, 12 September 2015 (UTC)
Yuppers, I should've mentioned that I'm using Firefox Aurora.   I didn't suspect it might be a particular rendering by a specific browser.   On my (largish) screen, those characters mentioned above really do overlay each other, almost obliterating the superscript.   I'll correct the problem, we might as well have all Rosetta Code readers seeing the same thing without any obliterations.   -- Gerard Schildberger (talk) 22:23, 12 September 2015 (UTC)

The regex article

You recently deleted the regex article and it says the reason is that it was a blank page. I believe it was a redirect to the "Regular expressions" article. Did I forget to fill in the actual redirect or is there another reason for it to be deleted? --Bugmenot3 (talk) 20:15, 12 October 2015 (UTC)

(I posted a response on your talk page. Hopefully you will feel that I have addressed your concerns in a constructive fashion?) --Rdm (talk) 21:12, 12 October 2015 (UTC)
I'm not too experienced in the MediaWiki discussion style, so please guide me if I'm doing it wrong. But thanks for the response. I often add redirects to wikis I use, when the first word I use when looking for something isn't a result. I try to make sure it's generically relevant when I do it though. I could be doing other things as well, but this is scratching my own itch, because I'll probably be back to find out how regexes are used, next time I use it in another language. I'll make sure to avoid submitting empty pages though. --Bugmenot3 (talk) 17:35, 13 October 2015 (UTC)
What is the benefit for other people, of these redirects? I deleted a couple today which seemed rather arbitrary and pointless, and I would be inclined to do that again for redirects where I don't understand how they would be useful to others. --Rdm (talk) 18:40, 13 October 2015 (UTC)
The point is that if many people use alternative names for something, they can still easily find it. Regex is a well-known abbreviation for a regular expression. The latter is the more correct name and thus the name of the actual article. It's not necessary to have these redirects, but it makes it easier to find things, if you don't happen to look up the most correct term for something. This wiki is good for people working in multiple languages, as it compares them side-by-side, and I find the redirects quite useful for that. But I don't really have any stripes on my shoulders on this wiki, so if you don't want it here, I won't add them. --Bugmenot3 (talk) 12:10, 14 October 2015 (UTC)
So... I went to google, yahoo, bing, and yandex, and performed the search site:rosettacode.org regex and the regular expressions page came up as the first hit. And, I deleted the "Regex" link, again, and tried the search on this site, and it came up as the second hit. So it sounds like you are trying to solve a non-problem? Anyways, I guess my point is that I would rather you put your efforts into solving problems that matter for people - and the trivial problems, while they can be good learning exercises, aren't really worth much time nor effort. --Rdm (talk) 12:33, 14 October 2015 (UTC)
If I look up regex on Wikipedia, I get redirected to their regular expression article. It's worth my own time, because I'll definitely be looking up some of them again and maybe others will too. It won't save my life, but it's enough to be worth the minor hassle in my opinion. It's not going to affect any decision to help on other things on this wiki. --Bugmenot3 (talk) 09:29, 16 October 2015 (UTC)

Spam attack

The following user accounts are part of a recent spam attack: QDWHerman91339, Bettina10F, LesleeXcj136, TiffanyD36, SwenBernard7041. Please remove these accounts. Thanks. --Andreas Perstinger (talk) 07:20, 21 December 2015 (UTC)

Yeah, I try and do a sweep every so often. Note, however, that the wiki signup implementation currently allows people to post content without even having a valid email address (or, that's how it worked last I tried). So that makes it more work for us to clean up than it makes for whoever this silly person is, to sign up and start posting. Also, the rather consistent format of the spam, here, suggests that this is the work of one individual or one institution.
Personally, I would rather they put their effort into being useful, but I do not know how to reach out to them to accomplish that. Nor, for that matter, do I have the patience and motivation to address the current wiki software issues. (Do you feel up to that task?) --Rdm (talk) 13:52, 21 December 2015 (UTC)
I've just created two new user accounts (Testuser and Testuser2) and I wasn't able to create a new page without confirming the email address. So, at least that makes it a little bit harder for a spammer. I guess all these bogus accounts use different e-mail addresses, don't they? (BTW, you can delete my two test accounts).
You've probably noticed that for a few weeks now I have been replacing new spam pages with the remove template. Is that ok for you or does that encumber your admin work? Most of the time the spammer doesn't come back using that account. But the recent attack continued (although I've noticed that most (all?) new spam edits didn't contain external links any more); that's why I wanted to point it out. AFAIK, you and Paddy3118 seem to be the only admins who currently take care of removing spam. I usually browse through all the recent edits during breakfast or dinner (using the RSS feed), so I may be able to help with cleaning up.
What are the software issues you mentioned? --Andreas Perstinger (talk) 20:20, 22 December 2015 (UTC)
Well, I am pleased and surprised that you were not able to create user pages before you responded to the email confirmation. (That was the software issue I was referring to.) I'll do a cleanup pass and get your test accounts then, also. Thanks! --Rdm (talk) 20:50, 22 December 2015 (UTC)
Hi Rdm: I was reading a few discussions and I've found that you are a very experienced, responsible and reasonable admin. Just like me. LOL Can you please make me an admin, to avoid RC blocking my login, picture posting, etc.? I'm a victim of RC fighting spam too hard.
I can't spend much time as an editor/admin on RC, but in turn, I will always do small edits, i.e., correcting typos, fixing wrong use of wiki tags, fixing code, etc. Actually, I've already made a few edits (I think this March). But later I stopped, because RC decided I'm "super" active and started blocking my login.
You can always check my code and pictures on OEIS Wiki. Fortunately, they are not blocking pages and pictures.
Before, I proposed setting up a "trusted user" account, but later I found that somebody had already proposed it many years ago. So, being an admin would help me with contributions. Many of them are coming soon. --AnatolV 23:30, 7 July 2016 (UTC)
Hi again Rdm: I went to the File list and discovered that all file uploads have been blocked since 6/2/2016; even admins have no uploads. Is it some sort of technical glitch? Or a result of fighting spam too hard? --AnatolV 17:35, 13 July 2016 (UTC)
I am not the right person to ask about that. Try Short Circuit. --Rdm (talk) 19:29, 13 July 2016 (UTC)

request for another J program version

For the Rosetta Code task Prime conspiracy,   I'd like to humbly request that ya   (what? ··· more work?)   add another version that displays the frequency occurrence(s) in terms of a percent (%);   it would make it easier (for my ole eyen) to visually compare to other results.


I'm sure that it's practically a no-brainer in J.

Also, why do the one-off's frequency occurrences show up as a very large number (1e_6   or one million)   instead of the equivalent to 1/1,000,000 ?   -- Gerard Schildberger (talk) 00:13, 23 March 2016 (UTC)

I'll add a formatted version. But _ followed immediately by numeric digits is J's "minus sign". So 1e_6 is 1 divided by a million. J uses "_" instead of "-" for minus so that negative numbers in a list of space separated numbers do not turn into an attempt to subtract one part of the list from another part. --Rdm (talk) 02:29, 23 March 2016 (UTC)
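A quick illustration of the notation (just a J session transcript, nothing task-specific):

    0.000001 = 1e_6    NB. the _ before digits is J's negative sign, so 1e_6 is one millionth
 1
    2 - 3              NB. results display with the same high minus
 _1
    2 - 3 1            NB. while - between numbers is subtraction (here, a scalar minus a list)
 _1 1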
Ah!!   With a little more sleep, I could've figured that out   (J's  1e_6   is   1e-6).   I mistook it for some kind of blank.   Soooooooo, ... nevermind.   -- Gerard Schildberger (talk) 02:36, 23 March 2016 (UTC)
That's something of a common problem with J - it all seems obvious only *after* you understand what it's doing... --Rdm (talk) 02:40, 23 March 2016 (UTC)
Yuppers.   -- Gerard Schildberger (talk) 02:46, 23 March 2016 (UTC)
And, thanks for adding the   %   thingy to the Rosetta Code task   Prime conspiracy.       J is sure a concise puppy, eh?   -- Gerard Schildberger (talk) 02:46, 23 March 2016 (UTC)
It can be, it kind of depends on what you are doing, though. In this case, all I needed to do was multiply by 100 (and since I was already dividing by the number of prime pairs to get that ratio, I just rolled this change into that divisor - adding a decimal point in the right spot) and add a percent sign (which was not actually all that concise, considering I needed to add six characters of code just to add a single character, in the right place, to my output rows). --Rdm (talk) 02:51, 23 March 2016 (UTC)
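Roughly, the shape of that formatting step looks like this (a standalone sketch with made-up numbers and an invented name, not the actual task code):

    fmtpct =: ,&'%'@(0j4&":)@(100&*)   NB. scale to percent, format with 4 decimals, append '%'
    fmtpct 1465 % 10000                NB. a made-up frequency: 1465 out of 10000
 14.6500%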

help on fixing an apparent change

Sometime in the past few months, a change was made   (perhaps to Rosetta Code?)   that now mangles certain character strings within the   <pre>   HTML tag.

Apparently, there are certain strings   (containing commercial at signs @)   that now cause the text


  [email protected]▓


to be displayed embedded within the displayed text,   and where the   ▓   character is (for me) an unviewable glyph which has the code:   ff over fd   (in a "box").

This can be observed in a few of my REXX's   output   sections.

You can view this anomaly with the 2nd and 3rd REXX's   output   (versions 2 and 3)   sections for the Rosetta Code task   Mandelbrot set.

There are some (a small number of) mangled REXX outputs for other Rosetta Code tasks as well.   This mangling didn't happen when I entered the affected (REXX) outputs   (entered possibly years ago).

Another example (it was just entered):   the   J   entry's output for the Rosetta Code task     Compile-time calculation     -- Gerard Schildberger (talk) 20:35, 9 April 2016 (UTC)
Thanks for catching that. I've put a note there, for now. --Rdm (talk) 20:58, 9 April 2016 (UTC)

I was always under the impression that Wiki won't change or reformat any text with a   <pre>   HTML tag.

It appears (possibly) some change was made to Rosetta Code to try to protect (or hide) people's e-mail addresses, most likely because of the rash of spamming   (or whatever ya call it)   for all the many imbedded links and such.

If you think I should contact somewhere else concerning this problem, please advise.   -- Gerard Schildberger (talk) 06:03, 1 April 2016 (UTC)

This sounds like a change induced by an upgrade of the mediawiki software. It looks like someone upstream put in some kind of sloppy stop-gap hack to defeat spammers, and you're getting hit by that. The ideal approach would be to read up on mediawiki docs to find out how to reconfigure or change this. And, then, when you figure out what to do about it, you should let Short Circuit know. (Thanks!) --Rdm (talk) 06:27, 1 April 2016 (UTC)
My expertise regarding HTML tags is of the sort:   monkey see, monkey do.   If I see someone else use some nifty thingy, I then use that new gizmo feature   (for a lack of a better description).   Actually reading up on mediaWiki and all its ..., er,   quirks   (my word) is currently beyond my skill set.   -- Gerard Schildberger (talk) 06:36, 1 April 2016 (UTC)
Hmm... well... Personally, I have not run into this problem yet. One possibility might be to ask Short Circuit to install the ConsoleOutput Extension. Or maybe there's another extension that would better match what you like? --Rdm (talk) 07:06, 1 April 2016 (UTC)
I haven't any idea what a   console extension   is or what it does (or is supposed to do), and that's after I read the link you included.   What I'd like is for the <pre> HTML tag to behave like it did a year ago (or whenever).   (And, what the hell, I'd also like world peace and an end to world hunger).   I'm also concerned that   everyone   is having this problem viewing text that has the character   @   spewed around.   I see this as not only my problem, but also MediaWiki's (or possibly Rosetta Code's).   One short-term solution would be to just use a different character   (other than @)   for dithering.   But that won't solve the problem of other people who used the   @   character for ASCII art dithering.   -- Gerard Schildberger (talk) 07:21, 1 April 2016 (UTC)
Console Output is like pre, but intended for output of programs. It also has a dark background and allows a bit of markup within it. But the thing is: we're dealing with the php community here, with all that that implies... --Rdm (talk) 07:33, 1 April 2016 (UTC)
As I understand it, the problem is with CloudFlare's email protection. Strangely, only some of the @ characters trigger it. If you look at the page source of the Mandelbrot set page you can see that in the output section for the Vedit macro language these email protection links are also inserted. But my version of Firefox (45.0.1 on a Linux system) ignores them and renders the output correctly. According to the CloudFlare docs you should be able to turn it off by inserting <!--email_off--> and <!--/email_off--> HTML comments but it didn't work. I will try to contact Mike via IRC. --Andreas Perstinger (talk) 17:09, 1 April 2016 (UTC)

Convex Hull

I just stumbled upon a page called Convex hull. It's not listed on the "tasks not implemented in Java" page. Is this supposed to be a draft task? It doesn't have the appropriate template to mark it as such. Or is this deliberate? Fwend (talk) 20:10, 14 April 2016 (UTC)

I've added the necessary template for a draft task.   -- Gerard Schildberger (talk) 20:15, 14 April 2016 (UTC)

$T.REX talk

Looking back at what's been said/posted on the talk page for the $T.REX subject, I would feel better if the whole kit 'n caboodle were moved here, or to my user talk page, especially in view of what's been written by me regarding my mistreatment (as I see it).   I certainly don't want Rosetta Code to be a battleground if people take offense at what's been posted.   -- Gerard Schildberger (talk) 06:33, 10 May 2016 (UTC)

"math" HTML tag not rendering properly

TL;DR: This is now diagnosed, but not yet everywhere repaired, at the time of writing. Hout (talk) 19:05, 20 September 2016 (UTC)
The subset of browsers which display the 'fallback' graphic file, rather than locally processing MathML code (the majority of browsers, in fact) are prevented from displaying the formula graphic by a piece of syntactically ill-formed HTML which can be generated, in response to certain unexpected inputs, by the MediaWiki 1.26.2 processor.
One of the unexpected inputs, introduced to a largish number of task pages over recent months by a well-intentioned program of editorial tidying, simply consists of <math> tags to which redundant white spaces have been added, either before or after the Latex content.
The point to be aware of is that in Rosetta pages, we are not editing actual HTML <math> tags, but rather MediaWiki input <math> tags, which are translated into their HTML counterparts by the MediaWiki processor. It is therefore important to be aware of the behaviour, and the input expectations, of the MediaWiki processor itself.
It is also important to check the real effects of any edits which we make on both kinds of browser - the majority, like Chrome and Safari, that display the graphic file, and the minority, like Firefox, that generate an image by local processing of MathML code, and depend on the local installation of requisite fonts. Hout (talk) 19:05, 20 September 2016 (UTC)

At least two people   (Rosetta Code users),   "WillNess" and "Walterpachl",   have noticed that the   <math>   HTML tag doesn't appear to be rendering properly   --- at least, on two separate Rosetta Code tasks.

The first occurrence was a few days ago when Walter Pachl noticed that the   <math>   HTML tag within the Rosetta Code task   Carmichael 3 strong pseudoprime   wasn't being rendered   (it was showing up as blanks)   on his screen.   Walter had sent me via e-mail a screenshot of his terminal screen.

Then today, user   WillNess   said that the   <math>   HTML tag in the Rosetta Code task   Hamming Numbers   wasn't being rendered properly.

Both Rosetta Code task preambles have been fixed by using other HTML tags to get around the problem.

Now, I had never noticed anything wrong;   I'm using an outdated (old) Firefox Aurora.   My operating system (Windows/XP) won't allow me to upgrade my old Windows Internet Explorer, so I can't (from my computer) determine where the problem is as far as a possible error in --a-- web browser.

Just today, I "fixed" the   Carmichael 3 strong pseudoprime   preamble to not use the   <math>   HTML tag.   I changed it to use the <i>   HTML (italics) tag instead.

However, this may just be hiding the original problem.   There are plenty of other Rosetta Code tasks using the <math>   HTML tag,   and I was thinking that maybe the problem should be addressed by someone who has more (working) web browsers and can try to find out where the problem lies.   -- Gerard Schildberger (talk) 19:22, 5 July 2016 (UTC)


I don't know what web browser(s) that users   WillNess   and   Walterpachl   are using.

I'm hoping that you may find out where the problem lies or know someone who has the ability and/or tools to diagnose this problem.   Not to mention the time.   Thank you in advance.   -- Gerard Schildberger (talk) 19:33, 5 July 2016 (UTC)

It has been flaking out, for me, also. Worse, approaches which work sometimes fail other times. But if it has been working for you this probably means that the problem resides not in the rosettacode implementation but in the recent releases of defective browsers. (And this also suggests both management failures and open source community failures.)
Anyways, for a diffuse problem like this, maybe we just need to take a different approach. For now, I guess, I try to read the markup in the source, and I sometimes try to find a browser that will render the stuff. That's probably not a good solution... --Rdm (talk) 21:28, 5 July 2016 (UTC)
Following the discovery that formulae were consistently ceasing to be visible on OSX and iOS Chrome and Safari after Gerard made some edits which included introducing two white space characters into <math> tags (one before and/or one after the Latex code) - I ran a diff on the HTML source code generated by the wiki software. First without, and then with the flanking spaces.
It turns out that the wiki preprocessor does not pass those redundant spaces through, but instead responds to them (for reasons best known to its writers and its particular history) by injecting a lengthy and redundant "alttext" attribute into the math tag.
The intended effect of an alttext attribute in the top level math tag seems unclear. This Mozilla page suggests that the attribute is marginal https://developer.mozilla.org/en-US/docs/Web/MathML/Element/math, and the current MathML standard says that it: "provides a textual alternative as a fall-back for user agents that do not support embedded MathML or images."
Given that the redundant injection of flanking white space around Latex content in <math> tags changes the code generated by the Wiki preprocessing software in this way, and has the effect of preventing the display of formulae on the main browsers used on two large platforms (99% of iOS browser usage and 90% of OS X browser usage – see http://www.zdnet.com/article/which-browser-is-most-popular-on-each-major-operating-system/) I recommend that the use of these spaces is explicitly discouraged in the Rosetta code editing guidelines. The practice adds no value for users, but does entail costs. Hout (talk) 21:40, 16 September 2016 (UTC)
Ah - getting clearer now, on closer reading of the diff.
It looks as if the injection of the fallback alttext attribute is prompted by some kind of error condition, and is not itself the source of a failure to display.
The deeper problem is that when the redundant flanking spaces are introduced into the math tag by a human editor, a bug in the wiki preprocessor generates some ill-formed HTML code, dropping a semicolon at the end of a vertical-align attribute and concatenating the alignment value straight into the name of the following height attribute, so that we get something like: vertical-align:-2.671exheight where there should be a semi-colon between the letters x and h.
Restoring the missing semi-colon between the vertical-align and height attributes proves sufficient to restore the lost visibility on the main iOS and OSX browsers.
In short - there is a bug in the wiki pre-processor. Adding the redundant spaces to a math tag leads to the generation of an ill-formed fall-back image metatag, with a corrupted pair of placement attributes. Not a bad reason for scrupulously avoiding the injection of redundant space into <math> tags, and clearly no fault of the browsers. Bug report, someone? Hout (talk) 22:39, 16 September 2016 (UTC)
Thank you for your efforts to locate where the problem lies.   What I don't understand is why it works for Firefox and Microsoft Internet Explorer, and not the iOS and OS X browsers?   Shouldn't it be failing on all web browsers?   (I'm not quite sure where the Wiki pre-processor "fits in" with the browsers.)   -- Gerard Schildberger (talk) 22:50, 16 September 2016 (UTC)
The pre-processor doesn't "fit in" with the browsers, it simply generates the HTML code which they all read. Browser behaviour with syntactically ill-formed code is undefined. One could even argue that the Chrome and Safari non-display of an image placement tag with corrupted placement attributes is more "correct" than the Firefox response of concealing the problem and making a guess. Both approaches are understandable. Hout (talk) 22:56, 16 September 2016 (UTC)
Since it is now known that "flanked whitespace" causes ill-formed code in the Wiki pre-processor being used on Rosetta Code, then why not (solely/only) remove those whitespaces, rather than also changing (removing) the use of larger fonts (larger fonts [using the BIG HTML tag] make the formulae easier to read)?   Making the formulae easier to read/peruse was the whole intent of the changes in the first place.   The baby is being thrown out with the bathwater.   -- Gerard Schildberger (talk) 19:54, 17 September 2016 (UTC)
In most cases removing the flanking white space which you introduced proves sufficient to allow the MediaWiki processor to start generating syntactically correct code again. In some cases it does not prove sufficient to do that, but reverting the code to the state it was in before your edit does prove sufficient. More diagnostic work would doubtless reveal exactly what other aspects of your changes have proved unexpected or indigestible to the MediaWiki process, but all the work of diagnosing the problem, restoring visibility to these formulae, and, most time-consuming and exhausting of all, gradually overcoming your puzzling personal reluctance to accept and understand what has been happening, has already cost me more time than I can afford. It may be that in some cases the processor is unable to generate a graphic file as large as your double "big" tags are requesting, in the space that it calculates to be available.

Mr Hout:   You needn't worry about my supposed reluctance to accept what has been happening.   You are assuming and/or interpreting my beliefs about what has been happening wrongly, and most of your assumptions are wrong concerning my knowledge and acceptance of what's occurred to cause the failure.   I've learned what is triggering the failure in the Wiki pre-processor and how it causes some browsers to not render the formulae.   Your personal snipes about my supposed puzzlement don't need to be voiced; it is wrong to assume that, and also wrong to voice it in such a way that it seems that my reluctance is the cause of your wasted time.   If it's important for you to know what I know or accept, then ask me directly instead of wrongly interpreting what I believe.   You're not overcoming my reluctance; I already understand the problem, and I learned more from Rdm's statements and thoughtful wording of the problem.   I also understand that it is important for people to view the formulae, and that removing the whitespace will cause (or should cause) the correct rendering of the formulae.   Whether or not you believe I understand the problem shouldn't affect fixing the problem (or implementing a work-around).   I don't see the need for voicing such negative and inappropriate (and incorrect) judgements or concerns about my intentions or beliefs.   -- Gerard Schildberger (talk) 21:16, 17 September 2016 (UTC)


In short, I am doing only the minimum required to restore visibility to formulae unintentionally hidden by your edits. I quite understand that it must be distressing to see some part of your work undone. I hope you can make the effort to understand that it might also be distressing to see the formulae vanishing entirely ... Hout (talk) 20:12, 17 September 2016 (UTC)
No, you're wrong about my distress.   I do not see it that way at all; the underlying problem should be fixed, or at least, in the meantime, the rendering of the formulae should be changed so they can be seen/viewed by everyone.   Nothing is being undone (externally); the formulae are being edited/changed so they can be rendered for all browsers.   Please don't ascribe/assume feelings to me that aren't accurate, or that are presumptive.   This only detracts from the civility here on Rosetta Code and clutters up the discourse.   I don't have to make an effort; I   already   understand the need to make the changes (quickly) so that the formulae don't vanish entirely.   -- Gerard Schildberger (talk) 21:16, 17 September 2016 (UTC)
If anyone else would like to take a turn and try the experiment with these repairs, look at Heronian Triangles. Gerard's edits have inadvertently rendered the formulae on that task page invisible to browsers which display the graphic file, and it is one of the cases where just removing the flanking white spaces which Gerard introduced inside Math tags is not enough, but reverting the formula code entirely to the state it was in before his edits does restore visibility. Hout (talk) 20:25, 17 September 2016 (UTC)

This isn't the only bug that exists (at least, as far as rendering HTML code).   I know of four others that would fall in the "flaky" category: as one widens or contracts the window, "things" disappear or re-appear (mostly the top or bottom of a "box", or internal lines of a grid (table)), so I had suspected a browser problem or perhaps an operating system (Microsoft Windows in my case), but I didn't have the knowledge or tools to diagnose it and/or pursue problem resolution.   One such problem I observed around three or four (?) years ago in rendering a vinculum (for a square root glyph): it worked on FireFox, but not on Microsoft Internet Explorer.   It turned out that MS' I.E. rendered some text wider and FireFox didn't, so it showed on one browser but not the other, but it was mainly because, at that time, I had a really wide high-resolution monitor, and something was apparently tripping on the same problem that is now being observed.   To make the problem resolution a wee bit more complicated, I had dual (identical) monitors, each with a different browser (and driver protocol), so I thought it might be a monitor or driver problem.   That particular problem was really driving me batty (costing me more than a few hours); ... it was there, I made a small change, and the next time ... it wasn't.   The small (benign) change could be just changing my socks.   -- Gerard Schildberger (talk) 23:13, 16 September 2016 (UTC)


Rdm:   I finally --- after dealing with being mostly bed-ridden (I'm more or less restricted to sitting up for short periods, so I'll try to make this brief) --- got my (Windows/XP) system somewhat repaired (I think), and I can now post thingys to Rosetta Code via FireFox (Aurora), but my Microsoft Windows Internet Explorer (version 8) is still flaky and can't open most webpages.

After doing some rather vague (Google) searches, I finally found something that touches on this problem (on Rosetta Code) and apparently solves the underlying problem, but I have no idea if that "patch" applies to any software that Rosetta Code is utilizing.   That bug/problem (below) started occurring (elsewhere) on or before April 12th, 2016.

The circumstances that caused the missing semicolon were not mentioned, however.


The fix is referenced on:   https://gerrit.wikimedia.org/r/#/c/283166/


Here is an excerpt:

 Change 283166 - Merged
 Ensure use of ; to seperate (sic) svg styles It was reported that under certain 
                 circumstances a semicolon was missing from the SVG style. 
 Bug: T132563 
 Change-Id: I148433657848fdc74889fcaf6d883078c46a4006 


Here is a description of the bug:

I believe there is a bug in the way the SVG fallback image HTML code is generated when 
using MathML + SVG fallback and a Mathoid server.
Here is the HTML code that I obtain for an SVG fallback image:

<meta class="mwe-math-fallback-image-inline" aria-hidden="true" style="background-image: url('/wiki/index.php?title=Special:MathShowImage&hash=2caf40baf06d5cb633d350e651164506&mode=mathml'); background-repeat: no-repeat; background-size: 100% 100%; vertical-align:-2.338exheight: 6.176ex; width: 24.761ex;" />

There is a missing semicolon between vertical-align:-2.338ex and height: 6.176ex, and 
that causes the SVG image not to display on the page.
The problem appears to come from the function correctSvgStyle( &$style ) in MathMathML.php.

I could fix the bug by changing the line

     $style .= ' ' . $styles[1]; // merge styles

into

     $style .= ' ' . $styles[1] . ';'; // merge styles

Elsewhere, it was said:

Such a picture has zero height and thus is invisible (in chrome).
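
For anyone curious why the lost semicolon blanks the image: without the separator, the height declaration is swallowed into the (now invalid) vertical-align value, so the fallback element ends up with no height at all. A minimal sketch of that effect (a crude approximation of splitting CSS declarations on semicolons; this is not the actual browser or MediaWiki code):

 # crude illustration: split the inline style on ';' and see which
 # property names survive; 'height' is lost in the broken version
 broken = "vertical-align:-2.338exheight: 6.176ex; width: 24.761ex;"
 fixed  = "vertical-align:-2.338ex; height: 6.176ex; width: 24.761ex;"
 
 def property_names(style):
     names = set()
     for decl in style.split(";"):
         if ":" in decl:
             name, _, value = decl.partition(":")
             names.add(name.strip())
     return names
 
 print("height" in property_names(broken))   # False -- zero-height element, image hidden
 print("height" in property_names(fixed))    # True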

This is, as the saying goes, way over my pay-grade.   I have no idea even where the fix goes (to be applied), nor if that fix is applicable to the software on Rosetta Code.

But it sure seems to address the issue of the missing semicolon in the SVG style.   It was said elsewhere on Rosetta Code that the Wiki pre-processor is slightly back-level.   Maybe updating that piece of software would pick up such a fix, if it has already been applied upstream?

But, in any case, I assume you know who to contact (in Rosetta Code land) to see if this fix/patch is applicable here.   -- Gerard Schildberger (talk) 21:33, 25 September 2016 (UTC)

The real underlying problem is that two different formula display methods are used by current browsers, and that these are served by two different parts of the generated code.
Even if, as we hope, this particular fragility happens to get fixed at some point, it will remain unsafe and imprudent to make formula edits in one class of browser without checking their real effects in the other. Hout (talk) 22:03, 25 September 2016 (UTC)

Sieve of Eratosthenes

The J verb sieve0 fails on an argument of 2. You have to do something like 2=+/0=(i.3>.y)|/i.y rather than 2=+/0=|/~i.y . Roger Hui (talk) 14:03, 27 July 2016 (UTC)

Thank you. Fixed. (Or, at least in principle - I am currently waiting for the edited page to show up.) --Rdm (talk) 16:07, 27 July 2016 (UTC)
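
For readers who don't read J, here is a rough Python transliteration (an illustrative sketch, not the task's actual code) of what the divisor-counting expression computes, and why the original form misbehaves when the argument is 2:

 # sketch: the J expression 2=+/0=|/~i.y marks n in 0..y-1 as prime when
 # exactly two trial values d in 0..y-1 "divide" it, using J's rule 0|n == n
 def divisor_count_sieve(y, trial_limit=None):
     if trial_limit is None:
         trial_limit = y                              # original form:  |/~ i.y
     residue = lambda d, n: n if d == 0 else n % d    # J's  d | n
     return [int(sum(residue(d, n) == 0 for d in range(trial_limit)) == 2)
             for n in range(y)]
 
 print(divisor_count_sieve(2))                 # [1, 0]  -- wrongly marks 0 as prime
 print(divisor_count_sieve(2, max(3, 2)))      # [0, 0]  -- the (i.3>.y) fix
 print(divisor_count_sieve(10, max(3, 10)))    # marks 2, 3, 5, 7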

Tasks still affected by MediaWiki <math> tag issue as of Sept 21 2016

The following 54 Rosetta task wiki pages still generate ill-formed HTML which prevents display of graphic (usually formula-displaying) files. Most, but not all, of these arise from a recent program of cosmetic edits in which redundant white spaces were introduced into <math> tags, and these were flanked by a pair of <big> tags. It was not appreciated by the editor involved that the MediaWiki processor did not anticipate or properly handle some of the input patterns that were introduced, and their effects were, unfortunately, only tested on a minority type of browser which does not use graphic file display for formulae. The fact that formula after formula was being left completely invisible to most browsers took several months to sink in and be properly understood.

In a number of tasks the edits have now been reversed and formula visibility restored; 54 remain unfixed.

Names of tasks still affected

"A+B" (now fixed – Hout (talk) 08:23, 22 September 2016 (UTC))
"Ackermann_function"
"AKS_test_for_primes"
"Amicable_pairs"
"Arbitrary-precision_integers_(included)"
"Arithmetic-geometric_mean"
"Benford's_law"
"Carmichael_3_strong_pseudoprimes" (now repaired – Hout (talk) 09:50, 22 September 2016 (UTC))
"Casting_out_nines"
"Check_Machin-like_formulas"
"Chinese_remainder_theorem"
"Conjugate_transpose"
"Constrained_random_points_on_a_circle"
"Deal_cards_for_FreeCell"
"Display_a_linear_combination"
"Egyptian_fractions"
"Elliptic_curve_arithmetic"
"Equilibrium_index"
"Euler_method"
"Farey_sequence"
"Faulhaber's_formula"
"Fractran"
"Gamma_function"
"Hash_join" (now fixed – Hout (talk) 19:42, 22 September 2016 (UTC))
"Heronian_triangles"
"Hofstadter_Figure-Figure_sequences"
"Hofstadter_Q_sequence"
"Identity_matrix"
"Integer_roots" (now fixed – Hout (talk) 08:29, 22 September 2016 (UTC))
"Jaro_distance" (now fixed – Hout (talk) 20:02, 22 September 2016 (UTC))
"Josephus_problem" (Repaired ---Paddy3118 (talk) 15:30, 28 October 2016 (UTC))
"Knuth_shuffle"
"Least_common_multiple"
"Ludic_numbers" (now repaired – Hout (talk) 21:08, 22 September 2016 (UTC))
"Modular_exponentiation"
"Modular_inverse"
"Monte_Carlo_methods"
"Multifactorial"
"Multiple_regression" (now repaired – Hout (talk) 21:23, 22 September 2016 (UTC))
"Nth_root"
"Permutations_with_repetitions" (now fixed – Hout (talk) 08:26, 22 September 2016 (UTC))
"Pi" (now repaired – Hout (talk) 21:33, 22 September 2016 (UTC))
"Pythagorean_triples"
"Quaternion_type"
"Real_constants_and_functions"
"Runge-Kutta_method"
"Sattolo_cycle"
"Shortest_common_supersequence"
"Subtractive_generator"
"Sum_of_a_series"
"Sutherland-Hodgman_polygon_clipping"
"Test_integerness"
"Thiele's_interpolation_formula"
"Trabb_Pardo–Knuth_algorithm"
The list is generated by searching through the task HTML for the pathological string exheight, which arises when a semicolon is missing between a vertical-align attribute and a height attribute. Hout (talk) 20:17, 21 September 2016 (UTC)
There is a full list of repairs so far (I am aiming to update it weekly), at http://rosettacode.org/wiki/User_talk:Gerard_Schildberger#Restoring_formula_visibility_to_50.2B_tasks_for_Chrome.2C_IE.2FEdge.2C_Safari_etc
As of today, we are down to 34 tasks whose visibility to most browsers has yet to be restored. Hout (talk) 18:16, 28 October 2016 (UTC)
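For anyone who wants to repeat or verify the scan, here is a minimal sketch along the lines described above (the URL pattern and the abbreviated task list are assumptions; this is not the script actually used):

 # sketch: fetch each task's rendered HTML and flag pages that still
 # contain the pathological string 'exheight'
 from urllib.request import urlopen
 from urllib.parse import quote
 
 tasks = ["Ackermann_function", "AKS_test_for_primes"]   # ... etc.
 for task in tasks:
     url = "http://rosettacode.org/wiki/" + quote(task)
     html = urlopen(url).read().decode("utf-8", errors="replace")
     if "exheight" in html:
         print(task, "still affected")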

Upgrading generator to MediaWiki 1.27 ?

In case this is not already in hand, I notice that Rosetta pages are currently generated by <meta name="generator" content="MediaWiki 1.26.2"/> and that 1.27 is available at https://www.mediawiki.org/wiki/Special:ExtensionDistributor/Math

Discussion here:

https://phabricator.wikimedia.org/T136089

suggests that 1.27 might alleviate the problem of formula images made invisible (zero height) when a semicolon is lost between two attributes. Hout (talk) 21:41, 25 September 2016 (UTC)

Caveat: I saw and fixed a similar problem yesterday on Wikipedia (redundant space in <math> tag triggering the same loss of semicolon and visibility), despite the fact that Wikipedia is using a later build of the generator. Hout (talk) 21:52, 25 September 2016 (UTC)


fixing "language" entries so that they appear as languages

I noticed that some language entries   (perhaps improperly configured, wrongly set up, or otherwise)   are not appearing as languages   (in the Category:Programming Languages   page).   Lately, I noticed another:   Shapely.

Undoubtedly, there are others   (and I now regret not writing them down in a list of some sort).

I think it seems/appears to have the   #REDIRECT   thingy "backwards".

I have in the past fixed a number of them, but I am now reluctant to fix such errors at this point.   Some of these improperly set-up languages are a bit beyond what I know about how to fix such things.   Perhaps you could fix and/or address this one language (definition) entry (the one that I know of) so that it appears where it ought to appear.     -- Gerard Schildberger (talk) 09:23, 26 March 2020 (UTC)