Talk:Break OO privacy

From Rosetta Code

Should this task remain?

Is this a correct subject for RC? Will it reflect badly on RC? Will it attract the wrong kind of audience to RC? --Paddy3118 03:42, 6 August 2011 (UTC)

I think that if it is something that some languages can achieve then the task should remain. Not every idea is necessarily good practice in all languages, but if it is achievable, then it can still be demonstrated for comparative purposes. Markhobley 08:23, 7 August 2011 (UTC)

Context?

What is the context for this task?

In other words, are we talking about using a debugger? Or are we talking about implementing an inheritance hierarchy? Or are we talking about code analysis? Or...? --Rdm 17:49, 6 August 2011 (UTC)

I put an example of one possible interpretation of the task requirements out there. Is that what this task is about? --Rdm 00:41, 7 August 2011 (UTC)

When I saw the task title before reading the details, I assumed it was about accessing “private” fields of an object other than by the explicit code of the object (i.e. the Tcl example is appropriate, the C# and Java ones are not). —Kevin Reid 00:54, 9 August 2011 (UTC)

I agree. The Java one is now restructured to only present the backdoor approach. I lack the expertise to do this with the C# code, but there must be something possible. I guess this means we're getting closer to being able to write the task description more exactly… –Donal Fellows 10:32, 9 August 2011 (UTC)
I have marked the current task description as incorrect. Both the prior Java implementation and the current C# implementation satisfied the task description. So if they are wrong then the task description is also wrong. --Rdm 12:43, 9 August 2011 (UTC)
I modified the task description to reflect what I think was the intent of the task. (Mwn3d, Paddy, are my modifications congruent with what you had in mind?) --Michael Mol 13:31, 9 August 2011 (UTC)
Sorry, I've been away. Back now!
The intent was to show ways to circumvent such protection when, for example, you are given a compiled class and wish to force access to a protected member. The reason for the task is that Python - a language with intentionally weak protection, relying instead on a "we're all consenting adults" approach - mentions that where other languages have a culture of using protected members, there usually exist methods to get around this. I did want to get them in one place; but I also realise that it might undermine those languages that rely more on the obscurity of their protection-hacking methods, so I remain quite willing to junk the whole page if the community thinks it wise. --Paddy3118 19:05, 9 August 2011 (UTC)
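For reference, a minimal sketch of the kind of circumvention Python's documentation alludes to (the class name `Secretive` is made up for illustration): double-underscore attributes are protected only by a predictable name-mangling scheme.

```python
class Secretive:
    def __init__(self):
        # A leading double underscore triggers name mangling,
        # Python's intentionally weak form of member protection.
        self.__secret = "hidden"

s = Secretive()

# Direct access fails: the attribute is stored under a mangled
# name, not the name written in the source.
try:
    s.__secret
except AttributeError:
    print("not directly accessible")

# The mangling scheme is documented and predictable
# (_ClassName__member), so the protection is trivially bypassed.
print(s._Secretive__secret)  # hidden
```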
I'd vote that the task stay. Of course it should have warnings about the practice being frowned upon, running with scissors, playing with live ammunition, that sort of thing. The task need not even have a specific context. I would like to see some elaboration in the task description about its possible uses or value. As languages may have different reasons for this kind of thing, it would also be appropriate to ask each language for some description of why this might be needed. --Dgamey 02:20, 12 August 2011 (UTC)
I'd actually like to use this task to demonstrate that in languages that claim to have protection, this protection can be circumvented. Of course, even nicer would be if we could get proofs for languages where it can't be circumvented, but I guess that may be hard. --eMBee 07:02, 25 October 2011 (UTC)
Shouldn't be that hard on any system that has had a security patch in the last year... If you can access a program's memory, the high level language semantics can be ignored. --Rdm 10:07, 25 October 2011 (UTC)
I mean it is hard to prove that a system is secure, and that private members of an object cannot be accessed.
Of course, anything that happens outside of the process is beyond the control of the language, but the question is whether it is possible to prevent access to private data structures within the process.
If the OS allows random processes to access a program's memory then it almost certainly allows that program to access its own memory. --Rdm 14:02, 25 October 2011 (UTC)
OK, good point. We need to limit the scope then. Can we assume that at least Unix-based OSes do not allow a program to access another program's memory?
And is it fair to say that circumventing OS restrictions is beyond the scope of the issue we are trying to highlight?
Would that be enough to confine attempts to get around protection to cases within the process? --eMBee 14:20, 25 October 2011 (UTC)
Consider a system that allows you to load new code at runtime (as Lisp, Pike, Python, JavaScript and many other languages do): is it possible to build the language/compiler/virtual machine in such a way as to make it impossible for newly injected code to access protected data structures? The question is related to whether it is possible to build sandboxes (like those for JavaScript) that are actually safe. Sandbox developers certainly claim it is possible, and if it is possible for them, then it should be possible for a regular language runtime as well.
Finding out which languages offer this possibility, in particular among languages that allow injecting code at runtime, is a very interesting question. --eMBee 12:44, 25 October 2011 (UTC)
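On the sandbox question, a small CPython sketch of why this is hard: even code evaluated with an emptied builtin namespace (a naive in-language "sandbox") can walk the object graph from a bare literal back to every class loaded in the process. The variable names here are illustrative, not taken from any particular sandbox implementation.

```python
# A naive "sandbox": evaluate untrusted code with no builtins at all.
sandbox_globals = {"__builtins__": {}}

# Attribute access is not restricted, so injected code can climb from
# any literal up to `object` and enumerate all of its subclasses,
# reaching classes the sandbox never intended to expose.
payload = "().__class__.__base__.__subclasses__()"
reachable_classes = eval(payload, sandbox_globals)

print(len(reachable_classes))  # many classes, despite the "sandbox"
```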