Hacker News | gryphel's comments

The Mini vMac download page (https://www.gryphel.com/c/minivmac/dnld_std.html) includes binaries for Macintosh, Windows, Linux, and other platforms.

As recently noted in the build documentation, compiling it yourself is not recommended for most people. (First, the result will be much less efficient than the official binary unless you tweak things for your particular compiler. Second, there is the chance of running into compiler bugs and bugs in Mini vMac that show up only on some compilers - the official binaries are much better tested.)


Quite reasonable. I've edited the footer section with a callout to these recommendations and appropriate links.


If anyone is interested, I ported this source code to compile in MPW Pascal. The modified source and binary are available from: http://www.gryphel.com/c/sw/general/macpaint/index.html


Yes, exactly: the problem is non-determinism. But I think it should be possible to have deterministic execution while still keeping compatibility with existing software, by replacing real time with a "fake" time that is a deterministic function of the stream of instructions executed and is, on average, close to real time. This requires support from the CPU, but it is a relatively simple change compared to changing speculative execution and caching. Fake time could be kept in sync with real time by periodically running a privileged process that takes a fixed amount of fake time but a variable amount of real time. (I brought this up in a different comment thread that got marked as a duplicate.)
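A minimal sketch of the idea, with all names and constants hypothetical (no real CPU exposes this interface): fake time is a pure function of the number of instructions retired since the last resync, scaled by an assumed average cost per instruction, so two runs of the same instruction stream always read the same clock.

```python
# Sketch of a deterministic "fake time" clock (hypothetical model).
# Fake time depends only on the instruction count, so identical
# instruction streams observe identical timestamps, regardless of
# caches or speculation making the real execution faster or slower.

AVG_NS_PER_INSTRUCTION = 0.5  # assumed average cost; keeps fake time near real time

class FakeClock:
    def __init__(self, base_ns=0):
        self.base_ns = base_ns   # set at the last resync with real time
        self.instructions = 0    # instructions retired since that resync

    def retire(self, count):
        """Advance the clock deterministically as instructions execute."""
        self.instructions += count

    def now_ns(self):
        return self.base_ns + int(self.instructions * AVG_NS_PER_INSTRUCTION)

clock = FakeClock()
clock.retire(1000)

# A second run of the same instruction stream reads the same time,
# even if its real execution time differed.
clock2 = FakeClock()
clock2.retire(1000)
assert clock.now_ns() == clock2.now_ns()
```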


... and then you have to make, for example, scheduling of threads based on "fake time".

rr does just this! I think it uses the "instructions retired" performance counter as its "fake time"; that turns out to be deterministic enough for its purposes. Whether it's deterministic enough in a security context, I don't know for sure.

But this approach, though it can run threaded software, will not let untrusted software (which should really mean all software!) use more than one real core or hardware thread.


What's rr?



Rather than changing OoOE, wouldn't it be much simpler to prevent unprivileged processes from seeing the real time? For maximum compatibility with existing software, a CPU could provide a fake time that, on average, passes at the same rate as real time but is completely determined by the instructions executed. If the lengths of process time slices are determined by fake time instead of real time, the process will never be affected by small variations in real execution time, no matter how much time passes.
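To make the security argument concrete, here is a toy model (all latencies are illustrative, not measured): a cache hit and a cache miss differ in real latency, but they execute the same instructions, so a fake time that is a function of instructions alone reads identically in both cases, closing the timing channel.

```python
# Toy model (hypothetical numbers): why a deterministic fake clock closes
# a cache-timing channel. The real latency of a load differs between a
# cache hit and a miss, but the instructions executed are identical, so
# fake time, being a function of instructions only, reads the same.

NS_PER_INSTRUCTION = 0.5  # assumed average used to scale fake time

def timed_load(cache_hit):
    real_ns = 4 if cache_hit else 100  # real latency leaks the access pattern
    fake_ns = 1 * NS_PER_INSTRUCTION   # one load instruction in both cases
    return real_ns, fake_ns

real_hit, fake_hit = timed_load(cache_hit=True)
real_miss, fake_miss = timed_load(cache_hit=False)

assert real_hit != real_miss  # real time distinguishes hit from miss
assert fake_hit == fake_miss  # fake time does not
```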


Unless you never get the real time, doesn't this just push the problem out to statistical analysis of the differences between real and fake time?


The fake time can be synced to real time at the beginning of the time slice of a process. So the process never sees the real time, but the fake time is fairly close to it.


Processes can run a long time and pass data around. This isn’t realistic.


I'm not seeing the issue with a long-running process. If the fake time is regularly synced to real time, it should not matter how long the process runs.

If passing data involves frequent switches between processes, then, yes, I see there would be a problem with syncing on every process switch. I think syncing at longer intervals would solve that problem. All processes could share the same fake time, but see the fake time skip a small fixed amount at fixed regular intervals. That fixed amount of fake time would take a variable amount of real time, which absorbs the small variations in real execution time.
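The resync scheme described above can be sketched as a toy simulation (all numbers illustrative): each interval, fake time advances by a fixed amount for the work plus a fixed skip, while the skip's real duration is whatever is left of the interval, absorbing the jitter in the work's real duration. Fake time is thus fully deterministic yet tracks real time on average.

```python
# Toy model of the periodic resync scheme (numbers illustrative, no real
# CPU API). Every interval, fake time skips a fixed amount; the skip
# takes a variable amount of real time, absorbing jitter in the work's
# real duration.

import random
random.seed(1)  # jitter source for the demo

INTERVAL_REAL_NS = 1000  # fixed real-time length of each interval
WORK_FAKE_NS = 900       # deterministic fake-time cost of the interval's work
SKIP_FAKE_NS = 100       # fixed fake-time cost of the resync skip

fake = real = 0
for _ in range(100):
    work_real = WORK_FAKE_NS + random.randint(-50, 50)  # real duration jitters
    skip_real = INTERVAL_REAL_NS - work_real            # skip absorbs the jitter
    fake += WORK_FAKE_NS + SKIP_FAKE_NS                 # fixed: fully deterministic
    real += work_real + skip_real                       # always INTERVAL_REAL_NS

assert fake == real == 100 * INTERVAL_REAL_NS  # fake stays in step on average
```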


It should be able to run Mac OS software using an emulator, similar to a previous project that used a Raspberry Pi:

http://retromaccast.ning.com/profiles/blogs/honey-i-shrunk-t... https://www.engadget.com/2013/08/28/mini-classic-macintosh-m...

