An EULA is a license agreement, a contract of sorts. Breaching a contract you agreed to is actionable in court under the right circumstances, but it's not "illegal" in the criminal sense (otherwise, companies could effectively write their own laws). EU courts often find EULAs unenforceable for a host of reasons (we're very consumer-friendly over here).
Furthermore, you are only breaching the contract if you accepted it in the first place; it only governs what you can do with a product you actually purchased (and whose EULA you accepted). So if you are doing black-box reverse engineering, there is nothing the company can do to prevent you from figuring out how their physical product works.
Finally, as another comment noted, in most parts of the world reverse engineering for interoperability purposes is protected by law.
I had the same question, until I discovered that I should press the arrow in the box, not the arrow near the bottom bar... (Ah, I love other music games, and this one has no music?!)
(Disclaimer: I'm not trying to argue about who invented computers first.)
Yeah, there is a famous computer pioneer who created a very elegant Chinese input method named "Cangjie", implemented it in assembly, and it could be used with a special Chinese CPU. (He's an old man now.)
The keys can be printed out on paper. If you set a strong enough passphrase on the secret keys, you can also upload them somewhere.
I would rarely use my special keys (or the passphrases of the secret keys), so as to keep them safer. Simpler authentication is also more efficient for everyday web services; GPG authentication may not be fast enough.
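For reference, here is a minimal sketch of exporting a secret key in ASCII armor for a paper or off-site backup, assuming GnuPG 2.1+ is installed; the user id and passphrase below are made-up placeholders, and a throwaway keyring is generated just for the demonstration:

```shell
# Use an isolated, throwaway keyring for this demo (mktemp -d gives 0700 perms).
export GNUPGHOME="$(mktemp -d)"

# Generate a demo key non-interactively (hypothetical uid and passphrase).
gpg --batch --pinentry-mode loopback --passphrase 'demo passphrase only' \
    --quick-generate-key 'Demo User <demo@example.com>' default default never

# Export the secret key, ASCII-armored, suitable for printing or storing elsewhere.
# The export stays protected by the key's passphrase.
gpg --batch --pinentry-mode loopback --passphrase 'demo passphrase only' \
    --armor --export-secret-keys demo@example.com > secret-key-backup.asc

head -n 1 secret-key-backup.asc   # -----BEGIN PGP PRIVATE KEY BLOCK-----
```

For actual paper backups, a tool like paperkey can shrink the printout to just the secret parts, since the public half can always be re-fetched.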
Sorry for disturbing you with my previous comment. My real questions are: Would a robot with AI be treated like a human baby? Who has permission to create AI? When would you be allowed to destroy such a robot? If you don't want AI to end the human race, there are two cautious choices: 1) never invent AI, or 2) never treat AIs as humans, and destroy them when they do not obey your orders.