Hacker News | indspenceable's comments

Seems like no one's commenting that the study isn't actually about marijuana - it's about access to the cafes. My instinct tells me they're probably not entirely off in their conclusion, but it seems like a more complicated relationship - if I couldn't get into the place where all my friends hung out, of course I'd have more time to study!


It appears they accounted for whether study time increased as a result of the ban.

From the study:

  We find no change in reported study hours, which suggests
  that we can eliminate effort adjustments as one channel of
  our results. [1]
My thought on this is focused mostly on the fact that the city was apparently filled with "problematic drug tourism". In other words it appears they're saying the city was becoming (if not already) a "college party town". I'd be curious to know what kinds of "distractions" ended up leaving the city because of the ban.

[1] http://www.restud.com/wp-content/uploads/2017/03/MS20610manu...


Often when I see something about 4d it uses this same analogy: a 2d being seeing a cross-section of a 3d world, in 2d. However, the video itself is in 2d, and it's able to show 3d much more clearly, because when it's showing that example it's got a cutout of the "3d" view (which is still 2d!).

Why can't we do the same thing with 4d? Why does the object just disappear when it bounces into the 4th dimension, can't we maybe see a projection of it onto the 3rd dimension?


I remember playing a flash or java game years ago that did exactly this. It was a 4d maze, rendered in stereoscopic 3d. It displayed the images side by side and you crossed your eyes to get the 3d effect.

It was really disorienting (the 4d, but also the eye-crossing), and like the post author says, just bundles of lines rather than solid shapes. Still very cool.

Ooh, I found it!

http://www.urticator.net/maze/


Thank you so much! I've wanted something like this for ages. Now to figure out how to cross my eyes consistently with astigmatisms...


Idk where I found these. Not 3d though, just stereoscopic 2d. Tetris may be nice for stereo-training, since it doesn't involve any complex movements. But it was harder for me than seeing stills.

http://www.hidden-3d.com/games_stereogram_tetris.php

http://www.hidden-3d.com/stereogram_games.php


"Why does the object just disappear when it bounces into the 4th dimension, can't we maybe see a projection of it onto the 3rd dimension?"

Yes, you can. This program just doesn't.


There’s a big problem with that, which is that we don’t actually see in 3 dimensions. Our eyes only get 2-dimensional projections of a 3-dimensional world.


Our eyes could get 2-dimensional projections of any higher-dimensional world, actually, if one were available. The light-sensitive outline of that little guy in the video could accept photons from any direction in 4-space, not only photons floating in his current 2-plane.

Of course it's an abstraction - I know that physicists aren't happy with heterodimensional settings at human scale.


I am not sure you are correct. We have 2 eyes so that our brain can work out the depth of the image.


We see in 2D, with a tiny bit of depth metadata. True 3D vision would allow you to look at someone, and see the entire volume of all their internal organs simultaneously.


Wouldn't that be xray vision?


The comparison here is, as usual, to go down a dimension. Imagine living in a world where everything is constant vertically, like those old 3D maze screen savers or Wolfenstein 3D. You're really seeing a 1D amount of information about a 2D world (in fact, this is how the rendering is calculated in Wolf3D and other games of its era). You can infer depths to objects if you have two eyes.

Now contrast that to if you were plucked up vertically 'above' the game's level to look down upon it. Now you can see the entire 2D extent of the maze at once. Before, your vision was blocked by the walls, now you see the walls and what's on the other side of the walls simultaneously in a way that's entirely distinct from simply seeing through a transparent object.

Now, like seeing a 1D amount of information about a 2D maze while living inside it, we see a 2D amount of information about a 3D world around us (a picture demonstrates this 2D amount of information - it's planar). Now imagine being lifted out of the 3D plane of existence so that you could behold the entirety of the 3D world at once. That's the rough analogy.


An xray is still a 2D projection, just with different wavelength light.


It's somewhat like having the alpha on everything turned down to varying levels - you can see through the skin, bones and metal implants are still at 100% alpha - but fundamentally no different than what looking at someone gives you: a 2D plane.


The 2D->3D analogy made sense to me -- only being able to see the cross-section that is visible in the current dimensionality -- but I got to wondering if you couldn't use some techniques from information visualization to show a representation of that extra dimension in the current dimensional landscape (i.e. fading the object as it gets farther away from the current dimension).


Maybe they could do something like a fog? That is, if the object was farther away in the 4th dimension, it would appear less distinct or more fuzzy/blurry, like being in the fog.


Miegakure did something similar where it showed a shadow of the 3D space immediately adjacent to the currently visible slice. I think he since removed that, so I expect he must have experimented with something similar in 4D Toys and decided against it.


That's a neat idea! I think it might be a bit overwhelming and confusing, but it would help with getting the bigger picture.

Every 4d object would end up being a clear 3d cross-section where it intersects with our 3d world, plus a cloud of superimposed and increasingly hazier 3d cross-sections above and below us along the 4d ("w") axis, projected down onto w=0 3d-space.
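A rough sketch of that fading idea (all names and the falloff constant here are made up for illustration, not from 4D Toys):

```ruby
# Project a 4D point straight down the w axis onto our w = 0 space,
# fading it with |w| so that farther-away slices look hazier, like fog.
FOG_FALLOFF = 1.5 # arbitrary constant controlling how fast slices fade

def project_to_3d(x, y, z, w)
  { pos: [x, y, z], alpha: Math.exp(-w.abs / FOG_FALLOFF) }
end

project_to_3d(1.0, 2.0, 3.0, 0.0)[:alpha] # => 1.0 (in our slice, fully solid)
```

A renderer would then draw each projected slice at its computed alpha, so only the w = 0 cross-section appears fully opaque.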


Watch the part about the slicing of 2d world looking at the 3d objects. That slice is infinitely large in both dimensions, yet it can't see the 3D objects when they aren't in the same plane.

Now apply the same analogy to our world: our 3D world is just an infinitely large 3D plane in a 4D world. When the objects aren't in our 'plane', we can't see them.


See also Flatland (http://amzn.to/2qKFrOh, aff. link) which is a short novel in part about 2D shapes discovering the third dimension.

Edit: Didn't notice it was mentioned at the end of the comments, https://news.ycombinator.com/item?id=14472395, there's a free version linked from archive.org.


A favorite novel of mine actually! Hard to believe that it was written in the 1800s, yet still Abbott understood the 4th dimension better than most people today.


> Hard to believe that it was written in the 1800s

Aside from all the misogyny and the style of writing, you mean?


It was the 1800s. I'm sure that 150 years from now, everything you and I are writing now will be offensive in ways we don't understand today.


To be fair I think the misogyny was satirical


You obviously know that it wasn't, otherwise you wouldn't have created that throwaway.


Purely theoretical, but the presence of a 3D object can be detected by a 2D creature, by its shadow, depending on the relative positioning of the object, the 2D surface, and the light source. What would a 3D shadow look like, with no need for a wall or flat surface onto which the shadow would be cast? For that matter, what would 4D light look like, if it even differs? Maybe the light we see is only a shadow of something from the 4th dimension, or maybe a portal or doorway we have not yet learned to use to its potential.


Yes, I would also expect a projection rather than a cross-section. One reason could be: perhaps it would make the game too confusing and difficult?

But I'd still like to see the difference.


I would like to see _four_ simultaneous 3d projections of the 4d space, each ignoring (or flattening) one of the 4 dimensions.


Exactly, you could do it with transparency/mesh lines/color gradient - it would be interesting to see.


> These days, when most people talk about AI they’re talking about machine learning. There’s not any of that in SF2.

Actually, AI for games is pretty much never equivalent to AI for non-games. The end goal is entirely different - games want to provide a non-optimal set of instructions, so that it's challenging but not impossible to win.

If you're making a game, this is probably at least a useful example to look at, even if I don't agree with some of the decisions they made (uninterruptible moves, for instance)


I think the problem here is that AI academics think of AI as predicting from factors. Most other people think of an AI as a program that can effectively emulate human intelligence. The end goal of AI (or at least game AI) is to create as perfect a facsimile as possible of the way an average person will think. In reality there are no "best decisions", let alone in a simulated reality inhabited by different rules and AIs.

When I play a game I want the AI to be as close to another human as possible. That's why many AIs are laughed at for being tricked by really simple tactics (circling around a pillar so the AI follows you, putting something between you and the AI so they can't see you, like Skyrim's buckets, etc.).


I wouldn't go that far.

AIs in games often cheat, and they're also dumbed down so that they don't beat humans too fast, but the main difference from "regular" AI comes from the difference in goals and context. AI in most games has to make sufficiently smart decisions in a dynamic environment within a very limited timespan (milliseconds, usually). In that respect, it is pretty similar to algorithms used in robotics, and therefore entirely unlike the stuff regular AI does to suggest you better ads.


Completely agree. Games are typically only "fun" when you feel challenged. Most people do not find "fun" in losing.


The thing is most simple game AI isn't challenging. Or it's challenging in boring ways. Like having 100% perfect aim or long health bars. It's awesome to have an AI that is challenging by actually being good at the game. Having some actual strategy and intelligence.

It's a complete misconception that "real AI" needs to be super hard. Give it realistic constraints like slow reaction times or noisy input. You can handicap it in many ways to control the difficulty. Modern chess engines can easily beat even the best players in the world. But by limiting the number of moves they search, you can set one up that new players can beat.
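The "limit the number of moves they search" handicap is easy to sketch with a toy game. Here's a depth-limited negamax for simple Nim (take 1-3 stones; taking the last stone wins) - everything below is my own illustration, not code from any chess engine:

```ruby
MOVES = [1, 2, 3] # a player may take 1-3 stones; taking the last one wins

# Score for the player to move: +1 = win, -1 = loss, 0 = beyond the horizon.
def negamax(heap, depth)
  return -1 if heap.zero? # the previous player took the last stone: we lost
  return 0 if depth.zero? # search horizon reached: no idea, call it even
  MOVES.select { |m| m <= heap }.map { |m| -negamax(heap - m, depth - 1) }.max
end

def best_move(heap, depth)
  MOVES.select { |m| m <= heap }.max_by { |m| -negamax(heap - m, depth - 1) }
end

# Alternate two depth-limited players; return the index (0 or 1) of the winner.
def play(heap, depths)
  player = 0
  loop do
    heap -= best_move(heap, depths[player])
    return player if heap.zero?
    player = 1 - player
  end
end

play(12, [1, 12]) # => 1: the depth-12 player beats the depth-1 player
```

Cranking the depth up makes the player perfect; at depth 1 every non-winning move looks equal at the horizon, so it just grabs the first legal move - a tunable handicap with no cheating.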

My favorite game AI is from Age of Empires 2. All other RTSes just let the AI cheat like crazy to provide challenge. For AoE2 they went to a lot of work to design an expert system and a custom scripting language for it. Tons of features were implemented to make it easy to write relatively sophisticated AI strategies. And they documented it well and made it easy for modders to write even better AI scripts.

As a result, the AI on hardest can beat all but skilled competitive players without cheating at all (at least the current AI shipped with the Steam version). It's actually fun to play against and isn't a terrible substitute for a real human player.


Similarly, there exist AIs for Starcraft: Brood War that are capable of providing a decent challenge to novice- and intermediate-level human players. You can watch some examples play at http://sscaitournament.com/ and also develop your own AI in one of the more popular programming languages (C++, Java, etc.). Disclaimer: I am the author of one of those Starcraft Brood War AIs.


> Most people do not find "fun" in losing.

Allow me to introduce you to a developer named https://en.wikipedia.org/wiki/FromSoftware ...


And if that's not good enough, there's the one whose game basically opens with 'Losing is fun!': http://dwarffortresswiki.org/index.php/DF2014:Losing


Kind of off topic, but the thing I like about the "Losing is fun!" mantra of DF is that "losing" is not the end of the game. It's just another point in history. So "losing" really just advances the story plot -- which is why it's fun. Conversely, "winning" means that you don't have anything left to do and it's "boring". I don't know any other game with this point of view.


This is just the general rule for simulation games in free-play mode.

Reach a point where all core problems are solved, and then create your own problems (and try to solve them).

It's just the DF community that claimed it as their mantra, and their primary marketing of the game. But all sim games (d)evolve to this.

Kinda like Lucky Strike's "It's toasted!" slogan.


Both of which were influenced in their attitude towards losing by a little game known as Nethack [1].

Although I understand Souls doesn't have permadeath, therefore it's probably only half the fun of Nethack.

__________

[1] https://en.wikipedia.org/wiki/NetHack


Which in turn took from Rogue. Which in turn came from DnD first edition / advanced edition, when DnD was more about navigating dungeon traps than fighting monsters (and not at all similar to modern DnD, which resembles Bethesda RPGs much more than roguelikes).

But original DnD is where my inspiration-knowledge stops.


D&D is descended from wargaming.



https://alt.org/nethack/ is probably the best way to play it (or watch others do so, which is a pretty cool feature)


It depends. With a well-balanced fighting game I don't mind losing to a good human opponent, because it means my game improves. Playing against someone who's good is a great way to find out which moves/patterns you thought were safe actually aren't.

I agree that if you have no way to measure that progress/improvement then the fun is lost.


That actually would be an interesting game mechanic; make sure you lose, but you get more points for how spectacularly you lose...


Now I want a machine-learned fighting game AI, though. Maybe it would even be a challenge for the enthusiasts?

https://youtube.com/watch?v=xSGW7CwD5GM


Someone made exactly that for Super Smash Bros.:

https://www.engadget.com/2017/02/26/super-smash-bros-ai-comp...


Machine learning would be interesting to apply to something like saltybet[0], there's already a paper on the idea[1], dunno if somebody actually did manage to do it.

[0]http://www.saltybet.com/

[1]http://webcache.googleusercontent.com/search?q=cache:VXw25h2...


> (uninterruptible moves, for instance)

I bet this was done to decrease the complexity of the script flow. Otherwise, you'd have to provide for the interruption cases.


I asked the blog's author and he said hits do interrupt scripts. Aside from that, more complex scripts intended for higher difficulty have explicit conditional branches after moves to allow for blocking and counters. The triple fireball script example is for "easy Ryu", so it's intentionally dumb/non-reactive.


What do you mean by non-interruptible moves?


I use the keys on that row, and having physical keys gives me the useful tactile feedback that I've pressed a key. Sounds dumb, but it's real - it's the reason that keyboards are much better to type on than touchscreens.


A quick look doesn't show a way to commit only certain parts of a file (like you can do in vanilla git via `git add --patch`).


"There's also a p/partial flag that allows you to interactively select segments of files to commit." http://gitless.com/#gl-commit

But looking at it, this seems like it would strongly couple the decision to include part of a file with the commit process, rather than being something I can think about as I write the change. I think I'd likely find that irritating.


Or just turn off notifications on every message? I only get notified on @mentions and I don't feel like I'm oncall.


The main reason for not turning off notifications is the fear of missing out, decisions being made without you. And it's a very valid fear.

One team member going away from Slack is bad for the member and for the team. If everyone else participates and you're on the sideline, then you're an oddball and not a team player.

For it to be effective, the whole team has to willingly move away from it.


Do you all work at companies that don't do design reviews and code reviews? Those seem like good ways to keep people in the loop and get feedback on idea/code. Code reviews should be interrupts, but nothing else should be.

I find the amount of work one can practically achieve in a day to be a good limiter of how much people will be kept out of the loop if they are not actively engaging in their email (or whatever system people use). The reality is, no matter how fast you think you're moving, it's not that fast.


We do pretty strict code reviews https://github.com/dgraph-io/dgraph/pulls?utf8=%E2%9C%93&q=i... . Nothing goes into master until it gets LGTMed.

The point was that, if all of your team members are having a conversation but you're not, you'd feel left out. Of course, you might not care, but a lot of people who feel they're an active part of the team do; and Discourse allows us to do that very well.


I agree, this seems very much like a culture problem that wants a culture fix, not a technical one.


Re: Generators - I don't know why it's so hidden, but the Enumerator class allows you to make generators:

    enum = Enumerator.new { |y| a = 0; loop { y << a; a += 1 } }
    enum.next # => 0
    enum.next # => 1
    # etc.

I believe this has existed since 1.8.x.


What makes you think they're hidden? Enumerator moved from stdlib to core in 1.9 and also calling #to_enum isn't needed anymore:

    enum = 0.upto(Float::INFINITY)
    enum.next # => 0
    enum.next # => 1
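And since the whole Enumerable API composes, 1.9+ also gives you Enumerator::Lazy, which chains map/select over infinite sequences without evaluating anything eagerly (a small sketch of my own, not from the parent's example):

```ruby
# Lazily square the non-negative integers and keep the even squares;
# nothing is computed until we actually ask for values.
squares = (0..Float::INFINITY).lazy.map { |n| n * n }.select(&:even?)
squares.first(4) # => [0, 4, 16, 36]
```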


It's not "hidden" per se, it's the first SO result for googling "ruby generators".


Re: back pressure - that's what I was most interested in, and I was sad to see it not addressed. I got a Silent Brass mute for my horn (trombone) a couple of years ago, but found that playing with the mute in (like any other practice mute that I have used) felt extremely different than playing without, especially in the lower registers. Even the older version was magical - making me silent but still able to hear myself - though the cost is long-term usability.


I'm a string player. You can get practice mutes for string instruments, and somewhat analogous to back pressure, they change how the instrument responds to the bow. The result is that you can practice some things (left hand technique, repertoire) but not others.

For myself, I'm practicing for pleasure and not to advance professionally. Muting my instrument would spoil it. Fortunately I've got a tolerant family and a detached house.


Would an electric instrument help, or do they feel too different from acoustic, too?


Too different. Actually, another instrument that I play is electric bass, and I simply practice it unplugged.


I bought a really inexpensive soft tone mute on the basis of this comparison review in the ITA Journal: http://www.trombone.org/articles/library/viewarticles.asp?Ar...

Sadly, that article is a few years out of date. Still, my big takeaway was that I would be willing to forego some attenuation and even cope with modest back pressure if the mute didn't mess with the tuning properties.

I've been pretty happy with the soft tone mute. It's pretty quiet (but louder than most), is better than many practice mutes for back pressure, and seems to have little impact on tuning. For reference, I'm playing a King 3b, no f attachment.

Maybe if I start spending more time with my horn again, I'll consider the silent brass as it seems pretty sweet for recording.


No - it focuses advertising efforts on people who have visited your website. So, you look at a jacket but then aren't sure you want to buy it; retargeting makes you see that jacket again and again until you change your mind. Then you keep seeing it anyway.


You know, I'd almost be willing to give ad companies access to my credit card statements, so they'd know when I bought something and stop telling me to buy it.

And I bet ad companies realize this. This privacy-invasion paradigm is only getting started.


Netflix is a big offender here.


If they know every page I've visited on the site, how do they not know when I've made the purchase?


Yes, it is a lot of money. Those professions make a lot of money too.

I feel a little guilty because I have friends who are doing things which are way better for society (teachers, for instance) and are damn good at what they do, but making only a fraction of what I make. I feel good about what my company does, but I don't think we (or especially other companies that add value essentially just to the tech elite in SF/NYC/etc rather than to people who actually need it) are adding so much more that it justifies the huge disparity in salary.


It's not that teaching is better for society than whatever you do (is it? how would we really know?) - it's that you get immediate "helping someone out" feedback all the time. It must be gratifying on that level (aside from the fact that it's hard to actually reach anyone who feels compelled to "do time" in your classroom). If you feel like you're missing that "good person" feeling, I'm sure you can figure out ways to get it.


If you feel guilty you could always go do those other things that benefit society more or even do some volunteer work so you feel less guilty about making a decent living.


That's my point: I feel like software salaries are so much more than decent living wages. I don't feel bad about what I do personally, but I feel bad about our industry as a whole.

