I would guess OSM uses optimizations for Euclidean (metric) graphs, where the direct path a->c is never longer than a->b->c (the triangle inequality). This restriction makes e.g. TSP efficiently approximable (Christofides' algorithm guarantees a tour within 1.5x of optimal), though it remains NP-hard. But this property does not hold for arbitrary graphs.
I don't know if this makes visualisation also easier.
Technically, if you've got a bumpy dirt track a->c and a freeway a->b->c then the travel time on the latter route can be shorter.
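A minimal sketch of such a violation, with invented travel times:

```python
# Hypothetical travel-time graph: the direct edge a->c (dirt track)
# is slower than the detour a->b->c (freeway), violating the triangle
# inequality that metric shortest-path and TSP heuristics rely on.
travel_time = {
    ("a", "c"): 45.0,  # bumpy dirt track, direct but slow
    ("a", "b"): 10.0,  # freeway on-ramp
    ("b", "c"): 15.0,  # freeway
}

def violates_triangle_inequality(w, a, b, c):
    """True if the detour a->b->c beats the direct edge a->c."""
    return w[(a, b)] + w[(b, c)] < w[(a, c)]

print(violates_triangle_inequality(travel_time, "a", "b", "c"))  # True
```

Any algorithm that prunes a->b->c paths on the assumption that the direct edge is at least as good will pick the dirt track here.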
Of course, they do get to dodge a major problem: That high-dimensional data is hard to visualise in an understandable way. Everyone knows what a map looks like, nobody knows what a clear visualisation of a set of 100-dimensional vectors looks like.
That's the point of EurKey. Special characters are the same as on the US layout with additional, language specific letters available. At least for writing English and German it has been great for me.
No strong impression either way from that. So, let's look a bit more...
k, there's an "About" button up the top. Clicked that.
Nothing. It just drops down a list of "Blog", "Team", or "Contact".
I don't give a shit about any of those, nor have any interest in them.
Why isn't "About" actually taking me to a page with info telling me WTF it's About?
That's not super shady anyway, just really dumb design.
Moving on, let's look at the "Developers" options. So I click "Developers" then pick the first option, "Desktop apps". That opens a new submenu, so I pick the first page there... "Dashboard".
Instead of anything useful, that takes me to a website where I need to login.
Well. That's the end of my interest. Closed website, never to return.
At least it doesn't seem shady, as it never took me to anything other than the front page even though I tried (briefly).
You might want to advise them a bit more firmly, or differently, or something, as it's clearly not great currently. :/
Thanks for the feedback, but haha it sounds like it actually worked: you're not the target customer and avoided wasting your time or the company's :-)
At this stage of the company, the goal of the website is to provide a validating presence for people who have already heard about them, because they're selling to carefully vetted partners.
The typical flow is (strong reason to engage) => homepage => "deploy on Massive" => book a time to talk.
Massive never seems short of customer interest, and the challenge is more on engineering to safely and efficiently grow.
I guess now is the time to plug the jobs page:
https://www.joinmassive.com/jobs
(I'm personally leading the key searches and feedback very welcome on those listings)
> the goal of the website is to provide a validating presence for people who already heard about them, because they're selling to carefully vetted partners.
You've just wasted my time, and other people's time, on this crap.
Trying to claim that, after quite literally coming here on HN and trying to spruik that crowd to everyone, just makes the case for the earlier critical commenters.
That's the behaviour of someone clearly full of shit.
Thanks for confirming joinmassive is actually a shady operation.
Hopefully this gets into search engine results, so fewer people waste time on this bullshit.
(HNers are welcome to visit my profile and decide if I'm legit after 30 years, Google, Inktomi, and numerous startups)
Many startups sell and work closely with partners earlier in their lifecycle before their platforms are ready for the mainstream. OP was asking about access to lots of low cost CPUs for an application that could be a good fit, so I posted a casual one-liner. Massive is akin to SETI@Home but with a slightly more general SDK, and which complies with privacy, security and opt-in requirements from the major AV companies.
I personally think it's awesome to have another way to monetize that isn't ads or subscriptions, both of which have downsides to users and don't fit every type of application.
You don't have to agree, but I'd *kindly* ask to be afforded the same respect I'd give you, and which frankly I personally deserve.
PageRank and similar ranking algorithms on graphs can be used to detect monitoring attempts in P2P botnets [0] (a botmaster can detect when researchers/law enforcement start monitoring a botnet).
For my master's thesis, I evaluated these algorithms and tried to find ways to prevent detection, and came to the conclusion that it is hard unless you deploy about as many sensor nodes as the botnet has active peers. Those algorithms work really well and are hard to work around.
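As a toy illustration (not the thesis's actual method, and the graph is invented), even plain power-iteration PageRank makes a hypothetical monitoring node stand out: every peer links to the sensor, but the sensor links back to nobody.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Power-iteration PageRank over an adjacency dict {node: [out-neighbors]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if not outs:
                # Dangling node: spread its rank uniformly over all nodes.
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
        rank = new
    return rank

# Toy peer graph: every peer knows the "sensor" (it crawls the botnet),
# but the sensor never appears as a normal peer's upstream neighbor.
peers = {
    "a": ["b", "sensor"],
    "b": ["c", "sensor"],
    "c": ["a", "sensor"],
    "sensor": [],
}
ranks = pagerank(peers)
print(max(ranks, key=ranks.get))  # the sensor dominates the ranking
```

Real botnet monitoring detection works on much noisier churn-heavy graphs, but the basic signal is the same: sensors and crawlers have structurally anomalous in/out-degree patterns.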
Maybe I did not use it as intended, but I have an app running on fly.io that exposes the metrics endpoint on a different port than the webapp itself, so that it is not publicly reachable.
This app suddenly broke and I had to disable the metrics endpoint to get it running again. Since it is a toy project, I didn't investigate further, but internally everything worked fine (the healthcheck got the expected 200 response) while the routing of external requests just broke.
Also, when building and deploying an image, some non-obvious and undocumented changes are made. My app generates version information based on the git repository state. The build process deletes (or ignores, I don't know, I never got an answer to my issue) the fly.toml file in the Docker build context, which resulted in uncommitted changes and a "dirty" repository when building the app. My dirty workaround is to `git checkout fly.toml` in my Dockerfile. It works, but it isn't pretty...
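For illustration, the workaround looks roughly like this in the Dockerfile (the build command is invented; only the `git checkout` line is the actual fix):

```dockerfile
# The fly builder removes/ignores fly.toml from the build context,
# leaving the checkout "dirty". Restoring it before building keeps
# the git-derived version stamp clean.
COPY . .
RUN git checkout -- fly.toml && ./build.sh
```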
I also applaud their user-respecting design choices. E.g. if your browser sets the DNT (Do Not Track) header, they won't show a cookie consent banner and will just assume you selected "reject all".
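A hypothetical server-side version of that policy (the `DNT` request header is real; the function name and return values are invented for illustration):

```python
def consent_from_headers(headers: dict) -> str:
    """If the browser sends DNT: 1, treat it as an explicit
    'reject all' and skip the consent banner entirely."""
    if headers.get("DNT") == "1":
        return "reject-all"  # honour Do Not Track, no banner shown
    return "ask"             # no preference expressed: show the banner

print(consent_from_headers({"DNT": "1"}))  # reject-all
print(consent_from_headers({}))            # ask
```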
My sites don’t present a cookie banner to anyone, and don’t check for the DNT header. We simply don’t spy on our users. Any cookies we set are session and other “essential” cookies.
We do that for all sites we (a nonprofit / education / science / arts web agency) build. Seems like such a no-brainer to me - if someone has expressed a preference like that, you're only going to annoy them with a cookie popup, which they're most likely going to reject.