Hacker News new | past | comments | ask | show | jobs | submit | SaltwaterC's comments | login

tl;dr Lennart is an ass.


It looks like it doesn't like recursion.


The API doesn't bother me as much as the temporary authentication via STS. Temporary credentials for database access? Seriously? Am I the only one who sees how ridiculous this is?


What's wrong with using temporary credentials?


They add useless latency and another point of failure. And sometimes, the AWS APIs DO fail.


Temporary credentials are used specifically to reduce latency, since you get a token that is valid for a period of time and doesn't need to be checked against the auth service on every call. Since the credentials last for 12 hours, the work to retrieve them should be negligible. And because they don't need to be checked against the auth service on every call, it seems like they would be more resilient to API failure than standard AWS credentials.


You still need to provide an access key ID and secret access key along with the session token returned by the Security Token Service, and properly sign every request with those credentials. With a valid session token but invalid credentials, the request fails with HTTP 400: "__type: com.amazon.coral.service#InvalidSignatureException, message: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details." So Amazon still checks the signature, with the same logic that could be used for IAM credentials that don't expire, minus the need to validate a session token. If it's an Amazon screw-up that static credentials take more time to evaluate than temporary ones, that's not our fault; it's purely Amazon's. The signing logic still runs on every API call, so (apparently?) we're back to square one: extra requests and wrapper logic for zero benefit, and a worse overall experience. The session credentials aren't stateful, either; I specifically checked for this behavior. So what you describe doesn't seem to happen in reality.

And for the love of God, don't blindly trust the documentation or what the AWS folks say. As an AWS library author myself, I had a lot of fun debugging failed requests because the smarty who wrote the signing procedure docs forgot to mention some HTTP headers that are mandatory to sign. I had to reverse engineer an official SDK in order to patch my own code: implemented exactly as the docs describe, the signing method failed on every request, even though I had a valid session token. Trial and error forced by broken docs is always a lousy way to develop things. If you're an AWS employee, please send my regards to the documentation folks.
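For the curious, here is a minimal sketch of the SigV4 key-derivation step the thread is arguing about, in Python with made-up credentials. Note what it shows: the session token from STS never enters the signature math at all; it only travels as an extra signed header (X-Amz-Security-Token), while the signature is always computed from the secret access key, temporary or not. (Real requests also need a canonical request and signed-headers list, which is exactly where undocumented mandatory headers bite.)

```python
import hashlib
import hmac


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def sigv4_signature(secret_key: str, date_stamp: str, region: str,
                    service: str, string_to_sign: str) -> str:
    """Derive the SigV4 signing key and sign the string-to-sign.

    The STS session token is NOT used here -- it is merely sent as the
    X-Amz-Security-Token header and included in the signed headers.
    """
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    k_signing = _hmac(k_service, "aws4_request")
    return hmac.new(k_signing, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()


# Made-up credentials and string-to-sign, for illustration only.
sig = sigv4_signature("FAKEwJalrXUtnFEMIK7MDENG", "20130524",
                      "us-east-1", "s3", "example-string-to-sign")
print(sig)
```

Whether the credentials are static IAM keys or a 12-hour STS triplet, this HMAC chain runs on every single request, which is the point being made above.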


Well, since you're complaining, I added SimpleDB support to aws2js. It was like 12 lines of code actually, but since people don't ask ...


While we're at it, can the Adobe folks explain to the rest of the world when Flash is going to suck less? I mean really. I don't care about the on-paper gibberish.


Wikipedia's definition of "notable" is a heap of crap. Maybe somebody should donate Jimmy a dictionary, besides the cash he's looking for.


This is exactly what they said on their blog post:

Long term archival data is different than everyday data. It's created in bulk, generally ignored for weeks or months with only small additions and accesses, and restored in bulk (and then often in a hurried panic!)

This access pattern means that a storage system for backup data ought to be designed differently than a storage system for general data. Designed for this purpose, reliable long term archival storage can be delivered at dramatically lower prices.


Their architecture page seems to confirm this. It seems that their service is explicitly designed to have different performance characteristics from Amazon S3, so maybe they aren't quite a direct competitor to S3, but there are probably a lot of people using S3 for the use cases that Nimbus.IO claims to do better on, simply because S3 was available at the time.


Yes exactly. Nimbus.io is designed for long term archival storage at more affordable prices. We think it's a great time to be competing on price.

We may compete with S3 for low-latency service later on (latency can be made arbitrarily low by spending enough money on caching.) Initial calculations suggest we could be almost as low-latency as S3 and still under price by a good margin.


Latency can be made low through caching, but depending on the access distribution, the point at which additional cache becomes uneconomical may come well before the edge of your performance envelope.

How are you calculating your latency? Also, what distribution do you assume your file accesses will come from?
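To make that question concrete: with a Zipf-like popularity distribution (a common modeling assumption for object-storage workloads; every number below is invented for illustration), effective latency is a hit-rate-weighted mix of cache and backend latency, and the hit rate flattens quickly as the cache grows:

```python
def effective_latency_ms(n_objects: int, cache_fraction: float,
                         zipf_s: float = 1.0, t_cache_ms: float = 5.0,
                         t_backend_ms: float = 200.0) -> float:
    """Expected latency when the cache holds the most popular objects,
    assuming Zipf-distributed accesses. All parameters are made up."""
    weights = [1.0 / r ** zipf_s for r in range(1, n_objects + 1)]
    total = sum(weights)
    cached = int(cache_fraction * n_objects)
    hit_rate = sum(weights[:cached]) / total
    return hit_rate * t_cache_ms + (1.0 - hit_rate) * t_backend_ms


# The first 1% of cache buys far more than the next 9% does.
for frac in (0.01, 0.02, 0.10):
    print(f"{frac:.0%} cached -> {effective_latency_ms(100_000, frac):.0f} ms")
```

With a heavier tail (smaller zipf_s), the curve flattens sooner, which is exactly why the assumed access distribution matters for the economics.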


Keep in mind that S3, like all other Amazon products, is priced with stupid margins. As such, providing lower prices isn't difficult.


Hmmmm. Downvotes without comments. Classy.

And this is cheap, at qty 1.

A Backblaze-type box is ca. $12K for 135TB of storage.

Assume an interest rate of 5% and 36 months' worth of repayments, and the server itself comes to $725/month.

It uses roughly 1 kW of power and 4U of rack space, so say you fit 6 per rack with a 30A feed. You can get the rack for, say, $5K, giving us a total rack cost per server of $833/month.

Total cost per server is $1,558/month.

Total cost is $0.011/GB-month.

Add in parity replication (1 in 4, 25%): $0.014/GB-month.

This doesn't include compression or dedup, both of which drop the cost dramatically.

Compare that to, say, S3's $0.14/GB and you can see why I'd say the margins are stupid, especially at the scale they're running at.
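The arithmetic above can be sketched as a quick script. The figures are the ones from this comment (not authoritative numbers), and the function name is my own:

```python
def cost_per_gb_month(server_monthly: float, rack_monthly: float,
                      capacity_tb: float, parity_overhead: float = 0.25):
    """Return (raw, with-parity) storage cost in $/GB-month."""
    total_monthly = server_monthly + rack_monthly
    gigabytes = capacity_tb * 1000  # decimal TB -> GB, as vendors count
    raw = total_monthly / gigabytes
    return raw, raw * (1 + parity_overhead)


# Figures from the comment: $725/mo server, $833/mo rack, 135 TB raw.
raw, with_parity = cost_per_gb_month(725, 833, 135)
print(f"${raw:.3f}/GB-month raw, ${with_parity:.3f}/GB-month with parity")
# Against S3's $0.14/GB-month, that's roughly a 10x markup.
```

Compression and dedup would shrink the effective denominator further, which is why they drop the cost so dramatically.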


Nice; that's the same sort of math we're doing.

Note that the BackBlaze machines are optimized for very cold data since they only need to support backup and restore. We also do custom hardware at SpiderOak, but we support web/mobile access, real time sync, etc. That makes our hardware slightly more expensive because of the generally warmer data. So you're off by a few pennies, but certainly in the right zone.

For Amazon, I suspect their internal S3 cost is actually quite a bit higher than either BackBlaze or SpiderOak since their data is warmer.


I'd suspect that their data temperature is very bimodal, so they'd be able to easily split out hot data from cold.

How much warm data do you normally have per node?


Says right there on the front page: "Built by SpiderOak on the same proven backend storage network which powers hundreds of thousands of backups".


Zed has the balls to post stuff under his own name.


Spawning a new process is not the same thing as forking, but people often forget this bit. This post wasn't about forking.


I thought that spawning a new process was the same as forking. What am I missing?


http://linux.die.net/man/2/fork - this explains better.


Thanks, but I already know what fork does. My point is that "fork" is precisely how one spawns a new process, and there is no other way to do so.
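On POSIX the two positions can mostly be reconciled: fork() alone duplicates the calling process, while "spawning" a different program is conventionally fork() followed by exec(), which replaces the child's image. A minimal illustration (POSIX-only; assumes CPython, whose subprocess module creates children via a fork/exec-style mechanism under the hood):

```python
import os
import subprocess
import sys

# 1. fork(): the child is a copy of this very program, running the
#    same code with (copy-on-write) copies of the same memory.
pid = os.fork()
if pid == 0:            # child branch
    os._exit(42)        # exit immediately with a recognizable status
_, status = os.waitpid(pid, 0)
child_code = os.WEXITSTATUS(status)

# 2. "Spawning" a new program: fork() (or a variant) plus exec(),
#    so the child becomes a different program entirely.
result = subprocess.run([sys.executable, "-c", "print('spawned')"],
                        capture_output=True, text=True)
print(child_code, result.stdout.strip())
```

So fork() is the underlying primitive either way; the disagreement is really about whether "spawn a process" means "duplicate myself" or "start a different program".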

