not_the_fda's comments | Hacker News

And that's the end of democracy. One of the safeguards of democracy is a military that is trained not to turn against the citizens. Once a government has fully autonomous weapons, it's game over. They can point those weapons at the populace at the flip of a switch.

The parallel for this is when Rome changed from only recruiting citizens for their army to recruiting anyone who could pass the physical. They had no choice, and the new armies were much better at fighting. But the soldiers also didn’t have the same stake in the republic that voting citizens did.

Citizens were loyal to Rome. Soldiers were loyal to their commanders. If commanders wanted to launch rebellions, the soldiers would likely support them.

A commander who commands the loyalty of legions by convincing a handful of drone operators would be very dangerous for democracy.


The original Terminator movie doesn't seem so far-fetched now (minus the time travel).

10 1MB blobs is nothing on modern hardware.

The actual encryption itself is relatively quick, I don't mind that. It is the re-upload of the whole file that is my concern.

Yep. I worked on the control system for the Virginia-class attack submarines for my co-op. Also got to ride around in a Seawolf-class submarine.

That's pretty cool. I'm guessing you're American, not Canadian, right? I didn't realize American schools had co-ops; I thought they mostly/solely had internships.

Very clever, but that's the problem: clever is never the correct solution.

With a few more bytes you can create an implementation that is a lot easier to understand. Bytes are cheap; developer time isn't.


"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

https://news.ycombinator.com/newsguidelines.html

I'd especially hammer the point in this case, because clever hacks are very much on topic for Hacker News. They are, in fact, what gave birth to the word hacker and the idea of hacking in the first place. Not only that but it was precisely the clever hacks with no particular utility that were prized most highly!


If you are writing a chess engine, you'll want to store hundreds of millions of positions while you search for the best move, and at that scale a byte is important because it gets multiplied by an enormous factor.


But that is a totally different problem which requires far fewer bytes to represent. For that problem you are just recording which piece made a move and which board it came from. Storing a single move is far cheaper than an entire board state.
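To make the size gap concrete, here is a minimal sketch of one common convention for packing a single move into 16 bits (the field layout here is illustrative, not taken from any particular engine):

```python
# Sketch: a chess move packed into 16 bits, versus a full board state.
# Assumed layout (one common convention, not from a specific engine):
#   bits 0-5  : from-square (0-63)
#   bits 6-11 : to-square   (0-63)
#   bits 12-14: promotion piece code (0 = none)

def pack_move(from_sq: int, to_sq: int, promo: int = 0) -> int:
    assert 0 <= from_sq < 64 and 0 <= to_sq < 64 and 0 <= promo < 8
    return from_sq | (to_sq << 6) | (promo << 12)

def unpack_move(m: int) -> tuple:
    return m & 0x3F, (m >> 6) & 0x3F, (m >> 12) & 0x7

# e2e4: e2 is square 12, e4 is square 28 (a1 = 0, rank-major order)
m = pack_move(12, 28)
print(m.bit_length() <= 16)  # True: fits in 2 bytes
print(unpack_move(m))        # (12, 28, 0)
```

Two bytes per move, versus dozens of bytes for even a tightly packed full board state.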


Not when you include transpositions, where you arrive at the same position from a different move order, in which case saving board states instead of moves could be very valuable.


There are transposition tables for that, though. They don't actually store the board state. For Stockfish, transposition table entries are 10 bytes each, 16 bits of which are the low bits (or high? Can't remember) of a zobrist hash of the board state. The other 48 bits of the hash are used for addressing into the hash table, but aren't stored in it. The rest of the entry is stuff like the best move found during the previous search (16 bits), the depth of that search (8 bits), evaluation (2 different ones at 16 bits each), and various bits of data like node type and age of the entry (for deciding which entry to replace, because this table is always full). Collisions can occasionally happen, but saving a full board state to eliminate them would cost far too much, since no matter how big you make the table, it'll never be big enough to cache all the board states a search visits.
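The field sizes above add up to exactly 10 bytes, which you can sanity-check with a packed struct. This is only an illustration of the sizes described here, not Stockfish's actual memory layout:

```python
import struct

# Illustrative 10-byte transposition-table entry, mirroring the field
# sizes described above (NOT Stockfish's real layout or field order):
#   16-bit key fragment (partial zobrist hash), 16-bit best move,
#   8-bit depth, two 16-bit evaluations, 8-bit node-type/age flags.
ENTRY = struct.Struct("<HHbhhB")  # '<' = no padding; total 10 bytes

def store(key16, move, depth, eval_, static_eval, flags):
    return ENTRY.pack(key16, move, depth, eval_, static_eval, flags)

entry = store(key16=0xBEEF, move=1804, depth=12,
              eval_=35, static_eval=20, flags=0b101)
print(ENTRY.size)           # 10
print(ENTRY.unpack(entry))  # (48879, 1804, 12, 35, 20, 5)
```

At 10 bytes per entry, a 1 GB table holds on the order of 100 million entries; storing full board states instead would shrink that by an order of magnitude.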

In Stockfish, there will only be one full-fledged board state in memory per search thread. So the size of the board state is pretty much irrelevant to performance. What's important is reducing the overhead of generating possible moves, applying those moves to the board state, and hashing the board state, which is what magic bitboards are for.
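The reason hashing the board state is cheap is that zobrist hashes update incrementally: moving a piece is just two XORs, not a rehash of the whole board. A minimal sketch (piece encoding and board representation here are hypothetical simplifications, ignoring captures, castling, and side-to-move):

```python
import random

# Minimal zobrist-hashing sketch: one random 64-bit number per
# (piece, square) pair; the board's hash is the XOR of the numbers
# for every piece on it. Moving a piece updates the hash with just
# two XORs, which is why per-node rehashing is so cheap.
random.seed(0)
N_PIECES, N_SQUARES = 12, 64
ZOBRIST = [[random.getrandbits(64) for _ in range(N_SQUARES)]
           for _ in range(N_PIECES)]

def full_hash(board):
    # board: dict mapping square index -> piece index (simplified)
    h = 0
    for sq, piece in board.items():
        h ^= ZOBRIST[piece][sq]
    return h

def move_hash(h, piece, from_sq, to_sq):
    # XOR the piece out of its old square and into the new one.
    return h ^ ZOBRIST[piece][from_sq] ^ ZOBRIST[piece][to_sq]

board = {12: 0, 28: 7}          # hypothetical piece placement
h = full_hash(board)
h2 = move_hash(h, 0, 12, 20)    # move piece 0 from square 12 to 20
print(h2 == full_hash({20: 0, 28: 7}))  # True: incremental == full
```

The incremental update always agrees with rehashing from scratch, because XOR is its own inverse.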


That's interesting, I didn't know about transposition tables, thanks for the explanation!


If they cared about that, then it wouldn't have been written in Python. This is an exercise of the author showing how clever they are.


This is pretty standard (or at least was 20 years ago) in high-performance chess programming, see:

https://www.chessprogramming.org/Bitboards

https://healeycodes.com/visualizing-chess-bitboards
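The core idea behind those links, in a few lines: a bitboard is a 64-bit integer with one bit per square, so a single shift-and-mask computes a move pattern for every piece at once. A small sketch for knight attacks (the file masks stop shifts from wrapping around the board edge):

```python
# Minimal bitboard sketch: one 64-bit int, one bit per square
# (a1 = bit 0, h1 = bit 7, a8 = bit 56). Shifting the whole board
# generates knight attacks for every knight simultaneously; the
# file masks prevent moves from wrapping across the board edge.
FILE_A = 0x0101010101010101
FILE_B = FILE_A << 1
FILE_G = FILE_A << 6
FILE_H = FILE_A << 7
FULL = (1 << 64) - 1

def knight_attacks(knights: int) -> int:
    n = knights
    attacks = (
        ((n << 17) & ~FILE_A) | ((n << 15) & ~FILE_H) |
        ((n << 10) & ~(FILE_A | FILE_B)) | ((n << 6) & ~(FILE_G | FILE_H)) |
        ((n >> 17) & ~FILE_H) | ((n >> 15) & ~FILE_A) |
        ((n >> 10) & ~(FILE_G | FILE_H)) | ((n >> 6) & ~(FILE_A | FILE_B))
    )
    return attacks & FULL

# A knight on g1 (square 6) attacks e2, f3, and h3.
targets = knight_attacks(1 << 6)
print(bin(targets).count("1"))  # 3
```

Magic bitboards extend this trick to sliding pieces by using a multiply-and-shift as a perfect hash into precomputed attack tables.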


They already have. They have the Jetson Line: https://en.wikipedia.org/wiki/Nvidia_Jetson


I don't think the phenomenon is limited to Seattle.


It's not. I know some ex-Bay Area devs who are of the same mind, and I'm not too far off.

I think it's definitely stronger at MS, as my friend on the inside tells me, than at most places.

There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have long been practicing what execs, I can only guess, see as alchemy. They've decided that they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.

I just wish we could present the best version of ourselves and, as long as deadlines are met, it'll all work out, but some have decided on scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.


Currently there is no profit per token; there's quite a bit of loss per token, and that's the problem. You're not going to make it up in volume.


Do you have a source for that? I'm especially interested in a source for Anthropic.


https://www.wsj.com/tech/ai/openai-anthropic-profitability-e...

Anthropic expects to break even in 2028. They’re all unprofitable now.


Paywalled.

Are they unprofitable because they don't profit on inference, or because they reinvest all of the profit into scaling up?

Remember how long Amazon was unprofitable, by choice.


> Are they unprofitable because they don't profit on inference, or because they reinvest all of the profit into scaling up?

They are scaling up using VC money, not revenue. As far as profit on inference goes, it's hard to separate it out from training: they cannot, at any given time, simply stop training because that would kill any advantage they have 6 months down the line.

For all practical purposes, you can't look at their inference costs independent of the training cost; they need to keep spending on both if they want to continue doing inference.

> Remember how long Amazon was unprofitable, by choice.

That was a very different scenario - AMZ was not spending their revenue on land-grabbing, they were spending their revenue on long-lived infra, while AI companies now are spending VC investment, not revenue, on land-grabbing.

The difference between spending your revenue on short-lived infra (training a new model, acquiring GPUs) and long-lived infra is that with long-lived infra, at any time, even after 10+ years, you can stop expanding your infra and keep the resulting revenue as profit.

With short-lived infra (models, GPUs), you can't simply stop infra spending and collect profit from the revenue, because the infra reached end-of-life and needs to be replaced anyway.


Treasuries.

Take a steady guaranteed 4% and sleep tight.

"Safe" bonds are less than treasuries, and a big funder of the AI bubble is bonds , so you will be they one holding the bag when they bust.


Return on Investment


