topspin's comments | Hacker News

> It was ridiculously reliable.

Back in the late 90s and early 2000s, getting broadband was a problem where I lived. I oscillated among a few wireless internet providers (actual 802.11 WiFi to a repeater 11 miles away, in one case) and acoustic modems, as I changed properties.

For a couple years I used Qwest ISDN. That was by far the most reliable and consistent Internet I'd ever seen: it wasn't fast (128 Kbps), but it never went down, and the latency and jitter were lower than anything I've had, then or since.


ISDN was awesome. I had that going on for a bit, too. It was great to experience parts of what some folks (mostly the French, IIRC) had commonly used for such a long time.

Nearly-instant dialup. And not just for a single ISP, but other ISPs as well: The circuit and the Internet service were provided by different entities.

Switch to a different ISP? No problem -- no appointments or installers making new holes in the house required. Just plug in a different phone number, username, password, and done.

And since each B channel was independent, one could do voice calls while the other did data -- dynamically, as-needed. Performance was resolute: Calls were perfect in their consistency, and data rates were precisely 64 kilobits per second, per channel, symmetric, and not one bit more nor less -- and with constant latency (what jitter?).
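A quick back-of-the-envelope sketch in Python of what those fixed rates meant in practice (the 1 MB file size is hypothetical, purely for illustration):

    # Each B channel is exactly 64 kbit/s; bonding both gives 128 kbit/s.
    B_CHANNEL_BPS = 64_000
    FILE_BYTES = 1_000_000  # hypothetical 1 MB transfer

    for channels in (1, 2):
        rate_bps = channels * B_CHANNEL_BPS
        seconds = FILE_BYTES * 8 / rate_bps  # constant rate: nothing to average
        print(f"{channels} B channel(s): {seconds:.0f} s")
    # -> 1 B channel(s): 125 s
    # -> 2 B channel(s): 62 s

No percentiles, no "up to": the rate was the rate.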

And to not leave it to implication for those who don't know: An ISP wasn't required at all. Two people with ISDN could move data between their computers without involving the Internet. The circuits were switched in an any-to-any fashion.

Want to play a two-player computer game with a buddy, with voice chat, over ISDN in 1999? No problem: Use one B channel for data, the other for voice, and get gaming. The circuits are dedicated to these tasks for the duration of the game, and latency is a fixed constant (no Internet used at all, and no lag spikes either).

We've really lost something with the death of this point-to-point, circuit-switched technology. We're probably better off with the best-effort, packet-switched IP business we wound up using instead, but we've lost something nonetheless. It offered some neat opportunities and was a fun system to explore.


My ISDN was sold as "IDSL" by an ISP. Still had the performance you're describing, but it was tied to them. There was no dialing on my part: it was just always up. I'd pay for it today if an ISP offered it at a low cost, as a backup.

I missed the IDSL phase completely. I'm not even sure if it was ever available in my neck of the woods.

For me, the continuum went like this: Dialup > ISDN > dialup > slow DOCSIS > faster VDSL > faster DOCSIS > [this is the part where I write a whole chapter about how there is fast, cheap gigabit fiber available in rural areas directly surrounding my small city, from multiple competing companies, but none within the city limits]

Anyway, IDSL. That technology skipped right by a lot of what was cool about ISDN. For me, real ISDN was always-on unless I disconnected it for some reason. While still "dialup" in the strictest sense, it was not infrequent to have sessions that went for months without any interruption at all. But I could also do anything else I wanted with it.

And backups: Apparently these days, a person can get a slice of Starlink pretty cheap. In this mode ("Standby Mode," IIRC), it provides a slow, always-on connection -- I think it's $5 per month for ~500Kbps.

The RV and snowbird communities hate it because it isn't free (they used to be able to pause service in the off-season without monthly cost), but it sounds pretty good as a fixed, domestic backup: 500 Kbps is a lot more than 0. (And if this backup needs to be used for a long time or speed is important, then: 500 Kbps is way more than enough bandwidth to log in and pay for a month of real service.)


For me it was Dialup -> 802.11 @ 7 miles -> Dialup -> 802.11 @ 11 miles -> ISDN -> WISP -> DOCSIS via Comcast.

> this is the part where I write a whole chapter about how there is fast, cheap gigabit fiber available in rural areas

Not all of them. I'm in what amounts to the North Korea of 'murica: a place that is pitch black at night as seen in satellite photos. There is no fiber. Or, not infrequently, power. I'm on the edge of the cable service area, but it does work, so that's what I'm using.

Verizon built a tower 1/2 mile away, so now my 5G is all the bars, and I could get IP service that way if I wished. Then there is Starlink. Good times, I suppose.


I've got a good friend that moved to an area like that, in Appalachia.

Cell phone coverage is universally spotty (all carriers; I've got the gear to check that).

The cable network doesn't reach that far. DSL doesn't exist there. There's a local WISP that keeps talking about maybe making a move there, but it hasn't happened.

For the first few years we did a cobbled-together cellular thing with a grey-market AT&T corpo iPad SIM, an LTE modem, and a directional antenna about 30 feet up. That worked, usually, unless the SIM died again or the singular tower being aimed at needed maintenance. (This was before cellular providers started willingly selling home internet.)

Now he's got Starlink as his only WAN. That does pretty well for him, actually. We chat often, and at length, with his phone on WiFi calling, and it works fine almost always. And by that, I mean: There's sometimes an audio glitch, and it's hard to pin down what the source is when it happens. It never lasts long.

The cool thing about cheap Starlink as a backup is that, aside from the purchase price, it's like no-brainer cheap. I'd use it at home myself if my connection from Spectrum were iffy. (But Spectrum here is astoundingly consistent, so I don't see a need.)


Playing TFC, I always got faster ping times than the early cable users. ISDN was great.

> Microwave is line-of-sight so here on the Colorado front range

In such places it was common to bounce microwave trunk lines with "passive repeaters": big aluminum reflectors, about the size of a highway billboard, set up wherever a line needed to get around an obstacle. There is an excellent article about it all here[1].

[1] https://computer.rip/2025-08-16-passive-microwave-repeaters....


These are super cool and I've never seen one before! I'd imagine most of the passes I'm in are within spitting distance of at least one town so powering a substation isn't out of the question. It seems like most of the installations are very much "middle of nowhere" situations. I hope to run across one of these in person!

> RTX 60x0 series is apparently coming in 2018

That's either a typo, or NVidia has achieved some previously unheard-of levels of innovation.


They're hedging on LLMs inventing time travel any day now.

It exists. Car and Driver and other sites have photos.

Obviously it's weird to not showcase the exterior of a Ferrari, that being pretty much the entire point of Ferrari. The cynic in me can't help but think this may be due to the fact that it looks like a lowered Hyundai with a body kit[1].

[1] https://www.caranddriver.com/news/a70279106/ferrari-luce-ev-...


Do we know that’s an accurate image? The site says it’s an illustration and admits the exterior reveal comes later.

I hope it’s not accurate. If so, the interior looks more interesting than the exterior.


Wow, first we got SUVs that were like morbidly obese saloons, now we have this, which is like a comically squashed SUV. Horrible.

Oh, wow. Yeah nobody is going to convince me that that is a Ferrari.

> Record high margin debt

$566B in margin debt. Is that actually a financial black swan amount of money? If 50% of that got "corrected" into Money Heaven on Friday, would it be more than a bad day at the stock market?


You're right that $566B alone isn't a black swan. That FINRA figure only captures retail and small institutional margin at broker-dealers. It excludes prime brokerage (hedge funds), securities-based lending, and repo markets. Conservative estimates put total leveraged exposure at $10-15 trillion. The $566B is maybe 5% of the iceberg.
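Back-of-the-envelope, that share checks out (a minimal sketch in Python; the $10-15 trillion range is the conservative estimate cited above, not an official figure):

    FINRA_MARGIN = 566e9             # visible margin debt, USD
    TOTAL_LEVERAGE = (10e12, 15e12)  # estimated total leveraged exposure, USD

    for total in TOTAL_LEVERAGE:
        print(f"visible share: {FINRA_MARGIN / total:.1%}")
    # -> visible share: 5.7%
    # -> visible share: 3.8%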

I see visible margin debt as both a canary and a proxy. It's a canary because retail cracks first (less sophisticated risk management, stricter regulatory margin). It's a proxy because when visible leverage contracts, it usually means hidden leverage is contracting too. They're exposed to the same assets. When FINRA margin debt starts falling, it's not just a warning, it's confirmation that system-wide deleveraging is already underway.

That's my 2c. Does that make sense?


> a canary and a proxy

Whatever shenanigans appear in the public record, multiply by 10x to approximate the real story.

> Does that make sense?

Yep. 1929 called. Just to gloat. They don't want their market back.


I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.

Yeah, but thinking with an LLM is different. The article says:

> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.

The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.

The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.

It's different.


YMMV, but I've found that I actually do way more of that type of "thinking hard" thanks to LLMs. With the menial parts largely off my plate, my attention has been freed up to focus on a higher density of hard problems, which I find a lot more enjoyable.

Yup, there is a surprisingly high amount of boilerplate in programming, and LLMs definitely can remove this and let you focus on the more important problems. For a person with a day job, working on side projects actually became fun with LLMs again, even with limited free time and mental energy to invest.

There are a lot of hard problems to solve in orchestration. We've barely scratched the surface on this.

I very much think it's possible to use LLMs as a tool in this way. However, a lot of folks are not. I see people, both personally and professionally, give it a problem and expect it to both design and implement a solution, then hold it as a gold standard.

I find the best uses, for myself at least, are smaller parts of my workflow where I'm not going to learn anything from doing it:

- Build one to throw away: give me a quick prototype to get stakeholder feedback
- Straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- Tab-completion code-gen
- If I want leads for looking into something (libraries, tools) and Googling isn't cutting it


> then hold it as a gold standard

I just changed employers recently in part due to this: dealing with someone that appears to now spend his time coercing LLMs to give the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.


I echo this sentiment. Even though I'm having Claude Code write 100% of the code for a personal project as an experiment, the need for thinking hard is very present.

In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.


I'm with you; thinking about architecture is generally still a big part of my mental effort. But for me, most architectural problems are solved in short periods of thought and a lot of iteration. Maybe it's a skill issue, but neither now nor in the pre-LLM era have I been able to pre-solve all the architecture with pure thinking.

That said, architectural problems have also become less difficult, just for the simple fact that research and prototyping have become faster and cheaper.


I think it depends on the scope and the level of solution I accept as “good”. I agree that often the thinking for the “next step” is too easy architecturally. But I still enjoy thinking about the global optimum or a “perfect system”, even if it’s not immediately feasible, and can spend large amounts of time on this.

And then there’s also all the non-systems stuff - what is actually feasible, what’s most valuable, etc. Less “fun”, but still lots of potential for thinking.

I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.

I think local code architecture was a domain where “optimality” was actually tractable, along with the joy that comes with that. LLMs are harmful to that, but I don’t think there’s nothing to replace it with.


And thinking of how to convey all of that to Claude without having to write whole books :)

tfw you start expressing your thoughts as code because it's shorter instead

Ya, they are programming languages after all. Language is really powerful when you really know how to use it. Some of us are more comfortable with the natural variety, some of us are more comfy with code ¯\_(ツ)_/¯

Agreed. My recent side projects involve lots of thinking over days and weeks.

With AI we can set high bars and do complex original stuff. Obviously boilerplate and common patterns are slop, slapped together without much thinking. That's why you branch into new creative territory. The challenge then becomes visualising the mental map of modular pieces all working nicely together at the right time to achieve your original intent.


My experience is similar, but I feel I'm actually thinking way harder than I ever was before LLMs.

Before LLMs, once I was done with the design choices as you mention them - risks, constraints, technical debt, alternatives, possibilities, ... - I cooked up a plan, and with that plan, I could write the code without having to think hard. Actually writing code was relaxing for me, and I feel like I need some relaxation between hard-thinking sessions.

Nowadays we leave the code writing to LLMs because they do it way faster than a human could, but then we have to think hard to check whether the code the LLM wrote satisfies the requirements.

Also reviewing junior developers' PRs became harder with them using LLMs. Juniors powered by AI are more ambitious and more careless. AI often suggests complicated code the juniors themselves don't understand and they just see that it works and commit it. Sometimes it suggests new library dependencies juniors wouldn't think of themselves, and of course it's the senior's role to decide whether the dependency is warranted and worthy of being included. Average PR length also increased. And juniors are working way faster with AI so we spend more time doing PR reviews.

I feel like my whole work has somehow collapsed, from both sides, into reviewing code: from one side, the code that my AI writes; from the other side, the code that the juniors' AI writes, the amount of which has increased. And even though I like reviewing code, it feels like the hardest part of my profession, and I liked it more when it was balanced with tasks which required less thinking...


It's how you use the tool... reminds me of that episode of The Simpsons when Homer gets a gun license... he goes from not using it at all, to using it a little, to using it without thinking about what he's doing and for ludicrous things...

Thinking is tiring and life is complicated; the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.

Many people are too busy/lazy/self-unaware to evaluate their behaviour to recognise a bad habit.


Reading this comment and other similar comments, there's definitely a difference between people. Personally, I agree and resonate a lot with the blog post, and I've always found the designs of my programs to come sort of naturally. Usually the hard problems are the technical problems, and then the design is figured out based on what's needed to control the program. I never had to think that hard about design.

Aptitude testing centers like Johnson O'Connor have tests for that.[1] There are (relatively) huge differences between different people's thinking and problem-solving styles. For some, creating an efficient process feels natural, while others need stability and redundancy. Programmers are by and large the latter.

[1]: https://www.jocrf.org/how-clients-use-the-analytical-reasoni...


> I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I've ever done.

Okay, for you that is new - post-LLM.

For me, pre-LLM I thought about all those things as well as the code itself.[1]

IOW, I thought about even more things. Now you (if I understand your claim correctly) think only about those higher level things, unencumbered by stuff like implementation misalignments, etc. By definition alone, you are thinking less hard.

------------------------

[1] Many times the thinking about code itself acted as a feedback mechanism for all those things. If thinking about the code itself never acted as a feedback mechanism to your higher thought processes then ... well, maybe you weren't doing it the way I was.


That's not thinking hard; you are making decisions.

It's certainly a different style of thinking hard. I used to really stress myself over coding - i.e. I would get frustrated that solving an issue would cause me to introduce some sort of hack or otherwise snowball into a huge refactor. Now I spend most of my time thinking about what cool new features I am going to build and not really stressing myself out too much.

there's no such thing as right or wrong, so the following isn't intended as any form of judgement or admonition, merely an observation that you are starting to sound like an llm

> you are starting to sound like an llm

My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.


I still use em-dashes. I started using them when my professor lambasted my use of semi-colons. I'm not looking back -- LLM haters be damned!

I think OP's post is an attempt to move us past this stage of the discussion, which is frankly old hat.

The point they are making is that using AI tools makes it a lot harder for them to keep up the discipline to think hard.

This may or may not be true for everyone.


It is a different kind of thinking, though.

I'd go as far as to say I think harder now – or at least quicker. I'm not wasting cycles on chores; I can focus on the bigger picture.

I've never felt more mental exhaustion than after a LLM coding session. I assume that is a result of it requiring me to think harder too.

It wasn't until I read your comment that I was able to pinpoint why the mental exhaustion feels familiar. It's the same kind (though not degree) of exhaustion as formal methods / proofs.

Except without the reward of an intellectual high afterwards.


Personally I do get the intellectual high after a long LLM coding session.

I feel this too. I suspect it's a byproduct of all the context switching I find myself doing when I'm using an LLM to help write software. Within a 10-minute window, I'll read code, debug a problem, prompt, discuss the design, test something, do some design work myself, and so on.

When I'm just programming, I spend a lot more time working through a single idea, or a single function. It's much less tiring.


In my experience it's because you switch from writing code to reviewing code someone else wrote. Which is massively more difficult than writing code yourself.

I use Claude Code a lot, and it always lets me know the moment I stopped thinking hard, because it will build something completely asinine. Garbage in, garbage out, as they say...

Yes, if anything I think harder, because I know it's on the frontier of whatever I'm building (so I'm more motivated and there's much more ROI).

What happened here is what always happens with all printed and digital material that goes through some evidentiary process.

The shot-callers demand the material, which is a task fobbed off onto some nobody intern who doesn't matter (deliberately, because the lawyers and career LEOs don't want any "officer of the court" or other "party" to put eyes on things they might need to deny knowing about later.) They use only the most primitive, mechanical method possible, with little to no discretion. The collected mass of mangled junk is then shipped to whoever, either in boxes or on CD-ROM/DVD (yes, still) or something. Then, the reverse process is done, equally badly, again by low-level staff, also with zero discretion and little to no technical knowledge or ability, for exactly the same reasons, to get the material into some form suitable for filing or whatever.

Through all of this, the subtle details of data formats and encodings are utterly lost, and the legal archive fills with mangled garbage like raw quoted-printable emails. The parties involved have other priorities, such as minimizing the number of people involved in the process, and tight control over the number of copies created. Their instinct is not to bring in a bunch of clever folk that might make the work product come out better, because "better" for them is different than "better" for Twitter or Facebook. Also, these disclosures are inevitably and invariably challenged by time: the obligation to provide one thing or another is fought to the last possible minute, and when the word does finally go out there is next to no time to piddle around with details.

In the Epstein case, the disclosures were done years ago, the original source material (computers, accounts, file systems, etc.) has all long since been (deliberately) destroyed, and what the feds have is the shrapnel we see today.


8.5.7 here (built Sept 6, 2023)

Now I need to worry about this one. I've been anxious about vscode lately: apparently vscode extensions are a dumpster fire of compromises.


> Why is such an ancient plane still being used?

Because it was designed to operate in the same atmosphere as we had in the 1950s, it's highly customized with unique instruments and communication gear specialized for NASA and its systems, and they have a big shop filled with tools and spare parts accumulated over half a century to adapt to whatever conceivable thing comes up. They could drop a few hundred million and replace their WB-57s, but there isn't a real need.

> Are they machining their own engine parts?

The WB-57 engines are basically downrated, high-altitude versions of the Pratt & Whitney JT3D/TF33, not the original Avons. They are still in service today in military applications, so servicing them isn't some extraordinary concept. Plus, they don't see many flight hours, as these aircraft (there are 3) spend most of their time in a shop getting reworked for future missions, so engine overhauls aren't that frequent.

> I would imagine it's incredibly expensive to maintain.

All such aircraft are incredibly expensive. However, the Canberra is an old-fashioned rivet-and-sheet-metal design, and modifying it is relatively straightforward compared to most of what is manufactured today. It was designed as a bomber and has a large fuel and payload capacity, and a handy bomb bay with large doors, filled with racks of mission-specific gear.

I suspect this one can be repaired and returned to service. That's not uncommon for controlled belly landings. It did not appear to incur excessive damage in that landing, and there are mothballed Canberras in various boneyards around the world to provide replacement parts.


> those weapons will be used against you

On the matter of social media "moderation," this is the phase you're actually in, right now.

