Hacker News | dfabulich's comments

> I only scan the headlines

Have you scanned any headlines about ICE lately? Maybe do a quick search for news about Minnesota?

(I'm pretty sure that if you'd been putting your pants on in Minnesota, you would not have written this comment.)


Are you saying legal US citizens are having a tough time in Minnesota with ICE? My cousins and their families aren't. They're too busy leading their own normal, daily lives.

Yes; my neighbors had trouble going to the grocery store. From appearances, you might think they're on vacation from Mexico. They have been here for generations, and one of their family is a high enough ranking member of the military that I won't say more to avoid the risk of doxxing them.

Yes, two of them were just killed. Does that qualify as "having a tough time?"

And how many people live in Minnesota? What were they doing when they were killed?

I don't get your point. What proportion of residents does an event need to negatively impact for you to believe that it's hassling people?

Surely it can't be 100%, right? No event in any major city, even a horrific one, actually affects everyone.


How many illegal aliens were killed in Minnesota?

What's the ratio of citizens to non-citizens that's okay? One citizen per every hundred or are you thinking 10-1?

Have you considered they could maybe just stop interfering with federal law enforcement and let them do their jobs as they have been doing for decades under all sorts of administrations? You'll be hard pressed to find a tear shed for agitators protecting illegal immigrant criminals with deportation orders.

Neither you nor anyone else believes this is how immigration enforcement has been done "for decades under all sorts of administrations."

You can make it appear as if you have a better grasp on reality by just acknowledging that this is a much different enforcement mechanism than we've seen in the past, but you think that's okay.

Anyway, there are now several known cases of people being detained or deported without deportation orders. Acknowledging this, too, would at least give the appearance of honesty and a grasp on reality.


You're right that immigration enforcement in the past did not have to deal with mobs trying to interfere with that enforcement.

DHS's own data proves that current enforcement priorities have changed.

So what's more probable in your mind?

( Hypothesis A ) -- Mobs trying to interfere with law enforcement have caused DHS to focus on arresting and deporting immigrants without criminal background

( Hypothesis B ) -- DHS's focus on arresting and deporting immigrants without criminal background has required significant scale-up of personnel with minimal training (validated by DHS's own data) and required tactics that a large number of Americans believe to strike an unacceptable cost-benefit balance

( Hypothesis C ) -- The two facts (enforcement approach and public response) are not causally related to each other at all


It's telling that you chose not to answer the question and instead introduced a different (straw man) question in response.

At least people in the past had the integrity to acknowledge their positions head-on. That's one of the lamentable things missing today.


Interfering with federal law enforcement is not punishable by summary execution.

Huh? Did you respond to the wrong comment?

> What were they doing when they were killed?

One was returning from dropping off her 6 year old child at school.

The other was videotaping ICE activity with one hand while holding out the other hand to show he was no threat.

What is your point, exactly? Neither was doing anything illegal, neither was directly trying to interfere with ICE actions. (The first wasn't trying to interfere at all.)

Although normally I'd say wait for the full evidence to be revealed, in this case (1) there's already a wealth of evidence from bystanders, and (2) the investigations are actively being interfered with so official evidence is not forthcoming.

Those are the 2 citizens killed. CBP and ICE killed at least 25 other people in the field and at least 30 died in custody (one source cites 30-32, another 44).

Apparently, the violence is necessary to deport at (checks notes) a lower rate than Biden's. It might make sense if the current enforcement was aimed at serious criminals, but only the rhetoric is. The current enforcement is much less selective. More damage, less gain.


A corollary I don't see mentioned enough by the morons who believe there are roving hordes of violent illegal criminals:

Let's assume there were. Then what on earth is the administration doing tracking down and putting cuffs on so many people who do not fit in that category?

Every seat in a detention center, courtroom, or plane filled by a random guy stopped in the Home Depot parking lot is a seat taken away from one of these allegedly numerous violent rapists/murderers/whatever.

So even if you were stupid enough to believe all the transparent bullshit from this gang of liars, they'd still be fucking awful!

All this stuff does, in addition to squelching public appetite for immigration enforcement writ large, is keep the actual bad guys inside the country even longer!


You keep moving the goalposts that much and maybe the Patriots can win the Super Bowl.



There has been no such thing.

Just curious: what do you personally get out of lying constantly in this thread?

It's not a lie to point out the truth. Words have meaning, and wantonly applying the scariest-sounding words you can find does not help your cause.

Dopamine.

In 2024, Starcloud posted their plans to "solve" the cooling problem. https://starcloudinc.github.io/wp.pdf

> As conduction and convection to the environment are not available in space, this means the data center will require radiators capable of radiatively dissipating gigawatts of thermal load. To achieve this, Starcloud is developing a lightweight deployable radiator design with a very large area - by far the largest radiators deployed in space - radiating primarily towards deep space...

They claim they can radiate "633.08 W / m^2". At that rate, dissipating each gigawatt of thermal load takes roughly 1.6 square kilometers (about 160 hectares) of radiator area.
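A quick back-of-the-envelope check of those figures (a sketch: the 633.08 W/m² flux is Starcloud's own claim; the emissivity-1, one-sided, sunlight-free radiator is my simplifying assumption):

```python
# Sanity-check Starcloud's radiator numbers with the Stefan-Boltzmann law.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 * K^4)

flux = 633.08            # W/m^2, Starcloud's claimed radiative flux
power = 1e9              # 1 GW of thermal load

# Radiator area required per gigawatt at the claimed flux:
area_m2 = power / flux
print(f"area per GW: {area_m2 / 1e6:.2f} km^2")   # ~1.58 km^2 (~158 hectares)

# Temperature of an ideal blackbody emitting that flux (assumes emissivity ~1,
# one-sided radiation toward deep space, and no absorbed sunlight):
temp_k = (flux / SIGMA) ** 0.25
print(f"implied radiator temperature: {temp_k:.0f} K")   # ~325 K, about 52 C
```

At the claimed flux, a multi-gigawatt facility really does mean several square kilometers of deployed radiator, which is why their heat-pump claim (raising radiator temperature, and with it the T⁴ emission rate) matters so much to the plan.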

They also claim that they can "dramatically increase" heat dissipation with heat pumps.

So, there you have it: "all you have to do" is deploy a few square kilometers of radiators in space, combined with heat pumps that can dissipate gigawatts of thermal load with no maintenance at all over a lifetime of decades.

This seems like the sort of "not technically impossible" problem that can attract a large amount of VC funding, as VCs buy lottery tickets that the problem can be solved.


Yes, on the face of it, the plan is workable. Heat radiation scales linearly with area and with the fourth power of temperature (the Stefan–Boltzmann law).

It really is as simple as just adding kilometers of radiators. That is, if you ignore the incredible cost of transporting all that mass to orbit and assembling it in space. There is quite simply no way to fold up kilometer-scale thermal arrays and launch them in a single vehicle, so assembly in space will be required.

All in all, if you ignore all practical reality, yes, you can put a datacenter in space!

Once you engage a single brain cell, it becomes obvious that it is actually so impractical as to be literally impossible.


I kind of want to play it out though... if someone did do this (for whatever reasons), what would the real benefits even be? Something terrestrial operations wouldn't be able to catch up to in 5-10 years?

This article includes a graph with a negative slope, claiming that AI tools are useful for beginners, but less and less useful the more coding expertise you develop.

That doesn't match my experience. I think AI tools have their own skill curve, independent of the skill curve of "reading/writing good code." If you figure out how to use the AI tools well, you'll get even more value out of them with expertise.

Use AI to solve problems you know how to solve, not problems that are beyond your understanding. (In that case, use the AI to increase your understanding instead.)

Use the very newest/best LLM models. Make the AI use automated tests (preferring languages with strict type checks). Give it access to logs. Manage context tokens effectively (they all get dumber the more tokens in context). Write the right stuff and not the wrong stuff in AGENTS.md.
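The "right stuff" in an AGENTS.md tends to be short, checkable rules rather than prose. A minimal sketch (contents entirely illustrative; the commands and paths assume a hypothetical repo):

```markdown
## Build & test
- Run `npm test` after every change; a task isn't done until tests pass.
- Run `npm run typecheck` before committing.

## Conventions
- TypeScript strict mode; never use `any`.
- Prefer small, pure functions; don't add dependencies without asking.

## Don'ts
- Don't modify files under `vendor/` or regenerate lockfiles.
- Don't rewrite unrelated code while fixing a bug.
```

The "wrong stuff" is the inverse: long style essays and vague exhortations burn context tokens without changing behavior.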


That sounds exhausting.

I'd rather spend my time thinking about the problem and solving it than thinking about how to get some software to stochastically select language that appears like it is thinking about the problem, only to implement a solution I'm going to have to check carefully.

Much of the LLM hype cycle breaks down into "anyone can create software now", which TFA makes a convincing argument for being a lie, and "experts are now going to be so much more productive", which TFA - and several studies posted here in recent months - show is not actually the case.

Your walk-through is the reason why. You've not got magic for free; you've got something kinda cool that needs operational management and constant verification.


I’ve seen otherwise intelligent and capable people get so addicted to the convenience and potential of LLMs that they start to lose their ability to slowly go through problems step by step. It’s sad.

Agreed. My work is mandating Claude Code usage this week for everyone. I spent all day today getting it to write tickets, code, and tests for something I knew how to do. I don’t understand the appeal. Telling the AI “commit those changes and then push,” then waiting for the result, takes way longer than `gcmsg <commit msg> && gp`.

If you're not developing an iOS/macOS app, you can skip Xcode completely and just use the `swift` CLI, which is perfectly cromulent. (It works great on Linux and Windows.)

There's a great indie app called Notepad.exe [1] for developing iOS and macOS apps on macOS. You can also write and test Swift apps for Linux easily [2]. It also supports Python and JavaScript.

If you hate Xcode, this is definitely worth a look.

[1]: https://notepadexe.com

[2]: https://notepadexe.com/news/#notepad-14-linux-support


So wait this thing is real? Calling it notepad.exe gave me the impression that it's just an elaborate joke about how you can code any program in Notepad...

It might have a joke name but it costs $80!

That's the real joke...

Or pay $19.99 for a year and be able to run it on 3 devices.

That's a pretty good deal.


It claims “native performance”, which makes me suspect it’s another bloated Electron app.

Instead of speculating you could download and see for yourself that it’s not. It’s by Marcin Krzyzanowski who is all about native iOS and macOS apps.

Even if you're developing for macOS you can skip xcode. I've had a great time developing a menubar app for macOS and not once did I need to open xcode.

curious what you used - I've been looking into making a menubar app and really hate xcode

claude -p "Make a menubar app with AppKit (Cocoa) that does X"

I would avoid it for Linux and Windows. Even if they are "technically supported", Apple's focus is clearly macOS and iOS. Being a second- (or even third-) class citizen often introduces lots of issues in practice ("oh, nobody tested that functionality on Windows"...)

Self-driving municipal busses would be fantastic.

Also, a real nightmare for the municipal trade unions. (Do you know why every NYC subway train needs to have not one but two crew members, a train operator and a conductor, even though it could run automatically just fine?)

Why?

Because the Transport Workers Union fought tooth and nail for it. Laying off hundreds of operators would be a politically very dangerous move.

Huh. I wonder if that makes any sense. It doesn't seem to make sense to keep employing people if you no longer need them. It sucks to be laid off, but that's just how it works.

It also shows a lack of imagination. If you have to provide a union with a job bank, why not re-deploy employees to other roles? With one person per train, re-deploy people to run more trains, thereby decreasing the interval between trains. Stations used to have medics, but this was cut. How about re-training people to be those medics? The subway could use a signaling upgrade and positive train control. Installing platform screen doors to greatly reduce the incidence of people falling onto the tracks is going to need a lot of labor.

Why would you need buses?

Mass transit is a capacity multiplier. If 35 people are headed in the same direction compare that with the infrastructure needed to handle 35 cars. Road capacity, parking capacity, car dealerships, gas stations, repair shops, insurance, car loans.

Believe it or not, in some cities with near-gridlock rush-hour traffic, between 50% and 100%+ as many people travel by bus as by car.

If all of those people switch to cars, you end up with it taking an hour to travel 1 mile by car.

It's almost as if they have busses for a reason.


First, these cities should be fixed by removing the traffic magnets. We're far past the point where the old, obsolete ideology of supplying as much traffic capacity as possible made sense.

But anyway, your statement is actually not true anywhere in the US except NYC. Even in Chicago, removing ALL the local transit and switching to 6-seater minivans will eliminate all the traffic issues.


> First, these cities should be fixed by removing the traffic magnets.

If you remove the jobs and housing, traffic does get a lot better. But it's not much of a city without jobs and housing.


Indeed. And people live better lives, with better job accessibility and variety, once you remove dense office cores.

Car traffic magnets like highways inside urban cores? Or people traffic magnets like office buildings, colleges, sports stadiums, performing arts venues, shopping malls?

Office buildings. Everything else is just noise.

Large stadiums and arenas are a special case, but they don't create sustained traffic, and their usage periods typically don't overlap with the regular rush hour.


6-seater self-driving municipal minivans would be fantastic, too. (I would still call that a "bus", but I don't care what we call it.)

That's the testing matrix we have to do for iOS and Android apps today. The screen sizes don't go all the way up to ultrawide, but 13" iPad (portrait and landscape) down to 4" iPhone Mini, at every "Dynamic Type" display setting is required.

It's not that tough, but there can be tricky cases.


Also with every relevant locale, as English UI strings are usually abnormally short.


I think the industry settled on pretty good answers, using lots of XML-like syntax (HTML, JSX) but rarely using XML™.

1. Following Postel's law, don't reject "invalid" third-party input; instead, standardize how to interpret weird syntax. This is what we did with HTML.

2. Use declarative schema definitions sparingly, only for first-party testing and as reference documentation, never to automatically reject third-party input.

3. Use XML-like syntax (like JSX) in a Turing-complete language for defining nested UI components.

Think of UI components as if they're functions, accepting a number of named, optional arguments/parameters (attributes!) and an array of child components with their own nested children. (In many UI frameworks, components literally are functions with opaque return types, exactly like this.)
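A toy sketch of that desugaring (Python stands in for any language here; `h` is a hypothetical helper, shaped like the function that JSX-style syntax compiles down to):

```python
def h(tag, props=None, *children):
    """Build a tree node the way <tag prop=...>children</tag> desugars
    into a function call in JSX-style UI frameworks."""
    return {"tag": tag, "props": props or {}, "children": list(children)}

# <article id="post"><h1>Hi</h1><p>Body text</p></article> becomes:
tree = h("article", {"id": "post"},
         h("h1", None, "Hi"),
         h("p", None, "Body text"))

print(tree["tag"], [child["tag"] for child in tree["children"]])
# article ['h1', 'p']
```

Named attributes map to the optional props argument, and nesting maps to the variadic children list, which is why deeply nested UI reads naturally in this shape.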

Closing tags like `</article>` make sense when you're going to nest components 10+ layers deep, and when the closing tag will appear hundreds of lines of code later.

Most code shouldn't look like that, but UI code almost always does, which is why JSX is popular.


Yes, SwiftUI supports macOS automatically.


HTML elements can style themselves now using the @scope rule. (It's Baseline Newly Available.) Unlike the "style" attribute, @scope blocks can include @media and other @ rules. You can't get more self-contained than this.

    <swim-lane>
        <style>
            @scope {
                background: pink;
                b {
                    background: lightblue
                }
                @media (max-width: 650px) {
                    /* Mobile responsive styles */
                }
            }
        </style>
        something <b>cool</b>
    </swim-lane>
You can also extract them to a CSS file, instead.

    @scope (swim-lane) { /* ... */ }
The reason approaches like this continue to draw crowds is that Web Components™ as a term is a confluence of the Custom Elements JS API and Shadow DOM.

Shadow DOM is awful. Nobody should be using it for anything, ever. (It's required for putting child-element "slots" in custom elements, and so nobody should use those, either.) Shadow DOM is like an iframe in your page; styles can't escape the shadow root and they can't get into the shadow root, either. IDs are scoped in shadow roots, too, so the aria-labelledby attribute can't get in or out, either.

@scope is the right abstraction: parent styles can cascade in, but the component's styles won't escape the element, giving you all of the (limited) performance advantages of Shadow DOM with none of the drawbacks.


Decoupling slots from shadow dom would make custom elements even better.

I love custom elements. For non React.js apps I use them to create islands of reactivity. With Vue each custom element becomes a mini app, and can be easily lazy loaded for example. Due to how Vue 3 works, it’s even easy to share state & routing between them when required.

They should really move the most worthwhile features of shadow dom into custom elements: slots and the template shadow-roots, and associated forms are actually nice.

It’s all the extra stuff, like styling issues, that make it a pain in the behind


There's really no way to decouple slots for shadow roots.

For slots to work you need a container for the slots that the slotted elements do not belong to, and whose slots are separated from other slot containers. Otherwise you can't make an unambiguous relationship between element and slot. This is why a shadow root is a separate tree.


Agreed. The way I explain it is: suppose you have a `<super-table>` element, and you have a child slot called, for example, `<super-row-header>`. Presumably you want to write some JS to transform the slotted content in some way, decorating each row with the header the user provided.

But, if you do that, what happens to the original `<super-row-header>` element that the user provided? Maybe you'd want to delete it…? But how can you tell the difference between the user removing the `<super-row-header>` and the custom element removing it in the course of its work?

What you'd need is for `<super-row-header>` to somehow exist and not exist at the same time. Which is to say, you'd have one version of the DOM (the Light DOM) where the slot element exists, and another version of the DOM (the Shadow DOM) where the `<super-row-header>` element doesn't exist, and the transformed content exists instead.

It's clever, I guess, but the drawbacks far outweigh the benefits.

Client-side components inherently require JS anyway, so just use your favorite JS framework. Frameworks can't really interoperate while preserving fine-grained reactivity (in fact, Shadow DOM makes that harder), so, just pick a framework and use it.


That's an element styling itself, sure, but it's not self-evidently self-contained. Does every component emit those styles? Are they in the page stylesheet? How do they get loaded?

Counterpoint: Shadow DOM is great. People should be using it more. It's the only DOM primitive that allows for interoperable composition. Without it you're at the mercy of frameworks for being able to compose container components out of internal structure and external children.


https://2025.stateofhtml.com/en-US/features/web_components/

Sort by negative sentiment; Shadow DOM is at the top of the list, the most hated feature in Web Components. You can read the comments, too, and they're almost all negative, and 100% correct.

"Accessibility nightmare"

"always hard to comprehend, and it doesn't get easier with time"

"most components don't need it"

"The big issue is you need some better way to incorporate styling from outside the shadow dom"

> It's the only DOM primitive that allows for interoperable composition.

There is no DOM primitive that allows for interoperable composition with fine-grained reactivity. Your framework offers fine-grained reactivity (Virtual DOM for React/Preact, signals for Angular, runes for Svelte, etc.) and any component that contains another component has to coordinate with it.

As a result, you can only mix-and-match container components between frameworks with different reactivity workflows by giving up on fine-grained reactivity, blowing away the internals when you cross boundaries between frameworks. (And Shadow DOM makes it harder, not easier, to coordinate workflows between frameworks.)

Shadow DOM sucks at the only thing it's supposed to be for. Please, listen to the wisdom of the crowd here.


> There is no DOM primitive that allows for interoperable composition with fine-grained reactivity. Your framework offers fine-grained reactivity (Virtual DOM for React/Preact, signals for Angular, runes for Svelte, etc.) and any component that contains another component has to coordinate with it.

This just isn't true - composition and reactivity are completely orthogonal concerns.

Any reactivity system can manage DOM outside of the component, including nodes that are projected into slots. The component's internal DOM is managed by the component using whatever reactivity system it desires.

There are major applications built this way. They may have a React outer shell using vdom and Lit custom elements using lit-html for their shadow contents.

On top of those basics you can also have cross-shadow interoperable fine-grained reactivity with primitives like signals. You can pass signals around, down the tree, across subtrees, and have different reactivity systems use those signals to update the DOM.
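A minimal sketch of that signal idea (plain Python; the `Signal` class is illustrative, not any particular library's API; in the DOM case the subscribers would be render functions owned by different frameworks):

```python
class Signal:
    """A tiny observable value: subscribers re-run whenever it changes."""
    def __init__(self, value):
        self._value = value
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)
        fn(self._value)          # run once immediately with the current value

    def set(self, value):
        self._value = value
        for fn in self._subs:    # notify every subscriber, whoever owns it
            fn(value)

count = Signal(0)
react_side, lit_side = [], []
count.subscribe(react_side.append)  # e.g. a vdom shell outside a shadow root
count.subscribe(lit_side.append)    # e.g. a Lit element's shadow contents
count.set(1)
count.set(2)
print(react_side, lit_side)  # [0, 1, 2] [0, 1, 2]
```

Each side reacts to the shared value without knowing anything about the other's rendering internals, which is the interoperability claim being made here.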


You can do it, but that undermines the whole point of React: fine-grained reactivity.

If the child component had been a React component instead, the children would have participated in the virtual DOM, and React could do a minimal DOM update when it was done.

React can't even see the shadow contents of Lit elements, so React just has to update the light DOM and let Lit take over from there.

(Same applies to Vue, Angular, Svelte, Solid, etc. Each framework has a reactivity system, and you have to integrate with it to get its benefits.)


You still get minimal DOM updates crossing shadow boundaries. This is a fact.

React's vdom without shadow DOM passes props to components, which all return one big vdom tree, and then there's one big reconciliation. React used with shadow DOM evaluates smaller vdom trees per shadow root, and does more, but smaller reconciliations. It's the same O(n) work.

But in reality it's often much _better_ with shadow roots because the common WC base classes like Lit's ReactiveElement all do change detection on a per-property basis. So you only regenerate vdom trees for components with changed props, and with slots that doesn't include children. So if children of a component change, but props don't, the component doesn't need to re-render. You can do something similar by hand with memo, but that doesn't handle children separately. The compiler will, of course, fix everything.

Every other reactivity system works fine across shadow boundaries. Even the super-fine-grained ones like Solid. The only issue with signals-based libraries like Solid is that they pass signals around instead of values, so to get true no-re-rendering behavior with web components you have to do that too, which means picking a signals library, which means less interoperability. The TC39 signals proposal points to a future where you can do that interoperably, too.


I feel like it’s a niche feature that got way too much attention. In a past job, we maintained a widget customers could embed onto their page. How much trouble we had with parent styles creeping into our widget and ruining the layout! This would have been so much easier with shadow DOM effectively isolating it from the customer site; that is the only valid use case for it, I feel.

Yet, for actual web components, I entirely agree with you.


Yeah but most people don't need or want 'interoperable composition', they want sites with a consistent look-and-feel. Shadow DOM makes this much more difficult.


I haven't played with the Shadow DOM since Polymer 1, but we had defaults and variables to address this that worked amazingly, and helped standardize it with other teams far better than other CSS approaches we had tried at the time. It looks like that is still a thing - https://shadow-style.github.io/ - without which people injected things through the CMS that were not fun to deal with.


All of this is introducing complexity that simply goes away if we just avoid Shadow DOM.


Because, as they always say, Win32 is the only stable ABI on Linux.

