Hacker News | j45's comments

Thanks for the update.

Perhaps max users could also factor into the default effort levels?


Makes affinity sound like a smarter and smarter choice.

Certain phrases provoke an over-response when it tries to course-correct, which makes things worse because it's inclined to double down on the wrong path it's already on.

Sometimes you can still learn from a story.

Humans are about making mistakes and learning from them, not hiding behind the disease of perfectionism.

If there's something the author needs to say, I'm sure they are capable of using their words.

The other side that could have happened so easily is so much silence that there was no book.


Sounds like early AI..

Sometimes the capability unlocks the possibilities.

But does it synergize paradigms?

Creating a new capability is like making a new flashlight.

Maybe the new light can see wider, or further and you see something you didn’t before that was possible.

You can synergize the looksmaxing while cooking if you like :)


I wonder if there's a way to bring some of what Pi Coding Agent has to claude code itself.

It seems that installing claude code directly from npm shields you from some of the current issues.


Efficient token use will be the new code/vim golf.

Whether it's human token use or future OpenClaws.


I've mentioned before that we should have a look at Telegraph/telegram speak. There was a HUGE industry in word efficiency at that time. There are hundreds of books.

I even think an LLM trained to communicate using telegram style might even be faster and way cheaper.
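As a toy illustration of the idea (not any real compression scheme), a telegraph-style pass might just drop articles and other filler words and count the savings. The stop-word list here is invented for the example:

```python
# Toy "telegraph style" compressor: drops common filler words.
# The FILLER set and whitespace tokenization are illustrative choices,
# not a real token-efficiency scheme.
FILLER = {"a", "an", "the", "is", "are", "that", "please", "would",
          "could", "you", "very", "really", "just", "of"}

def telegraphize(text: str) -> str:
    words = text.split()
    kept = [w for w in words if w.lower().strip(".,") not in FILLER]
    return " ".join(kept)

verbose = "Could you please summarize the main points of the article?"
terse = telegraphize(verbose)
print(terse)  # summarize main points article?
print(len(verbose.split()), "->", len(terse.split()))  # 9 -> 4
```

Real telegraph codebooks went much further, mapping whole phrases to single code words, which is closer to what a trained model could do.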


Why use many word when few do trick?

Reminds me of the terminus agent/harness on the terminal-bench coding benchmark - they just send keystrokes to a tmux session. They score pretty well.

https://www.tbench.ai/news/terminus
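Terminus's actual harness drives tmux (`send-keys` / `capture-pane`); the gist of "just send keystrokes to a shell and read back whatever appears" can be sketched with a plain pipe to /bin/sh, which is a simplification that loses the TTY and interactivity:

```python
import subprocess

# Minimal sketch of the "drive a terminal with keystrokes" idea:
# feed raw input lines to a shell's stdin and capture whatever it prints.
# Terminus itself talks to a real tmux session instead of a bare pipe.
proc = subprocess.Popen(
    ["/bin/sh"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate("echo hello from the harness\n")
print(out.strip())  # hello from the harness
```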


> I've mentioned before that we should have a look at Telegraph/telegram speak.

.- -. -.. / .. --..-- / ..-. --- .-. / --- -. . --..-- / .-- . .-.. -.-. --- -- . / --- ..- .-. / -. . .-- / - . .-.. . --. .-. .- -- -....- -... .- ... . -.. / --- ...- . .-. .-.. --- .-. -.. ...


.--. .-. .- .. ... . -.. / -... .

It’s the new cloud cost vector, where cutting 2K from context on a busy service saves $xxxxx.

Terse.
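Back-of-envelope, with made-up numbers (the per-token price and request volume are hypothetical, not any provider's actual rates):

```python
# Hypothetical figures: $3 per million input tokens, 10M requests/month.
price_per_token = 3.00 / 1_000_000   # $/input token (assumed rate)
requests_per_month = 10_000_000      # assumed traffic
tokens_trimmed = 2_000               # context cut per request

monthly_savings = price_per_token * requests_per_month * tokens_trimmed
print(f"${monthly_savings:,.0f}/month")  # $60,000/month
```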


Like "Token Usage Consulting" companies popping up now? :-D

No org doing real work cares about token use costs.

This mainly just affects hobbyists.


Token use cost can easily get as large as dev salaries. Even real businesses care about that.

Inefficient token use will have to be tightened up.

There are services that strip everything but the text when browsing, making pages dramatically lighter.

20+ MB is the weight of all the javascript javascripting, ultimately to arrange and display an HTML page.


20+ MB is also the weight of rendering the HTML inside each client, instead of at the server. It is the weight of continuous disdain for users, and of 30 years of not giving a fuck about adding yet another abstraction layer and making it someone else's problem.

You can somewhat "fix" this by using your slow link to connect to a VPS somewhere that then connects to the Internet, either via links or a similar text-mode browser, or other bandwidth-saving gateway.

In the article and my pi-isp project, I use MacProxy Classic to strip heavy stuff from web pages through a local proxy service running on the Pi. This helps a lot, but if a page has 20 MB of resources, it can only do so much (without completely disabling JS and images).

There were proxies in the old days that would recompress images and do other things, but if the 20 MB is compressed javascript, there's not much you can do but hope it caches well.
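The filtering step of such a stripping proxy can be sketched with the stdlib HTML parser. This is a simplified illustration, not MacProxy Classic's actual logic; a real proxy also rewrites links and handles images, encodings, and caching:

```python
from html.parser import HTMLParser

# Sketch of a text-only page filter: drop <script>/<style> bodies
# and keep only the visible text.
class TextOnly(HTMLParser):
    def __init__(self):
        super().__init__()
        self.skip = 0       # depth inside script/style tags
        self.chunks = []    # visible text fragments

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.chunks.append(data.strip())

page = "<html><script>heavy()</script><p>Actual content</p></html>"
p = TextOnly()
p.feed(page)
print(" ".join(p.chunks))  # Actual content
```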

That's a lot of stuff that I don't want to deal with or maintain. It's simply beyond the tolerance of my gumption, so it's not going to happen. :)

What can happen, instead: I can dream.

In this dream, the process of loading a web page identifies the viewing platform well enough and the server delivers content that is shaped for it, so it can be downloaded quickly and displayed simply by the end-user device. It's not one-size-fits-all at all, or even one-size-fits-most: It's a pile of simplistic HTML and maybe some minimal javascript and CSS that is meant for whatever the user is using right now.

In this way, the same layout jiggering, varnicating, and transfabulation is done as it is done today, but the work of doing so principally happens on the server instead of the client.
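A crude version of that server-side negotiation step, picking a payload shape by User-Agent; the patterns and tiers here are invented for illustration:

```python
# Hypothetical server-side selection: map a User-Agent string to a
# payload "shape". The substring matches and tier names are made up.
def pick_variant(user_agent: str) -> str:
    ua = user_agent.lower()
    if "lynx" in ua or "links" in ua:
        return "text-only"   # plain HTML, no CSS/JS
    if "mobile" in ua:
        return "light"       # minimal CSS, no JS bundle
    return "full"            # whatever the desktop gets today

print(pick_variant("Lynx/2.9.0"))                      # text-only
print(pick_variant("Mozilla/5.0 (iPhone; Mobile)"))    # light
```

In practice User-Agent sniffing is famously brittle, which is part of why the mess moved client-side to begin with.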

Also in this dream, I can hear people saying "But that's a can of worms!", and they're right. It's a damned mess -- but it's a mess either way. This just moves the mess from the client to the server.

I can also hear shouts of "But there will be hundreds or even thousands of layout paths!" And all I can think is: If there's a thousand unique device types hitting a given dynamic page, and that scales poorly with the server side doing the work, then that's a problem for the systems guys to handle instead of the web guys.

Which is fine: The web guys hacking away however they want is how we got into this mess of 20 megabyte Javascript downloads just-to-view-a-web-page to begin with. They've quite broadly proven that they're shit at this kind of work, and in my ideal world they'd be relieved of that duty.

(And yeah, to be sure: After I wake from this dream I'm still going to go outside and yell at the clouds, just as I do every day.)


This dream actually existed for a hot minute back in the day! There were a number of sites that would give you much different experiences based on your capabilities - and this continued into the mobile era - until Jobs screwed it up by shipping a desktop-capable web browser on a phone.

A couple of news sites have a low-bandwidth version: https://text.npr.org

