I can't tell if your general premise is serious or not, but in case it is: I get zero dopamine hits from using these tools.
My dopamine rush comes from solving a problem, learning something new, producing a particularly elegant and performant piece of code, etc. There's an aspect of hubris involved, to be sure.
Using a tool to produce the end result gives me no such satisfaction. It's akin to outsourcing my work to someone who can do it faster than me. If anything, I get cortisol hits when the tool doesn't follow my directions and produces garbage output, which I have to troubleshoot and fix myself.
We live in a world that is becoming increasingly hypercapitalistic in every facet of life, with problems and solutions marketed together, all fed by algorithms.
"Have you got chronic seborrhoeic dermatitis? Click this link to make it go away."
"You may be eligible for compensation if you bought a Volkswagen"
"Fight the corrupt fascist government, buy a gas mask here"
"Fight the corrupt socialist government, buy a year's supply of Iodine tablets"
> It would be great if all games after a certain period of time were opensourced
I would settle for simple copyright expiration after a reasonable amount of time. 70 years after the death of the author is wholly unreasonable. Even though so many IPs are now part of the collective cultural consciousness, people can't explore their creativity using them without the threat of getting Nintendo'd (even for non-commercial projects!), and entire generations that grew up experiencing them will be dead and gone by the time they enter the public domain. It is a travesty that we impose such heavy shackles on human creativity.
When I look back, it seems to me the default was sort of "anyone can copy and modify anything", because without additional measures or rules... what's stopping them? We added copyright as a time-limited exclusivity available to the creator to encourage people to create things (knowing they would have time to recoup some of their effort commercially).
With anything else (books or stories, pictures or movies, etc) the ability to modify or extend the work was the default. Copyright was a carve-out in this.
With software it's actually the reverse--the ability to modify or extend the work is _not_ the default. It takes explicit action by the creator to make that reasonable without substantial effort in most cases. We're actually dealing with an entirely different situation here, and providing that exclusivity on top really does seem like a bad deal for society in a lot of ways.
Is there anything else that's covered by copyright that's in a similar sort of situation as software? Where the thing that's covered by copyright _isn't_ really modifiable to begin with?
Which is a lot of words to say--on the surface, yeah, I agree with you. Besides shorter terms, I think if you want that exclusivity from society you should be required to give something back in return... like the source code so everyone can benefit from and build off of your work after your period of exclusivity expires.
> Is there anything else that's covered by copyright that's in a similar sort of situation as software? Where the thing that's covered by copyright _isn't_ really modifiable to begin with?
I don't see how software is unique here. You can modify a compiled executable, just like you can modify a finished graphic, or a produced movie, or a piece of music from an album. It takes additional effort, but so does modifying the graphic without the PSD file, the movie without the editor project files, and the music without the stems.
The original copyright laws date from the 1700s; at the time the only thing being protected was text: stories, essays, reference volumes, etc. Basically, stuff for which there was no "source code" to conceal; the whole thing was right there on the page.
It's only in the 20th century that we've increasingly seen classes of copyrightable works for which the source code dwarfs the final released product: music, digital visual arts, film, and software.
To make matters even worse, the commercial interest in copyright doesn't care about any of this, because pirates only duplicate and distribute the end product anyway. So it's only the creative side wanting to remix and extend that is shut out by a lack of source escrow.
It's even a bad deal for the rightsholder. There are lots of stories in video games of how a studio or publisher lost the original source code or assets for a game, then 5, 10 or 20 years later they want to remaster it and they can't do so without jumping through really elaborate hoops involving binary recompilation, emulation, repainting assets from scratch, etc.
If the code and assets were escrowed, the rightsholder could just go claim that stuff whenever they need it.
There was an article on here recently, I think, about someone trying to reverse engineer the Xbox version, but it was really tricky because everything seemed to be serialised Unreal objects or something like that.
How are Palantir so effective (as this article implies)?
From a cynical British perspective, when I think of government departments and civil servants, I think of inefficiency, data siloing, politics, and a lack of communication between departments, as well as internally between teams. Not to mention a lack of cooperation and willingness to change.
Did Palantir have a political mandate, or can they just cut through the bureaucracy or bypass it with technology?
Are they effective? Do you have data on the number of people they've correctly identified vs. false positives? In fact, do you have any evidence they're even trying to limit false positives?
The reason they are able to very efficiently send a dozen ICE agents to a random person's home to hold them at gunpoint until they can prove their immigration status is that the goal is to send ICE agents around holding people at gunpoint, and they're happy if they also happen to get it right sometimes.
If I understand correctly, you're saying that in a majority of cases (or something approaching that) the targets of these raids are not subject to lawful deportation?
I would be curious to have data / information showing that.
I'm saying we have absolutely no concrete statistical data, and in the press we have many cases where law enforcement has been deliberately negligent in order to deport people who were here legally. We can actually see them deliberately avoiding the things you would do if you wanted to establish that the people you were trying to deport were here illegally. So it's fair to say that, until we have some evidence these people were here illegally, the sensible thing is to assume they are innocent.
It's also kind of a problem to say "Oh well, we've got no concrete data, let's continue to let them deport whoever they like and shoot anyone who gets in the way".
Palantir's mission is to solve exactly the problem you're describing: break through data siloes to get better information. The core of the platform is data pipelines that can move data from any silo into the Palantir data lake, where it can be analysed. Their forward-deployed engineering approach probably enables them to bypass the organisational boundaries between departments, and their top-down selling approach ensures management assists in bypassing them.
> break through data siloes to get better information
This is the pitch of every consulting company ever.
In this case, Palantir is doing VLOOKUP on healthcare records to get suspects’ addresses. They then put that in a standalone app because you can’t charge buttloads of money for a simple query.
Something I see often in technical circles (and I'm not accusing you) is the manufacturing of consent for ghoulish behaviour by describing it in a reductive way. I think there's a bias to consider sophisticated violations of civil rights as more nefarious than mundane ones.
UK government departments are slow and hostile to change, so I am skeptical that Palantir, parachuted in, would produce anything more than a CSV file with a few hundred rows in it.
From what I've read, they are not a product company; rather, they have a zoo of solutions, and they are hired by governments desperate to improve their IT, probably after the n-th issue goes public. I highly doubt this would be legal in many states, but who will (and can) check anyway?
Of course it's tempting to throw everything into one huge database. But Jesus, this is like interns writing the software...
Exactly like any other big tech (Google, Microsoft, etc) or consulting (McKinsey, Deloitte, etc) company!
There really isn't anything special about Palantir the company. They have disrupted consulting on marketing alone (all this forward-deployed stuff is more fluff than anything), which is not unheard of, and they continue to receive all this bad press due to their clientele (government departments, the military) and the kind of data they're processing. They are happy to take credit for all the "conniving" allegations because it makes them look like they have a plan, and anybody with purchasing power involved with them knows it corresponds very little to what the company actually does operationally.
It's interesting to see how their CEO plays into the whole thing, trying to look paranoid/crazy/brutal/.... It's really just branding/marketing. It's similar to how certain politicians in the US present themselves through vice signalling. Doesn't matter what goes on in the background, the unwashed masses will think things must be happening.
Well yes, all the big tech companies are just as corrupt as Palantir, but only Palantir is actively making tech purpose-built to enable some of the most vile people on the planet to more easily physically kidnap and harm human beings for money. They are trying to be 1930s IBM.
I asked Claude several times to resolve this ambiguity and it suggested various prioritisation strategies, etc.; however, the resulting changes broke other functionality in my library.
In the end I am redesigning my library from scratch with minimal AI input. Why? Because I started the project without the help of AI a few years back. I designed it to solve a problem, but that problem and the nuanced programming decisions behind it don't seem to be respected by LLMs (LLMs don't care about the story, they just care about the current state of the code).
> I started the project in my brain and it has many flaws and nuances which I think LLMs are struggling to respect.
The project, or your brain? I think this is what a lot of LLM coders run into: they have a lot of intrinsic knowledge that is difficult, or takes a lot of time and effort, to put into words and describe. Vibes, if you will, like "I can't explain it, but this code looks wrong".
I updated my original comment to explain my reasoning a bit more clearly.
Essentially, when I ask an LLM to look at a project, it just sees the current state of the codebase; it doesn't see the iterations and hacks and refactors and reverts.
It also doesn't see the first functionality I wrote for it at v1.
This could indeed be solved by giving the LLM a git log and telling it a story, but that might not solve my issue?
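For what it's worth, a rough sketch of that "give it the story" idea might look like the following: pull one file's commit history into the prompt so the model sees the evolution, not just the final state. The helper name and prompt wording here are made up for illustration, not from any particular tool.

```python
# Sketch only: feed a file's git history to the model along with the code,
# so it sees the "story" (refactors, reverts) and not just the current state.
import subprocess


def history_for(path: str, max_commits: int = 50) -> str:
    """Return a prompt fragment containing the recent history of `path`."""
    log = subprocess.run(
        ["git", "log", f"-{max_commits}", "--follow", "--patch", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return (
        f"Here is how {path} evolved over time; please respect the design "
        f"decisions this history reflects:\n\n{log}"
    )
```

Whether the model actually respects that story is, as you say, the open question.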
I'm now letting Claude Code write commits + PRs (for my solo dev stuff), and the benefits have been pretty immense: it's basically Claude keeping a history of its work that can then be referenced at any time, and that also lives outside the code context window.
FWIW - it works a lot better to have it interact via the CLI than the MCP.
I personally don't have any trouble with that. Using Sonnet 3.7 in Claude Code, I just ask it to spelunk the git history for a certain segment of the code if I think it will be meaningful for its task.
Out of curiosity, why 3.7 Sonnet? I see lots of people saying to always use the latest and greatest 4.5 Opus. Do you find that it's good enough that the increased token cost of larger/more recent models isn't worth it? Or is there more to it?
Opus is pretty overkill sometimes. I use Sonnet by default, Haiku if I have a clearer picture of what I'm trying to solve, and Opus only when I notice any of the models struggling. All 4.5, though. Not sure why 3.7; curious about that too.
I suspect they use the LLM for help with text editing, rather than give it standalone tasks. For that purpose a model with 'thinking' would just get in the way.
Yes, a lot of coders are terrible at documentation (both doc files and code docs) as well as good test coverage. Software should not need to live in one's head after it's written; it should be well architected and self-documenting - and when it is, both humans and LLMs navigate it pretty well (when augmented with good context management, helper MCPs, etc.).
I've been a skeptic, but now that I'm getting into using LLMs, I'm finding that being very descriptive and laying down my thoughts, preferences, assumptions, etc., helps greatly.
I suppose a year ago we were talking about prompt engineers, so it's partly about being good at describing problems.
One trick to get out of this scenario where you're writing a ton is to ask the model to interview you until you're in alignment on what is being built. Claude and opencode both have an AskUserQuestionTool which is really nice for this and cuts down on explanation a lot. It becomes an iterative interview and clarifies my thinking significantly.
One major part of successful LLM-assisted coding is to focus not on code vomiting but on scaffolding.
Document, document, document: your architecture, best practices, and preferences (both about the code and about how you want to work with the LLM and how you expect it to behave).
It is time consuming, but it's the only way you can get it to assist you semi-successfully.
Also try to understand that an LLM's biggest power for a developer is not in authoring code so much as in helping you understand it, connect dots across features, etc.
If your expectation is to launch it in a project and tell it "do X, do Y" without the very much needed scaffolding you'll very quickly start losing the plot and increasing the mess. Sure, it may complete tasks here and there, but at the price of increasing complexity from which it is difficult for both you and it to dig out.
Most AI naysayers can't be bothered with the huge amount of work required to set up a project to be LLM-friendly; they fail, and blame the tool.
Even after the scaffolding, the best thing to do, at least for the projects you care about (essentially anything that's not a prototype for quickly validating an idea), is to keep reading and following its output line by line, and to keep updating your scaffolding and documentation as you see it commit the same mistakes over and over. Part of the scaffolding also means including the source code of your main dependencies. I have a _vendor directory with git subtrees for major dependencies; LLMs can check the code of the dependencies and their tests, and figure out what they are doing wrong much quicker.
Last but not least, LLMs work better with certain patterns, such as TDD. So instead of "implement X", it's better to say "I need to implement X, but before we do so, let's set up a way of testing and tracking our progress against it". You can build an inspector for a virtual machine, you can set up e2es or other tests, or just dump line-by-line logs to a file; there are many approaches depending on the use case.
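To make the TDD suggestion concrete, here is a minimal sketch of what "set up a way of testing and tracking progress first" can look like. The module and function names (mylib.text, slugify) are invented for the example and not from any real project.

```python
# tests/test_slugify.py -- written *before* asking the LLM to implement anything.
# Each test encodes one requirement; the LLM's job is only to make them pass.
import pytest

from mylib.text import slugify  # hypothetical module the LLM will create


def test_lowercases_and_replaces_spaces():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"


def test_rejects_empty_input():
    with pytest.raises(ValueError):
        slugify("")
```

The prompt then becomes "make these tests pass without changing them", and the growing set of green tests is the progress tracker.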
In any case, getting real help from LLMs for authoring code (editing, patching, writing new features) is highly dependent on having good context and a good setup (tests, making it write a plan for the business requirements and one for the implementation), and on following and improving all of these aspects as you progress.
I can't remember the exact prompt I gave to the LLM, but I gave it a GitHub issue ticket and description.
After several iterations it fixed the issue, but my unit tests failed in other areas. I decided to abort because I think my opinionated code was clashing with the LLM's solution.
The LLM's solution would probably be more technically correct, but because I don't do l33tcode or memorise how to implement a Trie or a BST, my code does it my way. Maybe I just need to force the LLM to do it my way and ignore the other solutions?
Trying not to turn this into "falsehoods developers believe about geographic names", but having done natural-language geocoding at scale (MetaCarta 2002-2010, acquired by Nokia), the most valuable thing was a growing set of tagged training data - partly because we were actually building the models out of it, but also because it would detect regressions. I suspect you need something similar to "keep the LLMs in line", but you also need it for any more artisanal development approach. (I'm a little surprised you even have a single-value-return search() function; issue #44 is just the tip of the iceberg - https://londonist.com/london/features/places-named-london-th... is a pretty good hint that a range of answers with probabilities attached is a minimum starting point...)
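To illustrate that last point, a search() that returns ranked candidates with confidences rather than a single value might look roughly like the sketch below. The names, coordinates and scores are made up for illustration; this is not MetaCarta's or the project's actual API.

```python
# Illustrative only: a geocoder that returns ranked candidates instead of
# collapsing "London" to a single answer. Scores are invented.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    lat: float
    lon: float
    score: float  # confidence in [0, 1], results sorted descending


def search(query: str) -> list[Candidate]:
    # A real implementation would rank against a gazetteer plus context;
    # this stub only shows the shape of the return value.
    if query.lower() == "london":
        return [
            Candidate("London, UK", 51.5072, -0.1276, 0.90),
            Candidate("London, Ontario, Canada", 42.9849, -81.2453, 0.08),
            Candidate("London, Kentucky, USA", 37.1290, -84.0833, 0.02),
        ]
    return []
```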
If Claude read the entire commit history, wouldn't that allow it to make choices less incongruent with the direction of the project and general way of things?
It does not struggle, you struggle. It is a tool you are using, and it is doing exactly what you're telling it to do. Tools take time to learn, and that's fine. Blaming the tools is counterproductive.
If the code is well documented, at a high level and with inline comments, and if your instructions are clear, it'll figure it out. If it makes a mistake, it's up to you to figure out where the communication broke down and figure out how to communicate more clearly and consistently.
"My Toyota Corolla struggles to drive up icy hills."
"It doesn't struggle, you struggle." ???
It's fine to critique your own tools and their strengths and weaknesses. Claiming that any and all failures of AI are an operator skill issue is counterproductive.
But as a heart surgeon, why would you ever consider using a spoon for the job? AI/LLMs are just a tool. Your professional experience should tell you if it is the right tool. This is where industry experience comes in.
As a heart surgeon with a phobia of sharp things I've found spoons to be great for surgery. If you find it unproductive it's probably a skill issue on your part.
A tool is something I can tightly control. A thing that may or may not work today, and if it does, might stop working tomorrow when the model gets updated without any notification to anyone, the output of which I have to very carefully scrutinize anyway, is not a tool. It's a toy.
"Generate a Frontend End for me now please so I don't need to think"
LLM starts outputting tokens
Dopamine hit to the brain as I get my reward without having to run npm and figure out what packages to use
Then out of a shadowy alleyway a man in a trenchcoat approaches
"Pssssttt, all the suckers are using that tool, come try some Opus 4.6"
"How much?"
"Oh that'll be $200.... and your muscle memory for running maven commands"
"Shut up and take my money"
----- 5 months later, washed up and disconnected from cloud LLMs ------
"Anyone got any spare tokens I could use?"