[flagged] Why Copilot Is Making Programmers Worse at Programming (darrenhorrocks.co.uk)
82 points by mmphosis on Sept 11, 2024 | 130 comments


This article 100% read like it was written by AI.

I will personally never use Copilot, or any other AI code generation tool, for the simple reason that I enjoy writing code.

Even if I were unfamiliar with a new language, I still wouldn't use it. Instead, I'd consult the documentation and follow examples. I like coding, and I neither need nor want a machine to do it for me.

It's exactly the same as writing English. There is great pleasure to be found in writing; it's worth your time. Just be careful, when doing so, not to end up sounding exactly like ChatGPT.


> I will personally never use Copilot, or any other AI code generation tool, for the simple reason that I enjoy writing code.

I had a file in an XML format that I needed converted to JSON. I haven't parsed XML in over a decade, and I didn't want to relearn it. I also didn't want to match up XML fields to JSON fields by hand. Both file formats had well-defined schemas available, but they were also common enough that most major LLMs were trained on them.

I just asked Claude to write a program to convert from one file format to another.

There is no joy in that task. Doing it myself would have been the very definition of a useless chore.
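The commenter didn't share the generated program, but a conversion of this general shape can be sketched in a few lines of Python with only the standard library. The attribute and text conventions below (`@attributes`, `#text`, the sample document) are illustrative assumptions, not the actual schemas from the anecdote:

```python
import json
import xml.etree.ElementTree as ET

def element_to_dict(elem: ET.Element) -> dict:
    """Recursively convert an ElementTree node into a JSON-friendly dict."""
    node = {}
    if elem.attrib:
        node["@attributes"] = dict(elem.attrib)
    text = (elem.text or "").strip()
    if text:
        node["#text"] = text
    for child in elem:
        child_dict = element_to_dict(child)
        # Repeated sibling tags become a JSON array; single tags stay objects.
        if child.tag in node:
            if not isinstance(node[child.tag], list):
                node[child.tag] = [node[child.tag]]
            node[child.tag].append(child_dict)
        else:
            node[child.tag] = child_dict
    return node

xml_doc = "<books><book id='1'><title>Dune</title></book><book id='2'><title>Hyperion</title></book></books>"
root = ET.fromstring(xml_doc)
result = {root.tag: element_to_dict(root)}
print(json.dumps(result, indent=2))
```

The tedious part of the real task is matching one schema's fields to the other's, which is exactly the chore the commenter was happy to delegate.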


If you are an experienced coder, then you know which tasks are useless chores. It's a different issue for new devs, who will be learning to code with tool assistance.


And parsing text is quite foundational. Although today I would have to think hard to find a language that doesn't have a ready-to-use XML/JSON parser at hand.

Perhaps there is no reference implementation in Pancake Stack, but your language has to be quite arcane to not have something available.


By the time today's new devs become senior devs, the tools will be 10x better. 10x in 3 years is very conservative; it may as well be 100x, not even considering runaway intelligence.

In 2 days, GPT, Claude and Llama 3 helped me solve 4 different problems. 1) I wanted to load a JSON file in a webpage, parse it, and read it, using Rust + WebAssembly. GPT solved it in 10 minutes. 2) Then I wanted to take this code and make it a Chrome browser extension. GPT solved that as well. 3) Then I wanted to recursively walk directories of Rust source code, run tree-sitter queries, get the match points, and print the matches, not with the CLI but using the libraries. GPT solved it in an hour. 4) Just now I finished analyzing a git repo, fetching all branches and commit messages using git2. Llama 3 + Claude Sonnet solved it in an hour or two.

DHH did an interview yesterday [1] in which he mentions that he will be programming by hand even if everyone else uses A.I. assistance. I think he is pretty smart and he will not be stuck in the era of meaty fingers pressing keyboards to make programs.

[1] https://www.youtube.com/watch?v=fEy9JhHk6lg


> I just asked Claude to write a program to convert from one file format to another.

Was it correct?

How did you know?


To be fair, it's pretty simple to check yourself.


Same way that you'd know if you wrote it by hand?


Don't use facetious question marks. If you are genuinely confused, put it into words.

Aside from that: following logic as you develop it and following logic as you read it are two different skills. With the latter, it's also easier to miss details. Further, when you're not experienced in the subject of the code, it's critical to develop an understanding by doing, and not only by reading. Further still, following the logic of genAI code is complicated by the fact that AI can make different mistakes than humans do, and it speaks and documents with much more authority; both of these make issues not only more difficult to spot, but difficult in a frustrating way, draining energy and satisfaction.


Shoulda asked Claude what library to use.


I'm wondering if there was some remapping of fields involved; otherwise, yes, a library.


Yeah, parsing XML is no small deal. Did it answer with a regexp or what?


When I tested just now, for parsing XML it used libraries or native tools depending on the language, but it converted the nodes provided by that library to JSON in written-out code.


TBF there are countless websites that can do this automatically for you, even providing the code to handle the conversion. No AI needed.


I asked ChatGPT to create an article with this title, and "Erosion of Core Programming Skills" was, word for word, one of the headings it used.

And of course there's the fact that each heading has exactly two paragraphs of similar length, and the overall tone as well.

The whole blog, including the other articles, looks AI generated.


My work has Copilot. It's best when it's a turbocharged IntelliSense: if I can push tab and have it autofill what I was already about to type, I like it. Saving keystrokes is just smart. I won't let it think for me, but I do let it handle the mundane so that I have more time to think. Also, it's terrible at the thinking, just godawful at solving most of the problems that I need to think about.


I've never been able to see any benefit in this because typing the code takes much less time than deciding what code should be there.

I could get a little bit of benefit out of an autocomplete that only showed legal suggestions, but even with deterministic autocomplete I frequently had to look at too many suggestions and it was faster to throw a query at Google.


I build CRUD apps at BigCo and most of the LOC is pretty mindless but needs to be deployed yesterday. Copying over 100 different memberVariable = DATABASE_CONSTANT in different forms becomes much easier when copilot starts guessing the format.


I would say that in many cases there's an annoying amount of boilerplate, and it would make sense to DRY it up and abstract it away because it's painful to write, but Copilot has made it bearable, so I care less about that now.

And sometimes, if you abstract it away too much, the edge-case handling and everything affecting everything else might not be worth it.

I also think that with Copilot in the house, it's completely acceptable to have it write util functions even if they are not DRY. You could import leftPad as a package, but it doesn't matter, since you can just type function leftP and Copilot will finish it for you within a few seconds, which is faster and more flexible for the future. And without unnecessary, unknown dependencies with potential vulnerabilities baked in.
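As an illustration of how small such a util really is, here is a left-pad sketched in Python (the commenter's example is JavaScript's left-pad package; the name and exact behavior here are my own assumptions):

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    """Pad s on the left with `fill` until it is at least `width` characters.

    A few lines like this are faster to write (or tab-complete) than pulling
    in an external dependency, and carry no supply-chain risk.
    """
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    return fill * max(0, width - len(s)) + s

print(left_pad("7", 3, "0"))  # → 007
```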

I think there's also going to be some sort of enlightenment about overusing libraries and the idea that everything has to be DRY. Frequently this causes more problems than it solves, even though you think the code is really, really clean.


I wonder if it depends on the programming style, because I always feel like I have this massive amount of things that I need to "vomit" out, so Copilot is always helpful for me. Like I already know what the whole system should be like, so it's just about laying it out.

It's mostly full stack web applications, with React frontend and APIs with databases, so it's not exactly something groundbreaking.


I think that's a big part of it. I'm writing servers and systems in C++ and Kotlin. C++ is a very dense language, virtually every token is semantic with little redundancy except the archaic header declaration system, so there are not all that many shortcuts that still preserve my intended meaning. Kotlin feels like a response to Java being so low-density, and is much higher density, although not as much as C++.

Every time I play with web or GUI development I get the impression that the languages and systems are misaligned with the way people express their intentions in the space. It makes me want to develop a language (or family of languages) directly for expressing GUI concepts, but I don't understand the space well enough to do a good job.


I've found it to be useful for things where the auto-complete is both long enough and boring enough that reviewing it doesn't take longer than typing it would have.

For a practical example, let's say I'm defining protobuf models. I start by writing

```
service X {
  rpc doSomething([cursor]
}
```

It'll generally be smart enough to complete to the same pattern as every other one in the codebase

I can then put my cursor after and get it to generate the models by just tabbing a bunch:

```
service X {
  rpc doSomething(DoSomethingRequest) returns (DoSomethingResponse);
}

[cursor]
```

```
service X {
  rpc doSomething(DoSomethingRequest) returns (DoSomethingResponse);
}

message DoSomethingRequest {}

message DoSomethingResponse {}
```

Then I can go to my actual code file where this is implemented, and having had both the context of the codebase and the context of the protobuf file, it'll generate

```
class DoSomethingImpl extends some.grpc.package.DoSomethingService {
  override def doSomething(request: DoSomethingRequest): Future[DoSomethingResponse] = {
    // Some usually bad code I just delete
  }
}
```

Nothing here is super complicated, I know it's right at a glance, and it's super easy to write, but I hate having to do the boilerplate. Could I write a simple script that kinda autogenerates this for a given API? Maybe. But a bunch of typing and piping things around becomes just a bunch of "tab" presses, and an LLM is more flexible to slight changes in the pattern.

This is multiplied by 10 if I then want to consume this from some other language and it has both in its context. I literally just need to create the one definition, and the LLM will complete the code required to produce/consume the API on both sides, as long as it's all in the same context window (and tools are getting good at doing this).

What I really want now is to just say "Add a proto API for `doSomething`, add boilerplate to give a response and then consume it, and put it in X existing gRPC service" and have it do all of this without a series of smaller completions.


A lot of the time it'll guess the type signature I'm using or the entire shape of the mock object I'm trying to build. When it works it's much better than intellisense, but I don't ever go to it, if the ghost text pops up in a way that works for me I'll hit tab, otherwise I'll keep typing until/if it figures out what I'm aiming for.


And all the generated code needs to be maintained.

Typing the code yourself incentivizes you to write code that's concise and easy to read and avoids repetition.


I personally never use a Word Processor, or any other electronic typing appliance, for the simple reason that I enjoy typing on my mechanical device.

Even if I were unfamiliar with a new keyboard, I still wouldn't use it. Instead, I'd consult the technical guides and follow examples.

I like typing, and I neither need nor want a word processor to do it for me.

Certain things may seem strange, or that they take away from the experience, but sometimes it is simply progress... good or bad.


> This article 100% read like it was written by AI.

I noticed that too, found it quite ironic =)


>> I will personally never use Copilot, or any other AI code generation tool, for the simple reason that I enjoy writing code.

LLM code generators are a problem for those who write code for a living, not for pleasure. They may well eventually be forced to use such systems, whether they like it or not. When your paycheck is on the line, you will conform.

For those who both enjoy writing code and have to do it for a living, their everyday jobs may just get a lot more tedious and boring than they are right now.


ChatGPT is really good at answering specific questions. Just like a co-worker, it will do the best it can with its knowledge, but it expects you to know how to integrate that knowledge.

It’s a very useful tool to see how to do something a different way, and sometimes it’s surprising and has some really good ideas!

The problem is people expect these tools to “just do it”, but that doesn’t work. You need to iterate—if it makes a mistake, nudge it in the right direction and you’ll find it knows how to intelligently adjust where it goes wrong.


> This article 100% read like it was written by AI.

  q 'write a an article called "Why Copilot is Making Programmers Worse at Programming"'
1. Introduction: In recent years, the use of Copilot has become increasingly popular among programmers. While Copilot can be useful in certain situations, it is also making programmers worse at programming in several ways.

2. Lack of critical thinking skills: When using Copilot, programmers may rely too heavily on the tool to generate code for them. This can lead to a lack of critical thinking skills and an inability to solve problems independently.

3. Overreliance on automation: Copilot is designed to automate certain tasks in programming, which can lead to an overreliance on automation. While this can be helpful in some cases, it can also make programmers less able to think creatively and outside the box.

4. Dependence on a single tool: Copilot is just one tool among many that programmers use to write code. By becoming too dependent on a single tool, programmers may be less well-rounded in their skills and less able to adapt to different programming environments.

5. Lack of creativity: When using Copilot, programmers may not have the opportunity to be creative with their code. This can lead to a lack of originality and an inability to come up with innovative solutions to problems.

6. Inability to learn from mistakes: With Copilot, programmers may not have to deal with as many errors in their code. However, this can also make it harder for them to learn from their mistakes and improve their programming skills.

7. Lack of attention to detail: When using Copilot, programmers may be more likely to overlook small details in their code. This can lead to bugs and other issues that could have been caught with a little more attention to detail.

8. Conclusion: While Copilot can be useful in certain situations, it is also making programmers worse at programming in several ways. By becoming too dependent on the tool and not developing critical thinking and problem-solving skills, programmers may be less well-equipped to handle more complex programming tasks. It is important for programmers to use Copilot as a supplement to their existing skills rather than a replacement for them.

  mistral-7b-instruct-v0.1-Q4_K_M-main.llamafile


> I will personally never use Copilot, or any other AI code generation tool, for the simple reason that I enjoy writing code.

This will sound extremely harsh, but I've noticed I strongly favour colleagues who use AI-assisted tooling over those who do not. The PRs, documentation, and code just look cleaner.

So when it comes to whom I favour working with, it's usually people who rely on AI tools. They deliver code I enjoy maintaining more.


I do use AI support. Locally hosted, though, and no Copilot. It just supports me, but I do write my code myself.

What I see is that new programmers use it extensively, though, and if the service is down they often remain helpless. I still think it is very good support in the beginning, as long as some knowledge is retained, which it very likely will be.


I also enjoy writing code a lot. Sometimes I try to let GPT help me by me providing some context or debugging something. It's been good at that so far, as I sometimes don't have the time at work to scroll through the depths of the internet. I still write 99% of the code myself though. It's typing practice for me. Typing is a good thing.


Few of us only write for fun, as opposed to for results on a deadline. I like having time with my wife, and getting work done so we can be together. I also like writing, but I mainly write to accomplish tasks. There are many, many uses for AI within the domains of "writing" (English or code).


>>I will personally never use Copilot, or any other AI code generation tool, for the simple reason that I enjoy writing code.

I'm pretty sure I would enjoy riding horses. But if I have to deliver things to people, I think I'll be using a truck.


I like reading source code examples to learn so if an AI coding bot can help me level up I think that’d be great… maybe even analyze my code and suggest improvements or a better way to solve a problem. I’m not sure AI is at that level yet though.


Copilot explains code really well, in my experience.


Reverse engineering code to design patterns is what I am interested in. And then say, have it express the code with different (relevant) patterns possibly.

Or take requirements and propose extensible design patterns. There aren't many design patterns per se but that task has a lot of value. Perhaps the most value.

Also an AI that can find similar concepts using different patterns where they should be consolidated. This could even be for something "as simple as" file access. So many bugs and issues due to one person not knowing how to access resource X.


Yeah I have found copilot to be lacking when it comes to understanding the structure of certain things. Like when you ask it to add tests, it can figure out where the test files go, but can't seem to insert code into existing tests, so you have to rearrange code fairly often.

I think that code-generator models need to understand the abstract syntax tree, e.g. using tree-sitter to better understand the structure and, as you said, the patterns involved in the code. Generators could then check that their new code correctly implements a pattern.

I'd love to be able to feed a repo into a model and get documentation and diagrams out of it.


The worry for me is when documentation and examples start to be written by AI because the team decides that writing it themselves is boring and a waste of their time.


The most helpful way I use GPT feels more like using a supercharged autocomplete than not writing the code myself. Often I do know what I want to write, and if I get a quarter of the way into a function there's a solid chance in my experience that the AI will suggest to the letter exactly what I was intending on typing anyways. This saves me a huge amount of time and feels no less satisfying.

I still read the docs and examples because it isn't reliable enough to tell me accurately how to use most packages. It's absolutely a mess of flaws and sharp edges so I get the hesitation in using it, and for sure it has allowed less than stellar coders to confidently present steaming trash heaps with a pretty bow on top, but I don't think it's always fair to assume using AI tools means being less involved with the code you produce.


Pass. This article could have been published as "Why Internet Message Boards Are Making Programmers Worse at Programming" 30 years ago, "Why Google Is Making Programmers Worse at Programming" 20 years ago, or "Why StackOverflow Is Making Programmers Worse at Programming" 10 years ago. It's the same-old-same-old. Has it been true in the past? Maybe, if your criterion for "good programmer" is "someone who went through the same struggle I did". But the industry grows and adapts. The next generation of programmers is going to be good at different things than we are. As long as they can get the same job done, who are we to say that they are wrong?


Agreed.

If you're getting paid to ship features, no one cares whether you're learning, or learning the "correct" way. You simply have to get the job done. If tools like copilot help you meet your goals and budget, then they are a good thing.

I heard the same thing in the 90s when Java (and even C++) came on the scene. The C programmers bristled at the idea that you no longer had to deal with memory management. They thought it would make the programmers sloppy, generating poorly performing code bases.

They were right, and it didn't matter.


A generation later, most C programmers are bad at performance anyway. And they get segfaults.


> The C programmers bristled at the idea that you no longer had to deal with memory management

The funny thing is that is the same example I was thinking of.


You’re right on with the parallel with stack overflow, google, and message boards. I take a different conclusion though, which is that lazy irresponsible copy pasting has always been with us, but that doesn’t make it good. Remember the “full StackOverflow engineer?” Competence and diligence will continue to produce different results than faking it.


I was going to ask something along the same lines. Isn't the article analogous to saying Python makes programmers less familiar with the underlying hardware? Perhaps there are some things we don't want to think about (boilerplate) and other things that really drive performance or the business.


The worse programmers are winning!


Some of these points may be true, but as someone who just started using Copilot at a new job, on an unfamiliar code base and programming language, it's been a lifesaver.

Obviously you have to read the code to make sure it makes sense, and much of the work is deleting the main bit of functionality it attempted to implement, and re-implementing it correctly.

However, having it autocomplete entire function definitions, including all the {} () => | : `${x.y}` fiddly bits, sure saves a lot of time.

The one point I don't agree with at all is `Dependency on Proprietary Tools`: there are already plenty of open-source alternatives, and these will only improve with time.


From the point of view of corporate use, there absolutely isn't an open source alternative. At least, not yet.

The critical feature, which also counts out some paid tools, is the ability to avoid regurgitating training data into your codebase. GitHub has the raw dataset to be able to offer that (and the legal indemnification that comes with it). I don't know of any open source system that can offer the same, even in principle.


I feel like we'll be having the same arguments forever. Sure, LLMs bring some issues, but programmers will always find ways to write bad code no matter what tooling or techniques we have available. His critique is basically the same as saying calculators will make us worse at mathematics; at least in Europe, PISA maths skills have been steadily increasing since the introduction of calculators to school syllabi.


The analogy to calculators isn't particularly close. A calculator will help with the mechanics of arithmetic or even graphing but it won't help a student who doesn't understand when and why to multiply, add, or subtract. Copilot will generate working code (for some definition of "working") that the programmer might not understand at all.


Graphing calculators, with applications you can black-box pattern-match and plug into, might arguably have done so, since you don't need to understand the machinery, just pattern-match on the problem type.


> His critique is basically the same as 'calculators' will make us poorer at mathematics

Calculators did make us bad at arithmetic. It's just that people stopped caring.

I don't think this is bad, though, anything to avoid typing repetitive boilerplate, which is pretty miserable regardless of your skill level.


Writing made us bad at remembering. Socrates was famously a critic of writing for this reason[1]. If we can say anything about the effectiveness of the argument, it's that it's completely impotent against new technology and tools.

1. https://fs.blog/an-old-argument-against-writing/


> If we can say anything about the effectiveness of the argument, it's that it's completely impotent against new technology and tools.

Not if you care about these skills! It's not like Socrates was wrong, it's just that the world moved on without him. Having a good memory has nearly zero value these days.


Well, Socrates was right. Writing did make us bad at remembering.


Perhaps worse at remembering, but many orders of magnitude better at preserving, accessing, and learning from information about the past.


In stored form, sure.

Otherwise the information the average person knows about the past is probably even worse, pop memes aside ("Cleopatra's nose", "Egyptians worshipped cats").

Even the immediate past, like 10-20 years before people were born, would be a foreign concept if it weren't for the occasional period movie and TV series reference.


> and learning from information about the past.

You'd assume so based on the prevalence of information, but I'm not sure there's any evidence of this. Hell, from my own interactions about the past with others, you might as well have completely wiped everything before WWII from cultural consciousness and pretended the last 80 years are the only ones that matter. Some days it feels more like the last 40 years represent the only world people can even imagine, let alone have an opinion on.


It's a mixed bag, of course. But it really makes me wonder: before the advent of writing, what would it have been like to think about an event that happened 80 years ago, outside just about everyone's living memory? It seems like those events would already have begun to recede into the age of myth! You might not even have a great sense of exactly how long ago it was.

On the other hand, thanks to the invention of writing, we have a chance at having at least some reliable knowledge about how people lived thousands of years ago, across the world. (I assume your comment is at least partially tongue in cheek. But I take your point that, just because writing gives us incredible ability to make records and to learn about the past does not mean people will use it.)


Until someone opens a 487-line PR with near-zero understanding of what the changes actually do.

It's one thing to use LLMs to assist you in understanding the codebase, and even to help write some of the changes. It's another thing entirely to blindly trust the LLM, which is what ~90% of LLM users are going to do, especially given enough time.

We've all heard the tall tales about people crafting prompts and getting complete video games as output, etc. They're nearly entirely BS, but people believe in that type of result and therefore will believe their results are just as good.

We see this now - the stories of non-programmers using ChatGPT to hallucinate-up poorly understood/written scripts/webpages and then getting really upset at their IT team for refusing to use it. "They're just afraid AI will steal their jobs!"...


>at least in Europe PISA Maths skills have been steadily increasing since the introduction of calculators to school syllabi.

Has it really?

https://media.licdn.com/dms/image/v2/D5612AQG5D1oqhEfgnQ/art...

And calculators are mere helper tools, they don't take over the solving/thinking part.


A more apt comparison would be someone always telling you how to solve a maths problem without you trying to figure it out yourself. I think it's pretty clear how that would be detrimental to skill progression.


I was surprised to see the author didn't provide a single source to support his ramblings. Without any data to support our arguments, I doubt we'll settle these questions, ever.


Sure! sqrt(-1) is actually (-1/12), as proved by Ramanujan summation.


I'm in the "it doesn't matter" camp.

Over time, more people will realize that tools like Copilot aren't worth the headache. The solutions are often wrong, the explanations of those solutions are wrong, the corrections when you point out a mistake are wrong, etc.

Once "AI" hype dies down and people see these tools for what they are, glorified Markov chains, it won't really matter. Maybe it will get some use in making boilerplate code for the most basic of applications, but that's about it. And the occasional junior dev stumbling into it not realizing just how bad their output can be.


I have a moderate sized legacy project where I need to migrate tests from Enzyme to React Testing Library (RTL). Probably 150+ test files, each containing upwards of 10 test cases.

While not using Copilot, I have a GPT-4o assistant with a system prompt set up through trial and error to convert a given test from Enzyme to RTL. There are certain scenarios where a given test cannot actually exist in RTL, due to a difference in testing philosophy between the two frameworks, and I am required to make some decisions, but overall this is probably 10x faster than refactoring these tests by hand.

One important aspect of this, though, is that when I encounter a repeated failure from the LLM, I update the system prompt going forward. Even though this is a simple one-shot approach, it still works well for a task like this.


Solutions from any source are often wrong. Stack Overflow, intellisense, human peers. The question you have to ask is: does it make you more productive even though there are mistakes?


Right now it's a bit like Tesla's self-driving. It mostly works, but "mostly works" isn't a great standard, and maintaining supervision to correct errors involves continually rebuilding state and trying to debug AI code, which can be more taxing than just doing the thing yourself.

This is case by case, of course. I used it the other day to generate fairly idiomatic table-driven tests. It took a few swings plus some manual tweaking, but as I don't particularly enjoy writing tests, I was pretty satisfied with the outcome, and it had more coverage than I probably would've written. Well worth the 25 cents in API credits. On the other hand, there have been more than a few times I've given up trying to nudge the AI and just did it myself. In those cases it was a net negative and just wasted time. So the trick is feeling out where that line is for each model, so that wasted time < saved time.
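The commenter's tests were presumably in another language, but the table-driven pattern itself is a one-liner per case; a minimal Python sketch (the `slugify` function and its cases are invented for illustration):

```python
def slugify(title: str) -> str:
    """Hypothetical function under test: lowercase a title and join with hyphens."""
    return "-".join(title.lower().split())

# The "table": each row is (input, expected). Adding coverage is one line
# per case rather than one test function per case.
CASES = [
    ("Hello World", "hello-world"),
    ("  leading and trailing  ", "leading-and-trailing"),
    ("single", "single"),
]

def test_slugify():
    for title, expected in CASES:
        assert slugify(title) == expected, f"slugify({title!r})"

test_slugify()
print("all cases passed")
```

Because each case is pure data, this is also a shape LLMs complete well: once a few rows exist, tab-completing more rows is cheap, and the assertion logic never changes.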


There's no evidence of any of that, it's pure speculation. I bet people were saying the same thing about Intellisense, and Google, and documentation-on-hover, and live compiler errors, etc. etc.


You don’t need evidence for something to be true. The observations made by this article are self-evident to those working in industries that have adopted copilot.

However, in this case there is empirical evidence copilot makes code quality worse: https://visualstudiomagazine.com/articles/2024/01/25/copilot...


> The observations made by this article are self-evident to those working in industries that have adopted copilot

I work in such an industry and it's certainly not "self-evident". Clearly it's also not self-evident to many other people commenting on this post.

> However, in this case there is empirical evidence copilot makes code quality worse: https://visualstudiomagazine.com/articles/2024/01/25/copilot...

One study does not prove anything, although at least it is _some_ evidence. But other studies linked in that very article show opposite results.

To be clear I don't know if LLMs are better or not for developer productivity. I think it's too early to tell. But given that people, in general, are prone to conservatism, combined with the fact that past developer tooling enhancements have not caused the negative outcomes that many people at the time predicted they would cause, leads me to believe LLMs will probably be a positive change.


I forget the sources, but I recall people saying the same about syntax highlighting, even.


I'm feeling both nostalgic and elated. The first time a friend gave me a version of Turbo Pascal with syntax highlighting was an epiphany. The same feeling struck about a decade later after trying actually usable auto-complete for the first time in JBuilder. Pure magic.

Tools like the Replit bot evoke similar emotions. I have no idea why they generate so much negativity in most HN comments. This cohort must be on average younger than myself. I'm pushing 50, and for the first time in decades I feel like we are actually about to witness an actual productivity boost in the programming craft, because let me tell ya, server-side JS or SPAs or NoSQL certainly weren't it.


They absolutely were. I recall when Stack Overflow was the great evil. Luddites should get over it. We are moving forward. There is no putting the LLM back in the box, and even if there was a way to do that, it would be dumb.


Luddites weren't against technology for its own sake. They were critical of the social ramifications of technology because they were being put out of their jobs and livelihoods. How, then, should Luddites simply get over mass unemployment? Until we "solve" the problem of unemployment, this problem will continue to exist.


LLMs are here, they are staying, and we either learn to live with them or accept that it's the end of our time. There is no putting LLMs back in the box. Whoever does not use them will become irrelevant. Societal progress is not one way, but technological progress is, and there is a good reason for that: the universe is biased towards processes that are better at increasing entropy. If we aren't that, we will be replaced.

It may suck, but facts are facts. Dealing with them is better than pretending they don't exist.


No one is asking for technology to be un-invented. We're asking for the societal consequences of technology to be addressed.

Technology can be disruptive, but "just deal with the disruption" is not a viable response. It breeds resentment. It favors populism. It says, "sorry, but your problems are not worth fixing."


This article is not about that; it's about how a technology supposedly makes people worse at something, when in reality the technology transforms the activity. It's about as sensible as claiming modern input methods make people worse at programming. If humans can no longer be useful at some ancient, irrelevant practice because of a quantum leap in technology ... so what? To quote most historic person that has ever and will ever live; what difference, at this point, does that make?


Because people being useful = people's livelihoods.

When that equation changes en masse, and we respond with "things change, too bad," it is a social failure.


> To quote most historic person that has ever and will ever live

Hillary Clinton? Is this bait?


People have said that types make people worse at programming too lol


I found the error rate of Copilot unacceptable for most of my daily work, so 2 months ago we kicked off a project to write a tool more appropriate for someone who practices TDD -- specifically, I don't want to see generated code unless it passes my tests. The early results are very promising for my stack, which is backend Java/Spring. See https://testdriven.com/testdriven-2-0-8354e8ad73d7


> When a developer writes every line of code manually, they take full responsibility for its behaviour, whether it’s functional, secure, or efficient. In contrast, when AI generates significant portions of code, it’s easy to shift that sense of responsibility onto the AI assistant.

I’m not sure how true this is. Any place I worked, whoever checked in the code is responsible for it.


From TFA: "Erosion of Core Programming Skills, Over-Reliance on Auto-Generated Code, Reduced Learning Opportunities, Narrowed Creative Thinking, Dependency on Proprietary Tools, False Sense of Expertise"

Personally, I've been making the same arguments against using even vanilla auto-complete. It's a distraction, erosion of the mind, encourages bad habits, etc.


These seem like stretched reasons. Similar things could be said about using libraries, or about languages that abstract away low-level complexities. Also, I'm pretty sure it's an AI-generated article, which I guess could be a stronger point about writing skills being eroded than the arguments themselves.


Even if the assertion is correct (which I believe to be the case), the most probable reality-based outcome is that decision makers will continue to push toward automation.

If some significant portion of the humans doing the job can be replaced by LLM, or by much cheaper humans augmented by LLM, they will be so replaced. By the time it creates a real problem, those folks will have cashed out and moved on.

That isn't new behavior, but folks who fall back to that as a way of dismissing concerns lack a good grasp of the scale enabled by technology, here.

"Interesting times".


It's not so much that Copilot is a threat because it could produce tight, elegant code that handles all the edge cases and so on, but because employers still see coders as expensive keyboard jockeys whose code is not significantly better than what a Copilot user could cargo-cult into existence.

Blaming Copilot feels a bit like the wrong target. Like the Luddites, the real question is the relationship between employer and employee, and how the presence of the machine empowers or endangers the worker. To put it another way. Suppose a perfect Copilot existed, that required a human to drive it but made that human a 10x developer. Do you think they would get paid 10x as much? Or would the worker stay where they were, perhaps under threat of replacement, and the employer take the spoils?

https://www.flyingpenguin.com/?p=28925

Edit: to be clear, I am actually a big fan of Copilot for increasing one's personal productivity, rather like a super-Google or a non-snarky Stack Overflow. But I remain rather cynical about how those benefits might work in the new corporate environment.


Note: This is absolute speculation - it has no evidence or even anecdotes around it.


But it does. The same holds for Google and Stack Overflow: the majority of developers have had trouble with SQL since the introduction of OR mappers, and there are plenty of other examples. Ask a junior dev about the difference between a stack-allocated and a heap-allocated object and be amazed.


I haven't been able to read any brand of assembly since about 2010, and that was JVM bytecode, not Intel.

I still retained the 10000 foot view, and that mechanical sympathy has definitely helped me with fixing supposedly intractable performance issues with some regularity.

Like a lot of things in education, there's an important distinction between learning something to use it, and learning it to appreciate or inform what you do with what we have now.

Similarly there are processes we try to deploy 24/7 at work that I think would be better off being done for four weeks twice a year just to knock the cobwebs off of how we perform the job the rest of the year.


[flagged]


I read comments like this all the time in all sorts of variants (we don't need math, we don't need CS theory, we don't need regexes, we don't need pointers).

What do you people even do?

Most of my work is with managed languages, but lower level concerns bubble up multiple times per project; you can't write great Python if you can't write decent C.


i never said we don't need it. just that not everyone should be expected to know it and you don't need it except in particular situations.


I'm not sure what type of code you develop but it sure ain't the entirety of the real world.

In my corner of the real world we are quite concerned about the difference between stack and heap and profile our applications to pinpoint any opportunities for optimization in e.g. heap allocations.

Like, most of the world runs on puny chips, and if the chips are not puny, the workloads are still going to eat up battery and contend with all of the other crap running on the end users device.


sorry if i struck a nerve. i maintain that this is a domain specific skillset and that only specific types of coders really need that knowledge.


Carefully read through the points and tell me the author is wrong. Everything written is reasonable and logical.

At a basic level, which of these two practices is more likely to result in you retaining information and learning: reading, writing, and debugging your own code, or copy-pasting someone else's?


I think you're misunderstanding the author's point. They're not saying that AI-assisted coding is inherently bad, but rather that it can lead to a lack of understanding and retention of knowledge if used as a crutch.

The analogy you're making between reading, writing, and debugging your own code versus copy-pasting someone else's is a good one, but it's not a direct comparison to AI-assisted coding. When you copy-paste someone else's code, you're still reading and understanding the code, even if you're not writing it yourself. You're still learning from it, even if it's not through the process of writing it.

With AI-assisted coding, the issue is that the AI is generating code that you may not fully understand, and you're not necessarily learning from it in the same way. You're not reading and understanding the code, you're just accepting it as a solution. And that's where the problem lies.

It's not about whether AI-assisted coding is good or bad, it's about how it's being used. If you're using it as a tool to help you learn and understand the code, then that's one thing. But if you're using it as a crutch to avoid learning and understanding the code, then that's a problem.

So, to answer your question, I think it's more likely that copy-pasting someone else's code would result in you retaining information and learning, simply because you're still reading and understanding the code, even if you're not writing it yourself. But that's not the same as AI-assisted coding, where the AI is generating code that you may not fully understand.


I don’t know where you’ve worked that people read and understand code they’ve copy-pasted, but I envy you. That has not been my experience.


The author may be right, or wrong, we don't know. It may look reasonable and logical, but it needs to come with data.


> Everything written is reasonable and logical

So was Ptolemy's model of the solar system.

That doesn't make it right or valuable.


It's not even well thought-out speculation. It's basically "old man rants at clouds."

I'm pretty negative about the LLM hype, but this is just a low-effort, low-value "back in my day" rant.


Yes, I was hoping for a scientific study.


Some reasons you shouldn't drive a car:

- Erosion of core horse-riding skills

Getting from point A to point B used to be a highly-skilled task, involving a fusion of man and beast working in tandem to accomplish the job. Now, by using a so-called "automobile" (more like "auto-mo-blah", am I right?) we're losing these core skills. Rather than deeply understanding the inner workings of the horse's digestive tract, we're left with only the choice: basic petrol or premium?

- Over-reliance on roads

When driving a car, drivers can quickly reach their destination without understanding the underlying terrain. This leads to what experts (me) call "road dependence", where drivers are too reliant on roads, without checking if the route is the most efficient. There could be a badger path cutting 20 minutes off of your commute!

- Lack of ownership and responsibility

When going from point A to point B, car drivers shift responsibility for the drive to the roads they drive on. But the roads could expose them to rockslides, ice, highway robbers, bank robbers, and dangerous wildlife. They may think "if the road goes through here, it must be safe", rather than do due diligence and thoroughly research the route beforehand.

- Reduced learning opportunities

Getting from point A to point B used to be a highly trial-and-error process that forced you to LEARN THE HARD WAY that certain cliffs are too steep for the average horse. Rather than falling off a cliff repeatedly, road drivers don't learn these lessons at all.

- Narrowed creative riding

When riding a horse, you are beset by constant questions. "Is that cliff safe for my horse to scale", "are those berries safe for my horse to eat", "is that a bee nest in my path or just a lumpy tree branch". These force you to think creatively about your travels. As a road driver, the way is predetermined for you, and you won't be as adaptable if you run into unusual situations.

- Dependency on proprietary engines

All horses are exactly the same, right down to the color and number of hooves! This makes it easy to transfer your expertise from one horse to another. Unfortunately, once you become a car driver, you'll find that the manufacturers put the damn volume knob in a different place on every single model. And there's nothing you can do to change it, because it's proprietary.


Copilot comes free for students through GitHub's Student Developer Pack[0]. I've gotten to try it out, and I've found it to just be a great cheating tool in my classes.

Most assignments done by students are basic problems that have been solved tens of thousands of times and could be found everywhere all over GitHub.

Assignments where you have to write algorithms like bubblesort or binary search are as easy as typing the function signature and then having copilot fill in the rest.

Therefore, using copilot as a student will make you worse at programming, since you are robbed of the fundamental thinking skills that come from solving these problems.

[0] https://education.github.com/pack
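To make the parent's point concrete, here is the classic binary-search assignment as a plain Python sketch (my own example, not from the pack) -- exactly the kind of function a tool will complete from the signature alone:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent.

    This is the canonical assignment the comment above describes: it has
    been solved tens of thousands of times, so Copilot can fill in the
    body from nothing more than the signature.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # midpoint of the remaining window
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1


print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```

The pedagogical value is precisely in working out the `lo <= hi` boundary and the off-by-one adjustments yourself, which is what gets skipped when the body is autocompleted.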


Copilot is useful for autocompleting a glob or writing a simple regex. Anything more complicated and it will often make mistakes. Finding the mistakes in 20 lines of AI-generated code is slower than writing them yourself.
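As a sketch of the task size meant here (my own illustrative example), this is the sort of regex that is trivial to review at a glance, which is what makes autocompleting it low-risk:

```python
import re

# Match an ISO-8601 date like 2024-09-11 and capture its parts.
# Short enough that a reviewer can verify it faster than debugging
# 20 lines of generated logic.
iso_date = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

m = iso_date.search("deployed on 2024-09-11 at noon")
year, month, day = m.groups()
print(year, month, day)  # → 2024 09 11
```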


I think it's important to actually think through problems and read error messages yourself before hammering out a request and throwing it at a coding bot. Copilots may be delaying the point at which newer devs actually learn to do that, which feels like the most damaging aspect for skill progression.

It's been frustrating to request a simple feature or tool and watch interviewees spend hours fighting with an LLM to make it do what they want, instead of just trying it themselves first and correctly picking the spots to use the AI.


If your premise is that the world is a better place when programmers all have regex syntax committed to memory, then sure, I guess AI tools are bad.

Personally, I don't think the aspects of writing code that AI tools help with the most are the important parts. I think AI tools are great at taking out the rote aspects and the glue code so that programmers can concentrate on the core issues and broader structure.


I doubt this was your intent, but regex syntax is actually a great example of why it is important to fully understand what you’re doing [0]. It’s also quite useful; once you know a decent number of the symbols, it’s faster to write than it is to describe to an LLM what you want.

I have no problem with using LLMs to act as rubber ducks, or find alternative ways to do something, or filling in boilerplate. But for the first two, anyway, you really need to already know how to do the task in order to determine if the LLM got it right or not. Or at the very least, know the language well enough to spot problems.

[0]: https://blog.cloudflare.com/details-of-the-cloudflare-outage...
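To illustrate why the linked outage matters, here is a small Python sketch of the failure mode (my own example, not the actual Cloudflare pattern): nested quantifiers like `(a+)+` can backtrack exponentially on inputs that *almost* match, while an equivalent single-quantifier pattern cannot. Spotting this requires exactly the kind of regex understanding the parent is arguing for.

```python
import re

# Nested quantifiers: on a non-matching input of n 'a's followed by 'b',
# the engine tries every way to partition the 'a's between the inner and
# outer '+', which grows exponentially with n. Kept short here so it
# still terminates quickly.
risky = re.compile(r"^(a+)+$")

# Recognizes the same language, but with a single quantifier there is
# nothing to re-partition, so matching stays linear.
safe = re.compile(r"^a+$")

ok = "a" * 10
almost = "a" * 10 + "b"

print(bool(risky.match(ok)), bool(safe.match(ok)))          # → True True
print(bool(risky.match(almost)), bool(safe.match(almost)))  # → False False
```

With `almost` lengthened to 30+ characters, `risky` takes visibly long while `safe` fails instantly; an LLM will happily emit either pattern, and only a reader who knows the backtracking model can tell them apart.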


"Why C is making assembly programmers worse at programming"

"Its going to make you reliant on the [compiler], and you will never be able to do anything that the [compiler] cant already do."

LLMs are here to stay, even if they don't write perfect code -- they are clearly very useful to an increasing number of existing developers and, more importantly, bringing new developers into the art of software creation.


> more importantly, bringing new developers into the art of software creation.

Counterpoint: the field is already chock-full of juniors trying to get a job, and the ones who are actually good at coding are now even more diluted in a sea of shiny nonsense.


A better thesis might be that Copilot is helping programmers who are _already_ worse at programming, which is to say new developers, or those who otherwise aren't very good, are able to use it to pass as more competent than they are.

Copilot saves me a lot of time. It frees me up to think more critically of the code I’m writing.


Replace "code generation" with "stack overflow" and you have essentially the same rant people were making 10+ years ago about that site - even more poignant in that a large amount of copilot's training comes from sites like stack overflow


There’s not much to talk about here. Of course you won’t improve when a third party is producing your code.

That being said, I’m interested to see what happens to LLM effectiveness over time as the amount of LLM-generated code starts infecting training data.


Well, you know the old saying: there was only ever one COBOL program written from scratch. All others were copied from it or from other programs descended from it.


I will happily argue that Copilot, used thoughtfully and responsibly, can make programmers better at programming.

The very, very short version is that it lets programmers move faster and try more things, which helps them learn quicker.

The rate at which I learn new libraries, frameworks and languages has accelerated dramatically over the past two years as I learned to effectively use Copilot and other LLM tools.

I have 20+ years of experience already, so a reasonable question to ask is if that effect is limited to experienced developers.

I can't speak for developers with less experience than myself. My hunch is that they can benefit too, if they deliberately use these tools to help and accelerate their learning instead of just outsourcing to them. But you'd have to ask them, not me.


Treat Copilot (or any other AI system) as a TOOL. Each tool has its purpose and use. But remember YOU are the craftsman...


I also dislike using AI for programming, but for what it's worth, I cannot reconfigure my Neovim Lua settings without AI.


Why Hammers Make Builders Worse at Building


This just means job security for me. We will be the next "Fortran/COBOL Cowboys".


it doesn't make them worse; they just stay at the same level. the tool doesn't change the fact that the person using it doesn't care about the end result.

making systematic mistakes is a character trait; nothing can fix that except the person themselves


While this raises valid concerns, Copilot and friends can actually enhance learning by exposing devs to new patterns and approaches. They still require problem-solving and critical evaluation skills. By handling routine tasks, they free up time for higher-level thinking.

It's just another abstraction layer, like high-level languages were. Responsible use combined with continuous learning can boost productivity without sacrificing knowledge.

The impact differs between experienced devs and beginners. As these tools evolve, we'll likely develop new meta-skills around AI collaboration. Like any tool, it's about how we use it.


this article is just a rehashing of the same tired contrarian takes about AI-assisted coding that we've been hearing for years. 'Programmers will become lazy and reliant on AI' is not a new problem, and it's not like Copilot is somehow uniquely capable of eroding fundamental programming skills.

in reality, Copilot and other LLMs are just tools, and like any tool, they can be used well or poorly. If a programmer is relying on Copilot to do all their thinking for them, then yeah, they're probably not going to learn much. But if they're using it as a starting point to explore new ideas and learn from the code it generates, then that's a different story.

And let's not forget that AI-assisted coding is not a replacement for human judgment and critical thinking. If a programmer is not reviewing and understanding the code they're writing, then that's a problem with their workflow, not with the tool they're using.

I'd love to see some actual data on how Copilot is being used in the wild, rather than just anecdotal evidence and hand-wringing about the 'dangers' of AI-assisted coding. Until then, I'll remain skeptical of this article's claims.


Imo, there's something to be said about newer devs feeling self-pressure or anxiety to use Copilot for everything, which is a massive time sink and slows skill progression.

I also don't think it's a great starting point for exploration. There's a lot more value in learning a couple of different quirky frameworks and reasoning through why they exist than in hacking together something quickly within the bounds of what the LLMs can do.


i personally dont use Copilot so im fairly unbiased here and will admit to some of the negative points you raised:

yes you can absolutely sink time and lose efficiency

i encounter it many times: i have it generate large amounts of code that seemingly does what i asked it to do

but then it takes many revisions to finally arrive at an edge-case-free state

I also end up having to read through the generated code to build a mental model of what the app is doing and to gatekeep future revisions.

But what makes all this worthwhile is that it saves my employer a lot of money.

For example: We've generated a large enterprise react application for roughly $100 USD within a month using all the available code gen tools out there. I'm talking the entire stack: backend, frontend, documentation.

Trying to pull that off with a real team? It would cost 1000x minimum and take roughly 3 quarters, maybe the entire year.

A business won't care about the "art" side of things. They just see they can generate software at almost 99% discount.

I'm very worried about our software engineering jobs beyond 2024. It's going to massively shrink and wages are going to reflect this new cost saving that code gen provides.


Any idiot can write code, that was never the problem.

Building high-quality software, on the other hand, takes a lot of empathy, skill, and experience; none of which is attainable for a computer.

I feel bad for the users of these discounted applications...


seems like an emotional and out-of-date view


It is in fact an emotional view. That doesn't make it out of date, however.

When you've lived through enough piles of barely-working garbage produced by contractors who didn't understand, or coworkers who didn't care, you'll know why they were emotional. You'll also have a suspicion that they are not yet out of date.


it's out of date because the code quality is the best i've seen, much better than what most humans produce

this technology is getting better, not worse. i've already been able to do without a dedicated frontend React developer. i can't be alone.


If you honestly think LLMs write better code than humans I suspect your experience is limited to people who would probably be better off doing something else.


I get his point, but let's be real here: 10 or 20 years from now, how many will actually be programming, the way we know programming today?

In 10 years time, we will have a generation of coders where the majority have never coded anything without the help of some LLM. If that's even the thing, when that time comes.

Some posters here are living in some luddite delusion, if they think the "AI hype" will somehow just blow over, and people will go back to sifting through stack exchange, or simply read the man pages for something.

Sorry, that's just not going to happen. In the past two years we already have junior devs that are dependent on LLMs to work efficiently.

These posts remind me of how older folks (25-30 years ago) warned about search engines, and how they'd make the youngins lazy and uncritical.



