I have deep wounds from pre-ChatGPT times of trying to manually figure out libraries. As in tooling around each library, error, etc., then recompiling into various places until things 'worked.' I'm appreciative of the experience, though now I think with LLMs the problem could be solved with a few simple queries.
So I guess you're right: it requires effort. But I was a 10x engineer in the wrong direction and never want to experience that again ;)
What works for me is figuring out the platforms I need to support and creating a build pipeline that can handle those requirements. If you join a project, I don't want you playing around with libraries on your local machine. If you are, I consider that a failure of the build script/makefile/docker compose file/etc. First, it's lost productivity; second, you might get it wrong and produce misleading results, causing even more lost productivity.
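To make that concrete, here's a minimal sketch of what a compose-based setup like that might look like. Everything here (the service name, mount path, and `make test` entry point) is hypothetical, just to illustrate the idea that the container, not the contributor's machine, owns the library versions:

```yaml
# docker-compose.yml (hypothetical layout)
services:
  app:
    build: .            # the image build pins every library version; nothing is installed locally
    volumes:
      - .:/workspace    # mount the source so host-side edits are visible inside the container
    working_dir: /workspace
    command: make test  # single entry point; if this fails, the fix belongs in the Dockerfile, not on your laptop
```

The point is that `docker compose up` becomes the whole onboarding step: if it breaks, that's a bug in the build definition to fix for everyone, not something each new contributor debugs alone.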
A lot of places do this nonsense "spend hours configuring your machine" method for projects. The only reason I can think of for doing that is "job security," and who wants to keep a job with horrible tooling?
That's definitely working smarter. I'm coming around to understanding how all the tooling works - the lost weeks learning the basics happen less often now.
In a way, that time was a chapter in 'job security.' Ran out of money, had to let everyone go, and picked up the pieces to fight another day (self-taught under threat of survival). By the grace of God I'm a reformed 'idea-guy' and am now a builder.