That is correct. Some historical context is much appreciated in this thread.
> tl;dr on Section 174, Research & Experimentation costs went from being fully deductible in the year incurred to being deductible over a 5 year period.
Larger tax bills and a tightening on what roles/activities are deductible as R&E are likely what OP is pointing at with his comment.
To the best of my (admittedly not inside-baseball) research, the Section 174 changes were simply one part of a package of revenue-generating measures to offset the large tax cuts in the broader tax act they were a part of.
The changes came from the Tax Cuts & Jobs Act of 2017, which was introduced in the House of Representatives by Congressman Kevin Brady (R-TX). The bill passed both houses of Congress along party lines, and then-President Trump signed it into law. The Section 174 changes did not take effect until tax years beginning after December 31, 2021 (i.e., the 2022 tax year).
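As a rough illustration of why tax bills got larger, here's a sketch with hypothetical figures (the actual rules are more involved - e.g., a half-year convention in year one and a 15-year period for foreign research):

```python
# Hypothetical: a company with $1,000,000 of domestic R&E costs in one year.
costs = 1_000_000

# Old rule: the full amount is deductible in the year incurred.
old_year1_deduction = costs

# Amended Section 174: costs are amortized over 5 years, starting at the
# midpoint of the year incurred, so only a half share (10%) lands in year 1.
amortization_years = 5
new_year1_deduction = costs / amortization_years / 2

print(old_year1_deduction, new_year1_deduction)  # 1000000 100000.0
```

Same total deduction eventually, but far less of it up front - hence the larger near-term tax bills.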
Which is funny, because US industrial investment was on a tear pre-tariff as companies near-/on-shored at historic rates. Not only were tariffs not needed, they've effectively shut down their intended goal.
This chart is a pretty astounding look at how successful Bidenomics was at achieving this and how catastrophic Trumponomics is already turning out to be:
Credit for the prior on-shoring should go to Covid-19 and the Chinese government's heavy-handed response to it, not any US policy or administration.
Talk to anyone in industry and they'll say "we on/near shored because Covid supply chain problems taught us a lesson" or something along those lines. Those investments were just starting to bear fruit in the past few years.
Edit: And before anyone tries to put words in my mouth, this is neither an endorsement of "Trumpenomics" nor a dispute of the prior commenter's statement about them.
The difference between on- and near-shoring is pretty important, and I'm going to bet the trillion dollars in incentives played a far, far bigger role than the delta in people's resilience estimations between putting a factory in Mexico vs the United States.
Credit should go to all factors, but sorry it's foolish to think orchestration of a trillion dollars into exactly these sectors wouldn't play a huge role.
>The difference between on- and near-shoring is pretty important,
Of course. I don't think many people saw it coming that we wouldn't be on friendly trade terms with Mexico and Canada.
> and I'm going to bet the trillion dollars in incentives played a far, far bigger role than the delta in people's resilience estimations
Which have you seen companies respond more swiftly to, opportunities to make money, or disruptions in their existing ways of making money? Which gets the CEO on the phone faster, a potential business deal or a prod outage?
Just about everyone experienced the latter during Covid. "Yeah we'd love to do all the electrical for your covid construction boom fueled McMansion development but switchgear is back ordered 18mo, sorry", and so on and so on across many industries. And they're all real salty about it.
I'm sure the incentives added fuel to the fire, but the way the Chinese economy stayed disrupted for longer really pained a lot of people in the US who depend on stuff from there to make money here.
Depends on if the outage impacts the entire industry/sector or just his company. If the former, it's relatively fine because stock performance is going to be similar to competitors and investors won't be asking too many questions. Might be some wailing and gnashing of teeth but not too much action in the form of spending money on the problem.
In my estimation, not many companies took lessons from the COVID supply chain crises. If they did, the lesson was simply to wait things out. Everyone was having similar problems, and you didn't want to be the only one in your sector to have re-shored production at triple the expense while your competitors simply spun back up once the Chinese factories came back online.
If it were such a compelling thing that was already happening, we'd be seeing a lot more 'low value base component' manufacturing coming back, such as electronic components like resistors. So far, from my basic understanding of the subject, it's all stuff quite high up the value chain instead. At best some things got near-shored or moved around, with the raw inputs seemingly still coming from China in the end either way.
It would certainly be interesting to find some actual data here though.
The IRA added huge incentives towards factory construction in certain areas. Biden also maintained strategic tariffs on certain industries (as opposed to the blanket tariffs Trump is using.) I understand that people with specific political preferences would like to believe these incentives didn’t work, but there’s every reason to believe the simpler explanation: incentives like this do work. We should be learning from these results instead of denying them.
Companies started shifting plans as soon as the election result was decided in November. Many started stockpiling Chinese imports then. Additionally many have paused Capex due to the economic uncertainty:
The tariffs were not a surprise after the election. The President said he was going to enact crushing tariffs, so this was effectively predictable since early November.
I hate the fact that CI peaked with Jenkins. I hate Jenkins, I hate Groovy, but for every company I've worked for there's been a 6-year-uptime Jenkins instance casually holding up the entire company.
It peaked with Jenkins? I'm curious which CI platforms you've used.
I swear by TeamCity. It doesn't seem to have any of these problems other people are facing with GitHub Actions. You can configure it with a GUI, or in XML, or using a type safe Kotlin DSL. These all actually interact so you can 'patch' a config via the GUI even if the system is configured via code, and TeamCity knows how to store config in a git repository and make commits when changes are made, which is great for quick things where it's not worth looking up the DSL docs or for experimentation.
The UI is clean and intuitive. It has all the features you'd need. It scales. It isn't riddled with insecure patterns like GH Actions is.
I think people just hate CI set up by other people. I used TeamCity in a job a few years back and I absolutely hated it, however I imagine a lot of my hatred was the way it was set up.
CI is just the thing no one wants to deal with, yet everyone wants to just work. And like any code or process, you need engineering to make it good. And like any project, you can't just blame bad tools for crappy results.
I think people just hate enterprise CI/CD. Setting up a pipeline for your own project isn't this hard and provides immediate value. But then you start getting additional requirements like "no touching your CI/CD code", "no using plugins except A, B or C" and "deployment must be integrated with Rally/Microfocus/another corporate change management system". Suddenly your pipelines become weird and brittle and feel like busywork.
It seems to inspire strong feelings. I set it up at a previous company and at some point after I left they replaced it with Jenkins. However, nobody could explain to me why or what problems they thought they were solving. The feedback was the sort of thing you're saying now: a dislike that can't be articulated.
Whereas, I could articulate why I didn't like Jenkins just fine :)
I would feel that way, but I've had the misfortune to work with a wide-open CI system where any developer could make changes, and one guy did. The locked-down system prevents me from making some changes I want, but in return my builds don't suddenly start failing because some CI option was turned on for everyone.
The people who admin our CI system do a good job, so it doesn't break (well, it does all the time, but on network-type errors, not configuration - that's IT's fault, not their fault).
The things I want to change are things I do in the build system, so they're checked in along with previous versions when we need to build them (we're embedded, where field failure is expensive, so there are typically branches for the current release, the next release, and head). This also means anything that can fail on CI can fail on my local system (unless it depends on something like the number of cores on the machine running the build).
While the details can be slightly different, how we have CI is how it should be. Most developers should have better things to do than worry about how to configure CI.
In our CI we do a lot of clever stuff like posting comments to GitHub PRs, sending messages on Slack, etc. Even though those are useful things, it makes the CI a bit harder to change and test. Making it do more things also makes it a bit of a black box.
TeamCity's "config as code" feels a bit like an afterthought to me. It's very Windows-style, where PowerShell got bolted on and you're still fighting a bit of an upstream current getting clickops users out of old habits. (I've also only experienced it at .NET-stack jobs, though, so I might be a bit biased :-)
(I don't recall _loving_ it, though I don't have as many bad memories of it as I do for VSTS/TFS, GitLab, GH Actions, Jenkins Groovyfiles, ...)
The quotes around "config as code" are necessary unfortunately, because TeamCity only allows minimal config changes. The UI will always show the configuration from the main branch and if you remove or add steps it might not work.
We needed two more or less completely different configurations for old and new versions of the same software (think hotfixes for past releases), but TeamCity can't handle this scenario at all. So now we have duplicated configuration and some hacky version checks that cancel incompatible builds.
Maybe their new Pipeline stuff fixes some of these shortcomings.
Try doing a clean git clone in TeamCity. Nope, not even with the plugins that claim “clean clone” capability. You should be confident that CI can build/run/test an app with a clean starting point. If the CI forces a cached state on an agent that you can’t clear… TeamCity just does it wrong.
You just check the "delete files in checkout directory" box in the run screen. Are you thinking of something different? I've never had trouble doing a clean clone.
It's been a while since I used it, but I do remember that it doesn't do a clean checkout and you can't force it to. It leaves artifacts on the agent that can interfere with subsequent builds. I assume they do it for speed, but it can affect the reliability of builds.
> You can configure it with a GUI, or in XML, or using a type safe Kotlin DSL.
This is making me realize I want a CI with as few features as possible. If I'm going to spend months of my life debugging this thing I want as few corners to check as I can manage.
I've never had to spend time debugging TeamCity setups. It's very transparent and easy to understand (to me, at least).
I tend to stick with the GUI, because if you're doing JVM-style work the complexity and the tasks are all in the build you can run locally; the CI system is more about task scheduling, so it's not that hard to configure. But being able to migrate from GUI to code when the setup becomes complex enough to justify it is a very nice thing.
Jenkins is cron with bells and whistles. The result is a pile of plugins to capture all the dimensions of complexity you'd otherwise bury in the shell script, but want easier to point and click at. I'll hate on Jenkins with the rest of them, but entropy is gonna grow, and Jenkins isn't gonna say "no, you can't do that here". I deal with multiple tools where, if I made fun of how low their Jenkins plugin install stats are, you'd know exactly where I work. Once I've calmed down from working on CI, I can appreciate Jenkins' attempt to manage all of it.
Any CI product play has to differentiate in a way that makes you dependent on them. Sure it can be superficially nicer when staying inside the guard rails, but in the age of docker why has the number of ways I configure running boring shell scripts gone UP? Because they need me unable to use a lunch break to say "fuck you I don't need the integrations you reserve exclusively for your CI" and port all the jobs back to cron.
And the lesson is that you want a simple UI to launch shell scripts, maybe with complex triggers but probably not.
If you make anything more than that, your CI will fail. And you can do that with Jenkins, so the people that did it saw it work. (But Jenkins can do so much more, which is the entire reason so many people have nightmares just from hearing that name.)
Well, I got tired of Groovy and found out that using Jenkins with plain bash under source control is just right for us. It runs everywhere, is very fast to test/develop, and it's all easy to change and improve.
We build Docker images mostly so ymmv.
I have a "port to github actions" ticket in the backlog but I think we're not going to go down that road now.
Yeah, I've come back around to this: you do not want "end users" writing Groovy, because the tooling around it is horrible.
You'll have to explain the weird CPS transformations, you'll probably end up reading the Jenkins plugins' code, and there's nothing fun down this path.
So Microsoft's definition of winning is being the host for AI inference products/services. Startups make useful AI products; MSFT collects a tax from them and builds ever more data centers.
I haven't thought too critically yet about Meta's strategy here, but I'd like to give it a shot now:
* The release/leak of Llama earlier this year shifted the battleground. Open source junkies took it and started optimizing to a point AI researchers thought impossible. (Or were unincentivized to try)
* That optimization push can be seen as an end-run on a Meta competitor being the ultimate tax authority. Just like getting DOOM to run on a calculator, someone will do the same with LLM inference.
Is Meta's hope here that the open source community will fight their FAANG competitors as some kind of proxy?
I can't see the open source community ever trusting Meta, the FOSS crowd knows how to hold a grudge and Meta is antithetical to their core ideals. They'll still use the stuff Meta releases though.
I just don't see a clear path to:
* How Meta AI strategy makes money for Meta
* How Meta AI strategy funnels devs/customers into its Meta-verse
Sounds like the classic "commoditize your complement." Meta benefits from AI capabilities but doesn't need to hold a monopoly on the tech. They just benefit from advances, so they can work with the open source community to achieve this.
Tech stocks trade at mad p/e ratios compared to other companies because investors are imagining a future where the company's revenue keeps going up and up.
One of the CEO's many jobs is to ensure investors keep fantasising. There doesn't have to be revenue today, you've just got to be at the forefront of the next big thing.
So I assume the strategy here is basically: Release models -> Lots of buzz in tech circles because unlike google's stuff people can actually use the things -> Investors see Facebook is at the forefront of the hottest current trend -> Stock price goes up.
At the same time, maybe they get a model that's good at content moderation. And maybe it helps them hire the top ML experts, and you can put 60% of them onto maximising ad revenue.
And assuming FB was training the model anyway, and isn't planning to become a cloud services provider selling the model - giving it away doesn't really cost them all that much.
> * How Meta AI strategy funnels devs/customers into its Meta-verse
The metaverse has failed to excite investors, it's dead. But in a great bit of luck for Zuck, something much better has shown up at just the right time - cutting edge ML results.
Remember that Meta had launched a chatbot for summarizing academic journals, including medical research, about two weeks before ChatGPT. They strongly indicated it was an experiment but the critics chewed it up so hard that Meta took it down within a few days.
I think they realized that being a direct competitor to ChatGPT has a very low chance of traction, but there are many adjacent fields worth pursuing. Think whatever you will about the business (my account has been abandoned for years), but there are still many intelligent and motivated people working there.
1. Capital cost of AI only feasible by FAANG level players.
2. For Microsoft et. al., "winning" means being the defacto host for AI products- own the marketplace AI services are run on.
3. Humans are only going to provide monthly recurring revenue to products that provide value.
---
Jippity is not my friend, it's a tool I use to do knowledge work faster. Google Photos isn't trying to trick me, it's providing a magic eraser so I keep buying Pixel phones.
High inference cost means MSFT charges a high tax through Azure.
That high cost means services running AI inference are going to require a ton of revenue in a highly competitive market.
Value-add services will outcompete scams/low-value services.
And we're seeing the result in real-time. Stupid shit doers have been replaced with hopefully-less-stupid-shit-doers.
It's a real shame too, because this is a clear loss for the AI Alignment crowd.
I'm on the fence about the whole alignment thing, but at least there is a strong moral compass in the field- especially compared to something like crypto.
I feel we hold up single-observability-solution as the Holy Grail, and I can see the argument for it- one place to understand the health of your services.
But I've also been in terrible vendor lock-in situations, being bent over the barrel because switching to a better solution is so damn expensive.
At least now with OTel you have an open standard that allows you to switch easier, but even then I'd rather have 2 solutions that meet my exact observability requirements than a single solution that does everything OKish.
Biased as a founder in the space [1] but I think with OpenTelemetry + OSS extensible observability tooling, the holy grail of one tool is more realizable than ever.
Vendor lock-in with OTel is hopefully a thing of the past. And now that more observability solutions are going open source, hopefully it's no longer true that one tool must be mediocre across all use cases (DD and the like are inherently limited by their own engineering teams, whereas OSS products can take community/customer contributions that grow the surface area over time on top of the core maintainers' work).
I think that OpenTelemetry will solve this problem of vendor lock-in. I am a founder building in this space[1], and we see many of our users switching to OpenTelemetry as it provides an easy way to switch backends if needed in the future.
At SigNoz, we have metrics, traces, and logs in a single application, which helps you correlate across signals much more easily - and being natively based on OpenTelemetry makes this correlation easier still, as it leverages the standard data format.
This might take some time, though, as many teams have proprietary SDKs in their code, which are not easy to rip out. OpenTelemetry auto-instrumentation[2] makes it much easier, and I think that's the path people will follow to get started.
You can switch the backend destination of metrics/traces/logs, sure - but all your dashboards, alerts, and potentially legacy data still need to be migrated. Drastically better than before, when instrumentation and agents were custom for each backend, but there are still hurdles.
I am sure that it is just the initial prompt leaking. Claude is being told to be ethical and non-sexual, most LLMs have similar instructions, but usually, they are engineered in such a way that they don't appear in the answer. Not so much for Claude.
Now Trump's second round fixes it, but the fix expires in the next (presumably Democratic) administration.