I think that summarizes it well. It's not 10x better, which is what would make the risky bet of vendor lock-in with a VC-backed company worth it. Same issue with Prisma and Next for me.
No, that is exactly what an ORM is, plus mapping it back. Anything beyond that is additional tooling that an ORM doesn't need in order to be an ORM, but which is nonetheless useful.
Pydantic isn’t an ORM, any more than JSON.stringify() and JSON.parse() are an ORM.
Pydantic knows nothing of your database. It’s schema-on-read (a great pattern that pydantic is well suited for), or serialization, or validation, but not an ORM.
Do you then advise using Pydantic for data mapping to/from raw SQL, to avoid a full ORM? My thinking is that you're almost at an ORM with that method, and with tools like SQLModel available I'm unsure what the benefit of the plain-Pydantic approach is.
In one sense you’re right, but (at least in data projects) the goal is a bit different. We’re often reading not only from SQL databases, but also parquet files, CSV, JSON, APIs, piped input from another process’s STDOUT, and so on.
Basically, we don't always know what future data source we may be reading from, and the schema of a source might change, but we can define what we expect on the receiving end (in pydantic) and have it fail loudly when our assumptions break.
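The fail-loudly pattern described above can be sketched with plain Pydantic (assuming the Pydantic v2 API; the `Reading` model and its fields are made up for illustration):

```python
from pydantic import BaseModel, ValidationError

class Reading(BaseModel):
    sensor_id: int
    value: float

# A row from any source: csv.DictReader, a JSON API, piped STDIN, etc.
good_row = {"sensor_id": "7", "value": "3.14"}
reading = Reading.model_validate(good_row)  # lax mode coerces "7" -> 7
print(reading.sensor_id, reading.value)

# Upstream schema drifted: fail loudly instead of propagating bad data
bad_row = {"sensor_id": "seven", "value": "3.14"}
try:
    Reading.model_validate(bad_row)
except ValidationError as exc:
    print(f"assumptions broken: {exc.error_count()} validation error(s)")
```

The model says nothing about where the row came from, which is the point: the same contract guards every source.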
The nice thing about SQLModel is that its models are still Pydantic models, so you can use them with custom data mappers for parquet, CSV, JSON, etc. I think you make a good point about keeping the data model pure so you're not dependent on a data source. But I think SQLModel largely accomplishes that, and so does SQLAlchemy's declarative dataclass mapping (though I've not used the latter).
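The plain-Pydantic-over-raw-SQL approach the thread is weighing can be sketched like this (assuming Pydantic v2 and the stdlib sqlite3 module; the `users` table and `User` model are hypothetical):

```python
import sqlite3
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows gain mapping-style access
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace')")

# Raw SQL in, validated Pydantic models out: no ORM layer in between
rows = conn.execute("SELECT id, name FROM users").fetchall()
users = [User.model_validate(dict(row)) for row in rows]
print(users[0].name)
```

This is the "O" and the "M" without the "R" being managed for you: you keep writing SQL, and Pydantic only guards the boundary.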
Yeah, it's funny they even mention ORM while at the same time offering something that has nothing to do with ORMs at all. Yes, many ORM libraries offer additional tools like migrations and a query builder, but that's not the point of an ORM. An ORM maps relational data to your OOP data structures. They've completely misused the term, which is kind of surprising.
Well said. Please never be silent about this. It's important to educate people on what an ORM is, what it means, and especially what it doesn't mean, particularly in times when VC-backed companies misinform and manipulate people about it, as Prisma is doing.
That is the real reason we'll get a less feature-rich TypeScript in the future, and why Node doesn't support full TypeScript: they want the syntax to be supportable by browsers.
> I think this is a great step in the right direction by node
I think it's the opposite. It will be a net negative, since people will now run TS by default without type checking, wasting so much time chasing weird runtime errors, only to end up running the full-blown tsc type check again anyway. They will also write very different TS now, trying to work around the limitations and avoiding arguably very useful features like enums, constructor parameter properties, etc. This has real negative effects on your codebase if you rely on these, just because Node chose to support only a subset.
It's interesting to see the strategy now, and to see people even gaslighting others into believing that no type checks and fewer features is a good thing. All because of one root cause: tsc being extremely slow.
Yes, they regret them because they hinder adoption. Why? Because nobody chose to embed tsc, with all its features, in their runtime, since tsc is extremely slow.
They know they can skyrocket adoption by limiting the language. That's the reason they regret it. This is just a strategy to increase adoption, not a judgment that these are bad features. They are in fact very useful, and you should not stop using them just because your favorite runtime decided to take the easy way and support only a subset of TS via type stripping. You should switch runtimes instead of compromising your codebase.
There is always a compile step (JS -> bytecode -> machine code); the question is only whether it is visible to you. They could have made it totally transparent by fully supporting TS, including type checking, under the hood, but decided not to. There is nothing inherently great about fewer compile steps if you are not even aware of them. Look at how many compilation and optimization steps V8 has: you don't care, because you don't see them. The one real problem with TS is that you will always see its compile step, because it is slow.
I think running TS without type checks is almost entirely pointless.
The point is I don't have to deal with the index.js blob that gets produced by running the compile step myself, and worse yet, the source maps. It's significantly fewer steps, so pretty helpful I'd say.
This is misleading. It is not transpiling TS to JS; it is transpiling a subset of TS to JS. If my normal TS code cannot be "executed" by Node, then by definition it is not executing TS but something else. If you are fine with Node supporting and "executing" only a subset of TS and lacking useful features, that's fine. But don't tell people it is executing TypeScript. That's like me saying my rudimentary C++ compiler supports C++ while in reality supporting only 50% of it. People would be pissed when they figured that out trying to run it on their codebase.