> So, when you model data "correctly" and turn "2026-02-10 12:00" (or better yet, "10/02/2026 12:00") into a "correct" DateTime object, you are making a hell lot of assumptions, and some of them, I assure you, are wrong.
I think that's the benefit of strong typing: when you find an assumption is wrong, you fix it in a single place (in this example, the DateTime object).
If your datetime values are stored as strings everywhere in your code:
a) You are going to have a bad day trying to fix a broken assumption in every place storing/using a datetime, and
b) Your wrong assumptions are still baked in, except now you don't have a single place to fix it.
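To make the "fix it in a single place" point concrete, here is a minimal Python sketch (the class and format are illustrative, not from anyone's actual codebase): every datetime enters the system through one wrapper type, so a wrong parsing assumption lives in exactly one method.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class EventTime:
    """Hypothetical wrapper: all datetimes in the system go through this type."""
    value: datetime

    @classmethod
    def parse(cls, raw: str) -> "EventTime":
        # The assumption is baked in here, and ONLY here: inputs are
        # "YYYY-MM-DD HH:MM" in UTC. If that turns out to be wrong
        # (say, inputs were actually local time), only this method changes.
        dt = datetime.strptime(raw, "%Y-%m-%d %H:%M")
        return cls(dt.replace(tzinfo=timezone.utc))

t = EventTime.parse("2026-02-10 12:00")
print(t.value.isoformat())  # 2026-02-10T12:00:00+00:00
```

If datetimes were bare strings passed around everywhere, the same wrong assumption would be re-made at every call site instead.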
First of all, you are imagining a strawman situation where that datetime really is encoded and decoded all across the codebase. Not keeping your code DRY is an entirely different problem, and it doesn't have to happen with this approach any more than if you used a DateTime. I don't mean that theoretically: I have actually worked with codebases where this approach was taken, and all was fine. You still have one class that works with datetimes; it's just that it mostly contains functions of type str → str, and for the rest of the system a datetime is a MySQL-format datetime, i.e. a string.

And the point is that your system doesn't make any assumptions about that string unless really needed, so you always preserve the original string (which may be completely invalid gibberish for all you care). While some auxiliary processes might break, you never lose or destroy the original data you received, usually from some very important 3rd-party system that doesn't care about us, so you cannot really break on input: you must do your best to guess what that input means, and drop whatever you couldn't process yourself into some queue for human processing.
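The approach described above can be sketched in a few lines of Python. All names here are illustrative (the str → str helper, the review queue), not from any real codebase: parsing is used only to validate or compute, the system keeps storing the original string, and unparseable input goes to a human queue instead of being rejected.

```python
from datetime import datetime, timedelta
from typing import Optional

MYSQL_FMT = "%Y-%m-%d %H:%M:%S"

def add_days_mysql(dt: str, days: int) -> str:
    """str -> str helper: shift a MySQL-format datetime where really needed."""
    parsed = datetime.strptime(dt, MYSQL_FMT)
    return (parsed + timedelta(days=days)).strftime(MYSQL_FMT)

# Queue for input we couldn't make sense of; a human looks at it later.
manual_review_queue: list[str] = []

def ingest(raw: str) -> Optional[str]:
    """Never break on input: keep the original string, queue gibberish."""
    try:
        datetime.strptime(raw, MYSQL_FMT)  # validate only; we do NOT keep the object
        return raw                          # the system stores the original string
    except ValueError:
        manual_review_queue.append(raw)     # never lose or destroy received data
        return None

print(add_days_mysql("2026-02-10 12:00:00", 1))   # 2026-02-11 12:00:00
print(ingest("not a date"), manual_review_queue)  # None ['not a date']
```

Note that `ingest` returns the original `raw` untouched even on success; parsing is a check, not a transformation.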
And second, more specific to this particular example: when we say "DateTime object" we usually mean "your programming language's stdlib DateTime object", or at least "some popular library's DateTime object", not your home-baked DateTime object. And I've yet to see a language where this object makes only correct assumptions about real-life datetimes (at least as far as my own current knowledge of datetimes goes, which almost certainly still isn't complete!). And you'd think datetimes are trivial compared to the rest of the objects in our systems. I mean, seriously, it's annoying, but I have to make working software somehow, despite the backbone of all our software being just shit, and not relying on that shit more than I need to is a good rule to follow. Sure, I can use whatever broken DateTime objects when correctness is not that important (they still work for maybe 99% of use-cases), but when correctness is important, I'd rather rely on a string (maybe wrapped as NewType('SpecialDate', str)) that I know won't modify itself than on a stdlib DateTime object.
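The `NewType` wrapper mentioned above works like this (the `store` function is just a hypothetical consumer for illustration): it is a type-checker-level label on `str`, so at runtime the value *is* the original string, and nothing can normalize, round, or "helpfully" convert it the way a DateTime object might.

```python
from typing import NewType

# A type-checker-only distinction: mypy treats SpecialDate as its own type,
# but at runtime SpecialDate("...") returns the plain str unchanged.
SpecialDate = NewType("SpecialDate", str)

def store(d: SpecialDate) -> None:
    """Hypothetical consumer that only accepts labeled datetime strings."""
    pass

raw = SpecialDate("2026-02-10 12:00")
assert raw == "2026-02-10 12:00"   # byte-for-byte the data we received
assert isinstance(raw, str)        # runtime: just a str, zero overhead

store(raw)                 # OK for the type checker
# store("2026-02-10 12:00")  # mypy would flag this: plain str is not SpecialDate
```

So you get the single-choke-point benefit of a distinct type without handing the value to a DateTime implementation whose assumptions you can't fully trust.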