We've banned this account for using HN primarily for ideological battle. That's not allowed here, regardless of which ideology you're battling for, because it destroys what the site is supposed to be for.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
It’s so amazing to have the ability to extend your database with custom methods. It keeps your data model really well organized across your infrastructure.
An API makes sense for this case when you won't be throwing away a lot of the raw data in favor of the processed/transformed data.
But doing a lot of these operations on-server makes sense when there's a significant volume of highly parallelizable transformations which need to be done on the data before it's usable.
Of course the best solution is likely to be a happy medium between the two, where simple low-level transformations are done on-server and the rest of the data preparation is done as the data is transferred to the client.
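A minimal sketch of that happy medium, using SQLite as a stand-in for the server-side database (the table and column names are made up for illustration): the low-level filter and aggregation are pushed into SQL, and the application-specific shaping happens as the rows stream to the client.

```python
import sqlite3

# In-memory SQLite stands in for the server-side database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("a", 1.0), ("a", 3.0), ("b", 10.0), ("b", 30.0)],
)

# Simple low-level transformation done on-server: group + average.
cursor = conn.execute(
    "SELECT sensor, AVG(value) FROM readings GROUP BY sensor"
)

# Application-specific data preparation done client-side,
# as the (already reduced) rows are transferred.
report = {sensor: f"{avg:.1f}" for sensor, avg in cursor}
print(report)  # {'a': '2.0', 'b': '20.0'}
```

The database only ships back one row per group, while the formatting, which is likely to change with the application, stays out of the schema.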
Data and its structure outlives the developer, and often the application.
Tightly coupling a lot of application-specific compute with how the data is stored and accessed sets you up for even more difficulty when you need to debug, scale, migrate storage/compute, or evolve your application faster or more radically than your data organisation.
The database is usually the last thing to scale, so if you put a bunch of computational load on it beyond queries, you've set yourself up for scaling/sharding sooner rather than later.
Not always! If the computation involves math over a set of records, Postgres is great for that. Having the operation run inside the DB reduces connection-pool pressure at the application level.
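To make the trade-off concrete, here is a small sketch (SQLite standing in for Postgres, with an invented `orders` table): computing a mean in the application means pulling every row over a connection, while doing the math inside the database returns a single row.

```python
import sqlite3
import statistics

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?)",
                 [(v,) for v in (10.0, 20.0, 30.0)])

# Application-level: every record crosses the connection,
# which holds it open longer and ties up the pool.
rows = [amount for (amount,) in conn.execute("SELECT amount FROM orders")]
app_side_mean = statistics.mean(rows)  # 3 rows transferred

# In-database: the set math happens where the data lives;
# exactly one row comes back.
(db_side_mean,) = conn.execute("SELECT AVG(amount) FROM orders").fetchone()

assert app_side_mean == db_side_mean == 20.0
```

The same idea scales up in Postgres with aggregates, window functions, or a stored procedure; the point is simply that set-oriented math is what the database engine is built for.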
I remember a friend from a previous job telling me they loaded the database with stored procedures because it ran on the most powerful server when the product launched.
And then the database server hit its capacity very quickly.
Reminds me of why I learned to program in PostScript - the PostScript printer was, by far, the fastest computer I had access to - also it had a ton of memory and its very own SCSI hard disk.
Plotting a Mandelbrot on the Mac would take a lot longer, even in C, than just making the printer do it.
Usually overnight, because the program took a couple hours to run most of the time.
Thanks, love the post. I agree with everything you said. Recently I interviewed with a company, and they gave me three requirements for a versioning tool I was to design conceptually. I started with the data models and then the backend/frontend communication. I wasn't told who this versioning tool was for, how many people would use it, or other important domain-related things, so I said I'd sketch a simple solution that would work fine for a small group of people but would have to be refactored if we wanted to scale up and support thousands or more simultaneous connections. I was told "sure, go for it". Once I finished, I said I was confident that a versioning tool has far more complexity and requirements than those three sentences described, and that 60 minutes to design such a system in a stable way is optimistic.
A week after the interview I was told they didn't want to continue with me because I hadn't asked enough questions and my solution wouldn't work if millions of people used the system at the same time. I think I dodged a bullet.