Hacker News | wallace01's comments

Unfortunately the ones who could do something about that are the ones who do the same shit


That's not quite right; we significantly outnumber them.


[flagged]


We've banned this account for using HN primarily for ideological battle. That's not allowed here, regardless of which ideology you're battling for, because it destroys what the site is supposed to be for.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


Enjoyed that read, but to be honest I've always had mixed feelings about PSQL extensions. Logic in a database is wrong.


It’s so amazing to have the ability to enhance your database with custom methods. It keeps your data model really well organized across your infrastructure.


That's the job of an API IMO.


An API makes sense for this case when you won't be throwing away a lot of the raw data in favor of the processed/transformed data.

But doing a lot of these operations on-server makes sense when there's a significant volume of highly parallelizable transformations which need to be done on the data before it's usable.

Of course the best solution is likely to be a happy medium between the two, where simple low-level transformations are done on-server and the rest of the data preparation is done as the data is transferred to the client.
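To make the trade-off concrete, here is a minimal sketch (the `events` table and its schema are hypothetical, and SQLite stands in for any SQL database) contrasting pulling raw rows to the client with letting the database do the reduction:

```python
import sqlite3

# Hypothetical events table; SQLite stands in for any SQL database here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, bytes INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(u, b) for u in range(3) for b in (100, 200, 300)])

# Client-side: every raw row crosses the wire, then the app aggregates.
rows = conn.execute("SELECT user_id, bytes FROM events").fetchall()
totals = {}
for user_id, nbytes in rows:
    totals[user_id] = totals.get(user_id, 0) + nbytes

# Server-side: the database does the reduction; only one row per user
# crosses the wire.
agg = dict(conn.execute(
    "SELECT user_id, SUM(bytes) FROM events GROUP BY user_id").fetchall())

assert totals == agg  # same answer, very different transfer volume
```

With nine rows the difference is invisible; with billions, shipping only the grouped result is the whole point of doing the work on-server.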


Not in data processing. There are cases where you have so much data that it's better to bring the computation to the data rather than the other way around.


Having dealt with the fallout of a database that’s had hundreds of stored procedures and custom code stapled into it: never again.

It was so hard to debug, so hard to safely maintain those procs, and a general nightmare.


Do what thou wilt shall be the whole of the Law


Can you expand on why you think it is wrong or problematic?


Data and its structure outlive the developer, and often the application.

Tightly coupling a lot of application-specific compute with how the data is stored and accessed sets you up for even more difficulty when you need to debug, scale, migrate storage/compute, or evolve your application faster or more radically than your data organisation.


The database is usually the last thing that scales, so if you put a bunch of computational load on it beyond queries, you've set yourself up for scaling/sharding sooner rather than later.


Not always! If the computation involves math over a set of records, then Postgres is great for that. Having the operation inside the db reduces connection-pool pressure at the application level.
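As a minimal sketch of "math over a set of records" done inside the database (the `readings` table is hypothetical, and SQLite stands in for Postgres here), the per-group mean and population variance can be computed in a single query, so the connection is held only for one round trip:

```python
import sqlite3

# Hypothetical readings table; SQLite stands in for Postgres here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("a", 1.0), ("a", 3.0), ("b", 10.0), ("b", 20.0)])

# One round trip: mean and population variance per sensor are computed
# in SQL, so the connection goes back to the pool as soon as it returns.
stats = {sensor: (mean, var) for sensor, mean, var in conn.execute(
    "SELECT sensor, AVG(value), "
    "       AVG(value * value) - AVG(value) * AVG(value) "
    "FROM readings GROUP BY sensor")}
# stats == {"a": (2.0, 1.0), "b": (15.0, 25.0)}
```

The alternative, fetching every row and doing the arithmetic in the application, keeps a pooled connection busy for the whole transfer while also moving far more data.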


> you've set yourself up for scaling/sharding sooner than later.

OTOH, if you can pre-process the data (or shards) in the server (or servers), you won't need to move it across the network for processing.

Hard problems are hard and one of the reasons databases outlive their systems and developers is because moving that data is a Royal Pain.


I remember a friend telling me that at a previous job they loaded the database with stored procedures because it ran on the most powerful server when the product launched. And then the database server hit its capacity very quickly.


Reminds me of why I learned to program in PostScript - the PostScript printer was, by far, the fastest computer I had access to - also it had a ton of memory and its very own SCSI hard disk.

Plotting a Mandelbrot on the Mac would take a lot longer, even in C, than just making the printer do it.

Usually overnight, because the program took a couple hours to run most of the time.


Thanks, love the post. I agree with everything you said. Recently I interviewed at a company, and they gave me three requirements for a versioning tool I should design conceptually. I started with the data models and then the backend/frontend communication. I wasn't told who the versioning tool was for, how many people would use it, or other important domain-related things, so I said I'd sketch a simple solution that would work fine for a small group of people but would have to be refactored if we wanted to scale up and support thousands or more simultaneous connections. I was told "sure, go for it". Once I finished, I said I was confident that a versioning tool has far more complexity and requirements than those three sentences described initially, and that 60 minutes to design such a system in a stable way is optimistic.

A week after the interview I was told they didn't want to continue with me because I hadn't asked enough questions and my solution wouldn't work if millions of people used the system at the same time. I think I dodged a bullet.


How come all the players in the Pandora Papers are from every country but the US?


An excellent question.

