Location: EST
Remote: Yes
Willing to relocate: Yes (US preferred)
Technologies: PostgreSQL (partitioning, performance, OLTP architecture), SQL, F#, C#, C, Java, Clojure, Common Lisp, Scheme, Emacs Lisp, Python, Ruby, AWS, Linux
Email: ebellani at gmail
I work on high-throughput PostgreSQL systems, especially when they’ve grown into a state where migrations, performance, or schema design have become limiting factors.
Recent work:
Re-architected two multi-terabyte OLTP tables (~2TB and ~1TB) receiving 200+ writes/sec. I focus on “rescue architecture” work: fixing dangerous schemas, stabilizing hot paths, removing app-level complexity, and making Postgres scale without rewriting the product.
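As a sketch of the kind of change this usually involves (table and column names are hypothetical, not from the actual engagement), declarative range partitioning keeps per-partition indexes small on a hot write path and turns data retirement into a metadata operation:

```sql
-- Hypothetical hot OLTP table, range-partitioned by month so indexes
-- stay small and old data can be detached cheaply.
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    payload    jsonb,
    PRIMARY KEY (id, created_at)  -- the partition key must be part of the PK
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Retiring a month is near-instant, not a multi-hour DELETE:
ALTER TABLE events DETACH PARTITION events_2024_01;
```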
Open to consulting or full-time roles where data is core to the business and performance/architecture matters.
> This is the reason for the push-back against it.
Do you have evidence for that? From memory, it was basically because it was associated with the Java/.NET bloat of the early 2000s. Then Ruby on Rails came along.
I think that's basically the same reason, right? XML itself is bloated if you use it as a format for data that is not marked-up text, so it comes with bloated APIs (which were pushed by Java/.NET proponents). I believe that if XML had been kept to its intended purpose, it would be considered a relatively sane solution.
(But I don't have a source; I was just stating my impression/opinion.)
I have written the entire backend of a fintech company using nothing but PostgreSQL, HTTP integration and webhook reception included (that last bit was with PostgREST, but you get the point).
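A minimal sketch of that pattern (function and table names are hypothetical): PostgREST exposes a SQL function as an HTTP endpoint under `/rpc/`, mapping JSON body keys to function arguments, so a webhook handler can live entirely in the database:

```sql
-- Hypothetical webhook receiver. With PostgREST in front, this function
-- is callable as POST /rpc/receive_webhook with a body like
-- {"payload": {...}}.
CREATE TABLE webhook_events (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    received_at timestamptz NOT NULL DEFAULT now(),
    body        jsonb NOT NULL
);

CREATE FUNCTION receive_webhook(payload jsonb) RETURNS void
LANGUAGE sql AS $$
    INSERT INTO webhook_events (body) VALUES (payload);
$$;
```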
> Not to mention that perfectly normalizing a database always incurs join overhead that limits horizontal scalability. In fact, denormalization is required to achieve scale (with a trade-off).
This is just not true, at least not in general. Inserting into a normalized design is usually faster, due to smaller index sizes, fewer indexes, and more rows fitting per page.
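A toy illustration of the point (schemas are hypothetical): in the normalized design the order row carries only an 8-byte foreign key instead of repeated customer text, so rows are narrower, more of them fit per page, and the indexes touched on every insert are smaller:

```sql
-- Denormalized: every order repeats the customer data, widening the
-- row and any index that includes these columns.
CREATE TABLE orders_denorm (
    id             bigint PRIMARY KEY,
    customer_name  text   NOT NULL,
    customer_email text   NOT NULL,
    amount_cents   bigint NOT NULL
);

-- Normalized: the order row carries only a compact key; inserts write
-- narrower rows and update smaller indexes.
CREATE TABLE customers (
    id    bigint PRIMARY KEY,
    name  text NOT NULL,
    email text NOT NULL
);

CREATE TABLE orders (
    id           bigint PRIMARY KEY,
    customer_id  bigint NOT NULL REFERENCES customers (id),
    amount_cents bigint NOT NULL
);
```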