
This looks super useful for anything requiring optimistic updates, but how do you get data from Triplit into another db? Perhaps for analytics or audit purposes.

With something like electric-sql you can just use one of the many Postgres tools, and other local-first options like https://ably.com/livesync have database adaptors for replication. I think this is an important requirement whenever you build your own database.


[I work on Triplit] Currently, you can accomplish this by pulling from Triplit with either a JS client that just subscribes to each collection or with the REST API. We're also working on a way to define custom "triggers" on your Triplit server so you can push changes directly into any other database you'd like.
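The "JS client that just subscribes to each collection" approach can be sketched roughly like this. Note that `subscribeToCollection` and the result shape below are stand-ins for the real Triplit client API, which may differ; only the mirroring pattern is the point:

```javascript
// Mirror a collection into another store: whenever the subscription
// fires with the latest results, upsert them into the target sink.
function mirrorCollection(subscribeToCollection, collection, sink) {
  return subscribeToCollection(collection, (results) => {
    for (const [id, record] of results) {
      sink.set(`${collection}/${id}`, record); // upsert into the target store
    }
  });
}

// Fake subscription source standing in for a real Triplit client:
function fakeSubscribe(collection, callback) {
  callback(new Map([["todo-1", { title: "ship CDC triggers" }]]));
  return () => {}; // unsubscribe handle
}

const analyticsDb = new Map();
mirrorCollection(fakeSubscribe, "todos", analyticsDb);
console.log(analyticsDb.get("todos/todo-1").title); // "ship CDC triggers"
```

The same sink logic would work for an analytics or audit database; only the `sink.set` call changes.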


Is “triggers” a change-data-capture-like thingy? Having an easy-to-connect CDC stream seems like a great feature to offer. Is your trigger concept like DynamoDB Streams + triggers? https://docs.aws.amazon.com/amazondynamodb/latest/developerg...


Yes, that's pretty much what we're going for. I'm less familiar with DynamoDB Streams, but we're taking inspiration from Postgres triggers: https://www.postgresql.org/docs/current/sql-createtrigger.ht...


I recommend having a pre-built integration that dumps all changes to the database in a Debezium CDC compatible format. Not sure how you'd normalize Triplit changes into Debezium updates, but something to think about. Debezium CDC format lets you pipe changes from one DB into a stream system like Kafka, and then out of Kafka into another DB on the other end. It's handy.

https://debezium.io/documentation/reference/2.5/transformati...

For example, the original method for connecting Postgres to Materialize.com was using a Debezium stream: https://materialize.com/docs/ingest-data/cdc-postgres-kafka-...
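For context, a Debezium change event wraps each row change in a before/after envelope with an `op` code, which is what lets a downstream consumer replay changes into another database. A minimal sketch (the field names follow Debezium's documented envelope; the payload values and the `applyChange` helper are invented for illustration):

```javascript
// A Debezium-style change event: "before" and "after" row states,
// a source descriptor, and an operation code.
const changeEvent = {
  before: null, // row state before the change (null for inserts)
  after: { id: "t1", title: "buy milk", done: false }, // row state after
  source: { connector: "postgresql", db: "app", table: "todos" },
  op: "c", // "c" = create, "u" = update, "d" = delete, "r" = snapshot read
  ts_ms: 1700000000000, // when the change was captured
};

// A consumer routes on `op` to replay the change into another store:
function applyChange(event, sink) {
  if (event.op === "c" || event.op === "u" || event.op === "r") {
    sink.set(event.after.id, event.after);
  } else if (event.op === "d") {
    sink.delete(event.before.id);
  }
}

const sink = new Map();
applyChange(changeEvent, sink);
console.log(sink.get("t1").title); // "buy milk"
```

Anything that can emit this envelope can feed Kafka Connect sinks on the other end, which is where the "pipe into any other DB" part comes from.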


That's a great pointer! Seems like a many-birds-one-stone solution.


Is there any support for profiling JS applications? I see Node.js, but it seems like browser JS isn't supported?


Great question! That’s correct, for the moment we’re focusing on everything that happens in the backend. That said, the system will accept anything in the pprof format so I’d love to see something like “1% of users get profiled and data sent” from the frontend, that’d be awesome!

We’ll think about building something like this but if there is someone with a use case I’d love to chat and figure out how we can make it work together!


You could look into http://xtdb.com/ if you want an open source alternative.


There are also similar immutable Datalog databases with fully open source codebases out there, like https://www.xtdb.com/ or https://datahike.io/


Came here to search for the mention of XTDB (formerly known as Crux). I have never tried Datomic and have only trivially played with XTDB; does anyone know if there's a comprehensive comparison of the two?

update: So, I asked ChatGPT and this is what I got:

----

ChatGPT> Do you know how Datomic and XTDB differ?

Yes, Datomic and XTDB are both database systems, but they differ in a few ways:

1. Architecture: Datomic follows a client-server architecture, where the client sends requests to the server for data manipulation. On the other hand, XTDB is designed as a distributed database system, where data is automatically partitioned across multiple nodes for high availability and scalability.

2. Query Language: Datomic uses the Datalog query language, which is similar to Prolog, for querying the database. XTDB, on the other hand, provides a modern and expressive query language based on Clojure.

3. Data Model: Datomic stores data in the form of entities, attributes, and values. XTDB uses a document-based data model, where each document is a self-contained unit of data.

4. Distributed Transactions: XTDB supports distributed transactions while Datomic does not. This means that XTDB can guarantee the consistency of data across multiple nodes in a distributed system, while Datomic cannot.

5. Open Source: XTDB is open-source, while Datomic is a commercial product, which requires a license to use.

Overall, both Datomic and XTDB have their own unique features and advantages, and the choice between the two depends on the specific requirements and use case of the application.

----

I dunno, it sounds like XTDB is a clear winner here, no?


I'm tickled you asked ChatGPT, but I think it misses the forest for the trees somewhat.

1. Architecture - XTDB is designed to work with Kafka, but it also works with typical JDBC databases as a datastore. Datomic can use SQL databases, DynamoDB or Cassandra if you want distribution. I personally would think long and hard before I introduced a distributed database to my organisation "because it scales".

Part of the value proposition of Datomic is it easily scales read workloads horizontally and by isolating read and writes into separate processes it improves write performance significantly. The metric usually thrown around is 75% of CPU cycles in a traditional RDBMS are concurrency coordination, which is avoided by the Datomic model. That number is quite old now so I don't know if it's still accurate as of 2023.

2. Query language - both use Datalog and support the Datomic `pull` syntax. XTDB also supports SQL.

3. Datomic's EAVT quadruplets are a compelling feature because they are so generic and can be used/re-used in many contexts. A document database would have to fit your use case pretty directly.

4. Datomic has a single transactor process. Do you need distributed transactions? Does Datomic need distributed transactions? You'd have to find someone from say, Nubank, and ask them for war stories. :-)

5. Datomic is now free-as-in-beer.

In my unqualified opinion XTDB is appropriate to choose in the following situations:

- You need to model "valid time" as part of your domain.

- You want a document database and are happy with everything that entails.

- You need access to the source code of your database.

- You have existing analysts who know SQL but don't know or can't learn Datalog.


1, 2 and 4 are not to be trusted ;)


https://github.com/TimUntersberger/neogit is a port of magit to neovim. Still some stuff missing (rebase/reset, iirc), but it mostly does the job.


I've tried it. You can't really port magit from Emacs, its workflow depends entirely on how Emacs works (and this is a good thing, mind you). Neogit just feels awkward to use IMO from a vim perspective.

That said, I'm happy with Fugitive. It does the job in a way I'd expect from a vim plugin.


There's nothing Magit (or emacs) does that can't be done in vim, and vice-versa.

It's just a matter of having the will to do it and putting in the time.

I say this as a huge fan of both editors. They are both so powerful they can do just about anything.


That's not quite true for original Vim. The developer experience for writing plugins is incomparable; VimL lacks good ways of abstracting and composing behavior, while Emacs Lisp gives you just about everything a modern language should, including a sophisticated object system with multimethods and multiple inheritance. The built-in debugger for Elisp is not on the level of JetBrains IDEs, but it provides all the typical functionality and is GUI-driven, in contrast to Python's pdb or Ruby's pry.

NeoVim is an entirely different beast, and I've heard good things about its way of handling plugin development.


I think this is one of my favourite pieces of tech in the past five years tbh.

I still use bash for short <5 line scripts but everything else is bb (though I’ve started looking into nbb because you can use node libs like ink which seems pretty cool)

And repl integration with neovim and conjure is great!


What’s the reasoning behind using some database I’ve never heard of vs Postgres etc?

Interesting project though


The main ways that XTDB has been useful to me personally are:

- Immutability, which makes it fit well with the functional programming style used in Clojure. You can create a database connection object that represents a snapshot of the database at a specific point in time (usually the current time), and queries made via that object will ignore any transactions that occurred later. So you can effectively pass the entire DB around as an argument and your functions stay pure. (That also makes it easy to inspect historical snapshots, similar to looking through old git commits--I don't do it often, but it's very nice to have when you need it.)[1]

- Datalog instead of SQL. I find Datalog queries to often be more compact than the equivalent SQL, especially thanks to the implicit joins--you can query for things from multiple "tables" without having to type up a bunch of JOIN expressions. And there are various other handy doodads like pull expressions[2].

- Clojure ergonomics. You can store Clojure maps as XTDB documents as-is without needing to write code to translate them to records.

[1] XTDB's bitemporality also has some benefits over e.g. Datomic (https://www.datomic.com/, another immutable, Clojurey database), though I haven't needed to use it yet--Datomic's "monotemporality" would be sufficient for my current needs.

[2] https://docs.xtdb.com/language-reference/datalog-queries/#pu...


SQL for XTDB seems to be under way in the XTDB "Core2" dev work, hope they keep Datalog too.


It's bitemporal and a graph database and so on. It seems like it can cover most of the use cases of a number of other dbs. This goes into that: https://docs.xtdb.com/concepts/what-is-xtdb/

"Many databases can support various levels of "time travel" queries across transaction time (i.e. the transactional sequence of database states from the moment of database creation to its current state), however such capabilities are typically complex to use and have practical limitations. By contrast, XTDB provides an always-on capability for point-in-time querying of past transactional states and across the valid time axis."


I love babashka but always find nbb scripts hard because everything returns a promise, which makes the normal REPL workflow tricky... maybe I'm doing it wrong, though.


I'm not sure if this applies to nbb, but there's a command line flag for the node repl (`--experimental-repl-await`) which allows you to use await in the repl, allowing you to kind of sidestep the normal annoyance of handling promises.
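To illustrate what the flag sidesteps, here's a minimal sketch of the two styles (the values are invented for illustration):

```javascript
// Without top-level await, inspecting a promise at the REPL means
// chaining .then callbacks by hand:
const result = Promise.resolve({ status: "ok" });
result.then((r) => console.log(r.status)); // eventually prints "ok"

// With `node --experimental-repl-await` (on by default in the REPL
// in recent Node versions), you can instead type `await result` at
// the prompt and get the resolved value back directly.
```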


Nbb has nbb.core/await for top-level await, which is handy for REPL usage. Furthermore, it has promesa, a library for dealing with promises.


Craft.do is a good solution to your problem if you don’t mind closed source.

I personally have started using logseq.com though; it is a good, fully open-source, file-system-based approach. It supports Markdown and Org mode, and I store the files in an iCloud Drive folder so I can access stuff on my phone (though it's not quite as slick as Craft, and setting up an auto-deployed website would have to be done manually, whereas Craft does it for you).


Twitter bought them for the devs and shut the project down. I just thought it was a very interesting project, and even though it's 10 years old I don't think I've seen any DB tool that allows you to dynamically change the data model like this.

