Or is it? Jepsen reported a number of issues like "read skew, cyclic information flow, duplicate writes, and internal consistency violations. Weak defaults meant that transactions could lose writes and allow dirty reads, even downgrading requested safety levels at the database and collection level. Moreover, the snapshot read concern did not guarantee snapshot unless paired with write concern majority—even for read-only transactions."
That report (1) is four years old, so many things could have changed. But so far, every version Jepsen has reviewed was faulty with regard to consistency.
Jepsen found a consistency bug more concerning than the results above when it evaluated Postgres 12 [1]. The relevant text:
We [...] found that transactions executed with serializable isolation on a single PostgreSQL instance were not, in fact, serializable
I have run Postgres and MongoDB at petabyte scale. Both of them are solid databases that occasionally have bugs in their transaction logic. Any distributed database that is receiving significant development will have bugs like this. Yes, even FoundationDB.
I wouldn't avoid Postgres because of this problem, just as I wouldn't avoid MongoDB because it had bugs in a new feature. In fact, I'm more likely to trust a company that consistently pays to have its work reviewed in public.
FWIW, the latest stable release is 7.0.12, released a week or so ago: https://www.mongodb.com/docs/upcoming/release-notes/7.0/. (I'm not sure why the URL has /upcoming/ in it, actually: 7.0 is definitely the stable release.)
1 - https://jepsen.io/analyses/mongodb-4.2.6