Total meh from me, an end user. User of KeePass since at least 2015, I've written end-user guides, contributed to the main documentation, evangelize it to my family and friends when they have security questions.
I store every single important piece of info in my KeePass database. It stores ALL of my passwords, my SSN, credit cards, my health information, even some weird stuff like my vehicle maintenance records and whatnot. My KDBX file currently sits at 466K. Size is not a particularly compelling reason. Hate to be that guy, but if your database is much larger than that - you're probably doing it wrong.
Newer features like TOTP and passkeys are likewise not a concern for me. What did KeePassXC do when TOTP came around? They stored the relevant data in the attributes, and added a UI around it. It even works with my Steam TOTP, which is a nonstandard implementation. I haven't looked into it, but I imagine they did the same thing with passkeys. I don't see why this couldn't continue to be the paradigm they use. I don't use attributes at all - I haven't needed to, the notes section works great - but I do appreciate being able to look into the "raw data" of attributes quite easily, from within the UI.
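For anyone wondering what "storing the relevant data in the attributes" actually amounts to: all the client needs to keep is the shared secret - the code is derived on the fly. Here's a minimal sketch of the standard RFC 6238 algorithm (not KeePassXC's actual code; Steam's nonstandard variant swaps the final decimal step for a 5-character alphanumeric alphabet):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits=6, step=30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second steps since the epoch."""
    t = int(time.time() if for_time is None else for_time)
    counter = struct.pack(">Q", t // step)           # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

So the "attribute" only has to hold the secret (usually base32-encoded) plus maybe the digit count and period; everything else is a pure function of the clock.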
If KeePass were being developed from scratch today, or if the developers of the various projects collectively really, really wanted to switch to a SQLite system of their own volition, then sure, SQLite. I'm not going to ask them to do that now, though.
---
On a separate note, an unfulfilled niche that I have, though, if anyone's looking for ideas. My secure password storage is a solved problem; KeePass is cross platform, easy to use, and very secure. What remains a problem is secure notes. I want to be able to write markdown (`.md`) documents, add photos and PDFs, then save it all to a secure, encrypted folder somewhere. It doesn't need the same security posture as KeePass, but I don't want to leak metadata like file names.
Obsidian - my current notes app - is good from a usability standpoint, but it's not exactly secure. I could pair it up with Veracrypt, but that's a pain from a usability standpoint, and I don't trust my OS to keep the mounted Veracrypt volume contents a secret. Whatever the solution is, it must have a GPL license, or else I'm not going to trust it - from a long-term viability standpoint more than anything else.
If anyone has any suggestions here, would love to hear them.
Attributes are meant to be user facing, and are super useful for all the assorted info that you can use during autofill steps. I primarily use this to autofill card information with autotype.
Cryptomator checks all the requirements. It is a cross platform, GPL'd, encrypted overlay filesystem which you can put anything in, not just markdown docs. Just unlock it and point your notes app to it. Lock when done.
> ...my KeePass database. It stores ALL of my passwords, my SSN, credit cards, my health information, even some weird stuff like my vehicle maintainence records and whatnot. My KDBX file currently sits at 466K.
I thought I had a lot of info in my *.kdbx file - not just passwords - but mine is a mere 80 KB, though I do keep medical info in a 'note to self' on Signal.
Note to self is quite a bit more ephemeral in practice than keeping it in your kdbx file. Any number of things could cause you to lose your chats with Signal.
Hot take alert! As an avid self-hoster, I'd like to hear why.
Personally, I self host because the benefits I receive simply aren't available anywhere else at the level of quality I've come to expect - Jellyfin is a great media player, it's free, and I don't want to switch. Pihole provides ad protection and privacy for my whole home network. It's also free. Homeassistant is amazing, and free. Etc etc.
Only if you don't care about your time or if your media collection is tiny.
Don't get me wrong, I love my 20 TB hard drives full of Linux ISOs, but it's a hard sell on anyone who doesn't have 'dicking about with computers' as their hobby. Regular old piracy using torrents has been an easier sell in my experience, once you can get over the hurdle of getting someone familiar with using a torrent client and the relevant search bar. Popcorn Time back in the day made that hurdle trivial. Getting people to use Jellyfin isn't hard. Getting someone to be the family/friend group Jellyfin sysadmin is a significantly tougher sell.
Pihole and the like are an easier sell, since they can be mostly set and forget, but they're not free unless you already have a computer which isn't doing anything, and even if you do, that computer isn't guaranteed to be one which has near-zero running costs when you factor in electricity.
The same sorts of problems apply to most things you can self-host.
I don't think many advise non-tech people to get into self-hosting, but there are a lot of people who do enjoy messing with computers who these articles are marketing to.
The average user will only self host when it's a managed box they plug in and it just works. Like how Apple/Google home automation works. Maybe we will see managed products for photo / file syncing pop up.
> I don't think many advise non-tech people to get into self-hosting, but there are a lot of people who do enjoy messing with computers who these articles are marketing to.
I agree, but even I, someone who does have this as a hobby and does self-host a few things, have my limits for the same reasons that the casuals do. Even when I have a computer that I can use for one more purpose, I rarely do that unless I know it will be set and forget, since having one more thing to deal with in my already overburdened life is a hard sell.
> The average user will only self host when it's a managed box they plug in and it just works. Like how Apple/Google home automation works. Maybe we will see managed products for photo / file syncing pop up.
Very true. I do hope some products like that will appear, but the workflow and UX will have to be damn near perfect, something which home automation often isn't (unless you use Home Assistant and thus have it as a hobby. Funny how that works).
I have the IKEA home hub, and it's basically maintenance-free. There is no need for - or even a UI for - updating/managing/reinstalling the OS. It just works and has been just working for years.
For other systems like photos and file storage the main complication would be around backups which fall back on the user. If your home automation hub dies, you just chuck a new one in and re-pair your lights. If your photo server drives die it's a disaster. Realistically you'd want to have a backup copy on the cloud, which would lead many casual users to wonder what the point of even hosting the server is if you still need to pay for the cloud backup.
I can't speak for Jellyfin, as I currently use Plex. But it truly has been "set it and forget it" for me. I've never had an update break things, it just does its job and does it well.
Well, it's stored in an encrypted way - in the encrypted password database. Much like a password, everyone already knows not to share a passkey. But also like a password, as the owner, sometimes I want to look at it!
Not really, no. You can easily check out the repo containing the Dockerfile, add a Dockerfile override, and change most of the stuff while keeping the original Dockerfile intact and retaining the ability to use git to update it. Then you change one line in docker-compose.yaml (or override it, if it's also hosted in the repo) and build the container locally. I can't imagine an easier way to modify existing Docker images; I do this a lot with my self-hosted services.
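Roughly, that override workflow looks like this (the service and image names are hypothetical; Compose merges a `docker-compose.override.yml` sitting next to the main file automatically):

```yaml
# docker-compose.override.yml
services:
  app:                # must match the service name in the upstream compose file
    # Build from the checked-out repo instead of pulling the published image,
    # so your local Dockerfile tweaks apply while `git pull` still works upstream.
    build:
      context: .
    image: app:local
```

Then `docker compose up -d --build` rebuilds and runs the service from your modified Dockerfile without touching the upstream files.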
It is straightforward, but so is the NixOS module system, and I could describe writing a custom module the same way you described custom Docker images.
But it works on Ubuntu, it works on Debian, it works on Mac, it works on Windows, it works on a lot of things other than a Nix install.
And I have to know Docker for work anyhow. I don't have to know Nix for anything else.
You can't win on "it's net easier in Nix than anywhere else", and a lot of us are pretty used to "it's just one line" and know exactly what that means when that one line isn't quite what we need or want. Maybe it's easier after a rather large up-front investment into Nix, but I've got dozens of technologies asking me for large up-front investments.
Nix is for reproducibility. Nix and Docker are orthogonal. You can create a reproducible Docker image via Nix. You can run Nix inside Docker on systems that don't allow you to create the Nix store.
This is a familiarity problem. I've never used NixOS, and all your posts telling me how simple it is sound like super daunting challenges to me versus just updating a Dockerfile or a one-liner in compose that I'm already familiar with. I suspect it's the inverse for you.
I find the granular nature of dependency sharing in NixOS to be really nice. In particular, I like systemd as my supervisor. With systemd I can still isolate and lock down processes, but they can still, for example, share memory pages of `glibc`. It is certainly less "secure", but then with Docker you're at least sharing the same kernel anyway. It's also hard to share resources between Docker containers. Getting four Docker containers to use the same instance of Avahi, for example, requires explicit configuration.
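For context on "isolate and lock down", this is the sort of per-unit sandboxing systemd offers (the unit name and binary path are made up; NixOS modules generate directives like these for you):

```ini
# /etc/systemd/system/example-daemon.service
[Service]
ExecStart=/usr/bin/example-daemon
DynamicUser=yes          ; throwaway UID allocated at start, no /etc/passwd entry
ProtectSystem=strict     ; whole filesystem read-only except explicit paths
ProtectHome=yes          ; /home, /root invisible to the service
PrivateTmp=yes           ; private /tmp namespace
NoNewPrivileges=yes      ; blocks setuid escalation
```

The process is confined much like a container, but it still runs against the host's shared libraries instead of carrying its own copy of a userland.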
Docker containers also don't have a "standard" for where to put binaries (outside of CMD/ENTRYPOINT), how to configure users/uids (many still run as root?), whether to put multiple services in one container or separate containers, where to put user data, etc. NixOS coordinates this centrally like any distro, assigning paths and UIDs and ports.
I have found Nix and NixOS to be able to absorb any amount of complexity I throw at it with grace.
If a Docker image truly is the best way to use a bit of functionality (like Home Assistant), then I will just configure NixOS to run it in podman as a systemd service with host networking.
I have not come across something that I could not package. The trick is that Nix composes functionality in a way that Dockerfiles or docker-compose configs cannot, because it's one language, one system, one abstraction.
Is it? Why? If a NixOS module doesn’t support what you need, you can just write your own module, and the module system lets you disable existing modules if you need to. Doing anything custom this way still feels easier than doing it in an imperative world.
I can see your point that it can be daunting to have all the pain upfront. When I was using Ubuntu on my servers it was super simple to get things running
The problem was when I had to change some obscure .ini file in /etc for a dependency to something new I was setting up. Three days later I'd realise something unrelated had stopped working, and then I had to figure out which change over the past several days caused it
For me this is at least 100x more difficult than writing a Nix module, because I'm simply not good at documenting my changes in parallel with making them
For others this might not be a problem, so then an imperative solution might be the best choice
Having used Nix and NixOS for the past 6-7 years, I honestly can't imagine myself using anything than declarative configuration again - but again, it's just a good fit for me and how my mind works
In the NixOS scenario you described, what keeps you from finding an unrelated thing stopped working three days later and having to find what changed?
I’m asking because you spoke to me when you said “because I'm simply not good at documenting my changes in parallel with making them”, and I want to understand if NixOS is something I should look into. There are all kinds of things like immich that I don’t use because I don’t want the personal tech debt of maintaining them.
I think the sibling answer by oasisaimlessly is really good. I'd supplement it by saying that because you can have the entire configuration in a git repo, you can see what you've changed at what point in time
In the beginning I was doing one change, writing that change down in some log, then doing another change (a discipline I'd mess up within about five minutes)
Now I'm creating a new commit, writing a description for it to help myself remember what I'm doing and then changing the Nix code. I can then review everything I've changed on the system by doing a simple diff. If something breaks I can look at my commit history and see every change I've ever made
It does still have some overhead in terms of keeping a clean commit history. I occasionally get distracted by other issues while working and I'll have to split the changes into two different commits, but I can do that after I've checked everything works, so it becomes a step at the end where I can focus fully on it instead of yet another thing I need to keep track of mentally
I just realised I didn't answer the first question about what keeps me from discovering the issues earlier
The quick answer is complexity and the amount of energy I have, since I'm mostly working on my homelab after a full work day
Some things also don't run that often or I don't check up on them for some time. Like hardware acceleration for my jellyfin instance stopped working at some point because I was messing around with OpenCL and I messed up something with the Mesa drivers. Didn't discover it until I noticed the fans going ham due to the added workload
I'm not really sure what your point is, but I'll try to take it in good faith and read it as "why doesn't docker solve the problem for it, since you can also keep those configurations in a git repo?"
If any kind of apt upgrade or similar command is run in a dockerfile, it is no longer reproducible. Because of this it's necessary to keep track of which dockerfiles do that and keep track of when a build was performed; that's more out-of-band logging. With NixOS I will get the exact same system configuration if I build the same commit (barring some very exotic edge cases)
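To make that concrete: a Dockerfile like the following yields a different image depending on the day you build it, because the package index moves underneath you (the base image tag and package are just examples):

```dockerfile
FROM ubuntu:24.04
# Resolves against whatever the mirrors serve at build time -- two builds
# of the same commit can produce different package versions.
RUN apt-get update && apt-get install -y --no-install-recommends curl
```

Pinning the base image by digest and pinning package versions helps, but few published Dockerfiles bother, so you're back to logging build dates out of band.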
Besides that, docker still needs to run on a system, which must also be maintained, so Docker only partly addresses a subset of the issue
If Docker works for you and you're not facing any issues with such a setup, then that's great. NixOS is the best solution for me
That’s all my point was, yeah. Genuinely no extra snark intended.
> it is no longer reproducible
The problem I have with this is that most of the software I use isn’t reproducible, and reproducible isn’t something that is the be all and end all to me. If you want reproducible then yes nix is the only game in town, but if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
> docker still needs to run on a system
This is a fair point but very little of that system impacts the app you’re running in a container, and if you’re regularly breaking running containers due to poking around in the host, you’re likely going to do it by running some similar command whether the OS wants you to do it or not.
> if you want strong versioning with source controlled configuration, containers are 1000x easier and give you 95% of the benefit
For some I'm sure that's the case; it wasn't in my case.
I ran docker for several years before. First docker-compose, then docker swarm, finally Nomad.
Getting things running is pretty fast, but handling volumes, backups, upgrades of anything in the stack (OS, scheduler, containers, etc) broke something almost every time - doing an update to a new release of Ubuntu would pretty much always require backing up all the volumes and local state to external media, wiping the disk, installing the new version, and restoring from the backup
That's not to talk about getting things running after an issue. Because a lot of configuration can't be done through docker envs, it has to be done through the service. As a consequence that config is now state
I had an nvme fail on me six months ago. Recovering was as simple as swapping the drive, booting the install media, installing the OS, and transferring the most recent backup before rebooting
Took about 1.5 hours and everything was back up and running without any issues
Not OP, and not very experienced with NixOS (I just use Nix for building containers), but roughly speaking:
* With NixOS, you define the configuration for the entire system in one or a couple .nix files that import each other.
* You can very easily put these .nix files under version control and follow a convention of never leaving the system in a state where you have uncommitted changes.
I've written a dozen flakes because I want some niche behavior that the home-manager impl didn't give me, and I just used an LLM and never opened Nix docs once.
It's just declarative configuration, so you also get a much better deliverable at the end than running terminal commands in Arch Linux, and it ends up being less work.
Have you seen how bad the Nix documentation is and how challenging Nix (the language) is? Not to mention that you have to learn Yet Another Language just for this corner case, which you will not use for anything else. At least Guix uses a lisp variant so that some of the skills you gain are transferable (e.g. to Emacs, or even to a GP language like Common Lisp or Racket).
Don't get me wrong, I love the concept of Nix and the way it handles dependency management and declarative configuration. But I don't think we can pretend that it's easy.
The documentation is not great (especially since it tends to document nix-the-language and not the conventions actually used in Nixpkgs), but there are very few languages on earth with more examples of modules than Nix.
Admittedly I didn't dive much into this to get the full context, but it's saddening to me that a legendary game designer had a GoFundMe. I was hoping achieving that level of status in a traditionally well-paid industry would leave one well off, financially.
It's such an erratic industry in terms of compensation. You can found a studio, make some acclaimed darlings, and still end up shuttering and being no better off than your average joe. Then there's being a "software engineer in games" where you're a cog in a wheel fixing bugs in the yearly Sportball game, getting compensated 200k, and living very well despite never truly "impacting" the industry the same way. 200k isn't mindblowing for a software engineer, but it's well beyond "average joe" range at that point.
I'm that cog. Or at least, I was. Situations like this make me think a lot about the state of the industry and where I stand in life.
Yep, unfortunately that's a big part of the reason I left for a more traditional tech role. The same skills are extremely valuable at any company writing performance critical software.
I'm wondering if she actually got the fundraiser money, considering how quickly this moved - the last update implied it would have to go to her funeral, and I hope it pays for the bills or helps her family.
The United States is the wealthiest nation on the planet according to Forbes, richer than the subsequent three nations combined.
It’s a tragedy that our own citizens are not the direct beneficiaries of that wealth.
I think a lot about the scene in Star Trek IV when McCoy is in a hospital and says “what is this, the dark ages?”
GoFundMe is a kafkaesque, tragic absurdity that - hopefully - will one day be looked at as an indictment of the inequitable K-shaped economy we’ve built, and fixed.
> The United States is the wealthiest nation on the planet according to Forbes, richer than the subsequent three nations combined.
This framing by Forbes (and many others, really) is insidious because it doesn't take into account the population number and how unevenly wealth is spread.
For instance, Switzerland is not a huge economy - around the 20th in the world - but its citizens enjoy an extremely high quality of life because both income inequality and incomes overall are significantly better than in the US.
Population size is usually included in those calculations. It’s typically GDP per capita.
But I couldn’t agree more that the inequality and social safety net (or lack thereof) make the numbers deeply disconnected from QoL. Which I believe is the whole point.
> As for whether this represents a "kafkaesque tragic absurdity" we would need intimate knowledge of a lifetime of financial decisions. Maybe she was really bad with money, and frittered it away in casinos. Maybe she was amazing with money, and donated to others more than will ever be donated to her.
As someone in a nation with socialised healthcare, no you don't. It's a Kafkaesque tragic absurdity, and this sentiment of "maybe she was bad with money" sounds a bit like "maybe she was holding the live hand grenade wrong".
The US is maybe the only developed nation where this happens, insurance exists because massively unlikely, massively expensive events are very hard to budget for. It's not the person's fault if they didn't manage that.
The UK has socialized healthcare, and that's not going so well. Societies excel at stuff they prioritize. Pretty much all societies don't prioritize other people's tragedies.
It's definitely going better than the US, where you basically need to beg people for treatment money. I'm not sure what "not going so well" means, in that regard, since virtually every other developed country is doing better than the US on this.
I’ve lived in both Canada and the US. My grandma in Canada had to wait 9 months for a hip replacement. Even though the government provided help with paid aids, it was not a great situation.
My mom here in the States needs a hip replacement and she can’t afford it because she’s maxed out Medicare.
You mentioned ambulances. Our kid tripped on something at a park, and a rather hysterical bystander told my wife she needed to call an ambulance right away. Pressured, she did so; our kid was fine. But we then owed $3,500 for the ambulance. Though we were on a payment plan and never missed a payment, the bill got turned over to collections for some unknown reason. We got it sorted out, but it took about 15 hours of work to resolve and to fix our credit.
I’ve found that my Canadian relatives complain often about the system but very few seem to truly understand what is good about that system.
Pick your poison. Like many things here in the US, healthcare in the US is great if you have money, bad if you don’t.
It's not that great even if you have money. Unless you're talking about the type of money needed to pay for all of your treatments out of pocket, and give you access to special private care most people don't even know exists.
My experience has been: if you have an immediate health issue with an obvious solution, you can get pretty good care. Say if you have a broken arm, gun shot wound, heart attack, stroke, etc. Anything uncommon, or that requires ongoing care, is a life sucking nightmare.
I'll give some examples from my own life. I live outside a major metropolitan area. A relative was visiting me and had a stroke in my living room. I called 911, and an ambulance appeared 5 minutes later, in 25 minutes they were in a hospital with a telemedicine link to a stroke expert. The expert said they needed to be brought to a downtown hospital so they were sent there by helicopter. One of the two best neurosurgeons in the city performed an endoscopic removal of the clot and saved their life.
Contrast this with a different relation who struggles with chronic pain and spine problems and has spent the last 20 years bouncing around various doctors, battling insurance companies, pharmacies, waiting to be seen, waiting endlessly for specialists, tests, and having to keep track of all of their information themselves because the system is fragmented and every office wants a complete restatement to their medical history.
Yeah, exactly, I don't know much about the NHS but I wouldn't be surprised if the recent issues are because it's getting defunded so it can be sold off to private owners.
> this sentiment of "maybe she was bad with money" sounds a bit like "maybe she was holding the live hand grenade wrong".
Yes, it does sound like that when taken as an isolated sentence fragment. I'm not sure what your point is though, since no reasonable system of economics could possibly solve for people holding the metaphorical live hand grenade wrong.
I think the sentiment is not that generosity to those in need is bad, but that something bad must be causing so many to be in such desperate need.
It may be relevant that the US has higher health-care costs than every other country in the world except for Switzerland, but not because it's providing better care. Many countries have better outcomes.
The fact that you need intimate knowledge is evidence of the Kafkaesque nature. It describes a world where virtue doesn't exist except for the case of financial planning (which often equates quite well to luck).
Based on my understanding of Kafka, to fit the definition, funerals would be essential goods whose costs should be socially guaranteed. In reality, a funeral is a discretionary event about the deceased and for the living. Crowdfunding for the benefit of the crowd is not an inversion of responsibility, it's simply voluntary collective spending.
You could say it's an inversion of societal norms, but that's not Kafkaesque.
My apologies, I misread the original article and I was left with the impression that the GoFundMe was only for end-of-life and funeral costs. I must have missed the standfirst, which is where it was described as a "cancer fundraiser".
The Churchill line is about democracy, but the adapted version is a common variation. It works as a standalone maxim without need of attribution to some famous person.
I don't know if you've noticed, but internet discussions collectively can't seem to avoid "no true Scotsman"-ing what counts as capitalism, likewise its alternatives.
I've seen some people on HN criticise the "socialist" healthcare of the nordic countries on the basis of what Stalin was like, and others saying that China as is today is each of communist and capitalist depending on the point the poster wants to make.
I also clicked through ten pages of Google search results for "capitalism is the worst economic system, except for all the others", each of which showed the literal quotes in the preview excerpts, at which point I became too bored to continue.
I mean, how is "healthcare" from 500 years ago the bar here?
And isn't single-payer state-funded healthcare the scaled version of a small town passing the plate around anyway?
As I think about it, gofundme is even more kafkaesque in that it gatekeeps fundraising to those who have online social networks strong enough to fundraise. We don't hear about those who aren't able to because in the Jia Tolentino definition of "silence," they are not able to express that need online.
> Maybe she was really bad with money
I guess I fundamentally disagree that a kind of Dave Ramsey level of financial saving is a prerequisite for healthcare. Indeed, I'd argue that casinos are a symptom of, rather than a problem with, a system in which the only "viable" way out is gambling - again, another tentpole in a complicated kafkaesque system.
I agree that single-payer baseline healthcare is the obviously correct answer. The experiment has been run countless times globally, and there's enough evidence to put this beyond debate. Rebecca's circumstance isn't Kafkaesque, it's merely adding to that mountain of evidence.
> how is "healthcare" from 500 years ago the bar
I agree completely, but it's not Kafkaesque for a person to ask one's own community for voluntary contributions in their time of need, just because that community happens to be online.
> gofundme is even more kafkaesque in that it gatekeeps fundraising to those who have [strong] online social networks
There's nothing Kafkaesque about a popular person having more opportunities than an unpopular person. And there's nothing inherently capitalist about it either. This is human nature, nothing more. I would be far more concerned about an economic system that sought to "guarantee equality" in a way that reduces the individual's incentive to be kind to others.
Considering that James Van Der Beek, of Dawson's Creek fame, is having to hold a fundraising auction of his memorabilia to fund his cancer treatment, cancer is clearly expensive in the US.
Actually, a difference is also how many players along the supply chain siphon money out of the process. The more greed is allowed and acted on in treatment, the more expensive it gets. Introduce layers of insurers, hedge funds, pension funds, lobbying... it adds up to ridiculous amounts, far beyond the original R&D/infrastructure/treatment costs.
And also downsides, e.g. many treatments just aren't available, and many others would never have had their discovery funded without the market-based system existing.
Governments can (and do) directly fund medical research including drug discovery. This is in part because governments of even just middling competence have an incentive to keep their workforce (which also includes their military) healthy.
This… is a thing that people believe, but it’s not as simple as that. Most basic research happens at universities, all over the place. Many drugs are developed in Europe. A lot of medical machinery is developed and made in Europe (Siemens, Philips, and Roche are huge in this space). Like most things, med tech is fairly globalised.
And let's not forget that a substantial amount of medical research performed in the USA is not market-based but rather publicly funded through the NIH.
> This… is a thing that people believe, but it’s not as simple as that.
This is a thing people believe because pharmaceutical companies keep repeating it. And to be fair, they're not entirely wrong in that getting a drug/treatment from the lab to the pharmacy is incredibly expensive because most drugs don't work and clinical trials are super expensive.
It does seem to me that a better system would be to split out the research/development and manufacturing of pharmaceuticals into the lab development (scientists), the clinical trials (should be government funded) and the manufacturing (this could easily be done via contract).
The US had a situation exactly like that until very recently: development labs, often at universities, with scientists paid for by grants (some private, but the majority being public, government grants), with clinical trials overseen by government agencies like the National Institutes of Health (NIH), and winning research eventually being tech transferred for cheap to pharmaceutical companies to manufacture, distribute, and market.
The companies have the biggest PR arms, so took the most credit for a system that had been balanced on a lot of government funding in the earlier, riskier stages. Eventually the marketing got so unbalanced people didn't realize how much the system was more complex than the marketing and voted for people that decided it was a "free market" idea to smash the government funding for the hard parts of science.
> The US had a situation exactly like that until very recently: development labs, often at universities, with scientists paid for by grants (some private, but the majority being public, government grants), with clinical trials overseen by government agencies like the National Institutes of Health (NIH), and winning research eventually being tech transferred for cheap to pharmaceutical companies to manufacture, distribute, and market.
Yeah, this isn't a particularly new idea. Like, most of the risk in pharma is on testing, and there's so much waste in spinning up plants for drugs that may not even succeed in Phase III. So I'd like to split that out.
> It does seem to me that a better system would be to split out the research/development and manufacturing of pharmaceuticals into the lab development (scientists), the clinical trials (should be government funded) and the manufacturing (this could easily be done via contract).
The market is there to risk money in a world of imperfect information, trying to predict what would be good to pursue. That is one of the hardest parts of the process, but it didn't even make your list.
Exactly. This was entirely deliberate, as I (personally) believe that market signals are profoundly useless in healthcare. There's no free market in life or death; nobody will quibble over cash when they're in pain, so I'm not sure how a market is supposed to work.
Fundamentally, the incentives of society and private companies are misaligned with respect to healthcare. Society wants a cheap, simple treatment that basically works forever (like sterilising vaccines). However, because of how the patent system works, companies want a treatment that is recurring, and can easily be patented multiple times.
Because of this, so much money goes into lifestyle treatments for the rich world, and not enough into re-using things that can't be patented. I think this is a giant waste of resources, hence my suggestions above.
This doesn't make any sense. If you make a thing, the price you set for selling that thing in a country has little to do with where you happen to be living when you made that thing.
It's a lot more expensive in the US. Three years of ribociclib is US$100k here in Argentina, which dwarfs the usual costs of things like chemotherapy, radiation therapy, and surgical resection. (All of which is normally paid for either by a health plan or by the public hospital system.) In the US, if you have to go through all of that, I think the cost is going to be at least an order of magnitude higher.
I wouldn't say it's perfect quite yet. I just installed Debian on my Framework, and my microphone isn't working. Debugging it for the last 30 minutes has gotten me nowhere, and half the answers on the internet don't apply to my distro. Until basic issues like this go away or have easy solutions, it's hard to recommend it to anyone.
I'm going to be shown the door for this suggestion, but go consult ChatGPT about your mic. ChatGPT has been very good for debugging Linux usability issues and papercuts in my experience.
Is it a normal mic, or Bluetooth? I think Trixie has some regressions in the Bluetooth stack on Cinnamon - it worked nicely in Bookworm, but I had weird issues on Trixie that just disappeared once I switched to KDE (didn't try GNOME).
Audio has always been overengineered and brittle. Vanilla ALSA was the sweet spot, but things like PulseAudio and all the projects that followed it to "fix" it have too many things that can go wrong.
I don't seem to have any issues with audio anymore since Pipewire became default on Ubuntu, as a non-professional but fairly demanding user with a bunch of wired headphones plus bluetooth. I definitely used to have plenty of annoyances!
Maybe this is naive, but in a good crypto system, I would hope "when" is measured in millions or billions of years given current hardware capabilities.
If you have a long enough and random enough password, you're probably good. The trouble with short passwords is that there just aren't that many of them. An attacker can just compute the hash of all of them.
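To put rough numbers on "there just aren't that many of them" (the guess rate here is an assumed, illustrative figure for a GPU rig attacking a fast hash, not a benchmark):

```python
lower8 = 26 ** 8             # every 8-character lowercase password
random16 = 94 ** 16          # 16 characters drawn from printable ASCII
guesses_per_sec = 10 ** 10   # assumed attacker throughput

# Seconds to exhaust all 8-char lowercase passwords: about 20.
print(lower8 // guesses_per_sec)

# Years to exhaust the 16-char random space: on the order of 10**14.
print(random16 // guesses_per_sec // (3600 * 24 * 365))
```

Same attacker, wildly different outcomes: the short space falls in seconds, the long random one never does.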
As long as the salt is secret from the attackers (which is not a given, of course), the length of the password shouldn't matter all too much; the input to the hash (i.e. password + salt) would still have enough entropy to not be brute-forceable.
If you have the hashed password, in most systems you have the salt. Salt+hash is for preventing the attackers from getting to try all your passwords in parallel.
Maybe this is what you're saying, I'm not sure - my understanding was that the salt prevents reused passwords from resulting in the same hash. So, if I use 'password' and you use 'password' the salt+hash will be different. That way attackers can't just hash all the common passwords once and immediately associate them with different accounts.
Yeah, exactly. Commonly, the salts are stored right next to the hashes in the DB, because they serve their purpose even if the attacker knows what the salts are. By using a different salt for every password, the attacker needs to execute a full "guess, hash, compare, repeat" attack on each user, as opposed to "guess, hash, compare against all user passwords, repeat" on the entire database.
You can also have a system salt (sometimes called a pepper) that is not stored with the database, so that if someone accesses the database they have to guess the password plus a second secret they hopefully did not obtain via the same penetration.
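A minimal sketch of the scheme described above, using PBKDF2 from the standard library (the pepper value and iteration count are illustrative, not recommendations):

```python
import hashlib
import hmac
import os

# Hypothetical server-side secret kept outside the database.
PEPPER = b"server-side-secret"

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 over password + pepper, with a per-user salt.
    return hashlib.pbkdf2_hmac("sha256", password.encode() + PEPPER,
                               salt, 100_000)

# Two users with the same password get different hashes,
# so a precomputed table can't match them both at once.
s1, s2 = os.urandom(16), os.urandom(16)
h1 = hash_password("password", s1)
h2 = hash_password("password", s2)
assert h1 != h2

# Verification recomputes with the stored salt and compares in constant time.
assert hmac.compare_digest(hash_password("password", s1), h1)
```

The salts live next to the hashes; only the pepper stays out of the database, which is exactly why the attacker needs a second compromise to attack offline.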
I'll add my opinion as a DevOps engineer, not a startup, so take it with a grain of salt.
* Kubernetes is great for a lot of things, and I think there's many use cases for it where it's the best option bar none
* Particularly once you start piling on requirements - we need logging, we need metrics, we need rolling redeployments, we need HTTPS, we need a reverse proxy, we need a load balancer, we need healthchecks. Many (not all!) of these things are what mature services want, and k8s provides a standardized way to handle them.
* K8s IS complex. I won't lie. You need someone who understands it. But I do enjoy it, and I think others do too.
* The next best alternative in my opinion (if you don't want vendor lock-in) is docker-compose. It's easy to deploy locally or on a server.
* If you use docker-compose, but you find yourself wanting more, migrating to k8s should be straightforward
So to answer your questions, I think you can adopt k8s whenever you feel like it, assuming you have the expertise and are willing to dedicate time to maintaining it. I use it in my home network with a 1-node "cluster". The biggest pitfalls are all related to vendor lock-in: managed Redis, Azure Key Vault, hyper-specific config tied to your managed k8s provider that might be tough to untangle. At the same time, you can just as easily start small with docker-compose and scale up later as needed.
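The "start small with docker-compose" path can be as little as one file; a minimal sketch with hypothetical service names and made-up credentials:

```yaml
# docker-compose.yml - illustrative two-service stack
services:
  web:
    build: .                  # your app's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

Each `services` entry here maps fairly directly onto a k8s Deployment plus Service later, which is why the migration tends to be straightforward.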
From someone who was recently tasked with "add service mesh" - make service mesh obsolete. I don't want to install a service mesh. mTLS or some other form of encryption between pods should just happen automatically. I don't want some janky-ass sidecar being injected into my pod definition a la linkerd, and now I've got people complaining that cilium's god mode is too permissive. Just have something built in, please.
Various supporting pieces for pod-to-pod mTLS are slowly being brought into the main Kubernetes project.
Take a look at https://github.com/kubernetes/enhancements/tree/master/keps/..., which is hopefully landing as alpha in Kubernetes 1.34. It lets you run a controller that issues certificates, and the certificates get automatically plumbed down into pod filesystems, and refresh is handled automatically.
Together with ClusterTrustBundles (KEP 3257), these are all the pieces that are needed for someone to put together a controller that distributes certificates and trust anchors to every pod in the cluster.
For my curiosity, what threat model is mTLS and encryption between pods driving down? Do you run untrusted workloads in your cluster and you're afraid they're going to exfil your ... I dunno, SQL login to the in-cluster Postgres?
As someone who has the same experience you described with janky sidecars blowing up normal workloads, I'm violently anti service-mesh. But, cert expiry and subjectAltName management is already hard enough, and you would want that to happen for every pod? To say nothing of the TLS handshake for every connection?
People keep talking about having been hacked, but it's honestly baffling to me.
I'm 28. I started using computers on a regular basis when I was ~9 years old, playing RuneScape. Since then, I've spent probably tens of thousands of hours on the internet - downloading torrents, signing up for sketchy Russian websites, doing online banking, testing experimental software downloaded over HTTP from a .xyz domain. I graduated high school, went to a technical college for compsci, graduated, and worked in helpdesk, desktop support, IT management, and more recently DevOps. I develop software using all sorts of package managers, and have used hundreds of thousands of unvetted software packages that arrived as dependencies.
Not once have I, or anyone I've been responsible for, been hacked. No crypto miners, no viruses, nothing. What the heck are you guys doing getting your Android phones hacked? I only use a modicum of common sense these days, so I guess I've just been lucky and have been the odd one out. I still enjoy reading HN articles about security though, so maybe I've just always been slightly more security conscious?
In any case, this is just a stream of consciousness / gut feeling comment. Don't put too much weight into it, I haven't.
The way in which getting hacked works these days is that you as the user will never know. They will just silently exfil your data, and also use you to get to others. You will be none the wiser.
If it's so secret that nobody will ever find out, then I'm okay with it.
On the other hand, it's true that some people find out their credit score is trash right before buying a house, or that their name is involved in terrorism when applying for a visa, etc.
You should not be okay with it because they will most definitely use it to exert power over you. They are not professors or space aliens that are doing it for academic curiosity. It can be the government that wants to lock up someone, either now or in the future, or anyone that wants to steal your hard-earned crypto. It is not okay. These days they will also pass it through their AI, and potentially also use it to tune their AI.
One would never know until one loses their crypto, or has the government hurting one's freedoms, possibly even via parallel construction, or gets blackmailed. If one doesn't know that their phone could've been hacked, one will be left wondering what happened.
I store every single important piece of info in my KeePass database. It stores ALL of my passwords, my SSN, credit cards, my health information, even some weird stuff like my vehicle maintenance records and whatnot. My KDBX file currently sits at 466K. Size is not a particularly compelling reason. Hate to be that guy, but if your database is much larger than that, you're probably doing it wrong.
Newer features like TOTP and passkeys are likewise not a concern for me. What did KeePassXC do when TOTP came around? They stored the relevant data in the attributes, and added a UI around it. It even works with my Steam TOTP, which is a nonstandard implementation. I haven't looked into it, but I imagine they did the same thing with passkeys. I don't see why this couldn't continue to be the paradigm they use. I don't use attributes at all - I haven't needed to, the notes section works great - but I do appreciate being able to look into the "raw data" of attributes quite easily, from within the UI.
If KeePass were being developed from scratch today, or if the developers of the various projects collectively really, really wanted to switch to a SQLite system of their own volition, then sure, SQLite. I'm not going to ask them to do that now, though.
---
On a separate note, here's an unfulfilled niche of mine, if anyone's looking for ideas. My secure password storage is a solved problem: KeePass is cross-platform, easy to use, and very secure. What remains a problem is secure notes. I want to be able to write Markdown (`.md`) documents, add photos and PDFs, then save it all to a secure, encrypted folder somewhere. It doesn't need the same security posture as KeePass, but I don't want to leak metadata like file names.
Obsidian - my current notes app - is good from a usability standpoint, but it's not exactly secure. I could pair it with VeraCrypt, but that's a pain from a usability standpoint, and I don't trust my OS to keep the mounted VeraCrypt volume contents a secret. Whatever the solution is, it must have a GPL license, or else I'm not going to trust it - from a long-term viability standpoint more than anything else.
If anyone has any suggestions here, would love to hear them.