There are so many failure modes in microservices that just can't happen with a local binary. Inter-service communication over the network is a big one, with a failure rate orders of magnitude higher than running a binary on the same machine. Then you have to do deploys, monitoring, etc. across the whole platform.
You will basically need to employ solutions for problems that only exist because of your microservices architecture. E.g. take reading the logs for a single request. In a monolith, you just read the logs. With the many-service approach, you need to work out how you're going to correlate that request across all of them.
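The usual answer is a correlation ID: mint an ID at the edge, forward it in a header on every internal call, and prefix every log line with it. A minimal sketch in Python; the header name `X-Request-ID` and the plain-dict "headers" are assumptions for illustration, not anyone's actual setup:

```python
import uuid

# Hypothetical header name; the important part is picking one convention
# and using it in every service.
CORRELATION_HEADER = "X-Request-ID"

def get_correlation_id(headers: dict) -> str:
    """Reuse the caller's ID if present, otherwise mint a new one."""
    return headers.get(CORRELATION_HEADER) or str(uuid.uuid4())

def log(correlation_id: str, message: str) -> str:
    """Prefix every log line with the ID so one request can be grepped across services."""
    line = f"[{correlation_id}] {message}"
    print(line)
    return line

# Service A receives a request with no ID and mints one...
cid = get_correlation_id({})
log(cid, "service-a: handling /checkout")

# ...then forwards the same ID to service B in the outgoing headers.
outgoing = {CORRELATION_HEADER: cid}
log(get_correlation_id(outgoing), "service-b: charging card")
```

With that in place, "read the logs for one request" becomes a grep for one ID instead of a manual stitching job, but note it only works if every service participates.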
Even the aforementioned network failures require a lot of design, and there's no standardization. Does the calling service retry? Does the callee have a durable queue and pick back up? What happens if a call/message gets 'too old'?
Also, from the other end, command line utils are typically made by entirely different people with entirely different philosophies/paradigms, so the encapsulation makes sense. That's not true when you're the one writing all the services, especially not at small-to-mid-size companies.
Plus, you already can do the single-concern thing in a monolith, just with modules/interfaces/etc.
One strategy to convince is to get someone less technical than you to sit by you while you try and trace everything from one error'd HTTP request from start to finish to diagnose the problem. If they see it takes half a day to check every call to every internal endpoint to 100% satisfy a particular request sometimes that can help.
Also sometimes they just think "this is a bunch of nerd stuff, why are you involving me?!" So it's not foolproof.
Oh, my non-technical boss agrees with me already. It's actually the engineers who've convinced themselves it's a good setup. Nice guys but very unwilling to change. Seems they're quite happy to have become 'experts' in this mess over the last 5-10 years. Almost like they're in retirement mode.
The real solution is probably to leave, but the market sucks at the moment. At least AI makes the 10-repos-per-tiny-feature thing easier.
how do you test for allergens? i did 5 years of immunotherapy shots, twice weekly at a doctors office and i had to stay 30 minutes after each shot for the anaphylaxis risk. it worked quite well but it was really inconvenient.
my allergy is triggered by dust mites and pollen. Not sure what the anit-mites component is in a healthy sinus cavity, but i'm sure I'm missing it. I think essentially the equivalent of wax in our ear canal. As for pollen, go figure on that one, boost my testosterone levels? I don't know.
We're working on getting there. What got us out of our seats to build this was realizing that LLMs still struggled with the fairly basic data modeling and distributed systems problems that existing payments providers pose. Any solution they came up with was only ever narrowly correct, brittle, and a nightmare to maintain
I totally agree. I actually started Wyndly (https://www.wyndly.com/) because I realized I was poisoning myself with antihistamines every single day for my allergies. I did the research, and antihistamines are know to cause anxiety, depression, weight gain, and brain fog!
I applied for YC. We got in!
And now we're chipping away at this corner of human health: treating the root cause of allergies with protein exposure therapy (allergy immunotherapy) instead of covering up allergy symptoms with ineffective (and, it turns out, dangerous) antihistamines.
Good question! It's likely because there are lots of different accents of Spanish that are distinct from each other. Our labels only capture the native language of the speaker right now, so they're all grouped together but it's definitely on our to-do list to go deeper into the sub accents of each language family!
Spanish is one of those languages I would love to see as a breakdown by country. I’m sure Chilean Spanish looks very different from Catalonian Spanish.
Not sure, could be the large number of Spanish dialects represented in the dataset, label noise, or something else. There may just be too much diversity in the class to fit neatly in a cluster.
Also, the training dataset is highly imbalanced and Spanish is the most common class, so the model predicts it as a sort of default when it isn't confident -- this could lead to artifacts in the reduced 3d space.
Then they shouldn't have offered it as a free service in the first place. It's like that discussion about how Google in all its 2-ton ADHD gorilla glory will enter an industry, offer a (near) free service or product, decimate all competition, then decide its not worth it and shutdown. Leaving a desolate crater behind of ruined businesses, angry and abandoned users.
reply