Hacker News | new | past | comments | ask | show | jobs | submit | dkatz23238's comments

How does dehydrated compare to nginx-proxy with acme-companion from an operational perspective? I've been looking at other options and the best so far seems to be Caddy.


ParadeDB is working on this problem: https://github.com/paradedb/paradedb/tree/dev/pg_analytics


I think something like this for DuckDB would also be amazing!


I don't understand how a short tutorial on calling sklearn class methods constitutes a "deep dive" into ML algorithms.


That's not all. There is a "Machine Learning Mastery Series" linked at the bottom. You may think that will make you a master...


Seems to me like you have three options: keep growing 100% bootstrapped; grow to the 10k-20k MRR range and find investors to help you build a team to grow further and compete; or close the project and find better opportunities (a talented dev like you will have no problem finding one). Each of these options has expected rewards and risks, and you must weigh these alternatives against what you can currently sustain.

Option 2 will require you to focus on the 20% of tasks that will have the highest short-term impact on building traction.


Great choice. Setting up HTTPS from scratch using nginx is something you should definitely do ONCE in your life, but after that use automated tools such as nginx-proxy/acme-companion.
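For anyone who hasn't seen that pair in action, the setup is roughly the following (from memory; the domain, email, and app image are placeholders, and the upstream READMEs have the exact volume names):

```shell
# Reverse proxy that watches the Docker socket and routes to containers
docker run -d --name nginx-proxy \
  -p 80:80 -p 443:443 \
  -v certs:/etc/nginx/certs \
  -v vhost:/etc/nginx/vhost.d \
  -v html:/usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy

# Companion that obtains and renews Let's Encrypt certs for proxied containers
docker run -d --name acme-companion \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v acme:/etc/acme.sh \
  -e DEFAULT_EMAIL=you@example.com \
  nginxproxy/acme-companion

# An app container then just declares its hostname via env vars
docker run -d \
  -e VIRTUAL_HOST=app.example.com \
  -e LETSENCRYPT_HOST=app.example.com \
  myapp:latest
```

After that, new services get routing and certificates automatically just by setting those two env vars.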


Or you could use Caddy from the get-go.
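Caddy's pitch is that automatic HTTPS is the default; a minimal Caddyfile (domain and backend port are placeholders) is just:

```
app.example.com {
    reverse_proxy 127.0.0.1:8001
}
```

Certificates are obtained and renewed without any extra configuration.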


The Apache stuff I learned ~20 years ago still works, especially for not-huge sites (<600 RPS). Point being, once you get this sysadmin task done it's super repeatable (regardless of server), and moving from one server to another isn't that hard either. Dozens of personal services can run on a $5/mo VPS. My current setup is nginx terminating SSL in front of 8 backend Docker services.
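That last setup boils down to one short server block per service; a sketch of the pattern (hostname, port, and cert paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # Terminate TLS here; speak plain HTTP to the Docker-published port
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Adding a ninth service is a copy-paste of this block with a new hostname and port.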


Still running Apache for my side projects too; it's very convenient and easy to deal with.


Good pun +1 for that.

I recommend nginx or Apache for the learning exercise, then go with whatever best suits your use case. My rule is to choose whatever fits the objective without adding cognitive load. So I don't use Apache for anything, and sometimes I just use Python's built-in server. Suitability for purpose and usability are what count for me.


I am very happy with GCP. You can set up a Cloud Build trigger to build and push an image to GCR, then deploy to GKE or Cloud Run. A bit more involved than PythonAnywhere, but more flexible and arguably more powerful.
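For reference, a cloudbuild.yaml for that build/push/deploy pipeline looks roughly like this (service name and region are placeholders; check the Cloud Build docs for the current builder images):

```yaml
steps:
  # Build the container image
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA', '.']
  # Push it to the registry
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA']
  # Deploy the pushed image to Cloud Run
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: ['run', 'deploy', 'myapp',
           '--image', 'gcr.io/$PROJECT_ID/myapp:$COMMIT_SHA',
           '--region', 'us-central1']
```

Attach this to a trigger on your repo and every push builds and deploys automatically.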


What statistics/metrics are used to evaluate RAG systems? Is there any paper that systematically compares different RAG methods (chunking strategies, models, etc.)? I would assume such metrics would be similar to those used for evaluating summarization or question answering, but I am curious whether there are methods/metrics specific to RAG systems.
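Not an expert, but on the retrieval side people commonly report rank-based metrics like recall@k and MRR against a gold set of relevant passages (the generation side is usually judged separately, QA/summarization-style). A toy sketch of the retrieval metrics (function names are mine):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of gold passages that appear in the top-k retrieved results."""
    top_k = set(retrieved[:k])
    return len(top_k & set(relevant)) / len(relevant)

def mrr(retrieved, relevant):
    """Reciprocal rank of the first relevant passage (0.0 if none retrieved)."""
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0
```

Averaging these over a labeled query set lets you compare chunking strategies or embedding models head to head before even looking at generation quality.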


When I try to access the file it says "file has been deleted".


Apologies! Here is the new link: https://silver-antonetta-18.tiiny.site/


This is why they could use the help :)


It is deleted today too.


What would happen if each word is tokenized to an integer and you then generate tokens instead of characters, producing a string of coherent words instead of random strings? Maybe the answer is obvious, but not to me without diving in at a deeper level. I'd be interested to hear anyone's thoughts on this.
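The encoding step is easy to prototype, and it gives exactly the guarantee described: any sequence of valid IDs decodes to real vocabulary words, never character gibberish. A minimal sketch (function names are mine):

```python
def build_vocab(text):
    # Map each distinct whitespace-separated word to an integer ID
    words = sorted(set(text.split()))
    word_to_id = {w: i for i, w in enumerate(words)}
    id_to_word = {i: w for w, i in word_to_id.items()}
    return word_to_id, id_to_word

def encode(text, word_to_id):
    return [word_to_id[w] for w in text.split()]

def decode(ids, id_to_word):
    # Any sequence of valid IDs decodes to real words
    return " ".join(id_to_word[i] for i in ids)
```

The trade-off versus character-level generation is vocabulary size: a word-level vocab can run into hundreds of thousands of entries and cannot represent unseen words at all, which is why most modern models settle on subword schemes like BPE as a middle ground.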


