I am sure you have thought about this, but I'm curious how you are handling safeguards for crises that might require a person to intervene, like rare conditions that nonetheless require medical attention, or mental health problems that pose an imminent risk of self-harm.


It's a really serious issue, and we've tested many of those types of questions/messages, but you can also test it for yourself if you want. We state clearly that it shouldn't be used for emergency situations, and the chatbot tries to provide medical information while referring anything beyond that to a healthcare professional.

But a lack of access to medical information can be dangerous too. The first friend who started testing the WhatsApp bot lives in the Sinai desert in Egypt, where it's really hard to get to a clinic to ask questions. It's similar in rural Nebraska, where I grew up. We're taking things one step at a time and trying to provide the best service we're able to.



