For manual testing, QA is a separate person/team, but I'm recommending the developer observe the review in real time since they're the one "shuttling" the change into production. QA should also be adding automated tests for their manual evaluations, so yes, I'd have them cycle through the development team.
On #6, yes, I'd say squash merge so each change lands as a single commit; single commits going out would be the enforcement measure. And deploys to production occur immediately on merge to `master`, which would be blocked on tests passing, with additional pushes to that branch triggering test reruns.
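As a rough illustration, that "tests gate the merge, merge triggers the deploy" flow could look something like this as a GitHub Actions workflow. This is a hypothetical sketch; the workflow name and the `run_tests.sh`/`deploy.sh` scripts are placeholders, not from any actual repo:

```yaml
# Hypothetical CI workflow: tests run on every PR and every push to master,
# and the deploy job runs only after tests pass on master.
name: ci
on:
  push:
    branches: [master]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run_tests.sh      # placeholder test command
  deploy:
    needs: test                  # blocked on the test job passing
    if: github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh         # placeholder deploy script
```

Pairing this with a branch protection rule that only allows squash merges would give you the single-commit enforcement on top.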
I feel like building revenue should be the focus for pure SaaS founders, who can then reach out to investors to leverage growth, something that's easier to do once they've hit "proof of market". That's part of my hypothesis behind Automated Capital.
The focus on revenue is definitely a paradigm shift for the current environment. It's part of my hypothesis behind Automated Capital (http://automated.capital/).
Yeah lol he's not going to be 100% perfect. There are definitely improvements that could be made to the request parameters being sent to the OpenAI API, and I think the training data being uploaded and referenced as part of the response could be improved too; I'm just not sure how. Open to ideas!
Sure. You can just add part of the history to the prompt. Not perfect, but it can work well enough that the user can refer back to something they said in the previous question.
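Something like this is what I mean by adding history to the prompt. A minimal sketch; the function and variable names (`build_prompt`, `history`, `max_turns`) are just illustrative, not from the actual app:

```python
# Sketch: carry the last few Q/A turns into the prompt so the model can
# resolve references like "the previous question".

def build_prompt(history, new_question, max_turns=3):
    """Build a completion prompt from recent (question, answer) pairs
    plus the new question."""
    lines = []
    for q, a in history[-max_turns:]:  # keep only the most recent turns
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {new_question}")
    lines.append("A:")                 # let the model complete the answer
    return "\n".join(lines)

history = [
    ("What does the product do?", "It answers questions over your docs."),
    ("Is there an API?", "Yes, an HTTP API."),
]
print(build_prompt(history, "What did I just ask about?"))
```

Truncating to the last few turns keeps the prompt under the token limit; a fancier version could summarize older turns instead of dropping them.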
Improving the requests being made against the OpenAI API is probably the biggest thing I'd be looking for right now. I'm also considering switching from the "Q&A" API currently used on the "questions" tab over to their "search" API.