Embed articles and throw the results in a vector database.
Throw up a search result that just uses cosine similarity on the vector search, with questionable metrics and no explanation of how anything is calculated.
Charge yearly because you know people will churn after a month or two.
Profit
- Every "AI startup" in the last 2 months
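The "recipe" above (minus the profit) really is only a few lines of code, which is the commenter's point. A minimal sketch, with a stand-in `embed()` function invented here in place of whatever embedding API such a startup would actually call:

```python
import numpy as np

def embed(texts):
    # Stand-in for a real embedding API call; returns unit-norm random
    # vectors here purely so the sketch runs end to end.
    rng = np.random.default_rng(0)
    v = rng.normal(size=(len(texts), 384))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# "Embed articles and throw the results in a vector database."
articles = [
    "Article about LLMs",
    "Article about databases",
    "Article about cooking",
]
index = embed(articles)  # the "vector database": just a matrix in memory

# "Throw up a search result that just uses cosine similarity..."
def search(query, k=2):
    q = embed([query])[0]
    scores = index @ q  # cosine similarity, since all rows are unit-norm
    top = np.argsort(scores)[::-1][:k]
    return [(articles[i], float(scores[i])) for i in top]

print(search("vector search"))
```

A real system would swap `embed()` for a model API and the matrix for an actual vector store, but the ranking logic is exactly this dot product.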
I'll play devil's advocate here - there's quite a bit of engineering surrounding these apps that can be hidden from folks on the outside looking in. Various levels of prompt engineering and in-context learning may be necessary to get optimal results, and that can mean significantly more complexity at the application level.
Every time I hear or read "prompt engineering", I can't help but cringe a bit. I'm not sure why, but it's the same reaction I would have if I heard someone say "Google search query engineering".
Comparing it to Google search: there is definitely skill involved in knowing how to google things well. We're all accustomed to googling many times a day, so I think a lot of people forget that being able to google something and get the results you want is a skill that has to be learned.
But I would never refer to being "good at writing Google search queries" as any kind of engineering. Is becoming good at searching Google really any less difficult than getting good at writing LLM prompts?
I'd love to hear the other side of the argument. How difficult is it to become good at "prompt engineering"? Why do we even call it "prompt engineering" instead of just "writing effective prompts"?
Edit: I think the main gripe I have with the term "prompt engineering" is it makes the skill of writing prompts sound a lot harder than it actually is. Maybe I'm underestimating how difficult it is to learn how to write good prompts?
IMO you're right that "prompt engineering" is a cringe-y term, because it implies what you're doing is mostly writing prompts. That said, I don't think that's what the role actually entails, any more than "backend engineering" is mostly writing SQL queries. Prompt engineering is building the systems around the prompts: writing the LangChain (or whatever) code, parsing outputs, interacting with structured DBs, message queues, etc., and occasionally writing prompts too, but that's a relatively small part.
It requires some domain-specific knowledge, e.g. chain-of-thought and retrieval-augmented generation techniques, some basic linear algebra, and keeping up to date with new models and new ways of running them (ggml? gptq? OpenAI functions vs logit masking on Llama 2?), but it's more or less backend engineering with a twist.
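To make the "systems around the prompts" point concrete, here is a minimal retrieval-augmented generation loop. The retriever, prompt template, and `llm` stub are all assumptions of mine (the comment names no particular provider); note how little of the code is the prompt itself:

```python
# Minimal RAG sketch: most of the code is plumbing, not prompt text.

def retrieve(query, docs, k=2):
    # Stand-in retriever: rank by naive word overlap instead of a real
    # vector index, just to keep the sketch self-contained.
    qwords = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(qwords & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, context_docs):
    # The prompt template is a small part of the overall system.
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

def answer(query, docs, llm=lambda prompt: "<llm response>"):
    # llm is a stub; a real system would call a model API here, then
    # parse and validate the output before returning it.
    return llm(build_prompt(query, retrieve(query, docs)))

docs = ["LangChain wires prompts to tools.", "GGML runs models on CPU."]
print(answer("What does LangChain do?", docs))
```

The parsing, validation, and retrieval layers around `build_prompt` are where the backend-engineering work described above actually lives.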
I've seen some of the more serious people switch to the term "Applied AI", which I think captures the role a lot better. I've also seen a decent number of grifter types call themselves "prompt engineers" when what they mean is that they type prompts into ChatGPT's UI, which I think is part of what drives the cringe at the phrase "prompt engineer", and probably some of the movement away from the term among engineers.
This is a great recap of the present role. I have been following a conversation around "AI Engineer" as a label (although I like Applied AI better) from the Latent Space podcast team; there is a fair amount of meme hype cycle in it, but it is also active and actionable. They are setting up a conference in October, and you can see the discussion unfold around the blog post below on X-itter.
They also have a great Slack community and are rapidly turning out features. Everything I have seen suggests that they are competent and committed to the mission of making it easier to do good science.
It is easy to be cynical about the gold rush, but don’t throw the baby out with the bathwater.