I hate it when I find a cool AI project, open the GitHub repo to read the setup instructions, and see "insert OpenAI API key." Nothing makes me lose interest faster.
Unconstructive comment. OpenAI is the gold standard for an LLM; if you cared to dig deeper, you'd realize that you could incorporate another LLM with little effort.
Most projects also give you the option of providing a base URL for the API so that people can use Azure's endpoints. You can use that config option with LiteLLM or a similar proxy tool to provide an OpenAI-compatible interface for other models, whether that's a competitor like Claude or a local model like Llama or Mistral.
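To illustrate the point: once a tool exposes a base-URL setting, the request shape is the same no matter what sits behind it. A minimal sketch below builds an OpenAI-style chat request against an arbitrary endpoint; the `localhost:4000` URL is just a typical local LiteLLM proxy address, and the key and model name are placeholders, not anything from the project.

```python
# Sketch: many tools only need two knobs to retarget an "OpenAI" client --
# the base URL and the API key. Everything after that is the same request.
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, messages):
    """Build an OpenAI-style chat completion request for any compatible endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Pointing at a hypothetical local proxy instead of api.openai.com:
req = build_chat_request(
    "http://localhost:4000/v1",          # e.g. a LiteLLM proxy (assumption)
    "sk-anything",                        # placeholder key
    "claude-3-opus",                      # whatever model the proxy routes to
    [{"role": "user", "content": "hi"}],
)
```

Swapping backends is then purely a config change; the calling code never learns which model actually answered.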
The OpenAI API is simply a utility. The question is, given this utility, how does one find the right use case, structure the correct context, and build the right UX?
OP has certainly built something interesting here and added significant value on top of the base utility of the OpenAI API.
I’m moving an inordinate amount of data between the ChatGPT browser window and my IDE (much of it through copying and pasting), and this demonstrates two things: 1) ChatGPT is incredibly useful to me, and 2) the workflow UX is still terrible. I think there is room for building innovative UXs with OpenAI, and so far what I’ve seen in JetBrains and VS Code isn’t it…
Not affiliated with the project, but you could use something like OpenRouter to give users a massive list of models to choose from with fairly minimal effort.
Thanks, I need to spend some time digging into OpenRouter. The main requirement would be reliable function calling and JSON, since Plandex relies heavily on that. I'm also expecting to need some model-specific prompts, considering how much prompt iteration was needed to get things behaving how I wanted on OpenAI.
I've also looked at Together (https://www.together.ai/) for this purpose. Can anyone speak to the differences between OpenRouter and Together?
I can't speak to the differences between OpenRouter and Together, but the OpenRouter endpoint should work as a drop-in replacement for OpenAI API calls after replacing the endpoint URL and the value of $OPENAI_API_KEY. The model names may differ from other APIs, but everything else should work the same.
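Concretely, the "drop-in" claim above amounts to changing two config values and adjusting the model name. A small sketch, assuming OpenRouter's publicly documented OpenAI-compatible base URL and its provider-prefixed model naming (e.g. "openai/gpt-4"); the key values are placeholders:

```python
# Two provider configs; only the base URL, key source, and model name differ.
OPENAI = {
    "base_url": "https://api.openai.com/v1",
    "api_key_env": "OPENAI_API_KEY",
    "model": "gpt-4",
}
OPENROUTER = {
    "base_url": "https://openrouter.ai/api/v1",   # OpenRouter's OpenAI-compatible endpoint
    "api_key_env": "OPENROUTER_API_KEY",          # an OpenRouter key goes in $OPENAI_API_KEY's place
    "model": "openai/gpt-4",                      # OpenRouter prefixes models with a provider
}

def chat_endpoint(cfg: dict) -> str:
    # The path and request/response shape are identical for both providers.
    return cfg["base_url"].rstrip("/") + "/chat/completions"
```

Any code that builds requests from a config like this can switch providers without touching its request or response handling.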
Thanks, I'll give it a try. Plandex's model settings are version-controlled like everything else and play well with branches, so it will be fun to start comparing how all different kinds of models do vs. each other on longer coding tasks using a branch for each one.
For challenging tasks, I typically get code outputs from all three top models (GPT-4, Opus, and Ultra) and pick the best one. It would be nice if your tool could simplify this for me: run all three models and perhaps even facilitate some type of model interaction to produce a better outcome.
I think OpenAI is still the best of the bunch. It kind of feels like the others are there to make people realize OpenAI works the best. Maybe that will change when Gemini 1.5 is released?
Religion is a cancer that really needs to be cut free from humanity, not co-mingled with technology and the modern world. It's disgusting to consider the sheer number of people that have been killed in the name of fictitious "gods". Religion is a tool used by the wicked at almost every point in history to justify their greed, hatred, violence, and unjust actions as being the will of some higher power that only they can hear. Even now, far right nutjobs are again co-opting religion to push their vitriol into the minds of the weak.
The fact that you equate that experience with what Gen Z is experiencing kind of proves the point of the post.
Unless you went to the most expensive school possible, it would have cost you a couple hundred hours of work to afford your education. When you graduated, you were most likely able to afford to either buy a home, or live on your own using your income alone, right out of college, if not shortly afterwards.
None of those experiences are possible for the vast majority of Gen Z individuals. To pay for four years of college, Gen Z would have to work over 15,000 hours across those 4 years. That's about 73 hours a week at the average pay rate in the US. There is nowhere in the US where an average income lets someone live alone. Homes your generation bought for 20 to 50k are now in the 500k+ range.
So yea, I 100% agree that boomers will never understand.
Just as you'll never understand Boomers, I guess. None of the things you stated are true in my case. I couldn't afford a home until I was 42, and I did it with a partner's help.
We are going to hit a point soon where even creators won't be able to spot all the borrowed animation elements due to the black-box nature of generative AI models. I imagine it's going to be like the Akira bike slide but for everything.
Things lifted wholesale from training data but plastered together into new works will leave us in an uncanny state of semi-permanent déjà vu, our little pattern-matching blobs constantly chirping out subtle connections.
I've tested Binwalk on all the example files, and the BMP and TGA samples didn't show any zlib-compressed data (https://ibb.co/3vqyhcv). Can you please confirm that you used the files from the example folder (https://github.com/x011/SecretPixel/tree/main/examples)? Regarding PNG and TIFF, this is normal: they use DEFLATE compression, a variant of zlib, to compress the image data. That's part of the PNG/TIFF specifications and is used to reduce file size. Nevertheless, I've removed the compression='tiff_deflate' option for TIFF images. Thanks for the paper :)
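For anyone who wants to reproduce this kind of check without Binwalk, a minimal sketch of what such a scan does: look for common zlib stream headers and confirm a candidate actually inflates. This is an illustration only, not Binwalk's actual algorithm.

```python
import zlib

# Typical zlib stream headers: CMF byte 0x78 followed by a common FLG byte.
ZLIB_MAGICS = (b"\x78\x01", b"\x78\x9c", b"\x78\xda")

def find_zlib_streams(data: bytes) -> list:
    """Return offsets where the bytes actually inflate as a zlib stream."""
    hits = []
    for magic in ZLIB_MAGICS:
        start = 0
        while (i := data.find(magic, start)) != -1:
            try:
                # decompressobj tolerates trailing bytes after the stream end,
                # so a valid stream embedded mid-file still decompresses.
                zlib.decompressobj().decompress(data[i:])
                hits.append(i)
            except zlib.error:
                pass  # magic bytes occurred by chance; not a real stream
            start = i + 1
    return sorted(hits)
```

A hit inside a PNG or TIFF body is expected, since their image data is DEFLATE-compressed per the format specs; a hit inside a BMP or TGA, which are not normally zlib-compressed, would be the suspicious case.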