Hi IH!
I wanted to tell you about a new platform for developing and testing structured JSON prompts for LLMs (currently supporting OpenAI).
https://www.promptotype.io
Promptotype provides an extended playground for developing templated prompts that produce a structured response (both JSON output and function calling).
While developing your templated prompts, you can easily test them on different queries and save them, along with their expected responses (JSON schemas or actual values), in collections.
Finally, you can run your prompts against whole collections at once: both while iterating on your prompt template, and periodically to make sure nothing breaks.
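For anyone new to structured LLM tasks, here's a rough sketch of the kind of check a collection run performs: render a templated prompt per query, then validate the model's JSON reply against an expected shape. All names and the schema here are illustrative (this is not Promptotype's API), and a mocked reply stands in for a real OpenAI call:

```python
import json

# Hypothetical templated prompt: {query} is filled in per test case.
TEMPLATE = "Extract the product name and price from: {query}. Respond as JSON."

# Expected shape for the model's structured response (keys/types are illustrative).
EXPECTED_SCHEMA = {"name": str, "price": float}

def check_response(raw: str, schema: dict) -> bool:
    """Return True if the raw JSON reply matches the expected key/type schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return all(isinstance(data.get(key), typ) for key, typ in schema.items())

# A mocked model reply stands in for a real API call in this sketch.
mock_reply = '{"name": "Espresso Machine", "price": 129.99}'
print(check_response(mock_reply, EXPECTED_SCHEMA))                        # True
print(check_response('{"name": "Espresso Machine"}', EXPECTED_SCHEMA))    # False
```

Running a whole collection is then just applying this check across every saved query/expected-response pair.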
There are more features too, such as library management and run history, with more coming out regularly.
Are you building something using LLM structured tasks?
Feel free to comment with questions, requests and feedback!