Evaluations
Run models against your data
Evaluations let you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Evaluations simplify the process: write a prompt once and run it over every row of a dataset. Prompts can reference dataset columns with template variables such as {flower_type}, which are filled in per row.
Once you're happy with the results, write the output dataset to a new file, save it to another branch, or commit it directly to the repository.
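To make the workflow concrete, here is a minimal sketch of what a single evaluation run does conceptually, assuming a local CSV named flowers.csv with a flower_type column and the OpenAI Python client. The file names, column names, and output column are illustrative; the Evaluations feature performs these steps for you without any code.

```python
# Sketch of an evaluation run: fill a prompt template per row, query the model,
# and collect the responses as a new column. Assumes a CSV with a `flower_type`
# column and an OPENAI_API_KEY in the environment (both illustrative).
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt_template = (
    "response only with true or false\n"
    "if the {flower_type} includes rose then true else false"
)

df = pd.read_csv("flowers.csv")  # hypothetical dataset file
responses = []
for row in df.to_dict(orient="records"):
    # Substitute the row's column values into the {placeholders}
    prompt = prompt_template.format(**row)
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    responses.append(completion.choices[0].message.content)

df["is_rose"] = responses  # append the model output as a new column
df.to_csv("flowers_evaluated.csv", index=False)  # write the results to a new file
```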
For example, here are two evaluations run against the artforge repository with OpenAI/GPT-4o Mini (text to text):

- Prompt: "response only with true or false / if the {flower_type} includes rose then true else false". Completed over 51 rows, using 1,333 tokens ($0.0002) in 1 iteration.
- Prompt: "question: does {flower_type} include the word 'tulip', ONLY answer true or false, in lowercase with no additional text / answer:". Completed over a 20 row sample, using 745 tokens ($0.0001) across 12 iterations.
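Once the results look good, the output file (flowers_evaluated.csv in the sketch above) can be saved back to the repository on a new branch and committed. A rough sketch, assuming the Oxen CLI is installed and the script runs inside a local clone of the repository; the branch name and commit message are placeholders:

```python
# Sketch: publish the evaluation output on a new branch of the Oxen repo.
# Assumes the Oxen CLI is installed and this runs inside a cloned repository.
import subprocess

subprocess.run(["oxen", "checkout", "-b", "rose-eval-results"], check=True)  # create a new branch (name is illustrative)
subprocess.run(["oxen", "add", "flowers_evaluated.csv"], check=True)         # stage the results file
subprocess.run(["oxen", "commit", "-m", "Add GPT-4o Mini rose evaluation results"], check=True)
subprocess.run(["oxen", "push", "origin", "rose-eval-results"], check=True)  # push the branch to the remote
```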