Evaluations
Run models against your data
Evaluations let you test and compare a selection of AI models against your own datasets.
Whether you're fine-tuning models or tracking performance metrics, Oxen Evaluations simplify the process, letting you quickly run a prompt through an entire dataset.
Once you're happy with the results, write the output dataset to a new file, to another branch, or directly as a new commit.
Example evaluations from a repository's Evaluations page:

- Qwen/Qwen2.5 1.5B Instruct (text → text), run by ox 2 months ago. Prompt: "Translate to Spanish: {prompt}". Completed on a 5 row sample in 2 iterations for $0.0116.
- OpenAI/GPT-4o Mini (text → text), run by ox 3 months ago. Prompt: "Translate to french {question}". Completed on 41 rows (1,361 tokens) in 1 iteration for $0.0005.
- OpenAI/Dall-e 3 (text → image), run by ox 3 months ago. Prompt: "Generate an image with the following text {question}". Completed on a 5 row sample in 1 iteration for $0.0000.
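
Conceptually, each of these runs substitutes a column from your dataset into the prompt template's placeholder ({prompt} or {question} above) and sends the filled-in prompt to the model, one row at a time. Below is a minimal sketch of that loop in plain Python against the OpenAI API, not the Oxen Evaluations API itself; the file names, column name, and model are assumptions for illustration.

```python
# Illustrative sketch only: run a prompt template over every row of a dataset,
# the way an Evaluation does. Uses the OpenAI Python client directly; the
# dataset file, "question" column, and model name are hypothetical.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
df = pd.read_csv("questions.csv")  # hypothetical dataset with a "question" column

template = "Translate to Spanish:\n\n{question}"

results = []
for question in df["question"]:
    # Substitute the row's value into the {question} placeholder
    prompt = template.format(question=question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    results.append(response.choices[0].message.content)

# Write the model outputs alongside the inputs to a new file, which you could
# then add and commit on a branch of your repository.
df["response"] = results
df.to_csv("questions_translated.csv", index=False)
```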