Evaluations
Run models against your data
Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Oxen Evaluations simplifies the process, letting you run a prompt over an entire dataset in a few clicks.
Once you're happy with the results, write the resulting dataset to a new file, another branch, or directly as a new commit.
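Conceptually, an evaluation substitutes each row's column values into the prompt template's `{placeholder}`s (such as `{prediction}` or `{path}` in the examples below) and sends the rendered prompt to the chosen model, collecting one output per row. A minimal sketch of that loop in Python, with the model call stubbed out; the column name `text` and the lowercasing stand-in model are illustrative assumptions, not the product's API:

```python
import csv
import io

def render_prompt(template: str, row: dict) -> str:
    """Substitute {column} placeholders with the row's values."""
    return template.format(**row)

def run_evaluation(template: str, rows, model):
    """Run the prompt over every row; `model` stands in for the real LLM call."""
    return [dict(row, prediction=model(render_prompt(template, row)))
            for row in rows]

# Hypothetical dataset with a single `text` column.
dataset = io.StringIO("text\nHELLO\nWORLD\n")
rows = list(csv.DictReader(dataset))

# A stub "model" that lowercases its input, mimicking the
# "convert to lowercase" example prompt.
results = run_evaluation("{text}", rows, model=lambda p: p.lower())
# results[0] == {"text": "HELLO", "prediction": "hello"}
```

In the product, the stubbed `model` call is replaced by a request to the model you select (e.g. GPT-4o), and the per-row outputs are appended to the dataset as a new column.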
Example evaluations:

- OpenAI/GPT-4o Mini, text to text. Prompt: "convert to lowercase {prediction}". Running: 5 rows, 69 tokens, $0.0000, 2 iterations.
- OpenAI/GPT-4o, image to text. Prompt: "Is a cat or dog? Respond with only one word. {path}". Completed: 5 rows, 1378 tokens, $0.0035, 1 iteration.
- OpenAI/GPT-4o, image to text. Prompt: "is it a cat or a dog? {path}". Completed: 5 rows, 1375 tokens, $0.0036, 1 iteration.