Evaluations
Run models against your data
Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or measuring performance, Evaluations simplify the process, letting you quickly run a prompt through an entire dataset.
Once you're happy with the results, you can write the resulting dataset to a new file, to another branch, or directly as a new commit.
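Conceptually, an evaluation renders a prompt template for every row of a dataset, substituting column values into placeholders such as `{output}`, and collects the model's responses alongside the original data. The sketch below illustrates that loop outside the hub, assuming a local CSV dataset with an `output` column, a hypothetical prompt template, and the openai Python client; the Evaluations UI handles all of this for you.

```python
# Illustrative sketch only; the Evaluations feature runs this workflow for you on the hub.
# Assumes a CSV with an "output" column and OPENAI_API_KEY set in the environment.
import pandas as pd
from openai import OpenAI

client = OpenAI()
df = pd.read_csv("dataset.csv")  # hypothetical dataset file

prompt_template = "Summarize the following text: {output}"  # hypothetical prompt

responses = []
for _, row in df.iterrows():
    # Substitute the row's column value into the prompt template.
    prompt = prompt_template.format(output=row["output"])
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    responses.append(completion.choices[0].message.content)

# Add the model responses as a new column and save to a new file,
# ready to add and commit on whichever branch you choose.
df["model_response"] = responses
df.to_csv("dataset_with_responses.csv", index=False)
```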
Example evaluation runs with OpenAI/GPT-4o Mini: each run card shows the prompt used along with its status, row count, token usage, cost, and iteration count (e.g. completed · 100 rows · 90,747 tokens · $0.0388 · 1 iteration).