Evaluations
Run models against your data
Introducing Evaluations, a feature that lets you test and compare AI models against your own datasets.
Whether you're fine-tuning models or benchmarking performance, Oxen Evaluations simplify the process, letting you run a prompt through an entire dataset in a few clicks.
Once you're happy with the results, write the resulting dataset to a new file, another branch, or directly as a new commit.
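Conceptually, an evaluation fills a prompt template with each row's columns, sends the prompt to the chosen model, and collects the responses as a new column. The sketch below illustrates that loop; the helper names (`query_model`, `run_evaluation`) are hypothetical stand-ins, not the Oxen API:

```python
# Conceptual sketch of an evaluation run: apply a prompt template
# to every row of a dataset and collect model responses.

def query_model(prompt: str) -> str:
    # Placeholder for the real model call (e.g. GPT-4o via an API client).
    return "placeholder response"

def run_evaluation(rows: list[dict], template: str) -> list[dict]:
    results = []
    for row in rows:
        # "{question}" in the template is replaced by row["question"], etc.
        prompt = template.format(**row)
        results.append({**row, "response": query_model(prompt)})
    return results

dataset = [
    {"question": "What is the capital of France?"},
    {"question": "Who wrote Moby-Dick?"},
]
template = "Answer the question in a few words.\n\n{question}"
output = run_evaluation(dataset, template)
```

Each output row keeps the original columns and gains a `response` column, which is what gets written back to a file, branch, or commit.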
[Example evaluation runs: two GPT-4o text-to-text runs over an 11-row dataset with prompt templates such as "Answer the question in a few words.\n\n{question}" and "Answer the following question.\n\n{question}" (one completed, one running against a 5-row sample), plus two completed GPT-4o text-to-embeddings runs on the `question` column.]