
LLM Arena-as-a-Judge

In this tutorial, we will explore how to implement the LLM Arena-as-a-Judge approach to evaluate large language model outputs. Instead of assigning isolated numerical scores to each response, this method performs head-to-head comparisons between outputs to determine which one is better — based on criteria you define, such as helpfulness, clarity, or tone.

We’ll use OpenAI’s GPT-4.1 and Gemini 2.5 Pro to generate responses, and leverage GPT-5 as the judge to evaluate their outputs. For demonstration, we’ll work with a simple email support scenario, where the context is as follows:

Dear Support,  
I ordered a wireless mouse last week, but I received a keyboard instead.  
Can you please resolve this as soon as possible?  
Thank you,  
John  

Installing the dependencies

!pip install deepeval google-genai openai

In this tutorial, you’ll need API keys from both OpenAI and Google.

  • Google API Key: Visit https://aistudio.google.com/apikey to generate your key.

  • OpenAI API Key: Go to https://platform.openai.com/settings/organization/api-keys and create a new key. If you’re a new user, you may need to add billing information and make a minimum payment of $5 to activate API access.

Since we’re using DeepEval for evaluation (with GPT-5 as the judge), the OpenAI API key is required.

import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass('Enter OpenAI API Key: ')
os.environ['GOOGLE_API_KEY'] = getpass('Enter Google API Key: ')
Enter OpenAI API Key: ··········
Enter Google API Key: ··········
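
If you would rather not paste the keys interactively each time, a minimal alternative sketch is shown below. It assumes you have installed python-dotenv (not part of the pip command above) and keep a local .env file containing OPENAI_API_KEY and GOOGLE_API_KEY.

# Optional: load the keys from a local .env file instead of getpass.
# Requires `pip install python-dotenv` and a .env file with the two keys.
from dotenv import load_dotenv

load_dotenv()  # populates os.environ from .env in the current directory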

Defining the context

Next, we’ll define the context for our test case. In this example, we’re working with a customer support scenario where a user reports receiving the wrong product. We’ll create a context_email containing the original message from the customer and then build a prompt to generate a response based on that context.

from deepeval.test_case import ArenaTestCase, LLMTestCase, LLMTestCaseParams
from deepeval.metrics import ArenaGEval

context_email = """
Dear Support,
I ordered a wireless mouse last week, but I received a keyboard instead.
Can you please resolve this as soon as possible?
Thank you,
John
"""

prompt = f"""
{context_email}
--------

Q: Write a response to the customer email above.
"""

OpenAI Model Response

from openai import OpenAI

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_openai_response(prompt: str, model: str = "gpt-4.1") -> str:
    response = openai_client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content

openAI_response = get_openai_response(prompt=prompt)
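
Before moving on, it can help to eyeball the generated reply; a quick print is enough. The alternative model name in the commented line is just an example of using the function's model parameter, not something the tutorial depends on.

# Optional sanity check of GPT-4.1's reply before evaluation.
print(openAI_response)

# The model argument is configurable, e.g. (if you have access to it):
# alt_response = get_openai_response(prompt=prompt, model="gpt-4o")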

Gemini Model Response

from google import genai

gemini_client = genai.Client()  # reads GOOGLE_API_KEY from the environment

def get_gemini_response(prompt: str, model: str = "gemini-2.5-pro") -> str:
    response = gemini_client.models.generate_content(
        model=model,
        contents=prompt
    )
    return response.text

geminiResponse = get_gemini_response(prompt=prompt)
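
A note on authentication: genai.Client() picks up the GOOGLE_API_KEY we set earlier from the environment; passing the key explicitly (shown as a comment below) is an equivalent option. Gemini's reply is fairly long here, so the preview prints only the first part.

# Equivalent explicit form (optional):
# gemini_client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])

# Preview the beginning of the reply before evaluation.
print(geminiResponse[:500])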

Defining the Arena Test Case

Here, we set up the ArenaTestCase to compare the outputs of the two models, GPT-4.1 (labeled "GPT-4") and Gemini 2.5 Pro (labeled "Gemini"), for the same input prompt. Both contestants receive the same context_email, and their generated responses are stored in openAI_response and geminiResponse for evaluation.

a_test_case = ArenaTestCase(
    contestants={
        "GPT-4": LLMTestCase(
            input="Write a response to the customer email above.",
            context=[context_email],
            actual_output=openAI_response,
        ),
        "Gemini": LLMTestCase(
            input="Write a response to the customer email above.",
            context=[context_email],
            actual_output=geminiResponse,
        ),
    },
)
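
The contestants mapping is not limited to two entries; the arena setup is designed for any number of competing outputs. As a sketch, assuming you had a reply from a third model (the placeholder string below stands in for a real API call), it would be added as one more key:

# Hypothetical third contestant; replace the placeholder with a real model's reply.
claude_response = "<reply from a hypothetical third model>"

three_way_test_case = ArenaTestCase(
    contestants={
        "GPT-4": LLMTestCase(
            input="Write a response to the customer email above.",
            context=[context_email],
            actual_output=openAI_response,
        ),
        "Gemini": LLMTestCase(
            input="Write a response to the customer email above.",
            context=[context_email],
            actual_output=geminiResponse,
        ),
        "Claude": LLMTestCase(
            input="Write a response to the customer email above.",
            context=[context_email],
            actual_output=claude_response,
        ),
    },
)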

Setting Up the Evaluation Metric

Here, we define an ArenaGEval metric named Support Email Quality. The criteria focus on empathy, professionalism, and clarity, aiming to identify the response that is understanding, polite, and concise. The judge considers the context, the input, and each model's output, using GPT-5 as the evaluator with verbose logging enabled so we can inspect its reasoning.

metric = ArenaGEval(
    name="Support Email Quality",
    criteria=(
        "Select the response that best balances empathy, professionalism, and clarity. "
        "It should sound understanding, polite, and be succinct."
    ),
    evaluation_params=[
        LLMTestCaseParams.CONTEXT,
        LLMTestCaseParams.INPUT,
        LLMTestCaseParams.ACTUAL_OUTPUT,
    ],
    model="gpt-5",
    verbose_mode=True
)
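
The same constructor can express other comparison criteria. As a sketch that reuses exactly the parameters shown above, the variant below judges which response stays most faithful to the details of the customer's email rather than how it sounds:

# A second arena metric focused on factual grounding in the context.
faithfulness_metric = ArenaGEval(
    name="Context Faithfulness",
    criteria=(
        "Select the response that most accurately reflects the details of the "
        "customer's email (a wireless mouse was ordered, a keyboard arrived) "
        "without inventing order details that were never provided."
    ),
    evaluation_params=[
        LLMTestCaseParams.CONTEXT,
        LLMTestCaseParams.INPUT,
        LLMTestCaseParams.ACTUAL_OUTPUT,
    ],
    model="gpt-5",
)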

Running the Evaluation

metric.measure(a_test_case)
**************************************************
Support Email Quality [Arena GEval] Verbose Logs
**************************************************

Criteria:
Select the response that best balances empathy, professionalism, and clarity. It should sound understanding, 
polite, and be succinct. 
 
Evaluation Steps:
[
    "From the Context and Input, identify the user’s intent, needs, tone, and any constraints or specifics to be 
addressed.",
    "Verify the Actual Output directly responds to the Input, uses relevant details from the Context, and remains 
consistent with any constraints.",
    "Evaluate empathy: check whether the Actual Output acknowledges the user’s situation/feelings from the 
Context/Input in a polite, understanding way.",
    "Evaluate professionalism and clarity: ensure respectful, blame-free tone and concise, easy-to-understand 
wording; choose the response that best balances empathy, professionalism, and succinct clarity."
] 
 
Winner: GPT-4
 
Reason: GPT-4 delivers a single, concise, and professional email that directly addresses the context (acknowledges 
receiving a keyboard instead of the ordered wireless mouse), apologizes, and clearly outlines next steps (send the 
correct mouse and provide return instructions) with a polite verification step (requesting a photo). This best 
matches the request to write a response and balances empathy and clarity. In contrast, Gemini includes multiple 
options with meta commentary, which dilutes focus and fails to provide one clear reply; while empathetic and 
detailed (e.g., acknowledging frustration and offering prepaid labels), the multi-option format and an 
over-assertive claim of already locating the order reduce professionalism and succinct clarity compared to GPT-4.

======================================================================

'GPT-4'

The evaluation results show that GPT-4.1 (labeled "GPT-4") outperformed Gemini 2.5 Pro at generating a support email that balances empathy, professionalism, and clarity. Its response stood out because it was concise, polite, and action-oriented: it apologized for the error, confirmed the issue, and clearly explained the next steps, such as sending the correct item and providing return instructions. The tone was respectful and understanding, matching the customer's need for a clear and empathetic reply. Gemini's response, while empathetic and detailed, included multiple response options and meta commentary, which reduced its clarity and professionalism. This result highlights GPT-4.1's ability to deliver focused, customer-centric communication that feels both professional and considerate.
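
As the cell output above suggests, measure() returns the winning contestant's name, so the verdict can also be captured programmatically. The reason attribute in the sketch below is an assumption based on how other DeepEval metrics expose their explanation; verify it against your installed version.

# Re-run the comparison and keep the verdict in variables.
winner = metric.measure(a_test_case)  # returns the winner's name, e.g. 'GPT-4'
print("Winner:", winner)
print("Reason:", metric.reason)       # assumed attribute, mirroring other DeepEval metrics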

Visualizing the responses in a table

import pandas as pd

pd.set_option('display.max_colwidth', None)

df = pd.DataFrame({
    "Model": ["Gemini", "OpenAI"],
    "Response": [geminiResponse, openAI_response]
})
df
Model Response
0 Gemini Of course. Here are a few options, ranging from standard and professional to more proactive.\n\n### Option 1: Standard & Professional (Most Common Choice)\n\nThis is a clear, polite, and effective response that solves the problem efficiently.\n\n**Subject: Re: Your Recent Order - Incorrect Item Received**\n\nDear [Customer Name],\n\nThank you for contacting us and for bringing this to our attention.\n\nI am very sorry to hear that you received a keyboard instead of the wireless mouse you ordered. I understand this is frustrating, and I sincerely apologize for our mistake.\n\nWe want to get the correct item to you as quickly as possible. To help me locate your order, could you please reply with your order number?\n\nOnce I have that, I will immediately ship the correct wireless mouse to you. I will also email you a pre-paid return label so you can send the keyboard back to us at no cost.\n\nWe appreciate your patience and look forward to resolving this for you.\n\nBest regards,\n\n[Your Name]\nCustomer Support Team\n\n---\n\n### Option 2: Proactive & Customer-Focused\n\nThis option goes a step further by shipping the new item immediately, showing extra trust in the customer and prioritizing speed.\n\n**Subject: Getting Your Correct Order Shipped Immediately**\n\nDear [Customer Name],\n\nThank you for your email. I am so sorry about the mix-up with your recent order. Receiving the wrong item is definitely not the experience we want for our customers, and we apologize for the inconvenience.\n\nI have already located your order based on your email address and have arranged for the correct wireless mouse to be shipped to you today via express shipping. You should receive a separate email with the new tracking information shortly.\n\nRegarding the keyboard you mistakenly received, there's no rush. I've attached a pre-paid return label to this email. Please feel free to use the original packaging to send it back to us at your convenience.\n\nAgain, we sincerely apologize for the error. If there is anything else I can help you with, please don't hesitate to ask.\n\nSincerely,\n\n[Your Name]\nCustomer Support Team\n\n---\n\n### Option 3: Concise & Friendly\n\nThis version is slightly less formal and gets straight to the point, which can be effective for some brands.\n\n**Subject: Re: Wrong item in your order**\n\nHi there,\n\nThanks for reaching out, and I'm so sorry we sent you a keyboard instead of a mouse! Let's get that fixed for you right away.\n\nCould you please reply with your order number?\n\nAs soon as I have it, I'll get the correct wireless mouse shipped out to you and send you a free return label for the keyboard.\n\nApologies again for the mix-up!\n\nAll the best,\n\n[Your Name]\nCustomer Support
1 OpenAI Subject: Re: Incorrect Item Received – Order Support\n\nDear [Customer Name],\n\nThank you for reaching out to us, and I apologize for the mix-up with your order.\n\nWe’re sorry you received a keyboard instead of the wireless mouse you ordered. To resolve this as quickly as possible, could you please reply to this email with a photo of the item you received? Once we have this, we will arrange for the correct mouse to be sent to you right away and provide instructions for returning the incorrect item.\n\nThank you for your patience, and we look forward to resolving this for you.\n\nBest regards, \n[Your Name] \nCustomer Support Team \n[Company Name]
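
If you want to keep the side-by-side comparison for a report or a later run, pandas can write the table straight to disk; the file name below is just an example.

# Persist the comparison table (file name is arbitrary).
df.to_csv("arena_responses.csv", index=False)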