AI Model Evaluator (LLM & Agent Systems)

Pay: $20 - $30/hour

Required Skills

LLMs
Generative AI
AI Model Evaluation
AI Benchmarking
AI Quality Assessment
Model Performance Evaluation
Prompt Response Evaluation
AI Output Analysis
Rubric-Based Scoring

Job Description

Job Title: AI Model Evaluator (LLM & Agent Systems)


Job Type: Contract (Minimum 2 weeks, with potential extension)


Location: Remote


Job Summary:

Join our customer's team as an AI Model Evaluator (LLM & Agent Systems) and play a pivotal role in shaping the future of generative AI and autonomous agents. You'll help benchmark, analyze, and assess cutting-edge AI systems in real-world scenarios, providing structured insights that drive improvements. This position is ideal for analytical professionals passionate about AI quality and real-world impact.


Key Responsibilities:

  1. Evaluate outputs from large language models (LLMs) and autonomous agent systems against defined guidelines and rubrics
  2. Review multi-step agent actions, including screenshots and reasoning traces, to determine accuracy and quality
  3. Consistently apply evaluation standards, flagging edge cases and identifying recurring patterns or failure modes
  4. Provide detailed, structured feedback to inform benchmarking, product evolution, and model refinement
  5. Participate in calibration and alignment sessions to ensure consistent application of evaluation criteria
  6. Work collaboratively to adapt to evolving scenarios and ambiguous evaluation situations
  7. Document findings and communicate insights clearly both in writing and verbally to relevant stakeholders



Required Skills and Qualifications:

  1. Demonstrated experience in LLM evaluation, AI output analysis, QA/testing, UX research, or a similar analytical role
  2. Strong background in AI model evaluation, benchmarking, and applying rubric-based scoring frameworks
  3. Exceptional attention to detail and sound judgment in ambiguous or edge-case scenarios
  4. Proficiency in English (B2+ or equivalent) with excellent written and verbal communication skills
  5. Ability to adapt quickly to evolving guidelines and work independently
  6. Comfort with remote work and a commitment of at least 20 hours per week for the initial term
  7. Analytical mindset with a focus on actionable, qualitative feedback



Preferred Qualifications:

  1. Experience with RLHF, annotation workflows, or AI benchmarking frameworks
  2. Familiarity with autonomous agent systems or workflow automation tools
  3. Background in mobile apps or digital product evaluation processes



Please note that after completing the interview process, you’ll be added to our talent pool and considered for this and other roles that match your skills.
