Company Description
MillionLogics, a trusted Oracle Partner, is a global IT solutions provider with offices in London, UK, and a development hub in Hyderabad, India. The company specialises in delivering smart, scalable, and future-ready IT solutions to empower businesses to evolve and lead. With expertise in Data & AI, Cloud migrations, IT consulting, and enterprise application optimisation, MillionLogics offers tailored services with a focus on client success. Supported by a team of over 50 AI & Oracle experts, MillionLogics combines cutting-edge technology with a commitment to delivering impactful outcomes. Learn more about our services on our website.
Role Overview:
As an AI Quality Analyst, you will evaluate a new personalisation feature for Gemini. You will assess how well the model uses information from your past Gemini conversations, Gmail, Google Search, and YouTube activity to make responses more relevant and helpful. This role requires a unique blend of creativity and analytical rigour. You will actively design prompts from the perspective of your own personal experiences. You will then use your analytical skills to assess the quality of the model's personalised responses, evaluating dimensions like Grounding, Integration, and Helpfulness.
Offer Details:
Pay: US$1,500 per month (net/take-home)
Contract Duration: 12 months
Number of Open Positions: 20
Key Qualifications
* French Proficiency: Ability to read and write in French with a high degree of competency, as French is the focus language for this project.
* Schedule Flexibility: Full-time availability in your local time zone is required. We are staffing a global, 24-hour operations team.
* Exceptional Analytical Thinking: Demonstrated ability to evaluate nuanced and ambiguous AI responses, specifically assessing personalisation quality.
* Creative Prompt Engineering: Experience in designing creative, multi-turn starting prompts based on personal context to thoroughly test the model's capabilities.
* Strong Evaluation Acumen: Understanding of personalisation concepts, including the ability to identify incorrect personalisation, poor inferences, and forced connections.
* Meticulous Attention to Detail: The ability to review Side-by-Side (SxS) model responses and spot subtle differences in naturalness and overnarration.
* Excellent Written Communication: Superior ability to write clear, concise, and structured rationales for model rankings, explicitly referencing specific turn numbers.
* Feedback: Ability to provide constructive feedback and detailed annotations.
* Communication: Excellent communication and collaboration skills.
* Independence: Self-motivated and able to work independently in a remote setting.
* Technical Setup: A desktop or laptop with a reliable internet connection.
Description:
In this role, you will be part of a dynamic team focused on evaluating the quality of personalised AI interactions. Your day-to-day work will involve:
* Designing and executing multi-turn conversational prompts (typically 1-5 turns) that require the AI to utilise your personal information and experiences.
* Evaluating model responses based on your intent from the starting prompt, checking if the personalisation was appropriately applied.
* Analysing responses for Grounding issues, ensuring claims about you are supported by evidence and not flawed inferences or hallucinations.
* Assessing Integration quality to ensure personal data is woven naturally into the response without robotic "overnarrating".
* Rigorously evaluating and stack-ranking two model responses side-by-side (SxS) to determine which is overall more helpful, easy to use, and enjoyable.
* Writing clear, defensible rationales for your comparisons, explicitly referencing where issues or positive aspects occurred in the conversation.
* Extracting and verifying "Debug Info" from the model to confirm that chat summaries and data sources were properly utilised.
* Maintaining strict data hygiene by deleting evaluation conversations to prevent them from polluting your future chat history.
Education & Experience
* BS/BA degree or equivalent experience in a relevant field (e.g., Policy, Law, Ethics, Linguistics, Journalism, Computer Science, or a related analytical field).
* Experience in data annotation, AI quality evaluation, content moderation, or a related role is strongly preferred.
Commitment Details:
* Commitment Required: at least 4 hours per day and a minimum of 30 hours per week, with 4 hours of overlap with PST. (Two time-commitment options are available: 30 hrs/week or 40 hrs/week.)
* Engagement type: Contractor
Evaluation Process:
* Shortlisted candidates will be sent a Job Interest Form.
* After the profile review, an assessment will be shared, which must be completed within 24 hours.
* Based on the assessment outcomes, shortlisted candidates will be contacted to discuss the pre‑onboarding requirements.
How to Apply?
Please send us your updated CV, with job ID 72062 in the email subject line.