The annotation platform is where you’ll complete your practice and project tasks. Below is a guide to help you navigate the platform, understand how tasks work, and resolve common issues.
All practice tasks are completed on Handshake’s annotation platform. For project tasks, you may work on Handshake’s internal platform or directly within the partnering AI lab’s platform, depending on the project. You’ll always receive clear instructions on where to go and how to access your task dashboard before beginning a project.
You’ll receive an email invitation to access the annotation platform when it’s time to start your tasks.
To learn more about the invite process to Handshake's annotation platform, check out Invitation to Handshake's Annotation Platform.
Project Overview and Expectations
If you're accepted into the program, refer to your Disco space and weekly webinars for detailed project updates and training sessions.
In general, your tasks will include:
- Creating prompts in your field of expertise. You will not be asked to write prompts outside your domain.
- Evaluating the model’s responses, identifying inaccuracies, and adding relevant citations or research to help improve model performance.
- Participating in training sessions where the team will walk through expectations, examples, and prompt-writing best practices.
Prompt Creation Guidelines
- Submit one question per prompt.
- Prompts do not need to be long. If a clear, concise question paired with a formula effectively challenges the model, that’s a great result.
- After submission, the model will generate a response.
- You are responsible for evaluating the response for correctness.
- Do not use external AI tools to assist with evaluation. Doing so will result in removal from the program.
How is “success” or “failure” defined?
Each AI lab has its own evaluation criteria, but most apply a “wrongness threshold”: if more than a certain percentage of the answer is incorrect, the response is considered a failure.
Common model failure types include:
- Incomplete or incorrect citations
- Faulty logic or broken mathematical reasoning
- Inability to integrate knowledge across domains
- Biased or insensitive content
If you’re unsure about your specific project, we recommend reaching out to your project lead for clarification.
Handshake’s AI Platform & Access Issues
The section below applies to tasks completed on Handshake’s annotation platform. If the AI lab you’re working with uses its own platform, contact them directly for support and instructions.
Not finding your assigned project on the platform? Contact us so we can add the project.
Seeing the wrong project or unable to start a task? Contact us so we can correct your task access.
Trouble accessing the platform with your school email? Not a problem—send us a personal email address and we’ll update your account and resend the invite.
Having issues with model responses? If a model’s response is incorrect or buggy, click the Revert button to return to a previous version. This helps maintain accuracy and provides useful feedback to our engineering team.