Ada can give hints to candidates when they get stuck, just like in an in-person interview. The entire chat experience is designed from the candidate's perspective to keep them comfortable so they can perform at their best. Ada provides increasingly revealing hints as needed, helping candidates get to the correct answer. This lets Ada score candidates granularly and helps you distinguish between candidates who are already familiar with the concept, candidates who can learn on the job, and candidates who are not familiar enough with the basics to pick the skill up on the job.
Ada also gives candidates the option to use extra time to solve a question. Just as an interviewer might allow extra time when a candidate has almost arrived at the solution, Ada does the same. This keeps the experience friendly for candidates and allows their skills to be scored in a non-binary fashion.
For coding questions, Ada evaluates the candidate's code against multiple test cases that cover different edge cases. Depending on how many test cases pass, the candidate receives partial or full credit for that question. Ada can also evaluate the solution's time complexity and factor it into the score for roles focused on algorithms and optimization.
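To make the idea concrete, here is a minimal sketch of test-case-based partial scoring. This is an illustration only, not Adaface's actual grading engine; the function names, the sample question (reversing a string), and the test cases are all hypothetical.

```python
# Illustrative sketch of partial-credit grading (not Adaface's real
# scoring engine): run a candidate's solution against multiple test
# cases and score by the fraction that pass.

def grade_solution(solution, test_cases):
    """Run `solution` on each (args, expected) pair and return the
    fraction of test cases that pass."""
    passed = 0
    for args, expected in test_cases:
        try:
            if solution(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash counts as a failed test case
    return passed / len(test_cases)

# Hypothetical candidate submission for a "reverse a string" question.
def candidate_reverse(s):
    return s[::-1]

# Typical inputs plus edge cases.
tests = [
    (("hello",), "olleh"),
    (("",), ""),     # empty string edge case
    (("a",), "a"),   # single character
]

score = grade_solution(candidate_reverse, tests)
print(f"Score: {score:.0%}")
```

A fully correct submission scores 100%, while a submission that handles only some edge cases still earns proportional credit, which is what makes the per-question scoring granular rather than pass/fail.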
A good assessment is one that produces a broad distribution of candidate scores, ideally as close as possible to a bell curve. Adaface assessments are designed to help exceptional candidates stand out. The scoring for each question is granular, and this granularity, multiplied across multiple questions, gives you a well-spread distribution of scores.