
2021 Adaface Skill Assessments Report


Emergence of Conversational Assessments


Insights based on 1,000,000 randomized and anonymized data points


INTRODUCTION


How do you identify the best talent, the candidates who will have the most significant impact?

Five years ago, all you had to do was remove resume bias and blunt proxies to identify the skilled candidates your competitors were missing. But the task has become more challenging as the talent pool grows every quarter: with easy access to professional social channels, sourcing profiles from the ready talent pool has never been easier.

Automation is the key. But it has become tougher to automate screening and identify top talent ahead of your competition without losing accuracy. These social channels also brought their cons along with the pros: any mistake in the hiring process, or use of candidate-unfriendly technology, is promptly highlighted and actively shared with other candidates, hurting your pipeline and employer branding.

You are not alone. Working with government agencies, Fortune 500 companies, and tech giants over the past couple of years has only shown us how massive the challenge is.

It is time to focus on building candidate-friendly hiring solutions for accurate evaluations: asking the right questions, the right way (the conversational way), and doing it at scale.

Top companies are now joining the collective mission of using conversational assessments every day, and we are incredibly thankful for our clients' and candidates' immense trust. With Adaface conversational assessments, companies can collect 40% more data about their candidates than before. With 700+ customized assessments, companies can objectively test candidates for on-the-job skills and record a 75% reduction in time-to-hire without losing screening accuracy.

The massive usage of Adaface assessments gave us the chance to give back to the recruiting community. Our research team analyzed 1 million randomized and anonymized data points our chatbot, Ada, collected while screening candidates, and we are confident these insights make for an enriching read.

We would love to hear your thoughts on the report and how you build world-class, diverse teams in the current candidate-driven market. Feel free to tweet us @AdafaceAI or send us an email at ada@adaface.com.


CAMPUS HIRING


Candidates widely use Python, C++, and Java in campus assessments, with an increasing preference for JavaScript and C#.


Even though Python and C++ still dominate (since they are the first programming languages taught in academic courses), we have observed an increasing number of graduates learning JavaScript and C# each year. This trend also signals that candidates are learning web frameworks before graduating.

Campus recruiting assessments need a revision to capitalize on this trend. Instead of relying on tricky coding questions that depend on niche algorithms and have nothing to do with on-the-job skills, we recommend moving to a new pattern of campus assessments that:

  • Assesses candidates on reasoning skills. Top recommended sub-skills include logical reasoning, numerical reasoning, data interpretation, spatial reasoning, and data analysis.
  • Uses coding questions that reflect real-world work. Adaface research shows these identify top talent more accurately, with fewer false positives and false negatives.
  • Includes an adaptive assessment of programming language knowledge. Our chatbot, Ada, asks candidates which programming languages they are most familiar with and then asks on-the-job questions in those languages.
  • Supports all popular programming languages, with compilers that work at scale.

Time taken to solve a coding question varies by programming language and developer experience.


We realized that the time candidates spend answering the same coding question varies drastically with the programming language they pick and the coding experience they have in that language.

For example, Swift developers took 25% more time than Java developers. Because app developers rarely spend time writing individual functions and are not familiar with external coding editors, we recommend that recruiters use skill-assessment solutions that factor in such details and set time limits for questions accordingly.

Pushing a candidate to code a solution faster than their peers' average would drastically affect the assessment's screening accuracy.


Adaface enables us to conduct coding, aptitude and psychometric assessments seamlessly. My hiring managers have never been happier with the quality of candidates shortlisted. We were able to close 106 positions in a record time of 45 days!


AMIT KATARIA

Chief Human Resources Officer, Hanu

Language used by candidates in campus assessments



Average time taken to solve a coding question (Campus Hiring)



Average time taken to solve a coding question (Lateral Hiring)

CANDIDATE EXPERIENCE


Conversational assessments have a 4.5/5 average rating and highly favourable candidate sentiment.

There are two critical metrics (other than traditional NPS) that reflect your candidate brand for assessment technologies: average rating and sentiment analysis.

Our experts recommend two exercises for recruiters to understand whether their candidate brand is taking a hit and whether there is a need to switch to conversational assessments:

  • Use feedback forms to collect candidate ratings on deployed assessments. Ratings should be anonymized and untraceable to minimize bias; our AI bot, Ada, has this built into every chat flow. We noticed that ratings are honest when candidates explicitly know that their ratings will not affect their screening scores or profile in any way (achieved through anonymization and zero tracing). This exercise showed that conversational assessments are the most candidate-friendly assessments, with a 4.5/5 average rating.
  • Analyze candidate sentiment based on the feedback forms. Sentiment analysis provides a strong indication of how candidates perceive a company's hiring process and points out significant flaws (a minimal sketch of such an analysis follows this list). The weighted word cloud below is our experts' sentiment analysis of completely randomized and anonymized data points collected by our bot, Ada, during conversational assessments.
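
As an illustration of the second exercise, here is a minimal sketch of how a recruiting team might compute sentiment and word-cloud weights from anonymized feedback text. It assumes feedback is already exported as a plain list of strings and uses NLTK's VADER sentiment analyzer; the sample comments and variable names are hypothetical, not Adaface's internal pipeline.

```python
# Minimal sketch: sentiment and word-cloud weights from anonymized candidate feedback.
# Hypothetical sample data; not Adaface's internal pipeline.
import re
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

feedback = [  # hypothetical, anonymized comments
    "Loved the chat format, felt like talking to a human interviewer",
    "Quick and intuitive, the hints were helpful",
    "Interesting experience, very different from the usual MCQ tests",
]

sia = SentimentIntensityAnalyzer()

# Average compound sentiment across comments (-1 = very negative, +1 = very positive).
avg_sentiment = sum(sia.polarity_scores(c)["compound"] for c in feedback) / len(feedback)

# Word frequencies feed the weighted word cloud (stop words trimmed crudely here).
stop_words = {"the", "a", "to", "and", "of", "was", "very", "from", "like", "felt"}
words = [w for c in feedback for w in re.findall(r"[a-z\-]+", c.lower()) if w not in stop_words]
word_weights = Counter(words).most_common(10)

print(f"Average sentiment: {avg_sentiment:.2f}")
print("Top word-cloud terms:", word_weights)
```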

Candidate ratings and reviews depend on their self-evaluation of how they performed in the assessment.

Even after anonymizing the review collection process and improving the candidate messaging, we observed that candidate ratings depend on how candidates performed. Candidates are more or less aware of their performance by the end of the assessment, and the inherent feeling of success or failure reflects in their feedback. We observed that 82% of candidates who scored 50% or more gave four or five ratings (on a five-point scale), while 77% of candidates who scored less than 50% gave four or five ratings.


Great library of questions that are designed to test for fit rather than memorization of algorithms + Great candidate experience, the friendly chat bot emulates an in person interviewer + Great completion rates


SWAYAM NARAIN

Co-founder and CTO, Affable





Weighted word cloud of candidate feedback terms: conversation, chat, interesting, human, quick, enjoyed, cool, love, responsiveness, AI, Ada, intuitive, chatbot, user-friendly, different, understanding, useful, exciting, amazing, creative, happy, real, interactive, new, polite, fun, helpful, nice, simple, smooth, innovative, good, great experience, unique, perfect, clear, best, ease, friendly, guide, hints, comfortable, talking

PROCTORING


Non-googleable, customized questions and conversational assessments are the best weapons against cheating.


'Minimizing fraudulent activities' continues to be the most researched problem statement for assessment solution providers. So far, there is no 100% foolproof way to prevent cheating in online assessments without compromising the candidate experience.

Gone are the days when candidates were expected to install additional software to prevent cheating. Such software has loopholes that candidates cleverly exploit, and it drastically decreases candidate coverage since it may not work seamlessly on every device, removing potential candidates from the screening process.

We found that a mix of proctoring solutions on a web-based screening platform is the best way to minimize cheating without hurting the candidate experience. However, it has to be handled with care and actively iterated on.

For example, window and tab proctoring is employed by many assessment solutions (including Adaface). The idea is that a candidate's test session expires if they switch tabs or windows during the assessment. But we found that such a strict approach produced harsh results and didn't work as expected in real-world scenarios.

40% of candidates switched tabs 1-3 times, and 10% switched tabs more than three times even after warnings. Upon detailed analysis, we found that candidates have to change tabs due to entirely unexpected events like system popups, notifications, alerts, and battery issues.

What worked for Adaface customers is an innovative audit log approach that informs the recruiter when the candidate left the window, when they came back, and which part of the assessment is likely to be affected by fraudulent activity.
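
A rough sketch of what such an audit log could look like is below. It assumes the test page already reports focus-lost and focus-gained events with timestamps and the question on screen; the event format, names, and thresholds are hypothetical illustrations, not Adaface's actual implementation.

```python
# Rough sketch: turning window-focus events into a recruiter-readable audit log.
# The event format (timestamp, event kind, active question) is hypothetical.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class FocusEvent:
    at: datetime       # when the event happened
    kind: str          # "left_window" or "returned"
    question_id: str   # question on screen at that moment


def build_audit_log(events):
    """Pair each 'left_window' event with the next 'returned' event."""
    log, away_since = [], None
    for event in sorted(events, key=lambda e: e.at):
        if event.kind == "left_window":
            away_since = event
        elif event.kind == "returned" and away_since is not None:
            seconds_away = (event.at - away_since.at).total_seconds()
            log.append(
                f"Left window at {away_since.at:%H:%M:%S} during question "
                f"{away_since.question_id}, returned after {seconds_away:.0f}s"
            )
            away_since = None
    return log


# Hypothetical example: the candidate switched away during questions Q3 and Q7.
events = [
    FocusEvent(datetime(2021, 6, 1, 10, 12, 5), "left_window", "Q3"),
    FocusEvent(datetime(2021, 6, 1, 10, 12, 9), "returned", "Q3"),
    FocusEvent(datetime(2021, 6, 1, 10, 25, 40), "left_window", "Q7"),
    FocusEvent(datetime(2021, 6, 1, 10, 26, 55), "returned", "Q7"),
]
for line in build_audit_log(events):
    print(line)
```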

We strongly recommend avoiding cookie-cutter assessments and assessment platforms with a shared question library. Instead, opt for assessment solutions that create non-googleable questions tailor-made for your roles. Coupling this with conversational assessments and other proctoring suite features like IP proctoring, webcam/video proctoring, plagiarism detection, and social listening for leaked questions gives the best protection against candidate cheating, all without hurting the candidate experience.

When webcam proctoring was employed in conversational assessments, only 0.3% of candidates dropped out of the process due to its strict restrictions.



With Adaface, we were able to optimise our initial screening process by upwards of 75%, freeing up precious time for both hiring managers and our talent acquisition team alike!


BRANDON LEE

Head of People, Love, Bonito

Number of times candidates switched window/tab

ON-THE-JOB SKILLS


Conversational assessments are 4x more efficient when compared to traditional assessments.


Companies are screening candidates on four key categories: reasoning, programming languages, coding, and domain-specific skills. Evaluating five skills drawn from these categories in the same assessment gave us the best screening accuracy.

What is curious is how the five skills divide among these categories: 40% of them are reasoning, coding, or programming language skills, while the remaining 60% of skills in each assessment are domain-specific (e.g., Linux for DevOps roles, Spring for Java roles, SQL for backend roles, Excel for data analysis roles).

The result of such usage is two-fold:

  • Efficient use of the interviewer's time. Hiring managers no longer face situations where candidates who reach interviews have decent reasoning skills but not enough experience with on-the-job skills.
  • Better screening accuracy. With the analysis broken down by skill, recruiters can gauge where a candidate's strengths lie and which role they would fit best.

We recommend using assessment solutions that offer non-googleable, customized questions for all must-have skills (popular and niche) in your job descriptions, so that you get tailor-made assessments for your jobs.



Adaface helped us save around 30 hrs per week of our team’s time to help screen candidates. Using Adaface helped our admissions team to be more productive and offered a fair opportunity for candidates to showcase their work.


OLEG KUROCHKA

Entrepreneur First

Word cloud of skills assessed in Adaface assessments: Java, iOS, Project Management, SSAS, Microsoft Excel, Redux, Debugging, MongoDB, Kafka, Basics, Python, Vue, SSRS, Angular, CS Fundamentals, AWS, Data Interpretation, React, Agile, Selenium, Diagrammatic Reasoning, HTML-CSS, Numerical Reasoning, Spatial Awareness, Algorithms, C#, Android, Verbal Reasoning, JavaScript, Cassandra, Machine Learning, Probability, Logical Reasoning, Webpack, Market Analysis, Computer Networks, Spring, .NET, Laravel, Spring Boot, PowerShell, English, React Native, SQL, Jenkins, Django, Docker, C, Pattern Matching, Spatial Reasoning, Cyber Security, Spark, Elasticsearch, Git, T-SQL, Cucumber, MySQL, Azure, SSIS, Accounting, Technical Aptitude, REST, Node.js, Problem Solving, Shopify, Power BI, Oracle SQL, PowerApps, Appium, ITIL, Excel, Pandas, Hibernate, Scala, Kubernetes, Embedded C, Testing, PHP, Linux, Ruby on Rails, Kotlin, NumPy, Postgres, Communication, WordPress, SCCM, Data Modeling, JSON, Abstract Reasoning, R, Drupal, IBM Cognos, SOLID, Situational Judgement, Personality, Ruby, HTML/CSS, Code Quality, Entity Framework, Swift, C++, Azure CLI, Hadoop, Finance, Microservices, Windows Administration

TEST TAKING RATE


Test-taking rate is the same for any day of the week. The best times to send assessment invites are 7 AM-10 AM and 3 PM-6 PM.


We often heard from recruiters that they want to send assessments to candidates on weekends to achieve a higher test-taking rate. But when we engaged with candidates, we found that a significant portion do not want to schedule anything on their weekends or already have off-work activities planned. A substantial chunk of candidates prefer taking the assessment before they head off to work on a weekday.

We put this theory to the test and analyzed thousands of lateral hiring invites sent out by Adaface recruiters in the past year. The results surprised many recruiters:

  • The test-taking rate is roughly the same (~1-2% change) irrespective of the day you send the invite on!
  • The number of candidates who take the assessment on weekends is roughly offset by candidates who prefer to take assessments on weekdays.

We dug deeper to understand how quickly candidates take the assessment once email invites are sent out and at what time of day they are able to take it. Here's what we found:


37% of candidates take conversational assessments within 3 hours of sending the link.

80% of candidates take the test within 48 hours of sending the link.


And when we club that with candidate activity times, 10 AM-12 PM and 6 PM-8 PM are the peak times irrespective of the day of the week. Incorporating email click and open times, we found the best times to send email invites for senior candidates and lateral hiring are 7 AM-10 AM and 3 PM-6 PM.



Love Adaface and have recommended them to many of my tech startup friends. The ease of set up and use, breadth of assessments and dashboard make screening fast and simple!


HAYLEY BAKKER

Founder, Colibri

% of candidates who took assessments at different hours

GRANULAR SCORING


Conversational assessments collect 40% more data when compared to traditional assessments.


Conversational assessments engage candidates and collect more granular data from them, improving scoring accuracy. Our research found that for simple MCQ questions, 40% of candidates took hints and engaged with our bot, whereas in traditional MCQs candidates make mistakes and never get a chance to correct them. With Adaface intelligent hints, our bot Ada understood which candidates could pick up concepts on the job, which is a crucial indicator of job performance. Having 40% more data for every question gives rise to granular and uniform score buckets that recruiters can act on. Here's how the scoring buckets look when traditional assessments are compared with Adaface conversational assessments:

  • On average, traditional assessments bucket candidates into passed (10%) and failed (90%) and give recruiters no granular data on those who failed, since all of the failed candidates' scores are nearly zero.
  • Conversational assessments bucketed candidates into passed (22%), borderline passed (15%), borderline failed (20%), and failed (42%). The granularity in scoring helped Adaface recruiters eliminate false positives and false negatives from the screening process (a minimal bucketing sketch follows this list).
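
As an illustration, here is a minimal sketch of bucketing candidates by score into the four granular bands described above. The cut-off values are hypothetical placeholders; actual thresholds depend on the assessment and the benchmarks a team sets.

```python
# Minimal sketch: bucket candidate scores into the four granular bands.
# The cut-offs below are hypothetical placeholders, not Adaface's thresholds.
def bucket(score_pct: float, pass_mark: float = 60.0, margin: float = 10.0) -> str:
    if score_pct >= pass_mark + margin:
        return "passed"
    if score_pct >= pass_mark:
        return "borderline passed"
    if score_pct >= pass_mark - margin:
        return "borderline failed"
    return "failed"


scores = [82, 64, 55, 23]  # hypothetical candidate scores (as percentages)
print({s: bucket(s) for s in scores})
# {82: 'passed', 64: 'borderline passed', 55: 'borderline failed', 23: 'failed'}
```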


Adaface is the best assessment tool out there today: I have evaluated 5 other tools before moving to Adaface, It is the most up-to-date and modern assessment tool that I have used. It is an integrated and customizable solution with comprehensive results and our hiring managers have reported that the scores are reflective of the candidates.


SAKSHI SAINI

Lead-Strategic Hiring, Hanu

Binary traditional assessments scoring

Granular conversational assessments scoring

CODING TESTS


Fizz-buzz style coding questions are more accurate indicators of coding ability than complex algorithmic problems.


Everyone knows the classic FizzBuzz coding question, and yet most developers cannot solve it. FizzBuzz requires candidates to use only basic syntax and simple, error-free logic to finish the code, and more than 90% of candidates fail at solving such problems.

Hiring managers and recruiters using traditional assessments tend to focus on tricky, algorithmic questions that require candidates to use niche concepts that are never used on the job. At Adaface, we recommend using coding problems that are closer to on-the-job skills and are not tricky. To validate our thesis, we looked into the performance of a fizz-buzz style question used by recruiters on the Adaface platform. The results reinforce that more straightforward, on-the-job coding questions are critical to prevent hiring managers from wasting time interviewing candidates who can't code.

We found that 78% of programmers who claim hands-on coding experience on their resume cannot completely solve simple fizz-buzz style questions. Almost 59% of test-takers scored zero, while another 19% of applicants scored between 0 and 50% of the test score. Only 13% of candidates solved it with a perfect score.
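
For reference, a complete FizzBuzz solution fits in a handful of lines. This Python version is one common way to write it, not the specific question used on the Adaface platform.

```python
# Classic FizzBuzz: print 1..100, but print "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```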



Over the past year, the world-class tech team that was built around our lead engineer were all hired through Adaface. They have helped us find diamonds in the rough that we didn't have the time, skills or the resources for internally.


ESLAM & PULKIT

Founders, Invygo

Candidate score distribution for fizz buzz style coding question