Advait Ambeskar
cto @ seevee
Conversations with Machine

Conversational AI: The Right Tool for Early Screening – and Why Humans Still Decide

Introduction – résumés compress humans

In early-stage hiring, recruiters face a paradox. The job market is crowded, yet discovering true fit is harder than ever. A 2023 ManpowerGroup study found that 77% of employers worldwide struggled to recruit top talent, the highest in 17 years. Screening hundreds of résumés and scheduling introductory calls is labour-intensive, so companies often default to keyword filters and arbitrary cut-offs. That approach compresses humans into bullet points, missing nuance and perpetuating bias. We need a better first step.

As an AI engineer turned founder, I’ve spent the last decade building AI systems. At The Fitting Room and now at SeeVee, I’ve seen how well-designed AI can bridge gaps in decision-making. But I’ve also witnessed the pitfalls of automating too much.

AI shouldn’t decide who gets hired - it should surface evidence so humans decide better, faster, and more fairly.


The Job of Early Screening

Early screening exists to filter - but also to represent. Done right, it should:

  • Verify fundamentals quickly: Confirm eligibility, skills, and logistics without a manual introductory call.
  • Capture motivation and communication: Go beyond the résumé to understand why candidates want the role.
  • Standardize evaluation: Ensure two similar candidates aren’t judged on wildly different criteria.
  • Respect time: Give candidates a fair chance without imposing a scheduling burden on recruiters.

AI already assists parts of this process. NLP parses résumés and job descriptions. Automated sourcing surfaces profiles from professional networks. Chatbots handle FAQs and scheduling. Video analysis tools provide transcripts and speaking metrics. These reduce repetitive tasks - LinkedIn’s 2024 Future of Recruiting report found that 74% of HR professionals expect AI to automate much of early candidate engagement (LinkedIn, 2024).

Yet today’s platforms often feel impersonal. Sixty-three percent of candidates report poor communication after applying (CareerPlug, 2022), and algorithmic scoring has a troubled history. Amazon famously scrapped a recruiting tool after it was found to favor male candidates (Reuters, 2018).

Fairness concerns persist. A 2025 study in Humanities and Social Sciences Communications found that candidates often perceive AI-enabled interviews as less fair and less attractive, especially in low-tech industries (Springer, 2025).

So how do we harness efficiency while preserving humanity?


Why Conversational AI is Uniquely Suited

  • High-signal, low-friction conversations. Unlike static surveys, conversational AI asks adaptive follow-ups. A candidate who says “I led a blue/green cutover on ECS” can be probed on trade-offs or outcomes. Recruiters get richer evidence asynchronously - reducing time-to-first screen from days to hours.

  • Semantic understanding. Large language models map jargon to competencies (“blue/green cutover” → “zero-downtime migration”), not just match keywords. This reduces false negatives and uncovers overlooked talent.

  • Consistency at scale. Every candidate faces the same core prompts. Unilever reported saving 100,000 hours and ~$1M annually by standardizing early screening with AI video analysis (Harvard Business School, 2020).

  • Structured outputs with a human feel. Candidates experience a natural interaction, but recruiters receive structured summaries: transcripts, competency scores, justifications. This blend is vital for compliance and trust.

  • Respect for candidates. Done right, conversational AI reflects candidates’ unique skills back to them instead of reducing them to keywords. Studies show that chatbots and personalized job recommendations improve candidate satisfaction (IBM, 2021).
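The "structured outputs with a human feel" point above can be made concrete. As a minimal sketch (the field names, rubric scale, and `ScreeningSummary` shape are illustrative assumptions, not any particular vendor's schema), a recruiter-facing record might pair each competency score with the transcript evidence that justifies it:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class CompetencyScore:
    """One scored competency, with the evidence that justifies it."""
    name: str
    score: float        # hypothetical 0.0-1.0 rubric score
    justification: str  # quote or paraphrase from the transcript

@dataclass
class ScreeningSummary:
    """Recruiter-facing record of one conversational screen."""
    candidate_id: str
    transcript: list[str]
    competencies: list[CompetencyScore] = field(default_factory=list)

    def to_record(self) -> dict:
        """Flatten to a plain dict for storage or a compliance export."""
        return asdict(self)

# Illustrative example, echoing the "blue/green cutover" exchange above.
summary = ScreeningSummary(
    candidate_id="c-001",
    transcript=[
        "Q: Describe a recent deployment you led.",
        "A: I led a blue/green cutover on ECS.",
    ],
    competencies=[
        CompetencyScore(
            name="zero-downtime migration",
            score=0.8,
            justification="Described leading a blue/green cutover on ECS.",
        )
    ],
)
record = summary.to_record()
```

Keeping the justification next to the score is what makes the output auditable: a human reviewer can check each claim against the transcript rather than trusting an opaque number.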


Balancing Efficiency with Fairness

No technology is neutral. Efficiency without fairness risks reputational and legal damage. Best practice includes:

  • Hybrid human-plus-AI: Pair AI assessments with human interviews to mitigate perceptions of unfairness (Springer, 2025).
  • Transparent criteria: Share question scope and rubrics up front; many candidates’ discomfort stems from feeling unprepared.
  • Procedural justice: Let candidates clarify or add context. Perceived fairness strongly influences application continuation (Colquitt, 2001).
  • Bias monitoring: Regularly audit demographic outcomes. If selection ratios deviate (e.g., fall below the four-fifths "80% rule" used in US EEOC compliance), refine prompts, retrain models, or adjust scoring.
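The 80% (four-fifths) rule mentioned above is simple to check mechanically: compute each group's selection rate and flag the audit if any group's rate falls below 80% of the highest group's rate. A minimal sketch (group names and counts are hypothetical):

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Selection rate per group, from (selected, total) counts."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def passes_four_fifths(outcomes: dict[str, tuple[int, int]]) -> bool:
    """True if every group's rate is at least 80% of the highest rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate / best >= 0.8 for rate in rates.values())

# Hypothetical audit: group B advances at 20% vs. group A's 40%,
# so the ratio is 0.5 and the screen fails the four-fifths check.
outcomes = {"group_a": (40, 100), "group_b": (20, 100)}
print(passes_four_fifths(outcomes))  # → False
```

A failing check is a trigger for investigation - reviewing prompts, rubrics, and model behavior - not an automatic verdict that the process is discriminatory.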

Beyond Today: The Future of AI in Hiring

Conversational AI isn’t the endgame - it’s a step. Future systems may incorporate multimodal signals (tone, text, gestures) to better interpret nuance. But adoption will depend on trust. Candidates will accept AI if it helps them tell their story more fully, not if it feels like another black box.

As the EU AI Act (2024) and US EEOC guidance (2023) evolve, compliance and transparency will become non-negotiable. Companies that combine AI efficiency with human oversight and candidate respect will not only hire faster but also build stronger employer brands.


Conclusion - Humans Decide, AI Assists

Conversational AI is not about replacing judgment. Used responsibly, it removes guesswork between a résumé and a first call. It frees recruiters to focus on deep, human-centered interviews. It expands access by letting candidates respond on their schedule. And with transparent rubrics and oversight, it can make hiring faster, fairer, and more humane.

As we move toward an AI-augmented future, let’s hold to this principle:

AI shouldn’t decide who gets hired - it should surface evidence so humans decide better, faster, and more fairly.


📚 Further Reading