Many devs get a productivity boost from using ChatGPT or Copilot in their day-to-day jobs, and that’s generally fine unless it runs afoul of organizational compliance rules. So, should devs be able to augment their capabilities with ChatGPT during (remote) tech interviews? Here are a few ways recruiting is changing, based on my observations while making several lower-budget hires over the past few months.
-
Take-Homes. I’ve never been a fan of take-homes (IMO it’s rude to ask for several hours of a candidate’s time), but they’re effectively useless now that ChatGPT exists, unless you plan to repeat similar vetting in person at a later stage of the interview process.
-
Cover Letters. Particularly on Upwork, I’ve seen reams of LLM-generated copy in cover letters. An LLM-polished resume tuned to win at Applicant Tracking System (ATS) bingo is one thing, but going to the trouble of writing a cover letter and then filling it with meaning-lite LLM “content” is a huge red flag.
-
Helper Interviews. There have always been candidates who bring a helper to remote interviews. The helper would typically sit off-screen and whisper (or type) answers for the interviewee to try to pick up on. Sometimes you could hear the whispering. Often the tell-tale pattern of bewildered look -> extended disfluency -> suddenly fluent answer was the giveaway. (I have no idea what these candidates intend to do if hired; presumably somebody else would do the actual work?) Anyway, with ChatGPT in the mix, the whispering stage is gone: the helper simply types the interviewer’s question into ChatGPT and the interviewee reads the answer off the screen.
-
Whiteboard Interviews. I’ve seen candidates copy/paste a question presented to them in a Google Doc and then, roughly ten seconds later, start typing a code-perfect answer from start to finish. You can of course try probing for their reasoning, but all you typically get is a recitation of the business-logic summary ChatGPT appends after the code block. You could make the question doc read-only, but there’s an “analog hole” there too: a screenshot fed into OCR gets you back to square one (a sketch of how little effort that takes is below).
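To make concrete how low that barrier is, here’s a minimal sketch of the analog hole using the tesseract.js OCR library. The file name is illustrative, and I’m assuming the output gets pasted straight into an LLM prompt; this is the general shape of the trick, not any particular tool.

```typescript
// Minimal sketch of the analog hole: OCR recovers the question text from a
// screenshot even when the source doc itself is read-only.
// Assumes tesseract.js (npm install tesseract.js); "question.png" is made up.
import Tesseract from "tesseract.js";

async function textFromScreenshot(path: string): Promise<string> {
  const { data } = await Tesseract.recognize(path, "eng");
  return data.text; // plain text, ready to paste into an LLM prompt
}

textFromScreenshot("question.png").then(console.log);
```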
-
Online Automated Coding Tests. These are generally of questionable utility anyway, unless particularly well implemented. Some detect when the test document loses focus, which flags a solo test-taker switching tabs to ask ChatGPT (or to search); that does nothing against cheaters who use a helper to do the searching (which, again, is not an uncommon setup). A sketch of that detection mechanism follows.
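For the curious, focus-loss detection boils down to a couple of standard browser events. A minimal sketch is below; the events are real browser APIs, but the reporting endpoint is hypothetical, a guess at the general shape rather than any particular platform’s implementation.

```typescript
// Minimal sketch of focus-loss detection as a test platform might implement it.
// The two browser events are standard; the reporting endpoint is hypothetical.

type ProctorEvent = { kind: string; at: number };

function report(kind: string): void {
  const event: ProctorEvent = { kind, at: Date.now() };
  // A real platform would batch and sign these; "/api/proctor-events" is made up.
  navigator.sendBeacon("/api/proctor-events", JSON.stringify(event));
}

// Fires when the tab is hidden, e.g. the candidate switches to another tab.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") report("tab-hidden");
});

// Fires when the whole window loses focus, e.g. alt-tabbing to another app.
window.addEventListener("blur", () => report("window-blur"));
```

The blind spot is obvious: neither event fires when the search happens on a second device, which is exactly the helper setup described above.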
Don’t get me wrong: it can be interesting to explicitly let candidates use their favorite search engine and ChatGPT during a simulated “real task” exercise, to see how productive they can be when augmented by a second brain. The real problem is cheaters using more sophisticated tooling than was previously available, and unfortunately they’re going to set off an arms race that makes recruiting and job-seeking worse for the rest of us. Anybody who has applied for jobs in, say, the past two years already knows what a clusterfuck ATS bingo currently is. Whatever new recruiting tools emerge to defeat LLM-augmented cheating will be yet one more hoop to jump through.