Spain has become reliant on an algorithm to score how likely a domestic violence victim may be abused again and what protection to provide — sometimes leading to fatal consequences.
Beyond that, I think it's purely a failure of the interviewer, not the tool. Getting rid of the tool will just leave you with shitty interviewers and put you right back in the situation you had before.
I've given plenty of algorithm-driven assessments myself, though mine are generally much shorter and the question weights much simpler (plus I know the actual reasoning behind the weights and why I'm asking each question). You can always intervene when someone is lying, redirect them, and override the algorithm, just as this Spanish policy allows. Lazy judges and police will exist with or without the tool.
It might be helpful for the tool to include a flag indicating that the interviewer considers the result unreliable due to the interviewee's evasiveness, if only to show where the problems are coming from.
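A minimal sketch of what that would look like, purely illustrative and not the actual Spanish (VioGén) system: a weighted yes/no questionnaire score, an interviewer override that beats the algorithm, and an "unreliable" flag for evasive answers. All question names, weights, and thresholds here are made up.

```python
# Hypothetical sketch, NOT the real tool: weighted questionnaire score
# with a human override and a reliability flag.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    answers: dict            # question id -> True/False
    weights: dict            # question id -> integer weight (illustrative)
    override_level: Optional[str] = None  # interviewer can force a level
    unreliable: bool = False              # interviewee seemed evasive

    def score(self) -> int:
        # Sum the weights of all "yes" answers.
        return sum(self.weights[q] for q, yes in self.answers.items() if yes)

    def risk_level(self) -> str:
        # Human judgment takes precedence over the algorithm.
        if self.override_level is not None:
            return self.override_level
        s = self.score()
        if s >= 10:
            return "high"
        if s >= 5:
            return "medium"
        return "low"

a = Assessment(
    answers={"prior_threats": True, "weapon_access": True, "escalation": False},
    weights={"prior_threats": 4, "weapon_access": 6, "escalation": 5},
)
print(a.score())       # 4 + 6 = 10
print(a.risk_level())  # "high"

# If the interviewer doubts the answers, they flag it and can override:
a.unreliable = True
a.override_level = "high"
```

The point of the `unreliable` flag is auditability: later reviewers can see which scores the interviewer themselves didn't trust, instead of the low score silently looking authoritative.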