The Ethical Use of AI in Marcomms Recruitment
Regular readers of our blogs will know that we’ve written extensively about the growing role of artificial intelligence (AI), not just within marketing, but across hiring too.
However, the growth of these platforms has led to concerns in some circles about potential biases. With this in mind, how can employers use AI ethically in marcomms recruitment?
AI in marcomms recruitment
We should start by saying that AI is already a key part of many hiring programmes, particularly at larger firms. Among other benefits, it allows employers to identify candidates and review their suitability for roles far more quickly than they could in the past. When these tools first emerged, they were mooted as having the potential to remove biases from the recruitment process; in reality, they’re likely having the opposite effect. As with any transformative technology, adoption comes with uncertainty around ethical responsibilities. For those seeking marcomms expertise, the question isn’t just about leveraging AI, but about doing so in a way that upholds fairness, inclusivity and human-centred values.
Several of the leading GenAI tools based on large language models (LLMs) have already come under the spotlight for the biases that can appear in their answers. The so-called ‘knowledge’ of these platforms is built on a foundation of pre-existing data and, ultimately, the attitudes of the individuals or groups feeding them that information. In practice, this means that biases are an inherent part of the platforms and are hard to avoid without careful management. Left unmanaged, they can perpetuate discrimination related to gender, race, age or disability, amongst other characteristics.
Transparency lacking
The likes of ChatGPT, Microsoft’s Copilot and DALL-E also face problems with a lack of transparency and understanding as to how they operate at a fundamental level. Put simply, few people actually understand why these platforms work in the way they do, or how they learn and take on new knowledge. While it’s still relatively early days, many tools have also displayed unexpected behaviours and can generate answers that confound even specialists. Because these systems continue to learn, they must be monitored consistently to gauge whether the tendencies they develop could lead to further biases. If this sounds slightly unsettling, it’s because it is. Using AI effectively means understanding and accepting its limitations, and evolving our expectations of, and relationships with, technology. Where once we would have assumed the information provided to us was always correct, now its accuracy must be carefully weighed and evaluated.
And that’s before we even touch on issues related to data privacy. Without strict and rigorous safeguards in place, it would be very easy for candidate data, much of it highly sensitive, to be misappropriated in ways that the individuals themselves may not even be aware of. When we consider that even sharing someone’s email address without their permission can breach GDPR, then feeding a person’s entire CV, complete with their contact details, educational history and more, into a platform without a proper process in place could be viewed as unethical, to say the least.
It’s clear that, while AI holds the potential to simplify and expedite the recruitment process, its use can create more questions than answers for employers, particularly those that take their ethical responsibilities seriously.
Using AI ethically
However, there are solutions for employers who do want to leverage these tools. Many of the organisations we speak to have concerns about handing what is, and should be, a human-led process over to technology. While AI can enhance efficiency, relying too heavily on it risks removing the human touch that is essential in marcomms recruitment, where understanding soft skills and cultural fit is critical. Rather than outsourcing entire activities to emerging platforms, firms should treat these tools as a partner and keep a steady hand on the tiller. Employers should also tell candidates and applicants when AI is being used in the hiring process and give them the opportunity to withdraw their interest should they see fit (though, in our experience, few are likely to do so).
As more organisations adopt AI to support their hiring activity, the potential for discriminatory practices will grow with it. Those with a full awareness of both the benefits and the risks of these tools will be best placed to use them effectively, fairly and transparently. And reducing bias isn’t just a legal obligation; it’s also a strategic imperative. By prioritising equality, transparency and candidate experience, marcomms employers can harness AI’s potential while fostering trust and inclusivity.
*****
If your organisation is looking to source the best marcomms talent, speak to our expert team.