Artificial intelligence (AI) is no longer just a futuristic concept. It is transforming the workplace
at a rapid pace, from how companies recruit talent to how they monitor employee
productivity and deliver services. However, with the power of AI comes the responsibility to
ensure its use is both legal and ethical. Employers need to carefully navigate issues around
employee rights, data privacy, and decision-making processes.
In this article we catch up with Clare Waller, an employment specialist at Dean Wilson LLP, and Helen Vane, recruitment specialist and founder of Go Gecko Recruitment and Consortium more than recruitment. They look at how UK employment law and recruitment practices are catching up with the use of AI, and why developing a comprehensive AI policy is crucial for businesses that want to remain compliant while reaping the benefits of AI.
AI and employee surveillance: Setting clear boundaries
As more businesses adopt AI-driven surveillance to improve productivity, thought needs to be given to how this affects employees’ rights. When we talk about AI surveillance, we mean tools such as time-recording or productivity software. TMetric and RescueTime, for example, use AI to help businesses track employee productivity. These tools capture the websites and apps employees use during work hours, generate reports on time spent, and provide insights into workflow efficiency. They can automatically categorise tasks and suggest improvements based on patterns of behaviour, enhancing project management and accountability. While this is a great way to boost efficiency, businesses need to tread carefully.
Clare explains:
“Businesses need to be transparent about using these tools, and employees have to be
aware of the data that’s being collected. Under GDPR, they have the right to access their
data and may be able to object to its use if they feel their privacy is being compromised.”
Employers must strike a balance between boosting efficiency and respecting employees'
rights. This requires clear communication with staff about what is being monitored and why.
A well-structured AI policy helps avoid trust issues and potential legal challenges.
Recruitment in the age of AI: Avoiding bias and ensuring fairness
AI has revolutionised recruitment by streamlining the hiring process. Larger companies often
use AI to screen CVs, rank candidates, and even predict job performance. However, as AI is
only as good as the data it is trained on, it can perpetuate biases if the training data is
flawed, potentially leading to discriminatory outcomes.
Even smaller businesses may be using AI unknowingly—through pre-screening criteria on
platforms like LinkedIn or Indeed—to sift through applicants. This underscores the
importance of monitoring these tools for bias and ensuring human oversight remains central
to the decision-making process.
Helen adds:
“I often use AI to start off a job description because it saves time collating relevant information from company websites and job postings. We also use people-finding tools like Indeed and LinkedIn. But I’d never use it in isolation. AI can make assumptions or leave out key details, so human oversight is crucial. I also know of fantastic candidates who have been overlooked because larger firms rely on AI sifting tools.”
An AI policy should cover recruitment processes and require regular reviews to ensure fairness. Employers should also ensure that final decisions rest with human evaluators, so that a biased algorithm cannot exclude potentially strong candidates.
AI and technology in the workplace: Ensuring responsible use and oversight
As businesses increasingly integrate AI tools into their workflows, it’s important to consider
not just how AI is used, but also how employees are accessing and interacting with these
tools. Should employees use free AI resources like ChatGPT, or does the company provide
access to more advanced, paid-for AI solutions? The answer depends on the needs of the
business and the potential risks involved. Just as businesses have social media policies to
guide employees' online behaviour, clear guidelines are needed for the use of AI technology
in the workplace.
Free AI tools, while convenient, may pose risks when it comes to data security and output
reliability. For instance, employees may unknowingly input sensitive information into these
models, which could then be processed, stored, or used in ways that violate data privacy
regulations like GDPR. Employers must have a clear policy that outlines what kind of
information can and cannot be input into AI systems, whether free or paid.
Key Considerations for Using AI Tools in Client-Facing Roles:
- Free vs. Paid Resources: Should employees rely on free tools, or is it safer and more
effective to invest in paid, enterprise-grade AI solutions? Paid options often offer
enhanced security features and more reliable outputs.
- Data Privacy: Employees need to understand the risks of inputting confidential or
sensitive information into AI models. Any AI policy should include strict guidelines on
what data can be shared with these tools.
- Quality Control: While AI can streamline tasks such as content creation or contract
drafting, there must be a human review process in place to ensure accuracy and
appropriateness. This is especially critical in client-facing roles where mistakes could
harm relationships or result in legal liabilities.
Clare emphasises:
"AI is a powerful tool, but no business should rely on it entirely for client-facing work.
Mistakes can slip through, and if those mistakes affect your clients, the consequences could
be significant."
This highlights the importance of checks and balances. An AI policy should ensure that
human oversight is required, especially when final outputs are being delivered to clients.
Why your business needs an AI Policy
Just as many businesses have social media policies to guide how employees use social
platforms in a professional context, an AI policy is critical for managing how AI is used in the
workplace. Without a policy, businesses risk misuse, data breaches, reputational damage
and legal disputes. Policies also help ensure AI tools are used responsibly and transparently.
A good AI policy should cover the following areas:
- Transparency: Clearly outline how AI will be used, and communicate this to employees.
- Fairness: Ensure AI systems do not perpetuate bias, particularly in recruitment and promotion decisions.
- Legal Compliance: Ensure employee data is processed in line with GDPR requirements as well as your own Privacy Notice and Data Protection policies. There may also be issues in relation to the use of copyrighted material.
- Human Oversight: AI-driven decisions, especially those impacting employees or clients, must be reviewed by human experts.
- System Usage: Define which AI systems are approved for use, and establish guidelines for whether free or paid tools should be employed.
- Accountability: Include provisions for what happens if employees fail to comply with AI policies.
In summary: AI is transforming workplaces, but preparation is key
AI can bring tremendous benefits to businesses, whether it’s improving employee
productivity, enhancing recruitment, or delivering faster client services. However, businesses
need to ensure they are using AI responsibly and ethically, which requires clear policies and
human oversight at key decision points.
As Helen succinctly puts it:
“AI can streamline so much, but it’s important to remember that technology works best when it enhances human decision-making, not when it replaces it.”
With a strong AI policy, businesses can unlock the potential of AI while staying compliant
with legal standards, protecting employees' rights, and maintaining the trust of clients.