It is clear that developments in AI will continue over the coming years: fully autonomous cars are on the horizon, and there is growing use of AI-assisted technologies such as automated ‘chat bots’, behavioural algorithms designed to learn from customer behaviour and automatically anticipate or adjust settings, and even ‘smart contracts’.
Already, the use of AI technologies in business is an attractive prospect. It has the potential to cut costs and to deliver greater accuracy and efficiency. However, before deciding whether your business should jump on the AI bandwagon, spare a thought for some of the legal implications which could follow.
AI is a potential minefield of legal issues. The constant monitoring of conversations is likely to raise privacy and data protection issues. The use of algorithms to predict behaviour (e.g. the likelihood of a criminal reoffending) or to sort through potential job applicants could create discrimination issues.
In 2018, Big Brother Watch produced a briefing on the use of facial recognition technology in security and policing, suggesting that facial recognition algorithms could be biased because there is evidence that they are better at recognising some groups of faces (based on race) than others. In addition, AI algorithms are designed to learn and adapt, and that learning is based on training data. The training data used may be incomplete or unrepresentative of the group of people affected by the algorithm, which could, again, lead to biased results.
On top of that, there is the issue of transparency in an algorithm's decision-making process. Algorithms are being used for everything from selecting suitable job candidates to recommending a person to date. Obviously, the consequences of some algorithmic decisions will be more serious than others. But the main question is: how? How does the algorithm select one job candidate over another? Many algorithms are so complex that the answer is no one really knows. This could lead to difficult questions if a candidate brings a claim for, say, discrimination based on the use of the algorithm.
The use of automated technology, like driverless cars, also poses difficulties when it comes to establishing fault. Against whom would you bring a claim if a breach of your rights was caused by AI? In 2015, a robot that was part of an art installation in Switzerland bought drugs through the dark web. The Swiss police confiscated the robot (and the drugs) before returning it to its owners, with no action being taken (either against the robot or its owners).
Finally, whilst some aspects of the use of AI would potentially fall within existing laws and regulations (e.g. privacy and/or data protection), there are vast areas of potential AI use where the law has not yet evolved to cater directly for specific situations, whether by statute or case law. This creates, at best, uncertainty for companies using AI and a confusing legal landscape for compliance. It is likely to be some time before our legal system adequately catches up with the technology it is trying to regulate.
The integration of AI, in its many different forms, into everyday life and business is inevitable. Businesses simply need to appreciate that in many respects our legal system has not kept pace with these developments, and there will continue to be a degree of uncertainty as to the potential legal ramifications of using AI. In that respect, early adopters will potentially be treading new ground, and they and their lawyers will need to be cognisant of the risks they run in doing so, and of the changing legal framework within which they operate.
For further advice, please contact Samantha Woodley or a member of Birketts’ Technology Team.
The content of this article is for general information only. It is not, and should not be taken as, legal advice. If you require any further information in relation to this article please contact the author in the first instance. Law covered as at May 2019.