Alexa…what are the data privacy risks posed by machine learning?

21st January 2020 Media Law

Throughout history, technological progress has been greeted with a mixture of acclaim and alarm. Back in the 19th century, the development of textile machinery sparked the so-called “Luddite rebellion”, a furore prompted by concerns about what the new tools might mean for the people who worked in the industry.

Over time, of course, automation has moved far beyond the cotton mills. Nowadays, far more powerful and portable devices have become commonplace - in our places of work, in our pockets and around the cities in which we live. Most fascinating of all is the advent of Artificial Intelligence (AI): computers programmed to 'learn' from their environment and thereby achieve the tasks for which they were created.

The Luddites might have asked whether AI is a “good” or “bad” thing. Of course, the question misses the point. Technology is, in itself, fundamentally neutral. The key issue is how it is used. That is where ethics - and the law - become as relevant to AI as the coding and data which underpin it.

It's one of the topics for discussion at a major international conference which I'm attending this week in Brussels. 'Data Protection and AI' has been organised by CPDP, a non-profit organisation at the cutting edge of developments in privacy and data protection. The conference schedule hints at the topics exercising the minds of those involved, with sessions devoted to discussions about the impact of AI on law enforcement, Uber, dating apps and even children's toys.

AI is not a glimpse of the future but already part of everyday life. Only last year, the High Court ruled on the case of a shopper who argued that his privacy had been infringed by South Wales Police's use of facial recognition technology. I share the views of a 2018 House of Lords report which concluded that AI presents a huge opportunity to benefit our economy and security.

Computers that can work out solutions to complex problems from a base of zero knowledge can only improve our ability to confront and overcome major challenges, and to gain critical new insights in areas such as medical research.

However, many people have genuine and reasonable concerns about the proliferation of AI. Amongst other things, CPDP will consider how the technology might infringe the privacy of ordinary people and what should be done to balance the various tensions.

A study recently demonstrated that AI could predict the outcome of US Supreme Court cases with an astonishing degree of accuracy. Just think what that might mean for those pursuing cases of enormous public interest. And the future of lawyers…!

On the other hand, it's also worth considering research which noted the impossibility of controlling the tens of millions of surveillance cameras across the United States alone.

Having presented at a previous edition of the CPDP conference, I'm attending this year as a very interested observer.

We need to be open to the benefits of AI, but realistic about the challenges.

Nick McAleenan is a Partner in our Media & Reputation Management and Data Protection & Privacy department, based in Manchester.