Comments on the European Commission White Paper on AI

A few weeks ago, the European Commission released a White Paper on Artificial Intelligence detailing the actions the EU may take over the next few years to regulate applications of AI and to incentivize new AI innovation.

At a high level, the commission proposes splitting AI applications into two categories, high risk and low risk, determined by the sector and intended use of a given application. “High risk” applications would be subject to additional requirements around “training data, data and record-keeping, information to be provided, robustness and accuracy, and human oversight.” “Low risk” applications could voluntarily opt into this kind of certification system. In this way, the paper sets the stage for a potential certification market, an approach that has often fallen short (I’ve been reading a lot this week about the certification method under the UN’s Clean Development Mechanism).

Overall, like many of these white papers, the report on AI is light on the details of how most of this regulation would be carried out and how risk would be assessed. That said, I want to highlight a few potential blind spots in the report.

Risks of AI in and of itself vs. risks of applications

  • The report generally assesses risks by focusing on applications of AI rather than on the technologies themselves. For example, it positions AI as crucial to addressing climate change, with scant mention of the climate effects of the energy required to train AI systems.
  • One could argue there’s an opportunity for more granular risk assessment, if the proper tools were developed. However, that could fragment regulation too much, enabling firms to game how risk is assessed.
  • I do think the approach of establishing public and private systems for assessing compliance makes a lot of sense; there’s an interesting opportunity for financial auditing firms to step up here.
  • Under-resourced areas, including rural areas, may struggle to ensure that AI does not deepen inequality.

Markets and automation

  • The report mostly assesses risk as it will be realized by the end consumer, rather than the impact AI may have on other stakeholders, such as workers. An assessment of AI’s broader impacts on labor markets and shocks to existing systems should be treated as a macro-level risk. The report focuses primarily on AI as an economic opportunity and pays only passing attention to its impact on society as a whole. As with other innovations, some AI may distribute economic benefits unevenly and ultimately disenfranchise many workers. This needs to be a focus of research, especially by government or quasi-governmental entities.
  • The commission argues that small and medium-sized enterprises need access to finance to enable their use of AI. However, it largely ignores the question of how to ensure that SMEs gain the skills needed to effectively deploy or leverage AI. Moreover, without requirements for sharing new technologies, SMEs may not be equipped to apply those systems to innovative uses.
  • The report pays particular attention to the EU’s strong manufacturing sector, which may not be where the highest risk lies at this point; according to Mike Webb’s research, that risk is more likely to reside in service industries. While the EU is focusing on upskilling and reskilling workers and embedding AI in education and training, there also needs to be a focus on the risks that AI deployment poses to the workforce as a whole.

Interpretability

  • The commission argues that high risks can be largely mitigated with enough human and machine oversight. Facial recognition technology, for example, is classed as high risk, but at this point, I’d argue, its risks can only be fully mitigated by a total ban.
  • Evaluation is hard and may be impossible. It seems premature to treat objective evaluation as a panacea without clearly defining the objective functions involved; there is a clear need to figure out what the objective function of any approach to assessing the impact of AI should be.