Can a new type of artificial intelligence make the EU Regulatory framework on AI obsolete?

 

EU Regulatory framework proposal on artificial intelligence

In April 2021, the European Commission released the new EU Regulatory framework proposal on artificial intelligence. It aims to foster excellence and trustworthiness in AI by issuing rules that protect the functioning of markets and the public sector, people's safety, and fundamental rights. It is also strengthened and supported by the European AI strategy, which proposes measures to optimise research and shape policy for AI regulation. 
Rules on AI are needed to ensure that European citizens can trust AI-based systems and to avoid undesired outcomes. It is not uncommon, indeed, for AI-based systems to take decisions based on unclear criteria, which can put specific categories of people at a disadvantage. In order to overcome these issues, the EU framework aims to:

  • address risks specifically created by AI applications;
  • propose a list of high-risk applications;
  • set clear requirements for AI systems for high-risk applications;
  • define specific obligations for AI users and providers of high-risk applications;
  • propose a conformity assessment before the AI system is put into service or placed on the market;
  • propose enforcement after such an AI system is placed in the market;
  • propose a governance structure at the European and national level.

Despite these noble premises, a new type of artificial intelligence (so-called foundation models) might make this framework obsolete. 

 

Foundation models and the future of AI

Foundation models are already used by the biggest tech companies and have the potential to become the most widely used infrastructure on which other applications are built. While, on one side, these models show high growth potential, on the other, any shortfalls present in a foundation model cascade down to the apps and tools built on top of it. A study from Stanford University has highlighted the risks of adopting these models, especially in vulnerable contexts such as the healthcare sector. 
The EU Regulatory framework on artificial intelligence, released last April, does not go into detail on how foundation models should be regulated, thereby putting the entire European AI-based infrastructure at risk. The Future of Life Institute, for example, has criticised the proposal on this ground. In order to address this potential problem, the Slovenian Council presidency introduced amendments in November 2021 to strengthen and better clarify the treatment of foundation models. 
However, the behaviour of these models should be continuously monitored, and the rules updated whenever stronger protections for citizens' safety are needed.

 

AI-SPRINT's role in the European AI context

The AI-SPRINT project could positively contribute to the debate on privacy and security issues in AI.
AI-SPRINT aims to overcome the risks linked to privacy and ethics by adding to its framework solutions for privacy-preserving data inference. This is guaranteed while exploiting the potential of AI models at the edge, providing novel and advanced mechanisms for digital sovereignty that ensure the security and privacy of data across computing infrastructures, networks, and communications. In this way, the AI-SPRINT tools orchestrate the diverse security mechanisms available at different nodes to guarantee security across the whole system. In particular, they aim to:

  • Define system-wide access control policies and an orchestration tool for deploying specific software components on the nodes that provide the necessary security mechanisms to establish end-to-end encryption which includes key generation and distribution;
  • Develop an environment for enforcing the access control policy at nodes by leveraging the TEEs provided by modern CPUs and managing the security attestation;
  • Develop mechanisms to attest that the correct OS has been booted using trusted and secure boot techniques;
  • Define a virtualised networking architecture based on the Software Defined Networking (SDN) concept to provide uniform management of the network spanning private and public segments and for routing traffic flows among the deployed AI-SPRINT components.
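The first of these goals, a system-wide access control policy plus key generation for end-to-end encryption, can be illustrated with a minimal sketch. The policy format, component and node names, and helper functions below are hypothetical illustrations rather than the actual AI-SPRINT implementation, and the key agreement is a textbook finite-field Diffie-Hellman exchange with deliberately toy parameters:

```python
import hashlib
import secrets

# Hypothetical system-wide access control policy: which software
# component may be deployed on (or talk to) which node.
POLICY = {
    "inference-service": {"edge-node-1", "cloud-node-1"},
    "monitoring-agent": {"edge-node-1"},
}

def is_allowed(component: str, node: str) -> bool:
    """Check a deployment request against the system-wide policy."""
    return node in POLICY.get(component, set())

# Key agreement for end-to-end encryption, sketched as textbook
# Diffie-Hellman. Toy group parameters: a real deployment would use a
# standardised group and an authenticated key exchange.
P, G = 23, 5

def dh_keypair():
    """Generate a (private, public) key pair in the toy group."""
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

def dh_shared_key(private: int, peer_public: int) -> str:
    """Derive a symmetric key from the Diffie-Hellman shared value."""
    shared = pow(peer_public, private, P)
    return hashlib.sha256(str(shared).encode()).hexdigest()

# Two nodes agree on the same symmetric key without ever sending it.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
assert dh_shared_key(a_priv, b_pub) == dh_shared_key(b_priv, a_pub)

print(is_allowed("inference-service", "edge-node-1"))   # True
print(is_allowed("monitoring-agent", "cloud-node-1"))   # False
```

In this sketch, the orchestrator would first consult `is_allowed` before deploying a component, then let the two endpoints derive a shared symmetric key for their channel; the actual AI-SPRINT mechanisms for key distribution and attestation are, of course, more elaborate.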

In this way, security and privacy can be guaranteed even in vulnerable situations, such as the AI-SPRINT healthcare use case, which deals with sensitive patient data.

 


References

European Commission: Amendments to the Regulatory framework proposal on artificial intelligence 

European Commission: Communication Artificial Intelligence for Europe

European Commission: Regulatory framework proposal on artificial intelligence

Science Business: A new type of powerful artificial intelligence could make EU’s new law obsolete

Stanford University: On the Opportunities and Risks of Foundation Models