On 8 April 2019, the High-Level Expert Group on AI (AI HLEG), set up by the European Commission, released a document entitled ‘Ethics Guidelines for Trustworthy AI’. In the document, the AI HLEG identified three elements that should be scrutinized to determine an AI system’s trustworthiness, and it should be noted that these elements need to be present throughout the AI system’s entire life cycle. AI should be:
1. lawful, complying with all applicable laws and regulations;
2. ethical, ensuring adherence to ethical principles and values; and
3. robust, from both a technical and social perspective since, even with good intentions, AI systems can cause unintentional harm.
The first required element is an obvious but essential one in any attempt to allocate liability to AI or to the actors involved. These guidelines are the first step in mapping out how AI should function in society and what expectations should be set for such technology, and they explicitly urge developers (producers) to incorporate the requirements into the production process. They are also necessary to help the EU as a whole achieve one of its main goals: harmonization.