欧盟人工智能法案(中、英对照)(第110-113条、附录1-6)
Article 110 Amendment to Directive (EU) 2020/1828
In Annex I to Directive (EU) 2020/1828 of the European Parliament and of the Council (58), the following point is added:
‘(68) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj).’
第一百一十条 关于第2020/1828号指令的修正案
在欧洲议会和欧盟理事会第2020/1828号指令(58)的附件一中增加以下内容:
“(68)2024年6月13日,欧洲议会和欧盟理事会制定的第2024/1689号条例规定了人工智能的统一规则,并修订了第300/2008号、第167/2013号、第168/2013号、第2018/858号、第2018/1139号和第2019/2144号条例以及第2014/90/EU号、第2016/797号和第2020/1828号指令(《人工智能法》)(OJ L,2024/1689,2024年7月12日,ELI:http://data.europa.eu/eli/reg/2024/1689/oj)。”
Article 111 AI systems already placed on the market or put into service and general-purpose AI models already placed on the market
1. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex X that have been placed on the market or put into service before 2 August 2027 shall be brought into compliance with this Regulation by 31 December 2030.
The requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system established by the legal acts listed in Annex X to be undertaken as provided for in those legal acts and where those legal acts are replaced or amended.
2. Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), this Regulation shall apply to operators of high-risk AI systems, other than the systems referred to in paragraph 1 of this Article, that have been placed on the market or put into service before 2 August 2026, only if, as from that date, those systems are subject to significant changes in their designs. In any case, the providers and deployers of high-risk AI systems intended to be used by public authorities shall take the necessary steps to comply with the requirements and obligations of this Regulation by 2 August 2030.
3. Providers of general-purpose AI models that have been placed on the market before 2 August 2025 shall take the necessary steps in order to comply with the obligations laid down in this Regulation by 2 August 2027.
(58) Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1).
第一百一十一条 已投放到市场或投入使用的人工智能系统和已投放到市场的通用人工智能模型
1、 在不影响本法第一百一十三条第3款(a)项关于第五条的适用方式的情况下,作为根据附录十所列法案建立的大型计算机系统之组成部分的人工智能系统,在2027年8月2日之前被投放到市场或投入使用的,应当在2030年12月31日前符合本法规定。
在按照附录十所列法案的规定对依据该等法案建立的每个大型计算机系统进行评估时,以及在该等法案被替代或修订时,应当考虑本法规定的要求。
2、 在不影响本法第一百一十三条第3款(a)项所述第五条之适用的情况下,对于在2026年8月2日之前被投放到市场或投入使用的高风险人工智能系统(本条第1款所述系统除外)的运营方,只有在该等系统的设计自该日起发生重大变更时,本法方才适用。但在任何情况下,拟供公权力机关使用的高风险人工智能系统的提供者和部署者均应当在2030年8月2日前采取必要措施,使之符合本法规定的要求和义务。
3、 在2025年8月2日之前被投放到市场的通用人工智能模型的提供者应当采取必要措施,在2027年8月2日之前履行本法规定的义务。
【注释】(58)2020年11月25日,欧洲议会和欧盟理事会《关于保护消费者集体利益的代表人诉讼及废除第2009/22/EC号指令的指令》(第2020/1828号)(OJ L 409,2020年12月4日,第1页)。
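The transitional deadlines in Article 111 amount to a small date table. The sketch below is illustrative only and has no legal force; the category keys and groupings are this example's own simplification, not terms defined in the Act.

```python
from datetime import date

# Illustrative, non-authoritative summary of Article 111's transitional
# compliance deadlines. The category keys are this sketch's own labels.
ARTICLE_111_DEADLINES = {
    # AI systems that are components of Annex X large-scale IT systems,
    # placed on the market or put into service before 2 August 2027:
    "annex_x_component": date(2030, 12, 31),
    # High-risk AI systems intended to be used by public authorities:
    "public_authority_high_risk": date(2030, 8, 2),
    # General-purpose AI models placed on the market before 2 August 2025:
    "gpai_model": date(2027, 8, 2),
}


def compliance_deadline(category: str) -> date:
    """Return the Article 111 deadline for one of the sketch's categories."""
    return ARTICLE_111_DEADLINES[category]
```

Whether a concrete system falls into one of these categories is a legal question that the table above deliberately does not attempt to answer.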
Article 112 Evaluation and review
1. The Commission shall assess the need for amendment of the list set out in Annex III and of the list of prohibited AI practices laid down in Article 5, once a year following the entry into force of this Regulation, and until the end of the period of the delegation of power laid down in Article 97. The Commission shall submit the findings of that assessment to the European Parliament and the Council.
2. By 2 August 2028 and every four years thereafter, the Commission shall evaluate and report to the European Parliament and to the Council on the following:
(a) the need for amendments extending existing area headings or adding new area headings in Annex III;
(b) amendments to the list of AI systems requiring additional transparency measures in Article 50;
(c) amendments enhancing the effectiveness of the supervision and governance system.
3. By 2 August 2029 and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The report shall include an assessment with regard to the structure of enforcement and the possible need for a Union agency to resolve any identified shortcomings. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation. The reports shall be made public.
4. The reports referred to in paragraph 2 shall pay specific attention to the following:
(a) the status of the financial, technical and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;
(b) the state of penalties, in particular administrative fines as referred to in Article 99(1), applied by Member States for infringements of this Regulation;
(c) adopted harmonised standards and common specifications developed to support this Regulation;
(d) the number of undertakings that enter the market after the entry into application of this Regulation, and how many of them are SMEs.
5. By 2 August 2028, the Commission shall evaluate the functioning of the AI Office, whether the AI Office has been given sufficient powers and competences to fulfil its tasks, and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the AI Office and its enforcement competences and to increase its resources. The Commission shall submit a report on its evaluation to the European Parliament and to the Council.
6. By 2 August 2028 and every four years thereafter, the Commission shall submit a report on the review of the progress on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, and assess the need for further measures or actions, including binding measures or actions. The report shall be submitted to the European Parliament and to the Council, and it shall be made public.
7. By 2 August 2028 and every three years thereafter, the Commission shall evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of the requirements set out in Chapter III, Section 2 for AI systems other than high-risk AI systems and possibly other additional requirements for AI systems other than high-risk AI systems, including as regards environmental sustainability.
8. For the purposes of paragraphs 1 to 7, the Board, the Member States and national competent authorities shall provide the Commission with information upon its request and without undue delay.
9. In carrying out the evaluations and reviews referred to in paragraphs 1 to 7, the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources.
10. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology, the effect of AI systems on health and safety, and on fundamental rights, and in light of the state of progress in the information society.
11. To guide the evaluations and reviews referred to in paragraphs 1 to 7 of this Article, the AI Office shall undertake to develop an objective and participative methodology for the evaluation of risk levels based on the criteria outlined in the relevant Articles and the inclusion of new systems in:
(a) the list set out in Annex III, including the extension of existing area headings or the addition of new area headings in that Annex;
(b) the list of prohibited practices set out in Article 5; and
(c) the list of AI systems requiring additional transparency measures pursuant to Article 50.
12. Any amendment to this Regulation pursuant to paragraph 10, or relevant delegated or implementing acts, which concerns sectoral Union harmonisation legislation listed in Section B of Annex I shall take into account the regulatory specificities of each sector, and the existing governance, conformity assessment and enforcement mechanisms and authorities established therein.
13. By 2 August 2031, the Commission shall carry out an assessment of the enforcement of this Regulation and shall report on it to the European Parliament, the Council and the European Economic and Social Committee, taking into account the first years of application of this Regulation. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation with regard to the structure of enforcement and the need for a Union agency to resolve any identified shortcomings.
第一百一十二条 评估和审查
1、 在本法生效后,欧盟委员会应当每年评估一次是否需要修订本法附录三所列清单和第五条所规定的禁止性人工智能活动清单,直至第九十七条规定的授权期限结束。欧盟委员会应当将评估结果提交给欧洲议会和欧盟理事会。
2、 2028年8月2日之前以及此后每四年,欧盟委员会都应当评估并向欧洲议会和欧盟理事会报告以下情况:
(a)是否需要通过修正案扩展附录三所列的既有领域标题或增加新的领域标题;
(b)修订本法第五十条中需要采取额外透明度措施的人工智能系统的清单;
(c)增强监督治理体系有效性的修正案。
3、 2029年8月2日之前及此后每四年,欧盟委员会都应当向欧洲议会和欧盟理事会提交一份关于本法的评估和审查的报告。报告应当包括对执法结构的评估,以及对是否可能需要由一个欧盟机构来解决已发现缺陷的评估。根据评估结果,该报告应当在适当的情况下附上对本法的修订提案。报告应当公开。
4、 本条第2款所述报告应当特别关注以下方面:
(a)为有效执行本法规定的任务,成员国主管机关的财政、技术和人力资源状况;
(b)成员国对违反本法的行为适用的处罚措施情况,特别是第九十九条第1款所规定的行政罚款;
(c)为落实本法而制定的统一标准和通用规范的采用情况;
(d)本法实施后进入市场的企业数量,以及其中的中小企业占比。
5、 2028年8月2日之前,欧盟委员会应当评估人工智能办公室的运行情况,评估其是否被赋予足够的权力和职权以完成其任务,并评估为正确实施和执行本法,提升人工智能办公室的地位及其执法职权并增加其资源是否适当和必要。欧盟委员会应当向欧洲议会和欧盟理事会提交一份评估报告。
6、 在2028年8月2日之前及此后每四年,欧盟委员会都应当提交一份报告,审查通用人工智能模型节能开发标准化成果的完成进展,并评估是否需要采取进一步措施或行动(包括具有约束力的措施或行动)。上述报告应当提交给欧洲议会和欧盟理事会,并应当公开。
7、 在2028年8月2日之前及此后每三年,欧盟委员会都应当评估自愿性行为守则的影响和有效性;该等守则旨在促进将本法第三章第二节规定的要求适用于高风险人工智能系统以外的人工智能系统,并可能促进对该等系统适用其他附加要求,包括环境可持续性方面的要求。
8、 为本条第1款至第7款之目的,人工智能委员会、各成员国和成员国主管机关应当根据欧盟委员会的要求,尽快向欧盟委员会提供信息。
9、 在进行本条第1款至第7款项下评估和审查时,欧盟委员会应当考虑人工智能委员会、欧洲议会、欧盟理事会和其他相关机构或来源的立场和调查结果。
10、 特别是考虑到技术的发展、人工智能系统对健康和安全以及基本权利的影响,并根据信息社会的进步状况,欧盟委员会应当在必要时提交适当的提案以修订本法。
11、 为指导本条第1款至第7款项下的评估和审查,人工智能办公室应当着手制定一套客观和参与式的方法,用于根据相关条款所列标准评估风险水平,以及评估是否将新系统纳入:
(a)附录三中的清单,包括在该附录中扩展现有的领域标题或添加新的领域标题;
(b)本法第五条项下的禁止类行为清单;和
(c)根据第五十条规定,需要采取附加透明度措施的人工智能系统清单。
12、 依据本条第10款对本法作出的任何修订,或任何相关的授权法案或实施法案,凡涉及附录一B部分所列的行业性欧盟统一立法的,应当考虑每个行业的监管特殊性,以及其中已建立的现有治理、合格评定和执法机制及主管机关。
13、 欧盟委员会应当在2031年8月2日之前,在考虑本法适用最初几年情况的基础上,对本法的执行情况进行评估,并向欧洲议会、欧盟理事会以及欧洲经济和社会委员会报告。根据评估结果,该报告应当视情况附上一份关于本法执行结构以及是否需要由一个欧盟机构解决已发现缺陷的修订提案。
Article 113 Entry into force and application
This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
It shall apply from 2 August 2026. However:
(a) Chapters I and II shall apply from 2 February 2025;
(b) Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, with the exception of Article 101;
(c) Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027.
This Regulation shall be binding in its entirety and directly applicable in all Member States.
第一百一十三条 生效和施行
本法自在《欧盟官方公报》上公布后的第二十日起生效。
本法自2026年8月2日起开始施行,但下列条款除外:
(a) 本法第一章和第二章自2025年2月2日起施行;
(b) 本法第三章第四节、第五章、第七章和第十二章以及第七十八条自2025年8月2日起施行,但第一百零一条除外;
(c) 本法第六条第1款及其相关条款规定的相应义务应当自2027年8月2日起施行。
本法应当具有全面约束力并直接适用于所有成员国。
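Article 113's staggered application dates can be read as a simple timeline lookup. The sketch below is an illustrative summary only, not legal advice; the provision labels are informal shorthand, not the Act's wording.

```python
from datetime import date

# Illustrative timeline of when parts of the Regulation begin to apply
# under Article 113 (labels are informal shorthand, not official wording).
APPLICATION_DATES = [
    (date(2025, 2, 2), "Chapters I and II"),
    (date(2025, 8, 2),
     "Chapter III Section 4, Chapters V, VII, XII (except Art. 101) and Art. 78"),
    (date(2026, 8, 2), "Regulation generally"),
    (date(2027, 8, 2), "Article 6(1) and corresponding obligations"),
]


def provisions_in_application(on: date) -> list[str]:
    """Return the provision groups already applicable on a given date."""
    return [label for start, label in APPLICATION_DATES if on >= start]
```

For example, on 1 June 2025 only Chapters I and II (including the Article 5 prohibitions) are in application; from 2 August 2027 all four groups apply.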
Done at Brussels, 13 June 2024.
2024年6月13日发布于布鲁塞尔。
For the European Parliament 欧洲议会 For the Council 欧盟理事会
The President 主席 The President 主席
R. METSOLA 梅特索拉 M. MICHEL 马蒂厄·米歇尔
(Roberta Metsola) (Mathieu Michel)
【译文的成稿,非常感谢大成律师事务所合伙人张长丹博士的支持】

ANNEX I List of Union harmonisation legislation
附录一 欧盟统一立法清单
Section A. List of Union harmonisation legislation based on the New Legislative Framework
1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24);
2. Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1);
3. Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);
4. Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251);
5. Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);
6. Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62);
7. Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164);
8. Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);
9. Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);
10. Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);
11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
A部分 基于新立法框架的欧盟统一立法清单
1、2006年5月17日,欧洲议会和欧盟理事会《关于机械以及修订第95/16/EC号指令的指令》(第2006/42/EC号)(OJ L 157,2006年6月9日,第24页);
2、2009年6月18日,欧洲议会和欧盟理事会关于玩具安全的第2009/48/EC号指令(OJ L 170,2009年6月30日,第1页);
3、2013年11月20日,欧洲议会和欧盟理事会《关于休闲船艇和个人水上运载工具并废除第94/25/EC号指令的指令》(第2013/53/EU号)(OJ L 354,2013年12月28日,第90页);
4、2014年2月26日,欧洲议会和欧盟理事会《关于统一成员国有关电梯和电梯安全部件的法律的指令》(第2014/33/EU号)(OJ L 96,2014年3月29日,第251页);
5、2014年2月26日,欧洲议会和欧盟理事会《关于统一成员国有关用于潜在爆炸性环境的设备和防护系统的法律的指令》(第2014/34/EU号)(OJ L 96,2014年3月29日,第309页);
6、2014年4月16日,欧洲议会和欧盟理事会《关于统一成员国有关在市场上提供无线电设备的法律并废除第1999/5/EC号指令的指令》(第2014/53/EU号)(OJ L 153,2014年5月22日,第62页);
7、2014年5月15日,欧洲议会和欧盟理事会《关于统一成员国有关在市场上提供压力设备的法律的指令》(第2014/68/EU号)(OJ L 189,2014年6月27日,第164页);
8、2016年3月9日,欧洲议会和欧盟理事会《关于索道安装及废除第2000/9/EC号指令的条例》(第2016/424号)(OJ L 81,2016年3月31日,第1页);
9、2016年3月9日,欧洲议会和欧盟理事会《关于个人防护设备及废除第89/686/EEC号指令的条例》(第2016/425号)(OJ L 81,2016年3月31日,第51页);
10、2016年3月9日,欧洲议会和欧盟理事会《关于燃烧气体燃料的器具及废除第2009/142/EC号指令的条例》(第2016/426号)(OJ L 81,2016年3月31日,第99页);
11、2017年4月5日,欧洲议会和欧盟理事会《关于医疗器械,修订第2001/83/EC号指令、第178/2002号和第1223/2009号条例,并废除第90/385/EEC号和第93/42/EEC号指令的条例》(第2017/745号)(OJ L 117,2017年5月5日,第1页);
12、2017年4月5日,欧洲议会和欧盟理事会《关于体外诊断医疗器械,及废除第98/79/EC号指令和欧盟理事会第2010/227/EU号决定的条例》(第2017/746号)(OJ L 117,2017年5月5日,第176页)。
Section B. List of other Union harmonisation legislation
13. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72);
14. Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52);
15. Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1);
16. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146);
17. Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44);
18. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1);
19. Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1);
20. Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in Article 2(1), points (a) and (b) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to control them remotely, are concerned.
B部分 其他欧盟统一立法清单
13、2008年3月11日,欧洲议会和欧盟理事会《关于民用航空安全领域共同规则并废除第2320/2002号条例的条例》(第300/2008号)(OJ L 97,2008年4月9日,第72页);
14、2013年1月15日,欧洲议会和欧盟理事会《关于两轮或三轮车辆和四轮车的批准和市场监督的条例》(第168/2013号)(OJ L 60,2013年3月2日,第52页);
15、2013年2月5日,欧洲议会和欧盟理事会《关于农业和林业车辆的批准和市场监督的条例》(第167/2013号)(OJ L 60,2013年3月2日,第1页);
16、2014年7月23日,欧洲议会和欧盟理事会《关于海洋设备,及废除理事会第96/98/EC号指令的指令》(第2014/90/EU号指令)(OJ L 257,2014年8月28日,第146页);
17、2016年5月11日,欧洲议会和欧盟理事会《关于欧盟内部铁路系统互操作性的指令》(第2016/797号)(OJ L 138,2016年5月26日,第44页);
18、2018年5月30日,欧洲议会和欧盟理事会《关于机动车辆及其拖车以及用于此类车辆的系统、部件和独立技术单元的批准和市场监督,以及修订第715/2007号和第595/2009号条例,并废除第2007/46/EC号指令的条例》(第2018/858号)(OJ L 151,2018年6月14日,第1页);
19、2019年11月27日,欧洲议会和欧盟理事会《关于机动车辆及其拖车以及用于此类车辆的系统、部件和独立技术单元的类型批准要求,有关其一般安全和对乘车人员及使用道路的弱势群体的保护,并修订第2018/858号条例,并废除第78/2009号、第79/2009号和第661/2009号条例,以及欧盟委员会第631/2009号、第406/2010号、第672/2010号、第1003/2010号、第1005/2010号、第1008/2010号、第1009/2010号和第19/2011号、第109/2011号、第458/2011号和第65/2012号、第130/2012号、第347/2012号和第351/2012号、第1230/2012号和第2015/166号条例的条例》(第2019/2144号)(OJ L 325,2019年12月16日,第1页);
20、2018年7月4日,欧洲议会和欧盟理事会《关于民用航空领域共同规则和建立欧盟航空安全局,以及修订第2111/2005号、第1008/2008号、第996/2010号、第376/2014号条例和第2014/30/EU号和第2014/53/EU号指令,并废除第552/2004号、第216/2008号、第3922/91号条例的条例》(第2018/1139号)(OJ L 212,2018年8月22日,第1页),仅限于涉及该条例第2条第(1)款(a)项和(b)项所述航空器(即无人驾驶航空器及其发动机、螺旋桨、零部件和用于远程控制它们的设备)的设计、生产和投放市场的范围。
ANNEX II List of criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii)
附录二 第五条第1款第(h)项第(iii)段所述刑事犯罪类型清单
Criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii):
— terrorism,
— trafficking in human beings,
— sexual exploitation of children, and child pornography,
— illicit trafficking in narcotic drugs or psychotropic substances,
— illicit trafficking in weapons, munitions or explosives,
— murder, grievous bodily injury,
— illicit trade in human organs or tissue,
— illicit trafficking in nuclear or radioactive materials,
— kidnapping, illegal restraint or hostage-taking,
— crimes within the jurisdiction of the International Criminal Court,
— unlawful seizure of aircraft or ships,
— rape,
— environmental crime,
— organised or armed robbery,
— sabotage,
— participation in a criminal organisation involved in one or more of the offences listed above.
本法第五条第1款第(h)项第(iii)段项下刑事犯罪的类型包括:
——恐怖主义犯罪,
——贩卖人口,
——对儿童的性剥削和儿童色情制品,
——非法贩运麻醉药品或精神类药物,
——非法贩运武器、弹药或爆炸物,
——谋杀、严重身体伤害,
——人体器官或组织的非法交易,
——非法贩运核材料或放射性材料,
——绑架、非法拘禁或劫持人质,
——国际刑事法院管辖范围内的犯罪行为,
——非法扣押飞机或船舶,
——强奸,
——环境破坏犯罪,
——有组织或武装抢劫,
——蓄意破坏活动,
——参与涉及上述一种或多种犯罪的犯罪组织。
ANNEX III High-risk AI systems referred to in Article 6(2)
附录三 第六条第2款项下的高风险人工智能系统
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
1. Biometrics, in so far as their use is permitted under relevant Union or national law:
(a) remote biometric identification systems.
This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be;
(b) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;
(c) AI systems intended to be used for emotion recognition.
2. Critical infrastructure: AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
3. Education and vocational training:
(a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels;
(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;
(c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;
(d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.
4. Employment, workers’ management and access to self-employment:
(a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;
(b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.
5. Access to and enjoyment of essential private services and essential public services and benefits:
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;
(c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance;
(d) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.
6. Law enforcement, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural person becoming the victim of criminal offences;
(b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities as polygraphs or similar tools;
(c) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;
(d) AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;
(e) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.
7. Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law:
(a) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies as polygraphs or similar tools;
(b) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;
(c) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence;
(d) AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.
8. Administration of justice and democratic processes:
(a) AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;
(b) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.
本法第六条第2款项下的高风险人工智能系统是指被列入下列任一领域的人工智能系统:
1、生物测定技术,以相关欧盟或成员国法律允许其使用为限:
(a)远程生物识别系统。
不包括用于生物特征验证、唯一目的是确认特定自然人就是其所声称主体的人工智能系统;
(b)根据对敏感或受保护的属性或特征的推断进行生物特征分类的人工智能系统;
(c)用于情感识别的人工智能系统。
2、关键基础设施:用作关键数字基础设施、道路交通或供水、供气、供暖或供电的管理和运营中的安全组件的人工智能系统。
3、教育和职业培训:
(a)用于确定各级教育和职业培训机构的准入或录取,或将自然人分配到该等机构的人工智能系统;
(b)用于评估学习成果的人工智能系统,包括将该等成果用于指导各级教育和职业培训机构中自然人的学习过程;
(c)用于评估一个人在各级教育和职业培训机构环境下或机构内部可以接受或能够获得的适当教育水平的人工智能系统;
(d)用于在各级教育和职业培训机构环境下或机构内部监测和检测学生在考试期间的违禁行为的人工智能系统。
4、就业、员工管理和自营职业机会:
(a)用于招聘或选拔自然人的人工智能系统,特别是发布有针对性的招聘广告、分析和过滤求职申请以及评估候选人;
(b)用于做出影响工作相关关系条款的决策,促进或终止工作相关合同关系,根据个人的行为或个性特征或特点分配任务,或监测和评估此类关系中人员的表现和行为的人工智能系统。
5、获取和享受基本私人服务和基本公共服务及福利:
(a)由公权力机关或其代表用于评估自然人获得基本公共援助福利和服务(包括医疗服务)的资格,以及提供、减少、撤销或收回此类福利和服务的人工智能系统;
(b)用于评估自然人的信誉或确定其信用评分的人工智能系统,但用于检测金融欺诈的人工智能除外;
(c)用于人寿和健康保险中与自然人的风险评估和定价的人工智能系统;
(d)用于对自然人的紧急呼叫进行评估和分类,或用于调度或在紧急第一反应服务(包括警察、消防员和医疗援助)和紧急医疗患者分流系统的调度中确定优先级的人工智能系统。
6、执法,以相关欧盟或成员国法律允许其使用为限:
(a) 执法机关或其代表(或欧盟各机构为支持执法机关或其代表的执法活动而使用的),用于评估自然人成为刑事犯罪受害者的风险的人工智能系统;
(b)由执法机关或其代表(或欧盟各机构为支持执法机关或其代表的执法活动)作为测谎仪或类似工具使用的人工智能系统;
(c)执法机关或其代表(或欧盟各机构为支持执法机关或其代表的执法活动)用于支持执法机关在调查或公诉的过程中评估证据的可靠性的人工智能系统;
(d)执法机关或其代表(或欧盟各机构为支持执法机关或其代表的执法活动)用于评估自然人犯罪或再次犯罪的风险的人工智能系统,且该评估并非仅基于第2016/680号指令第3条第(4)款所述对自然人的特征分析;或用于评估自然人或群体的人格特质和特征或过去的犯罪行为的人工智能系统;
(e)执法机关或其代表(或欧盟各机构为支持执法机关或其代表的执法活动)用于支持执法机关在侦查、调查或公诉过程中对第2016/680号指令第3条第(4)款所述的自然人进行特征分析的人工智能系统。
7、移民、庇护和边境管制管理,以相关欧盟或成员国法律允许其使用为限:
(a)主管公权力机关或其代表或欧盟各机构用作测谎仪或类似工具的人工智能系统;
(b)主管公权力机关或其代表或欧盟各机构用于评估拟进入或已经进入成员国领土的自然人构成的风险(包括安全风险、非正常移民风险或健康风险)的人工智能系统;
(c)主管公权力机关或其代表或由欧盟各机构用于协助主管机关审查庇护、签证或居留许可申请,以及关于申请身份的自然人资格的相关投诉(包括对证据可靠性的相关评估)的人工智能系统;
(d)主管公权力机关或其代表或欧盟各机构在移民、庇护或边境管制管理方面用于检测、识别或确认自然人(旅行证件核查除外)的人工智能系统。
8、司法和民主程序:
(a)司法机关或其代表用于协助司法机关研究和解释事实和法律,并将法律应用于具体事实或以类似方式用于替代性争议解决的人工智能系统;
(b)用于影响选举或全民公投的结果,或影响自然人在选举或全民公投中行使投票权时的投票行为的人工智能系统。不包括自然人不会直接接触其输出的人工智能系统,例如从行政或后勤角度组织、优化或构建政治竞选活动的工具。
ANNEX IV Technical documentation referred to in Article 11(1)
附录四 第十一条第1款项下的技术文档
The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system:
1. A general description of the AI system including:
(a) its intended purpose, the name of the provider and the version of the system reflecting its relation to previous versions;
(b) how the AI system interacts with, or can be used to interact with, hardware or software, including with other AI systems, that are not part of the AI system itself, where applicable;
(c) the versions of relevant software or firmware, and any requirements related to version updates;
(d) the description of all the forms in which the AI system is placed on the market or put into service, such as software packages embedded into hardware, downloads, or APIs;
(e) the description of the hardware on which the AI system is intended to run;
(f) where the AI system is a component of products, photographs or illustrations showing external features, the marking and internal layout of those products;
(g) a basic description of the user-interface provided to the deployer;
(h) instructions for use for the deployer, and a basic description of the user-interface provided to the deployer, where applicable;
2. A detailed description of the elements of the AI system and of the process for its development, including:
(a)the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how those were used, integrated or modified by the provider;
(b)the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, including with regard to persons or groups of persons in respect of whom the system is intended to be used; the main classification choices; what the system is designed to optimise for, and the relevance of the different parameters; the description of the expected output and output quality of the system; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Chapter III, Section 2;
(c)the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;
(d)where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including a general description of these data sets, information about their provenance, scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);
(e)assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the deployers, in accordance with Article 13(3), point (d);
(f)where applicable, a detailed description of pre-determined changes to the AI system and its performance, together with all the relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Chapter III, Section 2;
(g)the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness and compliance with other relevant requirements set out in Chapter III, Section 2, as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f);
(h) cybersecurity measures put in place;
3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the deployers; specifications on input data, as appropriate;
4. A description of the appropriateness of the performance metrics for the specific AI system;
5. A detailed description of the risk management system in accordance with Article 9;
6. A description of relevant changes made by the provider to the system through its lifecycle;
7. A list of the harmonised standards applied in full or in part the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Chapter III, Section 2, including a list of other relevant standards and technical specifications applied;
8. A copy of the EU declaration of conformity referred to in Article 47;
9. A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 72, including the post-market monitoring plan referred to in Article 72(3).
第十一条第1款项下的技术文档应当至少包括以下适用于相关人工智能系统的信息:
1、人工智能系统的一般描述,包括:
(a)其预期目的、提供者名称和反映其与先前版本关系的系统版本;
(b)人工智能系统如何与硬件或软件交互,包括与不属于人工智能系统本身的其他人工智能系统的交互(如适用);
(c) 相关软件或固件的版本以及与版本更新相关的任何要求;
(d) 人工智能系统投放到市场或投入使用的全部形式的描述,例如嵌入硬件的软件包、下载或API;
(e) 人工智能系统预期运行所需的硬件描述;
(f)如果人工智能系统是产品的组成部分,显示该等产品的外部特征、标记和内部布局的照片或插图;
(g)提供给部署者的用户界面的基本描述;
(h)部署者的使用说明,以及提供给部署者的用户界面的基本描述(如适用);
2、详细说明人工智能系统的要素及其开发过程,包括以下内容:
(a)为开发人工智能系统而采取的方法和步骤,包括在相关情况下使用第三方提供的预训练系统或工具,以及提供者如何使用、集成或修改该等系统或工具;
(b)系统的设计规范,即人工智能系统和算法的一般逻辑;关键设计选择,包括所依据的基本原理和所作的假设(含与系统拟对其使用的人或人群有关的原理和假设);主要的分类选择;系统被设计用于优化的目标以及不同参数的相关性;系统预期输出和输出质量的描述;关于为符合本法第三章第二节规定的要求而采取的技术解决方案所作的任何可能的权衡决定;
(c)系统架构的描述,解释各软件组件如何相互依托或相互馈送并集成到整体处理流程中;用于开发、训练、测试和验证人工智能系统的计算资源;
(d)在相关情况下,以数据表形式说明的数据要求,数据表应描述训练方法和技术以及所使用的训练数据集,包括该等数据集的一般描述、有关其来源、范围和主要特征的信息;数据是如何获得和选择的;标注程序(例如用于监督学习)、数据清理方法(例如异常值检测);
(e)对根据本法第十四条规定所需的人工监督措施的评估,包括根据第十三条第3款(d)项对便于部署者解释人工智能系统输出所需的技术措施的评估;
(f)在适用的情况下,详细描述对人工智能系统及其性能预先确定的变更,以及与为确保人工智能系统持续符合第三章第二节规定的相关要求而采用的技术解决方案有关的所有信息;
(g)所使用的验证和测试程序,包括所使用验证和测试数据及其主要特征的信息;用于衡量准确性、稳健性以及是否符合第三章第二节规定的其他相关要求的指标,以及潜在的歧视性影响;测试日志和所有由负责人注明日期并签字的测试报告,包括本条(f)项所述的预先确定的变更;
(h)到位的网络安全措施;
3、关于人工智能系统的监测、运行和控制的详细信息,特别是关于其性能的表现和局限性(包括系统拟用于的特定个人或群体的准确程度,以及与其预期目的相关的总体预期准确程度);鉴于人工智能系统的预期目的,可预见的意外后果以及健康与安全、基本权利和歧视的风险来源;根据第十四条规定所需的人工监督措施,包括为推动部署者对人工智能系统输出的解释而采取的技术措施;输入数据规范(视具体情况);
4、描述特定人工智能系统的性能指标的适当性;
5、根据第九条规定对风险管理系统的详细说明;
6、描述提供者在系统生命周期中对系统所做的相关修改;
7、全部或部分适用且其援引已在《欧洲联盟公报》上公布的统一标准清单;如果未适用此类统一标准,应当详细说明为满足本法第三章第二节规定的要求而采用的解决方案,包括所适用的其他相关标准和技术规范清单;
8、第四十七条规定的欧盟符合性声明副本;
9、根据第七十二条规定,为评估人工智能系统在上市后阶段的性能而建立的系统的详细说明,包括第七十二条第3款规定的上市后监测计划。
ANNEX V EU declaration of conformity
附录五 欧盟符合性声明
The EU declaration of conformity referred to in Article 47 shall contain all of the following information:
1.AI system name and type and any additional unambiguous reference allowing the identification and traceability of the AI system;
2.The name and address of the provider or, where applicable, of their authorised representative;
3.A statement that the EU declaration of conformity referred to in Article 47 is issued under the sole responsibility of the provider;
4.A statement that the AI system is in conformity with this Regulation and, if applicable, with any other relevant Union law that provides for the issuing of the EU declaration of conformity referred to in Article 47;
5.Where an AI system involves the processing of personal data, a statement that that AI system complies with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680;
6.References to any relevant harmonised standards used or any other common specification in relation to which conformity is declared;
7.Where applicable, the name and identification number of the notified body, a description of the conformity assessment procedure performed, and identification of the certificate issued;
8. The place and date of issue of the declaration, the name and function of the person who signed it, as well as an indication for, or on behalf of whom, that person signed, a signature.
第四十七条规定的欧盟符合性声明应当包含以下所有信息:
1、人工智能系统的名称和类型,以及支持识别和追溯到该人工智能系统的任何其他明确标记;
2、提供者或其授权代表(如适用)的名称和地址;
3、声明第四十七条所规定的欧盟符合性声明是由提供者全权负责发布;
4、说明人工智能系统符合本法规定,且符合所有其他要求发布第四十七条项下欧盟符合性声明的相关欧盟法律(如适用)的一份声明;
5、如果人工智能系统涉及个人数据处理,应当声明该人工智能系统符合第2016/679号和第2018/1725号条例以及第2016/680号指令;
6、援引所使用的所有相关统一标准或与符合性声明有关的其他通用规范;
7、评定机构的名称和识别号、所执行的符合性评定程序的说明以及所获得的证书的标识(如适用);
8、声明的签发地点和日期、签署人的姓名和职务,以及该签署人为谁或代表谁签署的说明,并附签名。
ANNEX VI Conformity assessment procedure based on internal control
附录六 内部控制为基础的符合性评定程序
1.The conformity assessment procedure based on internal control is the conformity assessment procedure based on points 2, 3 and 4.
2.The provider verifies that the established quality management system is in compliance with the requirements of Article 17.
3.The provider examines the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements set out in Chapter III, Section 2.
4.The provider also verifies that the design and development process of the AI system and its post-market monitoring as referred to in Article 72 is consistent with the technical documentation.
1、以内部控制为基础的符合性评定程序是根据本附录第2条、第3条和第4条规定完成的符合性评定流程。
2、提供者验证所建立的质量管理体系符合本法第十七条的要求。
3、提供者检查技术文档中包含的信息,以评定人工智能系统是否符合本法第三章第二节规定的相关基本要求。
4、提供者还应验证第七十二条所规定的人工智能系统设计和开发过程及其上市后监测是否与技术文档一致。
(后接第十部分)