欧盟人工智能法案(中、英对照)(第36-50条)
Article 36 Changes to notifications
1. The notifying authority shall notify the Commission and the other Member States of any relevant changes to the notification of a notified body via the electronic notification tool referred to in Article 30(2).
2. The procedures laid down in Articles 29 and 30 shall apply to extensions of the scope of the notification.
For changes to the notification other than extensions of its scope, the procedures laid down in paragraphs (3) to (9) shall apply.
3. Where a notified body decides to cease its conformity assessment activities, it shall inform the notifying authority and the providers concerned as soon as possible and, in the case of a planned cessation, at least one year before ceasing its activities. The certificates of the notified body may remain valid for a period of nine months after cessation of the notified body’s activities, on condition that another notified body has confirmed in writing that it will assume responsibilities for the high-risk AI systems covered by those certificates. The latter notified body shall complete a full assessment of the high-risk AI systems affected by the end of that nine-month period before issuing new certificates for those systems. Where the notified body has ceased its activity, the notifying authority shall withdraw the designation.
4. Where a notifying authority has sufficient reason to consider that a notified body no longer meets the requirements laid down in Article 31, or that it is failing to fulfil its obligations, the notifying authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body no longer meets the requirements laid down in Article 31 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the designation as appropriate, depending on the seriousness of the failure to meet those requirements or fulfil those obligations. It shall immediately inform the Commission and the other Member States accordingly.
5. Where its designation has been suspended, restricted, or fully or partially withdrawn, the notified body shall inform the providers concerned within 10 days.
6. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall take appropriate steps to ensure that the files of the notified body concerned are kept, and to make them available to notifying authorities in other Member States and to market surveillance authorities at their request.
7. In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall:
(a) assess the impact on the certificates issued by the notified body;
(b) submit a report on its findings to the Commission and the other Member States within three months of having notified the changes to the designation;
(c) require the notified body to suspend or withdraw, within a reasonable period of time determined by the authority, any certificates which were unduly issued, in order to ensure the continuing conformity of high-risk AI systems on the market;
(d) inform the Commission and the Member States about certificates the suspension or withdrawal of which it has required;
(e) provide the national competent authorities of the Member State in which the provider has its registered place of business with all relevant information about the certificates of which it has required the suspension or withdrawal; that authority shall take the appropriate measures, where necessary, to avoid a potential risk to health, safety or fundamental rights.
8. With the exception of certificates unduly issued, and where a designation has been suspended or restricted, the certificates shall remain valid in one of the following circumstances:
(a) the notifying authority has confirmed, within one month of the suspension or restriction, that there is no risk to health, safety or fundamental rights in relation to certificates affected by the suspension or restriction, and the notifying authority has outlined a timeline for actions to remedy the suspension or restriction; or
(b) the notifying authority has confirmed that no certificates relevant to the suspension will be issued, amended or re-issued during the course of the suspension or restriction, and states whether the notified body has the capability of continuing to monitor and remain responsible for existing certificates issued for the period of the suspension or restriction; in the event that the notifying authority determines that the notified body does not have the capability to support existing certificates issued, the provider of the system covered by the certificate shall confirm in writing to the national competent authorities of the Member State in which it has its registered place of business, within three months of the suspension or restriction, that another qualified notified body is temporarily assuming the functions of the notified body to monitor and remain responsible for the certificates during the period of suspension or restriction.
9. With the exception of certificates unduly issued, and where a designation has been withdrawn, the certificates shall remain valid for a period of nine months under the following circumstances:
(a) the national competent authority of the Member State in which the provider of the high-risk AI system covered by the certificate has its registered place of business has confirmed that there is no risk to health, safety or fundamental rights associated with the high-risk AI systems concerned; and
(b) another notified body has confirmed in writing that it will assume immediate responsibility for those AI systems and completes its assessment within 12 months of the withdrawal of the designation.
In the circumstances referred to in the first subparagraph, the national competent authority of the Member State in which the provider of the system covered by the certificate has its place of business may extend the provisional validity of the certificates for additional periods of three months, which shall not exceed 12 months in total.
The national competent authority or the notified body assuming the functions of the notified body affected by the change of designation shall immediately inform the Commission, the other Member States and the other notified bodies thereof.
第三十六条 通报变更
1、 通报机构应当通过本法第三十条第2款所规定的电子通报工具,将与评定机构通报有关的任何相关变更通报欧盟委员会和其他成员国。
2、 本法第二十九条和第三十条规定的程序适用于扩展通报范围。
除扩大其范围外,通报的变更应当按本条第3款至第9款规定的程序进行。
3、 评定机构决定停止其符合性评定业务的,应当尽快通知通报机构和有关提供者;如果计划终止其符合性评定业务,应当在终止前至少一年通知上述主体。在有其他评定机构书面确认将对相关证书所涵盖的高风险人工智能系统承担责任的前提下,评定机构已经颁发的证书可以在其业务停止后的九个月内继续有效。承接责任的评定机构应当在上述九个月期限届满前完成对受影响高风险人工智能系统的全面评估,然后再为该等系统颁发新的证书。评定机构终止符合性评定业务后,通报机构应当撤回对该机构的指定。
4、 如果通报机构有充足理由认为评定机构不再符合本法第三十一条规定的要求,或者未能履行其义务,通报机构应当以最大程度的勤勉尽责立即对该情况展开调查。在此过程中,通报机构应当将所提出的异议告知相关评定机构,并给予其陈述意见的机会。如果通报机构认定评定机构不再符合本法第三十一条规定的要求或未能履行其义务,应当根据评定机构未能符合要求或未履行义务的严重程度,酌情限制、暂停或撤回对该评定机构的指定,并立即通知欧盟委员会和其他成员国。
5、 通报机构对评定机构的指定被暂停、限制或全部或部分撤回后,评定机构应当在十天内通知相关提供者。
6、 评定机构的指定被限制、暂停或撤回后,通报机构应当采取适当措施,确保保留评定机构相关的档案,并根据其他成员国的通报机构和市场监管机构的要求向其提供该等档案。
7、 限制、暂停或撤回对评定机构的指定时,通报机构应当:
(a) 评估对评定机构所颁发证书的影响;
(b) 在指定变更通知后三个月内向欧盟委员会和其他成员国提交一份关于其调查结果的报告;
(c) 要求评定机构在主管机关确定的合理期限内暂停或撤销其不当颁发的证书,以确保市场上高风险人工智能系统的持续合规性;
(d) 向欧盟委员会和成员国通报其要求暂停或撤销的证书;
(e) 向提供者注册营业地所在成员国的主管机关提供与其已要求暂停或撤销的证书有关的所有相关信息;上述主管机关应当在必要时采取适当措施,避免对健康、安全或基本权利造成潜在风险。
8、 除不当签发的证书外,在指定被暂停或限制的情况下,符合下列任一情形的证书仍然有效:
(a)通报机构已经在采取暂停或限制措施后一个月内确认,受暂停或限制影响的证书对健康、安全或基本权利无风险,且通报机构已经列明暂停或限制的补救措施时间安排;或
(b)通报机构已经确认,在暂停或限制期间,不会颁发、修订或重新颁发与暂停事项有关的证书,并说明评定机构在暂停或限制期间是否有能力继续监督已颁发的既有证书并对其负责;如果通报机构确定评定机构没有能力支持已颁发的既有证书,证书所涵盖系统的提供者应当在暂停或限制后三个月内,以书面形式向其注册营业地所在成员国的主管机关确认,在暂停或限制期间由另一家合格评定机构临时承担评定机构的职责,对证书进行监督并负责。
9、 除不当签发的证书外,在指定被撤回的情况下,同时满足下列条件的证书应在九个月内继续有效:
(a)证书所涵盖的高风险人工智能系统的提供者注册营业地所在成员国的主管机关确认,不存在与相关高风险人工智能系统相关的健康、安全或基本权利风险;及
(b)另一家评定机构已经书面确认,将立即对该等人工智能系统承担责任,并在指定被撤回后12个月内完成评估。
在本款第一段所述情况下,证书所涵盖系统的提供者营业地所在成员国的主管机关可以将证书的临时有效期每次延长三个月,延长后的有效期合计不得超过12个月。
承接受指定变更影响的评定机构职能的成员国主管机关或评定机构,应当立即将相关情况通知欧盟委员会、其他成员国和其他评定机构。
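以下为一个示意性的Python草稿,用于直观理解第三十六条第9款关于指定被撤回后证书临时有效期的计算规则(九个月基础期,可按每次三个月延长)。其中的函数名与数据结构均为本文为举例而假设;"合计不超过12个月"此处按本文译文的理解包含九个月基础期,该表述在条文中存在解读空间。

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """按自然月向后推算日期(示意用;月底溢出时取该月最后可用日)。"""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    for day in (d.day, 30, 29, 28):  # 处理如1月31日加1个月的月底溢出
        try:
            return date(year, month, day)
        except ValueError:
            continue

def provisional_validity_end(withdrawal: date, extensions: int = 0) -> date:
    """第三十六条第9款:指定被撤回后证书先有效九个月,
    主管机关可按每次三个月延长,临时有效期合计不得超过12个月。"""
    total_months = 9 + 3 * extensions
    if extensions < 0 or total_months > 12:
        raise ValueError("延长不合法:临时有效期合计不得超过12个月")
    return add_months(withdrawal, total_months)

# 用法示例:2025年1月15日指定被撤回,主管机关延长一次
print(provisional_validity_end(date(2025, 1, 15), extensions=1))  # 2026-01-15
```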
Article 37 Challenge to the competence of notified bodies
1. The Commission shall, where necessary, investigate all cases where there are reasons to doubt the competence of a notified body or the continued fulfilment by a notified body of the requirements laid down in Article 31 and of its applicable responsibilities.
2. The notifying authority shall provide the Commission, on request, with all relevant information relating to the notification or the maintenance of the competence of the notified body concerned.
3. The Commission shall ensure that all sensitive information obtained in the course of its investigations pursuant to this Article is treated confidentially in accordance with Article 78.
4. Where the Commission ascertains that a notified body does not meet or no longer meets the requirements for its notification, it shall inform the notifying Member State accordingly and request it to take the necessary corrective measures, including the suspension or withdrawal of the notification if necessary. Where the Member State fails to take the necessary corrective measures, the Commission may, by means of an implementing act, suspend, restrict or withdraw the designation. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
第三十七条 对评定机构能力的质疑
1、 欧盟委员会应当在必要时对所有有理由怀疑评定机构的能力,或怀疑评定机构是否持续满足本法第三十一条规定的要求并履行其应承担责任的情形进行调查。
2、 通报机构应当应请求向欧盟委员会提供与相关评定机构的通报或能力维持有关的所有相关信息。
3、 欧盟委员会应当确保在根据本条规定进行调查的过程中,按照本法第七十八条规定对所获得的所有敏感信息保密。
4、 如果欧盟委员会认定评定机构不符合或不再符合通报的要求,应当相应地通知通报机构所属成员国,并要求其采取必要的纠正措施,包括在必要时暂停或撤回通报。如果上述成员国未采取必要的纠正措施,欧盟委员会可以通过制定实施细则直接暂停、限制或撤回对相应评定机构的指定。本款所称实施细则应当按照本法第九十八条第2款规定的审查程序通过。
Article 38 Coordination of notified bodies
1. The Commission shall ensure that, with regard to high-risk AI systems, appropriate coordination and cooperation between notified bodies active in the conformity assessment procedures pursuant to this Regulation are put in place and properly operated in the form of a sectoral group of notified bodies.
2. Each notifying authority shall ensure that the bodies notified by it participate in the work of a group referred to in paragraph 1, directly or through designated representatives.
3. The Commission shall provide for the exchange of knowledge and best practices between notifying authorities.
第三十八条 评定机构的协调
1、 对于高风险人工智能系统,欧盟委员会应当确保按本法规定积极参与符合性评定程序的各评定机构之间适当地协调合作,并以评定机构行业小组的形式妥善运作。
2、 每个通报机构都应确保其通报的机构直接或通过指定代表参与本条第1款所规定行业小组的工作。
3、 欧盟委员会应当支持通报机构之间互相交流知识和最佳实践。
Article 39 Conformity assessment bodies of third countries
Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation, provided that they meet the requirements laid down in Article 31 or they ensure an equivalent level of compliance.
第三十九条 第三国符合性评定机构
根据与欧盟缔约的第三国法律设立的符合性评定机构,在符合本法第三十一条规定的要求或确保具备同等程度合规性的前提下,可以根据本法规定经授权开展评定机构有权开展的活动。
SECTION 5 Standards, conformity assessment, certificates, registration
第五节 标准、符合性评定、证书和登记
Article 40 Harmonised standards and standardisation deliverables
1. High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, with the obligations set out in Chapter V, Sections 2 and 3, of this Regulation, to the extent that those standards cover those requirements or obligations.
2. In accordance with Article 10 of Regulation (EU) No 1025/2012, the Commission shall issue, without undue delay, standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, standardisation requests covering obligations set out in Chapter V, Sections 2 and 3, of this Regulation. The standardisation request shall also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the energy-efficient development of general-purpose AI models. When preparing a standardisation request, the Commission shall consult the Board and relevant stakeholders, including the advisory forum.
When issuing a standardisation request to European standardisation organisations, the Commission shall specify that standards have to be clear, consistent, including with the standards developed in the various sectors for products covered by the existing Union harmonisation legislation listed in Annex I, and aiming to ensure that high-risk AI systems or general-purpose AI models placed on the market or put into service in the Union meet the relevant requirements or obligations laid down in this Regulation.
The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Article 24 of Regulation (EU) No 1025/2012.
3. The participants in the standardisation process shall seek to promote investment and innovation in AI, including through increasing legal certainty, as well as the competitiveness and growth of the Union market, to contribute to strengthening global cooperation on standardisation and taking into account existing international standards in the field of AI that are consistent with Union values, fundamental rights and interests, and to enhance multi-stakeholder governance ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with Articles 5, 6, and 7 of Regulation (EU) No 1025/2012.
第四十条 统一标准和标准化成果
1、 如果高风险人工智能系统或通用人工智能模型符合索引已按欧洲议会和欧盟理事会第1025/2012号条例规定在《欧洲联盟公报》上公布的统一标准或其任何部分,则在上述标准涵盖本章第二节规定的要求或本法第五章第二节和第三节规定的义务(如适用)的范围内,该系统或模型应当被推定为符合上述要求或义务。
2、 根据欧洲议会和欧盟理事会第1025/2012号条例第10条规定,欧盟委员会应当尽快发出涵盖本章第二节项下全部要求的标准化请求,并发出涵盖本法第五章第二节和第三节项下义务的标准化请求(如适用)。标准化请求还应当要求提供报告和文档编制流程方面的成果,以提高人工智能系统的资源性能,例如在生命周期内减少高风险人工智能系统对能源和其他资源的消耗,以及通用人工智能模型的节能开发。在准备标准化请求时,欧盟委员会应当征询人工智能委员会以及包括咨询论坛在内的相关利益相关者的意见。
在向欧洲标准化组织发出标准化请求时,欧盟委员会应当明确规定标准必须清晰、一致,包括与附录一中列明的现有欧盟统一立法所涵盖的产品所属各行业制定的标准一致,并旨在确保在欧盟被投放到市场或投入使用的高风险人工智能系统或通用人工智能模型符合本法规定的相关要求或义务。
欧盟委员会应当要求欧洲标准化组织根据欧洲议会和欧盟理事会第1025/2012号条例第24条规定提供证据,证明其已尽最大努力实现本款第一段和第二段中规定的目标。
3、 标准化进程的参与者应当根据欧洲议会和欧盟理事会第1025/2012号条例第5条、第6条和第7条规定,努力推动人工智能的投资和创新,包括通过提高法律的确定性以及欧盟市场的竞争力和增速,推动全球标准化合作并考虑符合欧盟价值观、基本权利和利益的人工智能领域现有国际标准,并加强多方参与治理以确保平衡各界代表和所有相关利益相关者的有效参与。
Article 41 Common specifications
1. The Commission may adopt implementing acts establishing common specifications for the requirements set out in Section 2 of this Chapter or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V where the following conditions have been fulfilled:
(a) the Commission has requested, pursuant to Article 10(1) of Regulation (EU) No 1025/2012, one or more European standardisation organisations to draft a harmonised standard for the requirements set out in Section 2 of this Chapter, or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V, and:
(i) the request has not been accepted by any of the European standardisation organisations; or
(ii) the harmonised standards addressing that request are not delivered within the deadline set in accordance with Article 10(1) of Regulation (EU) No 1025/2012; or
(iii) the relevant harmonised standards insufficiently address fundamental rights concerns; or
(iv) the harmonised standards do not comply with the request; and
(b) no reference to harmonised standards covering the requirements referred to in Section 2 of this Chapter or, as applicable, the obligations referred to in Sections 2 and 3 of Chapter V has been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012, and no such reference is expected to be published within a reasonable period.
When drafting the common specifications, the Commission shall consult the advisory forum referred to in Article 67.
The implementing acts referred to in the first subparagraph of this paragraph shall be adopted in accordance with the examination procedure referred to in Article 98(2).
2. Before preparing a draft implementing act, the Commission shall inform the committee referred to in Article 22 of Regulation (EU) No 1025/2012 that it considers the conditions laid down in paragraph 1 of this Article to be fulfilled.
3. High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to the extent those common specifications cover those requirements or those obligations.
4. Where a harmonised standard is adopted by a European standardisation organisation and proposed to the Commission for the publication of its reference in the Official Journal of the European Union, the Commission shall assess the harmonised standard in accordance with Regulation (EU) No 1025/2012. When reference to a harmonised standard is published in the Official Journal of the European Union, the Commission shall repeal the implementing acts referred to in paragraph 1, or parts thereof which cover the same requirements set out in Section 2 of this Chapter or, as applicable, the same obligations set out in Sections 2 and 3 of Chapter V.
5. Where providers of high-risk AI systems or general-purpose AI models do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and 3 of Chapter V to a level at least equivalent thereto.
6. Where a Member State considers that a common specification does not entirely meet the requirements set out in Section 2 or, as applicable, comply with obligations set out in Sections 2 and 3 of Chapter V, it shall inform the Commission thereof with a detailed explanation. The Commission shall assess that information and, if appropriate, amend the implementing act establishing the common specification concerned.
第四十一条 通用规范
1、满足下列条件后,欧盟委员会可以制定实施细则,就本章第二节规定的要求或第五章第二节和第三节规定的义务(如适用)建立通用规范:
(a)欧盟委员会已经根据欧洲议会和欧盟理事会第1025/2012号条例第10条第(1)款规定,请求一个或多个欧洲标准化组织就本章第二节规定的要求或第五章第二节和第三节规定的义务(如适用)起草一份统一标准,且:
(i)该请求未被任何欧洲标准化组织接受;或
(ii)处理该请求的统一标准未在欧洲议会和欧盟理事会第1025/2012号条例第10条第(1)款规定的截止日期内交付;或
(iii)相关统一标准未充分解决基本权利问题;或
(iv)统一标准不符合该请求;且
(b)根据欧洲议会和欧盟理事会第1025/2012号条例,《欧洲联盟公报》上未公布涵盖本章第二节项下要求或第五章第二节和第三节项下义务(如适用)的统一标准索引,且预计在合理期限内也不会公布此类索引。
在起草通用规范时,欧盟委员会应当征询本法第六十七条所规定的咨询论坛的意见。
本款第一段所述实施细则应当按照本法第九十八条第2款规定的审查程序通过。
2、在起草实施细则草案之前,欧盟委员会应当告知第1025/2012号条例第22条项下委员会,其认为本条第1款规定的条件已满足。
3、如果本条第1款所述通用规范或其任何部分涵盖了本章第二节规定的要求和/或本法第五章第二节和第三节所规定的义务(如适用),符合上述通用规范或其相应部分的高风险人工智能系统或通用人工智能模型应当被推定为符合本章第二节规定的要求或本法第五章第二节和第三节所规定的义务(如适用)。
4、如果欧洲标准化组织制定统一标准并建议欧盟委员会在《欧洲联盟公报》上公布,欧盟委员会应当根据第1025/2012号条例规定对统一标准进行评估。统一标准经《欧洲联盟公报》公布后,欧盟委员会应当废除本条第1款项下的实施细则或其中与本章第二节规定相同要求或本法第五章第二节和第三节规定同样义务的部分内容。
5、如果高风险人工智能系统或通用人工智能模型的提供者不遵守本条第1款项下的通用规范,其应当充分证明其已采用的技术解决方案至少以同等水平满足本章第二节规定的要求,或履行本法第五章第二节和第三节规定的义务(如适用)。
6、如果成员国认为某一通用规范不完全满足本章第二节规定的要求,或不完全符合本法第五章第二节和第三节规定的义务(如适用),应当通知欧盟委员会并作出详细解释。欧盟委员会应当评估该信息,并在适当的情况下修改建立相关通用规范的实施细则。
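第四十一条第1款的适用前提可以归结为一个布尔结构:(a)项(已发出标准化请求且出现(i)至(iv)任一情形)与(b)项(统一标准索引未公布且预计不会公布)须同时成立。以下Python片段仅为示意这一逻辑而设,字段名称为本文假设。

```python
from dataclasses import dataclass

@dataclass
class StandardisationStatus:
    """第四十一条第1款条件的示意性建模(字段名为本文假设)。"""
    request_issued: bool           # (a):已依第1025/2012号条例第10(1)条发出请求
    request_rejected: bool         # (a)(i):请求未被任何欧洲标准化组织接受
    deadline_missed: bool          # (a)(ii):统一标准未按期交付
    fundamental_rights_gap: bool   # (a)(iii):统一标准未充分解决基本权利问题
    non_compliant: bool            # (a)(iv):统一标准不符合该请求
    reference_published_or_expected: bool  # (b)的反面:索引已公布或预计将公布

def may_adopt_common_specifications(s: StandardisationStatus) -> bool:
    point_a = s.request_issued and (
        s.request_rejected or s.deadline_missed
        or s.fundamental_rights_gap or s.non_compliant
    )
    point_b = not s.reference_published_or_expected
    return point_a and point_b
```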
Article 42 Presumption of conformity with certain requirements
1. High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to comply with the relevant requirements laid down in Article 10(4).
2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published in the Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements.
第四十二条 符合特定要求的推定
1、 经过训练和测试,且所用数据能够反映其预期使用的具体地理、行为、场景或功能环境的高风险人工智能系统,应当被推定为符合本法第十条第4款规定的相关要求。
2、 已经根据第2019/881号条例规定的网络安全计划获得认证或取得符合性声明,且相关索引已在《欧洲联盟公报》上公布的高风险人工智能系统,只要其网络安全证书或符合性声明或其部分涵盖本法第十五条规定的网络安全要求,即应当被推定为符合该等要求。
Article 43 Conformity assessment
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following conformity assessment procedures based on:
(a) the internal control referred to in Annex VI; or
(b) the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall follow the conformity assessment procedure set out in Annex VII where:
(a) harmonised standards referred to in Article 40 do not exist, and common specifications referred to in Article 41 are not available;
(b) the provider has not applied, or has applied only part of, the harmonised standard;
(c) the common specifications referred to in point (a) exist, but the provider has not applied them;
(d) one or more of the harmonised standards referred to in point (a) has been published with a restriction, and only on the part of the standard that was restricted.
For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, where the high-risk AI system is intended to be put into service by law enforcement, immigration or asylum authorities or by Union institutions, bodies, offices or agencies, the market surveillance authority referred to in Article 74(8) or (9), as applicable, shall act as a notified body.
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body.
3. For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.
For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance of those notified bodies with requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of the notification procedure under those legal acts.
Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter.
4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer.
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.
5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annexes VI and VII by updating them in light of technical progress.
6. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraphs 1 and 2 of this Article in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and resources among notified bodies.
第四十三条 符合性评定
1、对于附录三第1条所列高风险人工智能系统,提供者在适用本法第四十条规定的统一标准或本法第四十一条规定的通用规范(如适用)证明高风险人工智能系统符合本章第二节规定的要求时,应当选择以下任一符合性评定程序:
(a)附录六规定的内部控制;或
(b)附录七规定的、在评定机构参与下对质量管理体系和技术文档进行的评估。
在证明高风险人工智能系统符合本章第二节规定的要求时,存在下列情形的,提供者应当遵守附录七规定的符合性评定程序:
(a)不存在本法第四十条规定的统一标准,也无法获得本法第四十一条规定的通用规范;
(b)提供者未适用或仅适用了部分统一标准;
(c)虽然存在本项(a)段中提到的通用规范,但提供者尚未适用;
(d)本项(a)段所述一项或多项统一标准在公布时附有限制,且仅就标准中受限制的部分适用本程序。
就附录七规定的符合性评定程序而言,提供者可以选择任何一家评定机构。但是,如果高风险人工智能系统拟由执法、移民或庇护机关或欧盟各机构投入使用,本法第七十四条第8款或第9款所规定的市场监管机构(如适用)应当作为评定机构。
2、对于附录三第2条至第8条所述高风险人工智能系统,提供者应当遵守附录六所述的以内部控制为基础的符合性评定程序,该程序未要求评定机构的参与。
3、对于附录一第A条所述欧盟统一立法涵盖的高风险人工智能系统,提供者应当遵守该等法案规定的相关符合性评定程序。本章第二节规定的要求应当适用于该等高风险人工智能系统,并应当作为评定的一部分。附录七第4.3条、第4.4条、第4.5条和第4.6条第五款也应当适用。
为上述评估目的,如果在上述法案规定的通报程序中已对评定机构是否符合本法第三十一条第4款、第5款、第10款和第11款规定的要求进行了评估,则根据上述法案通报的评定机构有权审查高风险人工智能系统是否符合本章第二节规定的要求。
如果附录一第A条所列法案允许产品制造商在适用了涵盖所有相关要求的全部统一标准的前提下选择不进行第三方符合性评定,则该制造商只有在同时适用了涵盖本章第二节全部要求的统一标准或本法第四十一条所规定的通用规范(如适用)后,才能行使该项选择权。
4、已完成符合性评定程序的高风险人工智能系统发生重大修改时,无论修改后的系统计划用于进一步分销还是继续由当前部署者使用,均应当重新履行符合性评定程序。
对于被投放到市场或投入使用后仍在学习的高风险人工智能系统,提供者在首次符合性评定时已经预先确定、且已记载于本法附录四第2条(f)款项下技术文档所含信息中的对该高风险人工智能系统及其性能的修改,不构成实质性修改。
5、欧盟委员会有权根据本法第九十七条规定制定规章条例,以根据技术进步更新本法附录六和附录七,从而对其进行修订。
6、欧盟委员会有权根据本法第九十七条规定制定规章条例以修改本条第1款和第2款,使本法附录三第2条至第8条所述高风险人工智能系统接受附录七或其任何部分规定的符合性评定程序。欧盟委员会在制定该等规章条例时,应当考虑以本法附录六所述内部控制为基础的符合性评定程序在预防或尽量减少该等系统对健康和安全的风险、在基本权利保护方面的有效性,以及评定机构是否有足够的能力和资源。
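第四十三条第1款、第2款实际上给出了一套评定路径的选择规则。以下Python草稿仅示意这一选择逻辑,且只覆盖附录三所列系统的情形(第3款涉及附录一统一立法的情形未建模);枚举与函数名为本文假设,并非任何官方工具。

```python
from enum import Enum

class Procedure(Enum):
    ANNEX_VI = "附录六:内部控制"
    ANNEX_VII = "附录七:评定机构参与的质量管理体系与技术文档评估"

def assessment_route(annex_iii_point: int,
                     standards_or_specs_fully_applied: bool,
                     prefers_notified_body: bool = False) -> Procedure:
    """第四十三条第1款、第2款的示意性路径选择(不含第3款情形)。"""
    if 2 <= annex_iii_point <= 8:
        # 第2款:附录三第2条至第8条所列系统适用内部控制程序
        return Procedure.ANNEX_VI
    # 附录三第1条所列系统:
    if standards_or_specs_fully_applied:
        # 完整适用统一标准或通用规范时,提供者可在两种程序中任选其一
        return Procedure.ANNEX_VII if prefers_notified_body else Procedure.ANNEX_VI
    # 否则(无标准可用、未适用或仅部分适用、标准公布附有限制等)须走附录七程序
    return Procedure.ANNEX_VII
```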
Article 44 Certificates
1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn-up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established.
2. Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid.
3. Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision.
An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available.
第四十四条 证书
1. 评定机构根据本法附录七颁发的证书应当以评定机构所在成员国有关当局易于理解的语言书就。
2. 证书在其载明的有效期内有效,本法附录一所载人工智能系统的证书的有效期不得超过五年,附录三所载人工智能系统的证书的有效期不得超过四年。根据提供者的要求,在按照所适用的符合性评定程序重新评定后,可以延长证书的有效期,附录一所载人工智能系统每次可延长不超过五年、附录三所载人工智能系统每次可延长不超过四年。证书附页在证书有效时均应持续有效。
3. 如果评定机构发现人工智能系统不再符合本章第二节规定,应当按照比例原则暂停或撤销已颁发的证书或对其施加限制,但系统提供者在评定机构规定的合理期限内采取适当的纠正措施确保系统符合上述规定的除外。评定机构应当说明其作出决定的理由。
对于评定机构的决定(包括针对已颁发的符合性证书作出的决定),应当设有申诉程序。
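第四十四条第2款为证书有效期设定了上限(附录一所涵盖系统每期最长5年、附录三所涵盖系统最长4年,经重新评定的续期亦受同一上限约束)。以下是一个自足的示意性计算草稿,函数名为本文假设。

```python
from datetime import date

def add_years(d: date, years: int) -> date:
    """按年推算(示意用;2月29日顺延为2月28日)。"""
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

def max_certificate_expiry(issued: date, covered_by_annex_i: bool) -> date:
    """第四十四条第2款:附录一所涵盖系统的证书每期最长5年,
    附录三所涵盖系统最长4年;证书可载明更短期限。"""
    return add_years(issued, 5 if covered_by_annex_i else 4)

# 示例:2025年3月1日为附录三所涵盖系统颁发的证书,最晚于2029年3月1日到期
print(max_certificate_expiry(date(2025, 3, 1), covered_by_annex_i=False))
```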
Article 45 Information obligations of notified bodies
1. Notified bodies shall inform the notifying authority of the following:
(a) any Union technical documentation assessment certificates, any supplements to those certificates, and any quality management system approvals issued in accordance with the requirements of Annex VII;
(b) any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII;
(c) any circumstances affecting the scope of or conditions for notification;
(d) any request for information which they have received from market surveillance authorities regarding conformity assessment activities;
(e) on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting.
2. Each notified body shall inform the other notified bodies of:
(a) quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued;
(b) Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued.
3. Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive conformity assessment results.
4. Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78.
第四十五条 评定机构的信息义务
1.评定机构应当向通报机构通报以下事项:
(a)根据本法附录七的要求颁发的所有欧盟技术文档评估证书、该等证书的附页以及质量管理体系批文;
(b)拒绝、限制、暂停或撤销按本法附录七规定颁发的欧盟技术文档评估证书或质量管理体系批文;
(c)影响通报范围或条件的任何情况;
(d)从市场监管机构收到的任何有关符合性评定活动的信息请求;
(e)应请求,告知在其通报范围内开展的符合性评定活动以及开展的任何其他活动(包括跨境活动和分包)。
2、每个评定机构都应当将下列事项通知其他评定机构:
(a)其已拒绝、暂停或撤销的质量管理体系批文,以及(应请求告知)其已签发的质量体系批文;
(b)其已拒绝、撤回、暂停或以其他方式限制的欧盟技术文档评估证书或其附页,以及(应请求告知)其已颁发的证书和/或附页。
3、每个评定机构都应当向针对同类人工智能系统开展类似符合性评定活动的其他评定机构提供与否定性评定结果相关问题有关的信息,并应当按要求提供与肯定性评定结果相关问题有关的信息。
4、根据本法第七十八条规定,评定机构应当对其获悉的信息保密。
Article 46 Derogation from conformity assessment procedure
1. By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay.
2. In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and outputs of such use shall be immediately discarded.
3. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall not cover sensitive operational data in relation to the activities of law-enforcement authorities.
4. Where, within 15 calendar days of receipt of the information referred to in paragraph 3, no objection has been raised by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed justified.
5. Where, within 15 calendar days of receipt of the notification referred to in paragraph 3, objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law, or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 3 to be unfounded, the Commission shall, without delay, enter into consultations with the relevant Member State. The operators concerned shall be consulted and have the possibility to present their views. Having regard thereto, the Commission shall decide whether the authorisation is justified. The Commission shall address its decision to the Member State concerned and to the relevant operators.
6. Where the Commission considers the authorisation unjustified, it shall be withdrawn by the market surveillance authority of the Member State concerned.
7. For high-risk AI systems related to products covered by Union harmonisation legislation listed in Section A of Annex I, only the derogations from the conformity assessment established in that Union harmonisation legislation shall apply.
第四十六条 符合性评定程序的豁免
1、 作为本法第四十三条的例外,经合理说明理由的请求,市场监管机构可以基于公共安全、保护人的生命和健康、保护环境或保护关键工业和基础设施资产等特殊原因,授权在相关成员国境内将特定高风险人工智能系统投放到市场或投入使用。考虑到证明上述豁免正当的特殊原因,该授权仅在必要的符合性评定程序进行期间的有限时间内有效,且上述程序应当尽快完成。
2、 在以公共安全为特殊原因且有正当理由的紧急情况下,或者自然人的生命或人身安全受到具体、实质性且紧迫的威胁时,执法机关或民防机构可以在未取得本条第1款所规定授权的情况下投入使用特定高风险人工智能系统,但应当在使用期间或使用后尽快申请上述授权。如果本条第1款项下授权被拒绝,应当立即停止使用该高风险人工智能系统,并立即丢弃该等使用产生的全部结果和输出。
3、 仅当市场监管机构认为高风险人工智能系统符合本章第二节的要求时,才应当签发本条第1款所述的授权。市场监管机构应当将其根据本条第1款和第2款签发的授权通知欧盟委员会和其他成员国。但其应当通知的内容不包括与执法机关的活动有关的敏感业务数据。
4、 如果在收到本条第3款项下信息后的15日内,成员国或欧盟委员会均未对市场监管机构根据本条第1款规定签发的授权提出异议,则该授权应被视为合理。
5、 如果在收到本条第3款规定的通知后15日内,有成员国对另一成员国市场监管机构签发的授权提出异议,或者欧盟委员会认为该授权违反欧盟法,或者认为成员国就本条第3款所述系统合规性作出的结论没有根据,欧盟委员会应当立即与相关成员国协商。欧盟委员会应当征询相关运营方的意见,并给予其陈述意见的机会。在此基础上,欧盟委员会应当决定该授权是否合理,并将其决定告知相关成员国和相关运营方。
6、 如果欧盟委员会认定授权不合理,(签发授权的)相关成员国的市场监管机构应当撤回授权。
7、 对于与本法附录一第A条所列欧盟统一立法涵盖的产品有关的高风险人工智能系统,仅适用该等欧盟统一立法中规定的符合性评定豁免。
Article 47 EU declaration of conformity
1. The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request.
2. The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into a language that can be easily understood by the national competent authorities of the Member States in which the high-risk AI system is placed on the market or made available.
3. Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation legislation to which the declaration relates.
4. By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate.
5. The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex V by updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become necessary in light of technical progress.
第四十七条 欧盟符合性声明
1、 提供者应当为每个高风险人工智能系统制作一份机器可读的、经物理或电子签名的欧盟符合性书面声明,并在该高风险人工智能系统被投放到市场或投入使用后10年内留存该声明,以备成员国主管机关查阅。欧盟符合性声明应当标明其对应的高风险人工智能系统。应请求向相关成员国主管机关提交欧盟符合性声明副本。
2、 欧盟符合性声明应当说明相关高风险人工智能系统符合本章第二节规定的要求。欧盟符合性声明应当包含本法附录五所列信息,并应当翻译成高风险人工智能系统被投放到市场或提供使用的成员国的主管机关易于理解的语言。
3、 如果高风险人工智能系统还受其他同样要求欧盟符合性声明的欧盟统一立法约束,则应当就适用于该高风险人工智能系统的所有欧盟法律制作一份单一的欧盟符合性声明。该声明应当包含识别与其相关的欧盟统一立法所需的全部信息。
4、 通过起草欧盟符合性声明,提供者应当承担遵守本章第二节规定的责任。提供者应当视情况更新其欧盟符合性声明。
5、 欧盟委员会有权根据本法第九十七条制定规章条例,通过更新附录五所规定的欧盟符合性声明内容来修订该附录,以引入因技术进步而变得必要的要素。
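第四十七条第1款要求欧盟符合性声明"机器可读"。下面以JSON序列化为例给出一个纯属示意的结构草稿:字段名称为本文假设,并非附录五的官方字段清单,仅用于说明"机器可读"在工程上的一种可能形态。

```python
import json
from datetime import date

# 示意性的欧盟符合性声明结构(字段名为本文假设,非附录五官方清单)
eu_declaration = {
    "ai_system": {
        "name": "ExampleRiskScorer",        # 假设的系统名称
        "version": "1.4.2",
        "unique_identifier": "EX-2025-0001",
    },
    "provider": {"name": "Example AI GmbH", "address": "..."},
    "statement": "该高风险人工智能系统符合本法第三章第二节规定的要求。",
    "harmonised_standards_applied": ["EN XXXXX"],   # 占位示例
    "notified_body": {"name": None, "id": None},    # 适用附录七程序时填写
    "place_and_date": {"place": "Berlin", "date": date.today().isoformat()},
    "signatory": {"name": "...", "function": "..."},
}

print(json.dumps(eu_declaration, ensure_ascii=False, indent=2))
```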
Article 48 CE marking
1. The CE marking shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.
2. For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means.
3. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
4. Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking.
5. Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other law.
第四十八条 CE标识
1、 CE标识应当符合第765/2008号条例第30条规定的一般原则。
2、 对于以数字化方式提供的高风险人工智能系统,只有在能够通过访问该系统的界面、易于获取的机器可读代码或其他电子方式轻松获取数字CE标识时,才应当使用数字CE标识。
3、 对于高风险人工智能系统,CE标识应当明显、清晰、不可去除。如果因高风险人工智能系统的性质而无法达到或不能保证达到上述标准,则应当视情况将其加贴在系统的包装或随附文件上。
4、 CE标识后面应当带有负责本法第四十三条项下符合性评定程序的评定机构的识别号(如适用)。评定机构的识别号应当由该机构自行或根据其指示由提供者或提供者的授权代表标注。如果在任何宣传材料中提到高风险人工智能系统符合CE标识的要求,均应同时注明(评定机构的)识别号。
5、 如果高风险人工智能系统受其他欧盟法约束,且该等法律也要求加贴CE标识,CE标识应当注明该高风险人工智能系统也符合上述欧盟法的要求。
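第四十八条第2款要求数字CE标识可经系统访问界面或机器可读代码等电子方式轻松获取。以下片段仅示意一种可能的实现思路(以机器可读记录的形式在界面处暴露标识信息);字段与做法均为本文假设,本法并未规定具体技术方案。

```python
import json

# 示意:以机器可读格式暴露数字CE标识信息(字段与做法均为本文假设)
digital_ce_marking = {
    "ce_marking": True,
    "regulation": "Regulation (EU) 2024/1689",  # 即本法
    "notified_body_id": "1234",  # 第四十八条第4款:适用时附评定机构识别号
    "system_id": "EX-2025-0001",
}

# 提供者可在访问系统的界面处返回该记录,或通过二维码等机器可读代码指向它
print(json.dumps(digital_ce_marking, ensure_ascii=False))
```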
Article 49 Registration
1. Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 71.
2. Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 71.
3. Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies, offices or agencies or persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to in Article 71.
4. For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as applicable, referred to in:
(a) Section A, points 1 to 10, of Annex VIII, with the exception of points 6, 8 and 9;
(b) Section B, points 1 to 5, and points 8 and 9 of Annex VIII;
(c) Section C, points 1 to 3, of Annex VIII;
(d) points 1, 2, 3 and 5, of Annex IX.
Only the Commission and national authorities referred to in Article 74(8) shall have access to the respective restricted sections of the EU database listed in the first subparagraph of this paragraph.
5. High-risk AI systems referred to in point 2 of Annex III shall be registered at national level.
第四十九条 登记
1、 在将附录三所列高风险人工智能系统投放到市场或投入使用之前,除附录三第2条所述高风险人工智能系统外,提供者或授权代表(如适用)应当在本法第七十一条所规定的欧盟数据库完成其自身及其系统的登记。
2、 在将人工智能系统投放到市场或投入使用之前,如果提供者根据本法第六条第3款认定该系统不具有高风险,则该提供者或授权代表(如适用)应当在本法第七十一条所规定的欧盟数据库中完成其自身及该系统的登记。
3、 在投入使用或使用附录三所列高风险人工智能系统(附录三第2条所列高风险人工智能系统除外)之前,属于公权力机关、欧盟各机构的部署者或代表其行事的人员,应当在本法第七十一条所规定的欧盟数据库中登记自身信息、选定该系统并登记其使用情况。
4、 对于本法附录三第1条、第6条和第7条所述高风险人工智能系统,在执法、移民、政治庇护和边境管制管理领域,本条第1款、第2款和第3款所述登记应当在第七十一条所规定欧盟数据库的安全且非公开板块进行,并应当仅登记以下信息(如适用):
(a) 附录八第A条第1款至第10款中,除第6款、第8款和第9款以外的部分;
(b) 附录八第B条第1款至第5款、第8款和第9款;
(c) 附录八第C条第1款至第3款;
(d) 附录九第1款、第2款、第3款和第5款。
仅欧盟委员会和本法第七十四条第8款所述成员国主管机关有权访问本款第一段所列欧盟数据库的相应受限板块。
5、 附录三第2条所述高风险人工智能系统应当办理国家级登记。
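第四十九条第4款把应登记信息限定为附录八、附录九特定条目的子集。以下用Python集合运算直观还原(a)、(b)两项的条目筛选((c)、(d)两项同理);条目编号直接取自条文。

```python
# (a)项:附录八A节第1至10点,但第6、8、9点除外
section_a_points = set(range(1, 11)) - {6, 8, 9}
print(sorted(section_a_points))  # [1, 2, 3, 4, 5, 7, 10]

# (b)项:附录八B节第1至5点及第8、9点
section_b_points = set(range(1, 6)) | {8, 9}
print(sorted(section_b_points))  # [1, 2, 3, 4, 5, 8, 9]
```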
CHAPTER IV TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS
第四章 特定人工智能系统的提供者和部署者的透明度义务
Article 50 Transparency obligations for providers and deployers of certain AI systems
1. Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, unless those systems are available for the public to report a criminal offence.
2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.
3. Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law.
4. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.
Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.
5. The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the applicable accessibility requirements.
6. Paragraphs 1 to 4 shall not affect the requirements and obligations set out in Chapter III, and shall be without prejudice to other transparency obligations laid down in Union or national law for deployers of AI systems.
7. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The Commission may adopt implementing acts to approve those codes of practice in accordance with the procedure laid down in Article 56(6). If it deems the code is not adequate, the Commission may adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 98(2).
第五十条 特定人工智能系统的提供者和部署者的透明度义务
1、 提供者应当确保用于和自然人直接互动的人工智能系统的设计和开发方式使相关自然人知道他们正在与人工智能系统互动,但是,考虑到系统的使用情况和场景,从一个知情、观察力敏锐且谨慎的自然人的角度看上述情况是显而易见的除外。上述义务不适用于经法律授权用于检测、预防、调查或公诉刑事犯罪并已经适当地保障第三方权利和自由的人工智能系统,但公众可利用该等系统举报犯罪行为的除外。
2、 生成合成音频、图像、视频或文本内容的人工智能系统(包括通用人工智能系统)的提供者,应当确保该等人工智能系统的输出以机器可读格式标记,并可以被检测出是系统生成或操纵的。提供者应当确保其技术解决方案在技术上可行的情况下有效、可互操作、稳健且可靠,同时考虑到相关技术标准中可能反映的各类内容的特殊性和局限性、实施成本和公认的最新技术。如果人工智能系统为标准的编辑提供辅助功能,或者不会实质上改变部署者提供的输入数据或其语义,或者经法律授权被用于检测、预防、调查或起诉刑事犯罪,则不适用该项义务。
3、 情绪识别系统或生物特征分类系统的部署者应当向接触该系统的自然人告知该系统的运行情况,并应当根据第2016/679号和第2018/1725号条例以及第2016/680号指令(如适用)处理个人数据。该项义务不适用于经法律允许用于检测、预防或调查刑事犯罪,且对第三方的权利和自由采取适当保障措施并符合欧盟法律的生物特征分类和情绪识别人工智能系统。
4、 生成或篡改的图像、音频或视频内容构成深度伪造的人工智能系统的部署者应当披露该等内容是系统生成或操纵的。如果经法律授权将该系统用于发现、预防、调查或起诉刑事犯罪,则不适用该项义务。如果上述内容成为明显具有艺术性、创造性、讽刺性、虚构性或类似性质的作品或节目的一部分,本款规定的透明度义务则仅限于以不妨碍作品展示或欣赏的恰当方式披露上述生成或操纵内容存在这一事实。
对于以就公共利益事项向公众提供信息为目的而发布的文本,生成或操纵该等文本的人工智能系统的部署者应当披露该等文本是系统生成或操纵的。如果经法律授权将该等系统用于检测、预防、调查或起诉刑事犯罪,或者人工智能生成的内容经过了人工审查或编辑控制,并且有自然人或法人对上述内容的发布负有编辑责任,则不适用该项义务。
5、 本条第1款至第4款所述信息最迟应在第一次与自然人互动或接触时以清晰可辨的方式提供给相应自然人。信息应符合所适用的无障碍访问要求。
6、 本条第1款至第4款不影响本法第三章规定的要求和义务,也不影响欧盟或成员国法律规定人工智能系统部署者应当承担的其他透明度义务。
7、 人工智能办公室应当鼓励和推动在欧盟层面制定业务守则,以促进有效落实对系统生成或操纵内容的相关检测和标记义务。欧盟委员会可根据本法第五十六条第6款规定的程序制定实施细则批准上述业务守则。如果欧盟委员会认为上述业务守则不够充分,可以根据本法第九十八条第2款规定的审查程序制定一份实施细则,细化履行上述义务的一般规则。
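第五十条第2款要求合成内容的输出以机器可读格式标记并可被检测。作为众多可行技术路线之一(条文未规定具体方案),以下Python草稿借助Pillow库在PNG图像元数据中写入并读回一个"AI生成"标记,仅供示意;实际部署通常还需结合水印、C2PA等更稳健、更难移除的方案。

```python
# 需要第三方库 Pillow:pip install Pillow
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """在PNG元数据文本块中写入机器可读的AI生成标记(仅为示意性方案)。"""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # 键名为本文假设
    meta.add_text("generator", "ExampleModel-1.0")  # 假设的生成器标识
    img.save(dst_path, pnginfo=meta)

def read_marking(path: str) -> dict:
    """读回标记,演示输出可被检测这一要求的最简形态。"""
    return dict(Image.open(path).text)  # PNG文本块以字典形式暴露

# 用法示例(假设 input.png 为模型生成的图像):
# mark_as_ai_generated("input.png", "marked.png")
# print(read_marking("marked.png"))  # {'ai_generated': 'true', 'generator': ...}
```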
(后接第五部分)