
The time it takes for a fledgling technology to broadly influence how the public lives and works keeps shrinking, and generative artificial intelligence ("generative AI") is the clearest example of this. New regulatory rules arrive alongside new technologies, and in the regulation of generative AI, China is unquestionably leading the world.

 

In late August this year, eight companies and institutions, including Baidu, ByteDance and SenseTime, officially opened their large-model services to the Chinese public, marking the arrival of the first batch of large models to complete registration under China's Interim Administrative Measures for Generative Artificial Intelligence Services (the "Generative AI Measures").

This came barely two weeks after the Generative AI Measures took effect, and less than six months after the generative AI boom swept the globe. In this entirely new field, the speed of China's regulatory response has been striking.

In fact, China is the first jurisdiction to put rules for generative AI into effect. By comparison, negotiations over key issues in the European Union's draft Artificial Intelligence Act are still under way, while the United States relies mainly on self-governance by leading generative AI companies, with preliminary legislation still at the planning stage.

BALANCING SECURITY AND DEVELOPMENT

Speaking about the rapid roll-out of generative AI regulation in China, Hilda Li, a partner at Shihui Partners, points to two main reasons. "Because of its technical characteristics, especially its strong interactivity and broad applicability, generative AI affects human society more than many earlier AI technologies did, so the risks it brings are more prominent and more pressing."

Second, "China had already built up something of a 'toolbox' for regulating generative AI. The Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms, the Provisions on the Administration of Algorithm Recommendation for Internet Information Services and the Provisions on the Administration of Deep Synthesis of Internet Information Services all contain institutional designs and concrete practices, such as algorithm registration and security assessment."

Even so, in this brand-new field, Chinese legislation is still feeling its way toward a balance: ensuring the safety of content generated by large models without stifling innovation. Here, too, the Generative AI Measures show a degree of legislative wisdom.

On the one hand, Li notes, when the Generative AI Measures were first released for public comment in April this year, "that version was considerably stricter. We helped a number of leading AI companies submit comments on it, many of which were ultimately adopted, which shows that the regulators value and respect the industry's views."

On the other hand, "if you break the provisions down carefully, you will find that the measures create very few new obligations; for the most part they refine existing legal requirements for the generative AI context," she says.

"What is especially noteworthy is that the word 'interim' appears in the very title of the new rules, something rarely seen in the cyberspace administration's technology legislation in recent years. It shows that the regulators intend to watch a new technology like generative AI closely, and may adjust the rules as the technology develops."

ISOLATING RISK AND RESPONDING QUICKLY

On the first batch of large models widely described in the market as having been "approved" or "licensed" under the Generative AI Measures, Li explains that although the new rules establish a fairly complete regulatory framework, "strictly speaking, the word 'approval' does not appear anywhere in the text. Instead, the measures rely on renvoi provisions: security assessment and algorithm registration for generative AI services are carried out mainly under the Provisions on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities, the algorithm provisions and other existing rules."

Indeed, when the eight companies and institutions mentioned above announced that their products were open to the public, none of them emphasised that their large models had "passed registration."

In practice, beyond the written rules, Li says, "there are also templates and key assessment points that guide companies through the filing work. These contain a large number of prescribed steps: companies must truthfully and thoroughly explain the basic features of their algorithm models to the regulators, and assess and control the risks."

Because the registration and risk assessment are largely carried out by the companies themselves, lawyers familiar with the regulators' thinking in this area also act, to a large extent, as gatekeepers in the process.

Li tells ALB that Shihui currently offers clients in the generative AI field two versions of its advice. "One is the detailed version, which contains detailed checklists and policies and needs to be tailored to the specific circumstances."

The other she calls the simple version: "Companies need to grasp how the regulators judge risk and control it along the two dimensions of policy and technology, by adopting leading technical measures, by putting strict management systems in place, or by doing both. The emergence of risk is not frightening; what is frightening is having no mechanism to isolate the risk and no ability to respond quickly. Compliance should therefore focus on these two points: isolating risk and responding fast."

A KEY ROLE

Under the Generative AI Measures, Li points out, external lawyers can do more than help companies "enter the market" faster and more safely and set out on the road to commercialising large models; they can also play a much more varied role.

"Based on the new rules and on real cases at home and abroad, providers of generative AI services face a wide range of legal issues, such as data protection, intellectual property, content safety and open-source compliance," Li says. Resolving these issues involves many difficulties, such as balancing the interests of multiple parties and deeply understanding the technical solutions and business models, and on many questions regulators, leading companies and academia have yet to reach a consensus.

In this process, lawyers must not only deliver workable solutions but also act as facilitators of communication. "Lawyers need to be deeply involved in discussing and building the rules, while also helping companies accurately understand the legal and regulatory requirements and build risk prevention and control systems."

"Especially given that legislation in this field often sets what might be called 'elevated' requirements, lawyers need to help companies judge the priority of each requirement, as well as the practical feasibility and potential cost of meeting it. In this process, external lawyers are not only legal experts; they must also become industry experts," Li says.

 


CONTROLLED GROWTH

 

The timeframe for a "nascent" technology to significantly influence the public and their daily lives is continually shrinking. Generative artificial intelligence, or generative AI, is a prime contemporary illustration of this trend. With the emergence of new technologies comes the need for corresponding regulations, and in the realm of generative AI, China is unequivocally at the forefront.

 

In late August of this year, eight companies and institutions, including Baidu, ByteDance, and SenseTime, officially launched their large model services to the Chinese public. This marked the initial rollout of large models approved for registration under China’s Interim Administrative Measures for Generative Artificial Intelligence Services, commonly known as the "Generative AI Measures."

At the time of this launch, it had been just two weeks since the Generative AI Measures came into effect and less than six months since the generative AI frenzy captivated the world. China's rapid regulatory response to this burgeoning field is remarkable.

Notably, China holds the distinction of being the first jurisdiction to implement rules governing generative AI. In contrast, key issues in the European Union's draft Artificial Intelligence Act are still being negotiated. Meanwhile, the United States primarily relies on self-regulation by leading generative AI players, and the development of preliminary regulations remains in the planning stage.

BALANCING SECURITY AND DEVELOPMENT

Discussing China's swift implementation of generative AI regulations, Hilda Li, a partner at Shihui Partners, highlights two primary factors. "Generative AI has a greater impact on human society compared to many previous AI technologies, owing to its technical features, notably its robust interactive nature and versatility. Consequently, the associated risks are more acute and pressing."

Li continues, "Moreover, China had already accumulated a regulatory 'toolbox' for generative AI. The existing rules on algorithm recommendation and deep synthesis, among others, contain institutional designs and concrete practices such as algorithm registration and security assessment."

However, in this nascent field, Chinese legislation is still seeking a balance: ensuring the safety of content generated by large models without stifling innovation. The Generative AI Measures reflect legislative wisdom in this regard.

Li explains further, "When the Generative AI Measures were first released for public comment in April this year, that version was notably more stringent. We assisted numerous leading AI companies in providing feedback and suggested revisions, many of which were ultimately adopted, underscoring the regulatory authorities' appreciation and respect for industry insights."

Furthermore, Li adds, "Upon closer examination of the provisions, one can observe that the Generative AI Measures introduced very few new obligations. The majority of provisions refine existing legal frameworks within the generative AI domain. Particularly intriguing is the inclusion of the word 'interim' in the title of the measures. This is relatively rare in the recent science and technology legislation adopted by the national cyberspace administration. It signifies the regulatory authorities' stance of vigilant monitoring concerning emerging technologies like generative AI, suggesting potential adjustments to the measures as the technology evolves."

ISOLATING RISK AND RESPONDING QUICKLY

Regarding discussions in the market about the initial set of large models that have obtained "licenses" under the Generative AI Measures, Li provides insights. She notes that while the measures have established a robust regulatory framework, "strictly speaking, the term 'examination and approval' is not explicitly used in the text. Instead, a 'renvoi' approach is taken. Essentially, the security assessment and algorithm registration for generative AI primarily adhere to the Provisions on Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilization Capabilities and the Provisions on Algorithms, among others."

Indeed, when the eight companies and institutions mentioned above made their products publicly accessible, none explicitly highlighted that their large models had "passed registration."

Li also mentions that besides the regulatory provisions, "there are also templates and key assessment points that guide companies through the filing process. These cover a large number of prescribed steps: companies must truthfully and comprehensively explain the fundamentals of their algorithm models to the regulatory authorities, and assess and control the associated risks."

As enterprises themselves primarily conduct the registration and risk assessment processes, legal experts well-versed in regulatory approaches within this domain act as significant gatekeepers throughout this procedure.

Li further explains that Shihui Partners currently offers two versions of advice to clients in the generative AI field. "One is a comprehensive version featuring detailed checklists and internal policies. This version needs to be customised based on specific circumstances."

The other version, referred to as the "simplified version," requires enterprises to grasp how regulatory authorities judge risks and then to control them along two dimensions, policy and technology: adopting leading technical measures, strict management systems, or both. Li emphasises, "Having risks is not alarming. What is concerning is not having any mechanism to isolate risks and respond swiftly. Therefore, compliance efforts should focus on risk isolation and prompt responses."

KEY ROLE PLAYED BY LAWYERS

Within the framework of the Generative AI Measures, external lawyers can do more than help companies enter the market swiftly and securely and begin commercialising their large models; they can also play a far more varied role.

"Based on these new measures and real-world use cases both domestically and internationally, providers of generative AI services will confront numerous legal challenges, including data protection, intellectual property, content safety, and open-source compliance," says Li. Addressing these challenges entails various complexities, such as balancing the interests of multiple stakeholders and possessing a deep understanding of technical solutions and business models. Furthermore, many of these issues currently lack a consensus among regulatory authorities, industry leaders, and academia.

In this process, lawyers need not only to offer practical solutions but also to facilitate communication and knowledge exchange. "Lawyers should actively participate in the discussion and formulation of the rules. They should also guide companies in accurately understanding legal and regulatory requirements and assist them in building risk prevention and control systems."

"Given that sector-specific legislation often imposes what we call 'elevated' requirements," Li continues, "lawyers must help companies prioritise the various requirements and assess the practical feasibility and potential costs of complying with them. Throughout this journey, external lawyers transcend their role as legal experts and must evolve into industry experts."

 
