Labeling, so that AI-generated content "reveals its identity"

  【Economic Interface】

Guangming Daily reporter Liu Kun Guangming Daily correspondent Liu Suwen

In the dead of night, some people chat with it online for comfort; when selling products via livestream, some use it to quickly generate marketing copy; at class reunions, some use it to compose poems on the spot... It is what many people call a "good helper in life" and "a good partner at work": the large artificial intelligence (AI) model.

Recently, a number of large-model products have emerged and swept the world, setting off another wave of enthusiasm for artificial intelligence. At the same time, however, some people use AI to generate and synthesize fake content to attract attention and earn traffic, or to pass falsehood off as truth, spread rumors, and commit fraud. This has brought risks and hidden dangers, causing social concern.

Recently, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration jointly issued the Measures for Labeling AI-Generated and Synthetic Content (hereinafter, the "Measures"). The Measures focus on the key point of labeling AI-generated and synthetic content: using labels to remind users to identify false information, clarifying the labeling responsibilities and obligations of the relevant service entities, and standardizing labeling practices at every stage of content production and dissemination. They take effect on September 1, 2025.

How can AI-generated synthetic content be made to "reveal its identity" so that it is no longer "hard to tell real from fake"? How can the challenges of AI security governance be solved? The reporter investigated.

A humanoid robot on display at the "Artificial Intelligence +" Innovation and Development Promotion Conference in Suzhou, Jiangsu Province. Photo: Xinhua News Agency

Artificial intelligence synthesis makes it difficult to distinguish between the real and the fake

Kittens and puppies dance to music; a "cute toddler" spars with a rooster at home... People come across such videos on short-video platforms from time to time. In fact, some of these widely shared and liked "magical videos" are not real, but were generated and synthesized by artificial intelligence.

So-called AI-generated synthetic content refers to text, pictures, audio, video, virtual scenes, and other information generated or synthesized using artificial intelligence technology.

In recent years, the rapid development of artificial intelligence has provided convenient tools for generating and synthesizing text, pictures, audio, video, and other information, and massive amounts of such content can be quickly produced and disseminated on online platforms. This promotes economic development, enriches online content, and makes public life more convenient; but it also brings problems such as the abuse of generative synthesis technology and damage to the online ecosystem, attracting attention from all sectors of society.

"Some viewers who like the film and television dramas I participated in were deceived very badly by the AI face-swapping video, which is very bad." One actor said he hoped better norms could be established. In fact, many public figures have experienced similar problems.

Data from the Ministry of Public Security show that nearly 100 "AI face-swapping" fraud cases have occurred nationwide over the past two years, causing cumulative economic losses of up to 200 million yuan.

"The increasing realism of AI-generated synthetic content has also given rise to new security risks such as the spread of false news, identity impersonation, and malicious content, and has weakened the public's trust in online content." Chen Chun, academician of the Chinese Academy of Engineering and professor of Zhejiang University, said.

Liu Quan, deputy chief engineer of the China Electronic Information Industry Development Research Institute, believes that AI models themselves suffer from a lack of explainability. The operating mechanism of large models is opaque, and the resulting "black box" nature makes them unexplainable. This lowers their credibility and means their output may contain factual errors and biases, making it hard to meet the strict reliability requirements of fields such as medicine and finance.

Liu Xiaochun, director of the Internet Rule of Law Research Center at the University of Chinese Academy of Social Sciences, said that generative AI technology has lowered the time cost of "forging" and "faking" content, especially pictures, audio, and video. This challenges the order of cyberspace, including content governance, and places newer and higher demands on governance tools, approaches, and methods.

According to the data, by the end of 238, a total of 0 generative AI services had been filed with the Cyberspace Administration of China, of which 0 were newly filed that year.

"At present, large models can generate high-fidelity text, portraits, scenes, and audio, and it is often difficult for ordinary people to distinguish the authenticity of content without the help of detection tools." Cao Juan, director and researcher of the Digital Content Synthesis and Forgery Detection Laboratory of the Institute of Computing Technology of the Chinese Academy of Sciences, believes that with the application of generative artificial intelligence technology in various industries, the scale of data generated by synthetic content is growing rapidly, and the cost of comprehensive detection of generated content is too high. The application scenarios for generating synthetic content have been greatly expanded, which has contributed to the diversification of deception scenarios.

At a science and technology festival held at a middle school in Qingdao, Shandong Province, students watch a robot dog performance. Guangming Pictures / Photo by Zhang Ying

Build trustworthy AI technology

The introduction of the Measures is an important step for China in promoting security governance of artificial intelligence, fostering healthy industry norms, and guiding technology toward good, and marks a key step in building a safe and trustworthy ecosystem for generative AI.

According to the relevant official of the Cyberspace Administration of China, the Measures focus on answering "which content is generated," "who generated it," and "where it was generated," promote security management across the whole process from generation to dissemination, and strive to build trustworthy AI technology.

For service providers, the Measures specify that explicit labels shall be added to generated and synthetic content such as text, audio, pictures, videos, and virtual scenes; that when offering functions such as downloading, copying, or exporting such content, providers shall ensure the files contain compliant explicit labels; and that an implicit identifier shall be added to the metadata of files containing generated synthetic content.
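The paragraph above distinguishes explicit labels, which are visible in the content itself, from implicit identifiers embedded in file metadata. As a minimal sketch of the latter idea, assuming hypothetical field names ("AIGC", "provider", "content_sha256") rather than the official format defined by the accompanying national standard, a provider might record who generated a file and a hash for traceability:

```python
# Illustrative sketch only: field names here are assumptions, not the
# official metadata format specified by the Measures or the national standard.
import hashlib
import json

def make_implicit_label(content: bytes, provider: str) -> str:
    """Return a JSON metadata record marking content as AI-generated."""
    return json.dumps(
        {
            "AIGC": True,  # flag: this content was generated/synthesized by AI
            "provider": provider,  # the service that generated it
            # a content hash supports tracing a file back to its generation
            "content_sha256": hashlib.sha256(content).hexdigest(),
        },
        sort_keys=True,
    )

# A platform receiving the file could parse this record to decide whether
# the content must carry an explicit label when redistributed.
label = make_implicit_label(b"example generated image bytes", "ExampleAI")
print(json.loads(label)["AIGC"])  # True
```

Embedding such a record in metadata (for example, a PNG text chunk or MP4 atom) keeps the label invisible to viewers while remaining machine-readable for platforms and regulators.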

For internet application distribution platforms, the Measures propose that when an app is listed or undergoes online review, the platform shall require the internet application service provider to state whether it offers AI generation and synthesis services, and shall verify the materials related to the labeling of generated synthetic content.

For users, the Measures propose that anyone who publishes generated synthetic content through an online information dissemination service shall proactively declare it and use the labeling function provided by the service provider.

In addition, no organization or individual may maliciously delete, tamper with, forge, or conceal the generated-content labels provided for in the Measures; provide tools or services for others to carry out such malicious conduct; or harm the lawful rights and interests of others through improper labeling.

"The Measures improve security at a reasonable cost, promote the accelerated implementation of artificial intelligence in various application scenarios such as text dialogue, content production, and auxiliary design, and at the same time reduce the harm of abuse of artificial intelligence generation and synthesis technology, prevent the use of artificial intelligence technology to produce and disseminate false information and other risky behaviors, and promote the healthy and orderly development of artificial intelligence." The person in charge said.

Zhang Linghan, a professor at the Institute of Data Rule of Law at China University of Political Science and Law, believes the labeling system for AI-generated content has unique institutional value: labels can effectively distinguish AI-generated synthetic information and prevent the spread and use of false information; labels help users quickly understand the attributes or parameters of generative AI products and services; and labels assist regulators in evaluating and tracing generative AI products or services, promoting the legal and compliant development of AI-generated content.

Zhang Linghan said that the current labeling system for AI-generated synthetic content focuses mainly on the formal judgment of "whether it is machine-generated." As labeling technology advances, the system can take on richer functions, gradually shifting from that formal judgment to the quality judgment of "whether it is reliable enough," further promoting the healthy development of the industry.

Security governance enters the "deep water area"

The key to good laws and good governance lies in implementation.

To promote implementation of the Measures, the mandatory national standard "Cybersecurity Technology - Labeling Methods for AI-Generated and Synthetic Content" was released at the same time, to better guide relevant entities in standardizing their labeling activities.

"At present, AI security governance has moved from hazard discussion to the deep water area of actual law enforcement." Cao Juan said that the iterative update of large model generation technology is very fast, and new milestone models will appear in an average of two months, so it is important to improve the generalization ability of new forgery methods, and it is necessary to abandon the afterthought of "one shot" and build an AI forgery base model to improve the generalization and applicability of the forgery detection model.

Cao Juan added that precise anti-forgery countermeasures should be studied to guard against malicious evasion. At the same time, the impact on the dissemination of harmless generated content should be minimized, balancing the development and governance of generative applications; nationally accessible forgery-detection tools should be built so that everyone can use detection.

"It is foreseeable that China's artificial intelligence security law enforcement will continue the direction of supervision of key matters and promote the orderly development of the industry." Jin Bo, deputy director of the Third Research Institute of the Ministry of Public Security, said that with the gradual organic connection between logo management and algorithm filing, security assessment and other mechanisms, the compliance of generated synthetic content identification may become a key focus area for relevant departments to carry out artificial intelligence supervision and inspection, and special actions. In this process, how to balance development and security, innovation and responsibility, and improve the professional, refined, and intelligent level of law enforcement, so as to cultivate a safe, open, and fair ecological environment for the artificial intelligence industry, still needs to be explored in depth.

"In addition, the 'human factor' should be integrated into the whole process of AI identification management." Jin Bo said that it is necessary to focus on improving the public's ability to evaluate the authenticity of information content and source traceability, actively cultivate the public's artificial intelligence literacy, and ensure the inclusive sharing of artificial intelligence technology achievements.

In Chen Chun's view, implementing the Measures requires cohesion and coordination across society. Local authorities, universities, research institutions, and enterprises can participate together to form a multi-point collaborative content governance network with labeling as its starting point, steadily advancing the labeling work. In addition, a national public service platform for content labeling should be established, using visual, interactive, hands-on demonstrations to deepen public and industry understanding.

"Generative AI has led to the evolution of information technology from an ordinary tool to a highly intelligent tool of the 'thinking partner' type, bringing disruptive development opportunities, and it is bound to become an essential skill for us in the future, which requires us to understand and manage AI correctly when using it." Chen Chun said.

Guangming Daily (March 27, 2025, page 15)

Source: Guangming Network - "Guangming Daily"
