NEED-TO-KNOW
LEGAL ISSUES IN ARTIFICIAL INTELLIGENCE

1. OVERVIEW

  • Date of promulgation: The Law on Artificial Intelligence (Law No. 134/2025/QH15) was ratified by the National Assembly on December 10, 2025, and officially takes effect on March 01, 2026.
  • Scope of regulation: The Law regulates the research, development, provision, deployment, and use of artificial intelligence (hereinafter referred to as “AI”) systems; the rights and obligations of relevant organizations and individuals; and the state management of AI activities in Vietnam. The Law applies to Vietnamese authorities, organizations, and individuals, as well as foreign entities participating in AI activities in Vietnam. It does not apply to AI activities serving only national defense, security, and cipher activities.
  • Transitional provisions: For AI systems put into operation before March 01, 2026, providers and deployers are responsible for fulfilling compliance obligations within the following time limits from the effective date of the Law: 18 months for AI systems in the fields of health, education, and finance; 12 months for other AI systems. During these periods, the systems may continue to operate unless state management authorities determine a risk of causing serious damage and request the suspension or termination of operations.

2. KEY ISSUES TO NOTE

Firstly, Risk-based Classification and Management

  • Main Content and Legal Basis: According to Article 9, AI systems are classified into three risk levels: high, medium, and low. High-risk AI systems are those that cause or pose a risk of significant harm to life, health, legitimate rights and interests of organizations and individuals, national interests, public interests, or national security.
  • Conditions, Procedures and Obligations: Pursuant to Article 10 and Article 14, providers must self-classify AI systems before putting them into operation. For medium-risk and high-risk systems, providers must prepare classification dossiers and notify the Ministry of Science and Technology of the classification results via the single-window website on AI before operation. In particular, high-risk AI systems must undergo conformity evaluation before operation; establish and maintain risk management measures; archive technical dossiers and activity logs; and ensure human oversight and intervention capabilities. Foreign providers of high-risk AI systems in Vietnam must have a legal contact point; where the system is subject to mandatory conformity certification, they must have a commercial presence or an authorized representative in Vietnam.
  • Relevant Entities: Enterprises and organizations acting as providers (those who supply systems to the market) and deployers (those who use systems to provide services) should give priority to these provisions in order to perform classification and maintain continuous conformity of their systems.

Secondly, Transparency Obligations and Prohibited Activities

  • Main Content and Legal Basis: Detailed provisions are set out in Article 11 regarding transparency responsibility and Article 7 regarding prohibited activities in AI activities.
  • Conditions, Procedures and Obligations: Providers must ensure that AI systems interacting directly with humans are designed and operated so that users can recognize when they are interacting with an AI system. Audio, images, and videos generated or edited by AI systems to simulate or imitate the appearance or voice of real persons (such as deepfakes), or to reenact actual events, must be marked in a machine-readable format and labeled clearly to distinguish them from human-made content and avoid confusion. In addition, the Law strictly prohibits developing, providing, deploying, or using AI systems to deceive or manipulate human perception and behavior, or to exploit the vulnerabilities of vulnerable groups (including children, the elderly, persons with disabilities, etc.). It is also prohibited to collect, process, or use data to train AI systems in violation of the laws on personal data protection, or to conceal information that must be disclosed, made transparent, or explained.
  • Relevant Entities: Developers, digital content solution providers, and application platforms that interact directly with users need to build labeling and notification features as well as automated content moderation systems.
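From an engineering standpoint, the machine-readable marking obligation above could be met with a metadata label attached to each AI-generated file. The sketch below is purely illustrative: the Law requires a machine-readable mark but does not prescribe a schema, so every field name here is an assumption.

```python
import hashlib
import json


def make_ai_content_label(path: str, data: bytes, generator: str) -> str:
    """Build an illustrative machine-readable label for AI-generated media.

    The schema is hypothetical: the Law mandates machine-readable marking
    but does not itself define field names or a file format.
    """
    label = {
        "ai_generated": True,  # flag distinguishing the content from human-made media
        "source_file": path,
        "sha256": hashlib.sha256(data).hexdigest(),  # bind the label to the exact bytes
        "generator": generator,  # which AI system produced or edited the content
    }
    return json.dumps(label, ensure_ascii=False, sort_keys=True)


# Usage: write the label next to the media file, e.g. as video.mp4.ai-label.json
label_json = make_ai_content_label("video.mp4", b"<media bytes>", "example-model-v1")
```

In practice, a standardized provenance format (for example, embedded content-credential manifests) would likely be preferable to a sidecar file, but the hash-bound JSON above shows the minimum: a machine-readable flag tied verifiably to the content it describes.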

Thirdly, AI Regulatory Sandbox

  • Main Content and Legal Basis: Article 21 regulates the AI regulatory sandbox to encourage innovation.
  • Conditions, Procedures and Obligations: The sandbox is conducted under the supervision of competent state authorities, which are responsible for receiving and appraising dossiers in accordance with fast appraisal and response procedures. Authorities have the power to decide on the suspension or termination of the sandbox if risks to security, rights, or legitimate interests of organizations and individuals are detected. The results from the sandbox serve as a critical basis for the State to consider the recognition of conformity evaluation results, or the exemption, reduction, or adjustment of obligations prescribed in the Law.
  • Relevant Entities: Digital technology enterprises and startups developing new AI products and services that require a controlled, real-world environment for research, production, and commercialization.

Fourthly, Incident Management

  • Main Content and Legal Basis: Article 12 stipulates the responsibilities for reporting and handling serious incidents, which are events occurring during the operation of an AI system that cause or pose a risk of significant harm to life, health, human rights, property, cybersecurity, etc.
  • Conditions, Procedures and Obligations: When a serious incident occurs, developers and providers must promptly apply technical measures to fix, suspend, or recall the system, and notify competent authorities. Deployers and users are obligated to promptly record and notify incidents and cooperate in the fixing process. The entire process of reporting and handling incidents must be conducted via the single-window website on AI.
  • Relevant Entities: Developers, providers, deployers, and users all bear responsibility for ensuring security and reliability, with specific obligations assigned to each entity according to its role in the incident response process.

In summary, to ensure compliance, relevant organizations and individuals may need to perform the following:

  • Review, evaluate, and self-classify the risk levels (high, medium, and low) for AI systems currently under development or being provided to the market.
  • Prepare notification procedures for medium-risk and high-risk AI systems; conduct conformity evaluation for high-risk AI systems.
  • Implement labeling mechanisms and mark content in a machine-readable format for AI-generated outputs to fulfill transparency responsibilities.
  • Establish adjustment plans for existing AI systems within the transitional period (12 to 18 months starting from March 01, 2026).
  • Develop internal procedures for archiving technical dossiers and activity logs, data management, and establishing incident response scenarios and online reporting via the single-window website on AI when serious incidents occur.
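The activity-log archiving obligation in the last point above can be pictured as an append-only, timestamped event log. The sketch below is a minimal illustration; the Law requires archiving activity logs but does not prescribe any format, so the class, field names, and event types are all hypothetical.

```python
import json
from datetime import datetime, timezone


class ActivityLog:
    """Minimal append-only activity log for an AI system.

    Illustrative only: the Law mandates archiving activity logs but does
    not define a record structure. Real deployments would persist entries
    to durable, tamper-evident storage rather than an in-memory list.
    """

    def __init__(self) -> None:
        self.records: list[str] = []  # one JSON line per event

    def record(self, event: str, detail: dict) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),  # UTC timestamp per entry
            "event": event,
            "detail": detail,
        }
        self.records.append(json.dumps(entry, ensure_ascii=False))


# Usage: log routine operation and a (hypothetical) serious incident
log = ActivityLog()
log.record("inference", {"model": "example-model-v1", "input_id": "req-001"})
log.record("incident", {"severity": "serious", "summary": "unexpected output"})
```

Keeping incidents in the same chronologically ordered log as routine events makes it straightforward to reconstruct what the system was doing when an incident occurred, which is what an authority reviewing a report via the single-window website would need.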
