Monika Malik: On Kazakhstan’s AI Law
23.11.2025, 17:37, by Spik.kz.
The adoption of the AI law in Kazakhstan marks the beginning of a new phase in the formation of the country’s national cognitive infrastructure. The world is moving toward a stratification of states based on their ability to manage models and data. In this context, legislation that creates frameworks for the sovereign development of AI platforms becomes the foundation of intellectual capital. Such frameworks must account for data-protection mechanisms and the implementation of ethical standards. Experts note that Kazakhstan is only now entering the global contest for cognitive sovereignty, where the stakes are measured not in resources but in the ability to create and manage the algorithmic architectures of the future.

Our editorial team reached out to Monika Malik, Lead Data/AI Engineer at AT&T, one of the largest players in the sector. In her commentary for SPIK.KZ, Monika emphasized that the Kazakhstani law establishes clear roles and responsibilities for owners, users, and operators of AI systems, introduces prohibitions on manipulative and unlawful practices, and formalizes requirements for labeling synthetic content. In her view, these measures lay the foundation for trust; however, the law’s true effectiveness will depend on secondary rules, audit procedures, and transparent practical implementation.
Experts have already observed that, in the emerging legal field of AI, control over cognitive infrastructure and local LLMs plays a key role. From this perspective, Kazakhstan’s AI law becomes not merely a regulatory act but a tool for the country’s strategic positioning within the global architecture of future intelligent ecosystems. As Malik notes, the law’s potential will only be realized in combination with the development of human capital, engineering competencies, and open data-governance institutions.
Monika summarized several key observations about the AI law adopted in our country. She noted that it performs well in several areas, highlighting the clear allocation of roles and responsibility among owners, holders, and users – a prerequisite for real-world legal and operational accountability. She stressed that ethical guardrails, including specific prohibitions on manipulative or subliminal methods and on unlawful personal-data processing, together with principles of explainability and transparency, provide a solid basis for trust. Monika also pointed to the mandatory labeling of AI-generated content as a practical measure against deepfakes and disinformation. She sees the creation of a National AI Platform as critical for establishing a sovereign environment for developing, training, and piloting models. Additionally, she noted that data-protection measures, such as stricter consent and withdrawal rights, align AI use with existing privacy standards.
At the same time, Monika identified the principal risks and potential downsides. She warned that the law’s success hinges on enforceability through secondary regulations: without implementing rules, audits, and penalties, its provisions may remain symbolic. Broad bans on manipulative technologies may inadvertently hinder legitimate UX patterns, assistive features, and research. Large-scale deepfake labeling requires clearly defined detection, provenance, and takedown processes to prevent circumvention. She also noted that the distribution of responsibility can be burdensome for small and medium-sized enterprises, and that a national platform without interoperability and portability rules may create dependency on external vendors. Finally, she emphasized the need for clarity on copyright in AI-generated content to avoid litigation risks.
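To make the labeling and provenance point concrete, below is a minimal sketch of how a generator might attach a verifiable “AI-generated” manifest to synthetic media. It is purely illustrative: the manifest fields, the shared HMAC key, and the function names are our assumptions, not anything specified by the law or by Malik; production systems would typically use public-key signatures and emerging provenance standards such as C2PA.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; a real deployment would use asymmetric keys.
SIGNING_KEY = b"demo-key-not-for-production"

def label_synthetic_content(media_bytes: bytes, generator_id: str) -> dict:
    """Attach a signed provenance manifest declaring the content AI-generated."""
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator_id,
        "ai_generated": True,  # the disclosure the labeling rule targets
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(media_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest matches the media and its signature is intact."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest.get("signature", ""), expected):
        return False  # manifest was tampered with
    return unsigned["content_sha256"] == hashlib.sha256(media_bytes).hexdigest()

if __name__ == "__main__":
    media = b"...synthetic image bytes..."
    manifest = label_synthetic_content(media, "example-model-v1")
    print(verify_label(media, manifest))         # True: label intact
    print(verify_label(media + b"x", manifest))  # False: content altered
```

The verification step illustrates why provenance matters more than a visible watermark: any alteration of the media or the manifest breaks the check, which is what makes takedown decisions auditable rather than discretionary.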
Regarding likely consequences, Monika highlighted that government and regulated sectors can act faster once explainability and labeling requirements are formalized. Neutral and auditable access to the National AI Platform can anchor local LLMs, safety evaluation hubs, and public-service agents; if access is limited, the platform risks becoming a bottleneck. Monika also drew attention to the “public-trust dividend,” stressing that clear prohibitions and labeling will only be effective if complaint procedures, redress mechanisms, and penalties are transparent and swift.
She described what “good” implementation should look like: issuing technical standards within 90 days; establishing an independent AI Safety/Eval Hub, a center for assessing and testing AI models with open benchmarks and red-team reports; and ensuring toolchain interoperability, API portability, and escrow of critical models used in public services. She also emphasized the need for proportionate compliance measures, including safe harbors for SMEs, a comprehensive stack for responding to deepfakes, and alignment with international standards such as the NIST AI RMF and ISO/IEC 42001, noting that Kazakhstan is deliberately aligning its AI governance with global practice.
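To illustrate what an open, auditable evaluation harness at such a hub might involve, here is a minimal sketch under our own assumptions: the BenchmarkCase structure, exact-match scoring, and the toy model are hypothetical, and a real hub would run published benchmark suites against locally hosted LLMs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BenchmarkCase:
    prompt: str
    expected: str  # reference answer for exact-match scoring

# A "model" here is just a callable from prompt to answer, so any local LLM
# behind any API can be plugged in without vendor lock-in.
Model = Callable[[str], str]

def run_benchmark(model: Model, cases: list[BenchmarkCase]) -> dict:
    """Score a model on open test cases and return a publishable report."""
    results = []
    for case in cases:
        answer = model(case.prompt)
        results.append({
            "prompt": case.prompt,
            "answer": answer,
            "passed": answer.strip() == case.expected,
        })
    passed = sum(r["passed"] for r in results)
    return {"score": passed / len(results), "cases": results}

if __name__ == "__main__":
    # Toy stand-in; a real hub would call a locally hosted LLM endpoint.
    echo_model: Model = lambda prompt: "42" if "answer" in prompt else "unknown"
    report = run_benchmark(echo_model, [BenchmarkCase("What is the answer?", "42")])
    print(report["score"])  # 1.0
```

Treating a model as a plain prompt-to-answer callable is one way to keep such a harness vendor-neutral: any local LLM can be evaluated with the same publishable report, which is the interoperability property Malik argues the platform needs.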
Monika concluded that the law represents a high-signal framework with real potential, but its authority now depends on strict secondary rules, neutral platform access, and visible enforcement. Deepfake labeling, she noted, is necessary but insufficient without rapid takedown procedures and provenance verification. The National AI Platform, in her view, can serve either as a springboard for local LLMs or as a single point of failure, depending on the interoperability rules adopted. The real challenge, she emphasized, is not passing the law but operationalizing it.
by Rafael Balgin
