South Korea’s KMCC Demands X Implement Minor‑Protection Safeguards for Grok AI
KMCC Issues Formal Request to X for Minor Safeguards
On Wednesday, the Korea Media and Communications Commission (KMCC) formally asked X to install protective measures against sexual content generated by the Grok AI model, targeting teenage users. The move reflects heightened regulatory concern over AI-generated deepfake sexual material. The KMCC cited existing statutes that require social networks to appoint a minor-protection officer and submit annual compliance reports [1].
Legal Framework Mandates Dedicated Minor-Protection Officer
South Korean law obliges platforms to designate a person responsible for safeguarding minors and to file yearly reports on protective actions; separately, creating, distributing, or storing non-consensual sexual deepfakes carries criminal penalties. The KMCC's demand aligns with these duties, reinforcing the regulator's enforcement agenda [1].
Chair Emphasizes Balance Between Safety and Innovation
KMCC chair Kim Jong-cheol framed the initiative as part of a dual strategy: fostering safe technological development while curbing harmful side effects. He argued that protecting minors does not conflict with encouraging AI advancement, positioning the regulator as both guardian and promoter of innovation [1].
No Deadline or Enforcement Mechanism Specified
The KMCC release outlines expectations for safeguards but omits a concrete timeline for X's compliance, and it gives no details on how violations would be monitored or penalized. This ambiguity leaves X's obligations open to interpretation pending further regulatory guidance [1].
Timeline
Dec 16, 2025 – At a parliamentary confirmation hearing, Kim Jong-cheol, a Yonsei Law School professor nominated to lead the Korea Media and Communications Commission (KMCC), declares that "it is absolutely necessary" to consider age-restriction policies for teenage social-media use, citing Australia's recent ban on platforms for users under 16. He pledges a strong commitment to youth protection, promises to strengthen AI-focused dispute-resolution mechanisms, and vows to promote AI adoption for national competitiveness. The KMCC later clarifies that his remarks do not signal an immediate ban but rather an exploration of options such as parental-consent requirements, with Australia's policy as a model [2][3].
Jan 14, 2026 – The KMCC formally requests that X install minor-protection safeguards for content generated by the Grok AI model, demanding measures to block sexual deepfake material and to limit teenage access. The commission reminds X of its legal duty to appoint a minor-protection officer and submit annual reports, and notes the criminal penalties for creating or distributing non-consensual sexual deepfakes. The request outlines expectations but provides no specific deadline or enforcement mechanism, underscoring the regulator's push to balance innovation with child safety [1].