How Foreign-Controlled AI Systems Are Using Protected Content at Scale
The Chilean government’s proposed copyright law, which would allow large artificial intelligence systems to use protected content without restriction, has reignited debate over intellectual property rights in the age of AI, as similar measures face scrutiny in other countries.
The proposal, initially advanced during President Gabriel Boric’s administration and now being replicated by the Kast government, permits AI developers—many of whom are foreign corporations—to train models on copyrighted material available online without requiring licenses or compensating rights holders.
Critics argue that such provisions undermine creators’ rights and could enable widespread exploitation of artistic, journalistic, and academic work by AI systems, particularly those developed outside Chile and deployed globally.
Public opposition to the measure has grown, with artists, writers, publishers, and digital rights organizations warning that the law would create a legal loophole allowing AI firms to bypass copyright protections while profiting from content they did not create or authorize.
The controversy echoes broader international debates about how copyright law should apply to AI training data. In the United States and the European Union, policymakers are grappling with similar questions, weighing the need for access to data against the protection of intellectual property.
According to analysis from the Information Technology and Innovation Foundation (ITIF), released in March 2026, rules governing the use of publicly available data are shaping the future of AI development: some jurisdictions favor permissive approaches that allow training on web-scraped content, while others advocate stricter controls to prevent misuse.
ITIF recommends that regulators focus on AI outputs rather than training inputs, promote transparency for autonomous AI systems, and establish safe harbors for responsible use of public data to balance innovation with accountability.
Meanwhile, concerns about AI-driven influence operations have intensified, particularly regarding foreign actors using generative AI to produce and distribute disinformation at scale. Reports from September 2025 documented China-linked accounts leveraging AI for propaganda campaigns.
These developments underscore the growing tension between enabling AI innovation and safeguarding legal, ethical, and democratic norms in digital ecosystems, especially as governments consider legislation that could significantly alter how AI systems access and use existing content.
