Can Canadian Culture Survive the Age of AI Slop?

Vass Bednar | The Walrus | January 6, 2026

Have you heard Solomon Ray’s new album Faithful Soul? It’s number one on the gospel charts—and entirely AI-generated, just like the musical artist behind it.

The idea that a hit Spotify artist might not be human reads like a satire of the attention economy itself: an ecosystem once based on authenticity and connection, now topped by a synthetic voice engineered for maximum uplift. What does “soul” even mean when it’s made by software trained on real music?

In a year when other “ghost artists,” like the Velvet Sundown, also made headlines, Canada is being forced to rethink an old problem. The fight is about platforms, algorithms, and the ever-hazy question of Canadian content—or CanCon. Traditionally, CanCon policy has been about ensuring the ongoing survival of compelling, high-quality works by Canadian creators. The phrase “Made by Canadians” has been a guiding principle.

CanCon emerged during a distinctly sovereignty-driven era. In the 1960s and ’70s, Ottawa worried that broadcasters, studios, and cultural products from the United States were overwhelming Canada’s airwaves and shaping Canadian identity. CanCon quotas, the Canadian Radio-television and Telecommunications Commission, the Broadcasting Act, Telefilm, and the Canadian Broadcasting Corporation were all established as tools of self-determination. They were designed to ensure Canadian stories had space to exist and compete inside a market where foreign players, especially US networks, held disproportionate control. Today, the threat is foreign AI models—and a flood of synthetic media that collapses the very meaning of “Canadian” in the first place.

A few weeks ago, the federal broadcast regulator, the CRTC, released a new definition of CanCon. It says that humans, not AI, must occupy key creative roles in a production to meet the requirements. That makes sense. But something more foundational requires parsing: the composition of the content itself. Before we can decide what qualifies as “Canadian,” we need to be sure that what we hear, watch, or read is actually human-made.

The past month brought a wave of stories reminding us just how impossible that’s becoming: AI-generated pop songs topping Billboard charts, “pitch perfect” AI tracks celebrated by the BBC, The Local uncovering an alleged journalist-impersonation scam powered by AI, and that viral Bloomberg column begging Spotify to “stop the slop” before it reaches our ears.

People are inclined to reject fake content, and a recent survey found that Canadians are increasingly “concerned” about AI-generated material. Consumers appear willing to revolt.

So, will the market just magically correct for the infusion of computer-made material in our feeds? In some corners, the wind is blowing that way. YouTube announced this past summer that it would not monetize content generated by AI, while Vine is relaunching with a proudly no-AI policy. Some platforms, in other words, are beginning to draw real boundaries.

However, in other places—like Spotify—the distinction is non-existent. The music-streaming platform has no obligation, and currently no ability, to guarantee that your favourite song was sung by an actual person. Meanwhile, music labels are striking deals with AI-first streaming platforms, and mountains of slopified songs are dominating TikTok. Pitchfork is already calling it a crisis.

This slopification parallels the “firehose of falsehood” method of spreading disinformation: overwhelm the system with vast quantities of low-quality synthetic material so authenticity becomes impossible to discern, and platforms default to whatever is cheapest and most scalable. Worse: public trust erodes, not just in the content itself but in the institutions tasked with curating it.

An optimist may argue that the free market will correct this. Audiences will demand human-made work, interventions—like labels—will differentiate material, and platforms will adapt. But market discipline only works when consumers can make informed choices, and that requires a level of disclosure the system now actively withholds.

Closing that information gap has become an active question for regulators. Quebec, through its Privacy Act, is currently the only province in Canada that requires public agencies to disclose the use of AI in their decision-making processes. In the United Kingdom, government agencies post their disclosures on a public hub and complete transparency reports. The Organisation for Economic Co-operation and Development’s recent report on AI usage in core government functions similarly encourages governments to act openly and with due regard for the public good.

Other jurisdictions are recognizing that the absence of disclosure is a consumer protection issue. California’s landmark Generative Artificial Intelligence: Training Data Transparency Act went into effect on January 1, 2026, requiring developers of generative AI systems to publish the sources and ownership of their datasets; whether those datasets include copyrighted works or consumer information; whether any synthetic data was used; and how the data supports the model’s purpose. Part of a broader push from California to govern AI’s underlying systems, the law aims to overcome the “black box” problem by making those inputs transparent.

But labels also matter for the average internet user. The European Union’s labelling requirements come into force in August 2026. With the goal of enhancing transparency and preventing deception, Article 50 of the EU’s AI Act requires tagging content created or manipulated by AI systems that could be perceived as real or human-made, including text, images, voices, and videos.

In Massachusetts, the Artificial Intelligence Disclosure Act, introduced in February 2025, would require a “clear and conspicuous notice” of AI involvement. Several US states, including Pennsylvania, are considering similar legislation. Georgia has proposed a law requiring disclosures whenever the technology is used in advertising or commerce.

If Canada wants its cultural policy to survive the age of slop, it will have to insist that what claims to be human—and Canadian—be verified as such. Sovereignty, in this context, is not just about protecting domestic production from foreign influence; it’s about preserving the conditions under which authorship by someone with a past and a place still matters. Otherwise, “Canadian content” risks becoming as hollow a category as an AI gospel song climbing the charts—convincing, uplifting, but ultimately empty.

Adapted from “The National Interest” newsletter, with permission of The Canadian SHIELD Institute.
