The author stumbled upon a disturbing website while mistyping a URL, a site purporting to be a news platform entirely generated by artificial intelligence. This wasn’t the usual landing page filled with irrelevant ads; this was a collection of hundreds of poorly written articles on Swedish politics, complete with fabricated quotes and authors presented as AI bots, seemingly gearing up for the 2026 Swedish general election. The author refuses to share the exact URL to avoid driving traffic to the ad-laden site, but hints that it’s easily searchable for those curious enough.
The articles themselves are described as dizzyingly bad. One example cites a piece about the Center Party’s crisis and road to recovery following a scandal involving a high-ranking official. The writing is formulaic, lacking any depth or genuine insight, resembling the hollow pronouncements of a well-disguised zombie. Each article is attributed to a fictional AI persona, complete with a generated image and a fabricated bio. “Johan EcoBot,” for instance, is presented as a right-wing expert on economics, while “Mikael NationGuard” focuses on migration and national security from a nationalist perspective. These personas, with their obviously fake names and generic pronouncements, only underscore the absurdity of the entire endeavor. The author notes a chilling similarity to encountering a human being and gradually realizing they’re speaking to a meticulously crafted, yet ultimately lifeless, entity. It’s a stark illustration of the potential pitfalls of over-relying on AI-generated content.
Further investigation reveals that these AI personas once had more realistic names, suggesting an earlier iteration of the website that attempted a veneer of legitimacy. The author references a previous report on the site by the newspaper ETC, which covered the site when it used realistic-looking AI-generated images and seemingly human names. Now, the site’s creators have seemingly abandoned this pretense, opting for overtly artificial identities. This shift, although bizarre, suggests an awareness of the previous criticism and a possible attempt to reposition the site.
Despite its obvious flaws and amateurish execution, the sheer volume of content is concerning. While generating such articles is relatively easy using tools like ChatGPT, the effort invested in creating hundreds of these pieces, along with associated social media accounts on platforms like Facebook, X (formerly Twitter), and Instagram, suggests a deliberate campaign, albeit a remarkably ineffective one. The social media posts mirror the inane quality of the articles, featuring nonsensical strings of keywords related to Swedish politics. The author contemplates dismissing the site as mere internet flotsam, yet its scale and apparent political intent prevent simple dismissal.
The author then draws a parallel to a more mainstream instance of AI-generated content gone awry: Apple’s news summarization feature in its latest iPhone operating system. This feature, designed to condense news notifications, has produced inaccurate and misleading summaries. The examples cited involve a misrepresentation of a BBC headline about a suspect in a New York murder case and a false report of Benjamin Netanyahu’s arrest, attributed to the New York Times. These errors, occurring within a widely used and trusted platform, show that even sophisticated AI systems can generate misinformation, inadvertently or otherwise.
This juxtaposition of the crude, obviously fake news website with the errors produced by Apple’s AI underscores a critical point: the dangers of AI-generated content are not limited to obscure corners of the internet. The clumsy attempts at political manipulation on the AI-generated website are easily dismissed, but the errors in Apple’s system demonstrate that even well-intentioned applications of AI can unintentionally contribute to the spread of misinformation. The author concludes by questioning the need for fabricated news websites when even reputable tech giants are inadvertently creating false narratives through their AI systems. This raises a larger concern about the increasing reliance on AI for information dissemination and the potential for both subtle and blatant manipulation in an increasingly AI-driven world. The incident with Apple serves as a stark reminder that the challenge lies not just in identifying and combating deliberately fake news, but also in grappling with the unintended consequences of AI’s growing influence on our information landscape.