Philippe Dufresne, the Privacy Commissioner of Canada, has found that OpenAI was "not compliant with" Canadian federal and provincial privacy laws in the training of its AI models. Following an investigation, Dufresne and his counterparts in Alberta, Quebec and British Columbia say OpenAI's approach to matters like data collection and consent violated several laws, including Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information in the normal course of business.
The commissioners participating in the investigation identified several privacy issues with OpenAI's approach, including that the company "collected vast amounts of personal information without adequate safeguards to prevent use of that information to train its models," and that it failed to obtain consent to collect and use that personal information in the first place. Warnings in ChatGPT note that interactions with the AI could be used in training, but the third-party data OpenAI has purchased or scraped also includes personal details people likely aren't even aware of. The fact that ChatGPT users have no way to access, correct or delete that data was another issue the commissioners identified, according to a summary of the investigation's findings, along with OpenAI's lackluster attempts to acknowledge the inaccuracy of some of ChatGPT's responses.
Canada's Privacy Commissioner says OpenAI was open and responsive to the investigation, and has already committed to making several changes to ChatGPT to comply with Canadian privacy laws. OpenAI has retired earlier models that violated Canadian privacy law, and now uses "a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly available web data and licensed datasets used to train its models," the Commissioner says. The company has also agreed, within the next three months, to add a new notice to the signed-out version of ChatGPT explaining that chats can be used for training and that sensitive information should not be shared, and within the next six months:
While Canada's investigation into OpenAI's privacy policies was opened in 2023, the company has received scrutiny from regulators more recently because of its connection to the mass shooting that occurred in Tumbler Ridge in February 2026. OpenAI had reportedly flagged the alleged shooter's account in 2025 for containing warnings of real-world violence, but did not escalate those concerns to Canadian law enforcement. Following the shooting, regulators demanded the company change its approach to safety, and OpenAI ultimately agreed to be more collaborative with Canadian law enforcement and health agencies in the future.