Snap’s AI chatbot, known as ‘My AI,’ is facing scrutiny from the Information Commissioner’s Office (ICO), the UK’s privacy watchdog. The ICO is concerned that the chatbot may pose a privacy risk to children, and it has issued a preliminary enforcement notice against the company, citing Snap’s potential failure to adequately assess the privacy risks associated with My AI.
Snap’s AI chatbot: Privacy Risks for Children
Information Commissioner John Edwards has said that the ICO’s provisional findings point to a troubling lapse on Snap’s part: the company appears not to have properly identified and assessed the privacy risks My AI poses to children and other users before rolling it out. The ICO has warned that if Snap does not adequately address these concerns, it may take further action, which could result in the ChatGPT-powered chatbot being blocked in the UK.
Snap’s Response and User Concerns
While the preliminary notice serves as a warning, it does not automatically mean that the ICO will take enforcement action against Snap, nor does it confirm any violation of data protection law. Snap has responded by stating that My AI underwent a thorough legal and privacy review before its public release, and that the company is committed to working with the ICO to ensure its risk assessment procedures meet the regulator’s expectations.
Notably, My AI’s introduction marked the first time a generative AI system was integrated into a major messaging platform in the UK. However, concerns arose shortly after its launch, particularly among parents. These went beyond privacy to the blurring of lines between humans and machines: because My AI appears in Snapchat much like any other contact, it raised questions about how children can learn to distinguish conversations with people from conversations with AI.