In a recent security update, Salesforce Slack Technologies criticized a key flaw in its AI-powered Slack platform that exposed users to potential data theft and phishing attacks. Discovered by the security firm PromptArmor, the vulnerability stemmed from a “prompt injection” flaw within Slack AI, which enables users to query messages using natural language.
The core issue lies in how Slack AI’s underlying large language model (LLM) processes instructions. It fails to differentiate between legitimate commands and malicious ones, leading to two significant risks. First, attackers with access to a Slack workspace could manipulate the AI to steal sensitive data from private channels. Second, they could initiate phishing attacks by embedding malicious prompts, potentially compromising user credentials.
PromptArmor’s research highlighted the expanding attack surface due to Slack AI’s recent capability to ingest uploaded documents and files. This change increased the risk of exploitation as attackers could leverage these files to introduce harmful instructions into the AI system.
After PromptArmor disclosed the flaw on August 14, Slack responded with a patch, though initially describing the behavior as “intended.” Despite the patch, this incident raises broader concerns about the security of AI tools in business environments. It serves as a reminder for organizations to review and tighten AI-related security settings to protect sensitive information from evolving threats.