
The report shows that generative AI (genAI) adoption in the retail sector has surged to 95%, up from 73% last year. While overall adoption is up, the use of personal genAI accounts at work in retail has dropped sharply, from around 74% of the workforce in January to 36% in June. Personal genAI accounts at work are a major data security risk because security teams have no ability to monitor or secure this usage, and employees regularly leak sensitive data when using genAI.
The report shows that source code (47%) and regulated data (39%) account for most of those leaks, with employees feeding business and customer information into genAI tools. Intellectual property, passwords, and API keys are also being exposed through genAI apps, with exposure rates in retail mirroring cross-industry averages.
In contrast, adoption of organisation-approved genAI apps has more than doubled, from 21% to 52% over the same period. Organisations are deploying genAI applications for their workforce so they can capture the productivity benefits while applying stronger data safeguards and greater control over usage.
Other key insights include:
- The use of ChatGPT in the retail industry dropped slightly between February and May this year, the first time Netskope Threat Labs has observed a decrease in ChatGPT usage in retail, mirroring a cross-industry trend.
- 97% of retail organisations rely on genAI applications that collect user data for training purposes.
- ZeroGPT and DeepSeek lead the list of applications most frequently blocked by retail organisations, mostly due to concerns over the transparency of their data handling practices.
- Employees within retail organisations are adopting more sophisticated AI platforms that allow them to build and deploy genAI models or AI agents, sometimes bypassing formal security approval processes in the process. Because these platforms can connect directly to enterprise data sources, retailers must aim to discover this “shadow AI”, as misconfigurations or uncontrolled access can put sensitive data at risk.
- Attackers often exploit trusted cloud services to deliver malware. Microsoft OneDrive is the most affected, with 11% of organisations encountering malware downloads on the platform monthly, followed by GitHub (9.7%), an application popular among developers, and Google Drive (6.9%).
Gianpietro Cutolo, Cloud Threat Researcher at Netskope Threat Labs, said:
“GenAI adoption in the retail sector is accelerating, with organisations increasingly using platforms like Azure OpenAI, Amazon Bedrock, and Google Vertex AI. While the use of personal genAI accounts is declining, organisation-approved platforms are gaining traction, reflecting a shift toward more controlled and monitored usage. Retailers are strengthening data security and monitoring cloud and API activity, helping to reduce exposure of sensitive information such as source code and regulated data. The goal is clear: leverage the benefits of AI innovation while protecting the organisation’s most valuable data assets.”
Stefan Baldus, Chief Information Security Officer at HUGO BOSS, explains:
“As a major international fashion label, the security of our data is paramount. The trend is clear and the era of uncontrolled shadow AI is over. As IT managers, we must no longer block innovation, we must manage it securely. That’s why we rely on modern security solutions that give us full transparency and control over sensitive data flows in the age of cloud computing and AI, and that can withstand constantly evolving cyber attacks. This is the only way we can harness the creative power of AI while ensuring the protection of our brand and customer data.”
Netskope protects millions of users worldwide. Information presented in the report is based on anonymised usage data collected by the Netskope One platform relating to a subset of Netskope customers from the retail industry, and with prior authorisation.
For more information, visit www.netskope.com