WeTransfer has addressed recent concerns from users, confirming it does not use uploaded files to train artificial intelligence models.
The file-sharing platform came under fire on social media after updating its terms of service. Some users interpreted the revised language as giving the company permission to use personal files for AI-related purposes — particularly for training machine learning models. The backlash was swift, especially from those in creative industries who rely on the platform to share sensitive or original work.
In response, WeTransfer clarified that it has never used user content to train AI or sold data to third parties. A company representative explained that the controversial clause was intended solely to allow for potential use of AI in content moderation — for example, identifying harmful material — not for training large-scale AI models.
The original clause referenced machine learning and granted broad usage rights, including the rights to reproduce, distribute, and publicly display user content. That vague wording sparked fears that the company could use or sell files to external AI firms.
After the outcry, WeTransfer rewrote the clause in clearer, more specific language. The updated terms, which take effect on August 8 for existing users, now state that the company has a license to use content “for the purposes of operating, developing, and improving the Service,” all in accordance with its Privacy & Cookie Policy.
This isn’t the first time a file-sharing service has faced scrutiny over AI-related concerns. In late 2023, Dropbox was forced to issue a similar clarification after users raised alarms about its AI use policies. While that controversy was also based on a misunderstanding, it highlighted growing distrust toward tech firms’ use of customer data in the AI era.
Legal experts warn that even subtle changes to terms of service and privacy policies can carry hidden implications. As one data protection lawyer pointed out, companies eager to tap into AI may seek ways to use existing user data under broad justifications like “improving services.” For users heavily reliant on these platforms, sudden policy shifts can feel coercive, leaving them with few real alternatives.
WeTransfer’s swift update may help restore some trust, but the incident underscores a larger tension in the digital landscape: as AI development accelerates, transparency and trust are becoming just as important as innovation.
