On a recent Wednesday, X, the social networking platform formerly known as Twitter, made significant changes to its Privacy Policy. The pivotal change allows third-party “collaborators” access to user data for the purpose of training artificial intelligence models—with the implicit expectation that users opt out if they wish to keep their data private. The decision comes on the heels of scrutiny from the European Union’s privacy regulators, focused in particular on xAI’s Grok, which has been trained on data harvested from X users. The change marks a decisive shift in how X manages user data as it seeks to capitalize on the burgeoning market for AI training.
The updated policy makes clear that X is searching for new monetization strategies under recent financial strain. Advertiser withdrawals and boycotts have cut deeply into the platform’s revenue, and its subscription model has struggled. Licensing user data to AI companies mirrors tactics adopted by platforms like Reddit, which have likewise turned to monetizing data for new income. X’s willingness to sell user data raises critical questions about the balance between profit and privacy, and about whether users understand and consent to such practices.
The revised Privacy Policy, particularly Section 3, “Sharing Information,” offers insight into how users’ data may be used. Users are given an opt-out option, but the procedure is not clearly documented in the settings. The “Privacy and safety” section remains confusing: the distinction between data shared with AI companies and data shared with X’s established partners is unclear. This ambiguity can easily mislead users, many of whom may inadvertently consent to third-party data sharing. The approach undermines the trust users place in the platform and weakens their autonomy over their own data.
The update also replaced a previous policy statement, under which X would retain user profiles and content for the life of an account and keep other personally identifiable information for a maximum of 18 months. Under the new policy, retention is more nebulous: X now indicates that different types of information will be stored for varying lengths of time, depending on how long the data is needed to provide services or to meet legal obligations. This vagueness raises concerns about how user data is actually classified, retained, and used in the course of business operations.
One particularly concerning addition to the policy is the reminder that public content may persist online even after it is deleted on X. Third parties may retain copies contrary to users’ intentions, leaving users in complicated scenarios where content they believed was gone remains accessible from external sources, including AI training datasets and search engine caches. This raises substantive questions about data accountability and the ethics of training AI on user-generated content.
The new “Liquidated Damages” section in X’s Terms of Service introduces penalties for organizations that exceed a specified scraping threshold: those requesting access to more than a million posts within a day could face substantial fines, ostensibly a protective measure for content generated on the platform. But the clause also hints at X’s financial pressures, compounded by an unevenly received subscription model and deteriorating ad revenue. By monetizing its data while guarding against external exploitation, the company is drawing a clear boundary around its digital assets.
The recent updates to X’s Privacy Policy reflect a broader trend across social media platforms grappling with declining revenues and the search for alternative income streams. Sharing user data for AI model training may be a strategic move toward financial viability, but it poses real challenges for privacy and user trust. As users become aware of these changes, they should engage critically with their privacy settings and understand the implications of data sharing in a continually evolving digital landscape.