Security Bite: X's algorithm going open source is bad news for anonymous accounts

9to5Mac Security Bite is brought to you exclusively by Mosyle, the only Apple Unified Platform. Making Apple devices work-ready and enterprise-safe is all we do. Our unique integrated approach to management and security combines state-of-the-art Apple-specific security solutions for fully automated Hardening & Compliance, Next Generation EDR, AI-powered Zero Trust, and exclusive Privilege Management with the most powerful and modern Apple MDM on the market. The result is a totally automated Apple Unified Platform currently trusted by over 45,000 organizations to make millions of Apple devices work-ready with no effort and at an affordable cost. Request your EXTENDED TRIAL today and understand why Mosyle is everything you need to work with Apple.
Amid a slew of EU fines imposed on X earlier this month, Elon Musk announced that the platform's entire recommendation algorithm would be open-sourced. The move appears to be an attempt to cool the regulatory waters by shedding more light on how the social media giant orders users' timelines.
Usually, when IT professionals see news about something going open source, they smile and go on with their lives. But last week, I found an interesting thread on none other than X explaining how this move could expose anonymous accounts through "behavioral fingerprinting"…

An OSINT aficionado under the handle @Harrris0n on X recently posted his findings from tinkering with the platform's open source recommendation code. What he found is a little scary if you care about privacy, or if you happen to run a whole network of bot accounts.
Buried in X’s repo was something called “User Action Sequences.”
This isn't just a log, either. It's the core of the transformer that ingests your entire behavioral history on the platform. It tracks the specific milliseconds you pause while scrolling, the types of accounts you linger on, your particular taste in content, and the exact times you interact with it. That represents thousands of individual data points, collected since the moment you saw your first cat post.
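To make that concrete, here's a hypothetical illustration of what a single entry in a "User Action Sequence" might contain. The field names here are my own assumptions for illustration, not identifiers from X's actual repo:

```python
# Hypothetical sketch of one "user action" record; field names are
# illustrative assumptions, NOT taken from X's open source code.
from dataclasses import dataclass

@dataclass
class UserAction:
    timestamp_ms: int     # when the interaction happened, millisecond precision
    action: str           # e.g. "pause", "like", "reply"
    dwell_ms: int         # how long the user lingered on the post
    author_category: str  # the type of account that produced the content
    content_topic: str    # inferred topic, i.e. the "taste" signal

# A user's history is simply an ordered list of these, thousands of entries long.
history = [
    UserAction(1718000000123, "pause", 850, "meme_account", "cats"),
    UserAction(1718000004567, "like", 0, "meme_account", "cats"),
]
```

The point is that the sequence, not any single data point, is what becomes identifying over time.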
Now, here's where it gets interesting. X uses this sequence to predict engagement (basically serving you more relevant content to keep you on the platform), while at the same time producing a remarkably reliable behavioral fingerprint.
Harrison found that if you apply this encoding to a known account and compare it against thousands of anonymous accounts using something the repo calls "Candidate Classification," the similarity scores on matches come back unusually high. He even laid out the specific recipe needed to build this deanonymization tool, and the barrier to entry is very low.
According to his thread, all one needs is the action sequence encoder (recently handed over by the X repo), an embedding similarity search, and a little luck (lol). The only missing piece for most people is training data of verified alt accounts, but Harrison notes that he already has that from years of tracking a threat actor.
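The pipeline described in that recipe can be sketched in a few lines. To be clear, this is a toy illustration under my own assumptions: `encode_sequence` below is a trivial bag-of-actions stand-in for X's transformer encoder, and the candidate data is invented. It only demonstrates the shape of the attack, a known fingerprint ranked against anonymous candidates by embedding similarity:

```python
# Toy sketch of embedding-based behavioral fingerprint matching.
# encode_sequence() is a stand-in for X's action-sequence encoder;
# a real attack would use the transformer from the open source repo.
from collections import Counter
import math

ACTIONS = ["like", "repost", "reply", "pause", "follow", "click"]

def encode_sequence(actions):
    """Toy encoder: unit-normalized action-frequency vector."""
    counts = Counter(actions)
    vec = [counts[a] for a in ACTIONS]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(u, v):
    # Inputs are already unit-normalized, so the dot product is the cosine.
    return sum(a * b for a, b in zip(u, v))

def best_match(known_actions, candidates):
    """Embedding similarity search: rank anonymous candidates against the
    known account's behavioral fingerprint."""
    known = encode_sequence(known_actions)
    scores = {name: cosine(known, encode_sequence(seq))
              for name, seq in candidates.items()}
    return max(scores, key=scores.get), scores

known = ["like", "pause", "like", "reply", "pause", "like"]
candidates = {
    "anon_a": ["repost", "follow", "click", "repost"],
    "anon_b": ["like", "pause", "like", "like", "reply", "pause"],
}
match, scores = best_match(known, candidates)
# "anon_b" mirrors the known account's habits, so it ranks highest
```

Even with a crude encoder like this, habitual patterns separate cleanly; with a model trained on millisecond-level dwell times, the matches get far sharper.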
In theory, you could map those behavioral fingerprints from a public X user onto an anonymous account, possibly even cross-platform on Reddit or Discord. It turns out you can easily change your username, but it's very difficult to change your habits.
So, is that burner account really anonymous? I'll let you decide.
I wanted to share this thread here on Security Bite because it's a great reminder that these algorithms often know you better than you know yourself. And that digital version of you is still at risk.
Subscribe to the 9to5Mac Security Bite podcast for biweekly in-depth coverage and interviews with Apple’s leading security researchers and experts:


FTC: We use income earning auto affiliate links. More.
