Posts Tagged: Twitter bots


30 Aug 17

Twitter Bots Use Likes, RTs for Intimidation

I awoke this morning to find my account on Twitter (@briankrebs) had attracted almost 12,000 new followers overnight. Then I noticed I’d gained almost as many followers as the number of re-tweets (RTs) earned for a tweet I published on Tuesday. The tweet stated how every time I tweet something related to Russian President Vladimir Putin I get a predictable stream of replies that are in support of President Trump — even in cases when neither Trump nor the 2016 U.S. presidential campaign were mentioned.

This tweet about Putin generated more than 12,000 retweets and likes in a few hours.


Upon further examination, it appears that almost all of my new followers were compliments of a social media botnet that is being used to amplify fake news and to intimidate journalists, activists and researchers. The botnet or botnets appear to be targeting people who are exposing the extent to which sock puppet and bot accounts on social media platforms can be used to influence public opinion.

After tweeting about my new bounty of suspicious-looking Twitter friends, I learned from my legitimate followers that @briankrebs wasn’t alone: several journalists and nonprofit groups that have written recently about bot-like activity on Twitter experienced something similar over the past few days.

These tweet and follow storms seem capable of tripping a mechanism at Twitter designed to detect accounts that artificially beef up their follower counts by purchasing followers (for more on that dodgy industry, check out this post).

Earlier today, Daily Beast cybersecurity reporter Joseph Cox had his Twitter account suspended temporarily after the account was the beneficiary of hundreds of bot followers over a brief period on Tuesday. This likely was the goal of the campaign against my account as well.

Cox observed the same likely bot accounts that followed him following me and a short list of other users in the same order.


“Right after my Daily Beast story about suspicious activity by pro-Kremlin bots went live, my own account came under attack,” Cox wrote.

Let that sink in for a moment: A huge collection of botted accounts — the vast majority of which should be easily detectable as such — may be able to abuse Twitter’s anti-abuse tools to temporarily shutter the accounts of real people suspected of being bots!

Overnight between Aug. 28 and 29, a large Twitter botnet took aim at the account for the Digital Forensic Research Lab, a project run by the Atlantic Council, a political think-tank based in Washington, D.C. In a post about the incident, DFRLab said the attack used fake accounts to impersonate and attack its members.

Those personal attacks — which included tweets and images lamenting the supposed death of DFRLab senior fellow Ben Nimmo — were then amplified and re-tweeted by tens of thousands of apparently automated accounts, according to a blog post published today by DFRLab.

Suspecting that DFRLab was now being followed by many more botted accounts that might retweet or otherwise react to any further tweets mentioning bot attacks, Nimmo cleverly composed another tweet about the bot attack — only this time CC’ing the @Twitter and @Twittersupport accounts. Sure enough, that sly tweet was retweeted by bots more than 73,000 times before the tweet storm died down.


“We considered that the bots had probably been programmed to react to a relatively simple set of triggers, most likely the words ‘bot attack’ and the @DFRLab handle,” Nimmo wrote. “To test the hypothesis, we posted a tweet mentioning the same words, and were retweeted over 500 times in nine minutes — something which, admittedly, does not occur regularly with our human followers.” Read more about the DFRLab episode here.
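
DFRLab’s test boils down to a simple rate anomaly: roughly 500 retweets in nine minutes is far beyond what a human audience normally produces. As a rough illustration (not DFRLab’s actual tooling), here is a minimal Python sketch that finds the peak retweet rate over a sliding time window, assuming you already have a list of retweet timestamps:

```python
from datetime import timedelta

def peak_retweets_per_minute(timestamps, window_minutes=9):
    """Return the highest retweets-per-minute rate seen in any sliding window.

    timestamps: list of datetime objects, one per observed retweet.
    """
    timestamps = sorted(timestamps)
    window = timedelta(minutes=window_minutes)
    peak = 0.0
    start = 0
    for end, t in enumerate(timestamps):
        # Slide the left edge forward until the window spans at most window_minutes.
        while t - timestamps[start] > window:
            start += 1
        count = end - start + 1
        peak = max(peak, count / window_minutes)
    return peak

# Example: 500 retweets inside nine minutes works out to roughly 55 per minute,
# a rate that stands out sharply against a typical human follower baseline.
```

In practice the threshold for "suspicious" would be learned from an account’s normal engagement rather than hard-coded; the point is only that automated amplification shows up as a rate outlier.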

This week’s Twitter bot drama follows similar attacks on public interest groups earlier this month. On Aug. 19, the award-winning investigative journalism site ProPublica.org published the story, Leading Tech Companies Help Extremist Sites Monetize Hate.

On the morning of Tuesday, Aug. 22, several ProPublica reporters began receiving email bombs — email list subscription attacks that can inundate a targeted inbox with dozens or even hundreds of email list subscription confirmation requests per minute. These attacks are designed to deluge the victim’s inbox with so many subscription confirmation requests that it becomes extremely time-consuming to fish out the legitimate messages amid the dross.
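
Because the attack works purely by volume, the practical defense is triage: automatically separating probable subscription confirmations from real mail. The following is a simplified, hypothetical filter; the keyword list and message format are assumptions for illustration, not any particular mail provider’s API:

```python
CONFIRMATION_HINTS = (
    "confirm your subscription",
    "verify your email",
    "activate your account",
    "opt-in",
)

def is_probable_subscription_confirmation(subject: str) -> bool:
    """Heuristically flag newsletter sign-up confirmations during an email bomb."""
    subject_l = subject.lower()
    return any(hint in subject_l for hint in CONFIRMATION_HINTS)

def triage(messages):
    """Split (sender, subject) pairs into probable-junk and needs-review piles."""
    junk, keep = [], []
    for sender, subject in messages:
        target = junk if is_probable_subscription_confirmation(subject) else keep
        target.append((sender, subject))
    return junk, keep
```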

On Wednesday ProPublica author Jeff Larson saw a tweet he sent about the email attacks get re-tweeted 1,200 times. Later that evening, senior reporting fellow Lauren Kirchner noticed a similarly sized response to her tweet about how the subscription attack was affecting her ability to respond to messages.

On top of that, several ProPublica staffers suddenly gained about 500 new followers. On Thursday, ProPublica’s managing editor Eric Umansky noticed that a tweet accusing ProPublica of being an “alt-left #HateGroup and #FakeNews site funded by Soros” had received more than 23,000 re-tweets. Continue reading →


14 Aug 13

Buying Battles in the War on Twitter Spam

The success of social networking community Twitter has given rise to an entire shadow economy that peddles dummy Twitter accounts by the thousands, primarily to spammers, scammers and malware purveyors. But new research on identifying bogus accounts has helped Twitter to drastically deplete the stockpile of existing accounts for sale, and holds the promise of driving up costs for both vendors of these shady services and their customers.

Image: Twitterbot.info


Twitter prohibits the sale and auto-creation of accounts, and the company routinely suspends accounts created in violation of that policy. But according to researchers from George Mason University, the International Computer Science Institute and the University of California, Berkeley, Twitter traditionally has done so only after these fraudulent accounts have been used to spam and attack legitimate Twitter users.

Seeking more reliable methods of detecting auto-created accounts before they can be used for abuse, the researchers approached Twitter last year for the company’s blessing to purchase credentials from a variety of Twitter account merchants. Permission granted, the researchers spent more than $5,000 over ten months buying accounts from at least 27 different underground sellers.

In a report to be presented at the USENIX security conference in Washington, D.C. today, the research team details its experience in purchasing more than 121,000 fraudulent Twitter accounts of varying age and quality, at prices ranging from $10 to $200 per one thousand accounts.

The research team quickly discovered that nearly all fraudulent Twitter account merchants employ a range of countermeasures to evade the technical hurdles that Twitter erects to stymie the automated creation of new accounts.

“Our findings show that merchants thoroughly understand Twitter’s existing defenses against automated registration, and as a result can generate thousands of accounts with little disruption in availability or instability in pricing,” the paper reads. “We determine that merchants can provide thousands of accounts within 24 hours at a price of $0.02 – $0.10 per account.”

SPENDING MONEY TO MAKE MONEY

For example, to fulfill orders for fraudulent Twitter accounts, merchants typically pay third-party services to help solve those squiggly-letter CAPTCHA challenges. I’ve written here and here about these virtual sweatshops, which rely on low-paid workers in China, India and Eastern Europe who earn pennies per hour deciphering the puzzles.

The Twitter account sellers also must verify new accounts with unique email addresses, and they tend to rely on services that sell cheap, auto-created inboxes at Hotmail, Yahoo and Mail.ru, the researchers found. “The failure of email confirmation as a barrier directly stems from pervasive account abuse tied to web mail providers,” the team wrote. “60 percent of the accounts were created with Hotmail, followed by yahoo.com and mail.ru.”

Bulk-created accounts at these Webmail providers are among the cheapest of the free email providers, probably because they lack additional account creation verification mechanisms required by competitors like Google, which relies on phone verification. Compare the prices at this bulk email merchant: 1,000 Yahoo accounts can be had for $10 (1 cent per account), and the same number of Hotmail accounts go for $12. In contrast, it costs $200 to buy 1,000 Gmail accounts.
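
The per-account arithmetic is just division, but it makes the economics concrete. A quick worked example using the prices quoted above:

```python
# Price per 1,000 bulk webmail accounts, as quoted above.
PRICE_PER_THOUSAND = {
    "yahoo.com": 10.00,    # -> $0.010 per account
    "hotmail.com": 12.00,  # -> $0.012 per account
    "gmail.com": 200.00,   # -> $0.200 per account (phone verification raises the cost)
}

for provider, price in PRICE_PER_THOUSAND.items():
    per_account = price / 1000
    print(f"{provider}: ${per_account:.3f} per account")
```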

Finally, the researchers discovered that Twitter account merchants very often spread their new account registrations across thousands of Internet addresses to avoid Twitter’s IP address blacklisting and throttling. They concluded that some of the larger account sellers have access to large botnets of hacked PCs that can be used as proxies during the registration process.

“Our analysis leads us to believe that account merchants either own or rent access to thousands of compromised hosts to evade IP defenses,” the researchers wrote.
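
The defense the merchants are sidestepping is, in essence, per-IP rate limiting on registrations. A minimal sketch of such a throttle, assuming signup events arrive as an IP address plus a timestamp (the thresholds here are illustrative, not Twitter’s actual limits):

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

class RegistrationThrottle:
    """Flag IPs that create more than max_signups accounts within a time window."""

    def __init__(self, max_signups=3, window=timedelta(hours=1)):
        self.max_signups = max_signups
        self.window = window
        self.recent = defaultdict(deque)  # ip -> deque of recent signup times

    def allow(self, ip: str, when: datetime) -> bool:
        times = self.recent[ip]
        # Drop signups that have aged out of the window.
        while times and when - times[0] > self.window:
            times.popleft()
        if len(times) >= self.max_signups:
            return False  # throttle or challenge this registration
        times.append(when)
        return True

# Usage: throttle = RegistrationThrottle(); throttle.allow("203.0.113.7", datetime.now())
```

A seller routing registrations through thousands of compromised hosts stays comfortably under any per-IP threshold, which is exactly why the researchers argue that IP-based defenses alone fall short.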

Damon McCoy, an assistant professor of computer science at GMU and one of the authors of the study, said the top sources of the proxy IP addresses were computers in developing countries like India, Ukraine, Thailand, Mexico and Vietnam.  “These are countries where the price to buy installs [installations of malware that turns PCs into bots] is relatively low,” McCoy said.

Continue reading →


20 Mar 12

Twitter Bots Target Tibetan Protests

Twitter bots — zombie accounts that auto-follow and send junk tweets hawking questionable wares and services — can be an annoyance to anyone who has even a modest number of followers. But increasingly, Twitter bots are being used as a tool to suppress political dissent, as evidenced by an ongoing flood of meaningless tweets directed at hashtags popular for tracking Tibetan protesters who are taking a stand against Chinese rule.

It’s not clear how long ago the bogus tweet campaigns began, but Tibetan sympathizers say they recently noticed that several Twitter hashtags related to the conflict — including #tibet and #freetibet — are now so constantly inundated with junk tweets from apparently automated Twitter accounts that the hashtags have ceased to be a useful way to track the conflict.
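
One crude way to make a flooded hashtag readable again is to collapse near-duplicate tweets, since these floods tend to repeat the same text with minor variations. Below is a minimal, hypothetical sketch that caps how many copies of any normalized tweet text get through; the normalization is deliberately simplistic:

```python
import re
from collections import Counter

def normalize(text: str) -> str:
    """Strip URLs, mentions and punctuation so repeated bot copy collapses together."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return re.sub(r"[^a-z0-9#\s]", "", text).strip()

def filter_flood(tweets, max_repeats=2):
    """Keep at most max_repeats copies of any normalized tweet text."""
    seen = Counter()
    kept = []
    for tweet in tweets:
        key = normalize(tweet)
        if seen[key] < max_repeats:
            kept.append(tweet)
        seen[key] += 1
    return kept
```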

The discovery comes amid growing international concern over the practice of self-immolation as a means of protest in Tibet. According to the Associated Press, about 30 Tibetans have set themselves on fire since last year to protest suppression of their Buddhist culture and to call for the return of the Dalai Lama — their spiritual leader who fled during a failed 1959 uprising against Chinese rule.

I first heard about this trend from reader Erika Rand, who is co-producing a feature-length documentary about Tibet called State of Control. Rand said she noticed the tweet flood and Googled the phenomenon, only to find a story I wrote about a similar technique deployed in Russia to dilute Twitter hashtags being used by citizens protesting last year’s disputed parliamentary elections there.

“We first discovered these tweets looking at Twitter via the web, then looked at TweetDeck to see how quickly they were coming,” Rand said in an email to KrebsOnSecurity.com late last week. “They no longer appear when searching for Tibet on Twitter via the web, but are still flooding in fast via TweetDeck. This looks like an attempt to suppress news about recent activism surrounding Tibet. We’re not sure how long it’s been going on for. We noticed it last night, and it’s still happening now.” Continue reading →


8 Dec 11

Twitter Bots Drown Out Anti-Kremlin Tweets

Thousands of Twitter accounts apparently created in advance to blast automated messages are being used to drown out Tweets sent this week by bloggers and activists protesting the disputed parliamentary elections in Russia, security experts said.

Image: Twitterbot.info

Amid widespread reports of ballot stuffing and voting irregularities in the election, thousands of Russians have turned out in the streets to protest. Russian police arrested hundreds of protesters who had gathered in Moscow’s Triumfalnaya Square, including notable anti-corruption blogger Alexei Navalny. In response, protesters began tweeting their disgust using the Twitter hashtag #триумфальная (Triumfalnaya), which quickly became one of the most-tweeted hashtags on Twitter.

But according to several experts, it wasn’t long before messages sent to that hashtag were drowned out by pro-Kremlin tweets that appear to have been sent by countless Twitter bots. Maxim Goncharov, a senior threat researcher at Trend Micro, observed that “if you currently check this hash tag on twitter you’ll see a flood of 5-7 identical tweets from accounts that have been inactive for months and that only had 10-20 tweets before this day. To this point those hacked accounts have already posted 10-20 more tweets in just one hour.”

“Whether the attack was supported officially or not is not relevant, but we can now see how social media has become the battlefield of a new war for freedom of speech,” Goncharov wrote.
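
Goncharov’s description already amounts to a workable heuristic: accounts that sat dormant for months, had only 10-20 lifetime tweets, and then posted a burst of identical messages within an hour. A minimal scoring sketch built on those three signals, assuming per-account metadata is available (the field names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    lifetime_tweets: int    # tweets posted before the current burst
    last_active: datetime   # most recent tweet before the burst
    tweets_last_hour: int   # tweets posted during the burst

def looks_like_flood_bot(acct: Account, now: datetime) -> bool:
    """Flag accounts matching the dormant-then-bursting pattern described above."""
    dormant = now - acct.last_active > timedelta(days=60)
    thin_history = acct.lifetime_tweets <= 20
    bursting = acct.tweets_last_hour >= 10
    return dormant and thin_history and bursting
```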

I’ve been working with a few security researchers inside of Russia who asked not to be named for fear of retribution by patriotic Russian hackers or the government. Since Trend’s posting, they’ve identified thousands of additional accounts (e.g., @ALanskoy, @APoluyan, @AUstickiy, @AbbotRama, @AbrahamCaldwell…a much longer list is available here) that are rapidly posting anti-protester or pro-Kremlin sentiments to more than a dozen hashtags and keywords that protesters are using to share news, including #Navalny. Continue reading →