Social spam

Social spam is unwanted content appearing on social networking services, social bookmarking sites,[1] and any website with user-generated content (comments, chat, etc.). It can take many forms, including bulk messages,[2] profanity, insults, hate speech, malicious links, fraudulent reviews, fake friends, and personally identifiable information.

History

As email spam filters became more effective, catching over 95% of these messages, spammers moved to a new target: the social web.[3] Over 90% of social network users have experienced social spam in some form.[4] Those doing the “spamming” can be automated spambots/social bots, fake accounts, or real people.[5] Social spammers often capitalize on breaking news stories to plant malicious links or to dominate the comment sections of websites with disruptive or offensive content.[6]

Social spam is on the rise, with analysts reporting that social spam activity more than tripled over a six-month period.[7] It is estimated that up to 40% of all social user accounts are fake, depending on the site.[8] In August 2012, Facebook admitted through its updated regulatory filing[9] that 8.7% of its 955 million active accounts were fake.[10]

Types

Spam

Commercial spam is a comment that has commercial content irrelevant to the discussion at hand. Much of the content of old email spam has resurfaced on social networks, from Viagra ads to work-from-home scams to counterfeit merchandise. One analysis showed social spammers' content preferences shifting slightly, with apparel and sports accounting for 36% of all posts; other categories included SEO/web development (23%), porn and pills (16%), and mortgage loans (12%).[11]

Social networking spam

Social networking spam is spam directed specifically at users of social networking services such as Google+, Facebook, Pinterest, LinkedIn, or MySpace. Experts estimate that as many as 40% of social network accounts are used for spam.[8] These spammers can use the network's search tools to target certain demographic segments, or use common fan pages or groups to send notes from fraudulent accounts. Such notes may include embedded links to pornographic or other product sites designed to sell something. In response, many social networks have added a "report spam/abuse" button or a contact address.[12] Spammers, however, frequently move from one throw-away account to another and are thus hard to track.[13]

Facebook pages with pictures and text asking readers to, for example, "show your support" or "vote" are used to gather likes, comments, and shares, which improve the page's ranking. The page is then slightly changed and sold for profit.[14][15]

Bulk

Bulk submissions are a set of comments repeated multiple times with the same or very similar text. These messages, also called spam-bombs,[16] can come in the form of one spammer sending out duplicate messages to a group of people in a short period of time, or of many active spam accounts simultaneously posting duplicate messages. Bulk messages can cause certain topics or hashtags to trend highly. For example, in 2009 a large number of spam accounts began simultaneously posting links to a website, causing ‘ajobwithgoogle’ to trend.[16]
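
One way such bulk submissions can be surfaced is by normalizing each comment and grouping identical normalized texts posted close together in time. The following Python sketch is a minimal illustration of that idea; the thresholds, function names, and the (timestamp, author, text) input format are assumptions for the example, not details of any system cited above.

    import re
    from collections import defaultdict

    def normalize(text):
        """Lowercase, strip punctuation, and collapse whitespace so that
        trivially varied duplicates map to the same key."""
        return " ".join(re.sub(r"[^\w\s]", "", text.lower()).split())

    def find_bulk_groups(comments, window_seconds=3600, min_copies=5):
        """Flag groups of comments whose normalized text is identical and
        which were posted within `window_seconds` of each other.
        `comments` is an iterable of (timestamp, author, text) tuples."""
        groups = defaultdict(list)
        for ts, author, text in comments:
            groups[normalize(text)].append((ts, author))
        flagged = []
        for key, posts in groups.items():
            posts.sort()
            for i in range(len(posts) - min_copies + 1):
                # enough copies landed inside one time window -> likely bulk spam
                if posts[i + min_copies - 1][0] - posts[i][0] <= window_seconds:
                    flagged.append((key, posts))
                    break
        return flagged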

Profanity

User-submitted comments that contain swear words or slurs are classified as profanity. A common technique to circumvent censorship is “cloaking”, which uses symbols and numbers in place of letters or inserts punctuation inside the word (for example, "w.o.r.d.s" instead of "words"). The words remain recognizable to the human eye but are often missed by website monitors because of the altered spelling.
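
A simple way to catch cloaked words is to reverse the common substitutions before matching against a word list. The Python sketch below illustrates the idea; the substitution table and the blocklist are simplified assumptions, not any particular site's filter.

    import re

    # Illustrative map of common digit/symbol stand-ins; a real filter
    # would use a much larger, curated table.
    LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                              "5": "s", "7": "t", "@": "a", "$": "s", "!": "i"})

    def decloak(token):
        """Undo simple cloaking: map substitutes back to letters and drop
        punctuation inserted between letters (e.g. "w.o.r.d.s" -> "words")."""
        return re.sub(r"[^a-z]", "", token.lower().translate(LEET_MAP))

    def contains_blocked(text, blocklist):
        """Check each token of a comment against a blocklist after de-cloaking."""
        return any(decloak(token) in blocklist for token in text.split())

    # Example with a placeholder blocklist:
    # contains_blocked("these w.o.r.d.s slip past naive filters", {"words"})  # -> True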

Insults

User-submitted insults are comments that contain mildly or strongly insulting language directed at a specific person or persons. These comments range from mild name-calling to severe bullying; carried out online, this behavior is known as cyberbullying. Hiding behind a screen name lets users post mean, insulting comments anonymously, and these bullies rarely have to take responsibility for their comments and actions.[17]

Threats

User-submitted threats of violence are comments that contain mild or strong threats of physical violence against a person or group. In September 2012, Eric Yee was arrested for making threats in an ESPN comment section.[18] He started out discussing the high price of LeBron James shoes, but his comments quickly turned into a stream of racist and insulting remarks and threats against children.[19] This is a more serious example of social spam.

Hate speech

User-submitted hate speech consists of comments that contain strongly offensive content directed against people of a specific race, gender, sexual orientation, etc. According to a Council of Europe survey,[20] 78% of respondents across the European Union had encountered hate speech online, 40% felt personally attacked or threatened, and 1 in 20 had posted hate speech themselves.

Malicious links

User-submitted comments can include malicious links that harm, mislead, or otherwise damage a user or computer. These links are most commonly found on video entertainment sites, such as YouTube.[21] When a user clicks on a malicious link, the result can include downloading malware to the user's device, being directed to sites designed to steal personal information, being drawn unwittingly into concealed advertising campaigns, and other harmful consequences.[22] Malware can be very dangerous to the user and can manifest in several forms: viruses, worms, spyware, Trojan horses, or adware.[23]

Fraudulent reviews

Fraudulent reviews are reviews of a product or service from users who never actually used it, and which are therefore insincere or misleading. They are often solicited by the proprietor of the product or service, who contracts for positive reviews, a practice known as “reviews-for-hire”.[24] Some companies are attempting to tackle this problem by warning users that not all reviews are genuine.[25]

Fake friends

Fake friending occurs when several fake accounts connect or become “friends”. These users or spambots often try to gain credibility by following verified accounts, such as those of popular celebrities and public figures. If that account owner follows the spammer back, it legitimizes the spam account, enabling it to do more damage.[26]

Personally identifiable information

User-submitted comments that inappropriately display full names, physical addresses, email addresses, phone numbers, or credit card numbers are considered leaks of personally identifiable information (PII).[27]
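
As a rough illustration, such leaks can be flagged by pattern-matching comment text before it is published. The Python sketch below uses deliberately simplified regular expressions as assumptions for the example; they are not production-grade validators, which would need locale-aware rules and checks such as the Luhn algorithm for card numbers.

    import re

    # Simplified, illustrative patterns for a few PII types.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone": re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?(?:\(?\d{3}\)?[\s.-]?)\d{3}[\s.-]?\d{4}\b"),
        "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def find_pii(comment):
        """Return the PII types (and matches) found in a user-submitted comment."""
        return {kind: pattern.findall(comment)
                for kind, pattern in PII_PATTERNS.items()
                if pattern.search(comment)}

    # Example:
    # find_pii("Call me at 555-123-4567 or mail jane@example.com")
    # -> {'email': ['jane@example.com'], 'phone': ['555-123-4567']}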

See also

  • Forum spam – Advertisements on Internet forums
  • Messaging spam – Spam targeting users of instant messaging (IM) services, SMS, or private messages within websites
  • Spam in blogs – Form of spamdexing
  • Television advertisement – Paid commercial segment on television
  • Wiki spam – Deliberate manipulation of search engine indexes

References

  1. Benjamin Markines; Ciro Cattuto; Filippo Menczer (2009). "Social spam detection". Proceedings of the 5th International Workshop on Adversarial Information Retrieval on the Web - AIRWeb '09. 5th International Workshop on Adversarial Information Retrieval on the Web (AIRWeb '09). pp. 41–48. doi:10.1145/1531914.1531924. ISBN 9781605584386.
  2. Rao, Sanjeev; Verma, Anil Kumar; Bhatia, Tarunpreet (30 December 2021). "A review on social spam detection: Challenges, open issues, and future directions". Expert Systems with Applications. 186: 115742. doi:10.1016/j.eswa.2021.115742.
  3. Tynan, Dan (3 April 2012). "Social spam is taking over the Internet". ITworld. Retrieved 5 August 2016.
  4. "Archived copy". Archived from the original on 15 October 2011. Retrieved 5 November 2012.{{cite web}}: CS1 maint: archived copy as title (link)
  5. "What is Social Spam? (And How to Avoid Creating It)". Constant Contact. 20 March 2012. Retrieved 5 August 2016.
  6. "Impermium – Google Impermium". Archived from the original on 15 October 2012. Retrieved 1 October 2016.
  7. Franceschi-Bicchierai, Lorenzo (1 October 2013). "Social Media Spam Increased 355% in First Half of 2013". Mashable. Retrieved 5 August 2016.
  8. Olga Kharif (25 May 2012). "'Likejacking': Spammers Hit Social Media". Businessweek. Archived from the original on 25 May 2012. Retrieved 5 August 2016.
  9. "Form 10-Q". Sec.gov. Retrieved 5 August 2016.
  10. Kelly, Heather (3 August 2012). "83 million Facebook accounts are fakes and dupes". CNN. Retrieved 5 August 2016.
  11. "Impermium – Google Impermium". Archived from the original on 15 September 2012. Retrieved 1 October 2016.
  12. "How do I report spam on Facebook? | Facebook Help Center". Facebook. Retrieved 5 August 2016.
  13. "Why is it so difficult to catch a spammer?". Spam Reader. Retrieved 5 August 2016.
  14. "Yahoo News: Why 'Liking' Facebook virals makes scammers rich". Yahoo. 24 October 2012. Retrieved 5 August 2016.
  15. Coles, Sarah (15 July 2016). "How 'Liking' a page on Facebook makes cash for spammers". AOL. Archived from the original on 29 May 2016. Retrieved 5 August 2016.
  16. Martin Bryant (1 September 2009). "New Twitter spam-bomb offers A Job With Google". The Next Web. Retrieved 5 August 2016.
  17. Hendrie, Alison (5 February 2010). "Complaint Box - Online Insults". Retrieved 1 October 2016.
  18. Kelly Dwyer (19 September 2012). "ESPN aids authorities in arresting a man accused of making threats against children in a post about LeBron James | Ball Don't Lie - Yahoo Sports". Sports.yahoo.com. Retrieved 5 August 2016.
  19. Wilson, Simone (18 September 2012). "Eric Yee, Yale Dropout, Allegedly Threatened to Shoot Valencia Schoolkids, Aurora Style, in ESPN Chatroom". Blogs.laweekly.com. Retrieved 5 August 2016.
  20. Young People against hate speech online. Archived March 28, 2013, at the Wayback Machine.
  21. "Video sites pose highest risk of malicious links in 2011". Kaspersky. 1 March 2012. Retrieved 5 August 2016.
  22. "Socializing with malware on Facebook and Twitter..." BullGuard. 27 August 2015. Retrieved 5 August 2016.
  23. "Malware – Good to Know – Google". www.google.com. Archived from the original on 19 October 2011. Retrieved 17 January 2022.
  24. Hsu, Tiffany (18 October 2012). "Yelp's new weapon against fake reviews: User alerts". Los Angeles Times. Retrieved 5 August 2016.
  25. Fiegerman, Seth (18 October 2012). "Yelp Cracks Down on Fake Reviews With New Consumer Alerts". Mashable. Retrieved 5 August 2016.
  26. http://twitter.mpi-sws.org/spam/pubs/twitterSpam_WWW2012.pdf
  27. "Get a high paying social media job today". 10 November 2022.