Abstract:
Every industry relies heavily on accurate news and information distribution. In the past decade, social media has become one of the main channels for sharing news and other socially impactful information online. However, there is a growing threat to all social media platforms, particularly Twitter, known as bots. Not all bots are malicious, but these automated accounts are largely responsible for platform manipulation, the practice of misleading or disrupting other users through deceptive and aggressive activity. Many politically motivated groups on Facebook and Twitter use varying degrees of manipulation to influence voters, thereby undermining the democratic process. Platform manipulation is carried out not only by malicious automation but also through spam and inauthentic (fake) accounts. This paper presents a novel methodology for detecting these bots (automated accounts), using existing research as a foundation and building a new solution to the problem. The methodology can be applied to the news domain to find bots involved in spreading false information: it classifies a given tweet as fake news or not, uses that result as a feature, and can additionally take user credibility into account.
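The sketch below illustrates the two-stage idea described in the abstract: a lexical fake-news prediction for a tweet is combined with a profile-credibility score and fed to a bot/human classifier. It is a minimal illustration, not the authors' implementation; the toy data, the credibility heuristic, and helper names such as credibility_score and bot_features are assumptions for demonstration only.

# Minimal sketch (assumed, not the paper's code): fake-news probability of a
# tweet plus a profile-credibility score used as features for bot detection.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: lexical fake-news classifier over tweet text (toy training corpus).
tweets = ["breaking shocking cure doctors hate", "city council meets on tuesday",
          "miracle pill melts fat overnight", "local team wins league title"]
fake_labels = [1, 0, 1, 0]
fake_news_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
fake_news_clf.fit(tweets, fake_labels)

def credibility_score(profile):
    # Toy heuristic (assumption): verified status plus capped follower/friend ratio.
    ratio = profile["followers"] / max(profile["friends"], 1)
    return 0.5 * float(profile["verified"]) + 0.5 * min(ratio, 1.0)

def bot_features(tweet_text, profile):
    # Feature vector: fake-news probability of the tweet and profile credibility.
    p_fake = fake_news_clf.predict_proba([tweet_text])[0, 1]
    return [p_fake, credibility_score(profile)]

# Stage 2: bot/human classifier trained on the two features (toy labels).
profiles = [{"followers": 10, "friends": 5000, "verified": False},
            {"followers": 800, "friends": 400, "verified": True}]
X = np.array([bot_features(t, p) for t, p in zip(tweets[:2], profiles)])
bot_clf = LogisticRegression().fit(X, [1, 0])
print(bot_clf.predict(X))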
Citation:
N. C. Wickramarathna and G. Upeksha Ganegoda, "Detecting Automatically Generated Tweets Using Lexical Analysis and Profile Credibility," 2019 4th International Conference on Information Technology Research (ICITR), 2019, pp. 1-6, doi: 10.1109/ICITR49409.2019.9407800.