New approaches to collecting and analysing large-scale data have emerged as the volume, velocity, and variety of user-generated data on online social networks have increased dramatically. Social bots, for example, have been used to provide automated analytics services and better customer service. Malicious social bots, on the other hand, have been used to propagate false information (e.g., fake news) with real-world consequences. As a result, identifying and removing malicious social bots from online social networks is critical. One of the most common detection strategies is to examine the quantitative features of bot activity; however, social bots can readily mimic these features, which lowers detection accuracy. This study presents a novel method for detecting malicious social bots that combines feature selection based on the transition probabilities of clickstream sequences with semi-supervised clustering. This strategy exploits both the temporal characteristics of user behaviour and the transition probabilities within clickstream sequences. The results of our experiments on real online social networking platforms show that the
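To make the feature-selection idea concrete, the following is a minimal sketch of how first-order transition probabilities might be extracted from a clickstream and flattened into a feature vector for clustering. The function names, event labels, and representation are illustrative assumptions, not taken from the paper itself:

```python
from collections import defaultdict

def transition_probabilities(clickstream):
    """Estimate first-order transition probabilities between click-event types.

    clickstream: an ordered list of categorical event labels,
    e.g. ["login", "post", "like", "post"].  (Illustrative labels.)
    """
    counts = defaultdict(lambda: defaultdict(int))
    # Count each consecutive (current event, next event) pair.
    for a, b in zip(clickstream, clickstream[1:]):
        counts[a][b] += 1
    probs = {}
    # Normalise each row of counts into a probability distribution.
    for a, nexts in counts.items():
        total = sum(nexts.values())
        probs[a] = {b: n / total for b, n in nexts.items()}
    return probs

def feature_vector(clickstream, event_types):
    """Flatten the transition matrix into a fixed-length feature vector,
    one entry per ordered pair of event types (0.0 where unseen)."""
    probs = transition_probabilities(clickstream)
    return [probs.get(a, {}).get(b, 0.0)
            for a in event_types for b in event_types]

# A repetitive, bot-like stream concentrates probability mass on few
# transitions, while a varied, human-like stream spreads it out.
bot_like = ["follow", "follow", "follow", "follow"]
human_like = ["login", "browse", "like", "browse", "post"]
events = ["login", "browse", "like", "post", "follow"]
print(feature_vector(bot_like, events))
print(feature_vector(human_like, events))
```

Such fixed-length vectors could then be fed to any semi-supervised clustering routine; the paper's actual feature set and clustering algorithm may differ from this sketch.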
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.