Abstract:
Web crawlers are programs or automated scripts
that scan web pages methodically to build indexes. Search
engines such as Google and Bing use crawlers to provide
web surfers with relevant information. Today, many crawlers
impersonate well-known web crawlers; for example, Google's
Googlebot has been observed to be impersonated to a high
degree. This raises ethical and security concerns, as such
fake crawlers can be used for malicious purposes.
In this paper, we present an effective methodology to detect fake
Googlebot crawlers by analyzing web access logs. We propose
using Markov chain models to learn profiles of real and fake
Googlebots based on their patterns of web resource access
sequences. We calculate log-odds ratios for a given set
of crawler sessions, and our results show that the higher the
log-odds score, the higher the probability that a given sequence
comes from the real Googlebot. Experimental results show that,
at a threshold log-odds score, the real Googlebot can be
distinguished from fakes.
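The scoring idea in the abstract can be sketched as follows: fit one first-order Markov chain to sessions labeled real and one to sessions labeled fake, then score a new session by the summed log-odds of its transitions. This is a minimal illustration under assumed details, not the authors' implementation; the resource-type labels, Laplace smoothing, and function names are all hypothetical.

```python
from collections import defaultdict
import math

def train_markov(sequences, states, alpha=1.0):
    """Estimate first-order transition probabilities from labeled
    sessions, with Laplace smoothing (alpha) so unseen transitions
    still get nonzero probability."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for a in states:
        total = sum(counts[a].values()) + alpha * len(states)
        probs[a] = {b: (counts[a][b] + alpha) / total for b in states}
    return probs

def log_odds(seq, real_model, fake_model):
    """Sum log P_real(b|a) / P_fake(b|a) over the session's
    transitions; higher scores favor the real-Googlebot model."""
    return sum(math.log(real_model[a][b] / fake_model[a][b])
               for a, b in zip(seq, seq[1:]))

# Hypothetical resource-type alphabet and toy training sessions.
states = ["robots", "html", "img"]
real_sessions = [["robots", "html", "html", "html"]] * 5
fake_sessions = [["html", "img", "img", "img"]] * 5

real_model = train_markov(real_sessions, states)
fake_model = train_markov(fake_sessions, states)
```

A session is then classified by comparing `log_odds(session, real_model, fake_model)` against a chosen threshold, mirroring the threshold log-odds score described in the abstract.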
Citation:
N. Algiryage, G. Dias and S. Jayasena, "Distinguishing Real Web Crawlers from Fakes: Googlebot Example," 2018 Moratuwa Engineering Research Conference (MERCon), 2018, pp. 13-18, doi: 10.1109/MERCon.2018.8421894.