 * Url 'http://www.kdau.com/files' blocked by robots.txt
 * Generating version from 1.2.0
 * Brute forcing: http://www.kdau.com/files/gelemental-${PV}.tar.bz2
 * Url 'http://www.kdau.com/files/gelemental-1.2.1.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-1.2.2.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-1.2.3.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-1.3.0.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-1.4.0.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-1.5.0.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-2.0.0.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-3.0.0.tar.bz2' blocked by robots.txt
 * Url 'http://www.kdau.com/files/gelemental-4.0.0.tar.bz2' blocked by robots.txt

Once the base URL is blocked, we can skip the rest, because everything under it will be blocked too.
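For reference, the "brute forcing" step amounts to generating candidate versions from the known one (1.2.0) and substituting each into the ${PV} template. A minimal sketch of that idea, assuming simple patch/minor/major increments (the function name and step counts are my own, not the tool's actual algorithm):

import string

def candidate_versions(version, steps=3):
    """Yield plausible newer versions derived from e.g. '1.2.0' (assumed scheme)."""
    major, minor, patch = (int(p) for p in version.split("."))
    for p in range(patch + 1, patch + 1 + steps):
        yield f"{major}.{minor}.{p}"      # 1.2.1, 1.2.2, 1.2.3
    for m in range(minor + 1, minor + 1 + steps):
        yield f"{major}.{m}.0"            # 1.3.0, 1.4.0, 1.5.0
    for M in range(major + 1, major + 1 + steps):
        yield f"{M}.0.0"                  # 2.0.0, 3.0.0, 4.0.0

template = "http://www.kdau.com/files/gelemental-${PV}.tar.bz2"
for pv in candidate_versions("1.2.0"):
    print(string.Template(template).substitute(PV=pv))

Each printed URL is what gets checked against robots.txt in the log above.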
Not always: a "Disallow:" rule can apply to only a particular URL. Anyway, printing these lines is almost free, since robots.txt is fetched only once, and before scanning a URL we check whether we are allowed to, before starting the network request. The only drawback is the noise in the log...
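A minimal sketch of that flow using Python's standard urllib.robotparser (not the tool's actual code, just an illustration of "fetch robots.txt once, then check each URL locally before any request"):

import urllib.robotparser

# Single fetch of robots.txt for the host.
robots = urllib.robotparser.RobotFileParser("http://www.kdau.com/robots.txt")
robots.read()

def scan(url, user_agent="*"):
    # Local check only; no network request is made for blocked URLs,
    # which is why the repeated log lines cost nothing but noise.
    if not robots.can_fetch(user_agent, url):
        print(f" * Url '{url}' blocked by robots.txt")
        return None
    # ... only here would the real HTTP request be issued ...

So even when every candidate URL is disallowed, the only network traffic is the initial robots.txt fetch.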