Why does Chrome request a robots.txt?


Problem Description

I have noticed in my logs that Chrome requested a robots.txt alongside everything I expected it to.

[...]
2017-09-17 15:22:35 - (sanic)[INFO]: Goin' Fast @ http://0.0.0.0:8080
2017-09-17 15:22:35 - (sanic)[INFO]: Starting worker [26704]
2017-09-17 15:22:39 - (network)[INFO][127.0.0.1:36312]: GET http://localhost:8080/  200 148
2017-09-17 15:22:39 - (sanic)[ERROR]: Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/sanic/app.py", line 493, in handle_request
    handler, args, kwargs, uri = self.router.get(request)
  File "/usr/local/lib/python3.5/dist-packages/sanic/router.py", line 307, in get
    return self._get(request.path, request.method, '')
  File "/usr/local/lib/python3.5/dist-packages/sanic/router.py", line 356, in _get
    raise NotFound('Requested URL {} not found'.format(url))
sanic.exceptions.NotFound: Requested URL /robots.txt not found

2017-09-17 15:22:39 - (network)[INFO][127.0.0.1:36316]: GET http://localhost:8080/robots.txt  404 42
[...]

I am running Chromium:

60.0.3112.113 (Developer Build) Built on Ubuntu, running on Ubuntu 16.04 (64-bit)

Why is this happening? Can someone elaborate?

Answer

There is the possibility that it was not your website requesting the robots.txt file, but one of your Chrome extensions (like the Wappalyzer you mentioned). This would explain why it only happened in Chrome.

To know for sure, you could check the Network tab of Chrome's DevTools to see at which point the request is made, and whether it comes from one of your scripts.
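
If the extra 404s are just noise in your log, another option is to serve a minimal robots.txt from Sanic itself so the probe gets a 200 instead of a logged NotFound. A minimal sketch, assuming a recent Sanic version (the app name "robots_demo" and the robots.txt contents are placeholders; the Sanic 0.x build shown in the log above did not require an app name):

from sanic import Sanic
from sanic.response import text

app = Sanic("robots_demo")  # recent Sanic versions require an explicit app name

@app.route("/robots.txt")
async def robots(request):
    # Allow all crawlers; adjust the rules to taste.
    return text("User-agent: *\nDisallow:\n")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

Alternatively, Sanic's app.static("/robots.txt", "./robots.txt") can serve a file on disk directly, with no handler needed.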
