What are the problems associated with serving pages with Content: application/xhtml+xml
Question
Recently, some of my new web pages (XHTML 1.1) have been set up to run a regex against the Accept request header and send the right HTTP response headers if the user agent accepts XML (Firefox and Safari do).
IE (or any other browser that doesn't accept it) will just get the plain text/html content type.
Will Googlebot (or any other search bot) have any problems with this? Are there any negatives to my approach that I have overlooked? Do you think this header sniffer would have much effect on performance?
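For concreteness, the kind of Accept sniffer described above could be sketched like this. This is a minimal illustration, not the asker's actual code; the function name and regex are assumptions, and a real handler would read the header from the request environment:

```python
import re

# Matches an Accept header that explicitly lists the XHTML media type
XHTML_RE = re.compile(r"application/xhtml\+xml")

def choose_content_type(accept_header):
    """Pick a Content-Type based on the request's Accept header."""
    if accept_header and XHTML_RE.search(accept_header):
        return "application/xhtml+xml"
    # Browsers that don't advertise XHTML support (e.g. IE) get plain HTML
    return "text/html"
```

A single compiled regex match per request like this is cheap compared to the rest of page generation, which is why the performance worry is usually minor.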
Answer
I use content negotiation to switch between application/xhtml+xml and text/html just like you describe, without noticing any problems with search bots. Strictly, though, you should take into account the q values in the Accept header, which indicate the user agent's preference for each content type. If a user agent prefers to accept text/html but will accept application/xhtml+xml as an alternative, then for greatest safety you should have the page served as text/html.
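A q-value-aware version of the check might look like the sketch below. This is an assumption-laden simplification (it ignores media-type parameters other than q and wildcard subtypes like application/*), and ties deliberately fall back to text/html, per the safety advice above:

```python
def parse_accept(accept_header):
    """Parse an Accept header into a {media_type: q_value} mapping."""
    prefs = {}
    for part in accept_header.split(","):
        fields = part.split(";")
        media_type = fields[0].strip()
        q = 1.0  # q defaults to 1.0 when absent (RFC 7231)
        for field in fields[1:]:
            field = field.strip()
            if field.startswith("q="):
                try:
                    q = float(field[2:])
                except ValueError:
                    q = 0.0
        prefs[media_type] = q
    return prefs

def negotiate(accept_header):
    """Serve XHTML only when the agent strictly prefers it over HTML."""
    prefs = parse_accept(accept_header)
    xhtml_q = prefs.get("application/xhtml+xml", 0.0)
    html_q = prefs.get("text/html", prefs.get("*/*", 0.0))
    return "application/xhtml+xml" if xhtml_q > html_q else "text/html"
```

Note that with Firefox's classic header (`text/html,application/xhtml+xml,...`) both types carry q=1.0, so this tie-breaking rule serves text/html.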