What are the problems associated with serving pages with Content-Type: application/xhtml+xml

Question
Recently, some of my new web pages (XHTML 1.1) have been set up to run a regex against the request's Accept header and send the appropriate HTTP response headers if the user agent accepts XML (Firefox and Safari do).
IE (or any other browser that doesn't accept it) just gets the plain text/html content type.
Will Googlebot (or any other search bot) have any problems with this? Are there any negatives to my approach that I have overlooked? Do you think this header sniffer would have much effect on performance?
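The sniffing described above can be sketched roughly as follows. This is a hypothetical illustration, not the asker's actual code; the function name and the simple substring-style regex are assumptions:

```python
import re

XHTML = "application/xhtml+xml"
HTML = "text/html"

def pick_content_type(accept_header: str) -> str:
    """Naive sniff: serve XHTML only if the Accept header mentions it.

    Note this ignores q values entirely, which is the weakness the
    accepted answer below points out.
    """
    if accept_header and re.search(r"application/xhtml\+xml", accept_header):
        return XHTML
    return HTML
```

For example, a typical Firefox Accept header (`text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8`) would get XHTML, while a bare `text/html` would get HTML. A per-request regex like this is cheap; its cost is negligible next to generating the page itself.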
Accepted Answer
I use content negotiation to switch between application/xhtml+xml and text/html just as you describe, and have not noticed any problems with search bots. Strictly speaking, though, you should take into account the q values in the Accept header, which indicate the user agent's preference for each content type. If a user agent prefers text/html but will accept application/xhtml+xml as an alternative, then for greatest safety you should serve the page as text/html.
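A q-value-aware version of the negotiation could look like the sketch below. This is a minimal illustration, not a full RFC-compliant parser (it ignores media-type wildcards like `application/*` and other Accept parameters), and the function names are made up for this example:

```python
def parse_accept(accept_header: str) -> dict:
    """Parse an Accept header into a {media_type: q} map (q defaults to 1.0)."""
    prefs = {}
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        media = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name.strip() == "q":
                try:
                    q = float(value)
                except ValueError:
                    q = 0.0
        prefs[media] = q
    return prefs

def negotiate(accept_header: str) -> str:
    """Serve XHTML only when the client strictly prefers it; on a tie,
    fall back to text/html for greatest safety, as described above."""
    prefs = parse_accept(accept_header)
    xhtml_q = prefs.get("application/xhtml+xml", 0.0)
    html_q = prefs.get("text/html", prefs.get("*/*", 0.0))
    return "application/xhtml+xml" if xhtml_q > html_q else "text/html"
```

Note that with this rule, Firefox's classic Accept header (`text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8`) gives both types q=1.0, so the tie resolves to text/html; only a client that explicitly ranks XHTML above HTML receives application/xhtml+xml.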