Not the browsers, dummy, the search engines


Question






If I understand the general direction of recent posts, the idea is to
improve the quality of html/css by soliciting help from the various
browsers. Browsers can certainly detect problems but they have no
sensible place to report them and no way to prevent the same problem
from happening over-and-over in multiple sites around the world. That
idea simply doesn't work.

But how about this one. Suppose we have all of those search engine
spiders do a cursory html/css edit check while they're creeping around
on the internet, and not post items with errors into their search
files. Or perhaps flag them on their lists as having errors.

This has some things going for it. It would prevent bad html from
being disseminated all over the world; it would inform the authors of
the bad html that they have a problem, and it would encourage them to
fix their problem since that's the only way anyone will be able to
find the kernels of wisdom they wish to share with the world.

The negative from the search engine point of view would be that the
spiders would take substantially longer to analyze a given file. There
may be a positive for them as well (other than that warm glow inside
when they just know they're doing the right thing!), they would be
delivering a better product to their customers. If someone selected an
item from a Google list, they could be fairly sure it wouldn't end up
being a pile of pointy brackets and wall-to-wall text.

Regards,
Kent Feiler
www.KentFeiler.com
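
Kent's "cursory html/css edit check" stays at the level of an idea, but the simplest version would be a tag-balance pass run over each page as the spider fetches it. The sketch below (Python, standard library only; the class and function names are invented for illustration) shows roughly what such a crude crawler-side check might look like. It only flags gross structural mistakes such as stray, mismatched, or unclosed tags and is nowhere near a real conformance checker.

# Minimal, hypothetical sketch of the kind of cursory check Kent describes:
# a crawler-side pass that flags obviously malformed HTML (mismatched or
# unclosed tags). Illustrative only; not a real conformance checker.
from html.parser import HTMLParser

# Void elements never get a matching end tag, so they must not be pushed
# onto the open-tag stack.
VOID_ELEMENTS = {
    "area", "base", "br", "col", "embed", "hr", "img", "input",
    "link", "meta", "param", "source", "track", "wbr",
}

class CursoryChecker(HTMLParser):
    """Collects simple structural errors while parsing a document."""

    def __init__(self):
        super().__init__()
        self.open_tags = []   # stack of currently open start tags
        self.errors = []      # human-readable error messages

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_ELEMENTS:
            self.open_tags.append((tag, self.getpos()))

    def handle_endtag(self, tag):
        if not self.open_tags:
            self.errors.append(f"stray </{tag}> at line {self.getpos()[0]}")
        elif self.open_tags[-1][0] != tag:
            self.errors.append(
                f"</{tag}> at line {self.getpos()[0]} does not match "
                f"open <{self.open_tags[-1][0]}>"
            )
            # Pop anyway so one mistake does not block further checking.
            self.open_tags.pop()
        else:
            self.open_tags.pop()

    def close(self):
        super().close()
        for tag, (line, _col) in self.open_tags:
            self.errors.append(f"<{tag}> opened at line {line} is never closed")

def cursory_check(html_text):
    """Return a list of error strings; an empty list means 'looks plausible'."""
    checker = CursoryChecker()
    checker.feed(html_text)
    checker.close()
    return checker.errors

if __name__ == "__main__":
    print(cursory_check("<html><body><p>hello<b>world</p></body></html>"))

On the sample document it flags the </p> that closes over the still-open <b> (plus the knock-on mismatches after it); real-world tag soup would trip a check like this constantly, which is exactly the scale problem the replies below point out.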

Solutions

On 10 Feb, 15:11, Kent Feiler <z...@zzzz.com> wrote:

But how about this one. Suppose we have all of those search engine
spiders do a cursory html/css edit check while they're creeping around
on the internet,

Interesting idea...

This would be a solution to a problem of authors not knowing if their
sites were invalid. However, the real problem is that authors don't
_care_ (or even understand) if their sites are invalid. If someone
cares to check, it's not hard to tell. This "spider validation" idea
just doesn't solve the real issue.


In article <aa********************************@4ax.com>,
Kent Feiler <zz**@zzzz.com> wrote:

But how about this one. Suppose we have all of those search engine
spiders do a cursory html/css edit check while they're creeping around
on the internet, and not post items with errors into their search
files.

That won't work, because most Web content is erroneous but still useful
to users. Search engines compete on the usefulness of their results, so
excluding useful results that have errors that browsers are able to
silently recover from would be a very bad business move.

Or perhaps flag them on their lists as having errors.

Not a new idea. This has been discussed relatively recently on the
WHATWG list, for example (even though the discussion was off-topic
there).

This won't work, because flagging erroneous pages would mean that the
vast majority of search results would have an error flag next to them,
adding clutter to the search UI. A person performing a search isn't
primarily interested in the spec conformance of the pages.

This has some things going for it. It would prevent bad html from
being disseminated all over the world; it would inform the authors of
the bad html that they have a problem, and it would encourage them to
fix their problem since that's the only way anyone will be able to
find the kernels of wisdom they wish to share with the world.

Search engines aren't in the business of putting perpetrators of bad
HTML in the stocks.

There
may be a positive for them as well (other than that warm glow inside
when they just know they're doing the right thing!), they would be
delivering a better product to their customers.

Why would having an error flag next to just about every search result
item constitute delivering a better product to their customers?

--
Henri Sivonen
hs******@iki.fi
http://hsivonen.iki.fi/
Mozilla Web Author FAQ: http://mozilla.org/docs/web-developer/faq.html


In article <aa********************************@4ax.com>,
Kent Feiler <zz**@zzzz.com> wrote:

If I understand the general direction of recent posts, the idea is to
improve the quality of html/css by soliciting help from the various
browsers. Browsers can certainly detect problems but they have no
sensible place to report them and no way to prevent the same problem
from happening over-and-over in multiple sites around the world. That
idea simply doesn't work.

But how about this one. Suppose we have all of those search engine
spiders do a cursory html/css edit check while they're creeping around
on the internet, and not post items with errors into their search
files. Or perhaps flag them on their lists as having errors.

That's a nice idea, Kent, but think about it -- as a search engine user,
when you do a search for "rutabaga recipes", do you care if the page it
takes you to is valid or not? A search engine that penalized invalid
pages would mostly be punishing itself. In an ideal world, all pages
would validate, but I think valid pages will always be the minority so
search engines have no choice but to deal with tag soup as best they
can. And they do a pretty good job, IMHO.

Now if someone wrote a spider whose sole purpose was validation, *that*
would be pretty interesting...

;)

--
Philip
http://NikitaTheSpider.com/
Whole-site HTML validation, link checking and more
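
Philip's closing thought, a spider whose sole purpose is validation, is essentially what his Nikita the Spider service offers, and a toy version is easy to sketch. The hypothetical Python example below crawls a single host breadth-first and reports an error count per page; it assumes the Nu HTML Checker's public out=json web-service interface (the endpoint, request format, and response schema should be verified against the current documentation), and it ignores robots.txt, rate limits, and character-set detection, all of which a real crawler would have to handle.

# A toy version of a validation-only spider: crawl one site, send each HTML
# page to an external checker, and report the error count per page.
# Hypothetical sketch: the checker endpoint and its out=json response format
# are assumptions; verify them against the checker's documentation.
import json
import urllib.parse
import urllib.request
from collections import deque
from html.parser import HTMLParser

CHECKER_URL = "https://validator.w3.org/nu/?out=json"  # assumed endpoint

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def count_errors(html_bytes):
    """Ask the checker how many errors the document has (assumed JSON schema)."""
    request = urllib.request.Request(
        CHECKER_URL,
        data=html_bytes,  # POST the raw document
        headers={"Content-Type": "text/html; charset=utf-8",
                 "User-Agent": "toy-validating-spider/0.1"},
    )
    with urllib.request.urlopen(request) as response:
        report = json.load(response)
    return sum(1 for m in report.get("messages", []) if m.get("type") == "error")

def crawl_and_validate(start_url, max_pages=10):
    """Breadth-first crawl of one host, printing an error count per page."""
    host = urllib.parse.urlparse(start_url).netloc
    queue, seen = deque([start_url]), {start_url}
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url) as response:
                body = response.read()
        except OSError as exc:
            print(f"{url}: fetch failed ({exc})")
            continue
        print(f"{url}: {count_errors(body)} error(s)")
        extractor = LinkExtractor()
        extractor.feed(body.decode("utf-8", errors="replace"))
        for href in extractor.links:
            absolute = urllib.parse.urljoin(url, href)
            if urllib.parse.urlparse(absolute).netloc == host and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)

if __name__ == "__main__":
    crawl_and_validate("http://example.com/")

Keeping the crawling and the checking separate (count_errors() could just as well wrap a local checker) fits the point made in the replies above: validation is a service you run when you care about it, not something a general-purpose search spider needs to fold into indexing.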

