Understanding A* heuristics for single goal maze


Question

I have a maze like the following:

||||||||||||||||||||||||||||||||||||
|                                 P|
| ||||||||||||||||||||||| |||||||| |
| ||   |   |      |||||||   ||     |
| || | | | | |||| ||||||||| || |||||
| || | | | |             || ||     |
| || | | | | | ||||  |||    |||||| |
| |  | | |   |    || ||||||||      |
| || | | |||||||| ||        || |||||
| || |   ||       ||||||||| ||     |
|    |||||| |||||||      || |||||| |
||||||      |       |||| || |      |
|      |||||| ||||| |    || || |||||
| ||||||      |       ||||| ||     |
|        |||||| ||||||||||| ||  || |
||||||||||                  |||||| |
|+         ||||||||||||||||        |
||||||||||||||||||||||||||||||||||||

The goal is for P to find +, with the sub-goals:

  • The path to + is the least cost (1 hop = cost+1)
  • The number of cells searched (nodes expanded) is minimized

I'm trying to understand why my A* heuristic is performing so much worse than an implementation I have for Greedy Best First. Here are the two bits of code for each:

#Greedy Best First -- Manhattan Distance
self.heuristic = abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0])

#A* -- Manhattan Distance + Path Cost from 'startNode' to 'currentNode'
return abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0]) + self.costFromStart
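
Stripped of the Node class, the only difference between the two priorities is the g(n) term. A standalone sketch (the helper names are hypothetical; xy and goalXY are (row, col) tuples):

#Hypothetical standalone versions of the two priorities above
def manhattan(xy, goalXY):
    return abs(goalXY[1] - xy[1]) + abs(goalXY[0] - xy[0])

def greedyPriority(xy, goalXY):
    return manhattan(xy, goalXY)                    #f(n) = h(n)

def aStarPriority(xy, goalXY, costFromStart):
    return costFromStart + manhattan(xy, goalXY)    #f(n) = g(n) + h(n)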

In both algorithms, I'm using a heapq, prioritizing based on the heuristic value. The primary search loop is the same for both:

theFrontier = []
heapq.heappush(theFrontier, (stateNode.heuristic, stateNode)) #populate frontier with 'start copy' as only available Node

#while !goal and frontier !empty
while not GOAL_STATE and theFrontier:
    stateNode = heapq.heappop(theFrontier)[1] #heappop returns tuple of (weighted-idx, data)
    CHECKED_NODES.append(stateNode.xy)
    while stateNode.moves and not GOAL_STATE:
        EXPANDED_NODES += 1
        moveDirection = heapq.heappop(stateNode.moves)[1]

        nextNode = Node()
        nextNode.setParent(stateNode)
        #this makes a call to setHeuristic
        nextNode.setLocation((stateNode.xy[0] + moveDirection[0], stateNode.xy[1] + moveDirection[1]))
        if nextNode.xy not in CHECKED_NODES and not isInFrontier(nextNode):
            if nextNode.checkGoal(): break
            nextNode.populateMoves()
            heapq.heappush(theFrontier, (nextNode.heuristic, nextNode))
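
For reference, the same frontier idea in textbook form: the goal test happens when a node is popped rather than when it is generated, and the best known g per cell is tracked so stale heap entries can be skipped. This is only a self-contained sketch, not the code above; neighbors(xy) is a hypothetical callback yielding the open adjacent cells:

import heapq

def astar(start, goal, neighbors):
    #Manhattan distance, admissible on a 4-connected grid
    def h(xy):
        return abs(goal[1] - xy[1]) + abs(goal[0] - xy[0])

    theFrontier = [(h(start), 0, start)]    #entries are (f, g, cell)
    bestG = {start: 0}                      #cheapest known cost to reach each cell
    expanded = 0
    while theFrontier:
        f, g, cell = heapq.heappop(theFrontier)
        if cell == goal:                    #goal test at pop time
            return g, expanded
        if g > bestG[cell]:                 #stale entry; a cheaper path was already found
            continue
        expanded += 1
        for nxt in neighbors(cell):
            newG = g + 1                    #1 hop = cost+1
            if newG < bestG.get(nxt, float('inf')):
                bestG[nxt] = newG
                heapq.heappush(theFrontier, (newG + h(nxt), newG, nxt))
    return None, expanded                   #no path to the goal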

So now we come to the issue. While A* finds the optimal path, it's pretty expensive at doing so: to find the optimal path of cost 68, it expands (navigates and searches through) 452 nodes.

The Greedy Best First implementation I have, meanwhile, finds a sub-optimal path (cost: 74) in only 160 expansions.

I'm really trying to understand where I'm going wrong here. I realize that Greedy Best First algorithms can behave like this naturally, but the gap in node expansions is so large that I feel something must be wrong. Any help would be appreciated; I'm happy to add details if anything I've pasted above is unclear.

Answer

A* provides the optimal answer to the problem; greedy best-first search provides any solution.

It's expected that A* has to do more work: with a consistent heuristic like Manhattan distance, A* must expand every node whose f-value g(n) + h(n) is below the optimal solution cost before it can prove optimality, whereas greedy best-first simply chases the smallest h(n) and stops at the first solution it reaches.

If you want a variation of A* that is no longer optimal but returns a solution much faster, you can look at weighted A*. It simply consists of applying a weight (> 1) to the heuristic term. In practice, it gives you a huge performance increase.

For example, you could try this:

return 2*(abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0])) + self.costFromStart
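
To make the trade-off tunable, the weight can be pulled out as a constant. A sketch reusing the attribute names from the question (W and this setHeuristic wrapper are assumptions, not the original code):

W = 2  #weight > 1: larger W expands fewer nodes but strays further from the optimal path

def setHeuristic(self, goalNodeXY):
    manhattan = abs(goalNodeXY[1] - self.xy[1]) + abs(goalNodeXY[0] - self.xy[0])
    self.heuristic = W * manhattan + self.costFromStart

With an admissible base heuristic such as Manhattan distance, weighted A* is bounded-suboptimal: the path it returns costs at most W times the optimal cost, so W lets you trade solution quality for search effort explicitly.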
