A* and N-Puzzle optimization


Question

I am writing a solver for the N-Puzzle (see http://en.wikipedia.org/wiki/Fifteen_puzzle)

Right now I am using an unordered_map to store hash values of the puzzle board, and Manhattan distance as the heuristic for the algorithm, which is a plain DFS.

So I have:

    auto pred = [](Node * lhs, Node * rhs){ return lhs->manhattanCost_ < rhs->manhattanCost_; };
    std::multiset<Node *, decltype(pred)> frontier(pred);
    std::vector<Node *> explored; // holds nodes we have already explored
    std::tr1::unordered_set<unsigned> frontierHashTable;
    std::tr1::unordered_set<unsigned> exploredHashTable;
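
For reference, here is a minimal sketch of the Manhattan-distance heuristic mentioned above (this is not the original code; it assumes the board is a flat row-major vector of values 0..n*n-1, with 0 as the blank and a goal that places tile v at index v-1):

    #include <cstddef>
    #include <vector>

    // Manhattan distance: sum over all tiles of |row - goalRow| + |col - goalCol|.
    unsigned manhattanCost(const std::vector<unsigned> & data, unsigned n)
    {
        unsigned cost = 0;
        for(std::size_t idx = 0; idx < data.size(); ++idx)
        {
            unsigned v = data[idx];
            if(v == 0) continue; // the blank tile is not counted
            unsigned goal = v - 1;
            unsigned row = idx / n, col = idx % n;
            unsigned gRow = goal / n, gCol = goal % n;
            cost += (row > gRow ? row - gRow : gRow - row)
                  + (col > gCol ? col - gCol : gCol - col);
        }
        return cost;
    }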

This works great for n = 2 and 3. However, it's really hit and miss for n = 4 and above (the STL is unable to allocate memory for a new node).

I also suspect that I am getting hash collisions in the unordered_set:

    unsigned makeHash(const Node & pNode)
    {
        unsigned int b    = 378551;
        unsigned int a    = 63689;
        unsigned int hash = 0;

        for(std::size_t i = 0; i < pNode.data_.size(); i++)
        {
            hash = hash * a + pNode.data_[i];
            a    = a * b;
        }

        return hash;
    }

16! ≈ 2 × 10^13 (possible arrangements)

2^32 ≈ 4 × 10^9 (possible values of a 32-bit hash)
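
Incidentally, for n = 4 a lossy hash is not even necessary (a sketch, not part of the original post): each of the 16 cells holds a value 0-15, which fits in 4 bits, so the whole board packs losslessly into a single 64-bit integer and collisions become impossible. The name encodeBoard is illustrative:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Exact 4x4 board encoding: 16 cells x 4 bits = 64 bits, collision-free.
    // Store these keys in a std::unordered_set<uint64_t> instead of 32-bit hashes.
    uint64_t encodeBoard(const std::vector<unsigned> & data) // data.size() == 16
    {
        uint64_t key = 0;
        for(std::size_t i = 0; i < data.size(); ++i)
            key = (key << 4) | (data[i] & 0xFu);
        return key;
    }

For n = 5 this no longer fits (25 cells × 5 bits = 125 bits), so a pair of 64-bit words or a genuine 64-bit hash would be needed there.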

My question is: how can I optimize my code to solve for n=4 and n=5? I know from http://kociemba.org/fifteen/fifteensolver.html and http://www.ic-net.or.jp/home/takaken/e/15pz/index.html that n=4 is possible in less than a second on average.

Edit: The algorithm itself is here:

    bool NPuzzle::aStarSearch()
    {
        auto pred = [](Node * lhs, Node * rhs){ return lhs->manhattanCost_ < rhs->manhattanCost_; };
        std::multiset<Node *, decltype(pred)> frontier(pred);
        std::vector<Node *> explored; // holds nodes we have already explored
        std::tr1::unordered_set<unsigned> frontierHashTable;
        std::tr1::unordered_set<unsigned> exploredHashTable;

        // if we are in the solved position in the first place, return true
        if(initial_ == target_)
        {
            current_ = initial_;
            return true;
        }

        frontier.insert(new Node(initial_)); // we are going to delete everything from the frontier later..

        for(;;)
        {
            if(frontier.empty())
            {
                std::cout << "depth first search " << "cant solve!" << std::endl;
                return false;
            }

            // remove a node from the frontier, and place it into the explored set
            Node * pLeaf = *frontier.begin();
            frontier.erase(frontier.begin());
            explored.push_back(pLeaf);

            // do the same for the hash table
            unsigned hashValue = makeHash(*pLeaf);
            frontierHashTable.erase(hashValue);
            exploredHashTable.insert(hashValue);

            std::vector<Node *> children = pLeaf->genChildren();
            for( auto it = children.begin(); it != children.end(); ++it)
            {
                unsigned childHash = makeHash(**it);
                if(inFrontierOrExplored(frontierHashTable, exploredHashTable, childHash))
                {
                    delete *it;
                }
                else
                {
                    if(**it == target_)
                    {
                        explored.push_back(*it);
                        current_ = **it;

                        // delete everything else in children
                        for( auto it2 = ++it; it2 != children.end(); ++it2)
                            delete *it2;

                        // delete everything in the frontier
                        for( auto it = frontier.begin(); it != frontier.end(); ++it)
                            delete *it;

                        // delete everything in explored
                        explored_.swap(explored);
                        for( auto it = explored.begin(); it != explored.end(); ++it)
                            delete *it;

                        return true;
                    }
                    else
                    {
                        frontier.insert(*it);
                        frontierHashTable.insert(childHash);
                    }
                }
            }
        }
    }

Answer

Since this is homework I will suggest some strategies you might try.

First, try using valgrind or a similar tool to check for memory leaks. You may have some memory leaks if you don't delete everything you new.
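
For example, a typical invocation (assuming the binary is named solver; substitute your own) would be:

    valgrind --leak-check=full ./solver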

Second, calculate a bound on the number of nodes that should be explored. Keep track of the number of nodes you do explore. If you pass the bound, you might not be detecting cycles properly. (For the 15-puzzle, only half of the 16! arrangements are reachable from any given start, so 16!/2 ≈ 1.0 × 10^13 is a hard upper bound.)

Third, try the algorithm with depth first search instead of A*. Its memory requirements should be linear in the depth of the tree and it should just be a matter of changing the sort ordering (pred). If DFS works, your A* search may be exploring too many nodes or your memory structures might be too inefficient. If DFS doesn't work, again it might be a problem with cycles.
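
A sketch of that change, under the assumption that each Node records its distance from the root in a depth_ member (the original code does not show one): if the deepest node sorts first, the existing pop-from-begin() loop expands the tree depth-first.

    // Hypothetical depth_ member: deepest node sorts first, so the
    // pop-from-begin() loop now expands nodes in depth-first order.
    auto pred = [](Node * lhs, Node * rhs){ return lhs->depth_ > rhs->depth_; };
    std::multiset<Node *, decltype(pred)> frontier(pred);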

Fourth, try more compact memory structures. For example, std::multiset does what you want but std::priority_queue with a std::deque may take up less memory. There are other changes you could try and see if they improve things.
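
A sketch of that container swap, with one caveat: std::priority_queue::top() returns the greatest element under its comparator, so the comparison must be inverted for the cheapest node to come out first.

    #include <deque>
    #include <queue>

    // Inverted comparator: the node with the smallest manhattanCost_ ranks as
    // the "greatest" element, so frontier.top() yields the cheapest node.
    auto pred = [](Node * lhs, Node * rhs){ return lhs->manhattanCost_ > rhs->manhattanCost_; };
    std::priority_queue<Node *, std::deque<Node *>, decltype(pred)> frontier(pred);

frontier.top() then replaces *frontier.begin(), and frontier.pop() replaces frontier.erase(frontier.begin()). Note that a priority_queue cannot erase arbitrary elements, but the original loop only ever removes the front, so it fits.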
