Iterative Deepening A Star (IDA*) to solve n-puzzle (sliding puzzle) in Java

Question

I've implemented a program able to solve the n-puzzle problem with A*. Since the space of states is too big, I cannot precompute it and I have to calculate the possible states at runtime. This way A* works fine for a 3-puzzle, but for a 4-puzzle it can take too long. Using Manhattan distance adjusted with linear conflicts, it is still fast if the optimal solution requires around 25 moves; around 35 moves takes 10 seconds, and 40 takes 180 seconds. I haven't tried more yet.
I think that's because I must keep all visited states, since the heuristics I'm using are admissible but (I think) not consistent (I also tried Hamming and Gaschnig distances and a few more). Since the solution space is a graph, the heuristic must also be consistent, otherwise the algorithm can loop or return a non-optimal solution. That's why I keep all visited nodes (this is also written in the book "AI: A Modern Approach"). But anyway, this storage does not slow things down at all. What is slow is keeping the queue of nodes to be visited ordered.
So I decided to try IDA*, which, as I understand it, does not require this storage (although I still keep all visited states to avoid loops). It's faster for solutions that require 35 or fewer moves, but for 40 it's much slower.
Here is my code. Am I doing something wrong?

public static State solveIDAStar(State initialState) {
    int limit = initialState.getManhattanDistance() + 2 * initialState.getLinearConflicts();
    State result = null;
    while(result == null) {
        visitedStates.add(initialState); // It's a global variable
        result = limitedSearch(initialState, limit);
        limit = newLimit;
        visitedStates.clear();
    }
    return result;
}

public static State limitedSearch(State current, int limit) {
    for(State s : current.findNext()) {
        if(s.equals(GOAL)) {
            s.setParent(current);
            return s;
        }
        if(!visitedStates.contains(s)) {
            s.setPathCost(current.getPathCost() + 1);
            s.setParent(current);
            int currentCost = s.getManhattanDistance() + 2 * s.getLinearConflicts() + s.getPathCost();
            if(currentCost <= limit) {
                visitedStates.add(s);
                State solution = limitedSearch(s, limit);
                if(solution != null)
                    return solution;
            } else {
                if(currentCost < newLimit)
                    newLimit = currentCost;
            }
        }
    }
    return null;
}
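
For reference, here is a minimal sketch of what the Manhattan-distance term used above typically computes; the board representation and method below are illustrative and are not the actual State class from the code above. The linear-conflict term then counts pairs of tiles that already sit in their goal row or column but in reversed order, each such pair adding two extra moves on top of the Manhattan distance.

// Illustrative sketch only: assumes the board is a flat size*size array,
// 0 marks the blank, and tile t belongs at index t - 1 in the goal state.
public static int manhattanDistance(int[] board, int size) {
    int distance = 0;
    for (int i = 0; i < board.length; i++) {
        int tile = board[i];
        if (tile == 0) continue;          // the blank tile is not counted
        int goalIndex = tile - 1;         // where this tile belongs in the goal
        distance += Math.abs(i / size - goalIndex / size)    // row difference
                  + Math.abs(i % size - goalIndex % size);   // column difference
    }
    return distance;
}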


Answer

Old stuff moved down.

Changes so that newLimit can skip steps (the bestSolution stuff):

State bestSolution; // add this global

public static State solveIDAStar(State initialState) {
    int limit = initialState.getManhattanDistance() + 2 * initialState.getLinearConflicts();
    bestSolution = null; // reset global to null
    State result = null;
    while(result == null) {
        visitedStates.add(initialState); // It's a global variable
        newLimit = INFINITY;
        result = limitedSearch(initialState, limit);
        limit = newLimit;
        visitedStates.clear();
    }
    return result;
}

public static State limitedSearch(State current, int limit) {
    for(State s : current.findNext()) {
        if(s.equals(GOAL)) {
            s.setParent(current);
            return s;
        }
        if(!visitedStates.contains(s)) {
            s.setPathCost(current.getPathCost() + 1);
            s.setParent(current);
            int currentCost = s.getManhattanDistance() + 2 * s.getLinearConflicts() + s.getPathCost();
            if(currentCost <= limit) {
                visitedStates.add(s);
                State solution = limitedSearch(s, limit);
                if(solution != null &&
                   (bestSolution == null || solution.getPathCost() < bestSolution.getPathCost()))
                        bestSolution = solution; // cache solution so far
            } else {
                if(currentCost < newLimit)
                    newLimit = currentCost;
            }
        }
    }
    return null;
}

Original answer

So I found an open source implementation. Miraculously, it is also in Java.

The application can be tested here: http://n-puzzle-solver.appspot.com/

And the specifically relevant source code is: http://code.google.com/p/julien-labs/source/browse/trunk/SlidingPuzzle/src/be/dramaix/ai/slidingpuzzle/server/search/IDAStar.java

Not sure how much the 1st change suggested below might change the time taken, but I am quite sure that you need to make the 2nd change.

By comparing the code, you will find that this function

private Node depthFirstSearch(Node current, int currentCostBound, State goal)

is basically your function here

public static State limitedSearch(State current, int limit)

and Julien Dramaix's implementation doesn't have:

if(!visitedStates.contains(s)) {
    ...
    visitedStates.add(s);

So take those two lines out to test.
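
In other words, the depth-limited search keeps everything else the same and simply drops the visited-state bookkeeping, roughly like this (the cost bound keeps the recursion finite even without a visited set, because the path cost grows by 1 on every level):

public static State limitedSearch(State current, int limit) {
    for(State s : current.findNext()) {
        if(s.equals(GOAL)) {
            s.setParent(current);
            return s;
        }
        s.setPathCost(current.getPathCost() + 1);
        s.setParent(current);
        int currentCost = s.getManhattanDistance() + 2 * s.getLinearConflicts() + s.getPathCost();
        if(currentCost <= limit) {
            // no visitedStates check or add here
            State solution = limitedSearch(s, limit);
            if(solution != null)
                return solution;
        } else if(currentCost < newLimit) {
            newLimit = currentCost;
        }
    }
    return null;
}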

Your function public static State solveIDAStar(State initialState) does something weird in the while loop.

After you fail once, you set the maximum depth (limit) to infinity. Basically, on the first iteration you try to find a solution as good as your heuristic estimate, and after that you try to find any solution at all. This is not iterative deepening.

Iterative deepening means that every time you try, you go a little bit deeper.

Indeed, looking at the while loop in public PuzzleSolution resolve(State start, State goal), you will find nextCostBound += 2;. That means that on every new attempt, you look for solutions that take up to 2 more moves than before.
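
Translated back to the code in the question, a driver loop in that style might look roughly like the sketch below. It is only a sketch: it assumes the visited-state bookkeeping has already been removed as suggested above, and it steps the bound by 2 because for the sliding puzzle the optimal solution length always has the same parity as the initial Manhattan-distance estimate (each move changes that distance by exactly 1). With a fixed step like this, the newLimit tracking inside limitedSearch is no longer needed.

public static State solveIDAStar(State initialState) {
    // start the cost bound at the heuristic estimate of the initial state
    int limit = initialState.getManhattanDistance() + 2 * initialState.getLinearConflicts();
    State result = null;
    while(result == null) {
        result = limitedSearch(initialState, limit);
        limit += 2;   // deepen a little on each failed iteration, like nextCostBound += 2
    }
    return result;
}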

Otherwise, everything else looks similar (although your exact implementation of the State class might be slightly different).

If it works better, you might also want to try some of the other heuristics at http://code.google.com/p/julien-labs/source/browse/#svn%2Ftrunk%2FSlidingPuzzle%2Fsrc%2Fbe%2Fdramaix%2Fai%2Fslidingpuzzle%2Fclient.

The heuristics are found in the server/search/heuristic folder.
