How to optimize this suboptimal Set-Cover solution?
Problem Description
I wrote this program to test how long it would take to "solve" the set-cover problem.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;
using MoreLinq;

namespace SetCover
{
    class Program
    {
        const int maxNumItems = 10000;
        const int numSets = 5000;
        const int maxItemsPerSet = 300;

        static void Main(string[] args)
        {
            var rand = new Random();
            var sets = new List<HashSet<int>>(numSets);
            var cover = new List<HashSet<int>>(numSets);
            var universe = new HashSet<int>();
            HashSet<int> remaining;
            var watch = new Stopwatch();

            Console.Write("Generating sets...");
            for (int i = 0; i < numSets; ++i)
            {
                int numItemsInSet = rand.Next(1, maxItemsPerSet);
                sets.Add(new HashSet<int>());
                for (int j = 0; j < numItemsInSet; ++j)
                {
                    sets[i].Add(rand.Next(maxNumItems));
                }
            }
            Console.WriteLine("Done!");

            Console.Write("Computing universe...");
            foreach (var set in sets)
                foreach (var item in set)
                    universe.Add(item);
            Console.WriteLine("Found {0} items.", universe.Count);

            watch.Start();
            //Console.Write("Removing subsets...");
            //int numSetsRemoved = sets.RemoveAll(subset => sets.Any(superset => subset != superset && subset.IsSubsetOf(superset)));
            //Console.WriteLine("Removed {0} subsets.", numSetsRemoved);
            //Console.Write("Sorting sets...");
            //sets = sets.OrderByDescending(s => s.Count).ToList();
            //Console.WriteLine("{0} elements in largest set.", sets[0].Count);
            Console.WriteLine("Computing cover...");
            remaining = universe.ToHashSet();
            while (remaining.Any())
            {
                Console.Write("  Finding set {0}...", cover.Count + 1);
                var nextSet = sets.MaxBy(s => s.Intersect(remaining).Count());
                remaining.ExceptWith(nextSet);
                cover.Add(nextSet);
                Console.WriteLine("{0} elements remaining.", remaining.Count);
            }
            Console.WriteLine("{0} sets in cover.", cover.Count);
            watch.Stop();
            Console.WriteLine("Computed cover in {0} seconds.", watch.Elapsed.TotalSeconds);
            Console.ReadLine();
        }
    }

    public static class Extensions
    {
        public static HashSet<TValue> Clone<TValue>(this HashSet<TValue> set)
        {
            var tmp = new TValue[set.Count];
            set.CopyTo(tmp, 0);
            return new HashSet<TValue>(tmp);
        }

        public static HashSet<TSource> ToHashSet<TSource>(this IEnumerable<TSource> source)
        {
            return new HashSet<TSource>(source);
        }
    }
}
This is just a greedy, sub-optimal solution, but it still took 147 seconds to run. I think, however, that this solution should be pretty close to optimal, so it should be good enough for my purposes. How can I speed it up, though?
I commented out a few lines because they did more harm than good. Computing the universe should actually not be part of the timing... it can be known beforehand.
Recommended Answer
I haven't gone deeply into the details of your code/algorithm, but I'll offer some advice from theory. As Henk commented, to perform a good benchmark you MUST remove all unneeded code, build in Release mode with full optimizations, and run the program from the command line.
Then, remember that you are running managed code: C# (and Java) are designed for interoperability rather than raw performance, although both are still good platforms. If you need performance, you could either reimplement your code in C++ or, if you prefer, try Mono with AOT (the ahead-of-time compiler), which can boost performance considerably:
mono --aot=full YourProgram.exe
Now more about benchmarks and optimality: have you compared your results with others? Did you run other set-cover algorithms on the same hardware, or can you compare your hardware to that of others who ran the same algorithm?
And... how close is your solution to optimal? Can you provide an estimate yourself? The key is in the LINQ calls, which I dislike because you trade control over your code for brevity. What's the complexity of each LINQ call? If each one is O(n), your algorithm is O(n^3). I would also suggest replacing
remaining.Any()

with

remaining.Count > 0

since Count is a plain O(1) property read, while Any() spins up an enumerator on every loop check.
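In the same spirit, here is an editor's sketch (not from the original answer) of a related micro-tweak: `sets.MaxBy(s => s.Intersect(remaining).Count())` materializes a temporary set on every evaluation just to count it, while counting membership directly avoids that allocation entirely.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class OverlapDemo
{
    static void Main()
    {
        var remaining = new HashSet<int> { 1, 2, 3, 4 };
        var set = new HashSet<int> { 2, 4, 6 };

        // Intersect() builds a temporary set before counting it...
        int viaIntersect = set.Intersect(remaining).Count();

        // ...while Count(predicate) just scans `set` with O(1) lookups.
        int viaContains = set.Count(remaining.Contains);

        Console.WriteLine("{0} {1}", viaIntersect, viaContains);
    }
}
```

Both expressions return the same overlap count; the second only enumerates `set` and does hash lookups in `remaining`, so the asymptotic cost is the same but the constant factor and garbage-collector pressure are lower.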
These are just suggestions; I hope they have been of help.
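Beyond micro-tweaks, the dominant cost in the posted program is that every greedy round recomputes the overlap of all 5000 sets with `remaining`. A common remedy, sketched here as a hedged editor's addition rather than the poster's code, is to build an inverted index from item to the sets containing it, and then decrement per-set "uncovered" scores incrementally as items become covered:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class GreedyCover
{
    // Greedy set cover over sets of ints; returns indices of chosen sets.
    public static List<int> Cover(List<HashSet<int>> sets)
    {
        // Inverted index: item -> indices of the sets containing it.
        var setsContaining = new Dictionary<int, List<int>>();
        for (int i = 0; i < sets.Count; ++i)
            foreach (var item in sets[i])
            {
                if (!setsContaining.TryGetValue(item, out var list))
                    setsContaining[item] = list = new List<int>();
                list.Add(i);
            }

        // score[i] = number of still-uncovered items in sets[i].
        var score = sets.Select(s => s.Count).ToArray();
        var covered = new HashSet<int>();
        int universeSize = setsContaining.Count;
        var cover = new List<int>();

        while (covered.Count < universeSize)
        {
            // Pick the set covering the most uncovered items (linear scan).
            int best = 0;
            for (int i = 1; i < score.Length; ++i)
                if (score[i] > score[best]) best = i;
            cover.Add(best);

            // Mark its items covered; only touch the sets that contain them.
            foreach (var item in sets[best])
                if (covered.Add(item))
                    foreach (var j in setsContaining[item])
                        --score[j];
        }
        return cover;
    }

    static void Main()
    {
        var sets = new List<HashSet<int>>
        {
            new HashSet<int> { 1, 2, 3 },
            new HashSet<int> { 2, 4 },
            new HashSet<int> { 3, 4, 5 },
        };
        Console.WriteLine(Cover(sets).Count);
    }
}
```

With this layout each round costs one linear scan over the scores plus work proportional to the chosen set's size and its items' index lists, instead of re-intersecting every set with `remaining`; a max-heap with lazy score updates could shave the scan as well.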