How to avoid "heap pointer spaghetti" in dynamic graphs?


Problem description

General question

Suppose you are coding a system that consists of a graph, plus graph rewrite rules that can be activated depending on the configuration of neighboring nodes. That is, you have a dynamic graph that grows and shrinks unpredictably at runtime. If you naively use malloc, new nodes get allocated at essentially random positions in memory; after enough time, your heap is pointer spaghetti, giving you terrible cache efficiency. Is there any lightweight, incremental technique to keep nodes that are wired together close together in memory?
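
For concreteness, one candidate for such a technique is a per-graph node pool: one contiguous slab plus a free list, so that nodes created by the same burst of rewrites land in the same block and freed slots are recycled in place. The sketch below is illustrative only; the `Node` layout and the pool API are assumptions, not code from the projects linked further down.

```c
#include <stdlib.h>

/* Illustrative node type; the real system's node layout will differ. */
typedef struct Node {
    struct Node *ports[3];   /* links to neighboring nodes           */
    int          kind;       /* node label used by the rewrite rules */
} Node;

/* A fixed-capacity pool: nodes created around the same time (and thus
 * likely to be wired together) end up in the same contiguous block,
 * and freed slots are recycled through a free list. */
typedef struct {
    Node  *slots;     /* one contiguous allocation                */
    Node **free_list; /* stack of recycled slots                  */
    size_t capacity;
    size_t used;      /* next never-used slot                     */
    size_t free_top;  /* number of entries on the free-list stack */
} NodePool;

int pool_init(NodePool *p, size_t capacity) {
    p->slots     = malloc(capacity * sizeof *p->slots);
    p->free_list = malloc(capacity * sizeof *p->free_list);
    if (!p->slots || !p->free_list) return -1;
    p->capacity = capacity;
    p->used     = 0;
    p->free_top = 0;
    return 0;
}

Node *pool_alloc(NodePool *p) {
    if (p->free_top > 0)              /* reuse a hole first             */
        return p->free_list[--p->free_top];
    if (p->used < p->capacity)        /* otherwise take the next slot   */
        return &p->slots[p->used++];
    return NULL;                      /* pool exhausted                 */
}

void pool_free(NodePool *p, Node *n) {
    p->free_list[p->free_top++] = n;  /* leave the slot in place for reuse */
}
```

A pool alone does not guarantee that wired nodes stay adjacent after many rewrites, but it bounds the scatter to one slab and makes a later compaction pass straightforward.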

What I tried

The only thing I could think of is embedding the nodes in a Cartesian space and running a physical elastic simulation that repels/attracts nodes. That would keep wired nodes together, but it looks silly, and I suspect the overhead of the simulation would outweigh the cache-efficiency speedup.
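
For what it's worth, the cheapest version of that idea is one-dimensional rather than Cartesian: give each node a scalar position, pull the endpoints of every edge toward each other, and periodically reorder the nodes in memory by position. The sketch below is only an illustration of that idea (the `Edge` type and function name are made up here), not code from the linked projects.

```c
#include <stddef.h>

typedef struct { size_t a, b; } Edge;

/* One relaxation sweep: every edge pulls its endpoints together.
 * With attraction only, repeated sweeps collapse all positions toward
 * one point; a real version would add repulsion or re-normalize the
 * spacing between sweeps. */
void relax_positions(double *pos, const Edge *edges, size_t n_edges, double step) {
    for (size_t i = 0; i < n_edges; i++) {
        double delta = pos[edges[i].b] - pos[edges[i].a];
        pos[edges[i].a] += step * delta;   /* pull a toward b */
        pos[edges[i].b] -= step * delta;   /* pull b toward a */
    }
}
```

Sorting the node array by `pos` after a few sweeps gives a memory order in which wired nodes tend to be adjacent; whether that pays for itself is exactly the question.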

Concrete example

This is the system I'm trying to implement. This is a brief snippet of the code I'm trying to optimize in C. This repo is a prototypical, working implementation in JS, with terrible cache efficiency (on top of the overhead of the language itself). This video shows the system in action graphically.

Recommended answer

What you are looking to solve is the Linear Arrangement Problem. Solving it exactly is considered NP-hard, but some good approximations exist. Here is a paper which should be a good place to start.
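
One cheap approximation in this spirit is a breadth-first relabeling (in the spirit of Cuthill-McKee bandwidth reduction): traverse the graph, assign consecutive ranks, and compact the nodes into a fresh contiguous array in that order so that neighbors land near each other. A rough sketch, assuming an adjacency-list layout that is not taken from the linked paper or code:

```c
#include <stdlib.h>

/* Illustrative adjacency-list graph: neighbors of node i live in
 * adj[adj_start[i] .. adj_start[i+1]-1], with adj_start of length n_nodes+1. */
typedef struct {
    size_t  n_nodes;
    size_t *adj_start;
    size_t *adj;
} Graph;

/* Fills `order` (length n_nodes) with a breadth-first permutation of node
 * ids, so that neighbors receive nearby ranks.  Compacting the node array
 * in this order is a cheap approximation to a minimum linear arrangement. */
void bfs_order(const Graph *g, size_t *order) {
    unsigned char *seen = calloc(g->n_nodes, 1);
    size_t *queue = malloc(g->n_nodes * sizeof *queue);
    size_t head = 0, tail = 0, out = 0;

    for (size_t root = 0; root < g->n_nodes; root++) {
        if (seen[root]) continue;            /* start a new component */
        seen[root] = 1;
        queue[tail++] = root;
        while (head < tail) {
            size_t u = queue[head++];
            order[out++] = u;                /* u's new rank is out-1 */
            for (size_t k = g->adj_start[u]; k < g->adj_start[u + 1]; k++) {
                size_t v = g->adj[k];
                if (!seen[v]) { seen[v] = 1; queue[tail++] = v; }
            }
        }
    }
    free(queue);
    free(seen);
}
```

After computing `order`, you would copy node `order[r]` into slot `r` of a new array and patch the pointers; re-running this every few thousand rewrites amortizes the cost while restoring locality.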
