Cobb-Douglas function slows running tremendously. How to expedite a non-linear calculation in Python?


Problem description

I have a working microeconomic model with 10 modules and 2,000 agents, running for up to 10 years. The program was running fast, providing results, output and graphics in a matter of seconds.

However, when I implemented a non-linear Cobb-Douglas production function to update the quantity to be produced in the firms, the program slowed down, taking up to 3 minutes to produce results, depending on the parameters.

Does anybody know how I could expedite the calculation and get back to fast results?

Here is the code of the function:

ALPHA = 0.5

def update_product_quantity(self):
    if len(self.employees) > 0 and self.total_balance > 0:
        # Cobb-Douglas: capital (total_balance) and labour (sum of
        # employee qualifications), weighted by ALPHA
        dummy_quantity = self.total_balance ** parameters.ALPHA * \
                         self.get_sum_qualification() ** (1 - parameters.ALPHA)
        for key in self.inventory.keys():
            # adds one unit per iteration until dummy_quantity is used up
            while dummy_quantity > 0:
                self.inventory[key].quantity += 1
                dummy_quantity -= 1

The previous linear function, which ran fast, was:

def update_product_quantity(self):
    if len(self.employees) > 0:
        dummy_quantity = self.get_sum_qualification()
        for key in self.inventory.keys():   
            while dummy_quantity > 0:
                self.inventory[key].quantity += 1
                dummy_quantity -= 1

Answer

It's hard to say how to fix it without seeing the context in the rest of your code; but one thing that might speed it up is pre-computing dummy quantities with numpy. For example, you could make a numpy array of each agent's total_balance and sum_qualification, compute a corresponding array of dummy_quantities and then assign that back to the agents.
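Separately from numpy, the inner `while` loop adds one unit per iteration, so its cost grows with the size of `dummy_quantity`; it can be collapsed into a single addition. A minimal sketch with a simplified inventory of plain numbers (the names below are illustrative, not from the model). Note that the original loop exhausts `dummy_quantity` on the first key, so later keys receive nothing; the sketch preserves that behaviour, but if it is unintended it is a separate bug worth fixing:

```python
import math

def fast_update(inventory, dummy_quantity):
    """O(1) equivalent of the question's per-unit while loop:
    adds ceil(dummy_quantity) units to the first inventory key
    and, like the original, leaves every other key unchanged."""
    for key in inventory:
        inventory[key] += math.ceil(dummy_quantity)
        break  # the original while loop drains dummy_quantity on this key
    return inventory

inv = {"bread": 0, "milk": 0}
fast_update(inv, 5.3)  # bread gets 6 units (5.3 rounded up), milk gets 0
```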

Here's a highly-simplified demonstration of the speedup:

%%timeit
vals = range(100000)
new_vals = [v**0.5 for v in vals]
> 100 loops, best of 3: 15 ms per loop

Now, with numpy:

%%timeit
vals = np.array(range(100000))
new_vals = np.sqrt(vals)
> 100 loops, best of 3: 6.3 ms per loop
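Applied to the model, the same idea means collecting each firm's balance and qualification sum into arrays, computing every dummy quantity in one vectorized expression, and assigning the results back to the agents. A rough sketch with made-up numbers (the array names and the write-back step are assumptions, not code from the model):

```python
import numpy as np

ALPHA = 0.5

# One entry per firm (illustrative values)
balances = np.array([120.0, 300.0, 80.0])      # each firm's total_balance
qualifications = np.array([10.0, 25.0, 5.0])   # each firm's get_sum_qualification()

# Cobb-Douglas for all firms at once, replacing a Python-level loop per firm
dummy_quantities = balances ** ALPHA * qualifications ** (1 - ALPHA)

# Then write the values back to the agents, e.g.:
# for firm, q in zip(firms, dummy_quantities):
#     firm.apply_dummy_quantity(q)
```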

However, a slow-down from a few seconds to 3 minutes seems extreme for the difference in calculation. Is the model behaving the same way with the C-D function, or is that driving changes in the model dynamics which are the real reason for the slowdown? If the latter, then you might need to look elsewhere for the bottleneck to optimize.
