Trained Machine Learning model is too big


Problem Description


We have trained an Extra Trees model for a regression task. Our model consists of 3 Extra Trees ensembles, each having 200 trees of depth 30. On top of the 3 ensembles, we use a ridge regression. We train our model for several hours and pickle the trained model (the entire class object) for later use. However, the size of the saved trained model is too big, about 140 GB! Is there a way to reduce the size of the saved model? Are there any configurations in pickle that could be helpful, or any alternative to pickle?
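
For context, the kind of pickle configuration and pickle alternative being asked about usually looks like the sketch below: gzip-compressed pickling with the highest protocol, and joblib's built-in compression. This is only an illustration under assumed names; the tiny toy ensemble and file names stand in for the real 140 GB model.

```python
import gzip
import pickle

import joblib
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Toy stand-in for the real model (the question's ensembles use 200 trees of depth 30).
X = np.random.rand(1000, 10)
y = np.random.rand(1000)
model = ExtraTreesRegressor(n_estimators=10, max_depth=5).fit(X, y)

# Option 1: plain pickle with the newest protocol, gzip-compressed on top.
with gzip.open("model.pkl.gz", "wb") as f:
    pickle.dump(model, f, protocol=pickle.HIGHEST_PROTOCOL)

# Option 2: joblib with compression; higher `compress` (0-9) gives smaller
# files at the cost of slower dump/load.
joblib.dump(model, "model.joblib", compress=3)
```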

Recommended Answer


In the best case (binary trees), you will have 3 * 200 * (2^30 - 1) = 644,245,093,800 nodes, or roughly 600 GiB even assuming each node costs only 1 byte to store. I think 140 GB is a pretty decent size in comparison.
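
The arithmetic behind that bound can be reproduced in a few lines of Python:

```python
# Back-of-the-envelope check of the answer's upper bound:
# 3 Extra Trees ensembles x 200 trees each, fully grown binary trees of depth 30.
n_ensembles = 3
n_trees_per_ensemble = 200
depth = 30

nodes_per_tree = 2 ** depth - 1  # complete binary tree with 30 levels
total_nodes = n_ensembles * n_trees_per_ensemble * nodes_per_tree
print(f"total nodes: {total_nodes:,}")                        # 644,245,093,800

# Even at an (unrealistically low) 1 byte per node:
print(f"size at 1 byte/node: {total_nodes / 2**30:.0f} GiB")  # ~600 GiB
```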

