Train multiple different sklearn models in parallel


Problem Description

Is it possible to train in parallel multiple different sklearn models?

For example, I'd like to train one SVM, one RandomForest and one Linear Regression model at the same time. The desired output would be a list of objects returned by the .fit method.

Answer

Is it possible to train in parallel multiple different sklearn models?

Training multiple models?
YES.


Training multiple models in a true-[PARALLEL] scheduling fashion?
NO.


Training one particular model, using some sort of low-level, fine-grain ( if not directly silicon-wired ) vectorisation / ILP-parallelism, with improved temporal-locality and cache-coherence effects?
YES,
already deployed, if resources and low-level code permit, yet these levels are principally constrained by the low ratio of work-package payload v/s overheads - ref. the re-formulated Amdahl's Law, so as to respect both the overheads and resources ( on the lower end of the time-scale ) and the indivisible atomicity of some sorts of processing-sprint(s) ( on the upper end of the time-scale .. exactly due to the indivisible implementation of the atomic processing-segments so common in sklearn ML-processing pipelines ).
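As an illustration of the per-model parallelism mentioned above, many sklearn estimators expose an `n_jobs` parameter that parallelises work inside a single `.fit` call — a minimal sketch, not part of the original answer (assumes scikit-learn is installed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# n_jobs=-1 lets the forest build its trees on all available cores;
# the parallelism here lives entirely inside this single .fit call
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X, y)
```

For tiny datasets like this one, the per-worker overheads the answer describes can easily outweigh the gains, which is exactly the Amdahl's-Law point being made.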

Training different models in a "just"-[CONCURRENT] scheduling fashion?
YES,
using a smart …
