Pandas on OpenShift v3


Problem Description

Now that OpenShift Online V2 has announced its end of service, I am looking to migrate my Python application to OpenShift Online V3, aka OpenShift NextGen. Pandas is a requirement (and listed in requirements.txt).
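
For reference, the packages that later show up in the build log would be listed along these lines in requirements.txt (the version pins here are illustrative, not taken from the original app):

# C-extension packages that get compiled during the S2I build
numpy==1.13.1
Bottleneck==1.2.1
numexpr==2.6.2
pandas==0.20.3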

It was already non-trivial to get pandas installed in v2, but V3 does not allow manual interaction in the build process (or does it?).

When I try to build my app, the build process stops after an hour. pip has downloaded and installed the contents of requirements.txt and is running setup.py for selected packages. The end of the log file is:

Running setup.py install for numpy
Running setup.py install for Bottleneck
Running setup.py install for numexpr
Running setup.py install for pandas

Then the process stops without any error message.

Does anyone have a clue how to build Python applications that require pandas on OpenShift V3?

Recommended Answer

It will be one of two things.

Either compiling pandas is a huge memory hog, possibly caused by the compiler hitting some pathological case, or the size of the generated image at that point exceeds an internal limit and the build runs out of its allocated disk space.

If it was memory, you would need to increase the memory allocated to the build pod. By default in Online this is 512Mi.

To increase the limit you will need to edit the YAML/JSON for the build configuration from the web console, or from the command line using oc edit.
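
From the command line that would be, for example (the build configuration name osv3test is only illustrative):

$ oc edit bc/osv3test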

For YAML, you need to add the following:

  resources:
    limits:
      memory: 1Gi
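
For context, a minimal sketch of where that block lives in the BuildConfig (the name is again illustrative; only the resources section is the part being added):

apiVersion: v1            # build.openshift.io/v1 on newer clusters
kind: BuildConfig
metadata:
  name: osv3test
spec:
  # ... source, strategy, output, triggers ...
  resources:
    limits:
      memory: 1Gi         # raise the build pod's memory limit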

This sets the field:

$ oc explain bc.spec.resources.limits
FIELD: limits <object>

DESCRIPTION:
     Limits describes the maximum amount of compute resources allowed. More
     info: http://kubernetes.io/docs/user-guide/compute-resources/

The maximum is 1Gi. Increasing the limit to this value appears to allow the build to complete, whereas increasing it to 768Mi wasn't enough.

Do be aware that this takes memory away from the compute-resources-timebound quota while the build is running, and since the build uses all of it, other things you try to do at the same time could be held up.
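
A quick way to see what the build is consuming against that quota (compute-resources-timebound is the quota name used in Online, as mentioned above):

$ oc get quota
$ oc describe quota compute-resources-timebound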

FWIW, a local build (not in Online) only produced an image of this size:

172.30.1.1:5000/mysite/osv3test              latest               f323d9b036f6        About an hour ago   910MB

So unless the intermediate space used before things were cleaned up was a problem, the image size itself isn't the issue.

So increasing memory used for the build appears to be the answer.
