Timeout error: Error with 400 StatusCode: "requirement failed: Session isn't active."
Problem Description
I'm using a Zeppelin v0.7.3 notebook to run PySpark scripts. In one paragraph, I am running a script to write data from a dataframe to a parquet file in a Blob folder. The file is partitioned per country. The dataframe has 99,452,829 rows. When the script reaches the 1-hour mark, an error is encountered:
Error with 400 StatusCode: "requirement failed: Session isn't active."
My default interpreter for the notebook is jdbc. I have read about TimeoutLifecycleManager and added zeppelin.interpreter.lifecyclemanager.timeout.threshold to the interpreter settings, setting it to 7200000, but I still encountered the error after the script reached 1 hour of runtime, at 33% processing completion.
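For reference, Zeppelin's timeout-based lifecycle manager is usually configured as two related properties, not just the threshold; a sketch of the relevant entries (the threshold value is from the question; the class name follows Zeppelin's TimeoutLifecycleManager documentation, so verify it against your version — this manager may not be available at all in 0.7.x):

```
# In the interpreter settings or zeppelin-site.xml; both properties are
# typically needed for the timeout threshold to have any effect.
zeppelin.interpreter.lifecyclemanager.class = org.apache.zeppelin.interpreter.lifecycle.TimeoutLifecycleManager

# Idle threshold in milliseconds (7200000 ms = 2 hours, as in the question).
zeppelin.interpreter.lifecyclemanager.timeout.threshold = 7200000
```

Note that this setting governs idle Zeppelin interpreter processes, which is a different layer from the Livy session timeout discussed in the answer below.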
I checked the Blob folder after the 1-hour timeout, and parquet files were successfully written to Blob, indeed partitioned per country.
The script I am running to write the DF to parquet in Blob is below:
trdpn_cntry_fct_denom_df.write.format("parquet").partitionBy("CNTRY_ID").mode("overwrite").save("wasbs://tradepanelpoc@blobasbackupx2066561.blob.core.windows.net/cbls/hdi/trdpn_cntry_fct_denom_df.parquet")
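As background on what partitionBy produces: Spark writes one subdirectory per distinct key value (e.g. CNTRY_ID=US/part-...), which is why the Blob folder shows per-country folders even after the timeout. A minimal stdlib-only sketch of that directory layout, using made-up sample rows and CSV part files instead of parquet (no Spark dependency; all names here are illustrative):

```python
import csv
import os
import tempfile
from collections import defaultdict

# Hypothetical sample rows standing in for the dataframe in the question.
rows = [
    {"CNTRY_ID": "US", "value": 1},
    {"CNTRY_ID": "PH", "value": 2},
    {"CNTRY_ID": "US", "value": 3},
]

def write_partitioned(rows, base_dir, key="CNTRY_ID"):
    """Mimic Spark's partitionBy layout: one <key>=<value> subdirectory
    per distinct key value, each holding a part file."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    for key_value, group in groups.items():
        part_dir = os.path.join(base_dir, f"{key}={key_value}")
        os.makedirs(part_dir, exist_ok=True)
        path = os.path.join(part_dir, "part-00000.csv")
        with open(path, "w", newline="") as f:
            # As Spark does, the partition column is dropped from the file
            # itself -- its value is encoded in the directory name.
            writer = csv.DictWriter(f, fieldnames=["value"])
            writer.writeheader()
            for row in group:
                writer.writerow({"value": row["value"]})
    return sorted(os.listdir(base_dir))

base = tempfile.mkdtemp()
print(write_partitioned(rows, base))  # ['CNTRY_ID=PH', 'CNTRY_ID=US']
```

Because each partition directory is finalized as its data is written, partial output can land in Blob even when the driving session later dies, which matches what was observed here.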
Is this a Zeppelin timeout issue? How can the runtime be extended beyond 1 hour? Thanks for the help.
Recommended Answer
Judging by the output, if your application is not finishing with a FAILED status, this sounds like a Livy timeout error: your application is likely taking longer than the defined timeout for a Livy session (which defaults to 1 hour), so even though the Spark application succeeds, your notebook will receive this error if the application outlives the Livy session's timeout.
If that's the case, here's how to address it:
1. Edit the /etc/livy/conf/livy.conf file (on the cluster's master node)
2. Set livy.server.session.timeout to a higher value, such as 8h (or larger, depending on your app)
3. Restart Livy to pick up the setting: sudo restart livy-server on the cluster's master node
4. Test your code again
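The steps above amount to a single setting change; a sketch of the edited livy.conf entry (the 8h value is the example from the answer, and Livy accepts duration suffixes such as m and h):

```
# /etc/livy/conf/livy.conf (on the cluster's master node)
# Sessions that run longer than this are torn down; the default is 1h,
# which matches the failure at the 1-hour mark described in the question.
livy.server.session.timeout = 8h
```

After saving the file, restart the Livy server as in step 3 so the new timeout takes effect for subsequently created sessions.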