Why does the Batch scope behave strangely when trying to load huge records - Mule ESB


Question


I'm facing issues in the Process Records phase of a batch job; kindly suggest. I'm trying to load a file of some KB (which has about 5000 records). For the success scenario it works. But if an error happens in the input phase on the first hit and the flow stops, then the second time it tries to process the same records, Mule stops executing in the Process Records step: it does not run after the loading phase. Please find the runtime logs below:

11:55:33  INFO  info.org.mule.module.logging.DispatchingLogger - Starting loading phase for   instance 'ae67601a-5fbe-11e4-bc4d-f0def1ed6871' of job 'test'
11:55:33  INFO  info.org.mule.module.logging.DispatchingLogger - Finished loading phase for instance ae67601a-5fbe-11e4-bc4d-f0def1ed6871 of job order. 5000 records were loaded
11:55:33  INFO  info.org.mule.module.logging.DispatchingLogger - Started execution of instance 'ae67601a-5fbe-11e4-bc4d-f0def1ed6871' of job 'test'


It stops processing after the instance starts; I'm not sure what is happening here. When I stop the flow and delete the .mule folder from the workspace, it then works. I suspect the temporary queue Mule uses in the loading phase is not being deleted automatically when an exception happens in the input phase, but I'm not sure this is the real cause.


I can't go and delete the .mule folder each time in a real environment.


Could anyone please suggest what causes this strange behavior here, and how I can get rid of this issue? Please find the config XML below:

  <batch:job name="test">
    <batch:threading-profile poolExhaustedAction="WAIT"/>
    <batch:input>

        <component class="com.ReadFile" doc:name="File Reader"/>
        <mulexml:jaxb-xml-to-object-transformer returnClass="com.dto" jaxbContext-ref="JAXB_Context" doc:name="XML to JAXB Object"/>
        <component class="com.Transformer" doc:name="Java"/>
    </batch:input>
    <batch:process-records>
        <batch:step name="Batch_Step" accept-policy="ALL">
            <batch:commit doc:name="Batch Commit" streaming="true">

                <logger message="************after Data mapper" level="INFO" doc:name="Logger"/>
                <data-mapper:transform config-ref="Orders_Pojo_To_XML"  stream="true" doc:name="Transform_CanonicalToHybris"/>
                <file:outbound-endpoint responseTimeout="10000" doc:name="File" path="#[sessionVars.uploadFilepath]"/>
            </batch:commit>
        </batch:step>
    </batch:process-records>
    <batch:on-complete>

       <set-payload value="BatchJobInstanceId: #[payload.batchJobInstanceId+'\n'], Number of TotalRecords: #[payload.totalRecords+'\n'], Number of loadedRecords: #[payload.loadedRecords+'\n'], ProcessedRecords: #[payload.processedRecords+'\n'], Number of successful Records: #[payload.successfulRecords+'\n'], Number of failed Records: #[payload.failedRecords+'\n'], ElapsedTime: #[payload.elapsedTimeInMillis+'\n'], InputPhaseException: #[payload.inputPhaseException+'\n'], LoadingPhaseException: #[payload.loadingPhaseException+'\n'], CompletePhaseException: #[payload.onCompletePhaseException+'\n'] " doc:name="Set Batch Result"/>

        <logger message="afterSetPayload: #[payload]" level="INFO" doc:name="Logger"/> 

        <flow-ref name="log" doc:name="Logger" />     

    </batch:on-complete>
  </batch:job>


I've been stuck with this behavior for quite a few days. Your help will be much appreciated. Version: 3.5.1. Thanks in advance.

Recommended Answer


Set max-failed-records to -1 so that the batch job will continue even when there is an exception: <batch:job name="test" max-failed-records="-1">
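Applied to the job from the question, the change is only to the opening tag; this is a minimal sketch, with the rest of the job definition left as posted above:

```xml
<!-- max-failed-records="-1" tells the batch job to keep processing records
     regardless of how many fail; the default of 0 stops the job instance on
     the first failed record, which can leave it stuck mid-execution. -->
<batch:job name="test" max-failed-records="-1">
    <batch:threading-profile poolExhaustedAction="WAIT"/>
    <!-- input, process-records and on-complete phases unchanged -->
</batch:job>
```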


In a real runtime environment you won't be in the situation of having to clean the .mule folder;


this happens only when you are working with Anypoint Studio.
