How To Improve Delete Timeout Issues In CRM 2011 On Prem Dev Environment?


    Background

    I have a unit test framework that creates entities for my unit tests, performs the tests, then automagically deletes the entities. It had been working fine, except that some entities take 15 - 30 seconds to delete in our dev environment.

    I recently received a VM setup in the Amazon Cloud to perform some long-term changes requiring a couple of release cycles to complete. When I run a unit test on the VM, I continually get SQL timeout errors when attempting to delete the entities.

    Steps

    I've gone down this set of discovery / action steps:

    1. Turned on tracing and saw that the timeout was occurring in fn_CollectForCascadeWrapper, which is used to handle cascading deletes. My unit test only has 6 entities in it, and they are deleted in such a way that no cascading deletes are needed. Ran the Estimated Execution Plan on it and added some of the indexes it requested. This still didn't fix the timeout issue.
    2. Turned on Resource Manager on the VM to look at disk access / memory / CPU. When I attempt a delete, the CPU hits 20% for 2 seconds, then drops down to near 0. Memory is unchanged, but disk read access goes crazy high and stays that way for 7-10 minutes.
    3. Hard-coded fn_CollectForCascadeWrapper to return a result meaning nothing needs to be cascaded for the 6 entities in my unit test. Ran the unit test and again got the SQL timeout error. According to the tracing, the actual delete statement was timing out:

    delete from [New_inquiryExtensionBase] where ([New_inquiryId] = '7e250a5f-890e-40ae-9d2d-c55bbd7250cd');
    delete from [New_inquiryBase]
    OUTPUT DELETED.[New_inquiryId], 10012
    into SubscriptionTrackingDeletedObject (ObjectId, ObjectTypeCode)
    where ([New_inquiryId] = '7e250a5f-890e-40ae-9d2d-c55bbd7250cd')
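The second statement's `OUTPUT ... INTO` clause is what writes the deleted id into the tracking table in the same statement as the delete. A minimal standalone illustration of that pattern, using made-up temp tables rather than the real CRM schema:

```sql
-- Minimal sketch of the DELETE ... OUTPUT ... INTO pattern CRM uses above:
-- rows removed from one table are recorded in another within one statement.
-- Table names and the 10012 type code are illustrative stand-ins.
CREATE TABLE #Inquiry (Id uniqueidentifier PRIMARY KEY);
CREATE TABLE #Deleted (ObjectId uniqueidentifier, ObjectTypeCode int);

INSERT INTO #Inquiry VALUES (NEWID());

DELETE FROM #Inquiry
OUTPUT DELETED.Id, 10012          -- 10012 = the entity's object type code
INTO   #Deleted (ObjectId, ObjectTypeCode);

SELECT * FROM #Deleted;           -- the deleted id with its type code
```

The practical consequence is that the delete's duration includes the insert into the tracking table, which is why a backlog there can slow deletes down.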
    

    4. Ran the query manually in SQL Management Studio. It took around 3 minutes to complete. There are no triggers on the tables, so I figured the time must be due to the insert. Looked at the SubscriptionTrackingDeletedObject table and noticed it had 2100 records in it. Deleted all records in the table and reran my unit test. It actually worked in the normal 15-30 second time frame for deletes.
    5. Researched and discovered what SubscriptionTrackingDeletedObject is used for, and that the Async Service cleans it up. Noticed that the Async Service was not running on the server. Turned the service on, waited 10 minutes, and queried the table again. My 6 entities were still listed there. Looked in the trace log and saw timeout errors: Error cleaning up Principal Object Access Table
    6. Researched POA and performed a SELECT COUNT(*) on the table; 7 minutes later it returned 261 million records! Researched how to clean up the table, and the only thing I found was for Update Rollup 6 (we're currently on 11).
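As an aside on the 7-minute count: for a table this size, a metadata lookup returns the row count almost instantly without scanning the table. This is a sketch using standard SQL Server catalog views, nothing CRM-specific; the schema-qualified name is assumed to be `dbo.PrincipalObjectAccess`:

```sql
-- Approximate row count from partition metadata instead of scanning
-- ~261M rows (a plain COUNT(*) on this table took ~7 minutes here).
SELECT SUM(p.rows) AS approx_rows
FROM   sys.partitions AS p
WHERE  p.object_id = OBJECT_ID('dbo.PrincipalObjectAccess')
  AND  p.index_id IN (0, 1);   -- heap (0) or clustered index (1) only
```

The count is maintained by the engine and can be slightly stale, but it is close enough to tell whether a table has thousands or hundreds of millions of rows.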

    What Next?

    Could the POA be affecting the delete? Or is it just that the POA is affecting the Async Service, which in turn affects the delete? Could inserting into the SubscriptionTrackingDeletedObject really be causing my problem?

    Solution

    I ended up turning on SQL Server Profiler and running the delete statement listed in my question. It took 3.5 minutes to execute. I was expecting it to kick something else off that hit the POA table, but nope, it was just deleting those records.

    I took a second look at the query execution plan and noticed there were lots of nested loops looking at the child tables that contain a reference to the record (the plan's tree view showed 13 tiny branches in the bottom right; screenshot not reproduced here). So all the reads were being performed on the indexes themselves, and they took forever to load on my uber-slow VM.

    I ended up running the same query for a different id, and it ran in 2 seconds. I then attempted my unit test, and finally it completed successfully.

    I'm guessing that each time I attempted a delete, a transaction was started, and then the timeout on CRM rolled back the transaction, never allowing the child entity indexes to load. So my current fix is to ensure the child indexes are loaded in memory before actually performing the delete. How I'm going to do that, I'm not sure (perform a query by id for each of the child entities?).
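The warm-up idea in that last sentence could be sketched roughly as below. The child table and column names here are hypothetical stand-ins; the real ones would be the 13 referencing tables visible in the execution plan:

```sql
-- Hypothetical warm-up: probe each child table's foreign-key index for the
-- record about to be deleted, so the relevant index pages are in the buffer
-- pool before CRM opens its timeout-bounded delete transaction.
DECLARE @id uniqueidentifier = '7e250a5f-890e-40ae-9d2d-c55bbd7250cd';

SELECT COUNT(*) FROM [New_childABase] WHERE [New_inquiryId] = @id;  -- assumed child table
SELECT COUNT(*) FROM [New_childBBase] WHERE [New_inquiryId] = @id;  -- assumed child table
-- ...one probe per referencing table seen in the plan.
```

Because these probes run outside CRM's delete transaction, they are not subject to its timeout, and the subsequent delete then reads warm pages instead of cold disk.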

    Edit

    We had a performance analyst from Microsoft come out, and they wrote up a report that was over 200 pages long. 98% of it said the POA table was too large. Over Christmas we ended up turning off CRM and running some scripts to clean up the POA table. This has been extremely helpful.
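I can't share Microsoft's scripts, but the general shape of an offline batched cleanup is the pattern below. The WHERE predicate is only a placeholder: the real criteria for removable POA rows came from Microsoft, and direct edits to CRM tables are unsupported, so treat this strictly as a sketch of the batching technique:

```sql
-- Batched-delete pattern for very large tables: chunked deletes keep lock
-- duration and transaction-log growth bounded. Run only with CRM offline.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000)
    FROM  dbo.PrincipalObjectAccess
    WHERE AccessRightsMask = 0
      AND InheritedAccessRightsMask = 0;  -- placeholder predicate only
    SET @rows = @@ROWCOUNT;
END
```

Deleting 261 million rows in one statement would hold locks and grow the log for hours; the loop trades that for many short transactions that the server can absorb.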
