Message: Index was outside the bounds of the array

This article covers how to handle the error "Message: Index was outside the bounds of the array" and should be a useful reference for anyone running into the same problem.

Problem description

I got an error when getting users from AAD.

"消息:索引在数组的边界之外.
 内部例外: 
  Stacktrace:   在System.Array.Clear(数组数组,Int32索引,Int32长度)上
  在System.Collections.Generic.List`1.Clear()
  在System.Data.Services.Client.AtomMaterializerLog.MergeEntityDescriptorInfo(EntityDescriptor trackedEntityDescriptor,EntityDescriptorEntityDescriptorFromMaterializer,布尔值mergeInfo,MergeOption mergeOption)处
  在System.Data.Services.Client.AtomMaterializerLog.ApplyToContext()
  在System.Data.Services.Client.MaterializeAtom.MoveNextInternal()
  在System.Data.Services.Client.MaterializeAtom.MoveNext()
  在System.Linq.Enumerable上< CastIterator> d__94`1.MoveNext()
  在System.Collections.Generic.List`1..ctor(IEnumerable`1集合)中
  在System.Linq.Enumerable.ToList [TSource](IEnumerable`1源)
  在Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.PagedCollection`2..ctor(DataServiceContextWrapper上下文,QueryOperationResponse`1 qor)中
  在Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.DataServiceContextWrapper中.<> c__DisplayClass4b`2.< ExecuteAsync> b__49(IAsyncResult r)
  在System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar,Func` 2 endFunction,Action`1 endAction,Task`1 promise,Boolean requireSynchronization)
---从之前引发异常的位置开始的堆栈跟踪---
  在System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(任务任务)上
  在System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(任务任务)上
  在Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.DataServiceContextWrapper中.< ExecuteAsync> d__4d`2.MoveNext()
---从之前引发异常的位置开始的堆栈跟踪---
  在System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(任务任务)上
  在System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(任务任务)上
  在Microsoft.Azure.ActiveDirectory.GraphClient.DirectoryObjectCollection中.<< ExecuteAsync> b__2> d__3.MoveNext()
---从之前引发异常的位置开始的堆栈跟踪---
  在System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(任务任务)上
  在System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(任务任务)上

"Message: Index was outside the bounds of the array.
 Inner Exception: 
 Stacktrace:    at System.Array.Clear(Array array, Int32 index, Int32 length)
   at System.Collections.Generic.List`1.Clear()
   at System.Data.Services.Client.AtomMaterializerLog.MergeEntityDescriptorInfo(EntityDescriptor trackedEntityDescriptor, EntityDescriptor entityDescriptorFromMaterializer, Boolean mergeInfo, MergeOption mergeOption)
   at System.Data.Services.Client.AtomMaterializerLog.ApplyToContext()
   at System.Data.Services.Client.MaterializeAtom.MoveNextInternal()
   at System.Data.Services.Client.MaterializeAtom.MoveNext()
   at System.Linq.Enumerable.<CastIterator>d__94`1.MoveNext()
   at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
   at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
   at Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.PagedCollection`2..ctor(DataServiceContextWrapper context, QueryOperationResponse`1 qor)
   at Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.DataServiceContextWrapper.<>c__DisplayClass4b`2.<ExecuteAsync>b__49(IAsyncResult r)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Azure.ActiveDirectory.GraphClient.Extensions.DataServiceContextWrapper.<ExecuteAsync>d__4d`2.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Azure.ActiveDirectory.GraphClient.DirectoryObjectCollection.<<ExecuteAsync>b__2>d__3.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
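
For context, the frames in this stack trace correspond to materializing a paged user query with the Azure AD Graph Client library. The sketch below is a minimal illustration of that kind of call, not the poster's actual code; the tenant ID and the GetAccessTokenAsync helper are assumptions.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ActiveDirectory.GraphClient;
using Microsoft.Azure.ActiveDirectory.GraphClient.Extensions;

class GraphUserLookup
{
    // Hypothetical tenant; the original post does not show the tenant or token logic.
    const string TenantId = "contoso.onmicrosoft.com";

    static async Task ListUsersAsync()
    {
        var serviceRoot = new Uri(new Uri("https://graph.windows.net"), TenantId);

        // GetAccessTokenAsync stands in for whatever ADAL-based token
        // acquisition the application already performs.
        var client = new ActiveDirectoryClient(serviceRoot, GetAccessTokenAsync);

        // ExecuteAsync materializes the first page of users; the frames in the
        // stack trace above (PagedCollection`2..ctor, DataServiceContextWrapper.ExecuteAsync)
        // sit underneath this call.
        IPagedCollection<IUser> users = await client.Users.ExecuteAsync();

        foreach (IUser user in users.CurrentPage)
        {
            Console.WriteLine(user.DisplayName);
        }
    }

    static Task<string> GetAccessTokenAsync()
    {
        // Placeholder: acquire a token for https://graph.windows.net here.
        throw new NotImplementedException();
    }
}
```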

It occurred after the Japan East incident the previous day.

"从2017年3月8日UTC开始,在日本东部的使用虚拟机,HD Insight,Redis缓存或App Service \ Web Apps的部分客户可能会遇到连接到该区域托管的资源的困难.工程师们已经确定 这是由目前正在调查的基础存储事件引起的.利用此区域中的存储的其他服务可能正在受到与此相关的影响,并且其他服务将在Azure状态运行状况仪表板上列出. 工程师已意识到此问题,并正在积极调查.下次更新将在60分钟内提供,或者视情况而定"

"Starting at 12:42 UTC on 08 Mar 2017, a subset of customers using Virtual Machines, HD Insight, Redis Cache or App Service \ Web Apps in Japan East may experience difficulties connecting to resources hosted in this region. Engineers have determined that this is caused by an underlying Storage incident which is currently under investigation. Other services that leverage Storage in this region may be experiencing impact related to this and additional services will be listed on the Azure Status Health Dashboard. Engineers are aware of this issue and are actively investigating. The next update will be provided in 60 minutes, or as events warrant"

Best regards,

Recommended answer

The impact of the Japan East - Storage incident has been mitigated. Do let us know if you are still seeing the error.

You can find the RCA for the impact on the Azure Status History page.

RCA - Storage - Japan East:

Summary of impact: Between 12:40 and 14:38 UTC on 08 Mar 2017, a subset of customers using Storage in Japan East may have experienced difficulties connecting to resources hosted in this region. Azure services built on our Storage service in this region also experienced impact including: App Service \ Web Apps, Site Recovery, Virtual Machines, Redis Cache, Data Movement, StorSimple, Logic Apps, Media Services, Key Vault, HDInsight, SQL Database, Automation, Stream Analytics, Backup, IoT Hub, and Cloud Services. The issue was detected by our monitoring and alerting systems that check the continuous health of the Storage service. The alerting triggered our engineering response and recovery actions were taken which allowed the Stream Manager process in the Storage service to begin processing requests and recover the service health. All Azure services built on our Storage service also recovered once the Storage service was recovered.

Workaround: SQL database customers who had SQL Database configured with active geo-replication could have reduced downtime by performing failover to geo-secondary. This would have caused a loss of less than 5 seconds of transactions. All customers could perform a geo-restore, with loss of less than 5 minutes of transactions. Please visit https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity for more information on these capabilities.
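
As a rough sketch of that workaround (it is not part of the original answer), a forced failover can be triggered by running ALTER DATABASE ... FORCE_FAILOVER_ALLOW_DATA_LOSS in the master database of the geo-secondary server; the server, database, and credential values below are hypothetical.

```csharp
using System.Data.SqlClient;

class GeoFailoverSketch
{
    static void ForceFailoverToSecondary()
    {
        // Hypothetical server/database names and credentials; the connection must
        // target the master database of the geo-secondary server.
        const string secondaryMaster =
            "Server=tcp:contoso-secondary.database.windows.net,1433;" +
            "Database=master;User ID=serveradmin;Password=...;Encrypt=True;";

        using (var connection = new SqlConnection(secondaryMaster))
        {
            connection.Open();

            // Forced failover promotes the geo-secondary to primary even if the old
            // primary is unreachable; per the RCA above, this can lose the last few
            // seconds of transactions.
            using (var command = new SqlCommand(
                "ALTER DATABASE [ContosoDb] FORCE_FAILOVER_ALLOW_DATA_LOSS;", connection))
            {
                command.CommandTimeout = 300; // failover can take a few minutes
                command.ExecuteNonQuery();
            }
        }
    }
}
```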

Root cause and mitigation: On a Storage scale unit in Japan East, the Stream Manager that is the backend component that manages data placement in the Storage service entered a rare unhealthy state, which caused a failure in processing requests. This resulted in requests to Storage service failing for the above period of time. The Stream Manager has protections to help it self-recover from such states (including auto-failover), however, a bug caused the automatic self-healing to be unsuccessful.

Next steps: We sincerely apologize for the impact to affected customers. We are continuously taking steps to improve the Microsoft Azure Platform and our processes, to help ensure such incidents do not occur in the future. In this case, it includes (but is not limited to):

- The bugfix for the self-healing mechanism will be rolled out as a hotfix across Storage scale units.
- Implement secondary service healing mechanism, designed to auto-recover from unhealthy state, as well as additional monitoring for this failure scenario.

Provide feedback: Please help us improve the Azure customer communications experience by taking our survey https://survey.microsoft.com/313074


That concludes this article on "Message: Index was outside the bounds of the array". We hope the recommended answer is helpful, and thank you for supporting IT屋!
