Dynamically add nodes to cache cluster to increase cache size?


Problem Description


Hi,

I am doing a test to prove that AppFabric cache capacity can be increased by adding nodes to the cache cluster. Initially I have one node in my cluster. I created a named cache, say Catalog, on node 1, which is the LeadHost by default (XML config). I created this cache with the flags -Eviction None -TimeToLive 1440 because I do not want the eviction process or GC collection. I kept storing data (Put() in a while loop) into the AppFabric cache. When it reached the maximum limit (my physical memory is low), the cache host (server) entered the throttled state as I expected, and further Put() calls failed because of server throttling. So far so good.

Then I installed a new node in AppFabric (install wizard, join cluster, point to the same share path of the XML, etc.). From a PowerShell prompt I started that cache host with the Start-CacheHost command and verified that this time I see 2 nodes up and running. After that, I used the Get-CacheClusterHealth command to check the redistribution of my named cache (which had earlier filled up and hit the throttling error). The named cache, Catalog, was not redistributed across the 2 nodes; it was still listed under node1 only. To verify and confirm this, I started storing data into the cache cluster once again, expecting that this time the Put() calls would succeed, since I had added node2 for extra cache memory. But they failed for the same reason as before, "...put failed... server entered in throttled state", even after I dynamically added node2.

However, when I restarted the cache cluster with the Restart-CacheCluster PowerShell cmdlet, it brought the cluster back up with 2 nodes running, and this time I can see the named cache Catalog redistributed across the nodes of my AppFabric cache cluster. The problem is that I don't want to use Restart-CacheCluster for a periodic maintenance task like adding more servers (as in this scenario), because the Restart-CacheCluster command cleared my named cache and I lost the objects previously stored on node1.

Would this kind of scenario simply never happen in real-life situations, or am I testing something incorrectly, or is there a better way to dynamically add cluster nodes without bringing the whole cluster down and losing the data on the existing nodes? Please advise, it would be of great help to me.
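For reference, here is roughly the admin-side sequence I ran from the Caching Administration PowerShell console. This is only a sketch of my test setup; the host name "CacheHost2", the default cache port 22233, and the timing of the health checks are placeholders rather than an exact transcript:

Import-Module DistributedCacheAdministration
Use-CacheCluster                        # use the cluster configuration registered on this machine

# Named cache created while the cluster still had a single node:
# no eviction, items live for 1440 minutes.
New-Cache -CacheName Catalog -Eviction None -TimeToLive 1440

# After installing node2 and joining it to the cluster through the
# configuration wizard, bring the new cache host online and inspect the cluster.
Start-CacheHost -HostName "CacheHost2" -CachePort 22233
Get-CacheHost                           # both hosts report UP
Get-CacheClusterHealth                  # Catalog is still listed under node1 only

# Only a full restart makes the cluster spread Catalog across both nodes,
# but it also clears the cached data.
Restart-CacheCluster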

Thanks,

Syed.


MSAH

Solution

Hi,

I think it is not possible to update the cache cluster settings without bringing the cluster down. It looks like the updated cache cluster settings only take effect after a stop and restart of the cluster. That's why, after running the Restart-CacheCluster command in PowerShell, the cluster came back up with 2 nodes and the named cache, Catalog, listed across them. So this kind of addition or deletion of nodes requires some downtime for the cache cluster. It can be done either as a sequence of Export-CacheClusterConfig, editing the cluster configuration (XML) file on the network share, Stop-CacheCluster, Import-CacheClusterConfig, and Start-CacheCluster, or with a single Restart-CacheCluster cmdlet. So there is no way to keep the existing data, because of the temporary, in-memory nature of the data. I just wanted to share my understanding after going through the AppFabric documentation related to this issue/concern. I am not sure whether this behavior is the same or different for LeadHost versus non-LeadHost configurations, though.
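To illustrate, here is a rough sketch of the two approaches as I understand them from the documentation; the exported file path is just a placeholder:

Import-Module DistributedCacheAdministration
Use-CacheCluster

# Option 1: export the configuration, edit it, and re-import it.
# The cluster has to be stopped before the import, so the cached data is lost.
Export-CacheClusterConfig "\\fileshare\AppFabric\ClusterConfig.xml"
# ... edit the hosts/caches sections of the exported XML ...
Stop-CacheCluster
Import-CacheClusterConfig "\\fileshare\AppFabric\ClusterConfig.xml"
Start-CacheCluster

# Option 2: a single restart, which also re-reads the shared cluster configuration.
Restart-CacheCluster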

Thanks,

Syed.

