Load all implementation for Hazelcast


Problem Description


I am trying to use a Hazelcast server over multiple nodes. I have implemented load all in the map store implementation. I am wondering whether this should be enabled on only one server node or on all of them. If I deploy the same implementation on all nodes, would this not create database read operations that should not be needed? If I need to deploy the load all on only one node, what is the best strategy (code/API-call based, or config) that would allow me to cleanly implement a scenario whereby only one server node runs the load-all implementation for the map store? I can always deploy different code on different servers, but I would like to avoid that and am wondering about better choices.

Recommended Answer


Every node needs to have the same configuration/jars etc.
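In practice this means deploying an identical declarative configuration to every member. A minimal sketch of such a `hazelcast.xml` fragment, assuming a hypothetical map name `products` and a hypothetical loader class `com.example.ProductMapLoader`:

```xml
<hazelcast xmlns="http://www.hazelcast.com/schema/config">
    <map name="products">
        <!-- The same map-store config and the loader jar go to every member;
             Hazelcast itself decides which member invokes loadAllKeys. -->
        <map-store enabled="true">
            <class-name>com.example.ProductMapLoader</class-name>
        </map-store>
    </map>
</hazelcast>
```

So there is no need for per-node code differences: the cluster coordinates the load-all work even though each member carries the same configuration.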


Currently, MapLoader.loadAllKeys is executed on one of the nodes in the cluster. Once the keys are loaded, they are assigned to the owning partitions, where the actual data is loaded using the MapLoader.loadAll(keys) method.
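The two-phase flow above can be sketched as follows. This is a minimal, self-contained illustration, not the real Hazelcast API: in a real deployment the class would implement `com.hazelcast.map.MapLoader<Long, String>` and query a database, whereas here an in-memory map stands in for the database and the key/value types and sample data are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ProductLoader {
    // Hypothetical in-memory stand-in for a database table.
    private static final Map<Long, String> DATABASE = new HashMap<>();
    static {
        DATABASE.put(1L, "keyboard");
        DATABASE.put(2L, "mouse");
    }

    // Phase 1: invoked on a single member only; returns every key to preload.
    public Set<Long> loadAllKeys() {
        return DATABASE.keySet();
    }

    // Phase 2: invoked on the member owning each partition, with only the
    // subset of keys that belong to that partition.
    public Map<Long, String> loadAll(Set<Long> keys) {
        Map<Long, String> result = new HashMap<>();
        for (Long key : keys) {
            result.put(key, DATABASE.get(key));
        }
        return result;
    }
}
```

Because `loadAllKeys` runs on only one member, deploying the same loader everywhere does not multiply the key-enumeration query; the per-partition `loadAll` calls each read a disjoint slice of the data.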


Do you think sharing the same configuration/jars is a problem?
