Wrong balance between Aerospike instances in cluster


Question



I have an application with a high load for batch read operations. My Aerospike cluster (v 3.7.2) has 14 servers, each one with 7GB RAM and 2 CPUs in Google Cloud.

By looking at Google Cloud Monitoring Graphs, I noticed a very unbalanced load between servers: some servers have almost 100% CPU load, while others have less than 50% (image below). Even after hours of operation, the cluster unbalanced pattern doesn't change.

<img src="https://i.stack.imgur.com/JVRna.png" alt="Aerospike instances monitoring graphs">

Is there any configuration that I could change to make this cluster more homogeneous? How to optimize node balancing?

Edit 1

All servers in the cluster have the same aerospike.conf file:

# Aerospike database configuration file.

service {
    user root
    group root
    paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
        paxos-recovery-policy auto-reset-master
    pidfile /var/run/aerospike/asd.pid
    service-threads 32
    transaction-queues 32
    transaction-threads-per-queue 32
        batch-index-threads 32
    proto-fd-max 15000
        batch-max-requests 200000
}

logging {
    # Log file must be an absolute path.
    file /var/log/aerospike/aerospike.log {
        context any info
    }
}

network {
    service {
        #address any
        port 3000
    }

    heartbeat {
                mode mesh
                mesh-seed-address-port 10.240.0.6 3002
                mesh-seed-address-port 10.240.0.5 3002
                port 3002

        interval 150
        timeout 20
    }

    fabric {
        port 3001
    }

    info {
        port 3003
    }
}

namespace test {
    replication-factor 3
    memory-size 5G
    default-ttl 0 # 30 days, use 0 to never expire/evict.
        ldt-enabled true

    storage-engine device {
          file /data/aerospike.dat
          write-block-size 1M
          filesize 180G
        }
}

Edit 2:

$ asinfo
1 :  node
     BB90600F00A0142
2 :  statistics
     cluster_size=14;cluster_key=E3C3672DCDD7F51;cluster_integrity=true;objects=3739898;sub-records=0;total-bytes-disk=193273528320;used-bytes-disk=26018492544;free-pct-disk=86;total-bytes-memory=5368709120;used-bytes-memory=239353472;data-used-bytes-memory=0;index-used-bytes-memory=239353472;sindex-used-bytes-memory=0;free-pct-memory=95;stat_read_reqs=2881465329;stat_read_reqs_xdr=0;stat_read_success=2878457632;stat_read_errs_notfound=3007093;stat_read_errs_other=0;stat_write_reqs=551398;stat_write_reqs_xdr=0;stat_write_success=549522;stat_write_errs=90;stat_xdr_pipe_writes=0;stat_xdr_pipe_miss=0;stat_delete_success=4;stat_rw_timeout=1862;udf_read_reqs=0;udf_read_success=0;udf_read_errs_other=0;udf_write_reqs=0;udf_write_success=0;udf_write_err_others=0;udf_delete_reqs=0;udf_delete_success=0;udf_delete_err_others=0;udf_lua_errs=0;udf_scan_rec_reqs=0;udf_query_rec_reqs=0;udf_replica_writes=0;stat_proxy_reqs=7021;stat_proxy_reqs_xdr=0;stat_proxy_success=2121;stat_proxy_errs=4739;stat_ldt_proxy=0;stat_cluster_key_err_ack_dup_trans_reenqueue=607;stat_expired_objects=0;stat_evicted_objects=0;stat_deleted_set_objects=0;stat_evicted_objects_time=0;stat_zero_bin_records=0;stat_nsup_deletes_not_shipped=0;stat_compressed_pkts_received=0;err_tsvc_requests=110;err_tsvc_requests_timeout=0;err_out_of_space=0;err_duplicate_proxy_request=0;err_rw_request_not_found=17;err_rw_pending_limit=19;err_rw_cant_put_unique=0;geo_region_query_count=0;geo_region_query_cells=0;geo_region_query_points=0;geo_region_query_falsepos=0;fabric_msgs_sent=58002818;fabric_msgs_rcvd=57998870;paxos_principal=BB92B00F00A0142;migrate_msgs_sent=55749290;migrate_msgs_recv=55759692;migrate_progress_send=0;migrate_progress_recv=0;migrate_num_incoming_accepted=7228;migrate_num_incoming_refused=0;queue=0;transactions=101978550;reaped_fds=6;scans_active=0;basic_scans_succeeded=0;basic_scans_failed=0;aggr_scans_succeeded=0;aggr_scans_failed=0;udf_bg_scans_succeeded=0;udf_bg_scans_failed=0;batch_index_initiate=40457778;batch_index_queue=0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0,0:0;batch_index_complete=40456708;batch_index_timeout=1037;batch_index_errors=33;batch_index_unused_buffers=256;batch_index_huge_buffers=217168717;batch_index_created_buffers=217583519;batch_index_destroyed_buffers=217583263;batch_initiate=0;batch_queue=0;batch_tree_count=0;batch_timeout=0;batch_errors=0;info_queue=0;delete_queue=0;proxy_in_progress=0;proxy_initiate=7021;proxy_action=5519;proxy_retry=0;proxy_retry_q_full=0;proxy_unproxy=0;proxy_retry_same_dest=0;proxy_retry_new_dest=0;write_master=551089;write_prole=1055431;read_dup_prole=14232;rw_err_dup_internal=0;rw_err_dup_cluster_key=1814;rw_err_dup_send=0;rw_err_write_internal=0;rw_err_write_cluster_key=0;rw_err_write_send=0;rw_err_ack_internal=0;rw_err_ack_nomatch=1767;rw_err_ack_badnode=0;client_connections=366;waiting_transactions=0;tree_count=0;record_refs=3739898;record_locks=0;migrate_tx_objs=0;migrate_rx_objs=0;ongoing_write_reqs=0;err_storage_queue_full=0;partition_actual=296;partition_replica=572;partition_desync=0;partition_absent=3228;partition_zombie=0;partition_object_count=3739898;partition_ref_count=4096;system_free_mem_pct=61;sindex_ucgarbage_found=0;sindex_gc_locktimedout=0;sindex_gc_inactivity_dur=0;sindex_gc_activity_dur=0;sindex_gc_list_creation_time=0;sindex_gc_list_deletion_time=0;sindex_gc_objects_validated=0;sindex_gc_garbage_found=0;sindex_gc_garbage_cleaned=0;system_swapping=false;err_replica_null_node=0;
err_replica_non_null_node=0;err_sync_copy_null_master=0;storage_defrag_corrupt_record=0;err_write_fail_prole_unknown=0;err_write_fail_prole_generation=0;err_write_fail_unknown=0;err_write_fail_key_exists=0;err_write_fail_generation=0;err_write_fail_generation_xdr=0;err_write_fail_bin_exists=0;err_write_fail_parameter=0;err_write_fail_incompatible_type=0;err_write_fail_noxdr=0;err_write_fail_prole_delete=0;err_write_fail_not_found=0;err_write_fail_key_mismatch=0;err_write_fail_record_too_big=90;err_write_fail_bin_name=0;err_write_fail_bin_not_found=0;err_write_fail_forbidden=0;stat_duplicate_operation=53184;uptime=1001388;stat_write_errs_notfound=0;stat_write_errs_other=90;heartbeat_received_self=0;heartbeat_received_foreign=145137042;query_reqs=0;query_success=0;query_fail=0;query_abort=0;query_avg_rec_count=0;query_short_running=0;query_long_running=0;query_short_queue_full=0;query_long_queue_full=0;query_short_reqs=0;query_long_reqs=0;query_agg=0;query_agg_success=0;query_agg_err=0;query_agg_abort=0;query_agg_avg_rec_count=0;query_lookups=0;query_lookup_success=0;query_lookup_err=0;query_lookup_abort=0;query_lookup_avg_rec_count=0
3 :  features
     cdt-list;pipelining;geo;float;batch-index;replicas-all;replicas-master;replicas-prole;udf
4 :  cluster-generation
     61
5 :  partition-generation
     11811
6 :  edition
     Aerospike Community Edition
7 :  version
     Aerospike Community Edition build 3.7.2
8 :  build
     3.7.2
9 :  services
     10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.41:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.23:3000
10 :  services-alumni
     10.0.3.1:3000;10.240.0.42:3000;10.0.3.1:3000;10.240.0.5:3000;10.0.3.1:3000;10.240.0.13:3000;10.0.3.1:3000;10.240.0.14:3000;10.0.3.1:3000;10.240.0.18:3000;10.0.3.1:3000;10.240.0.23:3000;10.0.3.1:3000;10.240.0.24:3000;10.0.3.1:3000;10.240.0.27:3000;10.0.3.1:3000;10.240.0.30:3000;10.0.3.1:3000;10.240.0.37:3000;10.0.3.1:3000;10.240.0.43:3000;10.0.3.1:3000;10.240.0.33:3000;10.0.3.1:3000;10.240.0.41:3000

Solution

I have a few comments about your configuration. First, transaction-threads-per-queue should be set to 3 or 4 (don't set it to the number of cores).
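
For reference, a minimal sketch of the relevant lines in the service context with that change applied; the surrounding values are taken from the posted config, and 4 is just one point in the suggested 3-4 range:

service {
    service-threads 32
    transaction-queues 32
    transaction-threads-per-queue 4    # was 32; keep this at 3 or 4, not the core count
}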

The second has to do with your batch-read tuning. You're using the (default) batch-index protocol, and the config params you'll need to tune for batch-read performance are listed below (a consolidated config sketch follows the list):

  • You have batch-max-requests set very high. This is probably affecting both your CPU load and your memory consumption. It's enough for there to be a slight imbalance in the number of keys you're accessing per node for it to show up in the graphs you've shown. At the very least, this is possibly the issue. It's better to iterate over smaller batches than to try to fetch 200K records per node at a time.
  • batch-index-threads – by default its value is 4, and you have set it to 32 (out of a maximum of 64). You should do this incrementally, running the same test and benchmarking the performance at each step. On each iteration adjust the value upward, then back down if performance decreases. For example: test with 32, +8 = 40, +8 = 48, -4 = 44. There's no easy rule of thumb for this setting; you'll need to tune it through iterations on the hardware you'll be using and monitor the performance.
  • batch-max-buffer-per-queue – this is more directly linked to the number of concurrent batch-read operations the node can support. Each batch-read request will consume at least one buffer (more if the data cannot fit in 128K). If you do not have enough of these allocated to support the number of concurrent batch-read requests, you will get exceptions with error code 152, BATCH_QUEUES_FULL. Track and log such events clearly, because they mean you need to raise this value. Note that this is the number of buffers per queue. Each batch response worker thread has its own queue, so you'll have batch-index-threads x batch-max-buffer-per-queue buffers, each taking 128K of RAM. batch-max-unused-buffers caps the memory usage of all these buffers combined, destroying unused buffers until their number is reduced. There's an overhead to allocating and destroying these buffers, so you do not want to set it too low compared to the total. Your current cost is 32 x 256 x 128KB = 1GB.
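
To make the list above concrete, here is a hedged sketch of a service context with those batch parameters pulled together. The numeric values are illustrative starting points for the iterative benchmarking described above, not tuned recommendations for this workload; note that in aerospike.conf the per-queue buffer cap is spelled batch-max-buffers-per-queue:

service {
    transaction-threads-per-queue 4     # see the first comment above
    batch-index-threads 32              # default is 4; adjust up/down while benchmarking (e.g. 32, 40, 48, 44)
    batch-max-requests 5000             # illustrative: iterate over much smaller batches than 200000 per node
    batch-max-buffers-per-queue 255     # default; raise it only if clients hit error 152 BATCH_QUEUES_FULL
    batch-max-unused-buffers 256        # default; caps the combined memory of idle 128K response buffers
}

At these values the worst-case batch buffer footprint stays around 32 x 255 x 128KB, roughly the same 1GB computed above.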

Finally, you're storing your data on a filesystem. That's fine for development instances, but not recommended for production. In GCE you can provision either a SATA SSD or an NVMe SSD for your data storage, and those should be initialized, and used as block devices. Take a look at the GCE recommendations for more details. I suspect you have warnings in your log about the device not keeping up.
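
As a sketch only, assuming a local NVMe SSD exposed at /dev/nvme0n1 (the device path is an assumption, not something from the question), the namespace's storage section might move from a file to a raw block device like this:

namespace test {
    replication-factor 3
    memory-size 5G
    default-ttl 0

    storage-engine device {
        # Assumed device path; zero/initialize the SSD before first use,
        # as described in the GCE recommendations referenced above.
        device /dev/nvme0n1
        write-block-size 128K    # commonly used with SSDs; 1M is more typical for file-backed storage
    }
}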

It's likely that one of your nodes is an outlier with regard to the number of partitions it has (and therefore the number of objects). You can confirm it with asadm -e 'asinfo -v "objects"'. If that's the case, you can terminate that node and bring up a new one. This will force the partitions to be redistributed. This does trigger a migration, which takes quite a bit longer on the CE server than on the EE one.
