Cluster sharding client not connecting with host

Problem Description

After recent investigation and a Stack Overflow question, I realise that cluster sharding is a better option than a cluster consistent-hash router. But I am having trouble getting a two-process cluster going.

One process is the Seed and the other is the Client. The Seed node seems to continuously throw dead letter messages (see the end of this question).

The Seed's HOCON follows:

akka {
loglevel = "INFO"                    

actor {
    provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
    serializers {
        wire = "Akka.Serialization.WireSerializer, Akka.Serialization.Wire"
    }
    serialization-bindings {
        "System.Object" = wire
    }
}                    

remote {
    dot-netty.tcp {
        hostname = "127.0.0.1"
        port = 5000
    }
}

persistence {
    journal {
        plugin = "akka.persistence.journal.sql-server"
        sql-server {
            class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
            schema-name = dbo
            auto-initialize = on
            connection-string = "Data Source=localhost;Integrated Security=True;MultipleActiveResultSets=True;Initial Catalog=ClusterExperiment01"
            plugin-dispatcher = "akka.actor.default- dispatcher"
            connection-timeout = 30s
            table-name = EventJournal
            timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
            metadata-table-name = Metadata
        }
    }

    sharding {
        connection-string = "Data Source=localhost;Integrated Security=True;MultipleActiveResultSets=True;Initial Catalog=ClusterExperiment01"
        auto-initialize = on
        plugin-dispatcher = "akka.actor.default-dispatcher"
        class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
        connection-timeout = 30s
        schema-name = dbo
        table-name = ShardingJournal
        timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
        metadata-table-name = ShardingMetadata
    }
}

snapshot-store {
    sharding {
        class = "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
        plugin-dispatcher = "akka.actor.default-dispatcher"
        connection-string = "Data Source=localhost;Integrated Security=True;MultipleActiveResultSets=True;Initial Catalog=ClusterExperiment01"
        connection-timeout = 30s
        schema-name = dbo
        table-name = ShardingSnapshotStore
        auto-initialize = on
    }
}

cluster {
    seed-nodes = ["akka.tcp://my-cluster-system@127.0.0.1:5000"]
    roles = ["Seed"]

    sharding {
        journal-plugin-id = "akka.persistence.sharding"
        snapshot-plugin-id = "akka.snapshot-store.sharding"
    }
}}

I have a method that essentially turns the above into a Config like so:

var config = NodeConfig.Create(/* HOCON above */).WithFallback(ClusterSingletonManager.DefaultConfig());

Without the "WithFallback" I get a null reference exception out of the config generation.

The system is then spawned like so:

var system = ActorSystem.Create("my-cluster-system", config);
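
NodeConfig.Create is the asker's own helper and its body is not shown in the question. A minimal sketch of what it presumably does, using Akka.NET's standard ConfigurationFactory (an assumption for illustration, not the actual helper), might be:

using Akka.Configuration;

public static class NodeConfig
{
    // Hypothetical reconstruction: parse the raw HOCON text into a Config.
    // ConfigurationFactory.ParseString is the standard Akka.NET entry point
    // for turning a HOCON string into a Config object.
    public static Config Create(string hocon) =>
        ConfigurationFactory.ParseString(hocon);
}

The WithFallback(ClusterSingletonManager.DefaultConfig()) call then layers the cluster-singleton defaults (which the sharding coordinator relies on) underneath the parsed HOCON, which is presumably why omitting it produces the null reference exception mentioned above.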

The client creates its system in the same manner and the HOCON is almost identical aside from:

{
remote {
    dot-netty.tcp {
        hostname = "127.0.0.1"
        port = 5001
    }
}
cluster {
    seed-nodes = ["akka.tcp://my-cluster-system@127.0.0.1:5000"]
    roles = ["Client"]
    role.["Seed"].min-nr-of-members = 1
    sharding {
        journal-plugin-id = "akka.persistence.sharding"
        snapshot-plugin-id = "akka.snapshot-store.sharding"
    }
}}

The Seed node creates the sharding like so:

ClusterSharding.Get(system).Start(
   typeName: "company-router",
   entityProps: Props.Create(() => new CompanyDeliveryActor()),                    
   settings: ClusterShardingSettings.Create(system),
   messageExtractor: new RouteExtractor(100)
);

And the client creates a sharding proxy like so:

ClusterSharding.Get(system).StartProxy(
    typeName: "company-router",
    role: "Seed",
    messageExtractor: new RouteExtractor(100));

The RouteExtractor is:

public class RouteExtractor : HashCodeMessageExtractor
{
    public RouteExtractor(int maxNumberOfShards) : base(maxNumberOfShards)
    {   
    }
    public override string EntityId(object message) => (message as IHasRouting)?.Company?.VolumeId.ToString();
    public override object EntityMessage(object message) => message;
}

In this scenario the VolumeId is always the same (just for the sake of the experiment).
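
The IHasRouting contract (and the Company/VolumeId types it exposes) is not shown in the question. Purely as an assumption to make the extractor above concrete, a message could be shaped like this:

using System;

// Hypothetical shapes only - the real IHasRouting and Company types are
// not part of the question.
public class Company
{
    public Guid VolumeId { get; set; }
}

public interface IHasRouting
{
    Company Company { get; }
}

public class DeliverToCompany : IHasRouting
{
    public DeliverToCompany(Company company) { Company = company; }
    public Company Company { get; }
}

With messages shaped like this, HashCodeMessageExtractor hashes the EntityId string into one of the 100 shards, so every message carrying the same VolumeId is routed to the same entity actor.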

Both processes come to life but the Seed keeps throwing this error to the log:

[INFO][7/05/2017 9:00:58 AM][Thread 0003][akka://my-cluster-system/user/sharding/company-routerCoordinator/singleton/coordinator] Message Register from akka.tcp://my-cluster-system@127.0.0.1:5000/user/sharding/company-router to akka://my-cluster-system/user/sharding/company-routerCoordinator/singleton/coordinator was not delivered. 4 dead letters encountered.

Ps. I am not using Lighthouse.

Recommended Answer

Thanks Horusiath, that's fixed it:

return sharding.Start(
    typeName: "company-router",
    entityProps: Props.Create(() => new CompanyDeliveryActor()),
    settings: ClusterShardingSettings.Create(system).WithRole("Seed"),
    messageExtractor: new RouteExtractor(100)
);

The clustered shard is now communicating between the 2 processes. Thanks very much for that bit.
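
For completeness, once the region is started on the Seed with the role restriction and the proxy is started on the Client, the IActorRef returned by StartProxy can be used like any other actor reference. A sketch of client-side usage, reusing the hypothetical DeliverToCompany message from above (not code from the question):

// On the Client: StartProxy returns an IActorRef that forwards each message
// to the shard (hosted on a "Seed" role node) that owns its EntityId.
IActorRef companyRouter = ClusterSharding.Get(system).StartProxy(
    typeName: "company-router",
    role: "Seed",
    messageExtractor: new RouteExtractor(100));

// The RouteExtractor derives the entity id from Company.VolumeId, so all
// messages for the same volume end up at the same CompanyDeliveryActor.
companyRouter.Tell(new DeliverToCompany(new Company { VolumeId = Guid.NewGuid() }));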
