Spring Data GemFire and loose coupling between GemFire cache with Spring Boot on startup


Problem description

We have a GemFire cluster with 2 Locators and 2 Cache nodes.

Our Spring Boot services will connect to the GemFire cluster as clients and will have client Regions. We are using Spring Data GemFire to bootstrap client Regions with GemFire XML config and properties.

When the GemFire cluster is down, the Spring Boot service does not come up because it cannot satisfy the GemFire Region dependencies (UnsatisfiedDependencyException).

Is there a way to loosely couple Spring Boot startup and GemFire?

In essence, we want the Spring Boot service to start even when the GemFire cluster is down.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:gfe="http://www.springframework.org/schema/gemfire"
       xmlns:util="http://www.springframework.org/schema/util"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/gemfire http://www.springframework.org/schema/gemfire/spring-gemfire.xsd
           http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">

    <!-- GemFire properties loaded from the classpath -->
    <util:properties id="gemfireProperties" location="classpath:gemfire.properties"/>

    <!-- PDX serialization for domain objects -->
    <bean id="autoSerializer" class="org.apache.geode.pdx.ReflectionBasedAutoSerializer"/>

    <gfe:client-cache pdx-serializer-ref="autoSerializer" pdx-read-serialized="true"
                      pool-name="POOL" properties-ref="gemfireProperties"/>

    <!-- Pool connecting to the cluster Locators -->
    <gfe:pool id="POOL" subscription-enabled="true">
        <gfe:locator host="${gf.cache.locator1}" port="${gf.cache.locator1.port}"/>
        <gfe:locator host="${gf.cache.locator2}" port="${gf.cache.locator2.port}"/>
    </gfe:pool>

    <!-- Client Region "xyz" caching server-side data and registering interest in all keys -->
    <gfe:client-region id="xyz" shortcut="CACHING_PROXY" pool-name="POOL">
        <gfe:regex-interest pattern=".*" result-policy="KEYS_VALUES"/>
    </gfe:client-region>

</beans>


@ImportResource({"classpath:gemfire-config.xml"})
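
For context, this annotation would typically sit on the Spring Boot application class itself; a minimal sketch (the class name is hypothetical, the XML file name matches the configuration above):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ImportResource;

// Hypothetical application class bootstrapping the Spring (Data GemFire) XML config above.
@SpringBootApplication
@ImportResource({"classpath:gemfire-config.xml"})
public class ExampleClientApplication {

    public static void main(String[] args) {
        SpringApplication.run(ExampleClientApplication.class, args);
    }
}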

Recommended answer

What you are asking is possible to do, but not without some custom code.

And, it would be much easier to accomplish using Java-based, Spring Container Configuration along with SDG's API than using either Spring (Data GemFire) XML config or (Did I read this right?? You are (possibly) using...) GemFire XML config.

First, though, I wonder in what capacity you are using Pivotal GemFire such that your Spring Boot applications (or services) do not strictly require GemFire to be running (server-side) in order to function properly, so that your Spring Boot apps/services will still start up and service your customers' needs?

Clearly, Pivotal GemFire is not being used as a System of Record (SOR) for your Spring Boot services in this case. However, it would make sense if you were simply using Pivotal GemFire for "caching", perhaps as a caching provider in Spring's Cache Abstraction? Is this what you are doing?
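
(For reference, "caching" in that sense usually means Spring's declarative caching on top of a GemFire-backed CacheManager. A generic sketch, assuming @EnableCaching and a GemfireCacheManager are configured elsewhere and that the "xyz" client Region doubles as the cache; the service and method names here are made up:)

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Hypothetical service; on a cache hit the value comes from the "xyz" Region,
// on a miss the method body runs and its result is cached.
@Service
public class QuoteService {

    @Cacheable("xyz")
    public String quoteOfTheDay(String date) {
        return loadQuoteFromSystemOfRecord(date);
    }

    private String loadQuoteFromSystemOfRecord(String date) {
        // stand-in for a call to the real System of Record
        return "quote-for-" + date;
    }
}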

Anyway...

I think the best way to demonstrate this is by example with an Integration Test, ;-)

I wrote a simple Integration Test, ResilientClientServerIntegrationTests, where the test is functioning as an application (to put/get data to/from a Region, i.e. "Example") and demonstrates that it can "conditionally" switch between client/server and local-only mode.

The key to allowing the test (or a Spring-based application) to switch between client/server and local-only mode is to implement a custom Spring Condition and then use the @Conditional Spring annotation on the application (client) configuration class, as shown here.
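
The actual Condition lives in the linked example and is not reproduced in the answer; the following is only an approximation of what it might look like. The hardcoded Locator host, port, and timeout are assumptions (the answer itself admits its check is crude and hardcoded):

import java.net.InetSocketAddress;
import java.net.Socket;

import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.type.AnnotatedTypeMetadata;

// Hypothetical Condition reporting whether a GemFire Locator is reachable.
public class GemFireClusterAvailableCondition implements Condition {

    protected static final String LOCATOR_HOST = "localhost"; // assumption
    protected static final int LOCATOR_PORT = 10334;          // default Locator port
    protected static final int TIMEOUT_MILLISECONDS = 500;

    // Crude availability check: try to open a plain TCP connection to the Locator.
    public static boolean isClusterAvailable() {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(LOCATOR_HOST, LOCATOR_PORT), TIMEOUT_MILLISECONDS);
            return true;
        }
        catch (Exception cause) {
            return false;
        }
    }

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return isClusterAvailable();
    }
}

A configuration class guarded with @Conditional(GemFireClusterAvailableCondition.class) would then only contribute its beans when a Locator is actually reachable.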

However, instead of completely disabling the GemFire client when the server cluster is not available, I simply switch the application (a.k.a. test) to run in client, local-only mode.

I specifically do this by configuring the client Regions to use the ClientRegionShortcut.LOCAL setting. I then use this setting in the configuration of my client-side GemFire objects, e.g. on the "Example" client Region, see here, then here.
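
Again, the referenced configuration is only linked, not shown; here is a rough approximation in SDG's Java config of how the resolved shortcut could be applied to the "Example" client Region. It assumes a ClientCache bean is defined elsewhere (e.g. via @ClientCacheApplication or the XML above) and reuses the hypothetical availability check sketched earlier:

import org.apache.geode.cache.GemFireCache;
import org.apache.geode.cache.client.ClientRegionShortcut;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.client.ClientRegionFactoryBean;

// Hypothetical configuration class; bean names and value types are illustrative only.
@Configuration
public class ExampleClientRegionConfiguration {

    // Fall back to LOCAL when no cluster is reachable; otherwise talk to the servers.
    @Bean
    ClientRegionShortcut exampleRegionShortcut() {
        return GemFireClusterAvailableCondition.isClusterAvailable()
            ? ClientRegionShortcut.CACHING_PROXY
            : ClientRegionShortcut.LOCAL;
    }

    @Bean("Example")
    ClientRegionFactoryBean<Object, Object> exampleRegion(GemFireCache gemfireCache,
            ClientRegionShortcut exampleRegionShortcut) {

        ClientRegionFactoryBean<Object, Object> exampleRegion = new ClientRegionFactoryBean<>();

        exampleRegion.setCache(gemfireCache);
        exampleRegion.setShortcut(exampleRegionShortcut);

        return exampleRegion;
    }
}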

Now, if I run this test, it will pass whether or not I have a GemFire cluster (of servers) running, because if there is no GemFire cluster available, then it will simply function in local-only mode.

If a GemFire cluster has been made available to the application, then it will also work as expected and use the cluster without changing any client application code or configuration, neat huh!

So, by way of example, suppose I start a cluster using Gfsh, like so...

$ echo $GEMFIRE
/Users/jblum/pivdev/apache-geode-1.6.0

$ gfsh
    _________________________     __
   / _____/ ______/ ______/ /____/ /
  / /  __/ /___  /_____  / _____  / 
 / /__/ / ____/  _____/ / /    / /  
/______/_/      /______/_/    /_/    1.6.0

Monitor and Manage Apache Geode
gfsh>


gfsh>start locator --name=LocatorOne --log-level=config
Starting a Geode Locator in /Users/jblum/pivdev/lab/LocatorOne...
.....
Locator in /Users/jblum/pivdev/lab/LocatorOne on 10.99.199.24[10334] as LocatorOne is currently online.
Process ID: 9737
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_192
Log File: /Users/jblum/pivdev/lab/LocatorOne/LocatorOne.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.log-level=config -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=10.99.199.24, port=1099]

Cluster configuration service is up and running.


gfsh>start server --name=ServerOne --log-level=config
Starting a Geode Server in /Users/jblum/pivdev/lab/ServerOne...
....
Server in /Users/jblum/pivdev/lab/ServerOne on 10.99.199.24[40404] as ServerOne is currently online.
Process ID: 9780
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_192
Log File: /Users/jblum/pivdev/lab/ServerOne/ServerOne.log
JVM Arguments: -Dgemfire.default.locators=10.99.199.24[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar


gfsh>list members
   Name    | Id
---------- | ----------------------------------------------------------------
LocatorOne | 10.99.199.24(LocatorOne:9737:locator)<ec><v0>:1024 [Coordinator]
ServerOne  | 10.99.199.24(ServerOne:9780)<v1>:1025


gfsh>create region --name=Example --type=PARTITION
 Member   | Status
--------- | ----------------------------------------
ServerOne | Region "/Example" created on "ServerOne"


gfsh>list regions
List of regions
---------------
Example


gfsh>describe region --name=/Example
..........................................................
Name            : Example
Data Policy     : partition
Hosting Members : ServerOne

Non-Default Attributes Shared By Hosting Members  

 Type  |    Name     | Value
------ | ----------- | ---------
Region | size        | 0
       | data-policy | PARTITION

Now, I run the test again, it passes, and then I assess the state of the cluster:

gfsh>describe region --name=/Example
..........................................................
Name            : Example
Data Policy     : partition
Hosting Members : ServerOne

Non-Default Attributes Shared By Hosting Members  

 Type  |    Name     | Value
------ | ----------- | ---------
Region | size        | 1
       | data-policy | PARTITION



gfsh>get --region=Example --key=1 --key-class=java.lang.Integer
Result      : true
Key Class   : java.lang.Integer
Key         : 1
Value Class : java.lang.String
Value       : test

Cool! It worked! Our "Example" Region contains an entry put there by our test/application.

If I stop the cluster and re-run the test, of course, it will still pass because the code/configuration smartly switches back to local-only mode, seamlessly without doing anything.

If you are unclear/uncertain that the test is doing what I say it is doing, then simply comment out the @Conditional annotation that is responsible for A) determining whether the GemFire cluster is available and B) deciding how to handle the situation when the cluster is unavailable, which in this case we simply switch to local-only mode.

But, by commenting out that condition, you would see an Exception similar to the following:

org.apache.geode.cache.client.NoAvailableLocatorsException: Unable to connect to any locators in the list [LocatorAddress [socketInetAddress=localhost/127.0.0.1:10334, hostname=localhost, isIpString=false]]

    at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.findServer(AutoConnectionSourceImpl.java:158)
    at org.apache.geode.cache.client.internal.ConnectionFactoryImpl.createClientToServerConnection(ConnectionFactoryImpl.java:234)
    at org.apache.geode.cache.client.internal.pooling.ConnectionManagerImpl.borrowConnection(ConnectionManagerImpl.java:242)
    at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:148)
    at org.apache.geode.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:127)
    at org.apache.geode.cache.client.internal.PoolImpl.execute(PoolImpl.java:782)
    at org.apache.geode.cache.client.internal.PutOp.execute(PutOp.java:91)
    at org.apache.geode.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:159)
    at org.apache.geode.internal.cache.LocalRegion.serverPut(LocalRegion.java:3010)
    at org.apache.geode.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3121)
    at org.apache.geode.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:239)
    at org.apache.geode.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5631)
    at org.apache.geode.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:152)
    at org.apache.geode.internal.cache.LocalRegion.basicPut(LocalRegion.java:5059)
    at org.apache.geode.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1597)
    at org.apache.geode.internal.cache.LocalRegion.put(LocalRegion.java:1584)
    at org.apache.geode.internal.cache.AbstractRegion.put(AbstractRegion.java:413)
    at example.tests.spring.data.geode.clientserver.ResilientClientServerIntegrationTests.exampleRegionDataAccessOperationsAreSuccessful(ResilientClientServerIntegrationTests.java:86)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)

That is a NoAvailableLocatorsException, because the "default" is PROXY (again, here), which expects a cluster with a Region corresponding to the client Region (i.e. "Example") to be available on the server side.

Of course, you can completely disable any GemFire client configuration in your [Spring [Boot]] application/services if you absolutely and strictly don't want any GemFire client objects functional when the cluster is not available. You would simply return false here. You just have to be careful that your application has not auto-wired any GemFire objects in this case, for example.
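
(As a sketch of that caveat: any injection points would have to tolerate the Region beans being absent when the client configuration is disabled. Assuming a "xyz" Region of hypothetical key/value types, something like the following would avoid an UnsatisfiedDependencyException:)

import org.apache.geode.cache.Region;

import org.springframework.beans.factory.ObjectProvider;
import org.springframework.stereotype.Component;

// Hypothetical component; works whether or not the "xyz" Region bean was defined.
@Component
public class XyzDataAccess {

    private final ObjectProvider<Region<String, Object>> xyzRegion;

    public XyzDataAccess(ObjectProvider<Region<String, Object>> xyzRegion) {
        this.xyzRegion = xyzRegion;
    }

    public Object findByKey(String key) {
        Region<String, Object> region = this.xyzRegion.getIfAvailable();
        return region != null ? region.get(key) : null; // fall back when GemFire is disabled
    }
}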

Also, you can accomplish a similar effect with Spring XML config as well, but using Java-based Spring configuration was much easier to demonstrate and I leave it as an exercise for you to figure out.

Additionally, the logic to test the availability of the cluster, while effective (and hardcoded, :P), is crude and I leave it to you to add more "robust" logic.

But, I trust this addresses your question adequately.

Hope this helps!

Cheers!
