hazelcast / hazelcast-hibernate

A distributed second-level cache for Hibernate

Hz as Hibernate 2nd level cache - Enum de/serialization issue

ryanharis89 opened this issue · comments

Project Setup Description
Hazelcast topology used: Cluster (client/server)
We have a Spring Boot application where we want to use Hazelcast as a distributed Hibernate 2nd level cache and distributed query cache (either or both enabled by setting the appropriate properties). This Spring Boot app acts as a client to our cluster. The Hazelcast server is set up using Docker images (Docker Desktop on Windows 10). Note that 10.70.37.29 is the IP of my PC, where everything is running. The Management Center is also running, started from its WAR file, which was downloaded separately.

  • First we run the management center, by executing the below command:
    java -jar hazelcast-mancenter-3.12-ALPHA-1.war 8083 hazelcast-mancenter

  • Then we run the hz server (1st member) with the below command:
    docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=10.70.37.29:5701 -Dhazelcast.diagnostics.enabled=true" -e MANCENTER_URL="http://10.70.37.29:8083/hazelcast-mancenter" -p 5701:5701 --name hazelcast1 hazelcast/hazelcast:3.12.1

  • Optionally, a 2nd member can be added to the cluster (Docker image) with the command below, but this is unrelated to our problem:
    docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=10.70.37.29:5702 -Dhazelcast.diagnostics.enabled=true" -e MANCENTER_URL="http://10.70.37.29:8083/hazelcast-mancenter" -p 5702:5701 --name hazelcast2 hazelcast/hazelcast:3.12.1

  • Then we run our Spring Boot app, where we have configured the ClientConfig to connect to the cluster and to configure our maps and caches. The HazelcastConfigurationBase.java class contains all the required configuration (attached):

Everything works perfectly and we can see all the correct info in the Management Center. The only issue we face is related to custom enum de/serialization: upon updating (via a POST REST service) an entity that has an enum as a field, we get an exception in the method below:

Class: ClientProxy.java
Method:

protected <T> T invokeOnPartition(ClientMessage clientMessage, int partitionId) {
    try {
        final Future future = new ClientInvocation(getClient(), clientMessage, getName(), partitionId).invoke();
        return (T) future.get();
    } catch (Exception e) {
        throw rethrow(e);
    }
}

java.util.concurrent.ExecutionException: com.hazelcast.core.OperationTimeoutException: ClientInvocation{clientMessage = ClientMessage{connection=null, length=22, correlationId=7681, operation=Map.executeOnKey, messageType=132, partitionId=67, isComplete=false, isRetryable=false, isEvent=false, writeOffset=0}, objectName = applicationParameters, target = partition 67, sendConnection = ClientConnection{alive=true, connectionId=2, channel=NioChannel{/10.70.37.29:54318->/10.70.37.29:5701}, remoteEndpoint=[10.70.37.29]:5701, lastReadTime=2019-09-11 14:32:47.521, lastWriteTime=2019-09-11 14:32:47.519, closedTime=never, connected server version=3.12.1}} timed out because exception occurred after client invocation timeout 120000 ms. Current time: 2019-09-11 14:32:47.521. Start time: 2019-09-11 14:30:46.782. Total elapsed time: 120739 ms.
Caused by:
com.hazelcast.nio.serialization.HazelcastSerializationException: There is no suitable de-serializer for type -205. This exception is likely to be caused by differences in the serialization configuration between members or between clients and members.

We are using:
Java 8, hazelcast-all 3.12.1, cache-api 1.1.0, hibernate-core 5.0.12.Final

Notes

  • FYI, in the steps above that reproduce the issue, we are trying to update an entity that contains custom enum fields, named ApplicationParameter.java (attached)
  • We have tried several configurations in the client's serialization config section, but with no luck.
  • We have tried adding a custom hazelcast.xml on the server side with a custom serialization config as well, but no luck.
  • If we add hibernate-core-5.0.12.Final.jar and hazelcast-hibernate5-1.3.2.jar to /opt/hazelcast/lib/ inside the server container, the above exception turns into something clearer: a ClassNotFoundException saying that the entity class which contains the custom enum cannot be found.
  • Then, if we also add our persistence.jar (which contains all our entity/model classes plus enums) to the CLASSPATH of the server container, the exception is obviously gone and everything works, but this is quite weird and not desirable.

ApplicationParameter.txt

HazelcastConfigurationBase.txt

ApplicationParameterDataType.txt

ApplicationParameterViewFlag.txt

Could you please assist us in overcoming this issue? Is this a bug, or do we need to add some special configuration to our client or server? Thanks in advance.

Hi,

When I was trying to reproduce your case, I encountered a similar error even with an entity class that only contains primitive types and no enums. In client mode, any attempt to delete or update an entity residing in either the query cache or the second-level cache yields the error caused by com.hazelcast.nio.serialization.HazelcastSerializationException. So I think it's not an enum-related issue. (BTW, it works fine in P2P mode.)

Further, when I upgraded hibernate-core to 5.3.x and hazelcast-hibernate5-1.3.2 to hazelcast-hibernate53-1.3.2, without changing any client or server configuration on the Hazelcast side, the problem disappeared and everything worked fine. Thus I think it's not related to serialization/Hazelcast config either.

In short, IMO it looks like a bug that was fixed as of hazelcast-hibernate53.

Hi enozcan, and thanks for the reply. When I tried removing the enum fields from the ApplicationParameter class and left only primitives, it worked like a charm, and the error only appeared when I inserted the enums again. That's why I believed it was enum-related. I will try to update to the latest versions and see if that solves my issue.

@ryanharis89

Another approach: since Hibernate 5.2, JCache implementations can also be used as the second-level cache provider. So without using the hazelcast-hibernate plug-in, you can still use Hazelcast as the second-level cache by configuring HazelcastCachingProvider as hibernate.javax.cache.provider. Here are the configuration details:

  • Instead of providing factory class, set
 <property name = "hibernate.javax.cache.provider">com.hazelcast.cache.HazelcastCachingProvider</property>
  • Add cache-api-<version>.jar to your Hazelcast instance's classpath:
<dependency>
     <groupId>javax.cache</groupId>
     <artifactId>cache-api</artifactId>
     <version>1.1.1</version>
</dependency>
  • Add hibernate-jcache.jar to your Hibernate app's classpath:
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-jcache</artifactId>
    <version>${hibernate.version}</version>
</dependency>
  • To see your caches in the Management Center, you have to enable statistics in hazelcast.xml, since the default value is false. The cache configuration name must match your cache region name: either set it explicitly, or it will be of the form <package>.<entity-class-name>. The default query cache name will be default-query-results-region.
<cache name="X.Y.Z.persistance.ApplicationParameters">
     <statistics-enabled>true</statistics-enabled>
     <management-enabled>true</management-enabled>
     ...
</cache>

<cache name="default-query-results-region">
    <statistics-enabled>true</statistics-enabled>
    <management-enabled>true</management-enabled>
     ....
</cache>
  • When running your application include the argument:

-Dhazelcast.jcache.provider.type=client
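Putting the bullets above together, a minimal sketch of the client-side Hibernate properties might look like this (the property keys are standard Hibernate/Hazelcast ones discussed in this thread; the helper class name is hypothetical):

```java
import java.util.Properties;

// Hypothetical helper collecting the JCache-related Hibernate properties
// from this thread into one place (e.g. to pass to an EntityManagerFactory).
class JCacheHibernateProps {

    static Properties build() {
        Properties props = new Properties();
        // Enable the second-level and query caches.
        props.put("hibernate.cache.use_second_level_cache", "true");
        props.put("hibernate.cache.use_query_cache", "true");
        // Use Hazelcast's JCache provider instead of the hazelcast-hibernate factory.
        props.put("hibernate.javax.cache.provider",
                  "com.hazelcast.cache.HazelcastCachingProvider");
        return props;
    }
}
```

Remember to also pass -Dhazelcast.jcache.provider.type=client on the command line, as noted in the last bullet.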


So far I am having trouble updating my project to the latest versions of hibernate and hazelcast-hibernate as you suggested in your first response (due to custom implementations we have regarding AbstractGeneralRegion and DistributeQueryCacheHazelcastFactory, and our general distributed usage of the query cache). Can you please tell me if I can apply the second approach (using the cache provider) without updating my dependencies, staying on the current ones (hazelcast-all 3.12.1, cache-api 1.1.0, hibernate-core 5.0.12.Final)? If yes, should I not use the hazelcast-hibernate Maven dependency at all in my pom? Thanks again for your help.

cache-api v1.1.0 is fine, but unfortunately it looks like support only started with Hibernate 5.2 (see the Maven repository of hibernate-jcache here). I'm modifying the previous comment accordingly.

And if you use the second-level cache that way, then yes, you do not use the hazelcast-hibernate plug-in.

  • Instead of providing a factory class, set
<property name = "hibernate.javax.cache.provider">com.hazelcast.cache.HazelcastCachingProvider</property>

Also, I cannot start my application without setting hibernate.cache.region.factory_class as you suggested. I get the exception below.

org.hibernate.cache.NoCacheRegionFactoryAvailableException: Second-level cache is used in the application, but property hibernate.cache.region.factory_class is not given; please either disable second level cache or set correct region factory using the hibernate.cache.region.factory_class setting and make sure the second level cache provider (hibernate-infinispan, e.g.) is available on the classpath.

Did you mean to keep it and also set the hibernate.javax.cache.provider property?

Can you please add this property and try again?

 <property name = "hibernate.cache.region.factory_class">org.hibernate.cache.jcache.JCacheRegionFactory</property>

Remember, it's valid since Hibernate 5.2, and you have to add the hibernate-jcache dependency with the same version as hibernate-core. You can remove the hazelcast-hibernate dependency from your pom.xml file.
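Taken together with the earlier comments, the JCache-related settings would look roughly like this in XML property form (a sketch, assuming Hibernate 5.2+ and the property names discussed in this thread):

```xml
<!-- Sketch: Hibernate 5.2+ second-level cache via JCache / Hazelcast -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>
<property name="hibernate.cache.region.factory_class">org.hibernate.cache.jcache.JCacheRegionFactory</property>
<property name="hibernate.javax.cache.provider">com.hazelcast.cache.HazelcastCachingProvider</property>
```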

Hello again. I added the hibernate-jcache dependency, removed hazelcast-hibernate and updated Hibernate to 5.2.18.Final. I also added the above property specifying the factory class as JCacheRegionFactory. Now the SerializationException seems to be gone and I can update my entity. There is only one side effect I don't like: in the Management Center -> Clients tab I now see 2 rows instead of the one I had before. Apart from the Hazelcast instance I have configured with a custom name (hazelcast-sumo), there is another one, probably generated by JCache, called _hzinstance_jcache_shared. I tried to add a property
hibernateProperties.put("hibernate.cache.hazelcast.instance_name", "sumo_hazelcast");

but it does not change anything. Do you have any idea about that?

Thanks again for your guidance and help.

Hi @ryanharis89

_hzinstance_jcache_shared is the default instance name for the client created by HazelcastClientCachingProvider, and if you see this client, then it must be the one used for the second-level cache connection. Check the client name used for the connection in the Hibernate app's log when it starts:

INFO: <CLIENT_INSTANCE_NAME> [<CLUSTER_GROUP_NAME>] [3.12.1] HazelcastClient 3.12.1 (20190611 - 0a0ee66) is STARTING

You can change the instance name in hazelcast-client.xml file in the project directory:

<hazelcast-client ... >
    <instance-name>CUSTOM_NAME</instance-name>
</hazelcast-client>

As for the other client, started with the name hazelcast-sumo: this client is probably not used by the Hibernate app but created by the Bean in HazelcastConfigurationBase.java, if you still use this configuration:

@Bean
@Primary
HazelcastInstance getHazelcastInstance(){
    final HazelcastInstance hazelcastInstance = 
        HazelcastClient.newHazelcastClient(hazelCastConfig());
    initializeMaps(hazelcastInstance);
    return hazelcastInstance;
}

So try not to create a client explicitly, since the one that has to be used is actually the _hzinstance_jcache_shared instance created by the caching provider itself.

Hi @enozcan ,
Before using your 2nd proposal (about using JCacheRegionFactory), I had created my custom instance named hazelcast-sumo inside the client config, and then I had instructed Hibernate to use this one instead of creating another with the default name. To do this I had the properties below:

hibernateProperties.put("hibernate.cache.hazelcast.use_native_client", "true");
hibernateProperties.put("hibernate.cache.hazelcast.native_client_address", "10.70.37.29");
hibernateProperties.put("hibernate.cache.hazelcast.native_client_group", "dev");
hibernateProperties.put("hibernate.cache.hazelcast.native_client_password", "dev-pass");
hibernateProperties.put("hibernate.cache.hazelcast.native_client_instance_name", "sumo_hazelcast");
hibernateProperties.put("hibernate.cache.hazelcast.lock_timeout_in_seconds", "300");
hibernateProperties.put("hibernate.javax.cache.provider", com.hazelcast.cache.HazelcastCachingProvider.class.getName());

The above were passed to my entityManagerFactory.

Along with the below code as you correctly said:

@Bean
@Primary
HazelcastInstance getHazelcastInstance()
{
    final HazelcastInstance hazelcastInstance = HazelcastClient.newHazelcastClient(hazelCastConfig());
    initializeMaps(hazelcastInstance);
    return hazelcastInstance;
}

ClientConfig hazelCastConfig()
{
    final ClientConfig configuration = buildHazelcastClientConfiguration();
    setAdditionalConfiguration(configuration);
    return configuration;
}

protected ClientConfig buildHazelcastClientConfiguration()
{
    final ClientConfig clientConfig = new ClientConfig();
    clientConfig.setInstanceName(instanceName);
    clientConfig.setProperty("hazelcast.client.statistics.enabled", "true")
                .setProperty("hazelcast.logging.type", "log4j");
    clientConfig.getNetworkConfig()
                .addAddress(address)
                .setSmartRouting(true)
                .setRedoOperation(true)
                .setConnectionAttemptLimit(connectionAttemptLimit);
    clientConfig.getConnectionStrategyConfig()
                .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ON);
    clientConfig.getGroupConfig()
                .setName(groupConfigName)
                .setPassword(groupConfigPwd);
    final SerializationConfig serializationConfig = clientConfig.getSerializationConfig();
    serializationConfig.setPortableVersion(0)
                       .setUseNativeByteOrder(false)
                       .setByteOrder(ByteOrder.BIG_ENDIAN)
                       .setCheckClassDefErrors(false);
    return clientConfig;
}

This was working, and I am still using it now. But now that I have added JCacheRegionFactory, I cannot instruct it to use my existing hazelcast-sumo instance, and it creates the default one. It would be nice to be able to use only my custom instance, as before.

Also note that I am not using any hazelcast.xml (or hazelcast-client.xml) so far, and I would like that to remain the case if possible; I prefer Java configuration.
Thanks again.

I have not seen any programmatic config for a JCache provider in the Hibernate docs. They recommend configuring it via an XML file (see here). Unfortunately I have no idea how to configure it programmatically, or whether it is even possible.

So if I want to do it via XML, do I just have to create a hazelcast-client.xml file in my Spring Boot app and paste

<hazelcast-client ... >
    <instance-name>hazelcast-sumo</instance-name>
</hazelcast-client>

Do I also need to specify the address where the server is located?

Instead of configuring the properties programmatically, just configure them in hazelcast-client.xml. I see that you customized a few properties in buildHazelcastClientConfiguration(); you may want to apply them in the XML config as well. You can see the XSD format here.

You may need to specify the group name of the Hazelcast cluster if you have more than one cluster running in your network. You may also need to specify the address if your cluster and your app are running on different networks, etc.
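As a sketch, translating the buildHazelcastClientConfiguration() method above into a hazelcast-client.xml might look like this (element names follow the Hazelcast 3.12 client schema; the values are the ones appearing in this thread):

```xml
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
    <instance-name>hazelcast-sumo</instance-name>
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <network>
        <cluster-members>
            <address>10.70.37.29</address>
        </cluster-members>
        <smart-routing>true</smart-routing>
        <redo-operation>true</redo-operation>
        <connection-attempt-limit>300</connection-attempt-limit>
    </network>
    <connection-strategy reconnect-mode="ON"/>
</hazelcast-client>
```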

Hi @ryanharis89

I think I solved your problem. Sorry for coming up with the solution after you already tried upgrading the Hibernate version, using the JCache implementation, and so on.

Until hazelcast-hibernate53, the prior versions used EntryProcessors for updating and deleting entries in the second-level cache. Those processors simply execute tasks on particular entries in an IMap and are defined in the com.hazelcast.hibernate.distributed package.

When you try to update or delete an entity, your Hazelcast client asks the cluster to execute those processors on the particular entry to be deleted or updated. However, if you do not add the hazelcast-hibernate<version> dependency on the server side, the cluster will not be able to find the requested EntryProcessors. That's why you have to include the hazelcast-hibernate dependency, in which the EntryProcessors are defined, on your server's classpath. Further, since hazelcast-hibernate uses hibernate-core dependencies, you have to include that as well.

Now that the EntryProcessors are defined on the server side, your client can ask the cluster to update and delete entities in the cache. However, if the entity stored in the DB has non-primitive-type fields, the cluster will have trouble again when processing the entity in the cache. Thus you have to add these non-primitive-type classes to the server's classpath. In your case, create the X.Y.Z.persistence.model.constants package on the server side and put ApplicationParameterDataType.java inside it.

So that's why the problem went away when you applied these steps:

If we add hibernate-core-5.0.12.Final.jar and hazelcast-hibernate5-1.3.2.jar inside server container to path /opt/hazelcast/lib/ then the above exception transforms into something more clear saying that the entity class which contains the custom enum, cannot be found. ClassNotFoundException

Then if we add our persistence.jar (that contains all our entity/model classes plus enums) to the CLASSPATH of the server container obviously the exception is gone and everything works, but this is quite weird and not desirable.

You do not need to add the whole persistence.jar, but only the enums and other non-primitive types used in the entity classes.
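Concretely, with the versions used in this thread, the member's lib directory would only need something like the layout below (the constants-only jar name is hypothetical; it would contain just the non-primitive field classes such as ApplicationParameterDataType, under their original package path):

```
/opt/hazelcast/lib/
├── hazelcast-hibernate5-1.3.2.jar
├── hibernate-core-5.0.12.Final.jar
└── constants-only.jar   (X/Y/Z/persistence/model/constants/*.class)
```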

Actually the issue is already pointed out in the hazelcast-hibernate documentation:

To use Native Client, add hazelcast-client-.jar into your classpath. Refer to Hazelcast Java Client chapter for more information. Moreover, to configure a Hazelcast Native Client for Hibernate, put the configuration file named hazelcast-client.xml into the root of your classpath.

To use Native Client, add hazelcast-.jar, hazelcast-hibernate(3,4)-.jar and hibernate-core-.jar into your remote cluster's classpath.

If your domain (persisted) classes only have Java primitive type fields, you do not need to add your domain classes into your remote cluster's classpath. However, if your classes have non-primitive type fields, you need to add only these fields' classes (not your domain class) to your cluster's classpath.

Hello @enozcan ,

Thanks for the reply. Apart from the part where you explain in detail the actual reason why we need to add the hazelcast-hibernate and hibernate-core jars to the server's classpath (I knew I had to do it, but not why), I was already aware of the limitation with non-primitive types. In the beginning I thought it only affected enums, but after your very first comment I read the documentation and found that:
"If your domain (persisted) classes only have Java primitive type fields, you do not need to add your domain classes into your remote cluster's classpath. However, if your classes have non-primitive type fields, you need to add only these fields' classes (not your domain class) to your cluster's classpath."

But this is not acceptable behaviour for us: having to include our model classes in the Hazelcast server.

Hello @enozcan ,

After experimenting for a while, I tried the two solutions you suggested.

  1. Leaving my Hazelcast configuration the same and just updating Hibernate and hazelcast-hibernate to the latest versions. Everything worked fine, as it should. The only limitation is that distributed query cache is not supported. It's documented ("NOTE: QueryCache is always LOCAL to the member and never distributed across Hazelcast Cluster.", see here) and understood, but we need it.

  2. Using the hibernate-jcache dependency and making all the required changes you mentioned in the comments above. Again, it works fine, but we have 2 separate clients showing up in the Management Center. The first one is constructed in Java by me inside HazelcastConfigurationBase.java (its purpose is to set up all the cache details like eviction, TTL, expiry, etc.). The second one is created upon SessionFactory initialization, and its name is defined inside the hazelcast-client.xml file I added, which looks like this (I only specify the desired name and the network-related settings):

<group>
    <name>dev</name>
    <password>dev-pass</password>
</group>
<instance-name>hazelcast_hiberante</instance-name>
<network>
    <cluster-members>
        <address>10.70.105.214</address>
    </cluster-members>
    <smart-routing>true</smart-routing>
    <redo-operation>true</redo-operation>
    <connection-timeout>60000</connection-timeout>
    <connection-attempt-period>3000</connection-attempt-period>
    <connection-attempt-limit>300</connection-attempt-limit>
</network>

With this, everything works fine, including distributed query cache between the Hazelcast members of the cluster.

The only question I have is whether there is any problem with using two separate Hazelcast instances. So far I haven't seen anything weird, but I have to ask for your experience and guidance.

Thank you very much for your help during this issue.
Regards, Xaris

Hi @ryanharis89

I believe only the second client, created during SessionFactory initialization, is used for talking to the cluster to fetch/put data from/to the Hibernate second-level cache.

I wonder if you use the other, manually created client anywhere in the application. If not (you probably left it unchanged when you started using the JCache implementation), I don't think that configuration is used during second-level cache creation, and hence it's unnecessary. If you do use it, then as long as you use that client to connect to another cluster, or to the same cluster but for a different purpose (for some reason), that shouldn't be a problem at all.

I don't have that much knowledge of Spring Beans, but it seems the HazelcastInstance bean is used by Spring's cache autoconfiguration, and hence a new instance is created.

Further, HazelcastCachingProvider provides an option for creating a caching provider from an existing client/server without creating a new one (see here). I was trying to configure the Hibernate properties such that no new client/server is created for the second-level cache and an existing one is used instead. I have not come up with a solution yet, but I will notify you if I find a way. If this can be done, your question about configuring the cache programmatically will be answered as well.

Bests

Hi @ryanharis89

Since there has been no communication since September, I assume all your questions were answered. If that's not the case, feel free to reopen.