rakhmad / rhdg8-server

This repository demonstrates some of the basic features of the latest release of Red Hat Data Grid 8 and how to deploy a RHDG cluster on OCP and RHEL


Red Hat Data Grid 8 server

1. Introduction

Red Hat Data Grid is an in-memory, distributed, NoSQL datastore solution. Your applications can access, process, and analyze data at in-memory speed to deliver a superior user experience.

Red Hat Data Grid provides value as a standard architectural component in application infrastructures for a variety of real-world scenarios and use cases:

  • Data caching and transient data storage.

  • Primary data store.

  • Low latency compute grid.

1.1. Features and benefits

To support modern data management requirements with rapid data processing, elastic scalability, and high availability, Red Hat Data Grid offers:

  • NoSQL data store. Provides simple, flexible storage for a variety of data without the constraints of a fixed data model.

  • Apache Spark and Hadoop integration. Offers full support as an in-memory data store for Apache Spark and Hadoop, with support for Spark resilient distributed datasets (RDDs) and Discretized Streams (DStreams), as well as the Hadoop I/O format.

  • Rich querying. Provides easy search for objects using values and ranges, without the need for key-based lookups or an object’s exact location.

  • Polyglot client and access protocol support. Offers read/write capabilities that let applications written in multiple programming languages easily access and share data. Applications can access the data grid remotely using REST or Hot Rod, with Hot Rod clients available for Java™, C++, and .NET.

  • Distributed parallel execution. Quickly process large volumes of data and support long-running compute applications using simplified Map-Reduce parallel operations.

  • Flexible persistence. Increase the lifespan of information in memory for improved durability through support for both shared-nothing and shared-database (RDBMS or NoSQL) architectures.

  • Comprehensive security. Authentication, role-based authorization, and access control are integrated with existing security and identity structures to give only trusted users, services, and applications access to the data grid.

  • Cross-datacenter replication. Replicate applications across datacenters and achieve high availability to meet service-level agreement (SLA) requirements for data within and across datacenters.

  • Rolling upgrades. Upgrade your cluster without downtime for continuous, uninterrupted operations for remote users and applications.

1.2. RHDG Operator

The Data Grid Operator provides operational intelligence and reduces management complexity for deploying Data Grid on OpenShift, automatically upgrading clusters when new image versions become available.

In order to upgrade Data Grid clusters, the operator checks the version of the image of each Data Grid node. If the operator determines that a new version of the image is available, it gracefully shuts down all nodes, applies the new image, and restarts the nodes.

On OpenShift, the Operator Lifecycle Manager (OLM) enables upgrades for Data Grid Operator.

2. Deploying RHDG on RHEL

This section explains how to configure, run, and monitor Data Grid servers. The binaries are available on the RH downloads website.

2.1. Running RHDG locally in standalone mode

This is the simplest installation option, valid for exploration and development purposes. Follow these steps to run the server in this mode:

Unzip the server
cd ~/Downloads
unzip redhat-datagrid-8.3.0-server.zip
Create the developer user
./redhat-datagrid-8.3.0-server/bin/cli.sh user create developer -p developer -g admin
Launch the server
./redhat-datagrid-8.3.0-server/bin/server.sh
2022-02-14 18:36:43,541 INFO  (main) [BOOT] JVM OpenJDK 64-Bit Server VM Red Hat, Inc. 11.0.14+9
[...]
2022-02-14 18:36:48,086 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SinglePort (default) listening on 127.0.0.1:11222
2022-02-14 18:36:48,087 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'hat2-29797' listening on http://127.0.0.1:11222
2022-02-14 18:36:48,126 INFO  (main) [org.infinispan.SERVER] ISPN080001: Red Hat Data Grid Server 8.3.0.GA started in 4526ms
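
Once the server is up, you can check that it responds to REST requests with the user created above. A minimal check, assuming the default port 11222 and the server's default DIGEST authentication:

# List the caches defined on the server (returns a JSON array of cache names)
curl --digest -u developer:developer http://127.0.0.1:11222/rest/v2/caches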

2.2. Running RHDG locally in clustering mode

Unzip the server as in the previous example, locate the folder named server, duplicate it, and then use one copy for each server instance.

Unzip the server
cd ~/Downloads
unzip redhat-datagrid-8.3.0-server.zip
Duplicate the server config
cp -r redhat-datagrid-8.3.0-server/server redhat-datagrid-8.3.0-server/server-01
cp -r redhat-datagrid-8.3.0-server/server redhat-datagrid-8.3.0-server/server-02
Create the developer user in both instances
./redhat-datagrid-8.3.0-server/bin/cli.sh user create developer -p developer -g admin --server-root=server-01
./redhat-datagrid-8.3.0-server/bin/cli.sh user create developer -p developer -g admin --server-root=server-02
Launch both server instances
./redhat-datagrid-8.3.0-server/bin/server.sh --node-name=node-01 --server-root=redhat-datagrid-8.3.0-server/server-01 --port-offset=0
./redhat-datagrid-8.3.0-server/bin/server.sh --node-name=node-02 --server-root=redhat-datagrid-8.3.0-server/server-02 --port-offset=100

After running both commands, you will see in both terminals similar logs to the ones shown below:

[...]
2022-02-14 18:38:04,606 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'node-01' listening on http://127.0.0.1:11222
2022-02-14 18:38:04,720 INFO  (main) [org.infinispan.SERVER] ISPN080001: Red Hat Data Grid Server 8.3.0.GA started in 5521ms
2022-02-14 18:38:12,903 INFO  (jgroups-6,node-01) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel cluster: [node-01|1] (2) [node-01, node-02]
2022-02-14 18:38:12,912 INFO  (jgroups-6,node-01) [org.infinispan.CLUSTER] ISPN100000: Node node-02 joined the cluster
2022-02-14 18:38:13,621 INFO  (jgroups-12,node-01) [org.infinispan.CLUSTER] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE]ISPN100002: Starting rebalance with members [node-01, node-02], phase READ_OLD_WRITE_ALL, topology id 2
[...]
2022-02-14 18:38:14,457 INFO  (jgroups-14,node-01) [org.infinispan.CLUSTER] [Context=___hotRodTopologyCache_hotrod-default]ISPN100010: Finished rebalance with members [node-01, node-02], topology id 5
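
To double-check that the nodes formed a cluster, you can also query the health endpoint of the default cache manager on either node. A minimal check, assuming the default cache manager name (default) and the developer user created above; the number_of_nodes field should report 2:

curl --digest -u developer:developer http://127.0.0.1:11222/rest/v2/cache-managers/default/health | jq '.cluster_health'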

3. Deploying RHDG on OCP using the Operator

An Operator is a method of packaging, deploying and managing a Kubernetes-native application. A Kubernetes-native application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling.

Install Data Grid Operator into an OpenShift namespace to create and manage Data Grid clusters.

3.1. Deploying the RHDG operator

Create subscriptions to Data Grid Operator on OpenShift so you can install different Data Grid versions and receive automatic updates.

To deploy the RHDG operator, you will need to create the following objects:

  • Two OpenShift projects that will contain the operator and the objects of the RHDG cluster.

  • An OperatorGroup, which provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. As we are not deploying our operator in the default namespace (openshift-operators), we will need to create one to set the namespaces where the Data Grid operator will be able to create and monitor clusters.

ℹ️
The OperatorGroup resource allows you to configure several possible namespace scopes for the operator. Please check the Annex: Configure the scope of your operator before executing the commands in this section.
  • A Subscription, which represents an intention to install an Operator. It is the custom resource that relates an Operator to a CatalogSource. Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually.
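
For reference, a Subscription for the Data Grid Operator has roughly the following shape. This is only a sketch: the channel and catalog values below are assumptions, and the actual definition is contained in the template used in the next step.

- apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: datagrid
    namespace: ${OPERATOR_NAMESPACE}
  spec:
    name: datagrid                         # operator package name in the catalog
    channel: 8.3.x                         # update channel to track (assumed value)
    source: redhat-operators               # CatalogSource providing the package
    sourceNamespace: openshift-marketplace
    installPlanApproval: Automatic         # apply updates automatically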

I have created an OCP template to quickly deploy this operator. Just execute the following command to have it up and running on your cluster.

Bear in mind that you will need cluster-admin permissions to deploy an operator, as it is necessary to create cluster-wide CRDs (Custom Resource Definitions).
oc process -f rhdg-operator/rhdg-01-operator.yaml | oc apply -f -

This template provides two parameters to modify the projects where the operator and the cluster are installed. It is possible to deploy both in the same project or in different projects. By default, values are:

  • OPERATOR_NAMESPACE = rhdg8-operator

  • CLUSTER_NAMESPACE = rhdg8

Modify them by passing arguments to the template:

oc process -f rhdg-operator/rhdg-01-operator.yaml -p OPERATOR_NAMESPACE="other-namespace" -p CLUSTER_NAMESPACE="another-namespace" | oc apply -f -

It is also possible to install the operator from the web console. For more information, please check the official documentation.

3.2. Deploying a RHDG cluster

Data Grid Operator lets you create, configure, and manage Data Grid clusters. Data Grid Operator adds a new Custom Resource (CR) of type Infinispan that lets you handle Data Grid clusters as complex units on OpenShift.

Data Grid Operator watches for Infinispan Custom Resources (CR) that you use to instantiate and configure Data Grid clusters and manage OpenShift resources, such as StatefulSets and Services. In this way, the Infinispan CR is your primary interface to Data Grid on OpenShift.
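
For illustration, a minimal Infinispan CR looks roughly like the following. This is only a sketch using this repository's default name and namespace; the template described next sets the actual values and may add further fields.

apiVersion: infinispan.org/v1
kind: Infinispan
metadata:
  name: rhdg
  namespace: rhdg8
spec:
  replicas: 3   # number of Data Grid pods in the cluster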

I have created an OCP template to quickly deploy a basic RHDG cluster with 3 replicas. Execute the following command to have it up and running on your cluster.

oc process -f rhdg-operator/rhdg-02-cluster.yaml | oc apply -f -

This template provides two parameters to modify the project where the cluster is installed and the name of the cluster to deploy. The cluster namespace should be the same as in the previous step. By default, values are:

  • CLUSTER_NAMESPACE = rhdg8

  • CLUSTER_NAME = rhdg

Modify them by passing arguments to the template:

oc process -f rhdg-operator/rhdg-02-cluster.yaml -p CLUSTER_NAMESPACE="another-namespace" -p CLUSTER_NAME="my-cluster" | oc apply -f -

3.3. Creating RHDG caches using the Operator CRD

⚠️
Creating caches with Data Grid Operator is no longer a technology preview. However, modifications to the Cache CR are not reflected on the Data Grid cluster. Therefore, you will have to delete and recreate those CRs manually.

Data Grid stores entries in caches, which can be created using several methods: REST, the CLI, programmatically using the Java client, or using the Cache CRD. In this section, we will explore how to create caches using the Operator.

ℹ️
For other ways of creating caches, please check this other Git repository with information about the Data Grid client.

To create caches with Data Grid Operator, you use Cache CRs to add caches from templates or XML configuration. Bear in mind the following constraints:

  • You can create a single cache for each Cache CR.

  • If you edit caches in the OpenShift Web Console, changes do not take effect on the Data Grid cluster. You must delete the CR and create it again with the new configuration.

  • Deleting Cache CRs in the OpenShift Web Console does not remove caches from Data Grid clusters. You must delete caches through the console or the CLI.

I have created an OCP template to quickly set up two caches on the RHDG cluster:

  • operator-cache-01: Based on an XML configuration (see the sketch after this list).

  • operator-cache-02: Based on a predefined template.
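
For illustration, a Cache CR based on an XML configuration looks roughly like the following. This is only a sketch; the cache definition below is an assumption, and the actual ones are in the template applied below.

- apiVersion: infinispan.org/v2alpha1
  kind: Cache
  metadata:
    name: operator-cache-01
    namespace: ${CLUSTER_NAMESPACE}
  spec:
    clusterName: ${CLUSTER_NAME}           # Infinispan CR this cache belongs to
    name: operator-cache-01                # cache name inside the cluster
    template: |
      <distributed-cache mode="SYNC" statistics="true">
        <encoding media-type="application/x-protostream"/>
      </distributed-cache>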

In order to apply this template, just execute the following command:

oc process -f rhdg-operator/rhdg-03-caches.yaml | oc apply -f -

This template provides two parameters to modify the project where the cluster is installed and the name of the cluster to deploy. The cluster namespace should be the same as in the previous step. By default, values are:

  • CLUSTER_NAMESPACE = rhdg8

  • CLUSTER_NAME = rhdg

Modify them by passing arguments to the template:

oc process -f rhdg-operator/rhdg-03-caches.yaml -p CLUSTER_NAMESPACE="another-namespace" -p CLUSTER_NAME="my-cluster" | oc apply -f -

Interact with the newly created caches using the following commands:

# Set your variables. These are the defaults:
CLUSTER_NAMESPACE="rhdg8"
CLUSTER_NAME="rhdg"
RHDG_URL=$(oc get route ${CLUSTER_NAME}-external -n ${CLUSTER_NAMESPACE} -o template='https://{{.spec.host}}')

# Check all the caches on your cluster
curl -X GET -k -u developer:developer -H "Content-Type: application/json" ${RHDG_URL}/rest/v2/caches | jq

# Check information about a specific cache
curl -X GET -k -u developer:developer -H "Content-Type: application/json" ${RHDG_URL}/rest/v2/caches/${CACHE_NAME} | jq

# Delete a cache
curl -X DELETE -k -u developer:developer ${RHDG_URL}/rest/v2/caches/${CACHE_NAME}
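
You can also write and read entries through the same REST API. A minimal sketch, assuming a cache whose encoding accepts plain-text entries and using an illustrative key and value:

# Put an entry with key "hello" and value "world"
curl -X PUT -k -u developer:developer -H "Content-Type: text/plain" -d "world" ${RHDG_URL}/rest/v2/caches/${CACHE_NAME}/hello

# Read the entry back
curl -X GET -k -u developer:developer ${RHDG_URL}/rest/v2/caches/${CACHE_NAME}/hello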

For more information about how to create caches using the CRD, please check the official documentation.

3.4. Monitoring RHDG with Prometheus

Data Grid exposes a metrics endpoint that provides statistics and events to Prometheus.

After installing OpenShift Container Platform 4.6, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. You can then query metrics, review dashboards, and manage alerting rules and silences for your own projects in the OpenShift Container Platform web console. We are going to take advantage of this feature.

⚠️
Enabling monitoring for user-defined projects

Monitoring of user-defined projects is not enabled by default. To enable it, you need to modify a ConfigMap in the openshift-monitoring project, which requires permissions to create and modify ConfigMaps in that project. You only have to execute this command once per OCP cluster. Please do not execute it without first checking whether it has already been done, as you could override your colleagues' work.

oc apply -f ocp/ocp-01-user-workload-monitoring.yaml
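
For reference, the file applies a configuration equivalent to the standard one documented by OpenShift for enabling user-workload monitoring (shown here as a sketch; the actual file in this repository may contain additional settings):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true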

After executing the command above, you will see some pods in the following namespace:

oc get pods -n openshift-user-workload-monitoring

I have created an OCP template to quickly configure metrics monitoring of a RHDG cluster. Execute the following command:

oc process -f rhdg-operator/rhdg-04-monitoring.yaml | oc apply -f -

This template provides two parameters to modify the project where the cluster was installed and the name of the cluster itself. By default, values are:

  • CLUSTER_NAMESPACE = rhdg8

  • CLUSTER_NAME = rhdg

Modify them by passing arguments to the template:

oc process -f rhdg-operator/rhdg-04-monitoring.yaml -p CLUSTER_NAMESPACE="another-namespace" -p CLUSTER_NAME="my-cluster" | oc apply -f -
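
User-workload Prometheus scrapes Data Grid through a ServiceMonitor object, which is the usual mechanism for this feature. Assuming the template creates one in the cluster namespace, you can verify it with the following command and then query the metrics from the metrics view of the OpenShift web console:

# Check that the monitoring objects were created (rhdg8 is the default cluster namespace)
oc get servicemonitor -n rhdg8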

For more information, access the OpenShift documentation for the monitoring stack and the RHDG documentation to configure monitoring for RHDG 8 on OCP.

4. Deploying RHDG on OCP using Helm Charts [Outdated to DG 8.2]

Helm is an application package manager for Kubernetes, which coordinates the download, installation, and deployment of apps. The original goal of Helm was to provide users with a better way to manage all the Kubernetes YAML files we create on Kubernetes projects using Helm Charts. A Chart is basically a set of templates and a file containing variables used to fill these templates. Let’s have a look at an example.

4.1. Option 1: Using an official Infinispan Helm Chart release

In order to create your first deployment easily, first add the OpenShift Helm Charts repository:

helm repo add openshift-helm-charts https://charts.openshift.io/

Create a new OCP project:

oc new-project rhdg8-helm --display-name="RHDG 8 - Helm" --description="This namespace contains a deployment of RHDG using the official Helm Chart"

Then, modify the rhdg-chart/infinispan-values.yaml file to configure your deployment.
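
A minimal values file might look like the following. This is only a sketch: the deploy.replicas key is an assumption based on the upstream infinispan chart, so check the chart's packaged values.yaml for the full schema.

deploy:
  replicas: 2   # number of Data Grid pods to create

Then install the chart: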

helm install infinispan openshift-helm-charts/infinispan-infinispan -f rhdg-chart/default-values.yaml

You will be able to authenticate to the cluster using the credentials obtained from the following command:

oc get secret infinispan-generated-secret \
-o jsonpath="{.data.identities-batch}" | base64 --decode

If you want to make changes, you need to update the values file and use the helm upgrade command:

helm upgrade infinispan openshift-helm-charts/infinispan-infinispan -f rhdg-chart/default-values.yaml

If you want to customize the server deployment (the infinispan.yaml file), you will need to provide the server configuration in YAML format. The documentation is not fully GA yet, so you can use the following examples:

4.2. Option 2: Customizing the official Helm Chart

To customize the Helm Chart, you will need to fork the official upstream chart and modify the configuration needed. In my case, I have forked it here, in order to change several aspects of the configuration.

  1. Clone your own git repo in the parent folder:

cd ..
git clone https://github.com/alvarolop/infinispan-helm-charts.git
cd infinispan-helm-charts

  2. Create a new OCP project:

oc new-project rhdg8-helm-customized --display-name="RHDG 8 - Helm Customized" --description="This namespace contains a deployment of RHDG using a customized Helm Chart"

  3. In order to deploy this unpackaged version of the Helm Chart, you just have to use Helm to render the OCP objects using the default values file and apply the result in your OCP cluster:

helm template --validate --set deploy.nameOverride="infinispan" . | oc apply -f -

Alternatively, you can use the values.yaml files defined in this repository:

helm template --validate --set deploy.nameOverride="infinispan" -f ../rhdg8-server/rhdg-chart/default-values.yaml . | oc apply -f -
ℹ️

In the previous commands, you need the following parameters:

  • --validate: By default, helm template does not validate your manifests against the Kubernetes cluster you are currently pointing at. You need to force it. (helm install does validate by default, which is why this parameter is only necessary in this section.)

  • --set deploy.nameOverride="infinispan": By default, the packaged Helm Chart uses the name of the package, infinispan. As this is not the packaged version, the name defaults to RELEASE-NAME, which is not a lowercase RFC 1123 subdomain.

For more information, check the following links:

5. Monitoring RHDG with Grafana

A typical OpenShift monitoring stack includes Prometheus for monitoring both systems and services, and Grafana for analyzing and visualizing metrics.

Administrators are often looking to write custom queries and create custom dashboards in Grafana. However, Grafana instances provided with the monitoring stack (and its dashboards) are read-only. To solve this problem, we can use the community-powered Grafana operator provided by OperatorHub.

To deploy the community-powered Grafana operator on OCP 4.9 just follow these steps:

5.1. Deploy the Grafana operator

oc process -f grafana/grafana-01-operator.yaml | oc apply -f -

5.2. Create a Grafana instance

Now, we will create a Grafana instance using the operator:

oc process -f grafana/grafana-02-instance.yaml | oc apply -f -

5.3. Create a Grafana data source

Now, we will create a Grafana data source:

PROJECT=grafana

oc adm policy add-cluster-role-to-user cluster-monitoring-view -z grafana-serviceaccount -n ${PROJECT}
BEARER_TOKEN=$(oc serviceaccounts get-token grafana-serviceaccount -n ${PROJECT})
oc process -f grafana/grafana-03-datasource.yaml -p BEARER_TOKEN=${BEARER_TOKEN} | oc apply -f -

5.4. Create a Grafana dashboard

Now, we will create a Grafana dashboard:

DASHBOARD_NAME="grafana-dashboard-rhdg8"
# Create a configMap containing the Dashboard
oc create configmap $DASHBOARD_NAME --from-file=dashboard=grafana/$DASHBOARD_NAME.json -n $PROJECT
# Create a Dashboard object that automatically updates Grafana
oc process -f grafana/grafana-04-dashboard.yaml -p DASHBOARD_NAME=$DASHBOARD_NAME | oc apply -f -
ℹ️
Here you can find information about other ways of creating dashboards.

5.5. Access Grafana as admin

After accessing Grafana using the OCP SSO, you may log in as admin. Retrieve the credentials from the secret using the following commands:

oc get secret grafana-admin-credentials -n $PROJECT -o jsonpath='{.data.GF_SECURITY_ADMIN_USER}' | base64 --decode
oc get secret grafana-admin-credentials -n $PROJECT -o jsonpath='{.data.GF_SECURITY_ADMIN_PASSWORD}' | base64 --decode

For more information, access the Grafana main documentation or the Grafana operator documentation.

6. Alerting with PrometheusRules

oc process -f rhdg-operator/rhdg-05-alerting-rules.yaml | oc apply -f -
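
This command creates PrometheusRule objects so that the user-workload Prometheus instance evaluates alerting rules against the Data Grid metrics. For reference, a PrometheusRule has roughly the following shape; this is only a sketch, the alert name and expression below are illustrative assumptions, and the actual rules live in the template of this repository.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: rhdg-alerting-rules
  namespace: rhdg8
spec:
  groups:
    - name: rhdg.rules
      rules:
        - alert: DataGridNodeDown              # illustrative alert name (assumption)
          expr: up{namespace="rhdg8"} == 0     # illustrative expression (assumption)
          for: 5m
          labels:
            severity: warning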

Annex: Configure the scope of your operator

An Operator group, defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators.

If you want to modify the default behavior of the template provided in this repository, modify lines 26 to 33 of this template.

1) AllNamespaces: The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string ""). This configuration allows us to create DG clusters in every namespace of the cluster:

- apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: datagrid
    namespace: ${OPERATOR_NAMESPACE}
  spec: {}

2) MultiNamespace: The Operator can be a member of an Operator group that selects more than one namespace. Choose this option if you want to have several operators that manage RHDG clusters. For example, if you want to have a different operator per Business Unit managing several OpenShift projects:

- apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: datagrid
    namespace: ${OPERATOR_NAMESPACE}
  spec:
    targetNamespaces:
      - ${CLUSTER_NAMESPACE-1}
      - ${CLUSTER_NAMESPACE-2}

3) SingleNamespace: The Operator can be a member of an Operator group that selects one namespace. This is useful if we want every application (Each OCP namespace) to be able to configure and deploy their own DG clusters:

- apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: datagrid
    namespace: ${OPERATOR_NAMESPACE}
  spec:
    targetNamespaces:
      - ${CLUSTER_NAMESPACE}

For more information, check the OpenShift documentation about Operator Groups and the official documentation to install DG on OpenShift.

Annex: Stern - Tail logs from multiple pods

In some situations, you will need to monitor logs from several pods of the same application, and maybe you want to check which pod a request arrived at. Stern allows you to tail multiple pods on Kubernetes and multiple containers within each pod. Each result is color-coded for quicker debugging.

First, you will need to install it on your machine. After that, log in to your cluster; monitoring the previous deployment is then as simple as executing the following command:

stern --namespace=$CLUSTER_NAMESPACE -l clusterName=$CLUSTER_NAME

The previous command will show all the logs from all the pods in a namespace that have a given label.

There are many filters and configuration options. Check the documentation for a full list.
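
For example, you can narrow the output to a single container and a recent time window. A minimal sketch, assuming the standard --container and --since flags and that the Data Grid container is named infinispan:

stern --namespace=$CLUSTER_NAMESPACE -l clusterName=$CLUSTER_NAME --container infinispan --since 15m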

Annex: Deploying the Infinispan operator

The same configuration rules from the previous chapter apply.

oc process -f rhdg-operator/infinispan-01-operator.yaml -p OPERATOR_NAMESPACE="infinispan-operator" -p CLUSTER_NAMESPACE="infinispan" | oc apply -f -
oc process -f rhdg-operator/rhdg-02-cluster-basic.yaml -p CLUSTER_NAMESPACE="infinispan" -p CLUSTER_NAME="rhdg" | oc apply -f -

It is also possible to install the operator from the web console. For more information, please check the official documentation.

Annex: Advanced stats and reporting for RHDG

Retrieve query stats

Since Infinispan 12, Data Grid includes metrics specifically related to queries on the server side. Retrieve them using the following script:

CACHE_NAME="operator-cache-01"
oc project $RHDG_NAMESPACE
for pod in $(oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
do
  echo "$pod: Get stats"
  oc exec $pod -- bash -c "curl \$HOSTNAME:\$RHDG_SERVICE_PORT_INFINISPAN/rest/v2/caches/$CACHE_NAME/search/stats" | jq
done

Retrieve server reports from OpenShift

Since Infinispan 12, Data Grid includes an option to download a server report from each pod. Retrieve it using the following script:

oc project $RHDG_NAMESPACE
for pod in $(oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
do
  echo "$pod: Generate report"
  oc exec $pod -- bash -c 'echo "server report" | ./bin/cli.sh -c $HOSTNAME:$RHDG_SERVICE_PORT_INFINISPAN -f -'
  echo "$pod: Download report"
  oc exec $pod -- bash -c 'files=( *tar.gz* ); cat "${files[0]}"' > $(date +"%Y-%m-%d-%H-%M")-$pod-report.tar.gz
  echo "$pod: Remove report"
  oc exec $pod -- bash -c 'rm -rf *tar.gz*'
done

Annex: Convert cache configurations

In Data Grid 7, caches were defined in XML format. Since RHDG 8, it is possible to use XML, JSON, or YAML. The server includes tools to automatically convert between these formats.

Option 1: Cache already exists in the cluster

CACHE_NAME="___protobuf_metadata"
# Get in XML
curl --digest -u developer:$DEV_PASS -H "Accept: application/xml" $INFINISPAN_SERVICE_HOST:11222/rest/v2/caches/$CACHE_NAME?action=config
# Get in JSON
curl --digest -u developer:$DEV_PASS -H "Accept: application/json" $INFINISPAN_SERVICE_HOST:11222/rest/v2/caches/$CACHE_NAME?action=config
# Get in YAML
curl --digest -u developer:$DEV_PASS -H "Accept: application/yaml" $INFINISPAN_SERVICE_HOST:11222/rest/v2/caches/$CACHE_NAME?action=config

Option 2: The cache is not in the cluster

The following example converts an XML definition to YAML:

curl "localhost:11222/rest/v2/caches?action=convert" \
  --digest -u developer:developer \
  -X POST \
  -H "Accept: application/yaml" \
  -H "Content-Type: application/xml" \
  -d '<?xml version="1.0" encoding="UTF-8"?><replicated-cache mode="SYNC" statistics="false"><encoding media-type="application/x-protostream"/><expiration lifespan="300000" /><memory max-size="400MB" when-full="REMOVE"/><state-transfer enabled="true" await-initial-transfer="false"/></replicated-cache>'

The result is the following YAML:

replicatedCache:
  mode: "SYNC"
  statistics: "false"
  encoding:
    key:
      mediaType: "application/x-protostream"
    value:
      mediaType: "application/x-protostream"
  expiration:
    lifespan: "300000"
  memory:
    maxSize: "400MB"
    whenFull: "REMOVE"
  stateTransfer:
    enabled: "true"
    awaitInitialTransfer: "false"

Annex: Getting full CR example

  1. Download the Infinispan CRD:

# Infinispan Operator 2.1.X
URL="https://raw.githubusercontent.com/infinispan/infinispan-operator/2.1.x/deploy/crds/infinispan.org_infinispans_crd.yaml"

# Infinispan Operator 2.2.X
URL="https://raw.githubusercontent.com/infinispan/infinispan-operator/2.2.x/config/crd/bases/infinispan.org_infinispans.yaml"

curl -o rhdg-crds/infinispan-2.2.x.yaml $URL

  2. Edit the file in order to create a new CRD instead of modifying the previous one.

  3. Create the object in the cluster:

oc apply -f rhdg-crds/infinispan-2.2.x.yaml

  4. Get the full list of options:

oc explain custominfinispan --recursive
