This post describes a deployment of Grafana on OpenShift 4. We could not use the Grafana Operator because, at the time of writing, it is not supported in disconnected environments. For more information, see Red Hat Operators Supported in Disconnected Mode.
The idea is to take advantage of Grafana's Auth Proxy authentication and use the OpenShift OAuth Proxy as an HTTP reverse proxy to handle authentication.
A reverse proxy that provides authentication and authorization to an OpenShift OAuth server or Kubernetes master supporting the 1.6+ remote authorization endpoints to validate access to content. It is intended for use within OpenShift clusters to make it easy to run both end-user and infrastructure services that don’t provide their own authentication.
Grafana can be configured to let an HTTP reverse proxy handle authentication. Popular web servers have a very extensive list of pluggable authentication modules, and any of them can be used with the auth proxy feature. Below are the auth proxy configuration options we use:
[auth.proxy]
enabled = true
#header_name = X-WEBAUTH-USER
#header_name = REMOTE_USER
header_name = X-Forwarded-User
header_property = username
auto_sign_up = true
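On the OpenShift side, the oauth-proxy runs as a sidecar container in front of Grafana and, after authenticating the user against the OpenShift OAuth server, passes the authenticated username upstream in the X-Forwarded-User header, which is what header_name above expects. As a rough sketch (container name, ports, secret paths, and the exact flag set are illustrative and vary by oauth-proxy version), the sidecar might look like:

```yaml
# Sketch: oauth-proxy sidecar in the Grafana Deployment.
# Names, ports, and mounted secret paths are illustrative.
- name: oauth-proxy
  image: registry.redhat.io/openshift4/ose-oauth-proxy:latest
  args:
    - -provider=openshift
    - -https-address=:8443
    - -http-address=
    - -upstream=http://localhost:3000        # Grafana listens here
    - -openshift-service-account=grafana     # SA carrying the OAuth redirect annotation
    - -cookie-secret-file=/etc/proxy/secrets/session_secret
    - -tls-cert=/etc/tls/private/tls.crt
    - -tls-key=/etc/tls/private/tls.key
  ports:
    - containerPort: 8443
      name: public
```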
The Grafana dashboard is deployed in its own project (grafana-app-dashboard). If you want to change the project name, update the following:
...
namespace: grafana-app-dashboard (1)
# I'm changing the names to match the namespace name.
# Please change "grafana-app-dashboard" prefix to match your namespace name
patches: (2)
  - target:
      kind: ClusterRoleBinding
      name: grafana-proxy
    patch: |-
      - op: replace
        path: /metadata/name
        value: grafana-app-dashboard-proxy
      - op: replace
        path: /subjects/0/namespace
        value: grafana-app-dashboard
  - target:
      kind: ClusterRoleBinding
      name: grafana-cluster-monitoring-view
    patch: |-
      - op: replace
        path: /metadata/name
        value: grafana-app-dashboard-cluster-monitoring-view
      - op: replace
        path: /subjects/0/namespace
        value: grafana-app-dashboard
  - target:
      kind: Group
      name: namespace-admins
    patch: |-
      - op: replace
        path: /metadata/name
        value: grafana-app-dashboard-admins
  - target:
      kind: Group
      name: namespace-users
    patch: |-
      - op: replace
        path: /metadata/name
        value: grafana-app-dashboard-users
  - target:
      kind: RoleBinding
      name: namespace-admins-rb
    patch: |-
      - op: replace
        path: /subjects/0/name
        value: grafana-app-dashboard-admins
  - target:
      kind: RoleBinding
      name: namespace-users-rb
    patch: |-
      - op: replace
        path: /subjects/0/name
        value: grafana-app-dashboard-users
...
(1) Change the namespace name.
(2) Change the "grafana-app-dashboard" prefix of all values to match your new namespace name.
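Since the prefix appears in many values, the renaming can also be scripted. A minimal sketch (demonstrated on an inline sample file; in the repo you would point it at the files under overlays/ instead, and it assumes GNU sed):

```shell
# Sketch: rename the "grafana-app-dashboard" prefix in one pass.
# Demonstrated on a sample file; target your overlay files instead.
NEW_NS=my-grafana
printf 'namespace: grafana-app-dashboard\n' > /tmp/kustomization-sample.yaml
sed -i "s/grafana-app-dashboard/${NEW_NS}/g" /tmp/kustomization-sample.yaml
cat /tmp/kustomization-sample.yaml
# → namespace: my-grafana
```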
To pull images from registry.redhat.io you need to create a pull secret (redhat-pull-secret) in the monitoring (grafana-app-dashboard) project.
{
  "auths": {
    "quay.io": {
      "auth": "...",
      "email": ""
    },
    "registry.redhat.io": {
      "auth": "..."
    }
  }
}
You need this to pull the image (registry.redhat.io/openshift4/ose-oauth-proxy:latest).
Please use https://access.redhat.com/terms-based-registry/ to create a Registry Service Account for the pull secret.
For more information, please refer to Red Hat Container Registry Authentication
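If you prefer to create the pull secret declaratively, a Secret of type kubernetes.io/dockerconfigjson can be used. A sketch (the data value is the base64-encoded contents of the auths JSON shown above; do not commit real credentials):

```yaml
# Sketch: declarative pull secret. The .dockerconfigjson value is the
# base64-encoded auths JSON from the previous snippet.
apiVersion: v1
kind: Secret
metadata:
  name: redhat-pull-secret
  namespace: grafana-app-dashboard
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded-auths-json>
```

Remember to link the secret to the ServiceAccount that pulls the image (for example with oc secrets link), or reference it via imagePullSecrets in the Deployment.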
Applications running in OpenShift Container Platform might have to discover information about the built-in OAuth server. For more information, please refer to OAuth server metadata.
To configure the OAUTH_ISSUER, run the following command
oc run --rm -i -t box --image=registry.redhat.io/ubi8-minimal --restart=Never -- curl https://openshift.default.svc/.well-known/oauth-authorization-server --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
The command returns a JSON document like the following:
{
  "issuer": "https://oauth-openshift.apps.cluster-1d9e.gcp.testdrive.openshift.com",
  "authorization_endpoint": "https://oauth-openshift.apps.cluster-1d9e.gcp.testdrive.openshift.com/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.cluster-1d9e.gcp.testdrive.openshift.com/oauth/token",
  "scopes_supported": [
    "user:check-access",
    "user:full",
    "user:info",
    "user:list-projects",
    "user:list-scoped-projects"
  ],
  "response_types_supported": [
    "code",
    "token"
  ],
  "grant_types_supported": [
    "authorization_code",
    "implicit"
  ],
  "code_challenge_methods_supported": [
    "plain",
    "S256"
  ]
}
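Only the issuer field is needed for the next step. A small sketch of extracting it with python3 (using an inline sample document here; pipe the real curl output instead):

```shell
# Sketch: pull the "issuer" field out of the metadata JSON.
# Replace the echo with the curl command output from above.
echo '{"issuer": "https://oauth-openshift.example.com"}' \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["issuer"])'
# → https://oauth-openshift.example.com
```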
Then modify overlays/oauth/kustomization.yaml, as in the following example:
configMapGenerator:
- literals:
  - OAUTH_ISSUER=https://oauth-openshift.apps.cluster-1d9e.gcp.testdrive.openshift.com (1)
  name: oauth-issuer
(1) Use the value of the 'issuer' field as OAUTH_ISSUER.
We need to connect Grafana to the cluster monitoring Thanos instance in the openshift-monitoring namespace.
For this reason, we define a grafana-config ConfigMap containing the datasource details, including an authentication token.
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config
data:
  ...
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - access: proxy
        editable: true
        isDefault: true
        jsonData:
          httpHeaderName1: 'Authorization'
          timeInterval: 5s
          tlsSkipVerify: true
        name: Prometheus
        secureJsonData:
          httpHeaderValue1: 'Bearer BEARER_TOKEN' (1)
        type: prometheus
        url: 'https://thanos-querier.openshift-monitoring.svc.cluster.local:9091'
  ...
(1) Bearer token used to authenticate to Thanos.
This token belongs to the grafana ServiceAccount and can only be determined at runtime. To handle this, we deploy a Kubernetes Job that patches the ConfigMap and recreates the Pod with the appropriate token.
apiVersion: batch/v1
kind: Job
metadata:
  name: patch-grafana-ds
spec:
  template:
    spec:
      containers:
        - image: registry.redhat.io/openshift4/ose-cli:v4.6
          command:
            - /bin/bash
            - -c
            - |
              set -e
              echo "Patching grafana datasource with token for authentication to prometheus"
              TOKEN=$(oc serviceaccounts get-token grafana)
              oc get cm grafana-config -o yaml | sed "s/BEARER_TOKEN/${TOKEN}/" | oc apply -f -
              oc delete pod -l deployment=grafana
          imagePullPolicy: Always
          name: patch-grafana-ds
      dnsPolicy: ClusterFirst
      restartPolicy: OnFailure
      serviceAccount: generate-grafana-ds-token-job-sa
      serviceAccountName: generate-grafana-ds-token-job-sa
      terminationGracePeriodSeconds: 30
This Job runs with a dedicated ServiceAccount that grants it just enough access to retrieve the token, patch the ConfigMap, and delete the Pod.
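A minimal Role for that ServiceAccount might look like the following. This is a sketch only: names are illustrative, and the exact rules depend on how the token is retrieved (oc serviceaccounts get-token reads the ServiceAccount's token Secret, hence the secrets rule).

```yaml
# Sketch: least-privilege Role for the token-patching Job's
# ServiceAccount, bound in the Grafana namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: generate-grafana-ds-token-job-role
rules:
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["secrets"]         # read the SA token secret
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["configmaps"]      # patch grafana-config
    verbs: ["get", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods"]            # recreate the Grafana Pod
    verbs: ["get", "list", "delete"]
```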
To deploy Grafana with Kustomize, run:
kustomize build overlays/oauth | oc apply -f -
For anonymous access, we use Grafana's anonymous authentication:
[auth.anonymous]
enabled = true
# Organization name that should be used for unauthenticated users
org_name = Main Org.
# Role for unauthenticated users, other valid values are `Editor` and `Admin`
org_role = Viewer