feat: adds helm chart with redis and es

Sean Norwood 2021-10-20 17:04:42 -05:00
parent 09584e43b8
commit 6d1f4aee71
86 changed files with 3551 additions and 0 deletions

chart/.helmignore Normal file
View File

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

chart/Chart.yaml Normal file
View File

@ -0,0 +1,24 @@
apiVersion: v2
name: tubearchivist
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v0.0.6"

View File

@ -0,0 +1,2 @@
tests/
.pytest_cache/

View File

@ -0,0 +1,12 @@
apiVersion: v1
appVersion: 7.15.0
description: Official Elastic helm chart for Elasticsearch
home: https://github.com/elastic/helm-charts
icon: https://helm.elastic.co/icons/elasticsearch.png
maintainers:
  - email: helm-charts@elastic.co
    name: Elastic
name: elasticsearch
sources:
- https://github.com/elastic/elasticsearch
version: 7.15.0

View File

@ -0,0 +1 @@
include ../helpers/common.mk

View File

@ -0,0 +1,456 @@
# Elasticsearch Helm Chart
[![Build Status](https://img.shields.io/jenkins/s/https/devops-ci.elastic.co/job/elastic+helm-charts+master.svg)](https://devops-ci.elastic.co/job/elastic+helm-charts+master/) [![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/elastic)](https://artifacthub.io/packages/search?repo=elastic)
This Helm chart is a lightweight way to configure and run our official
[Elasticsearch Docker image][].
<!-- development warning placeholder -->
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Requirements](#requirements)
- [Installing](#installing)
- [Install released version using Helm repository](#install-released-version-using-helm-repository)
- [Install development version from a branch](#install-development-version-from-a-branch)
- [Upgrading](#upgrading)
- [Usage notes](#usage-notes)
- [Configuration](#configuration)
- [Deprecated](#deprecated)
- [FAQ](#faq)
- [How to deploy this chart on a specific K8S distribution?](#how-to-deploy-this-chart-on-a-specific-k8s-distribution)
- [How to deploy dedicated nodes types?](#how-to-deploy-dedicated-nodes-types)
- [Clustering and Node Discovery](#clustering-and-node-discovery)
- [How to deploy clusters with security (authentication and TLS) enabled?](#how-to-deploy-clusters-with-security-authentication-and-tls-enabled)
- [How to migrate from helm/charts stable chart?](#how-to-migrate-from-helmcharts-stable-chart)
- [How to install plugins?](#how-to-install-plugins)
- [How to use the keystore?](#how-to-use-the-keystore)
- [Basic example](#basic-example)
- [Multiple keys](#multiple-keys)
- [Custom paths and keys](#custom-paths-and-keys)
- [How to enable snapshotting?](#how-to-enable-snapshotting)
- [How to configure templates post-deployment?](#how-to-configure-templates-post-deployment)
- [Contributing](#contributing)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<!-- Use this to update TOC: -->
<!-- docker run --rm -it -v $(pwd):/usr/src jorgeandrada/doctoc --github -->
## Requirements
* Kubernetes >= 1.14
* [Helm][] >= 2.17.0
* Minimum cluster requirements include the following to run this chart with
default settings. All of these settings are configurable.
* Three Kubernetes nodes to respect the default "hard" affinity settings
* 1GB of RAM for the JVM heap
See [supported configurations][] for more details.
## Installing
This chart is tested with the latest 7.15.0 version.
### Install released version using Helm repository
* Add the Elastic Helm charts repo:
`helm repo add elastic https://helm.elastic.co`
* Install it:
- with Helm 3: `helm install elasticsearch --version <version> elastic/elasticsearch`
- with Helm 2 (deprecated): `helm install --name elasticsearch --version <version> elastic/elasticsearch`
### Install development version from a branch
* Clone the git repo: `git clone git@github.com:elastic/helm-charts.git`
* Checkout the branch: `git checkout 7.15`
* Install it:
- with Helm 3: `helm install elasticsearch ./helm-charts/elasticsearch --set imageTag=7.15.0`
- with Helm 2 (deprecated): `helm install --name elasticsearch ./helm-charts/elasticsearch --set imageTag=7.15.0`
## Upgrading
Please always check [CHANGELOG.md][] and [BREAKING_CHANGES.md][] before
upgrading to a new chart version.
## Usage notes
* This repo includes a number of [examples][] configurations which can be used
as a reference. They are also used in the automated testing of this chart.
* Automated testing of this chart is currently only run against GKE (Google
Kubernetes Engine).
* The chart deploys a StatefulSet and by default will do an automated rolling
update of your cluster. It does this by waiting for the cluster health to become
green after each instance is updated. If you prefer to update manually you can
set `OnDelete` [updateStrategy][].
* It is important to set the JVM heap size in `esJavaOpts` and the CPU/Memory
`resources` to something suitable for your cluster (see the sizing sketch after
these notes).
* To simplify chart maintenance, each set of node groups is deployed as a
separate Helm release. Take a look at the [multi][] example to get an idea for
how this works. Without doing this it isn't possible to resize persistent
volumes in a StatefulSet. Setting it up this way makes it possible to add
more nodes with a new storage size and then drain the old ones. It also solves
the problem of allowing the user to determine which node groups to update first
when doing upgrades or changes.
* We have designed this chart to be very un-opinionated about how to configure
Elasticsearch. It exposes ways to set environment variables and mount secrets
inside of the container. Doing this makes it much easier for this chart to
support multiple versions with minimal changes.
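For example, a small values override pinning the heap and the container resources together might look like the following sketch (the numbers are purely illustrative, not sizing recommendations):
```yaml
# Illustrative sizing override -- adjust to your cluster.
esJavaOpts: "-Xmx1g -Xms1g"

resources:
  requests:
    cpu: "1000m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"
```
As a rule of thumb, keep the heap (`-Xms`/`-Xmx`) well below the memory limit so the operating system page cache has room to work.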
## Configuration
| Parameter | Description | Default |
|------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------|
| `antiAffinityTopologyKey` | The [anti-affinity][] topology key. By default this will prevent multiple Elasticsearch nodes from running on the same Kubernetes node | `kubernetes.io/hostname` |
| `antiAffinity` | Setting this to hard enforces the [anti-affinity][] rules. If it is set to soft it will be done "best effort". Other values will be ignored | `hard` |
| `clusterHealthCheckParams` | The [Elasticsearch cluster health status params][] that will be used by readiness [probe][] command | `wait_for_status=green&timeout=1s` |
| `clusterName` | This will be used as the Elasticsearch [cluster.name][] and should be unique per cluster in the namespace | `elasticsearch` |
| `enableServiceLinks`              | Set to false to disable service links, which can cause slow pod startup times when there are many services in the current namespace | `true` |
| `envFrom` | Templatable string to be passed to the [environment from variables][] which will be appended to the `envFrom:` definition for the container | `[]` |
| `esConfig` | Allows you to add any config files in `/usr/share/elasticsearch/config/` such as `elasticsearch.yml` and `log4j2.properties`. See [values.yaml][] for an example of the formatting | `{}` |
| `esJavaOpts` | [Java options][] for Elasticsearch. This is where you could configure the [jvm heap size][] | `""` |
| `esMajorVersion` | Deprecated. Instead, use the version of the chart corresponding to your ES minor version. Used to set major version specific configuration. If you are using a custom image and not running the default Elasticsearch version you will need to set this to the version you are running (e.g. `esMajorVersion: 6`) | `""` |
| `extraContainers` | Templatable string of additional `containers` to be passed to the `tpl` function | `""` |
| `extraEnvs` | Extra [environment variables][] which will be appended to the `env:` definition for the container | `[]` |
| `extraInitContainers` | Templatable string of additional `initContainers` to be passed to the `tpl` function | `""` |
| `extraVolumeMounts` | Templatable string of additional `volumeMounts` to be passed to the `tpl` function | `""` |
| `extraVolumes` | Templatable string of additional `volumes` to be passed to the `tpl` function | `""` |
| `fullnameOverride` | Overrides the `clusterName` and `nodeGroup` when used in the naming of resources. This should only be used when using a single `nodeGroup`, otherwise you will have name conflicts | `""` |
| `healthNameOverride` | Overrides `test-elasticsearch-health` pod name | `""` |
| `hostAliases` | Configurable [hostAliases][] | `[]` |
| `httpPort` | The http port that Kubernetes will use for the healthchecks and the service. If you change this you will also need to set [http.port][] in `extraEnvs` | `9200` |
| `imagePullPolicy` | The Kubernetes [imagePullPolicy][] value | `IfNotPresent` |
| `imagePullSecrets` | Configuration for [imagePullSecrets][] so that you can use a private registry for your image | `[]` |
| `imageTag` | The Elasticsearch Docker image tag | `7.15.0` |
| `image` | The Elasticsearch Docker image | `docker.elastic.co/elasticsearch/elasticsearch` |
| `ingress` | Configurable [ingress][] to expose the Elasticsearch service. See [values.yaml][] for an example | see [values.yaml][] |
| `initResources` | Allows you to set the [resources][] for the `initContainer` in the StatefulSet | `{}` |
| `keystore`                        | Allows you to map Kubernetes secrets into the keystore. See the [config example][] and [how to use the keystore][] | `[]` |
| `labels` | Configurable [labels][] applied to all Elasticsearch pods | `{}` |
| `lifecycle` | Allows you to add [lifecycle hooks][]. See [values.yaml][] for an example of the formatting | `{}` |
| `masterService` | The service name used to connect to the masters. You only need to set this if your master `nodeGroup` is set to something other than `master`. See [Clustering and Node Discovery][] for more information | `""` |
| `maxUnavailable` | The [maxUnavailable][] value for the pod disruption budget. By default this will prevent Kubernetes from having more than 1 unhealthy pod in the node group | `1` |
| `minimumMasterNodes` | The value for [discovery.zen.minimum_master_nodes][]. Should be set to `(master_eligible_nodes / 2) + 1`. Ignored in Elasticsearch versions >= 7 | `2` |
| `nameOverride` | Overrides the `clusterName` when used in the naming of resources | `""` |
| `networkHost` | Value for the [network.host Elasticsearch setting][] | `0.0.0.0` |
| `networkPolicy` | The [NetworkPolicy](https://kubernetes.io/docs/concepts/services-networking/network-policies/) to set. See [`values.yaml`](./values.yaml) for an example | `{http.enabled: false,transport.enabled: false}` |
| `nodeAffinity` | Value for the [node affinity settings][] | `{}` |
| `nodeGroup` | This is the name that will be used for each group of nodes in the cluster. The name will be `clusterName-nodeGroup-X` , `nameOverride-nodeGroup-X` if a `nameOverride` is specified, and `fullnameOverride-X` if a `fullnameOverride` is specified | `master` |
| `nodeSelector` | Configurable [nodeSelector][] so that you can target specific nodes for your Elasticsearch cluster | `{}` |
| `persistence` | Enables a persistent volume for Elasticsearch data. Can be disabled for nodes that only have [roles][] which don't require persistent data | see [values.yaml][] |
| `podAnnotations` | Configurable [annotations][] applied to all Elasticsearch pods | `{}` |
| `podManagementPolicy` | By default Kubernetes [deploys StatefulSets serially][]. This deploys them in parallel so that they can discover each other | `Parallel` |
| `podSecurityContext` | Allows you to set the [securityContext][] for the pod | see [values.yaml][] |
| `podSecurityPolicy`               | Configuration for creating a pod security policy with minimal permissions to run this Helm chart with `create: true`. Also can be used to reference an external pod security policy with `name: "externalPodSecurityPolicy"` | see [values.yaml][] |
| `priorityClassName` | The name of the [PriorityClass][]. No default is supplied as the PriorityClass must be created first | `""` |
| `protocol` | The protocol that will be used for the readiness [probe][]. Change this to `https` if you have `xpack.security.http.ssl.enabled` set | `http` |
| `rbac` | Configuration for creating a role, role binding and ServiceAccount as part of this Helm chart with `create: true`. Also can be used to reference an external ServiceAccount with `serviceAccountName: "externalServiceAccountName"` | see [values.yaml][] |
| `readinessProbe` | Configuration fields for the readiness [probe][] | see [values.yaml][] |
| `replicas` | Kubernetes replica count for the StatefulSet (i.e. how many pods) | `3` |
| `resources` | Allows you to set the [resources][] for the StatefulSet | see [values.yaml][] |
| `roles` | A hash map with the specific [roles][] for the `nodeGroup` | see [values.yaml][] |
| `schedulerName` | Name of the [alternate scheduler][] | `""` |
| `secretMounts`                    | Allows you to easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See [values.yaml][] for an example | `[]` |
| `securityContext` | Allows you to set the [securityContext][] for the container | see [values.yaml][] |
| `service.annotations` | [LoadBalancer annotations][] that Kubernetes will use for the service. This will configure load balancer if `service.type` is `LoadBalancer` | `{}` |
| `service.enabled` | Enable non-headless service | `true` |
| `service.externalTrafficPolicy` | Some cloud providers allow you to specify the [LoadBalancer externalTrafficPolicy][]. Kubernetes will use this to preserve the client source IP. This will configure load balancer if `service.type` is `LoadBalancer` | `""` |
| `service.httpPortName` | The name of the http port within the service | `http` |
| `service.labelsHeadless` | Labels to be added to headless service | `{}` |
| `service.labels` | Labels to be added to non-headless service | `{}` |
| `service.loadBalancerIP` | Some cloud providers allow you to specify the [loadBalancer][] IP. If the `loadBalancerIP` field is not specified, the IP is dynamically assigned. If you specify a `loadBalancerIP` but your cloud provider does not support the feature, it is ignored. | `""` |
| `service.loadBalancerSourceRanges` | The IP ranges that are allowed to access | `[]` |
| `service.nodePort` | Custom [nodePort][] port that can be set if you are using `service.type: nodePort` | `""` |
| `service.transportPortName` | The name of the transport port within the service | `transport` |
| `service.type` | Elasticsearch [Service Types][] | `ClusterIP` |
| `sysctlInitContainer` | Allows you to disable the `sysctlInitContainer` if you are setting [sysctl vm.max_map_count][] with another method | `enabled: true` |
| `sysctlVmMaxMapCount` | Sets the [sysctl vm.max_map_count][] needed for Elasticsearch | `262144` |
| `terminationGracePeriod` | The [terminationGracePeriod][] in seconds used when trying to stop the pod | `120` |
| `tests.enabled` | Enable creating test related resources when running `helm template` or `helm test` | `true` |
| `tolerations` | Configurable [tolerations][] | `[]` |
| `transportPort` | The transport port that Kubernetes will use for the service. If you change this you will also need to set [transport port configuration][] in `extraEnvs` | `9300` |
| `updateStrategy` | The [updateStrategy][] for the StatefulSet. By default Kubernetes will wait for the cluster to be green after upgrading each pod. Setting this to `OnDelete` will allow you to manually delete each pod during upgrades | `RollingUpdate` |
| `volumeClaimTemplate` | Configuration for the [volumeClaimTemplate for StatefulSets][]. You will want to adjust the storage (default `30Gi` ) and the `storageClassName` if you are using a different storage class | see [values.yaml][] |
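As an illustration, a small override file combining a few of the parameters above might look like this (every value here is an example, not a recommendation):
```yaml
clusterName: "logging"
replicas: 3
imageTag: "7.15.0"

esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: false

volumeClaimTemplate:
  resources:
    requests:
      storage: 50Gi
```
Such a file is passed to `helm install`/`helm upgrade` with `--values` (or `-f`).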
### Deprecated
| Parameter | Description | Default |
|-----------|---------------------------------------------------------------------------------------------------------------|---------|
| `fsGroup` | The Group ID (GID) for [securityContext][] so that the Elasticsearch user can read from the persistent volume | `""` |
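If you are still relying on the deprecated `fsGroup` value, the equivalent with current values is to set the group (and user) through `podSecurityContext` instead; a sketch, assuming the stock image's UID/GID of 1000:
```yaml
podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000
```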
## FAQ
### How to deploy this chart on a specific K8S distribution?
This chart is designed to run on production scale Kubernetes clusters with
multiple nodes, lots of memory and persistent storage. For that reason it can be
a bit tricky to run it against local Kubernetes environments such as
[Minikube][].
This chart is highly tested with [GKE][], but some K8S distributions also
require specific configurations.
We provide examples of configuration for the following K8S providers:
- [Docker for Mac][]
- [KIND][]
- [Minikube][]
- [MicroK8S][]
- [OpenShift][]
### How to deploy dedicated nodes types?
All the Elasticsearch pods deployed share the same configuration. If you need to
deploy dedicated [nodes types][] (for example dedicated master and data nodes),
you can deploy multiple releases of this chart with different configurations
while they share the same `clusterName` value.
For each Helm release, the node types can then be defined using the `roles` value.
An example of an Elasticsearch cluster using 2 different Helm releases for master
and data nodes can be found in [examples/multi][].
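For instance, a dedicated master release and a dedicated data release could be driven by two values files along these lines (a trimmed-down sketch of the [multi][] example):
```yaml
# master.yaml
clusterName: "elasticsearch"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "false"
  data: "false"
```
```yaml
# data.yaml
clusterName: "elasticsearch"
nodeGroup: "data"
roles:
  master: "false"
  ingest: "true"
  data: "true"
```
Each file is then installed as its own Helm release against the same chart.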
#### Clustering and Node Discovery
This chart facilitates Elasticsearch node discovery and services by creating two
`Service` definitions in Kubernetes, one with the name `$clusterName-$nodeGroup`
and another named `$clusterName-$nodeGroup-headless`.
Only `Ready` pods are a part of the `$clusterName-$nodeGroup` service, while all
pods (`Ready` or not) are a part of `$clusterName-$nodeGroup-headless`.
If your group of master nodes has the default `nodeGroup: master` then you can
just add new groups of nodes with a different `nodeGroup` and they will
automatically discover the correct master. If your master nodes have a different
`nodeGroup` name then you will need to set `masterService` to
`$clusterName-$masterNodeGroup`.
The chart value for `masterService` is used to populate
`discovery.zen.ping.unicast.hosts`, which Elasticsearch nodes will use to
contact master nodes and form a cluster.
Therefore, to add a group of nodes to an existing cluster, setting
`masterService` to the desired `Service` name of the related cluster is
sufficient.
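For example, if the master nodes were deployed with `clusterName: elasticsearch` and a non-default `nodeGroup: controller`, a values file adding a data-only group to that cluster would only need something like this sketch (the group names are hypothetical):
```yaml
clusterName: "elasticsearch"
nodeGroup: "data"
masterService: "elasticsearch-controller"
roles:
  master: "false"
  ingest: "true"
  data: "true"
```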
### How to deploy clusters with security (authentication and TLS) enabled?
This Helm chart can use existing [Kubernetes secrets][] to set up
credentials or certificates, for example. These secrets should be created
outside of this chart and accessed using [environment variables][] and volumes.
An example of an Elasticsearch cluster using security can be found in
[examples/security][].
### How to migrate from helm/charts stable chart?
If you currently have a cluster deployed with the [helm/charts stable][] chart
you can follow the [migration guide][].
### How to install plugins?
The recommended way to install plugins into our Docker images is to create a
[custom Docker image][].
The Dockerfile would look something like:
```
ARG elasticsearch_version
FROM docker.elastic.co/elasticsearch/elasticsearch:${elasticsearch_version}
RUN bin/elasticsearch-plugin install --batch repository-gcs
```
Then update the `image` in your values to point to your custom image.
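Assuming the image above was pushed as `myregistry.example.com/elasticsearch-with-gcs` (a placeholder name), that override would look like:
```yaml
image: "myregistry.example.com/elasticsearch-with-gcs"
imageTag: "7.15.0"
```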
There are a couple of reasons we recommend this.
1. Tying the availability of Elasticsearch to the download service to install
plugins is not a great idea or something that we recommend. Especially in
Kubernetes where it is normal and expected for a container to be moved to
another host at random times.
2. Mutating the state of a running Docker image (by installing plugins) goes
against best practices of containers and immutable infrastructure.
### How to use the keystore?
#### Basic example
Create the secret; the key name needs to be the keystore key path. In this
example we will create a secret from a file and from a literal string.
```
kubectl create secret generic encryption-key --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
kubectl create secret generic slack-hook --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
To add these secrets to the keystore:
```
keystore:
- secretName: encryption-key
- secretName: slack-hook
```
#### Multiple keys
All keys in the secret will be added to the keystore. To create the previous
example in one secret you could also do:
```
kubectl create secret generic keystore-secrets --from-file=xpack.watcher.encryption_key=./watcher_encryption_key --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
```
keystore:
- secretName: keystore-secrets
```
#### Custom paths and keys
If you are using these secrets for other applications (besides the Elasticsearch
keystore) then it is also possible to specify the keystore path and which keys
you want to add. Everything specified under each `keystore` item will be passed
through to the `volumeMounts` section for mounting the [secret][]. In this
example we will only add the `slack_hook` key from a secret that also has other
keys. Our secret looks like this:
```
kubectl create secret generic slack-secrets --from-literal=slack_channel='#general' --from-literal=slack_hook='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
```
We only want to add the `slack_hook` key to the keystore at path
`xpack.notification.slack.account.monitoring.secure_url`:
```
keystore:
  - secretName: slack-secrets
    items:
      - key: slack_hook
        path: xpack.notification.slack.account.monitoring.secure_url
```
You can also take a look at the [config example][] which is used as part of the
automated testing pipeline.
### How to enable snapshotting?
1. Install your [snapshot plugin][] into a custom Docker image following the
[how to install plugins guide][].
2. Add any required secrets or credentials into an Elasticsearch keystore
following the [how to use the keystore][] guide (see the sketch below this list).
3. Configure the [snapshot repository][] as you normally would.
4. To automate snapshots you can use [Snapshot Lifecycle Management][] or a tool
like [curator][].
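As an example of step 2, credentials for the `repository-gcs` plugin could be wired into the keystore like this (the secret name is an assumption; the keystore path follows the plugin's `gcs.client.<name>.credentials_file` setting):
```yaml
keystore:
  - secretName: gcs-credentials
    items:
      - key: gcs_credentials.json
        path: gcs.client.default.credentials_file
```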
### How to configure templates post-deployment?
You can use `postStart` [lifecycle hooks][] to run code triggered after a
container is created.
Here is an example of `postStart` hook to configure templates:
```yaml
lifecycle:
  postStart:
    exec:
      command:
        - bash
        - -c
        - |
          #!/bin/bash
          # Add a template to adjust number of shards/replicas
          TEMPLATE_NAME=my_template
          INDEX_PATTERN="logstash-*"
          SHARD_COUNT=8
          REPLICA_COUNT=1
          ES_URL=http://localhost:9200
          while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
          curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
```
## Contributing
Please check [CONTRIBUTING.md][] before any contribution or for any questions
about our development and testing process.
[7.15]: https://github.com/elastic/helm-charts/releases
[#63]: https://github.com/elastic/helm-charts/issues/63
[BREAKING_CHANGES.md]: https://github.com/elastic/helm-charts/blob/master/BREAKING_CHANGES.md
[CHANGELOG.md]: https://github.com/elastic/helm-charts/blob/master/CHANGELOG.md
[CONTRIBUTING.md]: https://github.com/elastic/helm-charts/blob/master/CONTRIBUTING.md
[alternate scheduler]: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/#specify-schedulers-for-pods
[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
[anti-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
[cluster.name]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/important-settings.html#cluster-name
[clustering and node discovery]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/README.md#clustering-and-node-discovery
[config example]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/config/values.yaml
[curator]: https://www.elastic.co/guide/en/elasticsearch/client/curator/7.9/snapshot.html
[custom docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/docker.html#_c_customized_image
[deploys statefulsets serially]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-management-policies
[discovery.zen.minimum_master_nodes]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/discovery-settings.html#minimum_master_nodes
[docker for mac]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/docker-for-mac
[elasticsearch cluster health status params]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/cluster-health.html#request-params
[elasticsearch docker image]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/docker.html
[environment variables]: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config
[environment from variables]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables
[examples]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/
[examples/multi]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/multi
[examples/security]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/security
[gke]: https://cloud.google.com/kubernetes-engine
[helm]: https://helm.sh
[helm/charts stable]: https://github.com/helm/charts/tree/master/stable/elasticsearch/
[how to install plugins guide]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/README.md#how-to-install-plugins
[how to use the keystore]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/README.md#how-to-use-the-keystore
[http.port]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/modules-http.html#_settings
[imagePullPolicy]: https://kubernetes.io/docs/concepts/containers/images/#updating-images
[imagePullSecrets]: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret
[ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[java options]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/jvm-options.html
[jvm heap size]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/heap-size.html
[hostAliases]: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
[kind]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/kubernetes-kind
[kubernetes secrets]: https://kubernetes.io/docs/concepts/configuration/secret/
[labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
[lifecycle hooks]: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
[loadBalancer annotations]: https://kubernetes.io/docs/concepts/services-networking/service/#ssl-support-on-aws
[loadBalancer externalTrafficPolicy]: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
[loadBalancer]: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
[maxUnavailable]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/#specifying-a-poddisruptionbudget
[migration guide]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/migration/README.md
[minikube]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/minikube
[microk8s]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/microk8s
[multi]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/multi/
[network.host elasticsearch setting]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/network.host.html
[node affinity settings]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
[node-certificates]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/configuring-tls.html#node-certificates
[nodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
[nodes types]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/modules-node.html
[nodeSelector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
[openshift]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/openshift
[priorityClass]: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
[probe]: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
[resources]: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
[roles]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/modules-node.html
[secret]: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
[securityContext]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
[service types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
[snapshot lifecycle management]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/snapshot-lifecycle-management.html
[snapshot plugin]: https://www.elastic.co/guide/en/elasticsearch/plugins/7.15/repository.html
[snapshot repository]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/modules-snapshots.html
[supported configurations]: https://github.com/elastic/helm-charts/tree/7.15/README.md#supported-configurations
[sysctl vm.max_map_count]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/vm-max-map-count.html#vm-max-map-count
[terminationGracePeriod]: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
[tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[transport port configuration]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/modules-transport.html#_transport_settings
[updateStrategy]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
[values.yaml]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/values.yaml
[volumeClaimTemplate for statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage

View File

@ -0,0 +1,21 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-config
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
secrets:
	kubectl delete secret elastic-config-credentials elastic-config-secret elastic-config-slack elastic-config-custom-path || true
	kubectl create secret generic elastic-config-credentials --from-literal=password=changeme --from-literal=username=elastic
	kubectl create secret generic elastic-config-slack --from-literal=xpack.notification.slack.account.monitoring.secure_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd'
	kubectl create secret generic elastic-config-secret --from-file=xpack.watcher.encryption_key=./watcher_encryption_key
	kubectl create secret generic elastic-config-custom-path --from-literal=slack_url='https://hooks.slack.com/services/asdasdasd/asdasdas/asdasd' --from-literal=thing_i_don_tcare_about=test
test: secrets install goss
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,27 @@
# Config
This example deploys a single-node Elasticsearch 7.15.0 cluster with
authentication and custom [values][].
## Usage
* Create the required secrets: `make secrets`
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/config-master 9200
curl -u elastic:changeme http://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/config/test/goss.yaml
[values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/config/values.yaml

View File

@ -0,0 +1,29 @@
http:
  http://localhost:9200/_cluster/health:
    status: 200
    timeout: 2000
    username: "{{ .Env.ELASTIC_USERNAME }}"
    password: "{{ .Env.ELASTIC_PASSWORD }}"
    body:
      - "green"
      - '"number_of_nodes":1'
      - '"number_of_data_nodes":1'
  http://localhost:9200:
    status: 200
    timeout: 2000
    username: "{{ .Env.ELASTIC_USERNAME }}"
    password: "{{ .Env.ELASTIC_PASSWORD }}"
    body:
      - '"cluster_name" : "config"'
      - "You Know, for Search"
command:
  "elasticsearch-keystore list":
    exit-status: 0
    stdout:
      - keystore.seed
      - bootstrap.password
      - xpack.notification.slack.account.monitoring.secure_url
      - xpack.notification.slack.account.otheraccount.secure_url
      - xpack.watcher.encryption_key

View File

@ -0,0 +1,32 @@
---
clusterName: "config"
replicas: 1
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-config-credentials
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-config-credentials
        key: username
# This is just a dummy file to make sure that
# the keystore can be mounted at the same time
# as a custom elasticsearch.yml
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    path.data: /usr/share/elasticsearch/data
keystore:
  - secretName: elastic-config-secret
  - secretName: elastic-config-slack
  - secretName: elastic-config-custom-path
    items:
      - key: slack_url
        path: xpack.notification.slack.account.otheraccount.secure_url

View File

@ -0,0 +1 @@
supersecret

View File

@ -0,0 +1,14 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-default
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install $(RELEASE) ../../
test: install goss
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,25 @@
# Default
This example deploys a 3-node Elasticsearch 7.15.0 cluster using
[default values][].
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/default/test/goss.yaml
[default values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/values.yaml

View File

@ -0,0 +1,19 @@
#!/usr/bin/env bash -x
kubectl proxy || true &
make &
PROC_ID=$!
while kill -0 "$PROC_ID" >/dev/null 2>&1; do
  echo "PROCESS IS RUNNING"
  if curl --fail 'http://localhost:8001/api/v1/proxy/namespaces/default/services/elasticsearch-master:9200/_search' ; then
    echo "cluster is healthy"
  else
    echo "cluster not healthy!"
    exit 1
  fi
  sleep 1
done
echo "PROCESS TERMINATED"
exit 0

View File

@ -0,0 +1,38 @@
kernel-param:
  vm.max_map_count:
    value: "262144"
http:
  http://elasticsearch-master:9200/_cluster/health:
    status: 200
    timeout: 2000
    body:
      - "green"
      - '"number_of_nodes":3'
      - '"number_of_data_nodes":3'
  http://localhost:9200:
    status: 200
    timeout: 2000
    body:
      - '"number" : "7.15.0"'
      - '"cluster_name" : "elasticsearch"'
      - "You Know, for Search"
file:
  /usr/share/elasticsearch/data:
    exists: true
    mode: "2775"
    owner: root
    group: elasticsearch
    filetype: directory
mount:
  /usr/share/elasticsearch/data:
    exists: true
user:
  elasticsearch:
    exists: true
    uid: 1000
    gid: 1000

View File

@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-docker-for-mac
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
	helm test $(RELEASE)
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,23 @@
# Docker for Mac
This example deploys a 3-node Elasticsearch 7.15.0 cluster on [Docker for Mac][]
using [custom values][].
Note that this configuration should be used for test only and isn't recommended
for production.
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/docker-for-mac/values.yaml
[docker for mac]: https://docs.docker.com/docker-for-mac/kubernetes/

View File

@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "hostpath"
  resources:
    requests:
      storage: 100M

View File

@ -0,0 +1,17 @@
default: test
RELEASE := helm-es-kind
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
install-local-path:
	kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values-local-path.yaml $(RELEASE) ../../
test: install
	helm test $(RELEASE)
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,36 @@
# KIND
This example deploys a 3-node Elasticsearch 7.15.0 cluster on [Kind][]
using [custom values][].
Note that this configuration should be used for test only and isn't recommended
for production.
Note that Kind versions < 0.7.0 are affected by a [kind issue][]: mount points
created from PVCs are not writable by non-root users. [kubernetes-sigs/kind#1157][]
fixes this in Kind 0.7.0.
The workaround for Kind < 0.7.0 is to manually install
[Rancher Local Path Provisioner][] and use the `local-path` storage class for
Elasticsearch volumes (see [Makefile][] instructions).
## Usage
* For Kind >= 0.7.0: Deploy Elasticsearch chart with the default values: `make install`
* For Kind < 0.7.0: Deploy Elasticsearch chart with `local-path` storage class: `make install-local-path`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/blob/7.15/elasticsearch/examples/kubernetes-kind/values.yaml
[kind]: https://kind.sigs.k8s.io/
[kind issue]: https://github.com/kubernetes-sigs/kind/issues/830
[kubernetes-sigs/kind#1157]: https://github.com/kubernetes-sigs/kind/pull/1157
[rancher local path provisioner]: https://github.com/rancher/local-path-provisioner
[Makefile]: https://github.com/elastic/helm-charts/blob/7.15/elasticsearch/examples/kubernetes-kind/Makefile#L5

View File

@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "local-path"
  resources:
    requests:
      storage: 100M

View File

@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "local-path"
  resources:
    requests:
      storage: 100M

View File

@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-microk8s
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
	helm test $(RELEASE)
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,32 @@
# MicroK8S
This example deploys a 3-node Elasticsearch 7.15.0 cluster on [MicroK8S][]
using [custom values][].
Note that this configuration should be used for test only and isn't recommended
for production.
## Requirements
The following MicroK8S [addons][] need to be enabled:
- `dns`
- `helm`
- `storage`
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[addons]: https://microk8s.io/docs/addons
[custom values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/microk8s/values.yaml
[MicroK8S]: https://microk8s.io

View File

@ -0,0 +1,32 @@
---
# Disable privileged init Container creation.
sysctlInitContainer:
  enabled: false
# Restrict the use of the memory-mapping when sysctlInitContainer is disabled.
esConfig:
  elasticsearch.yml: |
    node.store.allow_mmap: false
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "microk8s-hostpath"
  resources:
    requests:
      storage: 100M

View File

@ -0,0 +1,10 @@
PREFIX := helm-es-migration
data:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values data.yaml $(PREFIX)-data ../../
master:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values master.yaml $(PREFIX)-master ../../
client:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values client.yaml $(PREFIX)-client ../../

View File

@ -0,0 +1,167 @@
# Migration Guide from helm/charts
There are two viable options for migrating from the community Elasticsearch Helm
chart from the [helm/charts][] repo.
1. Restoring from Snapshot to a fresh cluster
2. Live migration by joining a new cluster to the existing cluster.
## Restoring from Snapshot
This is the recommended and preferred option. The downside is that it will
involve a period of write downtime during the migration. If you have a way to
temporarily stop writes to your cluster then this is the way to go. This is also
a lot simpler as it just involves launching a fresh cluster and restoring a
snapshot following the [restoring to a different cluster guide][].
## Live migration
If restoring from a snapshot is not possible due to the write downtime then a
live migration is also possible. It is very important to first test this in a
testing environment to make sure you are comfortable with the process and fully
understand what is happening.
This process will involve joining a new set of master, data and client nodes to
an existing cluster that has been deployed using the [helm/charts][] community
chart. Nodes will then be replaced one by one in a controlled fashion to
decommission the old cluster.
This example will be using the default values for the existing helm/charts
release and for the Elastic helm-charts release. If you have changed any of the
default values then you will need to first make sure that your values are
configured in a compatible way before starting the migration.
The process will involve a re-sync and a rolling restart of all of your data
nodes. Therefore it is important to disable shard allocation and perform a synced
flush like you normally would during any other rolling upgrade. See the
[rolling upgrades guide][] for more information.
* The default image for this chart is
`docker.elastic.co/elasticsearch/elasticsearch` which contains the default
distribution of Elasticsearch with a [basic license][]. Make sure to update the
`image` and `imageTag` values to the correct Docker image and Elasticsearch
version that you currently have deployed.
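For example (the tag and major version below are placeholders; match them to what you currently run):
```yaml
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "6.8.23"
esMajorVersion: 6
```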
* Convert your current helm/charts configuration into something that is
compatible with this chart.
* Take a fresh snapshot of your cluster. If something goes wrong you want to be
able to restore your data no matter what.
* Check that your cluster's health is green. If not, abort and make sure your
cluster is healthy before continuing:
```
curl localhost:9200/_cluster/health
```
* Deploy new data nodes which will join the existing cluster. Take a look at the
configuration in [data.yaml][]:
```
make data
```
* Check that the new nodes have joined the cluster (run this and any other curl
commands from within one of your pods):
```
curl localhost:9200/_cat/nodes
```
* Check that your cluster is still green. If so, we can now start to scale down
the existing data nodes. Assuming you have the default number of data nodes (2),
we now want to scale them down to 1:
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=1
```
* Wait for your cluster to become green again:
```
watch 'curl -s localhost:9200/_cluster/health'
```
* Once the cluster is green we can scale down again:
```
kubectl scale statefulsets my-release-elasticsearch-data --replicas=0
```
* Wait for the cluster to be green again.
* OK. We now have all data nodes running in the new cluster. Time to replace the
masters, starting by scaling down the masters from 3 to 2. Between each step make
sure to wait for the cluster to become green again, and check with
`curl localhost:9200/_cat/nodes` that you see the correct number of master
nodes. During this process we will always make sure to keep at least 2 master
nodes so as not to lose quorum:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=2
```
* Now deploy a single new master so that we have 3 masters again. See
[master.yaml][] for the configuration:
```
make master
```
* Scale down old masters to 1:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=1
```
* Edit the masters in [master.yaml][] to 2 and redeploy:
```
make master
```
* Scale down the old masters to 0:
```
kubectl scale statefulsets my-release-elasticsearch-master --replicas=0
```
* Edit [master.yaml][] to have 3 replicas and remove the
`discovery.zen.ping.unicast.hosts` entry from `extraEnvs`, then redeploy the
masters. This will make sure all 3 masters are running in the new cluster and
are pointing at each other for discovery:
```
make master
```
* Remove the `discovery.zen.ping.unicast.hosts` entry from `extraEnvs` then
redeploy the data nodes to make sure they are pointing at the new masters:
```
make data
```
* Deploy the client nodes:
```
make client
```
* Update any processes that are talking to the existing client nodes and point
them to the new client nodes. Once this is done you can scale down the old
client nodes:
```
kubectl scale deployment my-release-elasticsearch-client --replicas=0
```
* The migration should now be complete. After verifying that everything is
working correctly you can cleanup leftover resources from your old cluster.
[basic license]: https://www.elastic.co/subscriptions
[data.yaml]: https://github.com/elastic/helm-charts/blob/7.15/elasticsearch/examples/migration/data.yaml
[helm/charts]: https://github.com/helm/charts/tree/7.15/stable/elasticsearch
[master.yaml]: https://github.com/elastic/helm-charts/blob/7.15/elasticsearch/examples/migration/master.yaml
[restoring to a different cluster guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/modules-snapshots.html#_restoring_to_a_different_cluster
[rolling upgrades guide]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/rolling-upgrades.html

View File

@ -0,0 +1,23 @@
---
replicas: 2
clusterName: "elasticsearch"
nodeGroup: "client"
esMajorVersion: 6
roles:
  master: "false"
  ingest: "false"
  data: "false"
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "standard"
  resources:
    requests:
      storage: 1Gi # Currently needed till pvcs are made optional
persistence:
  enabled: false

View File

@ -0,0 +1,17 @@
---
replicas: 2
esMajorVersion: 6
extraEnvs:
  - name: discovery.zen.ping.unicast.hosts
    value: "my-release-elasticsearch-discovery"
clusterName: "elasticsearch"
nodeGroup: "data"
roles:
  master: "false"
  ingest: "false"
  data: "true"

View File

@ -0,0 +1,26 @@
---
# Temporarily set to 3 so we can scale up/down the old and new cluster
# one at a time whilst always keeping 3 masters running
replicas: 1
esMajorVersion: 6
extraEnvs:
  - name: discovery.zen.ping.unicast.hosts
    value: "my-release-elasticsearch-discovery"
clusterName: "elasticsearch"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "false"
  data: "false"
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "standard"
  resources:
    requests:
      storage: 4Gi

View File

@ -0,0 +1,13 @@
default: test
RELEASE := helm-es-minikube
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install
	helm test $(RELEASE)
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,38 @@
# Minikube
This example deploys a 3-node Elasticsearch 7.15.0 cluster on [Minikube][]
using [custom values][].
If helm or kubectl timeouts occur, you may consider creating a minikube VM with
more CPU cores or memory allocated.
Note that this configuration should be used for test only and isn't recommended
for production.
## Requirements
In order to properly support the required persistent volume claims for the
Elasticsearch StatefulSet, the `default-storageclass` and `storage-provisioner`
minikube addons must be enabled.
```
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
```
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
[custom values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/minikube/values.yaml
[minikube]: https://minikube.sigs.k8s.io/docs/

View File

@ -0,0 +1,23 @@
---
# Permit co-located instances for solitary minikube virtual machines.
antiAffinity: "soft"
# Shrink default JVM heap.
esJavaOpts: "-Xmx128m -Xms128m"
# Allocate smaller chunks of memory per pod.
resources:
  requests:
    cpu: "100m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
# Request smaller persistent volumes.
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "standard"
  resources:
    requests:
      storage: 100M

View File

@ -0,0 +1,19 @@
default: test
include ../../../helpers/examples.mk
PREFIX := helm-es-multi
RELEASE := helm-es-multi-master
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values master.yaml $(PREFIX)-master ../../
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values data.yaml $(PREFIX)-data ../../
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values client.yaml $(PREFIX)-client ../../
test: install goss
purge:
	helm del $(PREFIX)-master
	helm del $(PREFIX)-data
	helm del $(PREFIX)-client

View File

@ -0,0 +1,29 @@
# Multi
This example deploys an Elasticsearch 7.15.0 cluster composed of 3 different Helm
releases:
- `helm-es-multi-master` for the 3 master nodes using [master values][]
- `helm-es-multi-data` for the 3 data nodes using [data values][]
- `helm-es-multi-client` for the 3 client nodes using [client values][]
## Usage
* Deploy the 3 Elasticsearch releases: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/multi-master 9200
curl -u elastic:changeme http://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[client values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/multi/client.yaml
[data values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/multi/data.yaml
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/multi/test/goss.yaml
[master values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/multi/master.yaml

View File

@ -0,0 +1,14 @@
---
clusterName: "multi"
nodeGroup: "client"
roles:
  master: "false"
  ingest: "false"
  data: "false"
  ml: "false"
  remote_cluster_client: "false"
persistence:
  enabled: false

View File

@ -0,0 +1,11 @@
---
clusterName: "multi"
nodeGroup: "data"
roles:
  master: "false"
  ingest: "true"
  data: "true"
  ml: "false"
  remote_cluster_client: "false"

View File

@ -0,0 +1,11 @@
---
clusterName: "multi"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "false"
  data: "false"
  ml: "false"
  remote_cluster_client: "false"

View File

@ -0,0 +1,9 @@
http:
  http://localhost:9200/_cluster/health:
    status: 200
    timeout: 2000
    body:
      - 'green'
      - '"cluster_name":"multi"'
      - '"number_of_nodes":9'
      - '"number_of_data_nodes":3'

View File

@ -0,0 +1,14 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-networkpolicy
TIMEOUT := 1200s
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install goss
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,37 @@
networkPolicy:
  http:
    enabled: true
    explicitNamespacesSelector:
      # Accept from namespaces with all those different rules (from whitelisted Pods)
      matchLabels:
        role: frontend-http
      matchExpressions:
        - {key: role, operator: In, values: [frontend-http]}
    additionalRules:
      - podSelector:
          matchLabels:
            role: frontend-http
      - podSelector:
          matchExpressions:
            - key: role
              operator: In
              values:
                - frontend-http
  transport:
    enabled: true
    allowExternal: true
    explicitNamespacesSelector:
      matchLabels:
        role: frontend-transport
      matchExpressions:
        - {key: role, operator: In, values: [frontend-transport]}
    additionalRules:
      - podSelector:
          matchLabels:
            role: frontend-transport
      - podSelector:
          matchExpressions:
            - key: role
              operator: In
              values:
                - frontend-transport

View File

@ -0,0 +1,13 @@
default: test
include ../../../helpers/examples.mk
RELEASE := elasticsearch
install:
	helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: install goss
purge:
	helm del $(RELEASE)

View File

@ -0,0 +1,24 @@
# OpenShift
This example deploys a 3-node Elasticsearch 7.15.0 cluster on [OpenShift][]
using [custom values][].
## Usage
* Deploy Elasticsearch chart with the default values: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/elasticsearch-master 9200
curl localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[custom values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/openshift/values.yaml
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/openshift/test/goss.yaml
[openshift]: https://www.openshift.com/

View File

@ -0,0 +1,16 @@
http:
  http://localhost:9200/_cluster/health:
    status: 200
    timeout: 2000
    body:
      - "green"
      - '"number_of_nodes":3'
      - '"number_of_data_nodes":3'
  http://localhost:9200:
    status: 200
    timeout: 2000
    body:
      - '"number" : "7.15.0"'
      - '"cluster_name" : "elasticsearch"'
      - "You Know, for Search"

View File

@ -0,0 +1,11 @@
---
securityContext:
runAsUser: null
podSecurityContext:
fsGroup: null
runAsUser: null
sysctlInitContainer:
enabled: false

View File

@ -0,0 +1,38 @@
default: test
include ../../../helpers/examples.mk
RELEASE := helm-es-security
ELASTICSEARCH_IMAGE := docker.elastic.co/elasticsearch/elasticsearch:$(STACK_VERSION)
TIMEOUT := 1200s
install:
helm upgrade --wait --timeout=$(TIMEOUT) --install --values values.yaml $(RELEASE) ../../
test: secrets install goss
purge:
kubectl delete secrets elastic-credentials elastic-certificates elastic-certificate-pem elastic-certificate-crt|| true
helm del $(RELEASE)
pull-elasticsearch-image:
docker pull $(ELASTICSEARCH_IMAGE)
secrets:
docker rm -f elastic-helm-charts-certs || true
rm -f elastic-certificates.p12 elastic-certificate.pem elastic-certificate.crt elastic-stack-ca.p12 || true
password=$$([ ! -z "$$ELASTIC_PASSWORD" ] && echo $$ELASTIC_PASSWORD || echo $$(docker run --rm busybox:1.31.1 /bin/sh -c "< /dev/urandom tr -cd '[:alnum:]' | head -c20")) && \
docker run --name elastic-helm-charts-certs -i -w /app \
$(ELASTICSEARCH_IMAGE) \
/bin/sh -c " \
elasticsearch-certutil ca --out /app/elastic-stack-ca.p12 --pass '' && \
elasticsearch-certutil cert --name security-master --dns security-master --ca /app/elastic-stack-ca.p12 --pass '' --ca-pass '' --out /app/elastic-certificates.p12" && \
docker cp elastic-helm-charts-certs:/app/elastic-certificates.p12 ./ && \
docker rm -f elastic-helm-charts-certs && \
openssl pkcs12 -nodes -passin pass:'' -in elastic-certificates.p12 -out elastic-certificate.pem && \
openssl x509 -outform der -in elastic-certificate.pem -out elastic-certificate.crt && \
kubectl create secret generic elastic-certificates --from-file=elastic-certificates.p12 && \
kubectl create secret generic elastic-certificate-pem --from-file=elastic-certificate.pem && \
kubectl create secret generic elastic-certificate-crt --from-file=elastic-certificate.crt && \
kubectl create secret generic elastic-credentials --from-literal=password=$$password --from-literal=username=elastic && \
rm -f elastic-certificates.p12 elastic-certificate.pem elastic-certificate.crt elastic-stack-ca.p12

View File

@ -0,0 +1,29 @@
# Security
This example deploys a 3-node Elasticsearch 7.15.0 cluster with authentication and
autogenerated TLS certificates (see [values][]).
Note that this configuration should be used for testing only. For a production
deployment you should generate SSL certificates following the [official docs][].
## Usage
* Create the required secrets: `make secrets`
* Deploy Elasticsearch chart with the example [values][]: `make install`
* You can now set up a port forward to query the Elasticsearch API:
```
kubectl port-forward svc/security-master 9200
curl -u elastic:changeme https://localhost:9200/_cat/indices
```
## Testing
You can also run [goss integration tests][] using `make test`
[goss integration tests]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/security/test/goss.yaml
[official docs]: https://www.elastic.co/guide/en/elasticsearch/reference/7.15/configuring-tls.html#node-certificates
[values]: https://github.com/elastic/helm-charts/tree/7.15/elasticsearch/examples/security/values.yaml

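Since `make secrets` generates a random password unless `ELASTIC_PASSWORD` is exported, `changeme` above only works if you set it yourself. A sketch for retrieving the generated credentials and querying through the port forward; `-k` is needed because the autogenerated certificate is issued for `security-master`, not `localhost`:
```
ELASTIC_PASSWORD=$(kubectl get secret elastic-credentials -o jsonpath='{.data.password}' | base64 -d)
curl -k -u "elastic:${ELASTIC_PASSWORD}" https://localhost:9200/_cat/indices
```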
View File

@ -0,0 +1,44 @@
http:
https://security-master:9200/_cluster/health:
status: 200
timeout: 2000
allow-insecure: true
username: "{{ .Env.ELASTIC_USERNAME }}"
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
https://localhost:9200/:
status: 200
timeout: 2000
allow-insecure: true
username: "{{ .Env.ELASTIC_USERNAME }}"
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- '"cluster_name" : "security"'
- "You Know, for Search"
https://localhost:9200/_xpack/license:
status: 200
timeout: 2000
allow-insecure: true
username: "{{ .Env.ELASTIC_USERNAME }}"
password: "{{ .Env.ELASTIC_PASSWORD }}"
body:
- "active"
- "basic"
file:
/usr/share/elasticsearch/config/elasticsearch.yml:
exists: true
contains:
- "xpack.security.enabled: true"
- "xpack.security.transport.ssl.enabled: true"
- "xpack.security.transport.ssl.verification_mode: certificate"
- "xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.http.ssl.enabled: true"
- "xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"
- "xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12"

View File

@ -0,0 +1,38 @@
---
clusterName: "security"
nodeGroup: "master"
roles:
master: "true"
ingest: "true"
data: "true"
protocol: https
esConfig:
elasticsearch.yml: |
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
extraEnvs:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
name: elastic-credentials
key: password
- name: ELASTIC_USERNAME
valueFrom:
secretKeyRef:
name: elastic-credentials
key: username
secretMounts:
- name: elastic-certificates
secretName: elastic-certificates
path: /usr/share/elasticsearch/config/certs

View File

@ -0,0 +1,16 @@
default: test
include ../../../helpers/examples.mk
CHART := elasticsearch
RELEASE := helm-es-upgrade
FROM := 7.4.0 # versions before 7.4.0 aren't compatible with Kubernetes >= 1.16.0
install:
../../../helpers/upgrade.sh --chart $(CHART) --release $(RELEASE) --from $(FROM)
kubectl rollout status statefulset upgrade-master
test: install goss
purge:
helm del $(RELEASE)

View File

@ -0,0 +1,17 @@
# Upgrade
This example deploys a 3-node Elasticsearch cluster using an old chart version,
then upgrades it to the local chart.
## Usage
* Deploy and upgrade Elasticsearch chart with the default values: `make install`
## Testing
You can also run [goss integration tests][] using `make test`.
[goss integration tests]: https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/upgrade/test/goss.yaml

View File

@ -0,0 +1,76 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<-EOF
USAGE:
$0 [--release <release-name>] [--from <elasticsearch-version>]
$0 --help
OPTIONS:
--release <release-name>
Name of the Helm release to install
--from <elasticsearch-version>
Elasticsearch version to use for first install
EOF
exit 1
}
RELEASE="helm-es-upgrade"
FROM=""
while [[ $# -gt 0 ]]
do
key="$1"
case $key in
--help)
usage
;;
--release)
RELEASE="$2"
shift 2
;;
--from)
FROM="$2"
shift 2
;;
*)
echo "Unrecognized argument: '$key'"
usage
;;
esac
done
if ! command -v jq > /dev/null
then
echo 'jq is required to use this script'
echo 'please check https://stedolan.github.io/jq/download/ to install it'
exit 1
fi
# Elasticsearch charts < 7.4.0 are not compatible with K8S >= 1.16
if [[ -z $FROM ]]
then
KUBE_MINOR_VERSION=$(kubectl version -o json | jq --raw-output --exit-status '.serverVersion.minor' | sed 's/[^0-9]*//g')
if [ "$KUBE_MINOR_VERSION" -lt 16 ]
then
FROM="7.0.0-alpha1"
else
FROM="7.4.0"
fi
fi
helm repo add elastic https://helm.elastic.co
# Initial install
printf "Installing Elasticsearch chart %s\n" "$FROM"
helm upgrade --wait --timeout=600s --install "$RELEASE" elastic/elasticsearch --version "$FROM" --set clusterName=upgrade
kubectl rollout status sts/upgrade-master --timeout=600s
# Upgrade
printf "Upgrading Elasticsearch chart\n"
helm upgrade --wait --timeout=600s --set terminationGracePeriod=121 --install "$RELEASE" ../../ --set clusterName=upgrade
kubectl rollout status sts/upgrade-master --timeout=600s

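The script can also be invoked directly from the upgrade example directory; note that the parser above only accepts `--release`, `--from` and `--help`. A minimal sketch with illustrative arguments:
```
# install chart version 7.4.0 from the elastic Helm repo, then upgrade to the local chart
../../../helpers/upgrade.sh --release helm-es-upgrade --from 7.4.0
```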
View File

@ -0,0 +1,16 @@
http:
http://localhost:9200/_cluster/health:
status: 200
timeout: 2000
body:
- "green"
- '"number_of_nodes":3'
- '"number_of_data_nodes":3'
http://localhost:9200:
status: 200
timeout: 2000
body:
- '"number" : "7.15.0"'
- '"cluster_name" : "upgrade"'
- "You Know, for Search"

View File

@ -0,0 +1,2 @@
---
clusterName: upgrade

View File

@ -0,0 +1,6 @@
1. Watch all cluster members come up.
$ kubectl get pods --namespace={{ .Release.Namespace }} -l app={{ template "elasticsearch.uname" . }} -w
{{- if .Values.tests.enabled -}}
2. Test cluster health using Helm test.
$ helm --namespace={{ .Release.Namespace }} test {{ .Release.Name }}
{{- end -}}

View File

@ -0,0 +1,65 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "elasticsearch.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "elasticsearch.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "elasticsearch.uname" -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-{{ .Values.nodeGroup }}
{{- else -}}
{{ .Values.nameOverride }}-{{ .Values.nodeGroup }}
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- end -}}
{{- define "elasticsearch.masterService" -}}
{{- if empty .Values.masterService -}}
{{- if empty .Values.fullnameOverride -}}
{{- if empty .Values.nameOverride -}}
{{ .Values.clusterName }}-master
{{- else -}}
{{ .Values.nameOverride }}-master
{{- end -}}
{{- else -}}
{{ .Values.fullnameOverride }}
{{- end -}}
{{- else -}}
{{ .Values.masterService }}
{{- end -}}
{{- end -}}
{{- define "elasticsearch.endpoints" -}}
{{- $replicas := int (toString (.Values.replicas)) }}
{{- $uname := (include "elasticsearch.uname" .) }}
{{- range $i, $e := untilStep 0 $replicas 1 -}}
{{ $uname }}-{{ $i }},
{{- end -}}
{{- end -}}
{{- define "elasticsearch.esMajorVersion" -}}
{{- if .Values.esMajorVersion -}}
{{ .Values.esMajorVersion }}
{{- else -}}
{{- $version := int (index (.Values.imageTag | splitList ".") 0) -}}
{{- if and (contains "docker.elastic.co/elasticsearch/elasticsearch" .Values.image) (not (eq $version 0)) -}}
{{ $version }}
{{- else -}}
7
{{- end -}}
{{- end -}}
{{- end -}}

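As a concrete illustration of these helpers: with the default `clusterName: "elasticsearch"` and `nodeGroup: "master"`, `elasticsearch.uname` and `elasticsearch.masterService` both resolve to `elasticsearch-master`, and with `replicas: 3` `elasticsearch.endpoints` yields `elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,`. One way to check locally (the vendored chart path is an assumption):
```
helm template es ./chart/charts/elasticsearch --set replicas=3 \
  | grep -E -A 1 'cluster.initial_master_nodes|discovery.seed_hosts'
```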
View File

@ -0,0 +1,16 @@
{{- if .Values.esConfig }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "elasticsearch.uname" . }}-config
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
data:
{{- range $path, $config := .Values.esConfig }}
{{ $path }}: |
{{ $config | indent 4 -}}
{{- end -}}
{{- end -}}

View File

@ -0,0 +1,54 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "elasticsearch.uname" . -}}
{{- $httpPort := .Values.httpPort -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- if .ingressPath }}
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- else }}
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
{{- if $ingressPath }}
- host: {{ . }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $httpPort }}
{{- else }}
- host: {{ .host }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ .servicePort | default $httpPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,61 @@
{{- if (or .Values.networkPolicy.http.enabled .Values.networkPolicy.transport.enabled) }}
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
spec:
podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
ingress: # Allow inbound connections
{{- if .Values.networkPolicy.http.enabled }}
# For HTTP access
- ports:
- port: {{ .Values.httpPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-http-client: "true"
{{- with .Values.networkPolicy.http.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.http.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.networkPolicy.transport.enabled }}
# For transport access
- ports:
- port: {{ .Values.transportPort }}
from:
# From authorized Pods (having the correct label)
- podSelector:
matchLabels:
{{ template "elasticsearch.uname" . }}-transport-client: "true"
{{- with .Values.networkPolicy.transport.explicitNamespacesSelector }}
# From authorized namespaces
namespaceSelector:
{{ toYaml . | indent 12 }}
{{- end }}
{{- with .Values.networkPolicy.transport.additionalRules }}
# Or from custom additional rules
{{ toYaml . | indent 8 }}
{{- end }}
# Or from other ElasticSearch Pods
- podSelector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- end }}

View File

@ -0,0 +1,12 @@
---
{{- if .Values.maxUnavailable }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: "{{ template "elasticsearch.uname" . }}-pdb"
spec:
maxUnavailable: {{ .Values.maxUnavailable }}
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
{{- end }}

View File

@ -0,0 +1,14 @@
{{- if .Values.podSecurityPolicy.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ default $fullName .Values.podSecurityPolicy.name | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
spec:
{{ toYaml .Values.podSecurityPolicy.spec | indent 2 }}
{{- end -}}

View File

@ -0,0 +1,25 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
rules:
- apiGroups:
- extensions
resources:
- podsecuritypolicies
resourceNames:
{{- if eq .Values.podSecurityPolicy.name "" }}
- {{ $fullName | quote }}
{{- else }}
- {{ .Values.podSecurityPolicy.name | quote }}
{{- end }}
verbs:
- use
{{- end -}}

View File

@ -0,0 +1,24 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ $fullName | quote }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
subjects:
- kind: ServiceAccount
{{- if eq .Values.rbac.serviceAccountName "" }}
name: {{ $fullName | quote }}
{{- else }}
name: {{ .Values.rbac.serviceAccountName | quote }}
{{- end }}
namespace: {{ .Release.Namespace | quote }}
roleRef:
kind: Role
name: {{ $fullName | quote }}
apiGroup: rbac.authorization.k8s.io
{{- end -}}

View File

@ -0,0 +1,77 @@
---
{{- if .Values.service.enabled -}}
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}
{{- else }}
name: {{ template "elasticsearch.uname" . }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4}}
{{- end }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
type: {{ .Values.service.type }}
selector:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
protocol: TCP
port: {{ .Values.httpPort }}
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
{{- end }}
- name: {{ .Values.service.transportPortName | default "transport" }}
protocol: TCP
port: {{ .Values.transportPort }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
{{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
{{ toYaml . | indent 4 }}
{{- end }}
{{- if .Values.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
{{- end }}
{{- end }}
---
kind: Service
apiVersion: v1
metadata:
{{- if eq .Values.nodeGroup "master" }}
name: {{ template "elasticsearch.masterService" . }}-headless
{{- else }}
name: {{ template "elasticsearch.uname" . }}-headless
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- if .Values.service.labelsHeadless }}
{{ toYaml .Values.service.labelsHeadless | indent 4 }}
{{- end }}
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
# Create endpoints also if the related pod isn't ready
publishNotReadyAddresses: true
selector:
app: "{{ template "elasticsearch.uname" . }}"
ports:
- name: {{ .Values.service.httpPortName | default "http" }}
port: {{ .Values.httpPort }}
- name: {{ .Values.service.transportPortName | default "transport" }}
port: {{ .Values.transportPort }}

View File

@ -0,0 +1,20 @@
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: v1
kind: ServiceAccount
metadata:
{{- if eq .Values.rbac.serviceAccountName "" }}
name: {{ $fullName | quote }}
{{- else }}
name: {{ .Values.rbac.serviceAccountName | quote }}
{{- end }}
annotations:
{{- with .Values.rbac.serviceAccountAnnotations }}
{{- toYaml . | nindent 4 }}
{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
app: {{ $fullName | quote }}
{{- end -}}

View File

@ -0,0 +1,378 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "elasticsearch.uname" . }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
esMajorVersion: "{{ include "elasticsearch.esMajorVersion" . }}"
spec:
serviceName: {{ template "elasticsearch.uname" . }}-headless
selector:
matchLabels:
app: "{{ template "elasticsearch.uname" . }}"
replicas: {{ .Values.replicas }}
podManagementPolicy: {{ .Values.podManagementPolicy }}
updateStrategy:
type: {{ .Values.updateStrategy }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: {{ template "elasticsearch.uname" . }}
{{- if .Values.persistence.labels.enabled }}
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{- end }}
{{- with .Values.persistence.annotations }}
annotations:
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{ toYaml .Values.volumeClaimTemplate | indent 6 }}
{{- end }}
template:
metadata:
name: "{{ template "elasticsearch.uname" . }}"
labels:
release: {{ .Release.Name | quote }}
chart: "{{ .Chart.Name }}"
app: "{{ template "elasticsearch.uname" . }}"
{{- range $key, $value := .Values.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
annotations:
{{- range $key, $value := .Values.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
{{/* This forces a restart if the configmap has changed */}}
{{- if .Values.esConfig }}
configchecksum: {{ include (print .Template.BasePath "/configmap.yaml") . | sha256sum | trunc 63 }}
{{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
securityContext:
{{ toYaml .Values.podSecurityContext | indent 8 }}
{{- if .Values.fsGroup }}
fsGroup: {{ .Values.fsGroup }} # Deprecated value, please use .Values.podSecurityContext.fsGroup
{{- end }}
{{- if .Values.rbac.create }}
serviceAccountName: "{{ template "elasticsearch.uname" . }}"
{{- else if not (eq .Values.rbac.serviceAccountName "") }}
serviceAccountName: {{ .Values.rbac.serviceAccountName | quote }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 6 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
{{- end }}
{{- if or (eq .Values.antiAffinity "hard") (eq .Values.antiAffinity "soft") .Values.nodeAffinity }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
affinity:
{{- end }}
{{- if eq .Values.antiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" .}}"
topologyKey: {{ .Values.antiAffinityTopologyKey }}
{{- else if eq .Values.antiAffinity "soft" }}
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
topologyKey: {{ .Values.antiAffinityTopologyKey }}
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- "{{ template "elasticsearch.uname" . }}"
{{- end }}
{{- with .Values.nodeAffinity }}
nodeAffinity:
{{ toYaml . | indent 10 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriod }}
volumes:
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- if .defaultMode }}
defaultMode: {{ .defaultMode }}
{{- end }}
{{- end }}
{{- if .Values.esConfig }}
- name: esconfig
configMap:
name: {{ template "elasticsearch.uname" . }}-config
{{- end }}
{{- if .Values.keystore }}
- name: keystore
emptyDir: {}
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
secret: {{ toYaml . | nindent 12 }}
{{- end }}
{{ end }}
{{- if .Values.extraVolumes }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraVolumes) }}
{{ tpl .Values.extraVolumes . | indent 8 }}
{{- else }}
{{ toYaml .Values.extraVolumes | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
enableServiceLinks: {{ .Values.enableServiceLinks }}
{{- if .Values.hostAliases }}
hostAliases: {{ toYaml .Values.hostAliases | nindent 8 }}
{{- end }}
{{- if or (.Values.extraInitContainers) (.Values.sysctlInitContainer.enabled) (.Values.keystore) }}
initContainers:
{{- if .Values.sysctlInitContainer.enabled }}
- name: configure-sysctl
securityContext:
runAsUser: 0
privileged: true
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command: ["sysctl", "-w", "vm.max_map_count={{ .Values.sysctlVmMaxMapCount}}"]
resources:
{{ toYaml .Values.initResources | indent 10 }}
{{- end }}
{{ if .Values.keystore }}
- name: keystore
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- sh
- -c
- |
#!/usr/bin/env bash
set -euo pipefail
elasticsearch-keystore create
for i in /tmp/keystoreSecrets/*/*; do
key=$(basename $i)
echo "Adding file $i to keystore key $key"
elasticsearch-keystore add-file "$key" "$i"
done
# Add the bootstrap password since otherwise the Elasticsearch entrypoint tries to do this on startup
if [ ! -z ${ELASTIC_PASSWORD+x} ]; then
echo 'Adding env $ELASTIC_PASSWORD to keystore as key bootstrap.password'
echo "$ELASTIC_PASSWORD" | elasticsearch-keystore add -x bootstrap.password
fi
cp -a /usr/share/elasticsearch/config/elasticsearch.keystore /tmp/keystore/
env: {{ toYaml .Values.extraEnvs | nindent 10 }}
envFrom: {{ toYaml .Values.envFrom | nindent 10 }}
resources: {{ toYaml .Values.initResources | nindent 10 }}
volumeMounts:
- name: keystore
mountPath: /tmp/keystore
{{- range .Values.keystore }}
- name: keystore-{{ .secretName }}
mountPath: /tmp/keystoreSecrets/{{ .secretName }}
{{- end }}
{{ end }}
{{- if .Values.extraInitContainers }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraInitContainers) }}
{{ tpl .Values.extraInitContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraInitContainers | indent 6 }}
{{- end }}
{{- end }}
{{- end }}
containers:
- name: "{{ template "elasticsearch.name" . }}"
securityContext:
{{ toYaml .Values.securityContext | indent 10 }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: "{{ .Values.clusterHealthCheckParams }}" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no
http () {
local path="${1}"
local args="${2}"
set -- -XGET -s
if [ "$args" != "" ]; then
set -- "$@" $args
fi
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
fi
curl --output /dev/null -k "$@" "{{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}${path}"
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, checking that this node is healthy'
HTTP_CODE=$(http "/" "-w %{http_code}")
RC=$?
if [[ ${RC} -ne 0 ]]; then
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with RC ${RC}"
exit ${RC}
fi
# ready if HTTP code 200, 503 is tolerable if ES version is 6.x
if [[ ${HTTP_CODE} == "200" ]]; then
exit 0
elif [[ ${HTTP_CODE} == "503" && "{{ include "elasticsearch.esMajorVersion" . }}" == "6" ]]; then
exit 0
else
echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} {{ .Values.protocol }}://127.0.0.1:{{ .Values.httpPort }}/ failed with HTTP code ${HTTP_CODE}"
exit 1
fi
else
echo 'Waiting for elasticsearch cluster to become ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
if http "/_cluster/health?{{ .Values.clusterHealthCheckParams }}" "--fail" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "{{ .Values.clusterHealthCheckParams }}" )'
exit 1
fi
fi
{{ toYaml .Values.readinessProbe | indent 10 }}
ports:
- name: http
containerPort: {{ .Values.httpPort }}
- name: transport
containerPort: {{ .Values.transportPort }}
resources:
{{ toYaml .Values.resources | indent 10 }}
env:
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
{{- if eq .Values.roles.master "true" }}
{{- if ge (int (include "elasticsearch.esMajorVersion" .)) 7 }}
- name: cluster.initial_master_nodes
value: "{{ template "elasticsearch.endpoints" . }}"
{{- else }}
- name: discovery.zen.minimum_master_nodes
value: "{{ .Values.minimumMasterNodes }}"
{{- end }}
{{- end }}
{{- if lt (int (include "elasticsearch.esMajorVersion" .)) 7 }}
- name: discovery.zen.ping.unicast.hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- else }}
- name: discovery.seed_hosts
value: "{{ template "elasticsearch.masterService" . }}-headless"
{{- end }}
- name: cluster.name
value: "{{ .Values.clusterName }}"
- name: network.host
value: "{{ .Values.networkHost }}"
{{- if .Values.esJavaOpts }}
- name: ES_JAVA_OPTS
value: "{{ .Values.esJavaOpts }}"
{{- end }}
{{- range $role, $enabled := .Values.roles }}
- name: node.{{ $role }}
value: "{{ $enabled }}"
{{- end }}
{{- if .Values.extraEnvs }}
{{ toYaml .Values.extraEnvs | indent 10 }}
{{- end }}
{{- if .Values.envFrom }}
envFrom:
{{ toYaml .Values.envFrom | indent 10 }}
{{- end }}
volumeMounts:
{{- if .Values.persistence.enabled }}
- name: "{{ template "elasticsearch.uname" . }}"
mountPath: /usr/share/elasticsearch/data
{{- end }}
{{ if .Values.keystore }}
- name: keystore
mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
subPath: elasticsearch.keystore
{{ end }}
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
{{- if .subPath }}
subPath: {{ .subPath }}
{{- end }}
{{- end }}
{{- range $path, $config := .Values.esConfig }}
- name: esconfig
mountPath: /usr/share/elasticsearch/config/{{ $path }}
subPath: {{ $path }}
{{- end -}}
{{- if .Values.extraVolumeMounts }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraVolumeMounts) }}
{{ tpl .Values.extraVolumeMounts . | indent 10 }}
{{- else }}
{{ toYaml .Values.extraVolumeMounts | indent 10 }}
{{- end }}
{{- end }}
{{- if .Values.lifecycle }}
lifecycle:
{{ toYaml .Values.lifecycle | indent 10 }}
{{- end }}
{{- if .Values.extraContainers }}
# Currently some extra blocks accept strings
# to continue with backwards compatibility this is being kept
# whilst also allowing for yaml to be specified too.
{{- if eq "string" (printf "%T" .Values.extraContainers) }}
{{ tpl .Values.extraContainers . | indent 6 }}
{{- else }}
{{ toYaml .Values.extraContainers | indent 6 }}
{{- end }}
{{- end }}

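The `roles` map from values is rendered one-to-one into `node.<role>` environment variables by the range near the end of the env block above, so individual roles can be toggled per node group at install time. A sketch, with an assumed chart path:
```
helm upgrade --install elasticsearch ./chart/charts/elasticsearch \
  --set roles.ml=false --set roles.remote_cluster_client=false
```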
View File

@ -0,0 +1,36 @@
---
{{- if .Values.tests.enabled -}}
apiVersion: v1
kind: Pod
metadata:
{{- if .Values.healthNameOverride }}
name: {{ .Values.healthNameOverride | quote }}
{{- else }}
name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
annotations:
"helm.sh/hook": test
"helm.sh/hook-delete-policy": hook-succeeded
spec:
securityContext:
{{ toYaml .Values.podSecurityContext | indent 4 }}
containers:
{{- if .Values.healthNameOverride }}
- name: {{ .Values.healthNameOverride | quote }}
{{- else }}
- name: "{{ .Release.Name }}-{{ randAlpha 5 | lower }}-test"
{{- end }}
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
imagePullPolicy: "{{ .Values.imagePullPolicy }}"
command:
- "sh"
- "-c"
- |
#!/usr/bin/env bash -e
curl -XGET --fail '{{ template "elasticsearch.uname" . }}:{{ .Values.httpPort }}/_cluster/health?{{ .Values.clusterHealthCheckParams }}'
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 4 }}
{{- end }}
restartPolicy: Never
{{- end -}}

View File

@ -0,0 +1,342 @@
---
clusterName: "elasticsearch"
nodeGroup: "master"
# The service that non-master node groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: ""
# Elasticsearch roles that will be applied to this nodeGroup
# These will be set as environment variables. E.g. node.master=true
roles:
master: "true"
ingest: "true"
data: "true"
remote_cluster_client: "true"
ml: "true"
replicas: 1
minimumMasterNodes: 1
esMajorVersion: ""
# Allows you to add any config files in /usr/share/elasticsearch/config/
# such as elasticsearch.yml and log4j2.properties
esConfig: {}
# elasticsearch.yml: |
# key:
# nestedkey: value
# log4j2.properties: |
# key = value
# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
# name: env-secret
# - configMapRef:
# name: config-map
# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []
# - name: elastic-certificates
# secretName: elastic-certificates
# path: /usr/share/elasticsearch/config/certs
# defaultMode: 0755
hostAliases: []
#- ip: "127.0.0.1"
# hostnames:
# - "foo.local"
# - "bar.local"
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "7.15.1"
imagePullPolicy: "IfNotPresent"
podAnnotations:
{}
# iam.amazonaws.com/role: es-cluster
# additional labels
labels: {}
esJavaOpts: "-Xms512m -Xmx512m" # example: "-Xmx1g -Xms1g"
resources:
requests:
cpu: "500m"
memory: "512Mi"
limits:
cpu: "1000m"
memory: "2Gi"
initResources:
{}
# limits:
# cpu: "25m"
# # memory: "128Mi"
# requests:
# cpu: "25m"
# memory: "128Mi"
networkHost: "0.0.0.0"
volumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 8Gi
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
- emptyDir
persistence:
enabled: true
labels:
# Add default labels for the volumeClaimTemplate of the StatefulSet
enabled: false
annotations: {}
extraVolumes:
[]
# - name: extras
# emptyDir: {}
extraVolumeMounts:
[]
# - name: extras
# mountPath: /usr/share/extras
# readOnly: true
extraContainers:
[]
# - name: do-something
# image: busybox
# command: ['do', 'something']
extraInitContainers:
[]
# - name: do-something
# image: busybox
# command: ['do', 'something']
# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""
# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"
# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "hard"
# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}
# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"
# The environment variables injected by service links are not used, but can lead to slow Elasticsearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true
protocol: http
httpPort: 9200
transportPort: 9300
service:
enabled: true
labels: {}
labelsHeadless: {}
type: ClusterIP
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
# readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
# How long to wait for elasticsearch to stop gracefully
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
# https://www.elastic.co/guide/en/elasticsearch/reference/7.15/cluster-health.html#request-params wait_for_status
clusterHealthCheckParams: "wait_for_status=green&timeout=1s"
## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
imagePullSecrets: []
nodeSelector: {}
tolerations: []
# Enabling this will publicly expose your Elasticsearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
enabled: false
annotations:
{}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
nameOverride: ""
fullnameOverride: ""
healthNameOverride: ""
lifecycle:
{}
# preStop:
# exec:
# command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
# postStart:
# exec:
# command:
# - bash
# - -c
# - |
# #!/bin/bash
# # Add a template to adjust number of shards/replicas
# TEMPLATE_NAME=my_template
# INDEX_PATTERN="logstash-*"
# SHARD_COUNT=8
# REPLICA_COUNT=1
# ES_URL=http://localhost:9200
# while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
# curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
sysctlInitContainer:
enabled: true
keystore: []
networkPolicy:
## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
## In order for a Pod to access Elasticsearch, it needs to have the following label:
## {{ template "uname" . }}-client: "true"
## Example for default configuration to access HTTP port:
## elasticsearch-master-http-client: "true"
## Example for default configuration to access transport port:
## elasticsearch-master-transport-client: "true"
http:
enabled: false
## If explicitNamespacesSelector is not set or set to {}, only client Pods in the networkPolicy's namespace
## that match all criteria can reach Elasticsearch.
## But sometimes we want the Pods to be accessible to clients from other namespaces; in this case, this
## parameter can be used to select those namespaces
##
# explicitNamespacesSelector:
# # Accept traffic from namespaces matching all of these selectors (only from allowed client Pods)
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
##
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
transport:
## Note that all Elasticsearch Pods can talk to each other over the transport port even when this is enabled.
enabled: false
# explicitNamespacesSelector:
# matchLabels:
# role: frontend
# matchExpressions:
# - {key: role, operator: In, values: [frontend]}
# additionalRules:
# - podSelector:
# matchLabels:
# role: frontend
# - podSelector:
# matchExpressions:
# - key: role
# operator: In
# values:
# - frontend
tests:
enabled: true
# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

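The empty `keystore: []` entry ties into the keystore init container in the StatefulSet above: each key of every listed secret is mounted under `/tmp/keystoreSecrets/<secretName>/` and added to `elasticsearch.keystore` under its file name. A minimal sketch, with a hypothetical secret and setting and an assumed chart path:
```
kubectl create secret generic es-s3-credentials \
  --from-literal=s3.client.default.access_key=AKIAEXAMPLE

cat <<'EOF' > keystore-values.yaml
keystore:
  - secretName: es-s3-credentials
EOF

helm upgrade --install elasticsearch ./chart/charts/elasticsearch --values keystore-values.yaml
```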
View File

@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

View File

@ -0,0 +1,24 @@
apiVersion: v2
name: rejson
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.0.8"

View File

@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "rejson.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "rejson.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "rejson.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "rejson.labels" -}}
helm.sh/chart: {{ include "rejson.chart" . }}
{{ include "rejson.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "rejson.selectorLabels" -}}
app.kubernetes.io/name: {{ include "rejson.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "rejson.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "rejson.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,61 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "rejson.fullname" . }}
labels:
{{- include "rejson.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "rejson.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "rejson.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "rejson.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 6379
protocol: TCP
# livenessProbe:
# httpGet:
# path: /
# port: http
# readinessProbe:
# httpGet:
# path: /
# port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@ -0,0 +1,28 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "rejson.fullname" . }}
labels:
{{- include "rejson.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "rejson.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "rejson.fullname" . }}
labels:
{{- include "rejson.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "rejson.selectorLabels" . | nindent 4 }}

View File

@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "rejson.serviceAccountName" . }}
labels:
{{- include "rejson.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "rejson.fullname" . }}-test-connection"
labels:
{{- include "rejson.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "rejson.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never

View File

@ -0,0 +1,69 @@
# Default values for rejson.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: redislabs/rejson
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 6379
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}

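To confirm the ReJSON module is working after this subchart is deployed, one can exec into the Deployment and issue a JSON command; the name `tubearchivist-rejson` assumes a parent release named `tubearchivist` (matching the REDIS_HOST value used by the main chart):
```
kubectl exec deploy/tubearchivist-rejson -- redis-cli JSON.SET greeting . '"hello"'
kubectl exec deploy/tubearchivist-rejson -- redis-cli JSON.GET greeting
```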
View File

@ -0,0 +1 @@
1. Get the application URL by running these commands:

View File

@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "chart.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "chart.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "chart.labels" -}}
helm.sh/chart: {{ include "chart.chart" . }}
{{ include "chart.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "chart.selectorLabels" -}}
app.kubernetes.io/name: {{ include "chart.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "chart.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "chart.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,72 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "chart.fullname" . }}
labels:
{{- include "chart.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "chart.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "chart.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "chart.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: ES_URL
value: elasticsearch-master:9200
- name: REDIS_HOST
value: tubearchivist-rejson
- name: HOST_GID
value: "1000"
- name: HOST_UID
value: "1000"
ports:
- name: http
containerPort: 8000
protocol: TCP
# livenessProbe:
# httpGet:
# path: /
# port: http
# initialDelaySeconds: 40
# readinessProbe:
# httpGet:
# path: /
# port: http
# initialDelaySeconds: 40
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

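The hard-coded `ES_URL` and `REDIS_HOST` values assume the Elasticsearch and rejson subcharts run in the same namespace under their default service names (`elasticsearch-master`, `tubearchivist-rejson`). A quick smoke test of the application service, assuming a release named `tubearchivist` (service port 80 forwards to container port 8000):
```
kubectl port-forward svc/tubearchivist 8080:80 &
curl -I http://localhost:8080/
```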
28
chart/templates/hpa.yaml Normal file
View File

@ -0,0 +1,28 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "chart.fullname" . }}
labels:
{{- include "chart.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "chart.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,61 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "chart.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
{{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "chart.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "chart.fullname" . }}
labels:
{{- include "chart.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "chart.selectorLabels" . | nindent 4 }}

View File

@ -0,0 +1,12 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "chart.serviceAccountName" . }}
labels:
{{- include "chart.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "chart.fullname" . }}-test-connection"
labels:
{{- include "chart.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "chart.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never

86
chart/values.yaml Normal file
View File

@ -0,0 +1,86 @@
# Default values for chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: bbilly1/tubearchivist
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
className: ""
annotations:
{}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
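Putting it together, a minimal sketch of installing the whole chart with ingress enabled; the hostname, ingress class and release name are examples, and the bundled Elasticsearch and rejson subcharts keep the defaults shown earlier:
```
cat <<'EOF' > my-values.yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: tubearchivist.example.com
      paths:
        - path: /
          pathType: Prefix
EOF

helm upgrade --install tubearchivist ./chart --values my-values.yaml
```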