The Piraeus Operator integrates with a number of optional external components. Not every cluster is configured to provide these external components by default. Piraeus provides integration with:
The operator also installs some optional, Piraeus-specific components by default:
These components are installed to show the full feature set of Piraeus. They can be disabled without affecting the other components.
Snapshot support components
Snapshots in Kubernetes require three different components working together. Not all Kubernetes distributions package these components by default. Follow the steps below to find out how to enable snapshots on your cluster.
The cluster needs to have the snapshot CRDs installed. To check whether your cluster has them, run:

```
$ kubectl get crds volumesnapshots.snapshot.storage.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io
NAME                                             CREATED AT
```

If all three CRDs are listed in the output, they are installed; if the command reports `NotFound` errors instead, they are missing.
If your cluster doesn't have them installed, you can install them from here:
```
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.1.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.1.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.1.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
```
NOTE: replace `v4.1.1` in the above commands with the latest release recommended for your Kubernetes version.
Snapshot requests in Kubernetes are first processed by a cluster-wide snapshot controller. If you had to manually add the CRDs to the cluster in the step above, chances are you also need to deploy the snapshot controller.
NOTE: If the CRDs were already pre-installed in your cluster in step 1, you can almost certainly skip this step: your Kubernetes distribution should already include the snapshot controller.
Deployment should work out of the box on most clusters. Additional configuration options are available; see the documentation of the `snapshot-controller` chart.
```
$ kubectl create namespace snapshot-controller
$ helm repo add piraeus-charts https://piraeus.io/helm-charts/
$ helm install validation-webhook piraeus-charts/snapshot-validation-webhook --namespace snapshot-controller
$ helm install snapshot-controller piraeus-charts/snapshot-controller --namespace snapshot-controller
```
The last component is a driver-specific snapshot implementation. This is included in every Piraeus installation and requires no further steps. Every CSI Controller deployment of Piraeus also deploys the snapshotter sidecar, which ultimately triggers snapshot creation in LINSTOR.
NOTE: If the CRDs are not deployed, the snapshotter sidecar will continuously warn about the missing CRDs. This can be ignored.
To use snapshots, you first need to create a `VolumeSnapshotClass`:
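A minimal example of such a class, sketched here under the assumption that the CSI driver name is `linstor.csi.linbit.com` (the same prefix used by the HA Controller label later in this document) and that snapshots should be removed together with their `VolumeSnapshot` objects:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: my-first-linstor-snapshot-class
# Assumption: the Piraeus/LINSTOR CSI driver name.
driver: linstor.csi.linbit.com
# Assumption: delete the backing snapshot when the VolumeSnapshot is deleted.
deletionPolicy: Delete
```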
You can then use this snapshot class to create a snapshot from an existing LINSTOR PVC:
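For example, a snapshot of an existing PVC (the source PVC name `my-first-linstor-volume` is taken from the volume examples later in this document):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-first-linstor-snapshot
spec:
  volumeSnapshotClassName: my-first-linstor-snapshot-class
  source:
    # The existing LINSTOR-backed PVC to snapshot.
    persistentVolumeClaimName: my-first-linstor-volume
```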
After a short wait, the snapshot will be ready:
```
$ kubectl describe volumesnapshots.snapshot.storage.k8s.io my-first-linstor-snapshot
Persistent Volume Claim Name:        my-first-linstor-snapshot
Volume Snapshot Class Name:          my-first-linstor-snapshot-class
Bound Volume Snapshot Content Name:  snapcontent-b6072ab7-6ddf-482b-a4e3-693088136d2c
Creation Time:                       2020-06-04T13:02:28Z
Ready To Use:                        true
Restore Size:                        500Mi
```
You can restore the content of this snapshot by creating a new PVC with the snapshot as its source:
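A sketch of such a PVC; the storage class name `linstor-basic-storage` is a placeholder (use the storage class of the original volume), while the snapshot name and size match the `describe` output above:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-first-linstor-volume-restored
spec:
  # Placeholder: use the storage class of the original PVC.
  storageClassName: linstor-basic-storage
  dataSource:
    name: my-first-linstor-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi  # matches the reported restore size
```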
CSI Volume Cloning
Based on the concept of snapshots, LINSTOR also supports cloning of persistent volumes, or more precisely, of existing persistent volume claims (PVCs). The CSI specification imposes some restrictions regarding the namespace and storage class of a PVC clone (see the Kubernetes documentation for details). For LINSTOR, a clone requires that the volume was created in a LINSTOR storage pool that supports snapshots (i.e. an LVMTHIN pool). The new volume will be placed on the same nodes as the original (this can change later during use, but you cannot directly clone to a completely different node).
To clone a volume, create a new PVC and define the origin PVC in the `dataSource`:
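A sketch, assuming an origin PVC named `my-first-linstor-volume` and the placeholder storage class `linstor-basic-storage` (per the CSI restrictions above, the clone uses the same storage class as the origin):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-first-linstor-volume-clone
spec:
  # Must match the origin PVC's storage class (placeholder name).
  storageClassName: linstor-basic-storage
  dataSource:
    # Assumed name of the origin PVC.
    name: my-first-linstor-volume
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi  # at least the size of the origin volume
```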
Monitoring with Prometheus
Starting with operator version 1.5.0, you can use Prometheus to monitor Piraeus components.
The operator will set up monitoring containers alongside the existing components and expose them as a Service. If you use the Prometheus Operator, the Piraeus Operator will also set up ServiceMonitor instances. The metrics will then be collected automatically by the Prometheus instance associated with the operator, assuming watching of the Piraeus namespace is enabled.
LINSTOR Controller Monitoring
The LINSTOR Controller exports cluster-wide metrics. Metrics are exported on the existing controller service, using the standard `/metrics` path.
DRBD Resource Monitoring
All satellites are bundled with a secondary container that uses `drbd-reactor` to export metrics directly from DRBD. The metrics are available on port 9942; for convenience, a headless service named `<linstorsatelliteset-name>-monitoring` is provided.
If you want to disable the monitoring container, set `monitoringImage: ""` in your LinstorSatelliteSet resource.
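For example (a sketch; the `apiVersion` and resource name are assumptions):

```yaml
apiVersion: piraeus.linbit.com/v1
kind: LinstorSatelliteSet
metadata:
  name: piraeus-ns  # placeholder name
spec:
  # An empty image disables the monitoring container.
  monitoringImage: ""
```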
High Availability Controller
The Piraeus High Availability (HA) Controller speeds up the failover process for stateful workloads that use Piraeus for storage. With the HA Controller, the time it takes Kubernetes to reschedule a Pod using faulty storage drops from 15 minutes to 45 seconds (exact times depend on your Kubernetes setup).
To mark your stateful applications as managed by Piraeus, use the `linstor.csi.linbit.com/on-storage-lost: remove` label.
For example, Pod Templates in a StatefulSet should look like:
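A sketch of such a template (application name, selector labels, and image are placeholders; only the `linstor.csi.linbit.com/on-storage-lost: remove` label is significant here):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-stateful-app
spec:
  serviceName: my-stateful-app
  selector:
    matchLabels:
      app.kubernetes.io/name: my-stateful-app
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-stateful-app
        # Mark this Pod as managed by the Piraeus HA Controller.
        linstor.csi.linbit.com/on-storage-lost: remove
    spec:
      containers:
        - name: my-stateful-app
          image: my-app-image  # placeholder
```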
This way, the Piraeus High Availability Controller will not interfere with applications that do not benefit from, or do not support, its primary use case.
To disable deployment of the HA Controller, use:
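For example, as a Helm values fragment (a sketch; the `haController.enabled` option name is an assumption about the operator chart):

```yaml
haController:
  enabled: false
```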
Usage with STORK
STORK is a scheduler extender plugin and storage health monitoring tool (see below). There is considerable overlap between the functionality of STORK and the HA Controller.
Like the HA Controller, STORK will also delete Pods which use faulty volumes. In contrast to the HA Controller, STORK does not discriminate based on labels on the Pod.
Another difference between the two is that the HA Controller reacts faster to storage failures, as it watches the raw event stream from Piraeus, whereas STORK only periodically checks the volume status.
While they overlap in functionality, there are no known compatibility issues when running both STORK and the HA Controller.
Stork is a scheduler extender plugin for Kubernetes which allows a storage driver to give the Kubernetes scheduler hints about where to place a new pod so that it is optimally located for storage performance. You can learn more about the project on its GitHub page.
By default, the operator will install the components required for Stork and register a new scheduler called `stork` with Kubernetes. This new scheduler can be used to place Pods close to their volumes.
For example, a Pod using the Stork scheduler (reconstructed sketch; the image, mount path, and claim name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  schedulerName: stork  # use the Stork scheduler registered by the operator
  containers:
    - name: busybox
      image: busybox  # assumed image
      command: ["tail", "-f", "/dev/null"]
      ports:
        - containerPort: 80
      volumeMounts:
        - name: my-first-linstor-volume
          mountPath: /data  # assumed mount path
  volumes:
    - name: my-first-linstor-volume
      persistentVolumeClaim:
        claimName: my-first-linstor-volume  # assumed claim name
```
Deployment of the scheduler can be disabled using:
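For example, as a Helm values fragment (a sketch; the `stork.enabled` option name is an assumption about the operator chart):

```yaml
stork:
  enabled: false
```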