LinstorSatelliteConfiguration

This resource controls the state of one or more LINSTOR® satellites.
.spec

Configures the desired state of satellites.
.spec.nodeSelector

Selects which nodes the LinstorSatelliteConfiguration should apply to. If empty, the configuration applies to all nodes.
Example

This example sets the AutoplaceTarget property to "no" on all nodes labelled piraeus.io/autoplace: "no".
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: disabled-nodes
spec:
  nodeSelector:
    piraeus.io/autoplace: "no"
  properties:
    - name: AutoplaceTarget
      value: "no"
.spec.nodeAffinity

Selects which nodes the LinstorSatelliteConfiguration should apply to. If empty, the configuration applies to all nodes.
When this is used together with .spec.nodeSelector, both need to match in order for the configuration to apply to a node.
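As an illustration, here is a minimal sketch combining both mechanisms (the example.com/storage label is hypothetical): this configuration applies only to Linux nodes that also carry the label.

apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: selector-and-affinity
spec:
  # Both the nodeSelector and the nodeAffinity term below must match
  # for this configuration to apply to a node.
  nodeSelector:
    example.com/storage: "yes"   # hypothetical label
  nodeAffinity:
    nodeSelectorTerms:
      - matchExpressions:
          - key: kubernetes.io/os
            operator: In
            values:
              - linux
  properties:
    - name: AutoplaceTarget
      value: "no"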
Example

This example sets the AutoplaceTarget property to "no" on all control-plane nodes:
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: disabled-nodes
spec:
  nodeAffinity:
    nodeSelectorTerms:
      - matchExpressions:
          - key: node-role.kubernetes.io/control-plane
            operator: Exists
  properties:
    - name: AutoplaceTarget
      value: "no"
.spec.properties

Sets the given properties on the LINSTOR Satellite level.
The property value can either be set directly using value, inherited from the Kubernetes Node's metadata using valueFrom, or expanded from the Kubernetes Node's metadata using expandFrom. Metadata fields are specified using the same syntax as the Downward API for Pods.
Using expandFrom allows for field references that match more than one field. Either specify a field that is already a map (metadata.labels or metadata.annotations), or select a subset by using * at the end of a key. Using * will select the keys and values matching the prefix up to the * character. There are two ways to use expandFrom:
- Setting a nameTemplate will create one property per matched field. The property name is generated by taking the name field and appending the expanded nameTemplate. nameTemplate supports the following expansions: $1 is replaced with the field key, i.e. the part matched by the * character; $2 is replaced with the value of the matched field. The valueTemplate is expanded using the same replacements, and sets the property value.
- Setting no nameTemplate will create one property using name. The value of the property is the joined expansion of the valueTemplate field for every matched field. See above for supported expansions. The result is joined using the optional delimiter value.
In addition, setting optional to true means the property is only applied if the value is not empty. This is useful when the property value is inherited from the node's metadata and the referenced field may be absent on some nodes.
Example

This example sets the following properties on every satellite:

- PrefNic (the preferred network interface) is always set to default-ipv6.
- Aux/example-property (an auxiliary property, unused by LINSTOR itself) takes its value from the piraeus.io/example label of the Kubernetes Node. If a node has no piraeus.io/example label, the property value will be "".
- AutoplaceTarget (if set to no, excludes the node from LINSTOR's Autoplacer) takes its value from the piraeus.io/autoplace annotation of the Kubernetes Node. If a node has no piraeus.io/autoplace annotation, the property will not be set.
- Aux/role/ copies all node-role.kubernetes.io/* label keys and values. For example, a worker node with the node-role.kubernetes.io/worker: "true" label will have Aux/role/worker set to "true".
- Aux/features copies the names of all feature.example.com/* label keys into the value, joined by ",". For example, a node with feature.example.com/gpu and feature.example.com/storage will have Aux/features set to "gpu,storage".
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: ipv6-nodes
spec:
  properties:
    - name: PrefNic
      value: "default-ipv6"
    - name: Aux/example-property
      valueFrom:
        nodeFieldRef: metadata.labels['piraeus.io/example']
    - name: AutoplaceTarget
      valueFrom:
        nodeFieldRef: metadata.annotations['piraeus.io/autoplace']
      optional: true
    - name: Aux/role/
      expandFrom:
        nodeFieldRef: metadata.labels['node-role.kubernetes.io/*']
        nameTemplate: "$1"
        valueTemplate: "$2"
    - name: Aux/features
      expandFrom:
        nodeFieldRef: metadata.labels['feature.example.com/*']
        valueTemplate: "$1"
        delimiter: ","
.spec.storagePools

Configures LINSTOR Storage Pools.
Every Storage Pool needs at least a name and a type. Types are specified by setting a (potentially empty) value on the matching key. Available types are:

- lvmPool: Configures an LVM Volume Group as storage pool. Defaults to using the storage pool name as the VG name. Can be overridden by setting volumeGroup.
- lvmThinPool: Configures an LVM Thin Pool as storage pool. Defaults to using the storage pool name as the name of the thin pool volume, and the storage pool name prefixed by linstor_ as the VG name. Can be overridden by setting thinPool and volumeGroup.
- filePool: Configures a file-system-based storage pool, using a host directory as the location for the volume files. Defaults to using the /var/lib/linstor-pools/<storage pool name> directory.
- fileThinPool: Configures a file-system-based storage pool. Behaves the same as filePool, except the files will be thinly allocated on file systems that support sparse files.
- zfsPool: Configures a ZFS zpool as storage pool. Defaults to using the storage pool name as the name of the zpool. Can be overridden by setting zPool.
- zfsThinPool: Configures a ZFS zpool as storage pool. Behaves the same as zfsPool, except the contained zvol will be created using sparse reservation.
Optionally, you can configure LINSTOR to automatically create the backing pools: source.hostDevices takes a list of raw block devices, which LINSTOR will prepare as the chosen backing pool.
All storage pools can also be configured with properties. Properties are set on the Storage Pool level. The configuration values have the same form as Satellite Properties.
Example

This example configures these LINSTOR Storage Pools on all satellites:

- An LVM Pool named vg1. It will use the VG vg1, which needs to exist on the nodes already.
- An LVM Thin Pool named vg1-thin. It will use the thin pool vg1/thin, which also needs to exist on the nodes.
- An LVM Pool named vg2-from-raw-devices. It will use the VG vg2, which will be created on demand from the raw devices /dev/sdb and /dev/sdc if it does not exist already. In addition, it sets the StorDriver/LvcreateOptions property to -i 2, which causes every created LV to be striped across 2 PVs.
- A File System Pool named fs1. It will use the /var/lib/linstor-pools/fs1 directory on the host, creating the directory if necessary.
- A File System Pool named fs2, using sparse files. It will use the custom /mnt/data directory on the host.
- A ZFS Pool named zfs1. It will use the zpool zfs1, which needs to exist on the nodes already.
- A ZFS Thin Pool named zfs2. It will use the zpool zfs-thin2, which will be created on demand from the raw device /dev/sdd.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: storage-satellites
spec:
  storagePools:
    - name: vg1
      lvmPool: {}
    - name: vg1-thin
      lvmThinPool:
        volumeGroup: vg1
        thinPool: thin
    - name: vg2-from-raw-devices
      lvmPool:
        volumeGroup: vg2
      source:
        hostDevices:
          - /dev/sdb
          - /dev/sdc
      properties:
        - name: StorDriver/LvcreateOptions
          value: '-i 2'
    - name: fs1
      filePool: {}
    - name: fs2
      fileThinPool:
        directory: /mnt/data
    - name: zfs1
      zfsPool: {}
    - name: zfs2
      zfsThinPool:
        zPool: zfs-thin2
      source:
        hostDevices:
          - /dev/sdd
.spec.internalTLS

Configures a TLS secret used by the LINSTOR Satellites to:

- Validate the certificate of the LINSTOR Controller, that is, the Controller must have certificates signed by ca.crt.
- Provide a server certificate for authentication by the LINSTOR Controller, that is, tls.key and tls.crt must be accepted by the Controller.

To configure TLS communication between Satellite and Controller, LinstorCluster.spec.internalTLS must be set accordingly.
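For orientation, here is a minimal sketch of the matching controller-side resource, assuming a cert-manager Issuer is used; the piraeus-root name is a placeholder, and the exact schema is documented in the LinstorCluster reference.

apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec:
  internalTLS:
    certManager:
      kind: Issuer
      name: piraeus-root   # placeholder Issuer name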
Setting a secretName is optional; it defaults to <node-name>-tls, where <node-name> is replaced with the name of the Kubernetes Node.
Optionally, a reference to a cert-manager Issuer can be provided to let the operator create the required secret.
Example

This example creates a manually provisioned TLS secret and references it in the LinstorSatelliteConfiguration, setting it for all nodes.
---
apiVersion: v1
kind: Secret
metadata:
  name: my-node-tls
  namespace: piraeus-datastore
data:
  ca.crt: LS0tLS1CRUdJT...
  tls.crt: LS0tLS1CRUdJT...
  tls.key: LS0tLS1CRUdJT...
---
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: satellite-tls
spec:
  internalTLS:
    secretName: my-node-tls
Example

This example sets up automatic creation of the LINSTOR Satellite TLS secrets using a cert-manager Issuer named piraeus-root.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: satellite-tls
spec:
  internalTLS:
    certManager:
      kind: Issuer
      name: piraeus-root
.spec.ipFamilies

Configures the IP family (IPv4 or IPv6) used to connect to the LINSTOR Satellite.
If unset, the LINSTOR Controller will attempt to reach the LINSTOR Satellite via all recognized addresses in the Satellite Pods' status. If set, the LINSTOR Controller will only attempt to reach the LINSTOR Satellite via addresses matching the listed IP families.
Valid values are IPv4 and IPv6.
Example

This example configures the LINSTOR Controller to only use IPv4, even in a dual-stack cluster.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: ipv4-only
spec:
  ipFamilies:
    - IPv4
.spec.podTemplate

Configures the Pod used to run the LINSTOR Satellite.
The template is applied as a patch (see .spec.patches) to the default resources, so it can be "sparse".
Example

This example configures a resource request of cpu: 100m on the satellite, and also enables host networking.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: resource-and-host-network
spec:
  podTemplate:
    spec:
      hostNetwork: true
      containers:
        - name: linstor-satellite
          resources:
            requests:
              cpu: 100m
.spec.patches

The given patches will be applied to all resources controlled by the operator. The patches are forwarded to kustomize internally, and take the same format.
The unpatched resources are available in the subdirectories of the pkg/resources/satellite directory.
Warning

No checks are run on the result of user-supplied patches: the resources are applied as-is. Patching a fundamental aspect, such as removing a specific volume from a container, may lead to a degraded cluster.
Example

This example configures the LINSTOR Satellite to use the "TRACE" log level, creating very verbose output.
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: all-satellites
spec:
  patches:
    - target:
        kind: ConfigMap
        name: satellite-config
      patch: |-
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: satellite-config
        data:
          linstor_satellite.toml: |
            [logging]
            linstor_level = "TRACE"
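Because the patches take the same format as kustomize patches, a JSON 6902 patch with the same target selector should also work. Here is a sketch under that assumption, switching the log level via an add operation (which, per RFC 6902, also replaces the key if it already exists):

apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: all-satellites-json6902
spec:
  patches:
    - target:
        kind: ConfigMap
        name: satellite-config
      patch: |-
        # JSON 6902 operations, written as YAML
        - op: add
          path: /data/linstor_satellite.toml
          value: |
            [logging]
            linstor_level = "DEBUG"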
.status

Reports the actual state of the cluster.
.status.conditions

The Operator reports the current state of the Satellite Configuration through a set of conditions. Conditions are identified by their type.

| type    | Explanation                                                            |
|---------|------------------------------------------------------------------------|
| Applied | The given configuration was applied to all LinstorSatellite resources. |