Step 3: Configure the Databases with Affinity
1. Create a Persistent Volume (PV) and Persistent Volume Claim (PVC): Create a YAML file named postgres-pv-pvc.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    # hostPath volumes are node-local: fine for single-node testing,
    # but ReadWriteMany will not provide shared storage across nodes.
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
Apply the manifest:
kubectl apply -f postgres-pv-pvc.yaml
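The claim should bind to the volume almost immediately. Confirm that both report a Bound status before moving on:
kubectl get pv postgres-pv
kubectl get pvc postgres-pvc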
2. Deploy the PostgreSQL Cluster: Create a YAML file (e.g., postgres-cluster.yaml) to deploy your PostgreSQL database with affinity. The manifest below assumes the Crunchy Postgres Operator (PGO) v5, whose custom resource is PostgresCluster in the postgres-operator.crunchydata.com/v1beta1 API group; adjust the fields (including postgresVersion) to match your installed operator version:
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: my-postgres-cluster
spec:
  postgresVersion: 16
  instances:
    - name: my-postgres
      replicas: 2
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/worker
                    operator: Exists
      dataVolumeClaimSpec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        # volumeName references a PersistentVolume, binding this claim
        # to the PV created above.
        volumeName: postgres-pv
Apply the manifest:
kubectl apply -f postgres-cluster.yaml
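Once the operator reconciles the resource, the database pods should appear. Assuming PGO's standard pod labels (it labels pods with postgres-operator.crunchydata.com/cluster), you can watch them come up with:
kubectl get pods -l postgres-operator.crunchydata.com/cluster=my-postgres-cluster -w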
Final Steps
Verify the Installation: Check that everything is running properly:
kubectl get pods
kubectl get pvc
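To confirm that the affinity rule took effect, check which nodes the Postgres pods landed on; the NODE column should list only nodes carrying the worker label:
kubectl get pods -o wide
kubectl get nodes -l node-role.kubernetes.io/worker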
Understanding matchExpressions
In Kubernetes, matchExpressions are part of the node affinity configuration
that allows you to specify rules for scheduling pods onto nodes based on
node labels. In the context of Crunchy Data for Postgres, you can use
matchExpressions to define affinity rules for where the Postgres pods should
run.
Here's a breakdown of how matchExpressions can be structured:
Structure of matchExpressions
Each matchExpression has three main fields:
1. key: The label key that you want to match against.
2. operator: The relationship between the key and the values. Possible operators include:
   - In: The node must have the specified key with one of the values.
   - NotIn: The node must not have the specified key with any of the values.
   - Exists: The node must have the specified key (regardless of its value).
   - DoesNotExist: The node must not have the specified key.
   - Gt: The value of the key must be greater than the specified value (numeric values only; see the sketch after this list).
   - Lt: The value of the key must be less than the specified value (numeric values only).
3. values: An array of values that the key is compared against (used with the In and NotIn operators).
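The numeric operators compare a label's value as an integer and take exactly one entry in values. A minimal sketch, assuming a hypothetical cpu-count node label:
- matchExpressions:
    - key: "cpu-count"   # hypothetical label; Gt parses the value as an integer
      operator: Gt
      values:
        - "8"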
Example of matchExpressions
Here’s an example configuration for nodeAffinity that uses
matchExpressions:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: "disktype"
              operator: In
              values:
                - "ssd"
                - "hdd"
            - key: "environment"
              operator: NotIn
              values:
                - "development"
Explanation of Example
- The first matchExpression checks whether the node has the label disktype with a value of either ssd or hdd.
- The second matchExpression ensures that the node does not have the label environment set to development.
This configuration ensures that Postgres pods are scheduled on nodes that
meet both criteria.
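These rules only match nodes that actually carry the labels. If your nodes are not labeled yet, you can add the example labels with kubectl (the node name here is a placeholder):
kubectl label nodes worker-1 disktype=ssd environment=production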
Summary
When configuring matchExpressions for Crunchy Data Postgres, you can use
them to fine-tune the scheduling of your database pods based on node
labels, ensuring they run in the appropriate environments or on nodes with
specific characteristics.
Common Node Label Keys
In Crunchy Data for Postgres, the specific keys available for node affinity
typically depend on the Kubernetes environment and how the nodes are
labeled. However, some common keys that might be used in configurations
include:
1. disktype: Indicates the type of disk (e.g., ssd, hdd).
2. environment: Specifies the environment (e.g., production, staging, development).
3. zone: The availability zone of the node; on cloud providers this is usually the well-known label topology.kubernetes.io/zone.
4. region: The geographical region of the node; usually topology.kubernetes.io/region.
5. instance-type: The machine type (e.g., m5.large, c5.xlarge); usually the well-known label node.kubernetes.io/instance-type.
6. app: Used to tag nodes based on application deployment (e.g., postgres, web).
7. role: Defines the role of the node (e.g., master, replica).
8. tier: Specifies the tier level of the service (e.g., frontend, backend).
Custom Keys
Organizations often define custom keys based on their infrastructure needs,
such as:
- dedicated: Indicates that the node is reserved for a specific workload.
- backup: Marks nodes that can be used for backups.
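Custom labels are attached the same way as any other node label; for example (node name and values here are illustrative):
kubectl label nodes worker-2 dedicated=database backup=true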
Finding Available Keys
To see the specific labels available in your Kubernetes cluster, you can run:
kubectl get nodes --show-labels
This will list all nodes along with their labels, helping you identify which keys
are applicable for your setup.
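The full listing can be noisy. To inspect only the keys you care about, kubectl can print selected labels as columns with the -L flag:
kubectl get nodes -L disktype -L environment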
Control Plane and Worker Node Labels
In a Kubernetes cluster, you typically have control plane nodes and worker
nodes, each serving different roles. The labels (or keys) associated with
these nodes can help in managing scheduling, resource allocation, and
service deployment. Here's an overview:
Control Plane Nodes
Control plane nodes manage the Kubernetes cluster and usually run
components like the API server, controller manager, and scheduler. Common
labels for control plane nodes might include:
1. node-role.kubernetes.io/control-plane: The standard label for control plane nodes on current clusters.
2. node-role.kubernetes.io/master: The legacy equivalent; deprecated, and no longer applied by kubeadm as of Kubernetes 1.24.
3. kubernetes.io/role: On older setups, often set to master or control-plane.
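To see which of your nodes carry the control plane label, select on it directly. Note that kubeadm also taints control plane nodes (node-role.kubernetes.io/control-plane:NoSchedule), which keeps ordinary workloads off them even without affinity rules:
kubectl get nodes -l node-role.kubernetes.io/control-plane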
Worker Nodes
Worker nodes run the actual applications (pods). Common labels for worker
nodes might include:
1. node-role.kubernetes.io/worker: Indicates that the node is a worker. kubeadm does not apply this label automatically; it is typically added by an administrator (an example follows this list) or by distributions such as OpenShift.
2. kubernetes.io/role: Typically set to worker or node.
3. disktype: Indicates the type of disk, useful for storage-aware scheduling.
4. zone: Represents the availability zone.
5. instance-type: Specifies the instance type (e.g., t2.medium, m5.large).
6. environment: Specifies the environment (e.g., production, development).
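Because kubeadm does not add the worker role label on its own, the affinity rule used in Step 3 only matches nodes where you have applied it yourself (role labels conventionally carry an empty value):
kubectl label nodes worker-1 node-role.kubernetes.io/worker=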
Example Labels
Here’s how you might use these labels in a matchExpressions configuration:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: "node-role.kubernetes.io/control-plane"
              operator: DoesNotExist
            - key: "node-role.kubernetes.io/worker"
              operator: Exists
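Both expressions sit in the same nodeSelectorTerm, so a node must satisfy both of them; multiple nodeSelectorTerms, by contrast, are ORed. If a hard requirement is too strict, nodeAffinity also supports a weighted soft preference; a minimal sketch:
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100          # higher weight = stronger preference
        preference:
          matchExpressions:
            - key: "disktype"
              operator: In
              values:
                - "ssd"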
Summary
- Control Plane Nodes: Typically labeled to indicate their role in managing the cluster.
- Worker Nodes: Labeled for application deployment, resource capabilities, and environment categorization.
Using these labels effectively helps in deploying applications where they can
perform optimally based on the node's role and capabilities.