iSCSI (Internet Small Computer Systems Interface) is a network protocol that allows SCSI commands to be sent over TCP/IP networks. It enables the linking of data storage facilities, allowing users to access remote storage devices as if they were local.
OpenShift can utilize iSCSI for persistent storage, which is essential for stateful applications.
Block storage via iSCSI is preferred for applications requiring high data integrity (e.g., databases). It provides better performance and reliability than file-based systems such as NFS. For enhanced availability, OpenShift can be configured to use multipath connections to iSCSI targets, ensuring continuous access even if one path fails.
In this tutorial, I will show how to configure iSCSI volumes in the OpenShift cluster.
Requirements:
- Red Hat OpenShift cluster (SNO here);
- oc;
- storage array to expose LUNs.
1. Enabling iSCSI service
Depending on the storage you’re using, remember to create a dedicated LUN (here it is rho-sno-iscsi-01), add a new host (matching the IQN of your node), and map it to this LUN. Add read/write permissions. Below you can find a few screenshots from a Synology NAS.
By default, the iSCSI daemon is disabled on the worker nodes. We need to enable it.
1. Log in to the Red Hat OpenShift shell session and list all available nodes. In this case, we have only one node because it’s an SNO deployment.
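A minimal sketch of this step:

```shell
# List all nodes in the cluster; an SNO deployment shows a single entry
oc get nodes
```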
2. Access the node using the debug command and change the root directory to use the host binaries.
oc debug node/rho-sno-01
chroot /host
3. Note the iSCSI Initiator Name of this host. After iSCSI is enabled on the worker nodes, the storage array can discover it.
cat /etc/iscsi/initiatorname.iscsi
4. As I mentioned, the iscsid service is not enabled, and we need to start it. The easiest way is to start the service directly on the node (or nodes), but here, in OpenShift, we will use a MachineConfig to enable the iSCSI service.
systemctl status iscsid
systemctl start iscsid ## in this case, not the preferred solution
5. MachineConfig manages the configuration and updates of the operating system on nodes within an OpenShift cluster.
Create a MachineConfig YAML file and apply it. After that, the node is going to reboot. When the node is back up, confirm that the new MachineConfig is present.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: enable-iscsid-sno
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - enabled: true
          name: "iscsid.service"
oc apply -f mc-iscsi.yaml
oc get mc
6. Now the iscsid service is enabled.
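To confirm the change after the reboot, the service state can be checked from a debug shell on the node (a sketch, using the same node name as earlier):

```shell
# Open a debug shell on the node and switch to the host binaries
oc debug node/rho-sno-01
chroot /host

# iscsid should now be enabled and running
systemctl is-enabled iscsid
systemctl status iscsid
```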
2. Preparing PV and PVC
1. Prepare a PersistentVolume YAML file. Apply it and check that it has been created. You can also look up the PersistentVolume in the Red Hat OpenShift web console.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ocp-sno-iscsi-01 # name of the iSCSI PV
spec:
  capacity:
    storage: 10Gi # size of the PV created on the storage
  accessModes:
    - ReadWriteOnce # type of the access mode
  iscsi:
    targetPortal: [Storage_IP_Address]:3260
    iqn: iqn.2000-01.com.[IQN_of_the_target]
    lun: 1 # depending on the storage, it can be different, try with 0 or 1
    fsType: ext4 # filesystem created on the PV
    readOnly: false # disable read-only on the PV
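Assuming the manifest above is saved as pv-iscsi.yaml (the filename is an assumption), it can be applied and verified like this:

```shell
# Create the PersistentVolume and confirm it exists
oc apply -f pv-iscsi.yaml
oc get pv pv-ocp-sno-iscsi-01
```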
2. Prepare a PersistentVolumeClaim YAML file. Apply it and check that it has been created. You can also look up the PersistentVolumeClaim in the Red Hat OpenShift web console.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ocp-sno-iscsi-01 # name of the PVC
spec:
  accessModes:
    - ReadWriteOnce # type of the access mode
  resources:
    requests:
      storage: 10Gi # storage request
3. List the available PersistentVolume and PersistentVolumeClaim objects.
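For example (output columns will vary with your cluster):

```shell
# Show the PV and the PVC in one call; the PVC should report STATUS Bound
oc get pv,pvc
```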
3. Creating a Pod using new iSCSI volume
1. Prepare a configuration YAML file for a Pod deployment. Apply it and check that it has been created.
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-iscsi
spec:
  containers:
    - name: ubuntu-iscsi
      image: ubuntu
      command: ["sleep", "3600"]
      volumeMounts:
        - name: iscsi-nas
          mountPath: /mnt/iscsi-storage-from-nas # folder created in the Pod to mount the iSCSI volume
  volumes:
    - name: iscsi-nas
      persistentVolumeClaim:
        claimName: pvc-ocp-sno-iscsi-01 # name of the dedicated PVC
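Assuming the Pod manifest above is saved as pod-ubuntu-iscsi.yaml (the filename is an assumption):

```shell
# Create the Pod and confirm it reaches the Running state
oc apply -f pod-ubuntu-iscsi.yaml
oc get pod ubuntu-iscsi
```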
2. Log in to the newly created pod and display the disk usage.
/dev/sdd is our iSCSI disk, mounted at /mnt/iscsi-storage-from-nas in the Pod.
oc exec -it ubuntu-iscsi -- /bin/bash
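Inside the pod shell, the mounted volume can be checked like this (mount path as defined in the Pod spec; the device name may differ on your system):

```shell
# Show the filesystem backing the iSCSI mount point
df -h /mnt/iscsi-storage-from-nas
```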
3. Checking iSCSI configuration on the node
1. On the OpenShift node, we can check all block devices attached to the machine.
One of them is sdd: a mapped 10 GB disk from the NAS. There is also a new PersistentVolume, pv-ocp-sno-iscsi-01, and a target on the NAS.
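A sketch of listing the block devices from the node’s debug shell:

```shell
# lsblk shows all block devices; the iSCSI LUN appears here as sdd
lsblk
```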
2. We can use the df -Th command to check data usage on /dev/sdd.
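For example (device name sdd as observed on this node):

```shell
# Show filesystem type and usage for the iSCSI-backed device
df -Th | grep sdd
```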
3. The iscsiadm command is used to discover and log in to iSCSI targets. We can use it to list all currently logged-in sessions. With “grep LUN” I limit the output.
iscsiadm -m session -P 3 | grep LUN
4. Also, we can discover all available targets from a discovery portal. On my NAS, I have two targets. One of them is dedicated to the OpenShift cluster.
iscsiadm -m discovery -t st -p [NAS/Target_IP_Address]
5. And here is part of the output after logging in to the dedicated target using the iscsiadm command.
iscsiadm -m node -T [iqn_of_the_target] --login
Summary
iSCSI serves as a vital technology for providing block storage in OpenShift environments, enabling efficient management of persistent data across containerized applications. By leveraging iSCSI’s capabilities, organizations can enhance their data management strategies within cloud-native architectures.
Keep looking at my blog, new posts are coming! There are so many more great features to explore 😉