
Implementing High Availability SafeLine WAF on K3s (Part 3)

Refer to my previous two posts for deploying the K3s cluster and the nginx-ingress service.

In this article, we are going to install the NFS provisioner component and SafeLine WAF via Helm charts.

Image source: Vishnu ks

Installing the nfs-provisioner component via Helm chart

The nfs-subdir-external-provisioner service is a third-party component used in K8s or K3s clusters to automatically provision directories on an NFS share as persistent storage for the cluster. This document demonstrates how to deploy it with a Helm chart and how to create a storage class for the cluster.

Add and update the Helm public repository

  1. Check the Helm version:
   helm version

  2. List all added Helm repositories:
   helm repo list

  3. Add the nfs-subdir Helm repository:
   helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

  4. Check added Helm repositories:
   helm repo list

   NAME                                  URL
   nfs-subdir-external-provisioner       https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

  5. Update Helm repositories:
   helm repo update

Check NFS-related Helm chart versions

  1. Search for the nfs-subdir-external-provisioner chart:
   helm search repo nfs-subdir-external-provisioner | grep nfs-subdir-external-provisioner

   nfs-subdir-external-provisioner/nfs-subdir-exte...      4.0.10          4.0.2           nfs-subdir-external-provisioner is an automatic...
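Optionally, if you want to pin a particular chart version later on, the same search can list every published version of the chart:

   helm search repo nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --versions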

Install NFS client

  1. Install NFS client:
   apt install -y nfs-common

Remark: The NFS client must be installed on all cluster nodes in order to use NFS as backend storage.
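Before installing the provisioner, it is also worth confirming that the nodes can actually reach the NFS export. A minimal check, assuming the same NFS server address used in the install command below:

   # List the exports offered by the NFS server (showmount ships with nfs-common)
   showmount -e 192.168.1.103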

Install the NFS client provisioner

  1. Install using Helm:
   helm install --namespace kube-system nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
   --set nfs.server=192.168.1.103 \
   --set nfs.path=/nfs_data/waf-lan-k3s-data \
   --set image.repository=registry.cn-hangzhou.aliyuncs.com/k8s_sys/nfs-subdir-external-provisioner \
   --set image.tag=v4.0.2 \
   --set storageClass.name=cfs-client \
   --set storageClass.defaultClass=true \
   --set 'tolerations[0].operator=Exists' \
   --set 'tolerations[0].effect=NoSchedule'

Note: This deploys a Helm release named nfs-subdir-external-provisioner in the cluster’s kube-system namespace. The storage class created for the cluster is: cfs-client.
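As an optional sanity check, you can confirm that the release itself was recorded by Helm:

   helm list -n kube-system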

Parameter options (a values-file equivalent is sketched after this list):

  • nfs.server: The IP address of the NFS server.
  • nfs.path: The directory path shared by the NFS server.
  • storageClass.name: The name of the storage class to set for the cluster.
  • storageClass.defaultClass: Whether to set this as the default storage class for the cluster.
  • tolerations: Allow this service to run on nodes with scheduling taints, such as master nodes.
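For reference, the same overrides could also be kept in a values file instead of a long chain of --set flags. This is a minimal sketch using the same server, path, image mirror, and storage class name as the command above (the file name nfs-values.yaml is arbitrary):

   # nfs-values.yaml -- equivalent to the --set flags used above
   nfs:
     server: 192.168.1.103
     path: /nfs_data/waf-lan-k3s-data
   image:
     repository: registry.cn-hangzhou.aliyuncs.com/k8s_sys/nfs-subdir-external-provisioner
     tag: v4.0.2
   storageClass:
     name: cfs-client
     defaultClass: true
   tolerations:
     - operator: Exists
       effect: NoSchedule

It would then be installed with helm install --namespace kube-system nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -f nfs-values.yaml.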

Verify deployment

  1. Check deployed pods:
   kubectl get pod -n kube-system

   NAME                                               READY   STATUS      RESTARTS        AGE
   nfs-subdir-external-provisioner-6f5f6d764b-2z2ns   1/1     Running     3 (6d22h ago)   17d

  2. Check storage classes:
   kubectl get sc

   NAME                   PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
   cfs-client (default)   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate              true                   17d
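Optionally, you can confirm that dynamic provisioning actually works by creating a throwaway claim against the new storage class (the claim name and size here are only examples):

   # test-pvc.yaml -- temporary claim used only to verify provisioning
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: nfs-test-claim
   spec:
     storageClassName: cfs-client    # the class created by the provisioner
     accessModes:
       - ReadWriteMany               # NFS volumes can be shared across pods
     resources:
       requests:
         storage: 100Mi

Apply it, check that the claim reaches the Bound state, then clean it up:

   kubectl apply -f test-pvc.yaml
   kubectl get pvc nfs-test-claim
   kubectl delete -f test-pvc.yaml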

Implementation of SafeLine WAF via Helm

SafeLine WAF officially supports only standalone Docker container deployment. However, the community offers a Helm chart implementation, which is the approach followed in this document. The third-party chart repository is the one added in the commands below.

  1. Get the Helm chart tgz package on the main node:
   cd /root/
   helm repo add safeline "https://g-otkk6267-helm.pkg.coding.net/Charts/safeline"
   helm repo update
   helm fetch --version 5.2.0 safeline/safeline
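Optionally, before writing the overrides in the next step, you can review the chart’s default values to see which keys are available:

   helm show values safeline/safeline --version 5.2.0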

  2. Create the values.yaml file:
   detector:
     image:
       registry: 'swr.cn-east-3.myhuaweicloud.com/chaitin-safeline'
       repository: safeline-detector
   tengine:
     image:
       registry: 'swr.cn-east-3.myhuaweicloud.com/chaitin-safeline'
       repository: safeline-tengine

  3. Install SafeLine WAF in the K3s cluster:
   cd /root/
   helm install safeline --namespace safeline safeline-5.2.0.tgz --values values.yaml --create-namespace

  4. Upgrade SafeLine WAF:
   cd /root/
   helm upgrade -n safeline safeline safeline-5.2.0.tgz --values values.yaml

  5. Check the pod status:
   kubectl get pod -n safeline

   NAME                                 READY   STATUS      RESTARTS      AGE
   safeline-database-0                  1/1     Running     0             21h
   safeline-bridge-688c56547c-stdnd     1/1     Running     0             20h
   safeline-fvm-54fbf6967c-ns8rg        1/1     Running     0             20h
   safeline-luigi-787946d84f-bmzkf      1/1     Running     0             20h
   safeline-detector-77fbb59575-btwpl   1/1     Running     0             20h
   safeline-mario-f85cf4488-xs2kp       1/1     Running     1 (20h ago)   20h
   safeline-tengine-8446745b7f-wlknr    1/1     Running     0             20h
   safeline-mgt-667f9477fd-mtlpj        1/1     Running     0             20h
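If any pod stays in Pending or CrashLoopBackOff instead, the usual first checks are the pod events and logs (replace <pod-name> with the failing pod):

   kubectl describe pod -n safeline <pod-name>
   kubectl logs -n safeline <pod-name>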

  6. Check service exposure:
   kubectl get svc -n safeline

   NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
   safeline-tengine            ClusterIP   10.43.1.38      <none>        65443/TCP,80/TCP                             15d
   safeline-luigi              ClusterIP   10.43.119.40    <none>        80/TCP                                       15d
   safeline-fvm                ClusterIP   10.43.162.1     <none>        9004/TCP,80/TCP                              15d
   safeline-detector           ClusterIP   10.43.248.81    <none>        8000/TCP,8001/TCP                            15d
   safeline-mario              ClusterIP   10.43.156.13    <none>        3335/TCP                                     15d
   safeline-pg                 ClusterIP   10.43.176.51    <none>        5432/TCP                                     15d
   safeline-tengine-nodeport   NodePort    10.43.219.148   <none>        80:30080/TCP,443:30443/TCP                   15d
   safeline-mgt                NodePort    10.43.243.181   <none>        1443:31443/TCP,80:32009/TCP,8000:30544/TCP   15d

SafeLine WAF has been successfully implemented via Helm! The SafeLine WAF console can be accessed via a K3s node IP plus the NodePort exposed by safeline-mgt, e.g. https://192.168.1.9:31443.
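As a final optional check from any machine that can reach the node, you can confirm the console answers over HTTPS (the -k flag skips certificate verification, since the console typically serves a self-signed certificate):

   curl -kI https://192.168.1.9:31443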