Deploying Management Node to Cloud

A management node deployed in cloud is a Kubernetes POD running a single Docker container with the 'platform-core' image. All management node PODs share a single persistent volume, and each of them also has its own private persistent volume. A private persistent volume must be bound to a particular cluster member, which is why a StatefulSet is used for the deployment and scaling of cloud management node PODs.
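
After the deployment in Step 7, you can inspect the resulting StatefulSet and its persistent volume claims with the standard commands below. This is only an illustration; the exact resource names depend on the chart (for example, the shared claim oss-shared-storage and the POD oss-node-0 appear in later steps of this instruction).

# kubectl get statefulset
# kubectl get pvc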

The management node helm chart includes these configuration parameters:

  • replicaCount: Number of PODs in a cluster. Default: 1
  • oss.db.secret: Secret to use for connections to the OSS DB. Default: oss-db
  • ossMnHost: Hostname of the on-premise management node. Default: 127.0.0.1
  • adminPswd: Password of the user 'admin'. No default value.
  • podNetworkCidr: CIDR to use for POD IP addresses. No default value.
  • provideCredentials: If set, connections to the OSS and BSS databases occur without initialization. Use this flag if there is an on-premise MN deployed. Default: false
  • dockerrepo: Repository from which Docker images are pulled. Default: sandbox.repo.int.zone
  • resources.requests.cpu: Minimum CPU cores to use for each container. Default: 0.5
  • resources.requests.memory: Minimum RAM to use for each container. Default: 1Gi
  • resources.limits.cpu: Maximum CPU cores to use for each container. Default: 8
  • resources.limits.memory: Maximum RAM to use for each container. Default: 8Gi
  • storage.shared.class: Volume plugin to use for provisioning shared storage. Default: azurefile
  • storage.shared.size: Shared storage size. Default: 2Gi
  • storage.private.class: Volume plugin to use for provisioning private storage. Default: default
  • storage.private.size: Private storage size. Default: 256Mi
  • bssUI: Billing UI URL. Default: https://bss-www:8443
  • bss.secret: Secret to use for Billing OpenAPI calls. Default: bss
  • bss.db.secret: Secret to use for connections to the BSS DB. Default: bss-db
  • pbaHost: BackNet IP address of the billing application node. Required only when the provideCredentials parameter is set to true. No default value.
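
As with any Helm chart, these parameters can be passed either with --set flags (as in Step 7 below) or collected in a values file. The following is a minimal, hypothetical sketch of overriding several of the parameters above with a values file; the values shown are illustrative only, and in practice the file is combined with the remaining flags from Step 7.

# cat > oss-values.yaml <<EOF
replicaCount: 2
resources:
  requests:
    cpu: 1
    memory: 2Gi
storage:
  shared:
    size: 4Gi
EOF
# helm install a8n/oss --name oss --version <version> -f oss-values.yaml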

Step 1: Configure azurefile storage

Check the UID and GID mount options of the 'azurefile' storage class, which is used for dynamic provisioning of shared persistent storage.

  1. Connect to the CloudBlue Commerce management node (MN) using SSH as root and run this command:

    # kubectl get sc azurefile -o jsonpath='{.mountOptions}' | xargs echo
  2. If the UID and GID values are not equal to 1001, update them using this command:

    # kubectl patch sc azurefile --type 'json' -p '[{"op": "replace", "path": "/mountOptions", "value": ["dir_mode=0775", "file_mode=0775", "uid=1001", "gid=1001"]}]'
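
  3. To confirm that the patch was applied, re-run the command from step 1 and check that the output now includes uid=1001 and gid=1001:

    # kubectl get sc azurefile -o jsonpath='{.mountOptions}' | xargs echo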
    

Step 2: Create a secret to use for connection to the OSS database

To create the secret, run this command on the MN:

# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: oss-db
type: Opaque
data:
  host: `echo -n '<oss_db_host>' | base64 -w0`
  dbname: `echo -n '<oss_db_name>' | base64 -w0`
  username: `echo -n '<oss_db_username>' | base64 -w0`
  password: `echo -n '<oss_db_password>' | base64 -w0`
EOF

where:

  • <oss_db_host> is the BackNet IP address of the OSS database host.
  • <oss_db_name> is the name of the OSS database.
  • <oss_db_username>, <oss_db_password> are the user name and password to use for connecting to the OSS database.
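
To verify that the secret stores the intended values, decode any of its keys; for example, to print the stored host:

# kubectl get secret oss-db -o jsonpath='{.data.host}' | base64 -d

The same check works for the dbname, username, and password keys, and for the secrets created in the next two steps.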

Step 3: Create a secret to use for connection to the BSS database

To create the secret, run this command on the MN:

# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: bss-db
type: Opaque
data:
  host: `echo -n '<bss_db_host>' | base64 -w0`
  dbname: `echo -n '<bss_db_name>' | base64 -w0`
  username: `echo -n '<bss_db_username>' | base64 -w0`
  password: `echo -n '<bss_db_password>' | base64 -w0`
EOF

where:

  • <bss_db_host> is the BackNet IP address of the BSS database host.
  • <bss_db_name> is the name of the BSS database.
  • <bss_db_username>, <bss_db_password> are the user name and password to use for connecting to the BSS database.

Step 4: Create a secret to use for Billing OpenAPI calls

To create the secret, run this command on the MN:

# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: bss
type: Opaque
data:
  openapi_password: `echo -n '<bss_openapi_password>' | base64 -w0`
EOF

where <bss_openapi_password> is the password to use for Billing OpenAPI calls.
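
At this point, all three secrets should exist; you can confirm this with a single command:

# kubectl get secrets oss-db bss-db bss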

Step 5: Deploy BSS-XMLRPC service for communication with BSS

To deploy the service, run this command on the MN:

# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: bss-xmlrpc
spec:
  type: ExternalName
  externalName: <pba_host_addr>
EOF

where <pba_host_addr> is the BackNet IP address of the billing application node.
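
To check that the service points at the intended address, print its external name:

# kubectl get service bss-xmlrpc -o jsonpath='{.spec.externalName}'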

Step 6: Delete the secret used for communications between UI cluster and MN

A secret with the same name will be created in the next step of this instruction. To avoid errors, delete the existing secret first.

To delete the secret, run this command on the MN:

# kubectl delete secret oss-secret
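
If you are not sure whether the secret exists, the --ignore-not-found flag of kubectl delete makes the command safe to re-run:

# kubectl delete secret oss-secret --ignore-not-found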

Step 7: Deploy management node to cloud

To deploy a management node to cloud, run this command on the MN:

# helm install a8n/oss \
--timeout 900 \
--wait \
--version <version> \
--name oss \
--set dockerrepo=odindevops-a8n-docker.jfrog.io \
--set provideCredentials=true \
--set encryptionKey=<encryption_key> \
--set kernelPubKey=<kernel_pub_key> \
--set kernelPrivKey=<kernel_priv_key> \
--set-file caCert=/usr/local/pem/APS/certificates/ca.pem \
--set-file controllerCert=/usr/local/pem/APS/certificates/controller.pem \
--set ejbPswd=`echo -n '<ejb_password>' | base64 -w0` \
--set adminPswd=`echo -n '<admin_password>' | base64 -w0` \
--set ossMnHost=<mn_private_addr> \
--set podNetworkCidr=<pod_cidr> \
--set bssUI=<bm_ui_url> \
--set pbaHost=<pba_host_addr> \
--set storage.shared.class=azurefile \
--set storage.private.class=managed-premium \
--set storage.shared.size=<shared_pv_size> \
--set storage.private.size=<private_pv_size>

where:

  • <version> is the version of the chart to use for cloud MN deployment. It must be the same as the version of the UI cluster chart.
  • <encryption_key> is the encryption key specified in /usr/local/pem/etc/pleskd.props.
  • <kernel_pub_key> is the public key from /usr/local/pem/etc/pleskd.props.
  • <kernel_priv_key> is the private key from /usr/local/pem/etc/Kernel.conf.
  • <admin_password> is the password of the user 'admin'.
  • <mn_private_addr> is the BackNet IP address of the MN.
  • <pod_cidr> is the AKS POD network CIDR, which is common to all AKS nodes. It is necessary for CORBA communication between the on-premise MN and the MNs deployed in cloud.
  • <bm_ui_url> is the URL of the BSS user interface.
  • <pba_host_addr> is the BackNet IP address of the billing application node.
  • <shared_pv_size> is the size of shared persistent volume.
  • <private_pv_size> is the size of private persistent volume.

Important:
1) The --timeout value of 900 seconds (15 minutes) is enough to deploy a single cloud MN POD with 0.5 CPU and 1 GB of RAM per AKS node. With higher resource limits available per AKS node, deployment is faster.
2) The --wait parameter is necessary for the post-installation routines to complete without errors.

You can list the deployed cloud MN PODs using the following command:

# kubectl get pods -l app=core-ear,release=oss
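
You can also wait until all PODs are ready. A minimal sketch, assuming the StatefulSet is named oss-node (inferred from the POD name oss-node-0 used in Step 8; verify the actual name with 'kubectl get statefulset'):

# kubectl rollout status statefulset/oss-node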

Step 8: Update MN configuration to work with cloud MNs

To update the MN configuration, perform these steps:

  1. Mount the shared storage that will be used by all MNs.
    1. Obtain the name of the shared persistent volume. To do this, run this command on the MN:

      # kubectl get pvc oss-shared-storage -o jsonpath='{.spec.volumeName}'
    2. Get the name of the share and a secret with credentials to mount the share. Run these commands on the MN, replacing <shared_pv_name> with the persistent volume name:

      # kubectl get pv <shared_pv_name> -o jsonpath='{.spec.azureFile.shareName}'
      # kubectl get pv <shared_pv_name> -o jsonpath='{.spec.azureFile.secretName}'
    3. Using the obtained secret, get the account name and key necessary to mount the share. Run these commands on the MN, replacing <secret_name> with the obtained secret name:

      # kubectl get secret <secret_name> -o jsonpath='{.data.azurestorageaccountname}' | base64 -d
      # kubectl get secret <secret_name> -o jsonpath='{.data.azurestorageaccountkey}' | base64 -d
    4. Install the CIFS utilities that are required to mount Azure file shares. Run this command on the MN:

      # yum -y install cifs-utils
    5. Prepare a mount directory. Run this command on the MN:

      # mkdir -p /mnt/<shared_pv_name>
    6. Mount the share to this directory with 'jboss:pemgroup' ownership so that both WildFly and the pa-agent service can access the share. Run this command on the MN:

      # mount -t cifs //<acc_name>.file.core.windows.net/<share_name> /mnt/<shared_pv_name> -o vers=3.0,mfsymlinks,username=<acc_name>,uid=$(getent passwd jboss | cut -d: -f3),gid=$(getent group pemgroup | cut -d: -f3),file_mode=0775,dir_mode=0775,password=<acc_key>

      where

      • <acc_name>, <acc_key> are the account name and key obtained in substep 3.
      • <share_name> is the share name obtained in substep 2.
      • <shared_pv_name> is the persistent volume name obtained in substep 1.

      To find the mounted share, use this command:

      # findmnt | grep '/mnt/<shared_pv_name>'

    7. Copy all installed APS packages and credentials to the mounted share. Run these commands on the MN, replacing <shared_pv_name> with the persistent volume name:

      # rsync -a --ignore-existing /usr/local/pem/APS/ /mnt/<shared_pv_name>/APS/
      # rsync -a --ignore-existing /usr/local/pem/credentials/ /mnt/<shared_pv_name>/credentials/
    8. If you have APS endpoint nodes with applications deployed using Docker, copy Docker client certificates to the share. Run this command on the MN:

      # rsync -a /usr/local/pem/docker/ /mnt/<shared_pv_name>/docker/
      
    9. Bind the directories on MN to the corresponding directories on the share. Run these commands on the MN:

      # mount --bind /mnt/<shared_pv_name>/APS /usr/local/pem/APS
      # mount --bind /mnt/<shared_pv_name>/credentials /usr/local/pem/credentials
      # mount --bind /mnt/<shared_pv_name>/docker /usr/local/pem/docker
  2. Make WildFly on MN a cloud cluster member.
    1. Obtain the JGroups cluster token that is necessary to join the cluster. Run this command on the MN, and copy the value of the auth_value parameter in the output:

      # kubectl exec oss-node-0 -- find /usr/local/pem/wildfly-* -name jboss-cli.sh -execdir /bin/sh {} -c --output-json '/subsystem=jgroups/stack=tcp/protocol=org.jgroups.protocols.AUTH:read-attribute(name=properties)'
    2. Make WildFly a cloud cluster member. Run this command on the MN, replacing <shared_pv_name> and <auth_value> with the persistent volume name and the token obtained in substeps 1.1 and 2.1, respectively:

      # find /usr/local/pem/wildfly-* -name jboss-cli.sh -execdir /bin/sh {} --timeout=30000 'embed-server -c=standalone-full-ha.xml,/subsystem=jgroups/stack=tcp/protocol=TCPPING:remove,/subsystem=jgroups/stack=tcp/protocol=FILE_PING:add(add-index=0),/subsystem=jgroups/stack=tcp/protocol=FILE_PING/property=remove_all_data_on_view_change:add(value=true),/subsystem=jgroups/stack=tcp/protocol=FILE_PING:write-attribute(name=properties.location,value=/mnt/<shared_pv_name>/jboss/jgroups),/subsystem=jgroups/stack=tcp/protocol=org.jgroups.protocols.AUTH:write-attribute(name=properties,value={auth_class="org.jgroups.auth.SimpleToken", auth_value="<auth_value>"})' \;
    3. To apply the changes, run these commands on the MN:

      # service pau restart
      # service pa-agent restart
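
    4. Optionally, check that WildFly joined the cluster by inspecting the FILE_PING discovery location configured in substep 2.2. Each cluster member registers itself there, so after the restart the directory should contain entries for both the on-premise MN and the cloud MN PODs (the exact file layout depends on the JGroups version). Run this command on the MN:

      # ls -R /mnt/<shared_pv_name>/jboss/jgroups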

Step 9: Update billing configuration to work with cloud MNs

To update billing configuration, take these steps:

  1. Obtain the cluster IP address of the cloud management node PODs. Run this command on the MN:

    # kubectl get service oss-proxy -o jsonpath='{.spec.clusterIP}'
  2. Connect to the billing application node using SSH as root.
  3. Run these commands, replacing <apsc_addr> with the cluster IP address:

    # sed -i '/^\[environment\]$/ a\ENV_APS_URI = https://<apsc_addr>:6308' /usr/local/bm/etc/ssm.conf.d/global.conf
    # service pba restart
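
  4. To confirm the change, check that the new line is present in the billing configuration file:

    # grep ENV_APS_URI /usr/local/bm/etc/ssm.conf.d/global.conf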

Step 10: Update UI cluster configuration to work with cloud MNs

To update the UI cluster configuration, take these steps:

  1. Connect to the MN using SSH as root.
  2. Run this command:

    # helm upgrade branding-ui-cluster a8n/branding-ui-cluster \
    --wait \
    --version <version> \
    --set oss.host=oss-proxy \
    --set oss.replicaHost=oss-proxy

    where <version> is the version of the branding-ui-cluster chart to use.

    Note: If there are two or more branding-ui-cluster-ui PODs in the cluster, omit the --wait parameter.

  3. Check that all PODs were updated and are ready using this command:

    # kubectl get pod -l release=branding-ui-cluster

Now you have a management node deployed to cloud.
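
As a final check, you can list the cloud MN PODs and inspect their logs; oss-node-0 is the first POD of the StatefulSet, as referenced in Step 8:

# kubectl get pods -l app=core-ear,release=oss
# kubectl logs oss-node-0 --tail=50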