Deploying a UI Cluster in the Cloud Using Kubernetes

To deploy a UI Cluster in the cloud using Kubernetes:

  1. Ensure the following prerequisites:
    1. To execute the deployment procedure, prepare a node with Helm and kubectl installed and configured to access your Kubernetes cluster.
    2. Make sure you have a deployed OSS DB replica.
  2. Ensure access from the UI node to the OSS DB replica:
    1. Create an OSS DB user as described in Configuring an Operations Database Replica Node. If you use a cloud DB solution, make sure that the OSS DB user name is unique for each OSS DB on your cloud PostgreSQL server.
    2. Allow connection from the UI node to the DB replica node by adding the IP range of Kubernetes nodes to the /var/lib/pgsql/<PostgreSQL_Version>/data/pg_hba.conf file on the OSS DB replica node. For example:

      host all all 10.240.0.0/24 md5
    3. Reload the postgresql service to apply the change.
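
      For example (assuming a standard PostgreSQL installation on the replica node, where the service unit is named postgresql-<PostgreSQL_Version>):

      # systemctl reload postgresql-<PostgreSQL_Version>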

  3. Register a new UI Cluster in Kubernetes. To do so, create a service in Kubernetes by running the following commands on your node with Helm installed:

    Note: Installations with extra management nodes are also supported. For instructions on adding an extra management node, see Additional Management Node in Cloud.

    helm install a8n/branding-ui-cluster --version 7.4.0-1531_KB133328-244 --name=branding-ui-cluster \
      --set ui.ossHost=<mn_ip_address> \
      --set ui.ossAgentPswd=`echo -n '<scs_password>'|base64` \
      --set ui.dbrHost=<db_replica_ip_address> \
      --set ui.dbrPswd=`echo -n '<db_replica_password>'|base64` \
      --set ui.dbrUser=<db_replica_user> \
      --set hcl.user=hcl \
      --set hcl.password=`echo -n '<hcl-user-password>'|base64` \
      --set imageRegistryHost=odindevops-a8n-docker.jfrog.io \
      --set ui.persistence.storageClass=azurefile \
      --set hcl.persistence.storageClass=managed-premium \
      --set ui.persistence.volumeSize=10Gi \
      --set hcl.persistence.volumeSize=1Gi

    Where:

    • name is the Helm chart release name. For the production environment, the release name must be branding-ui-cluster because the branding UI cluster is defined by this name in the CloudBlue Commerce distribution definition file and this name is expected by the upgrade script for automatic system updates.
    • mn_ip_address is the IP address of the OSS management node.
    • scs_password is the password of the system_for_scs user in CloudBlue Commerce to access CORBA MN and EJB remote interfaces.
    • db_replica_ip_address is the IP address of the OSS DB replica (for the APS Booster to send read-only requests).
    • db_replica_user and db_replica_password are the user name and password used to connect to the OSS DB replica (the user you created in step 2).
    • hcl-user-password is a password of your choice that will be used for authentication to the HCL broadcasting node, a Kubernetes service that serves as a gateway for the HCL commands sent from the OA management node to the K8S UI cluster. You will pass this password to the pem.addK8sUICluster method as the value of the hclPassword parameter in step 5.

    Example:

    helm install a8n/branding-ui-cluster --name=branding-ui-cluster \
      --set ui.ossHost=10.4.1.73 \
      --set ui.ossAgentPswd=`echo -n NNNFDDFGDFSG|base64` \
      --set ui.dbrHost=10.4.1.89 \
      --set ui.dbrPswd=`echo -n 'sadfEWERTG345'|base64` \
      --set ui.dbrUser=aps_booster123 \
      --set hcl.user=hcl \
      --set hcl.password=`echo -n sdafsd35dsgf|base64` \
      --set imageRegistryHost=odindevops-a8n-docker.jfrog.io \
      --set ui.persistence.storageClass=azurefile \
      --set hcl.persistence.storageClass=managed-premium \
      --set ui.persistence.volumeSize=10Gi \
      --set hcl.persistence.volumeSize=1Gi
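
    To verify that the release has been deployed, you can check its status:

    # helm status branding-ui-cluster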
  4. To be able to register brands on the UI cluster, identify the external IP address assigned to the load balancer of the UI cluster by Azure:

    # echo `kubectl get service branding-ui-cluster-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`
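
    If the command returns an empty string, Azure has not yet assigned the external IP; you can watch the service until the EXTERNAL-IP column is populated:

    # kubectl get service branding-ui-cluster-ui --watch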
    
  5. Register a virtual node in CloudBlue Commerce by calling the following OpenAPI method:

    from poaupdater import openapi

    api = openapi.OpenAPI()

    # Register the UI cluster as a virtual node; the method returns the ID of the new host.
    host_id = api.pem.addK8sUICluster(hclHostname="branding-ui-cluster-hcl", hclUser="hcl", hclPassword="<hcl-user-password>", externalIpAddr="<external-ip-address>")

    Where:

    • hcl-user-password is the HCL user password that you set when installing the Helm chart in step 3.
    • external-ip-address is the external IP address of the UI cluster load balancer that you obtained in step 4.

    Important: The branding-ui-cluster Helm chart cannot be reinstalled after adding the branding-ui-cluster-hcl host. Reinstalling would change the load balancer's IP address and make the brands on the branding-ui-cluster-hcl host no longer accessible.

  6. Make sure that all configuration files from the puitconf directory on the CloudBlue Commerce management node are present in the same location on the new UI cluster (/usr/local/pem/wildfly-16.0.0.Final/puitconf, by default). Copy any missing files to that location. Then, restart the UI pods by deleting them from AKS.
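
    For example, you can copy a missing file into a UI pod and then delete the pod so that it is recreated. This sketch assumes that the puitconf directory is backed by the persistent volume configured in step 3 and that tar is available in the container image (a requirement of kubectl cp); the pod name is a placeholder:

    # kubectl cp <config-file> <ui-pod-name>:/usr/local/pem/wildfly-16.0.0.Final/puitconf/
    # kubectl delete pod <ui-pod-name>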

  7. To ensure that you can access the billing functionality in the provider and reseller control panels (PCP and RCP):

    1. Find the IP address from which the UI node connects to the Billing Application node by searching the /var/log/httpd/pba_access_log log file on the Billing Application node.
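
      For example, the following command prints the unique client IP addresses found in the log (assuming the default Apache log format, where the client IP is the first field):

      # awk '{print $1}' /var/log/httpd/pba_access_log | sort -u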

    2. Add the IP address from which the UI node connects to the Billing Application node to the /usr/local/bm/conf/ip_access.db file on the Billing Application node.

      Note: Depending on your AKS configuration, the IP address from which the UI node connects to the Billing Application node can be the IP of the Branding UI load balancer or another external IP address assigned to AKS.

    3. Restart the pba service.
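
      For example, on a systemd-based Billing Application node (assuming the service is registered under the name pba):

      # systemctl restart pba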

    4. Make sure that the Billing Application node URL is reachable from ui-branding pods (for example, https://bm.example.com).

      Note: To ensure this, make sure that the Billing Application node hostname is resolvable by DNS and that the firewall does not block access to the Billing Application node server.
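
      For example, you can check reachability from inside a UI pod (this assumes the curl utility is available in the ui-branding container image; the pod name is a placeholder):

      # kubectl exec <ui-branding-pod-name> -- curl -skI https://bm.example.com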

  8. If you have locales other than English installed, make sure that they are also installed on the UI cluster.

  9. If your current installation has existing brands that you want to preserve, migrate your existing installation with brands to the new K8s UI Cluster.

Extending a Single MN Configuration with an Additional MN

To extend a single management node configuration with an additional management node, perform the following actions.

Run the following command to upgrade the existing release:

helm upgrade <name> helm/branding-ui-cluster --version <version> --wait --reuse-values --set ui.ossReplicaHost=<mn-replica-ip_address>

Where:

  • name is the Helm chart release name. For the production environment, the release name must be branding-ui-cluster because the branding UI cluster is defined by this name in the distribution definition file and this name is expected by the upgrade script for automatic system updates.
  • version is a version of the branding-ui-cluster chart that supports the configuration with an additional management node. This must be equal to or later than version 8.3.741.

    To list the available chart versions, run this command:

    helm search helm/branding-ui-cluster -l
  • mn-replica-ip_address is the IP address of the additional OSS management node.

If a configuration with a single management node was deployed using a chart that supports the configuration with an additional management node, you can upgrade the current chart by specifying the same chart version with the extra option --recreate-pods.
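
For example, assuming the release was deployed under the recommended name branding-ui-cluster, the upgrade command from above becomes:

helm upgrade branding-ui-cluster helm/branding-ui-cluster --version <version> --wait --reuse-values --recreate-pods --set ui.ossReplicaHost=<mn-replica-ip_address>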

Related topics: