Accessing Log Files

To access log files, you can use:

  • Azure Storage Explorer

  • A special VM, called a jumper VM, to which log files are copied from the log file container.

To prepare a jumper VM and use it to access log files, complete these steps:

  1. Ensure that the Azure user whose credentials you are going to use has read access to the log file container. To learn how to configure access to blob data, please refer to the Azure documentation (see the Storage Blob Data Reader role).
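
    For example, you can assign the Storage Blob Data Reader role with the Azure CLI. This is a sketch only; the user name, subscription ID, resource group, and storage account name are placeholders for your own values:

      az role assignment create \
        --role "Storage Blob Data Reader" \
        --assignee "user@example.com" \
        --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"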

  2. Prepare the following information:

    • The credentials of your Azure user

    • The ID of your tenant

    • The name of your storage account

  3. Create a VM that uses a minimal virtual machine size.

    Important: When creating that VM, ensure that the AADSSHLoginForLinux extension is enabled. To learn how to do this, please refer to the Azure documentation.
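
    For example, you can enable the extension with the Azure CLI. This is a sketch; the resource group and VM names are placeholders for your own values:

      az vm extension set \
        --publisher Microsoft.Azure.ActiveDirectory \
        --name AADSSHLoginForLinux \
        --resource-group <resource-group> \
        --vm-name <vm-name>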

  4. Create a managed 256 GB disk and attach it to that VM.

    Note: That disk will be used for storing log file copies. It will be partitioned by a script, and the /home directory will be moved to a partition of that disk.
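
    For example, you can create and attach such a disk in one step with the Azure CLI (the disk, resource group, and VM names are placeholders for your own values):

      az vm disk attach \
        --new \
        --size-gb 256 \
        --name <disk-name> \
        --resource-group <resource-group> \
        --vm-name <vm-name>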

  5. Ensure that your Azure user has the Virtual Machine Administrator Login role to that VM. To learn how to do this, please refer to the Azure documentation.
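
    For example, you can assign that role with the Azure CLI (the user name and VM resource ID are placeholders for your own values):

      az role assignment create \
        --role "Virtual Machine Administrator Login" \
        --assignee "user@example.com" \
        --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"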

  6. Ensure that the VM you created has network access to your storage account.
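
    For example, one quick check is to confirm from the VM that the blob endpoint of your storage account answers over HTTPS; the storage account name is a placeholder, and any HTTP response (even a 400 for this anonymous request) confirms network reachability:

      curl -sI https://<storage-account>.blob.core.windows.net/ | head -1
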
  7. On that VM, in your home directory, create this script:

    #!/bin/bash
    set -x
     
    read -p "Enter tenant id: " tenant
    read -p "Enter Storage Account Name: " sa_name
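
    # Install Ansible, FUSE, and rclone, plus the Ansible roles used by the playbooks below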
     
    yum install -y -q -e 0 epel-release
    yum install -y -q -e 0 ansible
    yum install -y -q -e 0 fuse
    yum install -y -q -e 0 rclone
    ansible-galaxy install -i ericsysmin.chrony
    ansible-galaxy install -i linuxhq.yum_cron
    ansible-galaxy install brentwg.azure-cli
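
    # Generate the playbook that configures passwordless sudo for AAD admins, builds the
    # data volume group from the attached disk, and moves /home onto it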
     
    cat << EOF > /root/playbook-cbcops.yaml
    - hosts: localhost
      connection: local
      gather_facts: true
      become: yes
     
      roles:
        - role: linuxhq.yum_cron
        - role: brentwg.azure-cli
     
      tasks:
        - name: sudoers - Allow AAD authenticated users NOPASSWD
          lineinfile:
            path: /etc/sudoers.d/aad_admins
            regexp: '^%aad_admins'
            line: '%aad_admins ALL=(ALL) NOPASSWD:ALL'
     
        - name: Check if /dev/sdc exists
          stat:
            path: /dev/sdc
          register: stat_sdc
     
        - name: Create data volume group, if sdc exists
          lvg:
            vg: data
            pvs: "{{ hostvars[inventory_hostname].ansible_devices.keys() | map('regex_search', 'sd.*') | select('string') | reject('match', 'sd[ab]') | list | join(',') | replace('s','/dev/s') }}"
          when: stat_sdc.stat.exists
     
        - name: Create logical volume data
          lvol:
            vg: data
            lv: data
            resizefs: yes
            size: +100%FREE
     
        - name: Create a filesystem for volume data
          filesystem:
            fstype: xfs
            dev: /dev/mapper/data-data
     
        - name: Delete the /home entry from fstab
          lineinfile:
            dest: /etc/fstab
            state: absent
            regexp: '^/home$'
     
        - name: Mount volume data at /home
          mount:
            path: /home
            src: /dev/mapper/data-data
            fstype: xfs
            state: mounted
     
        - name: Create temp data directory
          file:
            path: /mnt/data
            state: directory
     
        - name: Mount volume data at /mnt/data
          mount:
            path: /mnt/data
            src: /dev/mapper/data-data
            fstype: xfs
            state: mounted
     
        - name: Create new azureuser home directory
          file:
            path: /mnt/data/azureuser
            state: directory
     
    EOF
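
    # Generate the helper script that logs in to Azure, creates a short-lived SAS token,
    # and mounts the log file container with rclone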
     
    cat << EOF > /usr/local/bin/generate-rclone.conf.sh
    #!/bin/bash
     
    set -eo pipefail
     
    if [ \$(id -u) -eq 0 ]
    then
      echo This script must not be run as root
      exit 1
    fi
     
    until az account show --query user.name
    do
      az login --tenant $tenant --query "[0].user"
    done
     
    export AZURE_STORAGE_ACCOUNT=$sa_name
    export container=kubernetes
     
    echo
    echo Generating SAS token
     
    export AZURE_STORAGE_SAS_TOKEN=\$(az storage container generate-sas --auth-mode login --as-user --permissions rl --https-only --expiry \$(date -u -d "2 hours" "+%Y-%m-%dT%H:%MZ") --name \${container} --output tsv)
     
    echo
    echo Generating rclone.conf
     
    mkdir -p .config/rclone/
    mkdir -p \${container}
     
    echo "[\${AZURE_STORAGE_ACCOUNT}]
    type = azureblob
    sas_url = https://\${AZURE_STORAGE_ACCOUNT}.blob.core.windows.net/\${container}?\${AZURE_STORAGE_SAS_TOKEN}
    " > .config/rclone/rclone.conf
     
    echo
    echo List files and directories in \${AZURE_STORAGE_ACCOUNT}/\${container}/logs
     
    set -x
    rclone lsf \${AZURE_STORAGE_ACCOUNT}:\${container}/logs
     
    rclone mount \${AZURE_STORAGE_ACCOUNT}:\${container} \${container}/ --daemon
     
    : Unmount with: fusermount -u \${container}/
    EOF
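
    # Generate a playbook that schedules a reboot (in one minute) to apply the changes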
     
    cat << EOF > /root/playbook-postinstall.yaml
    - hosts: localhost
      connection: local
      gather_facts: true
      become: yes
     
      tasks:
        - name: Restart server
          command: /sbin/shutdown -r +1
          async: 0
          poll: 0
          ignore_errors: true
    EOF
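
    # Protect the helper script from modification, then run both playbooks and verify
    # that each produced a PLAY RECAP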
     
    chmod +x /usr/local/bin/generate-rclone.conf.sh
    chattr +i /usr/local/bin/generate-rclone.conf.sh
    truncate -s 0 /var/log/ansible.log
    ansible-playbook /root/playbook-cbcops.yaml | tee -a /var/log/ansible.log
    ansible-playbook /root/playbook-postinstall.yaml | tee -a /var/log/ansible.log
     
    RECAPS=$(grep 'PLAY RECAP' /var/log/ansible.log | wc -l)
     
    if [ $RECAPS -eq 2 ]; then
      exit 0
    else
      exit 1
    fi
  8. In your home directory, run the script that you created with root privileges (for example, using sudo), because it installs packages and writes to system directories. The script installs the packages that are necessary to mount the log file container to the VM, creates the generate-rclone.conf.sh script that you will use to copy log files to the VM, and reboots the VM to apply the changes.
  9. After the VM reboots, go to your home directory and execute the /usr/local/bin/generate-rclone.conf.sh script as a regular user (the script refuses to run as root). This script will ask you to provide the information you prepared earlier, mount the log file container, and copy log files from that container to the VM.

    Notes:

    1. If the "Directory is not empty: kubernetes" error appears when running the script, remove the contents of the ~/kubernetes directory, execute the fusermount -u ~/kubernetes command, and run the script again.

    2. If Storage Account access errors appear when running the script, log in to Azure using the az login command and run the script again.

  10. After the /usr/local/bin/generate-rclone.conf.sh script completes copying log files, you can view those log files in the ~/kubernetes directory.
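
    For example (the directory layout under logs depends on your environment, so the names below are placeholders):

      ls ~/kubernetes/logs
      less ~/kubernetes/logs/<component>/<log-file>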

To access log files with an existing jumper VM, complete these steps:

  1. Ensure that the Azure user whose credentials you are going to use has read access to the log file container. To learn how to configure access to blob data, please refer to the Azure documentation (see the Storage Blob Data Reader role).

  2. Ensure that your Azure user has the Virtual Machine User Login role to that VM. To learn how to do this, please refer to the Azure documentation.

  3. Prepare the following information:

    • The credentials of your Azure user

    • The ID of your tenant

    • The name of your storage account

  4. Log in to the jumper VM (for an example of logging in with your Azure AD credentials, see the sketch after this procedure). Then, go to your home directory and execute the /usr/local/bin/generate-rclone.conf.sh script as a regular user (the script refuses to run as root). This script will ask you to provide the information you prepared earlier, mount the log file container, and copy log files from that container to the VM.

    Notes:

    1. If the "Directory is not empty: kubernetes" error appears when running the script, remove the contents of the ~/kubernetes directory, execute the fusermount -u ~/kubernetes command, and run the script again.

    2. If Storage Account access errors appear when running the script, log in to Azure using the az login command and run the script again.

  5. After the /usr/local/bin/generate-rclone.conf.sh script completes copying log files, you can view those log files in the ~/kubernetes directory.
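
    For example, with the AADSSHLoginForLinux extension enabled on the VM, you can log in with your Azure AD credentials using the Azure CLI. This is a sketch; it requires the ssh extension for the Azure CLI, and the resource group and VM names are placeholders for your own values:

      az extension add --name ssh
      az ssh vm --resource-group <resource-group> --name <vm-name>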
