help-krib-tutorial

Intent
Help in bringing up a Kubernetes cluster via drp-provision, as presented in drp-krib-video and drp-krib-document, on a drpfeature-test setup.
Success
Replicate the drp-krib-video demo locally using drpcli, drp-ux, and VirtualBox VMs PXE booting bare nodes into a kubeadm cluster.

Manual Steps

  1. Configure drpfeature-test-network (MikroTik 192.168.88.1 router)
  2. Install drp-provision via drp-quickstart in the drpfeature-test-macosx configuration
  3. Configure drpfeature-test-vbox (VirtualBox)
  4. Start drp-provision (see note below)
  5. Browse to RackN-Portal at https://192.168.88.9:8092
    1. Log in with your RackN user (rocketskates / r0cketsk8ts is the default, but it cannot get to the KRIB content)
    2. Add the local endpoint 192.168.88.9:8092 to your Endpoints
      1. Top Left (hamburger icon)
      2. Under “ADD ENDPOINT” type in “192.168.88.9:8092” click Add
    3. Navigate to ENDPOINT by clicking on “192.168.88.9:8092”
    4. Click in the gray view area to get rid of the RackN-Portal overlay (?? UX ??)
  6. Browse to RackN-Portal endpoint at https://192.168.88.9:8092
    1. Browse to Subnets
      1. Add the subnet “en0 192.168.88.9/24” (guessed by dr-provision)
      2. Disable all other Subnets
    2. Browse to Leases
      1. Clear or validate all current DHCP leases
    3. Browse to Boot ISOs
      1. Verify or load the following
        1. CentOS-7-x86_64-Minimal-1708.iso
        2. sledgehammer-f5ffd3ed10ba403ffff40c3621f1e31ada0c7e15.tar
      2. Load via drpcli
        1. ./drpcli bootenvs uploadiso ubuntu-16.04-install
        2. ./drpcli bootenvs uploadiso centos-7-install
        3. ./drpcli bootenvs uploadiso sledgehammer
    4. Browse to Content Packages
      1. Click Transfer on the krib content in Community
      2. krib should transfer to Endpoint Content
      3. Click Transfer on the task-library content in Community
      4. task-library should transfer to Endpoint Content
    5. Browse to Plugin Providers
      1. Click Transfer on the VirtualBox IPMI in Organization Plugins
      2. VirtualBox IPMI should transfer to Endpoint Plugin Providers
    6. Browse to Profiles to create k8s-cluster-install (see Create-Profile)
      1. Name: k8s-cluster-install
      2. Description: drpfeature k8s-cluster-install
      3. Add “change-stage/map” krib/cluster-profile = my-k8s-cluster
      4. Click “Save”
    7. Browse to Info & Preferences and set the following in the System Preferences Global properties
      1. Default Stage -> discover
      2. Default BootEnv -> sledgehammer
      3. Unknown BootEnv -> discovery
      4. Click Save
    8. Browse to Workflow and select Profile: global
      1. Add/select global profile
      2. Select: discover -> sledgehammer-wait : Success
      3. Click: Add Step to SAVE this STEP (?? UX ??)
    9. Browse to Workflow and select Profile: k8s-cluster-install
      1. Add/select the k8s-cluster-install profile. For each transition below, select it using the drop-downs, then click “Add Step” (?? UX ??)
        1. centos-7-install -> runner-service:Success
        2. runner-service -> finish-install:Stop
        3. finish-install -> docker-install:Success
        4. docker-install -> krib-install:Success
        5. krib-install -> complete:Success
        6. discover -> sledgehammer-wait:Success
      2. Click: Add Step to SAVE this STEP (?? UX ??)
    10. The KRIB setup via the UX is now COMPLETE (hedged drpcli equivalents for the profile, preferences, stage map, and bulk actions are sketched in the Notes below)
  7. Fire up the drpfeature-test-vbox test VMs bm1, bm2, bm3, bm4, bm5, and bm6
    1. Fire these up one at a time… I had issues
    2. Browse to RackN-Portal Machines
    3. Verify the bm1-6 machines have moved through Stage: discover and are sitting at Stage: sledgehammer-wait (BootEnv: sledgehammer)
  8. Browse to RackN-Portal endpoint at https://192.168.88.9:8092
    1. Browse to Bulk Actions
    2. Select the machines you want in k8s-cluster-install (checkbox at far right in the machine list)
    3. In Profiles (second panel, top left), select “k8s-cluster-install” from the dropdown and click the “+” icon. Result: a green “Profiles” icon should appear for each machine the profile was successfully applied to.
    4. In Stages (top center), select “centos-7-install” from the dropdown and click the “+” icon below it. Result: a green “Stages” icon should appear for each machine the stage was successfully applied to (a drpcli loop for this step is sketched in the Notes below).
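
Note

CLI sketch: create the cluster profile and set preferences

A hedged drpcli equivalent of Manual Steps 6.6 and 6.7 above; the Profile JSON fields are the standard dr-provision ones, but this exact invocation is a sketch, not a tested recipe:

./drpcli profiles create '{ "Name": "k8s-cluster-install", "Description": "drpfeature k8s-cluster-install", "Params": { "krib/cluster-profile": "my-k8s-cluster" } }'
./drpcli prefs set defaultStage discover defaultBootEnv sledgehammer unknownBootEnv discovery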
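
Note

CLI sketch: stage transition map

Assuming the Workflow screen in step 6.9 stores its transitions in the profile’s change-stage/map param (an assumption, not confirmed here), the same map could be set directly:

./drpcli profiles set k8s-cluster-install param change-stage/map to '{
  "discover": "sledgehammer-wait:Success",
  "centos-7-install": "runner-service:Success",
  "runner-service": "finish-install:Stop",
  "finish-install": "docker-install:Success",
  "docker-install": "krib-install:Success",
  "krib-install": "complete:Success"
}'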
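
Note

CLI sketch: bulk profile and stage assignment

A hedged loop version of Manual Step 8; it touches every machine on the endpoint, so narrow the list first if only some nodes should join the cluster (assumes jq is installed):

for UUID in $(./drpcli machines list | jq -r '.[].Uuid'); do
  ./drpcli machines addprofile "$UUID" k8s-cluster-install
  ./drpcli machines update "$UUID" '{ "Stage": "centos-7-install" }'
done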

Note

Start drp-provision

sudo ./dr-provision --static-ip=192.168.88.9 --base-root=/Users/msops/Code/drpfeature/drpisolated/drp-data --local-content="" --default-content=""
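
Note

Verify drp-provision is answering

A quick sanity check after startup (assumes the default rocketskates credentials are still in place):

export RS_ENDPOINT=https://192.168.88.9:8092
export RS_KEY=rocketskates:r0cketsk8ts
./drpcli info get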

Note

Generate admin.conf

./drpcli profiles get k8s-cluster-ram param krib/cluster-admin-conf > admin.conf
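
Note

Unwrap admin.conf if needed

Depending on the drpcli version, the param may come back as a JSON-quoted string rather than raw YAML; if kubectl rejects admin.conf, try unwrapping it with jq (assumes jq is installed):

./drpcli profiles get k8s-cluster-ram param krib/cluster-admin-conf | jq -r . > admin.conf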

Note

Get node info via kubectl

kubectl --kubeconfig=admin.conf get nodes
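
Note

Optional follow-up check

To confirm the system pods came up as well (standard kubectl, nothing demo-specific):

kubectl --kubeconfig=admin.conf get pods --all-namespaces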

Note

SETUP kubectl PROXY

kubectl --kubeconfig=admin.conf proxy
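
Note

Hit the proxied API

With the proxy running, the cluster API is reachable on localhost; for example, using a standard Kubernetes API path:

curl http://localhost:8001/api/v1/nodes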

Note

Double Secret Probation for kickseed

http://192.168.88.9:8091/machines/db1dcb0f-d0b6-4afb-9da9-e62b62a68e24/compute.ks
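
Note

Fetch the rendered kickstart

The same compute.ks can be pulled from the command line to inspect exactly what the machine was served (same URL as above):

curl http://192.168.88.9:8091/machines/db1dcb0f-d0b6-4afb-9da9-e62b62a68e24/compute.ks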

Video Track

  1. General Show UI Views
    1. tc590 Show KRIBnode[1..8] in Machines
    2. tc607 Show Profiles
    3. tc658 Show Bulk Actions
    4. tc678 Show Workflow k8s-cluster-ram
  2. Begin configuration to start k8s-cluster-install
    1. tc715 Set KRIBnode[1..4] to Stages -> Mount Local Disk
    2. tc736 Show LIVE events of above
    3. tc743 Click on KRIBnode1 to show what that node will go through
    4. tc722 Set KRIBnode[4..8] to Profiles -> k8s-cluster-install
    5. tc798 Set KRIBnode[4..8] to Boot Environments -> centos-7-install
    6. tc802 Set KRIBnode[4..8] to Plugin Action -> powercycle
  3. General Explaining while the k8s-cluster builds
    1. tc860 Look into what k8s-cluster-ram in Profiles does (verbal explain)
    2. tc918 Navigate to Stages and select krib-install, which has the task krib-install
    3. tc935 krib-install: verbal explanation of how tasks, jobs, alerts, and workflow are composable
    4. tc953 Pull up krib-install.sh.tmpl and explain template that is executed by runner
    5. tc990 Go look at the current status of KRIBnode1 in Machines; it is in the docker-install stage of Stages
    6. tc1079 Show Jobs and bring up the progress and log of a job on a node.
    7. tc1102 Navigate to machine via the link in the Jobs listing to check on machine task which is now krib-install
    8. tc1104 Navigate to task via the link in the machine view to see the log of the krib-install task
    9. tc1115 Navigate to Profiles, show k8s-cluster-ram, and see that node 56… has the krib/cluster-master parameter, so it WON the master election
    10. tc1160 Go back to SLIDES… finish the slide talk, esp. about dynamic tokens, configuration injection, and bootstrapping
    11. tc1238 Go back to Profiles, refresh, then pull up k8s-cluster-ram again. You see a new parameter for the cluster-join-command
  4. The COOL NEW STUFF
    1. tc1262 Show cluster-admin-conf and use that to create admin.conf
    2. tc1284 Generate admin.conf
    3. tc1318 Now go get node info via kubectl (note this is the local system talking to the cloud cluster)
    4. tc1337 SETUP kubectl PROXY via the proxy command in the Notes above
    5. tc1375 FOR THE WIN: browse to http://localhost:8001 to get to the remote Kubernetes dashboard
  5. Finish off talking about Future
    1. tc1430 Back to SLIDES… talk about future issues…
    2. tc1484 Node Admission verbal walk through
    3. tc1626 Kubelet Dynamic configuration verbal walk through

Setup for testing drp endpoint

The drpfeature-test setup uses the drpfeature-test-network and drpfeature-test-vbox running on a drpfeature-test-macosx, with drp-provision running on the drpfeature-test-drpe endpoint for PXE boot of ProLiant blade servers in the drpfeature-test-hpeC7000 configuration. Those servers are then accessible via drpfeature-test-drpe-ansible and can use drpfeature-test-drpe-ansible-blender to install a Blender render grid worker node.