help-krib-tutorial
- Intent
- Help in bringing up a Kubernetes cluster via drp-provision as presented in drp-krib-video and drp-krib-document on a drpfeature-test setup.
- Success
- Replicate the drp-krib-video demo locally using drpcli, drp-ux, and VirtualBox VMs PXE booting bare nodes into a kubeadm cluster.
Manual Steps
- Configure drpfeature-test-network (MikroTik 192.168.88.1 router)
- Install drp-provision via drp-quickstart in the drpfeature-test-macosx configuration
- Configure drpfeature-test-vbox (VirtualBox)
- Start drp-provision (see note below)
- Browse to RackN-Portal at https://192.168.88.9:8092
- Log in with your RackN user (rocketskates / r0cketsk8ts is the default, but it cannot get to KRIB)
- Add the local endpoint 192.168.88.9:8092 to your Endpoints
- Top Left (hamburger icon)
- Under “ADD ENDPOINT” type in “192.168.88.9:8092” click Add
- Navigate to ENDPOINT by clicking on “192.168.88.9:8092”
- Click in the gray view area to get rid of the RackN-Portal overlay (?? UX ??)
- Browse to RackN-Portal endpoint at https://192.168.88.9:8092
- Browse to Leases
- Clear or validate all current DHCP leases
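If you prefer the CLI, leases can be reviewed and cleared with drpcli. A sketch (the address below is only an example; drpcli leases destroy takes the lease address as its key):

./drpcli leases list
./drpcli leases destroy 192.168.88.101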
- Browse to Boot ISOs
- Verify or load the following
- CentOS-7-x86_64-Minimal-1708.iso
- sledgehammer-f5ffd3ed10ba403ffff40c3621f1e31ada0c7e15.tar
- Load via drpcli
- ./drpcli bootenvs uploadiso ubuntu-16.04-install
- ./drpcli bootenvs uploadiso centos-7-install
- ./drpcli bootenvs uploadiso sledgehammer
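To confirm the uploads landed, the standard drpcli listing commands can be used (a sketch):

./drpcli isos list
./drpcli bootenvs show centos-7-install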
- Browse to Content Packages
- Click Transfer on the krib content in Community
- krib should transfer to Endpoint Content
- Click Transfer on the task-library content in Community
- task-library should transfer to Endpoint Content
- Browse to Plugin Providers
- Click Transfer on the VirtualBox IPMI in Organization Plugins
- VirtualBox IPMI should transfer to Endpoint Plugin Providers
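Both transfers can be verified from the CLI (a sketch using the standard listing commands):

./drpcli contents list
./drpcli plugin_providers list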
- Browse to Profiles to create k8s-cluster-install (see Create-Profile)
- Name: k8s-cluster-install
- Description: drpfeature k8s-cluster-install
- Add the param krib/cluster-profile = my-k8s-cluster (the change-stage/map param is filled in later from the Workflow screens)
- Click “Save”
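The same profile can be built with drpcli. A sketch (drpcli treats a value that is not valid JSON as a plain string, so the bare param value below should work):

./drpcli profiles create '{ "Name": "k8s-cluster-install", "Description": "drpfeature k8s-cluster-install" }'
./drpcli profiles set k8s-cluster-install param krib/cluster-profile to my-k8s-cluster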
- Browse to Info & Preferences and set the following in System Preferences / Global properties
- Default Stage -> discover
- Default BootEnv -> sledgehammer
- Unknown BootEnv -> discovery
- Click Save
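The drpcli equivalent of these three preferences:

./drpcli prefs set defaultStage discover defaultBootEnv sledgehammer unknownBootEnv discovery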
- Browse to Workflow select Profile: global
- Add/select global profile
- Select: discover -> sledgehammer-wait : Success
- Click: Add Step to SAVE this STEP (?? UX ??)
- Browse to Workflow select Profile: k8s-cluster-install
- Add/select the k8s-cluster-install profile. Click “Add Step” after you select each stage change using the drop-downs (?? UX ??)
- centos-7-install -> runner-service : Success
- runner-service -> finish-install : Stop
- finish-install -> docker-install : Success
- docker-install -> krib-install : Success
- krib-install -> complete : Success
- discover -> sledgehammer-wait : Success
- Click: Add Step to SAVE this STEP (?? UX ??)
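Under the hood these steps write the change-stage/map param onto the profile; set directly, the end result should look roughly like this (a sketch of the stage -> next-stage:Action map the screens above produce):

./drpcli profiles set k8s-cluster-install param change-stage/map to '{
  "discover": "sledgehammer-wait:Success",
  "centos-7-install": "runner-service:Success",
  "runner-service": "finish-install:Stop",
  "finish-install": "docker-install:Success",
  "docker-install": "krib-install:Success",
  "krib-install": "complete:Success"
}'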
- The KRIB setup via the UX is now COMPLETE
- Fire up the drpfeature-test-vbox VMs bm1, bm2, bm3, bm4, bm5, and bm6
- Fire these up one at a time… I had issues starting them all at once (see the sketch below)
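A sketch for staggering the startups from the macOS host (the VM names are assumed to match the drpfeature-test-vbox config; the 30 second delay is arbitrary):

for vm in bm1 bm2 bm3 bm4 bm5 bm6; do
  VBoxManage startvm "$vm" --type headless
  sleep 30
done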
- Browse to RackN-Portal Machines
- Verify the bm1-6 machines are Stage: discover BootEnv: sledgehammer-wait
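The same check can be made from the CLI (a sketch; grepping the JSON output for the relevant fields):

./drpcli machines list | grep -E '"Name"|"Stage"|"BootEnv"'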
- Browse to RackN-Portal endpoint at https://192.168.88.9:8092
- Browse to Bulk Actions
- Select the machines you want in k8s-cluster-install (check box far right in machine list)
- In Profiles (top, 2nd from left) select “k8s-cluster-install” from the drop-down and click the “+” icon. Result: a green “Profiles” icon should show for each machine successfully applied.
- In Stages (top center) select “centos-7-install” from the drop-down and click the “+” icon below. Result: a green “Stages” icon should show for each machine successfully applied.
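Per machine, the two bulk actions can also be scripted (a sketch; replace the UUID placeholder, and note that setting Stage via a raw JSON update is just one way to do what the Bulk Actions screen does):

./drpcli machines addprofile <machine-uuid> k8s-cluster-install
./drpcli machines update <machine-uuid> '{ "Stage": "centos-7-install" }'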
Note
Start drp-provision
sudo ./dr-provision --static-ip=192.168.88.9 --base-root=/Users/msops/Code/drpfeature/drpisolated/drp-data --local-content="" --default-content=""
Note
Generate admin.conf
./drpcli profiles get k8s-cluster-ram param krib/cluster-admin-conf > admin.conf
Note
Get node info via kubectl
kubectl --kubeconfig=admin.conf get nodes
Note
SETUP kubectl PROXY
kubectl --kubeconfig=admin.conf proxy
Note
Double Secret Probation for kickseed
http://192.168.88.9:8091/machines/db1dcb0f-d0b6-4afb-9da9-e62b62a68e24/compute.ks
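The rendered kickstart can also be fetched from the command line for debugging (UUID from the example URL above):

curl http://192.168.88.9:8091/machines/db1dcb0f-d0b6-4afb-9da9-e62b62a68e24/compute.ks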
Video Track
- Begin configuration to start k8s-cluster-install
- tc715 Set KRIBnode[1..4] to Stages -> Mount Local Disk
- tc736 Show LIVE events of above
- tc743 Click on KRIBnode1 to show what that node will go through
- tc722 Set KRIBnode[4..8] to Profiles -> k8s-cluster-install
- tc798 Set KRIBnode[4..8] to Boot Environments -> centos-7-install
- tc802 Set KRIBnode[4..8] to Plugin Action -> powercycle
- General explaining while the k8s-cluster builds
- tc860 Look into what k8s-cluster-ram in Profiles does (verbal explain)
- tc918 Navigate to Stages select krib-install which has task krib-install
- tc935 krib-install verbal explanation of how tasks, jobs, alerts, and workflow are composable
- tc953 Pull up krib-install.sh.tmpl and explain the template that is executed by the runner
- tc990 Go look at the current status of KRIBnode1 in Machines; it is in the docker-install stage
- tc1079 Show Jobs and bring up the progress and log of a job on a node.
- tc1102 Navigate to the machine via the link in the Jobs listing to check on the machine task, which is now krib-install
- tc1104 Navigate to task via the link in the machine view to see the log of the krib-install task
- tc1115 Navigate to Profiles, show k8s-cluster-ram, and see that node 56… has the krib/cluster-master parameter, so it WON the master election
- tc1160 Go back to SLIDES… finish the slide talk, especially about dynamic tokens, configuration injection, and bootstrapping
- tc1238 Go back to Profiles, refresh, then pull up k8s-cluster-ram again. You will see a new parameter for cluster-join-command
- The COOL NEW STUFF
- tc1262 Show cluster-admin-conf and use that to create admin.conf
- tc1284 Generate admin.conf
- tc1318 Now go get node info via kubectl (note this runs on the local system against the remote cluster)
- tc1337 SETUP kubectl PROXY (see note above)
- tc1375 FOR THE WIN: browse to http://localhost:8001 to get to the remote Kubernetes dashboard
Setup for testing drp endpoint
The drpfeature-test setup uses the drpfeature-test-network and drpfeature-test-vbox running on a drpfeature-test-macosx host, with drp-provision running on the drpfeature-test-drpe endpoint for PXE boot of ProLiant blade servers in the drpfeature-test-hpeC7000 configuration. Those servers are then accessible via drpfeature-test-drpe-ansible and can use drpfeature-test-drpe-ansible-blender to install a Blender render-grid worker node.