Feat: Improve node documentation for onboarding

This commit is contained in:
brandtkeller 2022-02-03 20:34:26 -08:00
parent 4cb55a0026
commit 4e80a304e9


@@ -339,7 +339,7 @@
"type": "comment",
"z": "c86af370eff94afa",
"name": "Deploy K3d",
"info": "https://k3d.io/v5.2.2/\n\n### Choose your platform\nThe preferred free VM software is [VirtualBox](https://www.virtualbox.org/); however, it is not compatible with M1 chips! So, if you are using an M1 Mac, we highly recommend you go with the [Cloud](#Cloud) option below\n\n##### Cloud\nRecommend using an EC2 instance with the following specs:\n- AMI: Ubuntu Server (64-bit, x86)\n- Instance Type: t3a.2xlarge\n- 100 GB EBS\n- Security group rules for allowing web traffic\n\n##### Local\nUse [Vagrant](https://www.vagrantup.com/) to spin up a VM with at least the following specs:\n- 10GB RAM\n- 4 CPU cores\n- An IP that can be [accessed](https://www.vagrantup.com/docs/networking/private_network#static-ip) from the host machine. \n\nDepending on RAM/CPU availability, you may want to run 2 or more K3d nodes",
"info": "https://k3d.io/v5.2.2/\n\n### Choose your platform\nThe preferred free VM software is [VirtualBox](https://www.virtualbox.org/); however, it is not compatible with M1 chips! So, if you are using an M1 Mac, we highly recommend you go with the [Cloud](#Cloud) option below\n\n##### Cloud\nRecommend using an EC2 instance with the following specs:\n- AMI: Ubuntu Server (64-bit, x86)\n- Instance Type: t3a.2xlarge\n- 100 GB EBS\n- Security group rules for allowing web traffic\n\n##### Local\nUse [Vagrant](https://www.vagrantup.com/) to spin up a VM with at least the following specs:\n- 10GB RAM\n- 4 CPU cores\n- An IP that can be [accessed](https://www.vagrantup.com/docs/networking/private_network#static-ip) from the host machine. \n\nDepending on RAM/CPU availability, you may want to run 2 or more K3d nodes\n\n## Cluster creation\n\nOnce you have a host for deploying a k3d cluster - you can configure a cluster for future nodes in this training.\n\nOne possible configuration would be a 3 node cluster with 1 server and 2 agents (to get started).\n\nAnother consideration is exposing applications after the cluster is running and apps have been deployed w/ services and ingresses.\n\nA possible configuration might look like:\n\n`k3d cluster create -s 1 -a 2 -p \"8081:80@loadbalancer\" dev-cluster`\n\n-s 1 represents 1 server node\n-a 2 represents 2 agent nodes\n\n-p \"8081:80@loadbalancer\" represents mapping 8081 on this k3d host to ports 80 internally.",
"x": 240,
"y": 160,
"wires": [
@@ -353,7 +353,7 @@
"type": "comment",
"z": "c86af370eff94afa",
"name": "Deploy a pod info application (with Kustomize)",
"info": "https://github.com/stefanprodan/podinfo\n`kubectl apply -k github.com/stefanprodan/podinfo//kustomize`",
"info": "# Kustomize\n\n\"Kustomize is a command-line configuration manager for Kubernetes objects. Integrated with kubectl since 1.14, it allows you to make declarative changes to your configurations without touching a template.\"\n\n## Recommended Reading\n\n`https://www.mirantis.com/blog/introduction-to-kustomize-part-1-creating-a-kubernetes-app-out-of-multiple-pieces/`\n\nThe above tutorial can be run on the k3d cluster you have created. This is a much more complex example than the podinfo example you will deploy below.\n\n## Podinfo\n\n\"Podinfo is a tiny web application made with Go that showcases best practices of running microservices in Kubernetes\"\n\n### Deployment\n\nNavigate to `https://github.com/stefanprodan/podinfo`\n\nClone the repository to your local machine\n\nFrom within the cloned directory, we can execute kustomize through the built in functionality in `kubectl`.\n\n`kubectl kustomize ./kustomize`\n\nThis will return the built manifest for the application to be deployed. You can then deploy the application to the cluster through `kubectl apply -k ./kustomize`\n\n## Success Criteria\n\n- 2 podinfo pods in the target namespace (default if not specified)\n- A podinfo service\n- A horizontal pod autoscaler\n\nWe can port-forward this applications service and visit it in browser to confirm functionality.\n\n`kubectl port-forward service/podinfo 9898:http`\n\nThis port fowards the podinfo service `http` port (as defined in the service) to the 9898 host port.",
"x": 580,
"y": 160,
"wires": [
@@ -367,7 +367,7 @@
"type": "comment",
"z": "c86af370eff94afa",
"name": "Expose podinfo with an Ingress",
"info": "",
"info": "# Ingress\n\nIn the previous exercise, we deployed the podinfo application and confirmed functionality by visiting it in-browser through port-forwarding with kubectl. We can instead use an ingress and the default traefik ingress-controller to handle this functionaity more natively. \n\nOfficial k3d docs: https://k3d.io/v5.0.0/usage/exposing_services/\n\n## Ingress deployment\nDue to the cluster configuration that we executed in the first node (See the -p loadbalancer parameter). we can configure an ingress to expose the application, as is one of a few standard practices for exposing internal applications to external traffic.\n\n### Ingress template\n\nIngress docs: https://kubernetes.io/docs/concepts/services-networking/ingress/\n\nGiven the template from the docs/tutorial we can write an ingress to support this traffic.\n\n## Success Criteria\n\nAfter the ingress is deployed (and given you configured your cluster as described in the first node), then you should be able to access the podinfo application without port-forwarding at `http://localhost:8081` \n\n## Cleanup\n\nThis concludes this deployment of podinfo. You'll want to cleanup the resources we have created. ",
"x": 1010,
"y": 160,
"wires": [
@@ -381,7 +381,7 @@
"type": "comment",
"z": "c86af370eff94afa",
"name": "Create a podinfo helm chart",
"info": "",
"info": "# Helm\n\"Helm is the package manager for Kubernetes\"\n\n## Recommended Reading\nhttps://helm.sh/\nhttps://helm.sh/docs/intro/\n\n## Podinfo Helm Chart from Scratch\n\nTODO - Insert content for:\n- deployment\n- service\n- ingress\n- HPA\n* All templated from scratch\n\n## Deploy the official podinfo chart from local files\n\nPreviously we cloned the podinfo repository to our/a local machine. Under the root of the project there is a `charts` directory with a `podinfo` directory that contains the podinfo chart content.\n\n## Basic deployment\n\nLet's create a testing namespace for our target\n`kubectl create ns testing`\n\nWith Helm installed and our k3d cluster still running/configured, we can install the chart in it's vanilla form (without enabling any additional content).\n\n(From the charts directory)\n`helm install podinfo-dev ./podinfo -n testing`\n\nThis will deploy the chart which results in the deployment and service creation in the target namespace.\n\n## Customizations\nWe can inject exposed customizations as outlined in the README/values.yaml for the purpose of configuring the end package being suited for our needs.\n\nWe can make an edit to the values.yaml and upgrade our application.\n\nhpa:\n enabled: true\n \n`helm upgrade podinfo-dev -n testing ./podinfo`\n\nThis should result in an HPA being deployed to our namespace for the application.\n",
"x": 1410,
"y": 160,
"wires": [
@@ -458,4 +458,4 @@
[]
]
}
]
]