Kubernetes deployment automation with Spinnaker and Jenkins - Part 1

Amartya Mandal
Jul 2, 2020

From time to time I need a full-blown Kubernetes cluster to test new ideas or to keep upgrading my skills, an ongoing and never-ending effort. I might need the entire setup for a few hours before a client presentation, or for a few days for a hands-on lab or workshop, and every time the base setup should be exactly the same. It is not always possible to test new ideas in a production or non-production environment, due to security concerns and the time it takes to work through the chain of command.

Along with the cluster, a few things always remain part of the basic setup:

  1. Creation of a brand-new cluster. You could do this in a home lab on a cheap server, but that won't always give you the real feel: you would miss out on provisioning a load balancer or an external IP, and you would struggle to configure dynamic DNS if you need to expose a service externally. It would also be resource hungry, and you might only need it for a day, so in most cases I end up creating the cluster on one of the cloud providers that offer managed Kubernetes.

Note 1: DigitalOcean recently started providing managed Kubernetes, and the pricing model is very upfront: the billing is updated every day, so there are no surprises, and you know in advance how much you are going to pay at the end of the month if you somehow forget to destroy your setup. And if you are a new user, you can get $100 of free credit for two months.

  2. Along with the cluster you certainly need some kind of service mesh if you want to demonstrate service mesh features, and a product like Istio brings all the required tooling with it: for service visibility you get Kiali, and you can use the Istio ingress and egress gateways and virtual services to route traffic based on your DNS configuration.
  3. Providers like Google are very critical of websites served over plain HTTP, so you definitely need a Kubernetes cert-manager to issue and maintain certificates from a free certificate authority like Let's Encrypt.
  4. And then the same old sample applications, which finally let you explore and experiment with new features, or demonstrate whatever you want to do.
  5. Exposing the services to the internet through ingress gateways (see the Gateway/VirtualService sketch after this list).
  6. Some sort of visibility into the interactions between services, with a standard tool like Kiali.
  7. And on top of that, if you have to replicate a full-blown CI/CD flow and all the steps involved, you might end up tying everything together with a CI server like Jenkins and GitHub.
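For example, exposing one of the sample services through the Istio ingress gateway usually comes down to a Gateway plus a VirtualService object. The following is only a minimal sketch, assuming a placeholder hostname demo.example.com and a placeholder backing service called sample-app; the actual manifests in my repository may look different.

```bash
# Minimal sketch: expose a sample service through Istio's default ingress gateway.
# "demo.example.com" and "sample-app" are placeholders, not names from the repo.
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sample-gateway
spec:
  selector:
    istio: ingressgateway          # Istio's default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "demo.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sample-app
spec:
  hosts:
  - "demo.example.com"
  gateways:
  - sample-gateway
  http:
  - route:
    - destination:
        host: sample-app           # the Kubernetes Service backing the app
        port:
          number: 80
EOF
```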

So the list of operations above repeats almost every time and delays the actual objective.

I wanted something that would bring all of this up within an hour's notice, whenever I need it. This saves me both cost and time.

I am using DigitalOcean as my cloud provider because it is cheap and straightforward, GitHub as my source repository, Jenkins as my build server sitting on a public cloud (so that I can easily configure webhooks), and Spinnaker for deploying to the clusters.

Note 2: Spinnaker is provisioned on premises because it is the most expensive piece of the entire setup. It is very tricky to maintain a Spinnaker server, so even if you can automate the creation of the server itself (with Vagrant there are providers for bare-metal hypervisors like KVM, and even a Vagrant provider for DigitalOcean), the incremental configuration needed to make it usable, such as the following, makes it hard to recreate every time you need it. Yes, you can take a backup of the hal config, but that won't guarantee that any API key associated with it will still work properly when you restore it. (A sketch of the corresponding hal commands follows the list below.)

  1. Enabling the image registry
  2. Adding artifact repositories
  3. Updating different cloud providers from time to time
  4. Updating new kube-contexts and kube-configs
  5. Updating CI servers
  6. Updating service accounts for new clusters
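To give an idea of what this maintenance looks like in practice, here is a rough sketch of the kind of Halyard commands behind a couple of the items above. The account names, addresses and repository names are placeholders, and flags can differ slightly between Halyard versions.

```bash
# Rough sketch of the hal commands behind items 1 and 5 above (placeholder names).

# Enable the Docker registry provider and add a registry account
hal config provider docker-registry enable
hal config provider docker-registry account add my-docker-registry \
  --address index.docker.io \
  --repositories myuser/sample-app \
  --username myuser \
  --password                      # hal prompts for the password on stdin

# Point Spinnaker at the Jenkins CI server
hal config ci jenkins enable
hal config ci jenkins master add my-jenkins \
  --address https://jenkins.example.com \
  --username admin \
  --password

# Push the new configuration to the running Spinnaker services
hal deploy apply
```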

In fact, Spinnaker is resource hungry, needing almost 16 GB of memory and 4 CPU cores, which is really expensive to keep running on any cloud provider; that is fine if I keep it running locally.

Note 3: I will discuss the infrastructure provisioning and configuration of Spinnaker in upcoming posts.

With the latest V2 (manifest-based) Kubernetes provider Spinnaker is much more flexible: you can deploy manifests directly.
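As a rough sketch, registering a freshly created cluster's kube-context with the V2 provider looks like this; the account name do-demo-cluster is a placeholder, and depending on your Halyard version the --provider-version flag may already be the default and can be dropped.

```bash
# Sketch: register the current kubeconfig context as a Spinnaker Kubernetes (V2) account.
hal config provider kubernetes enable
hal config provider kubernetes account add do-demo-cluster \
  --provider-version v2 \
  --context "$(kubectl config current-context)"
hal deploy apply
```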

At the end of the day I can just destroy the cluster and I am ok until next time.

The following diagram shows the entire process that automates the creation of the lab. You can get a glimpse of the automated end-to-end flow in the YouTube link provided at the beginning of the article.

The following points briefly describe each of the stages; the scripts and code for them can be found in the repository link below.

I have also specified the path of each module for better understanding.

  1. Push code changes from VS Code to GitHub.
  2. A webhook is configured in GitHub for Jenkins (Jenkins resides on the DigitalOcean cloud). The webhook triggers a build in Jenkins; the pipeline file resides in GitHub (path: k8s_practice/Jenkinsfile). This step compiles the code, builds all the Docker images, tags them with a unique build number, and pushes them to the Docker registry (see the build-and-push sketch after this list).
  3. The Spinnaker server, which sits on my local server on premises, starts the first pipeline in the sequence because its trigger is linked to the Docker registry: if any of the images is updated, the pipeline starts and invokes a job back in Jenkins to create a new cluster.
  4. In Jenkins there is only one node, the master node, for simplicity, but one can have multiple worker nodes.
  5. In my case the master node runs a job to provision a new Kubernetes cluster on DigitalOcean using their own command-line tool, doctl (path: k8s_practice/Cluster/create-dgo-k8s-cluster.sh); see the doctl sketch after this list. I have also configured kubectl, and later in the stage we need istioctl, which is downloaded and configured when the job runs. Cluster creation takes several minutes; once it is completed, the job saves the config file, creates a new kube-context, and switches the build server's kube-context to the newly created one.
  6. Once the new cluster is up and running, the Spinnaker pipeline pauses at a manual judgment stage. This is required for activities such as updating the Spinnaker server with the new cluster context, changing the context to the new cluster so that all the upcoming pipelines in the sequence can deploy applications to it, and creating a new Spinnaker service account in the new cluster and updating Spinnaker with the help of hal (path: k8s_practice/Cluster/add-spinnaker-account-to-cluster.sh); a sketch of this appears after the list.
  7. Once the above manual activities are complete and the confirmation is submitted, the Spinnaker manual stage triggers the next pipeline, which invokes a job back in Jenkins to deploy the Kubernetes cert-manager (path: k8s_practice/kubernetes/kubectl/kubectl-cert-manager.sh).
  8. On completion it invokes the next pipeline in Spinnaker, which in turn triggers a job in Jenkins to deploy Istio. Frankly speaking, Istio should be installed via a Helm chart from Spinnaker itself (it recently started supporting Helm 3), but I was never able to make that work properly, and I do not see much traction along that line (path: k8s_practice/kubernetes/kubectl/deploy-istio.sh).
  9. During the deployment of Istio, DigitalOcean starts provisioning a new external load balancer and assigns it a public IP. Once the Jenkins job finishes, it prints out the IP address and then invokes a manual stage in the Spinnaker pipeline, which asks me to update my DNS with the newly created IP address for the services I want to expose to the internet.
  10. This stage could have been automated as well, since I get the IP address during the Jenkins build and could trigger an API call to update my Google DNS manager, but that is risky in many ways and would require too many secrets to be kept in the open, at least in my present setup, so I decided to do it manually.
  11. Once I confirm that the DNS has been updated, the Spinnaker pipeline resumes and starts deploying Kubernetes artifacts such as a certificate issuer (with the help of Let's Encrypt), a certificate object, the Istio ingress gateway, virtual services, and a few sample applications (see the issuer sketch after this list).
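For step 2, the heart of the Jenkins job is simply building the Docker images, tagging them with the build number, and pushing them. The real logic lives in k8s_practice/Jenkinsfile; the commands below are only a hedged sketch with a placeholder image name.

```bash
# Sketch of what the Jenkins build stage boils down to.
# "myuser/sample-app" is a placeholder; BUILD_NUMBER is injected by Jenkins.
IMAGE="myuser/sample-app"
TAG="${BUILD_NUMBER:-1}"

docker build -t "${IMAGE}:${TAG}" .
docker push "${IMAGE}:${TAG}"     # the pushed tag is what triggers the Spinnaker pipeline
```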
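For step 5, provisioning the cluster with doctl and pointing kubectl at it essentially comes down to two commands. The script k8s_practice/Cluster/create-dgo-k8s-cluster.sh does more than this; the cluster name, region, node size and count below are placeholders.

```bash
# Sketch: create a managed Kubernetes cluster on DigitalOcean and fetch its kubeconfig.
doctl kubernetes cluster create demo-cluster \
  --region nyc1 \
  --node-pool "name=default-pool;size=s-2vcpu-4gb;count=3" \
  --wait

# Saves the kubeconfig and switches the current kube-context to the new cluster
doctl kubernetes cluster kubeconfig save demo-cluster
kubectl config current-context
```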
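For step 6, adding a Spinnaker service account to the new cluster follows the usual pattern sketched below; the namespace and account names are placeholders, and the real steps live in k8s_practice/Cluster/add-spinnaker-account-to-cluster.sh. The new kube-context is then registered with hal in the same way as shown earlier for the V2 provider.

```bash
# Sketch: create a service account for Spinnaker in the new cluster (placeholder names).
kubectl create namespace spinnaker
kubectl create serviceaccount spinnaker-sa -n spinnaker
kubectl create clusterrolebinding spinnaker-sa-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=spinnaker:spinnaker-sa
```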
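For step 11, the certificate issuer is the standard cert-manager / Let's Encrypt pattern. The manifest below is only a sketch: the e-mail address is a placeholder, and the apiVersion depends on the cert-manager release you deploy.

```bash
# Sketch: a Let's Encrypt ClusterIssuer for cert-manager
# (placeholder e-mail; apiVersion varies with the cert-manager release).
cat <<'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: me@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: istio             # serve the HTTP-01 challenge via the Istio ingress
EOF
```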

Note 4: Upcoming posts in this series will cover each of the steps described above in separate articles, with hands-on demonstrations of the various configurations.


Originally published at https://www.linkedin.com.
