Run istioctl in Spinnaker pipeline - kubernetes

Currently we deploy custom Istio ingress gateways (g/w) through Helm using a Spinnaker pipeline (a one-time activity for every k8s namespace).
Istio 1.6 is deprecating the Helm way of creating custom user gateways; instead it asks you to deploy them with the istioctl command.
However, Spinnaker supports only Helm 2 or Helm 3 as a rendering engine.
My specific ask is: how can I now deploy the custom Istio user gateway through the Spinnaker pipeline using the istioctl command?

Since I didn't get much response, let me answer it myself.
Here's what I did:
I took a Bitnami kubectl Docker base image.
Bundled one of the Istio releases, say 1.5.8: https://github.com/istio/istio/releases/download/1.5.8/istio-1.5.8-linux.tar.gz
Got the default manifest using istioctl manifest generate.
Modified it accordingly to define a custom ingress gateway.
Ran the following command in the entrypoint.sh of the Docker image:
istioctl manifest generate -f manifest.yaml | kubectl apply -f -
Created a Docker image including all of the above steps.
In the Spinnaker pipeline, created a stage that deploys based on a K8s manifest file.
In that file, defined a Job that runs the Docker image created above (see the sketch below).
Once the Job starts running, it creates a K8s pod which in turn creates the custom user Istio ingress gateway.
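For reference, here is a rough sketch of the two manifests involved; the names (my-custom-ingressgateway, my-namespace, my-registry/istioctl-kubectl, istio-installer) are placeholders, not the actual values from my setup. First, an IstioOperator overlay passed to istioctl manifest generate so that only the custom ingress gateway is rendered:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  # "empty" profile so only the components listed below are rendered
  profile: empty
  components:
    ingressGateways:
      - name: my-custom-ingressgateway   # placeholder name
        namespace: my-namespace          # placeholder namespace
        enabled: true
        label:
          istio: my-custom-ingressgateway

And a minimal sketch of the Kubernetes Job that the Spinnaker stage deploys, which simply runs the custom image; the service account is assumed to have the RBAC permissions needed to apply the gateway resources:

apiVersion: batch/v1
kind: Job
metadata:
  name: istio-gateway-installer          # placeholder name
  namespace: my-namespace
spec:
  backoffLimit: 2
  template:
    spec:
      serviceAccountName: istio-installer            # assumed to have RBAC to apply the gateway manifests
      restartPolicy: Never
      containers:
        - name: install-gateway
          image: my-registry/istioctl-kubectl:1.5.8  # the custom image built in the steps above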

Related

Installed prometheus-community / helm-charts but I can't get metrics on "default" namespace

I recently learned about helm and how easy it is to deploy the whole prometheus stack for monitoring a Kubernetes cluster, so I decided to try it out on a staging cluster at my work.
I started by creating a dedicated namespace on the cluster for monitoring with:
kubectl create namespace monitoring
Then, with helm, I added the prometheus-community repo with:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
Next, I installed the chart with a prometheus release name:
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
At this time I didn't pass any custom configuration because I'm still trying it out.
After the install is finished, it all looks good. I can access the prometheus dashboard with:
kubectl port-forward prometheus-prometheus-kube-prometheus-prometheus-0 9090 -n monitoring
There, I see a bunch of pre-defined alerts and rules that are already monitoring the cluster, but the problem is that I don't quite understand how to create new rules to check the pods in the default namespace, where I actually have my services deployed.
I am looking at http://localhost:9090/graph to play around with the queries, and I can't seem to find any that will give me metrics on my pods in the default namespace.
I am a bit overwhelmed with the amount of information, so I would like to know: what did I miss, or what am I doing wrong here?
The Prometheus Operator includes several Custom Resource Definitions (CRDs), including ServiceMonitor (and PodMonitor). ServiceMonitors are used to tell the Operator which services should be monitored.
I'm familiar with the Operator, although not with the Helm deployment, but I suspect you'll want to create ServiceMonitors to generate metrics for your apps in any namespace (including default).
See: https://github.com/prometheus-operator/prometheus-operator#customresourcedefinitions
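For example, a minimal ServiceMonitor for a service in the default namespace could look roughly like this (my-app and the metrics port name are placeholders; note that a default kube-prometheus-stack install typically only picks up ServiceMonitors labeled with the Helm release name, here release: prometheus):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                  # placeholder
  namespace: monitoring
  labels:
    release: prometheus         # so the Operator's default selector picks it up
spec:
  selector:
    matchLabels:
      app: my-app               # must match your Service's labels
  namespaceSelector:
    matchNames:
      - default                 # look for the Service in the default namespace
  endpoints:
    - port: metrics             # the named port on your Service that exposes metrics
      interval: 30s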
ServiceMonitors and PodMonitors are CRDs for the Prometheus Operator. When working directly with the Prometheus Helm chart (without the Operator), you have to configure your targets directly in values.yaml by editing the scrape_configs section.
It is more complex to do it that way, so take a deep breath and start by reading this: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
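As a rough sketch (not the chart's exact values.yaml layout, which depends on the chart version), a scrape_config that discovers pods in the default namespace could look like this:

scrape_configs:
  - job_name: default-namespace-pods
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - default
    relabel_configs:
      # only keep pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"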

one or more valid Kubernetes manifests are required to run skaffold

When I run skaffold init in my app directory it shows me:
one or more valid Kubernetes manifests are required to run skaffold
The content of the directory:
Do I have to provide Kubernetes manifest files, for example a Pod, a Service, etc.?
Yes, you need Kubernetes manifests in the same project; typically a Deployment manifest, and perhaps a Service and an Ingress as well if you want them.
A Deployment manifest can be generated with (using > to direct the output to a file):
kubectl create deployment my-app --image=my-image --dry-run=client -o yaml > deployment.yaml
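The generated deployment.yaml will look roughly like this (my-app and my-image come from the command above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image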
Note: There is an alpha feature flag --generate-manifests that might do this for you.
E.g. with
skaffold init --generate-manifests
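Once a manifest exists, skaffold init can generate a skaffold.yaml tying the image build to that manifest. A minimal sketch, assuming the names from above (the apiVersion varies by Skaffold release):

apiVersion: skaffold/v2beta29    # version varies by Skaffold release
kind: Config
metadata:
  name: my-app
build:
  artifacts:
    - image: my-image            # built from the Dockerfile in this directory
deploy:
  kubectl:
    manifests:
      - deployment.yaml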

connect from helm to kubernetes cluster

I have an application that is deployed on a Kubernetes cluster. I access this application using a Rancher namespace; by specifying this namespace I can run "get pods" and see all the information.
Now I want to control this application from Helm. What do I need to do?
I have installed Helm on the same machine where my kubectl installation is.
If you want to "control" applications on Kubernetes cluster with Helm, you should start with helm charts. You can create some if one is not already available. Once you have chart(s), you can target the Kubernetes cluster with the cluster's KUBECONFIG file.
If I had a Helm chart like my-test-app and a Kubernetes cluster called my-dev-cluster.
With Helm I can:
deploy - install
helm install test1 my-test-app/ --kubeconfig ~/.kubeconfigs/my-dev-cluster.kubeconfig
update - upgrade
helm upgrade test1 my-test-app/ --kubeconfig ~/.kubeconfigs/my-dev-cluster.kubeconfig
remove - uninstall
helm uninstall test1 --kubeconfig ~/.kubeconfigs/my-dev-cluster.kubeconfig
Where my-dev-cluster.kubeconfig is the kubeconfig file for my cluster in ~/.kubeconfigs directory. Or you can set the path using KUBECONFIG environment variable.

Role of Helm install command vs kubectl command in Kubernetes cluster deployment

I have a Kubernetes cluster with 1 master node and 2 worker nodes, and I have another machine where I installed Helm. I am trying to create Kubernetes resources using a Helm chart and deploy them to the remote Kubernetes cluster.
While reading about the helm install command, I found that we need both the helm and kubectl commands for deploying.
My confusion here is this: when we use helm install, the chart is deployed onto Kubernetes, and we can also push it to a chart repo. So for deploying we are using Helm. But why do we need the kubectl command alongside Helm?
Helm 3: No Tiller. helm install talks to the Kubernetes API directly, using the same kubeconfig that kubectl uses. So to use Helm, you also need a configured kubectl.
Helm 2:
Helm and Tiller are client/server; helm needs to connect to Tiller to initiate the deployment. Because Tiller is not publicly exposed, helm opens a tunnel to Tiller through the Kubernetes API, using the same connection details as kubectl. See here: https://github.com/helm/helm/issues/3745#issuecomment-376405184
So to use Helm, you also need a configured kubectl. More detail: https://helm.sh/docs/using_helm/
Chart repo: this is a different concept (the same for Helm 2 / Helm 3), and it's not mandatory to use one. Repos are like artifact storage; for example, in the quay.io application registry you can audit who pushed and who used a chart. More detail: https://github.com/helm/helm/blob/master/docs/chart_repository.md. You can always bypass the repo and install from source, like: helm install /path/to/chart/src

Deploying Images from gitlab in a new namespace in Kubernetes

I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it deploys the code from GitLab to the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps if I want the code to be deployed to the dev or production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the kubernetes level. Whether you're using helm or kubectl, you can specify the desired namespace in the command.
As in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just change your current namespace to the desired namespace and install as you did before. By default, Helm charts or Kubernetes YAML files install into your current namespace unless another namespace is specified.
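For instance, if a manifest sets metadata.namespace explicitly, the resource goes to that namespace regardless of your current context (development here is just an example namespace name):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: development   # takes precedence over the current namespace of your kubectl context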