I followed the steps mentioned in https://kubernetes.github.io/ingress-nginx/deploy/#azure. While the service was created, I do not see any specific pods being created for nginx-ingress. Am I missing something here?
Note: I am running this on Azure Kubernetes Service.
Yes, you are missing the installation of the nginx ingress controller itself.
The following Mandatory Command is required for all deployments.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
You can also use Helm for that: https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
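For example, a minimal sketch with Helm 3 (the release name and namespace here are just example choices, not required values):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# release name and namespace are arbitrary examples
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
# verify the controller pods were actually created
kubectl get pods -n ingress-nginx

Once the controller pods are running, the LoadBalancer service you already have will start routing traffic to them.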
Is there anyone who uses Argo CD on EKS Fargate? There seems to be an issue with the Argo setup on Fargate: all pods are stuck in the Pending state.
I've tried installing in the argocd namespace and in existing ones. It still doesn't work.
I tried to install it using the commands below:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
Make sure you have created a Fargate profile with the namespace selector set to argocd. That might be one of the issues.
Refer to https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile
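For example, with eksctl (the cluster name my-cluster is a placeholder for your own):

eksctl create fargateprofile \
  --cluster my-cluster \
  --name argocd \
  --namespace argocd

Without a profile matching the argocd namespace, Fargate has nowhere to schedule those pods, which is exactly the symptom of everything staying Pending.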
"Failed to create NodePort" error after deploying ingress
I have an ingress defined as in the screenshot:
[screenshot of the Ingress definition]
The 2 replicas of the ingress server are not spinning up due to the "Failed to create NodePort" error. Please advise.
Just like the error says, you are missing the NodePortPods CRD. It looks like that CRD existed at some point in time, but I don't see it in the repo anymore. You didn't specify how you deployed the ingress operator, but you can make sure you install the latest version:
helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm search repo appscode/voyager --version v13.0.0
# Generate the template to check or use helm install
helm template voyager-operator appscode/voyager --version v13.0.0 --namespace kube-system --no-hooks --set cloudProvider=baremetal  # use the right cloud provider here
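If the rendered template looks right, you can pipe it straight into kubectl instead of just inspecting it (same flags as above; the cloudProvider value depends on your environment):

helm template voyager-operator appscode/voyager --version v13.0.0 --namespace kube-system --no-hooks --set cloudProvider=baremetal | kubectl apply -f -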
Currently we deploy custom Istio ingress gateways (g/w) through Helm using a Spinnaker pipeline (a one-time activity for every k8s namespace).
Istio 1.6 is deprecating the Helm way of creating custom user gateways; instead, it asks you to deploy them using the istioctl command.
Spinnaker, however, supports only Helm 2 or Helm 3 as a rendering engine.
My specific question is: how can I now deploy the custom Istio user gateway through the Helm pipeline using the istioctl command?
Since I didn't get much of a response, let me answer it myself.
Here's what I did:
I took a Bitnami kubectl Docker base image.
Bundled one of the Istio releases, say 1.5.8: https://github.com/istio/istio/releases/download/1.5.8/istio-1.5.8-linux.tar.gz
Got the default manifest using istioctl manifest generate.
Modified it accordingly to define a custom ingress gateway.
Ran the following command in the entrypoint.sh of the Docker image:
istioctl manifest generate -f manifest.yaml | kubectl apply -f -
Built a Docker image including all the steps above.
In the Spinnaker pipeline, created a stage which deploys based on a k8s file.
In that file, defined a Job that runs the Docker image just created (see the sketch below).
This way, once the Job starts running, it creates a k8s pod which internally creates the custom user Istio ingress gateway.
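For reference, a minimal sketch of how the pieces could fit together; the image tag, file paths, registry/image name, and service account are assumptions for illustration, not the exact setup I used:

# Dockerfile (sketch)
FROM bitnami/kubectl:1.18
USER root
# bundle istioctl from the Istio 1.5.8 release
ADD https://github.com/istio/istio/releases/download/1.5.8/istio-1.5.8-linux.tar.gz /tmp/
RUN tar -xzf /tmp/istio-1.5.8-linux.tar.gz -C /opt \
 && ln -s /opt/istio-1.5.8/bin/istioctl /usr/local/bin/istioctl
COPY manifest.yaml /opt/manifest.yaml
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

# entrypoint.sh (sketch)
#!/bin/sh
# render the customized manifest and apply it to the cluster
istioctl manifest generate -f /opt/manifest.yaml | kubectl apply -f -

# job.yaml (sketch) - the k8s file the Spinnaker stage deploys
apiVersion: batch/v1
kind: Job
metadata:
  name: istio-user-gateway-deploy
spec:
  template:
    spec:
      # hypothetical service account with permissions to create the gateway resources
      serviceAccountName: istio-deployer
      containers:
      - name: deploy-gateway
        image: myregistry/istio-gateway-deployer:1.5.8  # hypothetical image name
      restartPolicy: Never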
I have developed my microservice ecosystem and I managed to deploy and run it locally using Docker containers and minikube. For each service I have specified two files: deployment.yml (pod specification) and service.yml (service specification). When I deploy each service to the minikube cluster I simply run:
kubectl create -f deployment.yml
and after that
kubectl create -f service.yml
Now I want to deploy the microservice ecosystem to IBM Cloud. I spent some time researching the deployment procedures and did not find any that use the deployment.yml and service.yml files when deploying services.
My question is: can I just somehow deploy my services using the existing deployment.yml and service.yml files?
Thank you for the answers.
As long as it's Kubernetes under the hood and the Kubernetes API is accessible (i.e. kubectl works), you can do exactly the same. Whether that is sustainable in the long term depends on your case, but it likely is not, and I would suggest looking into tools like Helm.
So I was confused about the deployment steps.
I just needed to go to the IBM Cloud dashboard, find my cluster, click on the cluster link, and follow the steps in the Access section on that page.
After finishing the steps described within that section, we can deploy our services just as if we were using minikube and kubectl locally.
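In short, the Access section boils down to something like this (the cluster name is a placeholder, and the exact subcommand may differ between ibmcloud CLI versions):

ibmcloud login
# point kubectl at the IBM Cloud cluster
ibmcloud ks cluster config --cluster my-cluster
# after the kubectl context is set, the exact same files work
kubectl create -f deployment.yml
kubectl create -f service.yml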
When I deploy a DaemonSet in Kubernetes (1.7+), e.g. the nginx ingress controller as a DaemonSet, do I need to set some RBAC rules? I know I need to set some RBAC rules if I use a Deployment.
To deploy the ingress controller, you need to enable some RBAC rules. You can find the RBAC rules in the nginx controller repository: https://github.com/kubernetes/ingress/blob/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml
To create a DaemonSet you don't need to create RBAC rules for it. You might need RBAC for what is running in your Pod, be it via a Deployment, a DaemonSet, or whatever. It is the software you're running inside that might want to interact with the Kubernetes API, as is the case with an ingress controller. So it is in fact irrelevant how you make the Pod happen; it is the software you deploy that defines what RBAC (Cluster)Roles, bindings, etc. it needs.
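For illustration, an abridged sketch of the kind of objects that RBAC setup consists of (the names and the rule list are placeholders; the linked manifest has the real thing):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: kube-system
---
# use rbac.authorization.k8s.io/v1beta1 on clusters older than 1.8
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints", "secrets", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: kube-system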
I was able to enable RBAC using Helm (--set rbac.create=true); the permission error is not seen anymore, and the nginx ingress controller is working as expected!
helm install --name my-release stable/nginx-ingress --set rbac.create=true
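You can double-check that the chart actually created the RBAC objects; the resource names are derived from the release name, so grep for it (the release was installed in the default namespace here):

# cluster-scoped RBAC objects created by the chart
kubectl get clusterrole,clusterrolebinding | grep my-release
# the service account lives in the release namespace
kubectl get serviceaccount | grep my-release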