How to create Kubernetes Cluster using Kops with insecure registry? - kubernetes

I have to create a cluster with support of insecure docker registry. I want to use Kops for this. Is there any way to create cluster with insecure registry using Kops?

You can set the insecure registry at cluster config edit time, after the kops create cluster ... command (navigate to the clusterSpec part of the file):
$ kops edit cluster $NAME
...
docker:
  insecureRegistry: registry.example.com
  logDriver: json-file
...
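After saving the edit, the change still has to be pushed out to the nodes. A minimal sketch of that step, assuming the usual kops workflow and that $NAME still holds your cluster name:
$ kops update cluster $NAME --yes
$ kops rolling-update cluster $NAME --yes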

Related

Unable to switch from Minikube to AWS EKS on windows for Deployment

I have minikube on my local machine for testing deployment and I ran commands like
kubectl apply -f testingfile.yaml
and it worked fine. Now I want to perform the same on aws eks. I have followed all steps given in https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html. Created a config file and added that to the path. Commands like eksctl get cluster are correctly listing the clusters from aws eks but now when I run
kubectl apply -f testingfile.yaml
I am getting the following statement:
deployment.apps/testingfile unchanged
which means it is still applying the command inside minikube and not on aws eks. I have also deleted the path variables related to minikube from the environment variables, but I am still unable to switch to aws eks for applying. I would like to deploy this on aws eks. Let me know what I am missing here.
Check your existing cluster contexts.
There will be multiple contexts, one for Minikube and one for EKS:
kubectl config get-contexts
Change the context to EKS; if your config is set up, it will be listed there:
kubectl config use-context <Name of context>
This way you can switch to another cluster.
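If no EKS context is listed at all, it usually has to be added to your kubeconfig first. A minimal sketch, assuming the AWS CLI is configured and using a placeholder cluster name and region:
aws eks update-kubeconfig --name <eks-cluster-name> --region <aws-region>
kubectl apply -f testingfile.yaml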

How to use kubectl command for kubernetes implemented in rancher run with docker?

I built a rancher using docker on server 1.
I created and added a Kubernetes cluster on server 2, and I wanted to access Kubernetes with the kubectl command locally on server 2, but a localhost:8080 error is displayed.
How can I use the kubectl command locally against the Kubernetes cluster configured with the Docker-based Rancher?
I fixed that issue by modifying the kubeconfig file.
The kubeconfig contents can be viewed from within Rancher.
The file to be modified is ~/.kube/config
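A minimal sketch of that fix, assuming you copy the kubeconfig that Rancher shows for the cluster into the default kubectl location:
mkdir -p ~/.kube
vi ~/.kube/config        # paste the kubeconfig copied from Rancher here
kubectl get nodes        # should now reach the cluster instead of localhost:8080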

KOPS reload ssh access key to cluster

I want to rotate my Kubernetes ssh access key using the commands from this website:
https://github.com/kubernetes/kops/blob/master/docs/security.md#ssh-access
so these:
kops delete secret --name <clustername> sshpublickey admin
kops create secret --name <clustername> sshpublickey admin -i ~/.ssh/newkey.pub
kops update cluster --yes
And when I run the last command, kops update cluster --yes, I get this error:
completed cluster failed validation: spec.spec.kubeProxy.enabled: Forbidden: kube-router requires kubeProxy to be disabled
Does anybody have any idea how I can change the ssh key without disabling kubeProxy?
This problem comes from having set
spec:
  networking:
    kuberouter: {}
but not
spec:
  kubeProxy:
    enabled: false
in the cluster spec.
Export the config using kops get -o yaml > myspec.yaml, edit the config according to the error above. Then you can apply the spec using kops replace -f myspec.yaml.
It is considered a best practice to check the above yaml into version control to track any changes done to the cluster configuration.
Once the cluster spec has been amended, the new ssh key should work as well.
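For illustration, after that edit the relevant part of myspec.yaml should look roughly like this (assuming kube-router stays as the networking provider):
spec:
  kubeProxy:
    enabled: false
  networking:
    kuberouter: {}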
What version of Kubernetes are you running? If you are running the latest one, 1.18.x, the user is not admin but ubuntu.
One other thing you could do is to first edit the cluster and fix the kubeProxy enabled setting in the spec (per the error above). Run kops update cluster and a rolling update, and then do the secret deletion and creation, as sketched below.
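A rough sketch of that ordering (the cluster name is a placeholder):
kops edit cluster <clustername>                  # set spec.kubeProxy.enabled: false, per the error above
kops update cluster <clustername> --yes
kops rolling-update cluster <clustername> --yes
Then delete and recreate the sshpublickey secret as in the question and run kops update cluster --yes once more.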

How to access local kubernetes cluster

I have deployed 1 master and 3 nodes on VM's.
I can successfully run the "kubectl" command on the server's SSH CLI. I can deploy pods, all fine.
But I couldn't find out how to run the "kubectl" command from my local machine and manage the K8S cluster. How can I do that?
Thanks!
You might have a kubeconfig file somewhere on the VMs. You can copy this one to your local device under $HOME/.kube/config, so kubectl knows how to access the cluster.
For more information, see the kubernetes documentation.
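A minimal sketch of that copy, assuming a kubeadm-provisioned master where the admin kubeconfig lives at /etc/kubernetes/admin.conf (the user and hostname are placeholders):
mkdir -p $HOME/.kube
scp user@master-vm:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes        # should now list the master and the three worker nodes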
From your local machine run:
kubectl config get-contexts
Then run the below (replace cluster-name with the cluster name you want to communicate with):
kubectl config use-context cluster-name
If the cluster name you want to communicate with is not listed, it means you haven't got a context for that cluster in your kubeconfig file.

How to configure local kubectl to connect to kubernetes EKS cluster

I am very new at kubernetes.
I have created a cluster setup at AWS EKS
Now I want to configure kubectl on a local ubuntu server so that I can connect to the AWS EKS cluster.
Need to understand the process. [ If at all it is possible ]
The aws cli is used to create the Kubernetes config (normally ~/.kube/config).
See details with:
aws eks update-kubeconfig help
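For example, a minimal sketch (the cluster name and region are placeholders, and the AWS CLI is assumed to be configured with credentials that can see the cluster):
aws eks update-kubeconfig --name <eks-cluster-name> --region <aws-region>
kubectl get svc        # quick check that the new context works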
You can follow this guide. You need to do the following steps.
1. Installing kubectl
2. Installing aws-iam-authenticator
3. Create a kubeconfig for Amazon EKS
4. Managing Users or IAM Roles for your Cluster
Also take a look at configuring kubeconfig using the AWS CLI here.