I'm trying to connect an already existing GKE cluster to a Mongo Cluster in mongo atlas.
I followed this tutorial, with the only difference that I didn't create the peering before the GKE cluster but the other way around: I created the GKE cluster first and, when I saw I needed the peering, I peered the VPC in which the GKE cluster was deployed to Mongo.
Then I tried the following:
kubectl run -i --tty mongo-test --image=mongo --restart=Never -- sh
and then ran mongosh <<myconnectionurl>>, but it fails with a timeout.
I fear the peering needs to be done BEFORE creating the GKE cluster, which would be extremely undesirable for me. Note that VPC-native traffic routing is enabled in said GKE cluster.
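For reference, this is roughly how I checked things on the GCP side; the network, cluster and zone names below are placeholders for my setup:
# The peering on the cluster's VPC should show up with state ACTIVE
gcloud compute networks peerings list --network=my-gke-vpc
# The GKE Pod/Service CIDRs, to check they don't overlap the Atlas CIDR
gcloud container clusters describe my-gke-cluster --zone us-central1-a \
  --format="value(clusterIpv4Cidr,servicesIpv4Cidr)"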
I'm trying to launch a prometheus pod in order to scrape the etcd metrics from within our kubernetes cluster.
I was trying to reproduce the solution proposed here: Access etcd metrics for Prometheus
Unfortunately, the etcd containers seem to be unavailable from the cluster.
# nc -vz etcd1 2379
nc: getaddrinfo for host "etcd1" port 2379: Name or service not known
In a way, this seems logical, since no etcd containers appear in the cluster:
kubectl get pods -A | grep -i etcd does not return anything.
However, when I connect to one of the machines hosting the master nodes, I can find the containers using the docker ps command.
The cluster has been deployed using Kubespray.
Do you know if there is a way to reach the etcd containers from the cluster pods?
Duh… the etcd container is configured with the host network. Therefore, the metrics endpoint is directly accessible on the node.
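To actually scrape it, the target therefore has to be the node's address rather than a Service name. A quick check from a pod, as a sketch (NODE_IP, NODENAME and the certificate paths are placeholders; on my Kubespray nodes the etcd client certs live under /etc/ssl/etcd/ssl, but that can differ):
# etcd on Kubespray usually requires client TLS, so plain nc only proves the port is open
nc -vz NODE_IP 2379
# With the etcd client certificates available in the pod, the metrics endpoint answers directly
curl --cacert /etc/ssl/etcd/ssl/ca.pem \
     --cert   /etc/ssl/etcd/ssl/node-NODENAME.pem \
     --key    /etc/ssl/etcd/ssl/node-NODENAME-key.pem \
     https://NODE_IP:2379/metrics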
I'm attempting to follow instructions for resolving a data congestion issue by enabling 2 unsafe sysctls for certain pods running in an EKS-managed Kubernetes cluster. To do this, I must enable those parameters on the nodes running those pods. The following command enables them on a per-node basis:
kubelet --allowed-unsafe-sysctls \
'net.unix.max_dgram_qlen,net.core.somaxconn'
However, the nodes in the cluster I am working with are deployed by EKS. The EKS cluster was deployed using the Amazon dashboard (not a YAML config file/Terraform/etc.). I am not sure how to translate the above step so that all nodes in my cluster have those sysctls allowed.
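The closest thing I have found so far is passing the flag through the node group's launch template user data, something like the sketch below, but I am not sure this is the right approach for node groups created from the console (the cluster name is a placeholder and the bootstrap arguments are my assumption):
#!/bin/bash
# User data for an EKS launch template (assumes the Amazon Linux 2 EKS-optimized AMI,
# which ships /etc/eks/bootstrap.sh)
/etc/eks/bootstrap.sh my-eks-cluster \
  --kubelet-extra-args '--allowed-unsafe-sysctls=net.unix.max_dgram_qlen,net.core.somaxconn'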
I am very new to Kubernetes.
I have created a cluster setup at AWS EKS.
Now I want to configure kubectl on a local Ubuntu server so that I can connect to the AWS EKS cluster.
I need to understand the process. [If it is possible at all]
The AWS CLI is used to create the Kubernetes config (normally ~/.kube/config).
See the details with:
aws eks update-kubeconfig help
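For example (region and cluster name are placeholders):
aws eks update-kubeconfig --region us-east-1 --name my-cluster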
You can follow this guide. You need to do the following steps:
1. Installing kubectl
2. Installing aws-iam-authenticator
3. Create a kubeconfig for Amazon EKS
4. Managing Users or IAM Roles for your Cluster
Also take a look at configuring kubeconfig using the AWS CLI here.
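Once kubectl, aws-iam-authenticator and the kubeconfig are in place, a quick sanity check from the Ubuntu server (this assumes your AWS credentials are already configured there):
# Confirm which AWS identity kubectl will authenticate as
aws sts get-caller-identity
# Confirm the active context and that the EKS API server answers
kubectl config current-context
kubectl get svc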
My setup has a K8S Redis cluster with 8 nodes and 32 pods across them and a load balancer service on top.
I am using a Redis cluster client to access this cluster using the load balancer's external IP. However, when handling queries, as part of Redis cluster redirection (MOVED / ASK), the cluster client receives the internal IP addresses of the 32 Pods, and connecting to those fails within the client.
For example, I provide the IP address of the load balancer (35.245.51.198:6379), but the Redis cluster client throws errors like:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: Failed connecting to host 10.32.7.2:6379
where 10.32.7.2 is an internal Pod IP.
Any ideas about how to deal with this situation will be much appreciated.
Thanks in advance.
If you're running on GKE, you can NAT the Pod IP using the IP masquerade agent:
Using IP masquerading in your clusters can increase their security by preventing individual Pod IP addresses from being exposed to traffic outside link-local range (169.254.0.0/16) and additional arbitrary IP ranges
Your issue, specifically, is that the Pod range is within 10.0.0.0/8, which is a non-masquerade CIDR by default.
You can change this with a ConfigMap that removes that range from the non-masquerade list, so that traffic to it is masqueraded and picks up the node's external IP as the source address (see the sketch below).
Alternatively, you can change the Pod range in your cluster to a range that is masqueraded by default.
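A sketch of that ConfigMap, assuming the ip-masq-agent runs in kube-system and reads a ConfigMap named ip-masq-agent with a key called config, as described in the GKE documentation; adjust the non-masquerade list to your own ranges:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # 10.0.0.0/8 is intentionally left out, so traffic to that range
    # is masqueraded and leaves with the node's IP as the source address
    nonMasqueradeCIDRs:
      - 192.168.0.0/16
    masqLinkLocal: false
    resyncInterval: 60s
EOF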
I have been struggling with the same problem when installing the bitnami/redis-cluster chart on GKE.
In order to have the right networking settings, you should create the cluster as a public cluster.
The equivalent command line for creating the cluster in MYPROJECT is:
gcloud beta container --project "MYPROJECT" clusters create "redis-cluster" --zone "us-central1-c" --no-enable-basic-auth --cluster-version "1.21.5-gke.1802" --release-channel "regular" --machine-type "e2-medium" --image-type "COS_CONTAINERD" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM --no-enable-ip-alias --network "projects/MYPROJECT/global/networks/default" --subnetwork "projects/oddsjam/regions/us-central1/subnetworks/default" --no-enable-intra-node-visibility --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 --workload-pool "myproject.svc.id.goog" --enable-shielded-nodes --node-locations "us-central1-c"
Then you need to reserve as many external IP addresses (under the VPC network product) as there are Redis nodes. Those IP addresses will be picked up by the Redis nodes automatically.
Then you are ready to get the values.yaml of the Bitnami Redis Cluster Helm chart and change the configuration according to your use case. Add the list of external IPs you created to the cluster.externalAccess.loadBalancerIP value, along the lines of the sketch below.
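For example, something along these lines; the exact key layout can differ between chart versions, so double-check it against your chart's values.yaml (the IPs below are placeholders for the reserved addresses):
cat > values.yaml <<'EOF'
# Sketch of the external-access section for bitnami/redis-cluster;
# verify the key names against your chart version's values.yaml
cluster:
  externalAccess:
    enabled: true
    service:
      type: LoadBalancer
      loadBalancerIP:
        - 34.69.0.10
        - 34.69.0.11
        - 34.69.0.12
EOF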
Finally, you can install the Redis cluster on GKE by running:
helm install cluster-name -f values.yaml bitnami/redis-cluster
This command will give you the password of the cluster. You can then use redis-cli to connect to the new cluster with:
redis-cli -c -h EXTERNAL_IP -p 6379 -a PASSWORD
Is it possible to add existing instances in the GCP to a kubernetes cluster?
The inputs I can see while creating the cluster in graphical mode are:
Cluster Name,
Location,
Zone,
Cluster Version,
Machine type,
Size.
In command mode:
gcloud container clusters create cluster-name --num-nodes=4
I have 10 running instances.
I need to create the Kubernetes cluster with these already existing running instances.
On your instance, run the following:
kubelet --api_servers=http://<API_SERVER_IP>:8080 --v=2 --enable_server --allow-privileged
kube-proxy --master=http://<API_SERVER_IP>:8080 --v=2
This will connect your slave node to your existing cluster. It's actually surprisingly simple.
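Those flags come from older Kubernetes releases; on a kubeadm-based cluster the same idea is usually done with kubeadm join instead (a sketch; the token, hash and API server address are placeholders printed by the first command):
# On the control-plane node, print a ready-made join command
kubeadm token create --print-join-command
# On the instance you want to add, run the printed command, which looks like:
kubeadm join <API_SERVER_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>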