OpenShift import-image fails when behind corporate proxy - kubernetes

When I run
oc import-image centos:7 --confirm true
I am getting
The import completed with errors.
Name: centos
Namespace: pd-kube-ci
Created: Less than a second ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2018-12-27T21:00:26Z
Docker Pull Spec: docker-registry.default.svc:5000/pd-kube-ci/centos
Image Lookup: local=false
Unique Images: 0
Tags: 1
7
tagged from centos:7
! error: Import failed (InternalError): Internal error occurred: Get https://registry-1.docker.io/v2/: proxyconnect tcp: EOF
Less than a second ago
error: tag 7 failed: Internal error occurred: Get https://registry-1.docker.io/v2/: proxyconnect tcp: EOF
For the life of me, I cannot find the source of proxyconnect tcp: EOF. It's not found anywhere in the OpenShift/Kubernetes source, and Google knows next to nothing about it.
I have also verified that I can docker pull centos from each node (including the master and infra nodes). It only fails when OpenShift tries to pull the image.
Any ideas?

Turns out it was a misconfiguration in our openshift_https_proxy Ansible var. Specifically, we had:
openshift_https_proxy=https://proxy.mycompany.com:8443
And we should have had
openshift_https_proxy=http://proxy.mycompany.com:8443
To fix this, we had to edit /etc/origin/master/master.env on the masters and /etc/sysconfig/docker on all nodes, then restart per the Working with HTTP Proxies documentation.
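For reference, a minimal sketch of what the corrected entries might look like in those files; the NO_PROXY values are illustrative assumptions, and the key point is the http:// scheme on the HTTPS proxy variable:
# /etc/origin/master/master.env on the masters, /etc/sysconfig/docker on all nodes
HTTP_PROXY=http://proxy.mycompany.com:8443
HTTPS_PROXY=http://proxy.mycompany.com:8443
NO_PROXY=.cluster.local,.svc,docker-registry.default.svc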

Related

GKE config connector issue - Post i/o timeout

I am running into the below error when creating a compute IP.
Config Connector is already enabled, and it is a private cluster hosted on a shared network.
Version 1.17.15-gke.800
$ kubectl apply -f webapp-compute-ip.yaml
Error from server (InternalError): error when creating "webapp-compute-ip.yaml": Internal error occurred: failed calling webhook "annotation-defaulter.cnrm.cloud.google.com": Post https://cnrm-validating-webhook.cnrm-system.svc:443/annotation-defaulter?timeout=30s: dial tcp 192.168.66.130:9443: i/o timeout
$ cat webapp-compute-ip.yaml
apiVersion: compute.cnrm.cloud.google.com/v1beta1
kind: ComputeAddress
metadata:
  name: webapp-ip-test
  namespace: sandbox
  labels:
    app: webapp
    environment: test
  annotations:
    cnrm.cloud.google.com/project-id: "cluster-name"
spec:
  location: global
This problem was due to a Config Connector version issue.
There was a change in the webhook default port, from 443 to 9443.
The Config Connector version depends on the GKE version, which I have no control over; moreover, there is no public documentation on which Config Connector version ships with which GKE version. There is an existing feature request for this.
The solution for me was to add port 9443 to the firewall rule.
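As a sketch, assuming the rule in question is the one that allows the GKE control plane to reach the nodes (the rule name and the other ports below are placeholders; note that --allow replaces the existing list, so include whatever your rule already allows):
$ gcloud compute firewall-rules list --filter="name~gke"
$ gcloud compute firewall-rules update <master-to-nodes-rule> --allow tcp:10250,tcp:443,tcp:9443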

Kubernetes Ingress Controller: Failed calling webhook, dial tcp connect: connection refused

I have set up a Kubernetes cluster (a master and a worker) on two CentOS 7 machines. They have the following IPs:
Master: 192.168.1.40
Worker: 192.168.1.41
They are accessible by SSH and I am not using a VPN. For both boxes, I have sudo access.
For the work I am doing, I had to add an Nginx Ingress Controller, which I did by doing:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/baremetal/deploy.yaml
This YAML file seems fine to me and is the standard manifest for adding an NGINX ingress controller to a Kubernetes cluster.
I don't see any errors when I do the above command.
However, when I try to install a helm configuration, such as:
helm install dai eggplant/dai --version 0.6.5 -f dai.yaml --namespace dai
I am getting an error with my Nginx Ingress Controller:
W0119 11:58:00.550727 60628 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Error: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s": dial tcp 10.108.86.48:443: connect: connection refused
I think this is because of some kind of DNS error. I don't know where the IP 10.108.86.48:443 is coming from or how to find out.
I have also enabled a bunch of ports with firewall-cmd.
[root@manager-node ~]# sudo firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: ens33
sources:
services: dhcpv6-client ssh
ports: 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 443/tcp 30154/tcp 31165/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
However, my nginx ingress pod doesn't seem to start either:
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-7bc44b4bb-rwmh2 0/1 ContainerCreating 0 19h
It remains as ContainerCreating for hours.
The issue is that as part of that kubectl apply -f you are also applying a ValidatingWebhookConfiguration (check the applied manifest file).
See Using Admission Controllers | Kubernetes for more info.
The error you are seeing is because your Deployment is not starting up, so the validating webhook served by the controller isn't reachable either, and the validating admission controller in Kubernetes therefore fails every Ingress request. The webhook is configured by these flags on the controller Deployment:
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
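The ValidatingWebhookConfiguration itself points Ingress create/update requests at that admission service, which is why every request is rejected while the controller is down. Roughly, and heavily abridged (the names match the ones in your error message; check the exact fields in the manifest you applied):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-nginx-admission
webhooks:
- name: validate.nginx.ingress.kubernetes.io
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: ingress-nginx
      name: ingress-nginx-controller-admission
      ...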
Your pod is most likely not starting for another reason. More information is required to further debug.
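For example, describing the pod and checking recent events in the namespace will usually show what is blocking it (the pod name below is the one from your kubectl get pods output):
kubectl -n ingress-nginx describe pod ingress-nginx-controller-7bc44b4bb-rwmh2
kubectl -n ingress-nginx get events --sort-by=.metadata.creationTimestamp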
I would recommend removing the ValidatingWebhookConfiguration from the applied manifest.
You can also remove it manually with
kubectl delete validatingwebhookconfiguration ingress-nginx-admission
(ValidatingWebhookConfigurations aren't namespaced)

metric-server : TLS handshake error from 20.99.219.64:57467: EOF

I have deployed metrics-server using:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
The metrics-server pod is running, but in the logs I am getting errors:
I0601 20:00:01.321004 1 log.go:172] http: TLS handshake error from 20.99.219.64:34903: EOF
I0601 20:00:01.321160 1 log.go:172] http: TLS handshake error from 20.99.219.64:22575: EOF
I0601 20:00:01.332318 1 log.go:172] http: TLS handshake error from 20.99.219.64:14603: EOF
I0601 20:00:01.333174 1 log.go:172] http: TLS handshake error from 20.99.219.64:22517: EOF
I0601 20:00:01.351649 1 log.go:172] http: TLS handshake error from 20.99.219.64:3598: EOF
The IP 20.99.219.64 is not present in the cluster. I have checked using:
kubectl get all --all-namespaces -o wide | grep "20.99.219.64"
and nothing comes up in the output.
I am using Calico and initialized the cluster with --pod-network-cidr=20.96.0.0/12.
Also, kubectl top nodes is not working; I am getting the error:
node#kubemaster:~/Desktop/dashboard$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
During deployment of metrics-server, remember to add the following lines in the args section:
- args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
Also add the following lines at the pod spec level (spec.template.spec), outside of the containers level:
hostNetwork: true
restartPolicy: Always
Remember to apply changes.
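As a sketch, this is roughly where those lines end up in the metrics-server Deployment from components.yaml (only the relevant fragment is shown; the image and any pre-existing args stay as they are in the file):
spec:
  template:
    spec:
      hostNetwork: true
      restartPolicy: Always
      containers:
      - name: metrics-server
        args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname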
Metrics-server attempts to authorize itself using token authentication. Please ensure that you're running your kubelets with webhook token authentication turned on.
Speaking about TLS directly: TLS handshake messages are large packets, and with a wrong MTU in Calico they get dropped, so change it according to calico-project-mtu.
Execute the command:
$ kubectl edit configmap calico-config -n kube-system
and change the MTU value from 1500 to 1430.
Take a look: metrics-server-mtu.
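For illustration, a rough fragment of the calico-config ConfigMap after the edit, assuming the stock Calico manifests where the MTU is stored under the veth_mtu key (other data keys are elided):
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  veth_mtu: "1430"
  ...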
I also hit this problem and couldn't get metrics-server to work in my k8s cluster (kubectl version 1.25.4). I followed the instructions above and solved the issue!
I downloaded the components.yaml file and only added - --kubelet-insecure-tls to the args of the Deployment. Then metrics-server worked!

unable to deploy local container image to k8s cluster

I have tried to deploy one of the local container images I created, but I keep getting the below error:
Failed to pull image "webrole1:dev": rpc error: code = Unknown desc =
Error response from daemon: pull access denied for webrole1,
repository does not exist or may require 'docker login': denied:
requested access to
I have followed the below article to containerize my application, and I was able to complete that successfully, but when I try to deploy it to a k8s pod, it doesn't succeed.
My pod.yaml looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: learnk8s
spec:
  containers:
  - name: webrole1dev
    image: 'webrole1:dev'
    ports:
    - containerPort: 8080
Below are some screenshots from my PowerShell session.
I am new to Docker and k8s, so thanks in advance for the help; I would appreciate a detailed response.
When you're working locally, you can use an image name like webrole1:dev; however, that doesn't tell Docker where the image came from (because it didn't come from anywhere, you built it locally). When you start working with multiple hosts, you need to push the image to a Docker registry. For local Kubernetes experiments you can also change your config so that you build your image in the same Docker environment Kubernetes is using, though the specifics of that depend on how you set up both Docker and Kubernetes.
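A minimal sketch of the registry route, assuming a Docker Hub account named myuser (a placeholder):
docker tag webrole1:dev myuser/webrole1:dev
docker login
docker push myuser/webrole1:dev
Then reference image: 'myuser/webrole1:dev' in the Pod spec so the kubelet on any node can pull it.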

Error when deploying kube-dns: No configuration has been provided

I have just installed a basic kubernetes cluster the manual way, to better understand the components, and to later automate this installation. I followed this guide: https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
The cluster is completely empty, without add-ons, after this. I've already deployed kubernetes-dashboard successfully; however, when trying to deploy kube-dns, it fails with the log:
2017-01-11T15:09:35.982973000Z F0111 15:09:35.978104 1 server.go:55]
Failed to create a kubernetes client:
invalid configuration: no configuration has been provided
I used the following yaml template for kube-dns without modification, only filling in the cluster IP:
https://coreos.com/kubernetes/docs/latest/deploy-addons.html
What did I do wrong?
Experimenting with kubedns arguments, I added --kube-master-url=http://mykubemaster.mydomain:8080 to the yaml file, and suddenly it reported in green.
How did this solve it? Was the container not aware of the master for some reason?
In my case, I had to put a numeric IP in "--kube-master-url=http://X.X.X.X:8080". It goes in the YAML file of the RC (ReplicationController), like this:
...
spec:
  containers:
  - name: kubedns
    ...
    args:
    # command = "/kube-dns"
    - --domain=cluster.local
    - --dns-port=10053
    - --kube-master-url=http://192.168.99.100:8080