kubectl does not work with remote cluster on Windows - kubernetes

I want to control my remote k8s cluster but I am having a problem with it.
I have a kubeconfig file from the k8s cluster admin, but when I try to connect to the cluster I get an error; when I issue the same GET from a browser, the result is OK.
kubectl version --kubeconfig ./.kube/config -v=12 --insecure-skip-tls-verify=true --alsologtostderr
I0211 18:13:42.625408 12960 loader.go:359] Config loaded from file ./.kube/config
...
I0211 18:13:54.691273 12960 helpers.go:216] Connection error: Get https://k8s-t-deponl-01.raiffeisen.ru:8443/version?timeout=32s: Tunnel Connection Failed
F0211 18:13:54.692219 12960 helpers.go:116] Unable to connect to the server: Tunnel Connection Failed
From the browser I get this answer:
{
  "major": "1",
  "minor": "11",
  "gitVersion": "v1.11.5",
  "gitCommit": "753b2dbc622f5cc417845f0ff8a77f539a4213ea",
  "gitTreeState": "clean",
  "buildDate": "2018-11-26T14:31:35Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Why do I have this problem?
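For context: kubectl is a Go client, and Go's HTTP transport reports "Tunnel Connection Failed" when an HTTP(S) proxy refuses the CONNECT request to the target host; a browser may be using different proxy rules, which would explain why it succeeds. A hedged first check on Windows (assuming a proxy is configured in your environment) is to inspect the proxy variables and retry with the API server host excluded:

rem show any proxy settings kubectl would pick up (prints "not defined" if unset)
set HTTP_PROXY
set HTTPS_PROXY
rem bypass the proxy for the API server host, for this session only
set NO_PROXY=k8s-t-deponl-01.raiffeisen.ru
kubectl version --kubeconfig ./.kube/config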

Related

Istio Pods Not Coming Up

I installed Istio 1.8.3 via the Rancher UI a long time back; the Istio pods and ingress gateway pods are up and running, and my application is being served by Istio.
Recently we upgraded the K8s cluster version from 1.21 to 1.22 and then from 1.22 to 1.23.
Previously, once we restarted the kubelet, the Istio pods would come up with no issues.
Now, because of a few issues, we rebooted the node and the Istio pods got restarted. They are in the Running state, but the readiness probe is failing.
The error I was able to find is:
failed to list CRDs: the server could not find the requested resource
Below are the full logs of the Istio pod.
stream logs failed container "discovery" in pod "istiod-5fbc9568cd-qgqkk" is waiting to start: ContainerCreating for istio-system/istiod-5fbc9568cd-qgqkk (discovery)
2022-06-27T05:35:32.772949Z info FLAG: --log_rotate_max_age="30"
2022-06-27T05:35:32.772952Z info FLAG: --log_rotate_max_backups="1000"
2022-06-27T05:35:32.772955Z info FLAG: --log_rotate_max_size="104857600"
2022-06-27T05:35:32.772958Z info FLAG: --log_stacktrace_level="default:none"
2022-06-27T05:35:32.772963Z info FLAG: --log_target="[stdout]"
2022-06-27T05:35:32.772971Z info FLAG: --mcpInitialConnWindowSize="1048576"
2022-06-27T05:35:32.772974Z info FLAG: --mcpInitialWindowSize="1048576"
2022-06-27T05:35:32.772977Z info FLAG: --mcpMaxMsgSize="4194304"
2022-06-27T05:35:32.772980Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2022-06-27T05:35:32.772982Z info FLAG: --monitoringAddr=":15014"
2022-06-27T05:35:32.772985Z info FLAG: --namespace="istio-system"
2022-06-27T05:35:32.772988Z info FLAG: --networksConfig="/etc/istio/config/meshNetworks"
2022-06-27T05:35:32.772999Z info FLAG: --plugins="[authn,authz,health]"
2022-06-27T05:35:32.773002Z info FLAG: --profile="true"
2022-06-27T05:35:32.773005Z info FLAG: --registries="[Kubernetes]"
2022-06-27T05:35:32.773008Z info FLAG: --resync="1m0s"
2022-06-27T05:35:32.773011Z info FLAG: --secureGRPCAddr=":15012"
2022-06-27T05:35:32.773013Z info FLAG: --tlsCertFile=""
2022-06-27T05:35:32.773016Z info FLAG: --tlsKeyFile=""
2022-06-27T05:35:32.773018Z info FLAG: --trust-domain=""
2022-06-27T05:35:32.801976Z info klog Config not found: /var/run/secrets/remote/config[]
2022-06-27T05:35:32.803516Z info initializing mesh configuration ./etc/istio/config/mesh
2022-06-27T05:35:32.804499Z info mesh configuration: {
  "proxyListenPort": 15001,
  "connectTimeout": "10s",
  "protocolDetectionTimeout": "0s",
  "ingressClass": "istio",
  "ingressService": "istio-ingressgateway",
  "ingressControllerMode": "STRICT",
  "enableTracing": true,
  "defaultConfig": {
    "configPath": "./etc/istio/proxy",
    "binaryPath": "/usr/local/bin/envoy",
    "serviceCluster": "istio-proxy",
    "drainDuration": "45s",
    "parentShutdownDuration": "60s",
    "discoveryAddress": "istiod.istio-system.svc:15012",
    "proxyAdminPort": 15000,
    "controlPlaneAuthPolicy": "MUTUAL_TLS",
    "statNameLength": 189,
    "concurrency": 2,
    "tracing": {
      "zipkin": {
        "address": "zipkin.istio-system:9411"
      }
    },
    "envoyAccessLogService": {},
    "envoyMetricsService": {},
    "proxyMetadata": {
      "DNS_AGENT": ""
    },
    "statusPort": 15020,
    "terminationDrainDuration": "5s"
  },
  "outboundTrafficPolicy": {
    "mode": "ALLOW_ANY"
  },
  "enableAutoMtls": true,
  "trustDomain": "cluster.local",
  "trustDomainAliases": [],
  "defaultServiceExportTo": [
    "*"
  ],
  "defaultVirtualServiceExportTo": [
    "*"
  ],
  "defaultDestinationRuleExportTo": [
    "*"
  ],
  "rootNamespace": "istio-system",
  "localityLbSetting": {
    "enabled": true
  },
  "dnsRefreshRate": "5s",
  "certificates": [],
  "thriftConfig": {},
  "serviceSettings": [],
  "enablePrometheusMerge": true
}
2022-06-27T05:35:32.804516Z info version: 1.8.3-e282a1f927086cc046b967f0171840e238a9aa8c-Clean
2022-06-27T05:35:32.804699Z info flags:
2022-06-27T05:35:32.804706Z info initializing mesh networks
2022-06-27T05:35:32.804877Z info mesh networks configuration: {
"networks": {
}
}
2022-06-27T05:35:32.804938Z info initializing mesh handlers
2022-06-27T05:35:32.804949Z info initializing controllers
2022-06-27T05:35:32.804952Z info No certificates specified, skipping K8S DNS certificate controller
2022-06-27T05:35:32.814002Z error kube failed to list CRDs: the server could not find the requested resource
2022-06-27T05:35:33.816596Z error kube failed to list CRDs: the server could not find the requested resource
2022-06-27T05:35:35.819157Z error kube failed to list CRDs: the server could not find the requested resource
2022-06-27T05:35:39.821510Z error kube failed to list CRDs: the server could not find the requested resource
2022-06-27T05:35:47.823675Z error kube failed to list CRDs: the server could not find the requested resource
2022-06-27T05:36:03.827023Z error kube failed to list CRDs: the server could not find the requested resource
2022-06-27T05:36:35.829441Z error kube failed to list CRDs: the server could not find the requested resource
2022-06-27T05:37:35.831758Z error kube failed to list CRDs: the server could not find the requested resource
Upgrading Istio Pilot and the Istio ingress gateway from 1.8.3 to 1.10.2 will work:
https://github.com/istio/istio/issues/34665
Istio 1.8.x is too old for Kubernetes 1.23: Kubernetes 1.22 removed the apiextensions.k8s.io/v1beta1 CRD API that Istio 1.8 still uses to list CRDs, which is why istiod logs "failed to list CRDs: the server could not find the requested resource". Refer to the Istio documentation for the supported Kubernetes/Istio version combinations and upgrade Istio.
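As a quick check on the upgraded cluster (standard kubectl, no assumptions beyond the versions stated above), you can confirm that only the v1 CRD API is served, which is exactly what an old istiod trips over:

kubectl api-versions | grep apiextensions
# prints only apiextensions.k8s.io/v1 on Kubernetes 1.22+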

Kubespray scale Ansible playbook cannot find /etc/kubernetes/admin.conf

I want to extend my Kubernetes cluster by one node.
So I run the scale.yaml Ansible playbook:
ansible-playbook -i inventory/local/hosts.ini --become --become-user=root scale.yml
But I get the following error message when the control plane certificates are uploaded:
TASK [Upload control plane certificates] ***************************************************************************************************************************************************
ok: [jay]
fatal: [sam]: FAILED! => {"changed": false, "cmd": ["/usr/local/bin/kubeadm", "init", "phase", "--config", "/etc/kubernetes/kubeadm-config.yaml", "upload-certs", "--upload-certs"], "delta": "0:00:00.039489", "end": "2022-01-08 11:31:37.708540", "msg": "non-zero return code", "rc": 1, "start": "2022-01-08 11:31:37.669051", "stderr": "error execution phase upload-certs: failed to load admin kubeconfig: open /etc/kubernetes/admin.conf: no such file or directory\nTo see the stack trace of this error execute with --v=5 or higher", "stderr_lines": ["error execution phase upload-certs: failed to load admin kubeconfig: open /etc/kubernetes/admin.conf: no such file or directory", "To see the stack trace of this error execute with --v=5 or higher"], "stdout": "", "stdout_lines": []}
Does anyone have an idea what the problem could be?
Thanks in advance.
I solved it myself.
I copied /etc/kubernetes/admin.conf and /etc/kubernetes/ssl/ca.* to the new node, and now the scale playbook works. Maybe this is not the right way, but it worked...
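For reference, a sketch of that copy, using the hostnames from the task output above (jay is an existing control-plane node, sam is the new node) and assuming root SSH access between them; run on jay:

# make sure the target directory exists on the new node
ssh root@sam mkdir -p /etc/kubernetes/ssl
scp /etc/kubernetes/admin.conf root@sam:/etc/kubernetes/admin.conf
scp /etc/kubernetes/ssl/ca.* root@sam:/etc/kubernetes/ssl/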

Cannot upgrade node using kubespray

I have a test Kubernetes on-premise cluster on CentOS 7.4. The current Kubernetes version is 1.10.4. I am trying to upgrade to 1.11.5 using kubespray.
The command is:
ansible-playbook upgrade-cluster.yml -b -i inventory/k8s-test/hosts.ini -e kube_version=v1.11.5
Masters are upgraded successfully, but nodes are not.
The error is:
fatal: [kubernodetst1]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/kubeadm", "join", "--config", "/etc/kubernetes/kubeadm-client.conf", "--ignore-preflight-errors=all", "--discovery-token-unsafe-skip-ca-verification"], "delta": "0:00:00.040038", "end": "2018-12-13 15:55:56.162387", "msg": "non-zero return code", "rc": 3, "start": "2018-12-13 15:55:56.122349", "stderr": "discovery: Invalid value: \"\": using token-based discovery without discoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue", "stderr_lines": ["discovery: Invalid value: \"\": using token-based discovery without discoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue"], "stdout": "", "stdout_lines": []}
You have an incorrect CA for the nodes; regenerate the certificates and try again.
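If you would rather pin the CA than skip verification, the expected value for discoveryTokenCACertHashes can be computed on a master with the standard kubeadm recipe (the sed just strips the prefix from the openssl digest output):

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'

Put the result into /etc/kubernetes/kubeadm-client.conf as sha256:<hash>.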

Unable to fully collect metrics, when installing metric-server

I have installed the metrics server on Kubernetes, but it's not working and logs:
unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:xxx: unable to fetch metrics from Kubelet ... (X.X): Get https:....: x509: cannot validate certificate for 1x.x.
x509: certificate signed by unknown authority
I was able to get metrics after modifying the deployment YAML and adding:
command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
This now collects metrics, and kubectl top node returns results...
but the logs still show:
E1120 11:58:45.624974 1 reststorage.go:144] unable to fetch pod metrics for pod dev/pod-6bffbb9769-6z6qz: no metrics known for pod
E1120 11:58:45.625289 1 reststorage.go:144] unable to fetch pod metrics for pod dev/pod-6bffbb9769-rzvfj: no metrics known for pod
E1120 12:00:06.462505 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-1x.x.x.eu-west-1.compute.internal: unable to get CPU for container ...discarding data: missing cpu usage metric, unable to fully scrape metrics from source
So, my questions:
1) All this works on minikube but not on my dev cluster. Why would that be?
2) In production I don't want to use insecure TLS, so can someone please explain why this issue arises, or point me to some resource?
Kubeadm generates the kubelet certificate at /var/lib/kubelet/pki, and those certificates (kubelet.crt and kubelet.key) are signed by a different CA from the one used to generate all the other certificates at /etc/kubernetes/pki.
You need to regenerate the kubelet certificates so that they are signed by your root CA (/etc/kubernetes/pki/ca.crt).
You can use openssl or cfssl to generate the new certificates (I am using cfssl):
$ mkdir certs; cd certs
$ cp /etc/kubernetes/pki/ca.crt ca.pem
$ cp /etc/kubernetes/pki/ca.key ca-key.pem
Create a file kubelet-csr.json:
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "<node_name>",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "US",
    "ST": "NY",
    "L": "City",
    "O": "Org",
    "OU": "Unit"
  }]
}
Create a ca-config.json file:
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
Now generate the new certificates using the above files:
$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
--config=ca-config.json -profile=kubernetes \
kubelet-csr.json | cfssljson -bare kubelet
Replace the old certificates with the newly generated ones:
$ scp kubelet.pem <nodeip>:/var/lib/kubelet/pki/kubelet.crt
$ scp kubelet-key.pem <nodeip>:/var/lib/kubelet/pki/kubelet.key
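Before restarting, an optional sanity check (an extra step, not part of the original procedure) confirms the new certificate chains to the root CA:

# should print "kubelet.pem: OK"
$ openssl verify -CAfile ca.pem kubelet.pem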
Now restart the kubelet so that the new certificates take effect on your node.
$ systemctl restart kubelet
Look at the following ticket to get the context of the issue:
https://github.com/kubernetes-incubator/metrics-server/issues/146
Hope this helps.

Sensu Client status

I am trying to see why my Sensu client does not connect to my Sensu server.
How can I see the status of the client and whether it tried, succeeded, or failed in connecting to the server?
I have installed the Sensu server on CentOS using Docker. I can connect to it, the RabbitMQ, and the Uchiwa panel from my host.
I have installed the Sensu client on a Windows host.
I have added the following configs:
C:\etc\sensu\conf.d\client.json
{
  "client": {
    "name": "DanWindows",
    "address": " 192.168.59.3",
    "subscriptions": [ "all" ]
  }
}
C:\etc\sensu\config.json
{
  "rabbitmq": {
    "host": "192.168.59.103",
    "port": 5671,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "password",
    "ssl": {
      "cert_chain_file": "C:/etc/sensu/ssl/cert.pem",
      "private_key_file": "C:/etc/sensu/ssl/key.pem"
    }
  }
}
I have installed and started the Sensu client service using the following command:
sc create sensu-client binPath= C:\Tools\sensu\bin\sensu-client.exe DisplayName= "Sensu Client"
On the Uchiwa panel I do not see any clients.
The "sensu-client.err.log" and "sensu-client.out.log" are empty, while "sensu-client.wrapper.log" contains this:
2015-01-16 13:41:51 - Starting C:\Tools\sensu\embedded\bin\ruby C:\Tools\sensu\embedded\bin\sensu-client -d C:\etc\sensu\conf.d -l C:\Tools\sensu\sensu-client.log
2015-01-16 13:41:51 - Started 3800
How can I see the status of the Windows client and whether it tried, succeeded, or failed in connecting to the server?
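Based on the wrapper log above, the client was started with -l C:\Tools\sensu\sensu-client.log, so connection attempts (including RabbitMQ/TLS errors) should be written to that file. A quick way to check both the service state and that log from cmd (standard Windows tooling, nothing Sensu-specific):

rem state of the service created above
sc query sensu-client
rem dump the client log
type C:\Tools\sensu\sensu-client.log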
A question on the Docker image: is this one you built yourself? I recently built my own as well, only using Ubuntu instead of CentOS.
Recent versions of Sensu require the following two files in the /etc/sensu/conf.d directory (a sketch of each follows below):
/etc/sensu/conf.d/rabbitmq.json
/etc/sensu/conf.d/client.json
The client.json file will have contents similar to this:
{ "client": {
"name": "my-sensu-client",
"address": "192.168.x.x",
"subscriptions": [ "ALL" ] }
}
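For the rabbitmq.json, a hedged sketch that simply moves the rabbitmq block out of the question's config.json (the values are the asker's own; adjust host and credentials as needed):

{
  "rabbitmq": {
    "host": "192.168.59.103",
    "port": 5671,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "password",
    "ssl": {
      "cert_chain_file": "C:/etc/sensu/ssl/cert.pem",
      "private_key_file": "C:/etc/sensu/ssl/key.pem"
    }
  }
}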
The only place I have heard of needing a config.json file is on the sensu-server, but I have only recently been looking at Sensu, so this may be an older Sensu requirement.