Minikube Service URL not working | Windows 11 [duplicate] - kubernetes

I'm new to Kubernetes. I successfully created a deployment with 2 replicas of my Angular frontend application, but when I expose it with a service and try to access the service with 'minikube service service-name', the browser can't show me the application.
This is my Dockerfile:
FROM registry.gitlab.informatica.aci.it/ccsc/images/nodejs/10_15
LABEL maintainer="d.vaccaro@informatica.aci.it" name="assistenza-fo" version="v1.0.0" license=""
WORKDIR /usr/src/app
ARG PRODUCTION_MODE="false"
ENV NODE_ENV='development'
ENV HTTP_PORT=4200
COPY package*.json ./
RUN if [ "${PRODUCTION_MODE}" = "true" ] || [ "${PRODUCTION_MODE}" = "1" ]; then \
echo "Build di produzione"; \
npm ci --production ; \
else \
echo "Build di sviluppo"; \
npm ci ; \
fi
RUN npm audit fix
RUN npm install -g @angular/cli
COPY dockerize /usr/local/bin
RUN chmod +x /usr/local/bin/dockerize
COPY . .
EXPOSE 4200
CMD ng serve --host 0.0.0.0
pod description
Name: assistenza-fo-674f85c547-bzf8g
Namespace: default
Priority: 0
Node: minikube/172.17.0.2
Start Time: Sun, 19 Apr 2020 12:41:06 +0200
Labels: pod-template-hash=674f85c547
run=assistenza-fo
Annotations: <none>
Status: Running
IP: 172.18.0.6
Controlled By: ReplicaSet/assistenza-fo-674f85c547
Containers:
assistenza-fo:
Container ID: docker://ef2bfb66d22dea56b2dc0e49e875376bf1edff369274015445806451582703a0
Image: registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo:latest
Image ID: docker-pullable://registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo@sha256:8d02a3e69d6798c1ac88815ef785e05aba6e394eb21f806bbc25fb761cca5a98
Port: 4200/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 19 Apr 2020 12:41:08 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zdrwg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-zdrwg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zdrwg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
my deployment description
Name: assistenza-fo
Namespace: default
CreationTimestamp: Sun, 19 Apr 2020 12:41:06 +0200
Labels: run=assistenza-fo
Annotations: deployment.kubernetes.io/revision: 1
Selector: run=assistenza-fo
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: run=assistenza-fo
Containers:
assistenza-fo:
Image: registry.gitlab.informatica.aci.it/apra/sta-r/assistenza/assistenza-fo:latest
Port: 4200/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: assistenza-fo-674f85c547 (2/2 replicas created)
Events: <none>
and my service description
Name: assistenza-fo
Namespace: default
Labels: run=assistenza-fo
Annotations: <none>
Selector: run=assistenza-fo
Type: LoadBalancer
IP: 10.97.3.206
Port: <unset> 4200/TCP
TargetPort: 4200/TCP
NodePort: <unset> 30375/TCP
Endpoints: 172.18.0.6:4200,172.18.0.7:4200
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
When I run the command
minikube service assistenza-fo
I get the following output:
|-----------|---------------|-------------|-------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|---------------|-------------|-------------------------|
| default | assistenza-fo | 4200 | http://172.17.0.2:30375 |
|-----------|---------------|-------------|-------------------------|
* Opening service default/assistenza-fo in default browser...
but Chrome reports "unable to reach the site" after a timeout.
Thank you
EDIT
I created the service again, this time as a NodePort service. It is still not working. This is the service description:
Name: assistenza-fo
Namespace: default
Labels: run=assistenza-fo
Annotations: <none>
Selector: run=assistenza-fo
Type: NodePort
IP: 10.107.46.43
Port: <unset> 4200/TCP
TargetPort: 4200/TCP
NodePort: <unset> 30649/TCP
Endpoints: 172.18.0.7:4200,172.18.0.8:4200
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

I was able to reproduce your issue.
It's actually a bug in the latest version of Minikube for Windows when running with the Docker driver: --driver=docker
You can see it here: Issue - minikube service not working with Docker driver on Windows 10 Pro #7644
It was patched with the merge: Pull - docker driver: Add Service & Tunnel features to windows
The fix is available now in Minikube v1.10.0-beta.0.
In order to make it work, download the beta version from the website:
https://github.com/kubernetes/minikube/releases/download/v1.10.0-beta.0/minikube-windows-amd64.exe
move it to your working folder and rename it to minikube.exe
C:\Kubernetes>rename minikube-windows-amd64.exe minikube.exe
C:\Kubernetes>dir
22/04/2020 21:10 <DIR> .
22/04/2020 21:10 <DIR> ..
22/04/2020 21:04 55.480.832 minikube.exe
22/04/2020 20:05 489 nginx.yaml
2 File(s) 55.481.321 bytes
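You can confirm you are running the new binary before starting anything (the version line is what matters; other lines of the output may differ):
C:\Kubernetes>minikube.exe version
minikube version: v1.10.0-beta.0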
If you haven't yet, stop and uninstall the older version, then start Minikube with the new binary:
C:\Kubernetes>minikube.exe start --driver=docker
* minikube v1.10.0-beta.0 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
- kubeadm.pod-network-cidr=10.244.0.0/16
* Enabled addons: dashboard, default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
C:\Kubernetes>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-76df748b9-t6q59 1/1 Running 1 78m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 85m
service/nginx-svc NodePort 10.100.212.15 <none> 80:31027/TCP 78m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 78m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-76df748b9 1 1 1 78m
Minikube is now running on version v1.10.0-beta.0, so you can run the service as intended (note that the terminal will stay busy while the command runs, because it keeps the tunnel open).
The browser will open automatically and your service will be available.
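For reference, with the Docker driver the command keeps a tunnel open and the output looks roughly like this (the nginx-svc name comes from the listing above; the 127.0.0.1 port is assigned randomly, so yours will differ):
C:\Kubernetes>minikube.exe service nginx-svc
|-----------|-----------|-------------|-------------------------|
| NAMESPACE |   NAME    | TARGET PORT |           URL           |
|-----------|-----------|-------------|-------------------------|
| default   | nginx-svc |          80 | http://172.17.0.2:31027 |
|-----------|-----------|-------------|-------------------------|
* Starting tunnel for service nginx-svc.
|-----------|-----------|-------------|------------------------|
| NAMESPACE |   NAME    | TARGET PORT |          URL           |
|-----------|-----------|-------------|------------------------|
| default   | nginx-svc |             | http://127.0.0.1:53783 |
|-----------|-----------|-------------|------------------------|
* Opening service default/nginx-svc in default browser...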
If you have any doubts let me know in the comments.

Related

All pod staying in pending mode with Events None in a K3S ARM64 + AMD64 cluster

I found many issues about this on StackOverflow; most are unanswered or over-complicated.
I shrank my issue down to a simple "Hello world" test in a brand-new empty cluster.
I have a K3s cluster, the master is an online bare-metal AMD64 server, and the nodes are local PI400 ARM64 Debian hosts.
I'm trying to deploy
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: nginxdemos/hello
Then all my pods stay in the Pending state:
kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-world-qsv2d 0/1 Pending 0 7m53s
hello-world-6rn5d 0/1 Pending 0 7m53s
a description of one of my pods gives me:
kubectl describe pod hello-world-6rn5d
Name: hello-world-6rn5d
Namespace: default
Priority: 0
Node: <none>
Labels: app=hello-world
controller-revision-hash=649569d94c
pod-template-generation=1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: DaemonSet/hello-world
Containers:
hello-world:
Image: hello-world
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8bh8p (ro)
Volumes:
kube-api-access-8bh8p:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events: <none>
I have already used these same nodes in a local ARM64 cluster, and they were working fine.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
pi417 Ready <none> 11h v1.24.4+k3s1
pi400 Ready <none> 11h v1.24.4+k3s1
kubectl version --output=yaml
clientVersion:
buildDate: "2022-06-15T14:22:29Z"
compiler: gc
gitCommit: f66044f4361b9f1f96f0053dd46cb7dce5e990a8
gitTreeState: clean
gitVersion: v1.24.2
goVersion: go1.18.3
major: "1"
minor: "24"
platform: windows/amd64
kustomizeVersion: v4.5.4
serverVersion:
buildDate: "2022-08-25T03:45:26Z"
compiler: gc
gitCommit: c3f830e9b9ed8a4d9d0e2aa663b4591b923a296e
gitTreeState: clean
gitVersion: v1.24.4+k3s1
goVersion: go1.18.1
major: "1"
minor: "24"
platform: linux/amd64
a node description:
kubectl.exe describe node pi400
Name: pi400
Roles: <none>
Labels: adb=true
beta.kubernetes.io/arch=arm64
beta.kubernetes.io/instance-type=k3s
beta.kubernetes.io/os=linux
egress.k3s.io/cluster=true
kubernetes.io/arch=arm64
kubernetes.io/hostname=pi400
kubernetes.io/os=linux
node.kubernetes.io/instance-type=k3s
Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"26:a8:bd:f3:1d:fd"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.3.25
k3s.io/hostname: pi400
k3s.io/internal-ip: 192.168.3.25
k3s.io/node-args: ["agent"]
k3s.io/node-config-hash: CBEQF3QV5PMMQWO2GECMRPJVEIFSCEFARQFZKX4RNV4K5FPB7FGQ====
k3s.io/node-env:
{"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/8...2a","K3S_NODE_NAME":"pi400" ...}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 12 Sep 2022 20:44:50 +0300
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pi400
AcquireTime: <unset>
RenewTime: Tue, 13 Sep 2022 08:53:29 +0300
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 13 Sep 2022 08:51:08 +0300 Mon, 12 Sep 2022 21:33:41 +0300 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 13 Sep 2022 08:51:08 +0300 Mon, 12 Sep 2022 21:33:41 +0300 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 13 Sep 2022 08:51:08 +0300 Mon, 12 Sep 2022 21:33:41 +0300 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 13 Sep 2022 08:51:08 +0300 Mon, 12 Sep 2022 21:33:41 +0300 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.3.25
Hostname: pi400
Capacity:
cpu: 4
ephemeral-storage: 30473608Ki
memory: 3885428Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 29644725840
memory: 3885428Ki
pods: 110
System Info:
Machine ID: d2eb1415b12e45ebac766cc20ce58012
System UUID: d2eb1415b12e45ebac766cc20ce58012
Boot ID: c2531ffa-96b0-4463-9f51-08e0dce6d5c3
Kernel Version: 5.15.61-v8+
OS Image: Debian GNU/Linux 11 (bullseye)
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.6.6-k3s1
Kubelet Version: v1.24.4+k3s1
Kube-Proxy Version: v1.24.4+k3s1
PodCIDR: 10.42.1.0/24
PodCIDRs: 10.42.1.0/24
ProviderID: k3s://pi400
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
The issue may be due to:
the mixing of AMD64 / ARM64
a connection issue between nodes
K3s
I do not know how to get more information about the situation, and I do not have another AMD64 node for now for more tests.
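Some commands that can help gather more information here (the unit names assume a default k3s install: k3s on the server, k3s-agent on the agents):
# recent cluster events, sorted by time
kubectl get events --sort-by=.metadata.creationTimestamp
# taints and conditions on every node
kubectl describe nodes | grep -E -A5 "Taints|Conditions"
# k3s logs on the master
sudo journalctl -u k3s -f
# k3s logs on an agent (pi400 / pi417)
sudo journalctl -u k3s-agent -f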
After more tests:
I had the same issue when all nodes used the same arch.
The issue was due to an incompatibility with the distro on my master.
I could have spotted the issue just after the master k3s setup.
Once the setup is done on the server, run kubectl get nodes: on a K3s setup the master must be visible as a node. If that's not the case, do not try to add any more nodes.
So here is my k3s setup:
STEP 1 preconfigure your k3s
mkdir -p /etc/rancher/k3s/
nano /etc/rancher/k3s/config.yaml
Add all options that are not available via environment variables to config.yaml before starting the setup script.
The content may look like:
write-kubeconfig-mode: "0644"
tls-san:
- "1.2.3.4"
STEP 2 start the master node setup
curl -sfL https://get.k3s.io | sh -
STEP 3 check that the master node is live
kubectl get nodes
If you do not see the master node, start investigating (kernel options, and more).
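On a healthy install the master shows up right away; the output should look something like this (hostname and version here are just examples):
kubectl get nodes
NAME        STATUS   ROLES                  AGE   VERSION
master-01   Ready    control-plane,master   2m    v1.24.4+k3s1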
STEP 4 get your credentials
Get the credentials for remote access to your k3s from the file /etc/rancher/k3s/k3s.yaml, and change the clusters.cluster.server IP from 127.0.0.1 to a valid remote IP. Paste the new config file into ~/.kube/config.
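The edited part of the copied file would look something like this (1.2.3.4 standing in for the server's remote IP; the rest of k3s.yaml stays unchanged):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <unchanged>
    server: https://1.2.3.4:6443   # was https://127.0.0.1:6443
  name: default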
STEP 5 try to connect with kubectl
kubectl get node
STEP 6 add nodes
Start adding your nodes using your token from /var/lib/rancher/k3s/server/node-token
# FOR SLAVE customise hostname
# export K3S_NODE_NAME=pi417
export K3S_TOKEN=<token from /var/lib/rancher/k3s/server/node-token>
export K3S_URL=https://<remote-ip>:6443
curl -sfL https://get.k3s.io | sh -
STEP 7 if the setup gets stuck
CTRL+C
sudo systemctl restart kubepods.slice kubepods-besteffort.slice
or reboot
STEP 8 start a hello world on all nodes to check that everything is OK
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: nginxdemos/hello

K8s tutorial fails on my local installation with i/o timeout

I'm working on a local Kubernetes installation with three nodes. They are installed via the geerlingguy/kubernetes Ansible role (with default settings). I've recreated the whole VMs multiple times. I'm following the Kubernetes tutorial at https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-interactive/ to get services up and running inside the cluster and then try to reach them.
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
enceladus Ready <none> 162m v1.17.9
mimas Ready <none> 162m v1.17.9
titan Ready master 162m v1.17.9
I tried it with 1.17.9 and 1.18.6, I tried it with https://github.com/geerlingguy/ansible-role-kubernetes and https://github.com/kubernetes-sigs/kubespray on fresh Debian Buster VMs, and I tried it with the Flannel and Calico network plugins. There is no firewall configured.
I can deploy the kubernetes-bootcamp and exec into it, but when I try to reach the pod via kubectl proxy and curl I'm getting an error.
# kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
# kubectl describe pods
Name: kubernetes-bootcamp-69fbc6f4cf-nq4tj
Namespace: default
Priority: 0
Node: enceladus/192.168.10.12
Start Time: Thu, 06 Aug 2020 10:53:34 +0200
Labels: app=kubernetes-bootcamp
pod-template-hash=69fbc6f4cf
Annotations: <none>
Status: Running
IP: 10.244.1.4
IPs:
IP: 10.244.1.4
Controlled By: ReplicaSet/kubernetes-bootcamp-69fbc6f4cf
Containers:
kubernetes-bootcamp:
Container ID: docker://77eae93ca1e6b574ef7b0623844374a5b2f3054075025492b708b23fc3474a45
Image: gcr.io/google-samples/kubernetes-bootcamp:v1
Image ID: docker-pullable://gcr.io/google-samples/kubernetes-bootcamp@sha256:0d6b8ee63bb57c5f5b6156f446b3bc3b3c143d233037f3a2f00e279c8fcc64af
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 06 Aug 2020 10:53:35 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kkcvk (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-kkcvk:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kkcvk
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10s default-scheduler Successfully assigned default/kubernetes-bootcamp-69fbc6f4cf-nq4tj to enceladus
Normal Pulled 9s kubelet, enceladus Container image "gcr.io/google-samples/kubernetes-bootcamp:v1" already present on machine
Normal Created 9s kubelet, enceladus Created container kubernetes-bootcamp
Normal Started 9s kubelet, enceladus Started container kubernetes-bootcamp
Update: service list
# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d20h
I can exec curl inside the deployment. It is running.
# kubectl exec -ti kubernetes-bootcamp-69fbc6f4cf-nq4tj curl http://localhost:8080/
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-69fbc6f4cf-nq4tj | v=1
But when I try to curl from the master node, the response is not good:
curl http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-nq4tj/proxy/
Error trying to reach service: 'dial tcp 10.244.1.4:80: i/o timeout'
The curl itself needs about 30 seconds to return. The version endpoint etc. is available, so the proxy itself is running fine.
# curl http://localhost:8001/version
{
"major": "1",
"minor": "17",
"gitVersion": "v1.17.9",
"gitCommit": "4fb7ed12476d57b8437ada90b4f93b17ffaeed99",
"gitTreeState": "clean",
"buildDate": "2020-07-15T16:10:45Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/amd64"
}
The tutorial's kubectl describe pods output shows that the container has open ports (in my case it's <none>):
Port: 8080/TCP
Host Port: 0/TCP
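As a side note, the API proxy path also accepts an explicit port, so a request along these lines should target the container's 8080 even when the pod spec declares no port (pod name taken from the describe output above):
curl http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-69fbc6f4cf-nq4tj:8080/proxy/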
OK, I then created an apply file bootcamp.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-bootcamp
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp
    spec:
      containers:
      - name: kubernetes-bootcamp
        image: gcr.io/google-samples/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080
          protocol: TCP
I removed the previous deployment
# kubectl delete deployments.apps kubernetes-bootcamp --force
# kubectl apply -f bootcamp.yaml
But after that I'm still getting the same i/o timeout on the new deployment.
So, what is my problem?

microk8s clusterIP service appear to be doing source NAT when the documentation says it should not

I am having a networking issue in Kubernetes.
I am trying to preserve the source IP of incoming requests to a clusterIP service, but I find that the requests appear to be source NAT'd. That is, they carry the IP address of the node as the source IP rather than the IP of the pod making the request. I am following the example for cluster IPs here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-clusterip
but I find that the behavior of Kubernetes is totally different for me.
The above example has me deploy an echo server which reports the source IP. This is deployed behind a clusterIP service which I request from a separate pod running busybox. The response from the echo server is below:
CLIENT VALUES:
client_address=10.1.36.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://10.152.183.99:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
connection=close
host=10.152.183.99
user-agent=Wget
BODY
The source IP 10.1.36.1 belongs to the node. I expected to see the address of busybox which is 10.1.36.168.
Does anyone know why SNAT would be enabled for a clusterIP? It's really strange to me that this directly contradicts the official documentation.
All of this is running on the same node. kube-proxy is running in iptables mode. I am using microk8s.
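For reference, the deployment and service follow the tutorial steps, roughly these (the cluster IP in the last command is whatever kubectl get svc clusterip reports):
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
kubectl run busybox -it --image=busybox --restart=Never --rm -- wget -qO - <cluster-ip>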
My microk8s version:
Client:
Version: v1.2.5
Revision: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
Server:
Version: v1.2.5
Revision: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
Output of kubectl describe service clusterip:
Name: clusterip
Namespace: default
Labels: app=source-ip-app
Annotations: <none>
Selector: app=source-ip-app
Type: ClusterIP
IP: 10.152.183.106
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.1.36.225:8080
Session Affinity: None
Events: <none>
Output of kubectl describe pod source-ip-app-7c79c78698-xgd5w:
Name: source-ip-app-7c79c78698-xgd5w
Namespace: default
Priority: 0
Node: riley-virtualbox/10.0.2.15
Start Time: Wed, 12 Feb 2020 09:19:18 -0600
Labels: app=source-ip-app
pod-template-hash=7c79c78698
Annotations: <none>
Status: Running
IP: 10.1.36.225
IPs:
IP: 10.1.36.225
Controlled By: ReplicaSet/source-ip-app-7c79c78698
Containers:
echoserver:
Container ID: containerd://6775c010145d3951d067e3bb062bea9b70d305f96f84aa870963a8b385a4a118
Image: k8s.gcr.io/echoserver:1.4
Image ID: sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 12 Feb 2020 09:19:23 -0600
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7pszf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-7pszf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7pszf
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/source-ip-app-7c79c78698-xgd5w to riley-virtualbox
Normal Pulled 2m58s kubelet, riley-virtualbox Container image "k8s.gcr.io/echoserver:1.4" already present on machine
Normal Created 2m55s kubelet, riley-virtualbox Created container echoserver
Normal Started 2m54s kubelet, riley-virtualbox Started container echoserver

Kubernetes pod Troubleshoot

I deployed my container in a Kubernetes pod, and the pod and related services are up and running.
Please find below the pod, service, and deployment status:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-5d5fbf967c-zvzvl 1/1 Running 0 70m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.16.68 <none> 80:31000/TCP 79m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
angular-deployment 1/1 1 1 70m
Please find the curl access below:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.0.0.1:31000
curl: (7) Failed to connect to 10.0.0.1 port 31000: Connection refused
command terminated with exit code 7
I am also not able to access my application service in the browser, like below:
https://10.0.0.1:31000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
spec:
  selector:
    matchLabels:
      app: frontend-app
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: frontend-app
        image: ${IMAGE_NAME}:${IMAGE_TAG}
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: angular-service
spec:
  selector:
    app: frontend-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
root@jenkins-linux-vm:/home/admin# kubectl describe pod angular-deployment-556c47f666-9d2x4
Name: angular-deployment-556c47f666-9d2x4
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.2
Start Time: Sat, 18 Jan 2020 08:47:35 +0000
Labels: app=frontend-app
pod-template-hash=556c47f666
Annotations: <none>
Status: Running
IP: 10.32.0.8
IPs:
IP: 10.32.0.8
Controlled By: ReplicaSet/angular-deployment-556c47f666
Containers:
frontend-app:
Container ID: docker://43fea22e4c1d49e0c94fc8aca3a4b41df44b5f91f45ea29ede263c5a6bcf6503
Image: frontend-app:future-master-fix-f2d2a8bd
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 18 Jan 2020 08:47:40 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned pre-release/angular-deployment-556c47f666-9d2x4 to poc-worker2
Normal Pulled 17s kubelet, poc-worker2 Container image "frontend-app:future-master-fix-f2d2a8bd" already present on machine
Normal Created 16s kubelet, poc-worker2 Created container frontend-app
Normal Started 16s kubelet, poc-worker2 Started container frontend-app
Please find below the svc for the deployment:
root@jenkins-linux-vm:/home/admin# kubectl describe svc angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.227.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.4:80,10.32.0.8:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Please find the Dockerfile here:
FROM node:12.2.0
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
# add app
COPY . /app
# start app
CMD ng serve --host 0.0.0.0
Can someone please help me fix this issue?
Here I can see the issue is with your Angular Dockerfile: you are using ng serve. If you look at the dependencies in package.json you will see "@angular/cli": "*"; in order to have that inside your Docker image you need to add RUN npm install, which will install all your dependencies inside the container, and then you can do ng serve. But ng serve is meant for local development, so it's not a good approach, I would say.
To identify these kinds of issues, it's advisable to run the image locally (see the docker run example at the end of this answer) to check whether your container works before you deploy it to the k8s cluster; as you know, Kubernetes is a very big universe and it takes time to identify the actual problem.
OK, coming to the issue (I could simply add a single command to your Dockerfile and leave it at that, but I wouldn't suggest that approach, so I'm adding the complete answer): when you are deploying a frontend application, your Docker image needs to be able to serve the index.html page, which is the end product after you build your Angular or React application.
There are several ways this can be done, and there are several tutorials explaining the same. Here is what I would suggest; your Dockerfile should look like this (the comments explain what each step does):
#Stage 0: builder, based on the Node alpine image, to build and compile your Angular code
FROM node:10-alpine as builder
WORKDIR /app
COPY package*.json /app/
# This is one thing you forgot in your dockerfile, if you add this it might work
RUN npm install
COPY . .
# This is normal build, it does ng build
RUN npm run build
#Stage 1, based on the Nginx image, to have only the compiled app inside the nginx folders to serve
FROM nginx:1.15
COPY --from=builder /app/dist/ /usr/share/nginx/html
# This one copies the local nginx.conf file as default.conf for nginx to let it serve
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Please make sure you have an nginx.conf file inside your code, at the same level as package.json:
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;
    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;
    root /usr/share/nginx/html;
    location / {
        #try_files $uri $uri/ /index.html =404;
        expires -1d;
        alias /usr/share/nginx/html/;
        try_files $uri$args $uri$args/ /index.html =404;
        location ~* \.(?:ico|css|js|gif|jpe?g|png|svg|woff|woff2|ttf|eot)$ {
            add_header Access-Control-Allow-Origin *;
        }
    }
}
Make sure you run docker run on your local machine before you deploy it onto the cluster.
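A quick local check could look like this (the image tag and host port here are arbitrary):
docker build -t frontend-app:local-test .
docker run --rm -p 8080:80 frontend-app:local-test
# in a second terminal
curl http://localhost:8080/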
Hope this helps.
You are accessing the pod via nodeip:nodeport, which will only work if you are accessing the pod from outside the cluster, e.g. using a browser.
Here is a guide on how to expose an application via NodePort. In this case the nodes of your Kubernetes cluster need to be accessible, i.e. they should have a public IP.
You should be using the cluster IP to access the pod from within the cluster, i.e. via exec from another pod, as shown in the command below.
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.96.16.68:80
I have a feeling that your docker container is not listening on port 80.
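One way to verify that (curl is available in the image, as used above) could be to exec into the pod and hit port 80 directly:
# "connection refused" here means nothing is listening on 80 inside the container
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl -- curl -v http://localhost:80/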

Jenkins app is not accessible outside Kubernetes cluster

On CentOS 7.4, I have set up a Kubernetes master node, pulled down the Jenkins image and deployed it to the cluster, defining the Jenkins service on a NodePort as below.
I can curl the jenkins app from the worker or master nodes using the IP defined by the service. But, I can not access the Jenkins app (dashboard) from my browser (outside cluster) using the public IP of the master node.
[administrator@abcdefgh ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
abcdefgh Ready master 19h v1.13.1
hgfedcba Ready <none> 19h v1.13.1
[administrator@abcdefgh ~]$ sudo docker pull jenkinsci/jenkins:2.154-alpine
[administrator@abcdefgh ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.13.1 fdb321fd30a0 5 days ago 80.2MB
k8s.gcr.io/kube-controller-manager v1.13.1 26e6f1db2a52 5 days ago 146MB
k8s.gcr.io/kube-apiserver v1.13.1 40a63db91ef8 5 days ago 181MB
k8s.gcr.io/kube-scheduler v1.13.1 ab81d7360408 5 days ago 79.6MB
jenkinsci/jenkins 2.154-alpine aa25058d8320 2 weeks ago 222MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 6 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 10 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[administrator@abcdefgh ~]$ ls -l
total 8
-rw------- 1 administrator administrator 678 Dec 18 06:12 jenkins-deployment.yaml
-rw------- 1 administrator administrator 410 Dec 18 06:11 jenkins-service.yaml
[administrator@abcdefgh ~]$ cat jenkins-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-ui
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: ui
  selector:
    app: jenkins-master
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-discovery
spec:
  selector:
    app: jenkins-master
  ports:
  - protocol: TCP
    port: 50000
    targetPort: 50000
    name: jenkins-slaves
[administrator@abcdefgh ~]$ cat jenkins-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - image: jenkins/jenkins:2.154-alpine
        name: jenkins
        ports:
        - containerPort: 8080
          name: http-port
        - containerPort: 50000
          name: jnlp-port
        env:
        - name: JAVA_OPTS
          value: -Djenkins.install.runSetupWizard=false
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
      volumes:
      - name: jenkins-home
        emptyDir: {}
[administrator@abcdefgh ~]$ kubectl create -f jenkins-service.yaml
service/jenkins-ui created
service/jenkins-discovery created
[administrator@abcdefgh ~]$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-discovery ClusterIP 10.98.--.-- <none> 50000/TCP 19h
jenkins-ui NodePort 10.97.--.-- <none> 8080:31587/TCP 19h
kubernetes ClusterIP 10.96.--.-- <none> 443/TCP 20h
[administrator@abcdefgh ~]$ kubectl create -f jenkins-deployment.yaml
deployment.extensions/jenkins created
[administrator@abcdefgh ~]$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
jenkins 1/1 1 1 19h
[administrator@abcdefgh ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default jenkins-6497cf9dd4-f9r5b 1/1 Running 0 19h
kube-system coredns-86c58d9df4-jfq5b 1/1 Running 0 20h
kube-system coredns-86c58d9df4-s4k6d 1/1 Running 0 20h
kube-system etcd-abcdefgh 1/1 Running 1 20h
kube-system kube-apiserver-abcdefgh 1/1 Running 1 20h
kube-system kube-controller-manager-abcdefgh 1/1 Running 5 20h
kube-system kube-flannel-ds-amd64-2w68w 1/1 Running 1 20h
kube-system kube-flannel-ds-amd64-6zl4g 1/1 Running 1 20h
kube-system kube-proxy-9r4xt 1/1 Running 1 20h
kube-system kube-proxy-s7fj2 1/1 Running 1 20h
kube-system kube-scheduler-abcdefgh 1/1 Running 8 20h
[administrator@abcdefgh ~]$ kubectl describe pod jenkins-6497cf9dd4-f9r5b
Name: jenkins-6497cf9dd4-f9r5b
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: hgfedcba/10.41.--.--
Start Time: Tue, 18 Dec 2018 06:32:50 -0800
Labels: app=jenkins-master
pod-template-hash=6497cf9dd4
Annotations: <none>
Status: Running
IP: 10.244.--.--
Controlled By: ReplicaSet/jenkins-6497cf9dd4
Containers:
jenkins:
Container ID: docker://55912512a7aa1f782784690b558d74001157f242a164288577a85901ecb5d152
Image: jenkins/jenkins:2.154-alpine
Image ID: docker-pullable://jenkins/jenkins@sha256:b222875a2b788f474db08f5f23f63369b0f94ed7754b8b32ac54b8b4d01a5847
Ports: 8080/TCP, 50000/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Tue, 18 Dec 2018 07:16:32 -0800
Ready: True
Restart Count: 0
Environment:
JAVA_OPTS: -Djenkins.install.runSetupWizard=false
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wqph5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
jenkins-home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-wqph5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wqph5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
[administrator@abcdefgh ~]$ kubectl describe svc jenkins-ui
Name: jenkins-ui
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=jenkins-master
Type: NodePort
IP: 10.97.--.--
Port: ui 8080/TCP
TargetPort: 8080/TCP
NodePort: ui 31587/TCP
Endpoints: 10.244.--.--:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# Check if NodePort along with Kubernetes ports are open
[administrator@abcdefgh ~]$ sudo su root
[root@abcdefgh administrator]# systemctl start firewalld
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API Server
Warning: ALREADY_ENABLED: 6443:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
Warning: ALREADY_ENABLED: 2379-2380:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
Warning: ALREADY_ENABLED: 10250:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
Warning: ALREADY_ENABLED: 10251:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
Warning: ALREADY_ENABLED: 10252:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=10255/tcp # Read-Only Kubelet API
Warning: ALREADY_ENABLED: 10255:tcp
success
[root@abcdefgh administrator]# firewall-cmd --permanent --add-port=31587/tcp # NodePort of jenkins-ui service
Warning: ALREADY_ENABLED: 31587:tcp
success
[root@abcdefgh administrator]# firewall-cmd --reload
success
[administrator@abcdefgh ~]$ kubectl cluster-info
Kubernetes master is running at https://10.41.--.--:6443
KubeDNS is running at https://10.41.--.--:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[administrator@hgfedcba ~]$ curl 10.41.--.--:8080
curl: (7) Failed connect to 10.41.--.--:8080; Connection refused
# Successfully curl jenkins app using its service IP from the worker node
[administrator@hgfedcba ~]$ curl 10.97.--.--:8080
<!DOCTYPE html><html><head resURL="/static/5882d14a" data-rooturl="" data-resurl="/static/5882d14a">
<title>Dashboard [Jenkins]</title><link rel="stylesheet" ...
...
Would you know how to fix that? Happy to provide additional logs. Also, I have installed Jenkins from yum on another similar machine without any Docker or Kubernetes, and it's possible to access it through 10.20.30.40:8080 in my browser, so there is no provider firewall preventing me from doing that.
Your Jenkins Service is of type NodePort. That means that a specific port number, on any node within your cluster, will deliver your Jenkins UI.
When you described your Service, you can see that the port assigned was 31587.
You should be able to browse to http://SOME_IP:31587
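For example, pick any node's IP from kubectl get nodes -o wide and hit the NodePort (the node-ip placeholder below is hypothetical; the real IPs are masked in the question):
kubectl get nodes -o wide      # the INTERNAL-IP / EXTERNAL-IP columns show each node's address
curl http://<node-ip>:31587/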