Kubernetes Port Forwarding - Connection refused

I am getting the following error when forwarding a port. Can anyone help?
mjafary$ sudo kubectl port-forward sa-frontend 88:82
Forwarding from 127.0.0.1:88 -> 82
Forwarding from [::1]:88 -> 82
The error log:
Handling connection for 88
Handling connection for 88
E1214 01:25:48.704335 51463 portforward.go:331] an error occurred forwarding 88 -> 82: error forwarding port 82 to pod a017a46573bbc065902b600f0767d3b366c5dcfe6782c3c31d2652b4c2b76941, uid : exit status 1: 2018/12/14 08:25:48 socat[19382] E connect(5, AF=2 127.0.0.1:82, 16): Connection refused
Here is the description of the pod. My expectation is that when I hit localhost:88 in the browser, the request should be forwarded to the jafary/sentiment-analysis-frontend container and the application page should load.
mjafary$ kubectl describe pods sa-frontend
Name: sa-frontend
Namespace: default
Node: minikube/192.168.64.2
Start Time: Fri, 14 Dec 2018 00:51:28 -0700
Labels: app=sa-frontend
Annotations: <none>
Status: Running
IP: 172.17.0.23
Containers:
sa-frontend:
Container ID: docker://a87e614545e617be104061e88493b337d71d07109b0244b2b40002b2f5230967
Image: jafary/sentiment-analysis-frontend
Image ID: docker-pullable://jafary/sentiment-analysis-frontend@sha256:5ac784b51eb5507e88d8e2c11e5e064060871464e2c6d467c5b61692577aeeb1
Port: 82/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 14 Dec 2018 00:51:30 -0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mc5cn (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-mc5cn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mc5cn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>

The reason the connection is refused is that there is no process listening on port 82. The Dockerfile used to create the nginx image exposes port 80, and in your pod spec you have also exposed port 82. However, nginx is configured to listen on port 80.
What this means is that your pod has two exposed ports, 80 and 82, but the nginx application is actively listening only on port 80, so only requests to port 80 work.
To make your setup work using port 82, you need to change the nginx config file so that it listens on port 82 instead of 80. You can either do this by building your own Docker image with the change baked in, or you can use a ConfigMap to replace the default config file with the settings you want.
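As a concrete sketch of both options (the ConfigMap name and config file here are illustrative, not taken from the original setup):
# Option 1: forward to the port nginx actually listens on (80)
kubectl port-forward sa-frontend 88:80

# Option 2 (sketch): override the default nginx server config via a ConfigMap
# default.conf is a local file containing a server block with "listen 82;"
kubectl create configmap nginx-conf --from-file=default.conf
# ...then mount the ConfigMap at /etc/nginx/conf.d/default.conf in the pod spec
# (configMap volume + volumeMount) and recreate the pod so nginx picks it up.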

As @Patrick W said, the connection is refused because there is no process listening on port 82.
Now, to get the port your pod is listening on, you can run the following commands.
NB: Be sure to replace any value in <> with real values.
First, get the names of the pods in the specified namespace: kubectl get po -n <namespace>
Now check the exposed port of the pod you'd like to forward:
kubectl get pod <pod-name> -n <namespace> --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
Now use the exposed port obtained above to run port-forward with the command
kubectl port-forward pod/<pod-name> <local-port>:<exposed-port>
where local-port is the port from which the container will be accessed in the browser (localhost:<local-port>), while exposed-port is the port the container listens on, usually defined with the EXPOSE instruction in the Dockerfile.
Get more information here
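To double-check which port the application is actually listening on inside the container (as opposed to the port declared in the spec), a quick sketch, assuming the image ships netstat or ss:
# pod name taken from the question above; minimal images may lack these tools
kubectl exec -it sa-frontend -- sh -c 'netstat -tlnp 2>/dev/null || ss -tlnp'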

As Patrick correctly pointed out. I had this same issue, which plagued me for two days. The steps would be:
Ensure your Dockerfile is using your preferred port (EXPOSE 5000)
In your pod.yml file ensure containerPort is 5000 (containerPort: 5000)
Run the kubectl port-forward command to reflect the above:
kubectl port-forward pod/my-name-of-pod 8080:5000
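Put together, a minimal sketch of those steps (the pod name comes from the command above; the image name is a placeholder):
# Dockerfile excerpt: declare the port the app actually listens on
#   EXPOSE 5000

# Pod spec: containerPort must match that port
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-name-of-pod
spec:
  containers:
  - name: app
    image: my-image:latest   # placeholder image
    ports:
    - containerPort: 5000
EOF

# forward local port 8080 to the pod's port 5000
kubectl port-forward pod/my-name-of-pod 8080:5000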

Related

Kubernetes pod Troubleshoot

I deployed my container in a Kubernetes pod, and the pod and related services are up and running.
Please find below the pod and deployment status:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get pods
NAME READY STATUS RESTARTS AGE
angular-deployment-5d5fbf967c-zvzvl 1/1 Running 0 70m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
angular-service NodePort 10.96.16.68 <none> 80:31000/TCP 79m
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
angular-deployment 1/1 1 1 70m
Please find the curl access below:
root@jenkins-linux-vm:/home/admin/kubernetes# kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.0.0.1:31000
curl: (7) Failed to connect to 10.0.0.1 port 31000: Connection refused
command terminated with exit code 7
I am also unable to access my application service in the browser, like below:
https://10.0.0.1:31000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: angular-deployment
spec:
  selector:
    matchLabels:
      app: frontend-app
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
      - name: frontend-app
        image: ${IMAGE_NAME}:${IMAGE_TAG}
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: angular-service
spec:
  selector:
    app: frontend-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 31000
  type: NodePort
root@jenkins-linux-vm:/home/admin# kubectl describe pod angular-deployment-556c47f666-9d2x4
Name: angular-deployment-556c47f666-9d2x4
Namespace: pre-release
Priority: 0
Node: poc-worker2/10.0.0.2
Start Time: Sat, 18 Jan 2020 08:47:35 +0000
Labels: app=frontend-app
pod-template-hash=556c47f666
Annotations: <none>
Status: Running
IP: 10.32.0.8
IPs:
IP: 10.32.0.8
Controlled By: ReplicaSet/angular-deployment-556c47f666
Containers:
frontend-app:
Container ID: docker://43fea22e4c1d49e0c94fc8aca3a4b41df44b5f91f45ea29ede263c5a6bcf6503
Image: frontend-app:future-master-fix-f2d2a8bd
Image ID: docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 18 Jan 2020 08:47:40 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-r67p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-r67p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21s default-scheduler Successfully assigned pre-release/angular-deployment-556c47f666-9d2x4 to poc-worker2
Normal Pulled 17s kubelet, poc-worker2 Container image "frontend-app:future-master-fix-f2d2a8bd" already present on machine
Normal Created 16s kubelet, poc-worker2 Created container frontend-app
Normal Started 16s kubelet, poc-worker2 Started container frontend-app
Please find below the svc for the deployment:
root@jenkins-linux-vm:/home/admin# kubectl describe svc angular-service
Name: angular-service
Namespace: pre-release
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"angular-service","namespace":"pre-release"},"spec":{"ports":[{"no...
Selector: app=frontend-app
Type: NodePort
IP: 10.96.227.143
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 31000/TCP
Endpoints: 10.32.0.4:80,10.32.0.8:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Please find the Dockerfile here:
FROM node:12.2.0
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /app/package.json
# add app
COPY . /app
# start app
CMD ng serve --host 0.0.0.0
Can someone please help me fix this issue?
Here I can see the issue is with your Angular Dockerfile: you are using ng serve. If you look at the dependencies in package.json you will see "@angular/cli": "*"; in order to have that inside your Docker image you need to add RUN npm install, which will install all your dependencies inside the container so that ng serve can run. But ng serve is meant for local development; it's not a good approach, I would say.
To identify these kinds of issues, it's advisable to run the image on your local machine first to check whether your Docker container works, before you deploy it onto the k8s cluster; Kubernetes is a very big universe and it takes time to identify the actual problem.
OK, coming to the issue (I could simply add a single command to your Dockerfile and post my answer, but I wouldn't suggest that approach, so I'm adding the complete answer): when you are deploying a frontend-related application, your Docker image needs to be able to serve the index.html page, which is the end product after you build your Angular or React application.
There are several ways this could be done, and there are several tutorials explaining the same. Here is what I would suggest: your Dockerfile should look like this (comments explain what each step does).
#Stage 0: builder, based on a Node alpine image, to build and compile your Angular code
FROM node:10-alpine as builder
WORKDIR /app
COPY package*.json /app/
# This is one thing you forgot in your dockerfile, if you add this it might work
RUN npm install
COPY . .
# This is normal build, it does ng build
RUN npm run build
#Stage 1: based on an Nginx image, to keep only the compiled app inside the nginx folders to serve
FROM nginx:1.15
COPY --from=builder /app/dist/ /usr/share/nginx/html
# This one copies the local nginx.conf file as default.conf for nginx to let it serve
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
Please make sure you have an nginx.conf file inside your code, at the same level as package.json:
server {
    listen 80;
    sendfile on;
    default_type application/octet-stream;
    gzip on;
    gzip_http_version 1.1;
    gzip_disable "MSIE [1-6]\.";
    gzip_min_length 1100;
    gzip_vary on;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    gzip_comp_level 9;
    root /usr/share/nginx/html;
    location / {
        #try_files $uri $uri/ /index.html =404;
        expires -1d;
        alias /usr/share/nginx/html/;
        try_files $uri$args $uri$args/ /index.html =404;
        location ~* \.(?:ico|css|js|gif|jpe?g|png|svg|woff|woff2|ttf|eot)$ {
            add_header Access-Control-Allow-Origin *;
        }
    }
}
Make sure you run docker run on your local machine before you deploy the image to the cluster.
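For instance, a quick local smoke test could look like this (the image tag and host port are placeholders):
# build the image from the Dockerfile above and run it locally
docker build -t frontend-app:local .
docker run --rm -p 8080:80 frontend-app:local
# in another terminal, confirm nginx serves the compiled app
curl -I http://localhost:8080/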
Hope this helps.
Accessing the pod via nodeip:nodeport will only work if you are accessing the pod from outside the cluster, e.g. using a browser.
Here is a guide on how to expose an application via NodePort. In this case the nodes of your Kubernetes cluster need to be accessible, i.e. they should have a public IP.
You should use the cluster IP to access the pod from within the cluster, i.e. via exec from another pod, as shown in the command below:
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl curl 10.96.16.68:80
I have a feeling that your docker container is not listening on port 80.
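To narrow that down, a sketch of two checks under these assumptions (the node IP 10.0.0.2 is taken from the describe output above; the image may not ship netstat/ss):
# from outside the cluster: use a node's IP, not 10.0.0.1
curl -v http://10.0.0.2:31000/

# from inside the pod: see which ports the container is actually listening on
kubectl exec -it angular-deployment-5d5fbf967c-zvzvl -- sh -c 'netstat -tlnp 2>/dev/null || ss -tlnp'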

Kubernetes api/dashboard issue

I posted this on serverfault, too, but will hopefully get more views/feedback here:
Trying to get the Dashboard UI working in a kubeadm cluster using kubectl proxy for remote access. Getting
Error: 'dial tcp 192.168.2.3:8443: connect: connection refused'
Trying to reach: 'https://192.168.2.3:8443/'
when accessing http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ via remote browser.
Looking at API logs, I see that I'm getting the following errors:
I1215 20:18:46.601151 1 log.go:172] http: TLS handshake error from 10.21.72.28:50268: remote error: tls: unknown certificate authority
I1215 20:19:15.444580 1 log.go:172] http: TLS handshake error from 10.21.72.28:50271: remote error: tls: unknown certificate authority
I1215 20:19:31.850501 1 log.go:172] http: TLS handshake error from 10.21.72.28:50275: remote error: tls: unknown certificate authority
I1215 20:55:55.574729 1 log.go:172] http: TLS handshake error from 10.21.72.28:50860: remote error: tls: unknown certificate authority
E1215 21:19:47.246642 1 watch.go:233] unable to encode watch object *v1.WatchEvent: write tcp 134.84.53.162:6443->134.84.53.163:38894: write: connection timed out (&streaming.encoder{writer:(*metrics.fancyResponseWriterDelegator)(0xc42d6fecb0), encoder:(*versioning.codec)(0xc429276990), buf:(*bytes.Buffer)(0xc42cae68c0)})
I presume this is related to not being able to get the Dashboard working, and if so, I am wondering what the issue with the API server is. Everything else in the cluster appears to be working.
NB, I have admin.conf running locally and am able to access the cluster via kubectl with no issue.
Also of note is that this had been working when I first got the cluster up. However, I was having networking issues and had to apply the fix from "Coredns service do not work, but endpoint is ok, the other SVCs are normal except dns" in order to get CoreDNS to work, so I am wondering if this may have broken the proxy service?
* EDIT *
Here is output for the dashboard pod:
[gms@thalia0 ~]$ kubectl describe pod kubernetes-dashboard-77fd78f978-tjzxt --namespace=kube-system
Name: kubernetes-dashboard-77fd78f978-tjzxt
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: thalia2.hostdoman/hostip<redacted>
Start Time: Sat, 15 Dec 2018 15:17:57 -0600
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=77fd78f978
Annotations: cni.projectcalico.org/podIP: 192.168.2.3/32
Status: Running
IP: 192.168.2.3
Controlled By: ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
kubernetes-dashboard:
Container ID: docker://ed5ff580fb7d7b649d2bd1734e5fd80f97c80dec5c8e3b2808d33b8f92e7b472
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Running
Started: Sat, 15 Dec 2018 15:18:04 -0600
Ready: True
Restart Count: 0
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-mrd9k (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-mrd9k:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-mrd9k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
I checked the service:
[gms@thalia0 ~]$ kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.103.93.93 <none> 443/TCP 4d23h
And also of note, if I curl http://localhost:8001/api from the master node, I do get a valid response.
So, in summary, I'm not sure which if any of these errors are the source of not being able to access the dashboard.
I just upgraded my cluster to 1.13.1, in hopes that this issue would be resolved, but alas, no.
When you run kubectl proxy, the default port 8001 is only reachable from localhost. If kubectl proxy is running on the machine where Kubernetes is installed, you must forward this port to your laptop or whatever device you are using to ssh.
You can ssh to the master node and map port 8001 to your local box with:
ssh -L 8001:localhost:8001 hostname@master_node_IP
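Spelled out end to end, a sketch of the flow (user and host are placeholders):
# on your laptop: tunnel local port 8001 to port 8001 on the master
ssh -L 8001:localhost:8001 <user>@<master_node_IP>

# inside that ssh session on the master: start the proxy (it binds to localhost by default)
kubectl proxy --port=8001

# then open in the local browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/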
I upgraded all nodes in the cluster to version 1.13.1 and voila, the dashboard now works AND so far I have not had to apply the CoreDNS fix noted above.

kubernetes: Service endpoints available from within cluster, not outside

I have a service (LoadBalancer) definition in a k8s cluster that is exposing ports 80 and 443.
In the k8s dashboard, it indicates that these are the external endpoints:
(the k8s cluster has been deployed using Rancher, for what that's worth)
<some_rancher_agent_public_ip>:80
<some_rancher_agent_public_ip>:443
Here comes the weird (?) part:
From a busybox pod spawned within the cluster:
wget <some_rancher_agent_public_ip>:80
wget <some_rancher_agent_public_ip>:443
both succeed (i.e. they fetch the index.html file)
From outside the cluster:
Connecting to <some_rancher_agent_public_ip>:80... connected.
HTTP request sent, awaiting response...
2018-01-05 17:42:51 ERROR 502: Bad Gateway.
I am assuming this is not a security-group issue, given that:
it does connect to <some_rancher_agent_public_ip>:80
I have also tested this by allowing all traffic from all sources in the security group the instance with <some_rancher_agent_public_ip> belongs to
In addition, nmap-ing the above public IP shows 80 and 443 in the open state.
Any suggestions?
update:
$ kubectl describe svc ui
Name: ui
Namespace: default
Labels: <none>
Annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:eu-west-1:somecertid
Selector: els-pod=ui
Type: LoadBalancer
IP: 10.43.74.106
LoadBalancer Ingress: <some_rancher_agent_public_ip>, <some_rancher_agent_public_ip>
Port: http 80/TCP
TargetPort: %!d(string=ui-port)/TCP
NodePort: http 30854/TCP
Endpoints: 10.42.179.14:80
Port: https 443/TCP
TargetPort: %!d(string=ui-port)/TCP
NodePort: https 31404/TCP
Endpoints: 10.42.179.14:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
and here is the respective pod description:
kubectl describe pod <the_pod_id>
Name: <pod_id>
Namespace: default
Node: ran-agnt-02/<some_rancher_agent_public_ip>
Start Time: Fri, 29 Dec 2017 16:48:42 +0200
Labels: els-pod=ui
pod-template-hash=375086521
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"ui-deployment-7c94db965","uid":"5cea65ea-eca7-11e7-b8e0-0203f78b...
Status: Running
IP: 10.42.179.14
Created By: ReplicaSet/ui-deployment-7c94db965
Controlled By: ReplicaSet/ui-deployment-7c94db965
Containers:
ui:
Container ID: docker://some-container-id
Image: docker-registry/imagename
Image ID: docker-pullable://docker-registry/imagename@sha256:some-sha
Port: 80/TCP
State: Running
Started: Fri, 05 Jan 2018 16:24:56 +0200
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 05 Jan 2018 16:23:21 +0200
Finished: Fri, 05 Jan 2018 16:23:31 +0200
Ready: True
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8g7bv (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-8g7bv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8g7bv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Kubernetes provides different ways of exposing pods outside the cluster, mainly Services and Ingress. I'll focus on Services, since you are having issues with that.
There are different Service types, among them:
ClusterIP: the default type. Choosing this type means that your service gets a stable IP which is reachable only from inside the cluster. Not relevant here.
NodePort: besides having a cluster-internal IP, exposes the service on a random port on each node of the cluster (the same port on each node). You'll be able to contact the service on any NodeIP:NodePort address. That's why you can contact your rancher_agent_public_ip:NodePort from outside the cluster.
LoadBalancer: besides having a cluster-internal IP and exposing the service on a NodePort, asks the cloud provider for a load balancer that exposes the service externally using the cloud provider's load balancer.
Creating a Service of type LoadBalancer makes it a NodePort as well. That's why you can reach rancher_agent_public_ip:30854.
I have no experience with Rancher, but it seems that creating a LoadBalancer Service deploys an HAProxy to act as the load balancer. That HAProxy created by Rancher needs a public IP that's reachable from outside the cluster, and a port that will redirect requests to the NodePort.
But in your service, the IP looks like an internal IP 10.43.74.106. That IP won't be reachable from outside the cluster. You need a public IP.
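As a quick check (service and port names taken from the describe output above), the NodePort path should already be reachable from outside:
# confirm the NodePort assigned to the http port of the ui service
kubectl get svc ui -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'

# hit the service directly through a node's public IP, bypassing the load balancer
curl -v http://<some_rancher_agent_public_ip>:30854/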

Why does Kubernetes showing the nodes as ready even if they are not reachable?

I am running a Kubernetes cluster which is configured with a master and 3 nodes.
#kubectl get nodes
NAME STATUS AGE
minion-1 Ready 46d
minion-2 Ready 46d
minion-3 Ready 46d
I have launched a couple of pods in the cluster and found that the pods are in the Pending state.
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
httpd 0/1 Pending 0 10m <none>
nginx 0/1 Pending 0 11m <none>
The YAML file for one of the pods, "httpd":
# cat http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    env: test
spec:
  containers:
  - name: httpd
    image: httpd
While debugging the reason for the failure, I found that a couple of the configured nodes are not ready. Only one node is reachable from the master.
# ping minion-1
PING minion-1 (172.31.24.204) 56(84) bytes of data.
64 bytes from minion-1 (172.31.24.204): icmp_seq=1 ttl=64 time=0.575 ms
Whereas other nodes are not reachable:
# ping minion-2
PING minion-2 (172.31.29.95) 56(84) bytes of data.
From master (172.31.16.204) icmp_seq=1 Destination Host Unreachable
# ping minion-3
PING minion-3 (172.31.17.252) 56(84) bytes of data.
From master (172.31.16.204) icmp_seq=1 Destination Host Unreachable
The questions that I have here are:
1) Why does Kubernetes show the nodes as Ready even if they are not reachable from the master?
2) Why is the pod creation failing?
Is it because of the unavailability of nodes, or a configuration issue in the YAML file?
# kubectl describe pod httpd
Name: httpd
Namespace: default
Node: /
Labels: env=test
Status: Pending
IP:
Controllers: <none>
Containers:
httpd:
Image: httpd
Port:
Volume Mounts: <none>
Environment Variables: <none>
No volumes.
QoS Class: BestEffort
Tolerations: <none>
No events.
Following are the Kubernetes and etcd versions.
[root@raghavendar1 ~]# kubectl --version
Kubernetes v1.5.2
[root@raghavendar1 ~]# et
etcd etcdctl ether-wake ethtool
[root@raghavendar1 ~]# etcd --version
etcd Version: 3.2.5
Git SHA: d0d1a87
Go Version: go1.8.3
Go OS/Arch: linux/amd64
Kubernetes does not use the ICMP protocol to check node-to-master connectivity.
Nodes become Ready when the node -> api-server communication works, and this is done via the HTTPS protocol.
You can read more about node - master connectivity in the Kubernetes documentation: https://kubernetes.io/docs/concepts/architecture/master-node-communication/
Why isn't the pod scheduled?
The answer to this question is probably in the master logs; check kube-apiserver.log and kube-scheduler.log. The reason is cluster misconfiguration.
To start, run it in a single network to get a grip on things, and double-check routing.
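A few standard commands usually surface the reason; as a sketch (node and pod names taken from the question):
# inspect the node's Conditions (Ready status, reasons, last heartbeat times)
kubectl describe node minion-2

# scheduler events usually explain why a pod stays Pending
kubectl describe pod httpd
kubectl get events --sort-by='.metadata.creationTimestamp'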

Kubernetes minikube, cannot expose service on public ip range

So I've been playing around with Minikube.
I've managed to deploy a simple python flask container:
PS C:\Users\Will> kubectl run test-flask-deploy --image 192.168.1.201:5000/test_flask:1
deployment "test-flask-deploy" created
I've also then managed to expose the deployment as a service:
PS C:\Users\Will> kubectl expose deployment/test-flask-deploy --type="NodePort" --port 8080
service "test-flask-deploy" exposed
In the dashboard I can see that the service has a Cluster IP: 10.0.0.132.
I access the dashboard on a 192.168.xxx.xxx address, so I'm hoping I can expose the service on that external IP.
Any idea how I go about this?
A separate and slightly less important question: I've got minikube talking to a Docker registry on my network. If I deploy an image which has not yet been pulled locally to minikube, the deployment fails, yet when I run the docker pull command on minikube locally, the deployment then succeeds. So minikube is able to pull Docker images, but when I deploy an image which is accessible via the registry yet not pulled locally, it fails. Any thoughts?
EDIT: More detail in response to comment:
PS C:\Users\Will> kubectl describe pod test-flask-deploy
Name: test-flask-deploy-1049547027-rgf7d
Namespace: default
Node: minikube/192.168.99.100
Start Time: Sat, 07 Oct 2017 10:19:58 +0100
Labels: pod-template-hash=1049547027
run=test-flask-deploy
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"test-flask-deploy-1049547027","uid":"b06a14b8-ab40-11e7-9714-080...
Status: Running
IP: 172.17.0.4
Created By: ReplicaSet/test-flask-deploy-1049547027
Controlled By: ReplicaSet/test-flask-deploy-1049547027
Containers:
test-flask-deploy:
Container ID: docker://577e339ce680bc5dd9388293f1f1ea62be59a6acc25be22889310761222c760f
Image: 192.168.1.201:5000/test_flask:1
Image ID: docker-pullable://192.168.1.201:5000/test_flask@sha256:d303ed635888394f69223cc0a66c5778444fd3636dfcde42295fd512be948898
Port: <none>
State: Running
Started: Sat, 07 Oct 2017 10:19:59 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5rrpm (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-5rrpm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5rrpm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
First, check the nodeport that is assigned to your service:
$ kubectl get svc test-flask-deploy
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-flask-deploy 10.0.0.76 <nodes> 8080:30341/TCP 4m
Now you should be able to access it on 192.168.xxx.xxx:30341, or whatever your minikube IP:NodePort is.
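If helpful, minikube can also print the node IP and the full service URL directly:
# the minikube VM/node IP
minikube ip

# prints the reachable URL for the NodePort service, e.g. http://192.168.99.100:30341
minikube service test-flask-deploy --url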