Why can't I find service discovery in Kubernetes?

I am creating a simple grpc example using Kubernetes in an on-premises environment.
When the Node.js app makes a request to pythonservice, pythonservice responds with "hello world", which is then displayed on a web page.
However, pythonservice is reachable via its ClusterIP, but not via http://pythonservice:8000.
Suspecting a problem with CoreDNS, I checked various things and ended up deleting the kube-dns Service in the kube-system namespace.
When I look up pythonservice.default.svc.cluster.local with nslookup, it returns an address different from the ClusterIP of pythonservice.
Sorry I'm not good at English
This is the Node.js code:
var setting = 'test';
var express = require('express');
var app = express();
const port = 80;

var PROTO_PATH = __dirname + '/helloworld.proto';
var grpc = require('grpc');
var protoLoader = require('@grpc/proto-loader');
var packageDefinition = protoLoader.loadSync(
    PROTO_PATH,
    {keepCase: true,
     longs: String,
     enums: String,
     defaults: true,
     oneofs: true
    });

// http://pythonservice:8000
// 10.109.228.152:8000
// pythonservice.default.svc.cluster.local:8000
// 218.38.137.28
var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;

function main(callback) {
  var client = new hello_proto.Greeter("http://pythonservice:8000",
                                       grpc.credentials.createInsecure());
  var user;
  if (process.argv.length >= 3) {
    user = process.argv[2];
  } else {
    user = 'world';
  }
  client.sayHello({name: user}, function(err, response) {
    console.log('Greeting:', response.message);
    setting = response.message;
  });
}

var server = app.listen(port, function () {});

app.get('/', function (req, res) {
  main();
  res.send(setting);
  //res.send(ip2);
  //main(function(result){
  //  res.send(result);
  //})
});
This is the YAML file for pythonservice:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: practice-dp2
spec:
  selector:
    matchLabels:
      app: practice-dp2
  replicas: 1
  template:
    metadata:
      labels:
        app: practice-dp2
    spec:
      hostname: appname
      subdomain: default-subdomain
      containers:
      - name: practice-dp2
        image: taeil777/greeter-server:v1
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: pythonservice
spec:
  type: ClusterIP
  selector:
    app: practice-dp2
  ports:
  - port: 8000
    targetPort: 8000
This is the output of kubectl get all:
root@pusik-server0:/home/tinyos/Desktop/grpc/node# kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/practice-dp-55dd4b9d54-v4hhq   1/1     Running   1          68m
pod/practice-dp2-7d4886876-znjtl   1/1     Running   0          18h

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP    34d
service/nodeservice     ClusterIP   10.100.165.53    <none>        80/TCP     68m
service/pythonservice   ClusterIP   10.109.228.152   <none>        8000/TCP   18h

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/practice-dp    1/1     1            1           68m
deployment.apps/practice-dp2   1/1     1            1           18h

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/practice-dp-55dd4b9d54   1         1         1       68m
replicaset.apps/practice-dp2-7d4886876   1         1         1       18h
root@pusik-server0:/home/tinyos/Desktop/grpc/python# nslookup pythonservice.default.svc.cluster.local
Server:    127.0.1.1
Address:   127.0.1.1#53

Name:    pythonservice.default.svc.cluster.local
Address: 218.38.137.28

1. Answering the first question:
pythonservice's ClusterIP is accessible, but not http://pythonservice:8000.
Please refer to Connecting Applications with Services
The type of service/pythonservice is ClusterIP. If you are interested in exposing the service outside the cluster, please use the service type NodePort or LoadBalancer. According to the attached output, your application is accessible from within the cluster (ClusterIP service).
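For illustration, a NodePort variant of your pythonservice could look like the sketch below (the nodePort value 30080 is only an example; if you omit it, Kubernetes picks a free port in the 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: pythonservice
spec:
  type: NodePort
  selector:
    app: practice-dp2
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30080   # illustrative value; must fall inside the 30000-32767 range
With this in place the service stays reachable inside the cluster on port 8000, and additionally on <NodeIP>:30080 from outside.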
2. Answering the second question:
The output
exec failed: container_linux.go:345: starting container process caused "exec: \"nslookup\": executable file not found in $PATH": unknown command terminated with exit code 126
means that the nslookup binary is probably not available inside your pod, so please run a pod with the required tools installed in the same namespace and verify again:
kubectl run ubuntu --rm -it --image ubuntu --restart=Never --command -- bash -c 'apt-get update && apt-get -y install dnsutils && bash'
kubectl exec ubuntu -- nslookup pythonservice.default.svc.cluster.local
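If cluster DNS is healthy, the lookup should be answered by the cluster DNS Service (assumed here to be 10.96.0.10, the usual default on kubeadm clusters) and return the ClusterIP of pythonservice, for example:
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:    pythonservice.default.svc.cluster.local
Address: 10.109.228.152
In the nslookup output from the question, the query was answered by 127.0.1.1 (the node's local resolver) rather than the cluster DNS, which is why a public address (218.38.137.28) came back instead of the Service's ClusterIP.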
-- Update
Please verify the state of all nodes, pods and services, especially in the kube-system namespace:
kubectl get nodes,pods,svc --all-namespaces -o wide
In order to start debugging, please gather more information about the particular problem, e.g. for CoreDNS:
kubectl describe pod <coredns_pod> -n kube-system
kubectl logs <coredns_pod> -n kube-system
Please refer to:
Debug Services
Debugging DNS Resolution
Hope this helps.

Related

A sample containerized application in Kubernetes unable to be shown as targets in Prometheus for scraping metrics

My goal is to reproduce the observations in this blog post: https://medium.com/kubernetes-tutorials/monitoring-your-kubernetes-deployments-with-prometheus-5665eda54045
So far I am able to deploy the example rpc-app application in my cluster; the following shows that the two pods for this application are running:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default rpc-app-deployment-64f456b65-5m7j5 1/1 Running 0 3h23m 10.244.0.15 my-server-ip.company.com <none> <none>
default rpc-app-deployment-64f456b65-9mnfd 1/1 Running 0 3h23m 10.244.0.14 my-server-ip.company.com <none> <none>
The application exposes metrics, as confirmed by:
root@xxxxx:/u01/app/k8s# curl 10.244.0.14:8081/metrics
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
...
rpc_durations_seconds{service="uniform",quantile="0.5"} 0.0001021102787270781
rpc_durations_seconds{service="uniform",quantile="0.9"} 0.00018233200374804932
rpc_durations_seconds{service="uniform",quantile="0.99"} 0.00019828258205623097
rpc_durations_seconds_sum{service="uniform"} 6.817882693745326
rpc_durations_seconds_count{service="uniform"} 68279
My Prometheus pod is running in the same cluster. However, I am unable to see any rpc_* metrics in Prometheus.
monitoring prometheus-deployment-599bbd9457-pslwf 1/1 Running 0 30m 10.244.0.21 my-server-ip.company.com <none> <none>
In the Prometheus GUI:
Clicking Status -> Service Discovery, I get:
Service Discovery
rpc-metrics (0 / 3 active targets)
Clicking Status -> Targets shows nothing (0 targets).
Clicking Status -> Configuration:
The content can be seen as: https://gist.github.com/denissun/14835468be3dbef7bc924032767b9d7f
I am really new to Prometheus/Kubernetes monitoring, appreciate your help to troubleshoot this issue.
update 1 - I created the service
# cat rpc-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rpc-app-service
  labels:
    app: rpc-app
spec:
  ports:
  - name: web
    port: 8081
    targetPort: 8081
    protocol: TCP
    nodePort: 32325
  selector:
    app: rpc-app
  type: NodePort
# kubectl get service rpc-app-service
NAME              TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
rpc-app-service   NodePort   10.110.204.119   <none>        8081:32325/TCP   9h
Did you create the Kubernetes Service to expose the Deployment?
kubectl create -f rpc-app-service.yaml
The Prometheus configuration watches Service endpoints, not Deployments or Pods.
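I can't see the exact contents of your gist, but an endpoints-based scrape job would look roughly like this sketch (the relabeling on the app label is an assumption; your config may filter on different labels):
- job_name: 'rpc-metrics'
  kubernetes_sd_configs:
  - role: endpoints        # discover Service endpoints, i.e. the pods behind rpc-app-service
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    action: keep           # keep only endpoints whose Service carries the label app=rpc-app
    regex: rpc-app
Once the Service exists and its label matches whatever the keep rule expects, the "0 / 3 active targets" under Service Discovery should turn into active targets.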
Have a look at the Prometheus Operator. It's slightly more involved than running a Prometheus Deployment in your cluster but it represents a state-of-the-art deployment of Prometheus with some elegant abstractions such as PodMonitors and ServiceMonitors.
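With the Operator, a ServiceMonitor selecting your Service by its app: rpc-app label might look roughly like this (the release label is an assumption; it must match whatever serviceMonitorSelector your Prometheus resource uses):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rpc-app
  labels:
    release: prometheus      # hypothetical; must match the Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: rpc-app           # matches the label on rpc-app-service
  endpoints:
  - port: web                # the named port from rpc-app-service
    interval: 30s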

Not able to connect to kafka brokers

I've deployed https://github.com/confluentinc/cp-helm-charts/tree/master/charts/cp-kafka on my on prem k8s cluster.
I'm trying to expose it by using the TCP services support of the NGINX ingress controller.
My TCP NGINX ConfigMap looks like:
data:
  "<zookeeper-tcp-port>": <namespace>/cp-zookeeper:2181
  "<kafka-tcp-port>": <namespace>/cp-kafka:9092
And I've made the corresponding entries in my NGINX ingress controller:
- name: <zookeeper-tcp-port>-tcp
  port: <zookeeper-tcp-port>
  protocol: TCP
  targetPort: <zookeeper-tcp-port>-tcp
- name: <kafka-tcp-port>-tcp
  port: <kafka-tcp-port>
  protocol: TCP
  targetPort: <kafka-tcp-port>-tcp
Now I'm trying to connect to my kafka instance.
When I just try to connect to the IP and port using Kafka tools, I get the error message:
Unable to determine broker endpoints from Zookeeper.
One or more brokers have multiple endpoints for protocol PLAIN...
Please proved bootstrap.servers value in advanced settings
[<cp-broker-address-0>.cp-kafka-headless.<namespace>:<port>][<ip>]
When I enter what I assume are the correct broker addresses (I've tried them all...), I get a timeout. There are no logs coming from the NGINX controller except:
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:12 +0000] TCP 200 0 0 0.000
[08/Apr/2020:15:51:14 +0000] TCP 200 0 0 0.001
From the pod kafka-zookeeper-0 I'm getting loads of:
[2020-04-08 15:52:02,415] INFO Accepted socket connection from /<ip:port> (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2020-04-08 15:52:02,415] WARN Unable to read additional data from client sessionid 0x0, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2020-04-08 15:52:02,415] INFO Closed socket connection for client /<ip:port> (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Though I'm not sure these have anything to do with it?
Any ideas on what I'm doing wrong?
Thanks in advance.
TL;DR:
Change the value nodeport.enabled to true inside cp-kafka/values.yaml before deploying.
Change the service name and ports in your TCP NGINX ConfigMap and Ingress object.
Set bootstrap-server on your kafka tools to <Cluster_External_IP>:31090
Explanation:
The Headless Service was created alongside the StatefulSet. The created service will not be given a clusterIP, but will instead simply include a list of Endpoints.
These Endpoints are then used to generate instance-specific DNS records in the form of:
<StatefulSet>-<Ordinal>.<Service>.<Namespace>.svc.cluster.local
It creates a DNS name for each pod, e.g:
[ root@curl:/ ]$ nslookup my-confluent-cp-kafka-headless
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      my-confluent-cp-kafka-headless
Address 1: 10.8.0.23 my-confluent-cp-kafka-1.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 2: 10.8.1.21 my-confluent-cp-kafka-0.my-confluent-cp-kafka-headless.default.svc.cluster.local
Address 3: 10.8.3.7  my-confluent-cp-kafka-2.my-confluent-cp-kafka-headless.default.svc.cluster.local
This is what makes these services able to connect to each other inside the cluster.
I've gone through a lot of trial and error until I realized how it was supposed to work. Based on your TCP NGINX ConfigMap, I believe you are facing the same issue.
The NGINX ConfigMap asks for: <PortToExpose>: "<Namespace>/<Service>:<InternallyExposedPort>".
I realized that you don't need to expose Zookeeper, since it's an internal service handled by the Kafka brokers.
I also realized that you are trying to expose cp-kafka:9092, which is the headless service, also only used internally, as I explained above.
In order to get outside access, you have to set the parameter nodeport.enabled to true, as stated here: External Access Parameters.
This adds one NodePort service for each kafka-N pod during chart deployment.
Then you change your configmap to map to one of them:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
Note that the created service has the selector statefulset.kubernetes.io/pod-name: demo-cp-kafka-0; this is how the service identifies the pod it is intended to connect to (a sketch of such a per-pod service is shown below).
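A minimal sketch of what such a generated per-pod NodePort Service looks like, assuming the chart defaults used in this reproduction (names, labels and ports may differ in your deployment):
apiVersion: v1
kind: Service
metadata:
  name: demo-cp-kafka-0-nodeport
  namespace: default
spec:
  type: NodePort
  selector:
    app: cp-kafka
    statefulset.kubernetes.io/pod-name: demo-cp-kafka-0   # pins the service to exactly one broker pod
  ports:
  - name: external-broker
    port: 19092           # servicePort from values.yaml
    targetPort: 31090     # the broker's external listener
    nodePort: 31090       # firstListenerPort + pod ordinal
    protocol: TCP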
Edit the nginx-ingress-controller:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
Set your kafka tools to <Cluster_External_IP>:31090
Reproduction:
- Snippet edited in cp-kafka/values.yaml:
nodeport:
  enabled: true
  servicePort: 19092
  firstListenerPort: 31090
Deploy the chart:
$ helm install demo cp-helm-charts
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-cp-control-center-6d79ddd776-ktggw 1/1 Running 3 113s
demo-cp-kafka-0 2/2 Running 1 113s
demo-cp-kafka-1 2/2 Running 0 94s
demo-cp-kafka-2 2/2 Running 0 84s
demo-cp-kafka-connect-79689c5c6c-947c4 2/2 Running 2 113s
demo-cp-kafka-rest-56dfdd8d94-79kpx 2/2 Running 1 113s
demo-cp-ksql-server-c498c9755-jc6bt 2/2 Running 2 113s
demo-cp-schema-registry-5f45c498c4-dh965 2/2 Running 3 113s
demo-cp-zookeeper-0 2/2 Running 0 112s
demo-cp-zookeeper-1 2/2 Running 0 93s
demo-cp-zookeeper-2 2/2 Running 0 74s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-control-center ClusterIP 10.0.13.134 <none> 9021/TCP 50m
demo-cp-kafka ClusterIP 10.0.15.71 <none> 9092/TCP 50m
demo-cp-kafka-0-nodeport NodePort 10.0.7.101 <none> 19092:31090/TCP 50m
demo-cp-kafka-1-nodeport NodePort 10.0.4.234 <none> 19092:31091/TCP 50m
demo-cp-kafka-2-nodeport NodePort 10.0.3.194 <none> 19092:31092/TCP 50m
demo-cp-kafka-connect ClusterIP 10.0.3.217 <none> 8083/TCP 50m
demo-cp-kafka-headless ClusterIP None <none> 9092/TCP 50m
demo-cp-kafka-rest ClusterIP 10.0.14.27 <none> 8082/TCP 50m
demo-cp-ksql-server ClusterIP 10.0.7.150 <none> 8088/TCP 50m
demo-cp-schema-registry ClusterIP 10.0.7.84 <none> 8081/TCP 50m
demo-cp-zookeeper ClusterIP 10.0.9.119 <none> 2181/TCP 50m
demo-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 50m
Create the TCP configmap:
$ cat nginx-tcp-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-system
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
$ kubectl apply -f nginx-tcp-configmap.yaml
configmap/tcp-services created
Edit the Nginx Ingress Controller:
$ kubectl edit deploy nginx-ingress-controller -n kube-system
$ kubectl get deploy nginx-ingress-controller -n kube-system -o yaml
{{{suppressed output}}}
ports:
- containerPort: 31090
  hostPort: 31090
  protocol: TCP
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
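One thing worth double-checking: the controller only reads the tcp-services ConfigMap if it is pointed at it. In a standard ingress-nginx deployment the container args include something along these lines (your install may already set this):
args:
- /nginx-ingress-controller
- --tcp-services-configmap=kube-system/tcp-services   # must reference the ConfigMap created above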
My ingress is on IP 35.226.189.123; now let's try to connect from outside the cluster. For that I'll connect to another VM where I have minikube, so I can use a kafka-client pod to test:
user@minikube:~$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
kafka-client   1/1     Running   0          17h
user@minikube:~$ kubectl exec kafka-client -it -- /bin/bash
root@kafka-client:/# kafka-console-consumer --bootstrap-server 35.226.189.123:31090 --topic demo-topic --from-beginning --timeout-ms 8000 --max-messages 1
Wed Apr 15 18:19:48 UTC 2020
Processed a total of 1 messages
root@kafka-client:/#
As you can see, I was able to access Kafka from outside the cluster.
If you need external access to Zookeeper as well, here is a service model for you:
zookeeper-external-0.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cp-zookeeper
    pod: demo-cp-zookeeper-0
  name: demo-cp-zookeeper-0-nodeport
  namespace: default
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: external-broker
    nodePort: 31181
    port: 12181
    protocol: TCP
    targetPort: 31181
  selector:
    app: cp-zookeeper
    statefulset.kubernetes.io/pod-name: demo-cp-zookeeper-0
  sessionAffinity: None
  type: NodePort
It will create a service for it:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-cp-zookeeper-0-nodeport NodePort 10.0.5.67 <none> 12181:31181/TCP 2s
Patch your configmap:
data:
  "31090": default/demo-cp-kafka-0-nodeport:31090
  "31181": default/demo-cp-zookeeper-0-nodeport:31181
Add the Ingress rule:
ports:
- containerPort: 31181
  hostPort: 31181
  protocol: TCP
Test it with your external IP:
pod/zookeeper-client created
user@minikube:~$ kubectl exec -it zookeeper-client -- /bin/bash
root@zookeeper-client:/# zookeeper-shell 35.226.189.123:31181
Connecting to 35.226.189.123:31181
Welcome to ZooKeeper!
JLine support is disabled
If you have any doubts, let me know in the comments!

Unable to get ClusterIP service url from minikube

I have created a ClusterIP service according to the configuration files below; however, I can't seem to get the URL for that service from minikube.
k create -f service-cluster-definition.yaml
➜ minikube service myapp-frontend --url
😿 service default/myapp-frontend has no node port
And if I try to add nodePort into the ports section of service-cluster-definition.yaml, it complains with an error that such a key is deprecated.
What am I missing or doing wrong?
service-cluster-definition.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: etl
deployment-definition.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    env: experiment
    type: etl
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        env: experiment
        type: etl
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.1
  replicas: 3
  selector:
    matchLabels:
      type: etl
➜ k get pods --selector="app=myapp,type=etl" -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-deployment-59856c4487-2g9c7 1/1 Running 0 45m 172.17.0.9 minikube <none> <none>
myapp-deployment-59856c4487-mb28z 1/1 Running 0 45m 172.17.0.4 minikube <none> <none>
myapp-deployment-59856c4487-sqxqg 1/1 Running 0 45m 172.17.0.8 minikube <none> <none>
(⎈ |minikube:default)
Projects/experiments/kubernetes
➜ k version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
(⎈ |minikube:default)
First let's clear some concepts from Documentation:
ClusterIP: Exposes the Service on a cluster-internal IP.
Choosing this value makes the Service only reachable from within the cluster.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort).
You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort.
Question 1:
I have created a ClusterIP service according to the configuration files below; however, I can't seem to get the URL for that service from minikube.
Since Minikube is a virtualized environment on a single host we tend to forget that the cluster is isolated from the host computer. If you set a service as ClusterIP, Minikube will not give external access.
Question 2:
And if I try to add nodePort into the ports section of service-cluster-definition.yaml, it complains with an error that such a key is deprecated.
Maybe you were pasting it in the wrong position. You should just replace the field type: ClusterIP with type: NodePort. Here is the correct form of your YAML:
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: etl
Reproduction:
user@minikube:~$ kubectl apply -f deployment-definition.yaml
deployment.apps/myapp-deployment created
user@minikube:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-deployment-59856c4487-7dw6x 1/1 Running 0 5m11s
myapp-deployment-59856c4487-th7ff 1/1 Running 0 5m11s
myapp-deployment-59856c4487-zvm5f 1/1 Running 0 5m11s
user@minikube:~$ kubectl apply -f service-cluster-definition.yaml
service/myapp-frontend created
user@minikube:~$ kubectl get service myapp-frontend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-frontend NodePort 10.101.156.113 <none> 80:32420/TCP 3m43s
user@minikube:~$ minikube service list
|-------------|----------------|-----------------------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|----------------|-----------------------------|-----|
| default | kubernetes | No node port | |
| default | myapp-frontend | http://192.168.39.219:32420 | |
| kube-system | kube-dns | No node port | |
|-------------|----------------|-----------------------------|-----|
user@minikube:~$ minikube service myapp-frontend --url
http://192.168.39.219:32420
user@minikube:~$ curl http://192.168.39.219:32420
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...{{output suppressed}}...
As you can see, with the service set as NodePort, minikube starts serving the app on MinikubeIP:NodePort, routing the connection to the matching pods.
Note that the NodePort will be chosen by default from the range 30000-32767.
If you have any question let me know in the comments.
To access it from inside the cluster, run kubectl get svc to get the ClusterIP, or use the service name directly.
To access it from outside the cluster, you can use NodePort as the service type.
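For example, a quick in-cluster check could look like this sketch (the busybox image and the pod name tmp are just placeholders):
kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://myapp-frontend:80
This resolves the service by name through cluster DNS and should return the nginx welcome page.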

Why can't my service pass traffic to a pod with a named port on minikube?

I'm having trouble with the examples in section 5.1.1 Using Named Ports of Kubernetes In Action by Marko Luksa. The example goes like this:
First - Create
I'm creating a pod with a named port that runs a Node.js container that responds with You've hit <hostname> when it's hit:
apiVersion: v1
kind: Pod
metadata:
  name: named-port-pod
  labels:
    app: named-port
spec:
  containers:
  - name: kubia
    image: michaellundquist/kubia
    ports:
    - name: http
      containerPort: 8080
And a service like this (note: this is a simplified version of the original example, which also doesn't work):
apiVersion: v1
kind: Service
metadata:
  name: named-port-service
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: named-port
Second - Verify
$ kubectl get po -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
named-port-pod 1/1 Running 0 45m 172.17.0.7 minikube <none> <none> app=named-port
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 53m
named-port-service ClusterIP 10.96.115.108 <none> 80/TCP 19m
$ kubectl describe service named-port-service
Name: named-port-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=named-port
Type: ClusterIP
IP: 10.96.115.108
Port: http 80/TCP
TargetPort: http/TCP
Endpoints: 172.17.0.7:8080
Session Affinity: None
Events: <none>
Third - Test (Failing)
$ kubectl exec named-port-pod -- curl named-port-pod:8080
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 26 0 26 0 0 5494 0 --:--:-- --:--:-- --:--:-- 6500
You've hit named-port-pod
$ kubectl exec named-port-pod -- curl --max-time 20 named-port-service
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0curl: (28) Connection timed out after 20001 milliseconds
command terminated with exit code 28
As you can see, everything works when I hit named-port-pod:8080, but it fails when I hit named-port-service. I'm pretty sure I have the mapping correct, because kubectl describe service named-port-service shows the correct endpoint. I think minikube can use named ports, but my service can't pass connections to my pod. Why?
p.s here's my minikube version:
$ minikube version
minikube version: v1.6.2
commit: 54f28ac5d3a815d1196cd5d57d707439ee4bb392
This is a known issue with minikube: a pod cannot reach itself via its own service IP. You can try accessing your service from a different pod, or use the following workaround to fix this:
minikube ssh
sudo ip link set docker0 promisc on
Open issue: https://github.com/kubernetes/minikube/issues/1568
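To confirm that this is only the pod-reaching-itself problem, you can hit the service from a separate pod; a minimal sketch (the busybox image and pod name are placeholders):
kubectl run testclient --rm -it --image=busybox --restart=Never -- wget -qO- http://named-port-service
If the named-port mapping is correct, this should print "You've hit named-port-pod" even though the pod cannot reach itself through the service IP.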

How to publicly expose Traefik ingress controller on Google Cloud Container Engine?

I've been trying to use Traefik as an Ingress Controller on Google Cloud's container engine.
I got my http deployment/service up and running (when I exposed it with a normal LoadBalancer, it was answering fine).
I then removed the LoadBalancer, and followed this tutorial: https://docs.traefik.io/user-guide/kubernetes/
So I got a new traefik-ingress-controller deployment and service, and an ingress for traefik's ui which I can access through the kubectl proxy.
I then created my ingress for my http service, but here comes my issue: I can't find a way to expose it externally.
I want it to be accessible by anybody via an external IP.
What am I missing?
Here is the output of kubectl get --export all:
NAME READY STATUS RESTARTS AGE
po/mywebservice-3818647231-gr3z9 1/1 Running 0 23h
po/mywebservice-3818647231-rn4fw 1/1 Running 0 1h
po/traefik-ingress-controller-957212644-28dx6 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/mywebservice 10.51.254.147 <none> 80/TCP 1d
svc/kubernetes 10.51.240.1 <none> 443/TCP 1d
svc/traefik-ingress-controller 10.51.248.165 <nodes> 80:31447/TCP,8080:32481/TCP 25m
svc/traefik-web-ui 10.51.248.65 <none> 80/TCP 3h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mywebservice 2 2 2 2 1d
deploy/traefik-ingress-controller 1 1 1 1 3h
NAME DESIRED CURRENT READY AGE
rs/mywebservice-3818647231 2 2 2 23h
rs/traefik-ingress-controller-957212644 1 1 1 3h
You need to expose the Traefik service. Set the service spec type to LoadBalancer. Try the service file below, which I've used previously:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
    tier: proxy
  ports:
  - port: 80
    targetPort: 80
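After applying it (the manifest filename below is assumed), GKE should provision an external IP for the load balancer, which anybody can then reach:
kubectl apply -f traefik-service.yaml
kubectl get svc traefik --watch   # wait for EXTERNAL-IP to change from <pending> to a public address
Once the EXTERNAL-IP appears, point your Ingress hostnames (or a quick curl) at that address.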