I'm setting up a CentOS 7 server with Ansible 2.6 and ufw as my firewall. UFW ships with two predefined rules: SSH and mDNS.
While I can easily delete the SSH rule with my playbook:
- name: delete SSH rule by name
  ufw:
    rule: allow
    name: SSH
    delete: yes
The same approach doesn't work for the mDNS rule. The predefined ufw rules look like this:
xxx.xxx.xxx.xxx 5353/udp (mDNS) ALLOW IN Anywhere
xyz::xyz 5353/udp (mDNS) ALLOW IN Anywhere (v6)
My attempts in the playbook:
- name: delete mDNS rule by name
  ufw:
    rule: allow
    name: mDNS
    delete: yes

or

- name: delete mDNS rule
  ufw:
    rule: allow
    to_ip: xxx.xxx.xxx.xxx
    to_port: 5353
    proto: udp
    delete: yes
In both cases, Ansible reports an "ok" statement, but the mDNS rule is still present.
TASK [delete mDNS rule by name] ************
ok: [host ip]
TASK [delete mDNS rule] ************
ok: [host ip]
Is there a way to do this with Ansible? I want to automate my project as much as possible.
This worked for me:
- name: Delete UFW default IPv6 mDNS rule
  ufw:
    rule: allow
    direction: in
    dest: xxxx::xx
    name: mDNS
    delete: yes

- name: Delete UFW default IPv4 mDNS rule
  ufw:
    rule: allow
    direction: in
    dest: xxx.xxx.xxx.xxx
    name: mDNS
    delete: yes
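If you also want the playbook to confirm the rules are really gone, a small follow-up task can inspect the ufw status output. This is only a sketch; the task name and the ufw_status register variable are mine:

- name: Verify the mDNS rules are gone
  command: ufw status verbose
  register: ufw_status
  changed_when: false
  failed_when: "'mDNS' in ufw_status.stdout"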
I know it's a bit late for a response, but I just finally worked it out myself.
I have a problem. In my cluster I have a Ruby on Rails application that I want to connect to a Postgres database on the host machine (not containerized). The database is listening on the following port when I run:
sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1109/postgres
Then I created a service which maps the DB_HOST to the local machine like this:
apiVersion: v1
kind: Service
metadata:
  name: external-postgres-svc
  namespace: myapp-nm
spec:
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
I also added an endpoint:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-postgres-svc
  namespace: myapp-nm
subsets:
  - addresses:
      - ip: "10.0.2.2"
    ports:
      - port: 5432
And in my configmap I have the following config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: myapp-nm
data:
  db_host: "external-postgres-svc.myapp-nm.svc"
  db_port: "5432"
  db_username: "myuser"
  db_password: "mypass"
But when all the resources are created and the migration job runs, it never completes. After about 2-3 minutes it fails with the error:
connection to server at port 5432 failed: Operation timed out
I have added:
listen_addresses = '*'
to the /etc/postgresql/14/main/postgresql.conf, I added:
host all all 0.0.0.0/0 md5
to the /etc/postgresql/14/main/pg_hba.conf,
so I think it should listen to incoming traffic. I also ran
sudo ufw allow 5432/TCP
to allow the firewall port on my machine and I checked if the user was correct and it is, so what can be the problem?
I can connect to the database when I am not inside the cluster, using the same IP, port, username, and password.
What am I doing wrong?
The error "Operation timed out" indicates that your server failed to issue a complete response within the allowed time period.
Check the solutions below:
1) Check whether the /etc directory is owned by another user; it needs to be owned by root. Also open port 5432. (Add the hostname and IP details of the host to /etc/hosts. In a multi-node cluster, /etc/hosts on each machine has to be updated with the details of all cluster nodes.)
2) Check whether the route to the database server is being blocked by a firewall. Make sure you set a firewall rule that allows the Ruby on Rails application to connect (see the connectivity sketch after this list).
3) If the connection is not local-only, check the connection settings in postgresql.conf: #listen_addresses = 'localhost' should be uncommented and set to listen_addresses = '*'. Save the file and restart the service; the sketch after this list also shows how to verify the new bind address.
4) Check for SSH tunnel issues:
Verify that the attributes of the ssh process match what was provided. Look at the output of ps aux | grep ssh; the relevant part is:
-L number_1:string_or_number_1:number_2 ... KnownHostsFile=/dev/null string_or_number_2
number_1: Looker-side port number
string_or_number_1: database host name or IP address
number_2: database-side port number
string_or_number_2: tunnel server host name or IP address
How to manually set the port:
Note: this is not recommended, since Looker may still set the port back again; it is better to open the needed ports on the DB.
Create a tunnel through the Looker database connections UI.
Update the tunnel via the API (PATCH /api/4.0/connections/:connection_name) after it has been created.
Set the desired local_host_port.
Make sure db_connection and the following fields are set, using the API GET /api/4.0/ssh_tunnel/:ssh_tunnel_id or by checking from go-ssh-sidecar:
tunnel_id: ID of the tunnel
port: the new local_host_port
host: localhost
Also check for improper tunnel migration. The following types of issues may be related to SSH tunnels on a newly migrated Kubernetes cluster:
1. The tunnel is not migrated, or tunnels are only partially migrated.
2. The tunnel is migrated with incorrect information.
3. IPs have changed and are not documented or announced to customers.
4. Public keys have changed (specific to migrated tunnels).
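To narrow down points 2) and 3) quickly, the following sketch checks both sides. The nicolaka/netshoot image is just a convenient debug container; the service name and namespace come from the manifests in the question:

# From inside the cluster: can a pod reach the host database through the Service?
kubectl run tmp-shell --rm -it -n myapp-nm --image nicolaka/netshoot -- \
  nc -vz external-postgres-svc.myapp-nm.svc 5432

# On the host: is Postgres actually bound to all interfaces after the restart?
sudo ss -tlnp | grep 5432   # should show 0.0.0.0:5432, not only 127.0.0.1:5432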
I am trying to run the bookinfo example locally with WSL2 and Docker Desktop. I am having issues accessing the productpage service via the gateway: I get connection refused. I am not sure whether I missed anything. Here is what I have done after a lot of googling:
Deployed all services from the bookinfo example; everything is up and running, and I can curl productpage from another service using kubectl exec.
Deployed bookinfo-gateway using the file from the example, without any change, under the default namespace:
Name:         bookinfo-gateway
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.istio.io/v1beta1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2021-06-06T20:47:18Z
  Generation:          1
  Managed Fields:
    API Version:  networking.istio.io/v1alpha3
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:selector:
          .:
          f:istio:
        f:servers:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2021-06-06T20:47:18Z
  Resource Version:  2053564
  Self Link:         /apis/networking.istio.io/v1beta1/namespaces/default/gateways/bookinfo-gateway
  UID:               aa390a1d-2e34-4599-a1ec-50ad7aa9bdc6
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP
Events:  <none>
The istio-ingressgateway is exposed to the outside via localhost on port 80 (not sure how this can be configured, as it is deployed during the Istio installation), which, as I understand it, will be used by bookinfo-gateway:
kubectl get svc istio-ingressgateway -n istio-system
following the "Determining the ingress IP and ports" section in the instructions.
My INGRESS_HOST is 127.0.0.1 and my INGRESS_PORT is 80.
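(For reference, that section determines the two values roughly as follows; http2 is the name Istio gives the HTTP port of the ingress gateway service by default, so adjust the jsonpath if your install differs:)

export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')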
curl -v -s http://127.0.0.1:80/productpage | grep -o "<title>.*</title>"
* Trying 127.0.0.1:80...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to 127.0.0.1 port 80: Connection refused
* Closing connection 0
Trying http://127.0.0.1/productpage in a browser returns 404. Does this 404 mean the gateway is sort of up but the virtual service is not working?
A further question, if it is relevant: I am a bit confused about how WSL2 works now. It looks like localhost in the Windows browser and in the WSL2 terminal are not the same thing, though I know there is some kind of forwarding from Windows to the WSL2 server (whose IP I can get from /etc/resolv.conf). If they are the same, why does one return connection refused and the other return 404?
On Windows I have tried to disable IIS and anything else running on port 80 (net stop http). Somehow, I can still see something listening on port 80:
netstat -aon | findstr :80
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 4
tasklist /svc /FI "PID eq 4"
Image Name PID Services
========================= ======== ============================================
System 4 N/A
I am wondering whether this is what causes the difference in point 7, since Windows is running another HTTP server on port 80?
I know this is a lot of questions. I believe many of us who are new to Istio and WSL2 may have similar questions. Hopefully this helps others as well. Please advise.
There seems to be a problem with WSL2 itself, probably connected with "Local sites running in WSL2 not accessible in browser" (#5298).
You can work around that by issuing
ip addr show
in your WSL distro and replacing 127.0.0.1/localhost with the eth0 address. In my case it is 172.21.29.254, so the URL is http://172.21.29.254/productpage.
This workaround worked for me.
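A quick way to print just that address (assuming the interface is named eth0, as it usually is in a WSL2 distro):

ip -4 addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'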
I managed to get this working. This is what I did:
Shell into the distro (mine was Ubuntu 20.04 LTS)
Run:
sudo apt-get -y install socat
sudo apt update
sudo apt upgrade
exit
The above installs socat (which I was getting "connection refused" errors about when looking at the Istio logs) and brings the distro up to date with the latest updates (and upgrades them).
Now you have to run a port-forward so that localhost:<port> hits the Istio gateway:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
If you are already using 8080, just remove it from the command and use :80 on its own; the port-forward will then select a free local port.
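That variant looks like this (kubectl prints the local port it picked):

kubectl port-forward svc/istio-ingressgateway :80 -n istio-system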
Now go to
http://localhost:8080/productpage
You should hit the page and the port-forward should output
Handling connection for 8080
Hope that helps...
The good thing is that now I don't have to use Hyper-V or another cluster installer like minikube/microk8s; I can use the built-in Kubernetes in Docker Desktop, and my laptop doesn't seem under load for what I'm doing either.
I'm trying to build a simple mongo replica set cluster in Kubernetes.
I have a StatefulSet of mongod instances, with:
livenessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - mongo
      - --eval
      - "db.adminCommand('ping')"
readinessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - /usr/bin/mongo --quiet --eval 'rs.status()' | grep ok | cut -d ':' -f 2 | tr -dc '0-9' | awk '{ if($0=="0"){ exit 127 }else{ exit 0 } }'
As you can see, my readinessProbe checks whether the mongo replica set is working correctly.
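(As an aside, an exec probe does not run through a shell, so a pipeline like the one above presumably has to be wrapped in sh -c to behave as written; a sketch of that wrapping:)

readinessProbe:
  initialDelaySeconds: 60
  exec:
    command:
      - /bin/sh
      - -c
      - >-
        /usr/bin/mongo --quiet --eval 'rs.status()' | grep ok | cut -d ':' -f 2 |
        tr -dc '0-9' | awk '{ if($0=="0"){ exit 127 }else{ exit 0 } }'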
However, I get a circular dependency, with an (existing) cluster reporting:
"lastHeartbeatMessage" : "Error connecting to mongo-2.mongo:27017 :: caused by :: Could not find address for mongo-2.mongo:27017: SocketException: Host not found (authoritative)",
(where mongo-2 was undergoing a rolling update).
Looking further:
$ kubectl run --generator=run-pod/v1 tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
bash-5.0# nslookup mongo-2.mongo
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mongo-2.mongo: NXDOMAIN
bash-5.0# nslookup mongo-0.mongo
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local
Address: 10.27.137.6
So the question is whether there is a way to get Kubernetes to always keep the DNS entries for the mongo pods present. It appears that I have a chicken-and-egg situation: until the pod has passed its readiness and liveness checks, its DNS entry is not created, and hence the other mongod instances cannot reach it.
I ended up just putting in a ClusterIP Service for each of the StatefulSet instances, with a selector for the specific instance, i.e.:
apiVersion: v1
kind: Service
metadata:
  name: mongo-0
spec:
  clusterIP: 10.101.41.87
  ports:
    - port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    role: mongo
    statefulset.kubernetes.io/pod-name: mongo-0
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
and repeat for the other StatefulSet instances. The key here is the selector:
statefulset.kubernetes.io/pod-name: mongo-0
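Another option I have seen, which I did not use here, is to set publishNotReadyAddresses: true on the headless Service that governs the StatefulSet, so DNS publishes the per-pod records even before the pods pass their readiness probes. A sketch (the Service name and selector are assumed to match the StatefulSet):

apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None                 # headless: gives per-pod DNS records like mongo-0.mongo
  publishNotReadyAddresses: true  # publish the records before readiness passes
  selector:
    role: mongo
  ports:
    - port: 27017
      targetPort: 27017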
I believe you are misinterpreting the error.
Could not find address for mongo-2.mongo:27017: SocketException: Host not found (authoritative)"
The pod is created with an IP attached. Then it's registered into DNS:
Pod-0 has the IP 10.0.0.10 and its FQDN is now Pod-0.servicename.namespace.svc.cluster.local
Pod-1 has the IP 10.0.0.11 and its FQDN is now Pod-1.servicename.namespace.svc.cluster.local
Pod-2 has the IP 10.0.0.12 and its FQDN is now Pod-2.servicename.namespace.svc.cluster.local
But DNS is a live service, IPs are dynamically assigned and can't be duplicated.
So whenever it receives a request:
"Connect me with Pod-A.servicename.namespace.svc.cluster.local"
It tries to reach the registered IP, and if the Pod is offline due to a rolling update, it will consider the pod unavailable and return "Could not find the address (IP) for Pod-0.servicename" until the pod is online again, or until the IP reservation expires; only then is the DNS record recycled.
DNS is not discarding the registered name; it is only answering that the pod is currently offline.
You can either ignore the errors during the rolling update, or rethink your script and try using the internal JS environment, as mentioned in the comments, for continuous monitoring of the mongo status.
EDIT:
When Pods from a StatefulSet with N replicas are being deployed, they are created sequentially, in order from {0..N-1}.
When Pods are being deleted, they are terminated in reverse order, from {N-1..0}.
This is the expected/desired default behavior.
So the error is expected, since the rollingUpdate makes the pod temporarily unavailable.
Cluster setup:
OS: Ubuntu 18.04, w/ Kubernetes recommended install settings
Cluster is bootstrapped with Kubespray
CNI is Calico
Quick Facts (when redis service ip is 10.233.90.37):
Host machine: psql 10.233.90.37:6379 => success
Host machine: psql 10.233.90.37:80 => success
Pods (in any namespace) psql 10.233.90.37:6379 => timeout
Pods (in any namespace) psql redis:6379 => timeout
Pods (in any namespace) psql redis.namespace.svc.cluster.local => timeout
Pods (in any namespace) psql redis:80 => success
Pods (in any namespace) psql redis.namespace.svc.cluster.local:80 => success
A Kubernetes Service (NodePort, LoadBalancer, ClusterIP) will not forward ports other than 80 and 443 for pod clients. The pods' target ports can be different, but requests to the Service time out if the Service port is not 80 or 443.
Requests from the host machine to a Kubernetes Service on ports other than 80 and 443 work. BUT requests from pods to these other ports fail.
Requests from pods to services on ports 80 and 443 do work.
user#host: curl 10.233.90.37:80
200 OK
user#host: curl 10.233.90.37:5432
200 OK
# ... exec into Pod
bash-4.4# curl 10.233.90.37:80
200 OK
bash-4.4# curl 10.233.90.37:5432
Error ... timeout ...
user#host: kubectl get NetworkPolicy -A
No resources found.
user#host: kubectl get PodSecurityPolicy -A
No resources found.
Example service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
  namespace: namespace
spec:
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
      name: redis
    - port: 80
      protocol: TCP
      targetPort: 6379
      name: http
  selector:
    app: redis
  type: NodePort # I've tried ClusterIP, NodePort, and LoadBalancer
What's going on with this crazy Kubernetes Service port behavior!?
After debugging, I've found that it may be related to ufw and iptables config.
ufw settings (very permissive):
Status: enabled
80 ALLOW Anywhere
443 ALLOW Anywhere
6443 ALLOW Anywhere
2379 ALLOW Anywhere
2380 ALLOW Anywhere
10250/tcp ALLOW Anywhere
10251/tcp ALLOW Anywhere
10252/tcp ALLOW Anywhere
10255/tcp ALLOW Anywhere
179 ALLOW Anywhere
5473 ALLOW Anywhere
4789 ALLOW Anywhere
10248 ALLOW Anywhere
22 ALLOW Anywhere
80 (v6) ALLOW Anywhere (v6)
443 (v6) ALLOW Anywhere (v6)
6443 (v6) ALLOW Anywhere (v6)
2379 (v6) ALLOW Anywhere (v6)
2380 (v6) ALLOW Anywhere (v6)
10250/tcp (v6) ALLOW Anywhere (v6)
10251/tcp (v6) ALLOW Anywhere (v6)
10252/tcp (v6) ALLOW Anywhere (v6)
10255/tcp (v6) ALLOW Anywhere (v6)
179 (v6) ALLOW Anywhere (v6)
5473 (v6) ALLOW Anywhere (v6)
4789 (v6) ALLOW Anywhere (v6)
10248 (v6) ALLOW Anywhere (v6)
22 (v6) ALLOW Anywhere (v6)
Kubespray deployment fails with ufw disabled. Kubespray deployment succeeds with ufw enabled.
Once deployed, disabling ufw will allow pods to connect on ports other than 80, 443. However, the cluster crashes when ufw is disabled.
Any idea what's going on? Am I missing a port in the ufw config? It seems weird that ufw would be required for the Kubespray install to succeed.
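For completeness, one thing that could be tried is opening the cluster's own subnets in ufw rather than individual ports. The CIDRs below are Kubespray's defaults and may not match this cluster, so adjust them:

sudo ufw allow from 10.233.64.0/18   # default Kubespray pod subnet
sudo ufw allow from 10.233.0.0/18    # default Kubespray service subnet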
A LoadBalancer service exposes one external IP which external clients or users will use to connect to your app. In most cases you would expect your LoadBalancer service to listen on port 80 for HTTP traffic and port 443 for HTTPS, because you would want your users to type http://yourapp.com or https://yourapp.com instead of http://yourapp.com:3000.
It looks like you are mixing different service types in your example service YAML; for example, nodePort is only used when the service is of type NodePort. You may try:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
    role: master
    tier: backend
  name: redis
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 6379  # service will target containers on port 6379
      name: someName
  selector:
    app: redis
    role: master
    tier: backend
  type: LoadBalancer
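Once the cloud provider has provisioned the load balancer, you can look up the external address and test the mapping, for example (redis-cli is just one way to check, and <external-ip> is a placeholder):

kubectl get svc redis                    # wait until the EXTERNAL-IP column is populated
redis-cli -h <external-ip> -p 80 ping    # should answer PONG through the port-80 -> 6379 mapping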
I have a kubernetes cluster running with 2 minions.
Currently I make my service accessible in 2 steps:
Start replication controller & pod
Get minion IP (using kubectl get minions) and set it as publicIPs for the Service.
What is the suggested practice for exposing a service to the public? My approach seems wrong because I hard-code the IPs of individual minions. It also seems to bypass the load-balancing capabilities of Kubernetes services, because clients would have to access services running on individual minions directly.
To set up the replication controller & pod I use:
id: frontend-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2
  replicaSelector:
    name: frontend-pod
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: frontend-pod
        containers:
          - name: sinatra-docker-demo
            image: madisn/sinatra_docker_demo
            ports:
              - name: http-server
                containerPort: 4567
    labels:
      name: frontend-pod
To set up the service (after getting the minion IPs):
kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
labels:
  name: frontend
publicIPs: [10.245.1.3, 10.245.1.4]
As I mentioned in the comment above, the createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular for vagrant, which you are using locally.
One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.
As Robert said in his reply this is something that is coming up, but unfortunately isn't available yet.
I am currently running a Kubernetes cluster on our datacenter network. I have 1 master and 3 minions all running on CentOS 7 virtuals (vcenter). The way I handled this was to create a dedicated "kube-proxy" server. I basically am just running the Kube-Proxy service (along with Flannel for networking) and then assigning "public" IP addresses to the network adapter attached to this server. When I say public I mean addresses on our local datacenter network. Then when I create a service that I would like to access outside of the cluster I just set the publicIPs value to one of the available IP addresses on the kube-proxy server. When someone or something attempts to connect to this service from outside the cluster it will hit the kube-proxy and then be redirected to the proper minion.
While this might seem like a work around, this is actually similar to what I would expect to be happening once they come up with a built in solution to this issue.
If you're running a cluster locally, a solution I used was to expose the service on your Kubernetes nodes using the NodePort directive in your service definition, and then round-robin to every node in your cluster with HAProxy.
Here's what exposing the nodeport looks like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-s
  labels:
    name: nginx-s
spec:
  type: NodePort
  ports:
    # must match the port your container is on in your replication controller
    - port: 80
      nodePort: 30000
  selector:
    name: nginx-s
Note: the value you specify must be within the configured range for node ports. (default: 30000-32767)
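(That range is configured on the API server; if you need a different one, it is set with a flag along these lines:)

kube-apiserver ... --service-node-port-range=30000-32767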
This exposes the service on the given nodeport on every node in your cluster. Then I set up a separate machine on the internal network running haproxy and a firewall that's reachable externally on the specified nodeport(s) you want to expose.
If you look at your nat table on one of your hosts, you can see what it's doing.
root#kube01:~# kubectl create -f nginx-s.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30000) to serve traffic.
See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
services/nginx-s
root#kube01:~# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-PORTALS-CONTAINER all -- anywhere anywhere /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-CONTAINER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-PORTALS-HOST all -- anywhere anywhere /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER all -- anywhere !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-HOST all -- anywhere anywhere ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-NODEPORT-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 redir ports 42422
Chain KUBE-NODEPORT-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 192.168.3.1 /* default/kubernetes: */ tcp dpt:https redir ports 51751
REDIRECT tcp -- anywhere 192.168.3.192 /* default/nginx-s: */ tcp dpt:http redir ports 42422
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 192.168.3.1 /* default/kubernetes: */ tcp dpt:https to:169.55.21.75:51751
DNAT tcp -- anywhere 192.168.3.192 /* default/nginx-s: */ tcp dpt:http to:169.55.21.75:42422
root#kube01:~#
Particularly this line
DNAT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422
And finally, if you look at netstat, you can see kube-proxy is listening and waiting for that service on that port.
root#kube01:~# netstat -tupan | grep 42422
tcp6 0 0 :::42422 :::* LISTEN 20748/kube-proxy
root#kube01:~#
Kube-proxy will listen on a port for each service, and do network address translation into your virtual subnet that your containers reside in. (I think?) I used flannel.
For a two-node cluster, that HAProxy configuration might look similar to this:
listen sampleservice 0.0.0.0:80
    mode http
    stats enable
    balance roundrobin
    option httpclose
    option forwardfor
    server noname 10.120.216.196:30000 check
    server noname 10.155.236.122:30000 check
    option httpchk HEAD /index.html HTTP/1.0
And your service is now reachable on port 80 via HAProxy. If any of your nodes go down, the containers will be moved to another node thanks to replication controllers, and HAProxy will only route to the nodes that are alive.
I'm curious what methods others have used though, that's just what I came up with. I don't usually post on stack overflow, so apologies if I'm not following conventions or proper formatting.
This is for MrE. I did not have enough space in the comments area to post this answer so I had to create another answer. Hope this helps:
We have actually moved away from Kubernetes since posting this reply. If I remember correctly, though, all I really had to do was run the kube-proxy executable on a dedicated CentOS VM. Here is what I did:
First I removed Firewalld and put iptables in place. Kube-proxy relies on iptables to handle its NAT and redirections.
Second, you need to install flanneld so you can have a bridge adapter on the same network as the Docker services running on your minions.
Then what I did was assign multiple IP addresses to the local network adapter installed on the machine. These will be the IP addresses you can use when setting up a service, and they will be the addresses available OUTSIDE your cluster.
Once that is all taken care of, you can start the proxy service. It will connect to the master and grab an IP address for the flannel bridge network. Then it will sync up all the iptables rules and you should be set. Every time a new service is added, it will create the proxy rules and replicate those rules across all minions (and your proxy). As long as you specified an IP address that is available on your proxy server, that proxy server will forward all traffic for that IP address over to the proper minion.
Hope this is a little clearer. Remember, though, that I have not been part of the Kubernetes project for about 6 months now, so I am not sure what changes have been made since I left. They might even have a feature in place that handles this sort of thing. If not, hopefully this helps you get it taken care of.
You can use an Ingress resource to allow external connections from outside of a Kubernetes cluster to reach cluster services.
Assuming that you already have a Pod deployed, you now need a Service resource, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    tier: frontend
spec:
  type: ClusterIP
  selector:
    name: frontend-pod
  ports:
    - name: http
      protocol: TCP
      # the port that will be exposed by this service
      port: 8000
      # port in a docker container; defaults to what "port" has set
      targetPort: 8000
And you need an Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-service
              # the targetPort from service (the port inside a container)
              servicePort: 8000
In order to be able to use Ingress resources, you need some ingress controller deployed.
Now, providing that you know your Kubernetes master IP, you can access your application from outside of a Kubernetes cluster with:
curl http://<master_ip>:80/ -H 'Host: foo.bar.com'
If you use some DNS server, you can add this record: foo.bar.com IN A <master_ip> or add this line to your /etc/hosts file: <master_ip> foo.bar.com and now you can just run:
curl foo.bar.com
Notice that this way you will always access foo.bar.com on port 80. If you want to use some other port, I recommend using a Service of type NodePort just for that one non-80 port. It will make that port reachable no matter which Kubernetes VM IP you use (any master or any minion IP is fine). Example of such a Service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service-ssh
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    name: frontend-pod
  ports:
    - name: ssh
      targetPort: 22
      port: 22
      nodePort: 2222
      protocol: TCP
And if you have <master_ip> foo.bar.com in your /etc/hosts file, then you can access: foo.bar.com:2222