How to expose a Kubernetes service to the public without hardcoding minion IPs?

I have a kubernetes cluster running with 2 minions.
Currently I make my service accessible in 2 steps:
Start replication controller & pod
Get minion IP (using kubectl get minions) and set it as publicIPs for the Service.
What is the suggested practice for exposing a service to the public? My approach seems wrong because I hard-code the IPs of individual minions. It also seems to bypass the load-balancing capabilities of Kubernetes services, because clients would have to access services running on individual minions directly.
To set up the replication controller & pod I use:
id: frontend-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2
  replicaSelector:
    name: frontend-pod
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: frontend-pod
        containers:
          - name: sinatra-docker-demo
            image: madisn/sinatra_docker_demo
            ports:
              - name: http-server
                containerPort: 4567
    labels:
      name: frontend-pod
To set up the service (after getting the minion IPs):
kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
labels:
  name: frontend
publicIPs: [10.245.1.3, 10.245.1.4]

As I mentioned in the comment above, createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular not for Vagrant, which you are using locally.
One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.

As Robert said in his reply, this is something that is coming, but unfortunately isn't available yet.
I am currently running a Kubernetes cluster on our datacenter network, with 1 master and 3 minions, all running on CentOS 7 VMs (vCenter). The way I handled this was to create a dedicated "kube-proxy" server: I run just the kube-proxy service (along with Flannel for networking) and assign "public" IP addresses to the network adapter attached to this server. When I say public, I mean addresses on our local datacenter network. When I create a service that I would like to access outside of the cluster, I just set its publicIPs value to one of the available IP addresses on the kube-proxy server. When someone or something attempts to connect to this service from outside the cluster, it hits the kube-proxy and is then redirected to the proper minion.
While this might seem like a workaround, it is actually similar to what I would expect to happen once they come up with a built-in solution to this issue.

If you're running a cluster locally, a solution I used was to expose the service on your Kubernetes nodes using the NodePort type in your service definition, and then round-robin to every node in your cluster with HAProxy.
Here's what exposing the NodePort looks like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-s
  labels:
    name: nginx-s
spec:
  type: NodePort
  ports:
    # must match the port your container is on in your replication controller
    - port: 80
      nodePort: 30000
  selector:
    name: nginx-s
Note: the value you specify must be within the configured range for node ports. (default: 30000-32767)
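If you need a NodePort outside that window, the range is controlled by a kube-apiserver flag; a sketch (the value shown is just the default range, adjust as needed):

```
--service-node-port-range=30000-32767
```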
This exposes the service on the given NodePort on every node in your cluster. Then I set up a separate machine on the internal network running HAProxy, and a firewall that's reachable externally on the specified NodePort(s) you want to expose.
If you look at the NAT table on one of your hosts, you can see what it's doing.
root@kube01:~# kubectl create -f nginx-s.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30000) to serve traffic.
See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
services/nginx-s
root@kube01:~# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-PORTALS-CONTAINER all -- anywhere anywhere /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-CONTAINER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-PORTALS-HOST all -- anywhere anywhere /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER all -- anywhere !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-HOST all -- anywhere anywhere ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-NODEPORT-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 redir ports 42422
Chain KUBE-NODEPORT-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 192.168.3.1 /* default/kubernetes: */ tcp dpt:https redir ports 51751
REDIRECT tcp -- anywhere 192.168.3.192 /* default/nginx-s: */ tcp dpt:http redir ports 42422
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 192.168.3.1 /* default/kubernetes: */ tcp dpt:https to:169.55.21.75:51751
DNAT tcp -- anywhere 192.168.3.192 /* default/nginx-s: */ tcp dpt:http to:169.55.21.75:42422
root@kube01:~#
Particularly this line
DNAT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422
And finally, if you look at netstat, you can see kube-proxy is listening and waiting for that service on that port.
root@kube01:~# netstat -tupan | grep 42422
tcp6 0 0 :::42422 :::* LISTEN 20748/kube-proxy
root@kube01:~#
Kube-proxy listens on a port for each service and does network address translation into the virtual subnet your containers reside in. (I think?) I used Flannel.
For a two-node cluster, that HAProxy configuration might look similar to this:
listen sampleservice 0.0.0.0:80
    mode http
    stats enable
    balance roundrobin
    option httpclose
    option forwardfor
    option httpchk HEAD /index.html HTTP/1.0
    server node1 10.120.216.196:30000 check
    server node2 10.155.236.122:30000 check
And your service is now reachable on port 80 via HAProxy. If any of your nodes go down, the containers will be moved to another node thanks to the replication controllers, and HAProxy will only route to the nodes that are alive.
I'm curious what methods others have used though; that's just what I came up with. I don't usually post on Stack Overflow, so apologies if I'm not following conventions or proper formatting.

This is for MrE. I did not have enough space in the comments area to post this, so I had to create another answer. Hope this helps:
We have actually moved away from Kubernetes since posting this reply. If I remember correctly, though, all I really had to do was run the kube-proxy executable on a dedicated CentOS VM. Here is what I did:
First, I removed firewalld and put iptables in place; kube-proxy relies on iptables to handle its NAT and redirections.
Second, you need to install flanneld so you can have a bridge adapter on the same network as the Docker services running on your minions.
Then I assigned multiple IP addresses to the machine's local network adapter. These are the addresses you can use when setting up a service, and they are the addresses available OUTSIDE your cluster.
Once that is all taken care of, you can start the proxy service. It will connect to the master and grab an IP address for the flannel bridge network. Then it will sync up all the iptables rules, and you should be set. Every time a new service is added, it will create the proxy rules and replicate those rules across all minions (and your proxy). As long as you specified an IP address available on your proxy server, that proxy server will forward all traffic for that IP address over to the proper minion.
Hope this is a little clearer. Remember, though, that I have not been part of the Kubernetes project for about 6 months now, so I am not sure what changes have been made since I left. They might even have a feature in place that handles this sort of thing. If not, hopefully this helps you get it taken care of.
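To make the steps above concrete, a service definition for this setup might look like the following sketch (using the v1beta1 schema from the question; 10.10.10.50 is a hypothetical address assigned to the dedicated kube-proxy VM's adapter, not from the original post):

```yaml
kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
# an address assigned to the dedicated kube-proxy VM,
# reachable on the datacenter network from outside the cluster
publicIPs: [10.10.10.50]
```

Traffic to 10.10.10.50:8000 then hits the proxy VM, where kube-proxy's iptables rules forward it to a matching pod on one of the minions.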

You can use an Ingress resource to allow external connections from outside of a Kubernetes cluster to reach the cluster services.
Assuming that you already have a Pod deployed, you now need a Service resource, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    tier: frontend
spec:
  type: ClusterIP
  selector:
    name: frontend-pod
  ports:
    - name: http
      protocol: TCP
      # the port that will be exposed by this service
      port: 8000
      # port in the container; defaults to the value of "port"
      targetPort: 8000
And you need an Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-service
              # the Service's port (which in turn forwards to the container's targetPort)
              servicePort: 8000
In order to be able to use Ingress resources, you need some ingress controller deployed.
Now, providing that you know your Kubernetes master IP, you can access your application from outside of a Kubernetes cluster with:
curl http://<master_ip>:80/ -H 'Host: foo.bar.com'
If you use some DNS server, you can add this record: foo.bar.com IN A <master_ip>, or add this line to your /etc/hosts file: <master_ip> foo.bar.com, and then you can just run:
curl foo.bar.com
Notice that this way you will always access foo.bar.com on port 80. If you want to use some other port, I recommend a Service of type NodePort, just for that one non-80 port. It makes the port reachable no matter which Kubernetes VM IP you use (any master or any minion IP is fine). Example of such a Service (note that a nodePort of 2222 lies outside the default 30000-32767 range, so the apiserver's --service-node-port-range would need to allow it):
apiVersion: v1
kind: Service
metadata:
  name: frontend-service-ssh
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    name: frontend-pod
  ports:
    - name: ssh
      targetPort: 22
      port: 22
      nodePort: 2222
      protocol: TCP
And if you have <master_ip> foo.bar.com in your /etc/hosts file, then you can access: foo.bar.com:2222


Access kubernetes external IP from the internet

I am currently setting up a Kubernetes cluster (bare Ubuntu servers). I deployed MetalLB and ingress-nginx to handle IP and service routing, and that seems to work fine: I get a response from nginx when I wget the external IP of the ingress-nginx-controller service (this works on every node). But it only works inside the cluster network. How do I access my services (the ingress-nginx-controller, since it does the routing) from the internet through a node/master server's IP? I tried to set up routing with iptables, but it doesn't seem to work. What am I doing wrong, and is this best practice?
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -i eth0 -p tcp -d <Servers IP> --dport 80 -j DNAT --to <ExternalIP of nginx>:80
iptables -A FORWARD -p tcp -d <ExternalIP of nginx> --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -F
Here are some more information:
kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.103.219.111 198.51.100.1 80:31872/TCP,443:31897/TCP 41h
ingress-nginx-controller-admission ClusterIP 10.108.194.136 <none> 443/TCP 41h
Please share some thoughts
Jonas
Bare-metal clusters are a bit tricky to set up because you need to create and manage the point of contact for your services yourself. In a cloud environment these are available on demand.
I followed this doc and can assume that your load balancer is working fine, as you are able to curl this IP address. However, you are trying to get a response when calling a domain. For this you need some app running inside your cluster, which is exposed to a hostname via an Ingress component.
I'll take you through steps to achieve that.
First, create a Deployment to run a webservice; I'm going to use a simple nginx example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
Second, create a Service of type LoadBalancer to be able to access it externally. You can do that by simply running this command:
kubectl expose deployment nginx-deployment --type=LoadBalancer --name=<service_name>
If your software load balancer is set up correctly, this should assign an external IP address to the Service exposing the Deployment you created before.
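The same Service can also be written declaratively; a minimal sketch, assuming the nginx-deployment above (the service name is a placeholder for your <service_name>):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # substitute your <service_name>
spec:
  type: LoadBalancer
  selector:
    app: nginx          # must match the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
```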
Last but not least, create an Ingress resource, which will manage external access and name-based virtual hosting. Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <ingress_name>
spec:
  rules:
    - host: <your_domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: <service_name>
                port:
                  number: 80
Now, you should be able to use your domain name as an external access to your cluster.
I ended up installing HAProxy on the machine I want to resolve my domain to. HAProxy listens on ports 80 and 443 and forwards all traffic to the external IP of my ingress controller. You can also do this on multiple machines with DNS failover for high availability.
My haproxy.cfg:
frontend unsecure
    bind 0.0.0.0:80
    default_backend unsecure_pass
backend unsecure_pass
    server unsecure_pass 198.51.100.0:80
frontend secure
    bind 0.0.0.0:443
    default_backend secure_pass
backend secure_pass
    server secure_pass 198.51.100.0:443

HAProxy Not Working with Kubernetes NodePort for Backend (Bare Metal)

I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local  # Cluster or Local
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully (curl 10.0.0.20:30080) and I get "Welcome to nginx!...". However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and the HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without opening that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080

How are the various Istio Ports used?

Question
I am trying to learn Istio and I am setting up my Istio Ingress-Gateway. When I set that up, there are the following port options (as indicated here):
Port
NodePort
TargetPort
NodePort makes sense to me. That is the port that the Ingress-Gateway will listen to on each worker node in the Kubernetes cluster. Requests that hit there are going to route into the Kubernetes cluster using the Ingress Gateway CRDs.
In the examples, Port is usually set to the common port for its matching traffic (80 for HTTP, 443 for HTTPS, etc.). I don't understand what Istio needs this port for, as I don't see any traffic using anything but the NodePort.
TargetPort is a mystery to me. I have seen some documentation on it for normal Istio Gateways (that says it is only applicable when using ServiceEntries), but nothing that makes sense for an Ingress-Gateway.
My question is this, in relation to an Ingress-Gateway (not a normal Gateway) what is a TargetPort?
More Details
In the end, I am trying to debug why my ingress traffic is getting a "connection refused" response.
I set up my Istio Operator following this tutorial with this configuration:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  components:
    ingressGateways:
      - enabled: true
        k8s:
          service:
            ports:
              - name: http2
                port: 80
                nodePort: 30980
          hpaSpec:
            minReplicas: 2
        name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
  profile: default
I omitted the TargetPort from my config because I found this release note that said Istio will pick safe defaults.
With that I tried to follow the steps found in this tutorial.
I tried the curl command indicated in that tutorial:
curl -s -I -H Host:httpbin.example.com "http://10.20.30.40:30980/status/200"
I got the response of Failed to connect to 10.20.30.40 port 30980: Connection refused
But I can ping 10.20.30.40 fine, and the command to get the NodePort returns 30980.
So I got to thinking that maybe this is an issue with the TargetPort setting that I don't understand.
A check of the istiod logs hinted that I may be on the right track. I ran:
kubectl logs -n istio-system -l app=istiod
and among the logs I found:
warn buildGatewayListeners: skipping privileged gateway port 80 for node istio-ingressgateway-dc748bc9-q44j7.istio-system as it is an unprivileged pod
warn gateway has zero listeners for node istio-ingressgateway-dc748bc9-q44j7.istio-system
So, if you got this far, then WOW! I thank you for reading it all. If you have any suggestions on what I need to set TargetPort to, or if I am missing something else, I would love to hear it.
Port, NodePort, and TargetPort are not Istio concepts but Kubernetes ones (specifically, of Kubernetes Services), which is why there is no detailed description of them in the Istio Operator API.
The Istio Operator API exposes the options to configure the (Kubernetes) Service of the Ingress Gateway.
For a description of those concepts, see the documentation for Kubernetes Service.
See also
Difference between targetPort and port in Kubernetes Service definition
So the target port is where the containers of the Ingress Gateway's Pod receive their traffic.
Therefore I think that the configuration of ports and target ports is application specific, and the mapping 80->8080 is more or less arbitrary, i.e. a "decision" of the application.
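As an illustration of that mapping, a minimal Service sketch (names here are hypothetical, not from the question) where clients talk to port 80 and the selected pod's container listens on 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
    - port: 80          # the port the Service exposes on its cluster IP / load balancer
      targetPort: 8080  # the port the selected pods' containers actually listen on
```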
Additional details:
The Istio Operator describes the Ingress Gateway, which itself consists of a Kubernetes Service and a Kubernetes Deployment. Usually it is deployed in istio-system. You can inspect the Kubernetes Service of istio-ingressgateway and it will match the specification of that YAML.
Therefore the Istio Ingress Gateway is actually talking to its containers.
However, this is mostly an implementation detail of the Istio Ingress Gateway and is not related to a Service and a VirtualService which you define for your apps.
The Ingressgateway is itself a Service and receives traffic on the port you define (i.e. 80) and forwards it to 8080 on its containers. Then it processes the traffic according to the rules which are configured by Gateways and VirtualServices and sends it to the Service of the application.
I still don't really understand what TargetPort is doing, but I got the tutorial working.
I went back and uninstalled Istio (by deleting the operator configuration and then the istio namespaces). I then re-installed it, but I took out the part of my configuration that specified the NodePort.
I then ran kubectl get service istio-ingressgateway -n istio-system -o yaml. That showed me what the Istio ingress gateway was using as its defaults for the ports. I then went and updated my YAML for the operator to match (except for my desired custom NodePort). That worked.
In the end, the yaml looked like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  components:
    ingressGateways:
      - enabled: true
        k8s:
          service:
            ports:
              - name: status-port
                nodePort: 32562
                port: 15021
                protocol: TCP
                targetPort: 15021
              - name: http2
                nodePort: 30980
                port: 80
                protocol: TCP
                targetPort: 8080
              - name: https
                nodePort: 32013
                port: 443
                protocol: TCP
                targetPort: 8443
          hpaSpec:
            minReplicas: 2
        name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
  profile: default
I would still like to understand what the TargetPort is doing. So if anyone can answer with that (again, in context of the Istio Ingress Gateway service (not an istio gateway)), then I will accept that answer.
Configuring the istio-gateway with a service will create a Kubernetes Service with the given port configuration, which (as already mentioned in a different answer) isn't an Istio concept but a Kubernetes one. So we need to take a look at the underlying Kubernetes mechanisms.
The service that will be created is of type LoadBalancer by default. Also, your cloud provider will create an external load balancer that forwards traffic arriving on a certain port to the cluster.
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
istio-ingressgateway LoadBalancer 10.107.158.5 1.2.3.4 15021:32562/TCP,80:30980/TCP,443:32013/TCP
You can see the internal ip of the service as well as the external ip of the external loadbalancer and in the PORT(S) column that for example your port 80 is mapped to port 30980. Behind the scenes kube-proxy takes your config and configures a bunch of iptables chains to set up routing of traffic to the ingress-gateway pods.
If you have access to a kubernetes host you can investigate those using the iptables command. First start with the KUBE-SERVICES chain.
$ iptables -t nat -nL KUBE-SERVICES | grep ingressgateway
target prot opt source destination
KUBE-SVC-TFRZ6Y6WOLX5SOWZ tcp -- 0.0.0.0/0 10.107.158.5 /* istio-system/istio-ingressgateway:status-port cluster IP */ tcp dpt:15021
KUBE-FW-TFRZ6Y6WOLX5SOWZ tcp -- 0.0.0.0/0 1.2.3.4 /* istio-system/istio-ingressgateway:status-port loadbalancer IP */ tcp dpt:15021
KUBE-SVC-G6D3V5KS3PXPUEDS tcp -- 0.0.0.0/0 10.107.158.5 /* istio-system/istio-ingressgateway:http2 cluster IP */ tcp dpt:80
KUBE-FW-G6D3V5KS3PXPUEDS tcp -- 0.0.0.0/0 1.2.3.4 /* istio-system/istio-ingressgateway:http2 loadbalancer IP */ tcp dpt:80
KUBE-SVC-7N6LHPYFOVFT454K tcp -- 0.0.0.0/0 10.107.158.5 /* istio-system/istio-ingressgateway:https cluster IP */ tcp dpt:443
KUBE-FW-7N6LHPYFOVFT454K tcp -- 0.0.0.0/0 1.2.3.4 /* istio-system/istio-ingressgateway:https loadbalancer IP */ tcp dpt:443
You'll see that there are basically six chains, two for each port you defined: 80, 443 and 15021 (on the far right).
The KUBE-SVC-* chains are for cluster-internal traffic, the KUBE-FW-* chains for cluster-external traffic. If you take a closer look, you can see that the destination is the (external|internal) IP and one of the ports. So traffic arriving on the node's network interface is, for example, destined for 1.2.3.4:80. You can now follow that chain down, in my case KUBE-FW-G6D3V5KS3PXPUEDS:
iptables -t nat -nL KUBE-FW-G6D3V5KS3PXPUEDS | grep KUBE-SVC
target prot opt source destination
KUBE-SVC-LBUWNFSUU3FNPZ7L all -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 loadbalancer IP */
follow that one as well
$ iptables -t nat -nL KUBE-SVC-LBUWNFSUU3FNPZ7L | grep KUBE-SEP
target prot opt source destination
KUBE-SEP-RZL3ZLWSG2M7ZJYD all -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 */ statistic mode random probability 0.50000000000
KUBE-SEP-F7W3YTTYPP5NEPJ7 all -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 */
where you see the service endpoints, which are load balanced 50:50, and finally (choosing one of them):
$ iptables -t nat -nL KUBE-SEP-RZL3ZLWSG2M7ZJYD | grep DNAT
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 */ tcp to:172.17.0.4:8080
where they end up being DNATed to 172.17.0.4:8080, which is the IP of one of the istio-ingressgateway pods, on port 8080.
If you don't have access to a host or don't run in a public cloud environment, you won't have an external load balancer, so you won't find any KUBE-FW-* chains (and the EXTERNAL-IP of the service will stay <pending>). In that case you would be using <nodeip>:<nodeport> to access the cluster from outside, for which iptables chains are also created. Run iptables -t nat -nL KUBE-NODEPORTS | grep istio-ingressgateway, which will also show you three KUBE-SVC-* chains, which you can follow down to the DNAT as shown above.
So the targetPort (like 8080) is used to configure networking in Kubernetes, and Istio also uses it to define which ports the ingress gateway pods bind to. You can kubectl describe pod <istio-ingressgateway-pod>, where you'll find the defined ports (8080 and 8443) as container ports. Change them to whatever (above 1000) and they will change accordingly. Next you would apply a Gateway with spec.servers where you define those ports 8080,8443 to configure envoy (= istio-ingressgateway, the one you define with spec.selector) to listen on those ports, and a VirtualService to define how to handle the received requests. See also my other answer on that topic.
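Following that description, a Gateway sketch binding one of the ingress gateway's container ports (the name and hostname are placeholders, not from the original question):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway
spec:
  selector:
    istio: ingressgateway   # matches the default istio-ingressgateway pods
  servers:
    - port:
        number: 8080        # the ingress gateway container port discussed above
        name: http
        protocol: HTTP
      hosts:
        - "example.com"
```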
Why didn't your initial config work? If you omit the targetPort, Istio will bind to the port you define (80). That requires Istio to run as root, otherwise the ingress gateway is unable to bind to ports below 1000. You can change that by setting values.gateways.istio-ingressgateway.runAsRoot=true in the operator; also see the release note you mentioned. In that case the whole traffic flow from above would look exactly the same, except the ingress gateway pod would bind to 80,443 instead of 8080,8443, and the DNAT would target <pod-ip>:(80|443) instead of <pod-ip>:(8080|8443).
So you basically just misunderstood the release note: if you don't run the istio-ingressgateway pod as root, you have to define the targetPort, or alternatively omit the whole k8s.service overlay (in that case Istio will choose safe ports itself).
Note that I grepped for KUBE-SVC, KUBE-SEP and DNAT. There will always be a bunch of KUBE-MARK-MASQ and KUBE-MARK-DROP entries that don't really matter here. If you want to learn more about the topic, there are some great articles about this out there, like this one.

GKE LoadBalancer Static IP

I created a regional static IP in the same region of the cluster and I'm trying to use it with a LoadBalancer:
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 8080
  selector:
    service: ambassador
  loadBalancerIP: "x.x.x.x"
However, I don't know why I am getting this error:
Error creating load balancer (will retry): failed to ensure load balancer for service default/ambassador: requested ip "x.x.x.x" is neither static nor assigned to the LB
Edit: problem solved, but...
When I created the static IP address, I used:
gcloud compute addresses create regional-ip --region europe-west1
I used this address with the Service.
It didn't work like I said.
However, when I created an external static regional IP using the web console, the IP worked fine with my Service and it was attached without problems.
My bet is that the source IP service is not exposed then. As the official docs say:
As of Kubernetes 1.5, packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for loadbalanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
Try this command to expose the source IP service to the loadbalancer:
kubectl expose deployment <source-ip-app> --name=loadbalancer --port=80 --target-port=8080 --type=LoadBalancer
On this page, you will find more guidance and a number of diagnostic commands for sanity checks.
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer

Istio (1.0) intra ReplicaSet routing - support traffic between pods in a Kubernetes Deployment

How does Istio support IP based routing between pods in the same Service (or ReplicaSet to be more specific)?
We would like to deploy a Tomcat application with replicas > 1 within an Istio mesh. The app runs Infinispan, which uses JGroups to sort out communication and clustering. JGroups needs to identify its cluster members, and for that purpose there is KUBE_PING (the Kubernetes discovery protocol for JGroups). It will consult the K8S API with a lookup comparable to kubectl get pods. The cluster members can be both pods in other services and pods within the same Service/Deployment.
Despite our issue being driven by rather specific needs the topic is generic. How do we enable pods to communicate with each other within a replica set?
Example: as a showcase we deploy the demo application https://github.com/jgroups-extras/jgroups-kubernetes. The relevant stuff is:
apiVersion: v1
items:
  - apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: ispn-perf-test
      namespace: my-non-istio-namespace
    spec:
      replicas: 3
< -- edited for brevity -- >
Running without Istio, the three pods will find each other and form the cluster. Deploying the same with Istio in my-istio-namespace and adding a basic Service definition:
kind: Service
apiVersion: v1
metadata:
  name: ispn-perf-test-service
  namespace: my-istio-namespace
spec:
  selector:
    run: ispn-perf-test
  ports:
    - protocol: TCP
      port: 7800
      targetPort: 7800
      name: "one"
    - protocol: TCP
      port: 7900
      targetPort: 7900
      name: "two"
    - protocol: TCP
      port: 9000
      targetPort: 9000
      name: "three"
Note that output below is wide - you might need to scroll right to get the IPs
kubectl get pods -n my-istio-namespace -o wide
NAME READY STATUS RESTARTS AGE IP NODE
ispn-perf-test-558666c5c6-g9jb5 2/2 Running 0 1d 10.44.4.63 gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lbvqf 2/2 Running 0 1d 10.44.4.64 gke-main-pool-4cpu-15gb-98b104f4-v9bl
ispn-perf-test-558666c5c6-lhrpb 2/2 Running 0 1d 10.44.3.22 gke-main-pool-4cpu-15gb-98b104f4-x8ln
kubectl get service ispn-perf-test-service -n my-istio-namespace
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ispn-perf-test-service ClusterIP 10.41.13.74 <none> 7800/TCP,7900/TCP,9000/TCP 1d
Guided by https://istio.io/help/ops/traffic-management/proxy-cmd/#deep-dive-into-envoy-configuration, let's peek into the resulting Envoy conf of one of the pods:
istioctl proxy-config listeners ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace
ADDRESS PORT TYPE
10.44.4.63 7900 TCP
10.44.4.63 7800 TCP
10.44.4.63 9000 TCP
10.41.13.74 7900 TCP
10.41.13.74 9000 TCP
10.41.13.74 7800 TCP
< -- edited for brevity -- >
The Istio doc describes the listeners above as
Receives outbound non-HTTP traffic for relevant IP:PORT pair from
listener 0.0.0.0_15001
and this all makes sense. The pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the service via 10.41.13.74. But... what if the pod sends packets to 10.44.4.64 or 10.44.3.22? Those IPs are not present among the listeners, so AFAIK the two "sibling" pods are unreachable for ispn-perf-test-558666c5c6-g9jb5.
Can Istio support this today - then how?
You are right that HTTP routing only supports local access or remote access by service name or service VIP.
That said, for your particular example, above, where the service ports are named "one", "two", "three", the routing will be plain TCP as described here. Therefore, your example should work. The pod ispn-perf-test-558666c5c6-g9jb5 can reach itself on 10.44.4.63 and the other pods at 10.44.4.64 and 10.44.3.22.
If you rename the ports to "http-one", "http-two", and "http-three" then HTTP routing will kick in and the RDS config will restrict the remote calls to ones using recognized service domains.
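For reference, the ports section with the HTTP-triggering names would look like this (the same Service as in the question, with only the name fields changed):

```yaml
ports:
  - protocol: TCP
    port: 7800
    targetPort: 7800
    name: "http-one"
  - protocol: TCP
    port: 7900
    targetPort: 7900
    name: "http-two"
  - protocol: TCP
    port: 9000
    targetPort: 9000
    name: "http-three"
```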
To see the difference in the RDS config, look at the output from the following command when the port is named "one", and when it is changed to "http-one".
istioctl proxy-config routes ispn-perf-test-558666c5c6-g9jb5 -n my-istio-namespace --name 7800 -o json
With the port named "one" it will return no routes, so TCP routing will apply, but in the "http-one" case, the routes will be restricted.
I don't know if there is a way to add additional remote pod IP addresses to the RDS domains in the HTTP case. I would suggest opening an Istio issue, to see if it's possible.