Does the setting of socket options require namespace switching? - sockets

In Linux, suppose a socket is created after switching from the "default" network namespace to a namespace "red" (after the socket is created, the process switches back to "default").
To set socket options on that socket using setsockopt, do we need to switch namespaces again?
I have not found any references describing how setting socket options behaves across namespaces.

No. Linux sockets are permanently attached to the network namespace in which they were created. Anything you do to that socket takes effect in that namespace, even if the current namespace is different.
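A minimal sketch of this in Go (Linux only; requires golang.org/x/sys/unix and typically root). The path /var/run/netns/red assumes the namespace was created with something like ip netns add red:

package main

import (
	"runtime"

	"golang.org/x/sys/unix"
)

func main() {
	// setns switches the namespace of the calling OS thread, so pin
	// this goroutine to its thread while we juggle namespaces.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	// Keep a handle to the default namespace so we can switch back.
	defaultNs, err := unix.Open("/proc/self/ns/net", unix.O_RDONLY, 0)
	if err != nil {
		panic(err)
	}
	redNs, err := unix.Open("/var/run/netns/red", unix.O_RDONLY, 0)
	if err != nil {
		panic(err)
	}

	// Enter "red", create the socket there, then return to "default".
	if err := unix.Setns(redNs, unix.CLONE_NEWNET); err != nil {
		panic(err)
	}
	sock, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
	if err != nil {
		panic(err)
	}
	if err := unix.Setns(defaultNs, unix.CLONE_NEWNET); err != nil {
		panic(err)
	}

	// The socket is still attached to "red"; no further setns is needed
	// for setsockopt (or bind, listen, etc.) to act in that namespace.
	if err := unix.SetsockoptInt(sock, unix.SOL_SOCKET, unix.SO_REUSEADDR, 1); err != nil {
		panic(err)
	}
}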

Related

Game Networking with Kubernetes/Agones

I am currently working on a multiplayer game that is meant to handle 20-50 player connections to a single game instance.
My current client connection model:
Client requests connection from server rest endpoint
Server creates 2 new sockets bound to random ports (1 TCP and 1 UDP)
Client gets response and connects
I don't see anything glaringly wrong with this, but I am now questioning whether this is the general way that game server connections are done.
To explain further, I am in the process of learning how to use Kubernetes and Agones to deploy and manage app/game instances by wrapping them in Kubernetes pods. I am mostly working off of information found in the official guides (https://agones.dev/site/docs/getting-started/create-gameserver/) and associated github examples (https://github.com/googleforgames/agones/blob/release-1.15.0/examples).
For Agones, my understanding is that client connections are made via the port specified in "hostPort" in the "GameServer" yaml. I have previously deployed some instances with plain Kubernetes, using the "hostNetwork=true" option, which enables my above network model to work by allowing the game instance to bind directly to host ports and be exposed to the outside network. With Agones though, it seems that using this option is, at the very least, not encouraged (https://github.com/googleforgames/agones/issues/1389).
I'm certainly not an expert on networking, so please forgive my ignorance, but how are the client connections meant to be handled here if I'm only exposing one port? Is all the traffic multiplexed, or can I directly pass off connections somehow to other sockets/ports and have them automatically be exposed to the outside network?
Is all the traffic multiplexed, or can I directly pass off connections somehow to other sockets/ports and have them automatically be exposed to the outside network?
I would multiplex the traffic. It sounds like you are currently using the incoming port to determine "who is who", but you could include that information in the packets sent to a shared port instead.
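As a rough sketch of that idea in Go, with a hypothetical one-byte client ID prepended to each datagram (a real protocol would authenticate this token rather than trust it):

package main

import (
	"fmt"
	"net"
)

func main() {
	// One shared UDP port for every client, instead of one port per client.
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 7777})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	buf := make([]byte, 1500)
	for {
		n, addr, err := conn.ReadFromUDP(buf)
		if err != nil || n < 1 {
			continue
		}
		clientID := buf[0]  // hypothetical 1-byte "who is who" header
		payload := buf[1:n] // the rest is the game payload
		fmt.Printf("client %d at %v sent %d bytes\n", clientID, addr, len(payload))
	}
}

The same approach works for TCP: since a TCP connection already identifies its peer, an initial handshake message over the shared port is enough to establish who is who.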

How to use a different dns name for OpenShift 3.11 routes than the default wildcard dns name?

I'm not able to get a custom domain record working with an OpenShift cluster. I've read tons of articles, Stack Overflow posts, and this YouTube video https://www.youtube.com/watch?v=Y7syr9d5yrg. All seem to "almost" be useful for me, but there is always something missing, and I'm not able to get this working by myself.
The scenario is as follows. I've got an OpenShift cluster deployed on an IBM Cloud account. I've registered myinnovx.com. I want to use it with an OpenShift app. Cluster details:
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
openshift v3.11.146
kubernetes v1.11.0+d4cacc0
I've got an application deployed with a blue/green strategy. In the following screenshot, you can see the routes I have available.
mobile-blue: I created this one manually pointing to my custom domain mobileoffice.myinnovx.com
mobile-office: Created with oc expose service mobile-office --name=mobile-blue to use external access.
mobile-green: Openshift automatically generated a route for the green app version. (Source2Image deployment)
mobile-blue: Openshift automatically generated a route for the blue app version. (Source2Image deployment)
I've set up two CNAME records on my DNS edit page as follows:
In several blogs/articles, I've found that I'm supposed to point my wildcard record to the router route canonical name. But I don't have any route canonical name in my cluster. I don't even have an Ingress route configured.
I'm at a loss here as to what I'm missing. Any help is greatly appreciated.
This is the response I get testing my DNS:
This is a current export of my DNS:
$ORIGIN myinnovx.com.
$TTL 86400
@ IN SOA ns1.softlayer.com. msalimbe.ar.ibm.com. (
2019102317 ; Serial
7200 ; Refresh
600 ; Retry
1728000 ; Expire
3600) ; Minimum
@ 86400 IN NS ns1.softlayer.com.
@ 86400 IN NS ns2.softlayer.com.
*.myinnovx.com 900 IN CNAME .mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
mobileoffice 900 IN CNAME mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud
mobile-test.myinnovx.com 900 IN A 169.63.244.76
I think you almost got it, Matias.
The FQDN - mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud - resolves for me to an IP that is part of SOFTLAYER-RIPE-4-30-31 and is accessible from the Internet. So, it should be possible to configure what you want.
The snapshot of the DNS records in your question isn't displaying the entries in full, but what might be missing is a dot (.) at the end of both the "Host/Service" and "Value/Target" fields. Something like this:
mobileoffice.myinnovx.com. CNAME 900 (15min) mobile-office-mobile-office.mycluster-342148-26562a7d6831df3dfa02975385757d2d-0001.us-south.containers.appdomain.cloud.
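To confirm the record is in place, you can resolve the CNAME chain programmatically; here is a small Go sketch using the hostname from the question:

package main

import (
	"fmt"
	"net"
)

func main() {
	// LookupCNAME follows the CNAME chain and returns the canonical name.
	cname, err := net.LookupCNAME("mobileoffice.myinnovx.com")
	if err != nil {
		panic(err)
	}
	// If the record is correct, expect the cluster's
	// *.containers.appdomain.cloud name here.
	fmt.Println(cname)
}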
Most of what I'm about to say only applies to OpenShift 3.x. In OpenShift 4.x things are sufficiently different that most of the below doesn't quite apply.
By default, OpenShift 3.11 exposes applications via Red Hat's custom HAProxy Ingress Controller (colloquially known as the "Router"). The typical design in an OpenShift 3.x cluster is to designate particular cluster hosts for running cluster infrastructure workloads like the HAProxy router and the internal OpenShift registry (usually using the node-role.kubernetes.io/infra=true node label).
For convenience, so admins don't have to manually create a DNS record for each exposed OpenShift application, there is a wildcard DNS entry that points to the load balancer associated with the HAProxy Router. Its DNS name is configured via the openshift_master_default_subdomain variable in the Ansible inventory file used for your cluster installation.
The structure of this record is generally something like *.apps.<cluster name>.<dns subdomain>, but it can be anything you like.
If you want a prettier DNS name for your applications, you can do a couple of things.
The first is to create a DNS entry myapp.example.com pointing to your load balancer and have your load balancer configured to forward those requests to the cluster hosts where the HAProxy Router is running on port 80/443. You can then configure your application's Route object to use hostname myapp.example.com instead of the default <app name>-<project name>.apps.<cluster name>.<dns subdomain>.
Another method would be to do what you're suggesting and let the application use the default wildcard route name, but create a DNS CNAME record pointing to the original wildcard route name. For example, if my openshift_master_default_subdomain is apps.openshift-dev.example.com and my application route is myapp-myproject.apps.openshift-dev.example.com, then I could create a CNAME DNS record myapp.example.com pointing to myapp-myproject.apps.openshift-dev.example.com.
The key thing that makes either of the above work is that the HAProxy router doesn't care what the hostname of the request is. All it's going to do is match the Host header of the incoming request (for TLS requests, SNI must be set if the HAProxy router is configured for passthrough) against all of the Route objects in the cluster and see if any of them match. So if your DNS/load balancer configuration brings requests to the HAProxy router and the Host header matches a Route, that request will get forwarded to the appropriate OpenShift service.
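You can see this behavior directly by sending a request to the router's address while overriding the Host header. A Go sketch (the router IP 203.0.113.10 and hostname myapp.example.com are placeholders):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Point the TCP connection at a host running the HAProxy router.
	req, err := http.NewRequest("GET", "http://203.0.113.10/", nil)
	if err != nil {
		panic(err)
	}
	// The router matches on this header, not on how the IP was resolved.
	req.Host = "myapp.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body))
}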
In your case I don't think you have the CNAME pointed at the right place. You need to point your CNAME at the wildcard hostname your application Route is using.
Also, please note the instructions for custom DNS setup for a route on OpenShift v4 are a bit different and are not correctly displayed in the web console:
apps.<clustername>.<clusterid>.<shard>.openshiftapps.com will not resolve to anything. *.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com is the wildcard entry, so you need something prepending that.
To align with the way it was on v3, we usually choose an arbitrary string such as elb, e.g. elb.apps.<clustername>.<clusterid>.<shard>.openshiftapps.com. That will hit the routers.
Here is the related BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1925132

Allow load balanced instances to connect single compute instance postgresql server

I am looking for the GCP networking best practice for allowing auto-scaled instances to connect to a PostgreSQL server installed on a separate instance.
So far I have tried whitelisting the load balancer IP in the firewall and in the PostgreSQL config file, but that failed.
Any help or pointer is highly appreciated.
The load balancer doesn't process information by itself; it just receives requests on its frontend address(es) and hands them to an instance group.
That instance group should handle the HTTP requests and connect to the database instance.
The load balancer is used to dynamically distribute requests over the same frontend address (and autoscaling can even create additional instances to handle them).
--
So first you should make it work with a regular instance, configure it, and save it as an instance template. Then you can proceed with creating an instance group that can be managed by a load balancer.
EDIT - Extended the answer from my comment
"I don't think your problem is related to Google cloud platform now. If you have a known IP address for the PostgreSQL server (connect using an internal network IP address so it doesn't change), then make sure your auto-balanced instances are in the same internal network, use db's internal IP and connect to it."

Marathon Service Ports

I know that 'servicePort' is used by marathon-lb to identify an app. Is there any other user of this setting besides marathon-lb?
If the answer is no, why is it mandatory (omitting it will just generate one for me)? I have many Marathon apps which are not managed by marathon-lb, and they all take up service ports by default.
From the documentation: "'servicePort' is a helper port intended for doing service discovery using a well-known port per service. The assigned servicePort value is not used/interpreted by Marathon itself but supposed to be used by the load balancer infrastructure."
So service ports seem to have no use other than for marathon-lb.
When you don't specify a servicePort, it's as if you put in "servicePort": 0.
See closed issue here.
Here's a discussion about the re-architected networking API.
If you look at the Jira ticket, you will see that the new API model lets you define services without servicePorts at all.
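For illustration, here is a minimal fragment of a classic (pre-networks-API) Marathon app definition; the app id and image are hypothetical, and pinning servicePort to 0 just makes the default assignment explicit:

{
  "id": "/my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 0 }
      ]
    }
  }
}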

Access external IP address from service

Is it possible to get the external IP address for a Pod? It doesn't appear to be populated in the environment variables for a service, so I was wondering if there was another way to get that information.
Basically: I'm setting up a proftpd service, and it needs to send out its external ip as well as a port for passive communication. Right now, it's sending the local IP address which is causing FTP clients to fail.
The Kubernetes service discovery mechanism (DNS or environment variables) doesn't populate the external IP.
One way to work around this is to create a static IP first, then assign it to your service.
Or you can run kubectl inside your cluster to get the external IP, but that's nasty.
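If you'd rather do that lookup from code than shell out to kubectl, here is a sketch using client-go; the namespace and service name are placeholders, and it assumes the service is of type LoadBalancer (so the ingress IP gets populated) and that the pod's service account is allowed to read services:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Use the in-cluster config, since this runs inside a pod.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "proftpd", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ing := range svc.Status.LoadBalancer.Ingress {
		// The externally reachable address to advertise for passive FTP.
		fmt.Println(ing.IP)
	}
}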