OpenShift access service in other namespace without network join - kubernetes

I'm new to OpenShift. I have two projects/namespaces, and each runs a REST service. I want the service in NS1 to access the service in NS2 without joining the projects' networks. The cluster uses the SDN multitenant plugin.
I found an example of how to add external services to the cluster as if they were native. In NS1 I created an Endpoints object for the external IP of the Service from NS2, but when I tried to create a Service in NS1 for this Endpoints object, it failed because there was no type field (which wasn't in the example either).
I also tried ExternalName. For the externalName key my value was the router URL of the service in NS2. But it doesn't work well either: it always returns a page saying "Application is not available", even though the app/service itself works.

Services in different namespaces are not external, but local to the cluster. So you simply access the services using DNS:
for example: servicename.namespace.svc.cluster.local or simply servicename.namespace
see also https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/networking.html
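For example, from a pod in NS1 (a sketch; the service name "myservice" and port 8080 are assumptions, not from the question). Note that with the multitenant SDN plugin this traffic is only allowed if the projects' networks are joined or one of them is global:

    # Call the NS2 service from a pod in NS1 via cluster DNS.
    # "myservice" and port 8080 are assumed placeholders.
    curl http://myservice.NS2.svc.cluster.local:8080/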

Your question is not very clear and lacks information about your network setup and what you mean by joining project networks. What does the SDN multi-tenancy do, for example?
By default, the pod network is routable within the whole cluster. If you expose a service in namespace NS_B, a pod in namespace NS_A can access it like so:
Pod in namespace A: curl servicename.NS_B:port
and vice versa:
Pod in namespace B: curl servicename.NS_A:port
If your SDN setup makes that impossible, you can expose both services with an Ingress / Route and address them from the network where you expose them (public or not).
Read the docs on those, for example:
https://kubernetes.io/docs/concepts/services-networking/ingress/
That website is a great resource for all things Kubernetes (like OpenShift).
In OpenShift, a slightly different take on this is Routes:
https://docs.openshift.com/container-platform/4.11/networking/routes/route-configuration.html
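As a sketch (the hostname, service name, and port are all assumptions), a Route exposing the NS2 service through the router, which NS1 could then reach via the router instead of the pod network:

    # Expose the NS2 service with an OpenShift Route (assumed names/ports).
    oc apply -n NS2 -f - <<'EOF'
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: myservice
    spec:
      host: myservice-ns2.apps.example.com   # assumed router subdomain
      to:
        kind: Service
        name: myservice                      # assumed service name in NS2
      port:
        targetPort: 8080                     # assumed service target port
    EOF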
Basically, try to understand how the networks are set up and how these principles work.
If this does not answer your question, please make it more clear and specific.

Related

Connecting to many kubernetes services from local machine

From my local machine I would like to be able to port forward to many services in a cluster.
For example I have services of name serviceA-type1, serviceA-type2, serviceA-type3... etc. None of these services are accessible externally, but they can be accessed using the kubectl port-forward command. However, there are so many services that port-forwarding to each is unfeasible.
Is it possible to create some kind of proxy service in Kubernetes that would allow me to connect to any of the serviceA-typeN services by specifying them in a URL? I would like to port-forward to the proxy service from my local machine and have it forward the requests to the serviceA-typeN services.
So for example, if I have set up a port forward on 8080 to this proxy, then the URL to access the serviceA-type1 service might look like:
http://localhost:8080/serviceA-type1/path/to/endpoint?a=1
I could maybe create a small application that would do this but does kubernetes provide this functionality already?
The kubectl proxy command provides this functionality.
Read more here: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
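For example (a sketch; "serviceA-type1" comes from the question, while the namespace and path are assumptions):

    # Start the proxy locally on the port from the question.
    kubectl proxy --port=8080

    # The API server then proxies requests of the form
    # /api/v1/namespaces/<namespace>/services/<service>[:<port-name>]/proxy/<path>
    curl "http://localhost:8080/api/v1/namespaces/default/services/serviceA-type1/proxy/path/to/endpoint?a=1"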
A good option is to use an Ingress to achieve this.
Read more about what an Ingress is.
Main concepts are:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In Kubernetes we have 4 types of Services, and the default type is ClusterIP, which means the service is only reachable within the cluster. An Ingress exposes your service outside the cluster, so the Ingress acts as the entry point into your cluster.
If you plan to move to the cloud (and I assume you will eventually), an Ingress will be compatible with cloud load-balancing services, which will save time and make it easier to migrate from a local environment.
To start with Ingress, you first need to install an ingress controller.
There are different ingress controllers you can use.
You can start with the most common one, ingress-nginx, which is maintained by the Kubernetes community.
If you're using minikube, the controller can be enabled as an addon.
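For example:

    # Enable the bundled NGINX ingress controller in minikube.
    minikube addons enable ingress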
Once you have installed an ingress controller in your cluster, you need to create rules for it to route traffic. "Simple fanout" is an example with two services and path-based routing to them, as sketched below.
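A sketch of such a fanout (the lowercased service names mirror the question; the ports and the rewrite rule are assumptions for ingress-nginx):

    # Path-based fanout: /serviceA-type1/... -> servicea-type1, etc.
    # The rewrite strips the service prefix before forwarding (ingress-nginx).
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: servicea-fanout
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$2
    spec:
      ingressClassName: nginx
      rules:
      - http:
          paths:
          - path: /serviceA-type1(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: servicea-type1   # assumed service name (lowercased)
                port:
                  number: 80           # assumed service port
          - path: /serviceA-type2(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: servicea-type2
                port:
                  number: 80
    EOF

With a port-forward to the ingress controller (or via the controller's external address), http://localhost:8080/serviceA-type1/path/to/endpoint?a=1 would then reach servicea-type1.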

Ping api in kubernetes environment which is running in other namespaces

How do I ping my API, which is running in a Kubernetes environment in a namespace other than default? Let's say I have pods running in 3 namespaces - default, dev, and prod. I have an ingress load balancer installed and the routing configured. I have no problem accessing the default namespace - https://localhost/myendpoint.... But how do I access the APIs that run different image versions in the other namespaces, e.g. dev or prod? Do I need to add additional configuration to the service or ingress-service files?
EDIT: my pods are RESTful APIs that communicate over HTTP requests. All I'm asking is how to access my pod when it runs in a namespace other than default. The deployments communicate between each other with no problem. Let's say I have a frontend application running and want to access it from the browser; how is that done? I can access it if the pods are in the default namespace by hitting http://localhost/path... but if I delete all the pods from the default namespace and move all the services and deployments into the dev namespace, I cannot access them anymore from the browser with the same URL. Is there a specific path for different namespaces, like http://localhost/dev/path? Do I need to configure it?
Hopefully it's clear enough. Thank you
Route traffic with Ingress to Service
When you want to route requests from external clients, via an Ingress, to a Service, you should put the Ingress and Service objects in the same namespace. I recommend using different domains in your Ingress for the different environments.
Route traffic from Service to Service
When you want to route traffic from a pod in your cluster to a Service, possibly in another namespace, it is easiest to use Service Discovery with DNS, e.g. send requests to:
<service-name>.<namespace>.svc.<cluster-domain>
which is most likely
<service-name>.<namespace>.svc.cluster.local
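A sketch of both parts (the hostname and service name are assumptions): an Ingress deployed into the dev namespace with its own domain, while pods keep using the in-cluster DNS name.

    # Ingress in the dev namespace, routing a dev-specific hostname.
    kubectl apply -n dev -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      rules:
      - host: dev.myapp.example.com    # assumed dev domain
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp            # assumed service name in dev
                port:
                  number: 80
    EOF

    # From any pod in the cluster, the same service stays reachable as:
    # curl http://myapp.dev.svc.cluster.local/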

Kubernetes StatefulSets: External DNS

Kubernetes StatefulSets create internal DNS entries with stable network IDs. The docs describe this here:
Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is $(statefulset name)-$(ordinal). The example above will create three Pods named web-0, web-1, web-2. A StatefulSet can use a Headless Service to control the domain of its Pods. The domain managed by this Service takes the form: $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster domain. As each Pod is created, it gets a matching DNS subdomain, taking the form: $(podname).$(governing service domain), where the governing service is defined by the serviceName field on the StatefulSet.
I am experimenting with headless services, and this works great for communication between individual pods, i.e. web-0.web.default.svc.cluster.local can connect and communicate with web-1.web.default.svc.cluster.local just fine.
Is there any way that I can configure this to work outside of the cluster network as well, where "cluster.local" is replaced with something like "clustera.com"?
I would like to give another Kubernetes cluster, let's call it clusterb.com, access to the individual services of the original cluster (clustera.com); I'm hoping it would look something like clusterb simply hitting endpoints like web-1.web.default.svc.clustera.com and web-0.web.default.svc.clustera.com.
Is this possible? I would like access to the individual services, not a load balanced endpoint.
I would suggest you test the following solutions and check whether they help you achieve your goal in your particular scenario:
The first one is certainly the easiest, and I believe you did not implement it for some reason, though you did not report why in the question.
I am talking about headless Services without selectors, for which DNS returns CNAME records when they are ExternalName-type Services.
ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns
Therefore, if you need to point to a service in the other cluster, you will need to register a domain name resolving to the corresponding IP in that cluster.
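A sketch of that first option (every hostname here is an assumption): an ExternalName Service created in clusterb that returns a CNAME for a name you have registered for a clustera endpoint:

    # In clusterb: a selector-less ExternalName Service pointing at clustera.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: web-0-clustera
    spec:
      type: ExternalName
      # Assumed: this name is registered in public/shared DNS and resolves
      # to the matching endpoint in clustera.
      externalName: web-0.web.default.svc.clustera.com
    EOF

    # Pods in clusterb can then use web-0-clustera.default.svc.cluster.local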
The second solution, which I have never tested but which I believe can apply to your case, is to make use of a Federated Cluster; according to the documentation, one reason to use it is:
Cross cluster discovery: Federation provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. For example, you can ensure that a global VIP or DNS record can be used to access backends from multiple clusters.

Frontend communication with API in Kubernetes cluster

Inside of a Kubernetes Cluster I am running 1 node with 2 deployments. React front-end and a .NET Core app. I also have a Load Balancer service for the front end app. (All working: I can port-forward to see the backend deployment working.)
Question: I'm trying to get the front end and API to communicate. I know I can do that with an external facing load balancer but is there a way to do that using the clusterIPs and not have an external IP for the back end?
The reason we are interested in this is that it simply adds one more layer of security. By keeping the API on the vnet only, we remove one more entry point.
If it helps, we are deploying in Azure with AKS. I know they have some weird deployment things sometimes.
Pods running on the cluster can talk to each other using a ClusterIP service, which is the default service type. You don't need a LoadBalancer service to make two pods talk to each other. According to the docs on this topic:
ClusterIP exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
As explained in the Discovery documentation, if both Pods (frontend and API) are running on the same namespace, the frontend just needs to send requests to the name of the backend service.
If they are running on different namespaces, the frontend needs to use a fully qualified domain name to be able to talk to the backend.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
You can find more info about how DNS works on kubernetes in the docs.
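As a sketch using the names from the quote above (the selector and ports are assumptions):

    # Cluster-internal backend Service; ClusterIP is the default type.
    kubectl apply -n my-ns -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: ClusterIP
      selector:
        app: backend-api       # assumed pod label
      ports:
      - port: 80
        targetPort: 5000       # assumed container port of the .NET Core app
    EOF

    # From a pod in my-ns:             curl http://my-service/
    # From a pod in another namespace: curl http://my-service.my-ns/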
The problem with this configuration is the assumption that the frontend app will be reaching out to the API via the internal cluster network. It will not: my app runs in the client's browser, which cannot reach Services and Pods in my cluster.
My cluster will need something like nginx or another external load balancer to allow my client-side API calls to reach my API.
You could alternatively use your frontend app as the proxy, but that is strongly discouraged!
I'm trying to get the front end and api to communicate
By API, if you mean the Kubernetes API server, first set up a service account and token for the front-end pod to communicate with the Kubernetes API server by following the steps here, here and here.
is there a way to do that using the clusterIPs and not have an external IP for the back end
Yes, this is possible, and it is more secure if external access is not needed for the service. A Service of type ClusterIP will not have an external IP, and the pods can talk to each other using ClusterIP:Port within the cluster.

How to access Kubernetes pod in local cluster?

I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an exemplary endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kubectl proxy is the way to go. However, when I run kubectl proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straightforward solution and/or a good tutorial. It seems to be a very typical use case that nevertheless lacks a clear path.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
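A minimal NodePort sketch, reusing the port and endpoint from the question (the labels and the nodePort value are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-service
    spec:
      type: NodePort
      selector:
        app: hello             # assumed label on the deployment's pods
      ports:
      - port: 10001            # service port
        targetPort: 10001      # container port from the question
        nodePort: 30001        # assumed; default range is 30000-32767
    EOF

    # Then, from outside the cluster:
    # curl http://<any-node-ip>:30001/hello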
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
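A sketch of name-based virtual hosting (the hostnames and service names are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: name-based-vhosts
    spec:
      rules:
      - host: app1.example.com     # assumed external DNS entry -> node IPs
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1         # assumed service name
                port:
                  number: 10001
      - host: app2.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 10001
    EOF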
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).