I'm a beginner, and I am trying to deploy my app with Traefik; I don't understand how it works.
For that syntax - "traefik.http.services.wd_api.loadbalancer.server.port=3100" - do I have to enter a different port, and does Traefik clone my app onto another port? How does it forward requests to that server?
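For reference, here is roughly how the label sits in my docker-compose file (simplified sketch; the image name and hostname are just placeholders):

services:
  wd_api:
    image: my-api-image:latest   # placeholder image; the app listens on 3100 inside the container
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.wd_api.rule=Host(`api.example.com`)"   # placeholder hostname
      # tells Traefik which container port to forward incoming requests to
      - "traefik.http.services.wd_api.loadbalancer.server.port=3100"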
Related
I have an issue with my kubernetes routing.
The issue is that one of the pods makes a GET request to auth.domain.com:443 but the internal routing is directing it to auth.domain.com:8443 which is the container port.
Because the host responding in the SSL negotiation identifies itself as auth.domain.com:8443 instead of auth.domain.com:443, the connection times out.
[2023/01/16 18:03:45] [provider.go:55] Performing OIDC Discovery...
[2023/01/16 18:03:55] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://auth.domain.com/realms/master/.well-known/openid-configuration": net/http: TLS handshake timeout
(If someone knows the root cause of why it is not identifying itself with the correct port 443 but instead the container port 8443, that would be extremely helpful as I could fix the root cause.)
To work around this issue, my idea is to force it to route out of the pod onto the internet and then back into the cluster.
I tested this by setting up the file I am trying to GET on a host external to the cluster, and in this case the SSL negotiation works fine and the GET request succeeds. However, I need to serve the file from within the cluster, so this isn't a viable option.
However, if I can somehow force the pod to route through the internet, I believe it would work. I am having trouble with this though, because every time the pod looks up auth.domain.com it sees that it is an internal Kubernetes IP, and it rewrites the routing so that it is routed locally to the 10.0.0.0/24 address. After doing this, it seems to always return with auth.domain.com:8443, with the wrong port.
If I could force the pod to route through the full publicly routable IP, I believe it would work as it would come back with the external facing auth.domain.com:443 with the correct 443 port.
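One concrete idea I had is to pin auth.domain.com to the public IP in the pod spec with hostAliases, roughly like this (the IP is a placeholder for my external address, and the same block would go under a Deployment's pod template):

apiVersion: v1
kind: Pod
metadata:
  name: oauth2-proxy              # name used only for illustration
spec:
  hostAliases:
    - ip: "203.0.113.10"          # placeholder: the cluster's public IP
      hostnames:
        - "auth.domain.com"       # force this name to resolve to the external IP
  containers:
    - name: oauth2-proxy
      image: quay.io/oauth2-proxy/oauth2-proxy:latest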
Does anyone have ideas on how I can achieve this, or on how to stop the server from identifying itself with the wrong auth.domain.com:8443 port instead of auth.domain.com:443, which causes the SSL negotiation to fail?
After successfully hosting a first service on a single-node cluster, I am trying to add a second service with its own DNS name.
The first service uses Let's Encrypt successfully, and now I am trying out the second service with a test certificate and the staging endpoint/ClusterIssuer.
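The staging ClusterIssuer looks roughly like this (the email and secret name are placeholders):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint for test certificates
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.nl              # placeholder email
    privateKeySecretRef:
      name: letsencrypt-staging-key      # placeholder secret name
    solvers:
      - http01:
          ingress:
            class: traefik               # assuming the default Traefik ingress in k3s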
The error I am seeing when I describe the Let's Encrypt Order is:
Waiting for HTTP-01 challenge propagation: failed to perform self check GET request 'http://example.nl/.well-known/acme-challenge/9kdpAMRFKtp_t8SaCB4fM8itLesLxPkgT58RNeRCwL0': Get "http://example.nl/.well-known/acme-challenge/9kdpAMRFKtp_t8SaCB4fM8itLesLxPkgT58RNeRCwL0": dial tcp: lookup example.nl on 10.43.0.11:53: server misbehaving
The address that is reported as misbehaving points to the internal IP of my service/kube-dns, which means it is past my service/traefik, I think.
The cluster is running on a VPS, and I have also checked that the example.nl domain name is added to /etc/hosts with the VPS's IP, like so:
206.190.101.190 example1.nl
206.190.101.190 example.nl
The error is a bit vague to me because I do not know exactly what kube-dns is doing and why it thinks the server is misbehaving. I think maybe it is because it now has two domain names to handle and I missed something. Can anyone shed some light on it?
Feel free to ask for more ingress or other server config!
Everything was set up correctly; however, this issue definitely had something to do with DNS resolution. Not internally in the k3s cluster, but externally at the domain registrar.
I found it by using https://unboundtest.com for my domain and saw my old nameservers still being used.
I contacted the registrar and they had to change something for the domain in the registry's DNS.
Pretty unique situation, but maybe helpful for people who also think the solution has to be found internally (inside k3s).
I was wondering -
When setting RabbitMQ nodes to use a TLS connection (as seen here https://github.com/artooro/rabbitmq-kubernetes-ha/blob/master/configmap.yaml), as I understand it, I need to create a certificate that matches the hostname; a wildcard can be used - https://www.rabbitmq.com/clustering-ssl.html.
As the cluster DNS is internal, I guess I should create a certificate with a common name such as ‘*.rabbitmq.default.svc.cluster.local’.
When I’m exposing the service, I'm supposed to create either a NodePort service or a LoadBalancer service - with a totally different hostname (it should route internally).
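The exposing service I had in mind is roughly this (the name, namespace and pod label are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-external
  namespace: default
spec:
  type: LoadBalancer            # or NodePort
  selector:
    app: rabbitmq               # assumed pod label
  ports:
    - name: amqps
      port: 5671                # standard AMQPS port
      targetPort: 5671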
My question is - how will the amqps connection work? Won't it present me with one of the node’s certificates, which will not match the load balancer’s DNS name?
What's the correct way to expose the amqps protocol?
Thanks in advance
If anyone is looking at this: it doesn't matter, because this is not a "standard" HTTPS connection.
The client needs to specify the correct common name and that's enough for the connection to work.
I have an application running on my own server with kubernetes. This application is supposed to work as a gateway and has a LoadBalancer service, which is exposing it to "the world". Now I'd like to connect this application with other applications running within the very same kubernetes cluster, so they can exchange HTTP requests with each other.
So let's say that my gateway app is running on port 9000 and the app I'd like to call runs on 9001. When I curl my_cluster_ip:9001 it gives me a response. Nevertheless, I never know what the cluster IP will be, so I can't hard-code it into my gateway app.
The use case is: type url_of_my_server:9000 into the web browser -> this calls the gateway -> it sends an HTTP request to the other app running in the cluster on port 9001 -> response back to the gateway -> response back to the user.
Where does the magic have to happen, and how can I easily make these two apps talk to each other, while only one is exposed to "the world" and the other one is accessible only from within the cluster?
You can expose your app on port 9001 as a service (let's say myservice).
When you do that, myservice.<namespace>.svc.cluster.local will resolve to the IP address of your app. More info on DNS here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
And then you can access your app within the Kubernetes cluster as:
http://myservice.<namespace>.svc.cluster.local:9001
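A minimal sketch of such a Service (assuming the app's pods carry the label app: myapp; adjust names to your setup):

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: default            # substitute your namespace
spec:
  selector:
    app: myapp                  # assumed pod label
  ports:
    - port: 9001                # port the service exposes
      targetPort: 9001          # port the app listens on inside the pod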
You have a couple of options for internal service discovery:
You can use the cluster-internal DNS service to find the other application, as detailed in the answer by bits.
If both the proxy and the app run in the same namespace, there are environment variables that expose the IP and ports. This may mean you have to restart the proxy if you remove/re-add the other application, as the ports may change.
You can run both apps as two different containers in the same pod; this ensures they get scheduled on the same host and can talk to each other over localhost (see the sketch below).
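A rough sketch of the two-container option (image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: gateway-with-app
spec:
  containers:
    - name: gateway
      image: my-gateway:latest      # placeholder image, listens on 9000
      ports:
        - containerPort: 9000
    - name: backend
      image: my-backend:latest      # placeholder image, listens on 9001
      ports:
        - containerPort: 9001
  # both containers share the pod's network namespace,
  # so the gateway can reach the backend at http://localhost:9001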
Also note that support for your HTTP proxy setup already exists in Kubernetes; take a look at Ingress and Ingress Controllers.
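For example, a minimal Ingress could look like this (hostname and service name are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  rules:
    - host: myserver.example.com    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice     # the ClusterIP service from the earlier answer
                port:
                  number: 9001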
I have a query which is basically a clarification regarding Routes in OpenShift Origin.
I managed to set up OpenShift Origin version 1.4.0-rc1 on CentOS hosted in a local VMware installation. I am also able to pull and set up an nginx image, and the pod status shows Running. I can access nginx on the service endpoint as well. Now, as per the documentation, if I want to access this nginx instance from outside the hosting system I need to create a Route, which I also did.
My confusion is with the Create Route screen in the OpenShift Web Console: it either generates a hostname or lets me enter one. I tried both options; the generated hostname is a long subdomain-style hostname and it doesn't work. What I mean is that I'm not able to access this hostname from anywhere on the network, including the hosting OS.
To summarize, the service endpoint, which looks like 172.x.x.x, works on the local machine hosting OpenShift, but the generated/entered hostname for the route doesn't work from anywhere.
Please clarify the idea behind the route concept and how one could access a service from outside the host machine (part of the same network).
As stated in the documentation:
An OpenShift Origin route exposes a service at a host name, like www.example.com, so that external clients can reach it by name. DNS resolution for a host name is handled separately from routing; your administrator may have configured a cloud domain that will always correctly resolve to the OpenShift Origin router, or if using an unrelated host name you may need to modify its DNS records independently to resolve to the router.
It is important to notice the difference between "route" and "router". The OpenShift router (mentioned above) listens to all requests to OpenShift-deployed applications, and has to be deployed beforehand in order for routes to work.
https://docs.openshift.org/latest/architecture/core_concepts/routes.html
So once you have the router deployed and working, all routes that you create in OpenShift should resolve to where that OpenShift router is listening. For example, you can configure your DNS with a wildcard (this is a dnsmasq wildcard example):
address=/.yourdomain.com/107.117.239.50
This way all your "routes" to services should be like this:
service1.yourdomain.com
service2.yourdomain.com
...
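A Route object for one of those services could look roughly like this (the service name is an assumption for illustration):

apiVersion: route.openshift.io/v1   # plain "v1" on older Origin releases
kind: Route
metadata:
  name: service1
spec:
  host: service1.yourdomain.com     # must resolve to the router, e.g. via the wildcard above
  to:
    kind: Service
    name: service1                  # assumed service name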
Hope this helps