I have an nginx server and have information about the connected users.
I can also send a 'GET' request with a connection id to the server to disconnect my users. BUT I can't send the request to a selected pod, because the load balancer redirects the request to another pod. I decided to just send the request to all pods, but how can I do that?
Perhaps you'll need to change your approach and use a kind of control plane that the pods subscribe to in order to receive control messages. As you write, the load balancer may redirect each request to a different pod, since that is its primary function.
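For reference, the brute-force fan-out the question describes can be done by asking the Kubernetes API for the pod IPs behind the Service and calling each pod directly, which bypasses the load balancing. A minimal sketch with the official Python client follows; the label selector, port, and /disconnect endpoint are placeholders for whatever your deployment actually uses.

# Sketch: send the disconnect request to every pod directly instead of to the
# Service VIP. The label "app=nginx-gateway", port 8080 and the /disconnect
# endpoint are assumptions -- adjust them to your deployment.
import requests
from kubernetes import client, config

def disconnect_everywhere(connection_id, namespace="default",
                          label_selector="app=nginx-gateway", port=8080):
    config.load_incluster_config()      # use config.load_kube_config() outside the cluster
    pods = client.CoreV1Api().list_namespaced_pod(namespace, label_selector=label_selector)
    for pod in pods.items:
        ip = pod.status.pod_ip
        if not ip:
            continue                    # pod not scheduled or not ready yet
        # Only the pod that actually holds the connection will act on the request.
        requests.get("http://{}:{}/disconnect".format(ip, port),
                     params={"id": connection_id}, timeout=5)

A headless Service is an alternative: its DNS name resolves to all pod IPs, so the fan-out can be done without talking to the API server at all.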
How does one view request headers, payloads, and response headers from an application that's deployed in GKE? For example, if a Node API invokes a function cleanData() and it makes an API request to the Intuit API, how does one capture the network request/response headers in the GCP logs?
I tried going to the GCP console logs, but it's all just the application's console.logs and no network logs.
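By default, the container logs that GKE ships to Cloud Logging are just the pods' stdout/stderr, so the headers of outbound HTTP calls only show up if the application logs them itself around the call. A sketch of that idea is below; Python is used only for illustration (a Node service would do the equivalent with an axios or fetch interceptor), and the logged fields are just an example.

# Sketch: print outbound request/response headers as JSON to stdout so they
# appear in Cloud Logging next to the application's other logs.
import json
import requests

def log_headers(resp, *args, **kwargs):
    print(json.dumps({
        "outbound_request": {
            "url": resp.request.url,
            "method": resp.request.method,
            "headers": dict(resp.request.headers),   # beware: may include Authorization
        },
        "outbound_response": {
            "status": resp.status_code,
            "headers": dict(resp.headers),
        },
    }))

session = requests.Session()
session.hooks["response"].append(log_headers)   # runs after every request made via this session
# session.get("https://example.com/api")        # placeholder call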
The manual says this about the HTTP URL value of an HTTP Listener:
"Displays the generated HTTP URL for the HTTP Listener. This is not an actual
configurable setting, but is instead displayed for copy/paste convenience. Note
that the host in the URL will be the same as the host you used to connect to
the Administrator. The actual host that connecting clients use may be different
due to differing networking environments."
When I have used the feature in the past, its value has always begun with "http://localhost:", which would be great, except this time it is auto-generating "http://'domainName':${Incoming_Pathology_Source_Port}/${Incoming_Pathology_Source_BaseContextPath}/".
For the first time, we are deploying Mirth inside a Kubernetes cluster, 'a different networking environment'. (nginx accepts https, and we want it to pass the messages on to Mirth as http.)
Is there any way I can take control of the URL, or must I change the configuration of the cluster in some way?
All help/suggestions welcome.
When watching Kubernetes resources for changes, what exactly is happening under the hood? Does the HTTP connection suddenly change to a wss connection?
To solve a problem of too many requests to the kube-apiserver, I am rewriting some code into what I think is more of an operator pattern.
In our multi-tenant microservice architecture all services use the same library to look up connection details to tenant-specific DBs. The connection details are saved in secrets within the same namespace as the application. Every tenant DB has its own secret.
So on every call, all secrets with the correct label are read and parsed for the necessary DB connection details. We have around 400 services/pods...
My idea: instead of reading all secrets on every call, create a cache and update the cache every time a relevant secret changes, via a watcher.
My concern: am I just replacing the HTTP requests with equally expensive websockets? As I understand it, I will now have an open websocket connection for every service/pod, which is still 400 open connections.
Would it be better to have a proxy service watch the secrets (kube-apiserver requests) and have all services query that service for connection details (intranet requests, unrelated to the kube-apiserver)?
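For what it's worth, a minimal sketch of the watcher-plus-cache idea with the official Python client looks like this. The namespace, the purpose=tenant-db label, and the cache layout are assumptions; resourceVersion bookkeeping and error handling are left out.

# Sketch: one long-lived watch on labelled Secrets keeps an in-memory cache up
# to date, so request handling never has to list Secrets again.
import threading
from kubernetes import client, config, watch

SECRET_CACHE = {}   # secret name -> .data dict (base64-encoded values)

def watch_tenant_secrets(namespace="my-namespace", label_selector="purpose=tenant-db"):
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    while True:                          # the stream ends periodically; just start it again
        for event in w.stream(v1.list_namespaced_secret,
                              namespace=namespace,
                              label_selector=label_selector):
            secret = event["object"]
            if event["type"] in ("ADDED", "MODIFIED"):
                SECRET_CACHE[secret.metadata.name] = secret.data
            elif event["type"] == "DELETED":
                SECRET_CACHE.pop(secret.metadata.name, None)

# Run the watcher in the background; request handlers only read SECRET_CACHE.
threading.Thread(target=watch_tenant_secrets, daemon=True).start()

With this client the watch is one long-lived streaming HTTP request per pod rather than one request per call, which is exactly the trade-off the question is weighing.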
From the sources:
// ServeHTTP serves a series of encoded events via HTTP with Transfer-Encoding: chunked
// or over a websocket connection.
Which protocol is used (chunked HTTP or WebSocket) pretty much depends on the client; both have their cost, which you'll have to compare against your current request frequency.
You may be better off with a proxy cache that either watches or polls at regular intervals, but that depends a lot on your application.
I'm working on a microservice architecture based on Docker, Registrator, Consul, and HAProxy.
I'm also using consul-template to dynamically generate the HAProxy config file. Everything works fine: when I add multiple instances of the same microservice, the HAProxy configuration is updated immediately and requests are dispatched correctly using a round-robin strategy.
My problem occurs when I remove some instances (scale down). If a container is shut down while a request is in flight, I get an error.
I'm new to HAProxy, so is there a way to configure HAProxy to retry a failed request against another endpoint if a container disappears?
To be precise: I'm using layer-7 routing (mode http) for my frontends and backends. Here is a small sample of my consul-template file:
backend hello-backend
    balance roundrobin
    mode http
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
    # Path stripping
    reqrep ^([^\ ]*)\ /hello/(.*) \1\ /\2

frontend http
    bind *:8080
    mode http
    acl url_hello path_beg /hello
    use_backend hello-backend if url_hello
Thank you for your help.
It isn't possible for HAProxy to resend a request that has already been sent to a backend.
Here's a forum post from Willy, the creator.
redispatch only happens when the request is still in haproxy. Once it has been sent, it cannot be performed. It must not be performed either for non-idempotent requests, because there is no way to know whether some processing has begun on the server before it died and returned an RST.
http://haproxy.formilux.narkive.com/nGKXq6WU/problems-with-haproxy-down-servers-and-503-errors
The post is quite old, but it's still applicable based on more recent discussions. If a request is larger than tune.bufsize (16,384 bytes by default), then HAProxy hasn't even retained the entire request in memory at the point an error occurs.
Both fortunately (for the craft) and unfortunately (for purposes of real-world utility), Willy has always insisted on correct behavior by HAProxy, and he is indeed correct that it is inappropriate to retry non-idempotent requests once they have been sent to a back-end server, because there are certainly cases where this would result in duplicate processing.
For GET requests, which by definition should be idempotent (a GET request must be repeatable without consequence; otherwise it should not have been designed to use GET -- it should have been POST or another verb), there's a viable argument that resending to a different back-end would be a legitimate course of action, but this is also not currently supported.
Varnish, by contrast, does support a do-over, which I have used (behind HAProxy) with success on GET requests where I have on-line and near-line storage for the same object namespace. Old, "unpopular" files are migrated to near-line (slower, cheaper) storage, but all requests are sent to on-line storage, with the retry destination of near-line if on-line returns a 404. But, I've never tried this with requests other than GET.
Ideally, your solution would be for your back-ends to be declared unhealthy, perhaps by deliberately failing their HTTP health checks for a draining period before shutting down. One fairly simple approach is for the health check to require the presence of a static file, which gets deleted from the back-end before shutdown. Or, you can ask HAProxy to put the backend into maintenance mode through the stats/admin UI or socket, which prevents new requests from being initiated while allowing running requests to drain.
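A sketch of the static-file variant just mentioned: the HAProxy health check fetches a file (e.g. option httpchk GET /healthz, with the file served by the backend), and the backend's shutdown handler deletes the file, waits long enough for HAProxy to mark it DOWN and for in-flight requests to finish, and only then exits. The path and the drain time below are assumptions.

# Sketch: fail the health check first, then drain, then exit.
import os
import signal
import sys
import time

HEALTH_FILE = "/var/www/healthz"   # served by the backend; its absence fails the check
DRAIN_SECONDS = 15                 # > check interval * fall count + longest expected request

def drain_and_exit(signum, frame):
    try:
        os.remove(HEALTH_FILE)     # health checks start failing from now on
    except FileNotFoundError:
        pass
    time.sleep(DRAIN_SECONDS)      # HAProxy stops sending new requests; running ones finish
    sys.exit(0)

signal.signal(signal.SIGTERM, drain_and_exit)   # SIGTERM is what `docker stop` sends first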
I'm very curious about how to implement the redirect code on a back-end server node.
For example: client A sends a request to web server C, and there is a load-balancer node B between A and C. So the flow is A=>B=>C=>A (not A=>B=>C=>B=>A). C actually receives its requests from B, so I'm wondering how C can create a socket to connect to A and send data back to A directly. I would highly appreciate it if you could share a code snippet for this. Thanks!
I think this is the question you are asking:
"I have multiple web servers behind a load balancer, so how can I create a persistent http socket connection to a back end server from a client without it being redirected to another server and therefore breaking connection?"
The answer to that question is cookie injection. For example, with HAProxy you can set a cookie depending on the server that the request is routed to first; the load balancer will then know to stick future requests from that client to the specified server.
An example in HAProxy backend configuration:
backend socket-servers
    timeout server 120s
    balance leastconn
    # based on cookie set in header
    # haproxy will add the cookies for us
    cookie SRVNAME insert
    server node-1 127.0.0.1:5000 cookie S1 check
    server node-2 127.0.0.1:5001 cookie S2 check
This example was taken from http://toon.io/configuring-haproxy-multiple-engine-io-servers/.
Upon a new request, HAProxy sees no cookie and routes to the server with the least number of connections. As it does so, it sets the SRVNAME cookie to the value configured for that server (S1 or S2). Every subsequent request from that client goes to the node identified by the cookie.
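A quick way to see the stickiness from the client side (a sketch; http://localhost:8080 stands in for whatever address the HAProxy frontend is bound to): a requests.Session stores the SRVNAME cookie from the first response and replays it, so every later request lands on the same backend.

# Sketch: the first response sets SRVNAME; the session replays it afterwards.
import requests

with requests.Session() as s:
    s.get("http://localhost:8080/")
    print("pinned to:", s.cookies.get("SRVNAME"))   # "S1" or "S2", per the config above

    for _ in range(3):
        s.get("http://localhost:8080/")             # sends Cookie: SRVNAME=... each time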