We have multiple source application servers from which requests come to our database (all using a single DB user). For two of these application servers the requests are not cleaned up, as we can see in the database, while for the rest of the app servers the requests come in and are cleaned up immediately. After analyzing how requests reach the database from each source, we suspect one source channel that uses a web service (HTTP requests).
As we understand it, HTTP is a stateless protocol and the connection is destroyed once the activity completes, but we suspect it is somehow not closing the DB connections.
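To illustrate what we suspect (Node.js and the pg connection pool below are only placeholders for whatever the web-service channel actually uses): if a handler checks a connection out of a pool and does not release it on every code path, the HTTP request completes and the socket closes, but the database session stays behind.

// Hypothetical web-service handler; the stack (Node.js + node-postgres) is an
// assumption used only to illustrate the suspected leak and its fix.
const http = require('http');
const { Pool } = require('pg');

const pool = new Pool({ max: 10 });

http.createServer(async (req, res) => {
  const client = await pool.connect();          // DB connection checked out here
  try {
    const { rows } = await client.query('SELECT 1');
    res.end(JSON.stringify(rows));
  } catch (err) {
    res.statusCode = 500;
    res.end('error');
  } finally {
    client.release();   // without this, the connection outlives the HTTP request
  }
}).listen(3000);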
Any help would be appreciated.
Related
When watching Kubernetes resources for changes, what exactly is happening under the hood? Does the HTTP connection suddenly change to a WSS connection?
To solve a problem of too many requests to the kube-apiserver I am rewriting some code to what I think is more of an operator pattern.
In our multi-tenant microservice architecture all services use the same library to look up connection details to tenant-specific DBs. The connection details are saved in secrets within the same namespace as the application. Every tenant DB has its own secret.
So on every call all secrets with the correct label are read and parsed for the necessary DB connection details. We have around 400 services/pods...
My idea: instead of reading all secrets on every call, create a cache and update the cache every time a relevant secret is changed, via a watcher.
My concern: am I just replacing the HTTP requests with equally expensive WebSockets? As I understand it, I will now have an open WebSocket connection for every service/pod, which is still 400 open connections.
Would it be better to have a proxy service watch the secrets (kube-apiserver requests) and have all services query that service for connection details (intranet requests, unrelated to the kube-apiserver)?
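For context, the watcher/cache I have in mind is roughly this sketch, using @kubernetes/client-node; the namespace and label selector are placeholders:

const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// name of the Secret -> decoded connection details
const cache = new Map();

function decode(data) {
  const out = {};
  for (const [key, value] of Object.entries(data || {})) {
    out[key] = Buffer.from(value, 'base64').toString('utf8');
  }
  return out;
}

async function startWatch() {
  const watch = new k8s.Watch(kc);
  await watch.watch(
    '/api/v1/namespaces/my-namespace/secrets',   // placeholder namespace
    { labelSelector: 'tenant-db=true' },         // placeholder label
    (type, secret) => {
      if (type === 'ADDED' || type === 'MODIFIED') {
        cache.set(secret.metadata.name, decode(secret.data));
      } else if (type === 'DELETED') {
        cache.delete(secret.metadata.name);
      }
    },
    (err) => {
      // the apiserver closes watches periodically; re-establish after a pause
      setTimeout(startWatch, 5000);
    }
  );
}

startWatch();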
From the sources:
// ServeHTTP serves a series of encoded events via HTTP with Transfer-Encoding: chunked
// or over a websocket connection.
Which protocol is used (chunked HTTP or WebSocket) pretty much depends on the client; both have their cost, which you'll have to compare against your current request frequency.
You may be better off with a proxy cache that either watches or polls at regular intervals, but that depends a lot on your application.
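If you go the proxy route, a rough sketch of that service could be as simple as one watcher (like the cache idea above) plus a plain HTTP endpoint inside the cluster, so the other ~400 pods never talk to the kube-apiserver directly; the route and port here are placeholders:

const http = require('http');

const cache = new Map();   // populated by a watcher like the one sketched above

http.createServer((req, res) => {
  // e.g. GET /tenants/<secret-name> -> decoded connection details
  const name = req.url.replace('/tenants/', '');
  const details = cache.get(name);
  if (!details) {
    res.statusCode = 404;
    return res.end('unknown tenant');
  }
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify(details));
}).listen(8080);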
I've been working for some time on a Sails web application.
So far I've overcome every issue through careful reading, trial and error.
Recently I had to install the app for a closed beta test on the client's EC2 free-tier instance, where it works just fine in development mode.
The app is behind an nginx proxy which listens on port 80 and proxies to http://server_IP:1337.
CORS and CSRF are enabled; allowOrigins and onlyAllowOrigins are set to the server IP, the web domain and localhost in production.js, security.js and sockets.js.
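Roughly, the relevant settings look like this (the origins below are placeholders for the real IP and domain):

// config/security.js (placeholder origins)
module.exports.security = {
  cors: {
    allRoutes: true,
    allowOrigins: ['http://1.2.3.4', 'https://example.com', 'http://localhost:1337'],
    allowCredentials: true
  },
  csrf: true
};

// config/sockets.js (placeholder origins)
module.exports.sockets = {
  onlyAllowOrigins: ['http://1.2.3.4', 'https://example.com', 'http://localhost:1337']
};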
But when switching to production mode, all API requests except GET return 403 Forbidden.
I've tried everything I could find on Google; it simply doesn't work in production, yet it works completely in development.
If anyone could shed some light on this it would be greatly appreciated.
EDIT:
Running the app with the log level set to silly showed this:
A socket is being allowed to connect, but the session could not be loaded. Creating an empty, one-time session to use for the life of this socket connection.
This log often shows up because a client socket from a previous lift or another Sails app is trying to reconnect (e.g. from an open browser tab), but the session indicated by its cookie no longer exists-- because either this app is not currently using a persistent session store like Redis, or the session entry has been removed from the session store (e.g. by a scheduled job or because it expired naturally).
Details:
Error: Session could not be loaded.
at Immediate._onImmediate (/var/www/allscubashops.com/node_modules/sails/lib/hooks/session/index.js:543:42)
at processImmediate (internal/timers.js:445:19)
Then I have deleted the old browser cookie and got this:
Could not fetch session, since connecting socket has no cookie in its handshake.
Generated a one-time-use cookie:
sails.sid=s%3APlHbdXvOZRo5yNlKPdFKkaPgVTNaNN8i.DwZzwHPhb1%2Fs9Am49lRxRTFjRqUzGO8UN90uC7rlLHs
and saved it on the socket handshake.
This means the socket started off with an empty session, i.e. (req.session === {})
That "anonymous" session will only last until the socket is disconnected. To work around this,
make sure the socket sends a cookie header or query param when it initially connects.
(This usually arises due to using a non-browser client such as a native iOS/Android app,
React Native, a Node.js script, or some other connected device. It can also arise when
attempting to connect a cross-origin socket in the browser, particularly for Safari users.
To work around this, either supply a cookie manually, or ignore this message and use an
approach other than sessions-- e.g. an auth token.)
Also no new cookie was set.
The apparent conclusion is that somehow in production mode something is wrong with setting the session.
EDIT 2:
The latest finding is that if I run the app without the nginx proxy, I do not have the forbidden API request issue, but I still have the one related to the session not being created.
I am sure the nginx proxy settings are OK, but now I am thinking of implementing Redis session storage instead of the default memory store and seeing what happens.
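For reference, this is roughly what I plan to try in config/env/production.js (the Redis URL is a placeholder, and the cookie/trustProxy comments reflect my understanding of running behind plain-HTTP nginx rather than a confirmed fix):

// config/env/production.js (sketch)
module.exports = {
  http: {
    trustProxy: true                 // the app sits behind the nginx proxy
  },
  session: {
    adapter: '@sailshq/connect-redis',
    url: 'redis://127.0.0.1:6379',   // placeholder
    cookie: {
      // with secure: true the cookie is only set on requests Sails sees as
      // HTTPS; behind a plain-HTTP proxy on port 80 that never happens
      secure: false,
      maxAge: 24 * 60 * 60 * 1000
    }
  }
};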
EDIT 3:
I have implemented Redis sessions, which work in both dev and prod modes.
Still the same situation: the EC2 instance without the nginx proxy works in production mode, while the same files (replicated via git) on the EC2 instance with the nginx proxy do not work in production mode (API requests return 403 Forbidden) but work great in development mode.
The X-CSRF-Token header is sent (see screenshot).
The Sails error message I get in production (besides the 403 Forbidden network error for all requests except GET) is:
A socket is being allowed to connect, but the session could not be loaded. Creating an empty, one-time session to use for the life of this socket connection.
This log often shows up because a client socket from a previous lift or another Sails app is trying to reconnect (e.g. from an open browser tab), but the session indicated by its cookie no longer exists-- because either this app is not currently using a persistent session store like Redis, or the session entry has been removed from the session store (e.g. by a scheduled job or because it expired naturally).
Details:
Error: Session could not be loaded.
at /var/www/example.com/node_modules/sails/lib/hooks/session/index.js:543:42
at Command.callback (/var/www/example.com/node_modules/@sailshq/connect-redis/lib/connect-redis.js:148:25)
at normal_reply (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:714:21)
at RedisClient.return_reply (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:816:9)
at JavascriptRedisParser.returnReply (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:188:18)
at JavascriptRedisParser.execute (/var/www/example.com/node_modules/redis-parser/lib/parser.js:574:12)
at Socket.<anonymous> (/var/www/example.com/node_modules/machinepack-redis/node_modules/redis/index.js:267:27)
at Socket.emit (events.js:193:13)
at addChunk (_stream_readable.js:296:12)
at readableAddChunk (_stream_readable.js:277:11)
at Socket.Readable.push (_stream_readable.js:232:10)
at TCP.onStreamRead (internal/stream_base_commons.js:150:17)
Therefore I assume that the sockets connect but the session is not created.
Redis works OK; I can see sessions in it when running in development.
Have you exposed the CSRF endpoint, and are you making a call to that endpoint first to get a token before making further requests? This tripped me up once.
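In case it helps, the pattern I mean looks roughly like this; the route, action and front-end code below are only an illustration of the Sails v1 convention, not your actual setup.

// config/routes.js (illustration): expose an endpoint that grants a CSRF token
module.exports.routes = {
  'GET /csrfToken': { action: 'security/grant-csrf-token' }
};

// front-end (illustration): fetch the token first, then send it on mutating requests
async function putSomething(url, body) {
  const { _csrf } = await fetch('/csrfToken', { credentials: 'include' })
    .then((r) => r.json());
  return fetch(url, {
    method: 'PUT',
    credentials: 'include', // so the sails.sid session cookie is sent too
    headers: { 'Content-Type': 'application/json', 'X-CSRF-Token': _csrf },
    body: JSON.stringify(body)
  });
}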
I have an app with a backend and a frontend that communicate with each other, developed on WildFly.
When I make a PUT request from the backend to the frontend, the request does not arrive and returns a 500 error. However, POST requests work correctly and do not cause any problems.
I have checked the WildFly configuration, i.e. the server configuration file (standalone.xml), but I cannot figure out where the problem may be.
How can I fix this?
So you are saying that when the backend sends a PUT request the response has status 500, while when it sends a POST request to the same URL the response has status 2XX?
That does not smell like a connectivity issue. It seems more likely that your PUT request gets blocked somewhere. Try to find out where that 500 error is coming from: it could be a proxy or firewall in between client and server, or it could even be that WildFly or the application is not configured to respond to PUT.
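One quick way to narrow it down (the URL and payload below are placeholders) is to send the same PUT straight to the application port, bypassing anything in between, and compare the status and body with what the frontend sees:

// Node.js 18+ sketch (hypothetical endpoint/payload): PUT directly against
// the WildFly port to tell whether the 500 is produced by the application
// or by something in front of it.
const payload = JSON.stringify({ example: true });

fetch('http://localhost:8080/api/resource/42', {   // placeholder endpoint
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: payload
}).then(async (res) => {
  console.log('status:', res.status);
  console.log('server:', res.headers.get('server')); // header often hints at who answered
  console.log(await res.text());                     // body may show a stack trace or a proxy error page
});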
I'm working on a microservice architecture based on Docker, Registrator, Consul and HAProxy.
I'm also using consul-template to dynamically generate the HAProxy config file. Everything works fine: when I add multiple instances of the same microservice, the HAProxy configuration is updated immediately and requests are dispatched correctly using a round-robin strategy.
My problem occurs when I remove some instances (scale down). If a container is shut down while a request is in flight, I get an error.
I'm new to HAProxy, so is there a way to configure HAProxy to retry a failed request against another endpoint if a container disappears?
To be precise: I'm using layer-7 routing (mode http) for my frontends and backends. Here is a small sample of my consul-template file:
backend hello-backend
    balance roundrobin
    mode http
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
    # Path stripping
    reqrep ^([^\ ]*)\ /hello/(.*) \1\ /\2

frontend http
    bind *:8080
    mode http
    acl url_hello path_beg /hello
    use_backend hello-backend if url_hello
Thank you for your help.
It isn't possible for HAProxy to resend a request that has already been sent to a backend.
Here's a forum post from Willy, the creator.
redispatch only happens when the request is still in haproxy. Once it has been sent, it cannot be performed. It must not be performed either for non-idempotent requests, because there is no way to know whether some processing has begun on the server before it died and returned an RST.
http://haproxy.formilux.narkive.com/nGKXq6WU/problems-with-haproxy-down-servers-and-503-errors
The post is quite old, but it's still applicable based on more recent discussions. If a request is larger than tune.bufsize (the default is around 16 kB, if I recall correctly), then HAProxy hasn't even retained the entire request in memory at the point an error occurs.
Both fortunately (for the craft) and unfortunately (for purposes of real-world utility), Willy has always insisted on correct behavior by HAProxy, and he is indeed correct that it is inappropriate to retry non-idempotent requests once they have been sent to a back-end server, because there are certainly cases where this would result in duplicate processing.
For GET requests which, by definition, should be idempotent (a GET request must be repeatable without consequence, otherwise it should not have been designed to use GET -- it should have been POST or another verb) there's a viable argument that resending to a different back-end would be a legitimate course of action, but this also is not currently supported.
Varnish, by contrast, does support a do-over, which I have used (behind HAProxy) with success on GET requests where I have on-line and near-line storage for the same object namespace. Old, "unpopular" files are migrated to near-line (slower, cheaper) storage, but all requests are sent to on-line storage, with the retry destination of near-line if on-line returns a 404. But, I've never tried this with requests other than GET.
Ideally, your solution would be for your back-ends to be declared unhealthy, perhaps by deliberately failing their HTTP health checks for a draining time before shutting down. One fairly simple approach is for the health check to require the presence of a static file, which gets deleted from the back-end before shutdown. Or, you can request HAProxy consider the backend to be in maintenance mode through the stats/admin UI or socket, preventing more requests from being initiated while allowing running requests to drain.
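As an illustration of the "fail the check before shutdown" idea (the stack below is arbitrary; the real requirements are an HTTP check in the backend, e.g. option httpchk GET /health, and a stop signal you can hook into):

// Sketch of a draining health check inside each microservice: on SIGTERM we
// start answering 503 so HAProxy marks the server DOWN, wait long enough for
// the next health check and any in-flight requests, then exit.
const http = require('http');

let draining = false;

const server = http.createServer((req, res) => {
  if (req.url === '/health') {
    res.statusCode = draining ? 503 : 200;
    return res.end(draining ? 'draining' : 'ok');
  }
  res.end('hello');
});

server.listen(8080);

process.on('SIGTERM', () => {
  draining = true;
  // grace period: longer than the HAProxy check interval plus the longest expected request
  setTimeout(() => server.close(() => process.exit(0)), 15000);
});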
I have a problem with my Azure AppFabric local development server. I'm using WCF REST and a Comet architecture on my server. I have a service method called "GetUpdates". When a client sends a GET request to this method, it waits until a server-side update notification releases it.
There is no problem with the Azure server in the cloud, but my local development server stops responding to all requests (HTML pages, JS files, images, web service requests, etc.) after two concurrent open requests are sent to the "GetUpdates" method.
How can I solve this issue on the local development server? Any idea what is causing it?