Sensu: Registering a client doesn't raise a keepalive warning after the delay

When I create a Sensu client with the POST /clients API resource, it shows up in the Uchiwa dashboard, but I expect the keepalive check to fail after the default 180 seconds, because there is no actual Sensu client checking in.
How do I get the keepalive check to start failing?
Adding keepalive to the client's subscriptions at registration succeeds, but has no effect.

The Sensu server does not enable keepalive support for clients created via the API: https://github.com/sensu/sensu/blob/master/lib/sensu/server/process.rb#L476
# Create a blank client (data) and add it to the client
# registry. Only the client name is known, the other client
# attributes must be updated via the API (POST /clients:client).
# Dynamically created clients and those updated via the API will
# have client keepalives disabled, `:keepalives` is set to
# `false`.
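
For reference, a minimal sketch of the registration call in question (Python with the requests library; the host name is a placeholder and 4567 is the default Sensu API port):

import requests

# Hypothetical client definition; adjust name/address to your environment.
client = {
    "name": "external-device-01",
    "address": "10.0.0.42",
    "subscriptions": ["keepalive"],  # accepted, but has no effect (see above)
}

# The server registers this client with :keepalives forced to false,
# so no keepalive warning will ever be raised for it.
resp = requests.post("http://sensu-api.example.com:4567/clients", json=client)
print(resp.status_code, resp.text)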

Related

In Mirth (NextGen Connect), how do I configure the HTTP URL of an HTTP Listener?

The manual says this about the HTTP URL value of an http listener:
"Displays the generated HTTP URL for the HTTP Listener. This is not an actual
configurable setting, but is instead displayed for copy/paste convenience. Note
that the host in the URL will be the same as the host you used to connect to
the Administrator. The actual host that connecting clients use may be different
due to differing networking environments."
When I have used this feature in the past, its value has always begun with "http://localhost:", which would be great, except this time it is auto-generating "http://'domainName':${Incoming_Pathology_Source_Port}/${Incoming_Pathology_Source_BaseContextPath}/".
For the first time, we are deploying Mirth inside a Kubernetes cluster, i.e. 'a different networking environment' (nginx accepts https, and we want it to pass the messages on to Mirth as http).
Is there any way I can take control of the URL, or must I change the configuration of the cluster in some way?
All help/suggestions welcome.

Accessing IBM API Connect endpoint through Postman

I just created a REST API in API Connect, and the endpoint works when I test it in the APIC Assemble tab. It requires a client id and client secret. When I send a request through Postman, I get a “Could not get any response” message, whether I try to add them as header values or via OAuth authorization. I’m using the request endpoint that’s displayed when I hit the debug button from the successful response on the Assemble tab. Is this the correct endpoint to use? And how do I properly include the client id and client secret in a Postman request?
If you get a "Could not get any response in Postman", that means that Postman can't reach the destination of the request.
There are several reasons for that:
Is it an intranet or internet endpoint?
Are you using a proxy? (check proxy config)
Is the hostname resolvable? (try ip)
If it is an https
endpoint, with a self signed certificate, check if you have SSL
Certificate verification enabled (Settings-> general)
On the other hand, to send the client-id and client-secret headers, just click on Headers tab and add both (see the following picture)
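
For example, the equivalent request outside Postman (a sketch in Python; the URL is hypothetical, and X-IBM-Client-Id / X-IBM-Client-Secret are common API Connect defaults, but check your API's Security Definitions, as noted further down):

import requests

# Hypothetical gateway endpoint; use the one shown in the API Connect dashboard.
url = "https://api.example.com/myorg/sandbox/myapi/resource"

headers = {
    # Default API Connect header names; the actual 'Parameter Name' values
    # come from the API's Security Definitions and may differ.
    "X-IBM-Client-Id": "<client-id>",
    "X-IBM-Client-Secret": "<client-secret>",
}

# verify=False mirrors disabling SSL certificate verification in Postman;
# use it only for gateways with self-signed certificates.
resp = requests.get(url, headers=headers, verify=False)
print(resp.status_code, resp.text)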
Please check the following to get access to services published from API Connect:
- The service needs to allow invocation from Postman (i.e. from the system you are invoking from).
- Check the web-api MPGW service in the DataPower default domain (created when you configure API Connect with DataPower): have you created an access control list on the front-side handler?
- Disable SSL certificate verification in Postman; this can sometimes cause problems, since services exposed from API Connect are served over SSL.
From the error you are getting, I suspect there is no connection, or only one-way traffic is enabled, meaning the response is being blocked. If there were an issue with the request parameters you are sending, the error would be different, e.g. wrong client id or client secret.
Testing an API on-boarded from API Connect is straightforward, the same as invoking any other REST service.
Thx Srikanth
I needed to include the client id and client secret in the headers using the correct names for them, which are specified when creating/editing the API under the 'Security Definitions' category as 'Parameter Name'.
I was also hitting the wrong endpoint. To find the correct endpoint, click the hamburger icon in the upper left of the API Connect website, select Dashboard, click on the environment you want (such as Sandbox or Dev), click Settings, then Gateway; there you'll see the endpoint.

HAProxy & Consul-template : retry request when scaling down

I'm working on a microservice architecture based on Docker, Registrator, Consul, and HAProxy.
I'm also using consul-template to dynamically generate the HAProxy config file. Everything works fine: when I add multiple instances of the same microservice, the HAProxy configuration is updated immediately and requests are dispatched correctly using a round-robin strategy.
My problem occurs when I remove some instances (scale down): if a container is shut down while a request is in flight, I get an error.
I'm new to HAProxy, so is there a way to configure HAProxy to retry a failing request against another endpoint if a container disappears?
To be precise: I'm using layer 7 routing (mode http) for my frontends and backends. Here is a small sample of my consul-template file:
backend hello-backend
    balance roundrobin
    mode http
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}
    # Path stripping
    reqrep ^([^\ ]*)\ /hello/(.*) \1\ /\2

frontend http
    bind *:8080
    mode http
    acl url_hello path_beg /hello
    use_backend hello-backend if url_hello
Thank you for your help.
It isn't possible for HAProxy to resend a request that has already been sent to a backend.
Here's a forum post from Willy, the creator.
redispatch only happens when the request is still in haproxy. Once it has been sent, it cannot be performed. It must not be performed either for non idempotent requests, because there is no way to know whether some processing has begun on the server before it died and returned an RST.
http://haproxy.formilux.narkive.com/nGKXq6WU/problems-with-haproxy-down-servers-and-503-errors
The post is quite old, but it is still applicable based on more recent discussions. Note also that if a request is larger than tune.bufsize (16,384 bytes by default), HAProxy will not even have retained the entire request in memory at the point an error occurs.
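
What HAProxy does support is retrying connections that fail before the request has been sent to a server. A sketch using standard directives, applied here to the backend from the question:

backend hello-backend
    balance roundrobin
    mode http
    # Retry a failed *connection attempt* up to 3 times...
    retries 3
    # ...and allow the retry to be redispatched to a different server,
    # instead of insisting on the one that just went away.
    option redispatch
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}

This only covers requests that never reached a server; once the request has been written to a back-end, the rules quoted above apply.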
Both fortunately (for the craft) and unfortunately (for purposes of real-world utility), Willy has always insisted on correct behavior by HAProxy, and he is indeed correct that it is inappropriate to retry non-idempotent requests once they have been sent to a back-end server, because there are certainly cases where this would result in duplicate processing.
For GET requests, which by definition should be idempotent (a GET request must be repeatable without consequence; otherwise the endpoint should not have been designed to use GET, but rather POST or another verb), there's a viable argument that resending to a different back-end would be a legitimate course of action, but this also is not currently supported.
Varnish, by contrast, does support a do-over, which I have used (behind HAProxy) with success on GET requests where I have on-line and near-line storage for the same object namespace. Old, "unpopular" files are migrated to near-line (slower, cheaper) storage, but all requests are sent to on-line storage, with the retry destination of near-line if on-line returns a 404. But, I've never tried this with requests other than GET.
Ideally, your solution would be for your back-ends to be declared unhealthy, perhaps by deliberately failing their HTTP health checks for some draining time before shutting down. One fairly simple approach is for the health check to require the presence of a static file, which gets deleted from the back-end before shutdown; a sketch follows. Alternatively, you can ask HAProxy to consider the back-end to be in maintenance mode through the stats/admin UI or socket, preventing more requests from being initiated while allowing running requests to drain.
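
A sketch of the static-file health check (the directives are standard HAProxy; the /health/ok path is an assumption, and the consul-template loop is taken from the question's config):

backend hello-backend
    balance roundrobin
    mode http
    # The check fails with a 404 once /health/ok is deleted on the instance;
    # after "fall" consecutive failures the server stops receiving new requests.
    option httpchk GET /health/ok
    default-server inter 2s fall 2 rise 3
    {{range service "HelloWorld" }}server {{.Node}} {{.Address}}:{{.Port}} check
    {{end}}

For the maintenance-mode route, the admin socket accepts, for example (assuming a stats socket configured at admin level):

echo "disable server hello-backend/node-1" | socat stdio /var/run/haproxy.sock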

How to implement code that redirects a client's request to another node

I'm very curious about how to implement redirect code on a back-end server node.
For example: client A requests web server C, and there is a load-balancing node B between A and C, so the flow is A => B => C => A (not A => B => C => B => A). C actually gets its requests from B, so I'm wondering how C creates a socket to connect to A and sends data back to A directly. I would highly appreciate it if you could share a code snippet for this. Thanks!
I think this is the question you are asking:
"I have multiple web servers behind a load balancer, so how can I create a persistent http socket connection to a back end server from a client without it being redirected to another server and therefore breaking connection?"
The answer to that question is cookie injection. For example, with HAProxy you can set a cookie based on the server a request is first routed to; the load balancer then knows to stick that client's future requests to the same server.
An example in HAProxy backend configuration:
backend socket-servers
    timeout server 120s
    balance leastconn
    # based on cookie set in header
    # haproxy will add the cookies for us
    cookie SRVNAME insert
    server node-1 127.0.0.1:5000 cookie S1 check
    server node-2 127.0.0.1:5001 cookie S2 check
This example was taken from http://toon.io/configuring-haproxy-multiple-engine-io-servers/.
Upon a new request, HAProxy sees no cookie and routes to the best server, here the one with the least number of connections. As it does so, it sets a cookie SRVNAME=S1 or SRVNAME=S2 depending on the server the request goes to (the cookie values come from the cookie arguments on the server lines). Every subsequent request from the client then goes to the node specified in the cookie.

Behaviour of SAML when HTTP Server used for high availability

I have implemented SAML SSO support, with my application acting as the Service Provider, using the Spring Security SAML Extension. I was able to integrate my SP with different IDPs. So, for example, I have HostA, HostB, and HostC, all running different instances of my application. I had an SP metadata file specified for each host and set the AssertionConsumerServiceURL to the URL of that host (e.g. https://HostA.com/myapp/saml/sso). I added each metadata file to the IDP, tested all of them, and everything worked fine.
However, my project also supports high availability by having an IBM HTTP Server configured for load balancing. In this case the HTTP Server is configured with hosts A, B, and C as the load-balanced members, and the user accesses my application using the URL of the HTTP Server: https://httpserver.com/myapp/
If I defined one SP metadata file, specified the URL of the HTTP Server in the AssertionConsumerServiceURL (https://httpserver.com/saml/sso), and changed my implementation to accept assertions targeted at my HTTP Server, what would be the outcome of this scenario?
1. The user accesses the HTTP Server, which dispatches the user to HostA (behind the scenes).
2. My SP application on HostA sends a request to the IDP for authentication.
3. The IDP sends the response back to my HTTP Server at https://httpserver.com/saml/sso.
4. Will the HTTP Server redirect to HostA, so that the URL becomes https://HostA.com/saml/sso?
Thanks.
When deploying the same instance of an application in clustered mode behind a load balancer, you need to tell the back-end applications the public URL on the HTTP Server (https://httpserver.com/myapp/) which they are deployed behind. You can do this using the SAMLContextProviderLB (see the manual for details), but you seem to have already performed this step successfully.
Once your HTTP Server receives a request, it will forward it to one of your hosts at a URL such as https://HostA.com/saml/sso, and will usually also provide the original URL in an HTTP header. The SAMLContextProviderLB makes the SP application treat the real URL as https://httpserver.com/saml/sso, which lets it pass all the SAML security checks related to the destination URL.
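
For reference, a sketch of that context provider in Spring XML (the values are assumptions taken from the scenario above; see the Spring Security SAML manual for the full set of properties):

<!-- Makes the SP build and validate URLs against the load balancer's public address -->
<bean id="contextProvider"
      class="org.springframework.security.saml.context.SAMLContextProviderLB">
    <property name="scheme" value="https"/>
    <property name="serverName" value="httpserver.com"/>
    <property name="serverPort" value="443"/>
    <property name="includeServerPortInRequestURL" value="false"/>
    <property name="contextPath" value="/myapp"/>
</bean>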
As the back-end applications store state in their HttpSessions, make sure to do one of the following:
- enable sticky sessions on the HTTP Server (so that related requests are always directed to the same server)
- replicate the HTTP session across your cluster
- disable checking of the response ID by including the bean EmptyStorageFactory in your Spring configuration (this option also makes Single Logout unavailable); see the sketch below
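
For the last option, a minimal sketch of the bean declaration (the class ships with the Spring Security SAML Extension):

<!-- Disables local storage of SAML messages, which skips response ID checking -->
<bean id="samlStorageFactory"
      class="org.springframework.security.saml.storage.EmptyStorageFactory"/>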