How to pass the SSO page using Uptime Kuma? - single-sign-on

I have set up Uptime Kuma to monitor a few websites. Now there is a website which is reachable only through SSO when accessed manually. When I try to monitor this website, it does not give me the correct status. How can I get past this page using Uptime Kuma and get the correct status?

With an HTTP monitor, try to find a public resource as a target, e.g. yoursite.com/favicon.ico or something similar.
If that is not possible, and as long as your Uptime Kuma server is publicly accessible, you'll have to do it with a push monitor; check this GitHub issue.

Uptime Kuma's HTTP(s) monitor type supports basic auth and NTLM authentication.
If that does not work, you will have to set up a heartbeat monitor: basically, the webapp will push a heartbeat to Uptime Kuma.
Add a new monitor, then select "Push" in the monitor type menu as in this screenshot:
The webapp behind the SSO should make a GET request to the push URL every n seconds, as set in the heartbeat interval for this monitor (see the sketch after the links below).
See more here:
https://github.com/louislam/uptime-kuma/issues/279
https://github.com/louislam/uptime-kuma/issues/558
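For example, a minimal sketch of that push call run from a host behind the SSO (the Uptime Kuma hostname and the push token below are made-up placeholders; copy the real push URL from the monitor's settings):

# Cron entry on the SSO-protected app host: ping the push monitor every minute.
* * * * * curl -fsS "https://uptime.example.com/api/push/abc123?status=up&msg=OK&ping=" > /dev/null 2>&1

Uptime Kuma will mark the monitor as down whenever no request arrives within the configured heartbeat interval, so nothing else is needed on the Kuma side.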

Related

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

The Ghost blog platform has a setting that allows you to change the admin panel login location (which defaults to: https://whateveryoursiteis.com/ghost). The methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
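For reference, that setting lives in Ghost's config file; a minimal sketch (both domains are placeholders):

{
  "url": "https://whateveryoursiteis.com",
  "admin": {
    "url": "https://youradminsite.com"
  }
}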
However — when using the above methodology, the API URL that is used for search etc. is ALSO modified, meaning all requests to the Ghost API will also be forwarded to the alternate domain (not just admin access).
My question is — what is the best way to achieve a redirect of the admin URL to a different domain / protocol while allowing the API URL used by Ghost to remain the same?
More background.
We are running ghost on top of GKE (Google Kubernetes Engine) on a Multi-Region Ingress which allows us to dump our CloudSQL DB down to a SQLite file and then build that database into our production Docker Containers which are then deployed to the different Kubernetes nodes that are fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database / container on content change (not just on code change), we need a separate Admin URL backed by Cloud SQL where we can persist / modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard ghost redirects (created via: https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (ie. https://whateveryoursiteis.com/ghost) to a different domain (ie. https://youradminsite.com/ghost)?
Another Related GKE / GCE-Ingress Question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE without adding an nginx container, etc.?
That will be my first attempt after posting this — but I figured either way maybe it helps another ghost platform fan down the line someplace — I will attempt to respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question whether it's possible to create 301 redirects without adding an nginx container, I can suggest using Istio; find out more information about traffic routing here.
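As a rough illustration of that suggestion (a sketch only, not tested against this setup; the host names and gateway name are placeholders), an Istio VirtualService can express the 301 redirect:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ghost-admin-redirect
spec:
  hosts:
  - whateveryoursiteis.com
  gateways:
  - ghost-gateway          # assumed existing Istio gateway
  http:
  - match:
    - uri:
        prefix: /ghost
    redirect:
      authority: youradminsite.com
      redirectCode: 301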
OK. So as it turns out, the Ghost team currently has things set up to point API connections at the Admin URL. So if you change your Admin URL, expect your clients to attempt to connect to that URL.
I am going to raise the potential of splitting these off as a feature request over on the Ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred as 'official docker image' is not something that we as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's by design and not really a bug. Introducing configuration options for each API Ghost instance hosts would be a feature and should be discussed at our forum first 👍 I think it's a nice idea to be able to serve APIs from different host, but it's not something that is within our priorities at the moment.
In case you need more granular handling of admin site, you could introduce those on your proxy level and for example, handle requests that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
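To make the proxy-level suggestion from that response concrete, here is a rough nginx sketch (assuming an nginx reverse proxy in front of Ghost; the host names, upstream name, and port are placeholders):

# Keep the Ghost APIs on the public domain, bounce the admin UI to the admin domain.
upstream ghost_backend {
    server 127.0.0.1:2368;   # assumed Ghost port
}
server {
    listen 443 ssl;
    server_name whateveryoursiteis.com;

    # API requests keep hitting this domain
    location /ghost/api/ {
        proxy_pass http://ghost_backend;
    }

    # The admin UI is redirected to the dedicated admin domain
    location /ghost/ {
        return 301 https://youradminsite.com$request_uri;
    }

    location / {
        proxy_pass http://ghost_backend;
    }
}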
I haven't looked into what it would take to implement the feature, but the suggestion on proxying the request could work... if only I didn't need to run on GKE multi-region (which requires use of GCE-Ingress, which doesn't have support for redirection, hah!). This would be relatively easy to solve with the nginx ingress.
Hopefully this helps someone — I will update as I work through the process. As of now I solved it by dumping my GCP CloudSQL database down to a SQLite db file during build time (thereby allowing me to keep my admin instance clean and separate from the API endpoint — which for me remains the same URL).

Setting up load-balancer based on authenticated users

I'm trying to set up a load balancer that would redirect certain users to a specific version of an application. So far I was using a Blue/Green deployment strategy (so once I made a new version of the app I created a new environment and redirected traffic there). Now I would like to change this approach. I want to be able to specify users (more experienced or whatever) that would see the new site after authentication, while the others would still be redirected to the old one. If something goes wrong with the new version, all users will see the old version. Currently my load balancing is done in Apache and authentication is done at the application level. So is this even possible? I know I could hardcode it in the application, but what if there is a bug in the new feature and new users are still being redirected there? I would then need to stop the application for all users and roll back to the old version, and that's bad I guess. I was thinking about using an external CAS, however I didn't find any information on whether it would be possible then. So I would like to ask: is it possible, and are there any tools (maybe some Apache plugin) for that purpose?
Here's a working solution with nginx
create conf.d/balancer.conf
put the code into it (see below)
docker run -p8080:8080 -v ~/your_path/conf.d:/etc/nginx/conf.d openresty/openresty:alpine
use curl to play with it
balancer.conf:
map $cookie_is_special_user $upstream {
    default http://example.com;
    ~^1$ http://scooterlabs.com/echo;
}

server {
    listen 8080;
    resolver 8.8.8.8;
    location / {
        proxy_pass $upstream;
    }
}
testing
curl --cookie "is_special_user=1" http://localhost:8080
It will return the contents of scooterlabs.com/echo, which dumps the request it receives
curl http://localhost:8080
Produces the contents of example.com
explanation
the idea is that the backend app sets a special cookie for the users you treat as special, after they get authorized as usual
of course this only works if both app versions are served on the same domain, so that the cookie is seen by both versions
after that you balance them to the desired server depending on the cookie value
you can easily disable such routing by tweaking your nginx config file
with this approach you can come up with even more complex scenarios, like setting random cookie values in the range 1-10 and then gradually switching some of the special users in your config file, e.g. start with those having value 1, after that 1-2, etc.
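For completeness, a sketch of the cookie the backend could set after a successful login for the selected users (the cookie name matches the nginx map above; the domain is a placeholder):

Set-Cookie: is_special_user=1; Path=/; Domain=.example.com; Secure; HttpOnly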

How to use alchemyAPI news data in Bluemix Node-RED?

I am using the Bluemix environment and the Node-RED flow editor. While trying to use the Feature Extract node that comes built into Node-RED for the AlchemyAPI service, I am finding it hard to use.
I tried connecting it to the HTTP request node, HTTP response node, etc., but with no result. Maybe I am not completing the connection procedure correctly?
I need this flow to get news from Twitter and from the AlchemyAPI news data for specific companies, give each item a sentiment score, and store the results in IBM HDFS.
Here is the code:
[{"id":"8bd03bb4.742fc8","type":"twitter
in","z":"5fa9e76b.a05618","twitter":"","tags":"Ashok Leyland, Tata
Communication, Welspun, HCL Info,Fortis H, JSW Steel, Unichem Lab,
Graphite India, D B Realty, Eveready Ind, Birla Corporation, Camlin
Fine Sc, Indian Economy, Reserve Bank of India, Solar Power,
Telecommunication, Telecom Regulatory Authority of
India","user":"false","name":"Tweets","topic":"tweets","x":93,"y":92,"wires":[["f84ebc6a.07b14"]]},{"id":"db13f5f.f24ec08","type":"ibm
hdfs","z":"5fa9e76b.a05618","name":"Dec12Alchem","filename":"/12dec_alchem","appendNewline":true,"overwriteFile":false,"x":564,"y":226,"wires":[]},{"id":"4a1ed314.b5e12c","type":"debug","z":"5fa9e76b.a05618","name":"","active":true,"console":"false","complete":"false","x":315,"y":388,"wires":[]},{"id":"f84ebc6a.07b14","type":"alchemy-feature-extract","z":"5fa9e76b.a05618","name":"TrailRun","page-image":"","image-kw":"","feed":true,"entity":true,"keyword":true,"title":true,"author":"","taxonomy":true,"concept":true,"relation":"","pub-date":"","doc-sentiment":true,"x":246,"y":160,"wires":[["c0d3872.f3f2c78"]]},{"id":"c0d3872.f3f2c78","type":"function","z":"5fa9e76b.a05618","name":"To
mark tweets","func":"msg.payload={tweet:
msg.payload,score:msg.features};\nreturn
msg;\n","outputs":1,"noerr":0,"x":405,"y":217,"wires":[["db13f5f.f24ec08","4a1ed314.b5e12c"]]},{"id":"4181cf8.fbe7e3","type":"http
request","z":"5fa9e76b.a05618","name":"News","method":"GET","ret":"obj","url":"https://gateway-a.watsonplatform.net/calls/data/GetNews?apikey=&outputMode=json&start=now-1d&end=now&count=1&q.enriched.url.enrichedTitle.relations.relation=|action.verb.text=acquire,object.entities.entity.type=Company|&return=enriched.url.title","x":105,"y":229,"wires":[["f84ebc6a.07b14"]]},{"id":"53cc794e.ac3388","type":"inject","z":"5fa9e76b.a05618","name":"GetNews","topic":"News","payload":"","payloadType":"string","repeat":"","crontab":"","once":false,"x":75,"y":379,"wires":[["4181cf8.fbe7e3"]]}]
First you have to bind an AlchemyAPI service instance to your Node-RED application.
Then you can develop your application; here is an example using the HTTP and Feature Extract nodes:
Here is the node flow for this basic sample if you want to try:
[{"id":"e191029.f1e6f","type":"function","z":"2fc2a93f.d03d56","name":"","func":"msg.payload = msg.payload.url;\nreturn msg;","outputs":1,"noerr":0,"x":276,"y":202,"wires":[["12082910.edf7d7"]]},{"id":"12082910.edf7d7","type":"alchemy-feature-extract","z":"2fc2a93f.d03d56","name":"","page-image":"","image-kw":"","feed":"","entity":true,"keyword":true,"title":true,"author":true,"taxonomy":true,"concept":true,"relation":true,"pub-date":true,"doc-sentiment":true,"x":484,"y":203,"wires":[["8a3837f.f75c7c8","d164d2af.2e9b3"]]},{"id":"8a3837f.f75c7c8","type":"debug","z":"2fc2a93f.d03d56","name":"Alchemy Debug","active":true,"console":"true","complete":"true","x":736,"y":156,"wires":[]},{"id":"fb988171.04678","type":"http in","z":"2fc2a93f.d03d56","name":"Test Alchemy","url":"/test_alchemy","method":"get","swaggerDoc":"","x":103.5,"y":200,"wires":[["e191029.f1e6f"]]},{"id":"d164d2af.2e9b3","type":"http response","z":"2fc2a93f.d03d56","name":"End Test Alchemy","x":749,"y":253,"wires":[]}]
You can use curl to test it, for example:
curl -G http://yourapp.mybluemix.net/test_alchemy?url=<your url here>
or use your browser as well:
http://yourapp.mybluemix.net/test_alchemy?url=http://myurl_to_test_alchemy
You can see the results in the Node-RED debug tab, or you can see them in the application logs:
$ cf logs yourapp --recent

How to check HTTP response code in zabbix?

I have a Zabbix server 2.2 and a few Linux hosts with websites. How can I get a notification from Zabbix if the HTTP(s) response code is not 200?
I've tried those triggers without any success:
{owncloud:web.test.rspcode[Availability of owncloud,owncloud availability].last(,10)}#200
{owncloud:web.test.error[Availability of owncloud].count(10,200)}<1
{owncloud:web.test.error[Availability of owncloud].last(#1,10)}=200
But nothing works. I never got a notification that the code was no longer 200, even though it was 404 because I had renamed ownCloud's index.php to index2.php.
I configured the Application and the Web Scenario as follows:
If you have already configured the host, go to step 1.
1) Select the host via Configuration -> Host groups -> select host (example: server 1)
2) Go to Configuration > Hosts > [Host Created Above] > Applications and click on Create Application
3) Now you have to create the Web Scenario with the status code check; in my case I check for status code 200. So go to Configuration > Hosts > [Host Created Above] > Web Scenarios and click on Create Web Scenario.
Remark: you have to select the application created in step 2
4) After that, without clicking the Add button, go to the Steps window and configure the host and parameters for the check. After that, click on Add. In my case I check for a status code 200 response to the HTTP request.
I found the issue. You need to specify the URL to check including the file. For example, like this in your web scenario:
https://owncloud.example.com/index.php
"Note that Zabbix frontend uses JavaScript redirect when logging in, thus first we must log in, and only in further steps we may check for logged-in features. Additionally, the login step must use full URL to index.php file." - https://www.zabbix.com/documentation/2.4/manual/web_monitoring/example
I also used following expression as trigger:
{owncloud:web.test.fail[Availability of owncloud].last()}>0
You have to set a trigger by expression; to be notified when the response code is not 200, the trigger should fire on anything other than 200:
{host name:web.test.rspcode[Scenario name,Step name].last()}<>200
The question has been answered adequately, but I will provide a much more advanced solution that you can use for all HTTP status codes.
I've created an item that monitors all HTTP status codes of a proxy, graphs them, and then set up multiple types of triggers to watch the last value and counts in the last N minutes.
The regex I used to extract all the values from an Nginx or Apache access log is:
^(\S+) (\S+) (\S+) \[([\w:\/]+\s[+\-]\d{4})\] \"(\S+)\s?(\S+)?\s?(\S+)?\" (\d{3}|-) (\d+|-)\s?\"?([^\"]*)\"?\s?\"?([^\"]*)\"?\s
I then set up many triggers relevant to my particular situation:
101 Switching Protocols
301 Moved Permanently
302 Redirect
304 Not Modified
400 Bad Request
401 Unauthorized
403 Forbidden
404 Not Found
500 Internal Server Error
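For reference, a rough sketch of what such a log item and trigger could look like (the log path, host name, and the simplified regex are assumptions, not taken from the thread; the key used in the trigger must match the item key exactly):

# Active-check log item extracting the status code that follows the quoted request line:
log[/var/log/nginx/access.log,"\" ([0-9]{3}) ",,,skip,\1]
# Trigger firing when a 500 was logged in the last 300 seconds:
{proxy01:log[/var/log/nginx/access.log,"\" ([0-9]{3}) ",,,skip,\1].count(300,500)}>0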
It's also important that your Zabbix agent has permission to read the log file on the host. You can add the zabbix agent user to the www-data group using this command.
$ sudo usermod -a -G www-data zabbix
See the tutorial for all the steps in greater detail.

How to configure Mongodb MMS to go via a Proxy?

How do I change the monitoring-agent.config to go out via a proxy with authentication?
The change log states...
Monitoring Agent 2.3.1.89-1
Released 2014-07-08
Added support for HTTP proxy configuration in the agent configuration file.
But I can't see how to do this.
Following wdberkeley's link, I can add this value to the monitoring-agent.config file:
httpProxy=http://"pxproxy01":3128
But this gives..
Failure getting conf. Op: Get Err: Proxy Authentication Required
Is there any way to set the authentication user/password?
Edit file:
C:\MMSData\Monitoring\monitoring-agent.config
Add line...
httpProxy=http://<insert_server_address>:<insert_port>
e.g.
httpProxy=http://PROXY01.server.com:3128
Then get the proxy control team, whoever they may be, to exclude the following from requiring authentication:
https://mms.mongodb.com 80
https://mms.mongodb.com 443
This has worked for me. I now have the MMS Agent on Windows sending stats to the MMS service.
Thanks to wdberkeley for starting me off on this route.
wdberkeley, the page you linked to does not exist, and the classic page PDF & HTTP versions state 'HTTP_PROXY', not 'httpproxy' (in the OS X section & tar.gz section); section '6.6 Monitoring Agent Configuration' does state the correct property name, 'httpproxy'.