How to make the Uchiwa dashboard able to adjust thresholds? - plugins

Me again..
I've finished the whole Sensu-Uchiwa-Graphite setup, and now I have a new request :(. Rather than changing thresholds in the check.json files on the Sensu server, is there any plugin for Uchiwa that would let this adjustment be made from the Uchiwa dashboard? I'm asking because my application teams want to be able to change thresholds themselves, without access to the server.
I think sensu-admin in Sensu Enterprise can do this, but we would need to pay big money per year ;(...
Thanks in advance to help.
Sumana W.

This is fairly doable if you use a configuration management system like Chef/Ansible/Puppet - especially if you run standalone checks on the sensu-client.
This allows the clients to define their own thresholds, rather than changing the sensu servers themselves.
See https://sensuapp.org/docs/latest/reference/checks.html#standalone-checks
In this case, the check definitions sit on the client servers, so each team controls its own thresholds and configuration. The client itself manages how often to run the check and sends the output back to the server, rather than the server requesting the checks. This helps quite a bit as far as scaling or multitenancy; a sketch of such a definition follows.
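As a hedged illustration (check name, command, thresholds, and interval are just examples), a standalone check dropped into /etc/sensu/conf.d/ on the client might look like:

{
  "checks": {
    "cpu_usage": {
      "command": "check-cpu.sh -w 80 -c 90",
      "standalone": true,
      "interval": 60
    }
  }
}

The "standalone": true flag is what tells the sensu-client to schedule the check itself instead of waiting for a server request.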
The other way to accomplish this, if you are tied to server-side checks, would be to use client attributes with check token substitution (https://sensuapp.org/docs/0.25/reference/checks.html#check-token-substitution).
For example, you can have a CPU check whose command is something like check-cpu.sh -w :::cpu_warn::: -c :::cpu_critical:::, where the cpu_warn and cpu_critical values come from the client.json on each client server.
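A sketch of what that could look like, following the token-substitution docs linked above (names and values here are illustrative). The check definition on the server:

{
  "checks": {
    "cpu_usage": {
      "command": "check-cpu.sh -w :::cpu_warn|80::: -c :::cpu_critical|90:::",
      "subscribers": ["production"],
      "interval": 60
    }
  }
}

And the client.json on the client server, which supplies the values:

{
  "client": {
    "name": "app-server-01",
    "address": "10.0.0.5",
    "subscriptions": ["production"],
    "cpu_warn": 70,
    "cpu_critical": 85
  }
}

The :::attribute|default::: form falls back to the default (80/90 here) when a client does not define the attribute, so teams only override the values they care about.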
Source: We use sensu extensively in an enterprise environment across thousands of hosts and have been working through these same issues.

Related

What are the limitations of the Flask built-in web server?

I'm a newbie in web server administration. I've read multiple times that the Flask built-in web server is not designed for "production", and must be used only for tests and debugging...
But what if my app touches only a thousand users who occasionally send data to the server?
If it works, when will I have to bother with configuring a more sophisticated web server? (I am looking for approximate metrics.)
In a nutshell, I would love to know what the built-in web server can do (with approximate thresholds) and what it cannot.
Thanks a lot!
There isn't one right answer to this question, but here are some things to keep in mind:
With the right amount of horizontal scaling, it is quite possible you could keep scaling out use of the debug server forever. When exactly you would need to start scaling (or switch to using a "real" web server) would also depend on the environment you are hosting in, the expectations of the users, etc.
The main issue you would probably run into is that the server is single-threaded. This means that it will handle each request one at a time, serially. So if you are trying to serve more than one request (including favicons, static items like images, CSS and JavaScript files, etc.), the requests will take longer. If any given request happens to take a long time (say, 20 seconds), then your entire application is unresponsive for that time (20 seconds). This is only the default, of course: you could bump the thread count (or have requests be handled in other processes), which might alleviate some issues; see the sketch below. But once again, it can still be slow under a "high" load, and what counts as "high" will depend on your application and the maximum response time your users will accept.
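A minimal sketch of bumping the thread count, assuming an ordinary Flask app (the route is just a placeholder):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello"

if __name__ == "__main__":
    # threaded=True handles each request in its own thread;
    # processes=N is the process-based alternative (use one or the other)
    app.run(threaded=True)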
Another issue is security: if you are concerned at ALL about security (and not just the security of the data in the application itself, but the security of the box that will be running it as well) then you should not use the development server. It is not ready to withstand any sort of attack.
Finally, the development server could just fail outright. It is not designed to be used as a long-running process (days, weeks, months), and so it has not been well tested to work in this capacity.
So, yes, it has limitations. Yes, you could still conceivably use it in production. And yes, I would still recommend using a "real" web server. If you don't like the idea of installing something like Apache or Nginx, you can go with a solution that is just as easy as "run a python script" by using one of the WSGI standalone servers, which are designed for production use and can be started with something as simple as python run_app.py on the command line. You typically just need a 4-5 line python script that imports and creates the server object, points it to your Flask app, and runs it; a sketch follows.
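For instance, with the waitress package (one such standalone WSGI server), a hypothetical run_app.py could be as short as this, assuming your Flask object is named app inside a myproject package:

# run_app.py - serve the Flask app with waitress
from waitress import serve
from myproject import app

serve(app, host="0.0.0.0", port=8000)

Then python run_app.py starts a production-oriented server on port 8000.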
gunicorn could be run with only the following on the command line, no extra script needed:
gunicorn myproject:app
...where "myproject" is the Python package that contains the app Flask object. Keep in mind that one of developers of gunicorn would probably recommend against this approach. See https://serverfault.com/questions/331256/why-do-i-need-nginx-and-something-like-gunicorn.
The OP has long since moved on, but for those who encounter this question in the future I would just add that setting up an Apache server, even on a laptop, is free and pretty easy. It can be readily configured for as few or as many features as you want just by uncommenting or commenting out lines in the config file. There might be an even easier GUI method for doing that nowadays, but just editing the configs is simple.

Can I send multiple requests at one time using Fiddler?

Using Fiddler, I want to send multiple requests in one hit, to check the response time from the server when too many requests are sent at one time. Basically, I want to perform a kind of load testing on my service. Is there any way to do this? I want to hit the server repeatedly, again and again.
In Fiddler, you can repeat a request as many times as you like by hitting SHIFT+R on the selected Web Session. You'll be prompted for a repeat count and then Fiddler will issue the specified number of requests.
Caveat: Having said that, generally speaking, you'd want to use a tool like Telerik Test Studio's Load Test tool for a task like this. Alternatively, you could use Fiddler's Export architecture to generate a script for VS WebTest or Microsoft's free WCAT tool and use those tools to generate the load. You can then run these scripts on multiple machines from multiple networks and generate a more realistic load than you could by simply running on a single client.
I've been load testing with StresStimulus today. Overall, I'm quite impressed.
It's now a standalone application (it used to be a Fiddler extension). There's a 7-day free trial which allows up to 50 virtual users, and the setup wizard is great for beginners.
For basic load testing the trial should be fine. Consider upgrading for extensive/professional use.

How should I determine what is issuing a flush_all command

We have a memcached server that is shared by about two dozen apps. One of the web apps (or perhaps one of our utility apps) is issuing a flush_all command periodically. The frequency seems random, or at least we haven't seen a pattern yet. It happens about 10 times an hour.
Here's the rub. I can't figure out a good way to determine which app is doing this. The memcached logs are not helpful at all. Here's what I've done so far:
* grep all source code - Other than memcached libraries I can't see anywhere where we issue this command.
* Enable verbose logging (-vv) in memcached - I see the commands get issued, but the log doesn't show any information about where the command is being issued from.
* Research how to administratively disable this; without an unapproved source patch to memcached I can't figure out a good way to do it.
Has anyone else had this problem? I'm assuming that this is coming from one of our web apps, but it's possible it's from somewhere else too. Any suggestions?
My next step is to set up a second memcached server and move applications over one by one (which will be slow and time-consuming). There must be a better way.
A little late, but in case anyone else hits this...
I'd suggest you set up multiple memcache proxies and configure each application to use a different one. The first proxy I found was twemproxy; I have no idea how good it is.
After that you can use the proxy's logs to identify which application is issuing the commands; a sketch of such a setup follows.
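For what it's worth, a minimal twemproxy (nutcracker) configuration along those lines might look like this, with one listening port per application (pool names, ports, and addresses are hypothetical):

# nutcracker.yml - one pool per application, all pointing at the same memcached
app_one:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  servers:
    - 127.0.0.1:11211:1

app_two:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  distribution: ketama
  servers:
    - 127.0.0.1:11211:1

Point each application at its own pool's port, and the proxy's logs will tell you which listener (and therefore which application) sent the flush_all.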

How to stop users from visiting staging area after production deployment

We have a few servers that have different roles. For instance, we have production servers and testing/staging servers. We have a few end users who forget to switch paths to production once things are tested and approved; they use the new paths for a bit, then revert back to the testing/staging paths at some point, for reasons we can't understand other than stupidity. We still want to be able to get a glimpse into our staging environment after pushing a build into production, but we want to stop them from being able to hit those servers/services.
We are now pondering some solutions to this problem. One is to never give them the direct staging URL. Another idea is to create a virtual directory, or a set of domain aliases that we could hand out and then shut down, while still allowing ourselves access to these endpoints. We could restrict our main staging domain to the office IP range so they never have direct access, and call it good. For the IP restriction piece, something like the nginx fragment below is what we have in mind.
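(The office range here is an example; Apache's Require ip directive would be the equivalent.)

# staging vhost: only the office network gets in
location / {
    allow 203.0.113.0/24;  # office IP range (example)
    deny  all;
}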
Does this sound like a good solution? Is our process wrong, are there better routes?
I am interested in solutions for websites as well as web services where visuals can't be used effectively.
We've run into this at my work as well… quite recently in fact. One thing that I thought about, other than the virtual directory, was setting up specific ports for them to test on, then either taking the ports down or changing them for our internal use only.
Well, without details on how your application is deployed it is hard to give concrete examples. One wonderful solution is to get better users :P A more workable solution, however, is to let your production boxes move a certain set of users (as decided in your code) to your test/staging systems. I.e., the user always connects to production, but the production machines, at connect/auth time, may decide these people are too cool for production and let them run the test/staging code instead.
It's not a foolproof method, of course, but it works for many, many websites that let a certain set of users into different parts of their codebase.
I don't know how feasible this would be for you, but it's a possibility perhaps.
I find that users sometimes have difficulties with URLs, and don't like subtle changes like a port number in the address.
The best approach I've found is to have the application tell the user what environment they are in.
For example, my teams have used absolutely positioned headers or footers, color coded for Dev/Staging environments that show the application version number with an alpha/beta tag, along with a message that says "Work done on this site will be lost, use Production (link) to keep your work." Typically we make the Dev area red, and the staging area yellow. We also like to put a link to the bug tracking system right in this area.
On production there is not usually a region like this. However, we do sometimes provide positive reinforcement by placing a green region with the app version and a Production tag in it, and then fading the green region away after a few seconds. This helps keep the app front and center, but lets the user know they are in the right place. A rough sketch of such a banner follows.
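As a hedged illustration (markup, colors, and the version string are made up):

<!-- staging banner: fixed to the top of the page, color coded per environment -->
<div style="position: fixed; top: 0; left: 0; width: 100%; z-index: 9999;
            background: #f0c000; text-align: center; padding: 4px;">
  STAGING v2.3.1-beta: work done on this site will be lost.
  <a href="https://www.example.com/">Use Production</a> to keep your work.
</div>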

What's the best way to update code remotely?

For example, I have a website with various types of information. If that goes down, I have a copy of the same website on a local web server, like Apache or IIS, on the client. They use this local version until the Internet version returns. They can have no downtime, in other words.
The problem is that over time the Internet version will change while the client versions will remain the same unless I touch each client's machine to make the updates. I don't want to do that.
Is there a good way to keep my clients up to date, so that when I make a change on the server the clients get a copy and can run it locally if need be?
Thank you.
EDIT: do you think maybe using SVN, with the clients running the update on a schedule, would work?
EDIT: they'll never, ever submit anything. It's just so I don't have to update the clients by hand, manually going to each machine. They're web pages that run in case the main server is down.
I would go with Git over SVN because of its distributed nature: it gives you multiple copies of the code. Use it along with this comment's solution:
Making git auto-commit
to auto-commit on the server, and have the clients pull on a schedule, as sketched below.
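A sketch of the client side of that, assuming the site lives in /var/www/site and tracks a remote repository (path, schedule, and branch are hypothetical):

# client crontab: fast-forward to the server's latest copy every 15 minutes
*/15 * * * * git -C /var/www/site pull --ff-only origin master

Combined with the auto-commit on the server, the clients stay current without anyone touching them by hand.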
Why not use something like HTTrack to make local copies of your actual Internet site on each machine, rather than trying to do a separate deployment? That way you'll automatically stay in sync.
This has the advantage that if, at some point, part of your website is updated dynamically from a database, the user will still be able to have a static copy of the resulting site that is up-to-date.
There are tools like rsync which you can run periodically to sync the changes.
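For example, a one-liner run from cron on each client (host and paths are hypothetical):

# pull the current site from the server, removing files deleted upstream
rsync -az --delete user@www.example.com:/var/www/site/ /var/www/site/

Run that on a schedule and each local copy tracks the live site.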