How do I find out about (and turn off) AmazonCloudWatch alarms? - amazon-cloudwatchlogs

I have been getting some emails from AWS warning about my usage of AmazonCloudWatch alarms.
Apparently I have used 8.6 so far this month, but after trying to find out exactly which alarms I have enabled that are triggering this, I am coming up short. Can anyone advise how to find out what these alarms are and how to turn them off?
Many Thanks
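As a hedged sketch of one way to track these down: alarms are per-region, so check the Alarms page of the CloudWatch console in each region you use, or enumerate them with boto3 as below (the region name is a placeholder). The 8.6 figure is most likely prorated alarm-months, since alarms are billed per alarm per month.

import boto3

# List every CloudWatch alarm in one region; repeat for each region you use.
cw = boto3.client('cloudwatch', region_name='us-east-1')  # placeholder region

for page in cw.get_paginator('describe_alarms').paginate():
    for alarm in page['MetricAlarms']:
        print(alarm['AlarmName'], '-', alarm['StateValue'])

# Once you have identified an unwanted alarm, delete it by name
# ('MyOldAlarm' is a placeholder):
# cw.delete_alarms(AlarmNames=['MyOldAlarm'])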

Related

Google Colab always shows 'Connecting', how can I solve it?

Based on my understanding, 'Connecting' can be due to many things. The most common reasons are:
1. Your internet connection is unstable.
2. You became idle* and the session was disconnected.
*Either you are away from the keyboard or you are doing something else instead of looking at the Colab tab.
For 1, you can try using a different internet connection and see if that improves things.
For 2, you can move your mouse every now and then in the Colab tab to prevent yourself from becoming idle, or you can consider subscribing to a premium version of Colab, which in theory helps with that situation.

AWS Lex chatbot not logging utterances (both missed and detected utterances)

One of our AWS bots is not logging detected and missed utterances, whereas all the new bots created in the same account are logging missed utterances in the Monitoring -> Utterances section. I have checked the configuration of all the bots and it is all the same.
In Monitoring -> Monitoring Graphs, I can see the graph showing missed utterances. I am failing to understand why the utterances (both missed and detected) are not appearing in the Monitoring -> Utterances section. I know we need to wait 24 hours for them to appear, but they are not appearing at all even after 2 days. If you can suggest some reasons for this, I will look into them.
I have made the aliases point to the latest version, so there is no chance of utterances going to the wrong version. Thanks in advance
Utterance statistics are not generated under the following conditions:
The childDirected field was set to true when the bot was created.
You are using slot obfuscation with one or more slots.
You opted out of participating in improving Amazon Lex.
And as you mentioned, you need to wait ~24 hours for the data to be processed (a sketch for checking the first two conditions follows below):
https://docs.aws.amazon.com/lex/latest/dg/ex-utterances.html
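If it helps, here is a minimal sketch for checking those two bot-configuration conditions programmatically (boto3, Lex V1 model-building API; the bot name is a placeholder):

import boto3

# Check the bot-level childDirected flag and per-slot obfuscation, the two
# bot-configuration conditions above ('MyBot' is a placeholder name).
lex = boto3.client('lex-models')

bot = lex.get_bot(name='MyBot', versionOrAlias='$LATEST')
print('childDirected:', bot['childDirected'])  # True suppresses utterance stats

for ref in bot.get('intents', []):
    intent = lex.get_intent(name=ref['intentName'], version=ref['intentVersion'])
    for slot in intent.get('slots', []):
        if slot.get('obfuscationSetting') == 'DEFAULT_OBFUSCATION':
            print('Obfuscated slot:', intent['name'], '->', slot['name'])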

PHPList Bounce Rules?

I'm trying to get PHPList 3.3.1 to process email bounces and to "unconfirm" or delete users based on email bounces to them. I have the following settings in my PHPList config file:
define ("MANUALLY_PROCESS_BOUNCES",1);
define('USE_ADVANCED_BOUNCEHANDLING',0);
$bounce_unsubscribe_threshold = 2;
I have "Processed Bounces" and PHPList dutifully reads the bounced emails, adds them to the database, and deletes the emails.
However, it doesn't seem to mark users as unsubscribed, even after 2 bounces.
Do I need to add advanced bounce rules? If so, can you provide me with a good basic list of rules to use?
I did try the "Generate Bounce Rules" option and it created 1100 rules (yes, one thousand one hundred rules) - yikes! Seems like there should be something like 5 or 10 rules that would cover most bounces.
Little help?
This is still a relatively undocumented part of PHPList. We have a sophisticated list of regular expressions we use, but it is not currently public.
I suggest you start here: PHPList Bounce Rules? to find expressions for the kinds of phrases you want to capture; the manual itself also includes some starting rules: https://www.phplist.org/manual/ch040_bounce-management.xhtml
What is less well documented (or at least I haven't found it) is the difference between some of the actions you have available, but with a bit of work and time you can fine-tune based on your traffic and, more importantly, your customers' MTAs. A few illustrative patterns are sketched below.
Further to this question, I started a thread on the PHPList forum that might be of help:
https://discuss.phplist.org/t/please-help-clarifying-advance-bounce-processing/4077/4
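For a flavour of the small starter set the question asks about, here is an illustrative sketch (Python; the phrases are common bounce-message wording and the action names are only loosely modelled on PHPList's rule actions - treat both as examples, not an official list):

import re

# Illustrative bounce-phrase patterns mapped to suggested handling; these are
# common wordings from real bounce messages, NOT PHPList's official rule set.
STARTER_RULES = [
    (r'user unknown|no such user|unknown recipient', 'delete user'),
    (r'mailbox (unavailable|disabled|does not exist)', 'delete user'),
    (r'mailbox full|quota exceeded|over quota', 'unconfirm user'),
    (r'message size exceeds', 'delete bounce'),
    (r'temporar(y|ily) (failure|deferred|unavailable)', 'delete bounce'),
]

def classify(bounce_text):
    """Return the first matching action, or None to leave for manual review."""
    for pattern, action in STARTER_RULES:
        if re.search(pattern, bounce_text, re.IGNORECASE):
            return action
    return None

print(classify('550 5.1.1 <bob@example.com>: user unknown'))  # -> 'delete user'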
If you're still having difficulty with the rules, be sure they are ACTIVE and not in the CANDIDATE section. Sometimes, with so many rules created, the system won't let you just tag them all and change them to ACTIVE, as it freezes.
You can always go to your PHPList database and use the following:
UPDATE `TABLEPREFIX_bounceregex` SET `status` = 'active'
where TABLEPREFIX should be replaced with your own table prefix. Hope it helps, so many years later.
Consider, as well, installing the Housekeeping plugin » https://resources.phplist.com/plugin/housekeeping

How long do you fine-tune false positives with mod_security and OWASP rules?

I just started using the OWASP rules and got tons of false positives. For example, someone wrote in a description field:
"we are going to select some users tomorrow for our job platform."
This is detected as a SQL injection attack (id 950007). Well, it is not; it is a valid comment. I have tons of false positives of this kind.
First I set SecRuleEngine DetectionOnly to gather information.
Then I started using "SecRuleUpdateTargetById 950007 !ARGS:desc" or "SecRuleRemoveById 950007", and I have already spent a day on this. modsec_audit.log is already > 100 MB in size.
I am interested in your experience: how long do you spend fine-tuning it (roughly)? After you turn it on, do you still get false positives, and how do you manage to add whitelist entries in time (do you analyze the logs daily)?
I need this info to give my boss an estimate for this task. It seems it will be a long-lasting one.
This totally depends on your site, your technology, and your test infrastructure. The OWASP CRS is very noisy by default and does require a LOT of tweaking. Incidentally, there is some work going on in this area, and the next version might have a normal and a paranoid mode, to hopefully reduce false positives.
To give an example, I look after a reasonably sized site with a mixture of static pages and a number of apps written in a wide variety of technologies (legacy code - urgh!) and a fair number of visitors.
Luckily I had a nightly regression run in our preproduction environment with good coverage, so that was my first port of call. I released ModSecurity there after some initial testing, in DetectionOnly mode, and tweaked it over maybe a month until I'd addressed all of the issues and was comfortable moving to prod. This wasn't a full month of continuous work, of course, but 30-60 minutes on most days to check the previous night's run, tweak the rules appropriately, and set it up for the next night's run (damn cookies with their random strings!).
Next up I did the same in production, and pretty much immediately ran into issues with free-text feedback fields like you have (of course I didn't see most of these in the regression runs). That took a lot of tweaking (I had to turn off a lot of SQL injection rules for those fields). I also got a lot of insight into how many bots and scripts run against our site! Most were harmless, or WordPress exploit attempts (luckily I don't run WordPress), so no real risk to my site, but still an eye-opener. I monitored the logs hourly initially (paranoid!), then daily, and then weekly.
I would say from memory that it took another 3 months or so until I was comfortable turning it on fully, and I checked it a lot over the next few days. Luckily all the hard work paid off, and there were very few false positives.
Since then it's been fairly stable with very few false alerts - mostly due to bad data (e.g. email##example.com entered as an email address in a field which didn't validate email addresses properly), and I often left those in place and fixed the field validation instead.
Some of the common issues and rules I had to tweak are given here: Modsecurity: Excessive false positives (note you may not need or want to turn off all of these rules on your site).
We have Splunk installed on our web servers (basically a tool which sucks up log files and can then be searched, or can automatically alert or report on issues). So I set up a few alerts for when the more troublesome free-text fields caused a ModSecurity block (and have corrected one or two more false positives there), and also on volume (so we get an alert when a threshold is passed and can see we are under a sustained attack - happens a few times a year), plus weekly/monthly reporting.
So a good 4-5 months to implement from scratch end to end, with maybe 30-40 man-days of work over that time. But it was a very complicated site and I had no prior ModSecurity/WAF experience. On the plus side, I learned a lot about web technologies and ModSecurity, and got regexp-blindness from staring at some of the rules! :-)
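To keep that triage loop manageable, a small script that counts which rule IDs fire most often lets you tune the noisiest rules first. A minimal sketch (Python; it assumes the default serial audit-log format, where matches are tagged like [id "950007"]):

import re
from collections import Counter

# Tally ModSecurity rule IDs from the audit log so the noisiest rules can be
# tuned first; assumes the serial log format with [id "NNNNNN"] tags.
counts = Counter()
with open('/var/log/modsec_audit.log', errors='replace') as log:
    for line in log:
        counts.update(re.findall(r'\[id "(\d+)"\]', line))

for rule_id, hits in counts.most_common(10):
    print(rule_id, hits)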

Concurrent Connection Test

So I ran into a network problem the other day, and I was trying to find a way to test for this problem in the future.
I had a lot of users online at once and hit my router's max IP connection limit (not DHCP - TCP/UDP connections!).
Once I figured out what the problem was, it was fairly simple to fix; however, I was wondering if there is any way to simulate this kind of activity? Everything worked fine when I tested it; it wasn't until I had 150+ users that I discovered I had a problem.
I have spent the last 3-4 hours looking for such a test/audit tool. Here is what I found:
http://bittwist.sourceforge.net/ - DDoS simulator (can't make it work, barely get 300+ connections)
http://stevesouders.com/hpws/max-connections.php - Browser concurrent-connection tester (this hits the browser limit (6 in Chrome) without making a dent in my router, even open in 70+ tabs at the same time)
http://www.smallnetbuilder.com/lanwan/lanwan-howto/31103-how-we-test-hardware-routers-revision-3 - Some tool linked about halfway down the page (reads like it's exactly what I want; however, it barely has a noticeable effect on my router.)
http://www.http-kit.org/600k-concurrent-connection-http-kit.html - Concurrent HTTP connection simulator (this one seems like it would do what I want, but my Linux-fu is limited and I can't get it working. *tear*)
So do you guys have a tool you test your routers with? I would love something that does both TCP and UDP.
(BTW, for anyone misunderstanding: I'm not trying to test "speed", just the sheer number of connections.)
Thanks!
Kz
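As a minimal sketch of this kind of tester (Python; the target address and connection count are placeholders - point it at a machine you control on the far side of the router), opening and holding many TCP sockets is usually enough to walk a NAT router up to its connection-table limit:

import socket
import time

TARGET = ('192.0.2.10', 80)  # placeholder: a host you control beyond the router
COUNT = 500                  # connections to attempt

# Open and hold COUNT TCP connections so they all occupy the router's table.
conns = []
for _ in range(COUNT):
    try:
        conns.append(socket.create_connection(TARGET, timeout=5))
    except OSError as err:
        print('Failed after', len(conns), 'connections:', err)
        break

print('Holding', len(conns), 'connections; Ctrl+C to release.')
try:
    time.sleep(3600)  # keep sockets open so they stay in the connection table
except KeyboardInterrupt:
    pass
finally:
    for s in conns:
        s.close()

For the UDP side you would instead send datagrams from many distinct source ports, since NAT tables typically track UDP flows per source port.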