HAProxy clear table is not clearing all records

I have a stick-table in my HAProxy configuration:
backend myback
stick-table type ip size 5k expire 1h store http_req_cnt
http-request track-sc0 src
When I clear the table using:
echo "clear table myback" | sudo socat stdio /var/run/haproxy/admin.sock
And when I show the table immediately after the above command, using:
echo "show table myback" | sudo socat stdio /var/run/haproxy/admin.sock
I still have some records in the table with http_req_cnt values far too large to have accumulated in the few seconds since the clear; for example, one record shows an http_req_cnt of 149.
[Image: show table output taken right after the clear]
Also, the http_req_cnt for each IP in the stick-table does not match the number of HTTP requests for that IP in the HAProxy log file. Does anyone have any idea why they differ?
Note that I want a solution that resets the stick-table's http_req_cnt counter every hour. In other words, I want each record to hold the number of HTTP requests made in each hour, with the counter reset in the first minute of every hour (done in the config file, not via socat commands). I was wondering if anyone would have any suggestions.
Thanks in advance
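One config-only sketch (an assumption about what fits your case, not an exact clock-aligned reset): HAProxy cannot reset counters at the top of each hour from the configuration alone, but storing a rate over a 1-hour window gives a per-IP request count for the trailing hour that decays by itself:

```
backend myback
    # http_req_rate(1h) counts requests over a sliding 1-hour window,
    # so each entry ages out on its own instead of resetting on the clock hour
    stick-table type ip size 5k expire 1h store http_req_rate(1h)
    http-request track-sc0 src
```

A reset aligned to the first minute of every hour would still need something external, e.g. a cron job piping "clear table myback" into the admin socket.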

Downloaded zip from a GitHub release using curl is only 9 bytes

A public GitHub repo has release v1.0.
The following curl command downloads only 9 bytes of a 42 KB file.
curl -O -L -J --ssl-no-revoke https://github.com/marmayogi/TTF2PostscriptCID-Win/releases/v1.0/TTF2PostscriptCID-Win-1.0.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100     9  100     9    0     0      9      0  0:00:01 --:--:--  0:00:01     9
Based on the comments received, the response of the curl command with only the -L flag has been added to the post:
curl -L https://github.com/marmayogi/TTF2PostscriptCID-Win/releases/v1.0/TTF2PostscriptCID-Win-1.0.zip
curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092012) - The revocation function was unable to check revocation for the certificate.
Added to the post based on the comments received: my desktop required --ssl-no-revoke along with the curl command, and the problem was also resolved with the -k flag. Here is the evidence.
"C:\Program Files\Neovim\bin\curl.exe" -o TTF2PostscriptCID-Win-1.0.zip -L https://github.com/marmayogi/TTF2PostscriptCID-Win/archive/refs/tags/v1.0.zip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Can anyone throw some light on this issue?
Thanks in advance.
I'd recommend using
curl -sL https://github.com/marmayogi/TTF2PostscriptCID-Win/archive/refs/tags/v1.0.zip >filename.zip
or
curl -sLO https://github.com/marmayogi/TTF2PostscriptCID-Win/archive/refs/tags/v1.0.zip
Optionally you can also use (loosens SSL security)
curl -sL --ssl-no-revoke https://github.com/marmayogi/TTF2PostscriptCID-Win/archive/refs/tags/v1.0.zip >filename.zip
or
curl -sLO --ssl-no-revoke https://github.com/marmayogi/TTF2PostscriptCID-Win/archive/refs/tags/v1.0.zip
-s, --silent
Silent or quiet mode. Do not show progress meter or error messages. Makes Curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it.
Use --show-error in addition to this option to disable progress meter but still show error messages.
-L, --location
(HTTP) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place.
-O, --remote-name
Write output to a local file named like the remote file we get. (Only the file part of the remote file is used, the path is cut off.)
The file will be saved in the current working directory.
--ssl-no-revoke
(Schannel) This option tells curl to disable certificate revocation checks. WARNING: this option loosens the SSL security, and by using this flag you ask for exactly that.
curl's output then goes to whatever filename you choose (filename.zip), or with -sLO it picks the filename automatically.
It seems your URL is bad; release assets are normally served from /releases/download/<tag>/<file>, and the auto-generated source archive from /archive/refs/tags/<tag>.zip. Try:
curl 'https://github.com/marmayogi/TTF2PostscriptCID-Win/archive/refs/tags/v1.0.zip' -LO
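A plausible explanation for the 9-byte result (an assumption, but the sizes line up): a request to a nonexistent release path gets GitHub's plain-text 404 body, which -O -J saves to disk as if it were the file, and that body is exactly nine bytes:

```shell
# GitHub's plain-text 404 body is the literal string "Not Found";
# counting its bytes matches the size of the mystery download
printf 'Not Found' | wc -c
```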

RDS database with a PostgreSQL engine connection limit

I've got an RDS instance (db.t2.micro: 1 vCPU, 1 GiB RAM) that I'm spamming with connection attempts to mimic high load on the DB over a short period of time. I consistently hit a DB connection limit of ~100 regardless of the DB instance class (I've tried db.t2.large: 4 vCPU, 16 GiB RAM), of setting the max_connections parameter in a custom parameter group, and of using an RDS Proxy for connection pooling.
I do notice that the red line on the DB connections graph below disappears when I increase the DB instance class, which suggests more connections should be available, but as can be seen in the graph the connection limit stays fixed at ~100.
I've read threads where people have DB connections into the thousands and even tens of thousands, so I'm convinced I'm missing something on the configuration side of things. Any ideas?
Edit: I am able to exceed ~100 connections when using the JDBC library, but when I mimic our production system, a REST API running as a service on AWS ECS, I max out at ~100 with an HTTP 500 error.
The CloudWatch log indicates 'rate exceeded'. The REST API is built using Microsoft.NET.Sdk.Web. In my use case the server needs to be able to handle ~500 API requests a second every 15 minutes.
I suspect that your API (the REST API you have already identified; it could be the only one you are using, I can't tell from the info) is being throttled.
First, to identify whether it is throttling or not, go to the CloudTrail console and create a table for a CloudTrail trail.
Then open the Athena console, select New query, and run the query below, replacing the table name with the CloudTrail table you created.
SELECT eventname, errorcode, eventsource, awsregion, useragent, COUNT(*) AS count
FROM your-cloudtrail-table
WHERE errorcode = 'ThrottlingException'
  AND eventtime BETWEEN '2020-10-11T03:00:08Z' AND '2020-10-12T07:15:08Z'
GROUP BY errorcode, awsregion, eventsource, useragent, eventname
ORDER BY count DESC;
Once you have confirmed that your API is throttling, you can ask the AWS team to bump up the limit if the throttling is due to a service limit (which they should be able to confirm).
See this thread for a limit-related conversation:
https://forums.aws.amazon.com/thread.jspa?threadID=226764
Also check the quota doc for the limits on the ECS service:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-quotas.html
Second, check on the PostgreSQL end what connection limit is configured there. You can psql into the instance and run:
show max_connections;
or:
postgres=> select * from pg_settings where name='max_connections';
-[ RECORD 1 ]---+-----------------------------------------------------
name | max_connections
setting | 83
unit |
category | Connections and Authentication / Connection Settings
short_desc | Sets the maximum number of concurrent connections.
extra_desc |
context | postmaster
vartype | integer
source | configuration file
min_val | 1
max_val | 262143
enumvals |
boot_val | 100
reset_val | 83
sourcefile | /rdsdbdata/config/postgresql.conf
sourceline | 33
pending_restart | f
Hope this helps!
This will tell you the max connection limit for that particular instance. There is no fixed limit as such (only a theoretical one); in PostgreSQL on RDS the connection limit is derived dynamically from the memory of your instance/cluster.
If you go to RDS and then "Parameter groups" on the left side, you can search for max_connections and check the "Values" column:
LEAST({DBInstanceClassMemory/9531392},5000)
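Taking that formula at face value for a nominal 1 GiB instance gives a ceiling near the observed one (a rough sketch; in practice DBInstanceClassMemory is somewhat less than the full instance RAM because RDS reserves memory for its own processes, which is why the instance above reports 83):

```shell
# LEAST({DBInstanceClassMemory/9531392}, 5000) with DBInstanceClassMemory = 1 GiB
echo $(( 1073741824 / 9531392 ))    # prints 112
```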
I really know very little about Microsoft and .NET, but it sounds like your application has a default connection pool of 100 connections.
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling
In your DB connection string try adding Max Pool Size=200;
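For what it's worth, if the service talks to PostgreSQL through Npgsql, its connection pool also defaults to 100 connections, which would match the observed ceiling; a sketch of the connection string (host, database, and credentials are placeholders):

```
Host=mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com;Database=app;Username=app;Password=secret;Max Pool Size=200
```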

How to configure the rate filter in Snort in a Windows environment?

I have installed and configured the Snort software in a Windows environment. As per the documentation, the threshold keyword is deprecated and I have to use other filters. I need to use rate_filter in my application; however, I don't know how to set it up in my Snort installation.
I have read all the documentation and internet resources, and I have added the example rate_filter code directly to my snort.conf file, but I still can't get what I want.
Am I missing something?
You may need to share your filter to get the best help here. An example of the layout:
Example 1: allow a maximum of 100 connection attempts per second from any one IP address, and block further connection attempts from that IP address for 10 seconds:
rate_filter \
gen_id 135, sig_id 1, \
track by_src, \
count 100, seconds 1, \
new_action drop, timeout 10
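For reference, a rate_filter can also be attached to an ordinary text rule, in which case gen_id is 1 and sig_id must match the rule's sid (the rule below is hypothetical):

```
alert tcp any any -> $HOME_NET 80 (msg:"inbound connection attempt"; flags:S; sid:1000001; rev:1;)

rate_filter \
    gen_id 1, sig_id 1000001, \
    track by_src, \
    count 100, seconds 1, \
    new_action drop, timeout 10
```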

How to provision a test user in Kamailio?

I have just (for the first time) compiled and installed Kamailio, following this guide. For configuration, I am following the documentation here.
I am trying to test a new SIP user. I have created it with:
» kamctl add test testpasswd
The user is there:
» kamctl db show subscriber
|----+----------+--------------------+------------+---------------+----------------------------------+----------------------------------+------|
| id | username | domain | password | email_address | ha1 | ha1b | rpid |
|----+----------+--------------------+------------+---------------+----------------------------------+----------------------------------+------|
| 5 | test | tethys.wavilon.net | testpasswd | | 5cf40781f33c6f43a26244046564b67e | eb898de815bc16092e4c2e8c04bfe188 | NULL |
|----+----------+--------------------+------------+---------------+----------------------------------+----------------------------------+------|
I try to connect with my sip client, and the registration times out (Request Timeout (408)). I have tried to verify what is going on by doing:
» kamailio -l <my-ip> -E -ddddd -D 1
And I see lots of messages, one of them interesting:
0(15818) DEBUG: auth [api.c:86]: pre_auth(): auth:pre_auth: Credentials with realm '<my-ip>' not found
But I do not know how to solve this problem. How can I verify what credentials associated to realm <my-ip> are configured? What is a "realm"? I do not find any beginners guide for kamailio. Is there a simple how-to on how to setup a simple kamailio configuration?
The log message you pasted in the question is for debug purposes (hence the DEBUG level), and it can be printed for the first SIP requests that arrive with no credentials (e.g., the first REGISTER) -- in that case it is all OK. Those requests are challenged for authentication with 401 replies, then they are resent by the phone with credentials in the Authorization header.
If for those requests with credentials you don't get the same realm as used in challenge function parameters (e.g., www_challenge(), auth_challenge()...), then the SIP phone might be misconfigured. Typically the realm is the same as SIP domain in order to ensure it is unique, but that is not a must. With default kamailio configuration, the realm is the From header URI domain.
However, you say you get a 408 timeout for registration, so the issue might be something else. When credentials matching the realm are not found, a 401 reply is sent back, not a 408.
The reason for timeout could be that the REGISTER didn't get to kamailio or kamailio tries to send it somewhere else. You should look at the SIP traffic on the kamailio server to see what happens. You can use ngrep for that purpose, like:
ngrep -d any -qt -W byline . port 5060
Watch to see if the REGISTER comes to kamailio server and if it is attempted to be sent to another IP.
I had the same issue. I added the alias record in kamailio.cfg and it works well:
alias="tethys.wavilon.net"
Kamailio is a proxy. It is not simple, so if you want something simple, try Asterisk instead. Kamailio configuration requires knowledge of SIP.
For this problem: you set the realm somewhere (in config file or in database) but are not using it for registration. Possible solutions would be to:
Remove the realm or set it to the correct domain name (and use it!). In the default config, that means disabling domains.
Use tethys.wavilon.net as you described in the subscriber table.
For more info, go to the Kamailio site and read this document.

Is there an API for Ganglia?

Hello, I would like to enquire whether there is an API that can be used to retrieve Ganglia stats for all clients from a single Ganglia server.
The Ganglia gmetad component listens on ports 8651 and 8652 by default and replies with XML metric data. The XML data type definition can be seen on GitHub here.
Gmetad needs to be configured to allow XML replies to be sent to specific hosts or all hosts. By default only localhost is allowed. This can be changed in /etc/ganglia/gmetad.conf.
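A sketch of the relevant gmetad.conf directive (hostnames are placeholders; alternatively, `all_trusted on` disables the check entirely):

```
# /etc/ganglia/gmetad.conf
# hosts allowed to connect to the XML ports (8651/8652); default is localhost only
trusted_hosts 127.0.0.1 192.168.1.10 monitor.example.com
```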
Connecting to port 8651 will get you a default XML report of all metrics as a response.
Port 8652 is the interactive port which allows for customized queries. Gmetad will recognize raw text queries sent to this port, i.e. not HTTP requests.
Here are examples of some queries:
/?filter=summary (returns a summary of the whole grid, i.e. all clusters)
/clusterName (returns raw data of a cluster called "clusterName")
/clusterName/hostName (returns raw data for host "hostName" in cluster "clusterName")
/clusterName?filter=summary (returns a summary of only cluster "clusterName")
The ?filter=summary parameter changes the output to contain the sum of each metric value over all hosts. The number of hosts is also provided for each metric so that the mean value may be calculated.
Yes, there's an API for Ganglia: https://github.com/guardian/ganglia-api
You should check this presentation from 2012 Velocity Europe - it was really a great talk: http://www.guardian.co.uk/info/developer-blog/2012/oct/04/winning-the-metrics-battle
There is also an API you can install from PyPI with 'pip install gangliarest'; it sets up a configurable API backed by a Redis cache and an indexer to improve performance.
https://pypi.python.org/pypi/gangliarest