How to provision a test user in kamailio? - sip

I have just (for the first time) compiled and installed kamailio, following this guide. For configuration, I am following the documentation here
I am trying to test a new SIP user. I have created it with:
» kamctl add test testpasswd
The user is there:
» kamctl db show subscriber
|----+----------+--------------------+------------+---------------+----------------------------------+----------------------------------+------|
| id | username | domain | password | email_address | ha1 | ha1b | rpid |
|----+----------+--------------------+------------+---------------+----------------------------------+----------------------------------+------|
| 5 | test | tethys.wavilon.net | testpasswd | | 5cf40781f33c6f43a26244046564b67e | eb898de815bc16092e4c2e8c04bfe188 | NULL |
|----+----------+--------------------+------------+---------------+----------------------------------+----------------------------------+------|
I try to connect with my sip client, and the registration times out (Request Timeout (408)). I have tried to verify what is going on by doing:
» kamailio -l <my-ip> -E -ddddd -D 1
And I see lots of messages, one of them interesting:
0(15818) DEBUG: auth [api.c:86]: pre_auth(): auth:pre_auth: Credentials with realm '<my-ip>' not found
But I do not know how to solve this problem. How can I verify which credentials associated with realm <my-ip> are configured? What is a "realm"? I cannot find any beginner's guide for kamailio. Is there a simple how-to on setting up a basic kamailio configuration?

The log message you pasted in the question is for debug purposes (hence the DEBUG level) and it can be printed for first SIP requests that arrive with no credentials (e.g., the first REGISTER) -- in that case all is ok. Those requests are challenged for authentication with a 401 reply, then they are resent by the phone with credentials in the Authorization header.
If for those requests with credentials you don't get the same realm as used in the challenge function parameters (e.g., www_challenge(), auth_challenge()...), then the SIP phone might be misconfigured. Typically the realm is the same as the SIP domain in order to ensure it is unique, but that is not a must. With the default kamailio configuration, the realm is the From header URI domain.
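For reference, this is roughly what the authentication block in the default kamailio.cfg looks like when WITH_AUTH is defined (a sketch; exact lines vary by version). Note that the realm argument is $fd, the From header URI domain:
#!ifdef WITH_AUTH
    if (is_method("REGISTER") || from_uri==myself) {
        # authenticate requests against the subscriber table
        if (!auth_check("$fd", "subscriber", "1")) {
            auth_challenge("$fd", "0");
            exit;
        }
    }
#!endif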
However, you say you get a 408 timeout for registration, so the issue might be something else. When credentials matching the realm are not found, a 401 reply is sent back, not a 408.
The reason for the timeout could be that the REGISTER didn't reach kamailio, or that kamailio tries to send it somewhere else. You should look at the SIP traffic on the kamailio server to see what happens. You can use ngrep for that purpose, like:
ngrep -d any -qt -W byline . port 5060
Watch whether the REGISTER reaches the kamailio server and whether it is then forwarded to another IP.

I got the same issue. I added the alias record in kamailio.cfg and it works well.
alias="tethys.wavilon.net"

Kamailio is a proxy. It is not simple, so if you want something simple, try Asterisk instead. Kamailio configuration requires knowledge of SIP.
For this problem: you set the realm somewhere (in config file or in database) but are not using it for registration. Possible solutions would be to:
Remove the realm or set it to the correct domain name (and use it!). In the default config, that means disabling domains.
Use tethys.wavilon.net as you described in the subscriber table.
For more info, go to the Kamailio site and read this document.

Getting Started With PeerJS

I am trying the simplest example I can, pulled directly from their website. Here is my entire html file, with code taken exactly from https://peerjs.com/index.html:
<script src="https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js"></script>
<script>
  var peer = new Peer();
  var conn = peer.connect('another-peers-id');
  // 'open' fires when you successfully connect to PeerServer
  conn.on('open', function(){
    // here you have conn.id
    conn.send('hi!');
  });
</script>
In Chrome and Edge I get this in the console:
peerjs.min.js:64 GET https://0.peerjs.com/peerjs/id?ts=15956160926060.016464029424720694 net::ERR_CONNECTION_REFUSED
In Firefox I get this:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://0.peerjs.com/peerjs/id?ts=15956162489620.8436734374800061. (Reason: CORS request did not succeed).
What am I doing wrong?
@reyad has requested "a full trace of requests and responses". Here's what I see in my network tab in Firefox and in Chrome (screenshots omitted).
[Note: It would have been better if you could provide a full trace of requests and responses. This problem may occur for several reasons. I'll state two solutions, so try those. If those don't work, provide a full trace of requests and responses.]
1. First Solution:
Sometimes this type of error occurs because of a self-signed certificate. To solve this problem, open developer tools/options, then go to the network tab. You'll see a list of requests. Select the request which failed because of CORS (i.e. which gave you this Reason: CORS request did not succeed) and open it (i.e. click it). If your problem is related to the certificate, you'll see the following error message:
AN ERROR OCCURED: SEC_ERROR_INADEQUATE_KEY_USAGE
To solve this problem, go to the URL that is causing the problem and accept the certificate manually.
2. Second solution:
Check the request (which is the reason for the CORS error) in the network tab of developer tools/options (same as described in 1. First Solution). You'll find a Transferred column. See what's written in the Transferred column of the failed request. If it says Blocked By <some ad-blocker>, then disable the ad-blocker and your request will work fine.
[P.S.]: These solutions are proposed on assumptions. Hope they work. If these two do not work, then please provide more info about the requests and responses. And also check this.
3. Third and final solution:
[Note: This solution may not solve your problem directly, but it'll give you an alternative solution and also insight into what your problem is and how to work around it]
Before reading the solution below, read this to understand how Access-Control-Allow-Origin works (it is the reason for the CORS error).
Let me first explain how peerjs works:
PEERJS works based on a PEER ID. So, you have to get a PEER ID either from the PEERJS CLOUD SERVER or provide one yourself in the PEER CONSTRUCTOR, i.e. new Peer("some-peer-id"). The peer id has to be unique, because it is needed to identify every user uniquely. PeerJS uses this PEER ID to send and receive data from user to user.
Now, you should know that you're using the PEERJS CLOUD SERVER to get/generate a unique peer id; it is the default server PEERJS uses unless you specify some other server to use.
Now let me explain why you're facing this problem:
As you already know how CORS works, you may have already guessed that https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js (the downloaded js file) is calling https://0.peerjs.com to retrieve/generate a new unique PEER ID. But this request by https://your.website.com does not have Access-Control-Allow-Origin access for some reason; it may also be a middleware problem. So it's difficult to tell where the problem is actually occurring. But one thing is for sure: it's not your fault; the code is fine :D.
I hope all the concepts I've stated above are clear to you.
Now, to solutions:
Alternative approach 1 (using the PEERJS CLOUD SERVER and your own provided id):
In this approach you have to generate your own unique PEER ID, so "https://your.website.com" does not have to call "https://0.peerjs.com" for a unique peer id. [Note: make your peer id long enough so that it is always unique, at least 64 chars long]
In this way, you can avoid the CORS problem.
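A minimal sketch of that idea (the generation method and 64-character length here are my own choice, not part of the original answer):
var bytes = crypto.getRandomValues(new Uint8Array(32));          // 32 random bytes
var myPeerId = Array.from(bytes, function (b) {                   // hex-encode them -> 64 chars
  return b.toString(16).padStart(2, '0');
}).join('');
var peer = new Peer(myPeerId);                                    // no id request to 0.peerjs.com needed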
Update:
I just saw a new issue on github which says the public peerjs cloud server is now unstable or does not work properly. It just gives errors like: Firefox cannot establish a connection with the server at the address wss://0.peerjs.com/peerjs?key=peerjs&id=123222589562487856955685485555&token=ocyxworx62i and in Chrome: Error in connection establishment: net::ERR_CONNECTION_REFUSED. For details check here. So it's better to use your own server (see the next approach).
Alternative approach 2 (using your own peerjs server):
You can host your own peerjs server instead of the PEERJS CLOUD SERVER. In this way, you can allow access to anyone/any website you want. If you want to know how to host a peerjs server, you may visit here.
[P.S.]: I have studied peerjs issues on github. After reading all those issues, it seems it is better to use your own server rather than the peerjs cloud. There are a lot of problems with each new release of peerjs, mostly related to the connection with the peerjs cloud, and the peerjs cloud is not that stable, I guess. They were hosting it at 0.peerjs.com:9000 before (not secure), but now at 0.peerjs.com:443.
I haven't used peerjs before nor set up a peerjs server. If you want to set one up, I hope the community will be able to help you do that properly.
What I understand from your question is that there is a CORS (Cross-Origin Resource Sharing) issue. Maybe what I am suggesting is not very intuitive.
First: download "https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js" to your local directory and then include the local JavaScript file in the HTML,
like: <script src="./peerjs.min.js"></script>
Second:
you are using var peer = new Peer();
but please provide an extra unique id from your side. For example, I just created a random id and provided it.
StackOverflow link: https://stackoverflow.com/questions/21216758/peerjs-set-your-own-peerid#:~:text=1%20Answer&text=Provide%20a%20peer%20id%20when,to%20under%20Create%20a%20peer.
var a_random_id = Math.random().toString(36).replace(/[^a-z]+/g, '').substr(2, 10);
var peer = new Peer(a_random_id, {key: 'myapikey'});
Third: the best option is to run PeerServer, a server for PeerJS, of your own.
If you don't want to develop anything, just enter a few commands below.
Install the package globally:
$ npm install peer -g
Run the server:
$ peerjs --port 9000 --key peerjs --path /myapp
Started PeerServer on ::, port: 9000, path: /myapp (v. 0.3.2)
Check it: http://127.0.0.1:9000/myapp. It should return JSON with name, description, and website fields.
Details: https://github.com/peers/peerjs-server
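To point the browser client at that locally hosted server instead of the 0.peerjs.com cloud, pass connection options to the Peer constructor. A rough sketch (the id is made up, and host/port/path/key simply mirror the command above; adjust them to your deployment):
var peer = new Peer('my-own-peer-id-1234567890', {
  host: '127.0.0.1',   // where `peerjs --port 9000 --key peerjs --path /myapp` is running
  port: 9000,
  path: '/myapp',
  key: 'peerjs'
});
peer.on('open', function (id) {
  console.log('Registered with the local PeerServer as ' + id);
});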

keycloak Invalid parameter: redirect_uri

When I try to authenticate a user from my API against Keycloak, I get the error Invalid parameter: redirect_uri on the Keycloak page. I have created my own realm apart from master. Keycloak is running on http. Please help me.
What worked for me was adding the wildcard '*'. For production builds I am going to be more specific with the value of this field, but for dev purposes you can do this.
The setting is available under: Keycloak admin console -> Realm_Name -> Clients -> Client_Name.
EDIT: DO NOT DO THIS IN PRODUCTION. Doing so creates a large security flaw.
If you are a .NET developer, please check the configuration below.
In the KeycloakAuthentication options class, set
CallbackPath = RedirectUri, // this property needs to be set, otherwise it will show the invalid redirect_uri error
I faced the same error. In my case, the issue was that Valid Redirect URIs was not correct. So these are the steps I followed:
First, log in to Keycloak as an admin user.
Then select your realm (you may be auto-directed to it).
Select Clients from the left panel.
Then select the relevant client which you configured for your app.
By default you will be on the Settings tab; if not, select it.
My app was running on port 3000, so my correct setting matched that port: for an app running on localhost:3000, the Valid Redirect URIs setting should be something like http://localhost:3000/*
If you're getting this error because of a new realm you created
You can directly change the URL in the URL bar to get past this error. In the URL that you are redirected to (you may have to look in Chrome dev tools for this URL), change the realm from master to the one you just created, and if you are not using https, then make sure the redirect_uri is also using http.
If you're getting this error because you're trying to setup Keycloak on a public facing domain (not localhost)
Step 1)
Follow this documentation to set up a MySQL database (link's broken. If you find some good alternative documentation that works for you, feel free to update this link and remove this message). You may also need to refer to this documentation.
Step 2)
Run the command update REALM set ssl_required = 'NONE' where id = 'master';
Note:
At this point you should technically be able to log in, but version 4.0 of Keycloak is using https for the redirect uri even though we just turned off https support. Until Keycloak fixes this, we can get around it with a reverse proxy. A reverse proxy is something we will want to use anyhow to easily create SSL/TLS certificates without having to worry about Java keystores.
Note 2: After writing these instructions, Keycloak came out with their own proxy. They then stopped supporting it and recommended using oauth2 proxy instead. It is lacking some features the Keycloak proxy had, and an unofficial version of that proxy is still being maintained here. I haven't tried using either of these proxies, but at this point you might want to stop following my directions and use one of those instead.
Step 3) Install Apache. We will use Apache as a reverse proxy (I tried NGINX, but NGINX had some limitations that got in the way). See yum installing Apache (CentOS 7) and apt-get installing Apache (Ubuntu 16), or find instructions for your specific distro.
Step 4) Run Apache
Use sudo systemctl start httpd (CentOS) or sudo systemctl start apache2 (Ubuntu).
Use sudo systemctl status httpd (CentOS) or sudo systemctl status apache2 (Ubuntu) to check if Apache is running. If you see the words active (running) in green text, or if the last entry reads Started The Apache HTTP Server., then you're good.
Step 5) We will establish an SSL connection with the reverse proxy, and then the reverse proxy will communicate with Keycloak over http. Because this http communication happens on the same machine, you're still secure. We can use Certbot to set up auto-renewing certificates.
If this type of encryption is not good enough and your security policy requires end-to-end encryption, you will have to figure out how to set up SSL through WildFly instead of using a reverse proxy.
Note:
I was never actually able to get https to work properly with the admin portal. Perhaps this was just a bug in the beta version of Keycloak 4.0 that I'm using. You're supposed to be able to set the SSL level to only require it for external requests, but this did not seem to work, which is why we set ssl_required to NONE in step 2. From here on we will continue to use http over an SSH tunnel to manage the admin settings.
Step 6)
Whenever you try to visit the site via https, you will trigger an HSTS policy which will auto-force http requests to redirect to https. Follow these instructions to clear the HSTS rule from Chrome, and then for the time being, do not visit the https version of the site again.
Step 7)
Configure Apache. Add the virtual host config in the code block below. If you've never done this, then the first thing you'll need to do is figure out where to add this config file.
On RHEL and some other distros
you'll need to find where your httpd.conf or apache2.conf file is located. That config file should be loading virtual host config files from another folder such as conf.d.
If you are using Ubuntu or Debian,
your config files will be located in /etc/apache2/sites-available/ and you'll have an extra step of needing to enable them by running the command sudo a2ensite name-of-your-conf-file.conf. That'll create a symlink in /etc/apache2/sites-enabled/ which is where Apache looks for config files on Ubuntu/Debian (and remember the config file was placed in sites-available, slightly different).
All distros
Once you've found the config files, change out, or add, the following virtual host entries in your config files. Make sure you don't override the already present SSL options that were generated by certbot. When done, your config file should look something like this.
<VirtualHost *:80>
RewriteEngine on
#change https redirect_uri parameters to http
RewriteCond %{request_uri}\?%{query_string} ^(.*)redirect_uri=https(.*)$
RewriteRule . %1redirect_uri=http%2 [NE,R=302]
#uncomment to force https
#does not currently work
#RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI}
#forward the requests on to keycloak
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
<IfModule mod_ssl.c>
<VirtualHost *:443>
RewriteEngine on
#Disable HSTS
Header set Strict-Transport-Security "max-age=0; includeSubDomains;" env=HTTPS
#change https redirect_uri parameters to http
RewriteCond %{request_uri}\?%{query_string} ^(.*)redirect_uri=https(.*)$
RewriteRule . %1redirect_uri=http%2 [NE,R=302]
#forward the requests on to keycloak
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
#Leave the items added by certbot alone
#There should be a ServerName option
#And a bunch of options to configure the location of the SSL cert files
#Along with an option to include an additional config file
</VirtualHost>
</IfModule>
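One assumption worth making explicit: the virtual hosts above rely on mod_rewrite, mod_headers, mod_proxy, mod_proxy_http and mod_ssl. On Ubuntu/Debian you would enable them with something like:
sudo a2enmod rewrite headers proxy proxy_http ssl
sudo systemctl restart apache2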
Step 8) Restart Apache. Use sudo systemctl restart httpd (CentOs) or sudo systemctl restart apache2 (Ubuntu).
Step 9)
Before you have a chance to try to log in to the server, since we told Keycloak to use http, we need to set up another method of connecting securely. This can be done either by installing a VPN service on the Keycloak server or by using SOCKS. I used a SOCKS proxy. In order to do this, you'll first need to set up dynamic port forwarding.
ssh -N -D 9905 user@example.com
Or set it up via PuTTY.
All traffic sent to port 9905 will now be securely routed through an SSH tunnel to your server. Make sure you whitelist port 9905 on your server's firewall.
Once you have dynamic port forwarding setup, you will need to setup your browser to use a SOCKS proxy on port 9905. Instructions here.
Step 10) You should now be able to login to the Keycloak admin portal. To connect to the website go to http://127.0.0.1, and the SOCKS proxy will take you to the admin console. Make sure you turn off the SOCKS proxy when you're done as it does utilize your server's resources, and will result in a slower internet speed for you if kept on.
Step 11) Don't ask me how long it took me to figure all of this out.
IMPORTANT UPDATE IN KEYCLOAK 18
In the newest keycloak 18, they have deprecated the redirect_uri variable for the openid connect logout -> https://www.keycloak.org/docs/latest/upgrading/index.html#openid-connect-logout
Go to the Keycloak admin console > SpringBootKeycloak > Clients > login-app page.
Here, in the Valid Redirect URIs section, add
http://localhost:8080/sso/login
This will help resolve the invalid redirect_uri problem.
For me, I had a missing trailing slash / in the value for Valid Redirect URIs
[For Keycloak version 18 or Higher]
None of the mentioned solutions will work if you are using Keycloak 18 or a higher version.
According to the version 18 release notes, Keycloak no longer supports logout with redirect_uri. You need to include post_logout_redirect_uri and id_token_hint as parameters.
Please check the answer of this question for more information.
keycloak: using react user can login but when I try logout I get a message "Invalid parameter: redirect_uri"
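As a sketch of what that looks like (realm, host and redirect target are placeholders; the id_token value comes from the login response):
GET /realms/myrealm/protocol/openid-connect/logout
    ?post_logout_redirect_uri=http%3A%2F%2Flocalhost%3A3000%2F
    &id_token_hint=<ID token received at login>
The post_logout_redirect_uri must also be allowed by the client, e.g. via the Valid post logout redirect URIs field mentioned in a later answer.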
Log in to the Keycloak admin console website, select the realm and its client, then make sure all URIs of the client are prefixed with the protocol, that is, with http:// for example. An example would be http://localhost:8082/*
Another way to solve the issue, is to view the Keycloak server console output, locate the line stating the request was refused, copy from it the redirect_uri displayed value and paste it in the * Valid Redirect URIs field of the client in the Keycloak admin console website. The requested URI is then one of the acceptables.
If you're seeing this problem after you've made a modification to the Keycloak context path, you'll need to make an additional change to a redirect url setting:
Change <web-context>yourchange/auth</web-context> back to
<web-context>auth</web-context> in standalone.xml
Restart Keycloak and navigate to the login page (/auth/admin)
Log in and select the "Master" realm
Select "Clients" from the side menu
Select the "security-admin-console" client from the list that appears
Change the "Valid Redirect URIs" from /auth/admin/master/console/* to
/yourchange/auth/admin/master/console/*
Save and sign out. You'll again see the "Invalid redirect url" message after signing out.
Now, put in your original change
<web-context>yourchange/auth</web-context> in standalone.xml
Restart Keycloak and navigate to the login page (which is now
/yourchange/auth/admin)
Log in and enjoy
I faced the same issue. I rectified it by going to the particular client under the realm and, in the redirect URL, adding * after the complete URL.
I had the same problem with "localhost" in the redirect URL. Changing it to 127.0.0.1 in the "Valid Redirect URIs" field of the client config (Keycloak web admin console) worked for me.
It seems that this problem can occur if you put whitespace in your realm name. I had the name set to Debugging Realm and I got this error. When I changed it to DebuggingRealm it worked.
You can still have whitespace in the display name. Odd that Keycloak doesn't check for this on admin input.
Even I faced the same issue. I rectified it by going to the particular client under the realm and, in the redirect URL, adding * after the complete URL. The problem will be solved.
Example:
redirect URI: http://localhost:3000/myapp/generator/*
Looking at the exact rewrite was key for me. The wellKnownUrl lookup was returning "http://127.0.01:7070/" and I had specified "http://localhost:7070" ;-)
I faced the Invalid parameter: redirect_uri problem while following the Spring Boot and Keycloak example available at http://www.baeldung.com/spring-boot-keycloak. When adding the client on the Keycloak server, we have to provide the redirect URI for that client so that the Keycloak server can perform the redirection.
When I faced the same error multiple times, I copied the correct URL from the Keycloak server console, provided it in the Valid Redirect URIs field, and it worked fine!
This error is also thrown when your user does not have the expected role delegated in the user definition (set the role for the realm in the drop-down).
Your redirect URI in your code (keycloak.init) should be the same as the redirect URI set on the Keycloak server (client -> Valid Redirect URIs).
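A minimal sketch with the Keycloak JavaScript adapter (URL, realm, client and redirect values are placeholders; Keycloak versions before 17 include /auth in the base URL):
var keycloak = new Keycloak({
  url: 'http://localhost:8080',   // Keycloak base URL
  realm: 'myrealm',
  clientId: 'my-client'
});
keycloak.init({ onLoad: 'login-required' }).then(function (authenticated) {
  console.log(authenticated ? 'authenticated' : 'not authenticated');
});
// Or trigger login explicitly; this redirectUri must be covered by the
// client's Valid Redirect URIs in the admin console:
// keycloak.login({ redirectUri: 'http://localhost:3000/' });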
Ran into this problem too. After two days of pulling my hair out I discovered that the URLs in Keycloak are case sensitive. However the browser converts the URL to lowercase, which means that uppercase URLs in Keycloak will never work.
e.g. my server name is MYSERVER (hostname returns MYSERVER)
Keycloak URLs are https://MYSERVER:8080/*
Browse to https://myserver:8080 -> fails invalid_url
Browse to https://MYSERVER:8080 -> fails invalid_url
Change Keycloak URLs to https://myserver:8080/*
Browse to https://myserver:8080 -> works
Browse to https://MYSERVER:8080 -> works
We also saw this, but only on certain URLs. After seeing this clue, I realized that the Java URI constructor has to be able to decode it,
like so: URI uri = URI.create(redirectUri);
We had a { and } in our URLs, which normally worked fine, but when going through two layers of URL decode/encode, Java decided the { and } were invalid.
We'll be changing our curly braces to something else to get around the double encode/decode issue.
I know other people provided the same answer, but my reputation was not high enough to upvote them. In the redirect menu, mine had a redirect of "0.0.0.0:8080/*". I changed it to (actual IP):8080/* and it worked.
In your client, set the origin of your request. In my case, localhost:3000 (JavaScript client).
If you are using the Authorization Code Flow then the response_type query param must be equal to code. See https://www.keycloak.org/docs/3.3/server_admin/topics/sso-protocols/oidc.html
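For reference, a minimal authorization request in that flow looks roughly like this (realm, client_id and redirect_uri are placeholders; Keycloak versions before 17 prefix the path with /auth):
GET /realms/myrealm/protocol/openid-connect/auth
    ?client_id=my-client
    &redirect_uri=http%3A%2F%2Flocalhost%3A3000%2Fcallback
    &response_type=code
    &scope=openid
The redirect_uri sent here must match one of the client's Valid Redirect URIs, otherwise Keycloak responds with Invalid parameter: redirect_uri.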
You need to check the Keycloak admin console for the frontend configuration. It must be wrongly configured for the redirect url and web origins.
If you're trying to redirect to the keycloak login page after logout (as I was), that is not allowed by default but also needs to be configured in the "Valid Redirect URIs" setting in the admin console of your client.
Check that the value of the redirect_uri parameter is whitelisted for the client that you are using. You can manage the configuration of the client via the admin console.
The redirect uri should exactly match one of the whitelisted redirect uris, or you can use a wildcard at the end of the uri you want to whitelist. See: https://www.keycloak.org/docs/latest/server_admin/#_clients
Note that using wildcards to whitelist redirect uri's is allowed by Keycloak, but is actually a violation of the OpenId Connect specification. See the discussion on this at https://lists.jboss.org/pipermail/keycloak-dev/2018-December/011440.html
My issue was caused by the wrong client_id (OPENID_CLIENT_ID) defined in the deployment.yaml. Make sure this field matches the client id configured in Keycloak.
The problem seems related to an invalid value in the Valid Redirect URIs field. You can try one of these tips:
set the same value as the Client ID (if it's a URL), making it end with /* , or
tryingToLearn's response [https://stackoverflow.com/a/51420355/97799] (but beware of security issues).
I ran into this due to a malformed redirect url in the keycloak client:
https://http://192.168.1.10/hub/oauth_callback
As soon as I took out https://, the error went away.
I'm using version 20.0.2 and, for me, the solution was to simply add a '+' in the "Valid post logout redirect URIs" field.
As stated in the help balloon, "A value of '+' will use the list of valid redirect uris".
I faced a similar issue because I created a realm with two words and had a space in it, e.g. Test Realm; this gave me the error. I put an underscore and was good to go, e.g. Test_Realm.

How to check HTTP response code in zabbix?

I have a Zabbix server 2.2 and a few linux hosts with websites. How can I get a notification from Zabbix, if the HTTP(s) response code is not 200?
I've tried those triggers without any success:
{owncloud:web.test.rspcode[Availability of owncloud,owncloud availability].last(,10)}#200
{owncloud:web.test.error[Availability of owncloud].count(10,200)}<1
{owncloud:web.test.error[Availability of owncloud].last(#1,10)}=200
But nothing works. I never got a notification that the code was not 200 anymore, even when it was 404 (I had renamed the index.php of owncloud to index2.php).
I configured the Application and the Web Scenario as follows:
If you have already configured the host, go to step 1.
1) Select the host via Configuration -> Host groups -> select host (example: server 1)
2) Go to Configuration > Hosts > [Host Created Above] > Applications and click on Create Application
3) Now you have to create the Web Scenario with the status code check; in my case I checked for status code 200. Go to Configuration > Hosts > [Host Created Above] > Web Scenarios and click on Create Web Scenario.
Remark: you have to select the application created in step 2
4) After that, without clicking the Add button, go to the Steps window and configure the host and parameters for the check. Then click Add. In my case I check for a status code 200 response to the HTTP request.
I found the issue. You need to specify the URL to check including the file. For example, like this in your web scenario:
https://owncloud.example.com/index.php
"Note that Zabbix frontend uses JavaScript redirect when logging in, thus first we must log in, and only in further steps we may check for logged-in features. Additionally, the login step must use full URL to index.php file." - https://www.zabbix.com/documentation/2.4/manual/web_monitoring/example
I also used following expression as trigger:
{owncloud:web.test.fail[Availability of owncloud].last()}>0
You can set a trigger by expression:
{host name:web.test.rspcode[Scenario name,Step name].last()}=200
The question has been answered adequately, but I will provide a much more advanced solution that you can use for all HTTP status codes.
I've created an item that monitors all HTTP status codes of a proxy, graphs them, and then set up multiple types of triggers to watch the last value and counts in the last N minutes.
The regex I used to extract all the values from an Nginx or Apache access log is
^(\S+) (\S+) (\S+) \[([\w:\/]+\s[+\-]\d{4})\] \"(\S+)\s?(\S+)?\s?(\S+)?\" (\d{3}|-) (\d+|-)\s?\"?([^\"]*)\"?\s?\"?([^\"]*)\"?\s
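As a sketch of how that regex can feed an item (log path, host name and threshold are my assumptions, not from the original answer), a Zabbix agent log item can return just the status code through the output parameter, and a trigger can then count bad responses:
log[/var/log/nginx/access.log,"<the regex above>",,,skip,\8]
{myhost:log[/var/log/nginx/access.log,"<the regex above>",,,skip,\8].count(5m,500)}>10
Here \8 returns the eighth capture group of the regex, which is the status code field.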
I then set many triggers relevant for my particular situation
101 Switching Protocols
301 Moved Permanently
302 Redirect
304 not modified
400 Bad Request
401 Unauthorised
403 Forbidden
404 Not found
500 Server Error
It's also important that your Zabbix agent has permission to read the log file on the host. You can add the zabbix agent user to the www-data group using this command.
$ sudo usermod -a -G www-data zabbix
See the tutorial for all the steps in greater detail.

Debian - invoke external script from exim on receipt of emails

I am looking for pointers on the best approach to process incoming emails to a certain vhost and to call an external script with the email data as parameters - basically to allow email to be sent to a certain "private" email address at a host, which then auto-inserts something into that site's database. I currently have exim set up as the mail handler.
You have to follow exim's single-file configuration structure. In the routers section, write your own custom router that will deliver email to your desired php script. In the transports section, write your own custom transport that will ensure delivery to the desired script using curl. Just write the following configuration in your /etc/exim.cnf file:
############ROUTERS
runscript:
driver = accept
transport = run_script
unseen
no_expn
no_verify
############TRANSPORT
run_script:
debug_print = "T: run_script for $local_part@$domain"
driver = pipe
command = /home/bin/curl http://my.domain.com/mailTest.php --data-urlencode $original_local_part@$original_domain
Where mailTest.php will be your destination script.
Procmail is a good generic answer. If your needs are very specific, you could hook in your own script directly from your .forward (or Exim's corresponding construct -- can't remember exactly how it differs), but oftentimes, wrapping your own script inside a simple .procmailrc helps you avoid a bunch of iffy details of email delivery, and concentrate on the actual processing.
:0
* ^Subject: secretpassword adduser \/[A-Z]+
| echo "insert $MATCH into users" | mysql -D users

Catchall Router on Exim does not work

I have setup a catchall router on exim (used as last router):
catchall:
driver = redirect
domains = +local_domains
data = ${lookup{*@$domain}lsearch{/etc/aliases}}
retry_use_local_part
This works perfectly when sending emails locally. However, if I log in to my GMail account and send an email to whatever@mydomain.com, then I get an "Unrouteable Address".
Thank you for any hints to solve this issue.
In the system_aliases: section of the config file you already have a section which does the lookup in /etc/aliases.
Replace
data = ${lookup{$local_part}lsearch{/etc/aliases}}
with
data = ${lookup{$local_part}lsearch*@{/etc/aliases}}
and make sure you have an entry like *: catchall_username in /etc/aliases
This works great for a single domain mail server which is already using /etc/aliases
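A minimal sketch of what that file could then contain (the alias target is just a placeholder):
# /etc/aliases
postmaster: root
# catch-all entry matched by the lsearch*@ lookup above
*: catchall_username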
For this router to work, make sure that
mydomain.com is in local_domains
there is an entry for *@mydomain.com in /etc/aliases
the MX record for mydomain.com is pointing to the server where you've configured this (then verify routing as sketched below)
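To confirm the catchall router actually picks up such an address, you can test routing from the command line (a sketch; on Debian the binary may be called exim4, and the address is a placeholder):
exim -bt whatever@mydomain.com
If the router matches, the output shows the resolved alias target instead of the "Unrouteable address" error.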
This is old as heck, but I didn't see a good answer posted and someone else might want to know the answer.
This post is geared towards Debian with exim4 in single configuration file mode. It should work on any Linux Exim4 install though. For the purpose of explaining things we'll use test@example.com, which is configured with the hostname mail.example.com. The system will have a real user called test and we want to create an alias for test called alias. So the end result will be that all email sent to alias@example.com is forwarded to test@example.com without having to create the user alias on the system.
First we need to create a place to store all of the alias files:
mkdir /etc/exim/aliases.d
vim /etc/exim/aliases.d/mail.example.com
Contents of the alias file for mail.example.com:
alias: test
vim /etc/exim/exim4.conf.template
Now look for the section system_aliases. Here you’ll see data = ${lookup{$local_part}lsearch{/etc/aliases}} or something similar. Change that to
data = ${lookup{$local_part}lsearch{/etc/exim4/aliases.d/$domain}}
Save the file and restart exim. The alias should now work. To add support for other domains just add more alias files in the aliases.d directory with the correct hostname.
I copied and pasted this from my blog:
0xeb.info