How to detect a hangup event in FreeSWITCH? - Perl

I'm new to FreeSWITCH and looking for help from experts. My problem is below.
I'm trying to implement the following scenario in Perl:
When an incoming call reaches my script (test.pl), I play a file to it and then put the inbound session on hold. Then I try to make a new outbound session from a separate script (test2.pl). If the outbound call is answered, I take the inbound session off hold and bridge the two sessions. The scripts are below.
test.pl
use strict;
use warnings;

our $session;

if ( $session->ready() ) {
    $session->setVariable( 'hangup_after_bridge', 'false' );
    $session->setVariable( 'continue_on_fail', 'true' );
    $session->answer();

    # Launch test2.pl in the background, passing this call's UUID.
    my $api = new freeswitch::API;
    $api->execute( 'perlrun', '/usr/share/freeswitch/scripts/test2.pl ' . $session->get_uuid() );

    # Hold the caller on music while test2.pl dials the outbound party.
    $session->setVariable( 'playback_timeout_sec', '70' );
    $session->execute( "playback", '$${hold_music}' );

    if ( ( $session->getVariable('outbound_answered') // '' ) eq 'true' ) {    # string comparison ('eq'), not '=='
        my $outbound_uuid    = $session->getVariable('outbound_uuid');
        my $outbound_session = new freeswitch::Session($outbound_uuid);
        if ( $outbound_session->ready() ) {
            $session->bridge($outbound_session);
        }
        else {
            # Outbound session disconnected.
        }
    }
    else {
        # Outbound session didn't answer.
    }
}
else {
    # Inbound disconnected.
}

1;
test2.pl
use strict;
use warnings;

my $api = new freeswitch::API;

my $inbound_uuid    = $ARGV[0];
my $inbound_session = new freeswitch::Session($inbound_uuid);

# Originate the outbound leg; the constructor returns once the call is answered or fails.
my $originate_str    = "sofia/gateway/outbound/0416661666";
my $outbound_session = new freeswitch::Session($originate_str);
my $hangup_cause_out = $outbound_session->hangupCause();

$inbound_session->setVariable( 'outbound_uuid', $outbound_session->get_uuid() );

if ( $outbound_session->ready() ) {
    if ( $inbound_session->ready() ) {
        $inbound_session->setVariable( 'outbound_answered', 'true' );
        $outbound_session->streamFile("/usr/share/freeswitch/sounds/en/us/callie/ivr/8000/ivr-please_hold_while_party_contacted.wav");

        # Interrupt the hold music so test.pl can continue and bridge.
        $inbound_session->execute('break');

        while ( $inbound_session->ready() ) {
            sleep(10);
            # Inbound session OK, keep the outbound session up.
        }
    }
    else {
        # Inbound session got disconnected while we were trying the outbound call.
    }
}
else {
    # Outbound call failed.
}

1;
Debian syslog:
Jul 19 13:14:50 XXX systemd[1]: freeswitch.service: Main process exited, code=killed, status=6/ABRT
Jul 19 13:14:50 XXX systemd[1]: freeswitch.service: Unit entered failed state.
Jul 19 13:14:50 XXX systemd[1]: freeswitch.service: Failed with result 'signal'.
Jul 19 13:14:50 XXX systemd[1]: freeswitch.service: Service hold-off time over, scheduling restart.
Jul 19 13:14:50 XXX systemd[1]: Stopped freeswitch.
Jul 19 13:14:50 XXX systemd[1]: Starting freeswitch...
The expected scenario works without issues. But my problem is that FreeSWITCH restarts when the inbound session is hung up by the calling party while FreeSWITCH is still ringing the outbound party.
UPDATE 1:
I think I need to handle the inbound session's hangup event, but I'm kind of lost here. Maybe I'm doing this all wrong.
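What I have in mind is roughly the sketch below - untested, and it assumes the mod_perl Session object exposes setHangupHook the way the Lua binding does (I'm not even sure whether it wants the handler's name or a code reference), with ready() checked again before every later operation:
use strict;
use warnings;

our $session;

# Hypothetical handler name; here it only logs. Real cleanup would signal
# test2.pl (e.g. via a channel variable) to stop dialing the outbound leg.
sub on_hangup {
    freeswitch::consoleLog( "INFO", "inbound leg hung up\n" );
}

if ( $session->ready() ) {
    $session->setHangupHook("on_hangup");    # or \&on_hangup - unsure which form mod_perl expects
    $session->answer();
    $session->execute( "playback", '$${hold_music}' );

    # Re-check ready() instead of assuming the caller is still on the line.
    if ( $session->ready() ) {
        # ... bridge logic as in test.pl above ...
    }
}

1;
Is that the right direction, or is there a better pattern for this in mod_perl?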
UPDATE 2:
I have been seeing this issue on FreeSWITCH 1.8.7. I tried the same code on FreeSWITCH 1.6.20 and it didn't crash, but it gave the error below:
[ERR] freeswitch_perl.cpp:114 session is not initalized
So it seems the FreeSWITCH 1.8 mod_perl module is not handling this properly.
Looking for suggestions from FreeSWITCH experts.
Thank you!

Related

Foswiki plugin on Perl/FCGI fails to use File::Find at the 5th attempt

I am writing a REST plugin for Foswiki using Perl and I am facing a reliability issue when using File::Find. I have tried my best to write a minimal reproducible example. The plugin uses File::Find to traverse directories and print the filenames in the HTTP response. The REST request works properly 4 times, but stops working the 5th time. The HTTP status remains "HTTP/1.1 200 OK" but no file is reported by File::Find anymore.
The webserver is nginx and it is configured to use FastCGI. It appears to run 4 workers managed by foswiki-fcgi-pm:
> ps aux
www-data 16957 0.0 7.7 83412 78332 ? Ss 16:52 0:00 foswiki-fcgi-pm
www-data 16960 0.0 7.5 83960 76740 ? S 16:52 0:00 foswiki-fcgi
www-data 16961 0.0 7.6 84004 76828 ? S 16:52 0:00 foswiki-fcgi
www-data 16962 0.0 7.6 83956 76844 ? S 16:52 0:00 foswiki-fcgi
www-data 16963 0.0 7.5 83960 76740 ? S 16:52 0:00 foswiki-fcgi
Firstly, the plugin initialization simply registers the REST handler:
sub initPlugin {
    my ( $topic, $web, $user, $installWeb ) = @_;

    # check for Plugins.pm versions
    if ( $Foswiki::Plugins::VERSION < 2.3 ) {
        Foswiki::Func::writeWarning( 'Version mismatch between ',
            __PACKAGE__, ' and Plugins.pm' );
        return 0;
    }

    Foswiki::Func::registerRESTHandler(
        'restbug', \&RestBug,
        authenticate => 0,           # Set to 0 if handler should be useable by WikiGuest
        validate     => 0,           # Set to 0 to disable StrikeOne CSRF protection
        http_allow   => 'GET,POST',  # Set to 'GET,POST' to allow use HTTP GET and POST
        description  => 'Debug'
    );

    # Plugin correctly initialized
    return 1;
}
Secondly, the REST handler is implemented as follows, printing all the files it can possibly find:
sub RestBug {
    my ( $session, $subject, $verb, $response ) = @_;
    my @Directories = ("/var/www/foswiki/tools");

    sub findfilestest {
        $response->print("FILE $_\n");
    }

    find( { wanted => \&findfilestest }, @Directories );
}
When I test the REST service with an HTTP request, the first 4 times I get the following HTTP response, which seems quite satisfying:
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Tue, 22 Nov 2022 09:23:10 GMT
Content-Length: 541
Connection: keep-alive
Set-Cookie: SFOSWIKISID=385db599c5d66bb19591e1eef7f1a854; path=/; secure; HttpOnly
FILE .
FILE foswiki.freebsd.init-script
FILE bulk_copy.pl
FILE dependencies
FILE mod_perl_startup.pl
FILE geturl.pl
FILE extender.pl
FILE extension_installer
FILE configure
FILE lighttpd.pl
FILE foswiki.freebsd.etc-defaults
FILE save-pending-checkins
FILE babelify
FILE upgrade_emails.pl
FILE tick_foswiki.pl
FILE foswiki.defaults
FILE rewriteshebang.pl
FILE fix_file_permissions.sh
FILE foswiki.init-script
FILE convertTopicSettings.pl
FILE mailnotify
FILE html2tml.pl
FILE tml2html.pl
FILE systemd
FILE foswiki.service
The following attempts give this unexpected response:
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Tue, 22 Nov 2022 09:24:56 GMT
Transfer-Encoding: chunked
Connection: keep-alive
Set-Cookie: SFOSWIKISID=724b2c4b1ddfbebd25d0dc2a0f182142; path=/; secure; HttpOnly
Note that if I restart Foswiki with the command systemctl restart foswiki, the REST service works again 4 more times.
How can I make this REST service work more than 4 times in a row?
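Not a definitive answer, but one thing stands out in the handler above: findfilestest is a named sub declared inside RestBug, and under a persistent interpreter (FCGI keeps each foswiki-fcgi worker alive between requests) a named inner sub captures $response only the first time it is compiled (Perl's "variable will not stay shared" behaviour). That would match exactly one good request per worker, i.e. 4 in total. A minimal sketch of the same handler using an anonymous closure, which re-captures the current $response on every call:
use File::Find;

sub RestBug {
    my ( $session, $subject, $verb, $response ) = @_;
    my @Directories = ("/var/www/foswiki/tools");

    # Anonymous sub: closes over the $response of the current request,
    # not the one from the first request this worker ever served.
    my $wanted = sub {
        $response->print("FILE $_\n");
    };

    find( { wanted => $wanted }, @Directories );
}
Running the plugin with warnings enabled should confirm or rule this out: Perl emits "Variable "$response" will not stay shared" when it compiles the nested named sub.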

Vert.x event bus consumer called twice

I wrote some simple code to communicate between a Java server using Vert.x and a browser client via the event bus library. The client sends a message to the server through the event bus, and the server calls eventBus.consumer to read the message and reply to it.
I noticed something strange: the consumer handler is called twice, although I expect it to be called once. How can I solve this? Thanks.
This is my code:
SERVER
SockJSBridgeOptions options = new SockJSBridgeOptions()
        .addInboundPermitted(new PermittedOptions().setAddressRegex("out"));
sockJSHandler.bridge(options);
router.route("/eventbus/*").handler(sockJSHandler);

eventBus.<String>consumer("out", event -> {
    logger.info(event.body());
    event.reply("TEST");
});

vertx.createHttpServer().requestHandler(router).listen(serverPort, res -> {
    if (res.succeeded()) {
        startPromise.complete();
    } else {
        startPromise.fail(res.cause());
    }
});
CLIENT
var eb = new EventBus('http://localhost:8088/eventbus');

eb.onopen = () => {
    eb.send('out', {name: 'tim', age: 5817}, (e, m) => {
        console.log("TEST RESPONSE");
        console.log(JSON.stringify(m));
    });
};
CONSOLE SERVER OUTPUT
apr 07, 2022 7:55:43 PM it.unibo.guessthesong.lobby.LobbyVerticle
INFO: {"name":"tim","age":5817}
apr 07, 2022 7:55:43 PM it.unibo.guessthesong.lobby.LobbyVerticle
INFO: {"name":"tim","age":5817}
CONSOLE CLIENT OUTPUT
TEST RESPONSE
{"type":"rec","address":"92888ada-fada-43ff-9677-9eeed95b44da","body":"TEST"}
TEST RESPONSE
{"type":"rec","address":"2ced1214-13cf-4ebf-b7d8-6773612097e7","body":"TEST"}

Private-only Dovecot local Docker configuration for one user fails login from Apple Mail

I'm trying to set up a local Docker Dovecot machine to archive my e-mails, and I would like to query them with Apple Mail. I have a simple Ubuntu Docker machine (on a VM with Parallels, because I'm on a Mac).
I have this local.conf:
# A comma separated list of IPs or hosts where to listen in for connections.
# "*" listens in all IPv4 interfaces, "::" listens in all IPv6 interfaces.
# If you want to specify non-default ports or anything more complex,
# edit conf.d/master.conf.
listen = *,::
# Protocols we want to be serving.
protocols = imap
# Static passdb.
# This can be used for situations where Dovecot doesn't need to verify the
# username or the password, or if there is a single password for all users:
passdb {
  driver = static
  args = password=dovecot
}
# Location for users' mailboxes. The default is empty, which means that Dovecot
# tries to find the mailboxes automatically. This won't work if the user
# doesn't yet have any mail, so you should explicitly tell Dovecot the full
# location.
#
# If you're using mbox, giving a path to the INBOX file (eg. /var/mail/%u)
# isn't enough. You'll also need to tell Dovecot where the other mailboxes are
# kept. This is called the "root mail directory", and it must be the first
# path given in the mail_location setting.
#
# There are a few special variables you can use, eg.:
#
# %u - username
# %n - user part in user#domain, same as %u if there's no domain
# %d - domain part in user#domain, empty if there's no domain
# %h - home directory
#
# See doc/wiki/Variables.txt for full list. Some examples:
#
# mail_location = maildir:~/Maildir
# mail_location = mbox:~/mail:INBOX=/var/mail/%u
# mail_location = mbox:/var/mail/%d/%1n/%n:INDEX=/var/indexes/%d/%1n/%n
#
# <doc/wiki/MailLocation.txt>
#
mail_location = maildir:/var/mail/%n
# System user and group used to access mails. If you use multiple, userdb
# can override these by returning uid or gid fields. You can use either numbers
# or names. <doc/wiki/UserIds.txt>
# mail_uid = CHANGE_THIS_to_your_short_user_name_or_uid
# mail_gid = admin
# SSL/TLS support: yes, no, required. <doc/wiki/SSL.txt>
ssl = no
# Login user is internally used by login processes. This is the most untrusted
# user in Dovecot system. It shouldn't have access to anything at all.
# default_login_user = _dovenull
# Internal user is used by unprivileged processes. It should be separate from
# login user, so that login processes can't disturb other processes.
# default_internal_user = _dovecot
# Setting limits.
default_process_limit = 10
default_client_limit = 50
and I'm getting this from Apple Mail:
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Wrote: 1.11 ID ("name" "Mac OS X Mail" "version" "9.3 (3124)" "os" "Mac OS X" "os-version" "10.11.5 (15F34)" "vendor" "Apple Inc.")
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Read: * ID {
name = Dovecot;
}
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Read: 1.11 OK
May 23 07:15:58 Mail[87524] <Debug>: <0x7fe16f021cd0:[Non-authenticated]> Wrote: 3.11 LOGOUT
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Disconnected]> Read: * OK [CAPABILITY (
IMAP4REV1,
"LITERAL+",
"SASL-IR",
"LOGIN-REFERRALS",
ID,
ENABLE,
IDLE,
"AUTH=PLAIN",
"AUTH=LOGIN"
)]
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Wrote: 1.23 ID ("name" "Mac OS X Mail" "version" "9.3 (3124)" "os" "Mac OS X" "os-version" "10.11.5 (15F34)" "vendor" "Apple Inc.")
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Read: * ID {
name = Dovecot;
}
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Read: 1.23 OK
May 23 07:16:00 Mail[87524] <Debug>: <0x7fe16aa14590:[Non-authenticated]> Wrote: 3.23 LOGOUT
and this from dovecot (mail.log):
May 23 05:07:22 f8ab3e20742f dovecot: master: Dovecot v2.2.9 starting up (core dumps disabled)
May 23 05:07:22 f8ab3e20742f dovecot: ssl-params: Generating SSL parameters
May 23 05:07:29 f8ab3e20742f dovecot: ssl-params: SSL parameters regeneration completed
May 23 05:07:52 f8ab3e20742f dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=10.211.55.2, lip=172.17.0.2, session=<IJwtbHszbgAK0zcC>
May 23 05:07:54 f8ab3e20742f dovecot: imap-login: Aborted login (no auth attempts in 0 secs): user=<>, rip=10.211.55.2, lip=172.17.0.2, session=<qsRNbHszdgAK0zcC>
The output of doveconf -n is (so "disable_plaintext_auth = no" is active):
# 2.2.9: /etc/dovecot/dovecot.conf
# OS: Linux 4.4.8-boot2docker x86_64 Ubuntu 14.04.4 LTS aufs
auth_mechanisms = plain login
default_client_limit = 50
default_process_limit = 10
disable_plaintext_auth = no
listen = *,::
mail_location = maildir:/var/mail/%n
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix =
}
passdb {
  args = password=dovecot
  driver = static
}
protocols = imap
ssl = no
Any suggestions why this login isn't working?
Thanks!
The solution is to fix and configure the following line correctly (from local.conf):
# mail_uid = CHANGE_THIS_to_your_short_user_name_or_uid
How did I find out? Thanks to @Kondybas for the pointer to try another client. I used Thunderbird and it produced Dovecot log entries (why didn't Apple Mail produce these lines? No clue) saying that it couldn't switch to the mail_uid user context. I extended the Dovecot Dockerfile and switched the user appropriately. Afterwards it worked with Thunderbird and then with Apple Mail.
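For reference, the change boiled down to giving Dovecot a concrete user and group for mailbox access, along these lines in local.conf (illustrative values only; use the account that owns /var/mail/%n inside the container):
# illustrative values, not the exact ones from my setup
mail_uid = vmail
mail_gid = vmail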

Healthcheck for endpoints - quick and dirty version

I have a few REST endpoints and a few [asmx/svc] endpoints.
Some of them are GET and the others are POST operations.
I am trying to put together a quick and dirty, repeatable healthcheck sequence to find out whether all the endpoints are responsive or whether any are down.
Essentially, either get a 200 or 201, and report an error otherwise.
What is the easiest way to do this?
SoapUI internally uses Apache HttpClient 4.1.1; you can use it inside a Groovy script test step to perform your checks.
Add a Groovy script test step to your test case and use the code below inside it. It basically tries a GET against a list of URLs: if the request returns HTTP status 200 or 201 the endpoint is considered to be working; if 405 (Method Not Allowed) is returned it retries with POST and performs the same status-code check; otherwise the endpoint is considered down.
Note that some services can be running and yet return, for example, 400 (Bad Request) if the request is not correct, so consider whether you need to rethink the way you perform the check or add other status codes that indicate the server is running correctly.
import org.apache.http.HttpEntity
import org.apache.http.HttpResponse
import org.apache.http.client.methods.HttpGet
import org.apache.http.client.methods.HttpPost
import org.apache.http.impl.client.DefaultHttpClient

// urls to check
def urls = ['http://google.es','http://stackoverflow.com']

// apache http-client to use in the closure
DefaultHttpClient httpclient = new DefaultHttpClient()

// util function to get the response status code
def getStatus = { httpMethod ->
    HttpResponse response = httpclient.execute(httpMethod)
    // consume the entity to avoid errors with http-client
    if (response.getEntity() != null) {
        response.getEntity().consumeContent()
    }
    return response.getStatusLine().getStatusCode()
}

HttpGet httpget
HttpPost httppost

// findAll urls that are working
def urlsWorking = urls.findAll { url ->
    log.info "try GET for $url"
    httpget = new HttpGet(url)
    def status = getStatus(httpget)
    // if status is 200 or 201 it's correct
    if (status in [200,201]) {
        log.info "$url is OK"
        return true
    // if GET is not allowed try with POST
    } else if (status == 405) {
        log.info "try POST for $url"
        httppost = new HttpPost(url)
        status = getStatus(httppost)    // execute the POST request (was httpget)
        // if status is 200 or 201 it's correct
        if (status in [200,201]) {
            log.info "$url is OK"
            return true
        }
        log.info "$url is NOT working status code: $status"
        return false
    } else {
        log.info "$url is NOT working status code: $status"
        return false
    }
}

// close connection to release resources
httpclient.getConnectionManager().shutdown()

log.info "URLS WORKING:" + urlsWorking
This script logs:
Tue Nov 03 22:37:59 CET 2015:INFO:try GET for http://google.es
Tue Nov 03 22:38:00 CET 2015:INFO:http://google.es is OK
Tue Nov 03 22:38:00 CET 2015:INFO:try GET for http://stackoverflow.com
Tue Nov 03 22:38:03 CET 2015:INFO:http://stackoverflow.com is OK
Tue Nov 03 22:38:03 CET 2015:INFO:URLS WORKING:[http://google.es, http://stackoverflow.com]
Hope it helps,

500 internal server error on certain page after a few hours

I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini and it works again for a few hours.
The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uWSGI configs.
That is, I have ers_portal_nginx.conf located in my app folder and symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen session as mentioned above, with the .ini file located in my app folder.
My ers_portal_nginx.conf:
server {
    listen 80;
    server_name www.mydomain.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
    }
}
My ers_portal_uwsgi.ini:
[uwsgi]
#user info
uid = metheuser
gid = ers_group
#application's base folder
base = /home/metheuser/webapps/ers_portal
#python module to import
app = run_web
module = %(app)
home = %(base)/ers_portal_venv
pythonpath = %(base)
#socket file's location
socket = /home/metheuser/webapps/ers_portal/%n.sock
#permissions for the socket file
chmod-socket = 666
#uwsgi variable only, does not relate to your flask application
callable = app
#location of log files
logto = /home/metheuser/webapps/ers_portal/logs/%n.log
Relevant parts of my views.py:
data_modification_time = None
data = None

def reload_data():
    global data_modification_time, data, sites, column_names
    filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
    mtime = os.stat(filename).st_mtime
    if data_modification_time != mtime:
        data_modification_time = mtime
        with open(filename) as f:
            data = pickle.load(f)
    return data

# a bunch of authentication stuff...

@app.route('/')
@app.route('/index')
def index():
    return render_template("index.html",
                           title = 'Main',)

@app.route('/login', methods = ['GET', 'POST'])
def login():
    login stuff...

@app.route('/my_table')
@login_required
def my_table():
    print 'trying to access data table...'
    data = reload_data()
    return render_template("my_table.html",
                           title = "Rundata Viewer",
                           sts = sites,
                           cn = column_names,
                           data = data)  # dictionary of data
I installed nginx via yum as described here (yesterday)
I am using uWSGI installed in my venv via pip
I am on CentOS 6
My uwsgi log shows:
Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
IOError: write error
[pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)
When it's working, the print statement in the my_table view prints into the log file. But not once it stops working.
Any ideas?