I'm trying to use sipcmd to call a phone number and play a WAV file.
I use the command like this (playing DTMF just for testing):
./sipcmd -P sip -u 0033972nnnnnn -c passwd -w sip3.ovh.fr -x "w5000;c0033661nnnnnn;d123;ws500;h"
After a few seconds, my phone rings. I answer, but sipcmd keeps counting up its timeout and finally hangs up with the error Call: Dial timed out.
The same thing happens if I terminate the call before the timeout expires.
Stepping through the code in a debugger, WaitForStateChange() stays stuck in the CONNECTING state no matter what happens on the remote end.
Any idea what the problem is?
The timeout indicates that the server sip3.ovh.fr didn't reply. The best option would be to verify that the call can be made with a softphone first, to rule out any connectivity issues.
I have a process which spawns a child process on port 2600. It connects to the child with handle 6, extracts some data, kills the child, and then starts another child process on the same port. I connect to it, and when I try to run my commands on the handle I get the following error:
'Cannot write to handle 6. OS reports: Bad File Descriptor
The process is running and I can connect to it manually and get my data. I have a table tracking the connection, and I can see that the handle the second time the child process is spawned is also 6. Any suggestions on why this is happening?
UPDATE: Just to make it clear: when I close the handle, I send an async message to the process with "exit 1". Could it be that the process is not killed before I open a connection to it, gets killed after that, and what I think is the new process is just a garbage handle to the old, now-defunct process?
UPDATE: I do flush the handle after I send "exit 1", and the new process seems to start OK. I can see the new process running on the host (they have different names).
UPDATE: Even though the connection is successful, the handle is not added to .z.W, so it never appears there.
UPDATE: Found the issue. The .IPC.SCON function I was using, which I thought was just a simple wrapper around an error trap ... has logic to check a table of cached handles for the same hst:prt and take that one instead of opening another. Because my process was running on the same host:port, it was using the old cached handle instead of opening a new one. Thank you all for your help.
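For anyone who hits the same thing, here is a rough sketch of the kind of handle-caching wrapper involved (the names .ipc.cache, .ipc.scon, and .ipc.sdrop and all details are illustrative, not our actual .IPC.SCON code):

/ cached handles keyed by `:host:port
.ipc.cache:([hp:`symbol$()] h:`int$())
/ connect via the cache: reuses a cached handle, which goes stale if the remote restarts
.ipc.scon:{[addr]
  if[not null h:.ipc.cache[addr;`h]; :h];
  h:@[hopen;(addr;2000);0Ni];                  / error-trapped hopen, 2s timeout
  if[not null h; `.ipc.cache upsert (addr;h)];
  h}
/ the fix: evict the cache entry whenever the child is killed or restarted
.ipc.sdrop:{[addr] @[hclose;.ipc.cache[addr;`h];::]; delete from `.ipc.cache where hp=addr;}

The "connection" looked successful because the wrapper returned a handle, but it was the stale one, which is also why nothing new showed up in .z.W.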
What version of kdb are you using? Can you run the below without issues?
KDB+ 4.0 2020.08.28 Copyright (C) 1993-2020 Kx Systems
q)system"q -p 5000";system"sleep 1";h:hopen 5000;0N!h".z.i";neg[h](exit;0)
4475i
q)system"q -p 5000";system"sleep 1";h:hopen 5000;0N!h".z.i";neg[h](exit;0)
4481i
q)system"q -p 5000";system"sleep 1";h:hopen 5000;0N!h".z.i";neg[h](exit;0)
4487i
Update: Without seeing your code or the order of execution, it's hard to pinpoint what the issue is. If the process wasn't terminated before you attempted to start the new process, you would see an Address already in use error. If the process isn't up when you attempt to connect, you would see a 'hop error, which means hopen failed.
Also, async messages are not sent immediately, so depending on the execution order of your code you may need to flush the async 'exit' message. But as I mentioned, if the original child process were still up when you attempted to start another, you would get an address clash.
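For reference, you can force queued async messages out by applying the async handle to an empty argument list, which blocks until everything pending on the handle has been written (the port here is illustrative):

q)h:hopen 5000
q)neg[h](exit;1)  / queued, not necessarily sent yet
q)neg[h][]        / flush: blocks until pending async messages on h are written

A sync "chaser" such as h"" also guarantees that earlier async messages on the handle have been processed.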
Update 2: As per my comment above:
q)system"q -p 5000"; system"sleep 1"; h:#[hopen;(`::5000;2000);0Ni]; 0N!h".z.i"; neg[h](exit;1)
4365i
q)system"q busy.q -p 5000"; system"sleep 1"; #[hopen;(`::5000;2000);0Ni]; h".z.i" // hopen times out + handle variable not amended
'Cannot write to handle 4. OS reports: Bad file descriptor
[0] system"q busy.q -p 5000"; system"sleep 1"; #[hopen;(`::5000;2000);0Ni]; h".z.i" // hopen times out + handle variable not amended
^
q).z.W
q)
I think what you have written in your UPDATE is correct. The process is trying to connect to that port before your new process is running on it.
I think the best option would be for the newly started process to initiate the connection back to the parent instead; that way you know it is running and you don't have to introduce any sleeps.
Another option is to try to reconnect on a timer; once a connection succeeds, remove the retry from the timer and continue with what you are trying to do.
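A minimal sketch of the timer approach (the port, timeout, and handle name are illustrative):

h:0Ni
.z.ts:{
  h::@[hopen;(`::5000;2000);0Ni];   / error-trapped hopen, 0Ni on failure
  if[not null h;
    system"t 0";                    / connected: stop the timer
    -1"connected on handle ",string h]}
system"t 500"                       / retry every 500ms until hopen succeeds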
I am using Ratchet socket. I have established a new server connection and I want to stop the server from running. In this scenario I have the IP (hostname) and port, so how can I stop it?
Is it possible to make a server connection that never ends?
When I make a server connection, on the first day the data output from the DB is perfect, but on the second day the error "Connection is closed by foreign host" is generated. Yet I can still connect to that port.
Code
<?php
use Ratchet\Server\IoServer;
use MyApp\Chat;
use React\EventLoop\Factory;
use React\ZMQ\Context;
require dirname(__DIR__) . '/vendor/autoload.php';
require dirname(__DIR__) . '/src/MyApp/Chat.php';
$server = IoServer::factory(
    new Chat(),
    6666
);
$server->run();
?>
1.
I am using Ratchet socket. I have established a new server connection
and I want to stop the server from running. In this scenario I have the
IP (hostname) and port, so how can I stop it?
I assume you currently run your Ratchet server as a PHP script in a terminal window or a screen session,
e.g.: php push-server.php
Once you stop the script (Ctrl+C, or closing the terminal), the server stops.
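If all you have is the port, you can also find and kill the listening process from another shell on the server (assuming a Linux host; 6666 matches your script):

fuser -k 6666/tcp

or, equivalently:

kill $(lsof -t -i :6666)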
2.
Is it possible to make a server connection that never ends?
Yes. If your PHP script stops running in the terminal, you have to restart it manually, so it's better to use a program like Supervisor (a process control system), which is what Ratchet recommends.
Check this link for more info: http://supervisord.org/installing.html
The supervisord service will monitor your PHP script and automatically restart it if it crashes, which is what you want in a production environment.
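A minimal supervisord program entry might look like this (the program name and script path are illustrative):

[program:ratchet]
command=php /path/to/push-server.php
autostart=true
autorestart=true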
3.
When I make a server connection, on the first day the data output from
the DB is perfect, but on the second day the error "Connection is closed
by foreign host" is generated. Yet I can still connect to that port.
This is quite common, and I've noticed it too. It usually happens when the server is heavily loaded or the connection times out. Your JavaScript should watch for the close and initiate a new connection when it happens; you can also have it retry after a random delay.
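A minimal client-side sketch (the URL is illustrative):

function connect() {
    var ws = new WebSocket('ws://example.com:6666');
    ws.onmessage = function (e) { /* handle data from the server */ };
    ws.onclose = function () {
        // reconnect after a random 1-5 second delay
        setTimeout(connect, 1000 + Math.random() * 4000);
    };
}
connect();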
Edits
Also, the __construct method of Ratchet\Server\IoServer takes 3 parameters, the third of which is optional. The first and second need to be objects implementing MessageComponentInterface and ServerInterface.
public function __construct(MessageComponentInterface $app, ServerInterface $socket, LoopInterface $loop = null) {
Since you call IoServer::factory(), which builds those objects for you from a port and address, your instantiation is actually fine; it is only direct construction via new IoServer(...) that needs the component and socket objects.
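For reference, a minimal server using the factory form (only the paths and port are taken from your question; the rest follows the Ratchet docs):

<?php
use Ratchet\Server\IoServer;
use MyApp\Chat;

require dirname(__DIR__) . '/vendor/autoload.php';
require dirname(__DIR__) . '/src/MyApp/Chat.php';

// factory() builds the event loop and the listening socket for you
$server = IoServer::factory(new Chat(), 6666, '0.0.0.0');
$server->run();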
I'm using RawCap to capture packets sent from my dev machine (one app) to itself (another app). I can get it to write the captures to a file like so:
RawCap.exe 192.168.125.50 piratedPackets.pcap
...and then open the *.pcap file in Wireshark to examine the details. This worked when I called my REST method from Postman, but when I attempt the same using Fiddler's Composer tab, the *.pcap file ends up being empty. I think this may be because my way of shutting down RawCap was rather raw itself: I simply closed the command prompt. Typing "exit" does nothing while it's busy capturing.
How can I make like a modern-day Mansel Alcantra if the captured packets sink to the bottom of the ocean before I can plunder the booty? How can I gracefully shut RawCap down so that it (hopefully) saves its contents to the log (*.pcap) file?
RawCap is gracefully closed by hitting Ctrl + C. Doing so will flush all packets from memory to disk.
You can also tell RawCap to only capture a certain number of packets (using the -c argument) or to stop sniffing after a certain number of seconds (using the -s argument).
Here's one example using -s to sniff for 60 seconds:
RawCap.exe -s 60 192.168.125.50 piratedPackets.pcap
Finally, if none of the above methods works for you, you might want to use the -f switch. With -f, all captured packets are flushed to disk immediately. However, this has a performance impact, so you run a greater risk of missing/dropping packets when sniffing with the -f switch.
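For example, combining -f with the capture from your question:

RawCap.exe -f 192.168.125.50 piratedPackets.pcap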
You can run RawCap.exe --help to show the available command line arguments. They are also documented here:
http://www.netresec.com/?page=RawCap
I am running a local instance of HTTP::Daemon using a modified version of the looping structure outlined in the documentation. I have made it possible to exit the loop at the user's request, but a subsequent execution of my Perl script gives me the error:
HTTP::Daemon: Address already in use ...propagated at /path/to/script line NNN, line 3.
What more must I do to be a good citizen and clean up after my Daemon?
Most likely nothing. The address is in use by leftover connections from the previous instance. As soon as they are all shut down, the address will be automatically released.
If you want to speed up this process, you can set the SO_REUSEADDR socket option before binding. See the Perl socket documentation for more details: "if a server dies without outstanding connections the port won't be immediately reusable unless you use the option SO_REUSEADDR using setsockopt() function."
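Since HTTP::Daemon subclasses IO::Socket, its constructor accepts the same options, so this should be as simple as the following sketch (the port is illustrative):

use HTTP::Daemon;

# ReuseAddr => 1 sets SO_REUSEADDR on the socket before it is bound
my $d = HTTP::Daemon->new(
    LocalPort => 8080,
    ReuseAddr => 1,
) or die "Cannot bind: $!";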
I have a UNIX daemon which waits for SIGHUP to refresh its data. I am trying to send the signal from a Perl script (running under Apache as www-data:www-data on the same server) via Proc::Killall ("killall('HUP', 'mydaemon');"), but I don't have the proper permissions. The suid bit doesn't work either. 'kill -n HUP ' from a shell works.
Do you have any idea how to do this?
The usual workaround is to employ a »touch file« to indicate a reload; see Apache2::Reload for a real-life example.
Listen for notifications set up with e.g. File::ChangeNotify or AnyEvent::Inotify::Simple, then do your reloading.
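A minimal sketch of the daemon side using File::ChangeNotify (the watched path and the refresh_data() hook are illustrative; the web script only needs write access to create or touch the file):

use strict;
use warnings;
use File::ChangeNotify;

sub refresh_data { print "refreshing\n" }    # stand-in for what the SIGHUP handler used to do

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['/var/run/mydaemon'],
    filter      => qr/^reload\.touch$/,
);

# blocks until the touch file is created or modified, then reloads
while ( my @events = $watcher->wait_for_events ) {
    refresh_data();
}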