exosip always answers twice when receiving an INVITE event - sip

I'm new to SIP and am using eXosip to develop a SIP gateway.
I found that my gateway always sends the answer twice, but the log in my program prints it only once.
I don't know what happened or how to fix this problem.
UPDATE:
I updated my version to 5.2.0, but the problem still exists.
Also, I found that there are duplicated RTP connections.
The thing that bothers me most is that the audio packets are duplicated, which causes a high packet loss rate:

In version 5.1.2, there was a major rewrite of DNS, UDP, TCP and TLS management. The rewrite was required to maximize performance and to reach the same results with the select and epoll implementations. Those changes introduced issues that were resolved in incremental steps, and three versions were published in a row: eXosip 5.1.2, 5.1.3 and finally 5.2.0.
Your issue was fixed only in 5.2.0 with this ChangeLog line:
* fix duplicate packets for TCP and TLS when several outgoing NICT are happening at the same time [since 5.1.2]
The git fix is this one:
https://git.savannah.nongnu.org/cgit/exosip.git/commit/?id=1fdc54ed38eaf5155f5702240586c472f2cc73d4
You can read the full ChangeLog here for details.
There have been 3 commits since 5.2.0 in git which may be nice to have. Access git here.
Make sure you also use the latest osip 5.2.0, or access git here. There was only one additional commit, which is also interesting.
NOTE: In my own tests I have seen only retransmission of REQUESTs, but I would not be surprised if the bug also affected ANSWERs. If you already use the latest version, please write me a mail.

Related

Accessing the Internet after updates and telemetry were disabled

I just installed the VSCode Windows zip version with the Vim extension on Windows 7. For privacy, I disabled these options in the settings.json file:
{
"update.channel": "none",
"telemetry.enableTelemetry": false,
"telemetry.enableCrashReporter": false
}
But VSCode still connects to various Microsoft sites on the Internet at startup:
191.238.172.191 Microsoft Informatica Ltda (br)
40.114.241.141 Microsoft Corporation (MSFT)
40.77.226.250 Microsoft Corporation (MSFT)
How can I stop VSCode from connecting to sites in the background that I don't want it to connect to?
I ran the program with Wireshark on Linux and can confirm the background connections.
The first address is for marketplace.visualstudio.com, so it's probably used for extension update checks or similar. If you use extensions you might want to leave it as it is.
The last addresses are most likely related to telemetry. Visual Studio Code makes several DNS lookups even though telemetry and updates are "disabled". You can try adding some of those DNS names to your hosts file to prevent the connections, but keep in mind that doing so might have side effects on your system, as Windows uses them for other (telemetry and such) purposes. Here are a few of the DNS lookups I was able to trigger with some quick testing; there are likely many more:
0.0.0.0 dc.services.visualstudio.com
0.0.0.0 dc.trafficmanager.net
0.0.0.0 vortex.data.microsoft.com
0.0.0.0 weu-breeziest-in.cloudapp.net
I share your concerns. Having 20 years of experience with Microsoft, I'm very frustrated and scared of using any of their products. The license for Visual Studio Code does not fill me with confidence either. I'm not really surprised that "opting out" of telemetry does not actually disable it.
I do like Visual Studio Code more than Atom, and instead of downloading it from Microsoft, I cloned the original vscode repository (the MIT-licensed base product for Visual Studio Code) and installed that instead. It doesn't seem to connect to the Internet immediately when I start typing. Unfortunately, I haven't figured out how to easily update it and install extensions, so I might have to return to Atom eventually. I wish someone with more interest and time on their hands would fork a telemetry-free vscode with marketplace functionality and share the binaries for the rest of us.
Update:
I raised an issue (#16131) about this. Looking at the total number of open issues (~3000) on vscode, I don't expect it to be solved anytime soon. For the time being you should block at least vortex.data.microsoft.com and dc.services.visualstudio.com in your hosts file. Blocking those two won't affect the marketplace or any other necessary functionality, and it seems to stop most Internet traffic for Visual Studio Code 1.7.2 (which might change in future versions).
What also worries me is that even though the data sent to vortex is encrypted, Visual Studio Code actually sends details about your machine and OS as unencrypted plain text (via HTTP POST) to dc.services.visualstudio.com. (Note that I haven't yet filed an issue about that.)
Update 2:
According to the official reply I got to issue #16131, Visual Studio Code was sending Microsoft the information that the user had opted out of telemetry. A bit of an odd choice, sending telemetry about the user not wanting any telemetry, but they said that they will stop doing it in the future. I appreciate their honesty on this matter.
seanmcbreen:
We use our telemetry to help understand how to make the product better – in fact a good example of this right now is some work we are doing to improve performance. So, we appreciate it when users opt to send us telemetry.
That said there are times when people don’t want to do that and you bring up a good point – today we continue to send events stating that a user has opted out and nothing else i.e. no usage data is sent. Here is the test to ensure that is all we send...
https://github.com/Microsoft/vscode/blob/master/src/vs/platform/telemetry/common/telemetryService.ts#L103
But we don’t need to do that and I don’t think it’s what you expect as a user – so we will stop sending anything i.e. even the opt out event 😄 Look for a change there soon.
Thanks for bringing this to our attention and I hope you enjoy working with VS Code.
Also check the version of Git you are using with VSCode.
Starting with Git 2.19 (Q3 2018), telemetry can potentially be sent by IDEs/products using Git.
See commit 7545941 (13 Jul 2018) by Jeff Hostetler (jeffhostetler).
Helped-by: Eric Sunshine (sunshineco), René Scharfe (rscharfe), Wink Saville (winksaville), and Ramsay Jones.
(Merged by Junio C Hamano -- gitster -- in commit a14a9bf, 15 Aug 2018)
Junio C Hamano, official maintainer for Git, added in this thread:
Transport (or file) can stay outside the core of this "telemetry" thing---agreeing on what and when to trace, and how the trace is represented, and having an API and solid guideline, would allow us to annotate the code just once and get useful data in a consistent way.
Ævar Arnfjörð Bjarmason added here:
To elaborate a bit on Jeff's reply (since this was discussed in more detail at Git Merge this year), the point of this feature is not to ship git.git with some default telemetry, but so that in-house users of git like Microsoft, Dropbox, Booking.com etc. can build & configure their internal versions of git to turn on telemetry for their own users.
There's numerous in-house monkeypatches to git on various sites (at least Microsoft & Dropbox reported having internal patches already). Something like this facility would allow us to agree on some implementation that could be shipped by default (but would be off by default); those of us who'd make use of this feature already have "root" on those users' machines, and control what git binary they use, their /etc/gitconfig and so on.
So, in addition to Microsoft/vscode issue 16131, we will have to monitor how VSCode uses Git in the future, since Git will offer a telemetry framework for editors to use if they choose to.
With Git 2.25 (Q1 2020): Git had compatibility fallback macro definitions for "PRIuMAX", "PRIu32", etc., but not for "PRIdMAX", even though the code used the latter apparently without any hiccup reported recently.
The fallback definitions for these <inttypes.h> macros, which must be present on C99 systems, have now been removed.
See commit ebc3278 (24 Nov 2019) by Hariom Verma (harry-hov).
(Merged by Junio C Hamano -- gitster -- in commit e547e5a, 05 Dec 2019)
git-compat-util.h: drop the PRIuMAX and other fallback definitions
Signed-off-by: Hariom Verma
Helped-by: Jeff King
Git's code base already seems to be using PRIdMAX without any such fallback definition for quite a while (75459410edd (json_writer: new routines to create JSON data, 2018-07-13), to be precise, and the first Git version to include that commit was v2.19.0).
Having a fallback definition only for PRIuMAX is a bit inconsistent.
We do sometimes get portability reports more than a year after the problem was introduced.
This one should be fairly safe.
PRIuMAX is in C99 (for that matter, SCNuMAX, PRIu32 and others also are), and we've been picking up other C99-isms without complaint.
The PRIuMAX fallback definition was originally added in 3efb1f343a ("Check for PRIuMAX rather than NO_C99_FORMAT in fast-import.c.", 2007-02-20, Git v1.5.1-rc1 -- merge).
But it was replacing a construct that was introduced in an even earlier commit, 579d1fbfaf ("Add NO_C99_FORMAT to support older compilers.", 2006-07-30, Git v1.4.2-rc3), which talks about gcc 2.95.
That's pretty ancient at this point.
[jc: tweaked both message and code, taking what peff wrote]
Signed-off-by: Junio C Hamano
Visual Studio 2019 - Block IP Addresses:
ServiceHub.IdentityHost.exe -------------------
13.69.65.22
13.69.65.23
13.69.66.140
40.114.241.141
51.140.6.23
117.18.232.200
152.199.19.161
65.55.44.109
devenv.exe ------------------------------------
13.107.5.88
88.221.230.45
104.69.107.218
104.81.67.220
104.88.48.82
104.107.176.162
104.214.77.221
117.18.232.200
152.199.19.161
23.198.79.63
13.69.66.140
PerfWatson2.exe --------------------------------
2.17.84.229
2.19.38.59
20.44.86.43
52.158.208.111
65.55.44.109
104.81.67.220
104.107.176.162
117.18.232.200
152.199.19.161
104.74.143.169
104.88.48.82
51.143.111.7
104.69.107.218
ServiceHub.VSDetouredHost.exe ------------------
117.18.232.200
152.199.19.161
104.74.143.169
2.19.38.59
vsls-agent.exe ---------------------------------
40.114.242.48
ServiceHub.Host.CLR.x86.exe --------------------
88.221.230.45
117.18.232.200
152.199.19.161
ServiceHub.RoslynCodeAnalysisService32.exe -----
117.18.232.200
152.199.19.161
ServiceHub.DataWarehouseHost.exe ---------------
117.18.232.200
BackgroundDownload.exe --------------------------
2.20.222.12
2.20.222.14
2.19.237.173
2.21.120.100
2.22.211.152
13.107.5.88
23.35.175.220
23.38.229.99
23.40.218.49
23.43.200.93
23.62.197.99
23.67.120.162
23.194.18.196
23.195.139.83
23.198.79.63
23.206.38.71
23.213.218.202
23.214.174.91
23.217.250.58
65.55.44.109
88.221.230.45
96.6.244.11
104.69.107.218
104.74.143.169
104.83.182.123
104.89.78.57
104.98.168.64
104.102.189.193
104.107.176.162
104.111.238.86
104.119.235.204
104.126.110.182
104.214.77.221
104.126.245.85
117.18.232.200
152.199.19.161
172.227.168.22
VSIXAutoUpdate.exe ------------------------------
117.18.232.200
23.203.81.132
65.55.44.109
184.87.56.190
152.199.19.161
104.89.78.57
104.102.189.193
Microsoft.ServiceHub.Controller.exe -------------
13.69.65.23
13.69.65.22
152.199.19.161
13.69.66.140
ServiceHub.SettingsHost.exe ---------------------
152.199.19.161

Eclipse SVN connection timeout

I'm using Eclipse Kepler and connect to SVN via VPN. Sometimes the VPN connection drops, and when I try to commit without a connection I have to wait 10 minutes for the timeout.
The SVN console shows:
commit -m "...comment..." -N ...file_list...
org.apache.subversion.javahl.ClientException: Connection timed out
svn: Commit failed (details follow):
svn: Unable to connect to a repository at URL 'http://192.168.9.2:81/svn/...'
svn: Connection timed out
Why does it take 10 minutes to time out? How can I change that?
EDIT:
Maybe it is related to a network routing problem. With the VPN disconnected, ping 192.168.9.2 gets a timeout instead of "unreachable host".
If you're connecting to your SVN repository over the http protocol, try setting the http-timeout property in the [global] (or relevant server-group) section of the servers file in your Subversion home (%APPDATA%\Subversion\servers on Windows) to a low enough value in seconds, say 30, so that Subversion realizes sooner that the connection is dead and raises an error message.
By default, the bottom of the file should look something like this:
[global]
http-timeout = 30
History:
History: I also had issues with a work VPN every now and then, and Google took me to this old question. #lhasadad commented that he posted an answer, but I couldn't see one (I don't know whether this method did not work for him, or why the post was deleted, but I can confirm that it still works; that is, it makes the handling of connection issues better). Unfortunately that forum link is also giving a 404 at the moment, so I dived into the Wayback Machine and found a copy from April 2016. Credit goes to "stefan", a moderator of that forum. He also mentions that, if you have TortoiseSVN, this servers file can be opened via Settings > Network > [Edit] in the GUI.

play framework 2.4.2 client connection remains open

I am using an Activator project with Play 2.4.2. Just for testing, I deployed a bare project that only listens on port 80 or 9000 and returns Ok("abc").
But when I check the output of
$ sudo lsof -i | wc -l
the number increases gradually over time, and after some time, say 24-48 hours, the server crashes with a "too many open files" exception.
I also tested with Apache Benchmark; after the benchmark completed, some connections remained open and never closed.
Please, can someone help?
There seemed to be some debate around this issue when I was working with Play Framework some time back.
First, verify whether your client is asking for the connection to be kept alive. In that case Play honors the client and keeps the connection open. See this discussion. The takeaway from the discussion was that Play can handle a lot of requests, which is questionable if you think about DoS attacks.
There also seem to be options to kill the connection from the action via a header, but I have never tried those. See this. I am not able to pull up any documentation around this option at the moment.
Edit: this seems to be mentioned in the 2.2 highlights.

TFS Source Control returns HTTP Code 302 with remote user

I have a remote developer connected to my TFS via the Internet. When he attempts to do a GET from source control, he fails to get a number of files, with error messages like this:
D:\CaseTrakker\CaseTrakker_v6_0\CaseTrakker\CaseTrakker.ObjectModel\Framework\Factories\Value\LookupValueViewModelFactory.cs: Please contact your administrator. There was an error contacting the server.
Technical information (for administrator):
HTTP code 302: Moved Temporarily
This does not happen for all files, but for many, and repeated retries do not resolve it. I am at a complete loss.
Possibly germane: the way I have published my TFS is a rule in my firewall that routes requests targeting http://publicserver:8080/tfs to http://internalserver:8080/tfs. Since this error seems to involve redirection, that might be some or all of the issue.
Thanks in advance for any assistance.
David Mullin
IMA Technologies
It might be worth getting the external developer to upgrade to the latest Update 3 CTP of VS 2012, as it contained a fix to handle retries on downloads better.
However, you'll probably have more luck if you configure things so that your TFS server is accessible over the same fully qualified domain name both internally and externally (internally resolving to the internal IP, externally resolving to your external IP). Check out this Word document for more information (http://www.christiano.ch/common/documents/Exposing_Team_Foundation_Server_to_the_Internet.docx) or take a look at the Pro TFS 2012 book.

How to trace MSMQ?

I have an agent and a server in different domains. The server acts as an MSMQ server and the agent acts as an MSMQ client. I am using the mqsender utility, which is part of the MSMQ tools.
My problem is that a message is not delivered when using the HTTP:// format string (MSMQ is installed with HTTP support). Using the OS: format string works fine.
When using HTTP, the messages are immediately moved to the Dead Letter queue with the Class set to Unknown, so I do not know the reason for this behaviour.
So, this works:
mqsender.exe /c:10 /j:dead /f:Direct=OS:il-mark-w2k3\private$\test
And this does not:
mqsender.exe /c:10 /j:dead /f:Direct=http://il-mark-w2k3/msmq/private$/test
I checked that MSMQ virtual directory exists. How can I trace the MSMQ operation to try and understand what is going on?
Thanks.
EDIT
All the commands work as expected when run locally on the server.
Navigating to http://il-mark-w2k3/msmq/private$/test in a browser on the agent (and on the server) results in "501 - Header values specify a method that is not implemented". The same error is received when navigating to http://il-mark-w2k3/msmq. I suppose that is OK; after all, it is not "404 - Not Found", right?
EDIT2
I have succeeded in resolving the issue. IIS lacked Anonymous Authentication; this became obvious from its log, where a 401.2 HTTP error appeared. Everything worked well after it was enabled. The mystery remains why MSMQ displayed Class Unknown on the dead messages; on another machine, the same setup produces Error: 401, which makes much more sense.
The logging for MSMQ is internal so you won't easily be able to see exactly why the message didn't get delivered without raising a support case with Microsoft.
I have a few blog posts on solving various MSMQ/HTTP issues.
The one entitled "MSMQ messages using HTTP just won't get delivered" may help.
Also make sure you check the IIS logs for information.
Cheers
John Breakwell