Ejabberd server keeps logging me off and back on constantly - xmpp

I'm building an iOS app, but the problem exists on all clients: iChat, Messages, Psi, etc. Since it happens on every client, I'm going to assume it's a server issue.
Has anyone ever experienced something like this? If so, what did you do to fix it? I'm sure it's some silly config setting or something but I simply can't figure this out. This is the only thing that looks like it might be related in ejabberd.log:
=ERROR REPORT==== 2012-09-05 12:07:12 ===
Mnesia(ejabberd@localhost): ** WARNING ** Mnesia is overloaded: {dump_log, time_threshold}
Thanks in advance for any tips/pointers.

https://github.com/processone/ejabberd/blob/master/src/ejabberd_c2s.erl#L936 seems to have already been patched. The config variable is called resource_conflict and the value you want is setresource.

The above warning is (probably) not related to the issue you are facing. These Mnesia events usually happen when the transaction log needs to be dumped but the previous transaction log dump hasn't finished yet.
The problem you are facing needs to be debugged. Set {loglevel, 5} inside ejabberd.cfg to enable debug logging for ejabberd, then look through the logs for clues about why this is happening for you. Also, come back and paste your log file details here; we will probably be able to help you further. I have never faced such nonsensical issues with ejabberd.
Update after log file attachment:
As Joe wrote below, this is indeed happening because of a resource conflict. Two of your clients are trying to log in with the same resource value. But in an ideal world this shouldn't matter: Jabber servers SHOULD take care of this by appending or prepending a custom value to the resource value requested by the client.
For example, here is what the gtalk (and even Facebook Chat) servers will do:
SENT <iq xmlns="jabber:client" type="set" id="1"><bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"><resource>jaxl#resource</resource></bind></iq>
RCVD <iq id="1" type="result"><bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"><jid>jabberxmpplibrary@gmail.com/jaxl#resou27F46704</jid></bind></iq>
As you can see, my client requested to bind with resource value jaxl#resource, but the gtalk server actually bound my session with resource value jaxl#resou27F46704. In short, this is not a bug in your client but a bug in ejabberd.
To fix this you can do two things:
The resource value is probably hardcoded somewhere in your client configuration. Simply remove that. A good client will automatically take care of this by generating a random resource value on its end (see the sketch after these two options).
Patch ejabberd to behave the way the gtalk server does (as shown above). This is the relevant section inside the ejabberd_c2s.erl source which needs some tweaking. Also search for "Replaced by new connection" inside the c2s source file and you will understand what's going on.
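For the first option, generating a unique resource per running client instance is straightforward. Here is a minimal sketch in Java, purely illustrative (the asker's client is an iOS app, and the class and base name below are hypothetical):

import java.security.SecureRandom;

public class ResourceIds {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Append a random hex suffix so two running instances of the same
    // client never try to bind the same resource value.
    public static String randomResource(String base) {
        return base + "-" + Long.toHexString(RANDOM.nextLong());
    }
}

// e.g. randomResource("myapp") might return "myapp-9f3c41ab27d0e5c2"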

This sounds like the "dueling resources" bug in your client. You may have two copies of your client running simultaneously, using the same resource and relying on faulty auto-reconnect logic. When the second client logs in, the first client is booted offline with a conflict error. The first client logs back in, causing a conflict error on the second client. Loop.
Evidence for this is in your logfile, on line 3480:
D(<0.373.0>:ejabberd_c2s:1553) : Send XML on stream =
<<"<stream:error><conflict xmlns='urn:ietf:params:xml:ns:xmpp-streams'/>
<text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-streams'>
Replaced by new connection
</text>
</stream:error>">>

Related

How to explicitly ignore some requests when using Charles?

After starting Charles, my Java app cannot access Redis and gets the error below:
redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
I then tried to ignore the Redis connection to solve it, but the problem still exists.
So how can I explicitly ignore certain connections, e.g. the Redis connection, the Mongo connection, etc.?
I'm sorry, I don't really know what your real problem is. I guess the problem you have is that those HTTP requests are still appearing in the Structure view, right?
That being the case, I would strongly recommend using the Focus feature. To use it, you only need to add the domain you are working on under View -> Focused Hosts... (you can also do it by right-clicking on the request and then selecting "Focus").
By doing this, all the non-focused domains will get grouped under an "Other hosts" entry in the Structure panel so they won't disturb your work anymore.

An attempt was made to access a socket in a way forbidden by its access permissions in Azure Web Apps

I'm running a webapi on an Azure website that makes calls to external web services. The webapi handles approximately 2K-3K requests per minute.
Periodically, lots of socket errors start occurring that indicate: "An attempt was made to access a socket in a way forbidden by its access permissions". This error seems to occur regardless of the ip address of the external web service.
At first, I thought it might be ephemeral port exhaustion, but I've limited "connectionManagement" to a maximum of 100 connections.
What would be causing this?
Thanks very much. Happy to provide whatever information might be helpful.
Update 6/1 (doesn't work; see the 6/2 update):
I added the following to my web.config system.net section:
<defaultProxy enabled="false" useDefaultCredentials="false">
  <proxy/>
  <bypasslist/>
  <module/>
</defaultProxy>
It appears to have helped as I haven't seen this issue in the last 6 hours. I have no idea why this would actually help though as I'm not using any proxy-related stuff.
Any thoughts?
Update 6/2:
Adding the defaultProxy doesn't actually appear to help. The problem is still occurring. Back to the drawing board.
I've finally figured out the cause of this problem. The issue was occurring due to port exhaustion.
I was using an NLog email target which was grabbing and holding onto too many SMTP connections over time (despite the 100 max connection limit). After removing the email target, the issue no longer occurs. I haven't figured out why NLog was exhibiting this behavior.
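The general pattern behind this kind of port exhaustion is creating a fresh outbound connection per operation instead of reusing a shared, pooled one. A rough sketch of the "share one client" idea in Java, offered only as an analogy (the app above is ASP.NET with NLog, and the class and method names here are made up):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OutboundCalls {
    // One shared client: its connection pool reuses sockets across requests,
    // so bursts of calls do not each burn a fresh ephemeral port.
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static String fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
        // Anti-pattern: calling HttpClient.newHttpClient() here on every request
        // creates a new pool each time and leaves closed sockets in TIME_WAIT.
    }
}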

RPC not working in addCloseHandler

I'm facing a weird problem in GWT. I generate an Excel file on the server side for users to download, but after the download the file should get deleted.
I have put logic to delete it on the server side on two occasions: when the user logs out, and when the browser is closed.
When the user logs out it works perfectly, as there is enough time to make a call to the server, whereas in the case of addCloseHandler the connection is lost and the file remains as it is.
i.e. the method on the server side does not get executed.
I tried to find another way to call the method directly by importing the package and inheriting it in gwt.xml, but an error was thrown at compile time, and rightly so: the server side can't be inherited.
Please get me out of this.
Thanks in advance.
But after the download the file should get deleted. I have put logic to delete it on the server side on two occasions.
This does not have anything to do with the client. I don't know exactly how your program works, but generally it should work like this:
The client makes a request
The servlet generates the bytes (is there really a need to store the bytes in a file?)
The servlet sends them to the client
And that's it.
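In code, that flow can look roughly like the following servlet sketch (a minimal illustration, not the asker's actual code; buildReport is a hypothetical placeholder). The workbook bytes are generated in memory and streamed straight into the response, so there is never a temp file to delete later:

import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ExcelDownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] excelBytes = buildReport(req.getParameter("reportId")); // hypothetical generator
        resp.setContentType("application/vnd.ms-excel");
        resp.setHeader("Content-Disposition", "attachment; filename=\"report.xls\"");
        resp.setContentLength(excelBytes.length);
        try (OutputStream out = resp.getOutputStream()) {
            out.write(excelBytes); // streamed directly to the browser, nothing written to disk
        }
    }

    private byte[] buildReport(String reportId) {
        // Placeholder: build the workbook in memory (e.g. with Apache POI) and return its bytes.
        return new byte[0];
    }
}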

Perl application move causing my head to explode...please help

I'm attempting to move a web app we have (written in Perl) from an IIS6 server to an IIS7.5 server.
Everything seems to be parsing correctly, I'm just having some issues getting the app to actually work.
The app is basically a couple of forms. You fill the first one out, click Submit, and it presents you with another form based on which checkboxes you selected (using includes and such).
I can get past the first form once... but then after that it stops working and pops up the generated error message. After looking into the code and such, it basically states that there aren't any checkboxes selected.
I know the app writes data into .dat files... (at what point, I'm not sure yet), but I don't see those being created. I've looked at file/directory permissions and seemingly I have MORE permissions on the new server than I did on the last. The user/group for the files/dirs are different though...
Would that have anything to do with it? Why would it pass me on to the next form, displaying the correct "modules" I checked the first time and then not any other time after that? (it seems to reset itself after a while)
I know this is complicated so if you have any questions for me, please ask and I'll answer to the best of my ability :).
Btw, total idiot when it comes to Perl.
EDIT AGAIN
I've removed the source as to not reveal any security vulnerabilities... Thanks for pointing that out.
I'm not sure what else to do to show exactly what's going on with this though :(.
I'd recommend verifying, step by step, that what you think is happening is really happening. Start by watching the HTTP request from your browser to the web server - are the arguments your second perl script expects actually being passed to the server? If not, you'll need to fix the first script.
(start edit)
There's lots of tools to watch the network traffic.
Wireshark will read the traffic as it passes over the network (you can run it on the sending or receiving system, or any system on the collision domain).
You can use a proxy server, like WebScarab (free), Burp, Paros, etc. You'll have to configure your browser to send traffic to the proxy server, which will then forward the requests to the server. These particular proxies are intended to aid testing, in that you'll be able to mess with the requests as they go by (and much more).
As Sinan indicates, you can use browser add-ons like Firefox's LiveHttpHeaders or Tamper Data, or Internet Explorer's developer kit (IIRC).
(end edit)
Next, you should print out all CGI arguments that the second perl script receives. That way, you'll know what the script really thinks it gets.
Then, you can enable verbose logging in IIS, so that it logs the full HTTP request.
This will get you closer to the source of the problem - you'll know if it's (a) the first script not creating correct HTML, resulting in an incomplete HTTP request from the browser, (b) the IIS server not receiving the CGI arguments for some odd reason, or (c) the arguments aren't getting from the IIS server and into the perl script (or, possibly, that the perl script is not correctly accessing the arguments).
Good luck!
What you need to do is clear.
There is a lot of weird excess baggage in the script. There seem to be no subroutines, just one long series of commands with global variables.
It is time to start refactoring.
Get one thing running at a time.
I saw HTML::Template there but you still had raw HTML mixed in with code. Separate code from presentation.

Why is ASP.NET SmtpClient.Send() making my site unavailable?

I have an ASP MVC 2 application, through which I occasionally send emails using SmtpClient.Send(). Typically, emails are sent out in batches of between 1 and 50 emails, with hours or even days passing between batches. I have this all set up so that the emails are actually sending just fine. But, the problem is that when the emails are sent, my site suddenly becomes unavailable for about 15 minutes, and I have no idea why.
My site is hosted on a shared, Windows 2008 server with a third-party web host.
Here is the relevant section in my web.config file, edited for privacy:
<system.net>
  <mailSettings>
    <smtp deliveryMethod="Network" from="fromemail@doman.com">
      <network host="mail.DOMAIN.COM" userName="username" password="password"/>
    </smtp>
  </mailSettings>
</system.net>
Does anyone have any thoughts or ideas as to why this might be happening? I've been trying to research it and Google it for some time now, but I'm just not coming up with anything.
This really could be many different things, but...
The first thing I suggest you do is enable ASP.NET Health Monitoring on your site. This should hopefully help you gain visibility of the exception that is causing this issue (A guide to using Health Monitoring).
For obvious reasons be sure not to choose the Mail Provider to send you your exceptions - perhaps use the SQL provider or write a custom provider that writes to a file.
I would also ask your hosts to look into the Event Log for any information that may be of value.
Hope that helps.
E-mail can be an expensive operation. Have you considered using the asynchronous e-mail send so that the process does not block your main thread?
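The idea is simply to hand the send off to a background worker so the request thread returns immediately. A minimal sketch of that pattern in Java (only illustrative, since the question is about ASP.NET's SmtpClient; sendOne below is a placeholder for the real SMTP call):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MailQueue {
    // A single background worker: SMTP handshakes never block the request thread.
    private static final ExecutorService MAIL_WORKER = Executors.newSingleThreadExecutor();

    public static void queueBatch(List<String> recipients) {
        for (String to : recipients) {
            MAIL_WORKER.submit(() -> sendOne(to)); // submit returns immediately
        }
    }

    private static void sendOne(String to) {
        // Placeholder for the actual SMTP send (SmtpClient.Send() in the original app).
        System.out.println("sending mail to " + to);
    }
}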