fulljid is empty after connection to BOSH service with XMPPHP - Strophe

I am trying to pre-bind an XMPP session via XMPPHP and pass the rid/sid/jid to a Strophe client to attach to the session.
Connection code:
$conn = new CIRCUIT_BOSH('server.com', 7070, $username, $pass, $resource, 'server.com', $printlog=true, $loglevel=XMPPHP_Log::LEVEL_VERBOSE);
$conn->autoSubscribe();
try {
    $conn->connect('http://xmpp.server.com/http-bind', 1, true);
    $log->lwrite('Connected!');
} catch (XMPPHP_Exception $e) {
    die($e->getMessage());
}
I am getting the rid and sid, but the fulljid in the $conn object stays empty, and I can't see a session started on my Openfire admin console.
If I create the jid manually by using the given resource and passing jid/rid/sid to Strophe to use in attach, I get the ATTACHED status and I see calls from the client to the BOSH IP, but I still don't see a session and I can't use the connection.
Strophe Client Code:
Called on document ready:
var sid = $.cookie('sid');
var rid = $.cookie('rid');
var jid = $.cookie('jid');

$(document).trigger('attach', {
    sid: sid,
    rid: rid,
    jid: jid
});

$(document).bind('attach', function (ev, data) {
    var conn = new Strophe.Connection("http://xmpp.server.com/http-bind");
    conn.attach(data.jid, data.sid, data.rid, function (status) {
        if (status === Strophe.Status.CONNECTED) {
            $(document).trigger('connected');
        } else if (status === Strophe.Status.DISCONNECTED) {
            $(document).trigger('disconnected');
        } else if (status === Strophe.Status.ATTACHED) {
            $(document).trigger('attached');
        }
    });
    Object.connection = conn;
});
I think the problem starts on the XMPPHP side, which is not creating the session properly.
'attached' is triggered but never 'connected'; is the 'connected' status supposed to be sent at all?
What am I missing?

OK, solved. I saw that the XMPPHP library didn't create a session at all on the Openfire server, so I wrote a simple test for the XMPP class, which worked and created the session, and one for the XMPP_BOSH class, which didn't manage to create one. Then I found the issue report here: http://code.google.com/p/xmpphp/issues/detail?id=47. Comment no. 9 worked: it fixed the issue by copying the processUntil() function from XMLStream.php into BOSH.php, though I still can't figure out why that works. Then I found I also had an overlapping bug with some of the passwords set for users on the Openfire server. These passwords contained the characters ! # % ^, and for some reason XMPP_BOSH sends such a password corrupted or changed, so I got an Auth Failed exception. Changing the password fixed the issue, and I can now attach to the session XMPPHP created with the Strophe.js library.
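For reference, the PHP side of the hand-off ends up looking roughly like the sketch below. This is only a sketch: the stock XMPPHP_BOSH class keeps the request and session IDs protected, so the getRid()/getSid() getters used here are hypothetical additions you would have to make to your (already patched) BOSH.php; processUntil() and presence() are existing XMPPHP calls, and fulljid is the property the library fills in once the session has started.

// Prebind hand-off sketch (assumes BOSH.php is patched per issue 47 and
// exposes rid/sid via hypothetical getRid()/getSid() getters).
$conn = new XMPPHP_BOSH('server.com', 7070, $username, $pass, $resource, 'server.com');
$conn->connect('http://xmpp.server.com/http-bind', 1, true);
$conn->processUntil('session_start'); // without this, no session shows up in Openfire
$conn->presence();

// Hand rid/sid/jid to the browser so the Strophe client above can attach().
setcookie('rid', $conn->getRid());
setcookie('sid', $conn->getSid());
setcookie('jid', $conn->fulljid);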

Related

How can I use my existing CakePHP-based project users with an XMPP ejabberd chat application

I have a CakePHP 2.3 based project with a table named "user_master".
I am using the ejabberd chat application, and the ejabberd user table is named "user".
I am using the converse.js client.
Now I am facing a problem using my existing project users with XMPP/ejabberd to authenticate, send friend requests, and chat with friends.
I tried using external auth, but it allows me to log in even if I enter wrong credentials on the ejabberd server using the http://localhost:5280/admin link.
I am using Ubuntu and have added all the necessary settings. It works fine as a standalone application, but when I try to use it with my existing users, it stops working.
Ejabberd server: http://localhost:5280/admin
External authentication configuration in the "ejabberd.cfg" file:
{auth_method, external}.
{extauth_program, "/etc/ejabberd/auth.php"}.
External authentication file "auth.php":
<?php
require 'ejabberd_external_auth.php';

class Auth extends EjabberdExternalAuth {

    protected function authenticate($user, $server, $password) {
        $stmt = $this->db()->prepare("SELECT username FROM users WHERE username = ? AND password = ?");
        $stmt->execute(array($user, $password));
        if ($stmt->rowCount() >= 0) {
            return true;
        } else {
            return false;
        }
    }

    protected function exists($user, $server) {
        $stmt = $this->db()->prepare("SELECT username FROM users WHERE username = ?");
        $stmt->execute(array($user));
        if ($stmt->rowCount() >= 0) {
            return true;
        } else {
            return false;
        }
    }
}

$pdo = new PDO('mysql:dbname=ejabberd;host=localhost', 'root', 'root');
new Auth($pdo, 'auth.log');
Thanks in advance
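One observation about the auth.php above (untested against your setup, so treat it as a hint): PDOStatement::rowCount() always returns a number greater than or equal to zero, so the check rowCount() >= 0 is true for every query, which means authenticate() accepts any password and would explain why wrong credentials still log in. Also, rowCount() is not guaranteed to be meaningful for SELECT statements on every driver, so fetching a row is safer. A minimal corrected check might look like this:

// Sketch: succeed only if a row actually matched the username/password pair.
protected function authenticate($user, $server, $password) {
    $stmt = $this->db()->prepare("SELECT username FROM users WHERE username = ? AND password = ?");
    $stmt->execute(array($user, $password));
    return $stmt->fetchColumn() !== false;
}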

Google Cloud SQL or node-mysql takes a long time to answer

We have a project using Polymer as the front end and Node.js as the API consumed by Polymer, and our Node API takes a really long time to reply, especially if you leave the page alone for about 10 minutes. Upon further investigation, by inserting a date calculation around the MySQL query, I found out that MySQL itself responds after a really long time. The query code looks like this:
var query = dataStruct['formed_query'];
console.log(query);
var now = Date.now();
console.log("Getting Data for Foobar Query============ " + Date());
console.log(query);
GLOBAL.db_foobar.getConnection(function (err1, connection) {
    ////console.log("requesting MySQL connection");
    if (err1 == null) {
        connection.query(query, function (err, rows, fields) {
            console.log("response from MySQL Foobar Query============= " + Date());
            console.log("MySQL response Foobar Query=========> " + (Date.now() - now) + " ms");
            if (err == null) {
                //respond.respondJSON is just a res.json(msg); but I've added a similar calculation for response time starting from express router.route until res.json occurs
                respond.respondJSON(dataJSON['resVal'], res, req);
            } else {
                var msg = {
                    "status": "Error",
                    "desc": "[Foobar Query]Error Getting Connection",
                    "err": err1,
                    "db_name": "common",
                    "query": query
                };
                respond.respondError(msg, res, req);
            }
            connection.release();
        });
    } else {
        var msg = {
            "status": "Error",
            "desc": "[Foobar Query]Error Getting Connection",
            "err": err1,
            "db_name": "common",
            "query": query
        };
        respond.respondJSON(msg, res, req);
        respond.emailError(msg);
        try {
            connection.release();
        } catch (err_release) {
            respond.LogInConsole(err_release);
            respond.LogInConsole(err_release.stack);
        }
    }
});
When Chrome Developer Tools reports a long pending time for the API, this is what shows up in my log:
SELECT * FROM `foobar_table` LIMIT 0,20;
MySQL response Foobar Query=========> 10006 ms
I'm dumbfounded as to why this is happening.
We have our system hosted on Google Cloud. Our MySQL is a Google Cloud SQL instance with an activation policy of ALWAYS. We've also set up our Node server, which runs on Google Compute Engine, to keep TCP4 connections alive via:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
I'm using a mysql pool from node-mysql:
db_init.database = 'foobar_dbname';
db_init = ssl_set(db_init);
//GLOBAL.db_foobar = mysql.createConnection(db_init);
GLOBAL.db_foobar = mysql.createPool(db_init);
GLOBAL.db_foobar.on('connection', function (connection) {
    setTimeout(tryForceRelease, mysqlForceTimeOut, connection);
});
db_init looks like this:
db_init = {
    host : 'ip_address_of_GCS_SQL',
    user : 'user_name_of_GCS_SQL',
    password : '',
    database : '',
    supportBigNumbers : true,
    connectionLimit : 100
};
I'm also forcing connections to be released if they haven't been released within 2 minutes, just to make sure:
function tryForceRelease(connection) {
    try {
        //console.log("force releasing connection");
        connection.release();
    } catch (err) {
        //do nothing
        //console.log("connection already released");
    }
}
This is really racking my brain. If anyone can help, please do.
I'll post the same answer here as I posted in "node-mysql pool experiences ETIMEDOUT". The questions are sufficiently different that I'm not sure it's worth duplicating them.
I suspect the reason is that keepalive is not enabled on the connection to the MySQL server.
node-mysql does not have an option to enable keepalive and neither does node-mysql2, but node-mysql2 provides a way to supply a custom function for creating sockets, which we can use to enable keepalive:
var mysql = require('mysql2');
var net = require('net');

var pool = mysql.createPool({
    connectionLimit : 100,
    host : '123.123.123.123',
    user : 'foo',
    password : 'bar',
    database : 'baz',
    stream : function (opts) {
        var socket = net.connect(opts.config.port, opts.config.host);
        socket.setKeepAlive(true);
        return socket;
    }
});

How can I check the connection of Mongoid?

Does Mongoid have any method like ActiveRecord::Base.connected??
I want to check whether the connection is accessible.
We wanted to implement a health check for our running Mongoid client that tells us whether the established connection is still alive. This is what we came up with:
Mongoid.default_client.database_names.present?
Basically it takes your current client and tries to query the databases on its connected server. If this server is down, you will run into a timeout, which you can catch.
My solution:
def check_mongoid_connection
  mongoid_config = File.read("#{Rails.root}/config/mongoid.yml")
  config = YAML.load(mongoid_config)[Rails.env].symbolize_keys
  host, db_name, user_name, password = config[:host], config[:database], config[:username], config[:password]
  port = config[:port] || Mongo::Connection::DEFAULT_PORT
  db_connection = Mongo::Connection.new(host, port).db(db_name)
  db_connection.authenticate(user_name, password) unless (user_name.nil? || password.nil?)
  db_connection.collection_names
  return { status: :ok }
rescue Exception => e
  return { status: :error, data: { message: e.to_s } }
end
snrlx's answer is great.
I use the following in my Puma config file, FYI:
before_fork do
  begin
    # load configuration
    Mongoid.load!(File.expand_path('../../mongoid.yml', __dir__), :development)
    fail('Default client db check failed, is db connective?') unless Mongoid.default_client.database_names.present?
  rescue => exception
    # raise runtime error
    fail("connect to database failed: #{exception.message}")
  end
end
One thing to keep in mind is that the default server_selection_timeout is 30 seconds, which is too long for a DB status check, at least in development; you can modify this in your mongoid.yml.

SMTP settings work from localhost but not on the server with PHPMailer

This is the issue I am facing:
Localhost
Test mail with the SMTP settings works.
New user creation mail using the above SMTP settings works.
Server
Test mail with the SMTP settings works.
New user creation mail using the above SMTP settings doesn't work.
I echoed the mail SMTP settings in both cases and they are exactly the same. The error I am getting is:
SMTP -> ERROR: Failed to connect to server: php_network_getaddresses:
getaddrinfo failed: Name or service not known (0)SMTP Connect()
failed.
Any suggestions would be helpful.
I debugged it further. The behavior turns out to be weird:
if (isset($_POST['User']))
{
    if (UserUtil::validateAndSaveUserData($model, $_POST))
    {
        $mailer = new UiMailer();
        $mailer->setFrom('fromAddress', 'fromName');
        $mailer->setTo('toaddress');
        $mailer->setSubject('Test subject');
        $mailer->setBody('Test Body');
        $mailer->Mailer = 'smtp';
        $mailer->Username = 'username';
        $mailer->Password = 'password';
        $mailer->Host = 'host';
        $mailer->Port = 25;
        $mailer->SMTPAuth = true;
        $status = $mailer->send() ? true : false;
        if ($status == true)
        {
            print "Success";
        }
        else
        {
            print $mailer->ErrorInfo . "<br/>";
            print "Failure";
        }
        exit;
    }
}
If I comment out the call to UserUtil::validateAndSaveUserData($model, $_POST), it works fine. In that function I am validating and saving models using the Yii framework. I debugged the function further. I have the following relations in the system:
User has one person
User has one address
So in the above call, if I comment out the address part, i.e. $model->address->attributes, $model->address->validate(), or $model->address->save(), it works fine. The save functionality for the address itself works fine; there are no issues related to it.
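The "php_network_getaddresses: getaddrinfo failed" error means the SMTP host name could not be resolved by DNS at the moment PHPMailer tried to connect, so one thing worth checking is whether name resolution still works right after the save call. A rough sketch of such a check (gethostbyname() is a standard PHP function and SMTPDebug is a standard PHPMailer property, but whether UiMailer passes SMTPDebug through depends on your wrapper):

// Hypothetical check: does the SMTP host still resolve after the model save?
$smtpHost = 'host'; // the same value assigned to $mailer->Host
$resolved = gethostbyname($smtpHost); // returns the input unchanged on failure
if ($resolved === $smtpHost && !filter_var($smtpHost, FILTER_VALIDATE_IP)) {
    print "DNS lookup for SMTP host failed: " . $smtpHost;
}
$mailer->SMTPDebug = 2; // dump the SMTP conversation, if the wrapper exposes it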

Sending an email with a large body is failing

Sending an email from C# with a large body results in a failure to send the email:
Mailbox unavailable.
The email works fine with a smaller body. I am setting the IsBodyHtml property to true.
Thanks,
Zafar
Code:
using (MailMessage _mailMsg = new MailMessage())
{
    _mailMsg.From = new MailAddress(ConfigurationManager.AppSettings["mailFrom"].ToString());
    _mailMsg.Body = mail.Body;
    _mailMsg.Subject = mail.Subject;
    _mailMsg.IsBodyHtml = true;

    foreach (string strEmailIds in mailTo)
    {
        if (strEmailIds != null && strEmailIds != string.Empty && strEmailIds != "")
        {
            if (!_mailMsg.To.Contains(new MailAddress(strEmailIds)))
                _mailMsg.To.Add(new MailAddress(strEmailIds));
        }
    }

    //_mailMsg.CC.Add(ConfigurationManager.AppSettings["mailCC"].ToString());
    using (SmtpClient _client = new SmtpClient(ConfigurationManager.AppSettings["Host"].ToString()))
    {
        if (_mailMsg.To.Count > 0)
        {
            _client.Send(_mailMsg);
        }
        else
        {
            _mailMsg.Subject = "No emails associated with the portfolio: " + account + " Original Email:" + mail.Subject;
            _mailMsg.To.Add(new MailAddress(ConfigurationManager.AppSettings["mailSuppotTeam"].ToString()));
            _client.Send(_mailMsg);
        }
    }
}
OK, one thing it could be is that the mail server rejects big messages. Let's exclude that one... I assume you have a local SMTP mail server installed (check with telnet 127.0.0.1 25, which should give some sort of reply). Configure the mail server (ConfigurationManager.AppSettings["Host"]) as 127.0.0.1; can you send big mails now?
If ConfigurationManager.AppSettings["Host"] is already the local SMTP server then:
a) stop the SMTP service (Simple Mail Transfer Protocol) for a moment (via services.msc)
b) send a small email
c) go to c:\inetpub\mailroot\pickup and edit the message via Notepad so that it becomes a BIG email
d) start the SMTP service again (services.msc)
The issue was with sending email to a cross-domain email ID, which was resulting in the generic exception
"Mailbox unavailable." Maybe this is one of the reasons behind the above exception.