"EADDRINUSE: address already in use" On every change even when port is changes - sockets

I have a basic server with socket.io, but every time I make a change on the server, it throws this error:
Error: listen EADDRINUSE: address already in use :::3000
Even if I change the port, it still throws the error. I used sudo kill -9, which let me run the server with no problems the first time, but if I make a change or restart it, it throws the error again.
SERVER CODE
var express = require('express');
var app = express();
var http = require('http').createServer(app);
var io = require('socket.io')(http);

app.get("/", function(request, result) {
  result.send("== SERVIDOR ==");
});

var port = 3000;

http.listen(port, function() {
  console.log("-- Servidor iniciado");
  console.log("-- Escuchando puerto " + port + "...");
});
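A likely cause is that the previous server process is still bound to the port when a file watcher (for example nodemon) restarts the app. A minimal sketch of a graceful shutdown, assuming nodemon is what triggers the restarts (it signals the process with SIGUSR2):

// Release the port before the process is replaced or stopped.
process.once('SIGUSR2', function() {
  // nodemon restarts by sending SIGUSR2
  http.close(function() {
    process.kill(process.pid, 'SIGUSR2'); // let nodemon finish the restart
  });
});

process.on('SIGINT', function() {
  // Ctrl+C
  http.close(function() {
    process.exit(0);
  });
});

If the port is already stuck, lsof -i :3000 shows which process still holds it, which is more targeted than reaching for sudo kill -9.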

Related

Prevent nginx from killing idle TCP sockets

I'm trying to use nginx as a reverse proxy for SSL/TCP sockets (so that I can write my server as raw TCP but have nginx handle the SSL certificates). My use case requires the TCP connections to remain alive but go idle (no packets back and forth) for extended periods of time (determined by the client, but as long as an hour). Unfortunately, nginx kills my socket connections after the first 10 minutes (timed to within a second) of inactivity, and I haven't been able to find, either online or in the docs, what actually controls this timeout.
I know that it has to be nginx doing it (not my raw server timing out, or my client's SSL socket), since I can connect directly to the server's raw TCP listener without timeout issues, but if I run nginx as a raw TCP reverse proxy (no SSL) it does time out.
Here's some code to reproduce the issue; note that I've commented out the SSL-relevant pieces in nginx because the timeout occurs either way.
/etc/nginx/modules-enabled/test.conf:
stream {
    upstream tcp-server {
        server localhost:33445;
    }

    server {
        listen 33446;
        # listen 33446 ssl;

        proxy_pass tcp-server;

        # Certs
        # ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        # ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }
}
server.js:
const net = require("net");
const s = net.createServer();
s.on("connection", (sock) => {
console.log('Got connection from', sock.remoteAddress, sock.remotePort );
sock.on("error", (err) => {
console.error(err)
clearInterval(i);
});
sock.on("close", () => {
console.log('lost connection from', sock.remoteAddress, sock.remotePort );
clearInterval(i);
});
});
s.listen(33445);
client.js:
const net = require('net');
const host = 'example.com';
let use_tls = false;
let client;
let start = Date.now();

// Use me to circumvent nginx, and no timeout occurs
// let port = 33445;
// Use me to go through nginx; the connection is killed after 10 mins of no RX/TX
let port = 33446;

client = new net.Socket();
client.connect({ port, host }, function() {
  console.log('Connected via TCP');
  // Include me, and nginx doesn't kill the socket
  // setInterval(() => { client.write("ping") }, 5000);
});

client.on('end', function() {
  console.log('Disconnected: ' + ((Date.now() - start) / 1000 / 60) + " mins");
});
I've tried various directives in the nginx stream block, but nothing seems to help. Thanks in advance!
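One thing worth checking, although it is not confirmed anywhere above: the stream module's proxy_timeout directive defaults to 10m and closes a proxied connection once no data has passed in either direction for that long, which lines up exactly with the observed cutoff. A sketch of the change inside the server block of test.conf:

proxy_timeout 1h;              # default is 10m; raise it to cover the longest expected idle period
# proxy_socket_keepalive on;   # optional, nginx >= 1.15.6: have nginx send TCP keepalives upstream

With proxy_timeout raised, or with some keepalive traffic as in the commented-out ping in client.js, the idle connections should survive past the 10-minute mark.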

MqttBrowserClient fails to connect due to missing CONNACK packet

I am trying to make a web app in Flutter that connects to the HiveMQ broker. I took the broker name from the official website and set the port number to 8000, just as mentioned there, and I still get the error message below:
error is mqtt-client::NoConnectionException: The maximum allowed connection attempts ({1}) were exceeded. The broker is not responding to the connection request message (Missing Connection Acknowledgement?
I really have no clue how to proceed. Can someone please help?
Below is my code:
MqttBrowserClient mq = MqttBrowserClient(
    'wss://broker.mqttdashboard.com:8000', '',
    maxConnectionAttempts: 1);

/*
MqttBrowserClient mq = MqttBrowserClient('ws://test.mosquitto.org', 'client-1',
    maxConnectionAttempts: 1);
*/

class mqttService {
  Future<MqttBrowserClient?> connectToServer() async {
    try {
      final connMess = MqttConnectMessage()
          .withClientIdentifier('clientz5tWzoydVL')
          .authenticateAs('a14guguliye', 'z5tWzoydVL')
          .withWillTopic('willtopic')
          .withWillMessage('My Will message')
          .startClean() // Non persistent session for testing
          .withWillQos(MqttQos.atLeastOnce);

      mq.port = 1883;
      mq.keepAlivePeriod = 50;
      mq.connectionMessage = connMess;
      mq.websocketProtocols = MqttClientConstants.protocolsSingleDefault;
      mq.onConnected = onConnected;

      var status = await mq.connect();
      return mq;
    } catch (e) {
      print("error is " + e.toString());
      mq.disconnect();
      return null;
    }
  }
}
That port 8000 may be open but the HiveMQ broker may not be listening.
Make sure that the broker is fully booted and binds to that IP:Port combo.
In the HiveMQ broker startup output, you should see something similar to:
Started Websocket Listener on address 0.0.0.0 and on port 8000
If needed, the HiveMQ Broker configuration documentation is here.
You can use the public HiveMQ MQTT Websocket demo client to test your connection to make sure it's not a local code issue.
As a last option, use Wireshark to monitor MQTT traffic with a filter of tcp.port == 8000 and mqtt.
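Another quick cross-check, independent of the Flutter code, is a small Node script against the same endpoint. This is only a sketch: it assumes the mqtt package (MQTT.js) and that the broker's websocket listener is served on the usual /mqtt path:

// npm install mqtt
const mqtt = require('mqtt');

// ws:// (not wss://) on port 8000, with the /mqtt path; adjust if your listener differs
const client = mqtt.connect('ws://broker.mqttdashboard.com:8000/mqtt', {
  clientId: 'connack-test-' + Math.random().toString(16).slice(2)
});

client.on('connect', function() {
  console.log('CONNACK received, the broker is reachable');
  client.end();
});

client.on('error', function(err) {
  console.error('connection error:', err.message);
});

If this connects but the Flutter client does not, the problem is likely in the client configuration (for example the wss:// scheme or overriding mq.port) rather than in the broker.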

Adding a websocket "put" request in the bootstrap.js file in Sails: cannot find io

I need to call a socket request from the bootstrap.js file in sails.
The bootstrap.js file has some code checking whether some game engine has updated some file. If so, it needs to send a message with some updated data via socket to a defined route called "/update", e.g.
io.socket.put('/update', {history: {sessions: [1, 2, 3, 4]}}, function gotResponse(body, response) {
  console.log('Server sending request to server');
});
The problem is that it tells me that io is not recognised.
I tried to do npm install for both sails.io.js and socket.io-client and then write:
var io = require('sails.io.js')( require('socket.io-client') );
at the top.
Unfortunately, it gives me the following error message:
C:\Users\Evolver\Documents\programming\pipegame\game6\node_modules\socket.io-client\lib\url.js:29
if (null == uri) uri = loc.protocol + '//' + loc.host;
^
TypeError: Cannot read property 'protocol' of undefined
at url (C:\Users\Evolver\Documents\programming\pipegame\game6\node_modules\socket.io-client\lib\url.js:29:29)
at lookup (C:\Users\Evolver\Documents\programming\pipegame\game6\node_modules\socket.io-client\lib\index.js:44:16)
at goAheadAndActuallyConnect (C:\Users\Evolver\Documents\programming\pipegame\game6\node_modules\sails.io.js\sails.io.js:835:21)
at selfInvoking (C:\Users\Evolver\Documents\programming\pipegame\game6\node_modules\sails.io.js\sails.io.js:812:18)
at SailsSocket.SailsIOClient.SailsSocket._connect (C:\Users\Evolver\Documents\programming\pipegame\game6\node_modules\sails.io.js\sails.io.js:831:9)
at null._onTimeout (C:\Users\Evolver\Documents\programming\pipegame\game6\node_modules\sails.io.js\sails.io.js:1463:17)
at Timer.listOnTimeout (timers.js:92:15)
Any idea?
OK, it now works. Once npm install has been done for socket.io-client and sails.io.js, I do exactly the following:
var socketIOClient = require('socket.io-client');
var sailsIOClient = require('sails.io.js');

// Instantiate the socket client (`io`)
var io = sailsIOClient(socketIOClient);
io.sails.url = 'http://localhost:1337';

// then I send something via my socket
io.socket.put('/update', {history: {sessions: [1, 2, 3, 4]}}, function gotResponse(body, response) {
  console.log('Server sending request to server');
});
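The TypeError above appears to come from socket.io-client falling back to window.location when no URL is supplied, and there is no window on the server, which is why setting io.sails.url explicitly matters. As a minimal sketch of how this could sit in config/bootstrap.js (the connect handler and the /update payload are placeholders for the real game-engine check):

// config/bootstrap.js
var socketIOClient = require('socket.io-client');
var sailsIOClient = require('sails.io.js');

module.exports.bootstrap = function(done) {
  var io = sailsIOClient(socketIOClient);
  io.sails.url = 'http://localhost:1337';

  io.socket.on('connect', function() {
    // placeholder for "the game engine has updated some file"
    io.socket.put('/update', {history: {sessions: [1, 2, 3, 4]}}, function(body, response) {
      console.log('update acknowledged');
    });
  });

  // let Sails finish lifting; the socket connects once the server is up
  return done();
};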

Google Cloud SQL or node-mysql takes a long time to answer

We have a project using Polymer as the front end and Node.js as the API consumed by Polymer, and our Node API takes a really long time to reply, especially if you just leave the page alone for about 10 minutes. Upon further investigation, by inserting a date calculation around the MySQL query, I found out that it is MySQL that takes a really long time to respond. The query code looks like this:
var query = dataStruct['formed_query'];
console.log(query);
var now = Date.now();
console.log("Getting Data for Foobar Query============ " + Date());
console.log(query);

GLOBAL.db_foobar.getConnection(function(err1, connection) {
  ////console.log("requesting MySQL connection");
  if (err1 == null) {
    connection.query(query, function(err, rows, fields) {
      console.log("response from MySQL Foobar Query============= " + Date());
      console.log("MySQL response Foobar Query=========> " + (Date.now() - now) + " ms");
      if (err == null) {
        // respond.respondJSON is just a res.json(msg); but I've added a similar calculation
        // for response time starting from express router.route until res.json occurs
        respond.respondJSON(dataJSON['resVal'], res, req);
      } else {
        var msg = {
          "status": "Error",
          "desc": "[Foobar Query]Error Getting Connection",
          "err": err1,
          "db_name": "common",
          "query": query
        };
        respond.respondError(msg, res, req);
      }
      connection.release();
    });
  } else {
    var msg = {
      "status": "Error",
      "desc": "[Foobar Query]Error Getting Connection",
      "err": err1,
      "db_name": "common",
      "query": query
    };
    respond.respondJSON(msg, res, req);
    respond.emailError(msg);
    try {
      connection.release();
    } catch (err_release) {
      respond.LogInConsole(err_release);
      respond.LogInConsole(err_release.stack);
    }
  }
});
}
When Chrome Developer Tools reports a long pending time for the API, this is what shows up in my log:
SELECT * FROM `foobar_table` LIMIT 0,20;
MySQL response Foobar Query=========> 10006 ms
I'm dumbfounded as to why this is happening.
We have our system hosted on Google Cloud. Our MySQL is a Google Cloud SQL instance with an activation policy of ALWAYS. We've also set up our Node server, which runs on Google Compute Engine, to keep TCP4 connections alive via:
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf
sudo /sbin/sysctl --load=/etc/sysctl.conf
I'm using a mysql pool from node-mysql:
db_init.database = 'foobar_dbname';
db_init = ssl_set(db_init);

//GLOBAL.db_foobar = mysql.createConnection(db_init);
GLOBAL.db_foobar = mysql.createPool(db_init);

GLOBAL.db_foobar.on('connection', function (connection) {
  setTimeout(tryForceRelease, mysqlForceTimeOut, connection);
});
db_init looks like this:
db_init = {
  host: 'ip_address_of_GCS_SQL',
  user: 'user_name_of_GCS_SQL',
  password: '',
  database: '',
  supportBigNumbers: true,
  connectionLimit: 100
};
I'm also forcing connections to be released if they haven't been released within 2 minutes, just to be sure:
function tryForceRelease(connection) {
  try {
    //console.log("force releasing connection");
    connection.release();
  } catch (err) {
    //do nothing
    //console.log("connection already released");
  }
}
This is really racking my brain. If anyone can help, please do.
I'll post the same answer here as I posted in node-mysql pool experiences ETIMEDOUT.
The questions are sufficiently different that I'm not sure it's worth duping them.
I suspect the reason is that keepalive is not enabled on the connection to the MySQL server.
node-mysql does not have an option to enable keepalive and neither does node-mysql2, but node-mysql2 provides a way to supply a custom function for creating sockets, which we can use to enable keepalive:
var mysql = require('mysql2');
var net = require('net');

var pool = mysql.createPool({
  connectionLimit : 100,
  host            : '123.123.123.123',
  user            : 'foo',
  password        : 'bar',
  database        : 'baz',
  stream          : function(opts) {
    // create the socket ourselves so we can turn on TCP keepalive
    var socket = net.connect(opts.config.port, opts.config.host);
    socket.setKeepAlive(true);
    return socket;
  }
});
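If the probes need to start sooner than the kernel default (commonly two hours on Linux), net.Socket#setKeepAlive also accepts an initial delay in milliseconds, e.g. socket.setKeepAlive(true, 60 * 1000); to begin probing after one minute of idleness, well inside the roughly ten-minute idle window described in the question.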

fulljid is empty after connection to BOSH service with XMPPHP

I am trying to pre-bind an XMPP session via XMPPHP and pass the rid/sid/jid to a Strophe client to attach to the session.
connection code here:
$conn = new CIRCUIT_BOSH('server.com', 7070, $username, $pass, $resource, 'server.com', $printlog=true, $loglevel=XMPPHP_Log::LEVEL_VERBOSE);
$conn->autoSubscribe();

try {
    $conn->connect('http://xmpp.server.com/http-bind', 1, true);
    $log->lwrite('Connected!');
} catch (XMPPHP_Exception $e) {
    die($e->getMessage());
}
I am getting the rid and sid, but the fulljid in the $conn object stays empty and I can't see a session started on my Openfire admin console.
If I create the JID manually using the given resource and pass the jid/rid/sid to Strophe to use in attach, I get the ATTACHED status and I see calls from the client to the BOSH IP, but I still don't see a session and I can't use the connection.
Strophe Client Code:
Called on document ready:
var sid = $.cookie('sid');
var rid = $.cookie('rid');
var jid = $.cookie('jid');

$(document).trigger('attach', {
  sid: sid,
  rid: rid,
  jid: jid
});

$(document).bind('attach', function (ev, data) {
  var conn = new Strophe.Connection("http://xmpp.server.com/http-bind");
  conn.attach(data.jid, data.sid, data.rid, function (status) {
    if (status === Strophe.Status.CONNECTED) {
      $(document).trigger('connected');
    } else if (status === Strophe.Status.DISCONNECTED) {
      $(document).trigger('disconnected');
    } else if (status === Strophe.Status.ATTACHED) {
      $(document).trigger('attached');
    }
  });
  Object.connection = conn;
});
I think the problem starts on the XMPPHP side, which is not creating the session properly.
'attached' is triggered but never 'connected'; is the 'connected' status supposed to be sent at all?
What am I missing?
OK, solved. I saw that the XMPPHP lib didn't create a session at all on the Openfire server, so I wrote a simple test for the XMPP class, which worked and created the session, and for the XMPP_BOSH class, which didn't manage to create one. Then I saw the issue report here: http://code.google.com/p/xmpphp/issues/detail?id=47. Comment no. 9 worked: it fixed the issue by copying the processUntil() function from XMLStream.php to BOSH.php, though I still can't figure out why this works.
Then I found I also had an overlapping bug with some of the passwords set for users on the Openfire server. These passwords contained the characters ! # % ^, and for some reason XMPP_BOSH sends the password corrupted or changed, so I got an Auth Failed exception. Changing the password fixed the issue, and I can now attach to the session XMPPHP created with the Strophe.js library.