This is the approach I tried, but it is not working. I can forward the incoming messages from the WebSocket connection to the NetSocket, but only the first one received by the NetSocket arrives at the client behind the WebSocket.
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });
const NetSocket = require('net');
const net = new NetSocket.Socket();

// Web socket
wss.on('connection', function connection(ws) {
  console.log((new Date()) + ' Remote connection accepted ' + ws.remoteAddress);
  ws.on('message', function incoming(message) {
    console.log('Received from remote: %s', message);
    net.write(message);
  });
  ws.on('close', function() {
    console.log((new Date()) + ' Remote connection closed');
  });
});

// Net socket
net.connect(8745, '127.0.0.1', function() {
  console.log((new Date()) + ' Local connection accepted');
});

net.on('data', function(data) {
  console.log('Received from local: ' + data);
  // Iterate the connected devices to send the broadcast
  wss.clients.forEach(function each(c) {
    if (c.readyState === WebSocket.OPEN) {
      c.send(data);
    }
  });
});

net.on('close', function() {
  console.log('Local connection closed');
});
After further research I noticed that the problem was in my Swift code.
private func setReceiveHandler() {
    webSocketTask.receive { result in
        defer { self.setReceiveHandler() } // I was missing this line
        do {
            let message = try result.get()
            switch message {
            case let .string(text):
                print("Received text message: \(text)")
            case let .data(data):
                print("Received binary message: \(data)")
            default:
                break
            }
        } catch {
            print("Error receiving message: \(error)")
        }
    }
}
So, just by adding defer { self.setReceiveHandler() } to my function, it started to work.
Note the defer statement at the start of the receive handler. It calls self.setReceiveHandler() to reset the receive handler on the socket connection to allow it to receive the next message. Currently, the receive handler you set on a socket connection is only called once, rather than every time a message is received. By using a defer statement, you make sure that self.setReceiveHandler is always called before exiting the scope of the receive handler, which makes sure that you always receive the next message from your socket connection.
I've got the information from:
https://www.donnywals.com/real-time-data-exchange-using-web-sockets-in-ios-13/
My socket emit works properly only in debug mode; when I tried with a release APK, nothing happened.
Code to connect the socket:
socket = io(SOCKET_URL, {
  transports: ['websocket'], // you need to explicitly tell it to use websockets
  forceNew: true,
  jsonp: false
});
socket.on('connect', () => {
  console.log('connected!');
});
socket.on('disconnect', () => {
  console.log('disconnect!');
});
Code to emit an event:
socket.emit('LIVE_MSG', { msg: "asdfasasdf3" }, (res) => {
  console.log(res);
});
I have tried many options with the socket connection, e.g. a timeout, and setting and removing jsonp.
I also tried with window.navigator.userAgent = "react-native";
But the result is the same: the socket only emits the event in debug mode. I'm going mad over why it is not working with the release APK.
Please help.
If you don't specify the URL, socket.io sets the URL to localhost.
https://socket.io/get-started/chat/
"Notice that I’m not specifying any URL when I call io(), since it defaults to trying to connect to the host that serves the page."
(I'm not familiar with socket.io.)
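If the default host is the issue, a minimal sketch (with a placeholder server address, substitute your actual backend URL) is to pass an explicit, absolute URL so a release build does not depend on any default host:
import io from 'socket.io-client';

// 'https://myserver.example.com' is a placeholder -- point this at your actual backend.
// Passing a full absolute URL avoids any fallback to a default/localhost host,
// which may only happen to work while the app is attached to the debug server.
const socket = io('https://myserver.example.com', {
  transports: ['websocket'],
  forceNew: true,
  jsonp: false
});

socket.on('connect', () => console.log('connected!'));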
I would like to retrieve a confirmation that the message was successfully published to the exchange before closing the AMQP connection. At the moment, I am using a timeout function to allow time for the message to be published before closing the connection. This is not the right way. Can someone please help me retrieve a confirmation so I can close the connection based on a successful publish?
The code I am using is below:
function toExchange(msg)
{
  amqp.connect('amqp://localhost:5672', function(err, conn) // local connection
  {
    conn.createChannel(function(err, ch)
    {
      var exchange = 'MessageExchange';
      ch.assertExchange(exchange, 'fanout', {durable: true});
      ch.publish(exchange, '', new Buffer(msg));
      console.log("Sent to Exchange: %s", msg);
    });
    setTimeout(function() { conn.close(); }, 5000);
  });
}
You can use a RabbitMQ extension called "Publisher confirms". Here is more information: https://www.rabbitmq.com/confirms.html#publisher-confirms.
You are not notified when the message is published to the exchange, but when it is published and routed to all queues: https://www.rabbitmq.com/confirms.html#when-publishes-are-confirmed
In your case using amqplib in nodeJS you can use this snippet of code: https://www.squaremobius.net/amqp.node/channel_api.html#confirmchannel
It uses the callback #waitForConfirms(function(err) {...}) that triggers when all published messages have been confirmed.
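Building on the code from the question, here is a rough sketch (untested) using amqplib's callback API with a confirm channel; the extra callback passed to publish fires once the broker confirms the message, so the connection can be closed there instead of on a timer (alternatively, waitForConfirms can be used as described above):
const amqp = require('amqplib/callback_api');

function toExchange(msg) {
  amqp.connect('amqp://localhost:5672', function(err, conn) {
    if (err) throw err;
    // A confirm channel puts the channel into publisher-confirm mode.
    conn.createConfirmChannel(function(err, ch) {
      if (err) throw err;
      var exchange = 'MessageExchange';
      ch.assertExchange(exchange, 'fanout', { durable: true });
      // The extra callback fires when the broker acks (or nacks) the message.
      ch.publish(exchange, '', Buffer.from(msg), {}, function(err) {
        if (err) {
          console.error('Message was nacked by the broker', err);
        } else {
          console.log('Sent to Exchange and confirmed: %s', msg);
        }
        conn.close(); // safe to close now, no timeout needed
      });
    });
  });
}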
UPDATE: I am using version 2.1 of the driver, against MongoDB 3.2.
I have a node application that uses MongoDB. The problem I have is that if the MongoDB server goes down for any reason, the application doesn't reconnect.
To get this right, I based my tests on the code in this official tutorial.
var MongoClient = require('mongodb').MongoClient
  , f = require('util').format;

MongoClient.connect('mongodb://localhost:27017/test',
  // Optional: uncomment if necessary
  // { db: { bufferMaxEntries: 3 } },
  function(err, db) {
    var col = db.collection('t');
    setInterval(function() {
      col.insert({a:1}, function(err, r) {
        console.log("insert")
        console.log(err)
        col.findOne({}, function(err, doc) {
          console.log("findOne")
          console.log(err)
        });
      })
    }, 1000)
  });
The idea is to run this script, and then stop mongod, and then restart it.
So, here we go:
TEST 1: stopping mongod for 10 seconds
Stopping MongoDB for 10 seconds gives the desired result: it stops running the queries for those 10 seconds, and then runs all of them once the server is back up.
TEST 2: stopping mongod for 30 seconds
After exactly 30 seconds, I start getting:
{ [MongoError: topology was destroyed] name: 'MongoError', message: 'topology was destroyed' }
insert
{ [MongoError: topology was destroyed] name: 'MongoError', message: 'topology was destroyed' }
The trouble is that from this point on, when I restart mongod, the connection is not re-established.
Solutions?
Does this problem have a solution? If so, do you know what it is?
Once my app starts puking "topology was destroyed", the only way to get everything to work again is by restarting the whole app...
There are 2 connection options that control how the mongo nodejs driver reconnects after a connection fails:
reconnectTries: attempt to reconnect # times (default 30 times)
reconnectInterval: the server will wait # milliseconds between retries (default 1000 ms)
(reference: the mongo driver docs)
This means that mongo will keep trying to connect 30 times by default and wait 1 second before every retry, which is why you start seeing errors after 30 seconds.
You should tweak these 2 parameters based on your needs, as in this sample:
var MongoClient = require('mongodb').MongoClient,
  f = require('util').format;

MongoClient.connect('mongodb://localhost:27017/test',
  {
    // retry to connect for 60 times
    reconnectTries: 60,
    // wait 1 second before retrying
    reconnectInterval: 1000
  },
  function(err, db) {
    var col = db.collection('t');
    setInterval(function() {
      col.insert({
        a: 1
      }, function(err, r) {
        console.log("insert")
        console.log(err)
        col.findOne({}, function(err, doc) {
          console.log("findOne")
          console.log(err)
        });
      })
    }, 1000)
  });
This will try 60 times instead of the default 30, which means that you'll start seeing errors after 60 seconds when it stops trying to reconnect.
Sidenote: if you want to prevent the app/request from waiting until the reconnection period expires, you have to pass the option bufferMaxEntries: 0. The price for this is that requests are also aborted during short network interruptions.
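For example, a minimal sketch (same placeholder connection string as the question) combining the retry options above with bufferMaxEntries; note that on some driver versions this option may need to be nested under db: {} rather than passed at the top level:
MongoClient.connect('mongodb://localhost:27017/test', {
  reconnectTries: 60,      // keep retrying for ~60 seconds
  reconnectInterval: 1000, // wait 1 second between retries
  bufferMaxEntries: 0      // fail operations immediately while disconnected instead of buffering them
}, function(err, db) {
  // ...
});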
package.json: "mongodb": "3.1.3"
Reconnect existing connections
To fine-tune the reconnect configuration for pre-established connections, you can modify the reconnectTries/reconnectInterval options (default values and further documentation here).
Reconnect initial connection
For the initial connection, the mongo client does not reconnect if it encounters an error (see below). I believe it should, but in the meantime, I've created the following workaround using the promise-retry library (which uses an exponential backoff strategy).
const promiseRetry = require('promise-retry')
const MongoClient = require('mongodb').MongoClient

const options = {
  useNewUrlParser: true,
  reconnectTries: 60,
  reconnectInterval: 1000,
  poolSize: 10,
  bufferMaxEntries: 0
}

const promiseRetryOptions = {
  retries: options.reconnectTries,
  factor: 1.5,
  minTimeout: options.reconnectInterval,
  maxTimeout: 5000
}

const connect = (url) => {
  return promiseRetry((retry, number) => {
    console.log(`MongoClient connecting to ${url} - retry number: ${number}`)
    return MongoClient.connect(url, options).catch(retry)
  }, promiseRetryOptions)
}

module.exports = { connect }
Mongo Initial Connect Error: failed to connect to server [db:27017] on first connect
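A hypothetical usage of the connect helper above (assuming the module is saved as mongo.js; with a 3.x driver, connect resolves to a MongoClient):
const { connect } = require('./mongo') // hypothetical path to the module above

connect('mongodb://localhost:27017/test')
  .then((client) => {
    const col = client.db('test').collection('t')
    return col.insertOne({ a: 1 })
  })
  .catch((err) => console.error('Could not connect after all retries', err))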
By default the Mongo driver will try to reconnect 30 times, one every second. After that it will not try to reconnect again.
You can set the number of retries to Number.MAX_VALUE to keep it reconnecting "almost forever":
var connection = "mongodb://127.0.0.1:27017/db";

MongoClient.connect(connection, {
  server: {
    reconnectTries: Number.MAX_VALUE,
    autoReconnect: true
  }
}, function (err, db) {
});
With mongodb driver 3.1.10, you can set up your connection as
MongoClient.connect(connectionUrl, {
  reconnectInterval: 10000, // wait for 10 seconds before retry
  reconnectTries: Number.MAX_VALUE, // retry forever
}, function(err, res) {
  console.log('connected')
})
You do not have to specify autoReconnect: true as that's the default.
It's happening because it has probably crossed the connection retry limit. After the configured number of retries, the driver destroys the TCP connection and becomes idle. So increase the number of retries, and it would be better if you also increase the gap between connection retries.
Use the options below:
retryMiliSeconds {Number, default: 5000}: number of milliseconds between retries.
numberOfRetries {Number, default: 5}: number of retries of the connection.
For more details refer to this link https://mongodb.github.io/node-mongodb-native/driver-articles/mongoclient.html
Solution:
MongoClient.connect("mongodb://localhost:27017/integration_test_?", {
db: {
native_parser: false,
retryMiliSeconds: 100000,
numberOfRetries: 100
},
server: {
socketOptions: {
connectTimeoutMS: 500
}
}
}, callback)
Behavior may differ with different versions of the driver; you should mention your driver version.
driver version : 2.2.10 (latest)
mongo db version : 3.0.7
The code below will extend the time mongod can take to come back up.
var MongoClient = require('mongodb').MongoClient
  , f = require('util').format;

function connectCallback(err, db) {
  var col = db.collection('t');
  setInterval(function() {
    col.insert({a:1}, function(err, r) {
      console.log("insert")
      console.log(err)
      col.findOne({}, function(err, doc) {
        console.log("findOne")
        console.log(err)
      });
    })
  }, 1000)
}

var options = { server: { reconnectTries: 2000, reconnectInterval: 1000 } }
MongoClient.connect('mongodb://localhost:27017/test', options, connectCallback);
The 2nd argument can be used to pass server options.
If you are using Mongoose for your schemas, it would be worth considering my option below, since Mongoose never retries the connection to MongoDB implicitly after the first attempt fails.
Kindly note that I am connecting to Azure Cosmos DB for MongoDB API; yours may be on the local machine.
Below is my code.
const mongoose = require('mongoose');

// set the global useNewUrlParser option to turn on useNewUrlParser for every connection by default.
mongoose.set('useNewUrlParser', true);
// In order to use `findOneAndUpdate()` and `findOneAndDelete()`
mongoose.set('useFindAndModify', false);

async function mongoDbPool() {
  // Closure.
  return function connectWithRetry() {
    // All the variables and functions in here will persist in scope.
    const COSMODDBUSER = process.env.COSMODDBUSER;
    const COSMOSDBPASSWORD = process.env.COSMOSDBPASSWORD;
    const COSMOSDBCONNSTR = process.env.COSMOSDBCONNSTR;
    var dbAuth = {
      auth: {
        user: COSMODDBUSER,
        password: COSMOSDBPASSWORD
      }
    };
    const mongoUrl = COSMOSDBCONNSTR + '?ssl=true&replicaSet=globaldb';
    return mongoose.connect(mongoUrl, dbAuth, (err) => {
      if (err) {
        console.error('Failed to connect to mongo - retrying in 5 sec');
        console.error(err);
        setTimeout(connectWithRetry, 5000);
      } else {
        console.log(`Connected to Azure CosmosDB for MongoDB API.`);
      }
    });
  };
}
You may decide to export and reuse this module everywhere you need to connect to the db via dependency injection, but for now I will only show how to access the database connection.
(async () => {
  var dbPools = await Promise.all([mongoDbPool()]);
  var mongoDbInstance = await dbPools[0]();
  // Now use "mongoDbInstance" to do what you need.
})();
I am trying to create an audio broadcasting app using WebRTC. To make it compatible with IE I am using the Temasys plugin.
In most of the demos available on the internet I have seen two audio/video controls on a single page, but I am trying it with a two-page application: one page for the sender and another for the receiver.
I am sending my stream description using XHR to a database, where it is received by the other user and used as the local description for the peer connection on the receiver end.
Here is the code:
Sender
function gotStream(stream) {
  console.log('Received local stream');
  // Call the polyfill wrapper to attach the media stream to this element.
  localstream = stream;
  audio1 = attachMediaStream(audio1, stream);
  pc1.addStream(localstream);
  console.log('Adding Local Stream to peer connection');
  pc1.createOffer(gotDescription1, onCreateSessionDescriptionError);
}

function gotDescription1(desc) {
  pc1.setLocalDescription(desc);
  console.log('Offer from pc1 \n' + desc);
  console.log('Offer from pc1 \n' + desc.sdp);
  $.ajax({
    type: "POST",
    url: '../../home/saveaddress',
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify({ SDP: desc }),
    dataType: "json",
    success: function (result) {
      if (result) {
        console.log('SDP Saved');
      }
    }
  });
}

function iceCallback2(event) {
  if (event.candidate) {
    pc1.addIceCandidate(event.candidate,
      onAddIceCandidateSuccess, onAddIceCandidateError);
    console.log('Remote ICE candidate: \n ' + event.candidate.candidate);
  }
}
At Receiver End
var pcConstraints = {
  'optional': []
};

pc2 = new RTCPeerConnection(servers, pcConstraints);
console.log('Created remote peer connection object pc2');
pc2.onicecandidate = iceCallback1;
pc2.onaddstream = gotRemoteStream;

$.ajax({
  type: "GET",
  url: '../../home/getsavedaddress',
  contentType: "application/json; charset=utf-8",
  dataType: "json",
  success: function (result) {
    if (result) {
      gotDescription1(result);
    }
  },
  error: function () {
  }
});

function gotDescription1(desc) {
  console.log('Offer from pc1 \n' + desc.sdp);
  console.log('Offer from pc1 \n' + pc2);
  pc2.setRemoteDescription(new RTCSessionDescription(desc));
  pc2.createAnswer(gotDescription2, onCreateSessionDescriptionError,
    sdpConstraints);
}
Using this I get the SDP from the server, and the video tag has a source now, but the video is not playing or showing anything. Any clues?
Also, I am using ASP.NET for the site; do I need to use Node.js in this project?
Thanks
Your question is lacking information, but I will give my opinion on it.
Are you supporting Trickle ICE? It seems you may be sending the SDP too fast!
When you do a
pc1.setLocalDescription(desc);
The ICE Candidates start being gathered based on the TURN and STUN server configured in your code here (servers parameter):
pc2 = new RTCPeerConnection(servers, pcConstraints);
That said, they are not yet included in your SDP. It can take a few milliseconds before the media ports are set in the localDescription object. Your first error is that you are sending the "desc" object from gotDescription1 instead of the post-setLocalDescription SDP. That SDP doesn't have the proper media ports yet.
In your code, you are sending the SDP right away without waiting. My guess is that the SDP is not yet completed and you are not supporting Trickle. Because of that, even if signalling might look good, you will not see any media flowing.
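If you don't want to implement trickle ICE, one option is to wait for ICE gathering to finish before POSTing the offer. A rough sketch (untested, assuming a browser with the promise-based WebRTC API, and reusing the asker's pc1 and endpoint):
function gotDescription1(desc) {
  pc1.setLocalDescription(desc).then(function () {
    if (pc1.iceGatheringState === 'complete') {
      sendOffer(pc1.localDescription);
    } else {
      // Wait until all candidates are gathered so the SDP contains the media ports.
      pc1.addEventListener('icegatheringstatechange', function onStateChange() {
        if (pc1.iceGatheringState === 'complete') {
          pc1.removeEventListener('icegatheringstatechange', onStateChange);
          sendOffer(pc1.localDescription); // the post-setLocalDescription SDP
        }
      });
    }
  });
}

function sendOffer(desc) {
  $.ajax({
    type: "POST",
    url: '../../home/saveaddress', // the asker's existing endpoint
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify({ SDP: desc }),
    dataType: "json",
    success: function (result) {
      if (result) {
        console.log('SDP Saved');
      }
    }
  });
}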