I am writing some end-to-end test cases to test socket connections in my app. I expect to receive socket events after specific REST API requests. For instance, after hitting /api/v1/[createTag], I expect the createTag event to be captured by socket.io-client. The issue is that the tests pass very inconsistently and sometimes fail even with good REST API requests. The reason they fail is that done() inside socket.on('createTag', ...) is never called, so the test times out. In the browser, all the API endpoints and sockets currently seem to be working fine. Is there a specific configuration that I might be missing in order to test socket.io-client within a Node.js environment and Jest?
Below are my test cases, and thanks a lot in advance:
describe('Socket integration tests: ', () => {
beforeAll(async done => {
await apiInit();
const result = await requests.userSignIn(TEST_MAIL, TEST_PASSWORD);
TEST_USER = result.user;
SESSION = result.user.session;
console.log('Test user authenticated successfully.');
done();
});
beforeEach(done => {
socket = io(config.socket_host, { forceNew: true })
socket.on('connect', () => {
console.log('Socket connection successful.');
socket.emit('session', { data: SESSION }, (r) => {
console.log('Socket session successful.');
done();
});
});
})
test('Receiving createTag socket event?', async(done) => {
console.log('API request on createTag');
const response = await Requester.post(...);
console.log('API response on createTag', response);
socket.on('createTag', result => {
console.log('createTag socket event successful.');
createdTagXid = result.data.xid;
done();
})
});
afterEach(done => {
if(socket.connected) {
console.log('disconnecting.');
socket.disconnect();
} else {
console.log('no connection to break');
}
done();
})
});
Basically, setting the event handler after the async API call seems to be the issue, so I should have registered socket.on(...) first and then called the REST API.
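For reference, here is a minimal sketch of the test with the listener registered first (the request payload is left elided exactly as in the snippet above):

test('Receiving createTag socket event?', done => {
  // Register the handler before triggering anything, so the event
  // cannot arrive while nothing is listening.
  socket.on('createTag', result => {
    console.log('createTag socket event successful.');
    createdTagXid = result.data.xid;
    done();
  });

  // Only then fire the REST request that emits the event.
  Requester.post(...)
    .then(response => console.log('API response on createTag', response))
    .catch(done);
});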
I am trying to figure out if the following is a secure and correct way of handling user authentication with WebSockets when using socket.io. My understanding is that we only need to verify authentication (the JWT token) during the socket connection and can assume that all subsequent events are authenticated?
I essentially made this middleware to verify the JWT token once during the connect event and then set the user details on the socket.
socketIo.use((socket, next) => {
socket.on('connect', async () => {
try {
const { authToken } = socket.handshake.auth
const user = await verifyUserToken(authToken)
socket.user = {
uid: user.uid
}
next()
} catch {
next(new Error(SocketError.UNAUTHORIZED))
}
})
})
Is this correct and secure, or do I need to verify token on every event call?
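For comparison, socket.io middleware normally runs the check directly in the use() callback, since it executes during the handshake, before the connection is established, so no connect listener is needed there. A minimal sketch reusing the same helpers as above:

socketIo.use(async (socket, next) => {
  try {
    // Runs once per incoming connection, during the handshake.
    const { authToken } = socket.handshake.auth
    const user = await verifyUserToken(authToken)
    socket.user = {
      uid: user.uid
    }
    next()
  } catch {
    next(new Error(SocketError.UNAUTHORIZED))
  }
})

The socket.user value set here stays attached to that connection, so it is available in every subsequent event handler.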
I have a PWA project where I send data to the server. During this process, if the user is offline, the data is stored in IndexedDB and a sync tag is registered, so that when the user comes back online the data can be sent to the server.
But in my case the sync event gets executed immediately when we register the sync tag, which means the data is sent to the server while the app is still offline, which is not going to work.
I think the sync event is supposed to fire only while online, so what could be the issue here?
The service worker's sync event works as expected when I enable and disable the offline option in Chrome DevTools, and it also works correctly on my Android phone.
This is how I register my sync tag
function onFailure() {
var form = document.querySelector("form");
//Register the sync on post form error
if ('serviceWorker' in navigator && 'SyncManager' in window) {
navigator.serviceWorker.ready
.then(function (sw) {
var post = {
datetime1: form.datetime1.value,
datetime: form.datetime.value,
name: form.name.value,
image: form.url.value,
message: form.comment.value
};
writeData('sync-comments', post)
.then(function () {
return sw.sync.register('sync-new-comment');
})
.then(function () {
console.log("[Sync tag registered]");
})
.catch(function (err) {
console.log(err);
});
});
}
}
And this is how the sync event is handled:
self.addEventListener('sync', function (event) {
console.log("[Service worker] Sync new comment", event);
if (event.tag === 'sync-new-comment') {
event.waitUntil(
readAllData('sync-comments')
.then(function (data) {
setTimeout(() => {
data.forEach(async (dt) => {
const url = "/api/post_data/post_new_comment";
const parameters = {
method: 'POST',
headers: {
'Content-Type': "application/json",
'Accept': 'application/json'
},
body: JSON.stringify({
datetime: dt.datetime,
name: dt.name,
url: dt.image,
comment: dt.message,
datetime1: dt.datetime1,
})
};
fetch(url, parameters)
.then((res) => {
return res.json();
})
.then(response => {
if (response && response.datetimeid) deleteItemFromData('sync-comments', response.datetimeid);
}).catch((error) => {
console.log('[error post message]', error.message);
})
})
}, 5000);
})
);
}
});
You mention:
The service worker's sync event works as expected when I enable and disable the offline option in Chrome DevTools, and it also works correctly on my Android phone.
So I'm not sure which case is the one failing.
You are right that the sync will be triggered when the browser thinks the user is online; if the browser detects that the user is online at the time of the sync registration, it will trigger the sync immediately:
In true extensible web style, this is a low level feature that gives you the freedom to do what you need. You ask for an event to be fired when the user has connectivity, which is immediate if the user already has connectivity. Then, you listen for that event and do whatever you need to do.
Also, from the Workbox documentation:
Browsers that support the BackgroundSync API will automatically replay failed requests on your behalf at an interval managed by the browser, likely using exponential backoff between replay attempts.
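If you end up moving this to Workbox, that replay behaviour comes from its BackgroundSyncPlugin. A rough sketch (the CDN import, the Workbox version, and the route pattern are assumptions here, not taken from the code above):

// sw.js
importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.5.4/workbox-sw.js');

// Failed POSTs to this route are queued in IndexedDB and replayed by the
// browser when connectivity returns.
const bgSyncPlugin = new workbox.backgroundSync.BackgroundSyncPlugin('sync-new-comment', {
  maxRetentionTime: 24 * 60 // keep retrying for up to 24 hours (value is in minutes)
});

workbox.routing.registerRoute(
  /\/api\/post_data\/post_new_comment/,
  new workbox.strategies.NetworkOnly({ plugins: [bgSyncPlugin] }),
  'POST'
);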
I am invoking the axios post method in an AWS Lambda function. Most of the time the Lambda does not return any result. The logs show the following:
START RequestId: ac92d268-d212-4b80-a06c-927922fcf1d5 Version: $LATEST
END RequestId: ac92d268-d212-4b80-a06c-927922fcf1d5
But sometimes the Lambda returns the expected results. It looks like the Lambda is not waiting for axios to complete. Below is the Lambda code:
var axios = require('axios')
exports.handler = async (event, context,callback) => {
axios.post('https://example.com/testapi/api.asmx/GetNames', {})
.then((res) => {
console.log(JSON.stringify(res.data,null,2))
callback(null,'success');
})
.catch((error) => {
console.error(error)
callback(null,'error');
})
};
Your handler is async, which means it returns a Promise. Because the axios call is never awaited, that Promise resolves immediately, so the function is terminated before your .then callback actually runs.
Since axios already works with Promises and your handler is already async, you don't need to change much. This will fix the problem:
const axios = require('axios')
exports.handler = async (event) => {
try {
const res = await axios.post('https://example.com/testapi/api.asmx/GetNames', {})
console.log(res)
return {
statusCode: 200,
body: JSON.stringify(res.data) // stringify only the payload; the full axios response contains circular references
}
} catch (e) {
console.log(e)
return {
statusCode: 400,
body: JSON.stringify({ message: e.message }) // Error objects stringify to "{}", so pass the message explicitly
}
}
};
You can read up more on async/await if you want to.
I was having a similar issue where I make a third-party API call with axios in Lambda. After spending almost a day on it, I noticed that my Lambda had a 6-second default timeout. Sometimes the response from the API took longer than 6 seconds, and that caused a 502 response.
I'm having issues getting stubRequest to work properly. Here's my code:
it('should stub my request', (done) => {
moxios.stubRequest('/authenticate', {
status: 200
})
//here a call to /authenticate is being made
SessionService.login('foo', 'bar')
moxios.wait(() => {
expect(something).toHaveHappened()
done()
})
})
This works fine:
it('should stub my request', (done) => {
SessionService.login('foo', 'bar')
moxios.wait(async () => {
let request = moxios.requests.mostRecent()
await request.respondWith({
status: 200
})
expect(something).toHaveHappened()
done()
})
})
The second method just gets the last call, though, and I'd really like to be able to explicitly stub certain requests.
I'm running Jest with Vue.
I landed here with a similar goal and eventually solved it using a different approach that may be helpful to others:
moxios.requests has a method .get() (source code) that lets you grab a specific request from moxios.requests based on the url. This way, if you have multiple requests, your tests don't require the requests to occur in a specific order to work.
Here's what it looks like:
moxios.wait(() => {
// Grab a specific API request based on the URL
const request = moxios.requests.get('get', 'endpoint/to/stub');
// Stub the response with whatever you would like
request.respondWith(yourStubbedResponseHere)
.then(() => {
// Your assertions go here
done();
});
});
NOTE:
The name of the method .get() is a bit misleading. It can handle different types of HTTP requests. The type is passed as the first parameter like: moxios.requests.get(requestType, url)
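Applied to the original question, where SessionService.login presumably issues a POST to /authenticate (an assumption based on the stub in the question), the lookup would look roughly like this:

moxios.wait(() => {
  // First argument is the HTTP verb, second is the URL to match.
  const request = moxios.requests.get('post', '/authenticate');
  request.respondWith({ status: 200 }).then(() => {
    expect(something).toHaveHappened();
    done();
  });
});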
It would be nice if you showed us the service. The service call must go inside the moxios.wait callback, and only the plain axios call should be outside it. I have pasted a simplified example with stubRequest below:
describe('Fetch a product action', () => {
let onFulfilled;
let onRejected;
beforeEach(() => {
moxios.install();
store = mockStore({});
onFulfilled = sinon.spy();
onRejected = sinon.spy();
});
afterEach(() => {
moxios.uninstall();
});
it('can fetch the product successfully', done => {
const API_URL = `http://localhost:3000/products/`;
moxios.stubRequest(API_URL, {
status: 200,
response: mockDataSingleProduct
});
axios.get(API_URL, mockDataSingleProduct).then(onFulfilled);
const expectedActions = [
{
type: ACTION.FETCH_PRODUCT,
payload: mockDataSingleProduct
}
];
moxios.wait(function() {
const response = onFulfilled.getCall(0).args[0];
expect(onFulfilled.calledOnce).toBe(true);
expect(response.status).toBe(200);
expect(response.data).toEqual(mockDataSingleProduct);
return store.dispatch(fetchProduct(mockDataSingleProduct.id))
.then(() => {
var actions = store.getActions();
expect(actions.length).toBe(1);
expect(actions[0].type).toBe(ACTION.FETCH_PRODUCT);
expect(actions[0].payload).not.toBe(null || undefined);
expect(actions[0].payload).toEqual(mockDataSingleProduct);
expect(actions).toEqual(expectedActions);
done();
});
});
});
})
I have a mongoose schema with a unique field, and I am trying to write a backend (Express) integration test which checks that POSTing the same entity twice results in HTTP 400. When testing manually, the behaviour is as expected. Automatic testing, however, requires a wait:
it('should not accept two projects with the same name', function(done) {
var project = // ...
postProjectExpect201(project,
() => {
setTimeout( () => {
postProjectExpect400(project, done);
},100);
}
);
});
The two post... methods do what their names say, and the code above works fine, but if the timeout is removed, BOTH requests receive HTTP 200 (though only one entity is created in the database).
I'm new to these technologies and I'm not sure what's going on. Could this be a MongoDB-related concurrency issue, and if so, how should I deal with it?
The database call looks like this:
Project.create(req.body)
.then(respondWithResult(res, 201))
.catch(next);
I already tried connecting to MongoDB with the ?w=1 option, by the way.
Update:
To be more explicit: Project is a mongoose model, and next is my Express error handler, which catches the duplicate error.
The test functions:
var postProjectExpect201=function(project, done, validateProject) {
request(app)
.post('/api/projects')
.send(project)
.expect(201)
.expect('Content-Type', /json/)
.end((err, res) => {
if (err) {
return done(err);
}
validateProject && validateProject(res.body);
done();
});
};
var postProjectExpect400=function(project, done) {
request(app)
.post('/api/projects')
.send(project)
.expect(400)
.end((err, res) => {
if (err) {
return done(err);
}
done();
});
};