Angular2 e2e test case with protractor throwing error - sockets

I have created my app with angular2-webpack-starter and I have used socket.io with it. I created one common service that opens the socket connection and listens for its events. This service is used and initialized after the user is logged in. When the app is running and I execute the test case for login, I check the URL with the code below:
browser.getCurrentUrl().then((url) => {
    expect(url).toEqual('/dashboard');
});
The issue is that when the socket is connected it throws the error 'Timed out waiting for Protractor to synchronize with the page after 15 seconds', while if the socket is not connected the same test case runs without any error.

I'm not sure if connecting to the socket actually makes things take longer or not, but if 15 seconds isn't enough time, you can change allScriptsTimeout: timeout_in_millis in your Protractor configuration file. See the Protractor timeouts documentation.
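For example, a minimal sketch of that setting (the 30000 value is just an illustration; the specs path is assumed):

// protractor.conf.js -- sketch; adjust the timeout to your app's needs
exports.config = {
    // Give Angular up to 30 seconds to synchronize before timing out
    allScriptsTimeout: 30000,
    specs: ['./e2e/**/*.e2e-spec.ts']
};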

So the solution I have found is:
(This is copied from here for your convenience. All credit goes to https://github.com/cpa-level-it
https://github.com/angular/angular/issues/11853#issuecomment-277185526)
What I did to fix the problem was to use NgZone everywhere I have an observable that relies on socket.io.
So let's say you have this method in your service that gives you an observable on a socket.io socket:
import * as io from 'socket.io-client';
import { Observable } from 'rxjs/Observable';

private socket: SocketIOClient.Socket;

public getSocketIOEvents(): Observable<SocketIOEvent> {
    if (this.socket == null) {
        this.socket = io.connect(this._socketPath);
    }
    return Observable.create((observer: any) => {
        this.socket.on('eventA', (item: any) => observer.next(new SocketIOEvent(item)));
        this.socket.on('eventB', (item: any) => observer.next(new SocketIOEvent(item)));
        // Teardown: close the socket when the subscription is disposed
        return () => this.socket.close();
    });
}
Then you need to use the NgZone service to tell Angular to create the socket outside the Angular zone, and to execute the callback of the Observable back inside the Angular zone.
import { NgZone } from '@angular/core';

constructor(
    private socketService: SocketIOService,
    private ngZone: NgZone) { }
ngOnInit() {
    // Subscribe to the Observable outside the Angular zone...
    this.ngZone.runOutsideAngular(() => {
        this.socketService
            .getSocketIOEvents()
            .subscribe(event => {
                // Come back into the Angular zone when there is a callback from the Observable
                this.ngZone.run(() => {
                    this.handleEvent(event);
                });
            });
    });
}
This way Protractor doesn't hang waiting on the socket.

Related

How do I gracefully disconnect MongoDB in Google Functions? Behavior of "normal" Cloud Run and "Functions Cloud Run" seems to be different

In a normal Cloud Run something like the following seems to properly close a Mongoose/MongoDB connection.
const cleanup = async () => {
    await mongoose.disconnect()
    console.log('database | disconnected from db')
    process.exit()
}

const shutdownSignals = ['SIGTERM', 'SIGINT']
shutdownSignals.forEach((sig) => process.once(sig, cleanup))
But for a Cloud-Functions-managed Cloud Run this seems not to be the case. The instances shut down without waiting the usual 10 s that "normal" Cloud Run services give after the SIGTERM is sent, so I never see the database | disconnected from db log.
How would one go about this? I don't want to create a connection for every single Cloud Functions call (very wasteful in my case).
Well, here is what I went with for now:
import mongoose from 'mongoose'
import { Sema } from 'async-sema'
import functions from '@google-cloud/functions-framework' // assumed import; adjust to your setup

// Shared across invocations: how many handlers currently need the connection
const state = {
    num: 0,
    sema: new Sema(1),
}

functions.cloudEvent('someCloudFunction', async (event) => {
    await connect()
    // actual computation here
    await disconnect()
})

export async function connect() {
    await state.sema.acquire()
    if (state.num === 0) {
        try {
            // MONGO_DB_URL is defined elsewhere (e.g. an environment variable)
            await mongoose.connect(MONGO_DB_URL)
        } catch (e) {
            process.exit(1)
        }
    }
    state.num += 1
    state.sema.release()
}

export async function disconnect() {
    await state.sema.acquire()
    state.num -= 1
    if (state.num === 0) {
        await mongoose.disconnect()
    }
    state.sema.release()
}
As one can see, I used a kind of "reference counting" for the handlers that want to use the connection, and ensured proper concurrency with async-sema.
I should note that this works well with the setup I have; I allow many concurrent requests to one of my Cloud Functions instances. In other cases this solution might not improve over just opening up (and closing) a connection every single time the function is called. But as https://cloud.google.com/functions/docs/writing/write-event-driven-functions#termination seems to imply, everything has to be handled inside the cloudEvent function.

Is it necessary to close a Mongodb Change Stream?

I coded the following Node/Express/Mongo script:
const { MongoClient } = require("mongodb");
const stream = require("stream");

async function main() {
    // CONNECTING TO LOCALHOST (REPLICA SET)
    const client = new MongoClient("mongodb://localhost:27018");
    try {
        // CONNECTION
        await client.connect();
        // EXECUTING MY WATCHER
        console.log("Watching ...");
        await myWatcher(client, 15000);
    } catch (e) {
        // ERROR MANAGEMENT
        console.log(`Error > ${e}`);
    } finally {
        // CLOSING CLIENT CONNECTION ???
        await client.close(); // <-- ????
    }
}

main().catch(console.error);

// MY WATCHER. LISTENING FOR CHANGES ON MY DATABASE
async function myWatcher(client, timeInMs, pipeline = []) {
    // TARGET TO WATCH
    const watching = client.db("myDatabase").collection("myCollection").watch(pipeline);
    // WATCHING CHANGES ON TARGET
    watching.on("change", (next) => {
        console.log(JSON.stringify(next));
        console.log(`Doing my things...`);
    });
    // CLOSING THE WATCHER ???
    closeChangeStream(timeInMs, watching); // <-- ????
}

// CHANGE STREAM CLOSER
function closeChangeStream(timeInMs = 60000, watching) {
    return new Promise((resolve) => {
        setTimeout(() => {
            console.log("Closing the change stream");
            watching.close();
            resolve();
        }, timeInMs);
    });
}
So, the goal is to keep the myWatcher function always active, to watch for any database changes and, for example, send a user notification when some update is detected. The closeChangeStream function closes the change stream X seconds after it was opened. So, to keep myWatcher always active, do you recommend not using the closeChangeStream function?
Another thing. With this goal in mind, if I keep the await client.close(), my code emits an error: Topology is closed, and when I omit await client.close(), my code works perfectly. Do you recommend not using await client.close() so that myWatcher stays active?
I'm a newbie in these topics, thanks for the advice and the help!
MongoDB change streams are implemented in a pub/sub paradigm.
Send your application to a friend in Sudan. Have both you and your friend run the application (that has the change stream implemented). If you open up mongosh and run db.getCollection('myCollection').updateOne({_id: ObjectId("6220ee09197c13d24a7997b7")}, {$set: {FirstName: "Bob"}});, both you and your friend will get the console.log from the change stream.
This is assuming you're not running localhost, but you can simulate this with two copies of the application locally.
The issue comes when you go into production and suddenly you have 200 load-balanced servers, 5 developers, etc. running, and your watch fires for a ton of writes around the globe.
I believe the practice is to functionize it: wrap your watch in a function and fire the function when you're about to do a write (and close it after you do your associated writes), as in the sketch below.
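A minimal sketch of that idea, reusing the asker's myDatabase/myCollection names (the watchDuringWrite helper is hypothetical):

// Hypothetical helper: open a change stream only around your own writes
async function watchDuringWrite(client, writeFn) {
    const collection = client.db("myDatabase").collection("myCollection");
    const changeStream = collection.watch();
    changeStream.on("change", (next) => {
        console.log(`Change seen while writing: ${JSON.stringify(next)}`);
    });
    try {
        // Perform the associated write(s) while the stream is open
        await writeFn(collection);
    } finally {
        // ...then close the stream so it doesn't stay open indefinitely
        await changeStream.close();
    }
}

// Usage (illustrative):
// await watchDuringWrite(client, (coll) =>
//     coll.updateOne({ name: "Alice" }, { $set: { FirstName: "Bob" } }));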

SignalR Core - Error: Websocket closed with status code: 1006

I use SignalR in an Angular app. When I destroy a component in Angular I also want to stop the connection to the hub. I use the command:
this.hubConnection.stop();
But I get an error in the Chrome console:
Websocket closed with status code: 1006
And in Edge: ERROR Error: Uncaught (in promise): Error: Invocation canceled due to connection being closed.
It actually works and the connection is stopped, but I would like to know why I get the error.
This is how I start the hub:
this.hubConnection = new HubConnectionBuilder()
    .withUrl("/matchHub")
    .build();

this.hubConnection.on("MatchUpdate", (match: Match) => {
    // some magic
});

this.hubConnection
    .start()
    .then(() => {
        this.hubConnection.invoke("SendUpdates");
    });
EDIT
I finally found the issue. It's caused by change streams from Mongo. If I remove the code from the SendUpdates() method, then OnDisconnected is triggered.
public class MatchHub : Hub
{
    private readonly IMatchManager matchManager;

    public MatchHub(IMatchManager matchManager)
    {
        this.matchManager = matchManager;
    }

    public async Task SendUpdates() {
        using (var changeStream = matchManager.GetChangeStream()) {
            while (changeStream.MoveNext()) {
                var changeStreamDocument = changeStream.Current.FullDocument;
                if (changeStreamDocument == null) {
                    changeStreamDocument = BsonSerializer.Deserialize<Match>(changeStream.Current.DocumentKey);
                }
                await Clients.Caller.SendAsync("MatchUpdate", changeStreamDocument);
            }
        }
    }

    public override async Task OnDisconnectedAsync(Exception exception)
    {
        await base.OnDisconnectedAsync(exception);
    }
}
Method GetChangeStream from the manager:
ChangeStreamOptions options = new ChangeStreamOptions() { FullDocument = ChangeStreamFullDocumentOption.UpdateLookup };
var watch = mongoDb.Matches.Watch(options).ToEnumerable().GetEnumerator();
return watch;
But I don't know how to fix it.
This can happen for many reasons, but I think it is most likely this one:
"I think this is because of how the server is handling the connected/disconnected events. I can't say for sure, but the connection closing needs to be handled correctly on the server with code as well. Try overriding the built-in OnConnected/OnDisconnected methods on the server and see. My assumption is that you're closing it, but the server isn't closing properly and is therefore not relaying the proper closed response."
Found as a comment at: getting the reason why websockets closed with close code 1006.
You don't need to change the connection/disconnection code, because everything works fine; but as an answer this one is the most likely.
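If you want to inspect the close reason on the client side, a small sketch (using the onclose callback of the @microsoft/signalr TypeScript client) would be:

// Sketch: log why the connection closed (error is undefined on a clean stop)
this.hubConnection.onclose((error?: Error) => {
    if (error) {
        console.error('Connection closed with error:', error);
    } else {
        console.log('Connection closed cleanly.');
    }
});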
It throws the error because the callback doesn't get cleared properly, and that is caused by the data returned from the websocket. Normally the final response comes back in one piece; however, for some reason it can come back with the very last response broken into two pieces (the original answer included screenshots of both cases), and that causes the issue.
I don't think there is a way to bypass this without changing the source code.
I reported this on the GitHub repo as well. It turns out that I can just use the invocation response to notify the client to stop the hub, so it doesn't trigger the racing issue.
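A minimal sketch of that workaround: wait for the pending invocation's response before calling stop(), so stop() never cancels an in-flight invocation (SendUpdates is the asker's hub method):

// Sketch: stop the connection only after the invocation has completed
this.hubConnection
    .invoke("SendUpdates")                 // resolves with the invocation response
    .then(() => this.hubConnection.stop()) // safe to stop now
    .catch(err => console.error(err));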

Listening for Electron's ipcRenderer message inside a Vue component

Currently, I'm using Vue inside an Electron application. Inside a Vue master component there are possibly multiple children. Each child listens for a signal that might be broadcast by Electron's main process, like so:
export default {
    // ...
    created() {
        ipcRenderer.on('set-service-status', (e, data) => {
            // something with the data
        })
    }
    // ...
}
However, when there are more than 11 child components, Node throws the error MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 set-service-status listeners added. Use emitter.setMaxListeners() to increase limit. This makes sense, since multiple event listeners are being set up, one for every component.
How could this be solved? Should I just listen for the set-service-status signal inside the master component and then use Vue's eventing system to broadcast the message further down to the children? Or is there a better way to deal with this?
As I understand it, the problem with your current setup is that you start listening each time a component is created, and this causes the problem of having many listeners for one IPC event.
Instead of listening in created(), put this logic inside your Vuex store and call it only once. Or you can still use created() in your entry file, the main root component, and give the data to your child components as props (see the sketch after the Vuex example below). That also works.
For example:
function setupIpc(dispatch) {
    ipcRenderer.on('set-service-status', (e, data) => {
        // something with the data
    })

    ipcRenderer.on('fullscreenChanged', (e, args) => {
        dispatch('fullscreenHandler', args)
    })

    ipcRenderer.send('ipcReady')
}
and call it only once, when you start the application:
updateState({ commit, dispatch }) {
    setupIpc(dispatch)
    setInterval(() => { dispatch('stateSaveImmediate') }, 5000)
    dispatch('init')
    ipcRenderer.once('configGet', (e, data) => {
        if (data !== null && data !== undefined) {
            commit(ActionTypes.UPDATE_STATE, data)
        } else {
            commit(ActionTypes.UPDATE_STATE_ERROR_NO_CONFIG_FILE)
        }
        dispatch('doSomething')
    })
    ipcRenderer.send('configGet')
},
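For completeness, a minimal sketch of the props alternative mentioned above (serviceStatus and the service-panel child are illustrative names):

// Root component: register the IPC listener once and pass data down as a prop
export default {
    data() {
        return { serviceStatus: null }
    },
    created() {
        ipcRenderer.on('set-service-status', (e, data) => {
            this.serviceStatus = data
        })
    }
    // Template (illustrative): <service-panel :status="serviceStatus" />
}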

Want to indefinitely Observe an Array that changes over time

I am trying to use RxJS Observables to watch for changes in my array, with no luck.
Imports:
import {Observable} from 'rxjs/Observable';
import 'rxjs/add/observable/of';
In my main service in Angular 2 I am getting an array from a socket.io server that changes when users connect or disconnect, and I set the data after every change.
I know userList is updating when the socket emits, but for some reason I can't figure out how to continuously observe this change in my component.
Main Service:
socket.on('get users', (data) => {
    this.userList = data;
});
Function in Main Service - getUsers():
getUsers() {
    return Observable.of(this.userList);
}
I am trying to both subscribe to the userList variable and use an async pipe, but neither is updating; they only work the first time and then stop.
How do I make it observe the changes indefinitely?
MainService
import { Subject } from 'rxjs/Subject';

userList: Subject<any> = new Subject<any>();
userList$ = this.userList.asObservable();

socket.on('get users', (data: any) => {
    this.userList.next(data);
});
In Component
this.mainService.userList$
    .subscribe(
        (data: any) => console.log(data)
    );
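For context on why the original approach fires only once: Observable.of(this.userList) emits the current array reference a single time at subscription and then completes, so later reassignments of userList are never pushed to subscribers; the Subject above pushes every socket event instead. If you prefer the async pipe, a sketch along these lines should work (users$ and the template markup are illustrative):

// Component class: expose the stream for the template
users$ = this.mainService.userList$;

// Template (illustrative):
// <ul><li *ngFor="let user of users$ | async">{{ user }}</li></ul>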