I'm using *mgo.Session from the labix_mgo MongoDB driver for Go, but I don't know how to tell whether a session has been closed. Using a closed session raises a runtime error. I want to skip copying the session if I know it is closed.
First, the mgo driver you are using, gopkg.in/mgo.v2 (hosted at https://github.com/go-mgo/mgo), is not maintained anymore. Use the community-supported fork github.com/globalsign/mgo instead; it has a backward-compatible API.
mgo.Session does not provide a way to detect whether it has been closed (which is done via its Session.Close() method).
But you shouldn't depend on others closing the session you are using. The same code that obtains a session should be responsible for closing it. Follow this simple principle and you won't run into the problem of using a closed session.
Taking a web server as an example: obtain a session using Session.Copy() (or Session.Clone()) at the beginning of the request, and close the session (preferably with defer) in the same handler, in the same function. Just pass this session along to whoever needs it. They don't have to close it; they mustn't, as that's the responsibility of the function that created it.
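A minimal sketch of that pattern, assuming a base session dialed once at startup (the URL, handler, and names are illustrative):

package main

import (
    "log"
    "net/http"

    "github.com/globalsign/mgo"
)

// rootSession is the base session, dialed once at startup.
var rootSession *mgo.Session

func myHandler(w http.ResponseWriter, r *http.Request) {
    // Obtain a fresh copy for this request...
    sess := rootSession.Copy()
    // ...and close it in the same function that created it.
    defer sess.Close()

    // Pass sess to whoever needs it; callees must never close it.
    // e.g. sess.DB("mydb").C("messages").Find(nil)
}

func main() {
    var err error
    if rootSession, err = mgo.Dial("localhost"); err != nil {
        log.Fatal(err)
    }
    http.HandleFunc("/", myHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}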
I'm using Grails 4 to develop my backend, and I want to control how connections to my MongoDB are logged. Right now, nothing is logged (at least not unless the connection fails). There seems to be a lot going on under the hood, and the whole process of connecting to my database is very much hidden. The main bean that takes care of this seems to be called mongoDatastore; is there an easy way to, for example, register a listener for connection events on this bean? Or do I have to extend MongoDatastore and register my own bean?
I also had the idea of using the applicationContext to fetch the bean and somehow register an event listener on it from there, but I don't know when or where in the initialization phase I would do that.
All MongoDB 4.4-compatible drivers publish CMAP events that the application can subscribe to. These tell you when individual connections are opened and closed, as well as how the pool behaves.
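For example, with a recent MongoDB Java driver (which Grails uses under the hood), a connection pool listener could be sketched as follows. How you hand the customized MongoClientSettings to the mongoDatastore bean is plugin-specific, so treat this as an outline:

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.event.ConnectionClosedEvent;
import com.mongodb.event.ConnectionCreatedEvent;
import com.mongodb.event.ConnectionPoolListener;

public class LoggingPoolListener implements ConnectionPoolListener {
    @Override
    public void connectionCreated(ConnectionCreatedEvent event) {
        System.out.println("Connection created: " + event.getConnectionId());
    }

    @Override
    public void connectionClosed(ConnectionClosedEvent event) {
        System.out.println("Connection closed: " + event.getConnectionId());
    }

    // Build a client whose pool publishes events to the listener above.
    public static MongoClient buildClient() {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost"))
                .applyToConnectionPoolSettings(pool ->
                        pool.addConnectionPoolListener(new LoggingPoolListener()))
                .build();
        return MongoClients.create(settings);
    }
}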
I'm new to Akka and Scala and am teaching myself in order to build a small project with WebSockets. The end goal is simple: make a basic chat server that publishes and subscribes to messages on some webpage.
In fact, after perusing their docs, I already found the pages that are relevant to my goal, namely this and this.
Using dynamic junctions (aka MergeHub & BroadcastHub) and the Flow.fromSinkAndSource() method, I was able to achieve a very basic version of what I wanted. We can even get a kill switch using the example from the Akka docs, which I have shown below:
private lazy val connHub: Flow[Message, Message, UniqueKillSwitch] = {
  // Materialize the hub pair once; every connection shares this sink and source.
  val (sink, source) = MergeHub.source[Message].toMat(BroadcastHub.sink[Message])(Keep.both).run()
  // Couple them into a single Flow and attach a kill switch per materialization.
  Flow.fromSinkAndSourceCoupled(sink, source).joinMat(KillSwitches.singleBidi[Message, Message])(Keep.right)
}
However, I now see one issue. The above will return a Flow that will be used by Akka's websocket directive: akka.http.scaladsl.server.Directives.handleWebSocketMessages(/* FLOW GOES HERE */)
That means the Akka code itself will materialize this flow for me, as long as I provide the handler.
But let's say I wanted to arbitrarily kill one user's connection through a KillSwitch (maybe because their session has expired in my application). While a user's websocket would be wired up through the above handler, my code never explicitly materializes that flow, so I won't get access to a KillSwitch. Therefore, I can't kill the connection; only the user can, by leaving the webpage.
It's strange to me that the docs would mention the kill switch method without showing how I would get one using the websocket api.
Can anyone suggest a solution as to how I could obtain the kill switch per connection? Do I have a fundamental misunderstanding of how this should work?
Thanks in advance.
I'm very happy to say that after a lot of time, research, and coding, I have an answer for this question. In order to do this, I had to post in the Akka Gitter as well as the Lightbend discussion forum. Please refer to the amazing answer I got there for some perspective on the problem and some solutions. I'll summarize that here.
In order to get the UniqueKillSwitch from the code that I was using, I needed to use the mapMaterializedValue() method on the Flow that I was returning. Here is the code I'm now using to return a Flow to the handleWebSocketMessages directive:
// Note: state will not be updated properly if cancellation events come through
// from the client side, as the user -> killswitch mapping may still remain in
// the concurrent map even though the connection is closed.
Flow.fromSinkAndSourceCoupled(mergeHubSink, broadcastHubSource)
  .joinMat(KillSwitches.singleBidi[Message, Message])(Keep.right)
  .mapMaterializedValue { killSwitch =>
    // Side effect: store the kill switch once materialization produces it.
    connections.put(user, killSwitch)
    NotUsed.notUsed()
  }
The above code lives in a Chatroom class I've created, which has access to the MergeHub sink and BroadcastHub source, as well as to a concurrent hash map that maps each user to their kill switch. We can now obtain the kill switch for a connection by looking it up in that map, and call switch.shutdown() to kill the user's connection from the server side.
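For illustration, a minimal sketch of that lookup, assuming connections is a ConcurrentHashMap keyed by user ID (these names are mine, not from the original code):

import java.util.concurrent.ConcurrentHashMap
import akka.stream.UniqueKillSwitch

val connections = new ConcurrentHashMap[String, UniqueKillSwitch]()

// Called when, e.g., a user's session expires on the application side.
def expireSession(user: String): Unit =
  // remove() returns null when no connection is registered for the user.
  Option(connections.remove(user)).foreach(_.shutdown())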
My main issue was that I originally thought I could get the switch directly even though I didn't control the materialization. This doesn't seem possible. I suggest this method for when you know that the caller that requires your Flow doesn't care about the materialized value (aka the kill switch).
Please reference the answer I've linked for more scenarios and ways to handle this problem.
I implemented a UWP server socket following the sample here, and it works correctly.
Now I want to make the app able to continuously accept requests, but I expect that when the app is suspended and a client sends a request, the server will not be able to respond. If I am correct, what is the best way to avoid this state change? If possible, I would prefer a solution with Extended Execution over implementing a Background Task, but I don't know whether the following code in the OnSuspending method is enough to keep the app in the Running state:
var newSession = new ExtendedExecutionSession();
newSession.Reason = ExtendedExecutionReason.Unspecified;
newSession.Revoked += SessionRevoked;
I saw people calling a "LongRunningWork()" function in other samples, but in my case the code to execute is already defined in the code-behind of the view, as shown in the link above, so I would simply like to keep the app running at all times. Keep in mind that it is a LOB application, so I don't have Store limits.
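For what it's worth, the snippet above does nothing on its own, because the session is never requested via RequestExtensionAsync(). A hedged sketch of the canonical pattern in OnSuspending, where RunServerLoopAsync() is a hypothetical stand-in for the socket-serving code in your code-behind, might look like this:

using Windows.ApplicationModel;
using Windows.ApplicationModel.ExtendedExecution;

private async void OnSuspending(object sender, SuspendingEventArgs e)
{
    // Hold the deferral so suspension doesn't complete while we work.
    var deferral = e.SuspendingOperation.GetDeferral();
    using (var newSession = new ExtendedExecutionSession())
    {
        newSession.Reason = ExtendedExecutionReason.Unspecified;
        newSession.Revoked += SessionRevoked;

        var result = await newSession.RequestExtensionAsync();
        if (result == ExtendedExecutionResult.Allowed)
        {
            // Keep serving socket requests for as long as the OS allows.
            await RunServerLoopAsync(); // hypothetical placeholder
        }
    }
    // Completing the deferral tells the OS suspension may proceed.
    deferral.Complete();
}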
I'm very new to Node.js and asynchronous programming and have a challenging question. I want to fork a process from Node and then stream that process's output back to the browser over WebSockets, specifically with the socket.io library. What is the best and most robust way to handle this?
The data isn't mission-critical; it's just for updating the user on status. So if they leave the page, the socket can close and the child process can continue to run. It'd also be neat if there were some way to access the socket via a specific URL in Express and come back to it later (but that may be another day's work).
Use the Redis Store support of socket.io:
var RedisStore = require('socket.io').RedisStore;
var io = require('socket.io').listen(app);
io.set('store', new RedisStore());
With this store, socket.io uses a Redis server to store the data and the events, which also lets multiple Node processes share the same socket.io state.
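To tie this back to the question, a rough sketch of spawning the child process and streaming its output over socket.io might look like the following (untested; the command, port, and event names are illustrative):

var express = require('express');
var spawn = require('child_process').spawn;

var app = express();
var server = require('http').createServer(app);
var io = require('socket.io').listen(server);

io.sockets.on('connection', function (socket) {
  // Spawn the long-running job and stream its output to this client.
  var child = spawn('long-running-task', ['--some-flag']);

  child.stdout.on('data', function (data) {
    socket.emit('status', data.toString());
  });

  socket.on('disconnect', function () {
    // Intentionally not killing the child: it keeps running
    // even after the browser goes away.
  });
});

server.listen(3000);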
1) What is the default lifetime of the session returned by the SugarCRM login REST call?
2) Is storing the session considered good practice?
Please advise.
The session lifetime is the same as the PHP session lifetime on the server, which can be controlled somewhat via the session.gc_maxlifetime directive in your php.ini file.
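For example, in php.ini (1440 seconds, i.e. 24 minutes, is PHP's default):

; sessions become eligible for garbage collection after this many seconds
session.gc_maxlifetime = 1440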
When you say "storing" the session, do you mean trying to use it across multiple scripts? I'm not sure there is a good reason to do that, mainly because of the weirdness of how PHP garbage-collects sessions. I would initialize a session for each script, or at the very least check whether your session is still valid on each call to see if you need to re-init or not.
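As a sketch of that validity check (restCall() and login() are hypothetical helpers around SugarCRM's REST endpoint, so treat this as an outline rather than working code):

<?php
// Hypothetical helpers: restCall() POSTs a method to the SugarCRM REST
// endpoint and decodes the JSON reply; login() performs a fresh login
// and returns the new session id.
function ensureSession($sessionId) {
    // get_user_id is a cheap probe to see whether the session is still alive.
    $resp = restCall('get_user_id', array('session' => $sessionId));
    if (isset($resp['name']) && $resp['name'] === 'Invalid Session ID') {
        return login();
    }
    return $sessionId;
}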