In the Ruby standard library, popen returns an object that has a pid method.
In EventMachine, popen returns a Connection object that contains no process ID information.
Demo code:
require 'eventmachine'
def work
  s = EM.popen('ls')
  puts s.pid # how to get new process's pid?
end

EventMachine.run {
  work
  EM.stop
}
As EM.popen returns a standard EM Connection object, you can use s.get_pid to get the pid of the subprocess.
http://eventmachine.rubyforge.org/EventMachine/Connection.html#get_pid-instance_method
I have a Java agent which loops through a view and gets the attachment from each document. The attachment is nothing but a .dxl file containing the document's XML data. I am extracting the file to a temp directory and trying to import the extracted .dxl as soon as it is extracted.
The problem is that it only imports the first document's attachment in the loop and then throws this error in the Java debug console:
NotesException: DXL importer operation failed
at lotus.domino.local.DxlImporter.importDxl(Unknown Source)
at JavaAgent.NotesMain(Unknown Source)
at lotus.domino.AgentBase.runNotes(Unknown Source)
at lotus.domino.NotesThread.run(Unknown Source)
My Java agent code is:
public class JavaAgent extends AgentBase {
    static DxlImporter importer = null;
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            // (Your code goes here)
            // Get current database
            Database db = agentContext.getCurrentDatabase();
            View v = db.getView("DXLProcessing_mails");
            DocumentCollection dxl_tranfered_mail = v.getAllDocumentsByKey("dxl_tranfered_mail");
            Document dxlDoc = dxl_tranfered_mail.getFirstDocument();
            while(dxlDoc!=null){
                RichTextItem rt = (RichTextItem) dxlDoc.getFirstItem("body");
                Vector allObjects= rt.getEmbeddedObjects();
                System.out.println("File name is "+ allObjects.get(0));
                EmbeddedObject eo = dxlDoc.getAttachment(allObjects.get(0).toString());
                if(eo.getFileSize()>0){
                    eo.extractFile(System.getProperty("java.io.tmpdir") + eo.getName());
                    System.out.println("Extracted File to "+System.getProperty("java.io.tmpdir") + eo.getName());
                    String filePath = System.getProperty("java.io.tmpdir") + eo.getName();
                    Stream stream = session.createStream();
                    if (stream.open(filePath) & (stream.getBytes() >0)) {
                        System.out.println("In If"+System.getProperty("java.io.tmpdir"));
                        importer = session.createDxlImporter();
                        importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
                        System.out.println("Break Point");
                        importer.importDxl(stream,db);
                        System.out.println("Imported Sucessfully");
                    }else{
                        System.out.println("In else"+stream.getBytes());
                    }
                }
                dxlDoc = dxl_tranfered_mail.getNextDocument();
            }
        } catch(Exception e) {
            e.printStackTrace();
        }
    }
}
The code executes until it prints "Break Point" and then throws the error, although the attachment does get imported the first time.
On the other hand, if I hard-code filePath to a specific .dxl file on the file system, it imports the DXL as a document into the database with no errors.
I am wondering whether the stream that is passed in never gets completed before the next loop iteration executes.
Any suggestions would be helpful.
I can't see any part where your while loop would move on from the first document.
Usually you would have something like:
Document nextDoc = dxl_tranfered_mail.getNextDocument(dxlDoc);
dxlDoc.recycle();
dxlDoc = nextDoc;
near the end of the loop to advance it to the next document. As your code currently stands, it looks like it never advances and always stays on the first document.
If you do not know about the need to 'recycle' Domino objects, I suggest you search for some blog posts/articles that explain why it is necessary.
It is a little complicated, but basically the Java objects are just a 'wrapper' for the objects in the C API.
Whenever you create a Domino object (such as a Document, View, DocumentCollection etc.), a memory handle is allocated in the underlying 'C' layer. This needs to be released (or recycled), and it will eventually be released when the session is recycled; however, when you are processing in a loop it is much more important to recycle, as you can easily exhaust the available memory handles and cause a crash.
Also, it's possible you may need to close (and recycle) each Stream after you are finished importing each file.
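For example, the tail end of each iteration might look roughly like this; this is only a sketch that reuses the stream and importer variables from the question:
importer.importDxl(stream, db);
stream.close();      // close the file stream before moving on to the next document
stream.recycle();    // release the backend handle held by the stream
importer.recycle();  // release the importer's handle as well
importer = null;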
Lastly, double check that the extracted file that is causing the exception is definitely a valid DXL file; it could simply be that some of the attachments are not valid DXL and will always throw an exception.
You could put a try/catch within the loop to handle that scenario (and report the problem files), which will allow the agent to continue without halting.
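A rough sketch of that, reusing the names from the agent above (not tested):
try {
    importer = session.createDxlImporter();
    importer.setDocumentImportOption(DxlImporter.DXLIMPORTOPTION_CREATE);
    importer.importDxl(stream, db);
    System.out.println("Imported " + filePath);
} catch (NotesException ne) {
    // report the problem file and carry on with the next document instead of halting
    System.out.println("Import failed for " + filePath + ": " + ne.text);
}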
If I start a process using Scala's Process/ProcessBuilder, how can I get the pid of the process that was created?
I could not find any mention of the pid in the official docs:
http://www.scala-lang.org/api/2.10.4/index.html#scala.sys.process.Process
http://www.scala-lang.org/api/2.10.4/index.html#scala.sys.process.ProcessBuilder
http://www.scala-lang.org/api/2.10.4/index.html#scala.sys.process.package
2016: same question; I've been clicking through related questions for a few minutes, but still couldn't find any solution that is generally agreed upon. Here is a Scala version inspired by LRBH10's Java code in the answer linked by wingedsubmariner:
import scala.sys.process.Process

def pid(p: Process): Long = {
  val procField = p.getClass.getDeclaredField("p")
  procField.synchronized {
    procField.setAccessible(true)
    val proc = procField.get(p)
    try {
      proc match {
        case unixProc
          if unixProc.getClass.getName == "java.lang.UNIXProcess" => {
          val pidField = unixProc.getClass.getDeclaredField("pid")
          pidField.synchronized {
            pidField.setAccessible(true)
            try {
              pidField.getLong(unixProc)
            } finally {
              pidField.setAccessible(false)
            }
          }
        }
        // If someone wants to add support for Windows processes,
        // this would be the right place to do it:
        case _ => throw new RuntimeException(
          "Cannot get PID of a " + proc.getClass.getName)
      }
    } finally {
      procField.setAccessible(false)
    }
  }
}

// little demo
val proc = Process("echo 'blah blah blaaah'").run()
println(pid(proc))
WARNING: the scala code runner is essentially just a bash script, so when you use it to launch Scala programs, it will do a thousand things before actually starting the java process. Therefore, the PID of the java process that you are actually interested in will be much larger than what the above code snippet returns. So this method is essentially useless if you start your processes with scala. Use java directly, and explicitly add the Scala library to the classpath.
The scala.sys.process classes are wrappers around the Java classes for starting processes, and unfortunately it is difficult to obtain the PID from this API. See the Stack Overflow question How to get PID of process I've just started within java program?.
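As an aside, if you can run on Java 9 or later (which postdates the Scala 2.10-era setup discussed above), the reflection hack is unnecessary, because the pid is exposed directly; a minimal Java sketch:
// Java 9+ only: Process.pid() and ProcessHandle expose the pid without reflection
public class PidDemo {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("echo", "blah blah blaaah").start();
        System.out.println("child pid: " + p.pid());
        System.out.println("own pid: " + ProcessHandle.current().pid());
    }
}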
I am using the Motor driver to connect to MongoDB.
Below is the code to insert data into a collection:
client = motor.MotorClient('mongodb://localhost:27017').open_sync()
conn = client['database']['collection']
result = conn.insert({'foo': 'bar'})
print 'result:', result
The insert statement always returns None.
This is not a Tornado application.
Can Motor only be used with Tornado?
If not, why is the insert returning None?
You use Motor just like PyMongo, but Motor is asynchronous: that means that when your print is executed, the db request may not have finished yet.
Furthermore, Motor's insert does not return anything; you need to pass a callback function as a second argument instead. Cf. the differences between PyMongo and Motor, and the Motor tutorial on how to insert a document.
In your case, the right way of doing this would be:
def once_done(result, error):
    if error:
        print 'error:', error
    else:
        print 'result:', result

client = motor.MotorClient('mongodb://localhost:27017').open_sync()
conn = client['database']['collection']
conn.insert({'foo': 'bar'}, callback=once_done)
I guess the WriteConcern is not set by the client driver.
If you set it to safe=true, then you will get the status of the insert operation. Otherwise, with safe=false, the insert operation is fire-and-forget.
You can try:
motor.MotorClient('mongodb://localhost:27017/?safe=true')
I'm trying to get a clean and graceful shutdown, and for some reason it won't execute. I've tried:
sys addShutdownHook{
  logger.warn("SHUTTING DOWN...")
  // irrelevant logic here...
}
and also:
Runtime.getRuntime.addShutdownHook(ThreadOperations.delayOnThread{
  logger.warn("SHUTTING DOWN...")
  // irrelevant logic here...
}
)
where ThreadOperations.delayOnThread is defined as:
object ThreadOperations {
  def startOnThread(body: =>Unit) : Thread = {
    onThread(true, body)
  }
  def delayOnThread(body: =>Unit) : Thread = {
    onThread(false, body)
  }
  private def onThread(runNow : Boolean, body: =>Unit) : Thread = {
    val t = new Thread {
      override def run = body
    }
    if(runNow){t.start}
    t
  }
  // more irrelevant operations...
}
But when I run my program (executable jar, double activation), the hook does not start. So what am I doing wrong? What is the right way to add a shutdown hook in Scala? Is it in any way related to the fact that I'm using double activation?
Double activation is done like this:
object Gate extends App {
  val givenArgs = if(args.isEmpty){
    Array("run")
  }else{
    args
  }
  val jar = Main.getClass.getProtectionDomain().getCodeSource().getLocation().getFile;
  val dir = jar.dropRight(jar.split(System.getProperty("file.separator")).last.length + 1)
  val arguments = Seq("java", "-cp", jar, "boot.Main") ++ givenArgs.toSeq
  Process(arguments, new java.io.File(dir)).run();
}
(Scala version: 2.9.2)
Thanks.
In your second attempt, your shutdown hook seems to just create a thread and never start it (so it just gets garbage collected and does nothing). Did I miss something? (EDIT: yes I did, see comment. My bad.)
In the first attempt, the problem might just be that the underlying log does some caching, and the application exits before the log is flushed.
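One quick way to check that theory is to have the hook write to something unbuffered such as stderr; a minimal plain-Java sketch of such a check (illustrative only, not from the original post):
// if this line appears on stderr when the JVM exits, the hook mechanism itself works,
// and the missing output is down to log buffering (or to signalling the wrong process)
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        System.err.println("SHUTTING DOWN...");
    }
});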
Solved it.
For some reason, I thought that run, as opposed to !, would detach the process. It actually hangs around because there are open streams left to the Process that run returns (or maybe it hangs for some other reason, because exec doesn't hang, yet it also returns a Process with open streams to and from the child process, much like run). For this reason, the original process was still alive, and I accidentally sent the signals to it. Of course, it did not contain a handler, or a shutdown hook, so nothing happened.
The solution was to use Runtime.getRuntime.exec(arguments.toArray) instead of Process(arguments, new java.io.File(dir)).run();, close the streams in the Gate object, and send the ^C signal to the right process.
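In plain Java terms, the fixed launch step looks roughly like this; this is only a sketch, where jar and dir stand for the same values computed in the Gate object above:
// start the real JVM and close the pipes to it, so that no streams keep the launcher attached
String[] cmd = { "java", "-cp", jar, "boot.Main", "run" };  // same command Gate builds
Process child = Runtime.getRuntime().exec(cmd, null, new java.io.File(dir));
child.getOutputStream().close();  // child's stdin
child.getInputStream().close();   // child's stdout
child.getErrorStream().close();   // child's stderr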
In GTK applications, all execution takes place inside the gtk_main function, and other graphical frameworks have similar event loops, like app.exec for Qt and clutter_main for Clutter. However, ZeroMQ is based on the assumption that there is a while (1) ... loop that it is inserted into (see for instance here for examples).
How do you combine those two execution strategies?
I currently want to use ZeroMQ in a Clutter application written in C, so I would of course like direct answers for that, but please add answers for other variants as well.
The proper way to combine zmq and GTK or Clutter is to connect the file descriptor of the zmq socket to the main event loop. The fd can be retrieved by using:
int fd;
size_t sizeof_fd = sizeof(fd);
if(zmq_getsockopt(socket, ZMQ_FD, &fd, &sizeof_fd))
perror("retrieving zmq fd");
Connecting it to the main loop is a matter of using g_io_add_watch():
GIOChannel* channel = g_io_channel_unix_new(fd);
g_io_add_watch(channel, G_IO_IN|G_IO_ERR|G_IO_HUP, callback_func, NULL);
In the callback function, it is necessary to first check if there is really stuff to read, before reading. Otherwise, the function might block waiting for IO.
gboolean callback_func(GIOChannel *source, GIOCondition condition, gpointer data)
{
    uint32_t status;
    size_t sizeof_status = sizeof(status);

    while (1){
        if (zmq_getsockopt(socket, ZMQ_EVENTS, &status, &sizeof_status)) {
            perror("retrieving event status");
            return 0; // this just removes the callback, but probably
                      // different error handling should be implemented
        }
        if ((status & ZMQ_POLLIN) == 0) { // parentheses needed: == binds tighter than &
            break;
        }
        // retrieve one message here
    }
    return 1; // keep the callback active
}
Please note: this is not actually tested; I did a translation from Python+Clutter, which is what I use, but I'm pretty sure it'll work.
For reference, below is the full Python+Clutter code, which actually works.
import sys
from gi.repository import Clutter, GObject
import zmq

def Stage():
    "A Stage with a red spinning rectangle"
    stage = Clutter.Stage()
    stage.set_size(400, 400)
    rect = Clutter.Rectangle()
    color = Clutter.Color()
    color.from_string('red')
    rect.set_color(color)
    rect.set_size(100, 100)
    rect.set_position(150, 150)
    timeline = Clutter.Timeline.new(3000)
    timeline.set_loop(True)
    alpha = Clutter.Alpha.new_full(timeline, Clutter.AnimationMode.EASE_IN_OUT_SINE)
    rotate_behaviour = Clutter.BehaviourRotate.new(
        alpha,
        Clutter.RotateAxis.Z_AXIS,
        Clutter.RotateDirection.CW,
        0.0, 359.0)
    rotate_behaviour.apply(rect)
    timeline.start()
    stage.add_actor(rect)
    stage.show_all()
    stage.connect('destroy', lambda stage: Clutter.main_quit())
    return stage, rotate_behaviour

def Socket(address):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.setsockopt(zmq.SUBSCRIBE, "")
    sock.connect(address)
    return sock

def zmq_callback(queue, condition, sock):
    print 'zmq_callback', queue, condition, sock
    while sock.getsockopt(zmq.EVENTS) & zmq.POLLIN:
        observed = sock.recv()
        print observed
    return True

def main():
    res, args = Clutter.init(sys.argv)
    if res != Clutter.InitError.SUCCESS:
        return 1
    stage, rotate_behaviour = Stage()
    sock = Socket(sys.argv[2])
    zmq_fd = sock.getsockopt(zmq.FD)
    GObject.io_add_watch(zmq_fd,
                         GObject.IO_IN|GObject.IO_ERR|GObject.IO_HUP,
                         zmq_callback, sock)
    return Clutter.main()

if __name__ == '__main__':
    sys.exit(main())
It sounds like the ZeroMQ code wants simply to be executed over and over again as often as possible. The simplest way is to put the ZeroMQ code into an idle function or timeout function, and use non-blocking versions of the functions if they exist.
For Clutter, you would use clutter_threads_add_idle() or clutter_threads_add_timeout(). For GTK, you would use g_idle_add() or g_timeout_add().
The more difficult, but possibly better, way is to create a separate thread for the ZeroMQ code using g_thread_create(), and just use the while(1) construction with blocking functions as they suggest. If you do that, you will also have to find some way for the threads to communicate with each other - GLib's mutexes and async queues usually do fine.
I found that there is a Qt integration library called Zeromqt. Looking at the source, the core of the integration is the following:
ZmqSocket::ZmqSocket(int type, QObject *parent) : QObject(parent)
{
    ...
    notifier_ = new QSocketNotifier(fd, QSocketNotifier::Read, this);
    connect(notifier_, SIGNAL(activated(int)), this, SLOT(activity()));
}
...
void ZmqSocket::activity()
{
    uint32_t flags;
    size_t size = sizeof(flags);
    if(!getOpt(ZMQ_EVENTS, &flags, &size)) {
        qWarning("Error reading ZMQ_EVENTS in ZMQSocket::activity");
        return;
    }
    if(flags & ZMQ_POLLIN) {
        emit readyRead();
    }
    if(flags & ZMQ_POLLOUT) {
        emit readyWrite();
    }
    ...
}
Hence, it relies on Qt's integrated socket handling, and Clutter will not have something similar.
You can get a file descriptor for a 0MQ socket (the ZMQ_FD option) and integrate that with your event loop. I presume GTK has some mechanism for handling sockets.
This is an example in Python, using PyQt4. It's derived from a working application.
import zmq
from PyQt4 import QtCore, QtGui

class QZmqSocketNotifier( QtCore.QSocketNotifier ):
    """ Provides Qt event notifier for ZMQ socket events """
    def __init__( self, zmq_sock, event_type, parent=None ):
        """
        Parameters:
        ----------
        zmq_sock : zmq.Socket
            The ZMQ socket to listen on. Must already be connected or bound to a socket address.
        event_type : QtSocketNotifier.Type
            Event type to listen for, as described in documentation for QtSocketNotifier.
        """
        super( QZmqSocketNotifier, self ).__init__( zmq_sock.getsockopt(zmq.FD), event_type, parent )

class Server(QtGui.QFrame):
    def __init__(self, topics, port, mainwindow, parent=None):
        super(Server, self).__init__(parent)
        self._PORT = port
        # Create notifier to handle ZMQ socket events coming from client
        self._zmq_context = zmq.Context()
        self._zmq_sock = self._zmq_context.socket( zmq.SUB )
        self._zmq_sock.bind( "tcp://*:" + self._PORT )
        for topic in topics:
            self._zmq_sock.setsockopt( zmq.SUBSCRIBE, topic )
        self._zmq_notifier = QZmqSocketNotifier( self._zmq_sock, QtCore.QSocketNotifier.Read )
        # connect signals and slots
        self._zmq_notifier.activated.connect( self._onZmqMsgRecv )
        mainwindow.quit.connect( self._onQuit )

    #QtCore.pyqtSlot()
    def _onZmqMsgRecv(self):
        self._zmq_notifier.setEnabled(False)
        # Verify that there's data in the stream
        sock_status = self._zmq_sock.getsockopt( zmq.EVENTS )
        if sock_status == zmq.POLLIN:
            msg = self._zmq_sock.recv_multipart()
            topic = msg[0]
            callback = self._topic_map[ topic ]
            callback( msg )
        self._zmq_notifier.setEnabled(True)
        self._zmq_sock.getsockopt(zmq.EVENTS)

    def _onQuit(self):
        self._zmq_notifier.activated.disconnect( self._onZmqMsgRecv )
        self._zmq_notifier.setEnabled(False)
        del self._zmq_notifier
        self._zmq_context.destroy(0)
Disabling and then re-enabling the notifier in _onZmqMsgRecv is per the documentation for QSocketNotifier.
The final call to getsockopt is for some reason necessary. Otherwise, the notifier stops working after the first event. I was actually going to post a new question for this. Does anyone know why this is needed?
Note that if you don't destroy the notifier before the ZMQ context, you'll probably get an error like this when you quit the application:
QSocketNotifier: Invalid socket 16 and type 'Read', disabling...