Is there anything that provides remote procedure calls from Emacs to the outside world?
Is anyone working on a BERT, MessagePack, Thrift, or even XML-RPC server in Emacs?
Here is my work in progress using JSON to communicate with Emacs: https://github.com/tinku99/elisp_rpc
I wonder whether JSON-RPC is used for cross-language work out of the box... it seems like the specification stops short of managing the connection, which seems like half the battle.
Elnode works as an HTTP server.
It shouldn't be too hard to build a handler that receives JSON or XML or whatever you like, unpacks it and does something interesting.
Elnode includes an example handler called "insideout" that publishes the buffer list of the Emacs instance over HTTP. If you browse to http://localhost:8028/ you get an HTML page with an itemized list of the active buffers.
Starting with that you could do something interesting, I suppose. For example, you could build a handler that slurps in and emits JSON, using Edward O'Connor's json.el; a rough sketch follows.
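Here is a minimal, untested sketch of that idea, returning JSON instead of HTML in the spirit of "insideout". It assumes Elnode's elnode-http-start / elnode-http-return API and the keyword-argument form of elnode-start; the handler name and port are made up.

(require 'elnode)
(require 'json)  ; Edward O'Connor's json.el provides json-encode

(defun my-json-buffers-handler (httpcon)
  "Reply to HTTPCON with the current buffer names as a JSON array."
  (elnode-http-start httpcon 200 '("Content-Type" . "application/json"))
  (elnode-http-return
   httpcon
   (json-encode (mapcar #'buffer-name (buffer-list)))))

;; Serve it on the same port the insideout example uses.
(elnode-start 'my-json-buffers-handler :port 8028)

A handler that accepts JSON would work the same way in reverse, decoding the request body with json-read-from-string before deciding what to do.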
One issue with using Emacs as an RPC server would be the lack of threading in Emacs. The Distel library "extends Emacs Lisp with Erlang-style processes and message passing", so you can use it to provide an RPC mechanism. A while back, I wrote a number of blog posts on Distel:
Distel = Erlang-like Concurrency in Emacs
Distel = Emacs erlang-mode++
Concurrent/Parallel Programming - The Next Generation - Part 2 (the bottom of that post)
They will give you a bit of a "feel" for what it's like to use Distel in Emacs.
I also found this STOMP implementation: https://github.com/jwhitlark/Stompem/blob/master/stompem.el
I wonder how hard it would be to write a ZeroMQ or RabbitMQ implementation in Emacs.
What is the best way to transfer binary data from a plugin to the browser?
We want to play a YUV buffer received from the network in a browser tab.
Currently I am converting it to base64 and handing it over via a callback, but this is not efficient and I am seeing the issues below:
1> CPU and memory usage go up.
2> Callback events are not delivered while we are on a different browser tab; all the events arrive in one shot when we move back to our tab.
I would also like to know whether there is any way to draw the YUV frame in the browser directly from the plugin thread itself.
Thanks in advance.
NPAPI has been removed from most major browsers... the last holdout, Safari, will be removing it as of macOS Mojave. That being the case, don't expect any updates of any kind to the spec; however you're using it, it is likely a dying method.
With that said, on Windows there is a method (a super hack, really) that you can use to draw directly to the window in the browser from a native messaging extension, but it's not portable and it depends on internal implementation details. I haven't actually looked into it since I wrote that other answer (linked in this paragraph), so I don't know whether it still works.
Anyway, if you're on a browser which fully supports NPAPI then you could draw the YUV data directly to the plugin window given to you by the browser; there is an example of blitting image data in FireBreath which you could trace through.
You could also try some variation of listening on a TCP port in the plugin and connecting to it from the browser; you could easily run into some security issues there, but it is the only other method I can think of.
NPAPI simply was never designed to allow fast transfer of data between the plugin and the browser; I submitted a proposal to add that capability years ago, but it came too close to the death of NPAPI (which has essentially arrived at this point) to go anywhere. The issues you're seeing are 100% consistent with what I would expect, though... and base64 via a callback is still the best way I know.
My question is quite simple.
I encountered this sys_vm86old syscall (when reverse engineering) and I am trying to understand what it does.
I found two sources that could tell me something, but I'm still not sure I fully understand them. These sources are The Source Code and this page, which gives me the following paragraph (it's more readable directly at the link):
config GRKERNSEC_VM86
    bool "Restrict VM86 mode"
    depends on X86_32
    help
      If you say Y here, only processes with CAP_SYS_RAWIO will be able to
      make use of a special execution mode on 32bit x86 processors called
      Virtual 8086 (VM86) mode. XFree86 may need vm86 mode for certain
      video cards and will still work with this option enabled. The purpose
      of the option is to prevent exploitation of emulation errors in
      virtualization of vm86 mode like the one discovered in VMWare in 2009.
      Nearly all users should be able to enable this option.
From what I understand, it means the calling process needs CAP_SYS_RAWIO. But that doesn't help me much...
Can anybody tell me what this syscall actually does?
Thank you
The syscall is used to execute code in VM86 mode. On 32-bit x86, this mode allows you to run old real-mode code (like that present in some BIOSes) inside a protected-mode OS.
See for example the Wikipedia article on it: https://en.wikipedia.org/wiki/Virtual_8086_mode
The setting you found means you need CAP_SYS_RAWIO to invoke the syscall.
I think X11 in particular uses it to call BIOS routines for switching the video mode. There are two syscalls; the one with the "old" suffix offers fewer operations but is retained for binary (ABI) compatibility.
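For a concrete picture, here is a rough sketch (32-bit x86 only, untested, and deliberately not a working VM86 setup) of how the old syscall is reached from C; the real-mode register and memory setup is omitted:

/* Compile with -m32 on x86; sys_vm86old has no glibc wrapper. */
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>   /* SYS_vm86old (i386 only) */
#include <unistd.h>
#include <asm/vm86.h>      /* struct vm86_struct */

int main(void)
{
    struct vm86_struct vm;
    memset(&vm, 0, sizeof(vm));
    /* ...real-mode CS:IP, stack and low-memory mappings would be set up here... */

    if (syscall(SYS_vm86old, &vm) < 0)
        perror("vm86old");  /* with GRKERNSEC_VM86, a process lacking
                               CAP_SYS_RAWIO fails here */
    return 0;
}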
I'm writing a major mode for a language and I want to offer completion after '.', or following a keypress. Completions are determined by sending a request to a background process using process-send-string and set-process-filter. The returned list of completions depends on the background process parsing the current state of the file and being asked to give completions at that particular point.
I have tried using the popular autocomplete package, but it is really not written with this use case in mind. This is partly because it offers automatic suggestions (i.e. without a keypress), which is a nice feature that I don't need. The function you give it to call needs to run synchronously, while Emacs process control is asynchronous. I have coded something up along the lines of https://github.com/Golevka/emacs-clang-complete-async, but it doesn't feel robust at all and has been very fiddly to get working.
I like the menus used in autocomplete, and would like to know what would fit best with my use case, preferably while also looking nice.
You can wait synchronously for a background process's output with accept-process-output. You might like to take a look at gud-gdb-completions and gud-gdb-run-command-fetch-lines in a recent enough version of gud.el.
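A rough, untested sketch of that pattern follows; the process variable, the request format and the trailing-newline end-of-reply convention are all hypothetical and depend on your backend's protocol.

(defvar my-completion-proc nil
  "Process object for the running completion backend.")

(defvar my-completion-reply nil
  "Accumulated output from the backend for the current request.")

(defun my-completion-filter (_proc output)
  "Collect OUTPUT sent back by the completion backend."
  (setq my-completion-reply (concat my-completion-reply output)))

(defun my-fetch-completions (request)
  "Send REQUEST to the backend and wait synchronously for its reply."
  (setq my-completion-reply nil)
  (set-process-filter my-completion-proc #'my-completion-filter)
  (process-send-string my-completion-proc request)
  ;; Block, with a one-second timeout per read, until the reply ends in a newline.
  (while (not (and my-completion-reply
                   (string-suffix-p "\n" my-completion-reply)))
    (unless (accept-process-output my-completion-proc 1)
      (error "Completion backend timed out")))
  (split-string my-completion-reply "\n" t))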
I have a Perl app which processes text files from the local filesystem (think of it as an overly complicated grep).
I want to design a webapp which allows remote users to invoke the Perl app by setting the required parameters.
Once it's running, some sort of communication between the Perl app and the webapp about the status of the process (running, % done, finished) would be desirable.
What would be a recommended way for the two processes to communicate? I was thinking of a database table, but I'm not really sure it's a good idea.
Any suggestions are appreciated.
Stackers, go ahead and edit this answer to add code examples or links to them.
DrNoone, two approaches come to mind.
callback
Your greppy app needs to offer a callback function that returns the status and which is periodically called by the Web app.
event
This makes sense if you are already using a Web server/app framework which exposes an event loop usable from external applications (rather unlikely in Perl land). The greppy app fires events on status changes and the Web app attaches/listens to them and acts accordingly.
For IPC as you envision it, a plain database is not so suitable. Look into message queues instead. For great interop, pick an AMQP-compliant implementation; a starting sketch is below.
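A rough, untested sketch of the greppy app publishing status updates over AMQP; it assumes the Net::AMQP::RabbitMQ module and a broker on localhost, and the queue name is made up. The web app would consume the same queue and update its status display.

use strict;
use warnings;
use Net::AMQP::RabbitMQ;

my $mq = Net::AMQP::RabbitMQ->new();
$mq->connect('localhost', { user => 'guest', password => 'guest' });
$mq->channel_open(1);
$mq->queue_declare(1, 'grep_status');

# Call this from the greppy app as it makes progress.
sub report_progress {
    my ($percent) = @_;
    $mq->publish(1, 'grep_status', "$percent% done", { exchange => '' });
}

report_progress(50);
$mq->disconnect();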
If you run the process using open($handle, "cmd |") you can read the results in real time and print them straight to STDOUT while your response is open. That's probably the simplest approach.
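A minimal, untested sketch of that approach; the command line is made up, and it assumes a plain CGI-style setup where STDOUT is the open HTTP response.

use strict;
use warnings;

my $cmd = './mygrep --pattern foo /var/data';   # hypothetical invocation

$| = 1;   # unbuffer STDOUT so output reaches the client as it is produced
open(my $handle, '-|', $cmd) or die "Cannot run $cmd: $!";
while (my $line = <$handle>) {
    print $line;   # forward each result line as soon as it arrives
}
close($handle) or warn "Command exited with status $?";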
I am using the wonderful AnyEvent for creating an asynchronous TCP server (specifically, a MUD server).
In order to keep everything running smoothly and with as few blocking/synchronous pieces of code as possible, I have replaced some modules I was using with their asynchronous counterparts, for example AnyEvent::Memcached and AnyEvent::Gearman. This allows the main program to be quite speedy, which is desirable. I have coded around the need for some of these calls to be synchronous.
One problem I currently have, and the focus of this question, is logging.
Before turning to AnyEvent for this server program, I was using Log::Log4perl as it allows me to fine-tune which modules or subroutines should be logged, at which level and to which log output (screen, file, etc).
The problem here is that the Log4perl actions (warn, info, etc) are currently performed synchronously but I have no requirement for that as long as the log lines eventually end up on the screen / file (and in the correct order).
Is Log::Log4perl still the right choice when using an asynchronous event handler such as AnyEvent, or should I look at a different module? If so, which is recommended?
AnyEvent::Log, which comes with AnyEvent, uses AnyEvent::IO, which appends to files asynchronously when IO::AIO is available (and synchronously when not).
What are you trying to avoid? If it's synchronous file IO (writing to log files/stdout etc.) then your problem would probably be solved with an asynchronous and/or buffering appender rather than by replacing all use of Log4perl in your code.
Log::Log4perl::Appender::Buffer seems like it might be a good start, but a completely async appender doesn't appear to exist anymore; a configuration sketch is below.
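For reference, a configuration along these lines (untested here, and the appender names and filename are made up) wraps a normal file appender in Log::Log4perl::Appender::Buffer, which holds messages until one at trigger_level or above arrives and then flushes them in order:

log4perl.category = DEBUG, Buffered

# Regular file appender doing the actual writing
log4perl.appender.File          = Log::Log4perl::Appender::File
log4perl.appender.File.filename = mud-server.log
log4perl.appender.File.layout   = Log::Log4perl::Layout::SimpleLayout

# Buffering appender, using the file appender as its outlet
log4perl.appender.Buffered               = Log::Log4perl::Appender::Buffer
log4perl.appender.Buffered.appender      = File
log4perl.appender.Buffered.trigger_level = ERROR

Whether buffering until an ERROR (or some other trigger policy) is acceptable depends on how "eventually" the log lines may reach the file in your setup.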