Does IPC::Shareable work with blessed objects?

I would like to share a blessed object between two or more Perl applications. The object in question is quite expensive to instantiate but always the same (static). The idea is to instantiate it once in one application and use it in the others. This particular object is basically an HTTP client built on HTTP::Tiny and a whole bunch of other modules. Instantiating it via new() can take more than 50% of the total run time. I think the only module that may be a problem is HTTP::Tiny, since it opens sockets, but I'm not really sure. Can I use IPC::Shareable or some other method to share this HTTP client among other applications?
Follow-up, are there any significant security issues with IPC::Shareable?

It supports anything Storable can handle, so it can handle objects, but not file handles (including sockets). File handles are process-specific anyway.
Access to the shared memory is controlled using the same permission system that is used to control access to files (via the mode option), so the security issues are the same as they are for files.
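A rough sketch of how that might look (untested; MyClient is a stand-in for your HTTP-client wrapper, the glue key and options are arbitrary, and the tie syntax follows the classic glue-plus-options form, which varies slightly between IPC::Shareable versions):

    use strict;
    use warnings;
    use IPC::Shareable;

    # Stand-in for the expensive-to-build class. Its state must be plain Perl
    # data (no sockets or file handles), since Storable serializes it.
    package MyClient {
        sub new { bless { base_url => $_[1], agent => 'my-client/1.0' }, $_[0] }
    }

    # Tie a hash to a shared memory segment; 'mode' uses file-style permissions.
    tie my %shared, 'IPC::Shareable', 'glue', {
        create => 1,
        mode   => 0600,
    } or die "Could not tie shared hash: $!";

    # Producer: build the object once and publish it.
    $shared{client} = MyClient->new('https://example.org');

    # A consumer process would tie the same glue key (with create => 0) and get
    # back a deserialized *copy* of the blessed object:
    #   my $client = $shared{client};

Note that each reader gets a copy deserialized from shared memory, so anything that depends on live sockets (HTTP::Tiny's keep-alive connections, for instance) would have to be re-established per process.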

Related

Perl Catalyst: possibility to share .so files between child server processes

I have a Catalyst web server. I can see that every child server process loads a lot of the same .so files individually, which takes a lot of memory.
Is there any way Catalyst can preload all the .so files once for all child processes?
The specific behavior you are describing is a feature of mod_perl rather than Catalyst itself. But you can of course run your Catalyst application under a mod_perl environment.
Under mod_perl, shared library files are loaded only once, and it is not possible to have different versions. Aside from saving the cost of loading them in multiple children, this sharing actually works across different applications on the same server. So two different web applications using the mod_perl interpreter would share a single loaded instance of a library that they both used.
For this reason, most people prefer not to use mod_perl as a means of serving their application, because they actually want to maintain different library versions per application, which, for the reasons described above, is not possible in this environment.
But if that is something that you believe suits your needs, then mod_perl may be the environment for you.
Support for mod_perl with Catalyst is in a slight state of flux. The generally preferred method is to use the Plack::Handler methods to bootstrap Catalyst as a PSGI application to the mod_perl environment. There are some additional notes on configuration here.
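For reference, the PSGI entry point that those Plack::Handler methods bootstrap is just a small .psgi file. A minimal sketch, assuming your Catalyst application class is named MyApp:

    # myapp.psgi
    use strict;
    use warnings;
    use MyApp;

    # Build the PSGI code reference that any Plack handler (mod_perl via
    # Plack::Handler::Apache2, Starman, the built-in server, ...) will serve.
    my $app = MyApp->apply_default_middlewares(MyApp->psgi_app);
    $app;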
I don't know what options are available in the built-in Catalyst server, but looking at the documentation for Starman shows this option:
--preload-app
This option lets Starman preload the specified PSGI application in the master parent process before preforking children. This allows memory savings with copy-on-write memory management. When not set (default), forked children load the application in the initialization hook.
Enabling this option can cause bad things to happen when resources like sockets or database connections are opened at load time by the master process and shared by multiple children.
Since Starman 0.2000, this option defaults to false, and you should explicitly set this option to preload the application in the master process.
Alternatively, you can use the -M command line option (plackup's common option) to preload the modules rather than the application itself:

    starman -MCatalyst -MDBIx::Class myapp.psgi

will load the modules in the master process for memory savings with CoW, but the actual loading of myapp.psgi is done per child, making resource management such as database connections safer.
If you enable this option, sending a HUP signal to the master process will not pick up any code changes you make. See "SIGNALS" for details.
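To make both preloading styles concrete, a hedged usage sketch (the worker count and myapp.psgi are placeholders):

    # Preload the whole application in the master process (copy-on-write memory
    # sharing, but beware sockets or DB handles opened at load time):
    starman --preload-app --workers 5 myapp.psgi

    # Preload only heavy modules; the application itself still loads per child:
    starman -MCatalyst -MDBIx::Class myapp.psgi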

Recommended communication pattern for web frontend of command line app

I have a Perl app which processes text files from the local filesystem (think of it as an overly complicated grep).
I want to design a webapp which allows remote users to invoke the perl app by setting the required parameters.
Once it's running, some sort of communication between the Perl app and the webapp about the status of the process (running, % done, finished) would be desirable.
What would be a recommended way of communicating between the two processes? I was thinking of a database table, but I'm not really sure it's a good idea.
Any suggestions are appreciated.
Stackers, go ahead and edit this answer to add code examples or links to them.
DrNoone, two approaches come to mind.
callback
Your greppy app needs to offer a callback function that returns the status and which is periodically called by the Web app.
event
This makes sense if you are already using a Web server/app framework which exposes an event loop usable from external applications (rather unlikely in Perl land). The greppy app fires events on status changes and the Web app attaches/listens to them and acts accordingly.
For IPC as you envision it, a plain database is not so suitable. Look into message queues instead. For great interop, pick an AMQP-compliant implementation.
If you run the process using open($handle, "cmd |") you can read the results in real time and print them straight to STDOUT while your response is open. That's probably the simplest approach.
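A minimal sketch of that pipe approach (the command name, arguments, and progress format are placeholders; the three-argument '-|' form is equivalent to "cmd |" but avoids the shell):

    use strict;
    use warnings;

    # Open a read pipe to the long-running command.
    open(my $handle, '-|', './greppy_app', '--input', 'data.txt')
        or die "Cannot start process: $!";

    $| = 1;    # unbuffer STDOUT so progress reaches the client immediately
    while (my $line = <$handle>) {
        # Relay each status line (e.g. "50% done") as it arrives.
        print $line;
    }
    close $handle or warn "Process exited with status $?";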

Difference between shared memory IPC mechanism and API/system-call invocation

I am studying operating systems (Silberschatz, Galvin et al.). My programming experience is limited to occasional coding of exercise problems given in a programming text or an algorithm text. In other words, I do not have proper application programming or system programming experience. I think the question below results from that lack of experience and hence a lack of context.
I am specifically studying IPC mechanisms. While reading about shared memory (SM) I couldn't imagine a real-life scenario where processes communicate using SM. An inspection of processes attached to the same SM segment on my Linux (Ubuntu) machine (using 'ipcs' in a small shell script) is uploaded here.
Most of the sharing by applications seems to be with the X daemon. From what I know, X is the process responsible for giving me my GUI. I inferred that these applications (mostly applets which stay on my taskbar) share data with X about what needs to change in their appearance and displayed values. Is this a reasonable inference?
If so,
my question is: what is the difference between my applications communicating with X via shared memory segments versus my applications invoking certain APIs provided by X to tell X that they need to refresh their appearance? By difference I mean: why isn't the latter approach used?
Isn't that how user processes and the kernel communicate? An application invokes a system call when it wants to, say, read a file, communicating the name of the file and other related info via the arguments of the system call.
Also could you provide me with examples of routinely used applications which make use of shared memory and message-passing for communication?
EDIT
I have made the question clearer. The edited part is formatted in bold.
First, since the X server is just another user-space process, applications cannot talk to it through the operating system's system call mechanism (system calls go to the kernel). Even when the communication is done through an API, if it is between user-space processes, there will be some inter-process communication (IPC) mechanism behind that API. It might be shared memory, sockets, or something else.
Typically shared memory is used when a lot of data is involved. Maybe there is a lot of data that multiple processes need to access, and it would be a waste of memory for each process to have its own copy. Or a lot of data needs to be communicated between processes, which would be slower if it were to be streamed, a byte at a time, through another IPC mechanism.
For graphics, it is not uncommon for a program to keep a buffer containing a pixel map of an image, a window, or even the whole screen that then needs to be regularly copied to the screen. Sometimes at a very high rate...30 times a second or more. I suspect this is why X uses shared memory when possible.
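Although the question is not Perl-specific, the rest of this page is, so here is a small Perl illustration of the SysV shared-memory mechanism itself (the key, size, and message are arbitrary; a real X client typically goes through the MIT-SHM extension rather than anything like this):

    use strict;
    use warnings;
    use IPC::SysV qw(IPC_PRIVATE IPC_CREAT IPC_RMID);

    # Create a 4 kB shared memory segment readable/writable by this user.
    my $id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    defined $id or die "shmget failed: $!";

    # One process writes into the segment...
    my $msg = "pixel buffer or status data";
    shmwrite($id, $msg, 0, length $msg) or die "shmwrite failed: $!";

    # ...and another process attached to the same segment reads it back.
    shmread($id, my $buf, 0, length $msg) or die "shmread failed: $!";
    print "read back: $buf\n";

    # Remove the segment when done.
    shmctl($id, IPC_RMID, 0);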
The difference is that with an API you as a developer might not have access to what is happening inside these functions, so memory would not necessarily be shared.
Shared memory is essentially a specific region of memory that both apps can read from and write to. This of course requires that access to that memory is synchronized so things don't get corrupted.
Using somebody's API does not mean you are sharing memory with them; that process will just do what you asked and perhaps return the result to you, and that exchange doesn't necessarily go via shared memory, although it could. It depends, as always.
The preference for one over the other depends, I'd say, on the particular application, what it is doing, and what it needs to share. I can imagine a big dataset of some kind being shared through shared memory, while passing a file name to another app might only need an API call. It is largely dependent on requirements.

Perl scripts, to use forks or threads?

I am writing a couple of scripts that go and collect data from a number of servers; the number will grow, and I'm trying to future-proof my scripts, but I'm a little stuck.
To start off with, I have a script that looks up an IP in a MySQL database, then connects to each server, grabs some information, and puts it into the database again.
What I have been thinking is that there is a limited amount of time to do this, and if I have 100 servers it will take a while to go out to each server, get the information, and push it to the database. So I have thought about using either forks or threads in Perl.
Which would be the preferred option in my situation? And has anyone got any examples?
Thanks!
Edit: OK, a bit more information: I'm running on Linux, and what I thought was that I could get the master script to collect the DB information, then send off each sub-process/task to connect, gather information, and push it back to the DB.
Which is best depends a lot on your needs; but for what it's worth here's my experience:
Last time I used Perl's threads, I found it was actually slower and more problematic for me than forking, because:
Threads copied all data anyway, as a thread would, but did it all upfront
Threads didn't always clean up complex resources on exit, causing a slow memory leak that wasn't acceptable in what was intended to be a server
Several modules didn't handle threads cleanly, including the database module I was using, which got seriously confused
One trap to watch for is the "forks" library, which emulates "threads" but uses real forking. The problem I faced here was many of the behaviours it emulated were exactly what I was trying to get away from. I ended up using a classic old-school "fork" and using sockets to communicate where needed.
Issues with forks (the library, not the fork command):
Still confused the database system
Shared variables still very limited
Overrode the 'fork' command, resulting in unexpected behaviour elsewhere in the software
Forking is more "resource safe" (think database modules and so on) than threading, so you might want to end up on that road.
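For the pattern in the question (one worker per server, results pushed back to a database), a common forking approach is Parallel::ForkManager. The sketch below is only illustrative: the server list, get_server_info(), and the DBI connection details are placeholders, and each child opens its own database handle, since handles should not be shared across fork().

    use strict;
    use warnings;
    use DBI;
    use Parallel::ForkManager;

    my @servers = ('10.0.0.1', '10.0.0.2');   # placeholder: load from the DB in reality
    my $pm = Parallel::ForkManager->new(10);  # run at most 10 children at once

    for my $ip (@servers) {
        $pm->start and next;                  # parent continues the loop; child runs below

        # Each child gets its own DB handle.
        my $dbh = DBI->connect('dbi:mysql:inventory', 'user', 'pass',
                               { RaiseError => 1 });

        my $info = get_server_info($ip);      # placeholder for the real collection code
        $dbh->do('UPDATE servers SET info = ? WHERE ip = ?', undef, $info, $ip);
        $dbh->disconnect;

        $pm->finish;                          # child exits here
    }
    $pm->wait_all_children;

    sub get_server_info {
        my ($ip) = @_;
        return "collected from $ip";          # stand-in for SSH/SNMP/whatever you use
    }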
Depending on your platform of choice, on the other hand, you might want to avoid fork()-ing in Perl. Quote from perlfork(1):
Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.
On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.

iOS/Objective-C Multiple URL Connections

I have a few apps for which I am trying to develop a reusable URL connection layer. I have done some research and am struggling between architectures, specifically the APIs this layer utilizes.
In the past, I have used NSURLConnection and NSOperation on a separate RunLoop. This seems overkill. I've seen libraries that subclass NSURLConnection. Others have a singleton Engine object that manages all requests.
The Engine and/or NSURLConnection seem best to me. But I am asking for input before I go too far down one road. My goals would be:
Ability to cancel a request
Concurrent requests
Non-blocking
Data object of current open requests
Any direction or existing references with code samples would be greatly appreciated.
I'm not sure about a "data object of current open requests", but ASIHTTPRequest does the first three and is very easy to use.
Update
Actually, it looks like ASINetworkQueue may fulfill your last bullet point.
I personally use a singleton engine with my large apps, though it might not always be the best approach. All the URLs I use require signing in first, so I figured it would be best if one class handled all of the requests, to prevent multiple connections from signing in to the one location.
I basically create a protocol for all my different connection classes in my singleton and pass in the delegates of both the calling class and the singleton. If an error occurs, it's passed to the singleton so it can deal with it; if the request completes, the data is returned to the calling class.