Use of JCLLIB and DOCLIB in an ESP scheduler application is not clear

I can see that in some applications both JCLLIB and DOCLIB are present, while in other applications neither is mentioned. In the case where neither is mentioned, is there a default library that ESP searches? In other words, what happens in that case? Being quite naïve about this, I am not able to work it out. Please share your knowledge.

Related

IPC between kext modules

I was wondering if I can implement a bi-directional communication channel between two kexts using sockets in the PF_SYSTEM domain. This method is mostly used to communicate between a driver and a user-space agent.
In my particular case I've got one module based on IOKit and another that is a simple kernel module with start and stop callback functions, and I'd like to pass some small messages between them.
Do you think this approach is suitable for my needs, or is there another, preferable way (shared memory? Mach ports?)
EDIT: after digging a little deeper, maybe there's an option to export an API from one driver to the other by modifying the client driver's plist file as follows. Is this possible?
<key>OSBundleLibraries</key>
<dict>
    <key>com.driver.server_driver</key>
    <string>1</string>
</dict>
This, however, doesn't work: when I try to manually load the client driver after the server driver is already loaded (visible in kextstat), I get the "No kexts found for these libraries" error.
Using messaging techniques normally used for IPC to communicate between kernel extensions is unusual, as it's a lot more complex than taking advantage of the fact that they're running in the same address space anyway. I covered some of the details of this latter approach in my answer to your other question, which you've obviously already seen, but I'm linking to it for the benefit of others in a similar situation.
To answer your question: I suspect that having both ends of a system socket inside the kernel is probably not very well tested, and you could run into kernel bugs. The in-kernel public socket KPI is also quite fiddly: getting the buffering right is tricky, so I'd only use sockets if I absolutely had to, and you clearly don't have to here.
My gut instinct is that Mach messaging would work more reliably and require less code, but again I think it would be quite unusual to use it in this way.
It's hard to give useful advice on exactly what you should do, as we don't know the reasons for the separation into 2 kexts, what their relationship is, what kind of communication is required, etc. There are many possible ways to exchange information, but whether they are a good idea will depend on the details of the project. (This sort of question isn't really suited to Stack Overflow's format; this is the sort of problem for which a company would bring in an expert to consult. For a private project, you might have more luck on the Software Engineering Stack Exchange site, where this sort of question is on-topic, although I'm not sure you'll get a good/useful answer there either. It's probably best to keep things simple, and maybe combine the 2 kexts into one?)
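To make the "same address space" approach above concrete, here is a minimal sketch of the direct-call route; the bundle identifier matches the plist fragment in the question, but the function names and kmod entry points are made up for illustration. The server kext exports an ordinary C function, and the client kext lists the server under OSBundleLibraries so that the kext linker resolves the symbol when the client is loaded. Note that a kext can only act as a library for another kext if its own Info.plist also declares OSBundleCompatibleVersion (no greater than the version the client asks for); a missing or incompatible OSBundleCompatibleVersion is a common cause of the "No kexts found for these libraries" error mentioned in the question.

    /* server kext (com.driver.server_driver) -- plain C kext with start/stop entry points */
    #include <mach/mach_types.h>

    int server_driver_get_value(void);            /* symbol made available to dependent kexts */

    static int s_value = 42;

    int server_driver_get_value(void)
    {
        return s_value;
    }

    kern_return_t server_driver_start(kmod_info_t *ki, void *d)
    {
        return KERN_SUCCESS;
    }

    kern_return_t server_driver_stop(kmod_info_t *ki, void *d)
    {
        return KERN_SUCCESS;
    }

    /* client kext -- lists com.driver.server_driver under OSBundleLibraries and
       simply calls the exported function; no IPC machinery is involved */
    #include <mach/mach_types.h>
    #include <libkern/libkern.h>                  /* printf */

    extern int server_driver_get_value(void);     /* resolved by the kext linker at load time */

    kern_return_t client_driver_start(kmod_info_t *ki, void *d)
    {
        printf("client: server says %d\n", server_driver_get_value());
        return KERN_SUCCESS;
    }

    kern_return_t client_driver_stop(kmod_info_t *ki, void *d)
    {
        return KERN_SUCCESS;
    }

Because both kexts run in the kernel's single address space, a call like this is just a function call; sockets or Mach messaging only become necessary when one end of the conversation lives in user space.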

Simple JVM to JVMs communication framework?

I know there are lots of options out there, and sorry to ask such a similar question again, but it's different enough to warrant it -- I think. I have one Java app, let's call it the "master", that will do some work, and then it needs to inform other Java apps in other JVMs about it. Today they are on the same machine, but this will not always be the case.
I'd prefer something that has an easy way to add/remove listeners (i.e., other JVMs), etc., so RMI or web services are not suitable, as there'd be too much manual coding to keep track of who is what.
I'd also like the ability to add new Java apps (again, in other JVMs obviously) to the master's 'notify list', whatever it may be, without much effort -- preferably without needing to rebuild the master app.
What I'd really like is an easy messaging/communication framework, which requires some simple configuration.
I'm overwhelmed by the number of frameworks and options out there: JMS, JGroups, the various MQ frameworks, RMI, Jini, web services, etc.
I'm looking for fast, simple, reliable, and easy! Any suggestions? I don't need complex or particularly advanced features.
Your master will have to be a server which is always available and the clients will have to register/unregister.
Maybe you can have a look at http://mina.apache.org/mina-project/userguide/ch2-basics/sample-tcp-server.html
MINA is also integrated into the Apache Camel project. (Warning: Camel is a very addictive framework; there is a risk you will end up trying to use it for all your future background processing. :)

Linking Multiple Computers to Process a Task

I am unsure whether this question belongs here, so please feel free to migrate it if it doesn't.
My question is this: is it possible to combine many different PC units so that they work as one?
Take, for example, buying three HP desktop PCs and then linking the hardware so that they act as one PC.
If so, please point me to some resources I can use.
Thanks for your time.
Note
I am not referring to linking them over a network, but rather to making the actual hardware work together.
I am not sure whether this is possible, so my Google search terms may not have been related to the issue.
You should realize that linking them over a network does not prevent them from working together to complete a task. Most supercomputers and clusters today are interconnected via a network (albeit a very high-speed one like InfiniBand). The key is to have software that understands it's operating in a distributed environment (e.g. MPI libraries). You might also take a look at OpenMP or Hadoop. It really depends on what you want to do with it.
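For the shared-memory end of that spectrum (many cores inside one box rather than many boxes), OpenMP is the usual starting point. Here is a minimal sketch in C, assuming a compiler with OpenMP support (e.g. gcc -fopenmp hello_omp.c):

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* the block below is executed once by every thread in the team */
        #pragma omp parallel
        {
            printf("Hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }

OpenMP only spreads work across the cores of one machine; to span several machines you need a message-passing layer such as MPI, sketched after the next answer.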
You cannot simply link ordinary computers together so that they behave like one machine. For that you need special hardware designed to let larger numbers of CPUs work together (like a Cray).
If you are talking about writing an application that will be processed by those computers, you may be looking for MPI.
You can use Open MPI for that; most languages nowadays have MPI libraries.
You can find more detailed information in the Wikipedia article on parallel computing.
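To give a feel for the MPI model both answers refer to, here is the usual minimal sketch in C; the file and host names are only illustrative. mpirun starts one copy of the same program per requested slot, possibly on different machines listed in a hostfile, and each copy learns its rank (its id) and the size of the job:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id within the job */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                          /* shut down the MPI runtime */
        return 0;
    }

With Open MPI this would be compiled with mpicc hello_mpi.c -o hello_mpi and launched with something like mpirun -np 4 --hostfile hosts ./hello_mpi; the hostfile is what turns "several PCs on a network" into a single job.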

Softphone and Linux

We are thinking about writing a softphone app. It would basically be a component of a system that has calls queued up from a database. It would interface with a Linux server which has Asterisk installed.
My first question is:
should we write the softphone at all, or just buy one?
Secondly, if we do,
what base libraries should we use?
I see SIP Sorcery on CodePlex. More than anything, I am looking for a sense of direction here. Any comments or recommendations would be appreciated.
The answer depends on the capabilities of your team and on where you see your core value and the essence of the service you provide.
In most cases, I'd guess that you don't really care about SIP or about doing anything fancy with it that requires access to its low-level details. In such a case, I'd recommend getting a ready-made softphone, either a commercial one or an open source one. I'd go for a commercial one, as it will give you peace of mind about its stability, plus assistance with bug fixes and the like.
To directly answer your question, one of the many open source softphones is likely to fit your needs and allow slight modifications as needed. Under most open source licenses there is no obligation to distribute your code as long as you only use it internally (i.e., do not distribute the binary).
Trying to guess what you are trying to do, it sounds like a call-center-type scenario, so one of the many call-queue implementations out there might fit your needs.
I had to write my own softphone and I found a great guide on how to do it. The guide provides 10 steps for building your own softphone (voip-sip-sdk.com, page 272).
I found it useful and maybe you will too.

Should I learn how web frameworks work before I use them?

I'm interested in creating a basic web application (for learning, but I want to finish within a few months), and I've read that using a web framework can make that task much easier.
After reading about different frameworks online, it seems to me that using a framework would hide a lot of detail about how things work. I fear that if I use a framework, I won't really know how my website is running.
Is it important to understand how frameworks do what they do, or am I worrying too much? (E.g. I don't know how the Linux kernel works, or the C compiler, etc.)
Even if you don't have a particular interest in web frameworks, I would say it's good to play with a few and then crack them open, if only for the exposure to new design patterns and solutions that can be applied anywhere in development (MVC in particular, when talking about most web frameworks).
It is (to some extent) important to understand how frameworks work, but you'll never learn that without using them.
So, start using some framework and you'll get a basic understanding of it. And then, if you have interest, you can always dig deeper into it (maybe even submit patches and participate in its development). But not in the opposite order.
Using your analogy, you don't become a Linux kernel developer without being a Linux user for some time.