Embedding Openfire - Eclipse

Is it possible to embed an Openfire server (version 3.7.0) in a Java application?
I am trying to run integration tests against the server in Eclipse. However, because Openfire runs in standalone mode (the condition for this being that it can find its ServerStarter bootstrap class), it calls System.exit(0) when the server shuts down, which I do not want to happen.
Is there any way to stop this from happening, i.e. without just deliberately preventing Openfire from finding its bootstrap class?

I have a successful approach, which is fairly straightforward and much easier than trying to manually set up Openfire.
Install Openfire onto a machine (Mac, PC, etc.), set it up through the admin console using the embedded database, and then comment out the adminConsole section in openfire.xml if you'd like.
Copy the directory to a location you want to run your unit tests from. If you want to ensure exact repeatability, then it would be wise to zip and unzip the directory every time you run the tests.
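If you do go the zip route, the extraction can live right in your test setup. Here is a minimal sketch using plain java.util.zip; the zip file and target directory paths are placeholders:

    import java.io.File;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.StandardCopyOption;
    import java.util.Enumeration;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;

    public class OpenfireTestHome {

        // Extracts a pristine copy of the zipped Openfire directory before each test run.
        public static void extract(File openfireZip, File targetDir) throws Exception {
            try (ZipFile zip = new ZipFile(openfireZip)) {
                Enumeration<? extends ZipEntry> entries = zip.entries();
                while (entries.hasMoreElements()) {
                    ZipEntry entry = entries.nextElement();
                    File out = new File(targetDir, entry.getName());
                    if (entry.isDirectory()) {
                        out.mkdirs();
                    } else {
                        out.getParentFile().mkdirs();
                        try (InputStream in = zip.getInputStream(entry)) {
                            Files.copy(in, out.toPath(), StandardCopyOption.REPLACE_EXISTING);
                        }
                    }
                }
            }
        }
    }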
Ensure all the jars (openfire, hsqldb, mail, bouncycastle, jasper, etc.) are added to your test classpath.
Now you should be able to start and stop normally. Openfire does have one quirk: because it's singleton-oriented, that singleton instance sticks around even after you shut down, so if you want to use it in something like a unit test, you'll have to call XMPPServer.getInstance() to check whether an instance already exists, and only call the constructor if getInstance() returns null (see the sketch below).
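For reference, the start/stop pattern in a test fixture ends up looking roughly like this. This is only a sketch: it assumes the openfireHome system property points at the directory you copied above, and that the XMPPServer API matches your Openfire version.

    import org.jivesoftware.openfire.XMPPServer;

    public class EmbeddedOpenfire {

        private XMPPServer server;

        // Call from your test setup (e.g. a @BeforeClass method).
        public void start() {
            // Openfire locates its home directory through this property; adjust the path.
            System.setProperty("openfireHome", "/path/to/copied/openfire");

            // Reuse the existing singleton if one is still around from a previous test,
            // otherwise the constructor bootstraps and starts a new server.
            server = XMPPServer.getInstance();
            if (server == null) {
                server = new XMPPServer();
            }
        }

        // Call from your test teardown (e.g. an @AfterClass method).
        public void stop() {
            if (server != null) {
                server.stop();
            }
        }
    }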
I hope that helps.

Related

How to make sessions persistent in Scalatra?

I have a webapp using the Scala-based Scalatra web framework. The problem is, anytime the application is re-deployed, or anytime the app-server is rebooted, all session data is lost. This means (to name one downside) users must re-login every time we make an update to the site.
Some research reveals there are, apparently, "container-specific" ways to make sessions persist across app and server reboots (e.g., in the case of Tomcat), but this has two shortcomings:
If the app is not always deployed in the same container (and in the case of Scalatra, an embedded Jetty is used for dev purposes), then I'll need separate configuration for each container (see the sketch below for what the Jetty-specific route looks like).
Using a server-local configuration file is much more fickle -- it's likely to get lost in server migrations, and it won't be automatically available to each instance (e.g., to each developer) of the app, whereas something stored with the core application code is much easier to test, retain, and generally keep track of.
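For concreteness, the Jetty-specific route looks roughly like the sketch below (this is against the Jetty 7/8 HashSessionManager API; class names differ in newer Jetty versions, and the store directory and save period are just examples), and it is exactly the kind of per-container code I'd rather not maintain:

    import java.io.File;

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.session.HashSessionManager;
    import org.eclipse.jetty.server.session.SessionHandler;
    import org.eclipse.jetty.webapp.WebAppContext;

    public class DevServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);
            WebAppContext context = new WebAppContext("src/main/webapp", "/");

            // Persist sessions to disk so they survive restarts of the dev server.
            HashSessionManager sessions = new HashSessionManager();
            sessions.setStoreDirectory(new File("target/sessions")); // example location
            sessions.setSavePeriod(30);                              // flush every 30 seconds
            context.setSessionHandler(new SessionHandler(sessions));

            server.setHandler(context);
            server.start();
            server.join();
        }
    }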
So, to sum up...
Is there a generic, container-neutral way to make sessions persistent? Even if only by overriding appropriate methods in the Java/Servlet stack and storing the session data manually?
Barring that, is there a way to store relevant configuration for multiple containers (e.g., for both Jetty and Tomcat) in my application code (web.xml or similar)?
Thanks -- any insights appreciated!

What are the limitations of the Flask built-in web server

I'm a newbie in web server administration. I've read multiple times that the Flask built-in web server is not designed for "production", and must be used only for testing and debugging...
But what if my app only touches a thousand users who occasionally send data to the server?
If it works, when will I have to bother with configuring a more sophisticated web server? (I am looking for approximate metrics.)
In a nutshell, I would love to find out what the built-in web server can do (with approximate thresholds) and what it cannot.
Thanks a lot!
There isn't one right answer to this question, but here are some things to keep in mind:
With the right amount of horizontal scaling, it is quite possible you could keep scaling out use of the debug server forever. When exactly you would need to start scaling (or switch to using a "real" web server) would also depend on the environment you are hosting in, the expectations of the users, etc.
The main issue you would probably run into is that the server is single-threaded: it handles each request one at a time, serially. That means that if you are trying to serve more than one request (including favicons, static items like images, CSS and JavaScript files, etc.), the requests will take longer. If any given request happens to take a long time (say, 20 seconds), then your entire application is unresponsive for that time (20 seconds). This is only the default, of course: you could bump the thread count (or have requests handled in other processes), which might alleviate some issues. But once again, it can still be slow under a "high" load, and what is considered a "high" load will depend on your application and the expectations of a maximum acceptable response time.
Another issue is security: if you are concerned at ALL about security (and not just the security of the data in the application itself, but the security of the box that will be running it as well) then you should not use the development server. It is not ready to withstand any sort of attack.
Finally, the development server could just fail outright. It is not designed to be used as a long-running process (days, weeks, months), and so it has not been well tested to work in this capacity.
So, yes, it has limitations. Yes, you could still conceivably use it in production. And yes, I would still recommend using a "real" web server. If you don't like the idea of needing to install something like Apache or Nginx, you can go with a solution that is still as easy as "run a Python script" by using one of the standalone WSGI servers, which can run a server that is designed to be used in production with something just as simple as running python run_app.py on the command line. You typically just need to create a 4-5 line Python script that imports and creates the server object, points it at your Flask app, and runs it.
gunicorn could be run with only the following on the command line, no extra script needed:
gunicorn myproject:app
...where "myproject" is the Python package that contains the app Flask object. Keep in mind that one of developers of gunicorn would probably recommend against this approach. See https://serverfault.com/questions/331256/why-do-i-need-nginx-and-something-like-gunicorn.
The OP has long since moved on, but for those who encounter this question in the future I would just add that setting up an Apache server, even on a laptop, is free and pretty easy. It can be readily configured for as few or as many features as you want just by uncommenting or commenting out lines in the config file. There might be an even easier GUI method for doing that nowadays, but just editing the configs is simple.

Insert message into a process running in gwt-console-server from external application?

I'm a jBPM noob running jBPM 5.4 in AS7. I have tried posting this question on the jBPM discussion board, but no luck, so I thought I'd try here on Stack Overflow.
My Goal: Create the process in guvnor, run it in gwt-console-server, have my java application feed information to the process, and follow the current state in the jbpm Console.
So far, I have installed the jbpm console and console server as well as Guvnor and designer on jBOSS AS7. I am able to create a process in Guvnor and run and monitor that process from the jbpm Console. The missing piece is that I do not understand how to externally insert messages to the process that is running.
Using eclipse and the jBPM example, I can run a process and insert messages, but my goal is to use the jbpm console to monitor the processes.
I assume I need to access the knowledge session running in the gwt-console-server, but I'm not sure how to do that. Is it safe to access/modify a session that is persisted out to a database (i.e., both gwt-console-server and my custom app would be able to modify it) and then have the jbpm console read from it?
I see in the BPM Console reference (https://community.jboss.org/wiki/BPMConsoleReference) that there is an Integration Layer, but there is nothing about how to leverage it - and the link in the doc is broken :(
Can someone point me to an example of an external application feeding messages to a jbpm process that is being monitored by jbpm-console or suggest ways to accomplish this?
Thanks very much for any insight.
-J
PS. I have the new jBPM Developer's Guide, but can't find anything in it to help me with this (so if I am missing something, I can handle a reference back to that guide).
The jBPM console has a REST API that exposes a subset of the functionality. For example, if you model this feeding of information as the start of a process, or the sending of a signal, you could use the signal REST method to send this information to the console for processing.
It's also fine to use an external ksession to update a process instance. As long as they are using the same database to store the information, everything should be fine.
It turns out that the console just uses the logs, so as long as you log to the same DB the console is using (with JPAWorkingMemoryDbLogger), everything pretty much automagically works. You can use either JBPMHelper.newStatefulKnowledgeSession(kbase) or JBPMHelper.loadStatefulKnowledgeSession(kbase, sessionId), depending on whether you want to use the knowledge session started from the Console. Also, if you borrow the Console's session, don't dispose of it, of course.
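A rough sketch of that setup with the jBPM 5 APIs is below. The process file, signal name, and process instance id are placeholders, and it assumes your persistence unit/datasource is configured to point at the same database the gwt-console-server uses:

    import org.drools.KnowledgeBase;
    import org.drools.builder.KnowledgeBuilder;
    import org.drools.builder.KnowledgeBuilderFactory;
    import org.drools.builder.ResourceType;
    import org.drools.io.ResourceFactory;
    import org.drools.runtime.StatefulKnowledgeSession;
    import org.jbpm.process.audit.JPAWorkingMemoryDbLogger;
    import org.jbpm.test.JBPMHelper;

    public class ExternalFeeder {
        public static void main(String[] args) {
            // Build a knowledge base from the same process definition the console runs.
            KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            kbuilder.add(ResourceFactory.newClassPathResource("MyProcess.bpmn"), ResourceType.BPMN2);
            KnowledgeBase kbase = kbuilder.newKnowledgeBase();

            // New persisted session, or borrow the console's with
            // JBPMHelper.loadStatefulKnowledgeSession(kbase, sessionId).
            StatefulKnowledgeSession ksession = JBPMHelper.newStatefulKnowledgeSession(kbase);

            // Log to the same audit tables the console reads from.
            new JPAWorkingMemoryDbLogger(ksession);

            // Feed information into a running process instance as a signal.
            long processInstanceId = 1L; // the instance you started from the console
            ksession.signalEvent("MySignal", "some payload", processInstanceId);
        }
    }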
I read somewhere that you can give the session a business id (and will soon be able to do the same from your own code so that they automatically use the same session), but currently when I want to borrow the Console's session I use a kludge that just assumes the highest session id is the one I want (it will be, as long as the console is already running).

What's the best way to update code remotely?

For example, I have a website with various types of information. If that goes down, the users have a copy of the same website on a local web server, like Apache or IIS on the client machine. They use this local version until the Internet version returns; in other words, they can have no downtime.
The problem is that over time the Internet version will change while the client versions will remain the same unless I touch each client's machine to make the updates. I don't want to do that.
Is there a good way to keep my client up to date so that when I make a change on the server the client gets a copy so they can run it locally if needs be?
Thank you.
EDIT: do you think using SVN and having the clients run an update periodically would work?
EDIT: they'll never ever submit anything. It's just so I don't have to update each client by hand by going to the machine. They're web pages that run in case the main server is down.
I would go for Git over SVN because of its distributed nature: it gives you multiple copies of the code. Use it along with the solution from "Making git auto-commit" to auto-commit the changes.
Why not use something like HTTrack to make local copies of your actual Internet site on each machine, rather than trying to do a separate deployment? That way you'll automatically stay in sync.
This has the advantage that if, at some point, part of your website is updated dynamically from a database, the user will still be able to have a static copy of the resulting site that is up-to-date.
There are tools like rsync which you can use periodically to sync the changes.

Avoid validating WSDL every time the web service is executed

I have a small app running in JBoss that uses web services, and every time they are called, it parses the WSDL and tries to fetch the schema from xmlsoap.org [1] in order to validate it (the WSDL).
Is there a way to avoid this validation? The problem is that:
It's slowing down the system and
Many times xmlsoap.org [1] doesn't return correctly (returns broken HTML instead of XML).
I could make schemas.xmlsoap.org point to localhost and serve the schema from there, but it seems like a very dirty solution. There must be a way to run JBoss/xerces in non-validating mode or something.
[1] http://schemas.xmlsoap.org/wsdl/
It does look like there's a way to run Xerces in non-validating mode.
1) Use a resolver to cleanly deliver the schema from the classpath (see the sketch below).
2) Turn off validation. It's pretty unlikely that JBoss lacks a way to configure that.
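A sketch of option 1, assuming the stack lets you register a plain SAX EntityResolver on the parser that reads the WSDL; the classpath location /schemas/wsdl.xsd is just an example of where you might bundle a local copy of the schema:

    import java.io.IOException;

    import org.xml.sax.EntityResolver;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;

    // Serves the WSDL schema from the classpath instead of fetching it
    // from schemas.xmlsoap.org on every call.
    public class ClasspathSchemaResolver implements EntityResolver {

        public InputSource resolveEntity(String publicId, String systemId)
                throws SAXException, IOException {
            if (systemId != null && systemId.startsWith("http://schemas.xmlsoap.org/wsdl")) {
                // wsdl.xsd is a copy of the schema bundled with the application.
                return new InputSource(getClass().getResourceAsStream("/schemas/wsdl.xsd"));
            }
            return null; // fall back to the default resolution for everything else
        }
    }

For option 2, with a plain DocumentBuilderFactory it would be factory.setValidating(false), or the SAX feature http://xml.org/sax/features/validation set to false; where exactly to plug either of these in depends on which parser your JBoss web service stack actually uses.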