Wicket 9: how to prevent data stored in the session from being cleared after a web-container restart? - wicket

I've upgraded Wicket from 8.12 to 9.5, and since then my application clears its session data every time I restart the Tomcat instance. In particular, all users have to sign in again after a restart.
With version 8.12, session data persisted across restarts.
How can I configure similar behavior in version 9.5?

This is not something that is managed by Wicket.
Wicket uses the HttpSession to store its data. HTTP sessions are managed by the servlet container, such as Apache Tomcat, Jetty, etc.
For Apache Tomcat, see http://tomcat.apache.org/tomcat-9.0-doc/config/manager.html
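For context, everything Wicket keeps for a user session lives in the HttpSession via your application's session class, so it can only survive a restart if the container's session manager persists it and the session contents are Serializable. A minimal, illustrative sketch; MySession and its field are hypothetical names:

    import org.apache.wicket.protocol.http.WebSession;
    import org.apache.wicket.request.Request;

    public class MySession extends WebSession {

        // Session state must stay Serializable for the container to persist it.
        private String signedInUser;

        public MySession(Request request) {
            super(request);
        }

        public void setSignedInUser(String user) {
            this.signedInUser = user;
            dirty();   // mark the session as changed so Wicket stores the update
        }

        public String getSignedInUser() {
            return signedInUser;
        }
    }

Whether that serialized session actually outlives a Tomcat restart is then decided entirely by the Manager configuration in the link above.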

Related

Adding the WildFly JMS subsystem with the WildFly CLI Java API without reloading the server

So we need to provision the JMS subsystem in WildFly (add the JMS extension/subsystem/configuration to standalone.xml) when starting the server if it's not already provisioned, and that needs to happen automatically. We have an application written in Java, and we chose to provision the JMS subsystem with WildFly's CLI Java API, which is executed when our application starts deploying. The catch is that we need to provision the JMS subsystem and use it in the same application.
The problem is that when we add the needed configuration to standalone.xml with WildFly's CLI Java API, the server requires a reload, but we can't reload it because our app is already deploying: it tries to use the defined queues and fails because, well, the subsystem is not active yet. On the next server restart everything is fine, but as you can guess, in a production environment this is unacceptable. Is there any solution to this? I've tried adding a reload CLI command at the end of the batch that creates the JMS subsystem; it starts reloading the server, but the deployment doesn't stop and exceptions start flying left and right.
Also, the whole idea of reloading the server from the app while the app is deploying seems kind of wrong to me.
Thanks in advance.
Solution:
Well, the solution in the end was an easy one: we just had to add a reload step to the batch operation that adds the JMS subsystem. The problem was that we had a set of async operations that all fired off when the app was deployed, so I just had to make sure none of them started until I could check for the messaging subsystem and reload WildFly if necessary. That way I'm not interrupting any async tasks forcefully.
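A rough sketch of that approach (not the poster's actual code), using WildFly's CLI Java API, org.jboss.as.cli.scriptsupport.CLI. The resource addresses and the no-argument connect() (local management port, local auth) are assumptions that depend on your WildFly version and setup:

    import org.jboss.as.cli.scriptsupport.CLI;

    public class MessagingProvisioner {

        public static void provisionMessagingIfMissing() {
            CLI cli = CLI.newInstance();
            cli.connect();
            try {
                // If the subsystem is already there, do nothing.
                CLI.Result present = cli.cmd("/subsystem=messaging-activemq:read-resource");
                if (present.isSuccess()) {
                    return;
                }
                // Add extension + subsystem in one batch, then reload the server.
                cli.cmd("batch");
                cli.cmd("/extension=org.wildfly.extension.messaging-activemq:add");
                cli.cmd("/subsystem=messaging-activemq:add");
                cli.cmd("/subsystem=messaging-activemq/server=default:add");
                cli.cmd("run-batch");
                cli.cmd("reload");
            } finally {
                cli.disconnect();
            }
        }
    }

Any async startup tasks would then be gated on this method returning, which is the "make sure none of them started until I can check for the messaging subsystem" part of the solution.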
You need to select the appropriate profile, i.e. full or full-ha, when starting the server. If you do this, there will be no need to add the JMS subsystem yourself.
If you want to go with your approach only, add a dependency on the queue in the
application. It will not start deploying until the queue is bound to the server.
We need to perform a reload when we add a new subsystem. If you don't want to perform a reload, I suggest starting the server in 'admin-only' mode. When the server is started in 'admin-only' mode, it opens only the management port (9990/9999). Configure the messaging subsystem through CLI commands, then reload the server instance. Hope it helps!

Tomcat Server gets unresponsive

I am new to JPA and Java web app development. Please help me.
Overview:
I am developing a Java web application on Apache Tomcat [version 7.0.70], and it is a multi-user system.
Platform: JVM, Java 1.7
Framework: JPA
Database: MySQL (Workbench 6.3)
Jars: MySQL Connector and EclipseLink
Problem:
After deploying the WAR file to the server, 10 clients access it in parallel with 10 unique login sessions. After some time (uncertain how long), the server becomes unresponsive and no database operations work.
After that, the Tomcat server needs to be restarted, and then the system works fine again. This has become a continuous routine to keep the system running.
Image: My Tomcat Server Error (screenshot of the error shown when the server becomes unresponsive)
Is there a caching problem?
Image: persistence.xml (persistence-unit caching settings)
Please give me a solution.
Thank you.

REST API on EC2 Stops running after a few hours

I am trying to run a very simple REST API backend on EC2 for an Android app I am making for a school project. In a previous project, I used a Node.js library, Express, to quickly create a backend that executes SQL updates/queries, and in the current project I am using Java and a Java library called Spark to do the same thing (SQL queries/updates). I start a refreshed backend with
git pull; mvn clean install; mvn exec:java;
because I am using Maven. Anyway, for both the previous Express backend and the current Spark backend, I can talk to the server for an hour or two, and then I have to restart it. Why doesn't it just keep running? Is there some problem with my connection to the database leaking memory? You can check out the project here. I tried using nohup, but that didn't fix it either; the server still crashed after a few hours. It's not getting too many requests. Any other comments on improvements to my process or backend are welcome as well.
Thanks!
I did not close my database connections. The server kept the connections open until it crashed.
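A minimal sketch of that fix, assuming a Spark (Java) route and plain JDBC; the JDBC URL, query, and route path below are placeholders. With try-with-resources the connection is closed after every request instead of piling up until the server stops responding:

    import static spark.Spark.get;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Api {

        // Placeholder URL; in practice read it from configuration.
        private static final String DB_URL =
                "jdbc:mysql://localhost:3306/app?user=app&password=secret";

        public static void main(String[] args) {
            get("/count", (req, res) -> {
                // Connection, statement, and result set are all closed automatically,
                // even if the query throws.
                try (Connection conn = DriverManager.getConnection(DB_URL);
                     PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM users");
                     ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return String.valueOf(rs.getLong(1));
                }
            });
        }
    }

A connection pool would be the next step, but closing connections per request is the part that stops the slow leak described above.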

Unable to reconnect to Derby from Tomcat server started via Eclipse

I'm running on Windows 7, using Eclipse 4.2 to start a web app on a Tomcat 7 server with a Derby database. I have tried many approaches but consistently run into a common problem:
Everything works just fine the first time I start up and run.
When I redeploy my application after a change, all database connections hang (with any kind of restart).
If I stop and restart Eclipse, that clears up the problem and the next run works fine again.
Having done some investigation, it appears the problem is that the Derby port (1527) is not released from one execution of the server to the next. That seems very strange to me, since Derby is started by the Tomcat instance, which is a separate javaw process.
I've tried:
Configuring the Derby connection as a Tomcat resource
Establishing the connection within my code (rather than via Tomcat resource)
Both the embedded and the network driver
Starting / stopping the network driver from a servlet on startup and shutdown of the Tomcat server
Shutting down the embedded driver via servlet on shutdown of Tomcat
Again, every approach works fine to connect the first time.
One other symptom that doesn't appear to be related (except as a possible indicator of whether or not shutdown completes correctly) is that the db.lck file for my database never gets deleted. However, whether or not it exists has no bearing on whether I can reconnect (only stopping/starting Eclipse has an impact).
Any insight would be appreciated.
Thanks!
After some further investigation I'm going to call this a duplicate of: Cannot create JDBC driver of class ' ' for connect URL 'null': I do not understand this exception. It's not quite the same thing, but that solution (creating META-INF/context.xml) allows it to proceed to failing calls rather than hangs, which is a significant improvement and suggests the issues are largely related.
I did finally figure this out. It turns out I had the Derby jars both in the Tomcat lib folder (for Tomcat) and in the deployment assembly for my application in Eclipse (rather than just on the build path). So Tomcat was using its own copies while my app was using the ones bundled in the WAR, and this resulted in conflicts. Leaving the jars as part of Tomcat and removing them from my WAR file solved the problem completely.
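For reference, the "Tomcat resource" approach mentioned in the list above comes down to declaring the Derby DataSource in META-INF/context.xml and looking it up through JNDI, with the Derby jars living only in Tomcat's lib folder. A hedged sketch of the lookup side; the JNDI name jdbc/MyDerbyDB is a placeholder for whatever that context.xml declares:

    import java.sql.Connection;
    import java.sql.SQLException;

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public class DerbyLookup {

        public static Connection openConnection() throws NamingException, SQLException {
            // Resources declared in META-INF/context.xml appear under java:comp/env.
            Context envCtx = (Context) new InitialContext().lookup("java:comp/env");
            DataSource ds = (DataSource) envCtx.lookup("jdbc/MyDerbyDB");
            // The Derby driver classes are loaded from Tomcat's lib, not from the WAR.
            return ds.getConnection();
        }
    }
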

Start a Solr server deployed inside Tomcat from within Eclipse

I am working on a Windows 7 machine. I want to implement search using Apache Solr with HBase tables as the data source. I have configured Apache Solr 4.3.1 in Tomcat 7. I am able to deploy it successfully by manually starting the Tomcat server.
When I try to start the Solr server from within my Spring MVC web application, it says the Solr server has started, but when I query Solr it returns the following without any errors:
page 0 of 0 containing UNKNOWN instances
From my research on Solr, the embedded Solr server is said to be unfit for production, so I need to use HttpSolrServer instead.
So could somebody please help clear my head and give me a solution?
Thanks in advance.
For production, you would be better off hosting Solr as a separate instance.
This keeps the responsibilities separate: web application and search engine.
Indexing is resource intensive and would affect the web application's behavior as well.
This can be catered for by a master/slave architecture, providing the best search performance.
An external instance can be scaled at will and will not impact the web application.
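A small sketch of querying such an external instance with SolrJ 4.x's HttpSolrServer rather than an embedded server; the base URL, core name, and field name are assumptions:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class SolrSearch {

        public static long countMatches(String text) throws SolrServerException {
            // Points at the Solr webapp deployed in the standalone Tomcat instance.
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8080/solr/collection1");
            QueryResponse response = solr.query(new SolrQuery("content:" + text));
            return response.getResults().getNumFound();
        }
    }

The web application then only holds a lightweight HTTP client, while indexing load stays on the separate Solr instance.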