I am using a POJO (non-EJB) class inside my JBoss server and creating multiple instances of it. Will JBoss create an object pool and manage this resource, or will it simply create as many objects as I instantiate?
Plain Old Java Objects are "unmanaged": the container has no way to know that when you say "new" you don't really mean "new".
EJBs have managed life cycles and are trivial to implement. If you want pooling, why not use one?
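As a rough sketch of that alternative (the bean and its method are invented for illustration), a stateless session bean is instantiated, pooled and reused by the container instead of being created with new:

import javax.ejb.Stateless;

// The container creates, pools and reuses instances of this bean;
// callers never construct it themselves.
@Stateless
public class DiscountService {

    public double applyDiscount(double price) {
        return price * 0.9;
    }
}

// A client obtains a pooled instance via injection rather than "new":
//
//   @EJB
//   private DiscountService discountService;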
I want to implement a project using kotlin/multiplatform consisting of a backend on the jvm and a web-app in js. The structure would be like this:
root
|- webapp (kotlin/js using kotlin-react)
|- shared (kotlin/multiplatform for shared data)
|- server (kotlin/jvm using micronauts)
The data classes used by the applications belong in the shared project, but to use JPA I need JVM annotations.
A solution would be to not use Kotlin data classes and to use inheritance in the JVM module instead. I also tried to implement the JPA annotations using the experimental @OptionalExpectation, but that went nowhere since:
they require non-annotation types when used with typealias, which can't be implemented with @OptionalExpectation;
letting the multiplatform annotations inherit from the JPA annotations isn't possible, since Kotlin doesn't yet support annotation inheritance.
Should I refrain from using the data-class feature and use inheritance or is there a more elegant way?
I think in general model classes shouldn't be shared between different applications, with one of the exceptions being applications that use the same data source.
If you want to share data structures between the Server and the Web app I would suggest creating DTO classes specifically for that.
A Data Transfer Object is an object that is used to encapsulate data, and send it from one subsystem of an application to another.
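For illustration, a rough sketch of that idea in Java terms (the class names are invented; in your setup the DTO would live in the shared module while the JPA entity stays in the server module):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Persistence model: JPA-annotated, server-only.
@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// DTO: plain data with no JPA dependency, safe to share with the web app.
class CustomerDto {
    public Long id;
    public String name;

    static CustomerDto from(Customer entity) {
        CustomerDto dto = new CustomerDto();
        dto.id = entity.getId();
        dto.name = entity.getName();
        return dto;
    }
}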
I found many examples online that put a call to Database.forConfig inside a trait, and each repository extends this trait. Some examples:
https://github.com/BBartosz/akkaRestApi/blob/master/src/main/scala/utils/DatabaseConfig.scala
https://github.com/Platoonhead/SlickWithScala/blob/master/src/main/scala/com/edu/knoldus/connection/ConnectedDbMysql.scala
https://github.com/cdiniz/slick-akka-http/blob/master/src/main/scala/utils/PersistenceModule.scala
Will it lead to creating too many instances of the DB client object, memory overhead, or other performance problems when there are many repositories?
Isn't it better to have one object that calls Database.forConfig and holds the reference to the database?
What is the best practice here?
Here is an example of how I did it:
https://github.com/joesan/plant-simulator/blob/master/app/com/inland24/plantsim/config/AppConfig.scala
So what I basically do is create a single instance of the object that calls the Slick API, and then I specify the number of threads (effectively the number of connections) that I want in the connection pool.
http://slick.lightbend.com/doc/3.0.0/database.html
I am developing a small (but growing) Java EE project based on the technologies EJB 3.1, JSF 2, CDI (Weld) and JPA 2, deployed on JBoss AS 7.1 Beta1.
As a starting point I created a Maven project based on the Knappsack Maven archetypes.
My architecture is basically the same provided by the archetype and as my project grows I think this archetype seems to be reaching its limits. I want to modify the basic idea of the archetype according to my needs. But let me first explain how the project is organized at the moment.
The whole project is built around Seam-like Home classes. The view references them (via EL in xhtml templates). Most of the Home classes are @Named and @RequestScoped (or, shortly, @Model) or @ConversationScoped, and Enterprise Java Beans are injected into them. Basically these (normally @Local) EJBs are responsible for the database access (some kind of DAOs) so that transactions are managed automatically by the container. So every DAO class has its own EntityManager injected via CDI. At the moment every DAO integrates aspects which logically belong together (e.g. there is a SchoolDao in the archetype which is responsible for creating Teachers, Students and Courses).
This of course results in growing DAOs which have no well-defined task and which become hard to maintain and hard to understand. And as a painful side effect, the risk of duplicate code grows.
As a consequence I want to break up this design by having only DAOs which are responsible for one specific task (a StudentDao, a TeacherDao and so on). And at this point I am in trouble. As each DAO has a reference to its own EntityManager, it cannot be guaranteed that something like the following will work (I think it never will :)
Teacher teacher = teacherDao.find(teacherId);
course.setTeacher(teacher);
courseDao.save(course);
The JPA implementation complains about a null value for column COURSE.TEACHER_ID (assuming Course has a non-nullable FK relationship to Teacher). Each DAO holds its own EntityManager: the teacher is managed by the one in the TeacherDao, but the other one in the CourseDao tries to merge the Course @Entity.
Maybe the archetype I used is not suitable for larger applications. But what would be an appropriate design for such an application then, IF the technologies I used are obligatory (EJB 3.1 for container-managed transactions [and later on other business-related stuff], JSF as the view technology, JPA as the database mapper and CDI as the 'must have because it's hip' :))?
Edit:
I now have an EntityManager injected in the base class all other DAO classes inherit from. So all DAOs use the same instance (the debugger shows the same object id), but I still have the problem that all entities I read from the database are immediately detached. This makes me wonder, as it means that there is either no container-managed transaction or the transaction gets closed immediately after the entity was read. Each DAO is a @Local @Stateless EJB. They are injected into my JSF beans (@Named and @RequestScoped), from where I want to make use of the CRUD operations. Is there anything I'm missing?
Having each DAO have its own EntityManager is a very bad design.
You should have an EntityManager per transaction/request and pass it to each DAO, have them all share the same one, or obtain it from the context.
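As a rough sketch of the container-managed variant (the DAO names follow the question; the service bean is invented for illustration), each DAO gets its EntityManager from the context via @PersistenceContext, and one transactional service method spans both calls so the Teacher stays managed:

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class TeacherDao {
    @PersistenceContext
    private EntityManager em;

    public Teacher find(Long id) {
        return em.find(Teacher.class, id);
    }
}

@Stateless
public class CourseDao {
    @PersistenceContext
    private EntityManager em;

    public void save(Course course) {
        em.persist(course);
    }
}

// Both DAO calls run inside this bean's single container-managed transaction,
// so the Teacher loaded above is still managed when the Course is persisted.
@Stateless
public class CourseService {
    @EJB
    private TeacherDao teacherDao;
    @EJB
    private CourseDao courseDao;

    public void assignTeacher(Long teacherId, Course course) {
        Teacher teacher = teacherDao.find(teacherId);
        course.setTeacher(teacher);
        courseDao.save(course);
    }
}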
I have a requirement to create Java classes dynamically and make them accessible to different JVMs across the network. I tried to use reflection and the Javassist tool, but nothing worked. Let me explain the scenario.
We are using the Coherence distributed cache. It has the power to do aggregation/filtering in parallel across the cluster. For example, if a [dynamically created] class has an amount variable and getAmount/setAmount methods, then when we execute Coherence queries the processing will run in parallel across the cluster.
I tried to create classes at run time using Javassist and reflection. I am able to access them from a single JVM, but when I tried to access the same class from another JVM [through the Coherence cluster], I got a class-not-found exception [as the remote JVM has no idea of this class]. I can overcome this by creating the same class dynamically on the remote JVM as well and accessing its methods, but Coherence's built-in methods/functions are not able to find the class.
Could someone help me with this?
A new class that gets created must be available to all nodes of the cluster. This means the newly created bytecode must get onto each node JVM's classpath/classloader. The simplest approach in my mind would be to put the generated classes on a shared network drive and have all JVMs point to that shared network location in their classpaths. Each time a JVM finds a reference to the new class, it should load it dynamically from the network share.
You could take the byte array created by Javassist, send it over the wire, and load it with a custom ClassLoader on the receiving side. That way the class will be present on all JVMs.
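A minimal sketch of that idea (the class and names are invented; wiring it into Coherence's own classloading is a separate concern): generate the bytes once with Javassist, ship the byte[] to the other JVM, and define the class there with a small ClassLoader.

// On the receiving JVM: turn the transferred bytes into a usable Class.
public class ByteArrayClassLoader extends ClassLoader {

    public ByteArrayClassLoader(ClassLoader parent) {
        super(parent);
    }

    public Class<?> defineFromBytes(String className, byte[] bytecode) {
        // defineClass registers the class in this loader on this JVM.
        return defineClass(className, bytecode, 0, bytecode.length);
    }
}

// Usage, assuming 'bytecode' arrived over the wire
// (e.g. produced by ctClass.toBytecode() on the sending side):
//
//   ByteArrayClassLoader loader =
//       new ByteArrayClassLoader(Thread.currentThread().getContextClassLoader());
//   Class<?> dynamicClass = loader.defineFromBytes("com.example.DynamicEntry", bytecode);
//   Object instance = dynamicClass.getDeclaredConstructor().newInstance();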
I've just started learning Google Web Toolkit and finished writing the Stock Watcher tutorial app.
Is my thinking correct that if one wants to persist a business object (like a Stock) using JDO and send it back and forth to/from the client over RPC, then one has to create two separate classes for that object: one with the JDO annotations for persisting it on the server, and another which is serialisable and used over RPC?
I notice the Stock Watcher has separate classes and I can theorise why:
Otherwise the GWT compiler would try to generate JavaScript for everything the persisted class references, like JDO and com.google.blah.users.User, etc.
Also, there may be logic on the server-side class which doesn't apply to the client, and vice versa.
I just want to make sure I'm understanding this correctly. I don't want to have to create two versions of all my business object classes which I want to use over RPC if I don't have to.
The short answer is: you don't need to create duplicate classes.
I recommend that you take a look at the following Google Groups discussion on the gwt-contributors list:
http://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/3c768d8d33bfb1dc/5a38aa812c0ac52b
Here is an interesting excerpt:
If this is all you're interested in, I described a way to make GAE and GWT-RPC work together "out of the box". Just declare your entities as:
@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "false")
public class MyPojo implements Serializable { }
and everything will work, but you'll have to manually deal with re-attachment when sending objects from the client back to the server.
You can use this option, and you will not need a mirror (DTO) class.
You can also try Gilead (formerly hibernate4gwt), which takes care of some of the problems of serializing enhanced objects.
Your assessment is correct. JDO replaces instances of Collections with its own implementations, in order to sniff when the object graph changes, I suppose. These implementations are not known by the GWT compiler, so it will not be able to serialize them. This happens often for classes that are composed of otherwise GWT-compliant types but carry JDO annotations, especially if some of the object properties are Collections.
For a detailed explanation and a workaround, check out this pretty influential essay on the topic: http://timepedia.blogspot.com/2009/04/google-appengine-and-gwt-now-marriage.html
I finally found a solution. Don't change your object at all; for the listing, just do it this way:
List<YourCustomObject> secureList = (List<YourCustomObject>) pm.newQuery(query).execute();
return new ArrayList<YourCustomObject>(secureList);
The actual problem is not in serializing your object... the problem is serializing the Collection class returned by the query, which is Google's own implementation and which GWT is not able to serialize.
You do not have to create two versions of the domain model.
Here are two tips:
Use a String-encoded key, not the App Engine Key class.
pojo = pm.detachCopy(pojo)
...will remove all the JDO enhancements.
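Roughly like this (a sketch; pm is your PersistenceManager, and the entity class and key variable are invented):

// Load the entity, then hand a detached copy (JDO enhancements stripped) to GWT-RPC.
Stock attached = pm.getObjectById(Stock.class, encodedKey);
Stock detached = pm.detachCopy(attached);
return detached;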
You don't have to create separate classes at all; in fact you're better off not doing it. Your JDO objects should be plain POJOs anyway, and should never contain business logic. That's for your business layer, not your persistent objects themselves.
All you need to do is include the source for the annotations you are using, and GWT should compile your class just fine. Also, you want to avoid using libraries that GWT can't compile (like things that use reflection, etc.), but in all the projects I've done this has never been a problem.
I think a better format for sending objects through GWT is JSON. In this case the server would send a JSON string, which then has to be parsed on the client. The advantage is that the final JavaScript rendered in the browser is smaller, so the page loads faster.
Secondly, to send objects through GWT the objects must be serializable, which may not be the case for all objects.
Thirdly, GWT has built-in functions to handle JSON, so there are no issues on the client end.
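For example, on the client side (a sketch using GWT's com.google.gwt.json.client package; the class and field name are invented, and the server is assumed to return a JSON string):

import com.google.gwt.json.client.JSONObject;
import com.google.gwt.json.client.JSONParser;

public class QuoteParser {

    // Parse the JSON text sent by the server and pull out the field we need.
    static double parsePrice(String jsonText) {
        JSONObject quote = JSONParser.parseStrict(jsonText).isObject();
        return quote.get("price").isNumber().doubleValue();
    }
}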