What are best practices for using Hibernate's hbm2java? - eclipse

I am using Hibernate, Maven, and Eclipse (STS build) to build a project. I'm using hbm.xml files to specify my schema. I want to use Hibernate's hbm2java to generate my model classes. I have it working well and generating the kind of code I want.
It runs perfectly from the command line, generating the model code and then building and testing as expected.
However, Eclipse seems unable to handle it. It will periodically "lose its mind" and be unable to resolve very simple imports and classes referenced in my DAO classes, which are hand-coded. The things it can't find are classes like HibernateUtil. Ironically, it appears to not have any trouble finding the model classes.
The unresolved classes are in the target/classes/blah-blah folder at the end of the run, so they're apparently getting copied to the right place.
In a "continuous integration" environment, is it best to generate the sources once, commit them to my version control, and then disable code gen? Or is it possible to have the code generated each time, thus ensuring I pick up any database changes without human intervention?

IMHO, entities should be the core of your application, and should be designed, implemented and documented with care. They're supposed to be objects, with methods encapsulating behavior. Having them autogenerated is an absurdity, IMO.
Generating them at the very beginning might be an option to get you started, but once they've been generated, hand-craft them and don't generate them again. Add necessary properties and methods as the schema changes, and refactor existing code.
BTW, I really prefer using annotations for the mapping, because it's less verbose, less error-prone, and all the information is in a single place.
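For illustration, a hand-crafted entity mapped with annotations might look roughly like the sketch below (the User class, its table, and its columns are made up for the example):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "first_name")
    private String firstName;

    @Column(name = "last_name")
    private String lastName;

    // Behavior lives with the data instead of in a generated shell.
    public String getDisplayName() {
        return firstName + " " + lastName;
    }

    // getters and setters omitted for brevity
}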

Try this:
From the command line, go to your project directory, where the project's pom.xml is located, and run:
mvn eclipse:clean eclipse:eclipse
If Maven says it is unable to find the eclipse plugin, then try:
mvn eclipse:install-plugin
first, and then try the command above again.
This way, all the Maven and project dependencies will be resolved at the Eclipse level as well.
Let me know if this is not what you were looking for.

Related

sbt-assembly: Generate a minimal JAR file

I've been using sbt-assembly to generate a standalone JAR file for my Scala project. However, I would like to reduce the size of my JAR file (it's currently around 150 MB and there's definitely room for improvement there).
I used the following command to list the contents of the JAR file that's produced:
jar tf <JAR file>
This revealed that there are lots of classes in the generated JAR file that are not used in the project. I believe these classes get included as part of third-party JARs.
Questions
(a) Is there an option that I can use to instruct sbt-assembly to generate a minimal JAR file that does not include the third-party classes that are not used in my project?
(b) I could use AssemblyStrategy to manually specify which files need to be excluded. Is this a sound strategy? I'm a bit concerned that with this approach the JAR file might end up throwing unexpected ClassNotFound exceptions.
Thanks in advance.
It's not easy to say what's used in your project and what is not. If you include a dependency in a project, it might bring a few other ones in, and those child dependencies might in turn require their own dependencies, and so on.
Generally, if you include a dependency in your project, you intend to use it, and the author of that dependency usually did the same with its own dependencies. Thus, there is usually not much you can throw away: it's there for a reason. There are a couple of cases where this is not true:
The dependency's author includes additional dependencies that are only used in certain settings, and those settings do not apply to your project.
You are using a mega-dependency when you actually need only one of its libraries/features.
There are counterexamples to this as well: ScalaTest does not ship pegdown for generating HTML test reports, because you usually don't need it. But it might be needed if you use the -h flag to generate HTML.
Imagine the case where you use Apache Tika for PDF parsing. Tika wraps PDFBox to do the actual parsing, so in that case you don't need the bloat of all the other libraries that parse MS document formats. The best thing is not to exclude files manually via sbt's exclude or sbt-assembly rules, because there is a risk you get it wrong and hit a runtime class-loading exception. Instead, use the right dependency, such as PDFBox, directly. Unfortunately, in many cases this means a lot of manual work to figure out exactly which dependencies you need, so it's your choice: easy and fat JAR, or painful and lean.
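For instance, if all you need from Tika is PDF text extraction, depending on PDFBox directly could look like this sketch (assuming PDFBox 2.x; the class name is made up for the example):

import java.io.File;
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

public class PdfTextExtractor {

    // Extracts plain text from a PDF using PDFBox alone,
    // without pulling in the rest of Tika's parser dependencies.
    public static String extractText(File pdf) throws IOException {
        try (PDDocument document = PDDocument.load(pdf)) {
            return new PDFTextStripper().getText(document);
        }
    }
}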
There are two ways to exclude dependencies:
Exclude transitive dependencies with exclude (see the sbt docs on exclusion rules).
Don't use the top-level dependency at all, and manually add only the sub-dependencies you need.
OK, one more, less fun, option: mark dependencies as provided and make sure the libraries are copied to your target environment and are on the classpath there. If you have many JARs using the same libraries, this helps to share them.
You can visualize your dependency tree with this plugin: https://github.com/jrudolph/sbt-dependency-graph. It's very helpful when trying to figure out what you are using and what you can remove. There are some tools like tattletale and loosejar that people suggest but I haven't tried them. If anyone has experience with those please share.
What you might want to look at are tree shakers.
For Java there's the following (I have not tried or used it):
http://proguard.sourceforge.net/

Eclipse Scala IDE: can't open hierarchy for standard library classes

I have exactly the same problem as in this question: Eclipse: Using "Open Declaration" ... in a Scala project
However, I'm using the latest Scala IDE, version 3.0.2 (I downloaded the Eclipse bundle from the site), and I would assume such basic functionality works by now; apparently it's me who has something misconfigured.
I have created a new Scala project. Then I open some standard library class/trait/whatever, let's say scala.util.parsing.combinator.JavaTokenParsers. The source is neatly displayed, but when I try to show class hierarchy, I get the message: The resource is not on the build path of a Java project.
Also, searching for references etc. won't work.
I guess it is a matter of properly configuring the build path? Or maybe I should somehow attach Scala library sources to my project? But I can see the source, so aren't they attached already?
Here is the snapshot of my project configuration:
UPDATE:
By playing a bit with setting and resetting build path stuff, I managed to get rid of the pop-up warning, but the class hierarchy comes up empty, and when searching for references I only get hits from my own sources, nothing from the standard library.
In another workspace I also tried randomly adding and removing scala-library JARs and almost got it working, but the type hierarchy comes up with only super-classes, without any sub-classes (which renders it quite useless). Searching for references works OK, though.
Funny thing, I cannot make it work in my original workspace...
Gotta love Eclipse.
Your build path is not configured properly.
If you take a look under Scala Library [...] at scala-library.jar and can only see one top-level package, scala, then something is wrong: there should be numerous other packages besides that. (Ruled out.)
I would recommend you follow these steps:
Right-click the project > Build Path > Java Build Path > Libraries, and make sure that the correct library is referenced there.
If it is the one you need, try removing the library and adding it again, then clean and refresh the project. Also try this step in a fresh workspace (something must have messed up this one).
Lastly, go to the path D:\Eclipse For Scala\configuration\org.eclipse.osgi\bundles\286\1\.cp\lib and verify the sizes of the JARs there. There should be six JARs, and the scala-library JAR should be around 6.8 MB. If it is smaller, consider re-downloading.

How to Create a Spring+Primefaces+Hibernate (no maven) project in eclipse?

I am new to J2EE. I would like to create a Spring+Primefaces+Hibernate project.
I googled for it.
But all of the example projects I found on the internet use Maven. My questions are:
Is it possible to create a Spring + PrimeFaces + Hibernate project in Eclipse without Maven? If not, what is the need for Maven?
How do I add the JAR files for PrimeFaces, Spring, and Hibernate in Eclipse?
Will the Spring controller XML file (Spring context or dispatcher servlet) be created automatically, or do I write it by hand? I mean Spring MVC.
Will the Hibernate mapping file also be created automatically, or by hand?
If possible, can anyone point me to a tutorial (preferably a video) that implements the same?
I am using tomcat 7 and Eclipse - kepler.
Any help is appreciated.
If this is downvoted, please also specify the reason.
Although it's not a must to use Maven or any other build tool, you should strongly consider using one. Eclipse Kepler has Maven support by default, but feel free to use other build tools (Gradle, Ant) or none (see 2.). Maven and the other build tools remove the headache of scaffolding, searching for dependencies (external JARs like spring-mvc, hibernate, some DB drivers), and even deploying applications to a server.
If you choose not to use a build tool, you have to get your project dependencies manually and enter them into your project's build path (right-click -> Build Path, then enter their location). As you have noticed, this step can be really, really time consuming...
No, you have to create the configuration manually, unless you start from another project that already has what you need; again, this can get easier with a build tool (Maven archetypes, for example).
The same as 3.
You won't have a hard time finding resources about these technologies; they are used practically everywhere, and I think the Spring team has some videos on their YouTube channel.
Hope that helps a little!
1:* The fundamental difference between Maven and Ant is that Maven's design regards all projects as having a certain structure and a set of supported task work-flows (e.g., getting resources from source control, compiling the project, unit testing, etc.). While most software projects in effect support these operations and actually do have a well-defined structure, Maven requires that this structure and the operation implementation details be defined in the POM file. Thus, Maven relies on a convention on how to define projects and on the list of work-flows that are generally supported in all projects.
This design constraint resembles the way that an IDE handles a project, and it provides many benefits, such as a succinct project definition, and the possibility of automatic integration of a Maven project with other development tools such as IDEs, build servers, etc.
But one drawback to this approach is that Maven requires a user to first understand what a project is from the Maven point of view, and how Maven works with projects, because what happens when one executes a phase in Maven is not immediately obvious just from examining the Maven project file. In many cases, this required structure is also a significant hurdle in migrating a mature project to Maven, because it is usually hard to adapt from other approaches.
In Ant, projects do not really exist from the tool's technical perspective. Ant works with XML build scripts defined in one or more files. It processes targets from these files and each target executes tasks. Each task performs a technical operation such as running a compiler or copying files around. Targets are executed primarily in the order given by their defined dependency on other targets. Thus, Ant is a tool that chains together targets and executes them based on inter-dependencies and other Boolean conditions.
The benefits provided by Ant are also numerous. It has an XML language optimized for clearer definition of what each task does and on what it depends. Also, all the information about what will be executed by an Ant target can be found in the Ant script.
A developer not familiar with Ant would normally be able to determine what a simple Ant script does just by examining the script. This is not usually true for Maven.
However, even an experienced developer who is new to a project using Ant cannot infer what the higher level structure of an Ant script is and what it does without examining the script in detail. Depending on the script's complexity, this can quickly become a daunting challenge. With Maven, a developer who previously worked with other Maven projects can quickly examine the structure of a never-before-seen Maven project and execute the standard Maven work-flows against it while already knowing what to expect as an outcome.
It is possible to use Ant scripts that are defined and behave in a uniform manner for all projects in a working group or an organization. However, as the number and complexity of projects rises, it is also very easy to stray from the initially desired uniformity. With Maven this is less of a problem, because the tool always imposes a certain way of doing things.
2:* You have to download all the required JAR files for Hibernate/Spring/PrimeFaces from the internet and place them on your project build path or in the lib folder.
3:* The Spring configuration files need to be created by you, so that you understand the concepts.
4:* Hibernate mapping files can be created using Hibernate's reverse-engineering tools, which generate the hbm files for you, or you can use annotations if you don't want XML.
I suggest you first create a sample Java project in Eclipse, then download all the required JARs and place them in the lib folder. Then configure Hibernate in the project and its Spring integration.
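As a rough illustration of the "configure Hibernate in the project" step, a classic hand-rolled bootstrap class might look like this (a sketch only, assuming a hibernate.cfg.xml on the classpath and Hibernate 3-style APIs):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Builds a single SessionFactory from the hibernate.cfg.xml found on the classpath.
public final class HibernateUtil {

    private static final SessionFactory SESSION_FACTORY;

    static {
        try {
            SESSION_FACTORY = new Configuration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    private HibernateUtil() {
    }

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }
}

In a Spring setup you would typically let Spring build the SessionFactory for you instead, but the idea behind the configuration step is the same.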

ant deployment issues

I am looking to make our deployments here not suck, and I need some help. If you can help me with these few things, I owe you a beer.
Right now, whenever I make a change that isn't to the JSPs, I need to do a clean (including Tomcat), otherwise my change doesn't take. This is really annoying.
Any clues as to what I can change to make it work?
My current build is really simple: just the regular old javac, war, deploy.
One thing that isn't done is a separate build dir; the project itself contains a WEB-INF, the javac is done in place, and then the war task excludes all the .java resources and wars up the project.
edit:
I am looking to fix this problem with the least amount of effort, so while switching to Maven and learning how to use it might solve this problem, it would create another one ;)
You've already identified some of the weaknesses in your current build.
The easiest way that I can suggest to clean it up would be to start with the directory structure.
I highly recommend using the Maven directory structure; I would go further and suggest using Maven as a build tool instead of Ant, though for some folks that remains open to debate.
The Maven directory structure has been well thought out. I really like working on projects that use it, because the convention saves me a lot of time: I know from previous experience where to find the application components (see the layout sketch just after this list):
java source
unit test source
resources etc.
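For reference, the conventional Maven layout looks roughly like this:

project/
  pom.xml
  src/main/java/        -- application source
  src/main/resources/   -- configuration, mapping files, etc.
  src/main/webapp/      -- web content (for war packaging)
  src/test/java/        -- unit test source
  src/test/resources/   -- test-only configuration
  target/               -- generated build output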
Also, by following the convention, the Maven plugins work with less configuration required.
Another useful advantage I get from working on Maven-based projects is good code metrics to measure the health of the application. Various reports are available as Maven plugins that will give you new insight into your codebase, including:
checkstyle
pmd
findbugs
and more.
Created a build directory where everything gets copied before the build.
Added some flags to avoid copying things that rarely change, like images (and to not remove them on clean).
Started using the ant-reload task after deploying code.
Now I don't need to restart Tomcat on every build, and the build takes much less time.

eclipse, one classpath for compiling, another for launching

example:
For logging, my code uses log4j, but other JARs my code depends on use slf4j instead, so both JARs must be on the build path. Unfortunately, it is now possible for my code to directly use (depend on) slf4j, either through content assist or some other developer's changes. I would like any use of slf4j in my own code to show up as an error, but my application (and tests) will still need it on the classpath when running.
explanation:
I'd like to find out if this is possible in Eclipse. This scenario happens often for me: I'll have a large project that uses a lot of third-party libraries, and of course those third-party JARs have their own dependencies as well. So I have to include all the dependencies in the classpath ("build path" in Eclipse) for the application and its tests to compile and run (from within Eclipse).
But I don't want my code to use all of those jars, just the few direct dependencies I've decided upon myself. So if my code accidentally uses a dependency of a dependency, I want it to show up as a compilation error. Ideally, as class not found, but any error would do.
I know I can manually configure the classpath when running outside of Eclipse, and even within Eclipse I can modify the classpath for a specific class I'm running (in the run configurations), but that's not manageable if you run a lot of individual test cases or have a lot of main() classes.
It sounds like your project has enough dependency relationships that you might consider structuring it with OSGi bundles (plug-ins). Each bundle gets its own classloader and gets to specify what bundles (and optionally what version ranges, etc.) it depends on, what packages it exports, whether it re-exports stuff from its dependencies, etc.
Eclipse itself is structured out of Eclipse plug-ins and fragments, which are just OSGi bundles with an optional tiny bit of additional Eclipse wiring (plugin.xml, which is used to declare Eclipse "extension points" and "extensions") attached. Eclipse thus has fairly good tooling for creating and managing bundles built-in (via the Plug-in Development Environment). Much of what you find out there may lead you to conflate "OSGi bundle" with "plug-in that extends the Eclipse IDE", but the two concepts are quite separable.
The Eclipse tooling does distinguish rather clearly (and sometimes annoyingly, but in the "helpful medicine" way) between the bundles in your build environment vs. the bundles that a particular run configuration includes.
After a few years of living in OSGi land, the default Java "flat classpath" feels weird and even kind of broken to me, largely because (as you've experienced) it throws all JARs into one giant arena and hopes they can sort of work things out. The OSGi environment gives me a lot more control over dependency relationships, and as a "side effect" also naturally demands clarification of those relationships. Between these clear declarations and the tooling's enforcement of them, the project's structure is more obvious to everyone on the team.
if my code accidentally uses a dependency of a dependency, I want it to show up as a compilation error. Ideally, as class not found, but any error would do.
Put your code in one plug-in, your direct dependencies in other plug-ins, their dependencies in other plug-ins, etc. and declare each plug-in's dependencies. Eclipse will immediately do exactly what you want. You won't be offered dependencies' dependencies' contents in autocompletes; you'll get red squiggles and build errors; etc.
Why not use access rules to keep your code clean?
It looks like this would be better managed with Maven, integrated into Eclipse with m2eclipse.
That way, you can execute only part of the Maven build lifecycle, and you can manage a separate set of dependencies per build step.
In my experience it helps to be more restrictive. I made the team fill out (paper) forms explaining why each JAR is needed and under what license...
and they would rather type in a few lines of code than drag along 20 JARs just to open a file with one line of code, or for another fancy 'feature'.
Using Maven can help for a while, but the first time you spot JARs with names like nightly-build or snapshot, you will know you're in JAR hell.
Conclusion: choose dependencies well.
Would using the slf4j-log4j12 binding JAR be useful? It allows code written against slf4j to have its actual logging go to log4j.
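To illustrate what the binding does at runtime (a sketch; the class name is made up, and it assumes the slf4j-log4j12 binding is on the runtime classpath): your own code keeps calling log4j directly, while the slf4j calls made by third-party JARs are routed into the same log4j configuration.

import org.apache.log4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingBridgeDemo {

    // Direct log4j usage, as in the application's own code.
    private static final Logger LOG4J = Logger.getLogger(LoggingBridgeDemo.class);

    // slf4j usage, as in the third-party JARs; with slf4j-log4j12 on the
    // runtime classpath these calls end up in log4j as well.
    private static final org.slf4j.Logger SLF4J =
            LoggerFactory.getLogger(LoggingBridgeDemo.class);

    public static void main(String[] args) {
        LOG4J.info("logged through log4j directly");
        SLF4J.info("logged through slf4j, bridged to log4j at runtime");
    }
}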