I've been trying to do a Maven build of this project. It downloads a lot of artifacts from a repository and builds fine on one account (a local account). However, when I log into a network account (same computer) with the same Eclipse/Maven installation, the same workspace, and the same environment variables (I only have two user-defined ones), the build fails because it tries to read from a directory that doesn't exist (C:/schema/blah/blah/blah) when it should be reading from C:/workspace/Echo/target/main/xsd/blah/blah/blah.
Is there something inside Eclipse or Maven that stores per-user settings that I could look into? The code for this project would be nearly impossible to post, since it involves ~30 jar files. I'm at a loss.
Thank you in advance to anyone who has ideas or suggestions
~Ryan
Check the .m2/settings.xml file in both user home directories. You may want to set <localRepository> to the same (shared) location so that Maven does not download the same artifacts from the remote repositories twice.
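For example, a minimal settings.xml along those lines could look like this (the shared path is just an example); put the same setting in the settings.xml under each account's .m2 folder:

    <settings>
      <!-- Example only: point both accounts at the same shared local repository -->
      <localRepository>C:/maven-repo</localRepository>
    </settings>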
This is my first time using SVN, or for that matter any version control. So, I've been able to check out a Selenium project on my local machine. The source code was working fine on the other machine (my friend's), but on my machine it throws hundreds of errors such as "BeforeClass cannot be resolved to a type", "Assert cannot be resolved to a type", etc.
I do know that this error could be because the required Selenium jars may not have been set up on the build path. But I can see all of these Selenium jars in the "lib" folder.
So, I want to understand whether I need to reconfigure the build path. By the way, shouldn't the project settings come along by default? The same code works perfectly on the other machine, which means the build path must have been configured there.
I know it's a very basic question, but I assure you that I'm a novice coder.
Thanks for your help.
Note: I'm using Eclipse IDE
Eclipse's project configuration files (e.g., .project, .classpath, .settings) are designed to be checked in with the rest of the project. If they are, then whenever the project is checked out into a workspace, Eclipse will automatically use them to configure the project properly. Check whether your friend checked in those files; if not, ask him to.
It looks like you did not add the Eclipse project metadata files (.project, .classpath) and the .settings folder to your source control system, so Eclipse doesn't know what your build path is, or even whether it is a Java project.
Go back to your other computer and look for the following files in your original project root...
.project
.classpath
.settings/*
Make sure all of them are present in your source control system.
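For illustration, the .classpath that Eclipse generates for such a project looks roughly like this (the jar names here are made up; the real file on your friend's machine lists the actual Selenium jars):

    <?xml version="1.0" encoding="UTF-8"?>
    <classpath>
      <classpathentry kind="src" path="src"/>
      <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
      <!-- "lib" entries like these are what put the jars in the lib folder on the build path -->
      <classpathentry kind="lib" path="lib/selenium-java-2.25.0.jar"/>
      <classpathentry kind="lib" path="lib/junit-4.10.jar"/>
      <classpathentry kind="output" path="bin"/>
    </classpath>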
I have a Scala project that I share via git between two (Windows) machines. I have them set up with SBT and sbt-eclipse so I can edit and test within Eclipse or build and test from the command line.
Unfortunately my user name (and therefore the user profile directory) is different on the two machines. This means that when sbt fetches dependencies, it puts them under different base directories on the two platforms. This wouldn't be a problem except that the full pathname is hardcoded into the Eclipse .classpath file generated by sbt-eclipse, so I have to rerun the 'eclipse' task whenever I do a pull on my 'current' machine.
This must be even worse for others who do this kind of thing as a team. How is this normally handled? I would prefer to do a pull on whatever machine, even from within Eclipse, and get started right away.
I would strongly recommend removing the sbt-eclipse-generated files (and all other generated files) from git. Each machine will have its own .classpath file, generated on that machine and for that machine, and it can be regenerated whenever you want or need. Your build.sbt and project files should be in git, so when you pull onto each machine it will have the updated configuration, and you only need to run sbt eclipse when a dependency changes.
Really, you should always avoid having generated files in source control. Have only the important stuff in your git project, and generate the rest as necessary.
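A minimal .gitignore sketch along those lines (the exact entries depend on your project layout):

    # Eclipse files generated by sbt-eclipse
    .classpath
    .project
    .settings/

    # sbt build output
    target/
    project/target/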
I'm using Eclipse 3.7, and my local Maven repository has a bunch of jars in it. I tried to build the project on another computer that is on a different network and has *.jar download restrictions. I will not be able to get the restrictions lifted. Here is an example error:
e.g. Access denied to http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-compiler-plugin/2.3.2/maven-compiler-plugin-2.3.2.jar Error code 403, Forbidden (Content blocked ...)
It would be very nice if I could simply copy my .m2\repository\ folder into the .m2\repository\ on the other computer -- both are running Windows. I really don't want to manually install hundreds of jars on the other computer.
Is xcopy for .m2\repository\ supported?
Yes, you can copy the .m2/repository folder to any other location/storage device you want.
Maven only needs to know the path to the local repository (by default it's ${user.home}/.m2/repository). You can change that path in Maven's settings.xml (see the settings reference) if you decide to keep the artifacts in a different location.
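For example, a straightforward way to stage the copy on Windows (paths are illustrative) is something like:

    rem /E copies all subdirectories, including empty ones; /I treats the destination as a directory
    xcopy "C:\Users\you\.m2\repository" "D:\transfer\repository" /E /I

Drop the copied folder into the other machine's .m2\repository (or wherever its <localRepository> points) and Maven should pick the artifacts up from there.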
Every time I start with a fresh new workspace, m2eclipse downloads nexus-maven-repository-index.gz from the Maven central repository.
This is good.
But sometimes I just want to start a new workspace and not wait for that download.
I tried copying the whole .metadata directory from an old workspace to the new one, but the list of Maven artifacts is still empty.
Is there a way I can cache it? Or at least download the file once and then copy/extract/repackage it so that m2eclipse thinks it has already downloaded it and lets me search for Maven artifacts?
Or, the short version of the question:
where and in what format is the "nexus-maven-repository-index.gz" file stored in the workspace?
The index is stored in the plugin's metadata location, i.e.
[workspace root]/.metadata/.plugins/org.maven.ide.eclipse/nexus
There will be one folder for each remote repository index in use.
You can also configure the plugin not to download the index at startup. Go to Window->Preferences->Maven and uncheck "Download repository indexes at startup"; you'll have to remember to re-enable it to get any updates, though.
Update:
I just verified that copying the metadata works. M2Eclipse will still contact the repository to download the deltas (assuming the above option is checked), but that only takes a few moments as it is only downloading the deltas.
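In other words, seeding a new workspace from an old one should just be a copy of that folder; something along these lines (the workspace paths are placeholders):

    rem Copy the cached index from an old workspace into a new one
    xcopy "C:\old-workspace\.metadata\.plugins\org.maven.ide.eclipse\nexus" "C:\new-workspace\.metadata\.plugins\org.maven.ide.eclipse\nexus" /E /I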
Depending on your situation, you may want to try running a repository manager such as Artifactory or Nexus.
Configure it as the one true source of everything in Maven; that way the initial download should only come from a local location and hence be fast.
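A sketch of what that can look like in settings.xml, assuming a Nexus instance at http://localhost:8081 (the URL and group name are examples):

    <settings>
      <mirrors>
        <!-- Route every repository request through the local repository manager -->
        <mirror>
          <id>local-nexus</id>
          <mirrorOf>*</mirrorOf>
          <url>http://localhost:8081/nexus/content/groups/public</url>
        </mirror>
      </mirrors>
    </settings>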
There is a similar problem at my company: due to network/security restrictions, the index file can't be downloaded by m2eclipse.
I tried using Apache to redirect maven.org to my localhost and serve the index from there (it should work, you can try it). But again, the network restrictions disable local, PC-level DNS resolution.
The last resort is to download nexus-maven-repository-index.zip, extract everything inside the zip except the timestamp file, and replace the contents of the corresponding index folder for the central repository with it.
It works. :-D
What are good ways of dealing with the issues surrounding plugin code that interacts with outside system?
To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server.
I can see how you could simply check out a copy of your code directly under the web directory on a development machine, but how would you then also integrate this with the IDE?
I am making the assumption here that all the code for the plugin is located under a single directory.
Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory?
Short answer - I do have my development and production servers check out the appropriate directories directly from SVN.
For your example:
Develop on the IDE as you would normally, then, when you're ready to test, check in to your local repository. Your development webserver can then have that directory checked out and you can easily test.
Once you're ready for production, merge the change into the production branch, and do an svn update on the production webserver.
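Concretely, that workflow might look something like this (the repository URL and server paths are hypothetical):

    # On the development webserver: check the plugin out into the live plugins directory
    svn checkout http://svn.example.com/repo/myplugin/trunk /var/www/wordpress/wp-content/plugins/myplugin

    # On the production webserver (working copy checked out from the production branch),
    # after merging the change:
    svn update /var/www/wordpress/wp-content/plugins/myplugin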
Where I work some folks like to use the FileSync Plugin for Eclipse for this purpose, though I have seen some oddities with that plugin where files in the target directory occasionally go missing. The whole structure is:
Ant task to create the target directory at the desired location (via copy commands, mostly; a minimal sketch follows this list)
FileSync Plugin configured to keep files in sync between development location and target location as you code (sync the Eclipse output folder to a location in the Web server's classpath, etc.)
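For the first step, a bare-bones sketch of such an Ant target might look like this (the directory names are made up):

    <project name="sync-plugin" default="deploy">
      <!-- Example: copy the built plugin into the server's deployment directory -->
      <target name="deploy">
        <mkdir dir="/var/www/app/plugins/myplugin"/>
        <copy todir="/var/www/app/plugins/myplugin">
          <fileset dir="build/output"/>
        </copy>
      </target>
    </project>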
Of course, symlinks may work better on systems that have good support for symlinks :-)
To me, adding a symlink pointing to your development folder seems like a tidy solution to the problem.
If the main project is on a different machine/webserver, you could use something like sshfs to mount your development directory into the right place on the webserver.
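For instance, either of these would do it (paths and hostnames are hypothetical):

    # Symlink approach (development directory on the same machine as the webserver):
    ln -s ~/dev/myplugin /var/www/wordpress/wp-content/plugins/myplugin

    # sshfs approach, run on the webserver, mounting the development directory in place:
    sshfs devuser@devbox:/home/devuser/dev/myplugin /var/www/wordpress/wp-content/plugins/myplugin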