I'm developing a wizard that implements the org.eclipse.pde.ui.IPluginContentWizard interface. Thus it gets added as a plug-in project template at the end of the plug-in project wizard. All files are created just fine, but there is one error in the project: the plug-in is not declared as a singleton, which it must be when it contributes to extension points.
How do I do that within the wizard? I figured it needs to be done in performFinish(IProject project, IPluginModelBase model, IProgressMonitor monitor), but neither the project nor the model gives me a way to do so.
Edit: For future readers: my mistake was that I didn't add the extension via the API but generated the plugin.xml by hand. As a result, none of the mechanisms in the background did their job, and thus the singleton directive wasn't set.
Doing it that way would take too long; let's use more of the PDE API instead.
First, define the template section:
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.pde.core.plugin.IPluginBase;
import org.eclipse.pde.ui.templates.OptionTemplateSection;

public class YourTemplateSection extends OptionTemplateSection {

    // implement the remaining abstract methods according to your needs

    @Override
    protected void updateModel(IProgressMonitor monitor) throws CoreException {
        IPluginBase plugin = model.getPluginBase(); // "model" is inherited from the template section
        // build your extension here as needed
        plugin.add(extension); // here the "singleton" directive will be set
    }
}
Then use the section in your wizard:
import org.eclipse.pde.ui.templates.ITemplateSection;
import org.eclipse.pde.ui.templates.NewPluginTemplateWizard;

public class YourContentWizard extends NewPluginTemplateWizard {

    @Override
    public ITemplateSection[] createTemplateSections() {
        return new ITemplateSection[] { new YourTemplateSection() };
    }
}
In case anyone makes the same rookie mistake as me, I wanted to post the solution I came up with after revisiting the project later:
Don't create the plugin.xml manually; use the PDE API of the plug-in model to add extensions.
In the org.eclipse.pde.ui.IPluginContentWizard implementation's performFinish(...) method, do this:
try {
    // create the extension through the plug-in model's factory
    IPluginExtension extension = model.getExtensions().getModel().getFactory().createExtension();
    extension.setPoint("org.eclipse.elk.core.layoutProviders");
    IPluginElement provider = model.getPluginFactory().createElement(extension);
    provider.setName("provider");
    provider.setAttribute("class", id + "." + algorithmName + "MetadataProvider");
    extension.add(provider);
    // adding the extension via the model lets PDE update the manifest,
    // including the singleton directive
    model.getExtensions().add(extension);
} catch (CoreException e) {
    e.printStackTrace();
}
I need to find all the Java types (class, interface, enum, annotation) in all projects' source directories of the workspace, given a name as a String. For example, the input "Test" would return all the Java types whose names start with "Test" defined in any project's source directories. I tried to handle this with JDT's SearchEngine as follows:
SearchPattern pattern = SearchPattern.createPattern("Test",
IJavaSearchConstants.TYPE, IJavaSearchConstants.DECLARATIONS,
SearchPattern.R_PREFIX_MATCH);
IJavaSearchScope scope = SearchEngine.createWorkspaceScope();
SearchRequestor requestor = new SearchRequestor() {
public void acceptSearchMatch(SearchMatch match) {
System.out.println("Found: " + match.getElement());
}
};
SearchEngine searchEngine = new SearchEngine();
try {
searchEngine.search(pattern, new SearchParticipant[] { SearchEngine
.getDefaultSearchParticipant() }, scope, requestor,
null);
} catch (CoreException e) {
e.printStackTrace();
}
The code above returned some Java types that are not in the Eclipse workspace. For example:
Found: TestSimple (not open) [in TestSimple.class [in <default> [in /home/me/test]]]
TestSimple.java is a file created from the command line in my home directory, and it has nothing to do with Eclipse projects. (My Eclipse workspace is in /home/me/eclipse/workspace.) I am not sure why it is included in the search results. Is SearchEngine the best way to handle this scenario, given that I just need the Java type names and nothing inside a class such as fields, methods or references? Thanks
You probably want to build a more confined search scope, then, asking each IJavaProject for all of its package fragment roots (IJavaProject#getAllPackageFragmentRoots()) and including only those that return K_SOURCE from IPackageFragmentRoot#getKind(), so you avoid picking up types from referenced libraries as well. A search on just names shouldn't do much more than hit the indexes that are already being stored. You can't rely on filenames alone, because that would miss inner classes entirely.
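For illustration, here is a minimal sketch of such a scope (the class and method names are mine; the JDT calls are the standard ones). It collects the source package fragment roots of all open Java projects and feeds them to SearchEngine.createJavaSearchScope():
import java.util.ArrayList;
import java.util.List;

import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.jdt.core.IJavaProject;
import org.eclipse.jdt.core.IPackageFragmentRoot;
import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.search.IJavaSearchScope;
import org.eclipse.jdt.core.search.SearchEngine;

public class SourceOnlyScopeFactory {

    public static IJavaSearchScope createSourceOnlyScope() throws CoreException {
        List<IPackageFragmentRoot> roots = new ArrayList<IPackageFragmentRoot>();
        for (IProject project : ResourcesPlugin.getWorkspace().getRoot().getProjects()) {
            if (!project.isOpen() || !project.hasNature(JavaCore.NATURE_ID)) {
                continue; // skip closed and non-Java projects
            }
            IJavaProject javaProject = JavaCore.create(project);
            for (IPackageFragmentRoot root : javaProject.getAllPackageFragmentRoots()) {
                if (root.getKind() == IPackageFragmentRoot.K_SOURCE) {
                    roots.add(root); // keep only source folders, not referenced libraries
                }
            }
        }
        return SearchEngine.createJavaSearchScope(
                roots.toArray(new IPackageFragmentRoot[roots.size()]));
    }
}
Pass the resulting scope to searchEngine.search(...) in place of SearchEngine.createWorkspaceScope().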
How can I change the current working directory from within a Java program? Everything I've been able to find about the issue claims that you simply can't do it, but I can't believe that that's really the case.
I have a piece of code that opens a file using a hard-coded relative file path from the directory it's normally started in, and I just want to be able to use that code from within a different Java program without having to start it from within a particular directory. It seems like you should just be able to call System.setProperty( "user.dir", "/path/to/dir" ), but as far as I can figure out, calling that line just silently fails and does nothing.
I would understand if Java didn't allow you to do this, if it weren't for the fact that it allows you to get the current working directory, and even allows you to open files using relative file paths....
There is no reliable way to do this in pure Java. Setting the user.dir property via System.setProperty() or java -Duser.dir=... does seem to affect subsequent creations of Files, but not e.g. FileOutputStreams.
The File(String parent, String child) constructor can help if you build up your directory path separately from your file path, allowing easier swapping.
An alternative is to set up a script to run Java from a different directory, or use JNI native code as suggested below.
The relevant OpenJDK bug was closed in 2008 as "will not fix".
If you run your legacy program with ProcessBuilder, you will be able to specify its working directory.
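A brief sketch of that approach (the command and directory are placeholders, not from the question):
import java.io.File;
import java.io.IOException;

public class RunLegacyProgram {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder builder = new ProcessBuilder("java", "-jar", "legacy.jar");
        builder.directory(new File("/path/to/dir")); // working directory of the child process
        builder.inheritIO();                         // forward stdin/stdout/stderr to this JVM
        int exitCode = builder.start().waitFor();
        System.out.println("Legacy program exited with code " + exitCode);
    }
}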
There is a way to do this using the system property "user.dir". The key part to understand is that getAbsoluteFile() must be called (as shown below) or else relative paths will be resolved against the default "user.dir" value.
import java.io.*;
public class FileUtils
{
public static boolean setCurrentDirectory(String directory_name)
{
boolean result = false; // Boolean indicating whether directory was set
File directory; // Desired current working directory
directory = new File(directory_name).getAbsoluteFile();
if (directory.exists() || directory.mkdirs())
{
result = (System.setProperty("user.dir", directory.getAbsolutePath()) != null);
}
return result;
}
public static PrintWriter openOutputFile(String file_name)
{
PrintWriter output = null; // File to open for writing
try
{
output = new PrintWriter(new File(file_name).getAbsoluteFile());
}
catch (Exception exception) {}
return output;
}
public static void main(String[] args) throws Exception
{
FileUtils.openOutputFile("DefaultDirectoryFile.txt");
FileUtils.setCurrentDirectory("NewCurrentDirectory");
FileUtils.openOutputFile("CurrentDirectoryFile.txt");
}
}
It is possible to change the PWD using JNA/JNI to make calls to libc. The JRuby guys have a handy Java library for making POSIX calls called jnr-posix. Here's the Maven info.
As mentioned, you can't change the CWD of the JVM, but if you were to launch another process using Runtime.exec(), you can use the overloaded method that lets you specify the working directory. This isn't really for running your Java program in another directory, but in many cases when you need to launch another program, like a Perl script for example, you can specify the working directory of that script while leaving the working directory of the JVM unchanged.
See Runtime.exec javadocs
Specifically,
public Process exec(String[] cmdarray, String[] envp, File dir) throws IOException
where dir is the working directory to run the subprocess in.
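For example, a sketch of launching a Perl script with its own working directory (the script name and path are placeholders); the JVM's working directory is untouched:
import java.io.File;
import java.io.IOException;

public class ExecWithWorkingDir {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process process = Runtime.getRuntime().exec(
                new String[] { "perl", "script.pl" },  // command and arguments
                null,                                  // null = inherit the parent's environment
                new File("/path/to/script/dir"));      // working directory of the subprocess
        process.waitFor();
    }
}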
If I understand correctly, a Java program starts with a copy of the current environment variables. Any changes via System.setProperty(String, String) are modifying the copy, not the original environment variables. Not that this provides a thorough reason as to why Sun chose this behavior, but perhaps it sheds a little light...
The working directory is an operating system feature (set when the process starts).
Why don't you just pass your own System property (-Dsomeprop=/my/path) and use that in your code as the parent of your File:
File f = new File(System.getProperty("someprop"), myFilename);
The smarter/easier thing to do here is to just change your code so that, instead of opening the file assuming it exists in the current working directory (I assume you are doing something like new File("blah.txt")), you build the path to the file yourself.
Let the user pass in the base directory, read it from a config file, fall back to user.dir if the other properties can't be found, etc. It's a whole lot easier to improve the logic in your program than it is to change how environment variables work.
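A small sketch of that idea (the property name app.basedir is just an example): resolve a base directory from a system property, fall back to user.dir, and build file paths against it.
import java.io.File;

public class BaseDir {

    // returns a File for fileName under the configured base directory
    public static File resolve(String fileName) {
        String base = System.getProperty("app.basedir", System.getProperty("user.dir"));
        return new File(base, fileName);
    }

    public static void main(String[] args) {
        // run with: java -Dapp.basedir=/path/to/dir BaseDir
        System.out.println(resolve("blah.txt").getAbsolutePath());
    }
}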
I have tried to invoke
String oldDir = System.setProperty("user.dir", currdir.getAbsolutePath());
It seems to work. But
File myFile = new File("localpath.ext");
InputStream openit = new FileInputStream(myFile);
throws a FileNotFoundException though
myFile.getAbsolutePath()
shows the correct path.
I have read this. I think the problem is:
Java knows the current directory with the new setting.
But the file handling is done by the operating system, which unfortunately does not know about the newly set current directory.
The solution may be:
File myFile = new File(System.getProperty("user.dir"), "localpath.ext");
This creates the File object as an absolute one, using the current directory as the JVM knows it. But that code has to exist in every class that opens files, so reusing existing code requires changing it.
You can use
new File("relative/path").getAbsoluteFile()
after
System.setProperty("user.dir", "/some/directory")
For example:
System.setProperty("user.dir", "C:/OtherProject");
File file = new File("data/data.csv").getAbsoluteFile();
System.out.println(file.getPath());
Will print
C:\OtherProject\data\data.csv
You can change the process's actual working directory using JNI or JNA.
With JNI, you can use native functions to set the directory. The POSIX method is chdir(). On Windows, you can use SetCurrentDirectory().
With JNA, you can wrap the native functions in Java binders.
For Windows:
private static interface MyKernel32 extends Library {
public MyKernel32 INSTANCE = (MyKernel32) Native.loadLibrary("Kernel32", MyKernel32.class);
/** BOOL SetCurrentDirectory( LPCTSTR lpPathName ); */
int SetCurrentDirectoryW(char[] pathName);
}
For POSIX systems:
private interface MyCLibrary extends Library {
MyCLibrary INSTANCE = (MyCLibrary) Native.loadLibrary("c", MyCLibrary.class);
/** int chdir(const char *path); */
int chdir( String path );
}
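A hedged usage sketch for the POSIX variant above (the path is a placeholder); chdir() returns 0 on success:
int rc = MyCLibrary.INSTANCE.chdir("/path/to/dir");
if (rc != 0) {
    // a non-zero return value means the call failed (e.g. the directory does not exist)
    System.err.println("chdir failed with return code " + rc);
}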
The other possible answer to this question may depend on the reason you are opening the file. Is this a property file or a file that has some configuration related to your application?
If this is the case you may consider trying to load the file through the classpath loader, this way you can load any file Java has access to.
If you run your commands in a shell, you can write something like "java -cp" and add any directories you want, separated by ":". If Java doesn't find something in one directory, it will try to find it in the other directories; that is what I do.
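A minimal sketch of loading a file through the class loader instead of the working directory (the resource name config.properties is a placeholder):
import java.io.IOException;
import java.io.InputStream;

public class ClasspathLoad {
    public static void main(String[] args) throws IOException {
        // looks the file up on the classpath, not in the current working directory
        try (InputStream in = ClasspathLoad.class.getClassLoader()
                .getResourceAsStream("config.properties")) {
            if (in == null) {
                System.err.println("config.properties not found on the classpath");
                return;
            }
            System.out.println("Read " + in.readAllBytes().length + " bytes");
        }
    }
}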
Use FileSystemView
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.swing.filechooser.FileSystemView;

FileSystemView fileSystemView = FileSystemView.getFileSystemView();
File currentDirectory = new File(".");

// list the contents of currentDirectory
File[] filesAndDirs = fileSystemView.getFiles(currentDirectory, false);
List<File> fileList = new ArrayList<File>();
List<File> dirList = new ArrayList<File>();
for (File file : filesAndDirs) {
    if (file.isDirectory())
        dirList.add(file);
    else
        fileList.add(file);
}
Collections.sort(dirList);
if (!fileSystemView.isFileSystemRoot(currentDirectory))
    dirList.add(0, new File(".."));
Collections.sort(fileList);

// move to the parent directory
currentDirectory = fileSystemView.getParentDirectory(currentDirectory);
The SpringApplicationContextLoader assumes that the application is using either 100% XML or 100% Java config. This is because @ContextConfiguration allows either a list of classes or locations/value, not both. If either is specified, SpringApplicationContextLoader ignores the Application class that creates and starts the SpringApplication.
Trying to make Boot work with a 100% Groovy/no-XML pet project, I ran across the above issue. My Application class has @EnableAutoConfiguration and @ComponentScan annotations on it, the former required by Boot to set up a web server; the latter I had to keep because of SPR-11627. On the other hand, if I omitted the locations/value on @ContextConfiguration, dependencies weren't set up (duh!).
The code is below, along with a patch I made locally to SpringApplicationContextLoader. If there's a better way, please let me know.
MovieDatabaseRESTClientIntegrationTest.groovy
@RunWith(SpringJUnit4ClassRunner)
@ContextConfiguration(value = ['classpath:client-config.groovy', 'classpath:integ-test-config.groovy'],
        loader = PatchedSpringApplicationContextLoader)
@SpringApplicationConfiguration(classes = MovieDatabaseApplication)
@WebAppConfiguration
@IntegrationTest
class MovieDatabaseRESTClientIntegrationTest {
MovieDatabaseApplication.groovy
@EnableAutoConfiguration
@ComponentScan
class MovieDatabaseApplication {
SpringApplicationContextLoader.java fix
private Set<Object> getSources(MergedContextConfiguration mergedConfig) {
Set<Object> sources = new LinkedHashSet<Object>();
sources.addAll(Arrays.asList(mergedConfig.getClasses()));
sources.addAll(Arrays.asList(mergedConfig.getLocations()));
/* The Spring application class may have annotations on it too. If such a class is declared on the test class,
* add it as a source too. */
SpringApplicationConfiguration springAppConfig = AnnotationUtils.findAnnotation(mergedConfig.getTestClass(),
SpringApplicationConfiguration.class);
if (springAppConfig != null) {
sources.addAll(Arrays.asList(springAppConfig.classes()));
}
if (sources.isEmpty()) {
throw new IllegalStateException(
"No configuration classes or locations found in #SpringApplicationConfiguration. "
+ "For default configuration detection to work you need Spring 4.0.3 or better (found "
+ SpringVersion.getVersion() + ").");
}
return sources;
}
Also posted on Spring forum.
I could be wrong, but I don't think there is any support for beans{} configuration in @ContextConfiguration, and @SpringApplicationConfiguration is just an extension of that. A feature request in JIRA would be appropriate. Also, there has never been any support for a mixed configuration format (as the entry point at least): you always have to choose either XML or @Configuration, or else supply your own ContextLoader. You also shouldn't have both @ContextConfiguration and @SpringApplicationConfiguration on the same class (the behaviour is undefined).
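Not from the answer above, but for completeness, a hedged sketch of the usual bridge when the external bean definitions are XML: a single @Configuration class can pull them in via @ImportResource, so the test only needs to reference configuration classes (the file names here are placeholders).
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.ImportResource;

@Configuration
@ImportResource({ "classpath:client-config.xml", "classpath:integ-test-config.xml" })
public class ClientTestConfig {
    // additional @Bean methods can be added here if needed
}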
I'm currently developing an Eclipse plugin. This plugin contains a project nature which depends on the JavaScript nature of JSDT.
Now, the JavaScript files that projects of my nature can contain are somewhat special in a few details:
They can contain "compiler hints", which are basically statements beginning with #
They can contain return statements outside of functions
But at these two points the standard validation of JSDT comes in and marks them as errors (which is normally right). I already managed to get these errors filtered out manually in the properties of the JavaScript validator.
My question is, how can I exclude these errors from the JSDT validation automatically for projects with my nature?
JSDT uses a concrete syntax parser which generates syntax errors.
You can't disable this; only semantic errors or warnings can be configured.
However, you can disable the entire validation of JSDT.
The solution below will suppress the errors and warnings that are generated when you save changes to JavaScript files (Auto Build, Build); a programmatic sketch for doing the same follows after the steps.
Open the Properties dialog of your project.
Choose the Builders item.
Uncheck "JavaScript Validator" and press the OK button.
Remove the current errors and warnings from the Problems view.
This solution can't eliminate error or warning annotations in the editor while you edit; they will only show up in the editor temporarily while you are editing.
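To do the same thing programmatically for projects with your nature (as the question asks), here is a hedged sketch that removes the JSDT validation builder from the project's build spec. The builder id org.eclipse.wst.jsdt.core.javascriptValidator is the one that typically appears in .project files; verify it against your target platform.
import java.util.ArrayList;
import java.util.List;

import org.eclipse.core.resources.ICommand;
import org.eclipse.core.resources.IProject;
import org.eclipse.core.resources.IProjectDescription;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;

public class JsdtValidatorRemover {

    // assumed builder id; check the .project file of a JSDT project in your target platform
    private static final String JSDT_VALIDATOR = "org.eclipse.wst.jsdt.core.javascriptValidator";

    public static void removeValidator(IProject project, IProgressMonitor monitor) throws CoreException {
        IProjectDescription description = project.getDescription();
        List<ICommand> kept = new ArrayList<ICommand>();
        for (ICommand command : description.getBuildSpec()) {
            if (!JSDT_VALIDATOR.equals(command.getBuilderName())) {
                kept.add(command); // keep every builder except the JSDT validator
            }
        }
        description.setBuildSpec(kept.toArray(new ICommand[kept.size()]));
        project.setDescription(description, monitor);
    }
}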
After a lot of research, hours of deleting markers and debugging, I finally managed to delete the errors I wanted. In a bad, bad way of course, but I had come to the point where I just wanted this to work no matter how it's done.
If you ever want to delete existing problems that have been created during the JSDT validation process, you need to do the following (and you must not omit anything):
Create a class extending org.eclipse.wst.jsdt.core.compiler.ValidationParticipant
Override the isActive(), buildStarting() and reconcile() methods.
So there are two things you basically have to care about:
The actual problem markers that will be created or had already been created at the end of the validation process.
The problems created by the validation process. They are of type CategorizedProblem and can be obtained from the ReconcileContext object that is passed to the reconcile() method.
It seems to me that the CategorizedProblems will be translated to problem markers after the validation process.
So what you need to do is:
Delete all unwanted problem markers of all files in buildStarting (this removes problem markers from all files in your project that are about to be validated)
Iterate the CategorizedProblem objects of the ReconcileContext (getProblems())
Create a new Array containing only the CategorizedProblems you want to keep
Set this new Array to the ReconcileContext with putProblems()
Delete the unwanted markers again for that file (I don't know why this is needed, please don't ask, I don't care anymore :-/)
An example implementation of such a ValidationParticipant could look like this (this one will filter out problems complaining about return statements outside of methods):
[... omitted imports ...]
public class MyValidationParticipant extends org.eclipse.wst.jsdt.core.compiler.ValidationParticipant {

    @Override
    public boolean isActive(IJavaScriptProject project) {
        return true;
    }

    @Override
    public void buildStarting(BuildContext[] files, boolean isBatch) {
        super.buildStarting(files, isBatch);
        for (BuildContext context : files) {
            IFile file = context.getFile();
            deleteUnwantedMarkers(file);
        }
    }

    @Override
    public void reconcile(ReconcileContext context) {
        IResource resource = context.getWorkingCopy().getResource();
        CategorizedProblem[] newProblems = new CategorizedProblem[0];
        ArrayList<CategorizedProblem> newProblemList = new ArrayList<CategorizedProblem>();
        CategorizedProblem[] probs = context.getProblems("org.eclipse.wst.jsdt.core.problem");
        if (probs != null) {
            for (CategorizedProblem p : probs) {
                // keep every problem except the one we want to suppress
                if (!(p.getMessage().equals("Cannot return from outside a function or method."))) {
                    newProblemList.add(p);
                }
            }
        }
        context.putProblems("org.eclipse.wst.jsdt.core.problem", newProblemList.toArray(newProblems));
        deleteUnwantedMarkers(resource);
    }

    public static void deleteUnwantedMarkers(IResource resource) {
        if (resource.isSynchronized(IResource.DEPTH_INFINITE)) {
            try {
                IMarker[] markers = resource.findMarkers(IMarker.PROBLEM, true, IResource.DEPTH_INFINITE);
                if (markers != null && markers.length > 0) {
                    for (IMarker m : markers) {
                        Object message = m.getAttribute(IMarker.MESSAGE);
                        // null-safe comparison: markers without a message are skipped
                        if ("Cannot return from outside a function or method.".equals(message)) {
                            m.delete();
                        }
                    }
                }
            } catch (CoreException e) {
                e.printStackTrace();
            }
        }
    }
}
As I said, this is kind of a bad solution, since the code relies on the error message string; there should be better ways to identify the problems you don't want to have.
Don't forget to add a proper extension in your plugin.xml for the ValidationParticipant.