m2eclipse resource filtering

I'm having problems with resource filtering using m2eclipse Maven support in Eclipse. It seems that filtering only takes place on resources that have changed. This is fundamentally flawed: if I have a file that references a property (e.g. ${my.property}) and the value of that property changes, the filtering is only performed if the referencing file is also modified. If I only change the property value (in my pom.xml), the filtering is not applied to the files that reference it.
So, if I make a change to a property in my pom file, the filtering is not applied. However, if I then go to the file that references that property (e.g. a Spring config file) and edit and save it, the filtering is applied.
I did read somewhere that:
"m2eclipse skips filtering if there were no resource changes during incremental build"
I'm using m2eclipse 0.10.x
Has anyone else come across this?
Thanks,
Andrew

Yes, this is mentioned in this lengthy discussion (the topic is not exactly about resource filtering, but the current behavior is mentioned):
When resource filtering is enabled, m2eclipse will run specified goals ("process-resources resources:testResources" by default) to filter resources into project's output folder (target/classes or target-eclipse/classes) as part of Eclipse build. m2eclipse skips filtering if there were no resource changes during incremental build
And my understanding is that this was a design choice (see the last message from Eugene):
There was concern that resource filtering may affect performance in the IDE and not always needed (e.g. when filtering is only used to store some stuff about build into the result jar)
So, after a change in your POM, you should update the project configuration: right-click on your project, then Maven > Update Project Configuration (this will trigger process-resources).
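For context, here is a minimal sketch of what such a filtering setup looks like in the POM (the property name my.property is taken from the question; the resource directory shown is just the Maven default):

```xml
<project>
  <!-- ... -->
  <properties>
    <!-- changing only this value does not re-trigger filtering in m2eclipse -->
    <my.property>some-value</my.property>
  </properties>
  <build>
    <resources>
      <resource>
        <directory>src/main/resources</directory>
        <!-- enable ${...} placeholder substitution for these files -->
        <filtering>true</filtering>
      </resource>
    </resources>
  </build>
</project>
```

Outside the IDE, running mvn process-resources from the command line always re-applies the filtering, regardless of which files changed.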
To my knowledge, this is still the current status. There are many issues about resource filtering though; maybe check to see if you can find a similar one.

Yes, you are right.
You should open an issue in m2eclipse's bug tracker.

Deploy BizTalk Schema Solution without redeploying dependent Solutions

I have three solutions. One is a schema solution that only has a schema file in it; let's call it the SchemaSolution.
The SchemaSolution is referenced in my other two solutions, because Solution1 creates XML instances of the schema in the SchemaSolution and drops them as self-correlating into the MessageBox.
This works like magic, but if I want to update one of the solutions where the SchemaSolution is referenced (deploy to BizTalk), I always have to delete the other solutions. This is horrible, and I have not been able to find a solution so far.
Is there a (non-hacky) way? I thought about merging all projects into one solution, but that is the worst-case scenario I can imagine for achieving my goal.
How can I deploy a project that is referenced in different solutions without deleting and redeploying everything?
BizTalk 2013R2 in use
No, this is not supported, and it is not recommended to try to hack your way around it (you would definitely need to alter the BizTalk databases, which I think is not even allowed by Microsoft).
I can give you 3 options:
Make the SchemaSolution as small as possible, for instance by breaking it down into multiple schema solutions, one per process, so the chances of you needing to change a solution are smaller. Ideally, in this solution you would have one assembly/project per schema, so new schemas can be added without a redeploy.
Another option would be to duplicate your schemas into your projects. This is a design choice you could make, but it requires some more work, as you need to specify the schemas in your pipelines (or else BizTalk doesn't know which one you mean), and you have the double work of changing the same schemas in multiple projects. The downside is that the schemas are not the same to BizTalk, so you can't use one in another project without a reference.
Your final option would be to get rid of the dependency on that schema completely. You can do this by creating your own internal/generic/CDM schema, which ideally would be more robust and less prone to change. This schema would still be referenced by multiple projects, but since you're the one in charge of it, you can predict changes and mold it to your liking. Again, ideally, in this solution you would have one assembly/project per schema, so new schemas can be added without a redeploy.
I have a very similar (if not the same) issue within a solution.
I have a set of integration projects dependent on a simple schema project. If I deploy one integration project, I must deploy the schema project, which means I must deploy all integration projects!
In order to deploy them independently, I simply turned the Redeploy flag from True to False in the schema project's properties (in Visual Studio).
This allows me to redeploy as many other dependent projects as I like without having to delete or mess around. I can deploy a single integration project with no effect on the others.
The only caveat is that when you redeploy, for some reason VS flags the fact that you have set Redeploy to False on the schema project as an error and says that one of the projects was not deployed.
It is not a true error, more of a warning IMO.
I have been doing this in BizTalk 2016; I would assume you can do the same in 2013.

Using each plugin in Nutch separately

I'm using the extractor plugin with Nutch 1.15. The plugin makes use of parsed data.
The plugin works fine when used as a whole. The problem arises when a few changes are made to the custom-extractors.xml file.
The entire crawling process needs to be restarted even if there is only a small change in the custom-extractors.xml file.
Is there a way that a single plugin can be used separately on parsed data?
Since this plugin is a Parser filter, it must be used as part of the Parse step, and is not stand-alone.
However, there are a number of things you can do.
If you are looking to change the configuration on the fly (affecting only newly parsed documents), you can use the extractor.file property to specify any location on HDFS and replace this file as needed; it will be read by each task.
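For instance, a sketch of that override in nutch-site.xml (the HDFS path below is hypothetical; point it at wherever you keep your rules file):

```xml
<!-- nutch-site.xml -->
<configuration>
  <property>
    <name>extractor.file</name>
    <!-- hypothetical location; replace with your own path -->
    <value>hdfs:///user/nutch/conf/custom-extractors.xml</value>
  </property>
</configuration>
```

Replacing that file then changes the extraction rules for subsequent parse tasks without redeploying the plugin.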
If you want to reapply the changes to previously parsed documents, the answer depends on the specifics of your crawl, but you may be able to run the parse step again using nutch parse on the old segments (you will need to delete the existing parse folders in those segments first).
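As a rough sketch of that re-parse (the segment path is hypothetical; a parsed segment contains crawl_parse, parse_data and parse_text folders):

```sh
# remove the existing parse output from the segment (hypothetical path)
hadoop fs -rm -r crawl/segments/20190101123456/crawl_parse \
                 crawl/segments/20190101123456/parse_data \
                 crawl/segments/20190101123456/parse_text

# re-run the parse step on that segment, picking up the new rules
bin/nutch parse crawl/segments/20190101123456
```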

Eclipse indexing - what do the various options do

When you right-click > Index on a project, there are a few options:
Rebuild
Freshen All Files
Update with Modified Files
Re-resolve Unresolved Includes
I've been just hitting Rebuild every time, but now I'm working on a huge project and can't afford to do that; when I modify a file, whether it's a .cpp or a .h, I need to know which 'index' operation to perform.
For each of the 'index' options:
What does it precisely do?
What is the cost (relative memory, CPU time)?
Documentation from Eclipse would be helpful, but I have already searched and didn't find any.
Rebuild can only be performed on the whole project. It throws away the project's entire index and rebuilds it from scratch, indexing each file in the project.
Since it starts by throwing away the previous index, cancelling a Rebuild will result in an empty or partially built index.
The other actions can be performed either on the whole project, or on a folder or file (or group of folders/files) in the project.
They all go through the files in the selection, and update some or all of them in the index. Unlike Rebuild, they do not start by clearing the index, so cancelling them is relatively safe.
Freshen All Files updates all files in the selection. If called on the project, the end result is comparable to Rebuild.
Update with Modified Files only updates those files in the selection which have changed since the last time they were updated in the index, as determined by their timestamp and a hash of their contents.
Re-Resolve Unresolved Includes only updates those files in the selection for which configuration info (such as specified include paths) has changed, and the change resulted in an include that was previously unresolved now being resolved.
The performance characteristics can vary a lot depending on the project size and the kind of machine you're running on. I work on a very large project (millions of lines) for which a Rebuild can take 20-30 minutes on a relatively modern desktop. The operation is typically CPU-bound, but the indexer is currently single-threaded, so it will only use up one CPU core.
Finally, I'd like to mention again what I said in my comment on the question: if you configure the index to be updated automatically in Preferences | C/C++ | Indexer, you shouldn't need to manually invoke these commands at all, at least in theory. In practice, I find an occasional Rebuild is necessary (say once every few weeks), especially after a configuration change (e.g. adding a new include path).
Sources: this mailing list post, reading the implementation of the actions, and experience using CDT.

Why does Rational Team Concert change the files' last-modified attribute?

I'm having some issues with the installation of Rational Team Concert on my server.
The thing is that when I upload some changes to the server (of any kind), it changes the last-modified attribute of the file, but it shouldn't do that.
Is there a way to avoid this behavior?
Thank you in advance!
This is something that we have tried to add to RTC SCM (and we still plan to). However, we found that it needs to be an option on load/update.
There are numerous details and discussions available at this work item on jazz.net.
Regarding the timestamp, getting over the fact that relying on it in a version control tool isn't always considered a best practice (see "What's the equivalent of use-commit-times for git?"), it is actually a complex issue:
an SCM loader wouldn't use just the timestamp to determine which files have changed (Task 179263)
you can have various requirements for that timestamp (like in Defect 159043, where the desired file timestamp of the modified file on disk is that of when it was delivered, not when it was accepted). The variable JAZZ_CCM_SKIP_MOD_TIME=true is mentioned there, so check whether that could improve your specific case (see the sketch after this list)
it is all based on the assumption that the timestamp is correctly set by the local workstation, which isn't always true, as illustrated in Task 77201
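A minimal sketch of trying that variable, assuming the RTC/Eclipse client inherits it from the environment of the shell it is launched from (that is an assumption on my part; check the work item for the exact mechanism):

```sh
# hypothetical usage: export the variable, then start the client from the same shell
export JAZZ_CCM_SKIP_MOD_TIME=true
./eclipse
```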

eclipse CVS usage: clean timestamps

During synchronization with the CVS server, Eclipse compares the content of the files (internally it uses CVS commands, of course). But files without any content change are also shown as different if they have a different timestamp, because they have been "touched". You then always have to check manually, via the file comparison dialog, whether there was really a change or not.
Due to auto-generation, I have some files that always get new timestamps, and therefore I always have to check manually whether they really contain any change.
In the Eclipse documentation I read:
Update and Commit Operations
There are several flavours of update and commit operations available
in the Synchronize view. You can perform the standard update and
commit operation on all visible applicable changes or a selected
subset. You can also choose to override and update, thus ignoring any
local changes, or override and commit, thus making the remote resource
match the contents of the local resource. You can also choose to clean
the timestamps for files that have been modified locally (perhaps by
an external build tool) but whose contents match that of the server.
That's exactly what I want to do, but I don't know how! There is no further description or manual ...
Has anybody used this functionality and can help me (maybe even post a screenshot)?
Thanks in advance,
Mayoares
When you perform a CVS Update on a project (using the context menu Team > Update), Eclipse implicitly updates the timestamps of local files whose contents match those on the server.