Find unused resource files (.jsp, .xhtml, images) in Eclipse

I'm developing a large web application in Eclipse, and some of the resources (I'm talking about files, NOT code) are becoming obsolete. However, I don't know which ones, so I'm still including them in my final WAR file.
I know Eclipse resolves file paths within the project, because I can follow the link to an image or another page while editing one of my xhtml pages (using Ctrl+click). But is there a way to locate the unused resources so I can remove them?

Following these 3 steps would work for sites with a relatively finite number of dynamic pages (a sketch of the commands follows below):
1. Install your site on a filesystem mounted with atime (access time) enabled.
2. Try harvesting the whole site with wget.
3. Use find to see which files were not accessed recently.
Done.
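Roughly, assuming GNU wget and find, and with the paths below purely as examples:
# 1. remount the web root with access-time tracking enabled
mount -o remount,strictatime /srv/www
# 2. crawl the site so every reachable page and asset is read once
wget --mirror --page-requisites http://localhost/myapp/
# 3. anything NOT accessed in the last 24 hours is a removal candidate
find /srv/www/myapp -type f ! -atime -1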

As far as I know, Eclipse doesn't have this (I need it too).
I'm using grep in conjunction with a bash script: the script takes the files in my resource folder, puts the filenames in a list, greps through the source code for every record in the list, and removes a record from the list whenever grep finds it.
At the end the list is printed to the console, so only the unused resources remain in it.
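A minimal sketch of that script, assuming GNU find and grep; the folder names (webapp/resources, src) are examples to adjust to your own layout:
find webapp/resources -type f -printf '%f\n' | while read -r name; do
    # keep a name only if no source file mentions it
    grep -rq "$name" src/ || echo "$name is unused"
done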

UCDetector might be your best bet, specifically the custom-marker aspects of this tool.

I have not found a way to do this in Eclipse, so I have used the following shell commands instead.
Find .ftl template files which are NOT referenced in .java files
cd myfolder
find . -name "*.ftl" -printf "%f\n" | while read -r fname; do grep --include \*.java -rl "$fname" . > /dev/null || echo "${fname} not referenced"; done
or
Find all .ftl template files which are NOT referenced in .java, .ftl, .inc files
cd myfolder
find . -name "*.ftl" -printf "%f\n" | while read -r fname; do grep --include \*.java --include \*.ftl --include \*.inc -rl "$fname" . > /dev/null || echo "${fname} not referenced"; done
Note: on macOS you can use gfind (GNU find, from findutils) instead of find, since the BSD find that ships with macOS does not support -printf.
Example output
productIndex2.ftl not referenced
showTestpage.ftl not referenced

Related

Git Bash find exec recursively on folders and files containing spaces

Question: In Git Bash on Windows, how would you run the following in a way that also searches folders with spaces in their names and executes on files with spaces in their names?
$ find ./ -type f -name '*.png' -exec sh -c 'cwebp -q 75 $1 -o "${1%.png}.webp"' _ {} \;
Context: I'm running Git Bash on Windows, trying to execute a command on all found .png files to convert them to .webp format. It works for all files without spaces in the path, but it fails on files with spaces in the filename and on files within folders that have spaces in the folder name. A few considerations:
I have many, many levels of folders to iterate through, and I can't run this command separately for each; I really need the recursion to work.
I cannot change the folder names; that would break other dependencies (nor did I create the folder or file names originally, so cut me some slack!).
I arrived here by following the suggestions in this article: https://www.smashingmagazine.com/2018/07/converting-images-to-webp/
The program, to my knowledge, doesn't ship with any built-in recursive command... golly that'd be handy.
Any help you can provide will be appreciated. Thanks!
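For what it's worth, the usual culprit in this pattern is the unquoted $1 inside the sh -c body: find hands over paths with spaces just fine, but the inner shell then word-splits them. Quoting it should be enough; a sketch:
find ./ -type f -name '*.png' -exec sh -c 'cwebp -q 75 "$1" -o "${1%.png}.webp"' _ {} \;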

Oracle Solaris 11.2 - ctags and vi

On a freshly installed Oracle Solaris system I have the ctags from the base-developer-utilities package. It doesn't support recursive operation, so I generate the tags as follows:
% cd my_sources; rm -f tags; touch tags
% find . -name '*.c' -o -name '*.h' -exec ctags -v -u {} \;
The tags get generated, but for some reason vim is unable to understand them: it just doesn't see any tags, although I added the tags file with set tags, and instead reports error E426: tag not found.
The tag is in the tags file.
Does anybody have a clue what possibly can be wrong with it? Thanks.
If vi complains that the tag isn't there, then it probably isn't. You can confirm that by opening the tags file in a text editor and searching for it.
The reason it isn't there is that you are overwriting the contents of the tags file for each file find encounters, so it only contains the tags of the last file. To overcome this, add the -a (append) argument, which is available according to the ctags man page.
As an alternative you can try compiling a more recent ctags from source in order to use the recursive mode with the -R --languages=c arguments. If you decide to compile from source, I suggest that you use universal-ctags.
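A sketch of the corrected invocation; note that it also parenthesizes the two -name tests, since without the grouping -exec binds only to the second -name and ctags would only ever see the .h files:
% cd my_sources; rm -f tags; touch tags
% find . \( -name '*.c' -o -name '*.h' \) -exec ctags -a -v -u {} \;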

Recursively replace colons with underscores in Linux

First of all, this is my first post here and I must specify that I'm a total Linux newb.
We have recently bought a QNAP NAS box for the office, on this box we have a large amount of data which was copied off an old Mac XServe machine. A lot of files and folders originally had forward slashes in the name (HFS+ should never have allowed this in the first place), which when copied to the NAS were all replaced with a colon.
I now want to rename all colons to underscores, and have found the following commands in another thread here: pitfalls in renaming files in bash
However, the flavour of Linux on this box does not understand the rename command, so I'm having to use mv instead. I have tried the code below, but it only works for the files in the current folder; is there a way I can change it to include all subfolders?
for f in *.*; do mv -- "$f" "${f//:/_}"; done
I have found that I can find all the files and folders in question using the find command as follows.
Files:
find . -type f -name "*:*"
Folders:
find . -type d -name "*:*"
I have been able to export a list of the results above by using
find . -type f -name "*:*" > files.txt
I tried using the command below, but I'm getting an error message from find saying it doesn't understand the -exec switch. So is there a way to pipe this all into one command, or could I somehow use the files I exported previously?
find . -depth -name "*:*" -exec bash -c 'dir=${1%/*} base=${1##*/}; mv "$1" "$dir/${base//:/_}"' _ {} \;
Thank you!
Vincent
So your for loop code works, but only in the current dir. Also, you are able to use find to build a file with all the files with : in the filename.
So, as you've already done all this, I would just loop over each line of your file, and perform the same mv command.
Something like this:
for f in `cat files.txt`; do mv $f "${f//:/_}"; done
EDIT:
As pointed out by tripleee, using a while loop is a better solution
EG
while read -r f; do mv "$f" "${f//:/_}"; done <files.txt
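If the folders themselves also need renaming (and find on the NAS lacks -exec, as you mention), a depth-first pipe along the same lines should work. This is just a sketch of your own -exec attempt rewritten as a pipeline, and it assumes the NAS find supports -depth and the shell is bash-like:
find . -depth -name "*:*" | while read -r f; do
    dir=${f%/*}; base=${f##*/}
    # rename deepest entries first so parent paths stay valid
    mv "$f" "$dir/${base//:/_}"
done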
Hope this helps.
Will

Copy all contents of all files in a directory with a certain suffix

I have a bunch of directories named project1, project2, etc.
In those folders are a bunch of perl files (extension ".pl").
Basically, I want to just copy the contents of those .pl files into a new file, let's call it "everything.txt".
Can someone help me out with this? I really don't care which programming language it's done in, although I'd prefer something command-line. Perl, Python, or Java would work too.
Edit: Also, there are some duplicate names, which shouldn't be a problem given I just want to write their contents out to a file, but just thought I'd let you know.
bash: cat project*/*.pl > everything.txt
In Unix-y systems:
find project1 project2 ... -name \*.pl -exec cat {} \; > everything.txt
To make, say, a proper .tar archive file that will let you recover the original file names and permissions:
tar cf everything.txt.tar $(find project1 project2 ... -name \*.pl)
(The $(...) command-substitution syntax requires a POSIX-compatible shell such as bash.)
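One caveat: the $(...) form word-splits paths containing spaces. If that matters, a null-delimited pipe avoids it; this sketch assumes GNU tar, whose --null and -T options read the file list from stdin:
find project1 project2 -name '*.pl' -print0 | tar cf everything.tar --null -T -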

How do I do a recursive find & replace within an SVN checkout?

How do I find and replace every occurrence of:
foo
with
bar
in every text file under the /my/test/dir/ directory tree (recursive find/replace)?
BUT I want to be able to do it safely within an SVN checkout, without touching anything inside the .svn directories.
Similar to this but now with the SVN restriction: Awk/Sed: How to do a recursive find/replace of a string?
There are several possibilities:
Using find:
Using find to create a list of all files, and then piping them to sed or the equivalent, as suggested in the answer you reference, is fairly straightforward, and only requires scanning through the files once.
You'd use one of the same answers as from the question you referenced, but adding -path '*/.svn' -prune -o after the find . in order to prune out the SVN directories. See this question for a discussion of using the prune option with find -- although note that they've got the pattern wrong. Thus, to print out all the files, you would use:
find . -path '*/.svn' -prune -o -type f -print
Then, you can pipe that into an xargs call or whatever to do the individual replacements, as suggested in the question you referenced. There is a lot of discussion there about different options, which I won't reproduce here, although I prefer the version from John Zwinck's answer:
find . -path '*/.svn' -prune -o -type f -exec sed -i 's/foo/bar/g' {} +
Using recursive grep:
If you have a system with GNU grep, you can use that to find the list of files as well. This is probably less efficient than find, but it does allow you to only call sed on the files that match, and I personally find the syntax a lot easier to remember (or figure out from manpages):
sed -i 's/foo/bar/g' `grep -l -R --exclude-dir=.svn 'foo' .`
The -l option causes grep to only output the list of file names, rather than the matching lines.
Using a GUI editor:
Alternately, if you're using Windows, do what I do: get a copy of the NoteTab editor (available in a free version) and use its search-and-replace-on-disk command, which ignores hidden .svn directories automatically and just works.
Edit: Corrected find pattern to */.svn instead of .svn, added more details and some other possibilities. However, this depends on your platform and svn version: .svn without */ may be required in some cases, like on CentOS 7.
How about this?
grep -i "search_string" $(find /my/test/dir -type f -name "*.some_extension")
That is a halfway solution to finding the search_string within files that have a specific extension. Once you know which files contain the string, this can easily be modified to pipe them into sed.
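Putting the pieces from the answers above together, a complete pipeline might look like the sketch below; the extension filter is optional, and the -print0/xargs -0 pairing (GNU find and xargs) keeps paths containing spaces intact:
find /my/test/dir -path '*/.svn' -prune -o -type f -name '*.some_extension' -print0 | xargs -0 sed -i 's/foo/bar/g'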