Oracle Solaris 11.2 - ctags and vi

On a freshly installed Oracle Solaris I have ctags from the base-developer-utilities package. It doesn't support recursive operation, so I generate the tags as follows:
% cd my_sources; rm -f tags; touch tags
% find . -name '*.c' -o -name '*.h' -exec ctags -v -u {} \;
The tags get generated, but for some reason vim is unable to use the file: it doesn't find any tags even though I added the file with :set tags, and instead reports error E426: tag not found.
The tag is in the tags file.
Does anybody have a clue what possibly can be wrong with it? Thanks.

If vi complains that the tag isn't there, then it's probably because it isn't. You can confirm that by opening the tags file in a text editor and searching for it.
But the reason it isn't there is that you are overwriting the contents of the tags file for each file find encounters, so it ends up containing only the tags of the last file. To overcome this you can add the -a (append) argument, which is available according to the ctags man page.
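For example, a sketch based on your commands (untested on Solaris ctags, but -a is the documented append flag; note the escaped parentheses, without which find applies -exec only to the *.h names because of operator precedence):
% cd my_sources; rm -f tags; touch tags
% find . \( -name '*.c' -o -name '*.h' \) -exec ctags -a -v -u {} \;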
As an alternative, you can try compiling a more recent ctags from source in order to use recursive mode with the -R --languages=c arguments. If you decide to compile from source, I suggest universal-ctags.
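With universal-ctags installed, the whole find loop collapses to a single invocation, something like:
% ctags -R --languages=c .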

Related

Can we wget with file list and renaming destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my files-to-download.txt contains these kinds of entries:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a. wget from a list of URLs the way I do now, keeping my parameters.
b. get all the files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the -nc or --no-clobber option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.

When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.

When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.

When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.

A combination with -O/--output-document is only accepted if the given output file does not exist.

Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
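For example, your original command with -N swapped out for -nc (remember the two flags cannot be combined):
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -nc -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt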
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample pseudocode bash script:
while read -r i; do
    wget <all your flags except the -i flag> "$i" -O /path/to/custom/directory/filename
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse $i to get the filename from the URL, check if the file exists on disk, and then decide whether to rename the temp file to the name that you want.
This offers much more control over your downloads.
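A fleshed-out sketch of that idea (the destination directory and the renaming scheme are assumptions; adjust them to your layout):
#!/bin/bash
# Download each URL under a unique name derived from its path, e.g.
# http://website/directory1/picture-same-name.jpg -> directory1-picture-same-name.jpg
dest=/directory/for/downloaded/files
while read -r url; do
    name=$(printf '%s' "${url#*://*/}" | tr '/' '-')
    # Skip the download if we already have a file under that name
    if [ ! -e "$dest/$name" ]; then
        wget --user-agent='some-agent' --timeout=30 -O "$dest/$name" "$url"
    fi
done < list-of-files-to-download.txt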

Search for files & file names using silver searcher

Using Silver Searcher, how can I search for:
(non-binary) files containing a word or pattern, AND
all filenames matching a word or pattern, including the filenames of binary files.
Other preferences: I would like case-insensitive search and to search through dotfiles.
Tried to alias using this without much luck:
alias search="ag -g $1 --smart-case --hidden && ag --smart-case --hidden $1"
According to the man page of ag:
-G --file-search-regex PATTERN
Only search files whose names match PATTERN.
You can use the -G option to perform searches on files matching a pattern.
So, to answer your question:
root@apache107:~/rpm-4.12.0.1# ag -G cpio.c size
rpm2cpio.c
21: off_t payload_size;
73: /* Retrieve payload size and compression type. */
76: payload_size = headerGetNumber(h, RPMTAG_LONGARCHIVESIZE);
The above command searches for the word size in all files that match the pattern cpio.c.
Reference:
man page of ag version 0.28.0
Note 1:
If you are looking for a string in certain file types, say all C source code, there is an undocumented feature in ag to help you quickly restrict searches to certain file types.
The commands below both look for foo in all php files:
find . -name \*.php -exec grep foo {} \;
ag --php foo
While find + grep looks for all .php files, the --php switch in the ag command actually looks for the following file extensions:
.php .phpt .php3 .php4 .php5 .phtml
You can use --cpp for C++ source files, --hh for .h files, --js for JavaScript, etc. A full list can be printed with ag --list-file-types.
Try this:
find . | ag "/.*SEARCHTERM[^/]*$"
The command find . will list all files.
We pipe the output of that to the command ag "/.*SEARCHTERM[^/]*$", which matches SEARCHTERM if it's in the filename, and not just the full path.
Try adding this to your aliases file. Tested with zsh but should work with bash. The problem you encountered in your example is that bash aliases can't take parameters, so you have to first define a function to use the parameter(s) and then assign your alias to that function.
searchfunction() {
    ag -g "$1" --hidden      # filenames matching the pattern
    ag --hidden -l "$1"      # files whose contents match
}
alias search=searchfunction
You could modify this example to suit your purpose in a few ways (see the sketch after this list), e.g.
add/remove the -l flag depending on whether or not you want text results to show the text match or just the filename
add headers to separate text results and filename results
deduplicate results to account for files that match both on filename and text, etc.
[Edit: removed unnecessary --smart-case flag per Pablo Bianchi's comment]
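A sketch combining two of those ideas, headers and deduplication (function names are illustrative):
searchfunction() {
    echo "== filename matches =="
    ag -g "$1" --hidden
    echo "== content matches =="
    ag --hidden -l "$1"
}
# Deduplicated variant: merge both lists and drop repeats
searchdedup() { { ag -g "$1" --hidden; ag --hidden -l "$1"; } | sort -u; }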
Found this question looking for the same answer myself. It doesn't seem like ag has any native capability to search file and directory names. The answers above from Zach Fogg and Jikku Jose both work, but piping find . can be very slow if you're working in a big directory.
I'd recommend using find directly, which is much faster than piping it through ag:
Linux (GNU version of find):
find -name [pattern]
OSX (BSD version of find):
find . -name [pattern]
(GNU find defaults the search path to the current directory; BSD find requires it explicitly.)
If you need more help with find, Digital Ocean's guide to it is pretty good. I include this because the man pages for find are outrageously dense if you just want to figure out basic usage.
To add to the previous answers, you can use an "Or" Regular Expression to search within files matching different file extensions.
For example, to search for a string in just C++ header files (.hpp) and Makefiles (.mk):
ag -G '.*\.(hpp|mk)' my_string_to_search
After being unsatisfied with mdfind, find, locate, and other attempts, the following worked for me. It uses tree to get the initial list of files, ag to filter out directories, and then awk to print the matching files themselves.
I wound up using tree because it was more (and more easily) configurable than the other solutions I tried, and because it is fast.
This is a fish function:
function ff --description 'Find files matching given string'
    # tree -p prints a permission string; directories start with [d, which ag filters out
    tree . --prune --matchdirs -P "*$argv*" -I "webpack" -i -f --ignore-case -p |
    ag '\[[^d].*' |
    awk '{print $2}'
end
This gives output similar to the following:
~/temp/hello_world $ ff controller
./app/controllers/application_controller.rb
./config/initializers/application_controller_renderer.rb
~/temp/hello_world $

Find unused resource files (.jsp, .xhtml, images) in Eclipse

I'm developing a large web application in Eclipse and some of the resources (I'm talking about files, NOT code) are becoming deprecated; however, I don't know which ones they are, and I'm still including them in my final war file.
I know Eclipse recognizes file paths within its directory because I can follow the link to an image or another page while I'm editing one of my xhtml pages (using Ctrl+click). But is there a way to locate the unused resources in order to remove them?
Following these 3 steps would work for sites with a relatively finite number of dynamic pages (a command sketch follows):
Install your site on a filesystem mounted with atime (access time) enabled.
Try harvesting the whole site with wget.
Use find to see which files were not accessed recently.
Done.
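A command sketch of steps 2 and 3 (the site root, URL, and time window are assumptions):
# Step 2: crawl every reachable page and its assets, discarding the downloads
wget --mirror --page-requisites --delete-after http://localhost/mysite/
# Step 3: list files whose atime was not updated by the crawl
find /var/www/mysite -type f -amin +15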
As far as I know, Eclipse doesn't have this (I need it too).
I'm using grep in conjunction with a bash script: the script takes the files in my resource folder, puts the filenames in a list, greps through the source code for every record in the list, and removes from the list each record that grep finds.
At the end the list is printed to the console, so only the unused resources remain in it.
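A minimal sketch of that approach (the folder layout is an assumption):
# Print resources that no source file appears to reference
for f in WebContent/resources/*; do
    name=$(basename "$f")
    grep -rqF "$name" src WebContent || echo "$name appears unused"
done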
UCDetector might be your best bet, specifically, the custom marker aspects of this tool.
In Eclipse I have not found a way. I have used the following shell commands instead.
Find .ftl template files which are NOT referenced in .java files
cd myfolder
find . -name "*.ftl" -printf "%f\n" | while read fname; do grep --include \*.java -rl "$fname" . > /dev/null || echo "${fname} not referenced"; done
or
Find all .ftl template files which are NOT referenced in .java, .ftl, .inc files
cd myfolder
find . -name "*.ftl" -printf "%f\n" | while read fname; do grep --include \*.java --include \*.ftl --include \*.inc -rl "$fname" . > /dev/null || echo "${fname} not referenced"; done
Note: on macOS you can use gfind (GNU find) instead of find in case -printf is not working.
Example output
productIndex2.ftl not referenced
showTestpage.ftl not referenced

etags auto generation

There were some people on Stack Overflow with a problem like this, but not exactly this, and not exactly the solution I'm looking for. The problem is auto-generating the tags file with etags (through Emacs) if it doesn't exist. I want to index all the files, not limited to C or any one language, and have Emacs load the result automatically. I don't want to have any role in loading the tags file.
Any idea?
For me, I put the following rule in my Makefile (the recipe line must be indented with a tab):
tags:
	find . -type f -name "*.[ch]" -print0 | xargs -0 etags -o TAGS -a -l c
I refresh the tags with M-x compile, then make tags.
Emacs auto-detects that the TAGS file was refreshed, and asks you if you want to re-load it.
Otherwise, you can type M-x tags-reset-tags-tables, and when you search for something with M-., Emacs auto-loads the newly generated file.
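If you want to index every file rather than just C sources, a variant of that rule might look like this (a sketch; how etags treats file types it does not recognize depends on the version, so check the generated TAGS):
tags:
	find . -type f -not -path '*/.git/*' -print0 | xargs -0 etags -a -o TAGS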

How do I do a recursive find & replace within an SVN checkout?

How do I find and replace every occurrence of:
foo
with
bar
in every text file under the /my/test/dir/ directory tree (recursive find/replace).
BUT I want to be able to do it safely within an SVN checkout and not touch anything inside the .svn directories.
Similar to this but now with the SVN restriction: Awk/Sed: How to do a recursive find/replace of a string?
There are several possibilities:
Using find:
Using find to create a list of all files, and then piping them to sed or the equivalent, as suggested in the answer you reference, is fairly straightforward, and only requires scanning through the files once.
You'd use one of the same answers as from the question you referenced, but adding -path '*/.svn' -prune -o after the find . in order to prune out the SVN directories. See this question for a discussion of using the prune option with find -- although note that they've got the pattern wrong. Thus, to print out all the files, you would use:
find . -path '*/.svn' -prune -o -type f -print
Then, you can pipe that into an xargs call or whatever to do the individual replacements, as suggested in the question you referenced. There is a lot of discussion there about different options, which I won't reproduce here, although I prefer the version from John Zwinck's answer:
find . -path '*/.svn' -prune -o -type f -exec sed -i 's/foo/bar/g' {} +
Using recursive grep:
If you have a system with GNU grep, you can use that to find the list of files as well. This is probably less efficient than find, but it does allow you to only call sed on the files that match, and I personally find the syntax a lot easier to remember (or figure out from manpages):
sed -i 's/foo/bar/g' `grep -l -R --exclude-dir='.svn' 'foo' .`
The -l option causes grep to only output the list of file names, rather than the matching lines.
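If filenames may contain spaces, a NUL-safe variant of the same idea (assuming GNU grep, xargs, and sed):
grep -lRZ --exclude-dir='.svn' 'foo' . | xargs -0 sed -i 's/foo/bar/g'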
Using a GUI editor:
Alternately, if you're using Windows, do what I do: get a copy of the NoteTab editor (available in a free version), and use its search-and-replace-on-disk command, which ignores hidden .svn directories automatically and just works.
Edit: Corrected find pattern to */.svn instead of .svn, added more details and some other possibilities. However, this depends on your platform and svn version: .svn without */ may be required in some cases, like on CentOS 7.
How about this?
grep -i "search_string" `find . -name "*.some_extension"`
That is a halfway solution to finding a search_string within files that have a specific extension... once you know which files contain the string, this can easily be modified by piping the list into sed...
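Completing that thought as a sketch, with the .svn pruning from the earlier answers folded back in (replacement text is a placeholder):
find . -path '*/.svn' -prune -o -name "*.some_extension" -type f -exec sed -i 's/search_string/replacement/g' {} +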