OpenStack Swift object search

How do I search for files (objects) inside a sub-folder of a container?
My folder structure is anu/anusuya/sample.txt.
My curl command is curl -i "https://url/Contname/?limit=2&prefix=sa", but it returns an empty result. If I search for "an" instead, it does return the value.

OpenStack Swift containers, so far, don't have folders, only objects. Although the object names contain slashes (anu/anusuya/sample.txt), the slashes are just part of the name, and the prefix parameter matches against the full name from its first character. That is why prefix=sa returns nothing while prefix=an does: the name begins with "anu/", not "sa".
An alternative solution is to fetch the full listing and filter the response with a tool like grep.
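
A hedged sketch of both options (the URL and container name are the placeholders from the question):

# Include the pseudo-folder path in the prefix so it matches the full object name.
$ curl -i "https://url/Contname/?limit=2&prefix=anu/anusuya/sa"

# Or fetch the whole listing and filter it client-side.
$ curl -s "https://url/Contname/" | grep sample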

Related

How do I copy a directory recursively and expand/resolve any symbolic links (symlinks) it contains in Swift?

This SO answer shows how to copy a directory recursively and expand all the symlinks from the command line using bash. I cannot find the same functionality in Swift. Does such a function exist? How do I do this concisely in Swift? I could call this bash command from Swift, but would prefer a pure Swift solution.
FileManager.copyItem copies the symbolic links without resolving them.
This SO answer shows how to resolve a single symlink. However, I would prefer not to traverse the directory manually, checking whether each item is a symlink that needs to be resolved.
To the best of my knowledge, you need to write FileManager code that traverses the directory structure manually.
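
For comparison, the command-line approach the question refers to presumably boils down to something like this sketch, where -R recurses and -L resolves (dereferences) symlinks as they are copied; source_dir and destination_dir are hypothetical names:

$ cp -RL source_dir destination_dir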

How to partly rename downloaded files using wget?

I'd like to download many files (about 10,000) from an FTP server. The names of the files are too long, and I'd like to save them with only the date in the name. For example, ABCDE201604120000-abcde.nc should become 20160412.nc.
Is it possible?
I am not sure whether wget provides similar functionality; with curl, however, one can take advantage of the relatively rich syntax it provides for specifying the URL of interest. For example:
curl \
"https://ftp5.gwdg.de/pub/misc/openstreetmap/SOTMEU2014/[53-54].{mp3,mp4}" \
-o "file_#1.#2"
will download the files 53.mp3, 53.mp4, 54.mp3, and 54.mp4. The output file is specified as file_#1.#2 - here, #1 is replaced by curl with the value of the sequence [53-54] corresponding to the file being downloaded. Similarly, #2 is replaced with either mp3 or mp4. Thus, e.g., 53.mp3 will be saved as file_53.mp3.
ewcz's answer works fine if you can enumerate the file names as shown in the post. However, if the file names are difficult to enumerate, for example because the integers are sparsely populated, this approach would result in a lot of 404 Not Found requests.
If this is the case, then it is probably better to download all the files recursively, as you have shown, and rename them afterwards. If the file names follow a fixed pattern, you can select the substring from the original name and use it as the new name. In the given example, the new file names start at position 5 and are 8 characters long. The following bash command renames all *.nc files in the current directory:
for f in *.nc; do mv "$f" "${f:5:8}.nc" ; done
If the file names do not follow a fixed pattern and might vary in length, you can use more complex pattern substitution with sed; see this SO post for an example.
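
A hedged sketch of such a substitution, assuming the date is the first run of digits after a leading alphabetic prefix (the regular expression is illustrative, not taken from the linked post):

# Capture the 8-digit date, drop everything else, and keep the .nc extension.
$ for f in *.nc; do mv "$f" "$(sed -E 's/^[A-Za-z]+([0-9]{8}).*$/\1.nc/' <<< "$f")"; done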

Merge two PO files and overwrite matching translation rules

I'm attempting to merge two PO files.
I have a base.po file that has general translations.
I have an extra.po with extra translations that I'd like to add to the base file, or that should overwrite the base file's translations when the translation IDs match.
I've tried using msgmerge:
$ msgmerge extra.po base.po -o merge.po
But this comments out any translations with matching IDs.
Looking at the msgmerge documentation, it doesn't look like there is any option to change this behavior.
I'd like to be able to have multiple extra translation files (extra1.po, extra2.po, etc.) so I can merge them with the base translation file and use them in different contexts.
Does anyone know how to do what I'm attempting?
Turns out I needed to be using msgcat instead.
The command below creates a PO file, merge.po, that contains all of the translations from extra.po and adds any additional translations from base.po.
The --use-first option specifies that if a translation ID appears in both files, the translation from extra.po is used.
$ msgcat extra.po base.po -o merge.po --use-first
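
Since the goal was to support several extra files, the same pattern presumably extends to multiple inputs, with earlier files winning whenever IDs conflict (extra1.po and extra2.po are the hypothetical names from the question):

$ msgcat extra1.po extra2.po base.po -o merge.po --use-first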

vifm: search files in subfolders

How can I search for files just like with the / command, but recursively scanning subfolders?
Or maybe there are other approaches to getting a list of files that match some pattern in the current folder, including all subfolders.
:find command
There is the :fin[d] command for that. Internally it invokes the find utility (this is configurable via the 'findprg' option), so you can do everything find is capable of. That said, in most cases the simple form of the command suffices:
:find *.sh
Note that by default the argument is treated as a file name pattern (the -name option of find), which is different from the regular expressions accepted by /. For searching via a regexp, use:
:find -regex '.*_.*'
If you want to scan only specific subfolders, just select them before running the command and the search will be limited to those directories.
The :find command brings up a menu with the search results. If you want to process them like regular files (e.g. delete, copy, or move them), hit b to change the list representation.
Alternative that uses /
Alternatively, you can populate the current view with a list of files in all subdirectories using a command like the following (note the %u macro):
:!find%u
and then use /, although this might be less efficient.

Grabbing all .nc files from a URL to get data using MATLAB

I'd like to get all the .nc files from a URL and read the data using MATLAB. However, the file names are always very long and vary between files.
For instance, I have
url = 'http://sourcename/filename.nc'
The sourcename part is always the same; however, the filename is very long and varies, so I would like to just use * to be able to grab whatever .nc files are at the URL,
e.g.
url = 'http://sourcename/*.nc'
but this does not work, and I am guessing I need the exact name - so I am not sure what to do here.
On the other hand, it could also be interesting for me to get the name of each file and record it, but I am not sure how to do that either.
Thanks a lot in advance!!
HTTP does not implement a filesystem abstraction. This means that each of the URLs you request could be handled in a completely different way. In many cases there is also no way to get a list of available URLs from a parent (a directory listing, in other words).
It may be the case for you that http://sourcename/ actually returns an index document containing a list of the files. In that case, first fetch that document. Then you'll have to parse its contents to extract the list of files. Then you can loop over those files, form a new URL for each one, and fetch them in sequence.
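
A hedged shell sketch of that approach, assuming the server returns an HTML index whose links are relative and end in .nc (http://sourcename/ is the placeholder from the question):

# Extract href targets ending in .nc from the index page,
# then fetch each file, saving it under its remote name.
$ curl -s http://sourcename/ \
    | grep -oE 'href="[^"]*\.nc"' \
    | sed -e 's/^href="//' -e 's/"$//' \
    | while read -r f; do curl -s -O "http://sourcename/$f"; done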
If you have a list of the file names in a text file, you can use the wget utility to process the file and fetch all the listed files. This file would be formatted as follows:
http://url.com/file1.nc
http://url.com/file2.nc
(etc)
You would then invoke wget as follows:
$ wget -i url-file.txt
Alternatively, you may be able to use wget to fetch the files recursively, if they are all located in the same directory on the web server, e.g.:
$ wget -r -l1 http://url.com/directory
The -r flag says to recurse, the -l1 flag says to go no deeper than 1 level when recursing.
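
Since only the .nc files are of interest here, wget's accept list can narrow the recursive fetch; a sketch using the same placeholder URL (-nd additionally keeps the downloads out of nested host/directory folders):

$ wget -r -l1 -nd -A '*.nc' http://url.com/directory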
This solution is external to MATLAB, but once you have all of the files downloaded, you can work with them locally.
wget is a fairly standard utility available on Linux systems, and it is available for OS X and Windows as well. The wget homepage is https://www.gnu.org/software/wget/