Extracting file names from an online data server in Matlab

I am trying to write a script that will let me download numerous (1000s of) data files from a data server (e.g., http://hydro1.sci.gsfc.nasa.gov/thredds/catalog/GLDAS_NOAH10SUBP_3H/2011/345/). Unfortunately, the file names in each directory are not formatted consistently (the time each file was created is appended to the end of its name). I need to be able to specify the file name so I can subset the data (I have a special tool for these data types) and download it. I cannot find a function in MATLAB that will extract the file names.
I have looked at URLREAD, but it downloads everything, including the HTML code.
Thanks for your help!

You can parse the page source directly:
x = urlread(url);
links = regexp(x, '<a href=''([^>]+)''>', 'tokens');
This captures every link on the page, so you then have to filter out the unwanted ones. For example, this gets all .grb files (note the escaped dot):
a = regexp(x, '<a href=''([^>]+\.grb)''>', 'tokens');
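To go from the matched names to files on disk, something like the sketch below can work. It assumes the href values are plain relative file names resolvable against the directory URL (on a THREDDS server they may instead point at catalog pages, so check the actual markup first):
% Sketch: list and download all .grb files from one directory listing.
% baseUrl is the example directory from the question.
baseUrl = 'http://hydro1.sci.gsfc.nasa.gov/thredds/catalog/GLDAS_NOAH10SUBP_3H/2011/345/';
x = urlread(baseUrl);
names = regexp(x, '<a href=''([^>]+\.grb)''>', 'tokens');
for k = 1:numel(names)
    fname = names{k}{1};                  % each token comes back as a 1x1 cell
    urlwrite([baseUrl fname], fname);     % save under the same name locally
end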

Related

Rename a group of .txt files based on content that appears after a specific string in the text file?

I have a folder with hundreds of text files in it. The file names have zero meaning. I am trying to extract a variable number that appears after a string in each file and rename the files using it. I'm somewhat of a PowerShell newbie (an old timer from the DOS world) but I have written many useful scripts with it. I've searched high and low; this one has me stumped.
Any and all suggestions are welcome, or I'd happily pay someone for the snippet of code. -Ed

Is it possible to extract metadata such as Content Created date from files - I can't get this with PowerShell

I need to extract the "Content Created" date out of thousands of files, but haven't been able to find a way to do this using PowerShell / other Command Line utility.
Does someone out there know a way to obtain this metadata? If so, please can you advise me. Thanks.
I've looked at various resources online, including this site, but haven't been successful thus far.
Here's a screenshot explaining what I'm trying to do.
I've been unable to find a native PowerShell cmdlet which does what you want. However, I found this article: Use PowerShell to Find Metadata from Photograph Files, and the script it uses: the get file meta data function.
The article talks about image files, but the function is not specific to image files.
I tested it on a folder containing a Word file and an Excel file; the metadata returned for the Word file contains the Content Created date, while the Excel file does not contain/return that value. This is not unexpected, as the Details tab of the Excel file's properties has no Content Created entry, so the value seems to be specific to Word files, and perhaps some other document types.
Update:
You write that you need to extract this info from thousands of files, but if those files are anything other than Word files you probably won't be able to.
As far as I can tell, this should work with any file type that exposes the kind of metadata you want; however, the ContentCreated property seems to be unique to Word. I tried adding a text file (.txt), an Acrobat PDF (.pdf), an MS Access database (.mdb), an Excel workbook (.xlsx) and a Word document (.docx) to my test folder, and the only one that has/returns that metadata property is the Word file.
You should also be aware that the script seems to return metadata localized: for me to programmatically get the info I wanted, I had to pipe the output of the script to Select-Object -Property Name,'Innehåll skapat' (the Swedish name for Content created). So if you're running on a non-English system, you may need to check what the output looks like before writing your Select-Object statement.
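For what it's worth, the linked function appears to be built on the Shell.Application COM object, and the same object can be driven from Matlab on Windows. A sketch (the folder path and file name are hypothetical placeholders) that dumps every populated extended-property slot for one file, so you can spot the localized Content created entry and its index:
% Sketch, Windows only: enumerate extended file properties via COM.
% 'C:\temp\docs' and 'report.docx' are placeholders -- substitute real ones.
shell  = actxserver('Shell.Application');
folder = shell.NameSpace('C:\temp\docs');
item   = folder.ParseName('report.docx');
for k = 0:320                              % scan the extended-property slots
    v = folder.GetDetailsOf(item, k);      % value of slot k for this file
    if ~isempty(v)
        fprintf('%3d: %s\n', k, v);        % slot index and (localized) value
    end
end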
Another route: Power Query in Excel 2013 or later (Data tab): Connect to Data > Folder.

Load Unix executable file as ASCII

I am simply trying to load ASCII files with two columns of data (spectral data).
They were saved originally as .asc.
I need to open and edit them in a text editor to erase the headers before I can load them into Matlab, but some of them somehow got converted to Unix executable format while keeping the .asc extension, while others are plain text documents with the same extension. I have no idea why the same manipulation saved files of different kinds under the same extension.
When I use the load command in Matlab, the plain text docs load normally as expected, but the ones identified as Unix executables give me this error:
Error using load: Unable to read file filename.asc: No such file or directory.
How can I either resave them (still with the same extension) or otherwise load them so Matlab reads them as standard two-column data matrices?
Thanks!
If these are truly plain text files, try renaming the file from xxx.asc to xxx.txt. Then, see if you are able to edit them as desired.
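If renaming doesn't change anything, it can help to check what is actually inside the problem files before load touches them, and to read the numeric columns directly while skipping the headers. A minimal sketch (the file name and the header-line count are placeholders):
% Sketch: inspect a problem file, then read its two numeric columns.
fname = 'spectrum.asc';                     % placeholder -- use a real file
fid = fopen(fname, 'r');
if fid == -1
    error('Cannot open %s: check the path and name.', fname);
end
firstBytes = fread(fid, 64, 'uint8')';      % peek at the leading bytes
fclose(fid);
fprintf('First bytes: %s\n', mat2str(firstBytes));  % text is mostly 32-126

% If the content is text, skip the header lines and pull the two columns,
% which also avoids hand-editing every file. HeaderLines is assumed to be 2.
fid = fopen(fname, 'r');
cols = textscan(fid, '%f %f', 'HeaderLines', 2);
fclose(fid);
spectrum = [cols{1} cols{2}];               % standard two-column matrix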

Extracting specific file from zip in matlab

Currently I have a zip file containing several thousand .xml files; extracted, the folder is 1.5 GB in size.
I have a function that matches data with specific files inside this zip file. I then want to read this specific file and extract additional data.
My question:
Is there any way to extract these specific files from the archive without unzipping the entire archive?
The built-in unzip function can only unzip the entire archive, so it won't work; I am thinking I have to use the COM interface or some other approach.
Matlab version: R2013a
While searching for solutions I found this: Read the data of CSV file inside Zip File without extracting the contents in Matlab
But I can't get the code in that answer to work for my situation.
Edit:
Credit to Hoki and Intelk
zipFilename = 'HMDB.zip';
zipJavaFile = java.io.File(zipFilename);
zipFile = org.apache.tools.zip.ZipFile(zipJavaFile);
entries = zipFile.getEntries;
cnt = 1;
while entries.hasMoreElements
    tempObj = entries.nextElement;
    file{cnt,1} = tempObj.getName.toCharArray';   % entry name as a char row
    cnt = cnt + 1;
end
ind = regexp(file, '\.xml$');                     % keep only the .xml entries
ind = find(~cellfun(@isempty, ind));
file = file(ind);
file = cellfun(@(x) fullfile('.', x), file, 'UniformOutput', false);
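From there, a single matching entry can be pulled out without inflating the rest of the archive. The stream copy below leans on InterruptibleStreamCopier, an undocumented MathWorks-internal helper (popularized on the Undocumented Matlab site), so treat it as a sketch that may break in future releases:
% Sketch: extract one entry from the still-open zipFile object above.
% entryName must be the raw name exactly as getName returned it
% (i.e. before the fullfile('.',...) prefixing step).
entryName = 'some/dir/inside/zip/file0001.xml';   % placeholder
entry     = zipFile.getEntry(entryName);          % look the entry up by name
inStream  = zipFile.getInputStream(entry);
outStream = java.io.FileOutputStream(java.io.File('extracted.xml'));
copier = com.mathworks.mlwidgets.io.InterruptibleStreamCopier.getInterruptibleStreamCopier;
copier.copyStream(inStream, outStream);           % stream the bytes to disk
outStream.close;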
And not forgetting the
zipFile.close

Best way to get a database-friendly list of Veterans Affairs hospitals

I sincerely apologize if this isn't the proper forum to discuss this, but I wasn't sure where to go or what would be the best option.
Basically, I'm trying to find a database-friendly list of Veterans Affairs hospitals. The closest thing that I've been able to find is www.va.gov/ofcadmin/docs/CATB.pdf, as it has all the information I'm looking for:
Region
Address
City in a separate column
Zip Code in a separate column
State
Facility # (also known as StationID)
VISN
Symbol
I've tried exporting that PDF to CSV, but it's a complete nightmare to get working. So I was curious if anyone had any ideas or insights into how I could accomplish this task.
First, here's a CSV file containing the data found in CATB.pdf. The very first line contains the column headers, and the rest of the file contains the contents.
http://tmp.alexloney.com/CATB.csv
Now, for the more detailed explanation... I took the PDF you linked to, converted it to an HTML document using Adobe Acrobat, then used a lot of regular expressions to parse the file and clean it up. Once the file was clean enough, I was able to write a program to parse the remainder, grab the state and region, and spit it all out in a nicely formatted CSV.
Hope that helps you!
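Since the rest of this page is Matlab-flavored: pulling that CSV into Matlab is short, assuming the temporary URL above is still live (it may not be) and that you have readtable (R2013b or later):
% Sketch: fetch the CSV above and load it as a table.
% The tmp.alexloney.com URL is a temporary host, so it may be gone.
urlwrite('http://tmp.alexloney.com/CATB.csv', 'CATB.csv');
va = readtable('CATB.csv');   % first row of the file supplies the headers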
I believe that PDFill has an option that will convert a PDF file to Excel. Once in Excel, you should have no problem converting to a CSV file.