Download a file via iManage NRL

I need to download files pointed to by iManage NRLs. The only way I can see is to take apart the contents of the NRL to get the details and then ask iManage to copy the file to the local computer. But this concerns me: if iManage changes the format of the NRL, my code breaks. Is there a way of getting, say, the profile, or of copying the document to the local machine, by passing the NRL or its contents to some function in iManage, rather than having to take it apart with my own code?

There isn't a method within the 8.5 API that takes an NRL file and returns a document. iManage is unlikely to change the format of the NRL file, as that would break backwards compatibility with all existing NRL files in use, so I believe it's safe to just parse the NRL file to extract the information. The link follows this template, where {0} is the document number, {1} the version, {2} the source server, and {3} the database:
"{2}!nrtdms:0:!session:{2}:!database:{3}:!document:{0},{1}:"
Build a regular expression around that layout (with the Text.RegularExpressions.RegexOptions.Multiline and Text.RegularExpressions.RegexOptions.IgnoreCase options) to extract the source server, database, document number, and version, and use those in your app for document extraction.
This SO answer has an example of how to get a physical file from a document number.
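As an illustration, here is a minimal Perl sketch of the parsing step; the file name is hypothetical and the capture groups simply mirror the template above:

use strict;
use warnings;

# Slurp the .nrl file (hypothetical path).
open my $fh, '<', 'document.nrl' or die "open: $!";
my $nrl = do { local $/; <$fh> };
close $fh;

# Pull out the server, database, document number and version.
if ($nrl =~ m{!session:([^:]+):!database:([^:]+):!document:(\d+),(\d+):}i) {
    my ($server, $database, $docnum, $version) = ($1, $2, $3, $4);
    print "server=$server database=$database document=$docnum version=$version\n";
    # ...pass these to the iManage API call that fetches the document...
}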

Related

Copy-PnPFile returns 401

I am trying to copy files between SharePoint document libraries with the PnP PowerShell command Copy-PnPFile,
but I get "Copy-PnPFile : The remote server returned an error: (401) Unauthorized".
Earlier in the script I read items and create folders and that works; only Copy-PnPFile doesn't work.
Copy-PnPFile -SourceUrl $Item["FileRef"] -TargetUrl $TargetFileUrl
Make sure $Item["FileRef"] is a relative path. Of course $TargetFileUrl should also be a relative path.
From the Microsoft documentation:
Copy-PnPFile: Copies a file or folder to a different location. This location can be within the same document library, same site, same site collection or even to another site collection on the same tenant. Currently there is a 200MB file size limit for the file or folder to be copied. Notice that if copying between sites or to a subsite you cannot specify a target filename, only a folder name.
Here is a nice article for your reference:
SharePoint Online: Copy File Between Document Libraries using PowerShell
My mistake.
In the source and target relative paths I forgot to include the "SiteCollectionURL".
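For example (hypothetical site and file names), a server-relative URL that includes the site collection looks like:
/sites/YourSiteCollection/Shared Documents/Report.docx
rather than just:
/Shared Documents/Report.docx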

Blast+ Local Configuration: How to configure nt and nr databases?

I am configuring BLAST+ on my Mac (macOS Sierra) and am having trouble configuring the nr and nt databases that I also downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine, and they say configure a path for BLASTDB "similarly" but to the file where my DB will be, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 slice rather than the entire nt database, especially because if I run my test_query.fa sequence on the web BLAST, I get different results.
Also, their instructions say that the path only needs to point to the folder that contains the database folder nt.00 from the tar I unzipped, and not to nt.00 itself, which in my case would just be "blastdb/" (as opposed to "blastdb/nt.00/", which then contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn against the nt database but also blastp against the nr one, etc., just by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the BLASTDB path with the nt.00 DB appended to the end, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then depending on what database I want to use I could just say so after the -db flag on my command. But when I make the path like that above, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running that same blastn command from above and swapping out "nt" for "nt.00", and have tried these commands with the BLASTDB path ending in "blastdb/", "blastdb/nt", and of course "blastdb/nt.00", which is the only one that runs without errors.
Here's an example of another thread I read where the OP is worried about his runs not checking the entire nt database; that was different from my problem, however.
Thanks for your help!
This whole problem came down to having the nt.00 and nr.00 folders (the original folders produced by unzipping their respective .tar.gz files) sitting inside the same parent folder, when it should be their contents that sit together in the same parent folder. I simply deleted the folders they came in and copied their contents over to my new, single parent. I was somewhat misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains all of the files for every database I plan on using, including nt, nr, and refseq.
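For reference, the working layout is a single directory holding each database's volume files and its alias file side by side, something like this (file names illustrative; the number of volumes varies):

blastdb/
    nt.00.nhr  nt.00.nin  nt.00.nsq  ...
    nt.01.nhr  nt.01.nin  nt.01.nsq  ...
    nt.nal     (alias file tying the nt volumes together)
    nr.00.phr  nr.00.pin  nr.00.psq  ...
    nr.pal     (alias file for nr)

With BLASTDB pointing at blastdb/, -db nt and -db nr then resolve through the .nal/.pal alias files and search every volume, not just the first one.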

How can I download a report using wget?

I am trying to use wget to automate the download of a file which is generated by a report server (PDF format). However, the problem I am having is that the file name is never known (it is generated randomly by the server) and the URL accepts parameters that will change, e.g. Date=, Name=, ID=, etc.
For example, if I were to open http://url.com/date=&name=&id= in Internet Explorer, I would get a download dialog prompting me to download a file named xyz123.pdf.
Is it possible to use wget to pass these parameters to the report server and automatically download the generated PDF file?
Just put the full URL in quotes so the shell doesn't treat the & characters as command separators; wget will then go and fetch the file:
wget "http://url.com/date=foo&name=baa&id=baz"

How can I resume downloads in Perl?

I have a project that depends on some other binaries being downloaded from the web at install time. For this, what I do is:
if (-e "src/$file") {
    # skip that file
}
else {
    # use wget to download the file
    system("wget", $url);
}
The problem with this approach is that when I interrupt a download in the middle and invoke the script again, the partially downloaded file is skipped (which is not desired); I also want wget to resume the download of the partially downloaded file.
How should I go about it?
Possible solutions I could think of:
Let the file be downloaded to some temporary name, say download_tmp, and move it to the original name if successful.
Handle $SIG{'INT'} to write proper cleanup code.
But neither of these helps resume the partial file download.
Any insights?
First, I don't understand what this has to do with Perl, since you're using wget to do the downloading... You could use libwww-perl (perldoc LWP) and have more control over the download process.
Then I second your idea of downloading to a "tmp" filename and moving the file on success.
However, I think you need to go further and verify the integrity of the files. Computing an MD5 or SHA hash is very easy; match the hash of the downloaded file against what you're expecting. You can keep a short file on the server containing the checksum (filename.md5). Declare success only when you have a match.
Note that catching all the signals and generally trying to make the process unkillable, and then expecting it to have worked, is bound to fail at one point or another. There could be a network timeout, a crash, a power failure, a configuration problem on the server... you should instead assume downloads can fail, because they will, and code so that your process can recover.
Finally, you're not telling us what kind of binaries you're downloading and what you're doing with them. Since you use wget I'm going to assume you're on Unix; you should consider using RPM+Yum or the like, which handle all of this for you. RPMs are easy to write, really.
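To make that concrete, here is a rough LWP-based sketch combining the tmp-file, resume, and checksum ideas; the URL, file names, and expected checksum are all placeholders:

use strict;
use warnings;
use LWP::UserAgent;
use Digest::MD5;

my $url      = 'http://example.com/tool.bin';            # placeholder
my $file     = 'tool.bin';
my $tmp      = "$file.tmp";
my $expected = 'd41d8cd98f00b204e9800998ecf8427e';       # placeholder checksum

my $have = -e $tmp ? -s $tmp : 0;
open my $fh, $have ? '>>' : '>', $tmp or die "open $tmp: $!";
binmode $fh;

my $ua  = LWP::UserAgent->new;
my $res = $ua->get($url,
    ($have ? ('Range' => "bytes=$have-") : ()),
    ':content_cb' => sub {
        my ($chunk, $response) = @_;
        # If the server ignored the Range header it sends the whole
        # file back with a 200, so start the temp file over.
        if ($have && $response->code == 200) {
            truncate $fh, 0;
            seek $fh, 0, 0;
            $have = 0;
        }
        print {$fh} $chunk;
    });
close $fh;
die "download failed: ", $res->status_line unless $res->is_success;

# Verify integrity before moving the file into place.
open my $in, '<', $tmp or die "open $tmp: $!";
binmode $in;
my $got = Digest::MD5->new->addfile($in)->hexdigest;
close $in;
die "checksum mismatch" unless $got eq $expected;

rename $tmp, $file or die "rename: $!";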
Use your first approach:
download to "FileName".tmp
move "FileName".tmp to "FileName" (move, not copy)
once per day, clean out all .tmp files (paranoia rules)
You could just use wget's -N and -c options and remove the entire "if file exists" logic.
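A minimal sketch of what that looks like from the install script, assuming wget is on the PATH (the @urls list is a hypothetical stand-in for however you track your binaries):

use strict;
use warnings;

my @urls = ('http://example.com/tool.bin');  # placeholder list

for my $url (@urls) {
    # -c resumes a partial download; -N only re-fetches when the
    # server copy is newer, replacing the "if file exists" check.
    system('wget', '-c', '-N', $url) == 0
        or die "wget failed for $url: $?";
}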

Where does CGI.pm normally create temporary files?

On all my Windows servers except for one machine, when I execute the following code to allocate a temporary file:
use CGI;
my $tmpfile = new CGITempFile(1);
print "tmpfile='", $tmpfile->as_string(), "'\n";
The variable $tmpfile is assigned the value '.\CGItemp1' and this is what I want. But on one of my servers it's incorrectly set to C:\temp\CGItemp1.
All the servers are running Windows 2003 Standard Edition, IIS6 and ActivePerl 5.8.8.822 (upgrading to later version of Perl not an option). The result is always the same when running a script from the command line or in IIS as a CGI script (where scriptmap .pl = c:\perl\bin\perl.exe "%s" %s).
How can I fix this Perl installation and force it to return '.\CGItemp1' by default?
I've even copied the whole Perl folder from one of the working servers to this machine but no joy.
#Hometoast:
I checked the 'TMP' and 'TEMP' environment variables and also $ENV{TMP} and $ENV{TEMP} and they're identical.
From command line they point to the user profile directory, for example:
C:\DOCUME~1\[USERNAME]\LOCALS~1\Temp\1
When run under IIS as a CGI script they both point to:
c:\windows\temp
In registry key HKEY_USERS/.DEFAULT/Environment, both servers have:
%USERPROFILE%\Local Settings\Temp
The ActiveState implementation of CGITempFile() is clearly using an alternative mechanism to determine how it should generate the temporary folder.
#Ranguard:
The real problem is with the CGI.pm module and attachment handling. Whenever a file is uploaded to the site, CGI.pm needs to store it somewhere temporarily. To do this, CGITempFile() is called within CGI.pm to allocate a temporary file. So unfortunately I can't use File::Temp. Thanks anyway.
#Chris:
That helped a bunch. I did have a quick scan through the CGI.pm source earlier but your suggestion made me go back and look at it more studiously to understand the underlying algorithm. I got things working, but the oddest thing is that there was originally no c:\temp folder on the server.
To obtain a temporary fix I created a c:\temp folder and set the relevant permissions for the website's anonymous user account. But because this is a shared box I couldn't leave things that way, even though the temp files were being deleted. To cut a long story short, I renamed the c:\temp folder to something different and magically the correct '.\' folder path was being returned. I also noticed that the customer had enabled FrontPage extensions on the site, which removes write access for the anonymous user account on the website folders, so this permission needed re-applying. I'm still at a loss as to why at the start of this issue CGITempFile() was returning c:\temp, even though that folder didn't exist, and why it magically started working again.
The name of the temporary directory is held in $CGITempFile::TMPDIRECTORY and initialised in the find_tempdir function in CGI.pm.
The algorithm for choosing the temporary directory is described in the CGI.pm documentation (search for -private_tempfiles).
IIUC, if a C:\Temp folder exists on the server, CGI.pm will use it. If none of the directories checked in find_tempdir exist, then the current directory "." is used.
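A quick way to check which directory a given installation has settled on is to print the package variable mentioned above after allocating a temp file (behaviour may vary between CGI.pm versions):

use CGI;

# Allocating a CGITempFile triggers find_tempdir, which sets the variable.
my $tmpfile = new CGITempFile(1);
print "temp dir: $CGITempFile::TMPDIRECTORY\n";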
I hope this helps.
Not the direct answer to your question, but have you tried using File::Temp?
It is specifically designed to work on any OS.
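For anything where you control the code, a minimal File::Temp example (the data written is just illustrative):

use File::Temp qw(tempfile);

# tempfile() picks a platform-appropriate directory via File::Spec->tmpdir
# and returns an open handle plus the file name; UNLINK removes it on exit.
my ($fh, $filename) = tempfile(UNLINK => 1);
print {$fh} "scratch data\n";
print "temporary file: $filename\n";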
If you're running this script as you, check the %TEMP% environment variable to see if it differs.
If IIS is executing, check the values in registry for TMP and TEMP under
HKEY_USERS/.DEFAULT/Environment