File not found error when using Cyberduck CLI for OneDrive - PowerShell

I want to upload encrypted backups to OneDrive using Cyberduck to avoid local copies. Given a local file called file.txt that I want to upload into the folder Backups in the OneDrive root, I used this command:
duck --username <myUser> --password <myPassword> --upload onedrive://Backups .\file.txt
Transfer incomplete…
File not found. /. Please contact your web hosting service provider for assistance.
It's not even possible to get the directory content using the duck --username <myUser> --password <myPassword> --list onedrive://Backups command. This also causes a File not found error.
What am I doing wrong?
I followed the documentation exactly and have no clue why this is not working. Cyberduck was installed using Chocolatey; the current version is Cyberduck 6.6.2 (28219).

Just testing this out, it looks like OneDrive assigns a unique identifier to the root folder. You can find it either by inspecting the value of the cid parameter in the URL of your OneDrive site, or by using the following command:
duck --list OneDrive:///
Note that having three slashes is important. It would appear the first two are part of the protocol prefix and the third specifies that you want the root. The result should be a unique id of some sort, like 36d25d24238f8242, which you can then use to upload your files:
duck --upload onedrive://36d25d24238f8242/Backups .\file.txt
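Before uploading, it may be worth confirming that the folder is reachable under that id; a quick sketch reusing the credentials from the question (the id is just the example value above, so yours will differ):
duck --username <myUser> --password <myPassword> --list onedrive://36d25d24238f8242/Backups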
I didn't see any of that in the docs... just tinkering with it. So I might recommend opening a bug with duck to update their docs if this works for you.

What happens if you use the full path to the file? It looks like it is just complaining about not finding the file to upload, so it could be that you are in a different directory or something and it needs the full path to the source file.
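For example, something along these lines (the root id is the example value from the answer above, and the local path is purely hypothetical):
duck --username <myUser> --password <myPassword> --upload onedrive://36d25d24238f8242/Backups C:\Users\<you>\Documents\file.txt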

Using p4 zip and unzip to export files from one perforce server to another

I was trying to export the files inside my depot folder, along with their revision history, from a 2015.2 Perforce server to a 2019 one. I would also want Perforce to create a new user on my new server corresponding to the committer/submitter on my original 2015 repo.
Perforce replication looked like overkill for my current task, and then I came across an article on Perforce's website that mentioned p4 zip.
This looked like it would solve my problem, but the article has a few points I could not understand.
Let's say I am moving data from server1_ip:port --> server2_ip:port
I am currently following these steps:
Making a zip of the folder to be copied using
p4 remote my_remote_spec, setting
Address: server1_ip:port
DepotMap: //depot/... //depot2/...
and then running
p4 -p server1_ip:port zip -o test.zip -r my_remote_spec -A //depot/...
But on this step I get a permission denied error. This is weird to me because the user, although not super/admin, has access to the files I ask to get zipped.
Also, when I did try with a super user, I could not find test.zip even though I was not shown any errors.
Isn't the above command supposed to generate a zip file inside the directory which I run it from?
Is the unzip command supposed to be run after a p4 login from a user of the second server?
Lastly, in the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
On this step I get a permission denied error. This is weird to me because the user, although not super/admin, has access to the files I ask to get zipped.
This is expected:
C:\Perforce\test>p4 help zip
zip -- Package a set of files and their history for use by p4 unzip
...
The zip command requires super permission granted by p4 protect.
Isn't the above command supposed to generate a zip file inside the directory which I run it from?
Similar to p4 admin checkpoint, the zip file is written to the server machine (relative to the server root, if you don't specify an absolute path), rather than being transferred to the local client directory. This is not explicitly stated in the documentation (which seems like an oversight), but if you look in the root directory of the server where you ran the zip, you should find your test.zip there.
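If you are not sure where the server root is, p4 info reports it; a quick sketch (run from any client with access to the first server):
p4 -p server1_ip:port info
Look for the "Server root:" line in the output, then check that directory on the server machine for test.zip.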
Is the unzip command supposed to be run after a p4 login from a user of the second server?
Yes, any time you run a command against a particular server, you will need to be logged in to that server. In the case of p4 unzip you will need at least admin permission on the second server.
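As a sketch of that sequence, using the placeholder addresses from the question (and assuming the zip has been copied somewhere the second server can read it; to the best of my knowledge the -i flag names the input zip file):
p4 -p server2_ip:port login
p4 -p server2_ip:port unzip -i test.zip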
Lastly, in the document, why is a third port, 1667, mentioned in the transfer of files between servers running on 1666 and 1777?
I'm pretty sure that's a typo; whoever wrote the article started off using ports 1666 and 1777, changed their mind halfway through, and didn't proofread. :)

What am I screwing up trying to download particular file types with wget?

I am attempting to regularly archive a few file types hosted on a community website where our admin has been MIA for years, in case he dies or just stops paying for the hosting.
I am able to download all of the files I need using wget -r -np -nd -e robots=off -l 0 URL but this leaves me with about 60,000 extra files to waste time both downloading and deleting.
I am really only looking for files with the extensions "tbt" and "zip". When I add in -A tbt,zip to the input, wget then only downloads a single file, "index.html.tmp". It immediately deletes this file because it doesn't match the file type specified, and then the process stops entirely, with wget announcing that it is finished. It does not attempt to download any of the other files that it grabs when the -A flag is not included.
What am I doing wrong? Why does specifying file types in the way that I did cause it to finish after only looking at one file?
Possibly you're hitting the same problem I've hit when trying to do something similar. When using --accept, wget determines whether a link refers to a file or a directory based on whether or not it ends with a /.
For example, say I have a directory named files, and a web page that has:
<a href="files">Lots o' files!</a>
If I were to request this with wget -r, then wget would happily GET /files, see that it was an HTML document containing a bunch of links, and continue to download those links.
However, if I add -A zip to my command line, and run wget with --debug, I see:
appending ‘http://localhost:8080/files’ to urlpos.
[...]
Deciding whether to enqueue "http://localhost:8080/files".
http://localhost:8080/files (files) does not match acc/rej rules.
Decided NOT to load it.
In other words, wget thinks this is a file (no trailing /) and it doesn't match our acceptance criteria, so it gets rejected.
If I modify the remote file so that it looks like...
<a href="files/">Lots o' files!</a>
...then wget will follow the link and download files as desired.
I don't think there's a great solution to this problem if you need to use wget. As I mentioned in my comment, there are other tools available that may handle this situation more gracefully.
It's also possible you're experiencing a different issue; the output of adding --debug to your command line would clarify things in that case.
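For example, a sketch based on the command in the question, writing the full log to a file so the accept/reject decisions are easy to search afterwards (URL stands in for the real site address):
wget --debug -o wget-debug.log -r -np -nd -e robots=off -l 0 -A tbt,zip URL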
I also experienced this issue, on a page where all the download links looked something like this: filedownload.ashx?name=file.mp3. The solution was to match both the linked file and the downloaded file. So my wget accept flag looked like this: -A 'ashx,mp3'. I also used the --trust-server-names flag. This catches all the .ashx files that are linked in the web page; then, when wget does the second check, all the mp3 files that were downloaded will stay.
As an alternative to --trust-server-names, you may also find the --content-disposition flag helpful. Both flags help rename the file that gets downloaded from filedownload.ashx?name=file.mp3 to just file.mp3.
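Putting those pieces together with the recursive options from the question, the command might look roughly like this (a sketch; swap in whatever extensions your site actually links to and serves):
wget -r -np -nd -e robots=off -l 0 -A 'ashx,mp3' --trust-server-names URL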

Blast+ Local Configuration: How to configure nt and nr databases?

I am configuring BLAST+ on my Mac (macOS Sierra) and am having trouble configuring my nr and nt databases, which I also downloaded locally. I am trying to follow NCBI's instructions here, and am getting hung up on the Configuration and Example Execution steps.
They say to change my .bash_profile so that it says:
export PATH=$PATH:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/ncbi-blast-2.6.0+/bin
That works fine, and they say to configure a path for BLASTDB "similarly" but pointing to where my DB will be, so I have done this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/nt.00
which specifies the exact folder that I got when I unzipped the nt tar file from their FTP. With this path, if I run the command...
blastn -query test_query.fa -db nt.00 -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
then it runs successfully and I get results, but I am worried that these are only being checked against the nt.00 portion rather than the entire nt database, especially because if I run my test_query.fa sequence on the web BLAST, I get different results.
Also, their instructions say that the path only needs to point to the folder that contains the database folder nt.00 from the tar I unzipped, not to the specific nt.00 folder itself, which in my case would just be "blastdb/" (as opposed to "blastdb/nt.00/", which then contains nt.00.nhd, nt.00.nal, etc.). That makes sense, because when I am working I want to be able to run blastn against the nt database but also blastp against the nr one, etc., just by changing the -db flag on my command, and there shouldn't be a problem with having them all in this folder, right? But if I must specify the BLASTDB path with the nt.00 DB added to the end, how could I ever use nr.00 in the same folder (blastdb/)? Essentially, I want to do as the instructions say and just have this:
export BLASTDB=$BLASTDB:$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb/
And then depending on what database I want to use I could just say so after the -db flag on my command. But when I make the path like that above, it gives me this error:
BLAST Database error: No alias or index file found for nucleotide database [nt] in search path [/Users/LJStout::/Users/LJStout/Documents/Luke/Research/Pedulla 17-18/blast/blastdb:]
I have tried running that same blastn command from above, swapping out "nt" for "nt.00", and have tried these commands with the BLASTDB path ending in "blastdb/", "blastdb/nt", and of course "blastdb/nt.00", which is the only one that runs without errors.
Here's an example of another thread I read where the OP is worried about his executions not checking the entire nt.00 folder; this was different from my problem, however.
Thanks for your help!
This whole problem came down to having the nt.00 and nr.00 folders (the original folders that result from unzipping their respective .tar.gz files) sitting in the same parent folder, when it should be their contents that sit in the same parent folder. I simply deleted the folders they came in and copied the contents over to my new, single parent. I was kind of misled by the instructions; it was a simple mistake. Now I have one folder, blastdb/, that contains all of the contents of every database I plan on using, including nt, nr, and refseq.
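So the working setup ends up looking roughly like this (a sketch; test_protein.fa is a hypothetical protein query used only to illustrate switching the -db flag):
export BLASTDB=$HOME/Documents/Luke/Research/Pedulla\ 17-18/blast/blastdb
# blastdb/ now holds the nt.* and nr.* database files directly, with no nt.00/ or nr.00/ subfolders
blastn -query test_query.fa -db nt -task blastn -outfmt "7 qseqid sseqid evalue bitscore" -max_target_seqs 5
blastp -query test_protein.fa -db nr -outfmt "7 qseqid sseqid evalue bitscore"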

Ctools do not show up in pentaho UI

I am using Pentaho CE 5 on Windows. I would like to use CTools, but I can't make them show up in the File -> New menu to use them.
Being behind a proxy, I cannot use the Marketplace plugin, so I have tried a manual installation.
First, I tried to use the ctools-installer.sh script. I ran the following command line in Cygwin (wget and unzip are installed):
./ctools-installer.sh -s /cygdrive/d/Users/[user]/Mes\ Programmes/pentaho/biserver-ce/pentaho-solutions/ -w /cygdrive/d/Users/[user]/Mes\ programmes/pentaho/biserver-ce/tomcat/webapps/pentaho/
The script starts, asks me what module I want to install, and begins the downloads.
For each module, I get output like this (set -x added to the script):
+ echo -n 'Downloading CDF...'
Downloading CDF...
+ wget -q --no-check-certificate 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip' -O .tmp/cdf/dist.zip
SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
+ '[' '!' -z '' ']'
+ rm -f .tmp/dist/marketplace.xml
+ unzip -o .tmp/cdf/dist.zip -d .tmp
End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive.
unzip: cannot find zipfile directory in .tmp/cdf/dist.zip,
and cannot find .tmp/cdf/dist.zip.zip, period.
+ chmod -R u+rwx .tmp
+ echo Done
Done
Then the script ends. I have seen on this page (pentaho-bi-suite) that this is the normal output. Nevertheless, it seems a bit strange to me, and when I start my Pentaho server (login: admin/password), I cannot see any new tools in the menus.
After a look at a few other tutorials and the script itself, I downloaded the .zip snapshots for every tool and unzipped them into the system directory of my Pentaho server. Same result.
I would like to make the .sh script work; what can I try or adjust?
Thanks
EDIT 05/06/2014
I checked the dist.zip files downloaded by the script and they are all empty. It seems that wget cannot fetch the zip files, and therefore the installation fails.
When I try to get any webpage through wget, it fails. I think it is because of the proxy.
Here is my .wgetrc file, located in my user's cygwin home folder:
use_proxy=on
http_proxy=http://[url]:[port]
https_proxy=http://[url]:[port]
proxy_user=[user]
proxy_password=[password]
How could I make this work?
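One quick way to check whether wget itself gets through the proxy is to request the artifact URL from the script output above with server responses shown (a sketch):
wget -S --no-check-certificate -O /dev/null 'http://ci.analytical-labs.com/job/Webdetails-CDF-5-Release/lastSuccessfulBuild/artifact/bi-platform-v2-plugin/dist/zip/dist.zip'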
EDIT 10/06/2014
In the end, I changed my network connection settings to bypass the proxy. It seems that there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline.
I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
It is difficult to identify the issue in the above procedure, but you can refer to this blog; the author is a key member of Pentaho itself.
In the end, I changed my network connection settings to bypass the proxy. It seems that there is an offline mode for the installer, so one can download all the needed files in a proxy-free environment and then run the script offline. I guess this is related to the -r option.
I consider this post solved, since it is not a CTools issue anymore.
You can manually install the components from http://www.webdetails.pt/ctools/ or, if you have Pentaho 5.1 or above, you can add the following parameters to the CATALINA_OPTS option (in start-pentaho.bat or start-pentaho.sh):
-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*"
http://docs.treasuredata.com/articles/pentaho-dataintegration#tips-how-can-i-use-pentaho-through-a-proxy
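For instance, in start-pentaho.bat this could look something like the line below (a sketch; proxy.example.com and 8080 are placeholders, and the quoting mirrors the form shown above):
set CATALINA_OPTS=%CATALINA_OPTS% -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*"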

postgreSQL COPY command error

Hello everyone once again,
I did various searches but couldn't find a suitable/applicable answer to the simple problem below.
On pgAdmin III (Windows 7 64-bit) I am running the following command in the SQL editor:
COPY public.Raw20120113 FROM 'D:\my\path\to\Raw CSV Data\13_01_2012.csv';
I tried many different variations for the path name and verified the path, but I keep getting:
ERROR: could not open file "D:\my\path\to\Raw CSV Data\13_01_2012.csv" for reading: No such file or directory
Any suggestions why this happens?
Thank you all in advance
Petros
UPDATE!!
After some tests I came to the following conclusion: the reason I am getting this error is that the path includes some Greek characters. So, while Windows uses codepage 1253, the console is using 727, and this whole thing is causing the confusion. So, some questions arise; you may answer them if you like, or point me to other questions:
1) How can I permanently change the codepage of the console?
2) How can I define the codepage in the SQL editor?
Thank you again, and sorry if the place to post the question was inappropriate!
Try DIR "D:\my\path\to\Raw CSV Data\13_01_2012.csv" from the command line and see if it works, just to ensure that you got the directory, file name, extension, etc. correct.
The problem is that the COPY command runs on the server, so it resolves the path to the file on the server's file system.
To import a local file you need to use psql's \copy command instead. This takes the local (client-side) path into account and loads the file correctly.
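A minimal sketch of what that could look like when run from psql on the client machine, mirroring the table and path from the question (mydatabase is a placeholder, and WITH (FORMAT csv) assumes a plain comma-separated file):
psql -d mydatabase -c "\copy public.Raw20120113 FROM 'D:\my\path\to\Raw CSV Data\13_01_2012.csv' WITH (FORMAT csv)"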