gcloud compute scp error: All sources must be local files - copy

I tried to copy a file from my Google Cloud instance to my local machine with the following command:
gcloud compute scp nlp-2:to_test.txt C:\Temp
And got back the following error message:
ERROR: (gcloud.compute.scp) All sources must be local files when destination is remote. Got sources: [nlp-2:to_test.txt], destination: C:Temp
What exactly is wrong? I am confident that the same command worked about 2 days ago.
Update: I am connecting to Ubuntu 16.04 (the Google instance) from Windows 7 (the local machine).

In order to get copying files to the instance working, I had to create a path on D: (in your case it can be C:) matching the one represented by ~ on the Ubuntu instance (/home/example_name/), put the files to copy into that Windows directory, and run:
sudo gcloud beta compute scp --project="projectname" --zone="zonename" ~/Filename.zip instancename:~/
The reason is that gcloud compute scp does not cope with the : in a Windows drive path such as C:\Temp: anything before a colon is treated as a remote host, which is why your local destination was mistaken for a remote one.
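A minimal workaround sketch, reusing the instance name and file from the question: change into the destination folder first so that the local path passed to gcloud contains no colon.
cd C:\Temp
gcloud compute scp nlp-2:to_test.txt .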

I have just tried to replicate the issue by running the following command in a Google Cloud SDK Shell on a machine with Windows Server 2008 R2:
gcloud compute scp instance-1:/home/username/file C:\Users\username\file2
where instance-1 runs Debian 4.9.51-1, and I have been able to copy the file.
Therefore I think you misspelled something when writing the command (also because you wrote that it was working for you some days ago), or I didn't understand your configuration correctly.
If that is the case, can you provide some more information by editing the question?
EDIT
I have also tested SCP between Debian machines using files with "weird" names, and I have always been able to copy them, both from a remote location and to a remote location:
gcloud compute scp instance-1:/paolo '/C:\\Temp'
and
gcloud compute scp instance-2:'/C:\\Temp' .
Note that, despite the weird notation, C:\Temp here is a file stored on a Linux instance.

You may like to use what worked for me; in my case every file was in the jupyter folder:
gcloud beta compute scp --project "project_name" --zone "zone_name" instance_name:~jupyter/file_name /home/Downloads

Related

vagrant up fails with: cannot translate name @ rb_sysopen when trying to run homestead

When I run vagrant up I get the following error:
Vagrant/embedded/gems/2.2.14/gems/vagrant-2.2.14/plugins/hosts/suse/host.rb:20:in `initialize': Cannot translate name. @ rb_sysopen - /etc/os-release (Errno::ELOOP)
I have installed Vagrant for Windows and I'm trying to launch Laravel's Homestead, which I cloned inside WSL2, by cd'ing into the Z: drive that WSL2 provides via PowerShell (so that I have access to the Vagrant that's installed on Windows).
cd Z:\home\coder\projects\homestead
It seems that Vagrant is trying to recognize the OS from the filesystem, if I'm understanding correctly. So if you're trying to run Vagrant on Windows across a network share that is Unix/WSL/Linux, it seems that it will try to run as if it were on Unix and fail.
Solution
I was able to copy the homestead directory from the network share into my Windows environment, then navigate to that directory and run vagrant up successfully using PowerShell.
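A rough sketch of that copy in PowerShell, assuming Z: is the mapped WSL2 share from the question and an arbitrary destination under the Windows user profile:
Copy-Item -Recurse Z:\home\coder\projects\homestead C:\Users\coder\homestead
cd C:\Users\coder\homestead
vagrant up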
Another Option
It sounds like you should also be able to install Vagrant within WSL2 and use it from within WSL2 instead of PowerShell.
Another possibility to note is that you can invoke exes from within WSL2, but it sounds like that will not work properly if you try to run the Windows Vagrant from within WSL2.
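If you go the WSL2 route, the Vagrant WSL documentation linked under Research describes an environment variable that lets the Linux Vagrant manage Windows-side providers; a rough sketch, assuming Vagrant is installed inside WSL2 and the project lives in the WSL2 home directory:
export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"
cd ~/projects/homestead
vagrant up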
Research
https://github.com/roots/trellis/issues/1083
https://www.vagrantup.com/docs/other/wsl.html
https://discourse.roots.io/t/command-vagrant-up-in-wsl-is-failed/16528

Can't retrieve MongoDB to local drive using SCP from AWS EC2

I have a Docker container running Strapi (which uses MongoDB) on a now defunct AWS EC2 instance. I need the content off that server - it can't run because it's too full. So I've tried to retrieve all the files using SCP, which worked a treat apart from downloading the database content (the actual stuff I need - Strapi and Docker boot up fine, but because there is no database content, it treats it as a new instance).
Every time I try to download the contents of the db from AWS I get 'permission denied'.
I'm using SCP something like this:
scp -i /directory/to/***.pem -r user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:strapi-docker/* /your/local/directory/files/to/download
Does anyone know how I can get this entire Docker container running locally with the database content?
You can temporarily change permissions (recursively) on the directory in question to be world-readable using chmod.
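A rough sketch of that, run on the EC2 instance itself before retrying the scp (the directory name follows the question; adjust it to wherever the database files actually live, and tighten the permissions again afterwards):
# make files readable and directories traversable for all users
sudo chmod -R a+r strapi-docker
sudo find strapi-docker -type d -exec chmod a+rx {} +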

how do I elevate my gcloud scp and ssh commands?

I want to be able to fire commands at my instance with gcloud because it handles auth for me. This works well, but how do I run them with sudo/root access?
For example, I can copy files to my account's folder:
gcloud compute scp --recurse myinst:/home/me/zzz /test --zone us-east1-b
But I can't copy to /tmp:
gcloud compute scp --recurse /test myinst:/tmp --zone us-east1-b
pscp: unable to open directory /tmp/.pki: permission denied
19.32.38.265147.log | 0 kB | 0.4 kB/s | ETA: 00:00:00 | 100%
pscp: unable to open /tmp/ks-script-uqygub: permission denied
What is the right way to run "gcloud compute scp" with sudo? Just to be clear, I of course can ssh into the instance and run sudo interactively
Edit: for now I'm just editing the permissions on the remote host
Just so I'm understanding correctly, are you trying to copy FROM the remote /tmp folder, or TO it? This question sounds like you're trying to copy to it, but the code says you're trying to copy from it.
This has worked for me in the past for copying from my local drive to a remote drive, though I have some concern over running sudo remotely:
gcloud compute scp myfile.txt [gce_user]@myinst:~/myfile.txt --project=[project_name];
gcloud compute ssh [gce_user]@myinst --command 'sudo cp ~/myfile.txt /tmp/' --project=[project_name];
You would reverse the process (and obviously rewrite the direction and sequence of the commands) if you needed to remotely access the contents of /tmp and then copy them down to your local drive.
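A sketch of that reverse direction, keeping the same placeholder names (wanted-dir is a hypothetical directory under /tmp): copy the files out of /tmp into the remote home directory with sudo, fix their ownership, then pull them down:
gcloud compute ssh [gce_user]@myinst --command 'sudo cp -r /tmp/wanted-dir ~/wanted-dir && sudo chown -R $(whoami) ~/wanted-dir' --project=[project_name];
gcloud compute scp --recurse [gce_user]@myinst:~/wanted-dir ./wanted-dir --project=[project_name];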
Hope this helps!

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it locally creates so that we can start editing and pushing changes?
I tried gcloud init my-project, however all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, set up a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
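If you do need to open a different port, a hedged sketch with gcloud (the rule name and port are placeholders, and project-id matches the placeholder used above):
gcloud compute firewall-rules create allow-app-3000 --allow tcp:3000 --project=project-id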
Once you SSH in, you will see the target directory, named the same as whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance into a folder called flow-bck at the defined local path. I renamed this folder to match its name on the remote, flow, and then did:
cd flow && npm install
This installed all the needed modules and stuff for MEAN.io. Now the important part is that you have to kill your remote ssh connection so that you can start running the local version of the app, because the ssh tunnel will already be using that same port (3000), unless you changed it when you tunneled in.
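Alternatively, a sketch of tunneling on a different local port (3001 here) so the tunnel and the local copy of the app can run at the same time:
gcloud compute ssh --ssh-flag=-L3001:localhost:3000 --project=project-id --zone us-central1-f instance-name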
Then in my local app directory flow I ran gulp to start the local version of the app on port 3000. So it loads up and runs just fine. I needed to create a new user as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
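For example, a minimal sketch (the dbpath is an assumption; use wherever your local MongoDB keeps its data):
mongod --dbpath ~/data/db
# then, in another terminal:
cd flow && gulp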
Now the two things I haven't done yet, is editing and deploying a new version based on this... and the nagging question of whether this is all even necessary. That'd be great to find that this is all done with a few simple commands.

Can't find gcloud utility using MAMP

After attempting to initialize cluster/kube-up via PHP using the following code from my local virtual host:
$old_path = getcwd();
chdir('/Users/username/kubernetes');
$output = shell_exec('cluster/kube-up.sh');
chdir($old_path);
print_r("<pre>$output</pre>") ;
I received the following error:
Can't find gcloud in PATH. Do you wish to install the Google Cloud SDK? [Y/n]
I have gcloud available in my .bash_profile. I am also running MAMP and included the PATH variable in /Applications/MAMP/Library/bin/envvars_* and envvars-std.
I am still getting this prompt. Any ideas?
I managed to bypass this by doing the following:
I created a script file in my local kubernetes directory.
In the script, I inserted the following code:
export PATH="/Users/username/google-cloud-sdk/bin:$PATH"
cluster/kube-up.sh
This then ran kube-up.sh from PHP, creating a cluster with the values I had set in kubernetes/cluster/gce/config-default.sh.
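To tie it together, a sketch of the wrapper approach (the file name kube-up-wrapper.sh is arbitrary; the SDK path follows the answer above):
# kube-up-wrapper.sh, saved in the local kubernetes directory and made executable with chmod +x
export PATH="/Users/username/google-cloud-sdk/bin:$PATH"
cluster/kube-up.sh
The PHP then calls shell_exec('./kube-up-wrapper.sh') instead of invoking cluster/kube-up.sh directly, so the spawned shell inherits a PATH that contains gcloud.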