I want to be able to fire commands at my instance with gcloud because it handles auth for me. This works well, but how do I run them with sudo/root access?
For example, I can copy files to my account's folder:
gcloud compute scp --recurse myinst:/home/me/zzz /test --zone us-east1-b
But I can't copy to /tmp:
gcloud compute scp --recurse /test myinst:/tmp --zone us-east1-b
pscp: unable to open directory /tmp/.pki: permission denied
19.32.38.265147.log | 0 kB | 0.4 kB/s | ETA: 00:00:00 | 100%
pscp: unable to open /tmp/ks-script-uqygub: permission denied
What is the right way to run "gcloud compute scp" with sudo? Just to be clear, I of course can ssh into the instance and run sudo interactively.
Edit: for now I'm just editing the permissions on the remote host.
Just so I'm understanding correctly, are you trying to copy FROM the remote /tmp folder, or TO it? This question sounds like you're trying to copy to it, but the code says you're trying to copy from it.
This has worked for me in the past for copying from my local drive to a remote drive, though I have some concern over running sudo remotely:
gcloud compute scp myfile.txt [gce_user]@myinst:~/myfile.txt --project=[project_name]
gcloud compute ssh [gce_user]@myinst --command 'sudo cp ~/myfile.txt /tmp/' --project=[project_name]
You would reverse the process (and obviously rewrite the direction and sequence of the commands) if you needed to remotely access the contents of /tmp and then copy them down to your local drive.
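For the reverse direction, a minimal sketch could look like this (same placeholder names as above; it assumes the file in /tmp is owned by root, hence the chown so the SSH user can read it):
# copy the file out of /tmp into the home directory and hand it to the SSH user
gcloud compute ssh [gce_user]@myinst --command 'sudo cp /tmp/myfile.txt ~/ && sudo chown $(whoami) ~/myfile.txt' --project=[project_name]
# then pull it down to the local machine
gcloud compute scp [gce_user]@myinst:~/myfile.txt ./myfile.txt --project=[project_name]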
Hope this helps!
I'm just trying to move a simple text file from the local host to the remote host. I'm using Google Cloud and, more specifically, the gcloud command-line tool. Here are the instructions and errors I received:
Admins-MacBook-Pro-4:downloads kylefoley$ gcloud compute scp lst_calc.txt instance-1:/home/kylefoley76/hey.txt
No zone specified. Using zone [us-central1-a] for instance: [instance-1].
Updating project ssh metadata...⠧Updated [https://www.googleapis.com/compute/v1/projects/atomic-drake-250022].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.1494876250178113937' (ECDSA) to the list of known hosts.
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
scp: /home/kylefoley76/hey.txt: Permission denied
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I then tried putting root@ in front of the remote path and got the following error:
Admins-MacBook-Pro-4:downloads kylefoley$ gcloud compute scp lst_calc.txt root@instance-1:/home/kylefoley76/hey.txt
No zone specified. Using zone [us-central1-a] for instance: [instance-1].
Updating project ssh metadata...⠛Updated [https://www.googleapis.com/compute/v1/projects/atomic-drake-250022].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
It was then clear that the program was caught in an infinite loop of some kind.
UPDATE
Also, I want to make it clear that my problem is not a Linux problem but a gcloud problem. A lot of people who have this problem recommend putting the files in the /tmp folder. On the remote Linux machine that I'm connected to, I seem to have all of the necessary permissions: I've created folders and files on this remote machine and I've moved files around in the terminal, so I think that rules out the possibility that my problem lies with the permissions of the Linux machine itself.
Create a tmp dir under your home directory on your instance, chmod 777 it, and send files to that.
gcloud compute scp ./app.tar.gz my-vm:~/tmp
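A minimal sketch of the full workaround, assuming my-vm as above and /tmp as the final, root-owned destination (names are illustrative):
# on the instance: create a world-writable staging directory in your home
gcloud compute ssh my-vm --command 'mkdir -p ~/tmp && chmod 777 ~/tmp'
# from the local machine: copy into the staging directory
gcloud compute scp ./app.tar.gz my-vm:~/tmp
# on the instance: move the file to the privileged location with sudo
gcloud compute ssh my-vm --command 'sudo mv ~/tmp/app.tar.gz /tmp/'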
Reason for the message:
This message means that the network connection from the client to the server is working and that SSH is running. However, key-based authentication failed.
Troubleshooting steps:
Make sure that you have authenticated to gcloud as an IAM user with the compute instance admin role.
Run gcloud auth login [IAM-USER], then try gcloud compute ssh again.
Verify that persistent SSH Keys metadata for gcloud is set for either the project or instance.
gcloud compute project-info describe --format flattened | grep commonInstanceMetadata.items | grep ssh | grep -v expireOn
It's possible that you lost the private key, mismatched a keypair, etc. You can force gcloud to generate a new SSH keypair by doing the following:
If present, move ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub aside. For example:
mv ~/.ssh/google_compute_engine.pub ~/.ssh/google_compute_engine.pub.old
mv ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.old
Try gcloud compute ssh [INSTANCE-NAME] again. A new keypair will be created and the public key will be added to the SSH keys metadata.
Verify that the Linux Guest Environment scripts are installed and running. If the Linux Guest Environment is not installed, re-install it.
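As a rough check on a recent, systemd-based image (older images use individual google-* daemons instead, so this service name is an assumption):
# on the instance: confirm the guest agent is active
sudo systemctl status google-guest-agent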
I'm performing the usual operation of fetching Kubernetes cluster credentials from GCP. The gcloud command doesn't fetch the credentials and, surprisingly, updates the ownership of the local directory:
~/tmp/1> ls
~/tmp/1> gcloud container clusters get-credentials production-ng
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) Unable to write file [/home/vladimir/tmp/1]: [Errno 21] Is a directory: '/home/vladimir/tmp/1'
~/tmp/1> ls
ls: cannot open directory '.': Permission denied
Other commands, like gcloud container clusters list, work fine. I've tried reinstalling gcloud.
This happens if your KUBECONFIG has an empty entry, like :/Users/acme/.kube/config
gcloud resolves the empty value as the current directory, changes its permissions, and tries to write to it.
Reported at https://issuetracker.google.com/issues/143911217
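A quick way to spot the bad entry is to print the variable and look for a leading, trailing, or doubled colon; a sketch of the check and a possible fix (the path is just an example):
# look for an empty entry such as a leading ':'
echo "$KUBECONFIG"
# drop the empty entry so only real paths remain
export KUBECONFIG=$HOME/.kube/config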
It happened to be a problem with kubectl. Reinstalling it solved this strange issue.
If you, like me, are stuck with strange gcloud behavior, the following points could help to track down the issue:
Check the alias command and whether it's really pointing to the intended binary;
Launch a separate Docker container with the Cloud SDK and feed it your config files. If gcloud container clusters get-credentials ... runs smoothly there, then it's a problem with the binaries (not the configuration):
docker run -it \
-v $HOME/.config:/root/.config \
-v $HOME/.kube:/root/.kube google/cloud-sdk:217.0.0-alpine sh
A problem with the binary can be solved just by reinstalling/updating;
If it's a problem with configs, then you could back them up and reinstall kubectl / gsutil from scratch using not just apt-get remove ..., but apt-get purge .... Be aware: purge removes config files!
Hope this helps somebody else.
I tried to copy file from my google cloud instance to local machine with the following command:
gcloud compute scp nlp-2:to_test.txt C:\Temp
And got back the following error message:
ERROR: (gcloud.compute.scp) All sources must be local files when destination is remote. Got sources: [nlp-2:to_test.txt], destination:
C:Temp
What exactly is wrong? I am confident that the same command worked about two days ago.
Update: I am connecting to Ubuntu 16.04 (google instance) from Win 7 (local machine)
In order to resolve copying files to the instance, I had to create a path in D: (in your case it can be C:) matching the one represented by ~ in the Ubuntu instance (/home/example_name/) and put the files to copy in that Windows directory:
sudo gcloud beta compute scp --project="projectname" --zone="zonename" ~/Filename.zip instancename:~/
The reason is that the console scp does not support the : character.
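If the colon in the Windows destination is indeed what trips the command up, another workaround to sketch (not verified on every setup) is to change into the target folder first and use . as the destination, so no drive letter appears in the argument (cd /d is for the classic command prompt):
cd /d C:\Temp
gcloud compute scp nlp-2:to_test.txt .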
I have just tried to replicate the issue running the following code on a Google Cloud SDK Shell from a machine with Windows Server 2008 R2:
gcloud compute scp instance-1:/home/username/file C:\Users\username\file2
where instance-1 is a Debian 4.9.51-1 and I have been able to copy the file.
Therefore I think you misspelled something when writing the command (also because you wrote that it was working for you some days ago), or I didn't correctly understand your configuration.
If this is the case, can you provide some more information by editing the question?
EDIT
I also tested SCP between Debian machines using files with "weird" names, and I have always been able to copy the files both from a remote location and to a remote location:
gcloud compute scp instance-1:/paolo '/C:\\Temp'
and
gcloud compute scp instance-2:'/C:\\Temp' .
Note that, despite the weird notation, C:\Temp is a file stored on a Linux instance.
You may like to use the following, as it worked for me (in my case every file was in the folder jupyter):
gcloud beta compute scp --project "project_name" --zone "zone_name" instance_name:~jupyter/file_name /home/Downloads
Excuse my DevOps naiveté, but I assume all you need to deploy to a machine is a proper SSH key, a port to expose, the machine's IP address, a login, and the code to deploy.
So are there any simple solutions that deploy code to a remote server with the only inputs being an SSH key, a Dockerfile and the code itself? I'm thinking it could be set up in a deterministic (almost functional) manner, where the inputs are the server IP address and login, and the output is a running server.
I've tried setting up Dokku on digital ocean (https://www.digitalocean.com/community/tutorials/how-to-use-the-digitalocean-dokku-application) and that requires a DNS record, and git. I don't need those as dependencies.
Thanks
If I understand your question correctly, you don't need anything more than scp, ssh and a couple of shell scripts.
Let's say you want to deploy your code from serverA to serverB.
On serverB, create a directory with your Dockerfile. Also, create a shell script, let's call it build_image.sh, that runs your docker build command using sudo.
Also, on serverB, create a shell script that builds your code from source (if necessary).
Finally, on serverB, create a shell script that calls your code build script, your docker build script and at the end runs your new docker image. Let's call this script do_it_all.sh.
Make sure that you chmod 755 all shell scripts.
Now, on serverA, you have a directory with the source code. scp that directory to serverB into the directory with the Dockerfile.
Next, from serverA use ssh to call do_it_all.sh on serverB. This will build your code, build your image and deploy a container without the need for extra software, packages, git, DNS records, etc.
You can even automate this process using cron or something else to have nightly deployments, if you wish, or deployments under other conditions.
Example scripts/commands:
On serverB:
build_image.sh:
#!/bin/bash
sudo docker build -t my_image .
build_code.sh (optional, adjust to your code):
#!/bin/bash
cd /path/to/my/code
./configure
make
do_it_all.sh:
#!/bin/bash
cd /path/to/my/dockerfile
sudo docker stop my_container #stop the old container
sudo docker rm my_container #remove the old container
sudo docker rmi my_image #remove the old image
./build_code.sh #comment out if not needed
./build_image.sh
sudo docker run -d --name my_container my_image
On serverA:
scp -r /path/to/my/code serverB:/path/to/my/dockerfile
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'
That should be it. Adjust for your system.
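If you want the nightly deployments mentioned earlier, a crontab entry on serverA could look like this (a sketch; it assumes a key that cron can use non-interactively, and the schedule and paths are placeholders):
# every night at 02:00, push the code and trigger the remote build/run
0 2 * * * scp -r /path/to/my/code serverB:/path/to/my/dockerfile && ssh serverB '/path/to/my/dockerfile/do_it_all.sh'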
To deploy to a brand new system, just write a script on serverA that uses ssh to create the necessary directories on serverB (ssh serverB 'mkdir -p /path/to/dockerfile'). Next, copy your Dockerfile, your build scripts and your code from serverA to serverB using scp. Then run do_it_all.sh on serverB from serverA using ssh.
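A sketch of that bootstrap script on serverA, reusing the paths and hostnames from above (the exact layout of your Dockerfile and scripts is an assumption):
#!/bin/bash
# create the target directory on the new server
ssh serverB 'mkdir -p /path/to/my/dockerfile'
# copy the Dockerfile, the build/run scripts and the code
scp /path/to/my/dockerfile/Dockerfile /path/to/my/dockerfile/*.sh serverB:/path/to/my/dockerfile/
scp -r /path/to/my/code serverB:/path/to/my/dockerfile/
# build the image and start the container remotely
ssh serverB '/path/to/my/dockerfile/do_it_all.sh'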
We've been running postgresql 8.4 for quite some time. As with any database, we are slowly reaching our threshold for space. I added another 8 GB EBS drive and mounted it to our instance and configured it to work properly on a directory called /files
Within /files, I manually created the postgresql/main directory structure to match the new data_directory below.
Correct me if I'm wrong, but I believe all postgresql data is stored in /var/lib/postgresql/8.4/main
I backed up the database and ran sudo /etc/init.d/postgresql stop, which stops the postgresql server. I tried to copy and paste the contents of /var/lib/postgresql/8.4/main into the /files directory, but that turned out to be a HUGE MESS due to file permissions. I had to go in and chmod the contents of that folder just so that I could copy and paste them. Some files did not copy fully because of root permissions. I modified the data_directory parameter in postgresql.conf to point to the new directory:
data_directory = '/files/postgresql/main'
Then I ran sudo /etc/init.d/postgresql restart and the server failed to start, again probably due to permission issues. Amazon EC2 only lets you log in as ubuntu by default; you can only get root from within the terminal, which makes everything a lot more complicated.
Is there a much cleaner and more efficient step by step way of doing this?
Stop the server.
Copy the datadir while retaining permissions - use cp -aRv.
Then (easiest, as it avoids the need to modify initscripts) just move the old datadir aside and symlink the old path to the new location.
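A minimal sketch of those steps, assuming the new volume is mounted at /files and the stock paths from the question:
# stop the server, then copy the datadir preserving ownership and permissions
sudo /etc/init.d/postgresql stop
sudo mkdir -p /files/postgresql
sudo cp -aRv /var/lib/postgresql/8.4/main /files/postgresql/
# move the old datadir aside and symlink the old path to the new location
sudo mv /var/lib/postgresql/8.4/main /var/lib/postgresql/8.4/main.old
sudo ln -s /files/postgresql/main /var/lib/postgresql/8.4/main
sudo /etc/init.d/postgresql start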
Thanks for the accepted answer. Instead of the symlink you can also use a bind mount; that way it is independent of the file system. If you want to use a dedicated hard drive for the database, you can also mount it normally at the data directory.
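For the bind-mount variant, a sketch could be (assuming the entire /var/lib/postgresql tree has already been copied to /files/postgresql on the new disk):
# bind-mount the copied data over the original path
sudo mount --bind /files/postgresql /var/lib/postgresql
# make the bind mount persistent across reboots
echo "/files/postgresql /var/lib/postgresql none bind 0 0" | sudo tee -a /etc/fstab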
I did the latter (a normal mount of a dedicated drive). Here are my steps if someone needs a reference; I ran this as a script on many AWS instances.
# stop postgres server
sudo service postgresql stop
# create new filesystem in empty hard drive
sudo mkfs.ext4 /dev/xvdb
# mount it
mkdir /tmp/pg
sudo mount /dev/xvdb /tmp/pg/
# copy the entire postgres home dir content
sudo cp -a /var/lib/postgresql/. /tmp/pg
# mount it to the correct directory
sudo umount /tmp/pg
sudo mount /dev/xvdb /var/lib/postgresql/
# see if it is mounted
mount | grep postgres
# add the mount point to fstab
echo "/dev/xvdb /var/lib/postgresql ext4 rw 0 0" | sudo tee -a /etc/fstab
# when database is in use, observe that the correct disk is being used
watch -d grep xvd /proc/diskstats
A clarification: it is the particular AMI that you used that sets ubuntu as the default user; this may not apply to other AMIs.
In essence, if you are trying to move data manually, you will probably need to do so as the root user, and then make sure it's available to whatever user postgres is running as.
You also have the option of snapshotting the volume and increasing the size of a volume created from the snapshot. Then you could replace the volume on your instance with the new volume (you will probably have to resize the partition or filesystem to take advantage of all the space).
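A rough sketch of that route with the AWS CLI (all IDs, the availability zone, the new size and the device name are placeholders, and an ext4 filesystem on /dev/xvdb is assumed):
# snapshot the existing data volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pg data before resize"
# create a larger volume from the snapshot in the instance's availability zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 16 --availability-zone us-east-1a
# swap the volumes (stop postgres and unmount first), then attach the new one
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvdb
# on the instance: grow the filesystem to use the extra space
sudo resize2fs /dev/xvdb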