Boto GCS authentication setup failure: no such file - google-cloud-storage

I am trying to set up Boto to work with GCS using OAuth2 authentication. gsutil config -e begins the authentication process, but when it asks "What is the full path to your private key file?" I get OSError: No such file or directory.
Why would this happen? It doesn't work with the .json version of the private key file either. I wish Boto for GCS didn't need a path to the private key file.

I made it work by skipping gsutil config -e. I went to my Windows computer, where Boto was already authenticated, and copied the .boto file to my home directory on Ubuntu.
In the .boto file, under [Credentials], the uncommented lines with authentication keys had to be updated for this machine. Everything works now. The relevant part of the .boto file:
[Credentials]
# Google OAuth2 service account credentials (for "gs://" URIs):
gs_service_client_id = ...80o98m552@developer.gserviceaccount.com
gs_service_key_file = /home/edmund_spenser/Downloads/myproj-14002ffcc31.p12
gs_service_key_file_password = notasecret
If you are having trouble getting Boto set up with service account credentials, you can paste the above into your .boto file and change the values to your own credentials. There were four other lines in the file that were uncommented:
https_validate_certificates = True
default_api_version = 2
content_language = en
default_project_id = myproject
I include them here just in case. Hopefully your terminal works and you can just use gsutil config -e to set up Boto.
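As a quick sanity check after editing the copied .boto, something like the following should work (the key path and bucket name are placeholders, not values from the setup above):
# Confirm the key file referenced by gs_service_key_file exists and is readable
ls -l /path/to/myproj-key.p12
# Confirm gsutil picks up the credentials; "your-bucket" is a placeholder
gsutil ls gs://your-bucket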

Related

Pushing a signed image to ACR from Azure Release pipeline

I'm following this documentation to push signed images to ACR from Azure pipelines.
However, it only describes the changes needed for YAML tasks. I'm using a classic release pipeline, and I'm facing some issues.
I'm trying to push the image using an Azure CLI script. Before the script task, I use the Secure files feature in the pipeline to download the private key file, and then run the CLI script below:
echo '---------Create Private Delegate Key for signing--------'
mkdir -p ./docker/trust/private
echo 'Created Trust Directory'
echo 'Copying $(privateKey.secureFilePath) to ./docker/trust/private'
cp $(privateKey.secureFilePath) ./docker/trust/private
I get the following error when running
echo $(SigningPassphrase) | docker push --disable-content-trust=false $(registry)/$REPOSITORY_NAME:$BUILD_TAG
Error:
no valid signing keys for delegation roles
I added the following lines to the script to load the private key:
chmod 600 ./docker/trust/private/$(KeyFileName)
echo '-----Loading Key-----'
docker trust key load ./docker/trust/private/$(KeyFileName)
But signing of the image is still failing after loading the key. I also tried changing the key file name to the repository key.
Am I placing the file in an incorrect location? It's being placed in /home/vsts/.docker/trust/private.
What should be the location to place the private key file in, so that docker can recognize it to sign the images?
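For reference, a minimal sketch of the sequence content trust expects, using the same pipeline variables as above (the --name value is a placeholder and this is not a verified fix):
# $(KeyFileName), $(SigningPassphrase), $(registry) are pipeline macro variables, expanded before the script runs
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="$(SigningPassphrase)"
# Load the delegation key before pushing; "signer" is a placeholder signer name
docker trust key load ./docker/trust/private/$(KeyFileName) --name signer
# Push with content trust enabled so docker signs the tag during the push
docker push $(registry)/$REPOSITORY_NAME:$BUILD_TAG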

scp from local to remote - "no such file or directory"

I am attempting to copy a local file to a remote server using scp on my macbook.
I keep getting the error "no such file or directory" even though I know the file exists (I have checked and rechecked the path). The file has rwx permissions for u, g, and o. The file is not a symlink.
The syntax I am using is:
scp a2.pdf username@remoteserver:~pathto/directory/
The file a2.pdf is in the root directory of my local machine. I have also tried the path exactly as it shows when I use pwd in the directory that contains it, like this:
scp Users/LocalUsername/a2.pdf username@remoteserver:~pathto/directory/
I am initiating this command while logged into the remote server. The error is given for the local path.
If I attempt to specify localhost information as such:
scp username@localhost:a2.pdf remoteusername@remoteserver:~pathto/directory/
The prompt asks for my localhost password. I try my Mac password and I am given permission denied.
I am not sure how to move on from this and any advice would be very much appreciated.
I ran the command from my local machine instead and that fixed the problem. From the local machine, I ran: scp file.txt remoteusername@remoteserver.etc:
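For what it's worth, a minimal sketch of the working form, run from the local machine (the local path and host name are placeholders):
# Run from the local MacBook, not from a shell on the remote server.
# Use an absolute local path; pwd output starts with a leading slash.
scp /Users/LocalUsername/a2.pdf username@remoteserver:/path/to/directory/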

GCS - multiple credentials in a single boto file

New to GCS (just got started with it today). Looks very promising.
Is there any way to use multiple S3 (or GCS) accounts in a single boto file? I only see the option to assign keys to one S3 and one GCS account in a single file. I'd like to use multiple credentials.
We'd like to copy from S3 to S3, or GCS to GCS, with each of those buckets using different keys.
You should be able to set up multiple profiles within your .boto file.
You could add something like:
[profile prod]
gs_access_key_id=....
gs_secret_access_key=....
[profile dev]
gs_access_key_id=....
gs_secret_access_key=....
And then from your code you can add a profile_name= parameter to the connection call:
con = boto.connect_gs(profile_name="dev")
You can definitely use multiple boto files; just make sure the credentials in each of them are valid. Every time you need to switch between them, run the following command with the right path.
$ BOTO_CONFIG=/path/to_boto gsutil cp SOME_FILE gs://bucket
Example:
BOTO_CONFIG=/etc/boto.cfg gsutil -m cp text.txt gs://bucket
Additionally, you can have aliases for your different profiles. Just create an alias for each command and you are set!
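For example, a sketch of shell aliases for that approach (the config-file paths and bucket names are placeholders):
# Each alias points gsutil at its own boto config file
alias gsutil-prod='BOTO_CONFIG=/path/to/prod.boto gsutil'
alias gsutil-dev='BOTO_CONFIG=/path/to/dev.boto gsutil'
# Usage
gsutil-dev cp text.txt gs://dev-bucket
gsutil-prod cp text.txt gs://prod-bucket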

gsutil AccessDeniedException: 401 Login Required

So I run the following:
gsutil -m cp -R file.png gs://bucket/file.png
And I get the following error message:
Copying file://file.png [Content-Type=application/pdf]...
Uploading file.png: 42.59 KiB/42.59 KiB
AccessDeniedException: 401 Login Required
CommandException: 1 files/objects could not be transferred.
I'm not sure what the problem is since I ran config and I can see all my buckets. Does anyone know what I need to do?
Note: I do not have gcloud, I just installed gsutil and ran the config.
Logging in to Google Cloud is needed to access any Cloud service. Use the command below; it will guide you through the login steps, such as entering the verification code you generate by opening the browser link shown in the console.
gcloud auth login
I was getting a similar response, and was able to solve this problem by looking at the read permissions on the .boto file. In my case, I was using a service account and the .boto file that was created by
gsutil config -e
only had read permission set for the user. Since it was being read by a service running as a different user, it wasn't able to read the file, which yielded a 401 Login Required error. I fixed it by adding read permission for the service's group.
In the least sophisticated case, you could fix it by giving any user read permission with
chmod a+r .boto
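A sketch of the narrower fix described above, adding only group read access (the group name is a placeholder for whatever group the service runs under):
# "svc-group" is a placeholder; use the group the service actually runs as
sudo chgrp svc-group ~/.boto
chmod g+r ~/.boto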
A more detailed explanation for troubleshooting
To get more information, run the same command with a -D flag, like:
gsutil -m -D cp ....
In the debug output, look at:
Command being run: /path/to/gsutil
config_file_list: /path/to/boto/config
Create your login credentials with the executable at /path/to/gsutil (not gcloud auth or any other gsutil executable on the machine):
/path/to/gsutil config
For a service account, use:
/path/to/gsutil config -e
These should create a .boto config file in your home directory, $HOME/.boto. If you are running the gsutil command this file should be referenced in the config_file_list variable in the debug output. If not, see below to change it.
Running gsutil under a service account or as another user
If you are running as another user, and need to reference a newly-created config file, set the environment variable BOTO_CONFIG (don't forget to export it):
BOTO_CONFIG=/path/to/$HOME/.boto
export BOTO_CONFIG
By setting this variable, when you execute gsutil, it will reference the config file you have placed in BOTO_CONFIG. You can confirm that you are referencing the correct config file by looking at the config_file_list variable in the gsutil -D command's output.
Make sure the referenced .boto file is readable by the user who is executing the gsutil command.
Running the /path/to/gsutil with the BOTO_CONFIG variable set allowed me to execute gsutil as another user, referencing an arbitrary BOTO_CONFIG file that was set up with a service-account's credentials.
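A sketch of that sequence under the same assumptions (the config path, service user name, and bucket are placeholders):
# Placeholder path to the service account's boto config
export BOTO_CONFIG=/opt/service/.boto
# The config must be readable by the user running gsutil (see note above)
chmod g+r /opt/service/.boto
# "svc-user" is a placeholder; env passes BOTO_CONFIG through to that user's command
sudo -u svc-user env BOTO_CONFIG=/opt/service/.boto /path/to/gsutil ls gs://bucket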
To set up the service account:
https://console.cloud.google.com/permissions/serviceaccounts
The key file from the service account set-up process needs to be downloaded, and the path to it is requested during the gsutil config -e step.
This may be an issue with how gsutil/boto handles OS path separators on Windows, as referenced here. This should eventually get merged into the SDK tools, but until then the following should work:
Go to
google-cloud-sdk\platform\gsutil\third_party\boto\boto\pyami\config.py
and replace the line:
for path in os.environ['BOTO_PATH'].split(':'):
with:
for path in os.environ['BOTO_PATH'].split(os.path.pathsep):
Next, go to
google-cloud-sdk\bin\bootstrapping\gsutil.py
and replace the lines that use ':':
if boto_config:
  boto_path = ':'.join([boto_config, gsutil_path])
elif boto_path:
  # this is ':' for windows as well, hardcoded into the boto source.
  boto_path = ':'.join([boto_path, gsutil_path])
else:
  path_parts = ['/etc/boto.cfg',
                os.path.expanduser(os.path.join('~', '.boto')),
                gsutil_path]
  boto_path = ':'.join(path_parts)
with
if boto_config:
  boto_path = os.path.pathsep.join([boto_config, gsutil_path])
elif boto_path:
  # this is ':' for windows as well, hardcoded into the boto source.
  boto_path = os.path.pathsep.join([boto_path, gsutil_path])
else:
  path_parts = ['/etc/boto.cfg',
                os.path.expanduser(os.path.join('~', '.boto')),
                gsutil_path]
  boto_path = os.path.pathsep.join(path_parts)
Restart cmd and now the error should go away.

ERROR chef powershell ec2

I am trying to connect to EC2 (AWS) from PowerShell on Windows 7.
I added the following lines to the knife.rb file:
knife[:aws_access_key_id] = ENV['XXXXXX']
knife[:aws_secret_access_key] = ENV['xxxxxxxxxxxxxxxxxxxxxxxxxx']
I run for example
knife ec2 server list --region eu-west-1
but get the following:
knife : ERROR: You did not provide a valid 'AWS Access Key Id' value. At line:1 char:1
ERROR: You did ... Key Id' value.:String) [], RemoteException
ERROR: You did not provide a valid 'AWS Secret Access Key' value.
Do I need to upload the knife.rb file to the server after I save it? (How?)
Where should I save my .pem file, and how should I use it in the commands? I tried, for example:
knife ec2 server create -I ami-6e7bd919 -N MyEc2Instance -x ec2-user -r "role[webserver]" -i C:\Users\MyName\Documents\openvoip.pem --region eu-west-1
Thanks!
The knife.rb file should contain the following, not what is described in the question:
knife[:aws_access_key_id] = "XXXXXXXXXXXXXX"
knife[:aws_secret_access_key] = "XXXXXXXXXXXXXXXXXXXX"
I had no such ENV variables, so I set the credentials directly.
To use environment variables, you first have to put them in your ~/.bashrc file:
vi ~/.bashrc
export AWS_ACCESS_KEY_ID=/home/yourname/.ec2/prodaccess
export AWS_SECRET_ACCESS_KEY=/home/yourname/.ec2/prodsecret
Save, then source the .bashrc file:
. ~/.bashrc
Run env to see whether your new variables have propagated to the environment. Then you can run your knife ec2 command.
If you plan on sharing your knife.rb, you might want to keep the environment variables and setup as described under 'Many Users, Same Repo' in the docs, https://docs.getchef.com/config_rb_knife.html.
To use the ENV vars, use something like the below, as opposed to the literal key values that you appear to be using:
knife[:aws_access_key_id] = ENV['AWS_ACCESS_KEY_ID']
knife[:aws_secret_access_key] = ENV['AWS_SECRET_ACCESS_KEY']
Then, when running knife, ensure your local environment variables are set.
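A sketch of the shell side of that setup (the key values are placeholders; the knife.rb lines above read them from the environment):
# Placeholder credentials; export them in the shell that runs knife
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
knife ec2 server list --region eu-west-1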