gsutil Failure: [Errno 1] _ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed - google-cloud-storage

I am getting the error message below while trying to access my Google Cloud Storage bucket from one of my Google Compute Engine instances using the gsutil command. Here are the command and the error message.
Command:
gsutil ls gs://my-storage-bucket
Error message:
Your "Oauth 2.0 User Account" credentials are invalid. For more help, see "gsutil help creds", or re-run the gsutil config command (see "gsutil help config").
Failure: [Errno 1] _ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed.
I also tried re-authenticating the SDK using gcloud auth login and gsutil config, and I was able to generate an authorization code from the link produced by gsutil config, but the command exited with the same error message again. I have not found a solution even after searching for hours. The command was working fine a few days ago, and it does not throw any error on my local machine or on other instances. Please help me out.

Are you using an old version of gsutil? There have been a lot of upgrades in the past few versions to the libraries that gsutil depends on for handling HTTP requests. I've seen roughly five mentions of this error in the past couple of weeks, and nearly all of them have been fixed by updating gsutil.
However, if it's not possible for you to update gsutil, check for any recent updates to your system's OpenSSL package. I've seen one report that this happened after moving from OpenSSL 1.0.1 to 1.0.2, and that moving back to 1.0.1 made the error stop appearing. That said, I'd generally advise against downgrading a security library, and would suggest trying all reasonable alternatives before resorting to that.
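If it helps, here's a minimal sketch of the update check, assuming a Cloud SDK install; the 4.25 cutoff is purely illustrative, not a known-good version:

```shell
# Version-aware "less than" comparison using sort -V.
version_lt() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
}

# Update gsutil if the installed version is older than the (illustrative) target.
if command -v gsutil >/dev/null 2>&1; then
  installed="$(gsutil version | awk '{print $NF}')"
  if version_lt "$installed" "4.25"; then
    gcloud components update   # updates the gsutil bundled with the Cloud SDK
  fi
fi
```

If gsutil was installed standalone rather than through the Cloud SDK, updating it through whatever installed it (e.g. pip) is the equivalent step.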

I know it's late to answer here, but I faced the same situation and it was fixed by upgrading gcloud.
Here is a link to my blog post about it; I hope it helps.
https://easyonror.wordpress.com/2018/10/24/anexperience-with-fixing-gsutil/

Confluent Kafka 101 tutorial follow through, Error: Get "https://confluent.cloud:8090/security/1.0/authenticate": dial tcp <ip>:8090: i/o timeout

I've been following through Confluent's official tutorial as found on YouTube https://www.youtube.com/watch?v=oI7VAS9KSS4
When it comes to the section (roughly starting at 5'28'') about the Confluent CLI, confluent login --save couldn't work without a --url flag, and by default the URL is "https://confluent.cloud". So I had to run confluent login --save --url "https://confluent.cloud", was then prompted to type in my username and password, and then got stuck with this error: Error: Get "https://confluent.cloud:8090/security/1.0/authenticate": dial tcp :8090: i/o timeout. Does anybody know how to solve this?
I'm using a Ubuntu on WSL(Windows Subsystem for Linux) on a Windows 10 PC.
Ok, I figured this out by myself. Upgrading Confluent solved the problem.
Prior to upgrading I was on version v1.22.0, on which even confluent update wasn't available as a subcommand.
Then I upgraded the CLI. Comparing the available commands before and after the upgrade, you'll notice that some new subcommands were enabled, including cloud-signup.
Then I ran the confluent cloud-signup command and was prompted to enter the following info: email, first name, last name, two-letter country code, organization, and terms-and-policy agreement (y/n). It told me "Error: Failed to sign up", because I had already signed up with all of this info on the https://confluent.cloud UI. So if you've never signed up on the UI before, you can definitely do it via the CLI.
Since I had already signed up, I ran confluent login. This time, with the newer version, it no longer complained that login must include the --url flag, and I could log in without any issue.
After confluent login worked, I ran confluent login --save, and Confluent wrote the credentials to a netrc file it creates at "<my_home_directory>/.netrc" so that in the future I don't have to type in the credentials manually.
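The working sequence above can be sketched as follows (confluent CLI v2+; the exact prompts and flags may differ between versions):

```shell
# Upgrade the CLI, then log in; skips gracefully if the CLI is absent.
if command -v confluent >/dev/null 2>&1; then
  confluent update         # self-upgrade; this subcommand was missing on v1.22.0
  confluent login --save   # prompts for credentials, then persists them
fi

# --save writes the credentials to a netrc file in the home directory:
netrc_path="$HOME/.netrc"
echo "credentials file: $netrc_path"
```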

AWS AmazonS3Client request returns error "The remote certificate is invalid according to the validation procedure"

We have an application that uses the AWS SDK's AmazonS3Client to communicate with the S3 service and download files. Thousands of instances run fine; however, a few sites get the following exception message:
The remote certificate is invalid according to the validation procedure.
The versions of the AmazonSDK.S3.dll and AmazonS3.Core.dll we're using are 3.3.102.18 and 3.3.103.1 respectively. These had been running for over a year without problems until recently.
Has anyone else experienced a similar issue? What could be the root cause of the problem, and how do we resolve it?
Thanks!
This was determined to be an issue with a proxy server that had been loaded with an incorrect certificate from the service provider, so it was not really an application issue.
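One way to confirm that a proxy is substituting the certificate is to inspect what the endpoint actually presents: behind an intercepting proxy, the issuer shown will be the proxy vendor's CA rather than Amazon's. A rough sketch (the generic S3 endpoint is assumed here; adjust for your region):

```shell
# Print issuer/subject/validity of the certificate presented for S3.
host="s3.amazonaws.com"
cert="$(openssl s_client -connect "$host:443" -servername "$host" </dev/null 2>/dev/null)" || true
if [ -n "$cert" ]; then
  printf '%s\n' "$cert" | openssl x509 -noout -issuer -subject -dates
else
  echo "could not reach $host (offline or blocked)"
fi
```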

Running conda command in command line

I am trying to install the MLxtend library for Python. The command to do this should simply be:
conda install mlxtend --channel conda-forge
I tried running it in the command prompt right when it opened up (which was at C:\Users\Ben). When this didn't work, I changed directories to C:\anaconda\anaconda3\scripts and ran the same command there. It again didn't work, so I tried to update Anaconda by running conda update conda. This also didn't work, and I am not sure exactly what is causing the error. Nothing I've found on Google or elsewhere has worked, so I'm hoping y'all know how to fix this.
Error I'm getting:
WARNING: The conda.compat module is deprecated and will be removed in a future release.
Collecting package metadata: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.
SSLError(MaxRetryError('HTTPSConnectionPool(host=\'repo.anaconda.com\', port=443): Max retries exceeded with url: /pkgs/main/noarch/repodata.json.bz2 (Caused by SSLError("Can\'t connect to HTTPS URL because the SSL module is not available."))'))
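On Windows, the final "SSL module is not available" error usually means conda's OpenSSL DLLs (which live under the install's Library\bin directory) are not on PATH, which is exactly the situation in a plain command prompt; the "Anaconda Prompt" shortcut sets PATH up for you. A sketch of the idea, using the install root from the question and POSIX shell syntax purely for illustration:

```shell
# Assumed install root (taken from the question); adjust to your machine.
conda_root="C:/anaconda/anaconda3"

# The Anaconda Prompt prepends these directories; without Library/bin on
# PATH, the OpenSSL DLLs cannot be loaded and conda's HTTPS requests fail.
PATH="$conda_root/Library/bin:$conda_root/Scripts:$conda_root:$PATH"
export PATH

# With PATH fixed (or from an Anaconda Prompt), retry:
# conda install mlxtend --channel conda-forge
```

Alternatively, simply launching "Anaconda Prompt" from the Start menu and rerunning the install command there achieves the same effect.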

Kubernetes error code 403: user anonymous cannot get path

I have been trying to install Kubernetes for the first time. After the initial setup, I was finally able to execute the kube-up.bash script:
kubernetes/cluster/kube-up.bash
Everything goes well and I even see the list of cluster services installed:
Then the book I am checking says:
"Go to: https://your_master_ip/ui/"
But when I try I can only see the following:
I assumed I did not perform a proper setup during the auth process, so I ran:
gcloud auth list
But my active account is there with my Google email, so I am not sure what I am doing wrong.
I am able to access the Google Cloud Platform and see the project I created for this, and I can also see the traffic.
Also, the kubectl commands are not working on my system; bash throws an error saying the command could not be found.
Can someone please assist me?
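Two quick checks that may narrow this down, assuming the gcloud-based setup described above: install the gcloud-bundled kubectl (which would explain the "command not found" error), then see which context and credentials kubectl would use, since an anonymous 403 on /ui/ usually means the request carried no credentials at all.

```shell
# Install the gcloud-bundled kubectl client (assumption: Cloud SDK setup).
if command -v gcloud >/dev/null 2>&1; then
  gcloud components install kubectl --quiet
fi

# Check which context/credentials kubectl would send to the API server.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config current-context || echo "no context configured yet"
else
  echo "kubectl still not on PATH"
fi
```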
Regards

Visual studio release management - deploy with ps/dsc encountered error with server certificate

I'm trying to run a simple ps script on a target computer (my local machine) from our RM server through the RM client. However the release falls over when it reaches deploy using ps/dsc. The error message reads:
Connecting to remote server ### failed with the following error message : The server certificate on the destination computer (###:5985) has the following errors:
Encountered an internal error in the SSL library.
However, as you can see from the WinRM port number (5985), I'm using HTTP, not HTTPS, to communicate with my machine, so surely SSL should not come into it. Has anyone else come across this, or does anyone have any idea what I could be doing wrong?
UPDATE: the machines are part of the same domain.
In the deploy using PS/DSC action, keep the UseHTTPS variable set to false, and set skipCACheck to true just in case.
BTW, how long does it take for the action to show this error message in the logs? Also, as someone mentioned in the comments, are you able to run the script manually using PS remoting?
If none of the above helps, we would need more details. Try looking into the event logs on the target machine right after the deployment fails and check for any errors.
I came across the same issue; installing the Azure service certificate on the VM resolved it.