Not able to connect to cluster: facing "certificate signed by unknown authority" - Kubernetes

I am not sure whether what I am trying to do is possible or the correct way.
One of my colleagues spun up a Kubernetes GCE cluster (with 1 master and 4 minions) in a project that is shared with me with owner access.
After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the Kubernetes workspace.
I also set the project name as the default project in gcloud config, and now I can connect to the master and minions using 'gcutil ssh --zone us-central1-b kubernetes-master'
But when I try to list the existing pods using 'cluster/kubecfg.sh list pods', I see:
"F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe")
I tried to debug from my side but failed to reach any conclusion. Any sort of clue would be helpful.

You can also copy the cert files off of the master again. They are located in /usr/share/nginx on the master.

It is probably due to a not-yet-implemented feature; see this issue:
https://github.com/GoogleCloudPlatform/kubernetes/issues/1886
You can copy the files from /usr/share/nginx/... on the master
into your home directory and try again.
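A minimal sketch of that, assuming gcutil ssh passes a trailing command through to ssh; the remote file name ca.crt is an assumption (list the directory first to see what is actually there), and the local name matches the question:
gcutil ssh --zone us-central1-b kubernetes-master ls /usr/share/nginx
# then pull each file down, for example:
gcutil ssh --zone us-central1-b kubernetes-master sudo cat /usr/share/nginx/ca.crt > ~/.kubecfg.ca.crt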

I figured out a workaround: set the -insecure_skip_tls_verify option.
In kubecfg.sh, change the code near the bottom to:
else
  auth_config=(
    "-insecure_skip_tls_verify"
  )
fi
Obviously this is insecure and you are putting yourself at risk of a man-in-the-middle attack, etc.


self signed certificate in certificate chain on github copilot

I installed GitHub Copilot, but the extension does not work; it always shows the following error:
"GitHub Copilot could not connect to server. Extension activation failed: self-signed certificate in certificate chain"
What can I do to solve this?
The Copilot error "GitHub Copilot could not connect to server. Extension activation failed: self-signed certificate in certificate chain" is generally caused by using Copilot behind a corporate network.
Most corporate networks have a 'man-in-the-middle' appliance that dynamically breaks open all secure SSL traffic leaving the network for the internet. This ensures they can inspect any traffic leaving, including your online banking. Usually automation scrubs the traffic looking for theft of company secrets or IP and raises alerts; it all gets logged and reviewed further if need be.
This action leaves behind a fake cert chain as a fingerprint. The cert for the called site is replaced with a fake one signed by the company's own private CA authority. Hence the "self-signed cert in the cert chain" error.
On any company device (phones, laptops) the company CA is already installed as a trusted CA, so local browsers and other desktop apps trust this faked cert chain and raise no concern that someone is snooping your secure network traffic (the company does own the network and the device).
By default VSCode does not trust the certs installed on the desktop, and so it notices that the GitHub cert is no longer signed by a trusted public CA authority.
As Rypox states above, the VSCode extension 'win-ca' (which must be set to 'append' mode) solves this issue. It tells VSCode to also trust the CAs installed on the employee's desktop, which makes VSCode happy again trusting the fake cert chain. No 'whitelisting' needed, and it is not 'VPN' related. But certainly not that obvious either. An interesting CA trust issue.
Confirming this exists is easy from your browser. Go to any outside site (like Amazon) and review the site's cert to see who the CAs are (Certification Path): it should not contain any reference to your company. Then look at that same cert from outside the company network on your own personal laptop.
… "bit of a glitch in the Matrix"; installing win-ca helps hide it again and all looks back to normal.
Had the same issue with a corporate proxy; the win-ca extension resolved it.
In settings, switch to append mode (it's not the default).
Restart VSCode.
PS: this is a Windows-only solution (for Mac see another post - self signed certificate in certificate chain on github copilot)
On macOS, you can use this script to monkey-patch the Copilot extension to make this work:
_VSCODEDIR="$HOME/.vscode/extensions"
_COPILOTDIR=$(ls "${_VSCODEDIR}" | grep -E "github.copilot-[1-9].*" | sort -V | tail -n1) # For copilot
_COPILOTDEVDIR=$(ls "${_VSCODEDIR}" | grep "github.copilot-nightly-" | sort -V | tail -n1) # For copilot-nightly
_EXTENSIONFILEPATH="${_VSCODEDIR}/${_COPILOTDIR}/dist/extension.js"
_DEVEXTENSIONFILEPATH="${_VSCODEDIR}/${_COPILOTDEVDIR}/dist/extension.js"
if [[ -f "$_EXTENSIONFILEPATH" ]]; then
  echo "Found Copilot Extension, applying 'rejectUnauthorized' patches to '$_EXTENSIONFILEPATH'..."
  # force rejectUnauthorized to false wherever the bundled agent sets it from a variable
  perl -pi -e 's/,rejectUnauthorized:[a-z]}(?!})/,rejectUnauthorized:false}/g' "${_EXTENSIONFILEPATH}"
  sed -i.bak 's/d={...l,/d={...l,rejectUnauthorized:false,/g' "${_EXTENSIONFILEPATH}"
else
  echo "Couldn't find the extension.js file for Copilot, please verify paths and try again or ignore if you don't have Copilot..."
fi
if [[ -f "$_DEVEXTENSIONFILEPATH" ]]; then
  echo "Found Copilot-Nightly Extension, applying 'rejectUnauthorized' patches to '$_DEVEXTENSIONFILEPATH'..."
  perl -pi -e 's/,rejectUnauthorized:[a-z]}(?!})/,rejectUnauthorized:false}/g' "${_DEVEXTENSIONFILEPATH}"
  sed -i.bak 's/d={...l,/d={...l,rejectUnauthorized:false,/g' "${_DEVEXTENSIONFILEPATH}"
else
  echo "Couldn't find the extension.js file for Copilot-Nightly, please verify paths and try again or ignore if you don't have Copilot-Nightly..."
fi
Save as something like monkey-patch-copilot.sh, then chmod +x monkey-patch-copilot.sh. You should then be able to run: ./monkey-patch-copilot.sh to apply the patch.
Note: I am not the original author. This was found on the Copilot feedback forum.
For any macOS users, the VSCode extension linhmtran168.mac-ca-vscode can help with this as well. It is similar to the previously mentioned win-ca.
https://marketplace.visualstudio.com/items?itemName=linhmtran168.mac-ca-vscode
This looks like a similar error to what I am getting. I believe the source of this in our corporate network is an SSL inspection process: when the HTTPS traffic is opened and inspected, the certificate chain is broken and this error shows up. A fix would be to add the GitHub Copilot servers to the SSL inspection whitelist so that their traffic is not inspected.
Corporate VPN was the problem (same as #mark-derry's).
JetBrains' PyCharm / DataSpell allows accepting self-signed certificates.
VSCode doesn't seem to have this option yet.
I found a solution which works for me in the case of IntelliJ. I have blogged about it at https://sidd.io/2023/01/github-copilot-self-signed-cert-issue/
At a high level I think the architecture of the plugin might be the same everywhere:
IDE-native Copilot plugin ---making RPC call---> Node.js-based Copilot agent
And this Node.js-based Copilot agent has issues with self-signed certs (at least in my case).
The fix is as follows (see the sketch below):
1) Export the self-signed certificate in question
2) Convert it into .pem format if not already
3) Export the path of this .pem cert in the NODE_EXTRA_CA_CERTS environment variable
4) Restart your IDE and it should work
Easy!
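A minimal shell sketch of those steps; the file names and the certs directory are placeholders, not from the original post:
# 2) convert the exported cert from DER to PEM if needed
openssl x509 -in corporate-root-ca.cer -inform der -out "$HOME/certs/corporate-root-ca.pem"
# 3) make the Node.js-based Copilot agent trust it
export NODE_EXTRA_CA_CERTS="$HOME/certs/corporate-root-ca.pem"
# 4) start the IDE from this shell (or set the variable system-wide) so it inherits the setting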
Method 1: just execute this command:
git config --global http.sslVerify false
Method 2:
Follow this guide, and thank me later because I have saved you a lot of hassle :) You're welcome!
https://mattferderer.com/fix-git-self-signed-certificate-in-certificate-chain-on-windows
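Note that http.sslVerify false disables certificate checking for all repositories. A less drastic variant (my suggestion, not from the answer above; the PEM path is a placeholder) is to point git at the corporate root CA instead:
# trust the corporate CA for git's HTTPS connections instead of disabling verification
git config --global http.sslCAInfo "$HOME/certs/corporate-root-ca.pem"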

Allow own self-signed certificate in ownCloud on a Synology

I have ownCloud version 9.1.8 running on a Synology. Now I have installed OnlyOffice on a local server with a self-signed certificate. It is important to know that the OnlyOffice server runs locally on the network, so I cannot get a certificate from e.g. Let's Encrypt: I only have a local server name, not a public one, so Let's Encrypt cannot verify the server. However, if I want to (and if you have a solution that needs it), the server can access the internet.
Now I have the problem that ownCloud gives me the following error message
"Error while downloading the document file to be converted."
when I want to save the URL in the OnlyOffice configuration in ownCloud. I guess the problem is that I am using a self-signed certificate. Do you know what I can do? Google does not really help me.
"Error while downloading the document file to be converted."
means that DocumentServer cannot validate your storage's self-signed certificate (OC in your case)
There are 2 possible workarounds:
1) Change "rejectUnauthorized" to false in the /etc/onlyoffice/documentserver/default.json config file
2) Change the default Node.js CAstore:
Edit the files:
/etc/supervisor/conf.d/onlyoffice-documentserver-converter.conf
/etc/supervisor/conf.d/onlyoffice-documentserver-docservice.conf
Add a flag --use-openssl-ca to the parameters in this line
Then you need to add your certificate to the the default CA store and restart ONLYOFFICE services:
supervisorctl restart all
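A rough sketch of both workarounds, under two assumptions: the key appears as "rejectUnauthorized": true in default.json, and the DocumentServer host is Debian/Ubuntu-based so update-ca-certificates is available:
# workaround 1: flip rejectUnauthorized, keeping a backup of the config
sudo sed -i.bak 's/"rejectUnauthorized": true/"rejectUnauthorized": false/' /etc/onlyoffice/documentserver/default.json
sudo supervisorctl restart all
# workaround 2: trust the self-signed cert in the system CA store instead
sudo cp my-selfsigned.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
sudo supervisorctl restart all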

Cannot Delete an AWS VPC

I want to delete an AWS VPC, and I don't know how it came into existence. When I try to delete it in the AWS Console, it says:
We could not delete the following VPC (vpc-0a72ac71) Network interface
'eni-ce2a0d10' is currently in use. (Service: AmazonEC2; Status Code:
400; Error Code: InvalidParameterValue; Request ID:
821d8a6d-3d9b-4c24-b372-314ea9b18b23)
As it mentions "AmazonEC2" in the error message, I suspected there might be some EC2 instances residing in this VPC. So I went into the EC2 dashboard, but found no EC2 instances there. However, I found there are two security groups associated with this VPC, so I decided to delete them, hoping that was the cause of the error. But when I tried to do so, I got this message:
As the message says, these security groups are associated with some network interfaces. Therefore, I decided to 'Detach' those, but I got this error message:
Error deleting network interfaces eni-ce2a0d10: You do not have
permission to access the specified resource. eni-0b7ff712: You do not
have permission to access the specified resource.
But I'm the root user, so I assume I should be able to do whatever I want, except if the resource was made by AWS itself or by another root account.
I know this network interface is being used somewhere, but it would be very time-consuming to go through every AWS service and check.
I've already checked the AWS RDS service; no instance or RDS subnet group exists there.
I've already checked this question and this one, with no luck.
I found the root cause of this issue.
Short answer:
That VPC was created solely for the WorkDocs service instance, so AWS was preventing me from deleting the VPC and any of its dependent services and pieces.
How I figured it out:
First, I noticed something interesting written in the 'Description' column of the 'undeletable' network interfaces (you can see them in the OP's last figure):
"AWS created network interface for directory d-90672d6b72."
From "directory", I suspected that this might have something to do with the AWS Directory Service. So I went to that service and noticed there is a directory associated with the VPC:
So I tried to remove this directory, but I got this error message:
Error - Directory cannot be deleted. This directory still has authorized applications, and cannot be deleted. To delete this directory, complete all of the following steps:
• Delete the WorkDocs site attached to this directory.
Therefore, I went to the AWS WorkDocs service, found the site, and deleted it:
Now that the directory was also deleted (circled in red), I went back to delete those network interfaces. However, I realized that they had vanished! (I guess Amazon removed them on its own.) I went to the VPC service to see whether I could now delete the VPC. Guess what? That VPC had vanished too!
Now I understand what was happening: that VPC was created solely for the WorkDocs service instance. I wish Amazon were more transparent about it.
As a more generic answer to the "Error deleting network interface" issue: it happens when a network interface was created automatically for a higher-level AWS resource.
The generic solution is to manage the network interface through that higher-level resource directly, such as WorkDocs or EFS.
In my case it happened when I wanted to delete a security group assigned to network interfaces created by an EFS volume.
So I went into the EFS console and removed the security group from the EFS volume.
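To find out which higher-level resource owns a stubborn interface, inspecting its Description and RequesterId fields with the AWS CLI can help (a sketch; the ENI ID is the one from the question):
aws ec2 describe-network-interfaces --network-interface-ids eni-ce2a0d10 \
  --query 'NetworkInterfaces[].{Description:Description,Requester:RequesterId,Status:Status}'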

Issue connecting composer to Blockchain on Bluemix - identity or token does not match

I have Fabric Composer 0.7.2 installed on my mac, and I was able to follow this thread to get it connected to my Blockchain service (Fabric v0.6.1) on Bluemix:
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an Ubuntu (16.04) Docker container and run composer-rest-server there. When I try to connect to my Blockchain service from my Docker container (using the same id, WebAppAdmin, that I used on my mac) I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17 code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my mac to my Docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error saying "signature does not verify". I did some additional testing and discovered that if I used an id that I had not previously used with composer (i.e. user_type1_0), then I could connect, and I could see a new cert in my .composer-credentials directory.
When I deleted that container and built a new one (I had messed something else up), I could not use that same userid again.
Does anybody know how security and these certs are supposed to work? It would seem as though something about certificate generation/validation is tied to the client (i.e. its hardware address), such that if I try to re-use an id on a different machine, the certs or keys don't match. I have a way to make things work, but it doesn't seem like the right way if I can't use the same id from different machines.
Thanks!
Hi, I tried to recreate this by having blockchain running on a unix machine, copying my connection profile and certificate to my mac, and then editing my connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using composer v0.7.4, so you could try that.
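A rough sketch of that copy-and-edit flow, assuming the composer 0.7-era default locations (~/.composer-connection-profiles and ~/.composer-credentials; host and profile names are placeholders):
# copy the connection profile and the enrollment credentials from the unix machine
scp -r user@unix-box:~/.composer-connection-profiles/bluemix ~/.composer-connection-profiles/
scp user@unix-box:'~/.composer-credentials/*' ~/.composer-credentials/
# then edit ~/.composer-connection-profiles/bluemix/connection.json to point the peer and
# membership addresses and the keyValStore path at the new machine, and re-test with
# composer network ping (check composer network ping --help for the exact flags in your version)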
I have also faced this issue, and concluded that:
There is inconsistent behavior when deploying a network using composer in a cloud environment, including Bluemix. The problem is not with composer, but with fabric 0.6.
I assume that this issue is also indirectly related to the following known bugs in fabric 0.6, which will not be fixed in fabric 0.6.
ERROR:
"
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
"
So far, we have understood that the following three JIRA issues are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity, and the fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
https://jira.hyperledger.org/browse/FAB-2787
Conclusion:
There is no way of fixing this issue on Bluemix or any other cloud environment with fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as all the above-mentioned defects are not fixed yet.

Kube-apiserver complains about remote error bad certificate

I reinstalled some nodes and a master. Now on the master I am getting:
Sep 15 04:53:58 master kube-apiserver[803]: I0915 04:53:58.413581 803 logs.go:41] http: TLS handshake error from $ip:54337: remote error: bad certificate
where $ip is one of the nodes.
So I likely need to delete or recreate certificates. Where would those be located? Any recommended commands to recreate or remove them, or to copy them from node to master or vice versa? Whatever gets me past this error message...
Take a look through the Creating Certificates section of authentication.md. It walks you through the certificates that you need to create and how to pass them to the system components, and you should be able to use that to re-generate certificates for your cluster.
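In case it helps, here is a generic openssl sketch for regenerating a cluster CA and an apiserver cert. This is an illustration, not the exact recipe from authentication.md; the CNs, validity period and file names are placeholders:
# new cluster CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kube-ca" -days 365 -out ca.crt
# apiserver key + CSR, signed by that CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365
# distribute ca.crt to the nodes so their kubelets trust the new apiserver cert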