How would I make a cat script that sanitises its input? - haproxy

I am experimenting with load balancers on my infrastructure, which is protected by Let's Encrypt.
I followed a blog article https://blog.bigdinosaur.org/finally-moving-to-letsencrypt-with-haproxy-varnish-and-nginx/
which has worked very well. The problem I am having is that Let's Encrypt supports SANs (Subject Alternative Names): the script generates the certificates properly, but the cat command it uses to automate combining the certificate and the private key does not support having commas in the domain argument.
For example, the script would be run as le-renew.sh domain.tld,www.domain.tld
This issues the certificate using the normal certbot procedure.
It then attempts to run cat /etc/letsencrypt/live/domain.tld,www.domain.tld/fullchain.pem /etc/letsencrypt/live/domain.tld,www.domain.tld/privkey.pem
This is where it fails with a "No such file or directory" error, which is understandable, since certbot creates the file /etc/letsencrypt/live/domain.tld/fullchain.pem
Is there a way to make the cat command ignore what's between the comma and the slash so it matches certbot's way of doing things?
Sorry if that was a bit of a ramble.
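One possible approach (a sketch, not from the original thread): strip everything from the first comma onwards before building the paths, since certbot names the live directory after the first domain only. The output path below is an assumption for a HAProxy setup:
DOMAINS="$1"                       # e.g. domain.tld,www.domain.tld
PRIMARY="${DOMAINS%%,*}"           # keep only the part before the first comma
cat "/etc/letsencrypt/live/${PRIMARY}/fullchain.pem" \
    "/etc/letsencrypt/live/${PRIMARY}/privkey.pem" \
    > "/etc/haproxy/certs/${PRIMARY}.pem"   # output location is an example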

Related

self signed certificate in certificate chain on github copilot

I installed the GitHub Copilot extension, but it does not work and always shows the following error:
What can I do to solve this?
Copilot error: “GitHub Copilot could not connect to server. Extension activation failed: self-signed certificate in certificate chain” is generally caused by using Copilot behind a corporate network.
Most corporate networks have a ‘man-in-the-middle’ appliance that dynamically breaks open all secure SSL traffic leaving the network for the internet. This ensures they can inspect any outbound traffic, including your online banking. Usually automation scrubs the traffic looking for theft of company secrets or IP and raises alerts. It all gets logged and reviewed further if need be.
This action leaves behind a fake cert chain as a fingerprint. The cert for the called site is replaced with a fake one, signed by the company’s own private CA. Hence the self-signed cert in the cert chain error.
On any company device (phone or laptop) the company CA is already installed as a trusted CA, so local browsers and other desktop apps trust this faked cert chain and do not raise any concern that someone is snooping on your secure network traffic (the company does own the network and the device).
By default VSCode does not trust the certificates installed on the desktop, so it notices that the GitHub cert is no longer signed by a trusted public CA.
As Rypox states above, the VSCode extension ‘Win-CA’ (must be set to ‘append’ mode) solves this issue. It tells VSCode to also trust the CAs installed on the employee’s desktop. This makes VSCode happy again, trusting the fake cert chain. No whitelisting is needed and it is not VPN related, but it is certainly not that obvious either. An interesting CA trust issue.
Confirming this does exist is easy from your browser. Go to any outside site (like Amazon) and review the site’s certificate to see who the CAs are (Certification Path). It should not contain any reference to your company. Then look at that same cert from outside the company network on your own personal laptop.
… “bit of a glitch in the Matrix”; installing Win-CA helps hide it again and all looks back to normal.
Had the same issue with a corporate proxy; the win-ca extension resolved it.
In settings, switch to append mode (it's not the default).
Restart VSCode.
PS: this is a Windows-only solution (for macOS see another post: self signed certificate in certificate chain on github copilot)
On macOS, you can use this script to monkey patch the Copilot extension to make this work:
_VSCODEDIR="$HOME/.vscode/extensions"
_COPILOTDIR=$(ls "${_VSCODEDIR}" | grep -E "github.copilot-[1-9].*" | sort -V | tail -n1) # For copilot
_COPILOTDEVDIR=$(ls "${_VSCODEDIR}" | grep "github.copilot-nightly-" | sort -V | tail -n1) # For copilot-nightly
_EXTENSIONFILEPATH="${_VSCODEDIR}/${_COPILOTDIR}/dist/extension.js"
_DEVEXTENSIONFILEPATH="${_VSCODEDIR}/${_COPILOTDEVDIR}/dist/extension.js"
if [[ -f "$_EXTENSIONFILEPATH" ]]; then
  echo "Found Copilot Extension, applying 'rejectUnauthorized' patches to '$_EXTENSIONFILEPATH'..."
  perl -pi -e 's/,rejectUnauthorized:[a-z]}(?!})/,rejectUnauthorized:false}/g' "${_EXTENSIONFILEPATH}"
  sed -i.bak 's/d={...l,/d={...l,rejectUnauthorized:false,/g' "${_EXTENSIONFILEPATH}"
else
  echo "Couldn't find the extension.js file for Copilot, please verify paths and try again or ignore if you don't have Copilot..."
fi
if [[ -f "$_DEVEXTENSIONFILEPATH" ]]; then
  echo "Found Copilot-Nightly Extension, applying 'rejectUnauthorized' patches to '$_DEVEXTENSIONFILEPATH'..."
  perl -pi -e 's/,rejectUnauthorized:[a-z]}(?!})/,rejectUnauthorized:false}/g' "${_DEVEXTENSIONFILEPATH}"
  sed -i.bak 's/d={...l,/d={...l,rejectUnauthorized:false,/g' "${_DEVEXTENSIONFILEPATH}"
else
  echo "Couldn't find the extension.js file for Copilot-Nightly, please verify paths and try again or ignore if you don't have Copilot-Nightly..."
fi
Save as something like monkey-patch-copilot.sh, then chmod +x monkey-patch-copilot.sh. You should then be able to run: ./monkey-patch-copilot.sh to apply the patch.
Note: I am not the original author. This was found on the Copilot feedback forum.
For macOS users, the VSCode extension linhmtran168.mac-ca-vscode can also help with this. It is similar to the previously mentioned win-ca.
https://marketplace.visualstudio.com/items?itemName=linhmtran168.mac-ca-vscode
This looks like a similar error to what I am getting. I believe the source of this on our corporate network is an SSL inspection process: when the HTTPS traffic is opened and inspected, the certificate chain is broken and this error shows up. A fix would be to add the GitHub Copilot servers to the SSL inspection whitelist so that traffic to them is not inspected.
Corporate VPN was the problem (same as #mark-derry's).
JetBrains' PyCharm / DataSpell allows accepting self-signed certificates.
VSCode doesn't seem to have this option yet.
I found a solution for this which works for me in case of Intellij. I have blogged about it at https://sidd.io/2023/01/github-copilot-self-signed-cert-issue/
At a high level I think the architecture of the plugin might be the same:
IDE Native CoPilot Plugin ---making RPC call---> NodeJS based CoPilot Agent
And this NodeJS-based Copilot agent has issues with the self-signed certs (at least in my case).
The fix is as follows (a command sketch follows the steps):
Export the self-signed certificate in discussion
Convert it into .pem format if not already
Export the path of this .pem cert to the NODE_EXTRA_CA_CERTS variable
Restart your IDE and it should work
Easy!
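For example, on macOS or Linux the conversion and export steps might look like this (file names and paths are illustrative, not from the original post):
mkdir -p "$HOME/certs"
# If the exported certificate is DER-encoded (common for .cer exports), convert it to PEM
openssl x509 -inform DER -in corporate-root-ca.cer -outform PEM -out "$HOME/certs/corporate-root-ca.pem"
# Point the Node.js-based Copilot agent at the extra CA, then restart the IDE
# from a shell where this variable is set (or set it system-wide)
export NODE_EXTRA_CA_CERTS="$HOME/certs/corporate-root-ca.pem"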
Method 1: just execute this command.
git config --global http.sslVerify false
Method 2:
Follow this guide and thank me later, because it will save you a lot of hassle :). You're welcome!
https://mattferderer.com/fix-git-self-signed-certificate-in-certificate-chain-on-windows

How to setup a Kubernetes secret using GoDaddy certs for Nginx Ingress controller

I'm struggling to setup a kubernetes secret using GoDaddy certs in order to use it with the Ingress Nginx controller in a Kubernetes cluster.
I know that GoDaddy isn't the go-to place for that, but that's not in my hands...
Here what I tried (mainly based on this github post):
I have a mail from GoDaddy with two files: generated-csr.txt and generated-private-key.txt.
Then I downloaded the cert package on GoDaddy's website (I took the "Apache" server type, as it's the recommended one for Nginx). The archive contains three files (with generated names): xxxx.crt and xxxx.pem (same content for both files, they represent the domain cert) and gd_bundle-g2-g1.crt (which is the intermediate cert).
Then I proceeded to concatenate the domain cert and the intermediate cert (let's name the result chain.crt) and tried to create a secret using these files with the following command:
kubectl create secret tls organization-tls --key generated-private-key.txt --cert chain.crt
And my struggle starts here, as it throws this error:
error: tls: failed to find any PEM data in key input
How can I fix this, or what am I missing?
Sorry to bother with something trivial like this, but it's been two days and I'm really struggling to find proper documentation or an example that works for the Ingress Nginx use case...
Any help or hint is really welcome, thanks a lot to you!
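For clarity (an illustrative command, not in the original question), the concatenation step described above would typically be run with the domain certificate first and the intermediate bundle second:
cat xxxx.crt gd_bundle-g2-g1.crt > chain.crt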
This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.
As OP mentioned in comments, the issue was solved by adding a newline at the beginning of the file.
"The key wasn't format correctly as it was lacking a newline in the
beginning of the file. So this particular problem is now resolved."
Similar issue was also addressed in this answer.
The issue is tricky but easy to fix.
The private key file given by GoDaddy is not properly encoded: it is encoded in UTF-8 with a BOM, so it starts with bytes that shouldn't be there. The nginx ingress does not understand this when ingesting the private key, which leads to the error.
The simple fix is to run the following command to properly encode the private key file:
iconv -c -f UTF8 -t ASCII generated-private-key.txt > generated-private-key-anssi.txt
And then you get the base64 private key as usual:
cat generated-private-key-anssi.txt | base64 -w 0
Now, the ingress properly gets the ssl certificate. In case you need to see the logs of the ingress to see how the CERT is processed, just list pods & logs in the ingress-nginx namespace.
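As a quick sanity check (not part of the original answers), you can confirm the cleaned key parses as valid PEM before recreating the secret; this assumes an RSA key, which is what GoDaddy typically issues:
# Should print "RSA key ok" rather than a PEM parsing error
openssl rsa -in generated-private-key-anssi.txt -check -noout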

GCS encryption always fails on big files

I'm trying to encrypt a file on GCS with my own key using the gsutil rewrite command (following https://cloud.google.com/storage/docs/using-encryption-keys)
As instructed, I'm using a .boto file that includes:
[GSUtil]
encryption_key = p9syBNA0ycKxGotK3XinNZC6aCpdn3ZQ7WWOhKNgBaY=
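For reference (not part of the original question), a customer-supplied encryption key like the one above is simply a base64-encoded 32-byte AES-256 key; it can be generated with:
openssl rand -base64 32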
It is working without a problem on small files but fails constantly on big ones.
I'm running the command:
gsutil rewrite -k -O gs://ywz-tmp/bigfile.txt
Is that a known issue?
Any workaround?
Feel free to use the file and key (both were generated for this post)
The fix for this issue should be in production now.

installing kubernetes on coreos with rkt and automated script

I'm trying to install Kubernetes with rkt on my real (not virtual) CoreOS servers at home using the scripts at https://github.com/coreos/coreos-kubernetes/tree/master/multi-node/generic and I have some questions.
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it is needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
Thanks!
Update
Rob, thank you so much for your response. I wasn't clear enough regarding etcd2. I already have etcd2 with TLS installed and properly configured on my CoreOS servers, so I configured my etcd servers in the controller-install.sh file:
export ETCD_ENDPOINTS="https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379"
but when I run the controller-install.sh script, it repeats the following output:
Waiting for etcd...
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
Trying: https://coreos-2.tux-in.com:2379
Trying: https://coreos-3.tux-in.com:2379
...
So I was guessing it's because I didn't define the etcd-related TLS certificates in the controller script, and that is why it's stuck in that phase.
On my MacBook Pro I have the following alias configured:
alias myetcdctl="~/apps/etcd-v3.0.8-darwin-amd64/etcdctl --endpoint=https://coreos-2.tux-in.com:2379 --ca-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/ca.pem --cert-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1.pem --key-file=/Users/ufk/Projects/coreos/tux-in/etcd/certs/certs-names/etcd1-key.pem --timeout=10s"
so when I run myetcdctl member list I get:
8832ce6a269a7dac: name=ccff826d5f564c67abf35467306f80a0 peerURLs=https://coreos-3.tux-in.com:2380 clientURLs=https://coreos-3.tux-in.com:2379 isLeader=true
a2c0ac9708ef90fc: name=dc38bc8f20e64940b260d3f7b260430d peerURLs=https://coreos-2.tux-in.com:2380 clientURLs=https://coreos-2.tux-in.com:2379 isLeader=false
So I'm guessing that I don't really have a problem there.
Any ideas?
Thanks!
My etcd2 is using TLS keys; I can't see anywhere in the script where I can define where the certificates are located.
These scripts don't start an etcd server. You will need to set one up manually, and you will be able to use TLS and as many nodes as you would like. This isn't clear in the current form of the document; I will attempt a PR to fix it.
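As an aside (not from the original answer), one way to confirm from the CoreOS host that an external etcd is reachable over TLS is to hit its /health endpoint with curl, using the same certificates as the etcdctl alias in the question (paths here are placeholders for wherever the certs live on that host):
curl --cacert /path/to/ca.pem \
     --cert /path/to/etcd1.pem \
     --key /path/to/etcd1-key.pem \
     https://coreos-2.tux-in.com:2379/health   # should return {"health": "true"}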
Can I supply a domain instead of an IP for ADVERTISE_IP and CONTROLLER_ENDPOINT?
Only CONTROLLER_ENDPOINT can be a domain name.
When I tried to install Kubernetes manually I needed to start the rkt API service. The documents don't state that it is needed here; does that mean I don't need it if I use these scripts, or is it just something that's missing from the documents?
These scripts include/start the rkt API service. As you can see below, the unit also has a Restart parameter set (source):
[Unit]
Before=kubelet.service
[Service]
ExecStart=/usr/bin/rkt api-service
Restart=always
RestartSec=10
[Install]
RequiredBy=kubelet.service

Process x509 client certificates in Perl

I am working with Web::ID and have some questions.
From the FAQ for Web::ID:
How can I use WebID in Perl?
[...]
Otherwise, you need to use Web::ID directly. Assuming you've configured your web server to request a client certificate from the browser, and you've managed to get that client certificate into Perl in PEM format, then it's just:
my $webid = Web::ID->new(certificate => $pem);
my $uri = $webid->uri;
And you have the URI.
Anyway, I'm stuck at the "get that client certificate into Perl" part.
I can see the client certificate is being passed along to the script by examining the %ENV environment variable. But I am still unsure how to actually process it in the way that Web::ID does... like examining the SAN.
According to the documentation of mod_ssl you will find the PEM encoded client certificate in the environment variable SSL_CLIENT_CERT, so all you need is to call
my $webid = Web::ID->new(certificate => $ENV{SSL_CLIENT_CERT});
However, Apache does not set the SSL_CLIENT_CERT environment variable by default. This is for performance reasons - setting a whole bunch of environment variables before spawning your Perl script (via mod_perl, or CGI, or whatever) is wasteful if your Perl script doesn't use them, so it only sets a small set of environment variables by default. You need to configure Apache correctly to tell it you want ALL DA STUFFZ. In particular you want something like this in .htaccess, or your virtual host config, or server config file:
SSLOptions +StdEnvVars +ExportCertData
While you're at it, you also want to make sure Apache is configured to ask clients to present a certificate. For that you want something like:
SSLVerifyClient optional_no_ca
All this is kind of covered in the documentation for Web::ID but not especially thoroughly.
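Once those directives are in place, a throwaway CGI script like the following (an illustration, not from the original answer) can confirm that the SSL_* variables, including SSL_CLIENT_CERT, are actually reaching your scripts:
#!/bin/sh
# Minimal CGI: dump the mod_ssl environment variables exported to the request
echo "Content-Type: text/plain"
echo
env | grep '^SSL_'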