Can't get Concourse to accept self-signed certs when looking up Docker images - concourse

I'm trying to get the helloworld sample to run. The problem is that my company uses a MITM proxy that replaces the certificates on all HTTPS connections with its own, so every tool that tries to reach an HTTPS URL fails.
In this case it is the code that downloads a Docker image from the official registry:
resource script '/opt/resource/check []' failed: exit status 1
stderr:
failed to ping registry: 2 error(s) occurred:
* ping https: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
* ping http: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority
I tried to add the insecure_registries option but that doesn't seem to work:
jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: ubuntu
          insecure_registries: ["docker.io:80"]
      run:
        path: echo
        args: ["Hello, world!"]
Any ideas what I might be doing wrong?

This is a problem a number of users have encountered, and one we are trying to find a general solution for that can be applied to all resources. If you are interested in our progress on that, you can read more on this GitHub issue.
In the meantime, you can try using the ca_certs option to pass your man-in-the-middle proxy's certificates into the resource. Note that ca_certs cannot be used in combination with insecure_registries. Without seeing your exact configuration I can't give an exact solution, but if ca_certs does not solve your issue, you should also look into the client_certs flag.
You can read more about all of these options in the docker-image-resource documentation here.
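A minimal sketch of what that might look like, assuming the array-of-objects format for ca_certs described in the resource documentation (the domain and the certificate contents below are placeholders for your proxy's CA):

image_resource:
  type: docker-image
  source:
    repository: ubuntu
    ca_certs:
    - domain: "registry-1.docker.io:443"   # placeholder: the registry your proxy intercepts
      cert: |
        -----BEGIN CERTIFICATE-----
        ...your MITM proxy's CA certificate goes here...
        -----END CERTIFICATE-----

Drop the insecure_registries entry when you do this, since the two options are mutually exclusive per the note above.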

Related

Lando with ParcelJS: exposing port

I'm trying to use ParcelJS with Lando and there's one problem if you want HMR to work. You need to expose a port and that seems to be much harder than it should be with Lando. :(
So I know I need to do this for my ParcelJS watch command:
parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101
Then I need to expose the port I've assigned, in this case "6101", to Docker (via my Lando config file). But that's where it gets tricky, apparently, because of the proxy setup Lando uses.
My current .lando.yml config is below, but it doesn't work as expected and the port is not exposed. I still get a "scripts.js:224 WebSocket connection to 'wss://testwp.lndo.site:6101/' failed:" error message from my ParcelJS generated script file in my browser's dev tools:
name: testwp
recipe: wordpress
config:
  php: '8.0'
  via: nginx
  webroot: wordpress
  database: mysql:8.0
services:
  appserver:
    portforward: 6101
I saw a similar post about a problem with LocalWP which does about the same thing Lando does.
Can you maybe try adding the flag --hmr-hostname localhost?
It's either that or --hmr-hostname testwp.lndo.site.
UPDATE:
After checking the Parcel CLI docs, the flag could also be --hmr-host localhost, so try that as well.
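If the host flag alone doesn't help, another thing to try (a sketch, assuming Lando's overrides passthrough to Docker Compose and keeping the HMR port at 6101) is publishing the HMR port directly on the appserver instead of relying on portforward:

name: testwp
recipe: wordpress
config:
  php: '8.0'
  via: nginx
  webroot: wordpress
  database: mysql:8.0
services:
  appserver:
    overrides:
      ports:
        - "6101:6101"   # assumption: publish the HMR port straight to the host, bypassing the Lando proxy

With the port published like this, the Parcel watcher from the question (parcel watch dev/scripts.js --out-dir prod/ --hmr-port 6101) should be reachable on ws://localhost:6101 rather than through the wss://testwp.lndo.site proxy URL.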

vault-secrets-provider alias not recognized with docker-kaniko

I'm having some issues when trying to use the Hashicorp Vault template (Kubernetes with Google Kubernetes Engine) with to.be.continuous.
When I use it together with the Docker Kaniko template, I get an error message: ... wget: bad address 'vault-secrets-provider'.
It seems that Kaniko doesn't recognize the vault-secrets-provider alias. Would you please help me with this? Or perhaps point me to where I can ask for help?
This is a summary of my .gitlab-ci.yml:
# Kubernetes template
- project: 'to-be-continuous/kubernetes'
  ref: '2.0.4'
  file: '/templates/gitlab-ci-k8s.yml'
- project: "to-be-continuous/kubernetes"
  ref: "2.0.4"
  file: "templates/gitlab-ci-k8s-vault.yml"
...
K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
VAULT_BASE_URL: "http://myvault.myserver.com/v1"
Error Message:
[ERROR] Failed getting secret K8S_DEFAULT_KUBE_CONFIG:
... wget: bad address 'vault-secrets-provider'
I tried many times without the Vault variant and Kaniko works OK, I mean without Vault secrets.
How can I accomplish this? I tried modifying the Kaniko template but without success.
I will appreciate any help with this.
To fix your issue, first upgrade the Docker template to its latest version (2.3.0 at the time this response was written).
Then, depending on your case, you have two options:
- Docker needs to handle some of your secrets managed by Vault: then you should also activate the Vault variant for Docker.
- Docker doesn't need to handle any secret managed by Vault: don't use the Vault variant for Docker. You'll get a warning message from Docker about not being able to decode the secret (basically the same as the one you had, but not failing the build).
Then simply use it in your .gitlab-ci.yml file:
include:
  # Docker template
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker.yml'
  # Vault variant for Docker (depending on your above case)
  - project: 'to-be-continuous/docker'
    ref: '2.3.0'
    file: '/templates/gitlab-ci-docker-vault.yml'
  # Kubernetes template
  - project: 'to-be-continuous/kubernetes'
    ref: '2.0.4'
    file: '/templates/gitlab-ci-k8s.yml'
  - project: "to-be-continuous/kubernetes"
    ref: "2.0.4"
    file: "/templates/gitlab-ci-k8s-vault.yml"
variables:
  K8S_DEFAULT_KUBE_CONFIG: "#url#http://vault-secrets-provider/api/secrets/noprod?field=kube_config"
  VAULT_BASE_URL: "http://myvault.myserver.com/v1"

Vapor cloud deploy failed: Sockets Error: Failed trying to connect to http://redis.eu.vapor.cloud:6379

I set up a Vapor project manually with the Swift Package Manager, following the documentation.
It builds and runs successfully on my local machine, for both debug and release builds.
But it fails to deploy to Vapor Cloud:
....
....
env: development
db: none
replicas: 1
replica size: free
branch: development
build: clean
Creating deployment [Done]
Connecting to build logs ...
Waiting in Queue [Failed]
Error: Sockets Error: Failed trying to connect to http://redis.eu.vapor.cloud:6379
Identifier: Sockets.SocketsError.connectFailed
Here are some possible causes:
- The hostname or port is not valid
Does anyone know what caused this error?
I opened an issue on GitHub and got this response:
Hi, it's usually caused by either a firewall or a proxy preventing the connection to our Redis cluster, which is what provides the log feedback to the terminal.
We have seen it a couple of times, and are working on making the log output visible in the dashboard for these kinds of situations :)
I still cannot find a solution.
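One way to test that theory from your network (a sketch; it assumes netcat is installed and uses the host and port from the error message) is to check whether the Redis port is reachable at all:

# Succeeds quickly if the port is reachable; hangs or fails if a firewall/proxy blocks it
nc -vz redis.eu.vapor.cloud 6379

If that fails from your machine but works from an unrestricted network, the corporate firewall or proxy is the likely culprit.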

generated serviceaccount token is rejected by kube-apiserver

I have one cluster working successfully without any problems, and I've tried to make a copy of it. It basically works, except for one issue: the token generated by the apiserver is not valid, with this error message:
6 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error
I have the API server started up with the following parameters:
kube-apiserver --address=0.0.0.0 --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service-cluster-ip-range=10.116.0.0/23 --client_ca_file=/srv/kubernetes/ca.crt --basic_auth_file=/srv/kubernetes/basic_auth.csv --authorization-mode=AlwaysAllow --tls_cert_file=/srv/kubernetes/server.cert --tls_private_key_file=/srv/kubernetes/server.key --secure_port=6443 --token_auth_file=/srv/kubernetes/known_tokens.csv --v=2 --cors_allowed_origins=.* --etcd-config=/etc/kubernetes/etcd.config --allow_privileged=False
I think I'm missing something but can't find what exactly. Any help will be appreciated!
So, apparently the controller manager was using the wrong server.key.
According to the Kubernetes documentation, the token is generated by the controller manager.
While copying all of my configuration I had to change the IP address, and because of that I had to change the certificate as well. But the controller-manager was still started with the "old" certificate and, after the change, signed tokens with the wrong server.key.
Below are the flags I use for the API server; they work for me, so check them against yours (see also the note after the list about keeping the controller-manager key in sync):
--insecure-bind-address=${OS_PRIVATE_IPV4}
--bind-address=${OS_PRIVATE_IPV4}
--tls-cert-file=/srv/kubernetes/server.cert
--tls-private-key-file=/srv/kubernetes/server.key
--client-ca-file=/srv/kubernetes/ca.crt
--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
--token-auth-file=/srv/kubernetes/known_tokens.csv
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--etcd_servers=http://${OS_PRIVATE_IPV4}:4001
--service-cluster-ip-range=10.10.0.0/16
--logtostderr=true
--v=5
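As a side note (an assumption based on the standard Kubernetes flags, not something stated in the original answer): the private key the controller manager uses to sign service account tokens has to belong to the same key pair the API server verifies them against, otherwise you get exactly this crypto/rsa verification error. Roughly:

# kube-controller-manager: private key used to SIGN service account tokens
--service-account-private-key-file=/srv/kubernetes/server.key

# kube-apiserver: key used to VERIFY those tokens
# (if omitted, it defaults to the value of --tls-private-key-file)
--service-account-key-file=/srv/kubernetes/server.key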

Kube-apiserver complains about remote error bad certificate

I reinstalled some nodes and a master. Now on the master I am getting:
Sep 15 04:53:58 master kube-apiserver[803]: I0915 04:53:58.413581 803 logs.go:41] http: TLS handshake error from $ip:54337: remote error: bad certificate
Where $ip is one of the nodes.
So I likely need to delete or recreate certificates. Where would those be located? Any recommended commands to recreate or remove them, or to copy them from node to master or vice versa? Whatever gets me past this error message...
Take a look through the Creating Certificates section of authentication.md. It walks you through the certificates that you need to create and how to pass them to the system components, and you should be able to use that to re-generate certificates for your cluster.
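As a rough illustration of the kind of steps that guide covers (an openssl-based sketch with placeholder file names and subjects, not the exact commands from authentication.md):

# Generate a cluster CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 3650 -out ca.crt

# Generate a key and CSR for the apiserver, then sign it with that CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.cert -days 3650

The "remote error: bad certificate" alert generally means the master and the nodes no longer trust the same CA after the reinstall, so the node credentials need to be regenerated from the same CA as the master's and distributed back to the nodes.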