k0s cluster is not getting created with active firewall on Red Hat 8.4 - k0s

I am using k0sctl to create a cluster, but it does not get set up when the firewall is on.
Operating System - Red Hat 8.4
k0s version - v1.24.2+k0s.0
k0sctl version - v0.13.0
k0sctl file:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    - ssh:
        address: 10.210.24.11
        user: root
        port: 22
        keyPath: /root/.ssh/id_rsa
      role: controller
    - ssh:
        address: 10.210.24.12
        user: root
        port: 22
        keyPath: /root/.ssh/id_rsa
      role: worker
  k0s:
    version: 1.24.2+k0s.0
    dynamicConfig: false
The error I am getting is: failed to connect from worker to kubernetes api at https://X.X.X.X:6443 - check networking
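That error usually means the worker cannot reach the API server on port 6443. A minimal sketch of opening the ports the k0s networking documentation lists as required, assuming firewalld on Red Hat and the default kube-router CNI (adjust for your setup, and open the node-to-node ports on the workers too):

$ firewall-cmd --permanent --add-port=6443/tcp   # kube-apiserver (worker -> controller)
$ firewall-cmd --permanent --add-port=9443/tcp   # k0s controller join API
$ firewall-cmd --permanent --add-port=8132/tcp   # konnectivity (worker -> controller)
$ firewall-cmd --permanent --add-port=10250/tcp  # kubelet (all nodes)
$ firewall-cmd --permanent --add-port=179/tcp    # kube-router BGP (all nodes)
$ firewall-cmd --permanent --add-port=2380/tcp   # etcd peers (multi-controller setups)
$ firewall-cmd --permanent --add-masquerade      # NAT for pod traffic
$ firewall-cmd --reload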

Related

k3d multi cluster communication

Assuming I have 2 separate k3d clusters (namely vault and dev):
is there a way to have a distinct URL for each cluster (preferably with HTTPS), for example vault.cluster.internal and dev.cluster.internal,
and to allow apps deployed in dev.cluster.internal to look up or interact with apps in vault.cluster.internal?
The cluster definitions are as follows:
dev.yaml:
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: dev
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
  host: "dev.cluster.internal"
  hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
  - port: 3000:3000
    nodeFilters:
      - loadbalancer
options:
  k3d:
    wait: true
    timeout: "60s"
  k3s:
    extraArgs:
      - arg: --tls-san=dev.cluster.internal
        nodeFilters:
          - server:*
      - arg: --disable=metrics-server
        nodeFilters:
          - server:*
      - arg: --disable=traefik
        nodeFilters:
          - server:*
  kubeconfig:
    updateDefaultKubeconfig: true
    switchCurrentContext: false
and the vault.yaml:
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: vault
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
  host: "vault.cluster.internal"
  hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
  - port: 8200:8200
    nodeFilters:
      - loadbalancer
options:
  k3d:
    wait: true
    timeout: "60s"
  k3s:
    extraArgs:
      - arg: --tls-san=vault.cluster.internal
        nodeFilters:
          - server:*
      - arg: --disable=metrics-server
        nodeFilters:
          - server:*
      - arg: --disable=traefik
        nodeFilters:
          - server:*
  kubeconfig:
    updateDefaultKubeconfig: true
    switchCurrentContext: false
Can this be done without using service mesh?
Can I update the coredns in the clusters to allow resolving the other cluster host names, and how?
Can this be done with docker network configurations, and how?
This is basically to simulate real world clusters (but for local development)
I found 3 solutions for the problem.
The first solution is to add a hostAliases section to the dev cluster definition and point it at the external IP of the vault cluster's load balancer.
For example, you can run the following command on the vault cluster after initializing it:
$ kubectl --context k3d-vault --namespace vault get services
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   ...
...
vault   LoadBalancer   10.43.34.131   172.24.0.3    ...
                                      ^^^^^^^^^^
...
dev.yaml would be
#...
ports:
  - port: 3000:3000
    nodeFilters:
      - loadbalancer
hostAliases:
  - ip: 172.24.0.3
    hostnames:
      - vault.cluster.internal
#...
# (alternatively, this can be automated using the following command without editing `dev.yaml` file)
$ KMS_IP=$(kubectl --context k3d-vault --namespace vault get services | grep LoadBalancer | awk -F " " '{ print $4 }')
$ k3d cluster create --config dev.yaml --host-alias $KMS_IP:vault.cluster.internal
This solution allows hostname resolution (as you would expect in a production cluster).
The second solution works similarly, but uses docker network inspect k3d-cluster (where k3d-cluster is the Docker network name from the cluster definition).
Run it and note down the load balancer's IP on the subnet Docker assigned:
...
"cad3f3XXXXXX": {
    "Name": "k3d-vault-serverlb",
    "EndpointID": "47d5XXXX",
    "MacAddress": "02:42:ac:18:00:04",
    "IPv4Address": "172.24.0.4/16",   # <<< this IP can be used in the dev cluster hostAliases
    "IPv6Address": ""
}
...
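The same IP can also be pulled out with a Go template instead of scanning the full JSON; a sketch, assuming the network is named k3d-cluster as in the definitions above:

$ docker network inspect k3d-cluster \
    --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}' | grep serverlb
k3d-vault-serverlb 172.24.0.4/16    # illustrative output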
The last solution is simpler but less flexible.
It uses host.k3d.internal as the name for the other cluster (a name every k3d cluster can already resolve), but you have to take care of port mapping, since all clusters resolve the same URL for their services (not ideal, but easy enough for testing multi-cluster communication, bugs, etc.).
In other words, configure the dev cluster's VAULT_ADDR to be host.k3d.internal:8200 instead of vault.cluster.internal:8200.
This is not flexible with TLS/HTTPS (AFAIK).
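For example, pointing an app in the dev cluster at the vault cluster could look like this in a container spec (a sketch; VAULT_ADDR is whatever variable your app actually reads, and the scheme depends on your TLS setup):

containers:
  - name: myapp              # hypothetical container
    image: myapp:dev         # hypothetical image
    env:
      - name: VAULT_ADDR
        value: "http://host.k3d.internal:8200"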

'No healthy upstream' error when Envoy proxy is set up manually

I have a very simple environment with a client, a server, and an Envoy proxy, each running in a separate Docker container, communicating over HTTP.
When I set it up using docker-compose, it works.
However, when I set up the containers and the network manually (with docker network create, setting the aliases, etc.), I get a "503 - no healthy upstream" message when the client tries to send requests to the server. curl to the network alias works from the Envoy container. Any idea what the difference is between using docker-compose and setting up the network and containers manually?
envoy.yaml:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config: {}
  clusters:
  - name: service
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: round_robin
    load_assignment:
      cluster_name: service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: server-stub
                port_value: 5000
admin:
  access_log_path: "/tmp/envoy.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
The docker-compose file that worked (but I don't want to use docker-compose; I am using scripts that set up each container separately):
version: "3.8"
services:
envoy:
image: envoyproxy/envoy:v1.16-latest
ports:
- "10000:10000"
- "9901:9901"
volumes:
- ./envoy.yaml:/etc/envoy/envoy.yaml
server-stub:
build:
context: .
dockerfile: Dockerfile
ports:
- "5000:5000"
I can't reproduce this. It works fine with your docker-compose file, and it works fine manually. Here are the manual steps I took:
$ docker network create test-net
$ docker container run --network test-net --name envoy -p 10000:10000 -p 9901:9901 --mount type=bind,src=/home/john/projects/tester/envoy.yaml,dst=/etc/envoy/envoy.yaml envoyproxy/envoy:v1.16-latest
$ docker run --network test-net --name server-stub johnharris85/simple-hostname-reporter:3
My sample app also listens on port 5000. I used your exact envoy config. Using Docker 20.10.8 if relevant.
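If someone else hits this, one way to narrow down a "503 no healthy upstream" is to ask Envoy's admin interface (port 9901 in the config above) how it sees the cluster; /clusters is a standard admin endpoint:

$ curl -s localhost:9901/clusters | grep '^service'
# Healthy endpoints are listed with their resolved address. If the 'service'
# cluster shows no hosts, Envoy's STRICT_DNS lookup of 'server-stub' is failing
# inside the container - which points at a missing Docker network alias rather
# than at the Envoy config itself.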

Connect from GKE to Cloud SQL through private IP

I am trying to connect from a pod in GKE to Google Cloud SQL.
Last weekend I made it work, but when I deleted the pod and recreated it, it stopped working, and I am not sure why.
Description
I have a Node.js application that is dockerized. It uses the Sequelize library and connects to a Postgres database.
Sequelize reads the variables from the environment, and in Kubernetes I pass them in through a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: myapi-secret
  namespace: development
type: Opaque
data:
  MYAPI_DATABASE_CLIENT: XXX
  MYAPI_DATABASE_PORT: XXX
  MYAPI_DATABASE_HOST: XXX
  MYAPI_DATABASE_NAME: XXX
  MYAPI_DATABASE_USERNAME: XXX
  MYAPI_DATABASE_PASSWORD: XXX
And my pod definition
apiVersion: v1
kind: Pod
metadata:
  name: myapi
  namespace: development
  labels:
    env: dev
    app: myapi
spec:
  containers:
    - name: myapi
      image: gcr.io/companydev/myapi
      envFrom:
        - secretRef:
            name: myapi-secret
      ports:
        - containerPort: 3001
          name: myapi
When I deploy the pod, I get a connection error to the database:
Error: listen EACCES: permission denied tcp://podprivateip:3000
    at Server.setupListenHandle [as _listen2] (net.js:1300:21)
    at listenInCluster (net.js:1365:12)
    at Server.listen (net.js:1462:5)
    at Function.listen (/usr/src/app/node_modules/express/lib/application.js:618:24)
    at Object.<anonymous> (/usr/src/app/src/app.js:46:5)
    at Module._compile (internal/modules/cjs/loader.js:1076:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
    at Module.load (internal/modules/cjs/loader.js:941:32)
    at Function.Module._load (internal/modules/cjs/loader.js:782:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
Emitted 'error' event on Server instance at:
    at emitErrorNT (net.js:1344:8)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  code: 'EACCES',
  errno: -13,
  syscall: 'listen',
  address: 'tcp://podprivateip:3000',
  port: -1
}
I can't figure out what I am missing.
Thanks to @kurtisvg I realized that I was not passing the host and port through env variables to Express. However, I still have a connection error:
UnhandledPromiseRejectionWarning: SequelizeConnectionError: connect ETIMEDOUT postgresinternalip:5432
It is strange, because Postgres (Cloud SQL) and the cluster (GKE) are in the same GCP network, but it is as if the pod can't see the database.
If I run it with docker-compose locally, the connection works.
You're connecting over private IP, but the port you've specified appears to be 3000. Typically Cloud SQL listens on the default port for the database engine:
MySQL - 3306
Postgres - 5432
SQL Server - 1433
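To separate networking from application configuration, it can help to test raw TCP reachability to the database from inside the cluster first; a sketch, where 10.x.x.x is a placeholder for your Cloud SQL private IP (and assuming the busybox image's nc supports -z):

$ kubectl run sqltest --rm -it --restart=Never --namespace development \
    --image=busybox -- nc -vz -w 5 10.x.x.x 5432
# 'open' means the pod reaches Cloud SQL on 5432, so the remaining problem is in
# the app's env vars; a timeout points at VPC peering / private services access.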

Fabric v2.0 in kubernetes (minikube) - error on peer channel join - TLS issue because of pod names

I am trying to set up the Fabric v2.0 test-network (https://hyperledger-fabric.readthedocs.io/en/release-2.0/test_network.html) on kubernetes (locally on minikube). I have an error with peer channel join.
I created kubernetes files based on the docker-compose-test-net.yaml of the test-network. I successfully deploy the following pods:
an orderer (raft)
2 peers (peer0-org1-example-com and peer0-org2-example-com)
a fabric-tools pod.
I successfully generate the crypto material with cryptogen and configtxgen.
I successfully create the channel:
when I am in the fabric-tools pod:
bash-5.0# peer channel create -o orderer-example-com:7050 -c $CHANNEL_NAME --ordererTLSHostnameOverride orderer.example.com -f /fabric/${CHANNEL_NAME}.tx --tls --cafile $ORDERER_CA
2020-02-11 08:10:14.057 CET [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2020-02-11 08:10:14.080 CET [cli.common] readBlock -> INFO 002 Expect block, but got status: &{NOT_FOUND}
...
2020-02-11 08:10:15.105 CET [cli.common] readBlock -> INFO 00c Received block: 0
But when I try to have the first peer join the channel, I get an error. I have been spending days on this and cannot find a solution. Your help would be much appreciated!!
in the fabric-tools pod:
bash-5.0# peer channel join -b $CHANNEL_NAME.block
Error: error getting endorser client for channel: endorser client failed to connect to peer0-org1-example-com:7051: failed to create new connection: context deadline exceeded
what I see in the peer0-org1-example-com pod logs:
2020-02-11 08:11:29.945 CET [core.comm] ServerHandshake -> ERRO 1b9 TLS handshake failed with error remote error: tls: bad certificate server=PeerServer remoteaddress=172.17.0.6:43270
2020-02-11 08:11:29.945 CET [grpc] handleRawConn -> DEBU 1ba grpc: Server.Serve failed to complete security handshake from "172.17.0.6:43270": remote error: tls: bad certificate
Thank you!!
UPDATE:
If I run peer channel join directly on the peer0-org1-example-com pod, I can see that there is a certificate issue:
addrConn.createTransport failed to connect to {peer0-org1-example-com:7051 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for peer0.org1.example.com, peer0, localhost, peer0.org1.example.com, peer0, localhost, peer0.org1.example.com, peer0, localhost, not peer0-org1-example-com". Reconnecting.
It seems that it would accept the connection for peer0.org1.example.com but not for peer0-org1-example-com. But Kubernetes does not allow dots in the names of Services and Deployments (object names must be valid DNS-1123 labels), which is why I used dashes. Do you know how to solve this?
I tried to make the cryptogen tool generate certificates for peer0-org1-example-com, but it messed things up. The better approach would be, I think, to give the Kubernetes objects names with dots, but I can't seem to manage it.
The names in the peer Deployment files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: peer0-org1-example-com
spec:
  selector:
    matchLabels:
      name: peer0-org1-example-com
  replicas: 1
  template:
    metadata:
      labels:
        name: peer0-org1-example-com
The names in the peer Service files:
apiVersion: v1
kind: Service
metadata:
  name: peer0-org1-example-com
  labels:
    run: peer0-org1-example-com
spec:
  type: ClusterIP
  selector:
    name: peer0-org1-example-com
  ports:
    - protocol: TCP
      port: 7051
      name: grpc
We had a similar dot/dash certificate issue with OpenShift and solved it by setting a CommonName with dashes for each Host in our crypto-config file. Maybe this will work for you too.
Something like this:
PeerOrgs:
  - Name: Org1
    Domain: org1-example-com
    EnableNodeOUs: true
    Specs:
      - Hostname: peer0
        CommonName: "peer0-org1-example-com"
      - Hostname: peer1
        CommonName: "peer1-org1-example-com"
    CA:
      Hostname: ca
      CommonName: "ca-org1-example-com"
  - Name: Org2
    Domain: org2-example-com
    EnableNodeOUs: true
    Specs:
      - Hostname: peer0
        CommonName: "peer0-org2-example-com"
      - Hostname: peer1
        CommonName: "peer1-org2-example-com"
    CA:
      Hostname: ca
      CommonName: "ca-org2-example-com"
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    EnableNodeOUs: true
    Specs:
      - Hostname: orderer
        CommonName: "orderer-example-com"
UPDATE:
We also changed all dot addresses in the configtx.yaml like this:
Orderer: &OrdererDefaults
  ...
  EtcdRaft:
    Consenters:
      - Host: orderer-example-com
  ...
  Addresses:
    - orderer-example-com:7050
UPDATE 2:
You probably have to change the csr part in the fabric-ca-server-config.yaml of each org too:
csr:
  cn: ca-example-com
  names:
    - C: US
      ST: "New York"
      L: "New York"
      O: example-com
      OU:
  hosts:
    - localhost
    - example-com
  ca:
    expiry: 131400h
    pathlength: 1

csr:
  cn: ca-org1-example-com
  names:
    - C: US
      ST: "North Carolina"
      L: "Durham"
      O: org1-example-com
      OU:
  hosts:
    - localhost
    - org1-example-com
  ca:
    expiry: 131400h
    pathlength: 1

csr:
  cn: ca-org2-example-com
  names:
    - C: UK
      ST: "Hampshire"
      L: "Hursley"
      O: org2-example-com
      OU:
  hosts:
    - localhost
    - org2-example-com
  ca:
    expiry: 131400h
    pathlength: 1
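Once the crypto material is regenerated, it may be worth confirming that the peer's TLS certificate actually carries the dashed name before retrying the join; a sketch, with a hypothetical path (the exact layout depends on where cryptogen wrote its output):

$ openssl x509 -noout -text \
    -in crypto-config/peerOrganizations/org1-example-com/peers/peer0-org1-example-com/tls/server.crt \
  | grep -A1 "Subject Alternative Name"
# Expect something like: DNS:peer0-org1-example-com, DNS:localhost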

Keycloak behind Kong and strange redirect

Setup:
minikube version: v0.27.0
Kong (helm install stable/kong) / version 1.0.2
Keycloak (helm install stable/keycloak) / version 4.8.3.Final
I have a self-signed SSL certificate for my "hello.local".
What I need to achieve: Keycloak behind Kong at "https://hello.local/".
My steps:
1) fresh minikube
2) Install Keycloak with helm, using the following values.yaml:
keycloak:
  basepath: ""
  replicas: 1
  ...
  extraEnv: |
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
(that would create service auth-keycloak-http)
3) Install Kong with helm, using the following values.yaml:
replicaCount: 1
admin:
  ingress:
    enabled: true
    hosts: ['hello.local']
proxy:
  type: LoadBalancer
  ingress:
    enabled: true
    hosts: ['hello.local']
    tls:
      - hosts:
          - hello.local
        secretName: tls-certificate
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  tls:
    enabled: true
postgresql:
  ...
4) I set up a service and a route for Kong:
Service:
  Protocol: http
  Host: auth-keycloak-http
  Port: 80
Route:
  Hosts: hello.local
After that I can open "https://hello.local" and see the welcome page from Keycloak. When I click Administration Console, I am redirected to "https://hello.local:8443/admin/master/console/" in my browser. There should not be a redirect to another port at this point.
The same setup with 2 Docker images (Keycloak + Kong) works if PROXY_ADDRESS_FORWARDING is true.
How can I make Keycloak (helm chart) work behind Kong (helm chart) in a kubernetes cluster as expected, without the redirect?
This is being discussed in GitHub issue 1, GitHub issue 2, and GitHub issue 3, as well as in similar questions on Stack Overflow.
Original answer:
It seems it is necessary to set the following environment variables in the values.yaml of the Keycloak helm chart:
...
extraEnv: |
  - name: KEYCLOAK_HTTP_PORT
    value: "80"
  - name: KEYCLOAK_HTTPS_PORT
    value: "443"
  - name: KEYCLOAK_HOSTNAME
    value: example.com
...
All of them are required; after that, the redirect works correctly.
Added Sep 2021:
There was still an issue with weird redirect behavior to port 8443 for some actions (like going to Account Management via the link at the top right of the admin console).
In fact, we do not need to set KEYCLOAK_HTTP_PORT or KEYCLOAK_HTTPS_PORT at all.
Some changes are required on the proxy side instead: the proxy needs to set x-forwarded-port to 443 for this route.
In my case we use Kong.
On the route where Keycloak is exposed, we need to add (this one worked for me):
a serverless > post-function with the following content:
ngx.var.upstream_x_forwarded_port=443
More info on KONG and x_forwarded_*
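For reference, a sketch of attaching that post-function through the Kong admin API; config.functions matches the serverless-functions plugin as it shipped with Kong 1.x (newer Kong versions use per-phase fields such as config.access instead), and the route ID is a placeholder:

$ curl -X POST http://localhost:8001/routes/<keycloak-route-id>/plugins \
    --data "name=post-function" \
    --data "config.functions[]=ngx.var.upstream_x_forwarded_port=443"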