Access consul-api from consul-connect-service-mesh - kubernetes

In a Consul Connect service mesh (on Kubernetes), how do you reach the Consul API itself, for example to access the Consul KV store?
I'm working through this tutorial, and I'm wondering how you can bind the Consul HTTP API in a service to localhost.
Do you have to configure the Helm chart further?
I would have expected the Consul agent to always be available as an upstream service.
The only way I found to access the API is via the Kubernetes service consul-server.
Environment:
Kubernetes 1.22.5 (docker-desktop)
Consul Helm chart 0.42
Consul 1.11.3
Helm values used:
global:
  name: consul
  datacenter: dc1
server:
  replicas: 1
  securityContext:
    runAsNonRoot: false
    runAsGroup: 0
    runAsUser: 0
    fsGroup: 0
ui:
  enabled: true
  service:
    type: 'NodePort'
connectInject:
  enabled: true
controller:
  enabled: true

You can access the Consul API on the local agent by using the Kubernetes downward API to inject an environment variable into the pod with the IP address of the host. This is documented on Consul.io under Installing Consul on Kubernetes: Accessing the Consul HTTP API.
You will also need to exclude port 8500 (or 8501 for HTTPS) from transparent proxy redirection using the consul.hashicorp.com/transparent-proxy-exclude-outbound-ports annotation.
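For reference, here is a minimal sketch of that approach. The client deployment, the HOST_IP/CONSUL_HTTP_ADDR variable names, and the curl image are only illustrative; the documented pieces are the downward API field and the two consul.hashicorp.com annotations, and it assumes the Consul client agents expose port 8500 on the host as in the documented setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-client
  template:
    metadata:
      labels:
        app: example-client
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        # Keep 8500 out of the transparent-proxy redirection so it reaches the host agent.
        'consul.hashicorp.com/transparent-proxy-exclude-outbound-ports': '8500'
    spec:
      containers:
        - name: example-client
          image: curlimages/curl:latest
          command: ['/bin/sh', '-c', 'while true; do sleep 30; done']
          env:
            # Downward API: the IP of the node this pod is scheduled on.
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            # Point Consul clients (and curl) at the agent running on the host.
            - name: CONSUL_HTTP_ADDR
              value: http://$(HOST_IP):8500
With this in place, curl $CONSUL_HTTP_ADDR/v1/kv/test from inside the container should reach the local agent's HTTP API.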

My current final solution is a Connect service based on a reverse proxy (nginx) that targets the Consul agent on the node.
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-kv-proxy
data:
  nginx.conf.template: |
    error_log /dev/stdout info;
    server {
      listen 8500;
      location / {
        access_log off;
        proxy_pass http://${MY_NODE_IP}:8500;
        error_log /dev/stdout;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: consul-kv-proxy
spec:
  selector:
    app: consul-kv-proxy
  ports:
    - protocol: TCP
      port: 8500
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-kv-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-kv-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-kv-proxy
  template:
    metadata:
      name: consul-kv-proxy
      labels:
        app: consul-kv-proxy
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      containers:
        - name: consul-kv-proxy
          image: nginx:1.14.2
          volumeMounts:
            - name: config
              mountPath: "/usr/local/nginx/conf"
              readOnly: true
          command: ['/bin/bash']
          # We have to transform the nginx config to use the node IP address.
          args:
            - -c
            - envsubst < /usr/local/nginx/conf/nginx.conf.template > /etc/nginx/conf.d/consul-kv-proxy.conf && nginx -g 'daemon off;'
          ports:
            - containerPort: 8500
              name: http
          env:
            - name: MY_NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
      volumes:
        - name: config
          configMap:
            name: consul-kv-proxy
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: consul-kv-proxy
A downstream service (called static-client) can now be declared like this:
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'consul-kv-proxy:8500'
    spec:
      containers:
        - name: static-client
          image: curlimages/curl:latest
          # Just spin & wait forever, we'll use `kubectl exec` to demo
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
      serviceAccountName: static-client
Assume we have a key in Consul called "test".
From a pod of the static-client we can now access the Consul HTTP API with:
curl http://localhost:8500/v1/kv/test
This solution still needs fine-tuning (I have not tried HTTPS or ACLs yet).
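To seed the demo key I simply wrote it through the server first, e.g. kubectl exec consul-server-0 -- consul kv put test hello (assuming a single server pod named consul-server-0), and then ran the curl above from the static-client pod.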

Related

Cannot Connect Kubernetes Secrets to Kubernetes Deployment (Values Are Empty)

I have a Golang microservice application with the following Kubernetes manifest configuration:
apiVersion: v1 # Service for accessing store application (this) from Ingress...
kind: Service
metadata:
  name: store-internal-service
  namespace: store-namespace
spec:
  type: ClusterIP
  selector:
    app: store-internal-service
  ports:
    - name: http
      port: 8000
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-application-service
  namespace: store-namespace
  labels:
    app: store-application-service
spec:
  selector:
    matchLabels:
      app: store-internal-service
  template:
    metadata:
      labels:
        app: store-internal-service
    spec:
      containers:
        - name: store-application
          image: <image>
          envFrom:
            - secretRef:
                name: project-secret-store
          ports:
            - containerPort: 8000
              protocol: TCP
          imagePullPolicy: Always
          env:
            - name: APPLICATION_PORT
              value: "8000"
            - name: APPLICATION_HOST
              value: "localhost"
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Secret
metadata:
  name: project-secret-store
  namespace: store-namespace
type: Opaque
stringData:
  # Prometheus Server Credentials...
  PROMETHEUS_HOST: "prometheus-internal-service"
  PROMETHEUS_PORT: "9090"
  # POSTGRESQL CONFIGURATION.
  DATABASE_HOST: "postgres-internal-service"
  DATABASE_PORT: "5432"
  DATABASE_USER: "postgres_user"
  DATABASE_PASSWORD: "postgres_password"
  DATABASE_NAME: "store_db"
Also, for test purposes, I've specified the following variables in order to receive the values from the Secret in my application:
var (
    POSTGRES_USER     = os.Getenv("DATABASE_USER")
    POSTGRES_PASSWORD = os.Getenv("DATABASE_PASSWORD")
    POSTGRES_DATABASE = os.Getenv("DATABASE_NAME")
    POSTGRES_HOST     = os.Getenv("DATABASE_HOST")
    POSTGRES_PORT     = os.Getenv("DATABASE_PORT")
)
The problem is that when I run my application and, after some time, check its logs with kubectl logs <my-application-pod-name> --namespace=store-namespace, it turns out that all of these Go variables are empty, despite the fact that they have all been declared in the Secret...
There are probably other issues that can cause this problem, but if there are errors in the configuration to point out, please share your thoughts about it :)
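(To rule out the manifest side, the values that actually reach the container can be inspected with, for example, kubectl exec <my-application-pod-name> --namespace=store-namespace -- env.)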

k8s custom metric exporter json returned values

I've tried to make a custom metric exporter for my Kubernetes cluster, and I'm still failing to get the right value in order to exploit it within an HPA, even with this simplistic "a" variable.
I don't really know what to export for the different routes: if I return {"a":"1"}, the error message from the HPA tells me it's missing kind, then other fields.
So I've summed up the complete reproducible experiment below, in order to make it as simple as possible.
Any idea on how to complete this task?
Thanks a lot for any clue, advice, enlightenment, comment, or notice.
apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter
  namespace: test
data:
  os.php: |
    <?php
    // If anyone knows a simple replacement other than Swoole, they're welcome ---
    // Somehow I only think I'll need to know what JSON output is expected for these routes.
    $server = new Swoole\HTTP\Server("0.0.0.0", 443, SWOOLE_PROCESS, SWOOLE_SOCK_TCP | SWOOLE_SSL);
    $server->set(['worker_num' => 1, 'ssl_cert_file' => __DIR__ . '/example.com+5.pem', 'ssl_key_file' => __DIR__ . '/example.com+5-key.pem']);
    $server->on('Request', 'onMessage');
    $server->start();

    function onMessage($req, $res) {
      $value = 1;
      $url = $req->server['request_uri'];
      file_put_contents('monolog.log', "\n" . $url, 8); // Log
      if ($url == '/') {
        $res->end('{"status":"healthy"}');
        return;
      } elseif ($url == '/metrics') {
        $res->end('a ' . $value);
        return;
      } elseif ($url == '/apis/custom.metrics.k8s.io/v1beta1') { // <-- This URL is called lots of times in the logs
        $res->end('{"kind": "APIResourceList","apiVersion": "v1","groupVersion": "custom.metrics.k8s.io/v1beta1","resources": [{"name": "namespaces/a","singularName": "","namespaced": false,"kind": "MetricValueList","verbs": ["get"]}]}');
        return;
      } elseif ($url == '/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a') {
        $res->end('{"kind": "MetricValueList","apiVersion": "custom.metrics.k8s.io/v1beta1","metadata": {"selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter-svc/a"},"items": [{"describedObject": {"kind": "Service","namespace": "test","name": "test-metrics-exporter-svc","apiVersion": "/v1"},"metricName": "a","timestamp": "2020-06-21T08:35:58Z","value": "' . $value . '","selector": null}]}');
        return;
      }
      $res->status(404);
      return;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: test-metrics-exporter
  namespace: test
  annotations:
    prometheus.io/port: '443'
    prometheus.io/scrape: 'true'
spec:
  ports:
    - port: 443
      protocol: TCP
  selector:
    app: test-metrics-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-metrics-exporter
  namespace: test
spec:
  selector:
    matchLabels:
      app: test-metrics-exporter
  template:
    metadata:
      labels:
        app: test-metrics-exporter
    spec:
      terminationGracePeriodSeconds: 1
      volumes:
        - name: exporter
          configMap:
            name: exporter
            defaultMode: 0744
            items:
              - key: os.php
                path: os.php
      containers:
        - name: test-metrics-exporter
          image: openswoole/swoole:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: exporter
              mountPath: /var/www/os.php
              subPath: os.php
          command:
            - /bin/sh
            - -c
            - |
              touch monolog.log;
              apt update && apt install wget -y && wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
              cp mkcert-v1.4.3-linux-amd64 /usr/local/bin/mkcert && chmod +x /usr/local/bin/mkcert
              mkcert example.com "*.example.com" example.test localhost 127.0.0.1 ::1
              php os.php &
              tail -f monolog.log
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
  namespace: test
spec:
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      terminationGracePeriodSeconds: 1
      containers:
        - name: alpine
          image: alpine
          imagePullPolicy: IfNotPresent
          command:
            - /bin/sh
            - -c
            - |
              tail -f /dev/null
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: alpine
  namespace: test
spec:
  scaleTargetRef:
    kind: Deployment
    name: alpine
    apiVersion: apps/v1
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        target:
          kind: Service
          name: test-metrics-exporter
        metricName: a
        targetValue: '1'
---
# This is the Api hook for custom metrics
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
  namespace: test
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  version: v1beta1
  service:
    name: test-metrics-exporter
    namespace: test
    port: 443

visual studio kubernetes project 503 error in azure

I have created a Kubernetes project in Visual Studio 2019, with the default template. This template creates a WeatherForecast controller.
After that I published it to my ACR.
I used this command to create the AKS:
az aks create -n $MYAKS -g $MYRG --generate-ssh-keys --z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
And I enabled HTTP application routing via the Azure portal.
I have deployed it to Azure Kubernetes Service (Standard_B2s) with the following deployment.yaml:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
service.yaml:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes1
spec:
  type: ClusterIP
  selector:
    app: kubernetes1
  ports:
    - port: 80          # SERVICE exposed port
      name: http        # SERVICE port name
      protocol: TCP     # The protocol the SERVICE will listen to
      targetPort: http  # Port to forward to in the POD
ingress.yaml:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes1
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: kubernetes1.<uuid (removed for this post)>.westeurope.aksapp.io # Which host is allowed to enter the cluster
      http:
        paths:
          - backend: # How the ingress will handle the requests
              service:
                name: kubernetes1 # Which service the request will be forwarded to
                port:
                  name: http # Which port in that service
            path: / # Which path is this rule referring to
            pathType: Prefix # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
But when I go to kubernetes1.<uuid>.westeurope.aksapp.io or kubernetes1.<uuid>.westeurope.aksapp.io/WeatherForecast I get the following error:
503 Service Temporarily Unavailable
nginx/1.15.3
It's working now. For other people who have the same problem: I have updated my deployment config from:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
to:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: kubernetes1 # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: kubernetes1
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - image: mycontainername.azurecr.io/kubernetes1:latest
          name: kubernetes1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
              name: http
I don't know for certain which line solved the problem, but the most likely candidate is the name: http on the containerPort: the Service's targetPort refers to a named port (http) that the original deployment never defined, so the Service had no usable endpoints. Feel free to comment if you know for sure.

Can't configure ingress on gcloud properly

I am trying to deploy a simple app on Google Cloud. I am testing the GitLab cluster integration.
Here is my Kubernetes YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: "my-service"
  labels:
    run: service
spec:
  type: NodePort
  selector:
    run: "service"
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-api
  namespace: "my-service"
  labels:
    run: service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
        - name: service
          image: "gcr.io/test/service:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
              protocol: TCP
          volumeMounts:
            - name: test
              mountPath: /usr/test
      volumes:
        - name: test
          emptyDir: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "service-ingress"
  namespace: "my-service"
  labels:
    run: service
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: service
              servicePort: 9000
If I log into the pod I can curl the service on the NodePort-designated IP, but if I try to hit the ingress address I just get an error.
I am not sure why there are two backend services on the load balancer that is created automatically; the one that points to my app shows as unhealthy.
[screenshot: load balancer backends]
You need to define a readiness probe in your pod spec, because the GKE ingress controller picks up its health check from the readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
        - name: service
          image: "gcr.io/test/service:latest"
          imagePullPolicy: IfNotPresent
          readinessProbe:
            httpGet:
              # Point this at a path/port your app actually serves; the container listens on 9000.
              path: /healthz
              port: 9000
            initialDelaySeconds: 3
            periodSeconds: 3
          ports:
            - containerPort: 9000
              protocol: TCP
          volumeMounts:
            - name: test
              mountPath: /usr/test
      volumes:
        - name: test
          emptyDir: {}

How to deny by default but allow HTTP and TCP traffic in an Istio Kubernetes cluster?

I have a cluster with Istio injection enabled and a CockroachDB StatefulSet defined:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cockroachdb-serviceaccount
---
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
    # The main port, served by gRPC, serves Postgres-flavor SQL, internode
    # traffic and the cli.
    - port: 26257
      targetPort: 26257
      name: tcp
    # The secondary port serves the UI as well as health and debug endpoints.
    - port: 8080
      targetPort: 8080
      name: http
  selector:
    app: cockroachdb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb-statefulset
  labels:
    version: v20.1.2
spec:
  serviceName: cockroachdb
  replicas: 3
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
        version: v20.1.2
    spec:
      serviceAccountName: cockroachdb-serviceaccount
      containers:
        - name: cockroachdb
          image: cockroachdb/cockroach:v20.1.2
          ports:
            - containerPort: 26257
              name: tcp
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: datadir
              mountPath: /cockroach/cockroach-data
          env:
            - name: COCKROACH_CHANNEL
              value: kubernetes-insecure
          command:
            - "/bin/bash"
            - "-ecx"
            # The use of qualified `hostname -f` is crucial:
            # Other nodes aren't able to look up the unqualified hostname.
            - "exec /cockroach/cockroach start --logtostderr --insecure --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join cockroachdb-statefulset-0.cockroachdb,cockroachdb-statefulset-1.cockroachdb,cockroachdb-statefulset-2.cockroachdb --cache 25% --max-sql-memory 25%"
      # No pre-stop hook is required, a SIGTERM plus some time is all that's
      # needed for graceful shutdown of a node.
      terminationGracePeriodSeconds: 5
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: datadir
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 4Gi
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: cockroachdb-public
spec:
  host: cockroachdb-public
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cockroachdb-public
spec:
  hosts:
    - cockroachdb-public
  http:
    - match:
        - port: 8080
      route:
        - destination:
            host: cockroachdb-public
            port:
              number: 8080
  tcp:
    - match:
        - port: 26257
      route:
        - destination:
            host: cockroachdb-public
            port:
              number: 26257
and a service that accesses it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: downstream-serviceaccount
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: downstream-deployment-v1
  labels:
    app: downstream
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: downstream
      version: v1
  template:
    metadata:
      labels:
        app: downstream
        version: v1
    spec:
      serviceAccountName: downstream-serviceaccount
      containers:
        - name: downstream
          image: downstream:0.1
          ports:
            - containerPort: 80
          env:
            - name: DATABASE_URL
              value: postgres://roach@cockroachdb-public:26257/roach?sslmode=disable
---
apiVersion: v1
kind: Service
metadata:
  name: downstream-service
  labels:
    app: downstream
spec:
  type: ClusterIP
  selector:
    app: downstream
  ports:
    - port: 80
      targetPort: 80
      name: http
      protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: downstream-service
spec:
  host: downstream-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: downstream-service
spec:
  hosts:
    - downstream-service
  http:
    - name: "downstream-service-routes"
      match:
        - port: 80
      route:
        - destination:
            host: downstream-service
            port:
              number: 80
Now I'd like to restrict access to CockroachDB to only downstream-service and to CockroachDB itself (since the nodes need to communicate with each other).
I'm trying to restrict the traffic with something like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all
  namespace: default
spec: {}
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
    - to:
        - operation:
            ports: ["26257"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
    - to:
        - operation:
            ports: ["26257"]
but it doesn't seem to do anything: I can still, for example, access the cockroachdb-public:8080 cluster HTTP UI from downstream-service.
Now when I add the following:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all-to-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: DENY
  rules:
    - to:
        - operation:
            ports: ["26257"]
then all the traffic is blocked (including the traffic between cockroachdb nodes).
What am I doing wrong here?
You are running into a common AuthorizationPolicy pitfall: as written, each of your ALLOW policies contains two independent rules that are OR-ed together:
1. The service account downstream-serviceaccount (or cockroachdb-serviceaccount in the other policy) from the default namespace can access the service with labels app: cockroachdb on any port in the default namespace.
2. Any service account, from any namespace, can access the service with labels app: cockroachdb on port 26257.
In order to make it an AND, you would do this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
      to: # <- remove the dash from here
        - operation:
            ports: ["26257"]
Do the same with the other AuthorizationPolicy object. Also note that you don't need to explicitly create a DENY policy: once a workload has an ALLOW policy, everything that is not explicitly allowed is denied for that workload.
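For completeness, here is a sketch of the downstream policy with the same correction applied; it is just your original object with the dash removed, so that from and to sit in a single rule and are AND-ed:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
      to:
        - operation:
            ports: ["26257"]
With this, only requests coming from the downstream-serviceaccount identity and destined for port 26257 are allowed on the cockroachdb pods.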