SSL Certificate Error with python-arango Library - kubernetes

I am trying to connect to ArangoDB from a Python application using the python-arango library. I have set up ArangoDB on Kubernetes nodes using this tutorial. My YAML file for the cluster looks like this:
---
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "arango-cluster"
spec:
  mode: Cluster
  image: arangodb/arangodb:3.7.6
  tls:
    caSecretName: arango-cluster-ca
  agents:
    storageClassName: my-local-storage
    resources:
      requests:
        storage: 2Gi
  dbservers:
    storageClassName: my-local-storage
    resources:
      requests:
        storage: 17Gi
  externalAccess:
    type: NodePort
    nodePort: 31200
The setup seems fine, since I can access the web UI as well as the Arango shell. However, when I use the python-arango library to connect my application to the DB, I get a certificate-related error:
Max retries exceeded with url: /_db/testDB/_api/document/demo/10010605 (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
When doing kubectl get secrets, I see arango-cluster-ca there, which I have explicitly mentioned in the YAML file above. I have also set the verify flag to False in the Python code as follows:
db = client.db(name='testDB', verify=False, username='root', password='')
Yet it does not bypass the verification as expected.
I would like to understand what I could have missed - either during setup or in the Python call - that keeps me from getting past this SSL certificate error, or whether it is possible to set the certificate up properly. I tried this Arango tutorial to set up a certificate, but without success.
Thanks.

The only workaround I was able to figure out was to opt for the unsecured route.
Instead of having arango-cluster-ca in the spec.tls.caSecretName field of the ArangoDeployment config file, I set the field to None. That allowed me to connect over plain http without any issues.
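The change described above amounts to this (a minimal sketch of the relevant part of the ArangoDeployment spec):

```yaml
spec:
  tls:
    caSecretName: None   # "None" disables TLS; the cluster then serves plain http
```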
Would still like to know if there is some workaround to get it connected via https, so I am still open to answers, else I would close this.
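For the record, an https route that avoids disabling verification is to hand the operator's CA certificate to the client. Note that in python-arango the verify flag on client.db() only toggles an initial connection check; it does not control TLS verification, which belongs to the underlying HTTP client. A sketch (the namespace and node address are assumptions to adapt):

```shell
# Extract the CA certificate that the ArangoDB operator stored in the
# arango-cluster-ca secret (default namespace assumed) and save it as PEM.
# Secret data comes back base64-encoded, hence the base64 -d step.
kubectl get secret arango-cluster-ca \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
```

The resulting ca.crt can then be handed to the client, e.g. via the verify_override parameter of recent python-arango releases (ArangoClient(hosts='https://<node-ip>:31200', verify_override='ca.crt')), or via a custom HTTP client session in older versions.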

HTTPRoute set a timeout

I am trying to set up a multi-cluster architecture. I have a Spring Boot API that I want to run on a second cluster (for isolation purposes). I have set that up using the gateway.networking.k8s.io API. I am using a Gateway that has an SSL certificate and matches an IP address that's registered to my domain in the DNS registry. I am then setting up an HTTPRoute for each service that I am running on the second cluster. That works fine and I can communicate between our clusters and everything works as intended but there is a problem:
There is a default timeout of 30s and I cannot change it. I want to increase it because the application in the second cluster is a WebSocket server, and I obviously would like our WebSocket connections to stay open for more than 30s at a time. I can see that the backend service created from our HTTPRoute has a timeout of 30s. I found a command to increase it:
gcloud compute backend-services update gkemcg1-namespace-store-west-1-8080-o1v5o5p1285j --timeout=86400
When I run that command the timeout increases and the WebSocket connection is kept alive. But after a few minutes the change gets overridden (I suspect because it is managed by the YAML file). This is the YAML file for my HTTPRoute:
kind: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: public-store-route
  namespace: namespace
  labels:
    gateway: external-http
spec:
  hostnames:
  - "my-website.example.org"
  parentRefs:
  - name: external-http
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /west
    backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store-west-1
      port: 8080
I have tried to add either timeout, timeoutSec, or timeoutSeconds at every level, with no success. I always get the following error:
error: error validating "public-store-route.yaml": error validating data: ValidationError(HTTPRoute.spec.rules[0].backendRefs[0]): unknown field "timeout" in io.k8s.networking.gateway.v1beta1.HTTPRoute.spec.rules.backendRefs; if you choose to ignore these errors, turn validation off with --validate=false
Surely there must be a way to configure this. But I wasn't able to find anything in the documentation referring to a timeout. Am I missing something here?
How do I configure the timeout?
Edit:
I have found this resource: https://cloud.google.com/kubernetes-engine/docs/how-to/configure-gateway-resources
I have been trying to set up an LBPolicy and attach it to the Gateway, HTTPRoute, Service, or ServiceImport, but nothing has made a difference. Am I doing something wrong, or is this not working the way it is supposed to? This is my YAML:
kind: LBPolicy
apiVersion: networking.gke.io/v1
metadata:
  name: store-timeout-policy
  namespace: sandstone-test
spec:
  default:
    timeoutSec: 50
  targetRef:
    name: public-store-route
    group: gateway.networking.k8s.io
    kind: HTTPRoute
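One variant worth trying, sketched from the GKE page linked in the edit above: GKE's backend policies generally attach to the backend (Service or ServiceImport), not to the HTTPRoute. The GCPBackendPolicy resource name and its timeoutSec field are taken from the GKE docs, but should be verified against your GKE version:

```yaml
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: store-timeout-policy
  namespace: sandstone-test
spec:
  default:
    timeoutSec: 86400      # backend service timeout in seconds
  targetRef:
    group: net.gke.io      # attach to the backend, not the route
    kind: ServiceImport
    name: store-west-1
```

Newer Gateway API versions also define a rules[].timeouts.request field directly on HTTPRoute (GEP-1742); whether it is honored depends on the implementation.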

How can I access the Kubernetes Dashboard on a remote machine

I am new to Kubernetes and I am trying to set up a cluster on a remote server.
For this I use microk8s and a server from Hetzner Cloud (https://www.hetzner.com/de/cloud).
I logged into the server with ssh and followed the microk8s installation instructions for Linux (https://microk8s.io/). That all seemed to work fine. My problem now is that I have not found a way to access the kubernetes-dashboard.
I've tried the workaround with NodePort and microk8s kubectl proxy --disable-filter=true, but it does not work and is not recommended for security reasons. With the disable-filter method it is possible to reach the login page, but it does not respond.
I've also tried to access the dashboard from outside using an ssh tunnel and this tutorial: How can I remotely access kubernetes dashboard with token
The tunnel seems to work fine, but I still cannot access the port.
Now I have two main questions:
1: How do you usually use Kubernetes if it does not want you to access the dashboard from outside? Don't you usually run your services on a rented server that is not in your living room? What's the point I simply do not get?
2: How can I access the dashboard?
I would be really happy if anybody could help me. I have been struggling with this problem for a month now. :)
best greetings,
mamo
In order to access K8s services from outside over HTTP, you should configure and use an ingress controller.
Once the ingress controller is running, you will be able to specify a path (route), a port, and a name that point to your service.
Once this is done, you should be able to access the dashboard.
Sample configuration (reference)
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-google
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - kube.mydomain.com
    secretName: tls-secret
  rules:
  - host: kube.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
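Note that the extensions/v1beta1 Ingress API in the sample has since been removed from Kubernetes. On current clusters the same configuration would look roughly like this networking.k8s.io/v1 equivalent (a sketch; host, secret, and service values are carried over from the sample, and the secure-backends annotation is replaced by its successor backend-protocol):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-google
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - kube.mydomain.com
    secretName: tls-secret
  rules:
  - host: kube.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
```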

How do I map a service containing a static webpage under a subpath using ambassador in kubernetes?

This is my YAML file:
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: grafana
spec:
  prefix: /grafana/
  method: GET
  service: monitoring-grafana.default:80
  timeout_ms: 40000
This is the response I get when trying to navigate there:
If you're seeing this Grafana has failed to load its application files
This could be caused by your reverse proxy settings.
If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath
If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build
Sometimes restarting grafana-server can help
Have you read through the Monitoring page on the Ambassador Docs? There's a bit about implementing Prometheus + Grafana if that helps: https://www.getambassador.io/docs/latest/howtos/prometheus/
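Beyond the Ambassador side, the error text itself points at the fix: Grafana must know it is being served under a subpath. A sketch of the relevant grafana.ini settings (serve_from_sub_path assumes Grafana 6.3+; in a container you can set the equivalent GF_SERVER_ROOT_URL and GF_SERVER_SERVE_FROM_SUB_PATH environment variables instead):

```ini
[server]
domain = example.com
root_url = %(protocol)s://%(domain)s/grafana/
serve_from_sub_path = true
```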

Kubernetes access Network Fileshare

Recently we have started using Kubernetes for new projects. We started implementing some of them and are now struggling with one issue: how to access a network file share?
Our Kubernetes cluster is a Linux-based cluster installed on a Windows machine. Services hosted in that cluster need to be able to access a file share that is accessible on that machine (i.e. \\myFileShare\myfolder).
We can't find a solution to this one. We have tried using the "https://www.nuget.org/packages/SharpCifs.Std/" library to access the files over SMB, but it turned out the library doesn't support SMB 2.0.
We were also thinking about mounting this drive as a PersistentVolume, but if I understand correctly a persistent volume is supposed to have its lifecycle managed by Kubernetes, so I don't think it's designed for this kind of thing.
We have searched the internet but didn't find anything, and I'm pretty sure we are not the first people who need to access a network file share from a Kubernetes cluster. Has anyone struggled with this problem before and could provide a solution?
Have a look at cifs-volumedriver or this Kubernetes CIFS Volume Driver.
It should apply to your case, and it works with SMB 2.1.
The following is an example of PersistentVolume that uses the volume driver.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mycifspv
spec:
  capacity:
    storage: 1Gi
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: sec=ntlm,uid=1000
      server: my-cifs-host
      share: /MySharedDirectory
    secretRef:
      name: my-secret
  accessModes:
  - ReadWriteMany
Credentials are passed using a Secret, which can be declared as follows:
apiVersion: v1
data:
  password: ###
  username: ###
kind: Secret
metadata:
  name: my-secret
type: juliohm/cifs
NOTE: Pay attention to the secret's type field, which MUST match the volume driver name. Otherwise the secret values will not be passed to the mount script.
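The ### placeholders in the data section stand for base64-encoded strings: Kubernetes stores Secret data values base64-encoded, so the username and password must be encoded before being pasted into the manifest (kubectl create secret generic ... --from-literal=... does this for you). A quick illustration:

```python
import base64

# Encode credentials the way a Secret manifest's "data" section expects them.
username = base64.b64encode(b"myuser").decode()      # 'bXl1c2Vy'
password = base64.b64encode(b"mypassword").decode()  # 'bXlwYXNzd29yZA=='
print(username, password)
```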
Also, please take a look at this question on Stack Overflow. Its author had the same problem and shows how to solve it.

kubernetes dashboard will not load

I am completely new to Kubernetes, so go easy on me.
I am running kubectl proxy but am only seeing the JSON output. Based on this discussion I attempted to set the memory limits by running:
kubectl edit deployment kubernetes-dashboard --namespace kube-system
I then changed the container memory limit:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
    spec:
      containers:
      - image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          ...
        name: kubernetes-dashboard
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            memory: 1Gi
I still only get the JSON served when I save that and visit http://127.0.0.1:8001/ui
Running kubectl logs --namespace kube-system kubernetes-dashboard-665756d87d-jssd8 I see the following:
Starting overwatch
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization
Successful initial request to the apiserver, version: v1.10.0
Generating JWE encryption key
New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
Initializing JWE encryption key from synchronized object
Creating in-cluster Heapster client
Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
Serving insecurely on HTTP port: 9090
I read through a bunch of links from a Google search on the error but nothing really worked.
Key components are:
Local: Ubuntu 18.04 LTS
minikube: v0.28.0
Kubernetes Dashboard: 1.8.3
Installed via:
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Halp!
Have you considered using the minikube dashboard? You can reach it by:
minikube dashboard
Also, you will get JSON at http://127.0.0.1:8001/ui because that path is deprecated, so you have to use the full proxy URL as stated on the dashboard GitHub page.
If you still want to use this 'external' dashboard for some future non-minikube-related projects, or for some other reason I don't know about, you can reach it with:
kubectl proxy
and then:
http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy/
Note that the documentation uses https, which is not correct in this case (it might be a documentation error, or it might be clarified in the part of the documentation about the web UI, which I suggest you read if you need further information).
Hope this helps.