I currently have Trino deployed in my Kubernetes cluster using the official Trino (trinodb) Helm chart. I deployed Apache Superset the same way.
With port forwarding (Trino on 8080, Superset on 8088), I can access both UIs from localhost, and I can also query Trino with the command-line client:
./trino --server http://localhost:8080
I don't have any authentication set up, and MySQL is set up correctly as a Trino catalog. But when I try to add Trino as a database in Superset using either of the following SQLAlchemy URIs:
trino://trino@localhost:8080/mysql
trino://localhost:8080/mysql
and test the connection from the Superset UI, I get the following error:
ERROR: Could not load database driver: TrinoEngineSpec
Please advise how I could solve this issue.
You need to install sqlalchemy-trino to make the Trino driver available to Superset.
Add these lines to your values.yaml file:
additionalRequirements:
  - sqlalchemy-trino

bootstrapScript: |
  #!/bin/bash
  pip install sqlalchemy-trino &&\
  if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
If you want more details about the problem, see this GitHub issue.
I added two options that do the same thing because in some chart versions additionalRequirements doesn't work, and you may need the bootstrapScript option to install the driver.
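Once the driver is installed, sqlalchemy-trino expects a URI of the form trino://<user>@<host>:<port>/<catalog>. One caveat worth noting: from inside the Superset pod, localhost will not reach Trino, so point the URI at the Trino Kubernetes service instead. A sketch with a hypothetical service name and namespace:

# substitute the actual Trino service name and namespace from your Helm release
trino://trino@trino.default.svc.cluster.local:8080/mysql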
I have set up Rancher (RKE, Kubernetes) for my application.
The application uses Postgres, so I set up the Crunchy Data Postgres Operator and created a Postgres cluster with it.
Everything is fine, but now I want to see pg_activity for my PostgreSQL.
How can I see the activity of the whole Postgres cluster?
You can use the monitoring tools in Rancher to monitor Postgres.
Apart from that, you can shell into the respective database pod (kubectl exec) and run CLI commands there to check the output; see the sketch after the links below.
In Rancher, you can also use a client tool to connect to the cluster and run CLI commands to check pg_activity.
Client docker image : https://hub.docker.com/r/jbergknoff/postgresql-client/
You can also deploy a GUI Docker client on Rancher and use it.
GUI Postgres client : https://hub.docker.com/r/dpage/pgadmin4/
GUI Example : https://dataedo.com/kb/query/postgresql/list-database-sessions#:~:text=Using%20pgAdmin,all%20connected%20sessions%20(3).
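As a concrete sketch of the CLI route (the pod name, user, and database are placeholders, and this assumes psql is available in the image, as it is in the Crunchy Data containers):

# list current sessions straight from pg_stat_activity
kubectl exec <postgres-pod> -- psql -U postgres -c "SELECT pid, usename, state, query FROM pg_stat_activity;"

pg_activity itself is an interactive top-like tool; if it isn't installed in the image, querying pg_stat_activity gives you the same underlying session data.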
I want to start monitoring my PostgreSQL servers via Prometheus. Prometheus is up and running.
prometheus.yml:
- job_name: 'postgres-exporter'
  scrape_interval: 5s
  static_configs:
    - targets: ['sql01:9187']
Found this PostgreSQL exporter: https://github.com/wrouesnel/postgres_exporter
How do I install this exporter? The GitHub README talks about building it via Mage.
I have downloaded the following file from the releases page onto my PostgreSQL server: https://github.com/wrouesnel/postgres_exporter/releases/download/v0.4.7/postgres_exporter_v0.4.7_linux-386.tar.gz
How do I continue from here? Do I need to install Go first?
I've configured the env var:
export DATA_SOURCE_NAME="postgresql://<adminuser>:<adminpw>@hostname:5432/test_db"
Appreciate any help, thanks!
Why not run it with the provided Docker container?
From their README.md:
docker run --net=host -e DATA_SOURCE_NAME="postgresql://postgres:password@localhost:5432/postgres?sslmode=disable" wrouesnel/postgres_exporter
To answer your question: yes, you would need to install Go to build that project. You can skip installing Go by running the Docker image instead.
Edit: Just realized you downloaded the release.
It's as simple as unzipping the tarball (tar -xvf postgres_exporter_v0.4.7_linux-386.tar.gz) and running the binary (./path/to/postgres_exporter), assuming you have the environment variable set.
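Putting it together on the server (a minimal sketch; the unpacked directory name is an assumption, so check what the tarball actually contains):

# unpack the release binary
tar -xvf postgres_exporter_v0.4.7_linux-386.tar.gz
# connection string the exporter reads at startup; credentials are placeholders
export DATA_SOURCE_NAME="postgresql://<adminuser>:<adminpw>@hostname:5432/test_db?sslmode=disable"
# serves metrics on :9187 by default, matching the sql01:9187 target in prometheus.yml
./postgres_exporter_v0.4.7_linux-386/postgres_exporter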
I'm trying to get Keycloak set up as a Helm chart requirement to run some integration tests. I can bring it up and run it, but I can't figure out how to set up the realm and client I need. I've switched over to the 1.0.0 stable release that came out today:
https://github.com/kubernetes/charts/tree/master/stable/keycloak
I wanted to use the keycloak.preStartScript defined in the chart and run the /opt/jboss/keycloak/bin/kcadm.sh admin script to do this, but apparently by "pre start" they mean before the server is brought up, so kcadm.sh can't authenticate. If I leave out keycloak.preStartScript, I can shell into the Keycloak container and run the kcadm.sh commands I want after it's up and running, but they fail as part of the pre-start script.
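For reference, this is the kind of post-startup invocation that does work (a sketch; the pod name is a placeholder, and the kcadm.sh path and credentials match the chart values below):

# authenticate kcadm.sh against the running server, then create the realm
kubectl exec <keycloak-pod> -- /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
kubectl exec <keycloak-pod> -- /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true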
Here's my requirements.yaml for my chart:
dependencies:
  - name: keycloak
    repository: https://kubernetes-charts.storage.googleapis.com/
    version: 1.0.0
Here's my values.yaml file for my chart:
keycloak:
  keycloak:
    persistence:
      dbVendor: H2
      deployPostgres: false
    username: 'admin'
    password: 'test'
    preStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o
      CID=$(/opt/jboss/keycloak/bin/kcadm.sh create clients -r foo -s clientId=foo -s 'redirectUris=["http://localhost:8080/*"]' -i)
      /opt/jboss/keycloak/bin/kcadm.sh get clients/$CID/installation/providers/keycloak-oidc-keycloak-json
  persistence:
    dbVendor: H2
    deployPostgres: false
A side annoyance is that I need to define the persistence settings in both places, or it either fails or brings up PostgreSQL in addition to Keycloak.
I tried this too and also hit this problem, so I have raised an issue. I prefer to use -Dimport with a realm .json file, but your points suggest a postStartScript option would make sense, so I've included both in the PR on that issue.
The Keycloak chart has been updated. Have a look at these PRs:
https://github.com/kubernetes/charts/pull/5887
https://github.com/kubernetes/charts/pull/5950
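For reference, the -Dimport approach mentioned above amounts to starting Keycloak with a system property that points at an exported realm file. A rough sketch of what that could look like in values.yaml (the extraArgs value name and the mounted file path are assumptions; check the PRs above for the options the chart actually exposes):

keycloak:
  keycloak:
    # assumption: the updated chart passes extra server arguments through a value like this
    extraArgs: -Dkeycloak.import=/realm/foo-realm.json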
I am trying to install Kubernetes locally on my CentOS machine. I am following this blog http://containertutorials.com/get_started_kubernetes/index.html, with appropriate changes to match CentOS and the latest Kubernetes version.
The ./kube-up.sh script runs and exits with no errors, but I don't see the server started on port 8080. Is there a way to find out what the error was, and is there any other procedure to follow on CentOS 6.3?
The easiest way to install a Kubernetes cluster is using kubeadm. The initial post that details the setup steps is here, and the detailed documentation for kubeadm can be found here. With this you will get the latest released Kubernetes.
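As a rough sketch of the kubeadm flow from its documentation (run on the master node after installing docker and the kubeadm/kubelet packages):

# initialize the control plane
kubeadm init
# make kubectl work for your user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config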
If you really want to use the script to bring up the cluster, I did the following:
Install the required packages
yum install -y git docker etcd
Start the docker service
systemctl enable --now docker
Install golang
Use the latest Go release, because the default CentOS golang package is old and Kubernetes needs at least go1.7 to compile:
curl -O https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Set up GOPATH
export GOPATH=~/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
Download k8s source and other golang dependencies
Note: this might take some time, depending on your internet speed
go get -d k8s.io/kubernetes
go get -u github.com/cloudflare/cfssl/cmd/...
Start cluster
cd $GOPATH/src/k8s.io/kubernetes
./hack/local-up-cluster.sh
In a new terminal
alias kubectl=$GOPATH/src/k8s.io/kubernetes/cluster/kubectl.sh
kubectl get nodes
I'm creating a container that connects to a Cloud SQL database. When I run the image with Kubernetes, it does not have an external IP that I can whitelist so the new image can connect to the database. And since this is part of the init configuration, I can't wait to find out the public IP before adding it to the database whitelist.
I know there are ways to connect to a database through services in the same cluster, but I can't figure out how to connect to the Cloud SQL instance provided by Google.
There are two ways to solve that:
The first option is to use the Cloud SQL proxy, following the instructions at: https://cloud.google.com/sql/docs/sql-proxy
In your Docker image you need to ensure that fuse is available; it wasn't in my case (using ubuntu:trusty-20160119 as the base image). If you need to enable it, add the following steps to your Dockerfile:
# install fusermount
RUN apt-get update && apt-get install -y build-essential wget
RUN wget https://github.com/libfuse/libfuse/releases/download/fuse_2_9_5/fuse-2.9.5.tar.gz
RUN tar -xzvf fuse-2.9.5.tar.gz
RUN cd fuse-2.9.5 && ./configure && make -j8 && make install
Then at the startup of your container you must run a script that opens the socket, as described in https://cloud.google.com/sql/docs/sql-proxy#example_proxy_invocations_and_connection_strings.
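A minimal sketch of such an invocation (the instance connection name is a placeholder; -dir puts the unix sockets under /cloudsql, the layout the docs above use):

# start the proxy; the app then connects via /cloudsql/<PROJECT>:<REGION>:<INSTANCE>
./cloud_sql_proxy -dir=/cloudsql -instances=<PROJECT>:<REGION>:<INSTANCE> &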
The second way is simply to whitelist the IPs of the nodes that make up the Kubernetes cluster in Cloud SQL.
I prefer the first option, because it works on any machine where I deploy the image, and I don't need to worry about adding or removing IPs if I add more nodes to the Kubernetes cluster.