Prometheus postgresql node_exporter configuration

I want to start monitoring my PostgreSQL servers via Prometheus. Prometheus is up and running.
prometheus.yml:
- job_name: 'postgres-exporter'
  scrape_interval: 5s
  static_configs:
    - targets: ['sql01:9187']
Found this postgresql node exporter: https://github.com/wrouesnel/postgres_exporter
How do I install this exporter? The GitHub README talks about building it via Mage.
I have downloaded the following file via releases: https://github.com/wrouesnel/postgres_exporter/releases/download/v0.4.7/postgres_exporter_v0.4.7_linux-386.tar.gz on my postgresql server.
How do I continue from here? Do I need to install Go first?
I've configured the env var:
export DATA_SOURCE_NAME="postgresql://<adminuser>:<adminpw>@hostname:5432/test_db"
Appreciate any help!
Ty

Why not run it with the provided Docker container?
From their README.md:
docker run --net=host -e DATA_SOURCE_NAME="postgresql://postgres:password@localhost:5432/postgres?sslmode=disable" wrouesnel/postgres_exporter
To answer your question: yes, you will need to install Go to build that project from source. You can skip installing Go by running the Docker image instead.
Edit: Just realized you downloaded the release.
It's as simple as unpacking the tarball (tar -xvf postgres_exporter_v0.4.7_linux-386.tar.gz) and running the binary (./path/to/postgres_exporter), assuming you have the environment variable set.
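A minimal sketch of the remaining steps, assuming the exporter's default port 9187 and placeholder credentials (also note the connection string needs @ before the host, not #):

```shell
# Unpack the release tarball; the extracted directory name may differ
tar -xvf postgres_exporter_v0.4.7_linux-386.tar.gz
cd postgres_exporter_v0.4.7_linux-386

# Placeholder credentials -- substitute your own
export DATA_SOURCE_NAME="postgresql://adminuser:adminpw@localhost:5432/test_db?sslmode=disable"
./postgres_exporter &

# The exporter listens on 9187 by default; verify it serves metrics
curl -s http://localhost:9187/metrics | head
```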


Add Trino dataset to Apache Superset

I currently have Trino deployed in my Kubernetes cluster using the official Trino (trinodb) Helm chart. In the same way I deployed Apache Superset.
Using port forwarding of Trino to 8080 and Superset to 8088, I am able to access the UI for both from localhost, and I am also able to query Trino using the command line client:
./trino --server http://localhost:8080
I don't have any authentication set up.
MySQL is set up correctly as a Trino catalog.
When I try to add Trino as a dataset for Superset, I use either of the following SQLAlchemy URLs:
trino://trino@localhost:8080/mysql
trino://localhost:8080/mysql
When I test the connection from Superset UI, I get the following error:
ERROR: Could not load database driver: TrinoEngineSpec
Please advise how I could solve this issue.
You should install sqlalchemy-trino to make the trino driver available.
Add these lines to your values.yaml file:
additionalRequirements:
  - sqlalchemy-trino
bootstrapScript: |
  #!/bin/bash
  pip install sqlalchemy-trino &&\
  if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
If you want more details about the problem, see this Github issue.
I added two options that do the same thing because in some chart versions additionalRequirements doesn't work, and you may need the bootstrapScript option to install the driver instead.
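After editing values.yaml, the change still has to be rolled out; a sketch with placeholder release and chart names (use whatever you installed with originally):

```shell
# Re-deploy Superset with the updated values so the driver is installed
helm upgrade my-superset superset/superset -f values.yaml

# Wait for the new pods, then confirm the package landed
kubectl rollout status deployment/my-superset
kubectl exec deploy/my-superset -- pip show sqlalchemy-trino
```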

How to create a "Dockerfile" to containerize a "Flutter" app to deploy it on a Kubernetes cluster?

I am just wondering how I should create a Dockerfile for a Flutter app and then deploy it on a Kubernetes cluster.
I found the following Dockerfile and server.sh script on this website, but I am not sure if this is a correct way of doing it:
# Install Operating system and dependencies
FROM ubuntu:22.04
RUN apt-get update
RUN apt-get install -y curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6 python3
RUN apt-get clean
# download Flutter SDK from Flutter Github repo
RUN git clone https://github.com/flutter/flutter.git /usr/local/flutter
# Set flutter environment path
ENV PATH="/usr/local/flutter/bin:/usr/local/flutter/bin/cache/dart-sdk/bin:${PATH}"
# Run flutter doctor
RUN flutter doctor
# Enable flutter web
RUN flutter channel master
RUN flutter upgrade
RUN flutter config --enable-web
# Copy files to container and build
RUN mkdir /app/
COPY . /app/
WORKDIR /app/
RUN flutter build web
# Record the exposed port
EXPOSE 5000
# make server startup script executable and start the web server
RUN ["chmod", "+x", "/app/server/server.sh"]
ENTRYPOINT [ "/app/server/server.sh"]
And:
#!/bin/bash
# Set the port
PORT=5000
# Stop any program currently running on the set port
echo 'preparing port' $PORT '...'
fuser -k 5000/tcp
# switch directories
cd build/web/
# Start the server
echo 'Server starting on port' $PORT '...'
python3 -m http.server $PORT
I did all the steps and it seems to work fine, but since I use Skaffold I don't know how/where to put the following command to automate this step as well (I have already run it manually):
docker run -i -p 8080:5000 -td flutter_docker
I would still like to know whether the files above are the proper/official way of doing this, or whether there is a better way.
EDIT: I created the following Deployment & Service to deploy the built image on a local Kubernetes Kind cluster, but when I run kubectl get pods I cannot find this image, although I do find it with docker images. Why does this happen, and how can I get it running in a Kubernetes pod instead of just sitting in my local Docker images?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: front
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
    - name: client
      protocol: TCP
      port: 3000
      targetPort: 3000
The question (title) is misleading.
There are two parts:
1. How to containerize the app (in this case a Flutter app).
2. How to deploy the app on the k8s cluster.
To deal with the first part, you have the Dockerfile. There is room for improvement, but I think it should work. Then you need to build a container image; please refer to the official documentation. Finally, you need to push the created image to some registry. (We could skip the pushing stage, but to keep things simple I suggest pushing the image.)
For the second part, you should be familiar with basic Kubernetes concepts. You can run a container from a previously built image with the help of the k8s Pod object. To access the application, you need one more k8s object: a Service (of LoadBalancer or NodePort type).
I know things are a bit complex at the initial levels, but please follow a good course or book. I have gone through the blog post you shared, and it covers only the first part, not the second; you will have a container image at the end of that post.
I suggest going through the free playground offered by Killer Shell if you don't want to set up a k8s cluster on your own, which is again another learning curve. Skip the first tile on that page (it is just a playground); from the second tile onward they have enough material.
Improvements for Edited Question:
server.sh: maintaining a startup script is quite standard practice if you have complex logic to start the process. We could skip this file, but in that case a few steps would move into the Dockerfile.
kubectl get pods does not show you images; it shows the running pods in the cluster (in the default namespace). It is not clear how you ran the image and connected it to the cluster, so please add the output of the command.
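Since this is a Kind cluster, one likely cause is that the locally built image was never made available to the cluster's nodes: Kind nodes do not see Docker's local image store. A sketch, assuming the image name front from the question, the default Kind cluster, and a hypothetical manifest filename:

```shell
# Build the image locally, then load it into the Kind nodes so the
# kubelet can find it without pulling from a registry
docker build -t front .
kind load docker-image front

# Apply the manifests and watch the pods come up
kubectl apply -f client.yaml
kubectl get pods
```

Also note that with an image named front (implicitly :latest), Kubernetes defaults to imagePullPolicy: Always and will try to pull from a registry; setting imagePullPolicy: IfNotPresent on the container avoids that.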
A few pointers to improve the Dockerfile:
Use a base image with a small footprint. ubuntu:22.04 has many packages pre-installed that you may not need; Ubuntu also has slim images, or try to find a ready-made Flutter image.
Try to reduce the number of RUN statements. You can combine 2-3 commands into one, which reduces the number of layers in the image.
Instead of RUN git clone, clone the code before docker build and COPY/ADD it into the image. That way you control which files end up in the image, and you don't need the git tool installed just for cloning.
RUN ["chmod", "+x", "/app/server/server.sh"] and RUN mkdir are both unnecessary if you write the Dockerfile smartly (COPY creates the target directory, and permissions can be set on the host before copying).
Dockerfiles should be clean, crisp, and precise.
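As a rough illustration of those pointers, not a definitive version: the apt packages are trimmed to an assumed minimum (git stays because Flutter's own tooling uses it for versioning), the SDK is assumed to be checked out on the host next to the app, and the server.sh script is replaced by a direct python3 command:

```dockerfile
# Leaner variant: one apt layer, no git clone inside the build
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl git unzip python3 ca-certificates && \
    apt-get clean && rm -rf /var/lib/apt/lists/*
# Flutter SDK copied from a checkout made on the host before `docker build`
COPY flutter /usr/local/flutter
ENV PATH="/usr/local/flutter/bin:${PATH}"
# COPY creates /app on its own; no mkdir needed
COPY . /app/
WORKDIR /app
RUN flutter config --enable-web && flutter build web
EXPOSE 5000
# Serve the build output directly; replaces server.sh
CMD ["python3", "-m", "http.server", "5000", "--directory", "build/web"]
```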
PS: Sorry, but this is not a classroom section. I know this is a bit complex for beginners, but please try to learn from some good sources/books.

Pip installing a package inside of a Kubernetes cluster

I have installed Apache Superset from its Helm Chart in a Google Cloud Kubernetes cluster. I need to pip install a package that is not installed when installing the Helm Chart. If I connect to the Kubernetes bash shell like this:
kubectl exec -it superset-4934njn23-nsnjd /bin/bash
Inside, there's no python available, no pip, and apt-get doesn't find most packages.
I understand that the packages are listed in the Dockerfile when the image is built, so I suppose I need to fork the Docker image, modify the Dockerfile, push the image to a container registry, and make a new Helm Chart that runs this image.
But all this seems too complicated for a simple pip install; is there a simpler way to do this?
Links:
Docker- https://hub.docker.com/r/amancevice/superset/
Helm Chart - https://github.com/helm/charts/tree/master/stable/superset
As @Murli mentioned, you should use pip3. However, one thing to remember is that Helm is for managing k8s, i.e. what goes into the cluster should be traceable. So I recommend the following:
$ helm get stable/superset
Then modify the values.yaml. In my case, I added jenkins-job-builder via pip3:
initFile: |-
  pip3 install jenkins-job-builder
  /usr/local/bin/superset-init --username admin --firstname admin --lastname user --email admin@fab.org --password admin
  superset runserver
and just pass the values.yaml to helm install.
$ helm install --values=values.yaml stable/superset
That's it.
$ kubectl exec -it doltish-gopher-superset-696448b777-8b9c6 which jenkins-jobs
/usr/local/bin/jenkins-jobs
$
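For a quick check you can also pip3 install directly inside the running pod; note this is ephemeral and is lost whenever the pod restarts, which is why baking it into values.yaml as above is the durable fix (pod and package names below are placeholders):

```shell
# Ephemeral: installs only into this pod instance
kubectl exec -it doltish-gopher-superset-696448b777-8b9c6 -- pip3 install some-package

# Confirm it landed
kubectl exec -it doltish-gopher-superset-696448b777-8b9c6 -- pip3 show some-package
```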
The Dockerfile seems to install the python3 package, so try python3 or pip3 instead of python/pip.
Making the container yourself is a little more dev work, but many fewer alerts from PagerDuty.

Local Kubernetes on CentOS

I am trying to install Kubernetes locally on my CentOS. I am following this blog http://containertutorials.com/get_started_kubernetes/index.html, with appropriate changes to match CentOS and latest Kubernetes version.
The ./kube-up.sh script runs and exits with no errors, yet I don't see the server started on port 8080. Is there a way to find out what the error was, and is there any other procedure to follow on CentOS 6.3?
The easiest way to install a Kubernetes cluster is using kubeadm. The initial post detailing the setup steps is here, and the detailed documentation for kubeadm can be found here. With this you will get the latest released Kubernetes.
If you really want to use the script to bring up the cluster, I did following:
Install the required packages
yum install -y git docker etcd
Start docker process
systemctl enable --now docker
Install golang
Use the latest Go release, because the default CentOS golang package is old and Kubernetes needs at least go1.7 to compile:
curl -O https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Setup GOPATH
export GOPATH=~/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
Download k8s source and other golang dependencies
Note: this might take some time depending on your internet speed
go get -d k8s.io/kubernetes
go get -u github.com/cloudflare/cfssl/cmd/...
Start cluster
cd $GOPATH/src/k8s.io/kubernetes
./hack/local-up-cluster.sh
In new terminal
alias kubectl=$GOPATH/src/k8s.io/kubernetes/cluster/kubectl.sh
kubectl get nodes

How to start up a Kubernetes cluster using Rocket?

I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me about how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
Try running export CONTAINER_RUNTIME="rkt" and then re-running the script.