Unable to run unsupported-workflow: Error 1193: Unknown system variable 'transaction_isolation' - cadence-workflow

When running the unsupported-workflow command on Cadence 16.1 against MySQL 5.7 (Aurora 2.07.2), I'm encountering the following error:
Error: connect to SQL failed
Error Details: Error 1193: Unknown system variable 'transaction_isolation'
I've set $MYSQL_TX_ISOLATION_COMPAT=true. Are there other settings I need to modify in order for this to run?

This was just fixed in https://github.com/uber/cadence/pull/4226, but it's not in a release yet.
You can use the fix either by building the tool yourself or by using the Docker image:
update the Docker image via docker pull ubercadence/cli:master
run the command docker run --rm ubercadence/cli:master --address <> adm db unsupported-workflow --conn_attrs tx_isolation=READ-COMMITTED --db_type mysql --db_address ...
For the SQL tool:
cadence-sql-tool --connect-attributes tx_isolation=READ-COMMITTED ...

Related

How can I use pg_dump in Kubernetes in order to generate dump from a remote PostgreSQL (PGAAS)?

I would like to generate dump from a remote PostgreSQL database (PGAAS) with commands or Python code.
First I tried to do the work locally, but I got an error:
pg_dump: error: server version: 13.9; pg_dump version: 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)
I tried this code:
import subprocess

dump_file = "database_dump.sql"
with open(dump_file, "w") as f:
    subprocess.call(
        ["pg_dump", "-Fp", "-d", "dbdev", "-U", "pgsqladmin",
         "-h", "hostname", "-p", "32000"],
        stdout=f,
    )
How can I have a pod (container) do this work, where the pg_dump version matches the server version, without entering the PGaaS password manually?
pg_dump: error: server version: 13.9; pg_dump version: 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1)
As you can see, this error is caused by a version mismatch between your PGaaS database server and the pg_dump version on your local machine. If your local version is lower than the server version, you can upgrade the local version; follow the PostgreSQL documentation for upgrading your pg version.
If you want to take dumps at regular intervals in an easy way, you can schedule a cron job on your VM to run your code. Since you want to use Kubernetes, build a Docker image with your code in it (based on an image whose pg_dump version matches the server's), create a Kubernetes job to run it on a schedule, and supply the password through an environment variable (for example, from a Kubernetes Secret) so you never have to enter it manually.
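A minimal sketch of the dump script such an image could run, assuming the connection details come in as environment variables (the variable names here are hypothetical — adjust them to however your Secret is mounted) and the password is injected as PGPASSWORD, which pg_dump reads automatically so it never prompts:

```python
import os
import subprocess

def build_pgdump_cmd(host, port, db, user, out_file):
    """Assemble the pg_dump invocation; the password is NOT passed on the
    command line -- pg_dump picks it up from the PGPASSWORD env variable."""
    return ["pg_dump", "-Fp", "-h", host, "-p", str(port),
            "-U", user, "-d", db, "-f", out_file]

def run_dump():
    # Hypothetical env variable names; in a pod spec these would come
    # from a Secret via `env` / `envFrom`.
    cmd = build_pgdump_cmd(
        host=os.environ["PG_HOST"],
        port=os.environ.get("PG_PORT", "5432"),
        db=os.environ["PG_DATABASE"],
        user=os.environ["PG_USER"],
        out_file="/dump/database_dump.sql",
    )
    # PGPASSWORD must already be present in the environment.
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_dump()
```

Keeping the password out of the argument list matters because command-line arguments are visible in the process table; an environment variable injected from a Secret avoids both that and the interactive prompt.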

Run Docker through Xcode XCTestCase

I'm setting up tests for my Xcode multiplatform app. To create a test environment, I created a Docker image that I want to run an XCTestCase against. I understand I'd have to make sure Docker is running before running the test. That said, I'm having problems getting the docker commands (build, run, or kill) to work.
I'm using a Makefile to store the commands and planned to run the docker build and docker run commands in setUpWithError, while running the docker kill command in tearDownWithError. To run the commands I used Process to execute the shell commands. This works with ls, but when I run the docker command I'm told docker: No such file or directory. Through ls I know I'm in the right location, where the files exist. When switching to the terminal, it works.
Is there something blocking docker from being run from my XCTestCase? If so, is there any way to get this to work, or do I need to manually start Docker and the Docker container before running these tests?
Update
Got Docker running; it was a path issue. Rather than saying docker build..., I now give it the entire path, so /usr/local/bin/docker build.... The new issue is that Xcode doesn't give me permission to run this. I get:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create?name=remarkable": dial unix /var/run/docker.sock: connect: operation not permitted.
See 'docker run --help'.
Is there a way to allow me to run this in ONLY this XCTestCase?

How to resolve DNS lookup error when trying to run example microservice application using minikube

Dear StackOverflow community!
I am trying to run https://github.com/GoogleCloudPlatform/microservices-demo locally on minikube, so I am following their development guide: https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/docs/development-guide.md
After I successfully set up minikube (using the virtualbox driver, but I also tried hyperkit with the same results) and execute skaffold run, after some time it ends with the following error:
Building [shippingservice]...
Sending build context to Docker daemon 127kB
Step 1/14 : FROM golang:1.15-alpine as builder
---> 6466dd056dc2
Step 2/14 : RUN apk add --no-cache ca-certificates git
---> Running in 0e6d2ab2a615
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: DNS lookup error
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: DNS lookup error
ERROR: unable to select packages:
git (no such package):
required by: world[git]
Building [recommendationservice]...
Building [cartservice]...
Building [emailservice]...
Building [productcatalogservice]...
Building [loadgenerator]...
Building [checkoutservice]...
Building [currencyservice]...
Building [frontend]...
Building [adservice]...
unable to stream build output: The command '/bin/sh -c apk add --no-cache ca-certificates git' returned a non-zero code: 1. Please fix the Dockerfile and try again..
The error message suggests that DNS does not work. I tried to add 8.8.8.8 to /etc/resolv.conf on the minikube VM, but it did not help. I've noticed that after I re-run skaffold run and it fails again, the content of /etc/resolv.conf returns to its original state, containing 10.0.2.3 as the only DNS entry. Reaching the outside internet and pinging 8.8.8.8 from within the minikube VM works.
Could you point me in a direction for how I can fix this problem, and to how DNS inside minikube/Kubernetes works? I've heard that DNS problems inside a Kubernetes cluster are among the issues you frequently run into.
Thanks for your answers!
Best regards,
Richard
Tried it with the docker driver, i.e. minikube start --driver=docker, and it works. Thanks Brian!
Sounds like the issue was resolved for the OP, but if one is using Docker inside minikube, then the suggestion below worked for me.
Ref: https://github.com/kubernetes/minikube/issues/10830
minikube ssh
sudo vi /etc/docker/daemon.json
# add "dns": ["8.8.8.8"]
# save and exit
sudo systemctl restart docker
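After the edit, /etc/docker/daemon.json would look something like this (assuming the file was previously empty; otherwise merge the "dns" key into the existing settings):

```json
{
  "dns": ["8.8.8.8"]
}
```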

Web server startup failed: Current version is too old. Please upgrade to Long Term Support version firstly

CentOS 7
Docker 20.10.5
java version "1.8.0_162"
PostgreSQL 9.6.2
I try to start SonarQube 7.6.9 in Docker like this:
sudo docker run --name sonarqube -p 9000:9000 --add-host host.docker.internal:host-gateway -e SONARQUBE_JDBC_USERNAME=sonar -e SONARQUBE_JDBC_PASSWORD=sonar -e SONARQUBE_JDBC_URL=jdbc:postgresql://host.docker.internal:5432/sonar sonarqube:7.6.9
But I get error:
ERROR web[][o.s.s.p.PlatformImpl] Web server startup failed: Current version is too old. Please upgrade to Long Term Support version firstly.
P.S.
If I start it without connecting to PostgreSQL, then SonarQube 7.6.9 runs successfully:
sudo docker run --name sonarqube -p 9000:9000 sonarqube:7.6.9
I was getting this error when trying to upgrade from Sonar 6.7.5 to Sonar 8.9.2.
Searching the web, I found that this happens because a migration path is needed between versions.
In my case it was not possible to jump directly from version 6.7.5 to version 8.9.2. First I had to jump to version 7.9.6 and then to version 8.9.2 (you can read more about the migration path at: https://docs.sonarqube.org/latest/setup/before-you-upgrade/).
I hope this helps.

Singularity failing to create slave on Rancher with RHEL 7 instances

I'm trying to deploy Singularity on Rancher with a set of RHEL 7.3 (kernel 3.10.0) instances. Everything else works fine, but the slave node keeps failing to start, giving the following error:
Failed to create a containerizer: Could not create MesosContainerizer:
Failed to create launcher: Failed to create Linux launcher: Failed to
determine the hierarchy where the subsystem freezer is attached
How can I resolve this?
Have you tried this solution?
Try to use the mesos-slave option --launcher=posix.
You can permanently set it with echo 'MESOS_SLAVE_PARAMS="--launcher=posix"' >> restricted/host
With a new image you can also do it the Mesos way: echo MESOS_LAUNCHER=posix >> restricted/env
In the case of Ubuntu (Debian), you can add this value to /etc/default/mesos-slave:
MESOS_LAUNCHER=posix