Singularity failing to create slave on Rancher with RHEL 7 instances - docker-compose

I'm trying to deploy Singularity on Rancher with a set of RHEL 7.3 (3.10.0) instances. Everything else works fine, but the slave node keeps failing to start, giving the following error:
Failed to create a containerizer: Could not create MesosContainerizer:
Failed to create launcher: Failed to create Linux launcher: Failed to
determine the hierarchy where the subsystem freezer is attached
How can I resolve this?

Have you tried this solution?
Try the mesos-slave option --launcher=posix.
You can set it permanently with: echo 'MESOS_SLAVE_PARAMS="--launcher=posix"' >> restricted/host
With a newer image you can also do it the Mesos way: echo MESOS_LAUNCHER=posix >> restricted/env
On Ubuntu (Debian), you can add this value to /etc/default/mesos-slave:
MESOS_LAUNCHER=posix
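Since the question mentions docker-compose on Rancher, the same setting can also be passed as an environment variable on the slave service. A minimal sketch, assuming the slave runs as a compose service (the service name and image below are placeholders, not taken from the question):
version: '2'
services:
  mesos-slave:
    image: your-mesos-slave-image      # placeholder image
    environment:
      # assumed: the POSIX launcher avoids the cgroup freezer hierarchy lookup that fails here
      MESOS_LAUNCHER: posix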


Rancher agent installation: systemctl not found

I have a Rancher installation in the cloud (integrated with Harvester) and a couple of VMs on a local node (with k3OS), created with Harvester.
Now I would like to connect the k3s cluster running in a VM to Rancher, but when I run the agent script Rancher gives me inside the VM, it fails with an error:
systemctl: command not found
Am I doing something wrong?
I found the problem.
When you run a VM with k3OS, a k3s cluster is already started inside the VM, as mentioned before. So I was wrong to choose "Create a cluster"; I should have chosen "Import a cluster" instead. With that choice, the script you run inside the VM works perfectly.
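For reference, the import flow hands you a plain kubectl command rather than a systemd-based install script, so systemctl is never needed. It looks roughly like the following (the server address and token are placeholders generated by your own Rancher UI):
kubectl apply -f https://<your-rancher-server>/v3/import/<generated-token>.yaml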

Using MUP, deployment returned "Removing docker containers. Errors about nonexistent endpoints and containers are normal"

I had been using this MUP config for deployment until recently, when I encountered an issue and had to stop and reboot the instance multiple times.
This caused the Meteor app container to shut down; the MongoDB container kept running fine but wasn't accessible through an SSH tunnel from a MongoDB GUI (running systemctl status mongo shows active status: activating).
While troubleshooting I ran docker ps -a. It shows only the MongoDB container as a running container, with the Meteor app container completely shut down.
I tried running the MUP deployment again in an attempt to get the Meteor app container up and running.
However, I got the error: Removing docker containers. Errors about nonexistent endpoints and containers are normal.
I ran the mup setup command successfully, then tried mup reconfig and got the same error as above; I have attached a screenshot of the error below.
To reproduce this error:
1. Create a meteor app with Iron-meteor.
2. Set up an instance (EC2).
3. Set up deployment with Meteor-up.
4. Deploy your app with Meteor-up.
5. SSH into the instance and run docker ps. You should see at least two running containers, app and mongo respectively.
6. Run a command to stop the app container while the mongo container is running.
7. Finally, go to your project and redeploy with mup.
You should see an error similar to the one above. For step 6, restarting the instance in my case shut down both containers, and I was able to get the mongo container back up and running.
However, I couldn't get the app container running, so I tried redeploying with the expectation that a new app container would be created if one didn't exist on the instance.
UPDATED!
I don't know if this will help, but in my experience, mup likes a fresh instance better than an existing one.
My first step would be a mup stop command. This will shut down the docker containers. Then you can remove them with docker rm, and you can remove the images with docker rmi. Then do a mup setup again, followed by a mup deploy.
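As a rough sketch of that sequence (container and image names are placeholders; check docker ps -a and docker images on your instance for the real ones):
mup stop                                       # stop the containers mup manages
docker rm <app-container> <mongo-container>    # remove the stopped containers
docker rmi <app-image>                         # remove the old app image
mup setup                                      # re-provision the server
mup deploy                                     # build and deploy a fresh app container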
If that doesn't work, you can basically start with a fresh VM, i.e. a new droplet or EC2 instance. This is generally quite successful.

Unable to run unsupported-workflow: Error 1193: Unknown system variable 'transaction_isolation'

When running the unsupported-workflow command on Cadence 16.1 against MySQL 5.7 (Aurora 2.07.2), I'm encountering the following error:
Error: connect to SQL failed
Error Details: Error 1193: Unknown system variable 'transaction_isolation'
I've set $MYSQL_TX_ISOLATION_COMPAT=true. Are there other settings I need to modify in order for this to run?
This was just fixed in https://github.com/uber/cadence/pull/4226, but it is not in a release yet.
You can use the fix either by building the tool yourself or by using the Docker image:
Update the Docker image via docker pull ubercadence/cli:master
Run the command: docker run --rm ubercadence/cli:master --address <> adm db unsupported-workflow --conn_attrs tx_isolation=READ-COMMITTED --db_type mysql --db_address ...
For the SQL tool:
cadence-sql-tool --connect-attributes tx_isolation=READ-COMMITTED ...

How to resolve DNS lookup error when trying to run example microservice application using minikube

Dear StackOverflow community!
I am trying to run the https://github.com/GoogleCloudPlatform/microservices-demo locally on minikube, so I am following their development guide: https://github.com/GoogleCloudPlatform/microservices-demo/blob/master/docs/development-guide.md
After I successfully set up minikube (using the virtualbox driver, though I also tried hyperkit with the same results) and execute skaffold run, after some time it ends up with the following error:
Building [shippingservice]...
Sending build context to Docker daemon 127kB
Step 1/14 : FROM golang:1.15-alpine as builder
---> 6466dd056dc2
Step 2/14 : RUN apk add --no-cache ca-certificates git
---> Running in 0e6d2ab2a615
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/main: DNS lookup error
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.13/community: DNS lookup error
ERROR: unable to select packages:
git (no such package):
required by: world[git]
Building [recommendationservice]...
Building [cartservice]...
Building [emailservice]...
Building [productcatalogservice]...
Building [loadgenerator]...
Building [checkoutservice]...
Building [currencyservice]...
Building [frontend]...
Building [adservice]...
unable to stream build output: The command '/bin/sh -c apk add --no-cache ca-certificates git' returned a non-zero code: 1. Please fix the Dockerfile and try again..
The error message suggests that DNS does not work. I tried adding 8.8.8.8 to /etc/resolv.conf on the minikube VM, but it did not help. I've noticed that after I re-run skaffold run and it fails again, the content of /etc/resolv.conf returns to its original state, containing 10.0.2.3 as the only DNS entry. Reaching the outside internet and pinging 8.8.8.8 from within the minikube VM both work.
Could you point me in a direction for how I might fix this problem and learn how DNS inside minikube/Kubernetes works? I've heard that DNS problems inside a Kubernetes cluster are something you run into frequently.
Thanks for your answers!
Best regards,
Richard
Tried it with docker driver, i.e. minikube start --driver=docker, and it works. Thanks Brian!
Sounds like the issue was resolved for the OP, but if you are using Docker inside minikube, the suggestion below worked for me.
Ref: https://github.com/kubernetes/minikube/issues/10830
minikube ssh
sudo vi /etc/docker/daemon.json
# add "dns": ["8.8.8.8"]
# save and exit
sudo systemctl restart docker
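For reference, a minimal sketch of what /etc/docker/daemon.json could end up looking like after that edit (assuming the file was empty before; if it already contains other keys, add "dns" alongside them):
{
  "dns": ["8.8.8.8"]
}
After restarting Docker inside the minikube VM, re-running skaffold run should let the apk fetch steps resolve dl-cdn.alpinelinux.org.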

How to start up a Kubernetes cluster using Rocket?

I'm using a Chromebook Pixel 2, and it's easier to get Rocket working than Docker. I recently installed Rocket 1.1 into /usr/local/bin, and have a clone of the Kubernetes GitHub repo.
When I try to use ./hack/local-up-cluster.sh to start a cluster, it eventually fails with this message:
Failed to successfully run 'docker ps', please verify that docker is installed and $DOCKER_HOST is set correctly.
According to the docs, k8s supports Rocket. Can someone please guide me about how to start a local cluster without a working Docker installation?
Thanks in advance.
You need to set three environment variables before running ./hack/local-up-cluster.sh:
$ export CONTAINER_RUNTIME=rkt
$ export RKT_PATH=$PATH_TO_RKT_BINARY
$ export RKT_STAGE1_IMAGE=$PATH_TO_STAGE1_IMAGE
This is described in the docs for getting started with a local rkt cluster.
Try running export CONTAINER_RUNTIME="rocket" and then re-running the script.