No disk on an openstack instance - centos

I am using OpenStack Packstack, Train release. I want to create a CentOS 7 instance from an ISO file. I run the following command:
glance image-create --name "centos" --file CentOS-7-x86_64-DVD-1908.iso \
--disk-format iso --container-format bare --visibility public
After that, I create the instance:
openstack server create --flavor m1.medium --image centos VM \
--nic net-id=9b6e4c51-3955-4603-b0aa-7aa739de7db3 \
--security-group 8a5e9d72-752c-4d3f-be9a-35bc9a7c30a5
Everything works fine up to this point, but when I open the console and start the CentOS installation, I find that no disk is available to install onto. How can I solve this, please?
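One likely cause is that the server only has the ISO to boot from, so the installer sees no target disk. A sketch of one way to handle this (the volume name and size below are assumptions about your setup): create a blank volume and attach it to the instance as the installation target.
openstack volume create --size 20 centos-root   # blank volume for the installer to partition
openstack server add volume VM centos-root      # attach it to the instance booted from the ISO
Once the installation finishes, the volume can be used as a boot source for a new instance instead of the ISO.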

Related

How to install chrony on redhat 8 minimal

I'm using the Keycloak Docker image and need to synchronize time with chrony. However, I cannot install chrony, because it's not in the repository, I assume.
I use the image from https://hub.docker.com/r/jboss/keycloak
It's based on registry.access.redhat.com/ubi8-minimal.
Steps to reproduce:
~$ docker run -d --rm -p 8080:8080 --name keycloak jboss/keycloak
~$ docker exec -it -u root keycloak bash
[root@707c136d9c8a /]# microdnf install chrony
error: No package matches 'chrony'
I'm not able to find a working repo which provides chrony for Red Hat 8 minimal.
Apparently I need to synchronize time on the host system; it has nothing to do with the container itself. Silly me, I need a break.
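For reference, enabling chrony on the host takes a couple of commands (a sketch assuming a RHEL/CentOS 8 host, where chrony lives in the BaseOS repo):
sudo dnf install -y chrony           # install chrony on the host, not in the container
sudo systemctl enable --now chronyd  # start and enable the daemon; containers share the host clock
chronyc tracking                     # verify the host is synchronizing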

What is the best way to install tensorflow and mongodb in docker?

I want to create a Docker container or image with both TensorFlow and MongoDB installed. I have seen that there are Docker images for each application, but I need them to work together: from a MongoDB database I must extract the data to feed a model created in TensorFlow.
So I want to know whether a configuration like that is possible. I have tried using an Ubuntu container and installing the applications I need inside it, but I don't know if there is another way to do it.
Thanks.
Interesting that I found this post; I had just worked out one solution for myself. Maybe not the one for you, though.
What I did was: docker pull mongo and run it as a daemon:
#!/bin/bash
export VOLUME='/home/user/code'
docker run -itd \
--name mongodb \
--publish 27017:27017 \
--volume ${VOLUME}:/code \
mongo
Here, the 'd' in '-itd' means run detached, as a daemon (like a service, not interactive). The --volume option may not be needed here.
Then docker pull tensorflow/tensorflow and run it with:
#!/bin/bash
export VOLUME='/home/user/code'
docker run \
-u 1000:1000 \
-it --rm \
--name tensorflow \
--volume ${VOLUME}:/code \
-w /code \
-e HOME=/code/tf_mongodb \
tensorflow/tensorflow bash
Here:
the -u makes the Docker bash session run with the same ownership (UID/GID) as the host machine;
the --volume maps the host folder /home/user/code to /code inside Docker;
the -w makes the Docker bash session start in /code, which is /home/user/code on the host;
the -e HOME= option sets bash's $HOME folder so that you can later pip install.
Now you have a bash prompt, so you can:
create a virtual env folder under /code (which is mapped to /home/user/code),
activate the venv,
pip install pymongo,
and then connect to the MongoDB instance you run in Docker (localhost may not work; please use the host IP address).
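A minimal sketch of those steps inside the tensorflow container (assuming the image ships Python's venv module; <HOST_IP> is a placeholder for your host machine's IP address):
cd /code
python -m venv tf_mongodb/venv            # create a virtual env on the shared volume
source tf_mongodb/venv/bin/activate       # activate it
pip install pymongo                       # MongoDB client library
# quick connectivity check against the mongo container's published port
python -c "import pymongo; print(pymongo.MongoClient('mongodb://<HOST_IP>:27017').server_info()['version'])"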

Entando 6 Installation Issue

I have been trying to install Entando 6 on my Mac following the instructions at http://docs.entando.com; however, when deploying to Kubernetes I get an error with quickstart-kc-deployer. Has anyone managed to go through the installation successfully?
deployment failure
I am also new to Kubernetes and have been trying to access the logs, but so far I have not been able to get at them to understand what the root cause of the failure is. Help on that is more than welcome as well.
Thanks.
If you're in a local development environment, the best bet would be to try the new instructions at dev.entando.org. If you're installing on a cloud Kubernetes provider, try the updated instructions here.
I've reproduced them here for completeness:
Install Multipass (https://multipass.run/#install)
Launch VM
multipass launch --name ubuntu-lts --cpus 4 --mem 8G --disk 20G
Open a shell: multipass shell ubuntu-lts
Install k3s: curl -sfL https://get.k3s.io | sh -
Download Entando custom resource definitions
curl -L -C - https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/custom-resources.tar.gz | tar -xz
Create custom resources
sudo kubectl create -f dist/crd
Create namespace
sudo kubectl create namespace entando
Download Helm chart
curl -L -C - -O https://raw.githubusercontent.com/entando/entando-releases/v6.2.0/dist/qs/entando.yaml
Configure access to your cluster
IP=$(hostname -I | awk '{print $1}')
sed -i "s/192.168.64.25/$IP/" entando.yaml
If you want to deploy on a cloud provider (EKS, AKS, GKE) then there are new instructions under the Configuration and Operations section at
https://dev.entando.org/next/tutorials
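On the logs question: with the k3s setup above you can inspect the failing pod directly (a generic sketch; <POD_NAME> is a placeholder for whatever kubectl reports, e.g. a quickstart-kc-deployer pod):
sudo kubectl get pods -n entando                 # find the failing pod
sudo kubectl describe pod <POD_NAME> -n entando  # events often show why it failed
sudo kubectl logs <POD_NAME> -n entando          # container logs for the root cause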

How to run cluster initialization script on GCP after creation of cluster

I have created a Google Dataproc cluster, but I now have a requirement to install Presto on it. Presto is provided as an initialization action on Dataproc here; how can I run this initialization action after the cluster has been created?
Most init actions would probably run even after the cluster is created (though I haven't tried the Presto init action).
I like to run gcloud dataproc clusters describe to get the instance names, then run something like gcloud compute ssh <NODE> -- -T sudo bash -s < presto.sh for each node. Reference: How to use SSH to run a shell script on a remote machine?
Notes:
Everything after the -- is passed as arguments to the normal ssh command.
The -T means don't try to create an interactive session (otherwise you'll get a warning like "Pseudo-terminal will not be allocated because stdin is not a terminal.").
I use "sudo bash" because init action scripts assume they're being run as root.
presto.sh must be a copy of the script on your local machine. You could alternatively SSH in and run gsutil cp gs://dataproc-initialization-actions/presto/presto.sh . && sudo bash presto.sh.
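Putting the notes above together, a rough sketch of that loop (the node names are placeholders; use the actual instance names from clusters describe):
gsutil cp gs://dataproc-initialization-actions/presto/presto.sh .
for NODE in my-cluster-m my-cluster-w-0 my-cluster-w-1; do
  # -T: no pseudo-terminal; "sudo bash -s" runs the piped script as root on the node
  gcloud compute ssh "$NODE" -- -T sudo bash -s < presto.sh
done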
But @Kanji Hara is correct in general. Spinning up a new cluster is pretty fast/painless, so we advocate using initialization actions when creating a cluster.
You could use the initialization-actions parameter.
Ex:
gcloud dataproc clusters create $CLUSTERNAME \
--project $PROJECT \
--num-workers $WORKERS \
--bucket $BUCKET \
--master-machine-type $VMMASTER \
--worker-machine-type $VMWORKER \
--initialization-actions \
gs://dataproc-initialization-actions/presto/presto.sh \
--scopes cloud-platform
Maybe this script can help you: https://github.com/kanjih-ciandt/script-dataproc-datalab

Local Kubernetes on CentOS

I am trying to install Kubernetes locally on my CentOS. I am following this blog http://containertutorials.com/get_started_kubernetes/index.html, with appropriate changes to match CentOS and latest Kubernetes version.
The ./kube-up.sh script runs and exits with no errors, but I don't see the server started on port 8080. Is there a way to know what the error was, and is there any other procedure to follow on CentOS 6.3?
The easiest way to install a Kubernetes cluster is using kubeadm. The initial post which details the setup steps is here, and the detailed documentation for kubeadm can be found here. With this you will get the latest released Kubernetes.
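As a rough sketch only (package names and the pod network CIDR below are assumptions; the kubeadm documentation is authoritative), the kubeadm route on CentOS looks something like:
# assumes the Kubernetes yum repo is already configured and swap is disabled
yum install -y kubeadm kubelet kubectl
systemctl enable --now kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR shown for a Flannel-style network; adjust as needed
# make kubectl usable for your user
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config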
If you really want to use the script to bring up the cluster, I did the following:
Install the required packages
yum install -y git docker etcd
Start docker process
systemctl enable --now docker
Install golang
Install the latest Go version, because the default CentOS golang package is old and Kubernetes needs at least go1.7 to compile:
curl -O https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Setup GOPATH
export GOPATH=~/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
Download k8s source and other golang dependencies
Note: this might take some time depending on your internet speed.
go get -d k8s.io/kubernetes
go get -u github.com/cloudflare/cfssl/cmd/...
Start cluster
cd $GOPATH/src/k8s.io/kubernetes
./hack/local-up-cluster.sh
In a new terminal:
alias kubectl=$GOPATH/src/k8s.io/kubernetes/cluster/kubectl.sh
kubectl get nodes