Problems with NVIDIA-Linux-x86_64-470.63.01-grid.run to Deploy to Cloud Run - deployment

I'm trying to deploy on Google Cloud. Here is my Dockerfile:
FROM ubuntu:20.04
RUN apt update
RUN apt -y install sudo
RUN apt -y install curl
RUN sudo apt install -y build-essential
RUN curl -O https://storage.googleapis.com/nvidia-drivers-us-public/GRID/GRID13.0/NVIDIA-Linux-x86_64-470.63.01-grid.run
RUN chmod +x NVIDIA-Linux-x86_64-470.63.01-grid.run
RUN sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run
FROM python:3.8
RUN adduser meat
RUN passwd -d meat
USER meat
WORKDIR /home/meat
RUN python3 -m venv meat-env
RUN /bin/bash -c "source meat-env/bin/activate"
RUN /usr/local/bin/python -m pip install --upgrade pip
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
and I got this error:
Step 8/20 : RUN sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run
---> Running in 811998f9cea8
Verifying archive integrity...
OK
Uncompressing NVIDIA Accelerated Graphics Driver for Linux-x86_64 470.63.01
...............
and several dots (.) later
Error opening terminal: unknown.
unable to stream build output: The command '/bin/sh -c sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run' returned a non-zero code: 1
Failed to build the app. Error: unable to stream build output: The command '/bin/sh -c sudo /bin/bash NVIDIA-Linux-x86_64-470.63.01-grid.run' returned a non-zero code: 1
Did I do something wrong in my Dockerfile? Or is there a different way to run this command?

@DazWilkin's comment is correct:
It's not possible to install a graphics driver on fully managed Cloud Run because it doesn't expose any GPUs.
You should deploy to Cloud Run for Anthos (GKE) instead, but you'll need to configure your GKE cluster to provide a GPU. According to the documentation, you should follow these steps (a command sketch for them follows below):
Add a GPU-enabled node pool to your GKE cluster.
In this step, you can check the Enable Virtual Workstation (NVIDIA GRID) option. Choose a GPU such as NVIDIA Tesla T4, P4 or P100 to enable NVIDIA GRID.
Install NVIDIA's device drivers on the nodes.
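For reference, a minimal sketch of those two steps with the gcloud and kubectl CLIs might look like this; the cluster name, zone, machine type and GPU type are placeholders you would adapt to your setup:
# 1. Add a GPU-enabled node pool (NVIDIA Tesla T4 in this example)
gcloud container node-pools create gpu-pool \
  --cluster my-cluster --zone us-central1-a \
  --machine-type n1-standard-4 \
  --accelerator type=nvidia-tesla-t4,count=1 \
  --num-nodes 1
# 2. Install NVIDIA's device drivers on the nodes via Google's driver-installer DaemonSet
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml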
You can now create a service that will consume GPUs and deploy the image to Cloud Run for Anthos:
Setting up your service to consume GPUs
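Deploying the image to that cluster would then look roughly like the sketch below; the service, image, cluster and location names are placeholders, and the GPU limits for the service itself are configured as described in the linked page:
gcloud run deploy my-service \
  --image gcr.io/MY_PROJECT/my-image \
  --platform gke \
  --cluster my-cluster \
  --cluster-location us-central1-a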

Related

ECS container exit code 2

I am creating an ECS cluster with the Docker image library/wordpress:latest and I get the desired task in a running state. But when I build this image using the following Dockerfile, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script inside the container.
I used a WordPress image and built my own image from it, thinking it would keep the container up and my script would be executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was service stage-fe-auto has started 1 tasks: task e83587e734c94f77. When I opened the task details, it showed Exit Code 2 and Working directory /home/app, but in my Dockerfile the working directory is different. Not sure what I did wrong.

MongoDB connection failed in ubuntu container

I created a Docker container from the base image ubuntu:18.04 and installed MongoDB using the shell inside this container. Then I committed and exported this container and used the resulting image as the base image for a new container, in which I need MongoDB to be running for further processing. I am using a Dockerfile for both containers.
The first Dockerfile is:
#getting base image ubuntu
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y apt-utils wget gnupg gnupg2 curl
ENTRYPOINT ["tail", "-f", "/dev/null"]
CMD ["echo","Create base image"]
After building this Dockerfile, I accessed the shell, installed MongoDB, created a new database and collection, then committed and exported the container to a tar file.
Next, I imported this tar file as the image mongoubuntu:1.0 (see the command sketch below).
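As a side note, the commit/export/import workflow described above would look roughly like the sketch below; the container name mongo-base is a hypothetical placeholder:
# commit the modified container to a local image (preserves metadata such as ENTRYPOINT/CMD)
docker commit mongo-base mongoubuntu:1.0
# or: export the container filesystem to a tar archive and re-import it as an image
# (import keeps only the filesystem, not the original image metadata)
docker export mongo-base > mongoubuntu.tar
docker import mongoubuntu.tar mongoubuntu:1.0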
Now, I create another Dockerfile in which I want to run some commands which need MongoDB status as running.
The second Dockerfile is as follows:
FROM mongoubuntu:1.0
RUN apt-get update && apt-get -y install sudo && apt-get -y install curl
RUN apt-get install -y wget #install wget lib
COPY . /
RUN chmod +x /genesis.sh
RUN /genesis.sh
RUN chmod +x /wallet.sh
RUN /wallet.sh >> /config.sh
ENTRYPOINT ["tail", "-f", "/dev/null"]
CMD ["echo","Install EOS Complete"]
When I run docker build -t eossample:1.5 . the build is successful. But when I check the bash shell of a new container created from this image, MongoDB is not running. If I start MongoDB manually, it shows a running status. Please help: how can I start MongoDB from the Dockerfile?

mongodb: unrecognized service in Docker

I created my own Docker container that includes the latest version of ubuntu, python3.7 and mongodb.
Dockerfile
FROM ubuntu:latest
MAINTAINER Docker
# Update apt-get sources AND install MongoDB
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y software-properties-common
RUN apt install -y gnupg2
RUN gpg2 --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys F3B1AA8B
# Installation:
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get install -y python3.7
#Mongodb
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
RUN apt-add-repository 'deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse'
RUN apt-get update
RUN apt-get install -y mongodb-org
# Create the MongoDB data directory
RUN mkdir -p /data/db
# Create the MongoDB data directory
RUN mkdir -p /data/code
RUN mongod --version
RUN mongod --dbpath /data/db --fork --logpath /data/db/log
# COPY some Code to Container
COPY dev /data/code
# Installing pip for python modules
RUN apt-get install -y python3-pip
# Install modules
WORKDIR /data/code/
RUN pip3 install -r requirements.txt
RUN service mongodb start
RUN python3 main.py
RUN python3 server.py
EXPOSE 80
# Set /bin/bash as the dockerized entry-point application
ENTRYPOINT ["/bin/bash"]
when I run the build command:
docker build -t myContainer --no-cache .
it runs successfully up to the point where MongoDB should start as a service
.
.
.
Removing intermediate container 3d43e1d1cd96
---> 62f10ce67e07
Step 21/25 : RUN service mongodb start
---> Running in 42e08e7d7638
mongodb: unrecognized service
How do I start the service? I'm trying to start the service with the command: service mongodb start. Isn't that correct? And what does the line:
Removing intermediate container 3d43e1d1cd96
mean?
Firstly, it should be service mongod start, I guess. But this is not going to solve your problem.
While using Docker, your main process has to be a foreground process; service mongod start will go into the background and your container will exit immediately.
You should run mongod as the foreground process, as below:
CMD ["mongod"]
Put the above CMD at the end of the Dockerfile to make sure your container runs mongod.
Official Dockerfile -
https://github.com/docker-library/mongo/blob/40056ae591c1caca88ffbec2a426e4da07e02d57/3.4/Dockerfile
If you want to run multiple processes, use docker ENTRYPOINT in conjunction with supervisord or use a wrapper script.
Ref - https://docs.docker.com/config/containers/multi-service_container/
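For the wrapper-script approach mentioned above, a minimal sketch could look like this; the script name start.sh is hypothetical, and it assumes mongod is already installed in the image and that main.py is your application's entry point:
#!/bin/bash
# start.sh - start MongoDB in the background, then keep the real application in the foreground
set -e
# start mongod as a background daemon (--fork requires --logpath)
mongod --dbpath /data/db --fork --logpath /data/db/mongod.log
# exec replaces the shell, so the application becomes the container's main foreground process
exec python3 main.py
and at the end of the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]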

Not all pre-reqs install correctly for Hyperledger Composer

I've been following the Hyperledger Composer tutorial. I managed to install Ubuntu 16.04 on Hyper-V on my Windows 10 Enterprise. I then started on the following pre-req installation instructions:
https://hyperledger.github.io/composer/installing/installing-prereqs.html
I ran the prereqs-ubuntu.sh script. It ran fine with no errors. I examined the logs and saw that it had successfully installed npm 5.6.0, node 8.9.4, docker 17.12.x, docker-compose 1.13.x, and Python 2.7.12.
However, when I run $ sudo npm --version
it tells me that the npm command is not found
Same with $ sudo node --version
Not found...?!
Why would that be when the log clearly shows that npm and node were successfully installed?!
Well, here is what I did and managed to work through:
--> install nodejs and npm:
sudo snap install node --classic --channel=8
so you get the latest Node 8.
--> then, to solve the "sudo" problem with node, specify the npm prefix:
npm config set prefix ~/.node_modules
add the following to .bash_profile
export PATH=$HOME/.node_modules/bin:$PATH
Now the packages will install into your user directory and no permissions will be harmed.
--> install nvm (to get exactly node 8.9 version on the next step):
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
or
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
Verify:
command -v nvm
which should output 'nvm' if the installation was successful.
--> get and set node 8.9 version:
nvm install v8.9.0
nvm use 8.9.0
--> reset PATHs:
echo export PATH="$HOME/npm/bin:$PATH" >> ~/.bashrc
npm config set prefix ~/npm
echo "export NODE_PATH=$NODE_PATH:/home/$USER/npm/lib/node_modules" >> ~/.bashrc && source ~/.bashrc
--> at this stage the previous Docker setup should be destroyed:
docker kill $(docker ps -q)
docker rm $(docker ps -aq)
docker rmi $(docker images dev-* -q)
--> Installing the rest of prereqs:
sudo apt-add-repository -y ppa:git-core/ppa
sudo apt-get update
# install git
sudo apt-get install -y git
# Ensure that CA certificates are installed
sudo apt-get -y install apt-transport-https ca-certificates
# Add Docker repository key to APT keychain
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Update package lists
sudo apt-get update
# Verifies APT is pulling from the correct Repository
sudo apt-cache policy docker-ce
# Install Docker
echo "# Installing Docker"
sudo apt-get -y install docker-ce
# Add user account to the docker group
sudo usermod -aG docker $(whoami)
# Install docker compose
echo "# Installing Docker-Compose"
sudo curl -L "https://github.com/docker/compose/releases/download/1.13.0/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Install unzip, required to install hyperledger fabric.
sudo apt-get -y install unzip
--> now you can install the Fabric dev environment (assuming the rest of the prereq components are available):
npm install -g composer-cli
etc.
I think you need to log out, close the shell, and then start a new session, as the shell caches your environment.
Also, after installation, the use of sudo is not recommended, as mentioned on the IBM Hyperledger website.
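If logging out is inconvenient, re-sourcing the profile or replacing the current shell with a fresh login shell usually has the same effect; a minimal sketch, assuming the PATH changes were written to ~/.bashrc:
# pick up the new PATH in the current shell
source ~/.bashrc
# or replace the current shell with a fresh login shell
exec $SHELL -l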

How to start railo service in background on the Docker

My name is Trang.
I have created Docker image on https://registry.hub.docker.com/u/trangunghoa/railo-mysql/
It runs OK.
Now I have created a Dockerfile, but I can't start the Railo service automatically. Please help me.
I have tried starting it with several commands in a shell script:
exec /opt/railo/railo_ctl start
exec /opt/railo/railo_ctl start -D FOREGROUND
service railo_ctl restart
exec service railo_ctl restart
None of these commands work.
I looked inside your Dockerfile and identified the problem.
You can only use one CMD inside a Dockerfile (if you use multiple CMD instructions, only the last one takes effect). More info: https://docs.docker.com/reference/builder/#cmd
You need to know that Docker isn't made for running multiple processes without a little bit of help. I suggest using supervisord: https://docs.docker.com/articles/using_supervisord/
You can't use RUN service inside a Dockerfile. The reason is simple: the service command is executed, starts a daemon, and then reports success. The temporary build container is then killed (and the daemon with it), and only the filesystem changes are committed.
What your Dockerfile should look like:
FROM ubuntu:trusty
MAINTAINER Trang Lee <trangunghoa#gmail.com>, Seta International Vietnam(info#setacinq.vn)
#Install base packages
RUN apt-get -y update
RUN apt-get install -y openjdk-7-jre-headless
RUN apt-get install -y tomcat7 tomcat7-admin apache2 libapache2-mod-jk
RUN apt-get purge -y openjdk-6-jre-headless icedtea-6-jre-cacao openjdk-6-jre-lib icedtea-6-jre-jamvm
RUN apt-get install -y supervisor
# config to enable .htaccess
ADD apache_default /etc/apache2/sites-available/000-default.conf
RUN a2enmod rewrite
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
# start service
ADD start-apache2.sh /start-apache2.sh
ADD railo.sh /railo.sh
ADD run.sh /run.sh
RUN chmod +x /*.sh
#RUN sudo service apache2 start
# install railo
RUN apt-get install -y wget
RUN wget http://www.getrailo.org/railo/remote/download42/4.2.1.000/tomcat/linux/railo-4.2.1.000-pl2-linux-x64-installer.run
RUN chmod -R 744 railo-4.2.1.000-pl2-linux-x64-installer.run
RUN ./railo-4.2.1.000-pl2-linux-x64-installer.run --mode unattended --railopass "123456"
# remove railo setup
#RUN rm -rf railo-4.2.1.000-pl2-linux-x64-installer.run
#RUN sudo service railo_ctl start
RUN mkdir -p /etc/service/railo
ADD start-railo.sh /etc/service/railo/run
RUN chmod 755 /etc/service/railo/run
# EXPOSE <port>
EXPOSE 80 8888
#CMD ["/railo.sh"]
#CMD ["/start-apache2.sh"]
# Supervisord configuration
RUN mkdir /var/log/supervisor
ADD ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord"]
With your supervisord.conf file looking something like this:
[supervisord]
nodaemon=true
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
[program:railo]
command=/bin/bash -c "exec /opt/railo/railo_ctl start -D FOREGROUND"