I am new to Docker, so I was trying all the basic stuff.
I have used the following Dockerfile to generate my working Docker image:
FROM ubuntu:14.04
MAINTAINER Alok Agarwal "alok.alok.com"
RUN apt-get update
#Install git
RUN apt-get install -y git
RUN mkdir -p /root/.ssh/
ADD id_rsa /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN chmod 700 /root/.ssh/id_rsa
RUN git clone git@github.com:user/user.git
EXPOSE 80
I am able to clone my repo on my local system using SSH, but when doing it from Docker it gives:
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
I have put my id_rsa file in the same place where my Dockerfile resides, but I still don't know why it keeps failing.
Am I missing any basic step?
Thanks in advance for your time.
Look at my example: I have a private SSH key in the directory where I dockerize the app (ssh_keys/id_rsa), and the public key I have already uploaded to the private repo:
FROM ubuntu:14.04
MAINTAINER Alok Agarwal "alok.alok.com"
RUN apt-get update
#Install git
RUN apt-get install -y git
RUN mkdir -p /root/.ssh
ADD ssh_keys/id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN echo "Host github.com\n\tStrictHostKeyChecking no\n" >> /root/.ssh/config
RUN mkdir -p /www/app
RUN git clone git@github.com:my_private_repo/repo.git /www/app
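If you prefer not to disable StrictHostKeyChecking, a minimal alternative sketch is to pin GitHub's host key with ssh-keyscan instead (this step is my addition, assuming openssh-client is present in the image, which it must be for the SSH clone to work at all):
# Pin GitHub's host key rather than disabling host key checking
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts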
I am creating an ECS cluster with the Docker image library/wordpress:latest, and I get the desired task in a running state. But when I build an image with the following Dockerfile, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script inside the container.
I used a WordPress image and built my own image from it, thinking it would keep the container up and my script would be executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was service stage-fe-auto has started 1 tasks: task e83587e734c94f77. When I opened the task details, it showed Exit Code 2 and Working directory /home/app, but in my Dockerfile the working directory is different. Not sure what I did wrong.
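One thing that stands out (a guess on my part, not a confirmed fix): since ECS reported the working directory as /home/app rather than /home/automation, referencing the script by absolute path in CMD would make it independent of whatever working directory the task actually uses:
# Use an absolute path so the script is found regardless of the working directory
CMD ["sh", "/home/automation/run-tests.sh"]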
After I had a previous problem Dockerising my MySQL Kitura setup here: Docker Build Kitura Sqift Container - Shim.h mysql.h file not found
I am now running into a new problem I cannot solve, following the guide at https://www.kitura.io/docs/deploying/docker.html.
After I followed all the steps and also applied the earlier MySQL fix, I was able to run the following command:
docker run -p 8080:8080 -it myapp-run
This however leads to the following issue:
error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory
I assume something again tries to open libmysqlclient from some wrong environment directory?
But how can I fix this issue when building the Docker image? Is there a way, and ideally a smart way?
Thanks a lot again for the help.
I was able to update and enhance my Dockerfile. It is now running smoothly and can also be used for CI and CD tasks.
FROM ibmcom/swift-ubuntu-runtime:latest
##FROM ibmcom/swift-ubuntu-runtime:5.0.1
LABEL maintainer="IBM Swift Engineering at IBM Cloud"
LABEL Description="Template Dockerfile that extends the ibmcom/swift-ubuntu-runtime image."
# We can replace this port with what the user wants
EXPOSE 8080
# Default user if not provided
ARG bx_dev_user=root
ARG bx_dev_userid=1000
# Install system level packages
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get update && apt-get install -y sudo libmysqlclient-dev
# Add utils files
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/run-utils.sh /swift-utils/run-utils.sh
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/common-utils.sh /swift-utils/common-utils.sh
RUN chmod -R 555 /swift-utils
# Create user if not root
RUN if [ $bx_dev_user != "root" ]; then useradd -ms /bin/bash -u $bx_dev_userid $bx_dev_user; fi
# Bundle application source & binaries
COPY ./.build /swift-project/.build
# Command to start Swift application
CMD [ "sh", "-c", "cd /swift-project && .build/release/Beautylivery_Server_New" ]
I'm trying to use the Remote - Containers extension for Visual Studio Code, but when I "Open Folder in Container", I get this error:
Run: docker exec 0d0c1eac6f38b81566757786f853d6f6a4f3a836c15ca7ed3a3aaf29b9faab14 /bin/sh -c set -o noclobber ; mkdir -p '/home/appuser/.vscode-server/data/Machine' && { > '/home/appuser/.vscode-server/data/Machine/.writeMachineSettingsMarker' ; } 2> /dev/null
mkdir: cannot create directory ‘/home/appuser’: Permission denied
My Dockerfile uses:
FROM python:3.7-slim
...
RUN useradd -ms /bin/bash appuser
USER appuser
I've also tried:
RUN adduser -D appuser
RUN groupadd -g 999 appuser && \
useradd -r -u 999 -g appuser appuser
USER appuser
Both of these work if I build them directly. How do I get this to work?
What works for me is to create a non-root user in my Dockerfile and then configure the VS Code dev container to use that user.
Step 1. Create the non-root user in your Docker image
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN groupadd --system --gid ${GROUP_ID} MY_GROUP && \
useradd --system --uid ${USER_ID} --gid MY_GROUP --home /home/MY_USER --shell /sbin/nologin MY_USER
Step 2. Configure the .devcontainer/devcontainer.json file in the root of your project (it should be created when you start remote development):
"remoteUser": "MY_USER" <-- this is the setting you want to update
If you use docker compose, it's possible to configure VS Code to run the entire container as the non-root user by configuring .devcontainer/docker-compose.yml, but I've been happy with the process described above so I haven't experimented further.
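For completeness, a rough sketch of that docker-compose.yml approach, using Compose's standard user key (untested on my side; the UID/GID values are assumptions):
services:
  app:
    build: .
    user: "1000:1000"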
You might get some additional insight by reading through the VS Code docs on this topic.
Go into your WSL2 distro and check your local (non-root) UID using the id command.
In my case it is UID=1000(ubuntu).
Change your Dockerfile to something like this:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /home/ubuntu
COPY . /home/ubuntu
# Creates a non-root user and adds permission to access the /home/ubuntu folder
# For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers
RUN useradd -u 1000 ubuntu && chown -R ubuntu /home/ubuntu
USER ubuntu
# During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug
CMD ["python", "app.py"]
For example:
A sysadmin installed Oracle JDK on Ubuntu; it is about five lines of bash commands:
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections
sudo apt-get install -y oracle-java8-installer
sudo apt-get install oracle-java8-set-default
We would need to save those commands as a bash code snippet and tag it as "Oracle JDK". The same goes for, say, a PostgreSQL installation or any other task the system team does repeatedly and needs to find quickly for a quick revision.
Any advice?
CoderVault appears to be a good solution for teams doing all kinds of code snippets: https://github.com/codervault/codervault
If you want it directly in Bash, you would need to cook up your own form of snippet file, source it, and sync it. For example, create a folder ~/.functions and a file ~/.functions/shared with the following content:
# Setup, do not change
alias sniplist="awk '/function/ {print \$2;}' ~/.functions/shared"
function syncSnippets {
P=$(pwd) && \
cd ~/.functions && \
git pull --rebase <HOST>/path/to/repo && \
cd "$P"
source ~/.functions/shared
}
# Add snippets below
function OracleJDK {
sudo add-apt-repository ppa:webupd8team/java && \
sudo apt-get update && \
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections && \
sudo apt-get install -y oracle-java8-installer && \
sudo apt-get install oracle-java8-set-default
}
Add more functions as needed. The && runs all commands in sequence and aborts as soon as one of them fails; the \ lets the same command continue on another line. This way, you're going to run a single chained command.
Create a Git repository (git init), add a server (git remote add origin <HOST>/path/to/repo), commit and push. Remember: you don't need a Git server to host a repository. You could just init it on the server and have it accessible by SSH, that's all you need.
All anyone else needs to do is clone this repo (git clone <HOST>/path/to/repo ~/.functions), source it in the shell config file (source ~/.functions/shared) and start another shell. Done.
Should you ever forget which snippets are available, just run sniplist and get a handy shortlist of functions.
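Put together, a first session on a fresh machine would look roughly like this:
git clone <HOST>/path/to/repo ~/.functions
echo "source ~/.functions/shared" >> ~/.bashrc
source ~/.functions/shared
sniplist    # list the available snippets
OracleJDK   # run one of them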
I have a project on GitHub that contains a .travis.yml file with a before_install hook to do things that require sudo. To move the project to container-type infrastructure I have to remove the project's sudo dependency. The question is: how?
On this page of the Travis CI documentation, in the before_install section, they provide scripts to run in this hook:
http://docs.travis-ci.com/user/installing-dependencies/
However those scripts depend on sudo, which I'm trying to get rid of. What are possible workarounds for this? I still need the scripts to run, but they won't without sudo.
Thanks.
Edit:
Had to replace most of the data with Xs, but you can still get the idea of what's happening in the code:
- "sudo apt-key adv --keyserver hkp://xxxxxxxx.ubuntu.com:XX --recv XXXXXX"
- "echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist XXgen' | sudo tee /etc/apt/sources.list.x/xxx.list"
- "sudo apt-get update"
- "sudo apt-get install mongodb-org-server"
- curl -O https://download.xxxxxx.org/xxxxxx/xxxxxx/xxxxxx-X.X.X.deb && sudo dpkg -i --force-confnew xxxxxx-X.X.X.deb
- sudo service xxxxxx start && sleep 10
As you can see, there are multiple sudo calls that need to be cleared up.
Edit:
I need to install ElasticSearch 1.7 and MongoDB 2.6
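For what it's worth, Travis's container-based infrastructure exposes sudo-less equivalents for part of this: the services key for daemons and the apt addon for whitelisted packages. A rough .travis.yml sketch (whether the provided services match the exact ElasticSearch 1.7 / MongoDB 2.6 versions is a separate question):
sudo: false
services:
  - mongodb
  - elasticsearch
addons:
  apt:
    packages:
      - mongodb-org-server   # must be on Travis's package whitelist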