Updating packages with yum-cron through ebextensions? - command-line

I'm currently trying to install packages on an Amazon Linux EC2 server. I'm using an ebextensions file to set yum-cron to check for package updates daily, but I'm fairly confident that I'm not doing it correctly.
Here's the code in my .config file:
commands:
  01do_update_yum:
    command: yum -y update
  02install_clamAV:
    command: yum -y install clamav clamd
  03install_yum_cron:
    command: yum -y install yum-cron
  04install_gedit:
    command: yum -y install gedit
  05set_crontab_daily_update:
    command: -c "gedit /etc/yum/yum-cron.conf"
  06change_apply_updates:
    command: apply_updates = yes
Unless I'm missing something, steps 4 through 6 are not going to work without me (the user) running them interactively. Basically, I'm wondering if anyone knows how I can do what I'm trying to do, but entirely within my .config file.
Thanks so much!
-Matt

You need to change your script so the first line reads container_commands.
Here's an example I deploy with our builds:
container_commands:
  yum_update:
    command: yum update -y
    ignoreErrors: true
  install_mysql:
    command: yum install -y mysql
  install_mlocate:
    command: yum install -y mlocate
    ignoreErrors: true
  update_db:
    command: updatedb
    ignoreErrors: true
  install_expect:
    command: yum install -y expect
See also this example snippet from the AWS docs:
container_commands:
  collectstatic:
    command: "django-admin.py collectstatic --noinput"
  01syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
  02migrate:
    command: "django-admin.py migrate"
    leader_only: true
  99customize:
    command: "scripts/customize.sh"

Related

ECS container exit code 2

I am creating an ECS cluster with the Docker image library/wordpress:latest, and I get the desired task into a running state. But when I build an image using the following Dockerfile, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with Exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script in the container.
I used a wordpress image and built my own image from it, thinking that would keep the container up and my script would be executed, but that didn't happen. My ECS cluster didn't have any running tasks; all I saw in the events was service stage-fe-auto has started 1 tasks: task e83587e734c94f77. When I opened the task details, it had Exit Code 2 and Working directory /home/app, but in my Dockerfile my working directory is different. Not sure what I did wrong.
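For what it's worth, the exit code 2 is simply whatever the container's main process (sh running the script) returned, and the fact that the task reports a working directory (/home/app) different from the Dockerfile's WORKDIR suggests something, for example the task definition, is overriding it. A defensive variant of the last lines of the Dockerfile, using an absolute path so the CMD no longer depends on the runtime working directory (paths taken from the COPY steps above):
# make the script executable and reference it by absolute path,
# so the working directory set by the task definition does not matter
RUN chmod +x /home/automation/run-tests.sh
CMD ["sh", "/home/automation/run-tests.sh"]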

CloudFormation - User Data run as Ubuntu user

I have the following user data in my CFN template:
UserData:
  'Fn::Base64':
    !Sub |
      #!/bin/bash
      sudo apt-get update;
      sudo apt-get upgrade -y;
      sudo apt-get -y install python-pip;
      sudo apt-get -y install gcc;
      sudo apt-get -y install gcc-c++;
      sudo apt-get install awscli -y;
      sudo apt-get -y install python-mysqldb;
      echo "$(pwd)" >> /home/ubuntu/current1.txt
      cd /home/ubuntu/;
      echo "$(pwd)" >> /home/ubuntu/current2.txt
      pip install apache-airflow;
      pip install celery==4.4.0;
      pip install kombu==4.5.0;
      echo "$(pwd)" >> /home/ubuntu/current3.txt
      cd /home/ubuntu/airflow/;
      echo "$(pwd)" >> /home/ubuntu/current4.txt
      mv airflow.cfg airflow.cfg.original_1;
      cd /home/ubuntu/;
      nohup airflow initdb;
      nohup airflow webserver -p 8080 >> webserver.log &
      nohup airflow scheduler >> scheduler.log &
      nohup airflow worker >> worker.log &
If I cd to /home/ubuntu and then install apache-airflow, it still gets installed under root.
I want apache-airflow to be installed under /home/ubuntu.
How do I install packages under the /home/ubuntu user?
I ran into a similar situation when automating the installation of Ghost on an Ubuntu instance. You can try switching users. I haven't tested this with pip specifically, but here is an example of how I had to run some setup commands as a non-root user:
su ghost-user << 'EOF'
cd /ghost-app/ghost
ghost install --no-setup --no-stack --dbhost 10.16.11.80 --dbuser ghost --dbpass myterribledbasepassword --dbname ghost_prod
EOF
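The same pattern should carry over to the user data above: run the pip steps inside a su block (or via sudo -u ubuntu) so they execute as the ubuntu user. A minimal sketch, assuming pip's --user scheme is acceptable so packages land under /home/ubuntu/.local:
su ubuntu << 'EOF'
cd /home/ubuntu
# --user installs into ~/.local instead of the system site-packages
pip install --user apache-airflow
pip install --user celery==4.4.0
pip install --user kombu==4.5.0
EOF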

mongodb: unrecognized service in Docker

I created my own Docker image that includes the latest version of Ubuntu, Python 3.7, and MongoDB.
Dockerfile
FROM ubuntu:latest
MAINTAINER Docker
# Update apt-get sources AND install MongoDB
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y software-properties-common
RUN apt install -y gnupg2
RUN gpg2 --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys F3B1AA8B
# Installation:
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get install -y python3.7
#Mongodb
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6
RUN apt-add-repository 'deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse'
RUN apt-get update
RUN apt-get install -y mongodb-org
# Create the MongoDB data directory
RUN mkdir -p /data/db
# Create the MongoDB data directory
RUN mkdir -p /data/code
RUN mongod --version
RUN mongod --dbpath /data/db --fork --logpath /data/db/log
# COPY some Code to Container
COPY dev /data/code
# Installing pip for python modules
RUN apt-get install -y python3-pip
# Install modules
WORKDIR /data/code/
RUN pip3 install -r requirements.txt
RUN service mongodb start
RUN python3 main.py
RUN python3 server.py
EXPOSE 80
# Set /bin/bash as the dockerized entry-point application
ENTRYPOINT ["/bin/bash"]
when I run the build command:
docker build -t myContainer --no-cache .
it runs successfully up to the point where mongodb should start as a service:
.
.
.
Removing intermediate container 3d43e1d1cd96
---> 62f10ce67e07
Step 21/25 : RUN service mongodb start
---> Running in 42e08e7d7638
mongodb: unrecognized service
How do I start the service? I'm trying to start it with the command service mongodb start. Isn't that correct? And what does the line
Removing intermediate container 3d43e1d1cd96
mean?
Firstly, it should be service mongod start, I guess. But this is not going to solve your problem.
With Docker, your main process has to run in the foreground; service mongod start sends mongod to the background, so your container will exit immediately. (The "Removing intermediate container" line is normal: each RUN step executes in a temporary container that Docker discards after committing the layer, which is also why a service started in one RUN step is no longer running in the next.)
You should run mongod as the foreground process, as below:
CMD ["mongod"]
Put this CMD at the end of the Dockerfile to make sure your container runs mongod.
Official Dockerfile -
https://github.com/docker-library/mongo/blob/40056ae591c1caca88ffbec2a426e4da07e02d57/3.4/Dockerfile
If you want to run multiple processes, use docker ENTRYPOINT in conjunction with supervisord or use a wrapper script.
Ref - https://docs.docker.com/config/containers/multi-service_container/
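If you do need both mongod and your Python scripts in one container, the wrapper-script approach from that link might look roughly like this (start.sh is a hypothetical name; note that --fork requires a --logpath):
#!/bin/bash
# start.sh: start mongod in the background, then hand PID 1 to the
# real foreground process so the container lives as long as it does
mongod --dbpath /data/db --fork --logpath /data/db/log
exec python3 server.py
and then end the Dockerfile with CMD ["./start.sh"] instead of the bash ENTRYPOINT.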

PowerShell Core in Debian Docker Container Error

I'm new to Docker and am trying to create a Docker image with Raspbian base and PowerShell Core installed.
EDIT: Updated the Dockerfile to include the libicu52 package, which resolved the main error (libpsl-native or its dependencies not available). I changed the CMD parameters and now have a different error.
Here is my Dockerfile:
# Download the latest RPi3 Debian image
FROM resin/raspberrypi3-debian:latest
# Update the image and install prerequisites
RUN apt-get update && apt-get install -y \
wget \
libicu52 \
libunwind8 \
&& apt-get clean
# Grab the latest tar.gz
RUN wget https://github.com/PowerShell/PowerShell/releases/download/v6.0.0-rc.2/powershell-6.0.0-rc.2-linux-arm32.tar.gz
# Make folder to put PowerShell
RUN mkdir ~/powershell
# Unpack the tar.gz file
RUN tar -xvf ./powershell-6.0.0-rc.2-linux-arm32.tar.gz -C ~/powershell
# Run PowerShell
CMD pwsh -v
New error:
hostname: you must be root to change the host name
/bin/sh: 1: pwsh: not found
How do I resolve these errors?
Thanks in advance!
Instead of downloading the release tarball and extracting it in your container, I'd recommend installing the official apt packages in your Dockerfile from Microsoft's Debian repository, as described at:
https://learn.microsoft.com/en-us/powershell/scripting/setup/installing-powershell-core-on-macos-and-linux?view=powershell-6#debian-9
So transforming that to Dockerfile format:
# Install powershell related system components
RUN apt-get install -y \
gnupg curl apt-transport-https \
&& apt-get clean
# Import the public repository GPG keys (no sudo needed; Docker builds run as root)
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
# Register the Microsoft's Debian repository
RUN sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-debian-stretch-prod stretch main" > /etc/apt/sources.list.d/microsoft.list'
# Install PowerShell
RUN apt-get update \
&& apt-get install -y \
powershell
# Start PowerShell
CMD pwsh
Alternatively, you can try starting from one of the official Microsoft PowerShell Linux images, but of course then you need to solve the Raspberry Pi installation yourself:
https://hub.docker.com/r/microsoft/powershell/tags/
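One caveat: the repository above only publishes amd64 packages, so on Raspbian/ARM32 you may be stuck with the tar.gz from the question. In that case, the pwsh: not found error is most likely a PATH problem, since the archive was unpacked into ~/powershell, which sh does not search. A minimal sketch of a fix under that assumption:
# unpack to a fixed location and link pwsh onto the PATH
RUN mkdir -p /opt/powershell \
 && tar -xvf ./powershell-6.0.0-rc.2-linux-arm32.tar.gz -C /opt/powershell \
 && ln -s /opt/powershell/pwsh /usr/bin/pwsh
CMD ["pwsh", "-v"]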

Not all pre-reqs install correctly for Hyperledger Composer

I've been following the Hyperledger Composer tutorial. I managed to install Ubuntu 16.04 on Hyper-V on my Windows 10 Enterprise. I then started on the following pre-req installation instructions:
https://hyperledger.github.io/composer/installing/installing-prereqs.html
I ran the prereqs-ubuntu.sh script. It ran fine with no errors. I examined the logs and saw that it had successfully installed npm 5.6.0, node 8.9.4, docker 17.12.x, docker-compose 1.13.x, and Python 2.7.12.
However, when I run $ sudo npm --version,
it tells me that the npm command is not found.
Same with $ sudo node --version:
not found...?!
Why would that be, when the log clearly shows that npm and node were successfully installed?
Well, here is what I did and how I worked through it:
--> install nodejs and npm:
sudo snap install node --classic --channel=8
so you get the latest node 8.
--> then, to solve the "sudo" problem with node, specify the npm prefix:
npm config set prefix ~/.node_modules
add the following to .bash_profile:
export PATH=$HOME/.node_modules/bin:$PATH
Now the packages will install into your user directory and no permissions will be harmed.
--> install nvm (to get exactly node 8.9 version on the next step):
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
or
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
Verify:
command -v nvm
which should output 'nvm' if the installation was successful.
--> get and set node 8.9 version:
nvm install v8.9.0
nvm use 8.9.0
--> reset PATHs:
echo export PATH="$HOME/npm/bin:$PATH" >> ~/.bashrc
npm config set prefix ~/npm
echo "export NODE_PATH=$NODE_PATH:/home/$USER/npm/lib/node_modules" >> ~/.bashrc && source ~/.bashrc
--> at this stage the previous Docker setup should be destroyed:
docker kill $(docker ps -q)
docker rm $(docker ps -aq)
docker rmi $(docker images dev-* -q)
--> Installing the rest of prereqs:
sudo apt-add-repository -y ppa:git-core/ppa
sudo apt-get update
# install git
sudo apt-get install -y git
# Ensure that CA certificates are installed
sudo apt-get -y install apt-transport-https ca-certificates
# Add Docker repository key to APT keychain
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Update package lists
sudo apt-get update
# Verifies APT is pulling from the correct Repository
sudo apt-cache policy docker-ce
# Install Docker
echo "# Installing Docker"
sudo apt-get -y install docker-ce
# Add user account to the docker group
sudo usermod -aG docker $(whoami)
# Install docker compose
echo "# Installing Docker-Compose"
sudo curl -L "https://github.com/docker/compose/releases/download/1.13.0/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Install unzip, required to install hyperledger fabric.
sudo apt-get -y install unzip
--> now you can install the Fabric dev environment (assuming the rest of the prereq components are available):
npm install -g composer-cli
etc.
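Once that completes, a quick sanity check that the CLI landed in the user-level prefix configured above (no sudo involved):
composer --version
# should resolve inside your home directory, e.g. ~/npm/bin/composer
which composer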
I think you need to log out, close the shell, and then start a new session, since changes like the PATH update and the docker group membership only take effect in a new login session.
Also, after installation, the use of sudo (e.g. sudo npm) is not recommended, as mentioned on the IBM Hyperledger website.
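For the docker group specifically, newgrp can apply the new membership in the current terminal without a full logout; either way, a fresh session is the cleanest check. Interactively, something like:
# start a subshell with the docker group applied (avoids a full logout)
newgrp docker
# these should now work without sudo
node --version
npm --version
docker ps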