I have the following user data in my CFN template:
UserData:
  'Fn::Base64':
    !Sub |
      #!/bin/bash
      sudo apt-get update;
      sudo apt-get upgrade -y;
      sudo apt-get -y install python-pip;
      sudo apt-get -y install gcc;
      sudo apt-get -y install gcc-c++;
      sudo apt-get install awscli -y;
      sudo apt-get -y install python-mysqldb;
      echo "$(pwd)" >> /home/ubuntu/current1.txt
      cd /home/ubuntu/;
      echo "$(pwd)" >> /home/ubuntu/current2.txt
      pip install apache-airflow;
      pip install celery==4.4.0;
      pip install kombu==4.5.0;
      echo "$(pwd)" >> /home/ubuntu/current3.txt
      cd /home/ubuntu/airflow/;
      echo "$(pwd)" >> /home/ubuntu/current4.txt
      mv airflow.cfg airflow.cfg.original_1;
      cd /home/ubuntu/;
      nohup airflow initdb;
      nohup airflow webserver -p 8080 >> webserver.log &
      nohup airflow scheduler >> scheduler.log &
      nohup airflow worker >> worker.log &
Even if I cd into /home/ubuntu and then install apache-airflow, it still gets installed under root.
I want apache-airflow to be installed under /home/ubuntu.
How can I install packages as the ubuntu user under /home/ubuntu?
I ran into a similar situation when automating the installation of Ghost on an Ubuntu instance. You can try switching users. I haven't tested this specifically for installing a package with pip, but here is an example of how I had to run some setup commands as a non-root user:
su ghost-user << 'EOF'
cd /ghost-app/ghost
ghost install --no-setup --no-stack --dbhost 10.16.11.80 --dbuser ghost --dbpass myterribledbasepassword --dbname ghost_prod
EOF
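The same pattern might work in your UserData. Here is an untested sketch, assuming the instance's default user is ubuntu and that pip's --user flag is acceptable for you (it installs packages into /home/ubuntu/.local rather than a system location):
su - ubuntu << 'EOF'
cd /home/ubuntu
pip install --user apache-airflow
pip install --user celery==4.4.0
pip install --user kombu==4.5.0
EOF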
I am creating an ECS cluster with the Docker image library/wordpress:latest, and the desired task reaches the running state. But when I build an image using the following Dockerfile, push it to my Docker Hub repo, and then try to create the cluster using my new image, the containers fail with Exit code 2.
Could you please suggest what I am doing wrong here?
#Base image
FROM wordpress:latest
LABEL version="latest" maintainer="xxxxxxx <xxxxxx>"
# Update apt
RUN apt-get update
# Add a user for running applications.
RUN useradd apps
RUN mkdir -p /home/apps && chown apps:apps /home/apps
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
# Install all necessary packages
RUN apt-get -y install build-essential libpoppler-cpp-dev pkg-config x11vnc xvfb fluxbox wget wmctrl gnupg2 unzip zip
# Set the Chrome repo.
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
&& echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list
# Install Chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
# Install Chrome driver
RUN wget https://chromedriver.storage.googleapis.com/94.0.4606.61/chromedriver_linux64.zip \
&& unzip chromedriver_linux64.zip \
&& mv chromedriver /usr/bin/chromedriver \
&& chown root:root /usr/bin/chromedriver \
&& chmod +x /usr/bin/chromedriver
# create folder to store requirements.txt file
RUN mkdir /home/automation
RUN mkdir /home/automation/FrontEnd
WORKDIR /home/automation
# Copy config and scripts
COPY requirements.txt ./requirements.txt
COPY TestSuites /home/automation/FrontEnd/TestSuites
COPY Resources /home/automation/FrontEnd/Resources
COPY TestRunner.py /home/automation/FrontEnd
COPY TestRail/ /home/automation/TestRail
COPY run-frontend-tests.sh /home/automation/run-tests.sh
COPY FrontEndResultParser.py /home/automation/FrontEnd/FrontEndResultParser.py
# Install python 3.9 and pip3
RUN apt-get -y install python3-dev python3.9 python3-pip
# Install dependencies
RUN pip install "setuptools==58.0.0"
RUN pip install -r requirements.txt
CMD ["sh", "run-tests.sh"]
I am basically just trying to run a script inside the container.
I used a wordpress image and built my own image from it, thinking it would keep the container up and my script would be executed, but that didn't happen. My ECS cluster didn't have any running task; all I saw in the events was "service stage-fe-auto has started 1 tasks: task e83587e734c94f77." When I opened the task details, it showed Exit Code 2 and working directory /home/app, but in my Dockerfile the working directory is different. Not sure what I did wrong.
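One way to narrow down an exit code 2 before involving ECS is to run the pushed image locally and check that the script is reachable from the container's working directory. This is only a debugging sketch; the image tag is a hypothetical placeholder for your Docker Hub repo:
docker run --rm -it my-dockerhub-user/frontend-tests:latest bash -c 'pwd && ls -l run-tests.sh'
docker run --rm -it my-dockerhub-user/frontend-tests:latest
If the second command exits with code 2 locally as well, the problem is in the image itself rather than in the ECS task definition.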
I'm running this Dockerfile on a Mac M1:
Dockerfile
ARG VARIANT=16-bullseye
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT}
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends vim wget redis-tools
ARG MONGO_CLI_VERSION=4.4
RUN wget -qO - https://www.mongodb.org/static/pgp/server-${MONGO_CLI_VERSION}.asc | sudo apt-key add -
RUN echo "deb http://repo.mongodb.org/apt/debian buster/mongodb-org/${MONGO_CLI_VERSION} main" | tee /etc/apt/sources.list.d/mongodb-org-${MONGO_CLI_VERSION}.list
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install --no-install-recommends mongodb-mongosh \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
RUN wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-debian11-x86_64-100.5.3.deb
RUN apt install ./mongodb-database-tools-*-100.5.3.deb
RUN su node -c "wget -O ~/.git-completion.bash https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash"
RUN su node -c "echo -e '\n# Git Completion' >> ~/.bashrc"
RUN su node -c "echo -e 'source ~/.git-completion.bash\n' >> ~/.bashrc"
The response is shown in the image attached.
On this line:
RUN wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-debian11-x86_64-100.5.3.deb
You're trying to install a package named mongodb-database-tools-debian11-x86_64-100.5.3.deb that is built for Intel processors (x86_64) in your ARM64 image. That's not going to work.
MongoDB doesn't seem to offer a package for Debian on ARM64 on their download page. They do offer one for Ubuntu ARM64.
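If you want to confirm which architecture the base image resolves to on your machine, one quick check (using the image tag that the VARIANT default above produces) is:
docker run --rm mcr.microsoft.com/vscode/devcontainers/javascript-node:0-16-bullseye dpkg --print-architecture
On an M1 this should print arm64, assuming the ARM64 variant of the image is pulled.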
There doesn't seem to be a variant of javascript-node built on Ubuntu 18 Bionic; however, I think you can keep using this Debian 11 Bullseye variant, because the mongodb-database-tools build for Ubuntu 16.04 seems to install just fine:
RUN wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu1604-arm64-100.5.3.deb
Build the image and test:
docker build -t test . ; docker run --rm test mongodump --version
mongodump version: 100.5.3
git version: 139703c0587796da96c367f365473d0266f9cede
Go version: go1.17.10
os: linux
arch: arm64
compiler: gc
If you want this image to build on both x86 and ARM, check what architecture you're on before downloading the .deb:
RUN if [ "$(arch)" = "aarch64" ] || [ "$(arch)" = "arm64" ]; then\
wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu1604-arm64-100.5.3.deb;\
else\
wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-debian11-x86_64-100.5.3.deb;\
fi;
RUN apt install ./mongodb-database-tools-*-100.5.3.deb
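An alternative sketch, assuming you build with BuildKit (which populates the TARGETARCH build argument automatically), avoids probing the architecture inside the container:
ARG TARGETARCH
RUN if [ "$TARGETARCH" = "arm64" ]; then \
        wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-ubuntu1604-arm64-100.5.3.deb; \
    else \
        wget https://fastdl.mongodb.org/tools/db/mongodb-database-tools-debian11-x86_64-100.5.3.deb; \
    fi \
    && apt install ./mongodb-database-tools-*-100.5.3.deb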
Try another platform, for example x86_64:
https://docs.docker.com/desktop/mac/apple-silicon/
Not all images are available for the ARM64 architecture. You can add --platform linux/amd64 to run an Intel image under emulation.
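For example, reusing the build-and-test command from the other answer (a sketch, assuming Docker Desktop's emulation is available):
docker build --platform linux/amd64 -t test .
docker run --rm --platform linux/amd64 test mongodump --version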
So I'm having this problem where, for some reason, I can't install any package on my Ubuntu system.
I'm currently on Ubuntu 16.10.
terminal install logs
Update:
I've entered those commands and got this.
after update and apt-cache
What should I do now?
sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update
sudo apt-get install postgresql postgresql-contrib
After installing the PostgreSQL database server, it creates a database user 'postgres' with the role 'postgres' by default, and also creates a system account with the same name 'postgres'. So, to connect to the Postgres server, log in to your system as the postgres user and connect to the database:
sudo su - postgres
psql
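For example, once inside psql you can confirm the connection and list the existing databases:
SELECT version();
\l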
First do
sudo apt-get update
You should get no errors upon updating. In case you do, then you might have issues with your firewall, or something blocking you from updating repositories. Check the output carefully.
And then search for the correct (exact!) package name using this command:
apt-cache search postgresql
As a last resort you could add an external third-party repository as described in this answer. Just remember to use your distribution's codename instead of "xenial".
It should work.
$ sudo apt-get install postgresql postgresql-client
If you are getting "E: Unable to locate package postgresql-12" while migrating, the following steps may help you:
sudo apt-get -y install bash-completion wget
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc |
sudo apt-key add -
sudo apt-get update
sudo apt-get -y install postgresql-12 postgresql-client-12
sudo systemctl status postgresql
ref: install postgres 12 in Ubuntu 18.04
The following commands worked for me:
sudo apt-get install wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" >> /etc/apt/sources.list.d/pgdg.list'
sudo apt-get update
sudo apt install postgresql-11 libpq-dev
I've been following the Hyperledger Composer tutorial. I managed to install Ubuntu 16.04 on Hyper-V on my Windows 10 Enterprise. I then started on the following pre-req installation instructions:
https://hyperledger.github.io/composer/installing/installing-prereqs.html
I ran the prereqs-ubuntu.sh script. It ran fine with no errors. I examined the logs and saw that it had successfully installed npm 5.6.0, node 8.9.4, docker 17.12.x, docker-compose 1.13.x, and Python 2.7.12.
However, when I run $ sudo npm --version
it tells me that the npm command is not found.
Same with $ sudo node --version
Not found...?!
Why would that be, when the log clearly shows that npm and node were successfully installed?!
Well, here is what I did and how I managed to get through it:
--> install nodejs and npm:
sudo snap install node --classic --channel=8
so you get the latest node8.
--> then, to solve the "sudo" problem with node, specify the npm prefix:
npm config set prefix ~/.node_modules
add the following to .bash_profile
export PATH=$HOME/.node_modules/bin:$PATH
Now the packages will install into your user directory and no permissions will be harmed.
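You can verify where globally installed packages will now go with:
npm config get prefix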
--> install nvm (to get exactly node 8.9 version on the next step):
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
or
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
Verify:
command -v nvm
which should output 'nvm' if the installation was successful.
--> get and set node 8.9 version:
nvm install v8.9.0
nvm use 8.9.0
--> reset PATHs:
echo export PATH="$HOME/npm/bin:$PATH" >> ~/.bashrc
npm config set prefix ~/npm
echo "export NODE_PATH=$NODE_PATH:/home/$USER/npm/lib/node_modules" >> ~/.bashrc && source ~/.bashrc
--> at this stage the previous docker setup shall be destroyed:
docker kill $(docker ps -q)
docker rm $(docker ps -aq)
docker rmi $(docker images dev-* -q)
--> Installing the rest of prereqs:
sudo apt-add-repository -y ppa:git-core/ppa
sudo apt-get update
# install git
sudo apt-get install -y git
# Ensure that CA certificates are installed
sudo apt-get -y install apt-transport-https ca-certificates
# Add Docker repository key to APT keychain
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Update package lists
sudo apt-get update
# Verifies APT is pulling from the correct Repository
sudo apt-cache policy docker-ce
# Install Docker
echo "# Installing Docker"
sudo apt-get -y install docker-ce
# Add user account to the docker group
sudo usermod -aG docker $(whoami)
# Install docker compose
echo "# Installing Docker-Compose"
sudo curl -L "https://github.com/docker/compose/releases/download/1.13.0/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Install unzip, required to install hyperledger fabric.
sudo apt-get -y install unzip
--> now you can install the Fabric dev environment (assuming the rest of the prerequisite components are available):
npm install -g composer-cli
etc.
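If the install succeeded, the Composer CLI should report its version as a quick sanity check:
composer --version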
I think you need to log out and close the shell, then start a new session, because the shell caches your environment.
Also, after installation, the use of sudo is not recommended, as mentioned on the IBM Hyperledger website.
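For example, one way to pick up the updated PATH without rebooting is to open a new terminal or start a fresh login shell and check again:
exec bash -l
node --version
npm --version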
This is my Dockerfile for installing Postgres.
# Set the base image to Ubuntu
FROM ubuntu:14.04
# Update the repository sources list
RUN apt-get update -y
################## BEGIN INSTALLATION ######################
# Install wget
RUN apt-get install wget -y
# Setup Postgres repository
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
# Add Postgres repository
RUN sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main" >> /etc/apt/sources.list.d/postgresql.list'
# Update repository
RUN apt-get update -y
# Install Postgres with Postgis
RUN apt-get install postgresql-9.3-postgis-2.1 -y
How can I add an entrypoint for Postgres so that it is started automatically in a Docker container?
My solution to start Postgres automatically:
RUN chmod +x /etc/init.d/postgresql
CMD service postgresql start && tail -F /var/lib/postgresql/data/serverlog
You can take ideas from the official docker-library/postgres Dockerfile:
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 5432
CMD ["postgres"]
They use a docker-entrypoint.sh script which will, at the end, launch postgres:
exec gosu postgres "$@"
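Adapted to the Dockerfile above, a minimal entrypoint could look roughly like this. This is only a sketch, not the official script; it assumes the Ubuntu 14.04 postgresql-9.3 package layout and that gosu (or a similar tool) has been installed in the image:
#!/bin/bash
# docker-entrypoint.sh (sketch)
set -e
if [ "$1" = "postgres" ]; then
    # run the server in the foreground as the postgres user
    exec gosu postgres /usr/lib/postgresql/9.3/bin/postgres \
        -D /var/lib/postgresql/9.3/main \
        -c config_file=/etc/postgresql/9.3/main/postgresql.conf
fi
exec "$@"
With ENTRYPOINT ["/docker-entrypoint.sh"] and CMD ["postgres"] as shown above, the default command starts the server in the foreground, while something like docker run image bash still drops you into a shell.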