New Relic PHP Agent Kubernetes (GKE)

Could you please advise on setting up permissions in the Dockerfile so that the www-data user can start the PHP agent inside a Docker container running on GKE?
FROM php:7.4-fpm as test
RUN \
curl -L https://download.newrelic.com/php_agent/release/newrelic-php5-10.1.0.313-linux.tar.gz | tar -C /tmp -zx && \
export NR_INSTALL_USE_CP_NOT_LN=1 && \
export NR_INSTALL_SILENT=1 && \
/tmp/newrelic-php5-*/newrelic-install install && \
rm -rf /tmp/newrelic-php5-* /tmp/nrinstall* && \
sed -i \
-e 's/"REPLACE_WITH_REAL_KEY"/"My-Key"/' \
-e 's/newrelic.appname = "PHP Application"/newrelic.appname = "test"/' \
-e 's/;newrelic.daemon.app_connect_timeout =.*/newrelic.daemon.app_connect_timeout=15s/' \
-e 's/;newrelic.daemon.start_timeout =.*/newrelic.daemon.start_timeout=5s/' \
/usr/local/etc/php/conf.d/newrelic.ini
USER www
# php app related build, etc....
Thank you very much.

In your Dockerfile you are switching to USER www, and that is why the agent is not running.
As the error suggests, the daemon is expected to be started by the root user, so remove the USER www line, rebuild the image with --no-cache, and it will start working as root.
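For example, after deleting the USER www line, the rebuild and run are just (the image tag and port mapping below are placeholders):
# rebuild from scratch without the USER www directive
docker build --no-cache -t my-php-fpm-app .
# php-fpm's master process now runs as root, so the agent can spawn its daemon;
# the fpm worker processes still drop to www-data via the default pool config
docker run --rm -p 9000:9000 my-php-fpm-app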
Official ref: https://docs.newrelic.com/docs/apm/agents/php-agent/advanced-installation/docker-other-container-environments-install-php-agent/

Related

Prepare coursier artifact for offline use inside container

I have an sbt project producing my artifact xyz.
I would like to put it, along with all its dependencies, in a docker container so it can be launched with
coursier launch --mode offline xyz
The important part is that the preparation should make use of the local coursier cache from the host.
I tried
executing sbt publishLocal,
then resolving my artifact's dependencies (coursier resolve xyz),
then preparing two directories - local & cache - by copying the resolved artifacts into them,
then copying those directories into the docker container (as the coursier cache and ivy local, respectively).
This didn't work because coursier doesn't list .pom and .xml files in its output. I tried copying whole directories (abc/1.0.0 instead of abc/1.0.0/some.jar) but AFAIK there is no reliable way to know how many folders up one has to go because maven and ivy have different dir structures.
While my use case is not quite identical to yours -- I figured I'd write up my findings, and maybe my solution works for you as well!
Here's my sample Dockerfile; I used this to install scalafmt in an offline-compatible way:
FROM ubuntu:jammy
RUN : \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* # */ stackoverflow highlighting bug
ARG CS=v2.1.0-RC4
ARG CS_SHA256=176e92e08ab292531aa0c4993dbc9f2c99dec79578752f3b9285f54f306db572
ARG JDK_SHA256=aef49cc7aa606de2044302e757fa94c8e144818e93487081c4fd319ca858134b
ENV PATH=/opt/coursier/bin:$PATH
RUN : \
&& curl --location --silent --output /tmp/cs.gz "https://github.com/coursier/coursier/releases/download/${CS}/cs-x86_64-pc-linux.gz" \
&& echo "${CS_SHA256} /tmp/cs.gz" | sha256sum --check \
&& curl --location --silent --output /tmp/jdk.tgz "https://download.java.net/openjdk/jdk17/ri/openjdk-17+35_linux-x64_bin.tar.gz" \
&& echo "${JDK_SHA256} /tmp/jdk.tgz" | sha256sum --check \
&& mkdir -p /opt/coursier \
&& tar --strip-components=1 -C /opt/coursier -xf /tmp/jdk.tgz \
&& gunzip /tmp/cs.gz \
&& mv /tmp/cs /opt/coursier/bin \
&& chmod +x /opt/coursier/bin/cs \
&& rm /tmp/jdk.tgz
ENV COURSIER_CACHE=/opt/.cs-cache
RUN : \
&& cs fetch scalafmt:3.6.1 \
&& cs install scalafmt:3.6.1 --dir /opt/wd/bin
The key to offline execution for me was to use cs fetch and to set COURSIER_CACHE.
Here's the offline execution succeeding:
$ docker run --net=none --rm -ti cs /opt/wd/bin/scalafmt --version
scalafmt 3.6.1
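For the original xyz artifact, I'd expect the same pattern to work, provided the artifact is resolvable from some repository at build time (the coordinates below are hypothetical placeholders):
# in the Dockerfile: bake the artifact and its dependencies into the image cache
RUN cs fetch com.example::xyz:1.0.0
# then launch fully offline at run time, as in the question
docker run --net=none --rm -ti cs cs launch --mode offline com.example::xyz:1.0.0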

Facing issues due to ownership on mounted folder with Docker

The following command works fine:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v /var/lib/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v /var/lib/openproject/static:/var/db/openproject \
openproject/community:8
But this command doesn't start the container:
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v ~/Dropbox/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v ~/Dropbox/openproject/static:/var/db/openproject \
openproject/community:8
I've also tried making /var/lib/openproject/pgdata a symlink to ~/Dropbox/openproject/pgdata, but that didn't work either.
The Docker logs say: PostgreSQL Config owner (postgres:102) and data owner (app:1000) do not match, and config owner is not root.
Is there any way to mount a non-root-owned folder onto a root-owned path inside the docker container and solve this issue?
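One hedged workaround, assuming the image's postgres user really has UID 102 as the log line suggests: PostgreSQL refuses to start when the data directory is not owned by the user running the server, so aligning the host directory's ownership with that UID before mounting may resolve it.
# hypothetical fix: hand the mounted data dir to the container's postgres UID (102 per the log)
sudo chown -R 102:102 ~/Dropbox/openproject/pgdata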

Jar file is not executed with Dockerfile

I am building a Docker image of my application, and I would like to run a jar file when I run the image. However, I get this error:
Could not find or load main class
The main class is set in the manifest file of the jar. If I run the jar from the terminal or a bash script it works fine, so this error is only observed while running Docker:
docker run -v my-volume:/workdir container-name
Is there some configuration missing in my Dockerfile, or should the jar file be copied/added?
Here is my Dockerfile:
FROM java:8
ENV SCALA_VERSION 2.11.8
ENV SBT_VERSION 1.1.1
ENV SPARK_VERSION 2.2.0
ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
ENV SPARK_ARCH $SPARK_DIST.tgz
WORKDIR /opt
# Install Scala
RUN \
cd /root && \
curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
tar -xf scala-$SCALA_VERSION.tgz && \
rm scala-$SCALA_VERSION.tgz && \
echo >> /root/.bashrc && \
echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc
# Install SBT
RUN \
curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
dpkg -i sbt-$SBT_VERSION.deb && \
rm sbt-$SBT_VERSION.deb
# Install Spark
RUN \
cd /opt && \
curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
tar xvfz $SPARK_ARCH && \
rm $SPARK_ARCH && \
echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc
EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473
VOLUME /workdir
CMD java -cp "target/scala-2.11/demo_consumer.jar" consumer.SparkConsumer
I believe this is because the command executed in the Docker container is not run from the right folder. You could try executing the command from the workdir:
docker run -v my-volume:/workdir -w /workdir container_name
If that does not work, you could inspect what's inside the container, either with ls:
docker run -v my-volume:/workdir -w /workdir container_name bash -c 'ls -lah'
Or by accessing its bash session:
docker run -v my-volume:/workdir -w /workdir container_name bash
P.S.: if bash does not work, try sh.
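Alternatively, the same fix can be baked into the image; a hedged sketch, assuming the jar really ends up under the mounted /workdir volume:
# make the relative classpath in CMD resolve against the mounted volume
WORKDIR /workdir
CMD java -cp "target/scala-2.11/demo_consumer.jar" consumer.SparkConsumer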

How do I upgrade Docker Postgresql without removing existing data?

I am a beginner with both Docker and PostgreSQL.
How do I upgrade Docker PostgreSQL 9.5 to 9.6 without losing my current database?
FYI: I'm using Ubuntu 14 and Docker 17.09.
Thanks in advance.
To preserve data across Docker containers, a volume is required. A volume is mounted directly onto the file system of the container and is preserved when the container is killed. It sounds, though, like the container was created without a volume attached. The best way to get at that data is to copy the container's data folder out to the host file system, create a container from the new image, and then copy the data directory into the running container's data directory, in this case pgdata:/var/lib/postgresql/data:
docker cp [containerID]:/var/lib/postgresql/data /home/user/data/data-dir/
docker stop [containerID]
docker run -it --rm -v pgdata:/var/lib/postgresql/data postgres
docker cp /home/user/data/data-dir [containerID]:/var/lib/postgresql/data
In case that doesn't work, I would just dump the current databases and re-upload them to the new container.
If you do not store the database files on external storage (outside of the container), then I know of only one way to migrate your database:
1) Back up the database
2) Shut down the postgres 9.5 container
3) Run a new postgres 9.6 container
4) Restore the backup
You can use pg_dumpall to back up the full database cluster:
pg_dumpall > backupfile
The resulting dump can be restored with psql:
psql -f backupfile postgres
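In a dockerized setup that might look like the following (the container names and the postgres superuser are assumptions):
# dump everything from the old 9.5 container
docker exec old_pg pg_dumpall -U postgres > backupfile
docker stop old_pg
# start a fresh 9.6 container on a new volume, wait for it to accept connections,
# then feed the dump to psql
docker run -d --name new_pg -v pgdata96:/var/lib/postgresql/data postgres:9.6
docker exec -i new_pg psql -U postgres < backupfile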
I know it's been some time since you asked, but I hope my solution will help future Googlers :)
I've tried to create a solution that is as stateless as possible, to be compatible with CI and upgrade scripts.
The script:
Backs up the whole pg instance using pg_dumpall.
Uses the dump to create the new instance using initdb and psql -f.
The only requirement is a volume with some existing pg_data directory in it.
docker stop lms_db_1
DB_NAME=lms
DB_USERNAME=lmsweb
DB_PASSWORD=123456
CURRENT_DATE=$(date +%d-%m-%Y_%H_%M_%S)
MOUNT_PATH=/pg_data
PG_OLD_DATA=/pg_data/11/data
PG_NEW_DATA=/pg_data/13/data
BACKUP_FILENAME=v11.$CURRENT_DATE.sql
BACKUP_PATH=$MOUNT_PATH/backup/$BACKUP_FILENAME
BACKUP_DIR=$(dirname "$BACKUP_PATH")
VOLUME_NAME=lms_db-data-volume
# Step 1: Create a backup
docker run --rm -v $VOLUME_NAME:$MOUNT_PATH \
-e PGDATA=$PG_OLD_DATA \
-e POSTGRES_DB="${DB_NAME:-db}" \
-e POSTGRES_USER="${DB_USERNAME:-postgres}" \
-e POSTGRES_PASSWORD="${DB_PASSWORD:-postgres}" \
postgres:11-alpine \
/bin/bash -c "chown -R postgres:postgres $MOUNT_PATH \
&& su - postgres /bin/bash -c \"/usr/local/bin/pg_ctl -D \\\"\$PGDATA\\\" start\" \
&& mkdir -p \"$BACKUP_DIR\" \
&& pg_dumpall -U $DB_USERNAME -f \"$BACKUP_PATH\" \
&& chown postgres:postgres \"$BACKUP_PATH\""
# Step 2: Create a new database from the backup
docker run --rm -v $VOLUME_NAME:$MOUNT_PATH \
-e PGDATA=$PG_NEW_DATA \
-e POSTGRES_DB="${DB_NAME:-db}" \
-e POSTGRES_USER="${DB_USERNAME:-postgres}" \
-e POSTGRES_PASSWORD="${DB_PASSWORD:-postgres}" \
postgres:13-alpine \
/bin/bash -c "ls -la \"$BACKUP_DIR\" \
&& mkdir -p \"\$PGDATA\" \
&& chown -R postgres:postgres \"\$PGDATA\" \
&& rm -rf $PG_NEW_DATA/* \
&& su - postgres -c \"initdb -D \\\"\$PGDATA\\\"\" \
&& su - postgres -c \"pg_ctl -D \\\"\$PGDATA\\\" -l logfile start\" \
&& su - postgres -c \"psql -f $BACKUP_PATH\" \
&& printf \"\\\nhost all all all md5\\\n\" >> \"\$PGDATA/pg_hba.conf\" \
"

Couchbase running in a container not accessible

So I've created this Dockerfile:
FROM centos
EXPOSE 7081 8092 11210
RUN yum install -y \
hostname \
initscripts \
openssl098e \
pkgconfig \
sudo \
tar \
wget \
&& wget http://packages.couchbase.com/releases/3.0.2/couchbase-server-enterprise-3.0.2-centos6.x86_64.rpm \
&& yum install -y couchbase-server-enterprise-3.0.2-centos6.x86_64.rpm \
&& rm -f ./couchbase-server-enterprise-3.0.2-centos6.x86_64.rpm
CMD /opt/couchbase/bin/couchbase-server start -- -noinput
That seems to work (the Couchbase server runs). To build and run it I do:
docker build -t="my/couchbase" .
docker run -itd --name=couchbase -p 11210:11210 -p 8091:7081 -p 8092:8092 my/couchbase
Now for some reason I can't connect to it via HTTP. I tried getting the IP address of the container with docker inspect couchbase | grep IP
and then going to http://container_ip:7081
It tries to get there for a very long time, but eventually times out.
What am I doing wrong?
You need EXPOSE 8091 8092 11210 in the Dockerfile (think of this as "the container listens on these ports") and -p 7081:8091 to get the mapping you seek. In -p it is hostport:containerport order, and Couchbase's web console listens on 8091 inside the container.
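Putting that together, a sketch of the corrected pieces (everything else stays as in the question):
# in the Dockerfile: expose the ports Couchbase actually listens on
EXPOSE 8091 8092 11210

# host port 7081 now forwards to the admin console on container port 8091
docker build -t my/couchbase .
docker run -itd --name=couchbase -p 11210:11210 -p 7081:8091 -p 8092:8092 my/couchbase
The admin console should then be reachable at http://localhost:7081.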