I have my app in a Docker container:
FROM keymetrics/pm2-docker-alpine:7
ENV environment development
# Add PM2 modules
RUN pm2 install pm2-mongodb
ADD . .
EXPOSE 3000
CMD ["pm2-docker", "ecosystem.config.js", "--env ${environment}"]
And I use PM2 with this ecosystem.config.js:
module.exports = {
  apps: [
    {
      name: 'api',
      script: 'api/index.js',
      env: {
        PORT: process.env.PORT || 3000,
        API_MONGO_URL: process.env.MONGO_URL || 'mongodb://localhost/api',
      },
      env_production: {
        MONGO_URL: `mongodb://${process.env.MONGODB_PORT_27017_TCP_ADDR}:${process.env.MONGODB_PORT_27017_TCP_PORT}/api`,
      },
    },
  ],
};
This app depends on a MongoDB service, so I run it with a linked container:
docker run -it --name api -e environment=production --link mongodb:mongodb api:latest sh
The problem occurs when I execute this command inside the container:
/app # pm2-docker ecosystem.config.js --env $environment
[STREAMING] Now streaming realtime logs for [all] processes
0|pm2-mong | You have triggered an unhandledRejection, you may have forgotten to catch a Promise rejection:
0|pm2-mong | Error: double colon in host identifier
0|pm2-mong | at module.exports (/root/.pm2/node_modules/mongodb/lib/url_parser.js:89:13)
0|pm2-mong | at connect (/root/.pm2/node_modules/mongodb/lib/mongo_client.js:480:16)
0|pm2-mong | at /root/.pm2/node_modules/mongodb/lib/mongo_client.js:234:7
0|pm2-mong | at Function.MongoClient.connect (/root/.pm2/node_modules/mongodb/lib/mongo_client.js:230:12)
0|pm2-mong | at Object.init (/root/.pm2/node_modules/pm2-mongodb/lib/stats/client.js:53:15)
0|pm2-mong | at Object.init (/root/.pm2/node_modules/pm2-mongodb/lib/stats/index.js:78:12)
0|pm2-mong | at /root/.pm2/node_modules/pm2-mongodb/app.js:33:9
0|pm2-mong | at Object.PMX.initModule (/root/.pm2/node_modules/pmx/lib/pmx.js:116:12)
0|pm2-mong | at Object.<anonymous> (/root/.pm2/node_modules/pm2-mongodb/app.js:4:5)
0|pm2-mong | at Module._compile (module.js:571:32)
PM2 | App [pm2-mongodb] with id [0] and pid [21], exited with code [0] via signal [SIGINT]
PM2 | Starting execution sequence in -fork mode- for app name:pm2-mongodb id:0
PM2 | App name:pm2-mongodb id:0 online
This error is caused by the pm2-mongodb module; I need to configure the module as follows:
pm2 set pm2-mongodb:ip ${MONGODB_PORT_27017_TCP_ADDR}
pm2 set pm2-mongodb:port ${MONGODB_PORT_27017_TCP_PORT}
So the question is: when/where can I do that?
MONGODB_PORT_27017_TCP_ADDR and MONGODB_PORT_27017_TCP_PORT are only available after I run the container.
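One place I could presumably do this (a sketch I have not verified; docker-entrypoint.sh is just a name I picked) is a small entrypoint script baked into the image that runs the pm2 set commands at container start, when the link variables already exist, and then hands off to pm2-docker:

#!/bin/sh
# docker-entrypoint.sh (hypothetical name) runs at container start,
# so the --link variables are already in the environment
pm2 set pm2-mongodb:ip "${MONGODB_PORT_27017_TCP_ADDR}"
pm2 set pm2-mongodb:port "${MONGODB_PORT_27017_TCP_PORT}"
# run via the shell so ${environment} actually expands (the exec-form CMD
# in the Dockerfile above would not substitute it)
exec pm2-docker ecosystem.config.js --env "${environment}"

with the Dockerfile ending in something like:

COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
CMD ["/docker-entrypoint.sh"]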
I want to unit-test my app, which uses a Postgres database inside a Docker container.
EDIT: based on the suggested answer I modified the Dockerfile:
FROM postgres
USER postgres
RUN sleep 2 # remark 1
RUN initdb # remark 2
RUN sleep 3 # remark 1
RUN psql --host=localhost -l
The remarks are:
1. From this reference: "Try putting a sleep in there and see if it's still a problem"
2. From the docs: "The default postgres user and database are created in the entrypoint with initdb."
Here is the Dockerfile from the original question:
FROM postgres
COPY input.json .
RUN createdb -h localhost -p 7654 -U moish myLovelyAndTemporaryDb
#
# [ 1 ] run application on input.json
# [ 2 ] check db content after run
#
When I use the above Dockerfile I seem to be missing something:
(The errors from the edited version are the same)
$ docker build --tag host --file Dockerfile .
[+] Building 0.3s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 125B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/postgres:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 40B 0.0s
=> CACHED [1/3] FROM docker.io/library/postgres 0.0s
=> [2/3] COPY input.json . 0.0s
=> ERROR [3/3] RUN createdb -h localhost -p 7654 -U moish myLovelyAndTemporaryDb 0.2s
------
> [3/3] RUN createdb -h localhost -p 7654 -U moish myLovelyAndTemporaryDb:
#7 0.188 createdb: error: connection to server at "localhost" (127.0.0.1), port 7654 failed: Connection refused
#7 0.188 Is the server running on that host and accepting TCP/IP connections?
#7 0.188 connection to server at "localhost" (::1), port 7654 failed: Cannot assign requested address
#7 0.188 Is the server running on that host and accepting TCP/IP connections?
------
The Postgres database starts only after you create a container based on the postgres image; the docker build process doesn't run the entrypoint script. You might need a bash script or CI pipeline where you first start a postgres container and then use it in your unit tests:
$ docker run --name mypg -p 5432:5432 -e POSTGRES_PASSWORD=mypgpass -d postgres:9
# copy a script to the mypg container
$ docker cp run.sh mypg:/root/run.sh
# run the script
$ docker exec mypg bash /root/run.sh
...
# use postgres client on your host to connect to mypg container
$ PGPASSWORD="mypgpass" psql -U postgres -p 5432 -h localhost -c "select version()"
version
--------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 9.6.24 on x86_64-pc-linux-gnu (Debian 9.6.24-1.pgdg90+1), compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
(1 row)
Postgres container docs
Postgres client authentication
EDIT:
By trying to run initdb, psql, etc. directly in the Dockerfile, you are reinventing docker-entrypoint.sh.
During the build step of the postgres image you cannot run postgres commands; the Postgres database will only be available after you run the container.
As specified in the postgres Docker documentation you can add customization to your postgres instance through scripts placed in docker-entrypoint-initdb.d directory.
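As a rough sketch (init-db.sql is just a name made up for illustration), that customization could look like this:

FROM postgres
# Every *.sql / *.sh file in this directory is executed by the image's
# entrypoint the first time the container starts with an empty data directory
COPY init-db.sql /docker-entrypoint-initdb.d/

where init-db.sql contains something like CREATE DATABASE "myLovelyAndTemporaryDb";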
Alternatively you could use a RUN directive to start the postgres database and after that run the postgres commands you want (making sure to wait for the DB to accept connections), as mentioned here.
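A rough, unverified sketch of that alternative, assuming the stock postgres image (where PGDATA is preset and owned by the postgres user); note that anything written to the data directory during the build is discarded at run time because the image declares it as a VOLUME, so this mainly helps for checks you want to run during the build itself:

FROM postgres
USER postgres
# start the server inside one RUN step, run the commands, then stop it again
RUN initdb && \
    pg_ctl -w start && \
    createdb myLovelyAndTemporaryDb && \
    psql -l && \
    pg_ctl -m fast -w stop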
Another side note: I personally avoid using real databases for unit-testing applications. To me, it's always better to mock the database for unit tests; in Python you can do this with unittest.mock.
Here is a complete answer based on the concepts of slava-kuravsky and mello:
$ docker build --tag host --file Dockerfile .
$ docker run -d -t --name pg host
$ docker exec pg bash run.sh
The script can do whatever I want; currently it only lists the databases:
$ cat run.sh # <--- copied during docker build
echo "Hello Postgres World"
psql --host=localhost -l
The Dockerfile only does initialization:
$ cat Dockerfile
FROM postgres
USER postgres
COPY run.sh . # <--- the testing script
RUN sleep 2 # <--- probably not needed anymore
RUN initdb
RUN sleep 3 # <--- probably not needed anymore
When I perform the three commands above I get what I expect:
$ docker build --tag host --file Dockerfile .
[+] Building 7.2s (10/10) FINISHED
# ... omitted for brevity ...
$ docker run -d -t --name pg host
608ac7324e838924c9a5d0cfe65c8000e33350b86faf9df4511ef5fcf7440597
$ docker exec pg bash run.sh
Hello Postgres World
List of databases
Name | Owner | Encoding | Collate | Ctype | ICU Locale | Locale Provider | Access privileges
-----------+----------+----------+------------+------------+------------+-----------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 | | libc |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
(3 rows)
I am trying to make an initial build of a PWA (Progressive Web App) that is vanilla JS/HTML/CSS using TWA (Trusted Web Activity) and Bubblewrap, but get the message cli ERROR spawn jarsigner ENOENT.
The $ bubblewrap init --manifest=https://my-pwa.com/manifest.json step was seemingly successful.
However, when I go to build the project I get the following:
$ bubblewrap build
,-----. ,--. ,--. ,--.
| |) /_,--.,--| |-.| |-.| |,---.,--. ,--,--.--.,--,--.,---.
| .-. | || | .-. | .-. | | .-. | |.'.| | .--' ,-. | .-. |
| '--' ' '' | `-' | `-' | \ --| .'. | | \ '-' | '-' '
`------' `----' `---' `---'`--'`----'--' '--`--' `--`--| |-'
`--'
Please, enter passwords for the keystore /home/my-user/AndroidStudioProjects/android.keystore and alias android.
? Password for the Key Store: ***************
? Password for the Key: ***************
Building the Android App...
- Generated Android APK at ./app-release-signed.apk
cli ERROR spawn jarsigner ENOENT
Further context:
$ bubblewrap doctor
doctor Your jdkpath and androidSdkPath are valid.
$ node -v
v19.3.0
$ printf "%s\n" $PATH
/home/my-user/.local/share/nvm/v19.3.0/bin
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin
/usr/games
/usr/local/games
/snap/bin
/snap/bin
$ cat ~/.bubblewrap/config.json
{"jdkPath":"/usr/lib/jvm/default-java/","androidSdkPath":"/home/my-user/Android/Sdk/"}
Answers or any clues on where to investigate next are appreciated, thanks.
My issue was resolved by putting a copy of OpenJDK 11 in my home directory and updating the jdkPath in /home/my-user/.bubblewrap/config.json with its location:
{"jdkPath":"/home/my-user/Android/jdk-11.0.17+8","androidSdkPath":"/home/my-user/Android/Sdk/"}
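Since jarsigner ships with the JDK (not the JRE), a quick sanity check is to confirm the binary actually exists under each jdkPath (the paths below are the ones from my config; adjust to yours):

$ ls /usr/lib/jvm/default-java/bin/jarsigner            # the old jdkPath; if this fails, it explains the ENOENT
$ ls /home/my-user/Android/jdk-11.0.17+8/bin/jarsigner  # the JDK copy that resolved it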
I'm trying to run a kubectl command from ansible.
Basically the command will tell me if at least one pod is running from a deployment.
kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == "MinimumReplicasAvailable") | .status' | tr -d '"'
I tried to run it from a playbook but I'm getting
Unable to connect to the server: net/http: TLS handshake timeout
This is my playbook:
- hosts: master
  gather_facts: no
  become: true
  tasks:
    - name: test command
      shell: kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == "MinimumReplicasAvailable") | .status' | tr -d '"'
      register: result
This is the output from ansible:
changed: [k8smaster01.test.com] => {
"changed": true,
"cmd": "kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == \"MinimumReplicasAvailable\") | .status' | tr -d '\"'",
"delta": "0:00:10.507704",
"end": "2019-04-02 20:59:17.882277",
"invocation": {
"module_args": {
"_raw_params": "kubectl get deploy sample-v1-deployment -o json -n sample | jq '.status.conditions[] | select(.reason == \"MinimumReplicasAvailable\") | .status' | tr -d '\"'",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"rc": 0,
"start": "2019-04-02 20:59:07.374573",
"stderr": "Unable to connect to the server: net/http: TLS handshake timeout",
"stderr_lines": [
"Unable to connect to the server: net/http: TLS handshake timeout"
],
"stdout": "",
"stdout_lines": []
}
I can run the command manually on the master server without problems. I was also able to use the k8s module to create different things on my Kubernetes cluster.
I know there is a kubectl module in Ansible; could that be the problem?
Thanks
I found a couple of workarounds.
One was to use the k8s_facts module:
- name: Ensure running application
  k8s_facts:
    namespace: sample
    kind: Pod
    label_selectors:
      - app=sample-v1-app
  register: pod_list
  until: pod_list.resources[0].status.phase == 'Running'
  delay: 10
  retries: 3
It's simple and gets the work done.
The second workaround was to use the raw module instead of shell or command:
- name: Get running status
  raw: kubectl get deploy sample-v1-deployment -o json -n sample | jq -r '.status.conditions[] | select(.reason == "MinimumReplicasAvailable") | .status'
I'm not sure about using raw; it looks like a hammer for a simple task. But reading about the module makes me think this problem is related to the syntax (quotes, double quotes, |) more than the command itself. From the module's documentation:
Executes a low-down and dirty SSH command, not going through the
module subsystem. This is useful and should only be done in a few
cases. A common case is installing python on a system without python
installed by default. Another is speaking to any devices such as
routers that do not have any Python installed.
Looks like you can connect to your kube-apiserver on the master from a shell, but not from ansible. The error message indicates differences in the kubeconfig.
You can see the kube-apiserver endpoint configured on your ~/.kube/config like this:
$ kubectl config view --minify -o jsonpath='{.clusters[].cluster.server}'
It's typically something like this: https://<servername>:6443. You can try running the command from ansible to see if you get the same kube-apiserver.
Another thing you can try is to print the value of the KUBECONFIG env variable from Ansible to see if it's set to something different from ~/.kube/config.
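For example, an ad-hoc run against the same host (a sketch; adjust the host pattern and inventory to your setup) shows what kubectl sees when it runs under Ansible:

$ ansible master -b -m shell -a 'echo "KUBECONFIG=$KUBECONFIG"; kubectl config view --minify -o jsonpath="{.clusters[].cluster.server}"'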
Hope it helps!
I want to write a script to manage WildFly start and deploy, but I'm having trouble. To check whether the server has started, I found the command
./jboss-cli.sh -c command=':read-attribute(name=server-state)' | grep running
But when the server is starting, because the controller is not available, ./jboss-cli.sh -c fails to connect and returns an error.
Is there a better way to check whether WildFly started completely?
I found a better solution. The command is
netstat -an | grep 9990 | grep LISTEN
This checks the state of the management port (9990) before WildFly is ready to accept management commands.
After that, use ./jboss-cli.sh -c command=':read-attribute(name=server-state)' | grep running to check whether the server has started. Change the port if the management port config is not the default 9990.
Here is my start & deploy script; the idea is to keep checking until the server has started, then use the jboss-cli command to deploy my application. The log is printed to the screen, so there is no need to use another shell to tail the log file.
#!/bin/sh
totalRow=0
printLog(){ # output the new lines in server.log to the screen
    local newTotal=$(awk 'END{print NR}' ./standalone/log/server.log) # quicker than wc -l
    local diff=$(($newTotal-$totalRow))
    tail -n $diff ./standalone/log/server.log
    totalRow=$newTotal
}
nohup bin/standalone.sh > /dev/null 2>&1 &
echo '======================================== Jboss-eap-7.1 is starting now ========================================'
while true # check if the management port is ready
do
    sleep 1
    if netstat -an | grep 9990 | grep LISTEN
    then
        printLog
        break
    fi
    printLog
done
while true # check if the server started successfully
do
    if bin/jboss-cli.sh --connect command=':read-attribute(name=server-state)' | grep running
    then
        printLog
        break
    fi
    printLog
    sleep 1
done
echo '======================================== Jboss-eap-7.1 has started!!!!!! ========================================'
bin/jboss-cli.sh --connect command='deploy /bcms/jboss-eap-7.1/war/myApp.war' &
tail -f -n0 ./standalone/log/server.log
I started a MongoDB container like so:
docker run -d -p 27017:27017 --net=cdt-net --name cdt-mongo mongo
I saw that my MongoDB container exited:
0e35cf68a29c mongo "docker-entrypoint.sā¦" Less than a second ago Exited (1) 3 seconds ago cdt-mongo
I checked the Docker logs and I see:
$ docker logs 0e35cf68a29c
about to fork child process, waiting until server is ready for connections.
forked process: 21
2018-01-12T23:42:03.413+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2018-01-12T23:42:03.417+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.aLmNg7ilAm: No space left on device
ERROR: child process failed, exited with error number 1
Does anyone know what this error is about? Not enough space in the container?
I had to delete old Docker images to free up space; here are the commands I used:
# remove all unused / orphaned images
echo -e "Removing unused images..."
docker rmi -f $(docker images --no-trunc | grep "<none>" | awk "{print \$3}") 2>&1 | cat;
echo -e "Done removing unused images"
# clean up stuff -> using these instructions https://lebkowski.name/docker-volumes/
echo -e "Cleaning up old containers..."
docker ps --filter status=dead --filter status=exited -aq | xargs docker rm -v 2>&1 | cat;
echo -e "Cleaning up old volumes..."
docker volume ls -qf dangling=true | xargs docker volume rm 2>&1 | cat;
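To see where the space actually goes before and after a cleanup, Docker can report usage per category (images, containers, local volumes, build cache):

$ docker system df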
We've experienced this problem recently while using docker-compose with mongo and a bunch of other services. There are two fixes which have worked for us.
Clear down unused stuff
# close down all services
docker-compose down
# clear unused docker images
docker system prune
# press y
Increase the disk image size available to Docker; this will depend on your installation of Docker. On Mac, for example, it defaults to 64 GB and we doubled it to 128 GB via the UI.
We've had this problem in both Windows and Mac and the above fixed it.