Rundeck from Docker: no logs?

Running Rundeck from Docker (default backend), but I noticed there are no logs. This documentation seems incomplete / not valid for the Docker deployment: https://docs.rundeck.com/docs/administration/maintenance/logs.html
All the log files inside the container at /home/rundeck/server/logs have 0 size.
How can I review the logs when running in Docker?
Thanks,

The execution logs are stored at /home/rundeck/var/logs/rundeck, so a good idea is to mount that path as a volume (to see the logs in your local filesystem). Take a look at this docker-compose example:
version: '3'
services:
  rundeck:
    image: rundeck/rundeck:4.2.1
    environment:
      RUNDECK_GRAILS_URL: http://localhost:4440
      RUNDECK_DATABASE_DRIVER: org.mariadb.jdbc.Driver
      RUNDECK_DATABASE_USERNAME: rundeck
      RUNDECK_DATABASE_PASSWORD: rundeck
      RUNDECK_DATABASE_URL: jdbc:mariadb://mysql/rundeck?autoReconnect=true&useSSL=false&allowPublicKeyRetrieval=true
      RUNDECK_LOGGING_STRATEGY: FILE
    volumes:
      - ./data/logs/:/home/rundeck/var/logs/rundeck/
    ports:
      - 4440:4440
    tty: true
  mysql:
    image: mysql:8
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=rundeck
      - MYSQL_USER=rundeck
      - MYSQL_PASSWORD=rundeck
The service.log content is available via the docker logs command; to follow it, just run docker logs -f <container_id>.
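With the ./data/logs volume mapping above, the execution logs also appear on the host. A quick way to locate and follow them (the per-project file layout shown here is an assumption; check what Rundeck actually writes under the volume):
# list execution log files written into the mounted volume
find ./data/logs -name '*.rdlog'
# follow one of them
tail -f ./data/logs/<project>/job/<job-uuid>/logs/<execution-id>.rdlog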

Related

ECS Fargate application container cannot establish connection with Postgres database container

I am trying to use ecs-cli to push a two-container docker-compose file up to Fargate ECS. This is for a preview environment only. The first container is postgres:12 and the second is hasura/graphql-engine:v1.3.3.
The docker-compose.yml looks like the following
version: '3'
services:
  postgres:
    image: postgres:12
    ports:
      - "5432:5432"
    restart: always
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: postgrespassword
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-east-1
        awslogs-stream-prefix: postgres
  graphql-engine:
    image: hasura/graphql-engine:v1.3.3
    ports:
      - "80:80"
    depends_on:
      - "postgres"
    restart: always
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:postgrespassword@127.0.0.1:5432/postgres
      ## enable the console served by server
      HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
      ## enable debugging mode. It is recommended to disable this in production
      HASURA_GRAPHQL_DEV_MODE: "true"
      HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
      ## uncomment next line to set an admin secret
      # HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
    logging:
      driver: awslogs
      options:
        awslogs-group: tutorial
        awslogs-region: us-east-1
        awslogs-stream-prefix: hasura
volumes:
  db_data:
The ecs-params.yml looks like the following
version: 1
task_definition:
  ecs_network_mode: awsvpc
  task_role_arn: "arn:aws:iam::***:role/ecsTaskExecutionRole"
  task_execution_role: "arn:aws:iam::***:role/ecsTaskExecutionRole"
  task_size:
    cpu_limit: "256"
    mem_limit: "512"
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-***"
        - "subnet-***"
      security_groups:
        - "sg-***"
      assign_public_ip: "ENABLED"
I am using the following command line call to trigger the push
ecs-cli compose --file docker-compose.yml --ecs-params ecs-params.yml --debug service up --deployment-max-percent 100 --deployment-min-healthy-percent 0 --region us-east-1 --cluster "{ARN CLUSTER VALUE}" --create-log-groups --launch-type "FARGATE"
In ECS I can see the new service created and its single Fargate task spinning up. If I open the task, the containers move from PENDING -> RUNNING. After some time, the application container moves to STOPPED, and then eventually the database container moves to STOPPED as well. Once this happens the task stops and a new task goes through the same cycle.
(Logs for the application container and the database container were attached here; they are not reproduced.)
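When a Fargate task cycles like this, the stop reason and per-container exit codes are usually visible via the AWS CLI; a hedged sketch (cluster and task IDs are placeholders):
aws ecs describe-tasks \
  --cluster <cluster-arn-or-name> \
  --tasks <task-id> \
  --query 'tasks[0].{stopped:stoppedReason,containers:containers[*].{name:name,reason:reason,exit:exitCode}}'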
In the docker-compose file I have tried changing the environment variable for the PG database connection string to both postgres://postgres:postgrespassword@127.0.0.1:5432/postgres and postgres://postgres:postgrespassword@localhost:5432/postgres; both result in the same issue.
Any idea what might be going on here? This is inspired by this article: https://dev.to/raphaelmansuy/10-minutes-to-deploy-a-docker-compose-stack-on-aws-illustrated-with-hasura-and-postgres-3f6e
The only difference is that the article uses EC2, not Fargate.
Try adding
links:
  - postgres
to your graphql-engine service instead of depends_on, which doesn't seem to work with AWS ECS.
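A sketch of what the changed service block might look like (only the relevant keys shown; everything else as in the compose file above):
graphql-engine:
  image: hasura/graphql-engine:v1.3.3
  # links replaces depends_on here
  links:
    - postgres
  ports:
    - "80:80"
  restart: always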

How to connect to a postgres database when having two docker-compose files?

First I built an image using this Dockerfile:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/app.jar"]
And I have two docker-compose files, one for production:
version: "3"
services:
app:
image: "demo:latest"
container_name: demo-production-api
restart: always
depends_on:
- "productiondb"
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://productiondb:5432/testdb
- SPRING_DATASOURCE_HIKARI_JDBC_URL=jdbc:postgresql://productiondb:5432/testdb
- SPRING_DATASOURCE_USER=tester
- SPRING_DATASOURCE_PASSWORD=test
- SPRING_JPA_HIBERNATE_DDL_AUTO=update
ports:
- "8440:8443"
productiondb:
image: "postgres:latest"
container_name: productiondb
ports:
- "5430:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- postgres-db-production:/usr/local/var/postgres
volumes:
postgres-db-production:
and one for develop:
version: "3"
services:
app:
image: "demo:latest"
container_name: demo-develop-api
restart: always
depends_on:
- "developdb"
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://developdb:5432/testdb
- SPRING_DATASOURCE_HIKARI_JDBC_URL=jdbc:postgresql://developdb:5432/testdb
- SPRING_DATASOURCE_USER=tester
- SPRING_DATASOURCE_PASSWORD=test
- SPRING_JPA_HIBERNATE_DDL_AUTO=update
ports:
- "8441:8443"
developdb:
image: "postgres:latest"
container_name: developdb
ports:
- "5431:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- postgres-db-develop:/usr/local/var/postgres
volumes:
postgres-db-develop:
I build both images using:
docker-compose -p demo-production-api -f docker-compose.yml up -d && docker-compose -p demo-develop-api -f docker-compose-develop.yml up -d
Now I was able to bring up both environments, demo-develop-api and demo-production-api. The Spring Boot application from the demo-develop-api image runs using the command:
docker run -it demo-develop-api
The application runs but I keep getting this error:
Caused by: java.net.UnknownHostException: productiondb
The above error happened after changing the database host in the application.properties file from localhost to productiondb; at first I was getting the following:
org.postgresql.util.PSQLException: Connection to localhost:5432
refused. Check that the hostname and port are correct and that the
postmaster is accepting TCP/IP connections.
Why is this issue occurring, and how can I solve it?
As far as I can see, the issue might be that you have bound ports 5430 and 5431 to 5432, while you still have the port set to 5432 in your application.properties file. Your application should be trying to connect to the database using port 5430 for production or 5431 for development, respectively. Please check and try this: make the port change in the application.properties file.
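For illustration, assuming the app reaches the database through the host-mapped ports rather than the compose service names, the suggested edit would look roughly like this (ports taken from the compose files above):
# application.properties for production (host port 5430)
spring.datasource.url=jdbc:postgresql://localhost:5430/testdb
# application.properties for develop (host port 5431)
spring.datasource.url=jdbc:postgresql://localhost:5431/testdb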
After a long time of debugging and trials (hopefully this will save people some hours), it turned out that the Spring Boot application inside the container was crashing and restarting without printing any errors, which made me even more confused about why it was not listening on or opening a port. I even suspected it could be a firewall or something. So basically I just tried to get a shell from the container by doing:
docker exec -it <container id or image> sh
Note: since I am using the openjdk:8-jdk-alpine image, do not run the command below; Alpine ships without bash, so you will not get a shell:
docker exec -it <container id or image> bash
Then I tried to get a list of open ports by doing:
netstat -tulpn | grep ":8443"
Port 8443 was not listed. I thought it could be a problem with the Java program not running, so I tried executing the Spring Boot app manually; it ran without any errors but then exited, which confused me even more.
Eventually I found out that the container was restarting because Spring Boot was crashing. So I enabled verbose logging by adding the properties below to application.properties and then rebuilt the image:
logging.level.org.springframework.web=DEBUG
logging.level.org.hibernate=DEBUG
I then retried the steps above (getting a shell and executing app.jar), and it turned out that the database testdb did not exist.
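One way to make sure the database exists is to let the postgres image create it on first start via POSTGRES_DB; a sketch against the develop compose file above (testdb matches the JDBC URLs):
developdb:
  image: "postgres:latest"
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    # creates testdb automatically on first start with an empty data volume
    - POSTGRES_DB=testdb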
UPDATE: To sum up how I modified my project, I created two Spring Boot profiles, one for develop (application-develop.properties) and one for production (application-production.properties).
Inside application-develop.properties the datasource is mapped to the develop postgres container's host and port:
spring.datasource.url=jdbc:postgresql://developdb:5432/testdb
spring.datasource.hikari.jdbc-url=jdbc:postgresql://developdb:5432/testdb
spring.datasource.username=tester
spring.jpa.generate-ddl=true
spring.datasource.password=test
spring.jpa.database-platform=postgres
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQL9Dialect
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
server.port=8443
And for application-production.properties:
spring.datasource.url=jdbc:postgresql://productiondb:5432/testdb
spring.datasource.hikari.jdbc-url=jdbc:postgresql://productiondb:5432/testdb
spring.datasource.username=tester
spring.jpa.generate-ddl=true
spring.datasource.password=test
spring.jpa.database-platform=postgres
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQL9Dialect
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation=true
server.port=8443
And in the develop docker-compose file I just set the Spring Boot profile environment variable:
environment:
  - SPRING_PROFILES_ACTIVE=develop
And in the production docker-compose file:
environment:
  - SPRING_PROFILES_ACTIVE=production
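To double-check which profile a container actually picked up, one option is to grep the Spring Boot startup banner in the container logs (container names from the compose files above; the exact wording of the log line varies by Boot version):
docker logs demo-develop-api | grep -i 'profile'
# expected: a line along the lines of
# The following profiles are active: develop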

How to attach a PostgreSQL volume to a Docker image generated with SBT native packager?

I would like to be able to deploy my app in a pre-prod environment for integration testing using a Docker volume that will expose an instance of PostgreSQL. I'm using Scala v2.12.8 and Play v2.7.
Looking at the environment settings of the SBT native packager, it seems possible to define dockerExposedVolumes in order to attach a DB.
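For reference, dockerExposedVolumes is set in build.sbt; a minimal sketch, assuming sbt-native-packager with the Docker plugin enabled (the path is just the standard postgres data directory, adjust to your app's needs):
// build.sbt
enablePlugins(JavaAppPackaging, DockerPlugin)
dockerExposedVolumes := Seq("/var/lib/postgresql/data")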
Using a normal Docker compose file, I would do something like this:
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgress
- POSTGRES_DB=postgres
ports:
- "5433:5432"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- suruse
volumes:
pgdata:
This configuration has been taken from this SO answer.
I tried searching for config examples but I didn't find anything useful so far. Now I'm wondering how exactly I should define a new Docker volume and expose it to the Docker image created by SBT.
THE WORKING SOLUTION
The final version. I've fully tested it, and it works, exposing the DB on TCP port 5433.
# https://docs.docker.com/samples/library/postgres/
version: "3"
services:
  app-pgsql:
    image: postgres:9.6
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=yourPasswordHere
      - POSTGRES_DB=yourDatabaseNameHere
      - POSTGRES_INITDB_ARGS="--encoding=UTF8"
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local
Launch the compose stack using sbt dockerComposeUp -useStaticPorts, then check that the containers have actually been exposed using docker ps -a. Also, check the log files using the command provided by dockerComposeUp or dockerComposeInstances.
There is an sbt plugin that helps you achieve this:
sbt-docker-compose
With it you can add your database to a docker-compose file and run everything from within sbt.
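A minimal sketch of wiring that plugin in (the version number is illustrative; check the plugin's README for a current one):
// project/plugins.sbt
addSbtPlugin("com.tapad" % "sbt-docker-compose" % "1.0.34")
// build.sbt
enablePlugins(DockerComposePlugin)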
This is standard Docker. Here is an explanation of how to do it for Postgres:
run_postgresql_docker_compose
The docker-compose.yml from that example:
version: '3'
services:
  mydb:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432/tcp
volumes:
  db-data:
    driver: local
As this is the standard Docker way, you will find many more examples.

Get docker-compose up to only run certain containers

So I currently can use docker-compose up test, which only runs my database and my testing scripts. I want to be able to say docker-compose up app, or something like that, to run everything besides testing, so I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible I'd appreciate some links to projects that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command: npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs postgres and the test service. But I'd like to be able to run just my app without having to run my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional YAML file for just testing.
For those that don't want to look it up: simply create a new YAML file, e.g.
docker-compose.dev.yml
Replace dev with whatever you like, except override, which docker-compose up picks up automatically unless otherwise specified.
To run the new file, simply call
docker-compose -f docker-compose.dev.yml up
The -f flag selects a specific file to run. You can combine multiple files to set up different environments, as shown below.
Appreciate the help.
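For completeness, multiple -f flags merge files in order, so a shared base plus an environment-specific overlay might look like this (file names are illustrative):
# base definitions plus dev-only overrides
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# same base with a prod overlay instead
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d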
docker-compose up <service_name> will start only the service you have specified and its dependencies (those listed in its depends_on option).
You may also name multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, so you can start only the last containers of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained via the depends_on option, while the testing container sits below the frontend. So when you hit docker-compose up frontend, Docker will start fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made for exactly that! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up                 # start only your app services
docker-compose --profile test up  # start app and test services
docker-compose run test           # run the test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains combining multiple configuration files to reuse configs for different use cases (test, production, etc.).

Docker containers with volume mounting exits immediately on using docker-compose up

I am using the docker-compose up command to spin up a few containers on an AWS AMI RHEL 7.6 instance. I observe that whichever containers have a volume mount exit with status Exited(1) immediately after starting, while the remaining containers stay up. I tried using tty: true and stdin_open: true, but it didn't help. Surprisingly, the set-up works fine on another instance, which is basically what I am trying to replicate on this new one.
The stopped containers are Fabric v1.2 peers, CAs, and the orderer.
The docker-compose.yml file in the root folder, where I run the docker-compose up command:
version: '2.1'
networks:
  gcsbc:
    name: gcsbc
services:
  ca.org1.example.com:
    extends:
      file: fabric/docker-compose.yml
      service: ca.org1.example.com
fabric/docker-compose.yml
version: '2.1'
networks:
  gcsbc:
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    ports:
      - '7054:7054'
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerorg1
    networks:
      - gcsbc
    hostname: ca.org1.example.com
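A generic first step for containers that exit immediately after start is to read their exit logs and inspect the resolved mounts (container name taken from the compose file above; the SELinux angle is only an assumption worth checking on RHEL):
# why did the CA container exit?
docker logs ca_peerorg1
docker inspect ca_peerorg1 --format '{{.State.ExitCode}}'
# verify the bind mount resolved to the expected host path
docker inspect ca_peerorg1 --format '{{json .Mounts}}'
# on RHEL, SELinux labels on bind-mounted host paths are a common culprit
ls -Z ./artifacts/channel/crypto-config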