JanusGraph docker compose environment - docker-compose

I'm trying to set up JanusGraph with Docker Compose, using the following lines in a docker-compose.yml file:
janusgraph:
  image: janusgraph/janusgraph:latest
  environment:
    JANUS_CONFIG_DIR: /conf
    JANUS_PROPS_TEMPLATE: cql
    gremlinserver.graphManager: org.janusgraph.graphdb.management.JanusGraphManager
    gremlinserver.channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
    gremlinserver.graphs.graph: conf/janusgraph-cql-server.properties
    janusgraph.storage.hostname: janusgraph_cassandra
    janusgraph.gremlin.graph: org.janusgraph.core.ConfiguredGraphFactory
    janusgraph.graph.graphname: ConfigurationManagementGraph
  depends_on:
    - cassandra
    - solr
  ports:
    - 8182:8182
  networks:
    - janusgraph
  restart: always
  container_name: janusgraph
The build runs just fine, but the two configuration files (janusgraph-server.yml, janusgraph-cql-server.properties) remain unchanged. Is there something I am missing here?

Related

How to set up kafka-kinesis-connector if I use a Kafka container?

I'm a little bit confused. I've been following this guide to get started, and this installation method:
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-kafka-connector-msk/
Set up the AWS CLI and configure it
Install Maven
Compile the connector
Set the classpath with the generated jar
Set up the connector's properties file
FYI: I have a docker-compose file that creates all my containers (Kafka, MQTT, etc.)
(all of the above is set up on-premise)
I executed all of this on my machine itself, not in the Kafka container, so for the last step, how would that work when I try to run the connector in standalone mode?
version: '3'
services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      - TZ=America/Toronto
      - NODE_RED_ENABLE_PROJECTS=true
    restart: always
  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"
  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce
  zookeeper:
    image: zookeeper:3.4
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/data"
  kafka:
    image: wurstmeister/kafka:1.0.0
    container_name: kafka
    ports:
      - "9092:9092"
      - "9093:9093"
    volumes:
      - "kafka_data:/data"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=10.0.0.129:2181
      - KAFKA_ADVERTISED_HOST_NAME=10.0.0.129
      - JMX_PORT=9093
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_LOG_RETENTION_HOURS=1
      - KAFKA_MESSAGE_MAX_BYTES=10000000
      - KAFKA_REPLICA_FETCH_MAX_BYTES=10000000
      - KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000
      - KAFKA_NUM_PARTITIONS=2
      - KAFKA_DELETE_RETENTION_MS=1000
    depends_on:
      - zookeeper
    restart: on-failure
  cmak:
    image: hlebalbau/kafka-manager:1.3.3.16
    container_name: kafka-manager
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=10.0.0.129
      - APPLICATION_SECRET=letmein
    command: -Dconfig.file=/kafka-manager/conf/application.conf -Dapplication.home=/kafkamanager -Dhttp.port=9080
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
I guess I have to go into my Kafka container and run the code below, but how can I reference my machine's path... I'm stuck here, or perhaps I'm missing something:
./bin/connect-standalone.sh {{path_from_machine_where_jar_is}}/kinesis-kafka-connector/config/worker.properties {{path_from_machine_where_jar_is}}/kinesis-kafka-connector/config/kinesis-streams-kafka-connector.properties
Or do I have to run all the previous steps directly in my Kafka container?
I was also thinking of just copying my jar file and moving it into my Kafka container:
docker cp /hostfile (container_id):/(to_the_place_you_want_the_file_to_be)
Thank you!
guess I have to go into my Kafka container and run the below code
No. Containers should only run one process, which here is the Kafka server.
So, either you download Kafka locally and run the Connect scripts from your host,
or you simply add a new container for Kafka Connect, which will run Connect in distributed mode rather than standalone.
In either case, yes, you need to copy (or mount) the jar into Connect's plugin path.
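As a rough sketch only (the image, tag, topic names, and mount path below are assumptions to adapt, not taken from your setup), a Connect service added to your compose file could look roughly like this, with the jar built by Maven mounted into the plugin path:
  kafka-connect:
    image: confluentinc/cp-kafka-connect:6.2.0
    container_name: kafka-connect
    depends_on:
      - kafka
    ports:
      - "8083:8083"
    environment:
      - CONNECT_BOOTSTRAP_SERVERS=kafka:9092
      - CONNECT_REST_ADVERTISED_HOST_NAME=kafka-connect
      - CONNECT_GROUP_ID=connect-cluster
      # internal topics; replication factor 1 because there is a single broker
      - CONNECT_CONFIG_STORAGE_TOPIC=_connect-configs
      - CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR=1
      - CONNECT_OFFSET_STORAGE_TOPIC=_connect-offsets
      - CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR=1
      - CONNECT_STATUS_STORAGE_TOPIC=_connect-status
      - CONNECT_STATUS_STORAGE_REPLICATION_FACTOR=1
      - CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      # include the directory the connector jar is mounted into
      - CONNECT_PLUGIN_PATH=/usr/share/java,/plugins
    volumes:
      # mount the jar produced by "mvn package" into the plugin path
      # (host path is an assumption; point it at wherever your build output lives)
      - ./kinesis-kafka-connector/target:/plugins/kinesis-kafka-connector
Once Connect is up, you would create the Kinesis connector through its REST API on port 8083 instead of running connect-standalone.sh.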
Alternatively, run MSK rather than Kinesis and produce your Kafka data there rather than locally.

Docker: Cannot run multiple services with docker-compose

I set up Docker Compose for my project with 2 services: spring-boot and postgresql. I created a Dockerfile and docker-compose.yml as below:
Dockerfile :
FROM openjdk:8-jdk-alpine
MAINTAINER linhan.com
COPY target/LinhAn-0.0.1-SNAPSHOT.jar linhan-server-1.0.0.jar
ENTRYPOINT ["java","-jar","/linhan-server-1.0.0.jar"]
docker-compose.yml:
version: '2'
services:
  spring_boot:
    image: 'linhan'
    build: .
    container_name: api
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    environment:
      - SPRING_DATASOURCE_URL=jdbc:jdbc:postgresql://localhost:5432/test_db
      - SPRING_DATASOURCE_USERNAME=user
      - SPRING_DATASOURCE_PASSWORD=123456
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
  postgres:
    image: 'postgres:13.1-alpine'
    container_name: db
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=123456
Then, when I type docker-compose up in the terminal, only postgres runs; spring boot still does not start.
I searched Google for a solution, but nothing seems to help. Please help me, thanks a lot!
I think you need to change the SPRING_DATASOURCE_URL to reference your service name instead of localhost. The service name is resolved automatically, since all services are part of the default network created by docker-compose. Note also that your URL has a duplicated jdbc: prefix; it should appear only once:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/test_db
Also, for clarity I would suggest you add the port to your docker-compose postgres service, so it is clear which port is being used, even if it is the default:
postgres:
  image: 'postgres:13.1-alpine'
  container_name: db
  ports:
    - "5432"
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=123456
Also, another suggestion would be to use a healthcheck to determine when your database service actually becomes available, instead of a plain depends_on. The short depends_on form marks the dependency as fulfilled as soon as the container is running, regardless of whether the database inside it is ready; see the sketch below.
Either that, or you can add application logic that retries the database connection in case of failure.
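A minimal sketch of that healthcheck approach, assuming Compose file format 2.1+ (or a recent implementation of the Compose specification), which is where the condition form of depends_on is supported; the credentials are the ones from your file:
postgres:
  image: 'postgres:13.1-alpine'
  container_name: db
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=123456
  healthcheck:
    # pg_isready exits 0 once the server accepts connections
    test: ["CMD-SHELL", "pg_isready -U user"]
    interval: 5s
    timeout: 3s
    retries: 10
spring_boot:
  depends_on:
    postgres:
      # start the app only after the database reports healthy
      condition: service_healthy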

Integrate elasticsearch with multiple mongodb in docker-compose

I have implemented a microservice architecture with several servers and databases. I have installed elasticsearch with Docker, and when I do docker-compose up, everything seems to run fine.
However, I would like to integrate elasticsearch with the several databases (2 mongodb instances in the sample below) in the system. How do I sync the two mongodb instances, running in two different containers, with elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true
weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend
newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend
news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to just add an elasticsearch section to any docker-compose file and start it; all of these are independent docker containers, and as long as their exposed host ports don't interfere with each other and you have the correct configuration in place, it should work.
Please refer to elasticsearch multi-docker installation using a docker file for more info.
NOTE: You have not mentioned what exact issue you are facing; you have only mentioned that all the docker containers are running fine, so please explain in detail what exactly you are trying to solve.

How to attach a PostgreSQL volume to a Docker image generated with SBT native packager?

I would like to be able to deploy my app in a pre-prod environment for integration testing using a Docker volume that will expose an instance of PostgreSQL. I'm using Scala v2.12.8 and Play v2.7.
Looking at the environment settings of the SBT native packager it seems possible to define dockerExposedVolumes in order to attach a DB.
Using a normal Docker compose file I would do something like that:
version: "3"
services:
db:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgress
- POSTGRES_DB=postgres
ports:
- "5433:5432"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- suruse
volumes:
pgdata:
This configuration has been taken from this SO answer.
I tried searching for config examples, but I haven't found anything useful so far. Now I'm wondering how exactly I should define a new Docker volume and then expose it to the Docker image created by SBT.
THE WORKING SOLUTION
This is the final version. I've fully tested it, and it works, exposing the DB on TCP port 5433.
# https://docs.docker.com/samples/library/postgres/
version: "3"
services:
  app-pgsql:
    image: postgres:9.6
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=yourPasswordHere
      - POSTGRES_DB=yourDatabaseNameHere
      - POSTGRES_INITDB_ARGS="--encoding=UTF8"
    ports:
      - "5433:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
    driver: local
Launch the docker compose setup using sbt dockerComposeUp -useStaticPorts, and then check whether the containers have actually been exposed using docker ps -a. Also, check the log files using the command provided by dockerComposeUp or dockerComposeInstances.
There is an sbt plugin that helps you achieve this:
sbt-docker-compose
With that you can add your database to a docker compose file and you can run everything within sbt.
This is standard Docker. Here is an explanation of how to do it for Postgres:
run_postgresql_docker_compose
The docker-compose.yml from that example:
version: '3'
services:
  mydb:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432/tcp
volumes:
  db-data:
    driver: local
As this is the standard Docker way of doing things, you will find plenty more examples.

with docker-compose my app container cannot see the mongodb container

Here is my docker compose file:
version: "3.3"
services:
test:
image: test
networks:
- mongo_net
ports:
- 4000:80
depends_on:
- mongodb
links:
- mongodb
mongodb:
image: mongo:latest
networks:
- mongo_net
ports:
- 27017:27017
volumes:
- local_data:/data/db
volumes:
local_data:
networks:
mongo_net:
driver: bridge
The 'test' image cannot find the 'mongodb' instance.
My assumption is that the 'links' section would connect the two, but it is not happening.
What am I missing?
For your compose file, try just using depends_on. links is deprecated, and that may be why you are currently getting issues, since this is a v3 compose file.
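A small sketch of what that would look like for your test service, keeping the names from your own file (with both services on mongo_net, the hostname mongodb already resolves to the Mongo container, so no links entry is needed):
  test:
    image: test
    networks:
      - mongo_net
    ports:
      - 4000:80
    depends_on:
      - mongodb
    # no links needed: both services are on mongo_net, so the hostname
    # "mongodb" resolves to the mongo container automatically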