What I do ATM
I have a problem with service discovery in my docker-compose.yaml: a dynamic number of services named 'foo1', 'foo2', 'foo3', ... and one gateway service 'gw'.
version: "3.8"
services:
  foo1:
    ...
  foo2:
    ...
  foo3:
    ...
  foo4:
    ...
  gw:
    environment:
      - SERVER_URLS="foo1;foo2;foo3;foo4"
The main reason for the SERVER_URLS environment variable in 'gw' is that a 'docker compose up' will then restart 'gw', so I can add a new 'foo5' service by just adding the service itself and extending SERVER_URLS.
Wish - automation
My problem is that SERVER_URLS needs to be extended manually, so it can be forgotten. I wonder if there is something like:
gw:
  environment:
    - SERVER_URLS=foo*
Where docker compose generates a new list and only restarts the gw service if there is a change.
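Compose has no such wildcard expansion, but as a stopgap you could wrap the up command in a small script that regenerates the list. A rough sketch, assuming 'gw' reads the value via - SERVER_URLS=${SERVER_URLS} so Compose interpolates it from the project's .env file:

#!/bin/sh
# Sketch: rebuild SERVER_URLS from the services defined in the compose file.
# Lists every service name and keeps the ones starting with "foo".
urls=$(docker compose config --services | grep '^foo' | paste -sd ';' -)

# Rewrite .env; 'up -d' then only recreates gw if the resolved value changed.
echo "SERVER_URLS=$urls" > .env
docker compose up -d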
I'm getting an error when the configuration file is set.
My host is Ubuntu 22.04.
Inside the Docker container the user is rabbitmq; running id -u rabbitmq shows a UID of 999.
I changed the file's owner using chown 999 advanced.config, but the same error still persists:
Failed to load advanced configuration file "/etc/rabbitmq/advanced.config": unknown POSIX error
Error during startup: {error,failed_to_read_advanced_configuration_file}
version: "3.2"
services:
  rabbitmq2:
    image: rabbitmq:3-management
    hostname: rabbitmq2
    container_name: 'rabbitmq2'
    ports:
      - "5672:5672"
      - "15672:15672"
      - "5552:5552"
    volumes:
      - ./advanced/rabbitmq2/advanced.config:/etc/rabbitmq/advanced.config
      # or using:
      # - type: bind
      #   source: $PWD/advanced/rabbitmq2/advanced.config
      #   target: /etc/rabbitmq/advanced.config
    environment:
      - RABBITMQ_ADVANCED_CONFIG_FILE=/etc/rabbitmq/advanced.config
If I use another location or file name, the container runs, but RabbitMQ doesn't load the configuration file.
I also changed the content of the file and it still didn't work (RabbitMQ can't load the file); I tried a blank file as well as some actual configuration, for example:
[
 %% 4 replicas by default, only makes sense for nine node clusters
 {rabbit, [{quorum_cluster_size, 4},
           {quorum_commands_soft_limit, 512}]}
]
Be sure the format is correct:
[
 %% 4 replicas by default, only makes sense for nine node clusters
 {rabbit, [{quorum_cluster_size, 4},
           {quorum_commands_soft_limit, 512}]}
].
Note the trailing period.
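If the format is already correct, it is also worth checking what the rabbitmq user actually sees inside the container, since a bind-mounted file must be readable by UID 999. For example, using the container from the compose file above:

# Inspect ownership and permissions of the mounted file from inside the container
docker exec rabbitmq2 ls -l /etc/rabbitmq/advanced.config

# Making the file world-readable on the host is an alternative to chown 999
chmod 644 ./advanced/rabbitmq2/advanced.config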
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The config server is reachable at localhost:8888, but when I deploy my applications on SCDF the following error occurs:
Fetching config from server at : http://localhost:8888
2021-07-30 14:58:53.535 INFO 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2021-07-30 14:58:53.535 WARN 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Could not locate PropertySource ([ConfigServerConfigDataResource#3de88f64 uris = array<String>['http://localhost:8888'], optional = true, profiles = list['default']]): I/O error on GET request for "http://localhost:8888/backend-service/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
The application(s) deploy successfully on SCDF apart from the config server connection. The only property I specify in SCDF is the docker network. I'm using spring.config.import and am not using any bootstraps. This all works correctly when deployed locally but the microservices can't connect to the config server when deployed on SCDF.
Spring Boot Version: 2.5.1
app properties
spring.application.name=backend-service
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.config.import=optional:configserver:http://localhost:8888
config server properties
spring.cloud.config.server.git.uri=...
management.endpoints.web.exposure.include=*
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.cloud.bus.id=my-config-server
spring.cloud.stream.rabbit.bindings.springCloudBus.consumer.declareExchange=false
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.cloud.bus.enabled=true
spring.cloud.bus.refresh.enabled: true
spring.cloud.bus.env.enabled: true
server.port=8888
docker-compose.yml
version: '3.1'
services:
  h2:
    ...
  rabbitmq-container:
    image: rabbitmq:3.7.14-management
    hostname: dataflow-rabbitmq
    expose:
      - '5672'
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - scdfnet
  dataflow-server:
    ...
    networks:
      - scdfnet
  app-import:
    ...
    networks:
      - scdfnet
  skipper-server:
    ...
    networks:
      - scdfnet
  configserver-container:
    image: ...
    ports:
      - "8888:8888"
    expose:
      - '8888'
    environment:
      - spring_rabbitmq_host=rabbitmq-container
      - spring_rabbitmq_port=5672
      - spring_rabbitmq_username=guest
      - spring_rabbitmq_password=guest
    depends_on:
      - rabbitmq-container
    networks:
      - scdfnet
networks:
  scdfnet:
    external:
      name: scdfnet
volumes:
  h2-data:
For anyone else having this problem, I have found two ways of solving it. The problem is that once the Spring Boot application is containerized, the localhost referred to in the properties file resolves to the application container's own virtual network, not to your local machine.
There are numerous Stack Overflow answers for this same error, but they all center on corrections to bootstrap properties. However, bootstrap context initialization has been deprecated since Spring Boot 2.4.
The first solution is to use your IPv4 address instead of localhost.
spring.config.import=configserver:http://<insert IPv4 address>:8888
For Example:
spring.config.import=configserver:http://10.6.39.148:8888
A much better solution than hard-coding addresses is to reference the config server container running in Docker Compose:
spring.config.import=optional:configserver:http://configserver-container:8888
Make sure that all of the Docker Compose services are running on the same network (scdfnet in my case). Note that this address will only resolve when running under Docker Compose, so if you are building the Maven project in Eclipse, you may need to remove or disable your tests to build successfully. That might be unnecessary; it could just be that some property I failed to copy into my local application.properties file is causing the context tests to fail. According to the documentation, the optional label should allow the config client to run even if contact cannot be established with the config server.
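One way to verify the container-to-container route independently of Spring (assuming the config server has the Spring Boot actuator on its classpath) is to run a throwaway curl container on the same network:

# Should print {"status":"UP"} if configserver-container is reachable on scdfnet
docker run --rm --network scdfnet curlimages/curl -s http://configserver-container:8888/actuator/health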
I want to proxy MongoDB behind nginx, and I came across the following code for that purpose.
My question is: how can I enable the "stream" module in nginx?
stream {
    server {
        listen 27020;
        proxy_connect_timeout 5s;
        proxy_timeout 20s;
        proxy_pass mongodb_host;
    }

    upstream mongodb_host {
        server xx.xxx.xxx.xx:27017;
    }
}
Technically, recompiling with the --with-stream option is required, as per the nginx docs. Fortunately, there are plenty of existing images built for exactly that purpose. I've used this one personally: https://hub.docker.com/r/tekn0ir/nginx-stream
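To check whether a given image's nginx binary was actually built with stream support before committing to it, you can print its configure arguments and look for --with-stream:

# nginx -V prints the version and the configure arguments it was built with
docker run --rm --entrypoint nginx tekn0ir/nginx-stream:latest -V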
If using Compose, your docker-compose.yml file would look something like this:
version: '3'
services:
  service1:
    ...
  service2:
    ...
  nginx:
    image: tekn0ir/nginx-stream:latest
    ports:
      - "27020:27020"
    volumes:
      - /usr/local/nginx/conf/http:/opt/nginx/http.conf.d
      - /usr/local/nginx/conf/stream:/opt/nginx/stream.conf.d
Create one or more files in the /usr/local/nginx/conf/stream directory containing your stream { ... } configuration; the file names just need to end with the .conf extension. Then you can put any http config files in /usr/local/nginx/conf/http.
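For example, the question's MongoDB proxy could be dropped in like this (the file name mongodb.conf is arbitrary; only the .conf suffix matters):

mkdir -p /usr/local/nginx/conf/stream
# Save the question's stream { ... } block as a .conf file in the mounted directory
cat > /usr/local/nginx/conf/stream/mongodb.conf <<'EOF'
stream {
    server {
        listen 27020;
        proxy_pass mongodb_host;
    }
    upstream mongodb_host {
        server xx.xxx.xxx.xx:27017;
    }
}
EOF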
Straight from the nginx documentation:
"The ngx_stream_core_module module is available since version 1.9.0. This module is not built by default, it should be enabled with the --with-stream configuration parameter."
Here's my goal: I would like to configure email for my GitLab server. I followed a lot of tutorials, but I can't make it work.
My configuration is the following: I've got a reverse proxy in a Docker container, and my GitLab server is also in a Docker container.
About versions:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.16.1, build 6d1ac21
Here's my docker-compose.yml file
version: '3.3'

networks:
  proxy:
    external: true
  internal:
    external: false

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    environment:
      - TZ=Europe/Paris
      - GITLAB_TIMEZONE=Paris
      - IMAP_USER=USER@GMAIL.COM
      - IMAP_PASSWORD=MYGMAILPASS
      - GITLAB_INCOMING_EMAIL_ADDRESS=USERGMAIL+%{key}@gmail.com
    volumes:
      - /srv/gitlab/config:/etc/gitlab
      - /srv/gitlab/logs:/var/log/gitlab
      - /srv/gitlab/data:/var/opt/gitlab
    restart: always
    labels:
      - traefik.backend=gitlab
      - traefik.frontend.rule=Host:git.domain.com
      - traefik.docker.network=proxy
      - traefik.port=80
      - traefik.frontend.entryPoints=http,https
    networks:
      - internal
      - proxy
I followed this tutorial, which seems to be good: https://github.com/sameersbn/docker-gitlab#available-configuration-parameters
I must be missing something in my configuration, but I can't figure out what it is.
Can anyone help me configure email sending? I also don't know the proper way to test email sending from GitLab.
Is it best to configure this through docker-compose environment variables or directly in the gitlab.rb file?
Some help would be much appreciated.
The instructions you followed are for a different docker image than the one you're actually using. You also set up IMAP, which is for receiving emails. In GitLab's case, it's for replying to issues by email.
What you want are the SMTP settings. The GitLab docker image does not come with sendmail installed, so you will have to follow the instructions here to set up SMTP in GitLab: https://docs.gitlab.com/omnibus/settings/smtp.html#example-configuration
You can dump gitlab.rb configuration right in your docker-compose file under the environment section. My Fastmail setup, for reference:
environment:
  GITLAB_OMNIBUS_CONFIG: |
    gitlab_rails['smtp_enable'] = true
    gitlab_rails['smtp_address'] = "***"
    gitlab_rails['smtp_port'] = 465
    gitlab_rails['smtp_user_name'] = "***"
    gitlab_rails['smtp_password'] = "***"
    gitlab_rails['smtp_enable_starttls_auto'] = true
    gitlab_rails['smtp_tls'] = true
    gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
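As for testing: once the container is up, you can send a test mail from the Rails console inside the GitLab container (the recipient address below is a placeholder):

# Open a Rails console in the running container (container_name: gitlab)
docker exec -it gitlab gitlab-rails console

# Then, at the console prompt:
Notify.test_email('you@example.com', 'Test subject', 'Test body').deliver_now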
I have two Docker containers.
TestWeb (Expose: 80)
TestAPI (Expose: 80)
The TestWeb container calls the TestAPI container. The host can communicate with the TestWeb container on port 8080 and with TestAPI on port 8081.
I can get TestWeb to call TestAPI on my dev box (Windows 10), but when I deploy the code to AWS (ECS) I get an "unknown host" exception. Both containers work just fine and I can call each individually. But when I call a method that internally makes a REST call using HttpClient to a method in Container2, it gives the error:
An error occurred while sending the request. ---> System.Net.Http.CurlException: Couldn't resolve host name.
Code:
using (var client = new HttpClient())
{
    try
    {
        // "testapi" is the other container's hostname on the Docker network
        string url = "http://testapi/api/Tenant/?i=" + id;
        var response = client.GetAsync(url).Result;
        if (response.IsSuccessStatusCode)
        {
            var responseContent = response.Content;
            string responseString = responseContent.ReadAsStringAsync().Result;
            return responseString;
        }
        return response.StatusCode.ToString();
    }
    catch (HttpRequestException httpRequestException)
    {
        return httpRequestException.Message;
    }
}
The following are the things I have tried:
The two containers (TestWeb, TestAPI) are in the same Task Definition in AWS ECS. When I inspect the containers, I get the IP address of each. I can ping Container2 from Container1 by IP address, but I can't ping using Container2's name; that gives me an "unknown host" error.
It appears ECS doesn't use legit docker-compose under the hood; however, their implementation does support the Compose v2 "links" feature.
Here is a portion of a compose file I just ran on ECS that needed this same functionality and hit the same "could not resolve host" error you were getting. The "links" I added fixed my hostname resolution issue on Elastic Container Service!
version: '3'
services:
  appserver:
    links:
      - database:database
      - socks-proxy:socks-proxy
This allowed my appserver to communicate TO the database and socks-proxy hostnames. The format is "SERVICE:ALIAS" and it is fine to keep them both the same as a default practice.
In your example it would be:
version: '3'
services:
  testapi:
    links:
      - testweb:testweb
  testweb:
    links:
      - testapi:testapi
AWS does not use Docker Compose but provides an interface to add Task Definitions.
Containers that need to communicate with each other can be put in the same Task Definition. We can then specify, in the Links section, the containers that will be called from the current container. Each container can be given its container name in the "Host" section of the Task Definition. Once I added the container name to the "Host" field, Container1 (TestWeb) was able to communicate with Container2 (TestAPI).
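For reference, here is a rough sketch of registering such a Task Definition with the AWS CLI; the family, images, and memory values are placeholders, and links only apply in bridge network mode:

aws ecs register-task-definition \
  --family testweb-testapi \
  --network-mode bridge \
  --container-definitions '[
    {"name": "testapi", "image": "<testapi-image>", "memory": 512, "essential": true},
    {"name": "testweb", "image": "<testweb-image>", "memory": 512, "essential": true,
     "links": ["testapi:testapi"]}
  ]'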