The config server is reachable from localhost:8888 but when I deploy my applications on SCDF the following error occurs:
Fetching config from server at : http://localhost:8888
2021-07-30 14:58:53.535 INFO 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Connect Timeout Exception on Url - http://localhost:8888. Will be trying the next url if available
2021-07-30 14:58:53.535 WARN 143 --- [ main] o.s.b.context.config.ConfigDataLoader : Could not locate PropertySource ([ConfigServerConfigDataResource#3de88f64 uris = array<String>['http://localhost:8888'], optional = true, profiles = list['default']]): I/O error on GET request for "http://localhost:8888/backend-service/default": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
The applications deploy successfully on SCDF apart from the config server connection. The only property I specify in SCDF is the Docker network. I'm using spring.config.import and no bootstrap configuration. This all works correctly when deployed locally, but the microservices can't connect to the config server when deployed on SCDF.
Spring Boot Version: 2.5.1
app properties
spring.application.name=backend-service
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.config.import=optional:configserver:http://localhost:8888
config server properties
spring.cloud.config.server.git.uri=...
management.endpoints.web.exposure.include=*
spring.cloud.config.fail-fast=true
spring.cloud.config.retry.max-attempts=6
spring.cloud.config.retry.max-interval=11000
spring.cloud.bus.id=my-config-server
spring.cloud.stream.rabbit.bindings.springCloudBus.consumer.declareExchange=false
spring.rabbitmq.host=127.0.0.1
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.cloud.bus.enabled=true
spring.cloud.bus.refresh.enabled=true
spring.cloud.bus.env.enabled=true
server.port=8888
docker-compose.yml
version: '3.1'
services:
  h2:
    ...
  rabbitmq-container:
    image: rabbitmq:3.7.14-management
    hostname: dataflow-rabbitmq
    expose:
      - '5672'
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - scdfnet
  dataflow-server:
    ...
    networks:
      - scdfnet
  app-import:
    ...
    networks:
      - scdfnet
  skipper-server:
    ...
    networks:
      - scdfnet
  configserver-container:
    image: ...
    ports:
      - "8888:8888"
    expose:
      - '8888'
    environment:
      - spring_rabbitmq_host=rabbitmq-container
      - spring_rabbitmq_port=5672
      - spring_rabbitmq_username=guest
      - spring_rabbitmq_password=guest
    depends_on:
      - rabbitmq-container
    networks:
      - scdfnet
networks:
  scdfnet:
    external:
      name: scdfnet
volumes:
  h2-data:
For anyone else having this problem, I have found two ways of solving it. The root cause is that once the Spring Boot application is containerized, localhost in the properties file resolves inside the application container's virtual network, not to your local machine.
There are numerous Stack Overflow answers for this same error, but they all center on corrections to bootstrap properties. However, bootstrap context initialization has been deprecated since Spring Boot 2.4.
The first solution is to use your IPv4 address instead of localhost.
spring.config.import=configserver:http://<insert IPv4 address>:8888
For Example:
spring.config.import=configserver:http://10.6.39.148:8888
A much better solution than hard-coding addresses is to reference the config server container running in Docker Compose:
spring.config.import=optional:configserver:http://configserver-container:8888
Make sure that all of the Docker Compose services run on the same network (scdfnet in my case). Note that this address only resolves when the application itself runs under Docker Compose, so if you are building the Maven project in Eclipse, you may need to skip or disable your tests to build successfully. That might be unnecessary; it could just be that some property I failed to copy into my local application.properties file is causing the context tests to fail. According to the documentation, the optional: prefix should allow the config client to start even when the config server cannot be reached.
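Finally, rather than committing a container hostname into application.properties, you can make the import URI overridable per environment. A minimal sketch, assuming a CONFIG_SERVER_URI variable you define yourself (not a built-in Spring property):

# application.properties: default to localhost for local runs;
# the environment can override the config server location.
spring.config.import=optional:configserver:${CONFIG_SERVER_URI:http://localhost:8888}

# docker-compose.yml, under the application service:
#   environment:
#     - CONFIG_SERVER_URI=http://configserver-container:8888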
I am trying to set up a local Beam runner for easier testing and development. I'd like to be able to test a Python pipeline that uses Kafka IO locally on my Mac.
Here's my current docker-compose
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - "9092:9092"
  jobmanager:
    image: flink_image
    command: ['jobmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\nparallelism.default: 2"
    ports:
      - "8081:8081"
  taskmanager:
    image: flink_image
    scale: 1
    depends_on:
      - jobmanager
    command: ['taskmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
  beam-jobserver:
    image: flink_image
    ports:
      - "8097:8097"
      - "8098:8098"
      - "8099:8099"
    entrypoint:
      - java
      - -cp
      - /target/flink/flink-web-upload/beam-runner.jar
      - org.apache.beam.runners.flink.FlinkJobServerDriver
      - --flink-master=jobmanager
      - --job-host=0.0.0.0
And my pipeline looks like this:
import logging

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service
from apache_beam.options.pipeline_options import PipelineOptions

LOCAL_ARGS = [
    '--streaming',
    '--runner=portableRunner',
    '--environment_type=LOOPBACK',
    '--job_endpoint=localhost:8099',
    '--artifact_endpoint=localhost:8098',
    '--defaultEnvironmentType=EXTERNAL',
    '--defaultEnvironmentConfig=host.docker.internal:5000',
]

with beam.Pipeline(options=PipelineOptions(LOCAL_ARGS)) as pipeline:
    result = (
        pipeline
        | "Kafka Read" >> ReadFromKafka(
            consumer_config={"bootstrap.servers": "kafka:9092",
                             'auto.offset.reset': 'earliest'},
            topics=["test.topic"],
            with_metadata=False,
            expansion_service=default_io_expansion_service(
                append_args=[
                    '--defaultEnvironmentType=PROCESS',
                    '--defaultEnvironmentConfig={"command":"/opt/apache/beam/java_boot"}',
                    '--experiments=use_deprecated_read',
                ]
            )
        )
        | "logging" >> beam.Map(lambda x: logging.info(f"logged: {x}"))
    )
However, it looks like LOOPBACK opens a port on my host machine and asks the task manager to talk back to it via localhost:<randomPort>, which is not accessible from inside the container.
Unfortunately, host networking is not supported by Docker on Mac, so I need a way to override the LOOPBACK settings so that workers connect to host.docker.internal:<dedicated_port> instead of a random port on my host machine. Or are there other suggested workarounds? Thanks!
(The entire infra can be found here: https://gist.github.com/lydian/0db7614652c2ccdc733884134bf67f9b)
It looks like this is not supported; LOOPBACK mode mostly targets very simple setups. You could come close by starting the worker manually, e.g.
python -m apache_beam.runners.worker.worker_pool_main --service_port=PORT
and then passing --environment_type=EXTERNAL --environment_config=host.docker.internal:PORT.
I was facing similar struggles recently. Luckily, there are two environment variables that facilitate testing on Docker for Mac. Unfortunately, there's not much documentation around them currently.
DOCKER_MAC_CONTAINER=1 limits the ports for communication with SDK workers to the range 8100 - 8200 instead of using random ports. Ports of that range are used in a round-robin fashion and have to be published.
BEAM_WORKER_POOL_IN_DOCKER_VM=1 tells an SDK worker to communicate with a runner node using host.docker.internal (i.e., via the Docker host) instead of localhost.
Here's an example of how to use these with Spark, but Flink shouldn't be any different.
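As a rough sketch of the idea (where each variable has to be set depends on where the worker pool and the runner actually run in your setup, so treat this as an assumption, not a recipe):

# Restrict SDK worker communication to ports 8100-8200; publish those from the container.
export DOCKER_MAC_CONTAINER=1
# Make the SDK worker reach the runner via host.docker.internal instead of localhost.
export BEAM_WORKER_POOL_IN_DOCKER_VM=1
# Then start the worker pool on a fixed port from that range:
python -m apache_beam.runners.worker.worker_pool_main --service_port=8100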
I'm trying out RedHat's Business Central using a docker-compose file as described in https://github.com/jboss-dockerfiles/business-central. At startup it runs a kie-server (quay.io/kiegroup/kie-server-showcase:7.67.0.Final) and a business-central webserver (quay.io/kiegroup/business-central-workbench-showcase:7.67.0.Final).
Because I'm only interested in the Drools part and not the jBPM part, I start the business-central server with -Dorg.kie.workbench.profile=PLANNER_AND_RULES as described in https://docs.jboss.org/drools/release/7.67.0.Final/drools-docs/html_single/#_selecting_a_profile
After logging in as admin I receive the following error:
business-central_1 | 18:11:13,321 ERROR [org.kie.workbench.common.services.backend.logger.GenericErrorLoggerServiceImpl] (default task-2) Error from user: admin Error ID: -1427996616 Location: HomePerspective|org.kie.workbench.common.screens.home.client.HomePresenter Exception: Uncaught exception: Client-side exception occurred although RPC call succeeded. Caused by: The profile is not expected and profile to define product name
Below, you can find the docker-compose file used:
version: "3.2"
services:
business-central:
image: quay.io/kiegroup/business-central-workbench-showcase:7.67.0.Final
ports:
- "8090:8080"
- "8091:8001"
environment:
KIE_SERVER_LOCATION: http://kie-server:8080/kie-server/services/rest/server
JAVA_OPTS: "-Xms256m -Xmx2048m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=512m -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -Dorg.kie.workbench.profile=PLANNER_AND_RULES"
kie-server:
image: quay.io/kiegroup/kie-server-showcase:7.67.0.Final
environment:
KIE_SERVER_ID: sample-server
KIE_SERVER_LOCATION: http://kie-server:8080/kie-server/services/rest/server
KIE_SERVER_CONTROLLER: http://business-central:8080/business-central/rest/controller
KIE_MAVEN_REPO: http://business-central:8080/business-central/maven2
ports:
- "8092:8080"
depends_on:
- business-central
volumes:
business-central_data:
UPDATE (2022-04-07):
I also looked at the source code on GitHub but couldn't find any reference to PLANNER_AND_RULES. I looked at several repos in https://github.com/kiegroup/:
https://github.com/kiegroup/drools
https://github.com/kiegroup/droolsjbpm-build-bootstrap
https://github.com/kiegroup/droolsjbpm-knowledge
https://github.com/kiegroup/kie-soup
I have no idea where to look ;-(.
I found a rather daunting picture explaining all the repos that are in use, but it would be good if someone pointed out which repo to look in :-).
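In case it helps anyone reproduce my search: one way to scan every repository under the kiegroup organization at once is the GitHub CLI's code search (a sketch; assumes gh is installed and authenticated):

# Search all kiegroup repositories for the profile constant.
gh search code "PLANNER_AND_RULES" --owner kiegroup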
"ResourceLoader" with AWS S3 works fine with these properties:
cloud:
  aws:
    s3:
      endpoint: s3.amazonaws.com  # custom endpoint added in Spring Cloud AWS 2.3
    credentials:
      accessKey: XXXXXX
      secretKey: XXXXXX
    region:
      static: us-east-1
    stack:
      auto: false
However, when I bring up a LocalStack container locally and try to use it with these properties (as per this release doc: https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available):
cloud:
  aws:
    s3:
      endpoint: http://localhost:4566
    credentials:
      accessKey: test
      secretKey: test
    region:
      static: us-east-1
    stack:
      auto: false
I get this exception:
17:12:12.130 [reactor-http-nio-2] ERROR org.springframework.boot.autoconfigure.web.reactive.error.AbstractErrorWebExceptionHandler - [23efd000-1] 500 Server Error for HTTP GET "/getresource/test"
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.localhost
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/getresource/test" [ExceptionHandlingWebHandler]
Stack trace:
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Caused by: java.net.UnknownHostException: mybucket.localhost
at java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) ~[?:?]
I can view my localstack bucket files otherwise fine in an S3 browser.
Here is the docker compose config for my localstack:
version: '3.1'
services:
localstack:
image: localstack/localstack:latest
environment:
- AWS_DEFAULT_REGION=us-east-1
- AWS_ACCESS_KEY_ID=test
- AWS_SECRET_ACCESS_KEY=test
- EDGE_PORT=4566
- SERVICES=lambda,s3
ports:
- '4566-4583:4566-4583'
volumes:
- "${TEMPDIR:-/tmp/localstack}:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
Here is how I am reading a text file:
public class ResourceTransferManager {

    @Autowired
    ResourceLoader resourceLoader;

    public void resourceLoadingMethod() throws IOException {
        Resource resource = resourceLoader.getResource("s3://mybucket/index.txt");
        InputStream inputStream = resource.getInputStream();
        System.out.println("File content: " + IOUtils.toString(inputStream, StandardCharsets.UTF_8));
    }
}
By default, the S3 client builds virtual-hosted-style URLs, putting the bucket name in a subdomain of the endpoint (here mybucket.localhost, which does not resolve), and this causes the issue.
There are a couple of ways to address it:
In the case of LocalStack, do not use the endpoint http://localhost:4566; use the standard-format endpoint http://s3.localhost.localstack.cloud:4566 instead. This hostname actually goes out to DNS and resolves internally to the localhost IP, so everything works fine. (The only caveat: it resolves via public DNS, so you either need an internet connection or you must add hosts entries prefixed with the bucket name, for example putting 127.0.0.1 <yourexpectedbucketName>.s3.localhost.localstack.cloud in your hosts file.)
Or, if you are using Docker, instead of making hosts entries you can create a network alias for your LocalStack container, such as <yourexpectedbucketName>.s3.localhost.localstack.cloud.
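A sketch of what that alias could look like in the compose file above (the alias name assumes a bucket called mybucket; add one alias per bucket):

services:
  localstack:
    image: localstack/localstack:latest
    networks:
      default:
        aliases:
          # one alias per bucket you want to resolve inside the network
          - mybucket.s3.localhost.localstack.cloud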
An even better way is an extension of the first approach: instead of creating an alias for each of your buckets (which may not always be feasible), you can spin up a local DNS container and use a wildcard DNS configuration there. See the simplified sample at https://gist.github.com/paraspatidar/c29e4adb172a5afc92852a57e621323d
(original reference: https://gist.github.com/NAR8789/92da076d0c35b434107fb4f4f198fd12)
I am working on an application that connects an app running on Dapr (self-hosted) with a gRPC client. I use common.proto and runtime.proto. I am able to get the connection working when using the Dapr CLI.
But when using the self-hosted Dapr instance in Docker, I get the following error when calling the gRPC port:
Grpc.Core.RpcException: 'Status(StatusCode="Internal", Detail="invoke API is not ready", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1621435606.713000000","description":"Error received from peer ipv4:127.0.0.1:54000","file":"..\..\..\src\core\lib\surface\call.cc","file_line":1068,"grpc_message":"invoke API is not ready","grpc_status":13}")'
and this when using the Dapr HTTP port:
Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1621435606.713000000","description":"Error received from peer ipv4:127.0.0.1:54000","file":"..\..\..\src\core\lib\surface\call.cc","file_line":1068,"grpc_message":"invoke API is not ready","grpc_status":13}")'
Below is the compose file for the Dapr server:
version: '3.4'
services:
  daprserver:
    image: ${DOCKER_REGISTRY-}daprserver
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
    networks:
      - daprserver-network
  daprserver-dapr:
    image: "daprio/dapr:latest"
    command: [ "./daprd",
               "-app-id", "daprserver",
               "-app-port", "80",
               "-dapr-http-port", "53001",
               "-dapr-grpc-port", "53000" ]
    ports:
      - 54000:53000  # grpc external:internal
      - 54001:53001  # http external:internal
    depends_on:
      - daprserver
    networks:
      - daprserver-network
networks:
  daprserver-network:
and the client code is below:
var channel = new Channel("127.0.0.1:54000", ChannelCredentials.Insecure);
var daprClient = new Dapr.Client.Autogen.Grpc.v1.Dapr.DaprClient(channel);

var request = new InvokeServiceRequest
{
    Id = "daprserver",
    Message = new InvokeRequest
    {
        Method = "weatherforecast",
        HttpExtension = new HTTPExtension
        {
            Verb = HTTPExtension.Types.Verb.Get,
        }
    }
};

var invokeResponse = daprClient.InvokeServiceAsync(request).GetAwaiter().GetResult();
var json = invokeResponse.Data.Value.ToStringUtf8();
Am I missing a setting or is there any issue with my configuration?
You need to run the app using the command below, from a command prompt in the project directory. It runs both Dapr and your application; you will see two processes in Task Manager, one for dapr.exe and a second for dotnet.exe.
dapr run --app-id yourappid dotnet run
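If you want the sidecar's ports to line up with the client code above (which dials 127.0.0.1:54000), you can pin them explicitly. A sketch; the app port of 5000 is an assumption about where your ASP.NET Core service listens:

dapr run --app-id daprserver --app-port 5000 --dapr-grpc-port 54000 --dapr-http-port 54001 -- dotnet run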
Here's my goal: I would like to configure emails for my GitLab server. I followed a lot of tutorials but I can't make it work.
My configuration is the following: a reverse proxy in a Docker container, and my GitLab server also in a Docker container.
About versions:
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.16.1, build 6d1ac21
Here's my docker-compose.yml file
version: '3.3'
networks:
  proxy:
    external: true
  internal:
    external: false
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    environment:
      - TZ=Europe/Paris
      - GITLAB_TIMEZONE=Paris
      - IMAP_USER=USER@GMAIL.COM
      - IMAP_PASSWORD=MYGMAILPASS
      - GITLAB_INCOMING_EMAIL_ADDRESS=USERGMAIL+%{key}@gmail.com
    volumes:
      - /srv/gitlab/config:/etc/gitlab
      - /srv/gitlab/logs:/var/log/gitlab
      - /srv/gitlab/data:/var/opt/gitlab
    restart: always
    labels:
      - traefik.backend=gitlab
      - traefik.frontend.rule=Host:git.domain.com
      - traefik.docker.network=proxy
      - traefik.port=80
      - traefik.frontend.entryPoints=http,https
    networks:
      - internal
      - proxy
I followed this tutorial, which seems to be good:
https://github.com/sameersbn/docker-gitlab#available-configuration-parameters
I must be missing something in my configuration but I can't figure out what it is...
Can anyone help me configure email sending? I also don't know the proper way to test email sending from GitLab.
Is it best to configure this through docker-compose environment variables or directly in the gitlab.rb file?
Some help would be much appreciated.
The instructions you followed are for a different docker image than the one you're actually using. You also set up IMAP, which is for receiving emails. In GitLab's case, it's for replying to issues by email.
What you want are the SMTP settings. The GitLab docker image does not come with sendmail installed, so you will have to follow the instructions here to set up SMTP in GitLab: https://docs.gitlab.com/omnibus/settings/smtp.html#example-configuration
You can dump gitlab.rb configuration right in your docker-compose under the environment section. My Fastmail setup for reference:
environment:
  GITLAB_OMNIBUS_CONFIG: |
    gitlab_rails['smtp_enable'] = true
    gitlab_rails['smtp_address'] = "***"
    gitlab_rails['smtp_port'] = 465
    gitlab_rails['smtp_user_name'] = "***"
    gitlab_rails['smtp_password'] = "***"
    gitlab_rails['smtp_enable_starttls_auto'] = true
    gitlab_rails['smtp_tls'] = true
    gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
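To answer the testing question: one way (documented in GitLab's SMTP guide) is to send a test mail from the Rails console inside the container. The container name gitlab comes from the compose file above; the recipient address is a placeholder:

docker exec -it gitlab gitlab-rails console
# Then, inside the Rails console:
Notify.test_email('you@example.com', 'GitLab SMTP test', 'Test body').deliver_now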