why must eureka.client.serviceUrl.defaultZone be provided in `bootstrap.properties` when configuring the spring cloud config server in a discovery manner? - spring-cloud

I was aiming to configure the location of the Spring Cloud Config server by setting spring.application.name, server.port, and eureka.client.serviceUrl.defaultZone in application.properties, together with spring.cloud.config.discovery.enabled=true and spring.cloud.config.discovery.service-id=cloud-config in bootstrap.properties, which turned out to be insufficient. The following error messages are shown in the log:
com.netflix.discovery.DiscoveryClient : DiscoveryClient_BOOTSTRAP/192.168.1.5:bootstrap - was unable to refresh its cache! status = Cannot execute request on any known server
No instances found of configserver (cloud-config)
According to the docs, I moved eureka.client.serviceUrl.defaultZone into bootstrap.properties and succeeded.
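For reference, the working split looked roughly like this (a minimal sketch; the Eureka URL, application name, and port are placeholder values, not taken from the original setup):

# bootstrap.properties - read first, used to locate the config server via discovery
spring.cloud.config.discovery.enabled=true
spring.cloud.config.discovery.service-id=cloud-config
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka/

# application.properties - regular application configuration
spring.application.name=demo-client
server.port=8080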
My question is: if spring.application.name and server.port are essential for a Eureka client to register on the Eureka server, why can they be left out of bootstrap.properties for the config client?
I suspect that the config client first uses eureka.client.serviceUrl.defaultZone alone to connect to the Eureka server and fetch service registration information, without registering itself, so as to locate the config server and pull configuration. After that, since the config client is also a Eureka client, it uses the relevant parameters in application.properties to register on the Eureka server. As evidence for my suspicion, I found the following logs during the startup of the application:
2017-09-07 06:13:09.651 INFO [bootstrap,,,] 74104 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
2017-09-07 06:13:09.817 INFO [bootstrap,,,] 74104 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : The response status is 200
2017-09-07 06:13:09.821 INFO [bootstrap,,,] 74104 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Not registering with Eureka server per configuration
2017-09-07 06:13:37.427 INFO [-,,,] 74104 --- [ restartedMain] com.netflix.discovery.DiscoveryClient : Getting all instance registry info from the eureka server
Is that right?

Related

Unable to Run a Composed Task in Spring Cloud Data Flow

I am running the latest version of the SCDF server on a Kubernetes cluster. Every time I try to run a composed task, it tries to fetch the application properties for the composed-task-runner application and fails to launch the composed task.
First of all, SCDF is trying to pull the properties (metadata) from the Spring Maven repo when I am running the server on k8s. My server is behind a firewall and it cannot connect to the Spring Maven repo. I already downloaded the composed-task-runner Docker image to my local repo and added the composed-task-runner application using the UI. Why does it still try to download metadata from the Spring Maven repo? How do I stop it?
Here is the log:
2020-11-21 15:49:07.591 INFO 1 --- [nio-8080-exec-4] o.s.c.d.s.k.DefaultContainerFactory : Using Docker entry point style: exec
2020-11-21 15:49:58.355 WARN 1 --- [nio-8080-exec-6] .s.c.d.s.s.i.TaskConfigurationProperties : org.springframework.cloud.dataflow.server.service.impl.TaskConfigurationProperties.logDeprecationWarning is deprecated. Please use org.springframework.cloud.dataflow.server.service.impl.ComposedTaskRunnerConfigurationProperties.logDeprecationWarning
2020-11-21 15:50:18.427 WARN 1 --- [nio-8080-exec-6] ApplicationConfigurationMetadataResolver : Failed to retrieve properties for resource org.springframework.cloud:spring-cloud-dataflow-composed-task-runner:jar:2.7.0-SNAPSHOT because of ConnectTimeoutException: Connect to repo.spring.io:443 timed out
2020-11-21 15:50:38.522 WARN 1 --- [nio-8080-exec-6] ApplicationConfigurationMetadataResolver : Failed to retrieve properties for resource org.springframework.cloud:spring-cloud-dataflow-composed-task-runner:jar:2.7.0-SNAPSHOT because of ConnectTimeoutException: Connect to repo.spring.io:443 timed out
2020-11-21 15:50:38.572 INFO 1 --- [nio-8080-exec-6] o.s.c.d.s.k.KubernetesTaskLauncher : Preparing to run a container from org.springframework.cloud:spring-cloud-dataflow-composed-task-runner:jar:2.7.0-SNAPSHOT. This may take some time if the image must be downloaded from a remote container registry.
2020-11-21 15:50:38.573 INFO 1 --- [nio-8080-exec-6] o.s.c.d.s.k.DefaultContainerFactory : Using Docker image: //org.springframework.cloud:spring-cloud-dataflow-composed-task-runner:jar:2.7.0-SNAPSHOT
It looks like the Composed Task Runner Docker image can now be set using this environment variable:
- name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
  value: 'docker://springcloud/spring-cloud-dataflow-composed-task-runner:2.6.0'
We were on SCDF server version 2.2.4 before this, and we had to manually add the composed task runner as an application using the dashboard UI.
Right now, all I had to do was download this image, push it to my local registry, and use it here.
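As a minimal sketch, this is roughly where the variable lands in the SCDF server's Kubernetes Deployment (the deployment name, labels, and image references below are assumptions, not taken from the actual cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: scdf-server
spec:
  selector:
    matchLabels:
      app: scdf-server
  template:
    metadata:
      labels:
        app: scdf-server
    spec:
      containers:
        - name: scdf-server
          image: my-registry.local/spring-cloud-dataflow-server:2.7.0
          env:
            # Point SCDF at the locally mirrored composed-task-runner image
            - name: SPRING_CLOUD_DATAFLOW_TASK_COMPOSED_TASK_RUNNER_URI
              value: 'docker://my-registry.local/spring-cloud-dataflow-composed-task-runner:2.6.0'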

Issue using SAML for authentication with spring boot app

I'm currently reading this article on how to set up SAML with Spring Boot applications.
I followed all the steps and I just changed the Single Sign On URL from "https://localhost:8443/saml/SSO" to "https://localhost:8443/mycompanysaml/SSO".
When I run the application, I see no errors in the IDE console, but the login page of Okta doesn't show in the browser. I get the following message.
[Screenshot: error message in the browser]
And the stack trace of the message in the console is the following:
2017-11-03 15:21:23.991 INFO 50013 --- [nio-8443-exec-7] o.a.c.c.C[Tomcat].[localhost].[/] : Initializing Spring FrameworkServlet 'dispatcherServlet'
2017-11-03 15:21:23.991 INFO 50013 --- [nio-8443-exec-7] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization started
2017-11-03 15:21:24.006 INFO 50013 --- [nio-8443-exec-7] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 15 ms
2017-11-03 15:21:24.021 INFO 50013 --- [nio-8443-exec-7] o.s.s.s.m.MetadataGeneratorFilter : No default metadata configured, generating with default values, please pre-configure metadata for production use
2017-11-03 15:21:24.060 INFO 50013 --- [nio-8443-exec-7] o.s.s.s.m.MetadataGeneratorFilter : Created default metadata for system with entityID: https://localhost:8443/saml/metadata
2017-11-03 15:21:24.708 INFO 50013 --- [nio-8443-exec-7] .s.m.p.AbstractReloadingMetadataProvider : New metadata successfully loaded for 'https://dev-531605.oktapreview.com/app/exkcp2fsptqmfDGtf0h7/sso/saml/metadata'
2017-11-03 15:21:24.720 INFO 50013 --- [nio-8443-exec-7] .s.m.p.AbstractReloadingMetadataProvider : Next refresh cycle for metadata provider 'https://dev-531605.oktapreview.com/app/exkcp2fsptqmfDGtf0h7/sso/saml/metadata' will occur on '2017-11-04T01:21:24.240Z' ('2017-11-03T18:21:24.240-07:00' local time)
2017-11-03 15:21:24.865 INFO 50013 --- [io-8443-exec-10] o.s.security.saml.log.SAMLDefaultLogger : AuthNRequest;SUCCESS;0:0:0:0:0:0:0:1;https://localhost:8443/saml/metadata;http://www.okta.com/exkcp2fsptqmfDGtf0h7;;;
Could someone please explain to me what's going on? Is it because I changed the Single Sign On URL to my own? That shouldn't be a problem, right?
Thanks in advance for your help.
G.
The configuration that you set up for SAML in your application will have one Spring Security filter chain related to SAML, where you have added the filters.
So if you want to change the SSO URL, you will have to change the URLs in the filters declared in the config file.
The application that you are referring to is using the default configuration.
For more info, check http://www.sylvainlemoine.com/2016/06/06/spring-saml2.0-websso-and-jwt-for-mobile-api/
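As a rough sketch of that change, assuming the stock spring-security-saml Java config the article builds on (samlWebSSOProcessingFilter and samlFilter are the bean names used there; authenticationManager(), successRedirectHandler(), and authenticationFailureHandler() refer to other beans in that same config class):

import java.util.ArrayList;
import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.security.saml.SAMLProcessingFilter;
import org.springframework.security.web.DefaultSecurityFilterChain;
import org.springframework.security.web.FilterChainProxy;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.util.matcher.AntPathRequestMatcher;

@Bean
public SAMLProcessingFilter samlWebSSOProcessingFilter() throws Exception {
    // The filter itself must process the new URL, or the SAML response
    // posted back by Okta is never consumed.
    SAMLProcessingFilter filter = new SAMLProcessingFilter();
    filter.setFilterProcessesUrl("/mycompanysaml/SSO"); // was /saml/SSO
    filter.setAuthenticationManager(authenticationManager());
    filter.setAuthenticationSuccessHandler(successRedirectHandler());
    filter.setAuthenticationFailureHandler(authenticationFailureHandler());
    return filter;
}

@Bean
public FilterChainProxy samlFilter() throws Exception {
    // The chain's request matcher must be updated to the same new URL.
    List<SecurityFilterChain> chains = new ArrayList<>();
    chains.add(new DefaultSecurityFilterChain(
            new AntPathRequestMatcher("/mycompanysaml/SSO/**"), // was /saml/SSO/**
            samlWebSSOProcessingFilter()));
    return new FilterChainProxy(chains);
}

Without both changes, Okta posts the response to /mycompanysaml/SSO but nothing in the filter chain handles it.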

Spring Cloud - Registry Service port customization

I'd like to customize the Eureka port with Spring Cloud.
With the default port below, the services registry sees itself correctly (in the provided GUI):
spring:
  application:
    name: services-registry
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
    nonSecurePort: ${server.port}
  client:
    register-with-eureka: true
    fetch-registry: false
    service-url:
      default-zone: http://${eureka.instance.hostname}:${server.port}/eureka/
But if I just change server.port to 8787, no service can register itself, not even the services registry itself.
2017-01-09 16:18:21.584 WARN 17496 --- [nfoReplicator-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failure
2017-01-09 16:18:21.584 WARN 17496 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_SERVICES-REGISTRY/xxx.org:services-registry:8787 - registration failed Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
...
2017-01-09 16:13:33.299 WARN 17496 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
Can someone explain this issue and save my day? Thanks!
OK, got it... the label after the service-url property (which can be aliased as serviceUrl in YAML) is a HashMap key, not a property name, so relaxed binding does not apply: it has to be kept in camel case in any case!
eureka.client.service-url.defaultZone=http://[myIP#]:8787/eureka
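In YAML form, the corrected config from the question looks like this (only the map key changes; the properties above it can keep their relaxed-binding spelling):

eureka:
  client:
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/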

Zoomdata error - Connect to localhost:3333 [localhost/127.0.0.1] failed: Connection refused

Good morning.
I am an intern currently working on a proof of concept requiring me to use Zoomdata to visualize data from Elasticsearch on Ubuntu 16.04 LTS.
I managed to connect both on Docker - in short, each process ran in a separate, isolated container and communicated through ports - but the performance wasn't good enough (the visualisation encountered bugs for files > 25 MB).
So I decided to install Zoomdata on my computer (trial, .deb version).
I also installed MongoDB first.
However, when I run zoomdata, I have two issues, and I believe that solving the second might solve the first:
- the first is that when I create a new Elasticsearch connection, I enter exactly the same parameters as with Docker (I've triple-checked, they are accurate):
node name: elasticsearch (the name of my node)
client type: HTTP node
address: 192.168.10.4 and port: 9200
- the second is when I run Zoomdata:
During the initialization (more accurately, the Spark one) I get an error message (more details below):
c.z.s.c.r.impl.CategoriesServiceClient : Attempt: 1 to send categories registration failed. Connect to localhost:3333 [localhost/127.0.0.1] failed: Connection refused
followed by the same message over and over again (the attempt number changes) while the rest of the program executes normally.
I took a look at the logs - computer (bug) and docker (works).
Note: the SAML error doesn't stop the docker version from working, so unless fixing it fixes other problems it's not my priority.
Computer:
2016-06-15 16:04:06.680 ERROR 1 --- [ main] c.z.core.service.impl.SamlService : Error initializing SAML. Disabling SAML.
2016-06-15 15:58:12.125 INFO 8149 --- [ main] com.zoomdata.dao.mongo.KeyValueDao : Inserting into key value collection com.zoomdata.model.dto.SamlConfig#4f3431a1
2016-06-15 15:58:12.789 INFO 8149 --- [ main] c.z.core.init.SystemVersionBeanFactory : Server version 2.2.6, database version 2.2.6, git commit : 0403c951946e5daf03000d83ec45ad85d0ce4a56, built on 06060725
2016-06-15 15:58:17.571 ERROR 8149 --- [actory-thread-1] c.z.s.c.r.impl.CategoriesServiceClient : Attempt: 1 to send categories registration failed. Connect to localhost:3333 [localhost/127.0.0.1] failed: Connection refused
2016-06-15 15:58:17.776 ERROR 8149 --- [actory-thread-1] c.z.s.c.r.impl.CategoriesServiceClient : Attempt: 2 to send categories registration failed. Connect to localhost:3333 [localhost/127.0.0.1] failed: Connection refused
2016-06-15 15:58:18.537 INFO 8149 --- [ main] c.z.c.s.impl.SparkContextManager$2 : Running Spark version 1.5.1
Docker:
2016-06-15 16:04:06.680 ERROR 1 --- [ main] c.z.core.service.impl.SamlService : Error initializing SAML. Disabling SAML.
2016-06-15 16:04:06.681 INFO 1 --- [ main] com.zoomdata.dao.mongo.KeyValueDao : Inserting into key value collection com.zoomdata.model.dto.SamlConfig#43b09564
2016-06-15 16:04:07.209 INFO 1 --- [ main] c.z.core.init.SystemVersionBeanFactory : Server version 2.2.7, database version 2.2.7, git commit : 97c9ba3c3662d1646b4e380e7640766674673039, built on 06131356
2016-06-15 16:04:12.117 INFO 1 --- [actory-thread-1] c.z.s.c.r.impl.CategoriesServiceClient : Registered to handle categories: [field-refresh, source-refresh]
2016-06-15 16:04:12.427 INFO 1 --- [ main] c.z.c.s.impl.SparkContextManager$2 : Running Spark version 1.5.1
The server versions are different (2.2.6 for computer, 2.2.7 for docker) - I'll try to update the computer one but I don't have much hope that it will work.
What I tried already:
- use Zoomdata as root: Zoomdata refused, and the internet told me why it was a bad idea to begin with.
- deactivate all firewalls: didn't do anything.
- search on the internet: didn't manage to solve it.
I'm all out of ideas, and would be grateful for any assistance.
EDIT: I updated to 2.2.7, but the problem persists.
I'm wondering if there may be a problem with authorizations (since the connection is refused).
I also tried disabling SSL on my Zoomdata server.
EDIT: after a discussion on the logs and .properties files seen here, the issue is that the scheduler service is not connecting to MongoDB, although Zoomdata itself does connect.
I filled in the files following the installation guide, but to no avail.
From the logs I see that the scheduler cannot connect to Mongo because of a socket exception. I found a possible solution here:
stackoverflow.com/questions/15419325/mongodb-insert-fails-due-to-socket-exception
Comment out the line in your mongod.conf that binds the IP to 127.0.0.1 (it is usually set to 127.0.0.1 by default).
On Linux, this config file should be located at /etc/mongod.conf.
Once you comment that out, it will receive connections from all interfaces.
This fixed it for me, as I was getting these socket exceptions as well.
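A minimal sketch of the change, assuming the YAML-style /etc/mongod.conf shipped with recent MongoDB packages (older installs use the INI-style equivalent, bind_ip = 127.0.0.1):

# /etc/mongod.conf
net:
  port: 27017
#  bindIp: 127.0.0.1    # commented out: mongod now listens on all interfaces

Then restart the service (sudo service mongod restart). Listening on all interfaces is fine for a local proof of concept, but should be locked down for anything production-facing.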

Spring Boot with server.contextPath set vs. URL to hystrix.stream via Eureka Server

I have a Eureka Server with a Turbine instance running and a few discovery clients connected to it. Everything works fine, but if I register a discovery client that has server.contextPath set, it doesn't get recognized by InstanceMonitor, and the Turbine stream is not able to combine its hystrix.stream.
This is how it looks in the logs of Eureka/Turbine server:
2015-02-12 06:56:23.265 INFO 1 --- [ Timer-0] c.n.t.discovery.InstanceObservable : Hosts up:3, hosts down: 0
2015-02-12 06:56:23.266 INFO 1 --- [ Timer-0] c.n.t.monitor.instance.InstanceMonitor : Url for host: http://user-service:8887/hystrix.stream default
2015-02-12 06:56:23.268 ERROR 1 --- [InstanceMonitor] c.n.t.monitor.instance.InstanceMonitor : Could not initiate connection to host, giving up: []
2015-02-12 06:56:23.269 WARN 1 --- [InstanceMonitor] c.n.t.monitor.instance.InstanceMonitor : Stopping InstanceMonitor for: user-service default
com.netflix.turbine.monitor.instance.InstanceMonitor$MisconfiguredHostException: []
at com.netflix.turbine.monitor.instance.InstanceMonitor.init(InstanceMonitor.java:318)
at com.netflix.turbine.monitor.instance.InstanceMonitor.access$100(InstanceMonitor.java:103)
at com.netflix.turbine.monitor.instance.InstanceMonitor$2.call(InstanceMonitor.java:235)
at com.netflix.turbine.monitor.instance.InstanceMonitor$2.call(InstanceMonitor.java:229)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It tries to get the hystrix stream from http://user-service:8887/hystrix.stream, whereas the correct URL, including server.contextPath, should be http://user-service:8887/uaa/hystrix.stream.
The application.yml of that client contains:
server:
  port: 8887
  contextPath: /uaa
security:
  ignored: /css/**,/js/**,/favicon.ico,/webjars/**
  basic:
    enabled: false
My question is: should I add some additional configuration options to this user-service discovery client to register the proper hystrix.stream URL location?
I haven't dug into that yet; I will let you know if I find something before getting an answer to this question.
Current solution
There is one problem when it comes to using server.contextPath and management.context-path. When both are set, the turbine stream is served on ${HOST_URL}/${server.contextPath}/${management.context-path}/hystrix.stream. In that case I had to drop server.contextPath (I replaced it with a prefix in the controllers' @RequestMapping).
Now, when you use management.context-path, your hystrix.stream is served from the URL that uses it as a prefix. In that case you have to follow Spencer's suggestion and set
turbine.instanceUrlSuffix=/{PUT_YOUR_MANAGEMENT_CONTEXT_PATH_HERE}/hystrix.stream
And of course this management.context-path must be set with the same value for all your Discovery Clients - it can be done easily with Spring Cloud Config http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html
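On the client side, that boils down to a YAML fragment along these lines (a sketch using this question's port and prefix; server.contextPath is dropped and the prefix moves to the management endpoints):

server:
  port: 8887
management:
  context-path: /uaa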
You can set turbine.instanceUrlSuffix.<CLUSTERNAME>=/uaa/hystrix.stream, where <CLUSTERNAME> is the value set in turbine.aggregator.clusterConfig. All of the config options from the Turbine 1 wiki work. You don't need to add the port to the suffix, as Spring Cloud Netflix Turbine adds the port from Eureka.
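Put together, a minimal sketch of the Turbine server configuration under this answer (the cluster name USER-SERVICE is an assumed value chosen to match the user-service client from the question):

turbine.aggregator.clusterConfig=USER-SERVICE
turbine.appConfig=user-service
turbine.instanceUrlSuffix.USER-SERVICE=/uaa/hystrix.stream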