I want to configure a simple microservice architecture with Spring Cloud.
I have started with 3 infrastructure components:
- Service Registry (SR)
- Config Server (CS)
- Auth Server (AS)
and this configuration:
SR and AS use CS to retrieve their configurations on startup.
CS and AS register themselves with SR to be discoverable.
Now, the cyclic dependency between SR and CS causes an exception, because one of them cannot connect to the other on startup.
Is there a solution that does not remove this dependency, for example waiting for the other service to be up and running (see the sketch below)?
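One way the "wait for the other service" idea is commonly expressed with Spring Cloud Config is fail-fast plus retry on the config client. A minimal sketch, assuming the standard Config Server client with spring-retry on the classpath; the URI and timings are placeholders:

    # bootstrap.yml of SR and AS (sketch) - keep retrying the Config Server on startup
    spring:
      cloud:
        config:
          uri: http://config-server:8888   # placeholder CS address
          fail-fast: true                  # fail startup only after retries are exhausted
          retry:
            initial-interval: 2000         # ms between attempts
            max-attempts: 20               # keep trying while CS comes up
    # requires spring-retry (and spring-boot-starter-aop) on the classpath

With this, whichever service starts first simply keeps polling until the other one becomes reachable, instead of failing immediately.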
Thanks!
I have been using Hono for a while; it is a pretty awesome IoT hub - thanks for the great job :)
Now I am trying to move forward and, as recommended by the Hono documentation,
I would like to integrate the EnMasse project and replace the default "AMQP Messaging Network" in Hono with EnMasse.
After reading the EnMasse docs I realized that EnMasse actually uses the same "AMQP networking" structure as Hono, by means of the Qpid Dispatch Router and (multiple) ActiveMQ Artemis brokers!
Now my questions are:
1. What is actually the difference between the default AMQP Messaging Network in Hono and EnMasse?
2. How do I integrate EnMasse with Hono? I searched a lot on the net but found no answer. I am grateful for any idea where to start!
Thanks in advance!
ad 1) By default, the Hono Helm chart deploys a single instance each of the Qpid Dispatch Router and the Artemis broker. This means that both the Dispatch Router and Artemis are single points of failure. With EnMasse, a network of Dispatch Routers and multiple Artemis brokers can be created and (more importantly) consistently managed. This will be important for scale-out and fail-over in production scenarios.
ad 2) If you want to deploy to Kubernetes, then you might want to start by using the EnMasse operator to create an instance of EnMasse in your Kubernetes cluster. You can then use the Hono Helm chart's configuration properties to configure your Hono instance not to deploy the example AMQP Messaging Network (i.e. a single Dispatch Router + Artemis) but instead to connect to the EnMasse instance that you have created.
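As a rough illustration of the second point, a Helm values override might have the following shape. This is only a sketch: the exact keys (amqpMessagingNetworkExample, amqpMessagingNetworkSpec) should be verified against the values.yaml of your chart version, and host/port/credentials are placeholders for your EnMasse address space:

    # values override for the Hono Helm chart (sketch - verify keys against values.yaml)
    amqpMessagingNetworkExample:
      enabled: false                     # do not deploy the single Dispatch Router + Artemis
    amqpMessagingNetworkSpec:            # assumed key for the external network connection
      host: messaging.enmasse.svc        # placeholder: service/route of your EnMasse address space
      port: 5672
      username: hono-user                # placeholder credentials created in EnMasse
      password: hono-secret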
We have an existing microservice environment with Logstash, config and Eureka servers. We are now setting up a Spring Cloud Data Flow (Kubernetes) environment (initially primarily to run tasks/batch jobs).
Ideally we would like the tasks to use the existing Logstash, config and Eureka servers via the standard Spring Boot configuration (annotations etc.) to support the following scenarios:
Logstash: When a task runs, its logs are output to Logstash and viewable in Kibana.
Config Server: To support changing configuration properties for tasks, e.g. a periodic task's configuration can be tweaked by altering the values on the configuration server, and the next time the task runs it will use the new values.
My understanding is that config server properties will override properties in the task definition, which override properties in the internal application.properties.
Eureka: Each task would register itself in Eureka. The main reason for this is that our tasks have web actuator endpoints exposed, and we can then use Spring Boot Admin (which can discover services via Eureka) to access the actuator endpoints and information while a task is running.
(Some of our tasks can take hours to run, and this would enable us to monitor them, adjust logging, etc.)
Is this a sensible approach, or are there any potential issues to look out for here (e.g. short-lived tasks with Eureka)? I can't find any discussion of this in the existing Spring Cloud Data Flow or Spring Cloud Task documentation.
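For reference, the task-side configuration described above would roughly take this shape (a sketch only; URLs and the application name are placeholders, standard Config Server client / Eureka client / Boot 2-style actuator properties assumed):

    # application.yml of a task app (sketch) - reuse existing config server and Eureka
    spring:
      application:
        name: my-task                              # placeholder task name
      cloud:
        config:
          uri: http://config-server:8888           # existing Config Server
    eureka:
      client:
        serviceUrl:
          defaultZone: http://eureka:8761/eureka/  # existing Eureka server
    management:
      endpoints:
        web:
          exposure:
            include: health,info,loggers           # actuator endpoints for Spring Boot Admin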
You may try logstash-logback-encoder for SCDF integration with the ELK stack. It works fine for our SCDF-on-YARN stream applications.
Config Server should work for any Spring Boot application.
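For the logstash-logback-encoder suggestion, a minimal logback-spring.xml sketch using its TCP appender (the Logstash host and port are placeholders):

    <!-- logback-spring.xml (sketch) - ship logs to Logstash via logstash-logback-encoder -->
    <configuration>
      <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>logstash:5000</destination>   <!-- placeholder host:port -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
      </appender>
      <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
      </root>
    </configuration>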
I'm trying to configure a reliable configuration service that uses the bus to update clients when a config change happens. I started two config servers that monitor the local file system, and two Eureka servers so that clients can discover the config service at startup (i.e. eureka-first config). I used RabbitMQ as the AMQP bus.
The current behavior is the following: if I update a config file and POST to http://config-server1/bus/refresh, the config server sends a notification and only one client picks it up. So to update 3 clients I need to make 3 POSTs.
Question: how can I configure the bus so that one POST to /bus/refresh will update all clients?
Thank you in advance.
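For context, the client-side setup described above (eureka-first config lookup plus the RabbitMQ bus) roughly corresponds to a bootstrap like this. This is a sketch only; hosts and the config server's service id are placeholders:

    # bootstrap.yml of a client (sketch) - eureka-first config plus RabbitMQ bus
    spring:
      cloud:
        config:
          discovery:
            enabled: true                 # look up the config server via Eureka
            service-id: config-server     # placeholder service id
      rabbitmq:
        host: rabbitmq                    # placeholder broker host used by the bus
        port: 5672
    eureka:
      client:
        serviceUrl:
          defaultZone: http://eureka1:8761/eureka/,http://eureka2:8761/eureka/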
I have published my DNX/Web Service Fabric stateless service locally - it works. I publish it to the cloud (carefully setting up the correct ports) and it does not start correctly. The error is the usual "partition is below replica count".
My suspicion is that DNX is not installed by default on the cluster VMs. Is there any way to get around that? I don't appear to get a login to those VMs so that I can install ASP.NET 5 manually.
Found the issue - it was not DNX.
I set up a new cluster and was able to log in. There are 22304 error messages saying that my second (non-DNX) stateless service, which is in the same application package, is causing this event:
.NET Runtime version: 4.0.30319.34014 - This application could not be started. This application requires one of the following versions of the .NET Framework:
.NETFramework,Version=v4.5.2
Do you want to install this .NET Framework version now?
I'll figure out how to target correctly.
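For what it's worth, the required framework is usually declared in the service executable's app.config via the standard supportedRuntime element (a sketch; it still requires .NET Framework 4.5.2 to actually be present on the cluster VMs):

    <!-- app.config of the stateless service exe (sketch) - declare the required framework -->
    <configuration>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
      </startup>
    </configuration>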
I have a Tapestry application (WAR, no EJB) that ...
... I want to deploy on 2 EC2 small instances (for failover).
... uses Spring Security
... is stateful (very small session state)
... should be deployed on Glassfish 3.1 (seems to have best cluster support?)
... and has an elastic load balancer with sticky session in front of it
How can I configure a cluster to achieve minimal ('no') interruptions for the user experience in case A) a node fails and B) I deploy a new version?
Everything is explained here: http://download.oracle.com/docs/cd/E18930_01/html/821-2426/docinfo.html#scrolltoc
Basically, you set up a DAS (the master), which controls nodes with instances on them. You could do all of this on the same machine (1 DAS, 1 node with multiple instances), although it would be a good idea to have at least 2.
You should then have at least one load balancer (Apache, a physical load balancer, whatever).
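As a rough sketch of that setup with asadmin (run against the DAS; cluster, node and instance names are placeholders, and the nodes are assumed to exist already, e.g. via create-node-ssh):

    # sketch: create a cluster with two instances on two nodes
    asadmin create-cluster myCluster
    asadmin create-instance --node node1 --cluster myCluster instanceA
    asadmin create-instance --node node2 --cluster myCluster instanceB
    asadmin start-cluster myCluster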
A) If a node fails, the load balancer can redirect all traffic to the other node.
B)
- deploy the application, disabled, with the new version (see "application versioning")
- mark server A as unavailable
- enable the new version on server A
- mark server A as available and server B as unavailable
- enable the new version on server B
- mark server B as available
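With GlassFish application versioning, those steps roughly translate to asadmin commands like the following. This is only a sketch: serverA/serverB are assumed standalone instance targets, the application and file names are placeholders, and taking an instance out of rotation happens at the load balancer, so it is shown only as a comment:

    # sketch: rolling upgrade with application versioning
    asadmin deploy --name myapp:v2 --enabled=false --target serverA myapp-v2.war
    asadmin create-application-ref --target serverB --enabled=false myapp:v2
    # ... mark serverA as unavailable at the load balancer ...
    asadmin enable --target serverA myapp:v2    # enabling a version disables the previous one on that target
    # ... mark serverA as available and serverB as unavailable ...
    asadmin enable --target serverB myapp:v2
    # ... mark serverB as available again ...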