I have a question about Netflix Eureka and the Blue/Green pattern.
Recently I wrote a blog post about microservices and transactions, in which I tried to cover the Blue/Green pattern, but I have a question about the subject.
I try to start my service in 'OUT_OF_SERVICE' mode with the following configuration:
eureka:
  instance:
    initialStatus: OUT_OF_SERVICE
    instanceEnabledOnit: false
Initially everything is OK: the service starts as OUT_OF_SERVICE, and via Eureka's JSON/REST interface I would like to switch the service to the 'UP' state. The problem is that I also have health check functionality implemented for my service (a class implementing the 'HealthCheck' interface, automatically detected and registered by Eureka). This health check overwrites my 'OUT_OF_SERVICE' status and takes the service to the UP state before I can intervene with the Blue/Green pattern.
The 'HealthCheck' interface of Eureka does not give me any information about the actual state of the service, so I can't check beforehand whether it is 'OUT_OF_SERVICE' or not. I also wasn't expecting that it could go from OUT_OF_SERVICE to UP; a transition from DOWN to UP is understandable, but OUT_OF_SERVICE for me means I don't want this service to receive traffic.
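For context, this is roughly the kind of handler I am experimenting with: a minimal sketch that assumes the com.netflix.appinfo.HealthCheckHandler interface (which, unlike the older HealthCheckCallback, is passed the instance's current status) and a made-up isBackendHealthy() check. How the handler gets registered depends on the client setup, so treat it as an illustration only.

import com.netflix.appinfo.HealthCheckHandler;
import com.netflix.appinfo.InstanceInfo.InstanceStatus;

public class BlueGreenAwareHealthCheck implements HealthCheckHandler {

    @Override
    public InstanceStatus getStatus(InstanceStatus currentStatus) {
        // Never auto-promote an instance that was deliberately parked for Blue/Green;
        // only an explicit status change (e.g. via Eureka's REST interface) should flip it to UP.
        if (currentStatus == InstanceStatus.OUT_OF_SERVICE) {
            return InstanceStatus.OUT_OF_SERVICE;
        }
        // Otherwise report the real health of the service.
        return isBackendHealthy() ? InstanceStatus.UP : InstanceStatus.DOWN;
    }

    // Placeholder for the real health check logic (hypothetical).
    private boolean isBackendHealthy() {
        return true;
    }
}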
You can find the detailed discussion here 'https://mehmetsalgar.wordpress.com/2016/11/05/micro-services-fan-out-transaction-problems-and-solutions-with-spring-bootjboss-and-netflix-eureka/#blue_green'.
So my question is: how can I use the Blue/Green pattern and at the same time keep the 'HealthCheck' functionality?
Thx for answers
First of all, this is a question regarding my thesis for school. I have done some research about this; it seems like a problem that hasn't been tackled yet (it might not be that common).
Before jumping right into the problem, I'll give a brief example of my use case.
I have multiple namespaces containing microservices that depend on a state X. To manage this, the microservices are put in a namespace named after the state (so namespaces state_A, state_B, ...).
It is important to know that each microservice needs this state at startup: it downloads the necessary files, etc., according to the state. When the service is launched with state A version 1, it is very likely that the state will be updated later, say every month. When this happens, it is important to let all the microservices that depend on state A upgrade whatever is necessary (databases, in-memory state, ...).
My current approach to this problem is simply using events: the microservices that need updates when the state changes can subscribe to the event and migrate/upgrade accordingly. The only problem I'm facing is that the service should keep working while it is upgrading. So somehow I should duplicate the service first, let the duplicate upgrade, and when the upgrade is successful, shut down the original. Because of this, the orchestration service used would have to be able to create duplicates (including duplicating the state).
My question is: are there already solutions for this problem (and if so, which ones)? I have looked into Netflix Conductor (which seemed promising with its workflows and events), Amazon SWF, Marathon and Kubernetes, but none of them covers my problem.
Ideally, the solution should not be bound to a specific platform (Azure, GCE, ...).
For an uninterrupted upgrade you should use a cluster of nodes providing your service and perform a rolling update, which takes out a single node at a time, upgrades it, and leaves the rest of the nodes to continue serving. I recommend looking at the concept of virtual services (e.g. in Kubernetes) and rolling updates.
For inducing state I would recommend looking into container initialization mechanisms. For example, in Docker you can use entrypoint scripts, and in Kubernetes there is the concept of init containers. Note, though, that the trend today is to decouple services and state, meaning the state is kept in a DB that is separate from the service deployment. This allows you to view the service as a stateless component that can be replaced without losing state (given that the interface between the service and the required state did not change). This is good in scenarios where the service changes frequently and the DB design less so.
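As a rough sketch of both ideas together (using the fabric8 kubernetes-client Java builders; the names, images and the init-container command are made up, and the exact builder packages differ between client versions), a deployment with a rolling-update strategy plus an init container that pulls the state before the service starts could look like this:

import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;

public class StateAwareDeployment {

    public static Deployment build() {
        return new DeploymentBuilder()
            .withNewMetadata()
                .withName("my-service")            // hypothetical service name
                .withNamespace("state-a")          // note: Kubernetes namespaces may not contain underscores
            .endMetadata()
            .withNewSpec()
                .withReplicas(3)
                .withNewSelector()
                    .addToMatchLabels("app", "my-service")
                .endSelector()
                .withNewStrategy()                 // rolling update: replace one pod at a time
                    .withType("RollingUpdate")
                    .withNewRollingUpdate()
                        .withMaxUnavailable(new IntOrString(1))
                        .withMaxSurge(new IntOrString(1))
                    .endRollingUpdate()
                .endStrategy()
                .withNewTemplate()
                    .withNewMetadata()
                        .addToLabels("app", "my-service")
                    .endMetadata()
                    .withNewSpec()
                        .addNewInitContainer()     // fetches the state files before the main container starts
                            .withName("fetch-state")
                            .withImage("busybox:1.36")
                            .withCommand("sh", "-c", "echo 'download state A files here'")
                        .endInitContainer()
                        .addNewContainer()
                            .withName("my-service")
                            .withImage("example/my-service:1.0")
                        .endContainer()
                    .endSpec()
                .endTemplate()
            .endSpec()
            .build();
    }
}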
Another note - I am not sure that representing state in a namespace is a good idea. Typically a namespace is a static construct for organization (of code, services, etc.) that aims for stability.
We need to determine whether use.proxy is true or false, and this value should come dynamically from the properties file. The following two scenarios can happen:
If we send a request or a service callout to the real backend, we need use.proxy=true.
If we send a request or a service callout to the simulated backend (for continuous integration), we need use.proxy=false.
Unfortunately, the simulated backend is just an IP address, which is not accessible via a proxy.
What we tried out:
<Property name="use.proxy">{_PROXY_CHOICE}</Property>
And in the properties-file the argument:
context.setVariable('_PROXY_CHOICE', '${proxy.choice}');
But nothing happened. Does anyone have any clues on how to solve this issue?
I would have the proxy to the real backend configured in the properties file and have use.proxy set to true, so by default it keeps using the real backend, and set use.proxy to false in the simulation case in the proxy. So just play around with use.proxy based on the request; as I understand it, you can change this per request in the proxy.
This has happened to me more than once; I thought someone could give some insight.
I have worked on multiple projects where my project depends on an external service. When I have to run the application locally, I need that service to be up. But sometimes I am coding against the next version of their service, which may not be ready yet.
So the question is: is there already a way to have a mock service up and running that I could configure with some requests and responses?
For example, let's say that I have a local application that needs to make a REST call to some other, external service to obtain some data. Say that, for a given user, I need to find all pending shipments, which would come from the other service. But I don't have access to that service.
In order to run my application I need a working external service, but I don't have access to it in my environment. Is there a better way than having to create a fake service myself?
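To make it concrete, what I am picturing is something like a programmable stub server. Here is a rough sketch using WireMock purely as an example of that kind of tool (the port, URL and payload are invented for illustration):

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class FakeShipmentService {

    public static void main(String[] args) {
        // Start a local stub server on a fixed port.
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Program one canned request/response pair: pending shipments for user 42.
        server.stubFor(get(urlEqualTo("/users/42/shipments?status=PENDING"))
            .willReturn(aResponse()
                .withHeader("Content-Type", "application/json")
                .withBody("[{\"shipmentId\":\"s-1\",\"status\":\"PENDING\"}]")));

        // The application under development would then be pointed at
        // http://localhost:8089 instead of the real service.
    }
}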
You should separate the communication concerns from your business logic (something I call an "Edge Component"; see here and here).
For one, it will let you test the business logic by itself. It will also give you the opportunity to rethink the temporal coupling you currently have; e.g. you may want the layer that handles communication to pre-fetch, cache, etc. data from other services, so that you will also have more resilient services at run time.
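As a minimal sketch of the idea (all names here are made up): the business logic depends only on a small interface, and either the real HTTP-based edge component or a local fake is plugged in behind it.

import java.util.List;

// Port the business logic depends on; it knows nothing about HTTP or the remote service.
interface ShipmentGateway {
    List<String> pendingShipmentsFor(String userId);
}

// Business logic, testable and runnable without the external service.
class ShipmentReport {
    private final ShipmentGateway gateway;

    ShipmentReport(ShipmentGateway gateway) {
        this.gateway = gateway;
    }

    String summaryFor(String userId) {
        List<String> pending = gateway.pendingShipmentsFor(userId);
        return userId + " has " + pending.size() + " pending shipment(s)";
    }
}

// Fake edge component for local development; the real implementation would call the remote
// REST API and could also pre-fetch or cache results.
class InMemoryShipmentGateway implements ShipmentGateway {
    @Override
    public List<String> pendingShipmentsFor(String userId) {
        return List.of("shipment-1", "shipment-2");
    }
}

class Demo {
    public static void main(String[] args) {
        ShipmentReport report = new ShipmentReport(new InMemoryShipmentGateway());
        System.out.println(report.summaryFor("user-42")); // prints: user-42 has 2 pending shipment(s)
    }
}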
I'm wondering how best to handle the Faulted state in a WF4 workflow service host. I'm using a console self-hosted service. I understand one approach is to implement the IErrorHandler interface, but does anybody know how I then configure this on my service? I.e. how do I add it to the Behaviors collection?
Additionally, I wonder if anybody has any thoughts/advice on how best to handle a 'restart' scenario (or indeed whether it's possible) once the workflow service host has entered the Faulted state. My understanding is that once the service host enters the Faulted state it is end game and the application is in effect terminated. Can anybody suggest a possible strategy for this? I'm thinking maybe a management service on top that handles failed instances of the workflow service host console application, though I'd be interested to hear from people who've faced this dilemma before I attempt anything.
EDIT:
Also, I'm working in a clustered environment. When the cluster enters a fail-over state, the workflow appears to lose connectivity with the database for a period of (no more than) one minute. Has anybody dealt with this scenario specifically?
Thanks in advance
Ian
We have a solution with Microsoft.Activities v1.8.4; see WorkflowService Configuration Based Extensions, which allows you to add extensions using a service behavior and some config.
Can anyone point me in the direction of some documentation (or provide the information here) about the following tables, created by JBoss 5.1.0 when it starts up?
I know what they are for at a high level, and know why they are there, but I could do with some lower-level documentation about each table's purpose.
The tables are:
hilosequences
timers
jbm_counter
jbm_dual
jbm_id_cache
jbm_msg
jbm_msg_ref
jbm_postoffice
jbm_role
jbm_tx
jbm_user
I know that the first two are associated with uuid-key-generator and the EJB Timer Service respectively, while the rest are associated with JBoss Messaging. What I want to know is something along the lines of "jbm_msg stores each message when it is created...", that kind of thing.
Thanks
Rich
ps: I originally asked this question at ServerFault but didn't get a response
hilosequences is used by uuid-key-generator.sar, which provides the jboss:service=KeyGeneratorFactory,type=HiLo service; this basically allows you to have UUID keys generated consistently across all applications of an instance.
timers is used by ejb2-timer-service.xml, a legacy timer service.
The jbm_* tables are used by JBoss Messaging (JMS) to store messages, queues, etc., which is why it's strongly recommended to change the DB from the default (Hypersonic) to a production-ready one.