Difference between watch endpoints? - kubernetes

I'm interested in watching a stream of Events from Kubernetes, to determine whether a deployment was successful, or if any of the Pods were unable to be scheduled.
I could call the endpoint /api/v1/watch/events, or I could call /api/v1/events?watch=true. Is there a difference between the two? I'm confused about their respective purposes.
Thanks.

We're making watch a query parameter and removing it from the path (the legacy form). You should call /api/v1/events?watch=true. See more discussion here if you're interested.
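If you are exploring this from a client, a minimal Go sketch with client-go is shown below; the kubeconfig path is a placeholder, and the client library adds the watch=true query parameter for you:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig file (the path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch() on the all-namespaces Events interface issues
	// GET /api/v1/events?watch=true under the hood.
	watcher, err := clientset.CoreV1().Events("").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer watcher.Stop()

	// Each item on the channel is one ADDED/MODIFIED/DELETED event.
	for ev := range watcher.ResultChan() {
		fmt.Printf("%s: %v\n", ev.Type, ev.Object)
	}
}
```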

Related

Kubernetes Operators: Informers vs. reconcile loop

I recently got started with building a Kubernetes operator. I'm using the Fabric8 Java Kubernetes Client but I think my question is more general and also applies to other programming languages and libraries.
When reading through blog posts, documentation or textbooks explaining the operator pattern, I found there seem to be two options to design an operator:
Using an infinite reconcile loop, in which all corresponding Kubernetes objects are retrieved from the API and then some action is performed.
Using informers, which are called whenever an observed Kubernetes resource changes.
However, I can't find any sources discussing which option should be used in which cases. Are there any best practices?
You should use both.
When using informers, it's possible that the handler receives events out of order, or not at all. The former means the handler needs to determine the current state and reconcile it toward the desired state rather than react to the individual event; this approach is referred to as level-based, as opposed to edge-based. The latter means reconciliation also needs to be triggered on a regular interval to account for that possibility.
The way controller-runtime does things, reconciliation is triggered by cluster events (using informers behind the scenes) related to the resources watched by the controller and on a timer. Also, by design, the event is not passed to the reconciler so that it is forced to define and act on a state.
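To make that concrete, below is a rough controller-runtime reconciler sketch in Go; the MyResource kind and the example.com API package are made-up placeholders, not a real project:

```go
package controllers

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	examplev1 "example.com/my-operator/api/v1" // hypothetical generated API types
)

type MyResourceReconciler struct {
	client.Client
}

// Reconcile only receives the name/namespace of the object, never the event
// that triggered it, so it has to read the current state and move it toward
// the desired state (level-based behaviour).
func (r *MyResourceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var res examplev1.MyResource
	if err := r.Get(ctx, req.NamespacedName, &res); err != nil {
		// The object may have been deleted since the event was queued.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// ...compare res.Spec with what actually exists in the cluster and fix any drift...

	// Requeue on a timer as a safety net for missed or collapsed events.
	return ctrl.Result{RequeueAfter: 5 * time.Minute}, nil
}

// SetupWithManager wires the reconciler to informers for MyResource behind the scenes.
func (r *MyResourceReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&examplev1.MyResource{}).
		Complete(r)
}
```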

What happens when creating a CRD without a corresponding operator?

Once I register the CRD in the k8s cluster, I can create instances of it from a .yaml file without any operator running. What happens to these created resources?
I have seen the operator's Reconciler, but it's more like an asynchronous status transfer. When we create a pod, we can directly get the pod IP from the create result, but I didn't find a place to write my own OnCreate hook. (I have only seen validating webhooks, never a hook that is called when the creation request is made, defines how to create the resource, and returns the created resource's info to the caller.)
My scenario is that, for one kind of resource, all creations arriving within a time window should multiplex onto a single pod. Can you give me some advice?
That's a big topic, the whole Kubernetes CRD/controller life cycle, so I'll try to give a simple overview.
After you register a new CRD and create a CR, the kube-apiserver does not care whether a related controller exists or not.
That means the resource (your CR) is stored in etcd regardless; your controller is not involved at this point.
OK, now let's talk about your controller. Your controller sets up a list/watch (actually a long-lived HTTP connection) to the api-server and registers hooks (what you are asking for, right?) for the different events: onCreate, onUpdate and onDelete. In practice you handle all of these events in your controller's reconcile function (remember the responsibility of reconciliation in Kubernetes: move the current state toward the desired state).
For the list/watch connections in your controller, you need a separate watch for each kind of resource. For example, if you care about events for pods, you set up a pod list/watch; if you care about deployments, you set up a deployment list/watch, and so on. A sketch of this is shown below.
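A rough client-go sketch of such a list/watch with event handlers, assuming the controller runs in-cluster and watches Pods (for your own CR you would use a generated or dynamic informer instead):

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Assumes the controller is running inside the cluster.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The shared informer maintains the long-lived list/watch connection to the
	// api-server and re-lists everything every 10 minutes as a resync.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()

	// These handlers are the onCreate/onUpdate/onDelete hooks mentioned above;
	// a real controller would enqueue a key here and reconcile from a work queue.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("pod added:", obj.(*corev1.Pod).Name)
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			fmt.Println("pod updated:", newObj.(*corev1.Pod).Name)
		},
		DeleteFunc: func(obj interface{}) {
			fmt.Println("pod deleted")
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; handlers run as events arrive
}
```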

Kubernetes Pod warm-up for load balancing

We have a Kubernetes service whose pods take some time to warm up with their first requests. Basically the first incoming requests read some cached values from Redis, and these requests can take a bit longer to process. When newly created pods become ready and receive full traffic, they can be fairly unresponsive for up to 30 seconds before everything is correctly loaded from Redis and cached.
I know, we should definitely restructure the application to prevent this, unfortunately that is not feasible in a near future (we are working on it).
It would be great if it were possible to reduce the weight of the newly created pods, so they would receive 1/10 of the traffic in the beginning, with the weight increasing as time passes. This would also be great for newly deployed versions of our application, to see whether they behave correctly.
Why do you need the cache loading to happen on the first call, instead of in a heartbeat endpoint that is hooked to the readiness probe? One other option is to make use of init containers in Kubernetes.
Until the application can be restructured to do this "priming" internally...
When running on Kubernetes, look into Container Lifecycle Hooks and specifically into the PostStart hook. Documentation here and example here.
It seems that the behavior of "...The Container's status is not set to RUNNING until the postStart handler completes" is what can help you.
There are a few gotchas, like "... there is no guarantee that the hook will execute before the container ENTRYPOINT" because "...The postStart handler runs asynchronously relative to the Container's code", and "...No parameters are passed to the handler".
Perhaps a custom script can simulate that first request with some retry logic to wait for the application to be started?
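A minimal sketch of what that could look like in the Pod spec, assuming the application exposes a hypothetical warm-up endpoint on port 8080 (the image, paths, and timings are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-warmed-app
spec:
  containers:
  - name: app
    image: example/app:latest   # placeholder image
    lifecycle:
      postStart:
        exec:
          # Retry until the app answers, firing one priming request so the
          # Redis-backed cache is populated before real traffic arrives.
          command:
          - sh
          - -c
          - "until wget -qO- http://localhost:8080/warmup; do sleep 1; done"
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
```

Keep in mind the asynchronous behaviour quoted above: the retry loop is there precisely because the handler may run before the application's entrypoint has finished starting.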

A callback for publish method in Autobahn?

I wonder why there is no callback defined on the publish method in AutobahnJS? I think it would be useful for the client who tries to publish something to know whether their publish call succeeded or not and react accordingly. I wonder if other frameworks that support pub/sub have such a callback for publishing.
How do you define "success" for a publish? Depending on the answer, PubSub looks very different. Today, WAMP PubSub is a best-effort service. WAMPv2 might add other QoS levels.

How to use a WF DelayActivity in an ASP.Net web based workflow

I have a web application that I am adding workflow functionality to using Windows Workflow Foundation. I have based my solution around K. Scott Allen's Orders Workflow example on OdeToCode. At the start I didn't realise the significance of the caveat "if you use Delay activities with and configure active timers for the manual scheduling service, these events will happen on a background thread that is not associated with an HTTP request". I now need to use Delay activities, and it doesn't work as-is with his solution architecture. Has anyone come across this and found a good solution? The example is linked to from a lot of places, but I haven't seen anyone else run into this issue, and it seems like a bit of a show stopper to me.
Edit: The problem is that the results from the workflow are returned to the web application via HttpContext. I am using the ManualWorkflowSchedulerService with useActiveTimers enabled, and this works fine for most situations because workflow events are fired from the web app, HttpContext still exists when the workflow results are returned, and the web app can continue processing. When a delay activity is used, processing happens on a background thread, and when it tries to return results to the web app there is no valid HttpContext (because there has been no HTTP request), so further processing fails. That is, the web app is trying to process the workflow results but there has been no HTTP request.
I think I need to do all post Delay activity processing within the workflow rather than handing off to the web app.
Cheers.
You didn't describe the problem you are having. But maybe this is of some help.
You can use the ManualWorkflowSchedulerService with useActiveTimers enabled and the workflow will continue on another thread. Normally this is fine, because your HTTP request has already finished and it doesn't really matter.
If, however, you need full control, the workflow runtime will let you get a handle on all loaded workflows using the GetLoadedWorkflows() function. This returns a collection of WorkflowInstance objects. Using these, you can call GetWorkflowNextTimerExpiration() to check which has expired. If one has, you can manually resume it. In this case you want to use the ManualWorkflowSchedulerService with useActiveTimers=false so you control that thread as well. However, in most cases useActiveTimers=true works perfectly well.