APOC UUID support in Kubernetes

I'm running a Neo4j instance (version 4.2.2) in a pod within a Kubernetes cluster, in standalone mode. The server starts and I can create, find and update nodes and relationships; however, when trying to install a UUID handler using apoc.uuid.install, the procedure hangs and never seems to finish.
I'd also like to mention that apoc.uuid.enabled=true is set in neo4j.conf, that I set a constraint on the designated UUID field before running the install, and that I can't find any errors in the logs. I've also tried this functionality in non-K8s environments, where I have no problem using it.
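For reference, the setup boils down to roughly the following sketch (Item, the uuid property and the credentials are placeholders; the commands can just as well be run through kubectl exec against the pod):

# apoc.uuid.enabled=true is set in neo4j.conf; with the official Docker image
# this is typically passed as an environment variable (NEO4J_apoc_uuid_enabled=true).
cypher-shell -u neo4j -p "$NEO4J_PASSWORD" <<'CYPHER'
// unique constraint on the designated UUID field (Neo4j 4.2 syntax)
CREATE CONSTRAINT ON (n:Item) ASSERT n.uuid IS UNIQUE;
// install the UUID handler for the label; this is the call that hangs
CALL apoc.uuid.install('Item', {addToExistingNodes: true, uuidProperty: 'uuid'});
CYPHER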
The helm charts used for this deployment are taken from https://github.com/neo4j-contrib/neo4j-helm.
Has anyone experienced this behavior? If so, how did you manage to solve it?

Related

Possible to remove cluster from 2.4.8 server and import it on a fresh 2.5 server?

Due to a wrong installation method (single node), we would like to migrate our existing Kubernetes cluster to a newer, HA Rancher Kubernetes cluster.
Can someone tell me if it's safe to do the following:
remove the (previously imported) cluster from our 2.4.8 single-node Rancher installation
register this cluster again on our new Kubernetes-managed 2.5 Rancher cluster?
We already tried this with our development cluster and it worked fine; the only things we had to do were to:
create user/admin accounts again
reassign all namespaces to the corresponding rancher projects
It would be nice to get some more opinions on this; right now it looks more or less safe :smiley:
Also, does anyone know what happens if one Kubernetes cluster is registered/imported into two Rancher instances at the same time (like 2.4.8 and 2.5 at the same time)? I know it's probably a really bad idea; I just want to get a better understanding in case I'm wrong :D
Just to give some feedback after we solved it ourselves:
We removed the old installation and imported the cluster again in the new installation; it worked fine without problems.
We also had another migration, where we moved again because our old Rancher installation (on a managed Kubernetes cluster, which is not recommended) no longer worked and was no longer reachable.
We just imported the cluster into the new Rancher installation without removing it from the old one. Everything worked fine as well, except that the project-namespace associations were broken and we had to reassign all namespaces to new projects again. Also, somehow our alerting is still not working correctly: the notification emails still point to our old Rancher installation's domain, even though we changed all the config files we could find.

Persistent volume change: Restart a service in Kubernetes container

I have an HTTP application (Odoo). This app supports installing/updating modules (addons) dynamically.
I would like to run this app in a Kubernetes cluster, and I would like to dynamically install/update the modules.
I have 2 solutions for this problem. However, I was wondering if there are other solutions.
Solution 1:
Include the custom modules with the app in the Docker image
Every time I make a change to a custom module and push it to a git repository, Jenkins pulls the changes, creates a new image and then applies the new changes to the Kubernetes cluster.
Advantages: I can manage the docker image version and restart an image if something happens
Drawbacks: This solution is not bad for production; however, all the custom module repositories have to be listed in the Dockerfile. Suppose I have two custom modules, each in its own repository: a change to either of them leads to a rebuild of the whole Docker image.
Solution 2:
Have a persistent volume that contains only the custom modules.
If a change is made to a custom module it is updated in the persistent volume.
The changes then need to be applied to each pod running the app (I don't know, maybe by doing a restart).
Advantages: Small changes don't trigger image build. We don't need to recreate the pods each time.
Drawbacks: Controlling the versions of each update is difficult (I don't know if we have version control for persistent volume in Kubernetes).
Questions:
Is there another solution to solve this problem?
For both methods, there is a command that must be executed for the module changes to be taken into account: odoo --update "module_name". This command must include the module name. For solution 2, how can I execute a command in each pod?
For solution 2, is it better to restart the app service (odoo) instead of restarting all the nodes? Meaning, if we can execute a command on each pod, we can just restart the app's service.
Thank you very much.
You will probably be better off with your first solution, especially if you already have the whole toolchain to rebuild and deploy images. It will be easier for you to roll back to previous versions and also to troubleshoot (since you know exactly which version is running in each pod).
There is an alternative solution that is sometimes used to provision static assets on web servers: you can add an emptyDir volume and a sidecar container to the pod. The sidecar pulls the changes from your plugin repositories into the emptyDir at a fixed interval. Finally, your app container, sharing the same emptyDir volume, will have access to the plugins.
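A rough sketch of that pattern (the image tags, repository URL, mount paths and interval below are all placeholders; a simple alpine/git sidecar does the pulling):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: odoo-with-addons-sync
spec:
  volumes:
    - name: addons
      emptyDir: {}
  containers:
    - name: odoo
      image: odoo:14                     # your application image
      volumeMounts:
        - name: addons
          mountPath: /mnt/extra-addons   # custom-addons path of the official image
    - name: addons-sync                  # sidecar: keeps the addons checkout fresh
      image: alpine/git
      command: ["sh", "-c"]
      args:
        - |
          git clone https://example.com/your/addons.git /addons || true
          while true; do git -C /addons pull; sleep 60; done
      volumeMounts:
        - name: addons
          mountPath: /addons
EOF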
In any case, running the command to update the plugins is going to be complicated. You could do it at a fixed interval, but your app might not like it.
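For the update command itself, one way would be a small loop with kubectl exec over the pods of the deployment (the namespace, label selector, database and module names below are placeholders, and the extra flags assume a fairly standard Odoo setup):

# Run the module update in every pod matching the app's label selector;
# --stop-after-init makes odoo exit after the update instead of starting a server.
for pod in $(kubectl get pods -n odoo -l app=odoo -o name); do
  kubectl exec -n odoo "$pod" -- odoo -d odoo_db --update "module_name" --stop-after-init
done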

Can new Rancher version be used for local cluster only?

I have been working with Kubernetes in a staging environment for a couple of months and want to switch to production. I came across a tool called Rancher almost 2 weeks ago and have been going through their documentation since then.
The developers and the community recommend not running Rancher on the production Kubernetes cluster itself, but preferably creating a separate cluster for it and adding an agent to your main production cluster from there.
However, in the latest stable version there is actually an option you can tick to use Rancher only for the local cluster, so this question came to my mind:
Is the latest stable version of Rancher suitable to be deployed on the production cluster itself rather than having a dedicated cluster? And can any security or restart issues happen that would delete all the configurations of the other components on the cluster?
Note: in another staging environment I installed an instance of WordPress and Ghost on the local cluster, and both were working fine.
I still think the best option for you would be to have your own, fully accessible cluster, so you won't be dependent on Rancher cloud solutions. I am not saying Rancher is bad; no. Just, if you are talking about a PRODUCTION environment, my personal opinion is that the cluster should be your own. Sure, an arguable topic.
What I can also mention here: you can use any of the Useful Interactive Terminal and Graphical UI Tools for Kubernetes, for example Octant:
Octant is a browser-based UI aimed at application developers, giving them visibility into how their application is running. I also think this tool can really benefit anyone using K8s, especially if you forget the various options to kubectl to inspect your K8s cluster and/or workloads. Octant is also a VMware Open Source project and it is supported on Windows, Mac and Linux (including ARM) and runs locally on a system that has access to a K8s cluster. After installing Octant, just type octant and it will start listening on localhost:7777, and you just launch your web browser to access the UI.
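Getting it running is quick; for example (Homebrew shown here as one install option, other packages exist for Linux and Windows, and kubectl access to the cluster is assumed to be configured already):

# Install and launch Octant; it serves its UI on http://localhost:7777
brew install octant
octant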

What is the suggested workflow when working on a Kubernetes cluster using Dask?

I have set up a Kubernetes cluster using Kubernetes Engine on GCP to work on some data preprocessing and modelling using Dask. I installed Dask using Helm following these instructions.
Right now, I see that there are two folders, work and examples.
I was able to execute the contents of the notebooks in the examples folder, confirming that everything is working as expected.
My questions now are as follows:
What is the suggested workflow to follow when working on a cluster? Should I just create a new notebook under work and begin prototyping my data preprocessing scripts?
How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment? Would you just manually move them to a bucket every time you upgrade (which seems tedious)? Or would you create a simple VM instance, prototype there, and then move everything to the cluster when running on the full dataset?
I'm new to working with data in a distributed environment in the cloud so any suggestions are welcome.
What is the suggested workflow to follow when working on a cluster?
There are many workflows that work well for different groups. There is no single blessed workflow.
Should I just create a new notebook under work and begin prototyping my data preprocessing scripts?
Sure, that would be fine.
How can I ensure that my work doesn't get erased whenever I upgrade my Helm deployment?
You might save your data to some more permanent store, like cloud storage, or a git repository hosted elsewhere.
Would you just manually move them to a bucket every time you upgrade (which seems tedious)?
Yes, that would work (and yes, it is)
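For example, a couple of gsutil commands around the upgrade would cover it (the bucket name and paths below are placeholders, and this assumes gsutil can reach the work directory, e.g. from the notebook pod or after copying it locally):

# Back up the notebook work directory to a bucket before the helm upgrade,
# then restore it afterwards.
gsutil -m rsync -r ./work gs://my-dask-notebooks/work
# ... run the helm upgrade ...
gsutil -m rsync -r gs://my-dask-notebooks/work ./work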
Or would you create a simple VM instance, prototype there, and then move everything to the cluster when running on the full dataset?
Yes, that would also work.
In Summary
The Helm chart includes a Jupyter notebook server for convenience and easy testing, but it is no substitute for a full-fledged, long-term, persistent productivity suite. For that you might consider a project like JupyterHub (which handles the problems you list above) or one of the many enterprise-targeted variants on the market today. It would be easy to use Dask alongside any of those.

Redeploy Service Fabric application without version change

I've read about partial upgrades, but they always require changing some parts of the application package. I'd like to know if there's a way to redeploy a package without a version change; in a way, similar to what VS does when deploying to the dev cluster.
On your local dev cluster, VS simply deletes the application before it starts the re-deployment. You could do the same in your production cluster; however, this results in downtime, since the application is not accessible during that time.
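As a rough sketch of that delete-and-redeploy flow with the sfctl CLI (application and type names, version and package path are placeholders; the equivalent PowerShell cmdlets work the same way):

# Delete the running application and unregister the existing type version,
# then upload, provision and create it again; the app is down in between.
sfctl application delete --application-id MyApp
sfctl application unprovision --application-type-name MyAppType --application-type-version 1.0.0
sfctl application upload --path ./MyAppPkg
sfctl application provision --application-type-build-path MyAppPkg
sfctl application create --app-name fabric:/MyApp --app-type MyAppType --app-version 1.0.0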
What's the reason why you wouldn't want to use the regular monitored upgrade? It has many advantages, like automatic rollbacks and so on.