I'm new to KNIME and wondering what cluster validity measures I can use in it. Any ideas? I was thinking about using the Scorer node, but I'm not sure how to configure it correctly; when I tried, I got an error rate of 100%.
The task is to configure the Cluster Autoscaler (CA) with 1 minute of scale-down time in DigitalOcean DOKS.
DOKS supports CA by default, with a scale-down time of 10 minutes. Now I need to set the parameters below as per my requirement.
Parameters I need to modify:
--scale-down-unneeded-time=10s
--scale-down-delay-after-add=30s
I tried in DO, but there is no place for adding or changing these parameters (or I'm not sure if I'm missing something).
Then I tried on an AWS EKS cluster, configured CA (no default support there) with the above parameters, and it works fine.
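For reference, on EKS these flags ended up as container args on the cluster-autoscaler Deployment. A rough sketch of how they can be appended (the deployment name cluster-autoscaler and the kube-system namespace are the usual defaults and may differ in your install):

# Append the two flags to the cluster-autoscaler container args (JSON Patch)
kubectl -n kube-system patch deployment cluster-autoscaler --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--scale-down-unneeded-time=10s"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--scale-down-delay-after-add=30s"}
]'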
Could anyone help me to configure these parameters in DOKS?
Found the root cause after spending the entire day on it.
It may be useful to someone, so adding it here.
The DigitalOcean Kubernetes cluster supports the Cluster Autoscaler (CA) by default, but it does not give you an explicit option to change its configuration.
We cannot change CA behaviour by passing parameters.
There is work in progress:
https://github.com/kubernetes/autoscaler/issues/3556
https://github.com/kubernetes/autoscaler/issues/3556#issuecomment-877015122
I am trying to set up a complete GitLab routine that sets up my Kubernetes cluster with all installations and configurations automatically, including decommissioning at the end.
However, creation and decommissioning is one of the most time-consuming parts, because I am basically waiting for the provisioning before I can execute further commands.
As I sometimes run into trouble with the basics of the Kubernetes setup, I currently decommission my cluster and create a new one. But this is pretty uneconomical and time-consuming.
Question:
Is there a command or a series of commands to completely reset a Kubernetes cluster to its state right after creation?
The closest is probably to do all your work in a new namespace and then delete that namespace when you are done. That automatically deletes all objects inside it.
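A minimal sketch of that workflow (the namespace name and manifest path below are made up):

# Create a throwaway namespace and do all the work inside it
kubectl create namespace ci-run-42
kubectl -n ci-run-42 apply -f ./manifests/
# ... run whatever the pipeline needs ...
# Deleting the namespace removes every namespaced object it contains
kubectl delete namespace ci-run-42

Keep in mind this only cleans up namespaced objects; cluster-scoped resources such as CRDs, ClusterRoles, and PersistentVolumes are not removed when the namespace is deleted.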
We stand up a lot of clusters for testing/PoC/development, and it's up to us to remember to delete them.
What I would like is a way of setting a TTL on an entire GKE cluster and having it get deleted/purged automatically.
I could tag the clusters with a timestamp at creation and have an external process running on a schedule that reaps old clusters, but it'd be great if I didn't have to do that. It might be the only way, but maybe there is a GKE/K8s feature for this?
Is there a way to have the cluster delete itself without relying on an external service? I suppose it could spawn a Cloud Function itself, but I'm wondering if there is a native GKE/K8s feature to do this more elegantly.
You can spawn a GKE cluster with alpha features enabled. Such clusters exist for one month at most and are then auto-deleted.
Read more: https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters
Try Cloud Scheduler and hook it up with your build server. Cloud Scheduler supports HTTP, App Engine, and Pub/Sub endpoints.
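As a rough sketch (the job name, schedule, and webhook URL are placeholders), an HTTP job could look like:

# Hypothetical Cloud Scheduler job that calls a build-server webhook nightly
gcloud scheduler jobs create http cluster-cleanup \
  --schedule="0 3 * * *" \
  --uri="https://builds.example.com/hooks/cleanup-clusters" \
  --http-method=POST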
I don't believe there is a native way to do this, but it doesn't seem unreasonable to use Cloud Scheduler to periodically trigger a Cloud Function which looks for appropriately labeled clusters and triggers their deletion via the API.
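Whether it runs as a Cloud Function or as a plain script on a schedule, the reaping logic would be roughly this (a sketch assuming zonal clusters labeled at creation with a hypothetical expiry label holding a Unix timestamp):

#!/usr/bin/env bash
# Delete zonal GKE clusters whose "expiry" label is in the past
now=$(date +%s)
gcloud container clusters list --format="value(name,zone,resourceLabels.expiry)" |
while read -r name zone expiry; do
  if [[ -n "$expiry" && "$expiry" -lt "$now" ]]; then
    gcloud container clusters delete "$name" --zone "$zone" --quiet
  fi
done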
I am creating a Redshift cluster using CloudFormation, and then I need to output the cluster status (basically, whether it's available or not). There are ways to output the endpoint address and port, but I could not find any way of outputting the status.
How can I get that, or is it not possible?
You are correct. According to AWS::Redshift::Cluster - AWS CloudFormation, the only available outputs are Endpoint.Address and Endpoint.Port.
Status is not something that you'd normally want to output from CloudFormation because the value changes.
If you really want to wait until the cluster is available, you could create a WaitCondition and then have something monitor the status and signal the WaitCondition to continue. This would probably need to be an Amazon EC2 instance with some User Data. Linux instances are charged per-second, so this would be quite feasible.
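A rough sketch of what that instance's User Data could do, assuming the cluster identifier and the pre-signed WaitConditionHandle URL are passed in from the template (the variable names here are made up):

#!/bin/bash
# Block until the Redshift cluster reports "available"
aws redshift wait cluster-available --cluster-identifier "$CLUSTER_ID"
# Signal the WaitConditionHandle's pre-signed URL so the stack can continue
curl -s -X PUT -H "Content-Type:" \
  --data-binary '{"Status":"SUCCESS","Reason":"Redshift cluster available","UniqueId":"redshift1","Data":"available"}' \
  "$WAIT_HANDLE_URL"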
I'm new to Akka Clusters; however, as I understand the documentation, I need to know at least one "seed node" to join an existing cluster.
So when using clusters with OpenShift, I would need to know whether the current gear is the first node (then I would create a new cluster) or whether there are already some other gears around, in which case I would need to know at least one of their IPs to join them.
Is this possible with the OpenShift cloud? (I'm using the DIY cartridge, so customizing the startup script wouldn't be a problem. However, I can't find any environment variable which provides the relevant data.)
DIY gears on OpenShift Online do not scale. And if you are spinning up separate applications for each of the nodes in your cluster, you are probably going to run into inter-gear communication issues. You might need to create your own Akka cartridge (http://docs.openshift.org/origin-m4/oo_cartridge_developers_guide.html); then you can set your own scaling options. You might check out this cartridge (https://github.com/smarterclayton/openshift-redis-cart), which supports scaling and might give you some ideas about how to implement yours.