How can I specify which instance I want my app to run on?
After gcloud preview app deploy, a new instance is automatically created for that new version. If I create a new instance myself (one with more CPU than those created automatically), it just stays idle.
In app.yaml:
resources:
  cpu: 8
  memory_gb: 64
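For context: with Managed VMs you don't point a version at a VM you created yourself; App Engine provisions the instances for each version, which is why the hand-made instance stays idle. Declaring the machine shape in app.yaml is the supported route. A minimal hedged sketch (runtime, scaling and the other settings are placeholders for your own service; only the resources block comes from the answer above):
runtime: python27
api_version: 1
threadsafe: true
vm: true

manual_scaling:
  instances: 1   # one VM for this version; adjust or switch to automatic_scaling

resources:
  cpu: 8         # vCPUs per instance
  memory_gb: 64  # RAM per instance, in GB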
Related
We have an application running under Kubernetes that is .NET 6.0. This application is a controller and starts up 10 worker processes. The issue we are experiencing is that these worker processes are frequently being killed by Kubernetes with exit code 137. From my research, that indicates they were killed because they were consuming too much memory.
To make this issue harder to troubleshoot, it only happens in our production environment after a period of time. Our production environment is also very locked down: the docker images all run with a read-only root filesystem, a non-root user and very low privileges. So to monitor the application we created a dashboard that reports various things; the two I will focus on are these pieces of data:
DotnetTotalMemory = GC.GetTotalMemory(false) / (1024 * 1024),  // managed (GC) heap size, in MB
WorkingSetSize = process.WorkingSet64 / (1024 * 1024),         // OS working set of the process, in MB
The interesting thing is that "DotnetTotalMemory" ranges anywhere from 200 MB to 400 MB, but "WorkingSetSize" starts out between 400 MB and 600 MB and at times jumps up to 1300 MB, even while "DotnetTotalMemory" is hovering at 200 MB.
Our quota is as follows:
resources:
  limits:
    cpu: '5'
    memory: 10Gi
  requests:
    cpu: 1250m
    memory: 5Gi
From what I have read, the limits amount is what the .NET runtime recognizes as the "available system memory", passed to it through a mechanism similar to docker run --memory=XX, correct?
I switched to Workstation GC and that seems to make them slightly more stable. Another thing I tried was setting the DOTNET_GCConserveMemory environment variable to 9; again, it seems to help some. But I can't get past the fact that the process seems to have 1100 MB+ of memory that is not managed by the GC. Is there a way for me to reduce the working set used by these processes?
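For reference, a hedged sketch of how those knobs could sit together under spec.template.spec of the Deployment (the container name and image are placeholders; only the resource quota and the two settings mentioned above come from the question; DOTNET_gcServer=0 is the environment-variable form of switching to Workstation GC):
containers:
- name: worker                        # placeholder name
  image: example.registry/worker:6.0  # placeholder image
  resources:
    requests:
      cpu: 1250m
      memory: 5Gi
    limits:
      cpu: '5'
      memory: 10Gi                    # the cgroup limit the .NET runtime sees as available memory
  env:
  - name: DOTNET_gcServer             # 0 = Workstation GC, 1 = Server GC
    value: "0"
  - name: DOTNET_GCConserveMemory     # 0-9; higher values trade throughput for a smaller heap
    value: "9"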
Redis in our production environment runs in cluster mode with 6 nodes: 3 master nodes and 3 slave nodes. When nodes are switched over due to network or other issues, the Redis client cannot automatically refresh the topology and reports this exception:
MOVED 5649 192.168.1.1:6379
The vertx-redis-client version I use is 4.2.1
My configuration looks like this:
RedisOptions options = new RedisOptions();
options.setType(RedisClientType.CLUSTER)
       .setRole(RedisRole.MASTER)
       .setUseReplicas(RedisReplicas.NEVER);
Every time I encounter this problem, I have to restart my application service to recover. Is there any way to make vertx-redis-client refresh the cluster nodes automatically?
Thank you ~
I'm trying to deploy this Docker GCE project from a deploy.yaml, but every time I update my git repository the server goes down because 1) the original instance is deleted and 2) the new instance hasn't finished starting up yet (or at least the web app on it hasn't).
What command should I use or how should I change this so that I have a canary deployment that destroys the old instances once a new one is up (I only have one instance running at a time)? I have no health checks on the instance group, only the load balancer.
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instance-groups', 'managed', 'rolling-action', 'replace', 'a-group', '--max-surge', '1']
Thanks for the help!
As John said, you can set the --max-unavailable and --max-surge flags to alter the behavior of your deployment during updates.
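For example, here is a hedged version of that build step with --max-unavailable=0, so the single old instance is only removed once its replacement is up (the group name and zone are placeholders; note that without an autohealing health check on the group, "up" only means the VM reached RUNNING, not that the web app is serving):
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'compute'
  - 'instance-groups'
  - 'managed'
  - 'rolling-action'
  - 'replace'
  - 'a-group'                 # placeholder instance group name
  - '--zone=us-central1-a'    # placeholder zone
  - '--max-surge=1'           # create the new instance first
  - '--max-unavailable=0'     # never take the existing instance down before the new one is ready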
Every time I make a new deployment of my app, my nodes start reporting nodeHasDiskPressure. After around 10 minutes or so the node goes back to a normal state. I found this SO answer regarding setting thresholds: DiskPressure crashing the node
... but I am not sure how to actually set these thresholds on Google Kubernetes Engine.
The kubelet options you mentioned can be added to your cluster's instance template.
Make a copy of the instance template that your cluster's instance group uses. After clicking Copy, and before saving, you can make changes to the template: add those flags under Instance template --> Custom metadata --> kube-env.
The flags are added in this way:
KUBELET_TEST_ARGS: --image-gc-high-threshold=[your value]
KUBELET_TEST_ARGS: --low-diskspace-threshold-mb=[your value]
KUBELET_TEST_ARGS: --image-gc-low-threshold=[your value]
Once you have set your values, save the instance template, then edit your cluster's instance group so it uses your custom template instead of the default one. When that is done, hit "Rolling restart/replace" on the instance group's main page in the console. This will recreate your cluster's instances with the new values.
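For illustration, a hedged sketch of how the finished kube-env entry might read (the numbers are placeholders, not recommendations; in the kube-env YAML the kubelet flags usually go into a single space-separated KUBELET_TEST_ARGS value):
KUBELET_TEST_ARGS: --image-gc-high-threshold=80 --image-gc-low-threshold=70 --low-diskspace-threshold-mb=2048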
Every time I deploy to Google's Managed VM service, the console automatically creates a duplicate instance. I am up to 15 instances running in parallel. I even tried using the command:
gcloud preview app deploy "...\app.yaml" --set-default
I tried doing some research and it looks like even deleting these duplicated instances can be a pain. Thoughts on how to stop this duplication?
You can deploy over the same version each time:
gcloud preview app deploy "...\app.yaml" --set-default --version=version-name
This will stop new VMs from being created on every deployment.