I want to know whether we can create multiple disks on a Kubernetes cluster. Currently I see the below configuration on my cluster:
"disks":[
{
"prefix":"abc-storage",
"count":1,
"size":10,
"disk-type": StandardSSD_LRS"
}
And now the ask is to - create fresh storage disk with 2 GB minimum size with labels and same storage class of existing "abc-storage" ?
Does it mean i will create similar one more disks block and change size to 2 or change in the existing block from 10 to 2?
Create with labels so that it is useful to refer under selector section ? I am not able to understand this point.
]
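For reference, the "one more disks block" option would look something like this (a sketch only; the new prefix is a placeholder, and where labels would go depends on the tooling's schema, so they are omitted here):
"disks": [
  {
    "prefix": "abc-storage",
    "count": 1,
    "size": 10,
    "disk-type": "StandardSSD_LRS"
  },
  {
    "prefix": "abc-storage-2",
    "count": 1,
    "size": 2,
    "disk-type": "StandardSSD_LRS"
  }
]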
I would like to scale a Kubernetes StatefulSet in EKS which serves static data. The data should be shared where possible, which means by availability zone.
Can I specify a volume claim template to make this possible? Or is there another mechanism?
I also need to initialize the volume (in an init-container) when the first node joins. Do I need to provide some external locking mechanism, rather than just checking whether the volume is empty?
If you want your pods to share static data, then you can:
Put the data into the Persistent Volume
Mount the same volume into the Pods (with ReadOnlyMany)
A few comments:
Here you can find the list of volumes, so you can choose the one that fits your needs
Since every pod serves the same data, you may use a Deployment instead of a StatefulSet; StatefulSets are for when each of your Pods is different
If you need the first pod to initialize the data, you can use an initContainer (and then use ReadWriteMany instead of ReadOnlyMany). Depending on what exactly you are trying to do, you might also be able to initialize the data first and only then start your Deployment (and Pods); then you would not need to lock anything. A sketch of the setup follows below.
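A minimal sketch, assuming an EFS-backed StorageClass named efs-sc (e.g. via the EFS CSI driver, since EBS volumes do not support multi-node access). The claim is ReadWriteMany so the data can be initialized once, and the Deployment mounts it read-only; all names and the image are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-data
spec:
  accessModes:
    - ReadWriteMany          # lets an init Job/initContainer populate the volume
  storageClassName: efs-sc   # hypothetical EFS-backed StorageClass
  resources:
    requests:
      storage: 2Gi           # EFS ignores the size, but the field is required
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      labels:
        app: static-server
    spec:
      containers:
        - name: server
          image: nginx           # placeholder image that serves the data
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
              readOnly: true     # every replica sees the same data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: static-data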
One of my applications running on ECS (with Fargate) needs more storage; the 20 GB of ephemeral storage is not sufficient for my application, so I am planning to use EFS:
volume {
  name = "efs-test-space"

  efs_volume_configuration {
    file_system_id     = aws_efs_file_system.efs_apache.id
    root_directory     = "/"
    transit_encryption = "ENABLED"

    authorization_config {
      access_point_id = aws_efs_access_point.efs-access-point.id
      iam             = "ENABLED"
    }
  }
}

# Note: the container path ("/home/user/efs/") is configured via the
# container definition's mountPoints, not in the volume block.
I can see it is mounted and my application is able to access the mounted folder, but for HA and parallelism my ECS task count is 6. Since I am using one EFS filesystem that is shared by all tasks, the problem I am stuck on is providing a unique mounted EFS file path for each task.
I added something like /home/user/efs/{random_id}, but I want to make this part of the task lifecycle; I mean this folder should get deleted when my task is stopped or destroyed.
So is there a way to mount EFS as a bind mount, or to enable deletion of the folder during the task destroy stage?
You can now increase your ephemeral storage size up to 200 GiB; all you need to do is set the ephemeralStorage parameter in the Fargate task definition:
"ephemeralStorage": {
"sizeInGiB": 100
}
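Since the task in the question is defined in Terraform, here is a sketch of the equivalent: the ephemeral_storage block on aws_ecs_task_definition, supported in recent AWS provider versions (the resource name, image, and sizing below are placeholders):

resource "aws_ecs_task_definition" "example" {
  family                   = "example"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 4096

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "myorg/app:latest" # placeholder image
      essential = true
    }
  ])

  # Raises the task's ephemeral storage beyond the 20 GiB default.
  ephemeral_storage {
    size_in_gib = 100 # Fargate accepts 21 to 200 GiB
  }
}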
While this could be achieved in theory, there are a lot of moving parts to it, because the life cycle of EFS (and Access Points) is decoupled from the life cycle of tasks. This means you would need to create Access Points out of band and on the fly, AND these Access Points (and their data) are not automatically deleted when you tear down your tasks. The Fargate/EFS integration did not have this as a primary use case. The primary use cases were more around sharing the same data among different tasks (which is what you are observing, but it doesn't serve your use case!) in addition to providing persistence for a single task.
What would solve your problem easily is a new feature the Fargate team is working on right now that will allow you to expand the local ephemeral storage as a property of the task. I can't say more about the timing, but the feature is actively being developed, so you may want to consider waiting for it rather than building a complex workflow to achieve the same result.
I have a requirement to create 5 container definitions which share common environment variables as secrets, but there are around 50 of them. So instead of duplicating these 50 secrets, is there a way I can define them once as a single resource and refer to them in all container definitions?
You could use AWS Systems Manager Parameter Store. It's intended for exactly this kind of situation: sharing config values between multiple containers, Lambda functions, etc.
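As a sketch, assuming Terraform is in use (all names, ARNs, and images below are placeholders): define the secret references once in a local and splice them into every container definition; the task execution role must also be allowed to read the parameters:

locals {
  # Shared secret references, defined once; the parameter names are hypothetical.
  common_secrets = [
    for n in ["DB_PASSWORD", "API_KEY"] : {
      name      = n
      valueFrom = "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/${n}"
    }
  ]
}

resource "aws_ecs_task_definition" "app" {
  family = "app"

  container_definitions = jsonencode([
    {
      name      = "web"
      image     = "nginx:latest" # placeholder image
      essential = true
      memory    = 512
      secrets   = local.common_secrets # reused, not duplicated
    },
    {
      name      = "worker"
      image     = "myorg/worker:latest" # placeholder image
      essential = true
      memory    = 512
      secrets   = local.common_secrets
    }
  ])
}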
I am trying to get the total and free disk space on my Kubernetes VM so I can display the percentage of used space. I tried various metrics that include "filesystem" in the name, but none of them displayed the correct total disk size. Which one should be used?
Here is a list of metrics I tried:
node_filesystem_size_bytes
node_filesystem_avail_bytes
node:node_filesystem_usage:
node:node_filesystem_avail:
node_filesystem_files
node_filesystem_files_free
node_filesystem_free_bytes
node_filesystem_readonly
According to my Grafana dashboard, the following query works nicely for alerting on disk space:
100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})
The formula gives the percentage of used space on the pointed disk. Make sure you include the mountpoint and fstype labels in the metrics.
FS utilization can be calculated as:
100 - (100 * ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} ) / (node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"}) ))
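For alerting, here is a sketch of a Prometheus rule built on this query (the group name, threshold, and severity are placeholders):

groups:
  - name: node-filesystem
    rules:
      - alert: RootFilesystemAlmostFull
        # Fires when the root filesystem is more than 90% used.
        expr: |
          100 - (100 * (node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"}
            / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})) > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: Root filesystem is more than 90% full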
According to kubernetes-mixin you can use node:node_filesystem_usage:, but if this metric doesn't show the correct values, please ensure that the device label is set correctly.
I highly recommend using the Prometheus graph or the Grafana Explore tab to check all available metrics.
I have a Service Fabric .NET application that needs more temporary storage than is available on the instance. The current instance allows for up to 4 data disks. I see how to allocate these from the scale set, but not how to specify mounts or permissions, or even how to properly access them from the file API. I was surprised to find very little in the way of documentation on this, so any help is appreciated.
Once you are on the scale set instance, you can attach a new disk to the VM using the instructions here:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/attach-disk-ps
After that you can access them using standard IO, e.g. System.IO.File.ReadAllText.
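For example, a minimal C# sketch, assuming the new data disk was initialized and assigned the drive letter F: (the letter and paths are placeholders):

using System;
using System.IO;

class TempStorageDemo
{
    static void Main()
    {
        // Directory on the newly attached data disk; the drive letter
        // depends on how the disk was initialized, so adjust as needed.
        var dir = @"F:\scratch";
        Directory.CreateDirectory(dir);

        var file = Path.Combine(dir, "cache.txt");
        File.WriteAllText(file, "temporary data");

        // Read it back with the same standard IO API mentioned above.
        Console.WriteLine(File.ReadAllText(file));
    }
}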
However, it is not recommended to change the node type for the primary node type; for a non-primary node type it is better to create a new VM scale set and gradually move services from the old scale set to the new one by updating placement properties.