Kubernetes API custom image metadata - kubernetes

I am trying to use the Kubernetes API to read metadata from container images via annotations. The metadata applies to every instance of the respective image and is needed in order to run any resulting container properly. According to this SO question, it is not possible to read Docker image labels from the Kubernetes API directly.
My next thought was to use custom annotations added to the image manifest, although this seems like a pretty hacky solution for such a "simple" task. In any case, if I add the annotations to the manifest using Docker, I see no way to read them from the Kubernetes API.
I think I am on the completely wrong track here. This seems to be a rather simple task that other people have likely implemented already, yet I cannot find any further information about it. Is it really that hard to read image metadata via Kubernetes before deploying a container of that image?
Thanks in advance for any help!
Edit:
The reason I am asking is that I want to grant containers of specific images access to specific serial USB devices (e.g. FTDI232) on diverse host systems. Since I have no idea which path (e.g. /dev/ttyUSB0) will be assigned to a USB device, I wrote a program that monitors USB devices and, whenever an appropriate device is or gets plugged in, creates the container and passes it the corresponding path. From inside the container I want to access the serial device via a static, non-changing path (e.g. /dev/FTDI232).
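For illustration, the pod my program creates could map the discovered host path to the stable in-container path roughly like this (a minimal sketch; the names, image, and paths are placeholders, and the container runs privileged here so it can actually open the device node):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ftdi-reader                       # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/ftdi-app  # hypothetical image
    securityContext:
      privileged: true                    # needed so the container may open the device node
    volumeMounts:
    - name: serial-dev
      mountPath: /dev/FTDI232             # stable path seen inside the container
  volumes:
  - name: serial-dev
    hostPath:
      path: /dev/ttyUSB0                  # actual path discovered on the host
      type: CharDevice
EOF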

Yes. The K8s API is limited when it comes to this; I believe the abstractions for container image metadata sit at a lower level and were probably left out for a reason. You can always look at the CRI spec to see what's supported (note that the doc is out of date, so you might have to look at the code).
If the end goal is to use Kubernetes to run your workloads, it sounds like the more feasible route here is to write a script that reads the image manifest outside Kubernetes, creates the manifest files you need to run your workloads (based on that metadata), and then finally applies them to your cluster.
If you are using a common container image registry, you could also write something that pulls images from that registry just to pick up metadata and metadata changes.
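As a rough sketch of that route, a tool like skopeo can read an image's labels (or its raw manifest, including OCI annotations) straight from the registry without pulling the whole image; the image name below is just a placeholder:

# Inspect the image config, including labels, without a full pull
skopeo inspect docker://registry.example.com/my-device-image:latest | jq '.Labels'

# Or fetch the raw manifest if the metadata lives in OCI annotations
skopeo inspect --raw docker://registry.example.com/my-device-image:latest | jq '.annotations'

# The output can then drive a templating step that renders the pod manifest
# before it is applied with kubectl apply -f.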

Related

How to fetch already rotated logs in Kubernetes?

Currently I am trying to fetch already-rotated logs on the node using the --since-time parameter.
Can anybody suggest the command/mechanism to fetch already-rotated logs within the Kubernetes architecture?
You can't. Kubernetes does not store logs for you; it just provides an API to access what's on disk. For long-term storage, look at things like Loki, Elasticsearch, Splunk, SumoLogic, etc.
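If you only need them short-term, the rotated files are still on the node itself. As a sketch (paths and file naming can differ by container runtime and kubelet log-rotation settings), on the node they typically sit next to the current log:

# On the node (e.g. via SSH): current and rotated logs for one container
ls /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/
# e.g. 0.log  0.log.20240101-000000.gz

# Read a rotated, compressed log file
zcat /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log.20240101-000000.gz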

Import a custom Linux image for POWER-IAAS part of the IBM Cloud?

I am trying to import a cloud-enabled Debian Linux image for the Power architecture to run on the IBM public cloud, which supports this architecture.
I think I am following the instructions, but the behavior I am seeing is that, at image-import time, after filling in all the relevant information and hitting the "import" button, the GUI just exits silently, with no apparent effect and no reported error.
I am reasonably experienced doing simple IaaS stuff on AWS, but am new to the IBM Cloud and have not deployed a custom image on any cloud provider. I'm aware of "cloud-init" and have a reasonable general knowledge of what problem it solves (mapping cloud-provider metadata to config entries in the resulting VM at start time), but not a great deal about how it actually works.
What I have done is:
Got an IBM cloud account, and upgraded out of the free tier, for access to Power.
Activated the Power Systems Virtual Server service.
Activated the Cloud Object Storage service.
Created a bucket in the COS.
Created an HMAC-enabled service credential for this bucket.
Uploaded my image, in .tar.gz format, to the bucket (via the CLI; it's too big to upload via the GUI).
The image is from here -- that page is a bit vague on which cloud providers it may be expected to work with, but AFAIK the IBM Cloud is the only public cloud supporting Power?
Then, from the Power Systems Virtual Server service page, I clicked the "Boot Images" item on the left to show the (empty) list, then "Import Image" at the top of the list, and filled in the form. I have answers for all of the entries -- I can make up a new name, and I know the region of my COS, the image file name (the "key", in object-storage parlance), the bucket name, and the access and secret keys, which are available from the credential description in the COS panel.
Then the "import" button lights up, and I click it, and the import dialog disappears, no error is reported, and no image is imported.
There are various things that might be wrong that I'm not sure how to investigate.
It's possible the credential is not connected to the bucket in the right way, I didn't really understand the documentation about that, but in the GUI it looks like it's in the right scope and has the right data in it.
It's also possible that only certain types of images are allowed, and my image is failing some kind of validation check, but in that case I would expect an error message?
I have found the image-importing instructions for the non-Power IaaS, but they seem to be out of scope. I have also found some docs on how to prepare a custom image, but they also seem to be non-Power IaaS.
What's the right way to do this?
Edit to add: Also tried doing this via the CLI ("ibmcloud pi image-import"), where it gets a time-out, apparently on the endpoint that's supposed to receive the image. Also, the command-line tool has an --os-type flag that apparently only takes [aix | sles | redhat | ibmi] -- my first attempt used raw, which is an error.
This is perhaps additional evidence that what I want to do is actually impossible?
PowerVS supports only .ova images. Those are not the same as the ones supported by VMware, for instance.
You can get one from here: https://public.dhe.ibm.com/software/server/powervs/images/
Or you can use the images available in the regional pool of images:
ibmcloud pi image-list-catalog
Once you have your first VM up and running you can use https://github.com/ppc64le-cloud/pvsadm to create a new .ova. Today the tool only supports RHEL, CentOS and CoreOS.
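As a rough example of that workflow (treat the flag names as assumptions -- they may differ between pvsadm releases, so check pvsadm image qcow2ova --help):

# Convert a qcow2 cloud image into a PowerVS-compatible OVA
pvsadm image qcow2ova \
  --image-name my-rhel-image \
  --image-url  https://example.com/rhel-ppc64le.qcow2 \
  --image-dist rhel

# The resulting .ova can then be uploaded to COS and imported as a boot image.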
If you want to easily play with PowerVS you can also use https://github.com/rpsene/powervs-actions.

Kubernetes dynamic pod provisioning

I have an app I'm building on Kubernetes which needs to dynamically add and remove worker pods (which can't be known at initial deployment time). These pods are not interchangeable (so increasing the replica count wouldn't make sense). My question is: what is the right way to do this?
One possible solution would be to call the Kubernetes API to dynamically start and stop these worker pods as needed. However, I've heard that this might be a bad way to go since, if those dynamically-created pods are not in a replica set or deployment, then if they die, nothing is around to restart them (I have not yet verified for certain if this is true or not).
Alternatively, I could use the Kubernetes API to dynamically spin up a higher-level abstraction (like a replica set or deployment). Is this a better solution? Or is there some other more preferable alternative?
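For concreteness, the second approach would boil down to creating something like this per worker from my code (the names and image are obviously placeholders):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-alpha                 # one Deployment per non-interchangeable worker
spec:
  replicas: 1
  selector:
    matchLabels:
      worker: alpha
  template:
    metadata:
      labels:
        worker: alpha
    spec:
      containers:
      - name: worker
        image: registry.example.com/worker:latest   # placeholder image
EOF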
If I understand you correctly, you need ConfigMaps.
From the official documentation:
The ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to Secrets, but provides a means of working with strings that don’t contain sensitive information. Users and system components alike can store configuration data in ConfigMap.
Here you can find some examples of how to set it up.
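For instance, a minimal ConfigMap could look like this (the name and keys are made up):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: worker-config        # hypothetical name
data:
  WORKER_ID: "alpha"
  LOG_LEVEL: "debug"
EOF

# A pod can then consume it, e.g. as environment variables:
#   envFrom:
#   - configMapRef:
#       name: worker-config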
Please try it and let me know if that helped.

Azure Service Fabric deployment

I am deploying an API to Service Fabric nodes, and by default it goes to the D: drive (the temp drive). I would like to change this default behavior and deploy it to another drive, or the C: drive, to avoid application loss in case of VMSS deallocation. How can I do this?
You say you want to do this to avoid application loss, however:
SF already replicates your application package to multiple machines when you Register the application package in the Image Store (part of the provisioning/deployment process)
Generally, if you want your application code and config to be safe, keeping it somewhere outside the cluster (wherever you're deploying from, or in blob storage) is usually a better answer.
SF doesn't really support deallocating the VMs out from under it and then bringing them back later. See the FAQ answer here.
So overall I'm not sure that what you're trying to do is the right solution to your real problem, and it looks like you're heading into multiple unsupported scenarios, which usually means there's some misunderstanding.
That all said, of course, it's configurable.
Within a node type, you can specify the dataPath (example here). However, it's not recommended that you change this.
"settings": {
"dataPath": "D:\\\\SvcFab",
},

Apache Marathon app and container relation

I would like to understand the relation between a Marathon app and a container. Is it really the case that a Marathon app definition can contain only a single container definition (1:1)? As far as I understand the Marathon REST API (link below), the answer is yes.
https://mesosphere.github.io/marathon/docs/rest-api.html#post-/v2/apps
But then, are we supposed to use App Groups to define complex applications that are built from more than a single container? I have looked at Kubernetes, and the idea of a "pod" there seems very convenient for building applications composed of multiple containers, where the containers in the same pod share a single network stack and scaling happens at the pod level.
Can we say that a Kubernetes pod corresponds to a Marathon App Group? Or should I not try to find such similarities and instead get a better understanding of the Marathon philosophy?
Thank you!
Regards,
Laszlo
An app in Marathon specifies how to spawn tasks of that application. While you can specify how many tasks you want to spawn, every single one of these tasks corresponds to only one command or container.
To help you, I would need to understand more about your use case.
Groups can be used to organize related apps including dependencies. The tasks of the apps will not necessarily be co-located on the same host.
If you need co-location, you either need to create a container with multiple processes or use constraints to specify directly on which host you want to run the tasks (see the example below).
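For example, pinning an app's tasks to a particular agent with a constraint could look like this against the REST API (the Marathon URL, app id, image, and hostname are placeholders):

curl -X POST http://marathon.example.com:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "/my-app",
    "instances": 2,
    "cpus": 0.5,
    "mem": 128,
    "container": {
      "type": "DOCKER",
      "docker": { "image": "registry.example.com/my-app:latest" }
    },
    "constraints": [["hostname", "CLUSTER", "agent-1.example.com"]]
  }'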