libvirt API - overlays on a backing image

Is there a way to create two (or more) different overlays from one base image using the libvirt API?
FedoraBase.img <-- FedoraGuest1.qcow2
               <-- FedoraGuest2.qcow2
So FedoraBase.img is the backing image for both the Guest1 and Guest2 overlays.
Using the qemu-img tool you can create them like this:
qemu-img create -b /export/vmimages/FedoraBase.img -f qcow2 \
/export/vmimages/FedoraGuest1.qcow2
qemu-img create -b /export/vmimages/FedoraBase.img -f qcow2 \
/export/vmimages/FedoraGuest2.qcow2
But I cannot find a libvirt API that does the same.

This task requires the libvirt storage pool APIs - in particular, qcow2 images can be created with the virStorageVolCreateXML() API. This API accepts an XML document that describes the desired configuration, including the backing file, which should let you achieve the layering you describe.
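For illustration, a minimal sketch of the volume XML and the equivalent virsh command (virsh vol-create calls virStorageVolCreateXML() underneath); the pool name, the capacity, and the raw format of the base image are assumptions for this example:

cat > FedoraGuest1.xml <<'EOF'
<volume>
  <name>FedoraGuest1.qcow2</name>
  <capacity unit='GiB'>20</capacity>
  <target>
    <format type='qcow2'/>
  </target>
  <!-- the overlay's backing file; the format must match the base image -->
  <backingStore>
    <path>/export/vmimages/FedoraBase.img</path>
    <format type='raw'/>
  </backingStore>
</volume>
EOF
virsh vol-create vmimages FedoraGuest1.xml

Repeating this with a second XML file for FedoraGuest2.qcow2 gives you two overlays sharing the same base.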

Related

How can I play HLS streams from Wasabi in Ant Media Server?

I want to play my HLS streams from Wasabi. I enabled the S3 options in the Ant Media Server dashboard, but it seems that Ant Media Server uploads the HLS files only after the stream ends. How can I play the HLS chunks from Wasabi?
s3fs 1.88 and later buffers data locally and flushes according to the -o max_dirty_data flag, defaulting to 5 GB. If you reduce this value you should see updates more often. Note that these flushes require server-side copies and may do more IO than you anticipate.
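For example, a mount that lowers the flush threshold to 64 MB (the bucket name, mount point, and threshold value are placeholders; other mount options are omitted for brevity):

s3fs mybucket /path/to/streams \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o max_dirty_data=64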
We recommend s3fs (S3 via FUSE) for instant transfer and deletion of your HLS files to S3. You do not need to activate S3 in the panel. If the application's streams folder in the Ant Media directory is mounted to a folder under S3, it syncs to S3 automatically.
I briefly list the steps below:
Install s3fs
sudo apt install s3fs
You need to add the access key and secret key from your Wasabi account.
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
To mount S3, replace mybucket below with your bucket in Wasabi, set the folder you will mount, and set the endpoint in the url option, for example: https://s3.us-west-1.wasabisys.com
Replace us-west-1 with your own region; you can find the Region parameter in the bucket list.
sudo s3fs -o dbglevel=info -o curldbg -o allow_other -o use_cache=/tmp/s3-cache \
    mybucket /usr/local/antmedia/webapps/LiveApp/streams/ \
    -o url=https://s3.us-west-1.wasabisys.com -o use_path_request_style \
    -o passwd_file=${HOME}/.passwd-s3fs
Check whether the mount succeeded: when you run df, you should see a line similar to the one below.
s3fs 274877906944 0 274877906944 0% /usr/local/antmedia/webapps/LiveApp/streams

CloudFormation nested stacks: best way to version control templates using TemplateURL

We're creating a microservices project to be deployed in multiple environments (dev, qa, stg, prd), and we plan to use CloudFormation templates with nested stacks for the resources shared between services.
The thing is that when using nested stacks you need to specify the TemplateURL of the nested resource, and this is a static URL pointing at an S3 bucket that changes every time you update the template (i.e., upload a new template with some changes).
So the question is: what is the best way to use a version control tool like Git to keep track of changes to a nested template when every upload to S3 gives you a new URL?
The cloudformation package command in the AWS Command Line Interface uploads local artifacts (including the local template file referenced by the TemplateURL property of an AWS::CloudFormation::Stack resource) to an S3 bucket and outputs a transformed CloudFormation template that references the uploaded artifacts.
Using this command, the best way to track changes is to commit both the base template and the nested-stack templates to Git, then run cloudformation package as an intermediate processing step in your deploy script, e.g., followed by cloudformation deploy:
S3_BUCKET=my_s3_bucket
STACK=my_stack_name
TEMPLATE=base_template.yml
OUTPUT_TEMPLATE=$(mktemp)

aws cloudformation package \
    --template-file $TEMPLATE \
    --s3-bucket $S3_BUCKET \
    --output-template-file $OUTPUT_TEMPLATE

aws cloudformation deploy \
    --template-file $OUTPUT_TEMPLATE \
    --stack-name $STACK
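For reference, before packaging, the nested-stack resource inside base_template.yml can point at the child template by a local path, which cloudformation package rewrites into the S3 URL of the uploaded copy. The resource and file names in this fragment are only illustrative:

  SharedResourcesStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      # Local path, replaced with an S3 URL by "aws cloudformation package"
      TemplateURL: ./shared-resources.yml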

Passing a long configuration file to Kubernetes

I like the Kubernetes working methodology: use a self-contained image and pass the configuration in a ConfigMap, mounted as a volume.
This worked great until I tried to do the same with a Liquibase container. The SQL is very long (~1.5K lines), and Kubernetes rejects it as too long.
Error from Kubernetes:
The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters
I thought of passing the .sql files via a hostPath, but as I understand it, the hostPath's content is probably not going to be there on every node.
Is there any other way to pass configuration from the K8s directory to pods? Thanks.
The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the last-applied-configuration annotation that kubectl apply automatically creates on each apply. If you use kubectl create -f foo.yaml instead of kubectl apply -f foo.yaml, it should work.
Please note that in doing this you will lose the ability to use kubectl diff and do incremental updates (without replacing the whole object) with kubectl apply.
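For example (the manifest and SQL file names here are illustrative; the ConfigMap name comes from the error above):

kubectl create -f liquibase-configmap.yaml
# or build the ConfigMap directly from the SQL file instead of writing a manifest
kubectl create configmap liquibase-test-content --from-file=changelog.sql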
Since Kubernetes 1.18 you can use server-side apply to circumvent the problem:
kubectl apply --server-side=true -f foo.yml
where --server-side=true runs the apply operation on the server instead of the client.
This will properly surface conflicts with other actors, including client-side apply, and fail like this:
Apply failed with 4 conflicts: conflicts with "kubectl-client-side-apply" using apiextensions.k8s.io/v1:
- .status.conditions
- .status.storedVersions
- .status.acceptedNames.kind
- .status.acceptedNames.plural
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts
If the changes are intended, you can simply use the first option:
kubectl apply --server-side=true --force-conflicts -f foo.yml
You can use an init container for this. Essentially, put the .sql files on GitHub or S3, or really any location you can read from, and populate a directory with them. The semantics of init containers guarantee that the Liquibase container will only be launched after the config files have been downloaded.
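A minimal sketch of that pattern (the images, URL, and mount paths are illustrative assumptions, not a documented Liquibase layout):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liquibase
spec:
  initContainers:
  - name: fetch-sql
    image: curlimages/curl
    # download the changelog into the shared volume before the main container starts
    command: ["sh", "-c", "curl -fsSL https://example.com/changelog.sql -o /sql/changelog.sql"]
    volumeMounts:
    - name: sql
      mountPath: /sql
  containers:
  - name: liquibase
    image: liquibase/liquibase
    volumeMounts:
    - name: sql
      mountPath: /liquibase/changelog
  volumes:
  - name: sql
    emptyDir: {}
EOF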

Composing objects with > 1024 parts without download/upload

Is there a way to either clear the compose count or copy an object inside Cloud Storage, so as to remove the compose count without downloading and uploading it again?
With a 5 TB object size limit, I'd need 5 GB pieces composed together under the 1024-component compose limit -- are 5 GB uploads even possible? They are certainly not easy to work with.
The compose limit should be higher (1 million), or I should be able to copy an object within Cloud Storage to get rid of the existing compose count.
There is no longer a restriction on the component count. Composing > 1024 parts is allowed.
https://cloud.google.com/storage/docs/composite-objects
5 GB uploads are definitely possible. You can use a tool such as gsutil to perform them easily.
There's not an easy way to reduce the existing component count, but it is possible using the Rewrite API. Per the documentation: "When you rewrite a composite object where the source and destination are different locations and/or storage classes, the result will be a composite object containing a single component."
So you can create a bucket of a different storage class, rewrite the object into it, then rewrite it back to your original bucket and delete the copy. gsutil uses the Rewrite API under the hood, so you can do all of this with gsutil cp:
$ gsutil mb -c DRA gs://dra-bucket
$ gsutil cp gs://original-bucket/composite-obj gs://dra-bucket/composite-obj
$ gsutil cp gs://dra-bucket/composite-obj gs://original-bucket/composite-obj
$ gsutil rm gs://dra-bucket/composite-obj

Setting the Durable Reduced Availability (DRA) attribute for a bucket using Storage Console

When manually creating a new Cloud Storage bucket using the web-based storage console (https://console.developers.google.com/), is there a way to specify the DRA attribute? From the documentation, it appears that the only way to create buckets with that attribute is to use curl, gsutil, or some other script, but not the console.
There is currently no way to do this.
At present, the storage console provides only a subset of the Cloud Storage API, so you'll need to use one of the tools you mentioned to create a DRA bucket.
For completeness, it's pretty easy to do this using gsutil (documentation at https://developers.google.com/storage/docs/gsutil/commands/mb):
gsutil mb -c DRA gs://some-bucket