How to ascertain if a Mercurial patch is already applied

I am using Mercurial Queues to apply patches. I have the .patch files and the series file. I copy those into the .hg/patches directory. Then I run the following:
hg qpush --all
Now, say patch A fails. I investigate and find that patch A was already applied. So I remove patch A from the series file and run the process again.
My question is: how can I safely know that the reason my patch has failed to apply is that it is already applied? Or is there a way to know beforehand that the patch is already applied?
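One heuristic that can help, independent of MQ: if a patch is already present in the working copy, it should apply cleanly in reverse and fail to apply forward. You can check that with a GNU patch dry run, which modifies nothing; the patch name A.patch is just an example, and you may need to adjust -p to match the paths in your patch files:
$ cd /path/to/repo
$ patch -p1 --dry-run --reverse < .hg/patches/A.patch && echo "looks already applied"
$ patch -p1 --dry-run < .hg/patches/A.patch   ## succeeds only if the patch still applies cleanly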


How to revert pending `kops` changes?

What I have done:
$ kops edit ig nodes ## added some nodes by modifying min/max nodes parameters
$ ... ## did some other modifications
$ kops update cluster ## saw the pending changes before applying them to the cluster
In the kops update cluster preview I saw some undesired changes and wanted to revert all of them and start over.
But given that the pending changes live in a state file in S3, I was not able to revert them easily.
I realized I should have had S3 versioning enabled to revert the changes quickly and easily, but I didn't, and needed another way of cancelling the pending changes.
Do you know how to achieve that?
PS. I googled for "kops drop pending changes" and similar; I browsed the kops manual page; I tried accessing kops from multiple accounts before I realized that the changes are shared via S3... Nothing helped.
UPDATE:
See https://kops.sigs.k8s.io/tutorial/working-with-instancegroups/, which says:
To preview the change:
kops update cluster
...
Will modify resources:
  *awstasks.LaunchTemplate LaunchTemplate/mycluster.mydomain.com
    InstanceType t2.medium -> t2.large
Presuming you're happy with the change, go ahead and apply it: kops update cluster --yes
So, what I want to do is revert the not-yet-applied changes if I'm not happy with them. Is there any way other than running kops edit ig nodes again and manually changing t2.large back to t2.medium, repeating that for every change I want to revert?
As the production cluster recommendations state, storing the cluster configuration in version control is recommended precisely to avoid these scenarios.
But given that you are already facing a challenge now ...
It depends on what is introducing the change. If it is a change in your cluster config alone, reverting those changes would be the way to go.
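If the unwanted edits were made with kops edit and you have the previous spec in a file (for example because it is kept in version control, as above), one way to drop them is to overwrite the copy in the S3 state store with kops replace and preview again. A rough sketch, assuming the instance group spec is tracked in git as nodes-ig.yaml (a hypothetical file name):
$ git checkout -- nodes-ig.yaml   ## restore the last committed, pre-edit version of the spec
$ kops replace -f nodes-ig.yaml   ## overwrite the edited spec in the S3 state store
$ kops update cluster             ## preview again; the unwanted change should be gone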
If the changes come from a kOps upgrade, there is no real way of reverting. kOps should never be downgraded, and the changes it tries to make are there for good reasons.
If you can share what the unwanted changes are, I may be able to determine how to revert them.

A very simple Kubernetes scheduler question about a custom scheduler

Just a quick question: do you HAVE to remove or move the default kube-scheduler.yaml from the folder? Can't I just make a new yaml (with the custom scheduler) and run that as the pod?
Kubernetes isn't file-based. It doesn't care about file locations. You use the files only to apply the configuration onto the cluster via kubectl, kubeadm, or similar CLI tools or their libraries. The yaml is only the content you manually put into it.
You need to know/decide what your folder structure and your execution/configuration flow are.
Also, you can simply use a temporary file; the naming doesn't matter either, and it's alright to replace the content of a yaml file. Preferably though, have some kind of history record in place, such as a manual note, a comment, or source control such as git, so you know what was changed and why.
So yes, you can change the scheduler yaml, or you can create a new file and reorganize things however you like, but you will need to adjust your flow to that - change paths, etc.
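For reference, a common way to run a custom scheduler alongside the default one is to deploy it as an ordinary Deployment or Pod under its own name and then opt individual Pods into it via spec.schedulerName. A minimal sketch, where the scheduler name and image are only placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom
spec:
  schedulerName: my-custom-scheduler   # must match the name your custom scheduler registers under
  containers:
    - name: app
      image: nginx:1.25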

Variable number of input artifacts into a step

I have a diamond style workflow where a single step A starts a variable number of analysis jobs B to X using withParam:. The number of jobs is based on dynamic information and unknown until the first step runs. This all works well, except that I also want a single aggregator job Y to run over the output of all of those analysis jobs:
     B
    / \
   / C \
  / / \ \
 A-->D-->Y
  \  .  /
   \ . /
    \./
     X
Each of the analysis jobs B-X writes artifacts, and Y needs as input all of them. I can't figure out how to specify the input for Y. Is this possible? I've tried passing in a JSON array of the artifact keys, but the pod gets stuck on pod initialisation. I can't find any examples on how to do this.
A creates several artifacts which are consumed by B-X (one per job as part of the withParam:) so I know my artifact repository is set up correctly.
Each of the jobs B-X require a lot of CPU so will be running on different nodes, so I don't think a shared volume will work (although I don't know much about sharing volumes across different nodes).
I posted the question as a GitHub issue:
https://github.com/argoproj/argo/issues/4120
The solution is to have every analysis job write its output to an artifact path under a directory specific to the workflow run (i.e. they all write into the same subdirectory). You then specify that directory as the input key for Y, and Argo will unpack all the previous results under Y's input path. You can use {{workflow.name}} to create unique paths.
This does mean you're restricted to a specific directory structure on your artifact repository, but for me that was a small price to pay.
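A rough sketch of what the two templates might look like, assuming a default S3 artifact repository is configured (so key-only artifact locations work); the template names, images, commands, and paths are made up for illustration:
  - name: analyze                      # one instance per analysis job B-X, fanned out via withParam
    inputs:
      parameters:
        - name: job-id
    outputs:
      artifacts:
        - name: result
          path: /tmp/result.json
          s3:
            key: "{{workflow.name}}/results/{{inputs.parameters.job-id}}.json"
    container:
      image: my-analysis:latest
      command: [run-analysis]
      args: ["--out", "/tmp/result.json", "{{inputs.parameters.job-id}}"]
  - name: aggregate                    # step Y: pulls everything written under the shared prefix
    inputs:
      artifacts:
        - name: all-results
          path: /tmp/results
          s3:
            key: "{{workflow.name}}/results"
    container:
      image: my-aggregator:latest
      command: [aggregate]
      args: ["/tmp/results"]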
For a full working solution see sarabala1979's answer on the GitHub issue.

Why prefix kubernetes manifest files with numbers?

I'm trying to deploy Node.js code to a Kubernetes cluster, and I see in my reference (provided by the maintainer of the cluster) that the yaml files are all prefixed with numbers:
00-service.yaml
10-deployment.yaml
etc.
I don't think this naming convention is specified by kubectl, but I found another example of it online: https://imti.co/kibana-kubernetes/ (though the numbering scheme isn't the same).
Is this a Kubernetes thing? A file naming convention? Is it to keep files ordered in a folder?
This is to handle the resource creation order. There's an open issue in Kubernetes about it:
https://github.com/kubernetes/kubernetes/issues/16448#issue-113878195
tl;dr kubectl apply -f k8s/* should handle the order but it does not.
However, apart from the namespace, I cannot imagine a case where the order would matter. Every relation except the namespace is handled by label selectors, so it fixes itself once all resources are deployed. You can just use 00-namespace.yaml and leave everything else without prefixes, or skip prefixes altogether unless you really hit the issue (I never have).
When you execute kubectl apply -f on a directory (or a glob such as k8s/*), the files are applied in alphabetical order. Prefixing files with an increasing number lets you control the order in which they are applied. But in nearly all cases the order shouldn't matter.
Sequencing also helps readability, user-friendliness, and not least maintainability: looking at the file names, one can tell in which order the resources need to be deployed. For example, a Deployment that consumes a ConfigMap would fail to start if it is applied before the ConfigMap is created.
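To make the alphabetical ordering concrete: with a layout like the one below (file names are hypothetical), passing the whole directory to kubectl applies the namespace first, then the ConfigMap, then the Deployment.
k8s/
  00-namespace.yaml
  10-configmap.yaml
  20-deployment.yaml
$ kubectl apply -f k8s/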

Passing long configuration file to Kubernetes

I like the working methodology of Kubernetes: use a self-contained image and pass the configuration in a ConfigMap, mounted as a volume.
Now, this worked great until I tried to do the same thing with a Liquibase container. The SQL is very long (~1.5K lines), and Kubernetes rejects it as too long.
Error from Kubernetes:
The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters
I thought of passing the .sql files via a hostPath, but as I understand it, the hostPath's content is probably not going to be there.
Is there any other way to pass configuration from the K8s directory to pods? Thanks.
The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the last-applied-configuration annotation that kubectl apply automatically creates on each apply. If you use kubectl create -f foo.yaml instead of kubectl apply -f foo.yaml, it should work.
Please note that in doing this you will lose the ability to use kubectl diff and do incremental updates (without replacing the whole object) with kubectl apply.
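A minimal sketch of that workflow, assuming the ConfigMap manifest lives in a file called liquibase-configmap.yaml (a made-up name):
$ kubectl create -f liquibase-configmap.yaml    ## first creation; no last-applied annotation is stored
$ kubectl replace -f liquibase-configmap.yaml   ## later updates replace the whole object instead of patching it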
Since Kubernetes 1.18 you can use server-side apply to circumvent the problem:
kubectl apply --server-side=true -f foo.yml
where --server-side=true runs the apply operation on the server instead of the client.
This will properly detect conflicts with other field managers, including client-side apply, and fail with a message like:
Apply failed with 4 conflicts: conflicts with "kubectl-client-side-apply" using apiextensions.k8s.io/v1:
- .status.conditions
- .status.storedVersions
- .status.acceptedNames.kind
- .status.acceptedNames.plural
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts
If the changes are intended, you can simply use the first option:
kubectl apply --server-side=true --force-conflicts -f foo.yml
You can use an init container for this. Essentially, put the .sql files on GitHub or S3 or really any location you can read from, and have an init container populate a directory with them. The semantics of init containers guarantee that the Liquibase container will only be launched after the config files have been downloaded.
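A rough sketch of that pattern, assuming the changelog can be fetched over HTTPS and Liquibase reads it from /liquibase/changelog; the images, URL, paths, and Liquibase arguments are illustrative, and the database connection flags are omitted:
apiVersion: v1
kind: Pod
metadata:
  name: liquibase
spec:
  initContainers:
    - name: fetch-sql
      image: curlimages/curl:8.7.1           # any image with curl or wget would do
      command: ["sh", "-c", "curl -fsSL https://example.com/changelog.sql -o /work/changelog.sql"]
      volumeMounts:
        - name: changelog
          mountPath: /work
  containers:
    - name: liquibase
      image: liquibase/liquibase:4.27
      args: ["--changelog-file=/liquibase/changelog/changelog.sql", "update"]   # plus your DB connection flags
      volumeMounts:
        - name: changelog
          mountPath: /liquibase/changelog
  volumes:
    - name: changelog
      emptyDir: {}                           # scratch space shared between the init and main containers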