The script buildwebdocs.fan generates the documentation for the distribution's pods, but not for the pods I wrote myself or imported. How can I generate the documentation locally for these pods?
You can invoke compilerDoc yourself:
$ fan compilerDoc -?
Usage:
compilerDoc [options] <pods>*
Arguments:
pods Name of pods to compile (does not update index)
Options:
-help, -? Print usage help
-all Generate docs for every installed pods
-allCore Generation docs for Fantom core pods
-clean Delete outDir
-outDir <File> Output dir for doc files
So for your local pod:
$ fan compilerDoc myPod
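For instance, to write the generated docs for a pod of your own into a directory of your choosing (the pod name and output directory below are placeholders), you can combine the options above:
$ fan compilerDoc -outDir ./myPodDocs myPod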
That's a tricky question, and one I don't have an immediate answer (or script) for. However, assuming you only wish to view the documentation and don't require a directory of HTML files, then I can offer an alternative...
Install and use the Explorer application. It's a desktop file explorer application that, amongst other things, includes a Fandoc Viewer that lets you view documentation from pods in the current Fantom installation.
For example, to view the documentation for the afReflux pod, type afReflux (case-insensitive) in the address bar.
You can also press F1 to bring up an index page of all installed Fantom pods.
I'm trying to deploy Node.js code to a Kubernetes cluster, and I'm seeing that in my reference (provided by the maintainer of the cluster) that the yaml files are all prefixed by numbers:
00-service.yaml
10-deployment.yaml
etc.
I don't think that this file format is specified by kubectl, but I found another example of it online: https://imti.co/kibana-kubernetes/ (but the numbering scheme isn't the same).
Is this a Kubernetes thing? A file naming convention? Is it to keep files ordered in a folder?
This is to handle the resource creation order. There's an open issue about it in Kubernetes:
https://github.com/kubernetes/kubernetes/issues/16448#issue-113878195
tl;dr kubectl apply -f k8s/* should handle the order but it does not.
However, apart from the namespace, I can't think of a case where the order would matter. Every relationship other than the namespace is handled by label selectors, so things sort themselves out once all resources are deployed. You can just use 00-namespace.yaml and leave everything else unprefixed, or skip prefixes entirely unless you actually hit the issue (I never have).
When you execute kubectl apply -f against a directory (or a shell glob like k8s/*), the files are applied in alphabetical order. Prefixing the files with an increasing number therefore lets you control the order in which they are applied. But in nearly all cases the order shouldn't matter.
Sequencing also helps with readability and, not least, maintainability: looking at the file names, one can tell the order in which the resources need to be deployed. For example, a Deployment that uses a ConfigMap would fail if it were applied before the ConfigMap was created.
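As a minimal sketch (the file names are illustrative), a numbered layout guarantees that the Namespace and ConfigMap are applied before the Deployment that consumes them when the whole directory is applied at once:
k8s/
  00-namespace.yaml
  10-configmap.yaml
  20-deployment.yaml
  30-service.yaml
kubectl apply -f k8s/   # files in the directory are applied in alphabetical (and therefore numeric) order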
Is there a way you can run kubectl in a 'session' such that it gets its kubeconfig from a local directory rather than from ~/.kubeconfig?
Example Use Case
Given the abstract nature of the question, it's worth describing why this may be valuable with an example. Suppose someone had an application, call it 'a', and 4 Kubernetes clusters, each running 'a'. They might have a simple script that runs some kubectl actions in each cluster to smoke test a new deployment of 'a'; for example, they may want to deploy the app and then see how many copies of it were autoscaled in each cluster.
Example Solution
As in git, maybe there could be a global setting along the lines of "try to use a local kubeconfig file if you can find one":
kubectl global set-precedence local-kubectl
Then, in one terminal:
cd firstcluster
cat << EOF > kubeconfig
firstcluster
...
EOF
kubectl get pods
p4
Then, in another terminal:
cd secondcluster/
cat << EOF > kubeconfig
secondcluster
...
EOF
kubectl get pods
p1
p2
p3
Thus, the exact same kubectl commands (without having to set context) actually run against new clusters depending on the directory you are in.
Some ideas for solutions
One idea I had for this was to write a kubectl-context plugin which made kubectl always check for a local kubeconfig and, if it found one, set the context behind the scenes before running, to a context in the global config matching the directory name.
Another idea I've had along these lines would be to create different users which each had different kubeconfig home files.
And of course, using something like virtualenv, you might be able to arrange for each environment to point at its own kubeconfig file.
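A rough sketch of that last idea with today's tooling (no kubectl changes needed) is to point the KUBECONFIG environment variable at a directory-local file, either via direnv or via a small wrapper function; the local file name kubeconfig matches the example above:
# .envrc in each cluster directory (picked up automatically if you use direnv)
export KUBECONFIG=$PWD/kubeconfig

# or, without direnv, a wrapper that prefers a local kubeconfig when one exists
kubectl() {
  if [ -f ./kubeconfig ]; then
    command kubectl --kubeconfig=./kubeconfig "$@"
  else
    command kubectl "$@"
  fi
}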
Final thought
Ultimately, I think the goal here is to move away from the idea that a single ~/.kubeconfig file has any particular meaning, and instead look at ways that many kubeconfig files can be used on the same machine: not just via the --kubeconfig option, but in a way that keeps the state local to each directory.
AFAIK, the config file is under ~/.kube/config, not ~/.kubeconfig. I suppose you are looking for opinions on your question, and you gave me a great idea: creating kubevm, inspired by awsvm for the AWS CLI, chefvm for managing multiple Chef servers, and rvm for managing multiple Ruby versions.
So, in essence, you could have a kubevm setup that switches between different ~/.kube configs. You can use a CLI like this:
# Use a specific config
kubevm use {YOUR_KUBE_CONFIG|default}
# or
kubevm YOUR_KUBE_CONFIG
# Set your default config
kubevm default YOUR_KUBE_CONFIG
# List your configurations, including current and default
kubevm list
# Create a new config
kubevm create YOUR_KUBE_CONFIG
# Delete a config
kubevm delete YOUR_KUBE_CONFIG
# Copy a config
kubevm copy SRC_CONFIG DEST_CONFIG
# Rename a config
kubevm rename OLD_CONFIG NEW_CONFIG
# Open a config directory in $EDITOR
kubevm edit YOUR_KUBE_CONFIG
# Update kubevm to the latest
kubevm update
Let me know if it's useful!
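kubevm itself aside, a minimal sketch of the switching mechanism could simply symlink ~/.kube/config to named config files (the directory layout and function names here are assumptions, not kubevm's actual implementation):
# keep each cluster's config as ~/.kube/configs/<name>
kubevm_use() {
  ln -sf "$HOME/.kube/configs/$1" "$HOME/.kube/config"
}

# list the configs you can switch to
kubevm_list() {
  ls "$HOME/.kube/configs"
}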
I like the Kubernetes way of working: use a self-contained image and pass the configuration in a ConfigMap, mounted as a volume.
This worked great until I tried to do the same with a Liquibase container. The SQL is very long (~1.5K lines), and Kubernetes rejects it as too long.
Error from Kubernetes:
The ConfigMap "liquibase-test-content" is invalid: metadata.annotations: Too long: must have at most 262144 characters
I thought of passing the .sql files via a hostPath volume, but as I understand it, the hostPath's content won't necessarily be present on the node where the pod is scheduled.
Is there any other way to pass configuration from the K8s directory to pods? Thanks.
The error you are seeing is not about the size of the actual ConfigMap contents, but about the size of the last-applied-configuration annotation that kubectl apply automatically creates on each apply. If you use kubectl create -f foo.yaml instead of kubectl apply -f foo.yaml, it should work.
Please note that in doing this you will lose the ability to use kubectl diff and do incremental updates (without replacing the whole object) with kubectl apply.
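Another option in the same spirit (the changelog file name below is hypothetical) is to create the ConfigMap imperatively from the .sql file, which avoids writing the large inline manifest at all:
kubectl create configmap liquibase-test-content --from-file=changelog.sql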
Since 1.18 you can use server-side apply to circumvent the problem.
kubectl apply --server-side=true -f foo.yml
where server-side=true runs the apply command on the server instead of the client.
This will properly show conflicts with other actors, including client-side apply and thus fail:
Apply failed with 4 conflicts: conflicts with "kubectl-client-side-apply" using apiextensions.k8s.io/v1:
- .status.conditions
- .status.storedVersions
- .status.acceptedNames.kind
- .status.acceptedNames.plural
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
manifest to remove references to the fields that should keep their
current managers.
* You may co-own fields by updating your manifest to match the existing
value; in this case, you'll become the manager if the other manager(s)
stop managing the field (remove it from their configuration).
See http://k8s.io/docs/reference/using-api/api-concepts/#conflicts
If the changes are intended, you can simply use the first option:
kubectl apply --server-side=true --force-conflicts -f foo.yml
You can use an init container for this. Essentially, put the .sql files on GitHub or S3 or really any location you can read from, and have the init container populate a directory with them. The semantics of init containers guarantee that the Liquibase container will only be launched after the config files have been downloaded.
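A rough sketch of that pattern, using an emptyDir volume shared between an init container and the Liquibase container (the image names, URL, and mount paths are placeholders, not tested):
spec:
  volumes:
  - name: sql
    emptyDir: {}
  initContainers:
  - name: fetch-sql
    image: curlimages/curl   # placeholder image that has curl available
    # download the changelog into the shared volume before the main container starts
    command: ["sh", "-c", "curl -fsSL -o /sql/changelog.sql https://example.com/changelog.sql"]
    volumeMounts:
    - name: sql
      mountPath: /sql
  containers:
  - name: liquibase
    image: liquibase/liquibase   # placeholder image
    volumeMounts:
    - name: sql
      mountPath: /liquibase/changelog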
Using Kubernetes' kubectl I can execute arbitrary commands on any pod such as kubectl exec pod-id-here -c container-id -- malicious_command --steal=creditcards
Should that ever happen, I would need to be able to pull up a log saying who executed the command and what command they executed. This includes if they decided to run something else by simply running /bin/bash and then stealing data through the tty.
How would I see which authenticated user executed the command as well as the command they executed?
Audit logging is not currently offered, but the Kubernetes community is working to get it available in the 1.4 release, which should come around the end of September.
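For what it's worth, in later Kubernetes releases an audit policy that records exec requests, including the authenticated user who issued them, looks roughly like this (a sketch only; the API server must also be started with --audit-policy-file pointing at it, and you should check which audit.k8s.io version your cluster supports):
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# log the full request and response for exec and attach calls
- level: RequestResponse
  resources:
  - group: ""
    resources: ["pods/exec", "pods/attach"]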
There are third-party solutions that can solve the auditing issue, and if you're looking for PCI compliance, as the title implies, there are solutions that help solve the broader problem, not just auditing.
Here is a link to such a solution by Twistlock. https://info.twistlock.com/guide-to-pci-compliance-for-containers
Disclaimer, I work for Twistlock.
I am following the Zenoss Development Environment Guide to configure Zenoss. When I get to the part about mounting the /z directory into containers, I cannot find the file mentioned - I've tried "find" and it's just not there. I can't find anything on Google about how to add serviced to the environment, I think partly because searches bring up the root word "service" rather than "serviced". Does anyone know what serviced is and how to install it, or a substitute for the purpose of this task? Please see the quote below. Thanks much.
...
Mount “/z” Into All Containers
Now we can configure serviced to
automatically share (bind mount) the host’s /z directory into every
container it starts. This will let us use the same files on the host
and in containers using the exact same path.
Edit /lib/systemd/system/serviced.service. Add a mount argument to the
end of the ExecStart line so that it looks like this:
ExecStart=/opt/serviced/bin/serviced --mount *,/z,/z
...
serviced is a command-line Docker orchestration tool developed for Zenoss 5; its full name is Control Center. Read the Zenoss 5 installation guide, which covers what you need to install serviced (Control Center).
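If you do end up editing the unit file as the guide describes, the usual systemd follow-up applies (assuming a systemd-based host; these are standard systemd commands, not Zenoss-specific, and the unit name is taken from the path in the guide):
systemctl daemon-reload
systemctl restart serviced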