I'm learning OpenShift Origin. In the master container I found a number of config files:
[root@openshift] cd /var/lib/origin
[root@openshift origin]# find . -name *kubeconfig
./openshift.local.config/node-localhost/node.kubeconfig
./openshift.local.config/master/admin.kubeconfig
./openshift.local.config/master/openshift-master.kubeconfig
[root@openshift origin]# find . -name *config.yaml
./openshift.local.config/node-localhost/node-config.yaml
./openshift.local.config/master/master-config.yaml
I also found these files by inspecting the origin container:
$ docker inspect 671fb8df3752 | grep config
"--master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml",
"--node-config=/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml"
"/var/lib/origin/openshift.local.config:/var/lib/origin/openshift.local.config:z",
"Source": "/var/lib/origin/openshift.local.config",
"Destination": "/var/lib/origin/openshift.local.config",
"KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig",
"--master-config=/var/lib/origin/openshift.local.config/master/master-config.yaml",
"--node-config=/var/lib/origin/openshift.local.config/node-localhost/node-config.yaml"
Could you help me to schematize / summarize the role and use of each of these files?
Specifically when executing commands of this type:
oadm policy add-scc-to-group anyuid system:authenticated --config=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
should they be directed at each of the configurations I have found, or only at a specific one?
How do I source this file which is in another repo? Here is the GitHub Actions workflow example used in another project. I used the same code, but it's complaining that the "file is not found":
run: |
# Setting up cluster configurations, config files are in the kubectl image
# https://github.com /kube-apps/tree/master/kubectl
source /gke/gke-clusters.config
source /aks/aks-clusters.config
Just add the full path to the source lines.
run: |
# Setting up cluster configurations, config files are in the kubectl image
# https://github.com/kube-apps/tree/master/kubectl
source https://github.com/ /kube-apps/tree/master/kubectl/gke-clusters.config
source https://github.com /kube-apps/tree/master/kubectl/aks-clusters.config
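Note that the shell's source builtin reads local file paths, not URLs, so if the files only live in that other repo, one workaround (just a sketch; the raw.githubusercontent.com URL, the master branch, and the <org> placeholder are assumptions, adjust them to the real repo layout) is to download the files in the same step and then source the local copies:
run: |
  curl -fsSL -o gke-clusters.config https://raw.githubusercontent.com/<org>/kube-apps/master/kubectl/gke-clusters.config
  curl -fsSL -o aks-clusters.config https://raw.githubusercontent.com/<org>/kube-apps/master/kubectl/aks-clusters.config
  source gke-clusters.config
  source aks-clusters.config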
I want to set up Liquibase (using Docker) for a PostgreSQL database running locally (not in a container). I followed multiple tutorials, including the one on Docker Hub.
As suggested I've created a liquibase.docker.properties file in my <PATH TO CHANGELOG DIR>
classpath: /liquibase/changelog
url: jdbc:postgresql://localhost:5432/mydb?currentSchema=public
changeLogFile: changelog.xml
username: myuser
password: mypass
to be able to run docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties <COMMAND>.
When I run [...] generateChangeLog I get the following output (with option --logLevel info):
[2021-04-27 06:08:20] INFO [liquibase.integration] No Liquibase Pro license key supplied. Please set liquibaseProLicenseKey on command line or in liquibase.properties to use Liquibase Pro features.
Liquibase Community 4.3.3 by Datical
####################################################
## _ _ _ _ ##
## | | (_) (_) | ##
## | | _ __ _ _ _ _| |__ __ _ ___ ___ ##
## | | | |/ _` | | | | | '_ \ / _` / __|/ _ \ ##
## | |___| | (_| | |_| | | |_) | (_| \__ \ __/ ##
## \_____/_|\__, |\__,_|_|_.__/ \__,_|___/\___| ##
## | | ##
## |_| ##
## ##
## Get documentation at docs.liquibase.com ##
## Get certified courses at learn.liquibase.com ##
## Free schema change activity reports at ##
## https://hub.liquibase.com ##
## ##
####################################################
Starting Liquibase at 06:08:20 (version 4.3.3 #52 built at 2021-04-12 17:08+0000)
BEST PRACTICE: The changelog generated by diffChangeLog/generateChangeLog should be inspected for correctness and completeness before being deployed.
[2021-04-27 06:08:22] INFO [liquibase.diff] changeSets count: 1
[2021-04-27 06:08:22] INFO [liquibase.diff] changelog.xml does not exist, creating and adding 1 changesets.
Liquibase command 'generateChangeLog' was executed successfully.
It looks like the command ran "successfully", but I could not find the file changelog.xml in my local directory which I mounted, i.e. <PATH TO CHANGELOG DIR>. The mount, however, must be working, since the container connects to the database successfully, i.e. it is able to access and read liquibase.docker.properties.
First I thought I might have to "say" to Docker that it is allowed to write on my disk but it seems that this should be supported [from the description on Docker Hub]:
The /liquibase/changelog volume can also be used for commands that write output, such as generateChangeLog
What am I missing? Thanks in advance for any help!
Additional information
Output of docker inspect:
"Mounts": [
{
"Type": "bind",
"Source": "<PATH TO CHANGELOG DIR>",
"Destination": "/liquibase/changelog",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
...
],
When you run generateChangeLog, the path to the file should be specified as /liquibase/changelog/changelog.xml, even though for update it needs to be changelog.xml.
Example:
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties --changeLogFile=/liquibase/changelog/changelog.xml generateChangeLog
For generateChangeLog, the changeLogFile argument is the specific path to the file to output vs. a path relative to the classpath setting that update and other commands use.
When you include the command line argument as well as a defaultsFile like above, the command line argument wins. That lets you leverage the same default settings while replacing specific settings when specific commands need more/different ones.
Details
There is a distinction between operations that create files and ones that read existing files.
With Liquibase, you almost always want to use paths to files that are relative to directories in the classpath, like the examples have. The specified changeLogFile gets stored in the tracking system, so if you ever run the same changelog but reference it in a different way (because you moved the root directory or are running from a different machine), Liquibase will see it as a new file and attempt to re-run already-run changesets.
That is why the documentation has classpath: /liquibase/changelog and changeLogFile: com/example/changelog.xml. The update operation looks in the /liquibase/changelog dir for a file called com/example/changelog.xml, finds it, and stores the path as com/example/changelog.xml.
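As a sketch of what that layout means (the com/example path is the documentation's illustrative example, not the setup from the question):
# liquibase.docker.properties (illustrative sketch)
classpath: /liquibase/changelog
changeLogFile: com/example/changelog.xml
# update then opens /liquibase/changelog/com/example/changelog.xml
# and records the path as com/example/changelog.xml in the DATABASECHANGELOG tracking table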
GenerateChangeLog is one of those "not always relative to classpath" cases because it needs to know where to store the file. If you just specify the output changeLogFile as changelog.xml, it creates that file relative to your process's working directory, which is not what you need or expect.
TL;DR
Prefix the changelog filename with /liquibase/changelog/ and pass it as a command line argument:
[...] --changeLogFile /liquibase/changelog/changelog.xml generateChangeLog
See Nathan's answer for details.
Explanation
I launched the container with -it and overrode the entrypoint to get an interactive shell within the container (see this post):
docker run --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog -it --entrypoint /bin/bash liquibase/liquibase -s
Executing ls yields the following:
liquibase@ubuntu-rafael:/liquibase$ ls
ABOUT.txt UNINSTALL.txt docker-entrypoint.sh liquibase
GETTING_STARTED.txt changelog examples liquibase.bat
LICENSE.txt changelog.txt lib liquibase.docker.properties
README.txt classpath licenses liquibase.jar
Notable here is docker-entrypoint.sh which actually executes the liquibase command, and the folder changelog which is mounted to my local <PATH TO CHANGELOG DIR> (my .properties file is in there).
Then I ran the same command as before, but this time inside the container:
sh docker-entrypoint.sh --defaultsFile=/liquibase/changelog/liquibase.docker.properties --logLevel info generateChangeLog
I got the same output as above, but guess what shows up when running ls again:
ABOUT.txt changelog examples liquibase.docker.properties
GETTING_STARTED.txt changelog.txt lib liquibase.jar
LICENSE.txt changelog.xml ...
The changelog actually exists! But it is created in the wrong directory...
If you prefix the changelog filename with /liquibase/changelog/, the container is able to write it to your local (mounted) disk.
P.S. This means that the description of the "Complete Example" using "a properties file" from here is not working. I will open an Issue for that.
UPDATE
Specifying the absolute path is only necessary for commands that write a new file, e.g. generateChangeLog (see Nathan's answer). But it is better practice to pass the absolute path via the command line so that you can keep the settings in the defaults file.
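In other words (a sketch that just reuses the mount and defaults file from the question, nothing else changed):
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties --changeLogFile=/liquibase/changelog/changelog.xml generateChangeLog
docker run --rm --net="host" -v <PATH TO CHANGELOG DIR>:/liquibase/changelog liquibase/liquibase --defaultsFile=/liquibase/changelog/liquibase.docker.properties update
The first command overrides changeLogFile with the absolute container path so the generated file lands in the mounted directory; the second can rely on the relative changeLogFile from the defaults file, because update resolves it against the classpath.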
I am new to Grafana and I am getting this error while executing grafana-server.exe:
Grafana-server Init Failed: Could not find config defaults, make sure homepath command line parameter is set or working directory is homepath
Firstly, I am not clear about which path to specify as homepath and which to specify as config path.
Secondly, I have tried to set the homepath using this command:
grafana-cli admin reset-admin-password --homepath "c:\" mynewpassword
But I am getting this error:
"Incorrect Usage: flag provided but not defined: -homepath"
In Grafana version 7.3.5 this is the help message:
NAME:
Grafana CLI - A new cli application
USAGE:
grafana-cli [global options] command [command options] [arguments...]
VERSION:
7.3.5
AUTHOR:
Grafana Project <hello@grafana.com>
COMMANDS:
plugins Manage plugins for grafana
admin Grafana admin commands
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--pluginsDir value Path to the Grafana plugin directory (default: "/var/lib/grafana/plugins") [$GF_PLUGIN_DIR]
--repo value URL to the plugin repository (default: "https://grafana.com/api/plugins") [$GF_PLUGIN_REPO]
--pluginUrl value Full url to the plugin zip file instead of downloading the plugin from grafana.com/api [$GF_PLUGIN_URL]
--insecure Skip TLS verification (insecure) (default: false)
--debug Enable debug logging (default: false)
--configOverrides value Configuration options to override defaults as a string. e.g. cfg:default.paths.log=/dev/null
--homepath value Path to Grafana install/home path, defaults to working directory
--config value Path to config file
--help, -h show help (default: false)
--version, -v print the version (default: false)
So you can set it by passing --homepath.
Be careful: it seems to be different for the grafana-server parameters; in that case you must set the flag with only one hyphen (-homepath).
But coming back to your problem, there are two things to say.
First, order your command correctly. I mean something like this:
grafana-cli --homepath path ...
because the homepath flag belongs to grafana-cli, so it must come right after it; otherwise there is no guarantee that "what you want to do is what you write".
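For the reset-admin-password command from the question, that would look something like this (the install path is only a placeholder, point it at your actual Grafana directory):
grafana-cli --homepath "C:\grafana" admin reset-admin-password mynewpassword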
Second is the homepath itself. Consider this directory tree:
.
|_LICENSE
|_NOTICE.md
|_README.md
|_VERSION
|_bin
|_conf
|_data
|_plugin-bundled
|_public
|_scripts
This is the installation directory, i.e. the homepath you must set; more specifically, the directory that directly contains bin (and conf, data, ...).
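Since the original "Init Failed" error came from grafana-server.exe, the same idea applies there, with a single hyphen (again, the install path is only a placeholder):
grafana-server.exe -homepath "C:\grafana"
If your config file lives somewhere non-standard, you can also point to it explicitly with -config "C:\grafana\conf\custom.ini".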
When I try to install a new template using the following:
dotnet new --install . --name MyTemplate
or
dotnet new --install "Path" --name MyTemplate
I get the usage information:
Usage: new [options]
Options:
-h, --help Displays help for this command.
-l, --list Lists templates containing the specified name. If no name is specified, lists all templates.
-n, --name The name for the output being created. If no name is specified, the name of the current directory is used.
-o, --output Location to place the generated output.
-i, --install Installs a source or a template pack.
-u, --uninstall Uninstalls a source or a template pack.
--nuget-source Specifies a NuGet source to use during install.
--type Filters templates based on available types. Predefined values are "project", "item" or "other".
--dry-run Displays a summary of what would happen if the given command line were run if it would result in a template creation.
--force Forces content to be generated even if it would change existing files.
-lang, --language Filters templates based on language and specifies the language of the template to create.
I have a .template.config directory with a template.json file within.
The contents of the template.json file are something like this:
{
"author": "My Department",
"classifications": [
"Solution Template"
],
"name": "My Template Name",
"identity": "My Template Identity",
"shortName": "mytemplate",
"tags": {
"language": "C#"
},
"sourceName": "Company.Product",
"preferNameDirectory": "true"
}
I certainly wish it would tell me what I'm doing wrong. This has worked for me in the past.
The way the dotnet new --install command works is a bit confusing unfortunately. The installation can be successful but the output does not make it obvious. You will get the usage information and a list of installed templates that should include your new one.
As mentioned in the comments, there is a bug filed that aims to tidy this up.
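To double-check that the install actually worked, one option (using the shortName from the template.json above; the project name is just a placeholder) is to list the templates and then create a project from yours:
dotnet new --list
dotnet new mytemplate --name MyNewProject --output MyNewProject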
I was seeing similar results running dotnet new -i IdentityServer4.Templates, but the package wasn't being installed and no error or other info was being displayed.
Turns out nuget.org wasn't configured as a package source (new machine, I guess? I thought that was configured by default when installing Visual Studio, though).
Here's the nuget.org feed at the time of this writing:
https://api.nuget.org/v3/index.json
And here's info for configuring them in case it helps someone who hasn't done that:
https://learn.microsoft.com/en-us/nuget/consume-packages/install-use-packages-visual-studio#package-sources
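If you have a recent SDK (3.1.200 or later), you can also add the source from the command line instead of Visual Studio (the source name here is arbitrary):
dotnet nuget add source https://api.nuget.org/v3/index.json --name nuget.org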
When running this command:
kubectl apply -f tenten
I get this error:
unable to decode "tenten\.angular-cli.json": Object 'Kind' is missing in '{
"project": {
"$schema": "./node_modules/@angular/cli/lib/config/schema.json",
"name": "tenten"
},
"apps": [{
"root": "src/main/webapp/",
"outDir": "target/www/app",
"assets": [
"content",
"favicon.ico"
],
"index": "index.html",
"main": "app/app.main.ts",
"polyfills": "app/polyfills.ts",
"test": "",
"tsconfig": "../../../tsconfig.json",
"prefix": "jhi",
"mobile": false,
"styles": [
"content/scss/vendor.scss",
"content/scss/global.scss"
],
"scripts": []
}],
It looks like you're running this from the parent directory of your applications. You should 1) create a directory that's parallel to your applications and 2) run yo jhipster:kubernetes in it. Then run kubectl apply -f tenten in that directory after you've built and pushed your docker images. For example, here's the output when I run it from the kubernetes directory in my jhipster-microservices-example project.
± yo jhipster:kubernetes
_-----_
| | ╭──────────────────────────────────────────╮
|--(o)--| │ Update available: 2.0.0 (current: 1.8.5) │
`---------´ │ Run npm install -g yo to update. │
( _´U`_ ) ╰──────────────────────────────────────────╯
/___A___\ /
| ~ |
__'.___.'__
´ ` |° ´ Y `
⎈ [BETA] Welcome to the JHipster Kubernetes Generator ⎈
Files will be generated in folder: /Users/mraible/dev/jhipster-microservices-example/kubernetes
WARNING! kubectl 1.2 or later is not installed on your computer.
Make sure you have Kubernetes installed. Read http://kubernetes.io/docs/getting-started-guides/binary_release/
Found .yo-rc.json config file...
? Which *type* of application would you like to deploy? Microservice application
? Enter the root directory where your gateway(s) and microservices are located ../
2 applications found at /Users/mraible/dev/jhipster-microservices-example/
? Which applications do you want to include in your configuration? (Press <space> to select, <a> to toggle all, <i> to inverse selection) blog, store
JHipster registry detected as the service discovery and configuration provider used by your apps
? Enter the admin password used to secure the JHipster Registry admin
? What should we use for the Kubernetes namespace? default
? What should we use for the base Docker repository name? mraible
? What command should we use for push Docker image to repository? docker push
Checking Docker images in applications' directories...
ls: no such file or directory: /Users/mraible/dev/jhipster-microservices-example/blog/target/docker/blog-*.war
identical blog/blog-deployment.yml
identical blog/blog-service.yml
identical blog/blog-postgresql.yml
identical blog/blog-elasticsearch.yml
identical store/store-deployment.yml
identical store/store-service.yml
identical store/store-mongodb.yml
conflict registry/jhipster-registry.yml
? Overwrite registry/jhipster-registry.yml? overwrite this and all others
force registry/jhipster-registry.yml
force registry/application-configmap.yml
WARNING! Kubernetes configuration generated with missing images!
To generate Docker image, please run:
./mvnw package -Pprod docker:build in /Users/mraible/dev/jhipster-microservices-example/blog
WARNING! You will need to push your image to a registry. If you have not done so, use the following commands to tag and push the images:
docker image tag blog mraible/blog
docker push mraible/blog
docker image tag store mraible/store
docker push mraible/store
You can deploy all your apps by running:
kubectl apply -f registry
kubectl apply -f blog
kubectl apply -f store
Use these commands to find your application's IP addresses:
kubectl get svc blog
See the end of my blog post Develop and Deploy Microservices with JHipster for more information.