Unable to set homepath and config in Grafana

I am new to Grafana and I am getting this error while executing grafana-server.exe:
Grafana-server Init Failed: Could not find config defaults, make sure homepath command line parameter is set or working directory is homepath
Firstly, I am not clear about which path to specify as homepath and which to specify as config path.
Secondly, I have tried to set the homepath using this command:
grafana-cli admin reset-admin-password --homepath "c:\" mynewpassword
But I am getting this error:
"Incorrect Usage: flag provided but not defined: -homepath"

In Grafana version 7.3.5 this is the help message:
NAME:
Grafana CLI - A new cli application
USAGE:
grafana-cli [global options] command [command options] [arguments...]
VERSION:
7.3.5
AUTHOR:
Grafana Project <hello@grafana.com>
COMMANDS:
plugins Manage plugins for grafana
admin Grafana admin commands
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--pluginsDir value Path to the Grafana plugin directory (default: "/var/lib/grafana/plugins") [$GF_PLUGIN_DIR]
--repo value URL to the plugin repository (default: "https://grafana.com/api/plugins") [$GF_PLUGIN_REPO]
--pluginUrl value Full url to the plugin zip file instead of downloading the plugin from grafana.com/api [$GF_PLUGIN_URL]
--insecure Skip TLS verification (insecure) (default: false)
--debug Enable debug logging (default: false)
--configOverrides value Configuration options to override defaults as a string. e.g. cfg:default.paths.log=/dev/null
--homepath value Path to Grafana install/home path, defaults to working directory
--config value Path to config file
--help, -h show help (default: false)
--version, -v print the version (default: false)
So you can set it by passing --homepath.
Be careful: it seems to be different from the grafana-server parameters; for grafana-server you must set the flag with only one hyphen (-homepath).
Coming back to your problem, there are two things to say.
The first is to order your command correctly, i.e. something like this:
grafana-cli --homepath path ...
because the homepath flag belongs to grafana-cli itself, so it must come right after it; otherwise there is no guarantee that what you want to do is what you wrote.
The second is the homepath itself. Consider this directory tree:
.
|_LICENSE
|_NOTICE.md
|_README.md
|_VERSION
|_bin
|_conf
|_data
|_plugin-bundled
|_public
|_scripts
This is the installation directory, i.e. the homepath you must set: more precisely, the directory that contains bin (and conf, data, ...).
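Putting the two points together, a corrected version of your reset command would look like this (the install path below is an assumption; use wherever your Grafana tree with bin, conf, data, ... actually lives):
grafana-cli --homepath "C:\Program Files\GrafanaLabs\grafana" admin reset-admin-password mynewpassword
and for grafana-server itself, note the single hyphen:
grafana-server -homepath "C:\Program Files\GrafanaLabs\grafana"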

Related

How to display a new Yocto image option after sourcing poky/oe-init-build-env

Let's say I have a new Yocto image called stargazer-cmd.
What file should I edit so that every time I source poky/oe-init-build-env,
it is displayed as a build option to the user?
kj@kj-Aspire-V3-471G:~/stm32Yoctominimal$ source poky/oe-init-build-env build-mp1/
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
core-image-minimal
core-image-sato
meta-toolchain
meta-ide-support
I wish to add stargazer-cmd on top of core-image-minimal, but I am not sure what to google or which file I need to change.
Let me explain how to add a custom configuration to the OpenEmbedded build process.
First of all, here is the process that is done when running:
source poky/oe-init-build-env
The oe-init-build-env script initializes the OEROOT variable to point to the location of the script itself.
The oe-init-build-env script sources another file, $OEROOT/scripts/oe-buildenv-internal, which will:
Check if OEROOT is set
Set BUILDDIR to your custom build directory name $1, or just build if you do not provide one
Set BBPATH to the poky/bitbake folder
Add $BBPATH/bin and $OEROOT/scripts to PATH (this enables commands like bitbake and bitbake-layers)
Export BUILDDIR and PATH to the next script
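In shell terms, these steps amount to roughly the following (a simplified sketch, not the literal script contents):
# simplified equivalent of oe-buildenv-internal
OEROOT=${OEROOT:?OEROOT is not set}          # bail out if OEROOT is missing
BUILDDIR="$(pwd)/${1:-build}"                # $1, or "build" if not provided
BBPATH="$OEROOT/bitbake"
PATH="$BBPATH/bin:$OEROOT/scripts:$PATH"     # enables bitbake, bitbake-layers, ...
export BUILDDIR PATH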
The oe-init-build-env script continues by running the final build script with:
TEMPLATECONF="$TEMPLATECONF" $OEROOT/scripts/oe-setup-builddir
The oe-setup-builddir script will:
Check if BUILDDIR is set
Create the conf directory under $BUILDDIR
Source a template file that checks whether the TEMPLATECONF variable is set:
. $OEROOT/.templateconf
That file contains:
# Template settings
TEMPLATECONF=${TEMPLATECONF:-meta-poky/conf}
This means that if the TEMPLATECONF variable is not set, it defaults to meta-poky/conf, and that is where the default local.conf and bblayers.conf come from.
Copy $TEMPLATECONF to $BUILDDIR/conf/templateconf.cfg
Set some variables pointing to custom local.conf and bblayers.conf:
OECORELAYERCONF="$TEMPLATECONF/bblayers.conf.sample"
OECORELOCALCONF="$TEMPLATECONF/local.conf.sample"
OECORENOTESCONF="$TEMPLATECONF/conf-notes.txt"
In the oe-setup-builddir script there is a comment saying that TEMPLATECONF can point to a directory:
#
# $TEMPLATECONF can point to a directory for the template local.conf & bblayers.conf
#
Copy local.conf.sample and bblayers.conf.sample from the TEMPLATECONF directory into $BUILDDIR/conf:
cp -f $OECORELOCALCONF "$BUILDDIR/conf/local.conf"
sed -e "s|##OEROOT##|$OEROOT|g" \
-e "s|##COREBASE##|$OEROOT|g" \
$OECORELAYERCONF > "$BUILDDIR/conf/bblayers.conf"
Finally, it prints what is inside OECORENOTESCONF, which points to $TEMPLATECONF/conf-notes.txt:
[ ! -r "$OECORENOTESCONF" ] || cat $OECORENOTESCONF
and by default that is located under meta-poky/conf/conf-notes.txt:
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
core-image-minimal
core-image-sato
meta-toolchain
meta-ide-support
You can also run generated qemu images with a command like 'runqemu qemux86'
Other commonly useful commands are:
- 'devtool' and 'recipetool' handle common recipe tasks
- 'bitbake-layers' handles common layer tasks
- 'oe-pkgdata-util' handles common target package tasks
So, now, after understanding all of that, here is what you can do:
Create a custom template directory for your project, containing:
local.conf.sample
bblayers.conf.sample
conf-notes.txt
Do not forget to set the path to poky in bblayers.conf to ##OEROOT##, as it will be substituted automatically by the build script.
Set your custom message in conf-notes.txt
Before any new build, just set TEMPLATECONF:
TEMPLATECONF="<path/to/template-directory>" source poky/oe-init-build-env <build_name>
Then you will find a build directory with your custom local.conf and bblayers.conf, plus an additional file conf/templateconf.cfg containing the path of TEMPLATECONF.
In short: put conf/conf-notes.txt in your template directory in your layer; OECORENOTESCONF should end up pointing to that file, as in the sketch below.
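As a concrete sketch (the template directory name and location are hypothetical), the template directory could look like this:
stargazer-template/
    local.conf.sample
    bblayers.conf.sample
    conf-notes.txt
with conf-notes.txt carrying your custom message, e.g.:
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
    core-image-minimal
    stargazer-cmd
and the build initialized with:
TEMPLATECONF="$PWD/stargazer-template" source poky/oe-init-build-env build-mp1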

How to decide Quarkus application arguments in Kubernetes at run-time?

I've built a Quarkus 2.7.1 console application using picocli that includes several subcommands. I'd like to be able to run this application within a Kubernetes cluster and decide its arguments at run-time. This is so that I can use the same container image to run the application in different modes within the cluster.
To get things started I added the JIB extension and tried setting the arguments using a configuration value quarkus.jib.jvm-arguments. Unfortunately it seems like this configuration value is locked at build-time so I'm unable to update this at run-time.
Next I tried setting quarkus.args while using default settings for JIB. The configuration value documentation makes it sound general enough for the job, but it doesn't seem to have an effect when the application is run in the container. Since most references to this configuration value in the documentation are in the context of Dev Mode, I'm wondering if it may be disabled outside of that.
How can I get this application running in a container image with its arguments decided at run-time?
You can set quarkus.jib.jvm-entrypoint to any container entrypoint command you want, including scripts. An example in the docs is quarkus.jib.jvm-entrypoint=/deployments/run-java.sh. You could make use of $CLI_ARGUMENTS in such a script. Even something like quarkus.jib.jvm-entrypoint=/bin/sh,-c,'/deployments/run-java.sh $CLI_ARGUMENTS' should work too, as long as you place the script run-java.sh at /deployments in the image. The possibilities are limitless.
Also see this SO answer if there's an issue. (The OP in the link put a custom script at src/main/jib/docker/run-java.sh (src/main/jib is Jib's default "extra files directory") so that Jib places the script in the image at /docker/run-java.sh.)
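As a sketch of that idea (the script location, jar path, and the CLI_ARGUMENTS variable are assumptions, not Quarkus defaults):
File: src/main/jib/deployments/run-java.sh
#!/bin/sh
# hypothetical wrapper: forwards CLI_ARGUMENTS from the environment to the application
exec java -jar /deployments/quarkus-run.jar $CLI_ARGUMENTS
File: application.properties
quarkus.jib.jvm-entrypoint=/bin/sh,/deployments/run-java.sh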
I was able to find a solution to the problem with a bit of experimenting this morning.
With the quarkus-container-image-docker extension (instead of quarkus.jib.jvm-arguments) I was able to take the template Dockerfile.jvm and extend it to pass arguments through to the CLI. The only line that needed changing was the ENTRYPOINT (details included in the snippet below): I changed the ENTRYPOINT form from exec to shell and added an environment variable as an argument to pass through program arguments.
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.3
ARG JAVA_PACKAGE=java-11-openjdk-headless
ARG RUN_JAVA_VERSION=1.3.8
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en'
# Install java and the run-java script
# Also set up permissions for user `1001`
RUN microdnf install curl ca-certificates ${JAVA_PACKAGE} \
&& microdnf update \
&& microdnf clean all \
&& mkdir /deployments \
&& chown 1001 /deployments \
&& chmod "g+rwX" /deployments \
&& chown 1001:root /deployments \
&& curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \
&& chown 1001 /deployments/run-java.sh \
&& chmod 540 /deployments/run-java.sh \
&& echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security
# Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size.
ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
# We make four distinct layers so if there are application changes the library layers can be re-used
COPY --chown=1001 target/quarkus-app/lib/ /deployments/lib/
COPY --chown=1001 target/quarkus-app/*.jar /deployments/
COPY --chown=1001 target/quarkus-app/app/ /deployments/app/
COPY --chown=1001 target/quarkus-app/quarkus/ /deployments/quarkus/
EXPOSE 8080
USER 1001
# [== BEFORE ==]
# ENTRYPOINT [ "/deployments/run-java.sh" ]
# [== AFTER ==]
ENTRYPOINT "/deployments/run-java.sh" $CLI_ARGUMENTS
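With this in place, the arguments are decided when the container starts, for example (image name and arguments below are hypothetical):
docker run -e CLI_ARGUMENTS="my-subcommand --some-flag" my-console-app:latest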
I have tried the above approaches but they didn't work with the default Quarkus JIB ubi8/openjdk-17-runtime image. This is because this base image doesn't use /work as the WORKDIR, but /home/jboss instead.
Therefore, I created a custom start-up script and referenced it in the properties file as follows. This approach works better if there's a need to set application parameters using environment variables:
File: application.properties
quarkus.jib.jvm-entrypoint=/bin/sh,run-java.sh
File: src/main/jib/home/jboss/run-java.sh
#!/bin/sh
# reads the trust store password from the environment at container start
exec java \
    -Djavax.net.ssl.trustStore=/deployments/truststore \
    -Djavax.net.ssl.trustStorePassword="$TRUST_STORE_PASSWORD" \
    -jar quarkus-run.jar

How to use plugin commands in bcftools?

My goal is to use bcftools to check that the reference alleles in my dataset (vcf file) match with a reference genome (fasta file) using the fixref plugin.
Working on command line, I first set the following environment:
export BCFTOOLS_PLUGINS=/path/to/bcftools/plugins
The following code is recommended for test datasets with mismatches:
bcftools +fixref test.bcf -Ob -o output.bcf -- -f ref.fa -m top
When I run this command using my own files (note that my data is .vcf, not .bcf), I get the following error:
[main] Unrecognized command
If I simply enter:
bcftools
I get a list of only 5 commands (view, index, cat, ld, ldpair) that I can use. So although I've set the environment variable, does it somehow need to be activated? Do I need to run my command through a bash script?
It turned out that
bcftools
was pointing to a deprecated version of bcftools (0.1.19) in ../bin/, while
BCFTOOLS_PLUGINS=/path/to/bcftools/plugins
was pointing to the plugins for bcftools version 1.10.2 outside /bin/.
Replacing ../bin/bcftools (0.1.19) with version 1.10.2 was the fix.
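A quick way to confirm which binary the shell is picking up (plain shell commands, nothing bcftools-specific):
which bcftools      # shows the path being resolved, e.g. the stale ../bin/ copy
bcftools --version  # htslib-based releases (1.x) report their version here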

How to pass credentials to sbt build cli?

I'm trying to Dockerize an old sbt build that needs access to a private Nexus repo. I've previously been using a local credentials file referenced from build.sbt, but that doesn't really fit my current use case since I want to bootstrap everything from a Dockerfile build. I'd rather not output stuff to a file and then copy it into my docker build container, but just pass it as docker ARGs.
Under
https://www.scala-sbt.org/1.0/docs/Publishing.html
I read that I can pass it like so:
credentials += Credentials("Some Nexus Repository Manager",
"my.artifact.repo.net", "admin", "admin123")
So therefore I figured I could do something like:
ARG REPO_USER
ARG REPO_PWD
RUN sbt ";credentials += Credentials(\"Some Nexus Repository Manager\", \"repo.host.com\", ${REPO_USER}, ${REPO_PWD}) ;package"
and then run
docker build . --build-arg REPO_USER=foobar --build-arg REPO_PWD=*****
in my Dockerfile but that didn't work. I still get:
Unable to find credentials for [Sonatype Nexus Repository Manager @ repo.host.com]
Is there any nice way to pass repo credentials to sbt from cli?
Update:
I tried a file approach, but that didn't solve the problem, so I guess I might be on the wrong track about what's actually wrong here.
RUN echo "realm=Sonatype Nexus Repository Manager" >> .credentials && \
echo "host=repo.host.se" >> .credentials && \
echo "user=$REPO_USER" >> .credentials && \
echo "password=$REPO_PWD" >> .credentials && \
export SBT_CREDENTIALS=.credentials && \
sbt package
Update 2
I think this is no longer a Docker question at all: having debugged it in the docker container, sbt simply wouldn't pick up my creds any way I passed them according to the sbt docs.
I'll answer my own question.
You can use environment variables. You can set them in the Dockerfile, either directly or from ARGs, something like this:
ARG REPO_USR
ARG REPO_PWD
ENV REPO_USR=${REPO_USR}
ENV REPO_PWD=${REPO_PWD}
Then you can use the environment variables in sbt:
val repoUser = sys.env.getOrElse("REPO_USR", "")
val repoPass = sys.env.getOrElse("REPO_PWD", "")
credentials += Credentials("Repo Realm", "repo.url.com", repoUser, repoPass)
Then you can basically pass args to docker build and they will be passed on to sbt.
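For example, matching the ARG names above:
docker build . --build-arg REPO_USR=foobar --build-arg REPO_PWD=*****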
The sbt docs are misleading at best, or simply wrong. After debugging this to bits in the docker container, I found there was no way to pass the creds on the CLI so that they got picked up. The SBT_CREDENTIALS environment variable just doesn't work either.
This comment finally saved me: SBT is unable to find credentials when attempting to download from an Artifactory virtual repo
The least intrusive way I've got working is to add an sbt config file in the docker image's home dir:
RUN mkdir -p .sbt/0.13/plugins && \
echo "credentials += Credentials(\"Sonatype Nexus Repository Manager\", \"repo.host.se\", \"$REPO_USER\", \"$REPO_PWD\")" >> .sbt/0.13/plugins/creds.sbt
RUN sbt package

Openldap : overlay accesslog not found

I am trying to configure accesslog. I have changed the slapd.conf file and am trying to test it with slaptest, but I am getting an error while executing slaptest -f /etc/openldap/slapd.conf.
slapd.conf configuration:
# See slapd.conf(5) for details on configuration options.
# This file should NOT be world readable.
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
.
.
modulepath /usr/lib/openldap/
moduleload accesslog.la
overlay accesslog
logdb "cn=accesslog"
logops writes
logsuccess TRUE
I am getting the error at the overlay accesslog line:
overlay "accesslog" not found
slaptest: bad configuration file!
Am I missing something?
I found the issue on my own: I had not compiled OpenLDAP with --enable-overlays.
To solve this issue:
I downloaded the OpenLDAP source
./configure --enable-overlays (./configure [options] [variable=value ...])
Then modify slapd.conf to load accesslog.la and execute slaptest -f /etc/openldap/slapd.conf; you won't see the error anymore.
Restart slapd.
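The full sequence, as a sketch (the source directory and service manager are assumptions; adjust to your system):
# assuming the OpenLDAP source tree is unpacked in openldap-src/
cd openldap-src
./configure --enable-overlays
make depend && make
make install                          # or your distribution's install step
slaptest -f /etc/openldap/slapd.conf  # should no longer report the missing overlay
systemctl restart slapd               # assuming systemd manages slapd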