Get InfluxDB v2 token only via the CLI after upgrading from version 1.x to 2.x - grafana

Upgraded InfluxDB from v1.8.10 to v2.3.0+SNAPSHOT.090f681737 with the command below
(https://docs.influxdata.com/influxdb/v2.3/upgrade/v1-to-v2/automatic-upgrade/)
influxd upgrade --username='admin' --password='adminpass1234' --org='test' --bucket='testbucket' --force
Now I am trying to get or create an InfluxDB token with CLI commands so I can run further commands (like creating buckets, users, etc.). To create a new token I need to pass an existing token, which right now is only available in the Influx dashboard; I would like to pass a username and password instead, but a token is the only option (https://docs.influxdata.com/influxdb/v2.3/security/tokens/create-token/)
influx v1 auth create --username admin
I don't want to go to the InfluxDB dashboard page and get the token from there, because my script should work automatically without depending on any manual actions in the UI. It's an Ansible script.
I don't know if there is any option that can help solve this problem.

A client config should have been created during the upgrade; you can list it, and the output contains the token:
influx config ls --json
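For example, a minimal sketch (assuming the upgrade created a config named "default" and that jq is available) of pulling the token out of that JSON and reusing it in a later command:
TOKEN=$(influx config ls --json | jq -r '.default.token')
influx bucket create --name newbucket --org test --token "$TOKEN"
If the config is marked active, the CLI picks its token up automatically, so --token can usually be omitted.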

Related

How can I go about using Hasura CLI to export metadata via Windows Active Directory Login?

Here is information about our technical environment:
Hasura GraphQL Current server version: v2.6.2-pro.1
Hasura CLI version 2.15.0
We log onto the Hasura GraphQL Web UI Console using our Windows Active Directory login (essentially Single Sign-On, SSO); therefore, we do not have an admin secret.
However, the official Hasura GraphQL technical tutorial guide only gives examples that show the admin secret being supplied as a Hasura CLI command-line argument (https://hasura.io/docs/latest/migrations-metadata-seeds/migrations-metadata-setup/)
hasura init demo-project --endpoint https://docs-demo.hasura.app --admin-secret mySecret
How can I go about using Hasura CLI to export metadata via Windows Active Directory Login? (I would be interested in Hasura CLI command line examples).
As of now you'll have to set an admin secret via environment variables and use that via the CLI. Please file a feature request via GitHub if you need this so we can get it tracked and prioritized.
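For example, a minimal sketch, assuming you can set an admin secret on the server and expose it through the HASURA_GRAPHQL_ADMIN_SECRET environment variable the CLI reads (the endpoint is the placeholder from the tutorial):
export HASURA_GRAPHQL_ADMIN_SECRET=mySecret
hasura metadata export --endpoint https://docs-demo.hasura.app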

Helm Registry Login to ACR

I am trying to use an OCI registry (ACR) to store my Helm charts. I have found ways to push and pull my charts, but I cannot log in to the registry in an OCI-native way.
At the moment I can log in via: az acr login --name myacr
I want to use helm registry login myacr.azurecr.io but this fails with
Error: Get https://myacr.azurecr.io/v2/: unauthorized: Application not registered with AAD.
What does this mean? Do I need to perform some setup between AAD and ACR?
Update
When I try helm registry login with my user account from AAD (username as name and password as password), I get the error above.
If I try to login with a service principal it works.
If I try to log in with the 00000000-0000-0000-0000-000000000000 account from this method, that also works.
I suspect there is something additional that needs to be done with user accounts but I am not sure what that is.
The issue you are hitting is caused by the authentication type. Here you can see all the available authentication methods; the user account is not among them, which means it is not currently supported. Right now, the supported, controllable authentication method is the service principal, which you can grant the appropriate permissions.
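For example, a minimal sketch of the service-principal login (the app ID and password are placeholders for a service principal that has been granted an ACR role such as AcrPull or AcrPush; older Helm 3 releases also need export HELM_EXPERIMENTAL_OCI=1):
helm registry login myacr.azurecr.io --username <service-principal-app-id> --password <service-principal-password>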
Not really an answer, but you might want to follow https://github.com/Azure/azure-cli/issues/14467
As also commented there, I downgraded to Helm v3.3.3 and proceeded to authenticate/add the ACR repo with the following command:
az acr helm repo add --name <containerregistry>

How to provide credentials for s3 bucket provider to Velero when creating a new backup-location

Velero is installed in the cluster. At installation, Velero was given credentials for the s3 provider with the --secret-file parameter, and everything works fine.
Now I would like to create a new backup-location which will use buckets from a different s3 provider. When creating a backup location I pass --config a key/value pair of s3Url=... But I can't seem to find a way to pass credentials to the new s3 provider. The --secret-file flag that worked at install is not accepted.
As a result, when calling backup create with the new --storage-location, the backup is created with a Failed status.
My question is: how can I give Velero a way to authenticate itself with the new s3 provider? Is it even possible to create a new backup-location using an s3 provider different from the one used at Velero install?
As the documentation says:
Velero only supports a single set of credentials per provider. It's not yet possible to use different credentials for different locations, if they're for the same provider.
So I am afraid it is not possible at the moment to add new credentials.
As far as I can see from the Velero Issue Tracker, they are working on fixing that issue, but it still hasn't been properly implemented.
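For reference, a minimal sketch of creating an additional location that therefore has to reuse the provider's existing credentials (the provider, bucket and s3Url values are placeholders):
velero backup-location create secondary --provider aws --bucket my-other-bucket --config region=us-east-1,s3Url=https://s3.other-provider.example.com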

Google cloud credentials totally hosed after attempting to set up boto

I had a gcloud user authenticated and was running gsutil fine from the command line (Windows 8.1). But I needed to access gsutil from a Python application, so I followed the instructions here:
https://cloud.google.com/storage/docs/xml-api/gspythonlibrary#credentials
I got as far as creating a .boto file, but now not only does my Python code fail (boto.exception.NoAuthHandlerFound: No handler was ready to authenticate.), I also can't run gsutil from the command line any more. I get this error:
C:\>gsutil ls
You are attempting to access protected data with no configured
credentials. Please visit https://cloud.google.com/console#/project
and sign up for an account, and then run the "gcloud auth login"
command to configure gsutil to use these credentials.
I have run gcloud auth and it appears to work, I can query my users:
C:\>gcloud auth list
Credentialed Accounts:
- XXXserviceuser#XXXXX.iam.gserviceaccount.com ACTIVE
- myname#company.name
To set the active account, run:
$ gcloud config set account `ACCOUNT`
I have tried both with the account associated with my email active, and with the new service user account (created following the instructions above). Same "protected data with no configured credentials." error. I tried removing the .boto file, and adding the secret CLIENT_ID and CLIENT_SECRET to my .boto file.
Anyone any ideas what the issue could be?
So I think the latest documentation/examples showing how to use (and authenticate with) Google Cloud Storage via Python are in this repo:
https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/storage/api
That just works for me without messing around with keys and service users.
It would be nice if there were a comment somewhere in the old gspythonlibrary docs pointing this out.
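If it helps, the samples in that repo build their client from Application Default Credentials rather than from a .boto file; assuming a reasonably recent gcloud, something like the following establishes those on the machine:
gcloud auth application-default login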

How to retrieve API key for IBM Containers after first usage

The API key, used in 'ice login -k xxx', was shown when I started to use the IBM Containers beta. After that, where can I retrieve my API key in case I forgot it?
If you have the latest version of the CLI you just need to type ice login and it will log you in to the Containers service. The CLI will authenticate you with your Bluemix username and password.
To check which version of the CLI you have installed, type the following.
[09:18 PM]>pip list | grep icecli
icecli (2.0)
As of this writing the latest version is 2.0.
The latest installer as of this writing can be fetched from https://static-ice.ng.bluemix.net/icecli-2.0.zip.
The 'ice login' command does not require the -k option any more. If an unauthorized error is shown after 'ice login', delete the 'x_auth_token' and 'api_key' lines from ~/.ice/ice-cfg.ini and try 'ice login' again; everything should work fine.
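For example, a minimal sketch of that cleanup, assuming the two entries sit on their own lines in the config file:
sed -i '/^x_auth_token/d; /^api_key/d' ~/.ice/ice-cfg.ini
ice login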