I have an issue with creating tokens for hashicorp-vault using salt.
Create token:
$ curl --header "X-Vault-Token: f3821c23-4558-72db-8739-bbf7ac4b90d1" \
--request POST \
--data @create_token.json \
http://127.0.0.1:8200/v1/auth/token/create
{"request_id":"72ba8117-fcb8-506d-f1c4-fe0e5e0f5cbf","lease_id":"","renewable":false,"lease_duration":0,"data":null,"wrap_info":null,"warnings":["Policy \"saltstack/minion/myhost\" does not exist"],"auth":{"client_token":"96bfd0f2-a10a-d966-2d46-3f803fb1d995","accessor":"8a0a296f-d19a-e01c-4782-0fbab06a6ebe","policies":["default","saltstack/minion/admin.p13","saltstack/minions"],"metadata":null,"lease_duration":2764800,"renewable":true,"entity_id":""}}
Create a child token using the client_token from the first operation:
$ curl --header "X-Vault-Token: 96bfd0f2-a10a-d966-2d46-3f803fb1d995" \
--request POST \
--data @test.json \
http://127.0.0.1:8200/v1/auth/token/create
{"errors":["parent token lookup failed"]}
Used payloads:
File create_token.json
{"policies": ["saltstack/minion/myhost", "saltstack/minions"], "num_uses":1}
File test.json
{"num_uses": 0, "policies": ["default", "myapp"], "ttl": "1h", "no_parent": true, "renewable": true, "metadata": {"user": "root"}}
Orphan tokens can only be created:
Via the auth/token/create-orphan endpoint
By having sudo capability or root policy when accessing auth/token/create and setting the orphan parameter to true
This implies that your initial token doesn't have the root policy associated with it, as you can see in your policies list:
"policies":["default","saltstack/minion/admin.p13","saltstack/minions"],"metadata":null,"lease_duration":2764800,"renewable":true,"entity_id":""}}
Besides, if you are using Salt, your master token must have the privileges to create tokens for the minions, which can then write secrets using vault.write_secret.
Using the GitHub API, I am trying to manually start a workflow using:
curl \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: MY_TOKEN" \
https://api.github.com/repos/djpr-data/djprdashdata/actions/workflows/refresh-data.yaml/dispatches
but I keep getting an authentication error:
{
"message": "Must have admin rights to Repository.",
"documentation_url": "https://docs.github.com/rest/reference/actions#create-a-workflow-dispatch-event"
}
This seems to be a similar issue to this question. But my PAT has all admin and repo scopes selected. I also have my user account set up as an admin for the repository, and I have added a workflow_dispatch trigger to the workflow YAML file.
workflow_dispatch:
  inputs:
    tags:
      description: "run from cmdline"
I have been following the docs at https://docs.github.com/en/rest/actions/workflows#create-a-workflow-dispatch-event and have had no problems using the API to retrieve all previous workflow jobs. I have also tried the runs and jobs endpoints but get the same error. So I am now not sure what else I can do. Is there somewhere else I need to set permissions?
Thanks
This is a poor error message to tell you that your request is not formed correctly. If you want to pass a PAT as a header, you need to prefix it with token, as described in the docs:
-H "Authorization: token MY_TOKEN"
Once that's resolved, however, you'll also get an error because you don't pass the required ref payload. Assuming your default branch is main, here's a correct curl command:
> export MY_TOKEN=gha_abcdef
> curl \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token $MY_TOKEN" \
-d '{"ref": "main"}' \
https://api.github.com/repos/djpr-data/djprdashdata/actions/workflows/refresh-data.yaml/dispatches
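If you also need to pass the tags input defined in the workflow file, the dispatch endpoint accepts an inputs object in the same payload; a sketch (the input value here is only an example):
> curl \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: token $MY_TOKEN" \
-d '{"ref": "main", "inputs": {"tags": "run from cmdline"}}' \
https://api.github.com/repos/djpr-data/djprdashdata/actions/workflows/refresh-data.yaml/dispatches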
I have integrated Jenkins CI with PagerDuty. Once I do that, I can see an integration key generated.
That will be used in Jenkins to send events to PagerDuty.
The requirement is to rotate the keys after some time. I want to automate this.
Is there any API to regenerate the integration key and return the key in the response so it can be stored in Jenkins?
I think the simplest solution here is to use the REST API -- it isn't possible to regenerate the integration key directly, but you can delete the integration and create a new one programmatically.
First fetch the service details:
curl --location --request GET 'https://api.pagerduty.com/services/<service_id>' \
--header 'Accept: application/vnd.pagerduty+json;version=2' \
--header 'Authorization: Bearer <bearer_token>'
This will include all of the integrations on the service -- make note of the integration_id and the vendor_id.
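If you want to script that step, here is a sketch using jq, assuming the response shape from the public docs (service.integrations[] entries carrying an id and a vendor reference):
SERVICE_JSON=$(curl --silent --location --request GET 'https://api.pagerduty.com/services/<service_id>' \
--header 'Accept: application/vnd.pagerduty+json;version=2' \
--header 'Authorization: Bearer <bearer_token>')
INTEGRATION_ID=$(echo "$SERVICE_JSON" | jq -r '.service.integrations[0].id')
VENDOR_ID=$(echo "$SERVICE_JSON" | jq -r '.service.integrations[0].vendor.id')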
The delete endpoint isn't documented but it does seem to exist:
curl --location --request DELETE 'https://api.pagerduty.com/services/<service_id>/integrations/<integration_id>' \
--header 'Accept: application/vnd.pagerduty+json;version=2' \
--header 'Authorization: Bearer <bearer_token>'
And finally you can create the new integration, using the vendor_id from the GET request:
curl --request POST \
--url https://api.pagerduty.com/services/<service_id>/integrations \
--header 'Accept: application/vnd.pagerduty+json;version=2' \
--header 'Authorization: Bearer <bearer_token>' \
--header 'Content-Type: application/json' \
--data '{
  "integration": {
    "type": "generic_email_inbound_integration",
    "name": "Email",
    "service": {
      "id": "<service_id>",
      "type": "service_reference"
    },
    "integration_email": "my-email-based-integration@subdomain.pagerduty.com",
    "vendor": {
      "type": "vendor_reference",
      "id": "<vendor_id>"
    }
  }
}'
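Since the goal is to store the new key in Jenkins, you can capture it from the create response; a sketch assuming jq, a payload file, and that your (non-email) integration type returns an integration.integration_key field -- worth verifying against your actual response:
# new_integration.json is a hypothetical file holding the create payload for your integration
NEW_KEY=$(curl --silent --request POST \
--url https://api.pagerduty.com/services/<service_id>/integrations \
--header 'Accept: application/vnd.pagerduty+json;version=2' \
--header 'Authorization: Bearer <bearer_token>' \
--header 'Content-Type: application/json' \
--data @new_integration.json \
| jq -r '.integration.integration_key')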
Inspecting the UI button with the browser's developer tools shows that it executes this POST API:
https://xxxxxxx.pagerduty.com/api/v1/services/XXXXXXX/integrations/XXXXXXX/regenerate_key
I am trying to add a custom code check for a PR. After doing some research I found out that it can be done using the API mentioned below.
POST /repos/{owner}/{repo}/check-runs
Initially, it was giving me this error:
{
"message": "You must authenticate via a GitHub App.",
"documentation_url": "https://docs.github.com/rest/reference/checks#create-a-check-run"
}
I followed the guideline provided in this link.
I created a GitHub app.
Gave it required permission.
Generated a private key.
Generated a JWT token using the private key.
Installed the GitHub App in the repo too.
I created a curl request:
curl --location --request POST 'https://api.github.com/repos/X/X-app/check-runs' \
--header 'Accept: application/vnd.github.v3+json' \
--header 'Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.X.X-X-SAFvDnSkaJDjMI2T_BAC2iLlRZ7uNyFSe-X-UgFBFjoFrwsbcYFKfDM8f3FNPYpA6afhr18DLZ6rzu35klA' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "loremipsum"
}'
But, now I am getting this error
{
"message": "Bad credentials",
"documentation_url": "https://docs.github.com/rest"
}
I am not sure what I am missing here.
I figured this out. The GH documentation is a bit unclear/misleading. Here are the steps to make this work:
with the JWT bearer token, list your installations and note the installation id for your app
$ curl -i \
-H "Authorization: Bearer YOUR_JWT" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/app/installations
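If you want to script it, a sketch that grabs the first installation's id with jq (assumes a single installation; the endpoint returns an array):
$ INSTALLATION_ID=$(curl -s \
-H "Authorization: Bearer YOUR_JWT" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/app/installations | jq -r '.[0].id')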
then get an installation access token for the above id
$ curl -i -X POST \
-H "Authorization: Bearer YOUR_JWT" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/app/installations/:installation_id/access_tokens
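Similarly, the installation token can be captured from that response (the field is named token; jq assumed again):
$ INSTALLATION_TOKEN=$(curl -s -X POST \
-H "Authorization: Bearer YOUR_JWT" \
-H "Accept: application/vnd.github.v3+json" \
https://api.github.com/app/installations/:installation_id/access_tokens | jq -r '.token')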
then with that token create the check run, but use the "Authorization: token" header
curl -i -H "Authorization: token YOUR_INSTALLATION_ACCESS_TOKEN"
I want to export a database and import the output into another database programmatically. This is what I have so far:
gcloud sql export sql instance_name gs://bucketname/db.gz --database=db_name
gcloud sql databases create new_db --instance=instance_name
gcloud sql import sql instance_name gs://bucketname/db.gz --database=new_db
Created database [new_db].
instance: instance_name
Data from [gs://bucketname/db.gz]
will be imported to [instance_name].
Do you want to continue (Y/n)
As you can see the prompt is the issue.
How can I import it without being prompted? Is there another way to import an export?
You can use the --quiet (-q) parameter when running your gcloud command, as shown below:
gcloud sql import sql instance_name gs://bucketname/db.gz --database=new_db -q
The official gcloud reference documentation contains the following explanation of this parameter, in case you want to take a look at it:
--quiet, -q
Disable all interactive prompts when running gcloud commands. If input
is required, defaults will be used, or an error will be raised.
Overrides the default core/disable_prompts property value for this
command invocation. Must be used at the beginning of commands. This is
equivalent to setting the environment variable
CLOUDSDK_CORE_DISABLE_PROMPTS to 1.
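Putting it together, the whole sequence from the question can run non-interactively; a sketch using the same placeholder names:
#!/bin/bash
set -e  # stop at the first failing step
gcloud sql export sql instance_name gs://bucketname/db.gz --database=db_name -q
gcloud sql databases create new_db --instance=instance_name -q
gcloud sql import sql instance_name gs://bucketname/db.gz --database=new_db -q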
Additionally, as an alternative, you can perform the import/export tasks by calling the REST API directly with cURL; you just need to send authorized requests to the service.
Importing:
ACCESS_TOKEN="$(gcloud auth application-default print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{"importContext":
{"fileType": "SQL",
"uri": "gs://[BUCKET_NAME]/[PATH_TO_DUMP_FILE]",
"database": "[DATABASE_NAME]" }}' \
-X POST \
https://www.googleapis.com/sql/v1beta4/projects/[PROJECT-ID]/instances/[INSTANCE_NAME]/import
Exporting:
ACCESS_TOKEN="$(gcloud auth application-default print-access-token)"
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header 'Content-Type: application/json' \
--data '{"exportContext":
{"fileType": "SQL",
"uri": "gs://<BUCKET_NAME>/<PATH_TO_DUMP_FILE>",
"databases": ["<DATABASE_NAME1>", "<DATABASE_NAME2>"] }}' \
-X POST \
https://www.googleapis.com/sql/v1beta4/projects/[PROJECT-ID]/instances/[INSTANCE_NAME]/export
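Both calls return an operation rather than waiting for completion; if you script this, you can poll it (a sketch; [OPERATION_ID] comes from the "name" field of the response above):
curl --header "Authorization: Bearer ${ACCESS_TOKEN}" \
https://www.googleapis.com/sql/v1beta4/projects/[PROJECT-ID]/operations/[OPERATION_ID]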
The GitHub API provides a lot of functionality, but is there a way to retrieve the build status for a commit? The GitHub UI shows information from the CI system we have configured, but I can't see this information exposed through the API.
You can access the statuses for a particular ref:
GET https://api.github.com/repos/:owner/:repo/commits/:ref/statuses
For the value of :ref, you can use a SHA, a branch name, or a tag name.
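For example, with authentication (only needed for private repositories), a sketch:
curl -H "Authorization: token <MY_TOKEN>" \
https://api.github.com/repos/:owner/:repo/commits/:ref/statuses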
The API doesn't compute a build status itself, but it lets you create one.
That means the CI can have a final build step which publishes the status to the GitHub repo that way:
POST /repos/:owner/:repo/statuses/:sha
For example:
{
"state": "success",
"target_url": "https://example.com/build/status",
"description": "The build succeeded!",
"context": "continuous-integration/jenkins"
}
(and that, for a given SHA1)
See for instance "Github Commit Status API with Bamboo from Atlassian", where:
${bamboo.repository.revision.number} is the GitHub commit SHA1, and ${bamboo.buildResultsUrl} is the Bamboo build result page used as target_url;
<xxx> is a placeholder, which can be replaced by a literal value or a variable ${var} as shown here.
Add those to your plan as Script.
complete.sh:
# specs and cukes results are stored in JUnit format under test-reports
if (grep 'failures="[^0]"' test-reports/* || \
    grep 'errors="[^0]"' test-reports/*); then
  curl -H "Authorization: token <MY_TOKEN>" --request POST \
    --data '{"state": "failure", "description": "Failed!", "target_url": "${bamboo.buildResultsUrl}"}' \
    https://api.github.com/repos/<USER>/<REPO>/statuses/${bamboo.repository.revision.number} > /dev/null
else
  curl -H "Authorization: token <MY_TOKEN>" --request POST \
    --data '{"state": "success", "description": "Success!", "target_url": "${bamboo.buildResultsUrl}"}' \
    https://api.github.com/repos/<USER>/<REPO>/statuses/${bamboo.repository.revision.number} > /dev/null
fi
pending.sh:
curl -H "Authorization: token <MY_TOKEN>" --request POST \
--data '{"state": "pending", "description": "Build is running", \
"target_url": "${bamboo.buildResultsUrl}"}' \
https://api.github.com/repos/<USER>/<REPO>/statuses/${bamboo.repository.revision.number} > /dev/null