I have a production Graphite dashboard. I've saved some graphs under the tag abc so that they can be accessed at http://prod-graphite.com/dashboard/abc.
I have another dashboard for staging hosted on a different server. Let's say the URL is http://staging-graphite.com/dashboard/.
I want to copy all the graphs from the prod /abc dashboard to staging, as I don't want to go through the trouble of creating 20 graphs again. I've tried the Copy Dashboard feature provided by Graphite, but it doesn't work: nothing happens when I enter the prod URL. Any help?
GET/POST http://your.graphite.host/dashboard/load/YOUR_DASHBOARD_NAME - gives you a dump of the specified dashboard. It returns JSON with state as the root object, which holds the dashboard's structure.
POST http://your.graphite.host/dashboard/save/NEW_DASHBOARD_NAME - lets you save data as a new dashboard. It requires a state parameter containing the dashboard's structure.
One-liner that fetches the dump, prepares the body, and saves it:
curl -o- http://graphite.host/dashboard/load/DASH_NAME | \
python3 -c "import json,sys,urllib.parse;o=json.load(sys.stdin);print('state=%s' % urllib.parse.quote(json.dumps(o['state'])))" | \
curl -X POST http://graphite.host/dashboard/save/COPY_OF_DASH_NAME -d @-
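Since the goal is to copy from the prod host to the staging host, the same two endpoints can simply be pointed at different servers. A hedged sketch, assuming both servers expose the same /dashboard/load and /dashboard/save endpoints and need no extra authentication (the host names and the dashboard name abc come from the question):
# Dump the "abc" dashboard from prod and save it under the same name on staging
SRC=http://prod-graphite.com
DST=http://staging-graphite.com
DASH=abc
curl -s "$SRC/dashboard/load/$DASH" | \
python3 -c "import json,sys,urllib.parse;o=json.load(sys.stdin);print('state=%s' % urllib.parse.quote(json.dumps(o['state'])))" | \
curl -X POST "$DST/dashboard/save/$DASH" -d @-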
I have a large set of GCP Cloud Build triggers that I invoke via Cloud Scheduler, all running fine.
Now I want to invoke these triggers through an external API call and pass them dynamic parameters that vary in values and in the number of parameters.
I was able to start a trigger by sending an API request, but any JSON parameters I sent in the request were ignored.
Google describes substitution variables at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values. I define these variables in the cloudbuild.yaml file, however they were not propagated into my shell script from the API request.
I don't get any errors with authentication or authorization, so security doesn't seem to be the issue.
Is my idea supported at all, or do I need to resort to another solution such as running a GKE cluster with containers that expose their own API (a very heavyweight solution)?
We do something similar -- we migrated from Jenkins to GCB, but for some people we still need a nicer "UI" to start builds and pass variables.
I got scripts from here and adapted them to our own needs: https://medium.com/@nieldw/put-your-build-triggers-into-source-control-with-the-cloud-build-api-ed0c18d6fcac
Here is the REST API reference: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
For the script below, keep in mind you need the ID of the trigger you want to run (you can also get it by parsing the output of another REST API call; see the sketch after the script).
TRIGGER_ID=1
# we need to specify AT LEAST the branch name or the commit SHA (checked below)
BRANCH_OR_SHA=$2

# check whether we were given a branch name or a commit SHA
if [[ $BRANCH_OR_SHA =~ [0-9a-f]{5,40} ]]; then
    # it is a COMMIT_SHA
    COMMIT_SHA=$BRANCH_OR_SHA
    BRANCH_OR_SHA="\"commitSha\": \"$COMMIT_SHA\""
else
    # it is a BRANCH_NAME
    BRANCH_OR_SHA="\"branchName\": \"$BRANCH_OR_SHA\""
fi

# This is the request body we send to Google so it knows what to build.
# Here we override some substitution variables that are already set in the
# repo's default 'cloudbuild.yaml'.
cat <<EOF > request.json
{
    "projectId": "$PROJECT_ID",
    $BRANCH_OR_SHA,
    "substitutions": {
        "_MY_VAR_1": "my_value",
        "_MY_VAR_2": "my_value_2"
    }
}
EOF

# POST 'request.json' to the trigger's :run endpoint, authenticating with a
# gcloud access token
curl -X POST -T request.json -H "Authorization: Bearer $(gcloud config config-helper \
    --format='value(credential.access_token)')" \
    https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
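To find the trigger ID mentioned above, one option is to list the project's triggers through the same API and parse the output. This is only a rough sketch, assuming jq is installed and your account is allowed to list triggers (field names follow the projects.triggers.list response):
# List all build triggers for the project and print "ID NAME" pairs
curl -s -H "Authorization: Bearer $(gcloud config config-helper \
    --format='value(credential.access_token)')" \
    "https://cloudbuild.googleapis.com/v1/projects/$PROJECT_ID/triggers" | \
    jq --raw-output '.triggers[] | .id + " " + .name'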
I have tried this tutorial, but it didn't pick up the OSSEC logs (alerts, syslog, etc.); Kibana just shows me this message:
Couldn't find any Elasticsearch data
You'll need to index some data into Elasticsearch before you can create an index pattern.
I know there are tutorials like this one, but they require the Wazuh package and I don't want to use it; I just want to use pure OSSEC. My OSSEC and ELK apps are located on the same machine.
My question is: how can I integrate OSSEC with ELK? What configuration do I have to do before connecting OSSEC to ELK?
You need to load the data template so that Elasticsearch can understand the format of the alert data. You can use the one made by Wazuh, or you can download it and modify it to "make your own". If you go down this road you will eventually end up trying to rewrite Wazuh, which you don't need to do because it is open source. You can just download all the source files and do whatever you want with them.
Command to load template:
curl https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/elasticsearch/wazuh-elastic6-template-alerts.json | curl -XPUT 'http://localhost:9200/_template/wazuh' -H 'Content-Type: application/json' -d @-
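To confirm the template was actually installed, you can read it back from Elasticsearch (a quick check, assuming a default instance on localhost:9200):
# An empty response (or a 404) means the template is not installed
curl -s 'http://localhost:9200/_template/wazuh?pretty'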
Download Template:
https://raw.githubusercontent.com/wazuh/wazuh/3.2/extensions/elasticsearch/wazuh-elastic6-template-alerts.json
-OR-
You could just spin up a Docker container that is ready to go:
https://github.com/wazuh/docker-ossec-elk
I am running Bitbucket Server v4.14. I want to be able to get the diff of any two commits in a repo. I saw this posted at https://bitbucket.org/site/master/issues/4779/ability-to-diff-between-any-two-commits
However, the same does not work, probably because the version I am running is older. The compare page takes me directly to the diff across branches. I do not want any UI element; just a URL is fine.
Thanks
This feature is not available in Bitbucket Server 4.14.
You can get this data using the REST API. Try executing the following command:
curl -s --user USER:PASS --request GET https://BITBUCKET-SERVER/rest/api/1.0/projects/PROJECT/repos/REPOSITORY/commits?since=SINCE-COMMIT\&until=UNTIL-COMMIT | jq --raw-output '.values[] | .displayId+ " " + .author.name'
AFAIK there is only a REST API endpoint for this. There is a since parameter that you can pass; see the docs for more details.
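If you want the raw diff itself rather than the list of commits, the commit diff resource with that since parameter should do it. A sketch, assuming the endpoint is available on your 4.14 instance and that SINCE-COMMIT/UNTIL-COMMIT are placeholders for real commit hashes:
# Raw diff of UNTIL-COMMIT relative to SINCE-COMMIT
curl -s --user USER:PASS \
    "https://BITBUCKET-SERVER/rest/api/1.0/projects/PROJECT/repos/REPOSITORY/commits/UNTIL-COMMIT/diff?since=SINCE-COMMIT"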
I upload a file like this:
curl -u ${CREDS} --upload-file ${file} ${url}
Is there a way to add a body or headers that will set some metadata on the file, like a build number?
You can actually deploy artifacts with properties to Artifactory OSS using matrix parameters, for example:
curl -uadmin:password -T file.tar "http://localhost:8081/artifactory/generic-local/file.tar;foo=bar;"
And get the artifact properties using the REST API, for example:
curl -uadmin:password "http://localhost:8081/artifactory/api/storage/generic-local/file.tar?properties"
Viewing properties from the UI and other features are limited to the Pro edition.
Seems this is a Pro feature. Documentation: Set Item Properties
PUT /api/storage/{repoKey}{itemPath}?properties=p1=v1[,v2][|p2=v3][&recursive=1]
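As a concrete illustration of that endpoint (an untested sketch; the repository path and the build.number value are placeholders matching the earlier example), attaching a build number to the already-uploaded file could look like:
# Set a build.number property on an existing artifact (Pro feature, per the docs above)
curl -uadmin:password -X PUT \
    "http://localhost:8081/artifactory/api/storage/generic-local/file.tar?properties=build.number=42"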
Not helping me :-/
We use Confluence as a project wiki.
There are plenty of useful docs, tables, etc., but many of them can be up to date one day and outdated the next.
So it would be perfect to use (or implement?) a feature that re-uploads the needed file(s) into Confluence on a schedule. The files would, for example, be fetched from a Git repository.
My question is: does Confluence have such a feature (re-uploading files on a schedule)? If not, is there any way to implement this other than writing a new plugin?
If you have a CI server (like Jenkins), you can create a nightly job that uploads the new version of the docs using the Confluence REST API; a sketch of such a job follows the example below.
Extract from the doc:
A simple example to upload a file called "myfile.txt" to the Attachment with id "456" in a container with id "123", with the comment updated, and minorEdit set to true:
curl -D- -u admin:admin -X POST -H "X-Atlassian-Token: nocheck" -F "file=@myfile.txt" -F "minorEdit=true" \
    -F "comment=This is my updated File" http://myhost/rest/api/content/123/child/attachment/456/data
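A minimal sketch of such a nightly job (everything here is a placeholder: the Git repo URL, the page and attachment IDs, the credentials, and the script path in the crontab line), reusing the endpoint from the extract above:
#!/usr/bin/env bash
# Hypothetical nightly job: pull the latest docs from Git and re-upload one file
# as a new version of an existing Confluence attachment.
set -euo pipefail

git clone --depth 1 https://example.com/docs-repo.git /tmp/docs-repo
cd /tmp/docs-repo

curl -D- -u admin:admin -X POST -H "X-Atlassian-Token: nocheck" \
    -F "file=@myfile.txt" -F "minorEdit=true" \
    -F "comment=Automated nightly refresh" \
    http://myhost/rest/api/content/123/child/attachment/456/data

# Example crontab entry to run this every night at 02:00:
# 0 2 * * * /usr/local/bin/refresh-confluence-docs.sh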