Splunk REST API - Specify relative time range for alert

I want to create an alert using Splunk's REST API, and I want it to search events from the last two minutes. How can I do that?
This is my alert so far:
curl -k -u admin:password https://my.company:8089/servicesNS/admin/search/saved/searches \
-d name=test7 \
--data-urlencode output_mode='json' \
--data-urlencode actions='' \
--data-urlencode alert.digest_mode='1' \
--data-urlencode alert.expires='24h' \
--data-urlencode alert.managedBy='' \
--data-urlencode alert.severity='3' \
--data-urlencode alert.suppress='1' \
--data-urlencode alert.suppress.fields='' \
--data-urlencode alert.suppress.period='5m' \
--data-urlencode alert.track='1' \
--data-urlencode alert_comparator='greater than' \
--data-urlencode alert_condition='' \
--data-urlencode alert_threshold='10' \
--data-urlencode alert_type='number of events' \
--data-urlencode allow_skew='0' \
--data-urlencode cron_schedule='*/2 * * * *' \
--data-urlencode description='' \
--data-urlencode disabled='0' \
--data-urlencode displayview='' \
--data-urlencode is_scheduled='1' \
--data-urlencode is_visible='1' \
--data-urlencode max_concurrent='1' \
--data-urlencode realtime_schedule='1' \
--data-urlencode restart_on_searchpeer_add='1' \
--data-urlencode run_n_times='0' \
--data-urlencode run_on_startup='0' \
--data-urlencode schedule_priority='default' \
--data-urlencode schedule_window='0' \
--data-urlencode search='sourcetype="auth" failed'

The parameters you are looking for in Splunk's documentation are dispatch.earliest_time and dispatch.latest_time.
Here is your request with those parameters added; it will search for events from the last two minutes:
curl -k -u admin:password https://my.company:8089/servicesNS/admin/search/saved/searches \
-d name=test7 \
--data-urlencode output_mode='json' \
--data-urlencode actions='' \
--data-urlencode alert.digest_mode='1' \
--data-urlencode alert.expires='24h' \
--data-urlencode alert.managedBy='' \
--data-urlencode alert.severity='3' \
--data-urlencode alert.suppress='1' \
--data-urlencode alert.suppress.fields='' \
--data-urlencode alert.suppress.period='5m' \
--data-urlencode alert.track='1' \
--data-urlencode alert_comparator='greater than' \
--data-urlencode alert_condition='' \
--data-urlencode alert_threshold='10' \
--data-urlencode alert_type='number of events' \
--data-urlencode allow_skew='0' \
--data-urlencode cron_schedule='*/2 * * * *' \
--data-urlencode description='' \
--data-urlencode disabled='0' \
--data-urlencode displayview='' \
--data-urlencode is_scheduled='1' \
--data-urlencode is_visible='1' \
--data-urlencode max_concurrent='1' \
--data-urlencode realtime_schedule='1' \
--data-urlencode restart_on_searchpeer_add='1' \
--data-urlencode run_n_times='0' \
--data-urlencode run_on_startup='0' \
--data-urlencode schedule_priority='default' \
--data-urlencode schedule_window='0' \
--data-urlencode dispatch.earliest_time='-2m' \
--data-urlencode dispatch.latest_time='now' \
--data-urlencode search='sourcetype="auth" failed'
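To confirm the time window took effect, you can read the saved search back and inspect the dispatch fields. A quick sketch, assuming jq is installed and the search was created in the same admin/search namespace:
curl -k -u admin:password \
  "https://my.company:8089/servicesNS/admin/search/saved/searches/test7?output_mode=json" \
  | jq '.entry[0].content["dispatch.earliest_time"], .entry[0].content["dispatch.latest_time"]'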

Related

Passing a date -d to sed command

I have a script that looks like this:
function tvpsport(){
wget -qO- 'https://sport.tvp.pl/'|
grep -i -B 9 'video_id'|
grep 'title\|broadcast_start\|video'|
sed -e 's/"//g' \
-e 's/,$//g' \
-e 's/\\u2013/-/g' \
-e 's/\\u0105/ą/g' \
-e 's/\\u0119/ę/g' \
-e 's/\\u0142/ł/g' \
-e 's/\\u00f3/ó/g' \
-e 's/\\u015a/Ś/g' \
-e 's/\\u017a/ź/g' \
-e 's/\\u017c/ż/g' \
-e 's/\\u0144/ń/g' \
-e 's/title : //g' \
-e "s/broadcast_start : /$(date -d) /g" \
-e 's/video_id : /https:\/\/tvp.pl\/sess\/TVPlayer2\/embed.php?ID=/g'
}
which gives this output:
Title of transmission
date -d {time format}
a link to transmission
I want to convert that date format to something human-readable (e.g. from 161145246842 to dd/mm/YYYY)
and output like :
date (dd/mm/YYYY) title of transmission
a link to a transmission
I tried to convert the date on this line:
-e 's/broadcast_start : /date -d /g' \
but with no luck.
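sed cannot execute date for each match, so it is easier to convert the timestamps in a separate pass. A minimal sketch, assuming the broadcast_start substitution is changed to just strip the prefix (-e 's/broadcast_start : //g') so those lines are bare Unix-epoch digits; if the values turn out to be in milliseconds, divide by 1000 first:
tvpsport | while IFS= read -r line; do
  if [[ $line =~ ^[0-9]+$ ]]; then
    # GNU date: "@epoch" input, dd/mm/YYYY output (e.g. @1611452468 -> 24/01/2021)
    date -d "@${line}" +%d/%m/%Y
  else
    printf '%s\n' "$line"   # titles and links pass through unchanged
  fi
done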

The treeview is not displayed in HTML [doxygen 1.8.17]

I use the same configuration file on Windows 10 and Ubuntu 20.04 to generate HTML, but on Ubuntu 20.04 the treeview is not displayed in index.html.
I need the treeview to navigate to the other pages.
How can I solve this problem?
Thanks for all your help.
Reply to @Albert:
My Doxyfile settings as below:
# Difference with default Doxyfile 1.8.17
PROJECT_NAME = PROJECT
PROJECT_NUMBER = v050051
PROJECT_BRIEF = "Test"
PROJECT_LOGO = ./logo.png
OUTPUT_DIRECTORY = .
OPTIMIZE_OUTPUT_FOR_C = YES
EXTRACT_ALL = YES
CASE_SENSE_NAMES = NO
HIDE_SCOPE_NAMES = YES
INPUT = . \
../../common/api.h
FILE_PATTERNS = *.c \
*.cc \
*.cxx \
*.cpp \
*.c++ \
*.java \
*.ii \
*.ixx \
*.ipp \
*.i++ \
*.inl \
*.idl \
*.ddl \
*.odl \
*.h \
*.hh \
*.hxx \
*.hpp \
*.h++ \
*.l \
*.cs \
*.d \
*.php \
*.php4 \
*.php5 \
*.phtml \
*.inc \
*.m \
*.markdown \
*.md \
*.mm \
*.dox \
*.py \
*.pyw \
*.f90 \
*.f95 \
*.f03 \
*.f08 \
*.f18 \
*.f \
*.for \
*.vhd \
*.vhdl \
*.ucf \
*.qsf \
*.ice
RECURSIVE = YES
EXAMPLE_PATH = ../../tools/test.c
EXAMPLE_RECURSIVE = YES
SOURCE_BROWSER = YES
DISABLE_INDEX = YES
GENERATE_TREEVIEW = YES
MATHJAX_RELPATH =
GENERATE_LATEX = NO
GENERATE_XML = YES
CLASS_DIAGRAMS = NO
CALL_GRAPH = YES
CALLER_GRAPH = YES
I also compared the logs between Windows 10 and Ubuntu; the difference is as follows:

Why do I get a PayPal API authorization error?

First, I authorize with this command:
curl -v https://api.sandbox.paypal.com/v1/oauth2/token \
-H "Accept: application/json" \
-H "Accept-Language: en_US" \
-u "client_id:secret" \
-d "grant_type=client_credentials"
Then I get an access token. With the access token I run this command:
curl -v -X GET https://api.sandbox.paypal.com/v1/invoicing/invoices?page=1 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer MY_TOKEN"
But I get this error:
{"name":"AUTHORIZATION_ERROR","message":"Authorization error occurred.","information_link":"https://developer.paypal.com/docs/api/invoicing/#errors","debug_id":"75bc8ac7b89e1"}
Any ideas why? Most of the commands give me the same error, but this command works fine:
curl -v -X POST https://api.sandbox.paypal.com/v2/checkout/orders \
-H "Content-Type: application/json" \
-H "Authorization: Bearer MY-TOKEN" \
-d '{
  "intent": "CAPTURE",
  "purchase_units": [
    {
      "amount": {
        "currency_code": "USD",
        "value": "100.00"
      }
    }
  ]
}'
Any idea what I'm missing here?
Thanks in advance.
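As a side note on the two-step flow, the token request and the API call can be chained in one script; the OAuth response carries the token in the access_token field. A sketch, assuming jq is available (this does not by itself fix the AUTHORIZATION_ERROR, which is worth checking against the app's enabled features, e.g. whether Invoicing is turned on for the sandbox app in the developer dashboard):
# fetch a token and reuse it in one go
TOKEN=$(curl -s https://api.sandbox.paypal.com/v1/oauth2/token \
  -H "Accept: application/json" \
  -H "Accept-Language: en_US" \
  -u "client_id:secret" \
  -d "grant_type=client_credentials" | jq -r .access_token)
curl -v "https://api.sandbox.paypal.com/v1/invoicing/invoices?page=1" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN"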

Splunk REST API - How to add an extra field to a saved search?

I want to create an alert that has an additional "Selected Field": uri_path. I don't know how to add the field as a "selected field". How can I do that?
This is my current code:
curl -k -u admin:password https://splunk.rf:8089/servicesNS/admin/search/saved/searches \
-d name=http1 \
--data-urlencode output_mode='json' \
--data-urlencode actions='' \
--data-urlencode alert.digest_mode='0' \
--data-urlencode alert.expires='24h' \
--data-urlencode alert.managedBy='' \
--data-urlencode alert.severity='4' \
--data-urlencode alert.suppress='1' \
--data-urlencode alert.suppress.fields='uri_path' \
--data-urlencode alert.suppress.period='5m' \
--data-urlencode alert.track='1' \
--data-urlencode alert_comparator='greater than' \
--data-urlencode alert_condition='' \
--data-urlencode alert_threshold='10' \
--data-urlencode alert_type='number of events' \
--data-urlencode allow_skew='0' \
--data-urlencode cron_schedule='*/2 * * * *' \
--data-urlencode description='' \
--data-urlencode disabled='0' \
--data-urlencode displayview='' \
--data-urlencode is_scheduled='1' \
--data-urlencode is_visible='1' \
--data-urlencode max_concurrent='1' \
--data-urlencode realtime_schedule='1' \
--data-urlencode restart_on_searchpeer_add='1' \
--data-urlencode run_n_times='0' \
--data-urlencode run_on_startup='0' \
--data-urlencode schedule_priority='default' \
--data-urlencode schedule_window='0' \
--data-urlencode dispatch.earliest_time='-2m' \
--data-urlencode dispatch.latest_time='now' \
--data-urlencode search='sourcetype="auth" failed'
The parameter you are looking for is display.events.fields; it adds the field to the "selected fields" list.
Here is your code with the parameter added:
curl -k -u admin:password https://splunk.rf:8089/servicesNS/admin/search/saved/searches \
-d name=http1 \
--data-urlencode output_mode='json' \
--data-urlencode actions='' \
--data-urlencode alert.digest_mode='0' \
--data-urlencode alert.expires='24h' \
--data-urlencode alert.managedBy='' \
--data-urlencode alert.severity='4' \
--data-urlencode alert.suppress='1' \
--data-urlencode alert.suppress.fields='uri_path' \
--data-urlencode alert.suppress.period='5m' \
--data-urlencode alert.track='1' \
--data-urlencode alert_comparator='greater than' \
--data-urlencode alert_condition='' \
--data-urlencode alert_threshold='10' \
--data-urlencode alert_type='number of events' \
--data-urlencode allow_skew='0' \
--data-urlencode cron_schedule='*/2 * * * *' \
--data-urlencode description='' \
--data-urlencode disabled='0' \
--data-urlencode displayview='' \
--data-urlencode is_scheduled='1' \
--data-urlencode is_visible='1' \
--data-urlencode max_concurrent='1' \
--data-urlencode realtime_schedule='1' \
--data-urlencode restart_on_searchpeer_add='1' \
--data-urlencode run_n_times='0' \
--data-urlencode run_on_startup='0' \
--data-urlencode schedule_priority='default' \
--data-urlencode schedule_window='0' \
--data-urlencode dispatch.earliest_time='-2m' \
--data-urlencode dispatch.latest_time='now' \
--data-urlencode display.events.fields='["host","source","sourcetype","uri_path"]' \
--data-urlencode search='sourcetype="auth" failed'
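Note that the request above creates a new saved search. If http1 already exists, the saved/searches endpoint also lets you update a single attribute by POSTing to the search's own URL; a sketch, assuming the same credentials and namespace:
curl -k -u admin:password \
  https://splunk.rf:8089/servicesNS/admin/search/saved/searches/http1 \
  --data-urlencode display.events.fields='["host","source","sourcetype","uri_path"]'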

How do you upload files/folders to Pydio Cells using the Pydio Cells API?

So far, these are the API calls that appear relevant to my goal of uploading or viewing files and folders via the API:
POST https://demo.pydio.com/a/tree/admin/list
POST https://demo.pydio.com/a/workspace
GET https://demo.pydio.com/a/config/datasource
GET https://demo.pydio.com/a/config/virtualnodes/
Pydio Cells API Documentation
https://pydio.com/en/docs/developer-guide/cells-api
Cells provides an S3 API to interact with data. Uploading/downloading with curl is divided into two steps:
1. Get a JWT
2. Upload/download
You can use the following bash scripts:
./cells-download.sh CELL_IP:PORT USER PASSWORD CLIENT_SECRET FILENAME WORKSPACE_SLUG/PATH NEW_NAME_AFTER_DOWNLOAD
./cells-upload.sh CELL_IP:PORT USER PASSWORD CLIENT_SECRET ABS_PATH_FILE NEW_NAME WORKSPACE_SLUG/PATH
CLIENT_SECRET can be found in /home/pydio/.config/pydio/cells/pydio.json under dex >> staticClients >> Secret.
cells-download.sh
=============================
#!/bin/bash
HOST=$1
CELLS_FRONT="cells-front"
CELLS_FRONT_PWD=$4
ADMIN_NAME=$2
ADMIN_PWD=$3
FILE=$5
DEST=$6
NEW_NAME=$7
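# NOTE: echo appends a newline before base64 encodes the credentials; the
# 4-character trim below strips that trailing group, which only works for
# certain credential lengths. printf '%s' ... | base64 would be more robust.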
AUTH_STRING=$(echo cells-front:$CELLS_FRONT_PWD | base64)
AUTH_STRING=${AUTH_STRING::-4}
JWT=$(curl -s --request POST \
--url http://$HOST/auth/dex/token \
--header "Authorization: Basic $AUTH_STRING" \
--header 'Cache-Control: no-cache' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data "grant_type=password&username=$ADMIN_NAME&password=$ADMIN_PWD&scope=email%20profile%20pydio%20offline&nonce=123abcsfsdfdd" | jq '.id_token')
JWT=$(echo $JWT | sed "s/\"//g")
#!/bin/bash -e
#
# Copyright 2014 Tony Burns
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Upload a file to AWS S3.
file="${5}"
bucket="io"
prefix="io/$DEST"
region="us-east-1"
timestamp=$(date -u "+%Y-%m-%d %H:%M:%S")
content_type="application/octet-stream"
#signed_headers="date;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
signed_headers="host;x-amz-content-sha256;x-amz-date"
if [[ $(uname) == "Darwin" ]]; then
iso_timestamp=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%d")
date_header=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%a, %d %h %Y %T %Z")
else
iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
fi
payload_hash() {
# empty string
echo "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}
canonical_request() {
echo "GET"
echo "/${prefix}/${file}"
echo ""
echo "host:$HOST"
echo "x-amz-content-sha256:$(payload_hash)"
echo "x-amz-date:${iso_timestamp}"
echo ""
echo "${signed_headers}"
printf "$(payload_hash)"
}
canonical_request_hash() {
local output=$(canonical_request | shasum -a 256)
echo "${output%% *}"
}
string_to_sign() {
echo "AWS4-HMAC-SHA256"
echo "${iso_timestamp}"
echo "${date_scope}/${region}/s3/aws4_request"
printf "$(canonical_request_hash)"
}
AWS_SECRET_ACCESS_KEY="gatewaysecret"
signature_key() {
local secret=$(printf "AWS4${AWS_SECRET_ACCESS_KEY}" | hex_key)
local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
local region_key=$(printf ${region} | hmac_sha256 "${date_key}" | hex_key)
local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}
hex_key() {
xxd -p -c 256
}
hmac_sha256() {
local hexkey=$1
openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}
signature() {
string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}
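# The JWT obtained above serves as the AWS access key id in the SigV4
# Credential below; the matching signing secret is the gateway's static
# "gatewaysecret" defined earlier.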
curl \
-H "Authorization: AWS4-HMAC-SHA256 Credential=${JWT}/${date_scope}/${region}/s3/aws4_request,SignedHeaders=${signed_headers},Signature=$(signature)" \
-H "Host: $HOST" \
-H "Date: ${date_header}" \
-H "x-amz-acl: public-read" \
-H 'Content-Type: application/octet-stream' \
-H "x-amz-content-sha256: $(payload_hash)" \
-H "x-amz-date: ${iso_timestamp}" \
"http://$HOST/${prefix}/${file}" --output $NEW_NAME
=============================
cells-upload.sh
=============================
#!/bin/bash
HOST=$1
CELLS_FRONT="cells-front"
CELLS_FRONT_PWD=$4
ADMIN_NAME=$2
ADMIN_PWD=$3
FILE=$5
NEW_NAME=$6
DEST=$7
AUTH_STRING=$(echo cells-front:$CELLS_FRONT_PWD | base64)
AUTH_STRING=${AUTH_STRING::-4}
JWT=$(curl -s --request POST \
--url http://$HOST/auth/dex/token \
--header "Authorization: Basic $AUTH_STRING" \
--header 'Cache-Control: no-cache' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data "grant_type=password&username=$ADMIN_NAME&password=$ADMIN_PWD&scope=email%20profile%20pydio%20offline&nonce=123abcsfsdfdd" | jq '.id_token')
JWT=$(echo $JWT | sed "s/\"//g")
#!/bin/bash -e
#
# Copyright 2014 Tony Burns
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Upload a file to AWS S3.
file="${5}"
bucket="io"
prefix="io/$DEST"
region="us-east-1"
timestamp=$(date -u "+%Y-%m-%d %H:%M:%S")
content_type="application/octet-stream"
#signed_headers="date;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
signed_headers="content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date"
if [[ $(uname) == "Darwin" ]]; then
iso_timestamp=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%Y%m%d")
date_header=$(date -ujf "%Y-%m-%d %H:%M:%S" "${timestamp}" "+%a, %d %h %Y %T %Z")
else
iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
fi
payload_hash() {
local output=$(shasum -ba 256 "$file")
echo "${output%% *}"
}
canonical_request() {
echo "PUT"
echo "/${prefix}/${NEW_NAME}"
echo ""
echo "content-type:${content_type}"
echo "host:$HOST"
echo "x-amz-acl:public-read"
echo "x-amz-content-sha256:$(payload_hash)"
echo "x-amz-date:${iso_timestamp}"
echo ""
echo "${signed_headers}"
printf "$(payload_hash)"
}
canonical_request_hash() {
local output=$(canonical_request | shasum -a 256)
echo "${output%% *}"
}
string_to_sign() {
echo "AWS4-HMAC-SHA256"
echo "${iso_timestamp}"
echo "${date_scope}/${region}/s3/aws4_request"
printf "$(canonical_request_hash)"
}
AWS_SECRET_ACCESS_KEY="gatewaysecret"
signature_key() {
local secret=$(printf "AWS4${AWS_SECRET_ACCESS_KEY}" | hex_key)
local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
local region_key=$(printf ${region} | hmac_sha256 "${date_key}" | hex_key)
local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}
hex_key() {
xxd -p -c 256
}
hmac_sha256() {
local hexkey=$1
openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}
signature() {
string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}
curl \
-T "${file}" \
-H "Authorization: AWS4-HMAC-SHA256 Credential=${JWT}/${date_scope}/${region}/s3/aws4_request,SignedHeaders=${signed_headers},Signature=$(signature)" \
-H "Host: $HOST" \
-H "Date: ${date_header}" \
-H "x-amz-acl: public-read" \
-H 'Content-Type: application/octet-stream' \
-H "x-amz-content-sha256: $(payload_hash)" \
-H "x-amz-date: ${iso_timestamp}" \
"http://$HOST/${prefix}/${NEW_NAME}"
It turns out my original thought that the Pydio Cells S3 buckets require an AWS account was wrong. Pydio Cells uses the same syntax/signing scheme (not 100% sure of the internals) as AWS buckets. The file system can be accessed over S3 through the Pydio endpoint https://demo.pydio.com/io, where io is the S3 bucket.
To test, I used Postman to first place a file named 'Query.sql' with content into the 'Personal Files' workspace.
Authorization: AWS Signature
AccessKey: the token returned when using OpenID Connect (the "id_token" contained in the body).
SecretKey: The demo uses the key: 'gatewaysecret'
Advanced Options:
AWS Region: defaults to 'us-east-1'. I didn't have to enter anything here, and it still worked when I set it to 'us-west-1'.
Service Name: 's3' - I found that this is required.
Session Token: I left this blank.
Create files using PUT. Download files using GET.
PUT https://demo.pydio.com/io/personal-files/Query.sql
The example below shows how to first create a file and then pull its content (i.e. download the file).
For the GET example, I manually placed a file named Query.sql into the Personal Files workspace on the demo.pydio.com server; the request accesses that file's data and/or downloads it.
GET https://demo.pydio.com/io/personal-files/Query.sql
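For completeness, modern curl can do the SigV4 signing itself, which reproduces the Postman setup above without hand-rolled signature functions. A minimal sketch, assuming curl >= 7.75 and that ID_TOKEN holds the id_token from the OpenID Connect response:
# upload (PUT), letting curl compute the AWS Signature Version 4
curl -T Query.sql "https://demo.pydio.com/io/personal-files/Query.sql" \
  --user "${ID_TOKEN}:gatewaysecret" \
  --aws-sigv4 "aws:amz:us-east-1:s3"
# download (GET) with the same signing parameters
curl "https://demo.pydio.com/io/personal-files/Query.sql" \
  --user "${ID_TOKEN}:gatewaysecret" \
  --aws-sigv4 "aws:amz:us-east-1:s3"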