SumoLogic dashboards - how do I automate? - infrastructure-as-code

I am getting some experience with SumoLogic dashboards and alerting, and I would like to keep as much of the configuration as possible in code. Does anyone have experience with automating SumoLogic configuration? At the moment I am using Ansible for general server and infrastructure provisioning.
Thanks for all info!
Best Regards,
Rafal.

(The dashboards, alerts, etc. are referred to as Content in Sumo Logic parlance)
You can use the Content Management API, especially the content-import-job. I am not an expert in Ansible, but I am not aware of any way to plug that API into Ansible.
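For example, importing exported dashboard JSON into a folder is an asynchronous job that you start and then poll. A rough sketch with curl, assuming an access ID/key pair, a folder ID and a file dashboard.json (all placeholder names); the API host depends on your deployment (api.sumologic.com, api.au.sumologic.com, etc.):
# start an import job for the folder; the response contains a job id
job_id=$(curl -s -u "$SUMO_ACCESS_ID:$SUMO_ACCESS_KEY" \
  -X POST "https://api.sumologic.com/api/v2/content/folders/$FOLDER_ID/import" \
  -H "Content-Type: application/json" \
  -d @dashboard.json | jq -r '.id')

# poll the job status until it reports success or failure
curl -s -u "$SUMO_ACCESS_ID:$SUMO_ACCESS_KEY" \
  "https://api.sumologic.com/api/v2/content/folders/$FOLDER_ID/import/$job_id/status"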
Also there's a community Terraform provider for Sumo Logic and it supports content:
resource "sumologic_content" "test" {
parent_id = "%s"
config =
{
"type": "SavedSearchWithScheduleSyncDefinition",
"name": "test-333",
"search": {
"queryText": "\"warn\"",
"defaultTimeRange": "-15m",
[...]
Disclaimer: I am currently employed by Sumo Logic

Below is a shell script to import the dashboards. This example uses the SumoLogic AU instance, e.g. https://api.au.sumologic.com/api; change the API endpoint to match your deployment region.
Note: You can export all of your dashboards as JSON files.
#!/usr/bin/env bash
set -e
# if you are using AWS Parameter Store
# accessKey=$(aws ssm get-parameter --name path_to_your_key --with-decryption --query 'Parameter.Value' --region ap-southeast-2 | tr -d \")
# accessSecret=$(aws ssm get-parameter --name path_to_your_secret --with-decryption --query 'Parameter.Value' --region ap-southeast-2 | tr -d \")
# yourDashboardFolderName="xxxxx" # folder id in SumoLogic where you want to create the dashboards
# if you are using just a key and secret
accessKey="your_sumologic_key"
accessSecret="your_sumologic_secret"
yourDashboardFolderName="xxxxx" # folder id in SumoLogic where you want to create the dashboards
# place all the dashboard JSON files in the ./Sumologic/Dashboards folder
for f in $(find ./Sumologic/Dashboards -name '*.json'); do
  curl -X POST "https://api.au.sumologic.com/api/v2/content/folders/$yourDashboardFolderName/import" \
    -H "Content-Type: application/json" \
    -u "$accessKey:$accessSecret" \
    -d @"$f"
done

Related

Upload username and password to rundeck key storage using CLI / REST?

I want to use a username and password in Rundeck to run jobs on nodes instead of public/private keys. How do I do it?
The Rundeck CLI asks for the user and password by default; alternatively, you can define the environment variables RD_USER and RD_PASSWORD in your .bashrc file. Take a look at this (Credentials section).
Example:
export RD_USER=username
export RD_PASSWORD=password
Using the API, you can use the "cookie way" to access an endpoint; take a look at this.
And check the following example:
#!/bin/sh
# the first URL authenticates and stores the session cookie; the second URL reuses it
curl -v -c cookie -b cookie -d j_username=admin -d j_password=admin http://localhost:4440/j_security_check \
  -H "Accept: application/json" \
  http://hyperion:4440/api/31/system/info/

Mongo DB Atlas. Is it safe to whitelist all ip because someone attempting to access the database needs a password

I have a Google App Engine app with my Express server. I also have my DB in MongoDB Atlas. I currently have MongoDB Atlas whitelisting all IPs. The connection string is in the code for my Express server running on Google Cloud. Presumably any attacker trying to get into the database would still need a username and password for the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere; they must connect from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person on the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP ranges of your application, and add that range to the whitelist.
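For example, entries can be added to the project's IP access list over the Atlas Admin API rather than through the UI. A rough sketch, assuming an Atlas API key pair and project ID (variable names and the CIDR below are placeholders); the longer entrypoint script in the next answer automates the same call for a single, changing IP:
# add a known CIDR range to the Atlas project IP access list (v1.0 API, digest auth)
curl -s --user "$ATLAS_PUBLIC_KEY:$ATLAS_PRIVATE_KEY" --digest \
  -H "Content-Type: application/json" \
  -X POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/$ATLAS_PROJECT_ID/accessList" \
  -d '[{ "cidrBlock": "203.0.113.0/24", "comment": "app egress range" }]'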
here is an answer i left elsewhere. hope it helps someone who comes across this:
this script will be kept up to date on my gist
why
mongo atlas provides reasonably priced access to a managed mongo DB. CSPs where containers are hosted charge too much for their managed mongo DB offerings. they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster, which is obviously ridiculous.
this entrypoint script is surgical, to maintain least-privileged access: only the current hosted IP address of the service is whitelisted.
usage
set it as the entrypoint in the Dockerfile (see the sketch after this list)
run it from cloud-init / VM startup if not using a container (and delete the last line, exec "$@", since that is only needed for containers)
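a minimal sketch of the container wiring, assuming the script is saved as entrypoint.sh and an alpine-based node image (both assumptions; adapt to your app):
# assumed layout: entrypoint.sh is the whitelist script from the "script" section below
FROM node:18-alpine
RUN apk update && apk add --no-cache bash curl jq
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
WORKDIR /app
COPY . .
# the script whitelists the current IP, then exec's the CMD
ENTRYPOINT ["/entrypoint.sh"]
CMD ["node", "server.js"]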
behavior
uses the mongo atlas project IP access list endpoints
will detect the hosted IP address of the container and whitelist it with the cluster using the mongo atlas API
if the service has no whitelist entry it is created
if the service has an existing whitelist entry that matches current IP no change
if the service IP has changed the old entry is deleted and new one is created
when a whitelist entry is created the service sleeps for 60s to wait for atlas to propagate access to the cluster
env
setup
1. create API key for org
2. add API key to project
3. copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
4. go to project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
provide the following values in the env of the container service (for example via docker run -e, as sketched after this list)
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
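for example, if the service runs as a plain container, the values might be passed like this (image name and values are placeholders):
# pass the required env vars to the container at runtime
docker run -d \
  -e SERVICE_NAME="my-api" \
  -e MONGO_ATLAS_API_PK="<public key from step 3>" \
  -e MONGO_ATLAS_API_SK="<secret key from step 3>" \
  -e MONGO_ATLAS_API_PROJECT_ID="<project id from step 4>" \
  my-app-image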
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash
# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #
set -e
mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'
check_for_deps() {
  deps=(
    bash
    curl
    jq
  )
  for dep in "${deps[@]}"; do
    if [ ! "$(command -v "$dep")" ]; then
      echo "dependency [$dep] not found. exiting"
      exit 1
    fi
  done
}
make_mongo_api_request() {
  local request_method="$1"
  local request_url="$2"
  local data="$3"
  curl -s \
    --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
    --header "Accept: application/json" \
    --header "Content-Type: application/json" \
    --request "$request_method" "$request_url" \
    --data "$data"
}
get_access_list_endpoint() {
  echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}
get_service_ip() {
  echo -n "$(curl https://ipinfo.io/ip -s)"
}
get_previous_service_ip() {
  local access_list_endpoint=`get_access_list_endpoint`
  local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
    | jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
    '.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`
  echo "$previous_ip"
}
whitelist_service_ip() {
  local current_service_ip="$1"
  local comment="Hosted IP of [$SERVICE_NAME] [set#$(date +%s)]"
  if (( "${#comment}" > 80 )); then
    echo "comment field value will be above 80 char limit: \"$comment\""
    echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
    echo "change comment format or service name then retry. exiting to avoid mongo API failure"
    exit 1
  fi
  echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""
  response=`make_mongo_api_request \
    'POST' \
    "$(get_access_list_endpoint)?pretty=true" \
    "[
      {
        \"comment\" : \"$comment\",
        \"ipAddress\": \"$current_service_ip\"
      }
    ]" \
    | jq -r 'if .error then . else empty end'`
  if [[ -n "$response" ]]; then
    echo 'API error whitelisting service'
    echo "$response"
    exit 1
  else
    echo "whitelist request successful"
    echo "waiting 60s for whitelist to propagate to cluster"
    sleep 60s
  fi
}
delete_previous_service_ip() {
  local previous_service_ip="$1"
  echo "deleting previous service IP address of [$SERVICE_NAME]"
  make_mongo_api_request \
    'DELETE' \
    "$(get_access_list_endpoint)/$previous_service_ip"
}
set_mongo_whitelist_for_service_ip() {
  local current_service_ip=`get_service_ip`
  local previous_service_ip=`get_previous_service_ip`
  if [[ -z "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] has not yet been whitelisted"
    whitelist_service_ip "$current_service_ip"
  elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
    echo "service [$SERVICE_NAME] IP has not changed"
  else
    echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
    delete_previous_service_ip "$previous_service_ip"
    whitelist_service_ip "$current_service_ip"
  fi
}
check_for_deps
set_mongo_whitelist_for_service_ip
# run CMD
exec "$#"

how to push docker image via **rest** api given config

I want to create a new image in a remote docker registry by providing only partial data:
According to the docs
https://docs.docker.com/registry/spec/api/#pushing-an-image
in order to push a docker image, i can:
* post a tar layer that i have.
* post a manifest
and the registry will support my new image.
For example:
* I have locally a java app in a tar layer.
* The remote docker registry already has a java8 base image.
* I want to upload the tar layer and a manifest that references the java8 base image and have the docker registry support the new image for my app.
(The layer tar i get from a 3rd party build tool called Bazel if anyone cares)
From the docs i gather that i can take the existing java8 image manifest, download it, append (or pre-pend) my new layer to the layers section and voilà.
Looking at the manifest spec
https://docs.docker.com/registry/spec/manifest-v2-2/#image-manifest-field-descriptions
I see there's a "config object" section with digest as reference to config file. This makes sense, i may need to redefine the entrypoint for example. So suppose i also have a docker config in a file that i guess i need to let the registry know about somehow.
Nowhere (that i can see) in the API does it state where or how to upload the config or if i need to do that at all - maybe it's included in the layer tar or something.
Do i upload the config as a layer? is it included in the tar? if not why do i give a reference to it by digest?
Best answer i can hope for would be a sequence of http calls to a docker-registry that do what i'm trying. Alternatively just explaining what the config is, and how to go about it would be very helpful.
found the solution here:
https://www.danlorenc.com/posts/containers-part-2/
very detailed, great answer, don't know who you are but i love you!
From inspecting some configs from existing images, Docker seems to require a few fields:
{
  "architecture": "amd64",
  "config": {
  },
  "history": [
    {
      "created_by": "Bash!"
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:69e4bd05139a843cbde4d64f8339b782f4da005e1cae56159adfc92311504719"
    ]
  }
}
The config section can contain environment variables, the default CMD and ENTRYPOINT of your container and a few other settings. The rootfs section contains a list of layers and diff_ids that look pretty similar to our manifest. Unfortunately, the diff_ids are slightly different than the digests contained in our manifest; they're actually a sha256 of the 'uncompressed' layers.
We can create one with this script:
cat <<EOF > config.json
{
  "architecture": "amd64",
  "config": {
  },
  "history": [
    {
      "created_by": "Bash!"
    }
  ],
  "os": "linux",
  "rootfs": {
    "type": "layers",
    "diff_ids": [
      "sha256:$(gunzip layer.tar.gz --to-stdout | shasum -a 256 | cut -d' ' -f1)"
    ]
  }
}
EOF
Config Upload
Configs are basically stored by the registry as normal blobs. They get referenced differently in the manifest, but they’re still uploaded by their digest and stored normally.
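The upload script that follows references $config_digest (and $PROJECT), which are assumed to be set already; a minimal sketch of computing the digest, reusing the shasum approach from the config generation step (the variable name is an assumption):
# registry digests are "sha256:" plus the sha256 of the exact bytes being uploaded
config_digest="sha256:$(shasum -a 256 config.json | cut -d' ' -f1)"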
The same type of script we used for layers will work here:
returncode=$(curl -w "%{http_code}" -o /dev/null \
  -I -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  https://gcr.io/v2/$PROJECT/hello/blobs/$config_digest)
if [[ $returncode -ne 200 ]]; then
  # Start the upload and get the location header.
  # The HTTP response seems to include carriage returns, which we need to strip
  location=$(curl -i -X POST \
    https://gcr.io/v2/$PROJECT/hello/blobs/uploads/ \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -d "" | grep Location | cut -d" " -f2 | tr -d '\r')
  # Do the upload
  curl -X PUT $location\?digest=$config_digest \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    --data-binary @config.json
fi
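Once the config and layer blobs exist in the registry, the remaining step described in the linked post is pushing the manifest that ties them together. A rough sketch, assuming $PROJECT, $config_digest and a $layer_digest for the compressed layer are already set (the tag latest and file names are placeholders):
# Build a schema 2 manifest that references the config blob and the layer blob by digest.
cat <<EOF > manifest.json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": $(wc -c < config.json),
    "digest": "$config_digest"
  },
  "layers": [
    {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": $(wc -c < layer.tar.gz),
      "digest": "$layer_digest"
    }
  ]
}
EOF

# PUT the manifest under a tag; the registry checks that all referenced blobs are present.
curl -X PUT https://gcr.io/v2/$PROJECT/hello/manifests/latest \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" \
  --data-binary @manifest.json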

Google Cloud's gcloud compute instance create gives error "The resource projects/{ourID}/global/images/family/debian-8 was not found"

We are using a server I created on Google Cloud Platform to create and manage the other servers over there. But when trying to create a new server from the Linux command line with the gcloud compute instances create command, we receive the following error:
marco#ans-mgmt-01:~/gcloud$ ./create_gcloud_instance.sh app-tst-04 tst,backend-server,bootstrap home-tst 10.20.22.104
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- The resource 'projects/REMOVED_OUR_PROJECTID/global/images/family/debian-8' was not found
Our script looks like this:
#!/bin/bash
if [ "$#" -ne 4 ]; then
  echo "Usage: create_gcloud_instance <instance_name> <tags> <subnet_name> <server_ip>"
  exit 1
fi
set -e
INSTANCE_NAME=$1
TAGS=$2
SERVER_SUBNET=$3
SERVER_IP=$4
gcloud compute --project "REMOVED OUR PROJECT ID" instances create "$INSTANCE_NAME" \
  --zone "europe-west1-c" \
  --machine-type "f1-micro" \
  --network "cloudnet" \
  --subnet "$SERVER_SUBNET" \
  --no-address \
  --private-network-ip="$SERVER_IP" \
  --maintenance-policy "MIGRATE" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --service-account "default" \
  --tags "$TAGS" \
  --image-family "debian-8" \
  --boot-disk-size "10" \
  --boot-disk-type "pd-ssd" \
  --boot-disk-device-name "bootdisk-$INSTANCE_NAME" \
./clean_known_hosts.sh $INSTANCE_NAME
On the google cloud console (console.cloud.google.com) I enabled the cloud api access scope for the ans-mgmt-01 server and also tried to create a server from there. That's working without problems.
The problem is that gcloud is looking for the image family in your project and not the debian-cloud project where it really exists.
This can be fixed by simply using --image-project debian-cloud.
This way instead of looking for projects/{yourID}/global/images/family/debian-8, it will look for projects/debian-cloud/global/images/family/debian-8.
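For example, the relevant flags from the script above would become something like this (other flags unchanged; the project ID is a placeholder):
# --image-project tells gcloud which project owns the image family
gcloud compute --project "YOUR_PROJECT_ID" instances create "$INSTANCE_NAME" \
  --zone "europe-west1-c" \
  --machine-type "f1-micro" \
  --image-family "debian-8" \
  --image-project "debian-cloud"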
For me the problem was that debian-8 (and now debian-9) had reached end of life and was no longer supported. Updating to debian-10 or debian-11 fixed the issue.
For me the problem was that debian-9 had, after so much time, come to an end; updating to debian-10 fixed the issue.
You could run the command below to see if the image is available:
gcloud compute images list | grep debian
Below is the result from the command
NAME: debian-10-buster-v20221206
PROJECT: debian-cloud
FAMILY: debian-10
NAME: debian-11-bullseye-arm64-v20221102
PROJECT: debian-cloud
FAMILY: debian-11-arm64
NAME: debian-11-bullseye-v20221206
PROJECT: debian-cloud
FAMILY: debian-11
Your result should give you an idea of which image families are currently available.

ServiceM8 api email - how to relate to job diary

I can send an email from a ServiceM8 account through the ServiceM8 API 'message services' (http://developer.servicem8.com/docs/platform-services/message-services/), and read the resulting ServiceM8 message-id.
But I would like to relate that message to a specific job within ServiceM8, so that it will appear as an email item in that job's diary in the ServiceM8 web application. (Emails sent from within the ServiceM8 web application are related to the diary and appear there - my question is about how to do this from the API).
Worst case, I could create a new 'Note' containing the email text and add that to the job in the hope that it would show up in the diary in the web application as a note.
But I want to check there isn't an easier way since sending the email results in there already being a relatable message-id available within ServiceM8.
Thanks
Using the messaging services API, it can't be done. Using the web API, you can do just that.
There's an authorisation code required, which is specific to your account and to this function. You only need to retrieve it once, and then you can integrate that specific URL into your code. It's contained within the ClientSidePlatform_PerSessionSetup URL.
Here is a script that will grab the E-mail URL specific to your login:
Syntax: ./getsm8emailurl.sh "email@address.com" "password"
#!/usr/bin/env bash
#getsm8emailurl.sh
#Create Basic auth
user="$1"
pass="$2"
pass="$(echo -n "${pass}" | md5sum | cut -f1 -d' ')"
auth="$(echo -n "${user}:${pass}" | base64)"
#Get Account specific e-mail url
email_url="https://go.servicem8.com/$(curl --compressed -s -L "https://go.servicem8.com/$(curl --compressed -s -L "https://go.servicem8.com/" -H "Authorization: Basic $auth" | grep -o 'ClientSidePlatform_PerSessionSetup.[^"]*' | grep -v "s_boolFailover")" -H "Authorization: Basic $auth" | grep -o "PluginEmailClient_SendEmail.[^']*")"
#Output base e-mail URL
echo "$email_url"
Once you have the email url, (will start with https://go.servicem8.com/PluginEmailClient_SendEmail and will end with the s_auth code), you can use it like any other rest endpoint.
Required Header Values:
Authorization (same as regular API)
Required Post Params:
s_form_values="guid-to-cc-subject-msg-job_id-attachedFiles-attachedContacts-strRegardingObjectUUID-strRegardingObject-boolAllowDirectReply"
(these have to stay just as they are)
s_auth="your_account_s_auth_code"
to="recipient#domain.com"
Optional Post Params:
subject="subject"
msg="html message body"
boolAllowDirectReply="true|false" (Can recipient reply directly to job diary)
strRegardingObject="job|company"
strRegardingObjectUUID="job|company uuid"
DEMO
#!/usr/bin/env bash
#sendemail.sh
#demo here using random auth codes and uuids
curl --compressed -s "https://go.servicem8.com/PluginEmailClient_SendEmail" \
-H "Authorization: Basic dGVzdHVzZXJAdGVzdGRvbWFpbi5jb206dGVzdHBhc3M=" \
-d s_form_values=guid-to-cc-subject-msg-job_id-attachedFiles-attachedContacts-strRegardingObjectUUID-strRegardingObject-boolAllowDirectReply \
-d s_auth="6akj209db12bikbs01hbobi3r0fws7j2" \
-d boolAllowDirectReply=true \
-d strRegardingObject=job \
-d strRegardingObjectUUID="512b3b2a-007e-431b-be23-4bd812f2aeaf" \
-d to="test#testdomain.com" \
-d subject="Job Diary E-mail" \
-d msg="hello"
Edit/Update/Disclaimer:
This information is for convenience and efficiency - memos, quick tasks, notifications, updates, etc. This isn't to be relied upon for critical business operations as it is undocumented, and since it does not process JS like a browser would, it could stop working if the inner workings of the service changed.