gss_accept_sec_context() error:ASN.1 structure is missing a required field - kerberos

I'm trying to implement Kerberos authentication on Ubuntu.
run_kerberos_server.sh
#!/usr/bin/env bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
docker stop krb5-server && docker rm krb5-server || true
docker run -d --network=altexy --name krb5-server \
-e KRB5_REALM=EXAMPLE.COM -e KRB5_KDC=localhost -e KRB5_PASS=12345 \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone:ro \
--network-alias example.com \
-p 88:88 -p 464:464 -p 749:749 gcavalcante8808/krb5-server
echo "=== Init krb5-server docker container ==="
docker exec krb5-server /bin/sh -c "
# Create users bob as normal user
# and add principal for the service
cat << EOF | kadmin.local
add_principal -randkey \"HTTP/service.example.com@EXAMPLE.COM\"
ktadd -k /etc/krb5-service.keytab -norandkey \"HTTP/service.example.com@EXAMPLE.COM\"
ktadd -k /etc/admin.keytab -norandkey \"admin/admin@EXAMPLE.COM\"
listprincs
quit
EOF
"
echo "=== Copy keytabs ==="
docker cp krb5-server:/etc/krb5-service.keytab "${DIR}"/krb5-service.keytab
docker cp krb5-server:/etc/admin.keytab "${DIR}"/admin.keytab
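Before wiring the keytabs into the server, it can be worth sanity-checking the copied files. Below is a small Python sketch (the `keytab_version` helper is mine, not part of any Kerberos tooling) that validates only the 2-byte keytab header; a wrong header usually means the file was truncated or corrupted in transit:

```python
import struct

def keytab_version(path):
    """Return the keytab format version from the 2-byte file header.

    MIT krb5 keytabs start with the byte 0x05 followed by the format
    minor version (0x02 for the current format), so a healthy file
    yields (5, 2). Anything else suggests truncation or corruption.
    """
    with open(path, "rb") as f:
        header = f.read(2)
    if len(header) != 2:
        raise ValueError("file too short to be a keytab")
    major, minor = struct.unpack("BB", header)
    if major != 5:
        raise ValueError("not a krb5 keytab (first byte %#x)" % major)
    return major, minor
```

For a keytab produced by `ktadd` above, `keytab_version("krb5-service.keytab")` should normally return `(5, 2)`.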
Get Kerberos ticket:
alex@alex-secfense:~/projects/proxy-auth/etc/kerberos$ kinit admin/admin@EXAMPLE.COM
Password for admin/admin@EXAMPLE.COM:
alex@alex-secfense:~/projects/proxy-auth/etc/kerberos$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: admin/admin@EXAMPLE.COM
Valid starting Expires Service principal
16.12.2020 12:05:38 17.12.2020 00:05:38 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 17.12.2020 12:05:35
Then I start nginx, also in Docker container, image is derived from openresty/openresty:xenial.
My /etc/hosts file has 127.0.0.1 service.example.com line.
My Firefox is configured for network.negotiate-auth.trusted-uris = service.example.com
I open the service.example.com:<mapped_port> page in Firefox; nginx responds with 401 and Firefox sends an Authorization: Negotiate ... header.
My server side code (error and result handling is stripped):
MYAPI int authenticate(const char* token, size_t length)
{
    gss_buffer_desc service = GSS_C_EMPTY_BUFFER;
    gss_name_t my_gss_name = GSS_C_NO_NAME;
    gss_cred_id_t my_gss_creds = GSS_C_NO_CREDENTIAL;
    OM_uint32 minor_status;
    OM_uint32 major_status;
    gss_ctx_id_t gss_context = GSS_C_NO_CONTEXT;
    gss_name_t client_name = GSS_C_NO_NAME;
    gss_buffer_desc output_token = GSS_C_EMPTY_BUFFER;
    gss_buffer_desc input_token = GSS_C_EMPTY_BUFFER;

    input_token.length = length;
    input_token.value = (void*)token;

    major_status = gss_accept_sec_context(&minor_status, &gss_context, my_gss_creds, &input_token,
                                          GSS_C_NO_CHANNEL_BINDINGS, &client_name, NULL, &output_token,
                                          NULL, NULL, NULL);
    return 0;
}
Eventually, I get the gss_accept_sec_context() error: ASN.1 structure is missing a required field.
The same code works great with Windows Kerberos setup.
Any idea what it means or how to debug the issue?
I did define the KRB5_TRACE=/<log_file_name> environment variable and see lines like those below:
[7] 1607798057.341744: Sending request (937 bytes) to EXAMPLE.COM
[6] 1608109670.292389: Sending request (937 bytes) to EXAMPLE.COM
[6] 1608109670.660887: Sending request (937 bytes) to EXAMPLE.COM
Might it be a DNS issue?
UPDATE: I failed to mention that I specify the keytab file to use on the server side before calling gss_accept_sec_context (again, error handling is stripped out):
OM_uint32 major_status = gsskrb5_register_acceptor_identity(keytab_filename);

Your code breaks the fundamental concept of context completion: it violates RFC 7546, it is not trustworthy, and you completely ignore the major/minor status codes. On top of that, your tokens are evidently being modified somehow in flight, because the ASN.1 encoding is broken.
Dump the token before and after transmission and compare.
Start with the gss-server and gss-client sample programs first.
Read their code and model yours on them. Do not deviate from the imperative of completing the context-establishment loop.
Show the ticket cache after Firefox has obtained a service ticket.
As soon as you have the tokens, inspect them with https://lapo.it/asn1js/.
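To compare the dumped tokens programmatically rather than by eye, here is a minimal Python sketch (the `inspect_gss_token` helper and its OID table are illustrative, not from any library) that checks only the outer RFC 2743 framing of a base64-encoded Negotiate token:

```python
import base64

# DER-encoded mechanism OIDs (tag + length + value), per RFC 4178 (SPNEGO)
# and RFC 4121 (Kerberos 5)
MECH_OIDS = {
    bytes.fromhex("06062b0601050502"): "SPNEGO (1.3.6.1.5.5.2)",
    bytes.fromhex("06092a864886f712010202"): "Kerberos 5 (1.2.840.113554.1.2.2)",
}

def inspect_gss_token(b64_token):
    """Check the outer framing of a GSS-API initial context token
    (RFC 2743 section 3.1): tag 0x60, a DER length, then the mech OID.

    Raises ValueError when the framing is broken, which is the kind of
    damage behind errors like "ASN.1 structure is missing a required field".
    """
    raw = base64.b64decode(b64_token)
    if not raw or raw[0] != 0x60:
        raise ValueError("missing [APPLICATION 0] tag 0x60")
    i = 1
    if raw[i] < 0x80:            # short-form DER length
        i += 1
    else:                        # long form: low bits = number of length bytes
        i += 1 + (raw[i] & 0x7F)
    for oid, name in MECH_OIDS.items():
        if raw[i:i + len(oid)] == oid:
            return name
    raise ValueError("unknown or mangled mechanism OID")
```

A token that Firefox sent but that fails this check on the server side would confirm in-flight modification.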

Related

Brute forcing http digest with Hydra

I am having some trouble brute-forcing an HTTP digest form with Hydra. I am using the following command; however, when proxied through Burp Suite I can see that Hydra is using basic auth and not digest.
How do I get hydra to use the proper auth type?
Command:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -vV http-get /digest
Request as seen in proxy:
GET /digest HTTP/1.1
Host: 127.0.0.1
Connection: close
Authorization: Basic YWRtaW46aWxvdmV5b3U=
User-Agent: Mozilla/4.0 (Hydra)
I have studied this case: if the digest method is implemented at the Nginx or Apache server level, Hydra might work. But if the authentication is implemented in the application server (Flask, Express.js, Django), it will not work at all.
You can create a bash script for password spraying
#!/bin/bash
# usage: ./app.sh users.txt passwords.txt http://target/path
# redirections (instead of cat|while) keep "exit" in the main shell
while read -r USER; do
    while read -r PASSWORD; do
        if curl -s "$3" -c /tmp/cookie --digest -u "$USER:$PASSWORD" | grep -qi "unauth"
        then
            continue
        else
            echo "[+] Found $USER:$PASSWORD"
            exit 0
        fi
    done < "$2"
done < "$1"
Save this file as app.sh
$ chmod +x app.sh
$ ./app.sh /path/to/users.txt /path/to/passwords.txt http://example.com/path
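If you would rather script a single authenticated check in Python instead of curl, the standard library's urllib can answer Digest challenges; the URL and credentials below are placeholders:

```python
import urllib.request

def digest_opener(url, user, password):
    """Build a urllib opener that answers HTTP Digest challenges.

    Like curl --digest, nothing is sent until the server replies 401
    with a WWW-Authenticate: Digest challenge.
    """
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)  # None = any realm
    return urllib.request.build_opener(
        urllib.request.HTTPDigestAuthHandler(mgr))

# usage (placeholder target and credentials):
# opener = digest_opener("http://example.com/path", "admin", "password")
# print(opener.open("http://example.com/path").status)
```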
Since no Hydra version was specified, I assume the latest one: 9.2.
@tbhaxor is correct:
Against a server like Apache or nginx, Hydra works. Flask using digest authentication as recommended in its standard documentation does not work (details later). You could add which web server you used so somebody can verify this.
Hydra does not provide explicit parameters to distinguish between basic and digest authentication.
Technically, it first sends a request that attempts to authenticate itself via basic authentication. After that it evaluates the corresponding response.
The specification of digest authentication states that the web application has to send a WWW-Authenticate: Digest ... header in the response if the requested document is protected using this scheme.
So Hydra now can distinguish between the two forms of authentication.
If it receives this response (cf. code), it sends a second attempt using digest authentication.
The reason why you only see basic-auth requests and no digest requests lies in the default setting of what Hydra calls "tasks": it is 16, which means Hydra initially creates 16 threads.
Thus, if you go to the 17th request in your proxy you will find a request using digest auth. You can also see the difference if you set the number of tasks to 1 with the parameter -t 1.
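The detection step described above can be sketched with a small Python helper (a simplified illustration of the idea, not Hydra's actual code; it only handles simple single-challenge headers):

```python
def offered_schemes(www_authenticate):
    """Return the auth schemes offered in a WWW-Authenticate header value.

    A scheme token ('Basic', 'Digest') contains no '=', while parameters
    ('realm="x"', 'nonce="y"') do -- good enough for simple headers.
    """
    schemes = []
    for part in www_authenticate.split(","):
        part = part.strip()
        if not part:
            continue
        first = part.split(None, 1)[0]
        if "=" not in first:
            schemes.append(first)
    return schemes
```

A response carrying `Digest realm="...", nonce="..."` yields `['Digest']`, which is the signal to retry with digest auth.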
Below are three Docker setups where you can test the differences between basic auth (nginx), digest auth (nginx), and digest auth (Flask), using the "admin/password" credentials from your example:
basic auth:
cat Dockerfile.http_basic_auth
FROM nginx:1.21.3
LABEL maintainer="secf00tprint"
RUN apt-get update && apt-get install -y apache2-utils
RUN touch /usr/share/nginx/html/.htpasswd
RUN htpasswd -db /usr/share/nginx/html/.htpasswd admin password
RUN sed -i '/^ location \/ {/a \ auth_basic "Administrator\x27s Area";\n\ auth_basic_user_file /usr/share/nginx/html/.htpasswd;' /etc/nginx/conf.d/default.conf
:
sudo docker build -f Dockerfile.http_basic_auth -t http-server-basic-auth .
sudo docker run -ti -p 127.0.0.1:8888:80 http-server-basic-auth
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (nginx):
cat Dockerfile.http_digest
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update && \
# For digest module
DEBIAN_FRONTEND=noninteractive apt-get install -y curl unzip \
# For nginx
build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev libgd-dev libxml2 libxml2-dev uuid-dev make apache2-utils expect
RUN curl -O https://nginx.org/download/nginx-1.21.3.tar.gz
RUN curl -OL https://github.com/atomx/nginx-http-auth-digest/archive/refs/tags/v1.0.0.zip
RUN tar -xvzf nginx-1.21.3.tar.gz
RUN unzip v1.0.0.zip
RUN cd nginx-1.21.3 && \
./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/ nginx.lock --pid-path=/run/nginx.pid --modules-path=/etc/nginx/modules --add-module=../nginx-http-auth-digest-1.0.0/ && \
make && make install
COPY generate.exp /usr/share/nginx/html/
RUN chmod u+x /usr/share/nginx/html/generate.exp && \
cd /usr/share/nginx/html/ && \
expect -d generate.exp
RUN sed -i '/^ location \/ {/a \ auth_digest "this is not for you";' /etc/nginx/nginx.conf
RUN sed -i '/^ location \/ {/i \ auth_digest_user_file /usr/share/nginx/html/passwd.digest;' /etc/nginx/nginx.conf
CMD nginx && tail -f /var/log/nginx/access.log -f /var/log/nginx/error.log
:
cat generate.exp
#!/usr/bin/expect
set timeout 70
spawn "/usr/bin/htdigest" "-c" "passwd.digest" "this is not for you" "admin"
expect "New password: " {send "password\r"}
expect "Re-type new password: " {send "password\r"}
wait
:
sudo docker build -f Dockerfile.http_digest -t http_digest .
sudo docker run -ti -p 127.0.0.1:8888:80 http_digest
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (Flask):
cat Dockerfile.http_digest_flask
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY ./app.py /app/
CMD ["flask", "run", "--host=0.0.0.0"]
:
cat requirements.txt
Flask==2.0.2
Flask-HTTPAuth==4.5.0
:
cat app.py
from flask import Flask
from flask_httpauth import HTTPDigestAuth

app = Flask(__name__)
app.secret_key = 'super secret key'
auth = HTTPDigestAuth()

users = {
    "admin": "password",
    "john": "hello",
    "susan": "bye"
}

@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None

@app.route("/")
@auth.login_required
def hello_world():
    return "<p>Flask Digest Demo</p>"
:
sudo docker build -f Dockerfile.http_digest_flask -t digest_flask .
sudo docker run -ti -p 127.0.0.1:5000:5000 digest_flask
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 5000 http-get /
If you want to see more information I wrote about it in more detail here.

Accessing gitlab postgres omnibus database

I'm trying to access my gitlab omnibus's postgres installation from other apps so that I can share data within. How do I find the login information, eg user/pass?
There should be no password.
If you have sudo access on the machine where you installed GitLab Omnibus, then you can confirm this with:
sudo grep gitlab-psql /etc/shadow
and it should show '!' in the password field, something like:
gitlab-psql:!!:16960::::::
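That '!' (or '*') prefix in the second field is what marks the account as having password login disabled. As a tiny illustrative Python check (not part of any GitLab tooling):

```python
def password_locked(shadow_entry):
    """True when the password field of an /etc/shadow line starts with
    '!' or '*', i.e. password login for the account is disabled."""
    fields = shadow_entry.split(":")
    return len(fields) > 1 and fields[1][:1] in ("!", "*")
```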
Faced with a similar goal (accessing GitLab's DB in order to derive some usage plots, counts of issues opened/closed over time, etc.), here is what I did (assuming sudo ability):
sudo su -l gitlab-psql
mkdir -p ~/.ssh
chmod 0700 ~/.ssh
cat >> ~/.ssh/authorized_keys << "EOF"
<your ssh public key here>
EOF
chmod 0600 ~/.ssh/authorized_keys
Once this is done, first check that you can ssh to that host as gitlab-psql, using the proper key, of course, either from a remote host: ssh gitlab-psql@my-gitlab-host, or locally: ssh gitlab-psql@localhost.
After that, you should be able to access the DB from other apps via ssh. For example, here is a way to query the DB directly from a Python notebook (running on another host somewhere in EC2), and using Pandas:
import io
import subprocess

import pandas as pd

def gitlab_query(query):
    cmdargs = [
        'ssh', 'gitlab-psql@my-gitlab-host',
        f"""/opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql/ gitlabhq_production -A -F $'\t' -c "{query}" """,
    ]
    proc = subprocess.Popen(cmdargs, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        outs, errs = proc.communicate(timeout=15)
    except subprocess.TimeoutExpired:
        proc.kill()
        outs, errs = proc.communicate()
    errors = errs.decode('utf-8')
    if errors:
        raise ValueError(errors)
    result = outs.decode('utf-8')
    result = result[:result.rfind('\n', 0, -1)]
    return result
# simple example
# NOTE: as is, this is incomplete, because many issues are closed by other
# actions (e.g. commits or merges) and in those cases, there is no
# closed_at date. See further below for better queries. (not included in
# this SO answer as this is getting beyond the scope of the question).
q = """
select
b.name, a.title, a.created_at, a.closed_at
from issues a inner join projects b on (a.project_id = b.id)
where closed_at > '2018-01-09' and b.name='myproject'
order by 1,4 limit 10
"""
pd.read_csv(io.StringIO(gitlab_query(q)), sep='\t', parse_dates=['created_at', 'closed_at'])
If you have installed a gitlab-praefect node as described here, you are using AWS EC2 with an AWS-hosted Postgres, and you want to check whether those two can communicate:
/opt/gitlab/embedded/bin/psql -U YourExistingUsername -d template1 -h RDS-POSTGRES-ENDPOINT

Windows authentication in linux docker container

I am trying to use Windows authentication in a Linux Docker container under Kubernetes.
I am following this settings: https://learn.microsoft.com/en-us/aspnet/core/security/authentication/windowsauth?view=aspnetcore-3.1&tabs=visual-studio#kestrel
The app is .NET Core 3, with the NuGet package Microsoft.AspNetCore.Authentication.Negotiate, running on Kestrel.
I have added the
services.AddAuthentication(Microsoft.AspNetCore.Authentication.Negotiate.NegotiateDefaults.AuthenticationScheme).AddNegotiate();
as well as
app.UseAuthentication();
and set up my devbase image as
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
ADD ca/ca.crt /usr/local/share/ca-certificates/ca.crt
RUN chmod 644 /usr/local/share/ca-certificates/*
RUN update-ca-certificates
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
COPY krb5.conf /etc/krb5.conf
RUN mkdir /app
RUN echo BQIAAA..== | base64 -d > /app/is.k01.HTTP.keytab
WORKDIR /app
#RUN docker version
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN apt install -y mc sudo syslog-ng realmd gss-ntlmssp
The build in the TFS pipeline creates the app Docker image derived from the above, adds the following env variables, and copies the build output to /app:
RUN chmod 0700 run.sh
ENV KRB5_KTNAME=/app/is.k01.HTTP.keytab
ENV KRB5_TRACE=/dev/stdout
ENV ASPNETCORE_URLS=http://*:80;https://+:443
RUN chown app:app /app -R
USER app
The app is run by run.sh:
service syslog-ng start
kinit HTTP/is.k01.mydomain.com@MYDOMAIN.COM -k -t /app/is.k01.HTTP.keytab
klist
dotnet dev-certs https
dotnet /app/SampleApi.dll
klist lists the principal to which the SPN is assigned on the machine.
In IE and Firefox I have added my app to network.negotiate-auth.trusted-uris.
However, I am getting the login dialog with no success logging in.
so the question is:
How can I enable debug log with Microsoft.AspNetCore.Authentication.Negotiate package?
My assumption is that this package does not communicate with Kerberos properly; perhaps some package is missing, not running, or something similar.
Also note that the container and the .NET app are successfully connected to the domain, because I use integrated security for the database connection, which works.
**** Edit > Answer to first part
To enable logs, one should enable logs in kestrel:
in appsettings.json:
"Logging": {
    "LogLevel": {
        "Default": "Debug"
    }
},
In program.cs:
Host.CreateDefaultBuilder(args)
    .ConfigureLogging(logging =>
    {
        logging.AddFilter("Microsoft", LogLevel.Debug);
        logging.AddFilter("System", LogLevel.Debug);
        logging.ClearProviders();
        logging.AddConsole();
    })
    .ConfigureWebHostDefaults(webBuilder =>
    {
In Startup.cs one can track the negotiate events:
services.AddAuthentication(NegotiateDefaults.AuthenticationScheme).AddNegotiate(
    options =>
    {
        options.PersistKerberosCredentials = true;
        options.Events = new NegotiateEvents()
        {
            OnAuthenticated = challange =>
            {
                ..
            },
            OnChallenge = challange =>
            {
                ..
            },
            OnAuthenticationFailed = context =>
            {
                // context.SkipHandler();
                Console.WriteLine($"{DateTimeOffset.Now.ToString(czechCulture)} OnAuthenticationFailed/Scheme: {context.Scheme.Str()}, Request: {context.Request.Str()}");
                Console.WriteLine("context?.HttpContext?.Features?.Select(f=>f.Key.Name.ToString())");
                var items = context?.HttpContext?.Features?.Select(f => "- " + f.Key?.Name?.ToString());
                if (items != null)
                {
                    Console.WriteLine(string.Join("\n", items));
                }
                Console.WriteLine("context.HttpContext.Features.Get<IConnectionItemsFeature>()?.Items " + context.HttpContext.Features.Get<IConnectionItemsFeature>()?.Items?.Count);
                var items2 = context.HttpContext?.Features.Get<IConnectionItemsFeature>()?.Items?.Select(f => "- " + f.Key?.ToString() + "=" + f.Value?.ToString());
                if (items2 != null)
                {
                    Console.WriteLine(string.Join("\n", items2));
                }
                return Task.CompletedTask;
            }
        };
    }
);
**** Edit
Meanwhile, pursuing my goal of enabling Windows authentication in a .NET Core Docker web app, I went through the source code of .NET Core and corefx and truncated the auth code down to this sample console app:
try
{
    var token = "MyToken==";
    var secAssembly = typeof(AuthenticationException).Assembly;
    Console.WriteLine("var ntAuthType = secAssembly.GetType(System.Net.NTAuthentication, throwOnError: true);");
    var ntAuthType = secAssembly.GetType("System.Net.NTAuthentication", throwOnError: true);
    Console.WriteLine("var _constructor = ntAuthType.GetConstructors(BindingFlags.NonPublic | BindingFlags.Instance).First();");
    var _constructor = ntAuthType.GetConstructors(BindingFlags.NonPublic | BindingFlags.Instance).First();
    Console.WriteLine("var credential = CredentialCache.DefaultCredentials;");
    var credential = CredentialCache.DefaultCredentials;
    Console.WriteLine("var _instance = _constructor.Invoke(new object[] { true, Negotiate, credential, null, 0, null });");
    var _instance = _constructor.Invoke(new object[] { true, "Negotiate", credential, null, 0, null });
    var negoStreamPalType = secAssembly.GetType("System.Net.Security.NegotiateStreamPal", throwOnError: true);
    var _getException = negoStreamPalType.GetMethods(BindingFlags.NonPublic | BindingFlags.Static).Where(info => info.Name.Equals("CreateExceptionFromError")).Single();
    Console.WriteLine("var _getOutgoingBlob = ntAuthType.GetMethods(BindingFlags.NonPublic | BindingFlags.Instance).Where(info => info.Name.Equals(GetOutgoingBlob) && info.GetParameters().Count() == 3).Single();");
    var _getOutgoingBlob = ntAuthType.GetMethods(BindingFlags.NonPublic | BindingFlags.Instance).Where(info => info.Name.Equals("GetOutgoingBlob") && info.GetParameters().Count() == 3).Single();
    Console.WriteLine("var decodedIncomingBlob = Convert.FromBase64String(token);;");
    var decodedIncomingBlob = Convert.FromBase64String(token);
    Console.WriteLine("var parameters = new object[] { decodedIncomingBlob, false, null };");
    var parameters = new object[] { decodedIncomingBlob, false, null };
    Console.WriteLine("var blob = (byte[])_getOutgoingBlob.Invoke(_instance, parameters);");
    var blob = (byte[])_getOutgoingBlob.Invoke(_instance, parameters);
    if (blob != null)
    {
        Console.WriteLine("var out1 = Convert.ToBase64String(blob);");
        var out1 = Convert.ToBase64String(blob);
        Console.WriteLine(out1);
    }
    else
    {
        Console.WriteLine("null blob value returned");
        var securityStatusType = secAssembly.GetType("System.Net.SecurityStatusPal", throwOnError: true);
        var _statusException = securityStatusType.GetField("Exception");
        var securityStatus = parameters[2];
        var error = (Exception)(_statusException.GetValue(securityStatus) ?? _getException.Invoke(null, new[] { securityStatus }));
        Console.WriteLine("Error:");
        Console.WriteLine(error);
        Console.WriteLine("securityStatus:");
        Console.WriteLine(securityStatus.ToString());
    }
}
catch (Exception exc)
{
    Console.WriteLine(exc.Message);
}
So I found out that the library communicates with System.Net.NTAuthentication, which communicates with System.Net.Security.NegotiateStreamPal, which communicates with the Unix version of Interop.NetSecurityNative.InitSecContext, which should somehow trigger GSSAPI in the OS.
In the dotnet runtime repository on GitHub they tell us that gss-ntlmssp is required for this to work, even though it is not mentioned anywhere in the ASP.NET Core documentation.
https://github.com/dotnet/runtime/issues?utf8=%E2%9C%93&q=gss-ntlmssp
Nevertheless, I have compiled gss-ntlmssp and found out that without this library it throws the error "An unsupported mechanism was requested." With my library it throws the error "No credentials were supplied, or the credentials were unavailable or inaccessible.", but it never reaches any gss_* methods.
I tested the usage of the gss methods by adding a log entry written to a file, which never appeared, e.g.:
OM_uint32 gss_init_sec_context(OM_uint32 *minor_status,
gss_cred_id_t claimant_cred_handle,
gss_ctx_id_t *context_handle,
gss_name_t target_name,
gss_OID mech_type,
OM_uint32 req_flags,
OM_uint32 time_req,
gss_channel_bindings_t input_chan_bindings,
gss_buffer_t input_token,
gss_OID *actual_mech_type,
gss_buffer_t output_token,
OM_uint32 *ret_flags,
OM_uint32 *time_rec)
{
    FILE *fp;
    fp = fopen("/tmp/gss-debug.log", "w+");
    fprintf(fp, "gss_init_sec_context\n");
    fclose(fp);
    return gssntlm_init_sec_context(minor_status,
                                    claimant_cred_handle,
                                    context_handle,
                                    target_name,
                                    mech_type,
                                    req_flags,
                                    time_req,
                                    input_chan_bindings,
                                    input_token,
                                    actual_mech_type,
                                    output_token,
                                    ret_flags,
                                    time_rec);
}
So .NET calls GSSAPI, and GSSAPI does not call the mechanism.
I have observed the same behavior in a CentOS 7 VM, Ubuntu on the Windows Subsystem for Linux, and a Debian Docker image (customized mcr.microsoft.com/dotnet/core/sdk:3.1-buster).
So the question now is: how can I debug GSSAPI?
I assume my current GSSAPI is provided by this library:
readelf -d /usr/lib64/libgssapi_krb5.so
Dynamic section at offset 0x4aa48 contains 34 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libkrb5.so.3]
0x0000000000000001 (NEEDED) Shared library: [libk5crypto.so.3]
0x0000000000000001 (NEEDED) Shared library: [libcom_err.so.2]
0x0000000000000001 (NEEDED) Shared library: [libkrb5support.so.0]
0x0000000000000001 (NEEDED) Shared library: [libdl.so.2]
0x0000000000000001 (NEEDED) Shared library: [libkeyutils.so.1]
0x0000000000000001 (NEEDED) Shared library: [libresolv.so.2]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000000e (SONAME) Library soname: [libgssapi_krb5.so.2]
0x000000000000000c (INIT) 0xb1d8
0x000000000000000d (FINI) 0x3ebcc
0x0000000000000019 (INIT_ARRAY) 0x24a120
0x000000000000001b (INIT_ARRAYSZ) 8 (bytes)
0x000000000000001a (FINI_ARRAY) 0x24a128
0x000000000000001c (FINI_ARRAYSZ) 16 (bytes)
0x000000006ffffef5 (GNU_HASH) 0x1f0
0x0000000000000005 (STRTAB) 0x3048
0x0000000000000006 (SYMTAB) 0x720
0x000000000000000a (STRSZ) 9167 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000003 (PLTGOT) 0x24b000
0x0000000000000002 (PLTRELSZ) 8088 (bytes)
0x0000000000000014 (PLTREL) RELA
0x0000000000000017 (JMPREL) 0x9240
0x0000000000000007 (RELA) 0x58b0
0x0000000000000008 (RELASZ) 14736 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffc (VERDEF) 0x5788
0x000000006ffffffd (VERDEFNUM) 3
0x000000006ffffffe (VERNEED) 0x57e0
0x000000006fffffff (VERNEEDNUM) 4
0x000000006ffffff0 (VERSYM) 0x5418
0x000000006ffffff9 (RELACOUNT) 504
0x0000000000000000 (NULL) 0x0
So far I have compiled the latest GSSAPI from the MIT source and found out that it throws the error "An unsupported mechanism was requested." because GSSAPI requires a GSS mechanism which is not provided. On CentOS 7 I had another issue: the OpenSSL library was using a shared Kerberos library which was incompatible, thus yum stopped working.
*** edit
I have found out that gss-ntlmssp has the flag GSS_C_MA_NOT_DFLT_MECH, thus it was failing with the message "No credentials were supplied, or the credentials were unavailable or inaccessible." The solution is to build a custom gss-ntlmssp without this attribute, because I want to use it as the default auth mechanism.
My sample console app to check credentials works now; I will try to make it work in the Docker container next.
*** edit
I was able to run my ConsoleApp successfully in kubernetes:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
ADD ca/ca.crt /usr/local/share/ca-certificates/ca.crt
RUN chmod 644 /usr/local/share/ca-certificates/*
RUN update-ca-certificates
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
RUN mkdir /app
RUN apt install -y mc sudo syslog-ng python3-software-properties software-properties-common packagekit git gssproxy vim
RUN apt install -y autoconf automake libxslt-dev doxygen findutils libgettextpo-dev libtool m4 make libunistring-dev libssl-dev zlib1g-dev gettext xsltproc libxml2-utils libxml2-dev xml-core docbook-xml docbook-xsl bison libkrb5-dev
RUN systemctl enable syslog-ng
RUN mkdir /src
RUN cd /src && wget https://web.mit.edu/kerberos/dist/krb5/1.18/krb5-1.18.tar.gz
RUN cd /src && tar -xf krb5-1.18.tar.gz
RUN cd /src/krb5-1.18/src && ./configure && make && make install
RUN cd /src && git clone https://github.com/scholtz/gss-ntlmssp.git
RUN cd /src/gss-ntlmssp/ && autoreconf -f -i && ./configure && make && make install
RUN cp /src/gss-ntlmssp/examples/mech.ntlmssp.conf /etc/gss/mech.d/mech.ntlmssp.conf
COPY testgss /testgss
RUN cd /testgss && dotnet ConsoleApp3.dll
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN echo BQIA..AAAB | base64 -d > /app/user.keytab
RUN echo BQIA..oQ== | base64 -d > /etc/krb5.keytab
RUN echo BQIA..oQ== | base64 -d > /app/is.k01.HTTP.keytab
RUN echo BQIA..AAA= | base64 -d > /app/is.k01.kerb.keytab
COPY krb5.conf /etc/krb5.conf
COPY krb5.conf /usr/local/etc/krb5.conf
RUN ln -s /etc/gss /usr/local/etc/gss
RUN cd /app
WORKDIR /app
However, I am getting this error now:
System.Exception: An authentication exception occured (0xD0000/0x4E540016).
---> Interop+NetSecurityNative+GssApiException: GSSAPI operation failed with error - Unspecified GSS failure. Minor code may provide more information (Feature not available).
at System.Net.Security.NegotiateStreamPal.GssAcceptSecurityContext(SafeGssContextHandle& context, Byte[] buffer, Byte[]& outputBuffer, UInt32& outFlags)
at System.Net.Security.NegotiateStreamPal.AcceptSecurityContext(SafeFreeCredentials credentialsHandle, SafeDeleteContext& securityContext, ContextFlagsPal requestedContextFlags, Byte[] incomingBlob, ChannelBinding channelBinding, Byte[]& resultBlob, ContextFlagsPal& contextFlags)
*** edit
Now it fails here:
gssntlm_init_sec_context..
gssntlm_acquire_cred..
gssntlm_acquire_cred_from..
if (cred_store != GSS_C_NO_CRED_STORE) {
    retmin = get_creds_from_store(name, cred, cred_store);
} else {
    retmin = get_user_file_creds(name, cred);
    if (retmin) {
        retmin = external_get_creds(name, cred);
    }
}
get_user_file_creds() returns an error, as I do not have the specific file set up; I want to verify users from AD.
external_get_creds() fails here:
wbc_status = wbcCredentialCache(&params, &result, NULL);
if(!WBC_ERROR_IS_OK(wbc_status)) goto done;
external_get_creds tries to authenticate with the winbind library, and obviously there is no user present in the credential cache.
I managed to compile it with the winbind library that Samba provides.
So the question now is:
How do I set up the winbind library to communicate with AD?
*** Edit
I have tried to use .NET 5, as on GitHub I was told that NTLM works in .NET 5. However, I get the same result as with .NET 3.1.
The Docker image with which I tried that:
FROM mcr.microsoft.com/dotnet/core-nightly/sdk:5.0-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
RUN mkdir /app
RUN apt install -y mc sudo syslog-ng python3-software-properties software-properties-common packagekit git gssproxy vim apt-utils
RUN apt install -y autoconf automake libxslt-dev doxygen findutils libgettextpo-dev libtool m4 make libunistring-dev libssl-dev zlib1g-dev gettext xsltproc libxml2-utils libxml2-dev xml-core docbook-xml docbook-xsl bison libkrb5-dev
RUN systemctl enable syslog-ng
RUN mkdir /src
#RUN cd /src && git clone https://github.com/scholtz/gss-ntlmssp.git
RUN DEBIAN_FRONTEND=noninteractive apt install -y libwbclient-dev samba samba-dev
#RUN cat /usr/include/samba-4.0/wbclient.h
COPY gss-ntlmssp /usr/local/src/gss-ntlmssp
RUN cd /usr/local/src/gss-ntlmssp/ && autoreconf -f -i && ./configure && make && make install
RUN cp /usr/local/src/gss-ntlmssp/examples/mech.ntlmssp.conf /etc/gss/mech.d/mech.ntlmssp.conf
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN echo BQIAAABMA..ArHdoQ== | base64 -d > /etc/krb5.keytab
COPY krb5.conf /etc/krb5.conf
COPY smb.conf /etc/samba/smb.conf
COPY krb5.conf /usr/local/etc/krb5.conf
RUN DEBIAN_FRONTEND=noninteractive apt install -y winbind
ENV KRB5_TRACE=/dev/stdout
RUN mkdir /src2
WORKDIR /src2
RUN dotnet --list-runtimes
RUN dotnet new webapi --auth Windows
RUN dotnet add package Microsoft.AspNetCore.Authentication.Negotiate
RUN sed -i '/services.AddControllers/i services.AddAuthentication(Microsoft.AspNetCore.Authentication.Negotiate.NegotiateDefaults.AuthenticationScheme).AddNegotiate();' Startup.cs
RUN sed -i '/app.UseAuthorization/i app.UseAuthentication();' Startup.cs
run echo a
RUN cat Startup.cs
RUN dotnet restore
RUN dotnet build
ENV ASPNETCORE_URLS="http://*:5002;https://*:5003"
EXPOSE 5002
EXPOSE 5003
RUN cd /app
WORKDIR /app
docker run -it -p 5003:5003 -it registry.k01.mydomain.com/k01-devbase:latest
In the Docker container:
kinit HTTP/myuser@MYDOMAIN.COM -k -t /etc/krb5.keytab
klist
dotnet run src2.dll
I have put my own debug info in the gssntlmssp library, writing it to a file:
cat /tmp/gss-debug.log
This is exactly the same dead end I reached with .NET Core 3.1:
wbcCredentialCache (a Samba lib) fails at the point where it cannot find cached credentials.
This is my krb5.conf:
[appdefaults]
default_lifetime = 25hrs
krb4_convert = false
krb4_convert_524 = false
ksu = {
forwardable = false
}
pam = {
minimum_uid = 100
forwardable = true
}
pam-afs-session = {
minimum_uid = 100
}
[libdefaults]
default_realm = MYDOMAIN.COM
[realms]
MYDOMAIN.COM = {
kdc = DC01.MYDOMAIN.COM
default_domain = MYDOMAIN.COM
}
[domain_realm]
mydomain.com. = MYDOMAIN.COM
.mydomain.com. = MYDOMAIN.COM
[logging]
default = CONSOLE
default = SYSLOG:INFO
default = FILE:/var/log/krb5-default.log
kdc = CONSOLE
kdc = SYSLOG:INFO:DAEMON
kdc = FILE:/var/log/krb5-kdc.log
admin_server = SYSLOG:INFO
admin_server = DEVICE=/dev/tty04
admin_server = FILE:/var/log/krb5-kadmin.log
and part of samba file:
[global]
security = domain
workgroup = mydomain.com
password server = *
idmap config * : range = 16777216-33554431
template shell = /bin/bash
winbind use default domain = yes
winbind offline logon = false
wins server = 10.0.0.2
In my opinion I would prefer NTLM to Negotiate, because Negotiate is not well supported among browsers as far as I know. For example, in Firefox the user must configure about:config for the Negotiate server; wildcards are not supported, ...
Nevertheless, it seems that I will not be able to run a .NET 5 web app with NTLM, so I will now attempt to set it up without the gssntlmssp library, using some default Kerberos mechanism. Any idea what is wrong with my krb5.conf settings?
**** Edit
So I am now trying two different approaches:
NTLM - in my opinion this is the preferable way, as I have seen NTLM authenticate users (for example in IIS Express) without the dialog box, and it does not require any special configuration in Firefox or through group policy (please correct me if I am wrong)
Negotiate
With regard to Negotiate, I have managed to make some progress.
With this Docker container I was able to get around the unsupported-mechanism error:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
RUN mkdir /app
RUN apt install -y mc sudo syslog-ng python3-software-properties software-properties-common packagekit git gssproxy vim apt-utils
RUN apt install -y autoconf automake libxslt-dev doxygen findutils libgettextpo-dev libtool m4 make libunistring-dev libssl-dev zlib1g-dev gettext xsltproc libxml2-utils libxml2-dev xml-core docbook-xml docbook-xsl bison libkrb5-dev
RUN systemctl enable syslog-ng
RUN mkdir /src
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN echo BQIAAAA8..vI | base64 -d > /etc/krb5.keytab
COPY krb5.conf /etc/krb5.conf
COPY krb5.conf /usr/local/etc/krb5.conf
ADD ca/is.k01.mydomain.com.p12 /etc/ssl/certs/is.k01.mydomain.com.pfx
RUN cd /app
WORKDIR /app
However now I have other issue:
Request ticket server HTTP/is.k01.mydomain.com@MYDOMAIN.COM kvno 3 found in keytab but not with enctype rc4-hmac
This suggests that the keytab does not contain rc4-hmac, which is true, because the keytab was generated with
ktpass -princ HTTP/is.k01.mydomain.com@MYDOMAIN.COM -pass ***** -mapuser MYDOMAIN\is.k01.kerb -pType KRB5_NT_PRINCIPAL -out c:\temp\is.k01.HTTP.keytab -crypto AES256-SHA1
as the .NET documentation says.
I was not able to disallow the use of rc4-hmac and allow only newer enctypes, so I asked my infra department to generate a new keytab with the old rc4-hmac enctype.
This step moved me further and I now get this error instead: Request ticket server HTTP/is.k01.mydomain.com@MYDOMAIN.COM kvno 4 not found in keytab; keytab is likely out of date
Which is very weird, because keytabs cannot get out of date: the password has not been changed and was 100% valid one hour ago when the keytab was generated, and there is no information on the web - "kvno 4 not found in keytab" fetches only 4 results on Google.
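A quick way to see what a keytab actually contains (kvno and enctype per entry) is `klist -kte`, and `kvno HTTP/is.k01.mydomain.com@MYDOMAIN.COM` (with a valid ticket) shows which key version the KDC is issuing. The sketch below parses a `klist -kte`-style listing with awk; the sample output format is an assumption based on MIT krb5, and the principal names are taken from this question:

```shell
#!/usr/bin/env bash
# Sketch: print "kvno principal (enctype)" from `klist -kte`-style output.
# In a real environment you would pipe in: klist -kte /etc/krb5.keytab
list_keytab_entries() {
  # skip "Keytab name:", the column header line, and the dashed ruler
  awk 'NR > 3 { print $1, $(NF-1), $NF }'
}

# sample listing (format assumed; timestamps shortened)
sample='Keytab name: FILE:/etc/krb5.keytab
KVNO Timestamp           Principal
---- ------------------- ----------------------------------------------------
   3 16.12.2020 12:00:00 HTTP/is.k01.mydomain.com@MYDOMAIN.COM (aes256-cts-hmac-sha1-96)
   4 16.12.2020 13:00:00 HTTP/is.k01.mydomain.com@MYDOMAIN.COM (arcfour-hmac)'

printf '%s\n' "$sample" | list_keytab_entries
```

If the kvno the KDC reports is higher than anything the keytab lists, the keytab was cut against an older key version; note that running ktpass with `-pass` resets the account password and so bumps the kvno, which can invalidate a keytab generated just before.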
**** EDIT
So finally I have managed to make it work :)
The issue with "kvno 4 not found in keytab" was in the krb5.conf file where, in order to force AES encryption, I had added the lines
# default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha1-9
# default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha1-9
# permitted_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha1-9
After I commented them out, authentication using Negotiate started to work. I have tested NTLM with .NET 5 and it still does not work.
The krb5.conf file with which Negotiate works in the Docker container built above:
[appdefaults]
default_lifetime = 25hrs
krb4_convert = false
krb4_convert_524 = false
ksu = {
forwardable = false
}
pam = {
minimum_uid = 100
forwardable = true
}
pam-afs-session = {
minimum_uid = 100
}
[libdefaults]
default_realm = MYDOMAIN.COM
[realms]
MYDOMAIN.COM = {
kdc = DC02.MYDOMAIN.COM
default_domain = MYDOMAIN.COM
}
[domain_realm]
mydomain.com. = MYDOMAIN.COM
.mydomain.com. = MYDOMAIN.COM
[logging]
default = CONSOLE
default = SYSLOG:INFO
default = FILE:/var/log/krb5-default.log
kdc = CONSOLE
kdc = SYSLOG:INFO:DAEMON
kdc = FILE:/var/log/krb5-kdc.log
admin_server = SYSLOG:INFO
admin_server = DEVICE=/dev/tty04
admin_server = FILE:/var/log/krb5-kadmin.log
So the question now: is there any way to allow many services to use the Negotiate protocol without adding each SPN one by one and manually configuring the browsers?
So at the moment every new web service must have:
setspn -S HTTP/mywebservice.mydomain.com mymachine
setspn -S HTTP/mywebservice@MYDOMAIN.COM mymachine
and must be allowed in Internet Explorer > Settings > Security > Sites > Details > the domain should be listed there
in Firefox: about:config > network.negotiate-auth.trusted-uris
Chrome, as far as I know, takes the Internet Explorer settings.
I assume the Internet Explorer settings should be possible to update somehow via domain group policy.. anybody any idea how?
**** EDIT
I have tested wildcard in domain for negotiate settings in browsers and these are the results:
chrome: SUPPORTS *.k01.mydomain.com
ie: SUPPORTS *.k01.mydomain.com
firefox (73.0.1 (64-bit)): DOES NOT SUPPORT *.k01.mydomain.com - only the full domain, e.g. is.k01.mydomain.com
edge 44.18362.449.0: I don't know why, but none of the IE settings were propagated; not working with *.k01.mydomain.com nor is.k01.mydomain.com
**** EDIT
I have started to use win auth with Negotiate; however, I now get some issues in .NET Core.
This code under IIS Express shows the user in the form MYDOMAIN\myuser:
var userId = string.Join(',', User?.Identities?.Select(c => c.Name)) ?? "?";
On Linux it shows as myuser@mydomain.com
User.Identities.First() under IIS Express is a WindowsIdentity and I can list all groups of the user.
User.Identities.First() under Linux is a ClaimsIdentity with no group information.
When I try to restrict by group under IIS Express I get:
//Access granted
[Authorize(Roles = "MYDOMAIN\\GROUP1")]
//403
[Authorize(Roles = "MYDOMAIN\\GROUP_NOT_EXISTS")]
Linux kestrel with negotiate:
//403
[Authorize(Roles = "MYDOMAIN\\GROUP1")]
So it seems that Negotiate in Kestrel does not list groups properly. I am now going to investigate how to get a WindowsIdentity in Kestrel.
This article is a good example of misunderstanding how things work. I don't recommend following the approach the author describes there (like I did) at all.
Instead, I would recommend learning about Kerberos authentication: how it works and what settings it requires. This article visualizes it well.
First,
If you profile the HTTP traffic coming from the browser (use Fiddler, for example) you can find a TGS token in the second request.
If it starts with Negotiate TlR, then you're doing auth over NTLM.
If it starts with Negotiate YII, then you're doing auth over Kerberos.
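That prefix check is easy to script; a small sketch (the sample tokens below are just illustrative prefixes, not real credentials):

```shell
#!/usr/bin/env bash
# Classify a Negotiate token by its base64 prefix:
#   "TlR..." is the base64 of "NTL" (start of the "NTLMSSP" magic) -> raw NTLM
#   "YII..." decodes to 0x60 0x82, the GSS-API token tag with a long-form
#            length, which is how a SPNEGO/Kerberos blob starts -> Kerberos
detect_mech() {
  case "$1" in
    TlR*) echo NTLM ;;
    YII*) echo Kerberos ;;
    *)    echo unknown ;;
  esac
}

detect_mech "TlRMTVNTUAABAAAA"   # prints NTLM
detect_mech "YIIHfgYGKwYBBQUC"   # prints Kerberos
```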
Second,
Like David said before, ASP.NET Core 3.1 doesn't support NTLM on Linux at all. So if you have a TlR token and the ntlm-gssapi mechanism you will get a "No credentials were supplied, or the credentials were unavailable or inaccessible." error.
If you have a TlR token and use the default Kerberos mechanism you will get "An unsupported mechanism was requested."
Next,
The only way to get your app working well is to create the SPNs and generate the keytab correctly for Kerberos authentication. Unfortunately, this is not well documented, so I am going to give an example here to make things clearer.
Let's say you have:
AD domain MYDOMAIN.COM
A web application with host webapp.webservicedomain.com. This could end with mydomain.com, but does not in my case.
Windows machine joined to AD with name mymachine.
Machine account MYDOMAIN\mymachine
Following the instructions described here, you need to:
Add new web service SPNs to the machine account:
setspn -S HTTP/webapp.webservicedomain.com mymachine
setspn -S HTTP/webapp@MYDOMAIN.COM mymachine
Use ktpass to generate a keytab file
ktpass -princ HTTP/webapp.webservicedomain.com@MYDOMAIN.COM -pass myKeyTabFilePassword -mapuser MYDOMAIN\mymachine$ -pType KRB5_NT_PRINCIPAL -out c:\temp\mymachine.HTTP.keytab -crypto AES256-SHA1 *
*Make sure MYDOMAIN\mymachine has AES256-SHA1 allowed in AD.
Finally,
After all the above is done and the app is deployed into a Linux container with the keytab, Integrated Windows Authentication is supposed to work well. My experiment showed you can use the keytab wherever you want, not only on the host named "mymachine".
In the dotnet runtime repo they tell us that gss-ntlmssp is required for this to work, even though it is not mentioned anywhere in the ASP.NET Core documentation.
The 'gss-ntlmssp' package is a plug-in for supporting the NTLM protocol for the GSS-API. It supports both raw NTLM protocol as well as NTLM being used as the fallback from Kerberos to NTLM when 'Negotiate' (SPNEGO protocol) is being used. Ref: https://learn.microsoft.com/en-us/openspecs/windows_protocols/MS-SPNG/f377a379-c24f-4a0f-a3eb-0d835389e28a
From reading the discussion above and the image you posted, it appears that the application is actually trying to use NTLM instead of Kerberos. You can tell because the base64-encoded token starts with "T" instead of "Y".
The ASP.NET Core server (Kestrel) does NOT support NTLM server-side on Linux at all. It only provides for 'Www-Authenticate: Negotiate' to be sent back to clients, which usually means Kerberos will be used. Negotiate can fall back to using NTLM; however, that doesn't work in ASP.NET Core except in .NET 5, which has not shipped yet.
Are you expecting your application to fall back to NTLM? If not, then perhaps the Kerberos environment is not completely set up. This can be caused by a variety of issues including the SPNs and Linux keytab files not being correct. It can also be caused by the client trying to use a username/password that is not part of the Kerberos realm.
This problem is being discussed here: https://github.com/dotnet/aspnetcore/issues/19397
I recommend the conversation continue in the aspnet core repo issue discussion.

Mongo DB Atlas. Is it safe to whitelist all ip because someone attempting to access the database needs a password

I have a Google App Engine app running my Express server. I also have my db in MongoDB Atlas. I currently have MongoDB Atlas whitelisting all IPs. The connection string is in the code of my Express server running on Google Cloud. Presumably any attacker trying to get into the database would still need a username and password for the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere. They must be connecting from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person in the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP ranges of your application, and plug in that range into the whitelist.
here is an answer i left elsewhere. hope it helps someone who comes across this:
this script will be kept up to date on my gist
why
mongo atlas provides reasonably priced access to a managed mongo DB. the CSPs where containers are hosted charge too much for their managed mongo DBs. they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster, which is obviously ridiculous.
this entrypoint script is surgical, to maintain least-privileged access: only the current hosted IP address of the service is whitelisted.
usage
set as the entrypoint for the Dockerfile
run in cloud init / VM startup if not using a container (and delete the last line exec "$@" since that is just for containers)
behavior
uses the mongo atlas project IP access list endpoints
will detect the hosted IP address of the container and whitelist it with the cluster using the mongo atlas API
if the service has no whitelist entry it is created
if the service has an existing whitelist entry that matches current IP no change
if the service IP has changed the old entry is deleted and new one is created
when a whitelist entry is created the service sleeps for 60s to wait for atlas to propagate access to the cluster
env
setup
create API key for org
add API key to project
copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
go to project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
provide the following values in the env of the container service
SERVICE_NAME: unique name used for creating / updating (deleting old) whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
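put together, the container env might look like this (all values below are invented placeholders):

```
SERVICE_NAME=billing-api
MONGO_ATLAS_API_PK=abcdwxyz
MONGO_ATLAS_API_SK=00000000-0000-0000-0000-000000000000
MONGO_ATLAS_API_PROJECT_ID=5f1234567890abcdef123456
```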
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash
# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #
set -e
mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'
check_for_deps() {
deps=(
bash
curl
jq
)
for dep in "${deps[@]}"; do
if [ ! "$(command -v "$dep")" ]
then
echo "dependency [$dep] not found. exiting"
exit 1
fi
done
}
make_mongo_api_request() {
local request_method="$1"
local request_url="$2"
local data="$3"
curl -s \
--user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--request "$request_method" "$request_url" \
--data "$data"
}
get_access_list_endpoint() {
echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}
get_service_ip() {
echo -n "$(curl https://ipinfo.io/ip -s)"
}
get_previous_service_ip() {
local access_list_endpoint=`get_access_list_endpoint`
local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
| jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
'.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`
echo "$previous_ip"
}
whitelist_service_ip() {
local current_service_ip="$1"
local comment="Hosted IP of [$SERVICE_NAME] [set#$(date +%s)]"
if (( "${#comment}" > 80 )); then
echo "comment field value will be above 80 char limit: \"$comment\""
echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
echo "change comment format or service name then retry. exiting to avoid mongo API failure"
exit 1
fi
echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""
response=`make_mongo_api_request \
'POST' \
"$(get_access_list_endpoint)?pretty=true" \
"[
{
\"comment\" : \"$comment\",
\"ipAddress\": \"$current_service_ip\"
}
]" \
| jq -r 'if .error then . else empty end'`
if [[ -n "$response" ]];
then
echo 'API error whitelisting service'
echo "$response"
exit 1
else
echo "whitelist request successful"
echo "waiting 60s for whitelist to propagate to cluster"
sleep 60s
fi
}
delete_previous_service_ip() {
local previous_service_ip="$1"
echo "deleting previous service IP address of [$SERVICE_NAME]"
make_mongo_api_request \
'DELETE' \
"$(get_access_list_endpoint)/$previous_service_ip"
}
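One edge case the delete helper above doesn't cover (worth checking against the Atlas API docs): if an access-list entry is a CIDR block rather than a single IP, the `/` has to be percent-encoded before it is placed in the DELETE URL path. A minimal sketch:

```shell
#!/usr/bin/env bash
# Percent-encode the "/" of a CIDR entry for use in a URL path segment.
# Plain IPv4 addresses pass through unchanged.
urlencode_entry() {
  printf '%s' "$1" | sed 's|/|%2F|g'
}

urlencode_entry "203.0.113.7"; echo    # prints 203.0.113.7
urlencode_entry "10.0.0.0/24"; echo    # prints 10.0.0.0%2F24
```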
set_mongo_whitelist_for_service_ip() {
local current_service_ip=`get_service_ip`
local previous_service_ip=`get_previous_service_ip`
if [[ -z "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] has not yet been whitelisted"
whitelist_service_ip "$current_service_ip"
elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
echo "service [$SERVICE_NAME] IP has not changed"
else
echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
delete_previous_service_ip "$previous_service_ip"
whitelist_service_ip "$current_service_ip"
fi
}
check_for_deps
set_mongo_whitelist_for_service_ip
# run CMD
exec "$#"

How to set up cron using curl command?

After an Apache rebuild my cron jobs stopped working.
I used the following command:
wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
Now my DC support suggests I change from wget to curl. What would be the correct command in this case?
-O - is equivalent to curl's default behavior, so that's easy.
-q is curl's -s (or --silent)
--retry N will substitute for wget's -t N
All in all:
curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl
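One caveat when moving this into cron: wget's `-t 1` means one attempt in total, while curl's `--retry 1` means one extra attempt after a transient failure, so the closest equivalent is actually to drop `--retry` entirely. Adding `-f` also makes curl exit non-zero on HTTP error responses, so cron can notice failures. A possible crontab entry (the schedule and path are placeholders):

```
# run every 15 minutes; -f: fail on HTTP errors, -sS: silent but still show errors
*/15 * * * * /usr/bin/curl -fsS http://example.com/cgi-bin/loki/autobonus.pl > /dev/null
```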
try running the command with the full path of wget:
/usr/bin/wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
you can find the full path with:
which wget
also, check whether you can reach the destination domain with ping or other methods:
ping example.com
Update:
based on the comments, this seems to be caused by the following line in /etc/hosts:
127.0.0.1 example.com #change example.com to the real domain
It seems that your options are restricted: on the server where the cron should run, the domain is pinned to 127.0.0.1, but the virtual host configuration does not work with that address.
What you can do is to let wget connect by IP but send the Host header so that the virtual host matching would work:
wget -O - -q -t 1 --header 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl
Update
Also, you probably don't need to run this through the web server, so why not just run:
perl /path/to/your/script/autobonus.pl