Why an invalid service principal name (SPN) can be created using setspn - kerberos

Today I was able to create a totally random, invalid SPN using the setspn command, and I don't understand why invalid SPNs are allowed. For example:
setspn -s RandomSvc/randomname.random.random valid_user ran successfully for valid_user in my domain (I have substituted the actual user name here, but the account is a valid user in the domain).
If I then run setspn -l valid_user, it lists this invalid entry.
I guess nobody can actually connect to this service, since it does not exist. However, if I try to add a valid SPN but mistype it, I won't notice until my application gives me an error. So why doesn't setspn do any validation (other than checking for duplicates with -s)?

The setspn command won't stop you from creating an invalid SPN, and for good reason, so believe it or not there is no actual problem here. Your definition of a valid SPN - a representation of an actual service running on an actual machine with a host name in DNS that can be reached over TCP/IP - is not going to be enforced by setspn, for reasons I am about to describe. To a KDC, an SPN does represent an actual service running on an actual machine with a host name in DNS reachable over TCP/IP, but it doesn't have to be real at the time the SPN is created.

Here's why. Have you considered the case where the service, or even the machine it will run on, will be installed later? The service, the machine it runs on, and the DNS entry for that machine don't have to be in place at the moment you create the SPN. Setspn.exe is simply a rudimentary tool that lets you create SPNs which conform to the Kerberos RFCs. It's up to you to architect things correctly; it's not going to hold anyone's hand in the process.

I work for a large company and create SPNs all the time for services, and even for machines, which do not yet exist. That way the developers or sysadmins don't have to contact me to have an SPN created after the fact. They are following a project plan, and savvy managers will have the SPNs, DNS entries, and IP addresses for machines all planned out and created ahead of time, before the OS admins get around to actually spinning up a live server. So if setspn prevented people from creating an SPN for a service which is not yet up and running, there would be some very angry system admins out there. That is why you are allowed to create "invalid" SPNs, even when you think you shouldn't be.

If you want something that does extra layers of validation not afforded by the generic setspn.exe, to catch mistakes before your application does, then you will have to create such a thing yourself. Ask yourself, though: is this worth the time for something so specific? How often are you creating SPNs?

This should all make sense to you now. I periodically run setspn -X to catch duplicates in my domain. I've never had the time, but I suppose I could also list out all the current SPNs in the domain, check whether each one is still valid, and take corrective action if it weren't. I'm sure more than a few are no longer active. I don't consider it that big of a deal.
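For illustration, here is a minimal sketch of that workflow from an elevated PowerShell prompt; the service class, host name, port, and account name below are placeholders I made up:

    # Register an SPN ahead of time for a server that does not exist yet (-S checks for duplicates)
    setspn -S MSSQLSvc/sqlfuture.corp.example.com:1433 svc_sql_future

    # Confirm what is registered on the account
    setspn -L svc_sql_future

    # Periodically sweep the domain for duplicate SPNs
    setspn -X

    # Remove an SPN that was mistyped or is no longer needed
    setspn -D MSSQLSvc/sqlfuture.corp.example.com:1433 svc_sql_future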

Related

Sophos UTM VPN not accessible

I used the Sophos UTM 9.510 ha_standalone Cloudformation template (https://github.com/sophos-iaas/aws-cf-templates/blob/master/utm/9.510/standalone.template) and used defaults where possible. I did not use an existing ElasticIP, so it created its own at (scrubbed) 50.12.12.123.
I gave it a hostname of (for example) vpn.example.com and, after creation, created an A record for vpn.example.com pointing to 50.12.12.123.
I don't have a license and just pay hourly for the AMI.
I understand that I should be able to hit https://vpn.example.com:4444 or https://50.12.12.123:4444 to see the admin panel. However, it times out and doesn't load anything.
When I deployed the stack, I got an email at the admin address I provided saying "REST daemon not running - restarted". I assume it restarted fine, since I have received no new emails and the EC2 instance is running.
Has anyone else experienced this? Is there a step I'm missing? Aside from creating the Route53 record, I thought the Cloudformation Template should just work right out of the box.
The default security groups blocked traffic. I modified one of them to accept all traffic and the dashboard became accessible. I will now refine access further.
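In case it helps someone else, a hedged example of opening just the WebAdmin port instead of all traffic (the security group ID and source CIDR below are placeholders):

    # Allow HTTPS on the UTM WebAdmin port (4444) from a single admin IP only
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 4444 \
        --cidr 203.0.113.10/32

Restricting the rule to known source addresses is preferable to opening the group to all traffic.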

Purpose of Install-ADServiceAccount

I create a group managed service account (gMSA) by running the New-ADServiceAccount command and specifying -PrincipalsAllowedToRetrieveManagedPassword. From the moment of creation I am able to use the gMSA on the computer specified in -PrincipalsAllowedToRetrieveManagedPassword.
What is the purpose of Install-ADServiceAccount? I have found many blogs where people say to run it, but nobody explains the reason for doing so.
I am not doing it and everything works fine.
Install-ADServiceAccount is used for installing/linking an MSA to the computer so that it is available to be used. This is needed because an MSA has a one-to-one relationship with a computer. It does not apply to gMSAs, which can be shared; gMSAs only require you to set the permissions using New-ADServiceAccount or Set-ADServiceAccount.
How to use gMSA: https://technet.microsoft.com/en-us/library/jj128431.aspx
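A minimal sketch of the gMSA workflow the question describes (the account name, DNS host name, and group are placeholders, and it assumes the KDS root key already exists in the domain):

    # On a management host with the Active Directory module available
    Import-Module ActiveDirectory
    New-ADServiceAccount -Name gmsa_web `
        -DNSHostName gmsa_web.corp.example.com `
        -PrincipalsAllowedToRetrieveManagedPassword "WebServers"

    # On a computer that is a member of WebServers: verify it can retrieve the managed password
    Test-ADServiceAccount -Identity gmsa_web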

Some simple questions about Kerberos

I am learning about Kerberos and I have a few questions that I couldn't find answers to online, so I want to ask you.
The questions are:
What happens when I change a user's password? What really goes on behind the scenes? Which service is used? I want to know the steps and how the KDC behaves after a password change.
Why is Kerberos named after the dog of Hades / the three-headed dog? What is the connection between them?
In a Kerberos system, how can I see the tickets I receive from the KDC?
Thank you in advance.
I only have an answer to your 2nd question.
The reference to the three-headed dog is that there are 3 different entities:
The client system
the Authentication Server
the Service Server (the thing you're trying to access)
Most authentication protocols only involve the client and server.
From "Kerberos: The definitive guide" book by Jason Garman:
The Greeks believed that when a person dies, his soul is sent to Hades to spend eternity. While all souls were sent to Hades, those people who had led a good life would be spared the eternal punishment that those who had not would have to endure. Cerberus, as the gatekeeper to Hades, ensured that only the souls of the dead entered Hades, and he ensured that souls could not escape once inside.
As the gatekeeper to Hades, Cerberus authenticated those who attempted to enter (to determine whether they were dead or alive) and used that authentication to determine whether to allow access or not. Just like the ancient Cerberus, the modern Kerberos authenticates those users who attempt to access network resources.
You can see the list of your tickets with the klist command. If you mean literally seeing the file where the tickets are stored, the command shows the path to the ticket cache as well. On *nix systems using MIT Kerberos it is /tmp/krb5cc_%{uid} by default. The command should also work on Windows, but I'm not sure whether it is installed by default.
1. What happens when I change a user's password?
They will get a new password; nothing special really. As far as I am aware it shouldn't affect an existing Kerberos ticket cache as long as the ticket is still valid. If they have to enter their password anywhere at a later point - for example, if you run the kinit command to get a ticket and enter your password - then you must use the new password.
There shouldn't be much "sync" time or anything, but it is vital that the time on your server is synced with the KDC, as Kerberos is strict about clocks being in sync. By default there is a 5 minute allowed clock skew, so a clock can be off by no more than 5 minutes or things will start failing. Typically you would handle this on Linux by running the ntpdate command to sync the clocks.
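A hedged example of both steps (the principal and NTP server below are placeholders):

    # Get a fresh ticket after the password change; enter the new password when prompted
    kinit alice@EXAMPLE.COM

    # One-off clock sync against an NTP server so the clock stays within the allowed skew
    sudo ntpdate pool.ntp.org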
1a. What really goes on behind the scenes? Which service is used? I want to know the steps and how the KDC behaves after a password change.
What happens depends on your setup, for which you have a variety of options, but here are a few of the more common ones.
The most common setup is a corporate Active Directory environment. In a basic Active Directory setup your domain controller(s) run your KDC automatically. So for this you would just reset your Active Directory user's password and be good to go; it takes care of the changes to the KDC for you.
The second is running an OpenLDAP-type environment for your users in place of Active Directory. There you would change the password in OpenLDAP and then update the password for your principal on the MIT Kerberos KDC using the kpasswd command, unless you have set up something such as pass-through authentication.
The third setup I see is an MIT Kerberos KDC with no LDAP environment whatsoever. Usually the Kerberos users are local user accounts on the operating system. In this case you would just update the Kerberos principal's password on the MIT KDC using the kpasswd command mentioned before.
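For example, a hedged sketch of resetting an MIT Kerberos principal's password (the principal and admin names are placeholders):

    # As the user, change their own principal's password
    kpasswd alice@EXAMPLE.COM

    # Or, as a Kerberos administrator, reset it via kadmin
    kadmin -p admin/admin@EXAMPLE.COM -q "change_password alice@EXAMPLE.COM"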
2. Why is Kerberos named after the dog of Hades / the three-headed dog? What is the connection between them?
To build on the previous answers, Kerberos is similar to the three-headed dog in that it performs a three-way handshake when authenticating. The three pieces are the Key Distribution Center (KDC), the client, and the server. There is a good, detailed explanation at the bottom of page 1 of Paper 476-2013, "Kerberos and SAS® 9.4: A Three-Headed Solution for Authentication" by Stuart Rogers, SAS Institute; it is slightly off in that it discusses specific software, but the specific details are there.
3. In a Kerberos system, how can I see the tickets I receive from the KDC?
If you have a ticket you can run the klist command. Append -ef (klist -ef) to see the encryption types along with any flags such as forwarded, initial, renewable, and others. See the MIT klist documentation at http://web.mit.edu/Kerberos/krb5-1.13/doc/user/user_commands/klist.html .
You can get a ticket by running the kinit command and then entering your principal's password.
You can destroy a ticket cache by running kdestroy to clear your current tickets. This won't necessarily remove them from your cache directory, though.
If you have a keytab file, you can see details about it by running klist -kt /path/to/myuser.keytab, which shows the principals the keytab is for. There will be one entry per encryption type you are using, which is why it sometimes lists the same principal multiple times. You will also see a KVNO number, the key version number; this number should always match across the entries for a given principal.
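Putting those commands together, a hedged end-to-end example (the principal and keytab path are placeholders):

    # Obtain a ticket, inspect it with encryption types and flags, then clear the cache
    kinit alice@EXAMPLE.COM
    klist -ef
    kdestroy

    # Inspect a keytab: principals, timestamps, and key version numbers (KVNO)
    klist -kt /path/to/myuser.keytab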
The answers to your questions are:
Once the password for the principal is changed, from that point on you must use the new password whenever you run kinit to get a ticket.
The name Kerberos was taken from Greek mythology; Kerberos (Cerberus) was a three-headed dog who guarded the gates of Hades. The three heads of the Kerberos protocol represent a client, a server and a Key Distribution Center (KDC).
To view the tickets you get from the KDC you can run the klist command; it will give the details of the principal, ticket lifetimes, etc.
Where the ticket cache actually lives depends on what you have configured in /etc/krb5.conf, which by default is default_ccache_name = FILE:/tmp/krb5cc_%{uid}
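For reference, a hedged /etc/krb5.conf fragment showing where that setting lives (the realm is a placeholder):

    [libdefaults]
        default_realm = EXAMPLE.COM
        default_ccache_name = FILE:/tmp/krb5cc_%{uid}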

Get the list of allowed hosts in host-based authentication

I am aware that I have to add the IP addresses of remote hosts in pg_hba.conf file and restart the PostgreSQL server for changes to take effect.
But I would like to get a list of hosts currently allowed for the host-based authentication, directly from the server that is already running.
Similar to how I can get the max_connections setting using show max_connections;, I would hypothetically imagine it to be something like show hosts; or select pg_hosts(); (neither really exists).
Is this possible?
EDIT: I understand exposing the hosts would present a security risk. But how about the psql utility invoked directly in the database server's terminal? Does it have a special command to get the list?
The psql utility at the terminal has no special ability to get that list; only the PostgreSQL server itself has access to the file.
The best way to do this (if you really must) is to create a PL/PerlU function which reads pg_hba.conf, parses it, and returns the information in the form you want. You could even build a management system for pg_hba.conf with such functions (reloading the db might get interesting, but you could handle that with a LISTEN/NOTIFY approach).
Note, however, that if you do this, your functions have a security footprint. You would probably want to revoke permission to run the functions from public and grant access to nobody else, thus requiring users to be superusers in order to run them. I would personally avoid exposing such critical information to the db unless there was a compelling reason, but I can imagine cases where it might be helpful on balance. It is certainly dangerous territory, however.
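As a rough, untested sketch of that approach (the function name is mine; it assumes the plperlu extension is installed and that a superuser creates and calls it), something along these lines would return the non-comment lines of the active pg_hba.conf:

    CREATE OR REPLACE FUNCTION pg_hba_lines() RETURNS SETOF text AS $$
        # ask the running server where its pg_hba.conf actually is
        my $rv   = spi_exec_query("SELECT current_setting('hba_file') AS path");
        my $path = $rv->{rows}[0]->{path};
        open(my $fh, '<', $path) or elog(ERROR, "cannot open $path: $!");
        while (my $line = <$fh>) {
            chomp $line;
            next if $line =~ /^\s*(#|$)/;   # skip comments and blank lines
            return_next($line);
        }
        close($fh);
        return;
    $$ LANGUAGE plperlu;

    -- keep the security footprint small: nobody but superusers can call it
    REVOKE ALL ON FUNCTION pg_hba_lines() FROM PUBLIC;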

Umbraco on Azure: can I change hostname?

I've deployed a website made with Umbraco in Windows Azure, using the Windows Azure Accelerator for Umbraco.
For development and testing I used a test hostname. Now it's time to switch to the official DNS hostname.
How can I change current hostname?
Actually, I configured the hostname at deployment time (the only way I know to do this), but I can't deploy again, since many files have been changed while working on the website on Azure.
EDIT
Let me explain: at the prompt shown in the image (during web site deployment) I used "test.mywebsite.com" as the Domain Name, and configured the real DNS.
Now the website is configured, so I'd like to make mywebsite.com point to that site.
But it isn't enough to just configure the mywebsite.com DNS! Shall I deploy again? And will I lose any of the changes I made?
I'd like to make two comments on your question:
1) In order to host your Azure application under a custom host name, you will need to sign up with a DNS provider that supports CNAME records (most do). I suggest someone like GoDaddy.com, because by default CNAME records can only resolve your "www.domainname.com" records and cannot do anything for queries where "www." is dropped from the URL. DNS providers like GoDaddy also have an option to redirect all traffic destined for "domainname.com" to a URL of your choice. This is a huge deal for Azure apps. Frankly speaking, it is somewhat disappointing that for all the PaaS and IaaS features of Azure, DNS was not included in the overall package.
2) I am a little worried when you say that you can no longer redeploy your app due to the changes made. Can you elaborate on that? Have you made changes to the application's code running on VMs in Azure without going through the redeployment process? If so, this is a huge no-no. Your VMs running in Azure are not "permanent". Microsoft and your redeployment process can (and will) re-stage those VMs to the original package at any given time. Microsoft will re-image your VMs at least once a month during their monthly OS upgrades, but they can also do so when they need to move your VM to another rack, etc. Whatever changes you make to your app must be stored either in source control before deployment or in a permanent storage facility like SQL Azure, Azure Storage, etc.
HTH
Finally, I think the answers to my questions are:
- Shall I deploy again? Yes, I must deploy again.
- Will I lose any of the changes I made? Many changes will be kept, since they are stored in the DB, but I have to do a lot of work to make the new website function!
This answer confirms my theory:
In my case, I created and uploaded a site with a name, let's say http://www.contoso.com, and then paid for a domain from a registrar, let's say http://www.example.com. When I mapped http://MyAcceleratorsService.cloudapp.net/ to my new domain ( http://www.example.com ) and tried to open that domain, I got the home page of the Accelerator and not the uploaded site.
I had to upload the site again to Azure (using UploadUmbracoSite.cmd from the Accelerator application) and, when uploading, enter the same domain name as the one I registered: http://www.example.com. Then I was able to browse my uploaded site as expected.
As for your question: upload the site again using UploadUmbracoSite.cmd (it is in the Setup folder) and enter the new domain name when requested.
Exactly what I was trying to avoid... but the only solution, I suppose.
Well, it was not easy to publish again; I got errors of many types (I suppose tied to some components that I installed after deployment and that are not installed in the newly deployed website). I'm going to work through them.
Edit
Completed my work:
- loads of different attempts, none of which worked
- CTP backup of the DB
- deleted the DB and website
- new full deploy of Umbraco
- CTP restore of the DB
Finally:
- all work on content is OK
- all work on styles, pages, and templates is lost
Changing the hostname is hard; don't use a test hostname, use the definitive hostname from the beginning.
If anyone has a suggestion, I'll be pleased to test it, anyway.
This is not really an answer to your question, but it might be a solution to your problem: use a CNAME record to make the production DNS name point to your development name, e.g. www.productionname.com would then point to www.testname.com. I am not sure if everything will just work out of the box, but it seems worth a try.
This requires that your hosting provider allows you to set up CNAME records.
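To illustrate, a hedged zone-file entry for that approach (BIND syntax, using the example names from above):

    ; make the production name an alias for the development name
    www.productionname.com.   IN   CNAME   www.testname.com.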
http://en.wikipedia.org/wiki/CNAME_record