Browser Hangs when receive WWW-Authenticate: Negotiate - kerberos

When IE or Chrome on Windows 7 receives a response with a "WWW-Authenticate: Negotiate" header, it hangs for a few seconds.
I would assume it is making a network request to the KDC and that the request times out. That may be a wrong assumption, though.
Does the server keytab determine which KDC the browser queries?
Is there any way to debug this?
Thanks!

To answer your first question: avoid assuming that it is timing out while finding a KDC - only a network capture can tell you that. While it may in fact be doing that, it could also be failing over to NTLM and then succeeding with that, because Kerberos is broken somewhere.
To answer your second question: no, the keytab does not determine which KDC the browser queries. There is nothing inside a keytab that would do that. I placed an image of what an example keytab looks like at the bottom of this answer for you. The KDC that gets queried is controlled by DNS, and that process would only be overridden by values set inside a C:\Windows\krb5.ini - if that file exists - and it doesn't exist on Windows by default.
To answer your last question: you can debug this using Wireshark captures. Filter on 'kerberos' in the Wireshark filter field to see what the Kerberos traffic is doing, or not doing. That will tell you all you need to know.
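For completeness, if you did want to override DNS-based KDC discovery (for example while testing), a krb5.ini along these lines would pin the client to a specific KDC. The realm and host names below are placeholders, not values from the question:

```ini
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc1.example.com
    }
```

Absent such a file, DNS SRV records decide which KDC is contacted.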

Related

Using VPS to create VPN and using the local Ip address to send (Secure) a get/websocket request

So I have a VPS (CentOS 7), and using OpenVPN I created a VPN with an address of 10.0.8.1. On my front end I connected to the VPN using OpenVPN; after connecting I get access to a websocket on 10.0.8.1, but it's not secure. I want access to wss on the same address. I have also tried using a secure domain name to connect, but it still fails: I can only connect with either http or ws, not with https or wss.
This is very trivial as far as questions go, but all in all - without telling you how to perform anything in detail - the question is WAAAY too broad to even consider answering without inevitably creating more questions than it solves.
You need to add cryptography to your websocket server, the same way a web server is able to run in HTTPS mode rather than unencrypted. I'm sure you can see the similarity between the abbreviations of the respective protocols and how they differ from their original, unencrypted/vulnerable default configuration.
http -> https
ws -> wss
Start reading up on adding an SSL certificate to your websocket server config, and then you will have a WSS connection - if all goes well, of course!
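As a minimal sketch of the idea (the certificate and key file names are placeholders you'd replace with your own), building a server-side TLS context in Python looks like this; a websocket server library would take the same context to speak wss instead of ws:

```python
import ssl

def make_tls_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    """Build a server-side TLS context from a certificate/key pair."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx

# A websocket library (e.g. the third-party 'websockets' package) would
# accept this context, turning the endpoint from ws:// into wss://:
#   websockets.serve(handler, "10.0.8.1", 443, ssl=make_tls_context("server.crt", "server.key"))
```

The same pattern applies whatever server you use: the TLS layer wraps the socket, and the websocket protocol runs unchanged inside it.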
I believe in you
p.s. - this is not the type of question that is very well received by the majority of the community. It is too broad to be of much interest to anyone: a complete, well-built, comprehensive answer isn't something that fits within the boundaries most community members work in, as there are WAY too many variables and unknowns here. Anything will most likely create more questions (of this quality) than help you or anyone else. You lack the basic knowledge needed to construct a question that sounds like anything other than "I need a full tutorial". The community doesn't provide tutorials, custom solutions, or anything that resembles a full product/service. We'd rather help solve the smaller, more precise, and clearer issues that pop up day to day in the field. Generally, when someone "talks the talk", it implies the basics are covered and an issue arose. For now, you must learn to "walk the walk", I suppose.
Everyone wore the same shoes at some point or another, and good memories come from remembering such things from when we first started playing with the angry pixies in the wall socket!
Cheers!

Encrypted Password Accessible via API Call, Is this Secure?

I am working through some security concepts right now, and I was curious whether this method has been tried and/or whether it is safe, taking into consideration that brute forcing is still possible.
Take for example a Microsoft WebAPI template in Visual Studio, where you access an endpoint using a GET request.
The endpoint would be accessible by any user/application.
The string value that a user/application would get from this endpoint would be the password they need, but encrypted using a "KeyValue".
After TLS transmission of this encrypted value, the user/application would decrypt the string using their "KeyValue".
Is this a secure practice?
Thanks for indulging me and look forward to your responses.
EDIT: Added Further Clarification with Image to Help Illustrate
Suppose the following two scenarios:
1. Communication between Server and Client
a. Your Server serves the Client application with an encrypted password.
b. The Client can request any password.
c. The passwords are encrypted with a shared Key that is known by both server and client application
As James K Polk already pointed out:
A knowledgeable attacker can and will analyse your deployed application and at some point will find your hardcoded decryption key ("KeyValue"). What prevents him from requesting every password stored on the server?
Rule of thumb here would be: "Do not trust the client side."
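To make the flaw concrete, here is a deliberately simplified toy sketch (XOR stands in for whatever real cipher you'd use; the names are made up for illustration). The cipher doesn't matter: the client must ship the key, so anyone who extracts it from the binary decrypts everything the endpoint serves:

```python
import itertools

HARDCODED_KEY = b"KeyValue"  # shipped inside the client binary - extractable

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; a stand-in for any symmetric cipher
    return bytes(b ^ k for b, k in zip(plaintext, itertools.cycle(key)))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# The server "protects" a password before sending it over the wire...
wire_value = toy_encrypt(b"s3cret-password", HARDCODED_KEY)

# ...but an attacker who pulled the key out of the client recovers it trivially:
assert toy_decrypt(wire_value, HARDCODED_KEY) == b"s3cret-password"
```

Swapping XOR for AES changes nothing about this: the weakness is key distribution, not cipher strength.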
2. Communication between Server and Server
a. You have 2 server applications. Application A is acting as some kind of database server. Application B is your Back-End for a user application of some kind.
b. Application A serves passwords to any requester, not only Server B, with no authentication whatsoever.
c. Confidentiality is guaranteed through a shared and hard-coded Key.
I think you are trying to overcomplicate things hoping that no one is able to piece together the puzzle.
Someone with enough time and effort might be able to get information about your server setup and/or obtain the code of Application B, which again reduces to scenario 1. Another point is that there are enough bots out there randomly scanning IPs and checking responses. Application A might be found, and even though the scanners do not have the shared key, they might piece together the purpose of Application A and make this server a priority target.
Is this a safe practice?
No. It is never a good idea to give away possibly confidential information for free. Encrypted or not. You wouldn't let people freely download your database would you?
What you should do
All authentication/authorization (for example a user login, which is what I expect your reason for exchanging passwords is) should be done on the server side, since you're in control of that environment.
Since you didn't tell us what you're actually trying to accomplish I'd recommend you read up on common attack vectors and find out about common ways to mitigate these.
A few suggestions from me:
Communication between 2 End-points -> SSL/TLS
Authorization / Authentication
Open Web Application Security Project and their Top 10 (2017)
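For the login case mentioned above, the standard server-side pattern is to store only a salted hash and verify candidates on the server, so the password never needs to be handed out at all. A minimal sketch using only the Python standard library (parameters like the iteration count are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a PBKDF2 hash; store (salt, digest) server-side, never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
assert verify_password("correct horse", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The client sends its password attempt over TLS; the server answers yes or no. No decryptable password ever crosses the wire or sits in a client binary.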

C Programming - Sending HTTP Request

My recent assignment is to make a proxy in C using socket programming. The proxy only needs to support HTTP/1.0. After several hours of work, I have made a proxy that can be used with Chromium. Various websites load fine, such as Google and several .edu websites; however, many websites give me a 404 error for page not found (these links work fine when not going through my proxy). These 404 errors even occur on the root path "/" of a site, which doesn't make sense.
Could this be a problem with my HTTP request? The HTTP request sent from the browser is parsed for the HTTP request method, hostname, and port. For example, if a GET request is parsed from the browser, a TCP connection is established to the hostname and port provided, and the HTTP GET request is sent in the following format:
GET /path/name/item.html HTTP/1.0\r\n\r\n
This format works for a small number of websites, but a 404 error is returned for the rest. Could this be the problem? If not, what else could possibly be causing it?
Any help would be greatly appreciated.
One likely explanation is that you've designed an HTTP/1.0 proxy, whereas just about any website on a shared host will only work with HTTP/1.1 these days (well, not quite, but I'll get to that in a second).
This isn't the only possible problem by a long way, but you'll have to give an example of a website which is failing like this to get some more ideas.
You seem to understand the basics of HTTP: the client makes a TCP connection to the server and sends an HTTP request over it, which consists of a request line (such as GET /path/name/item.html HTTP/1.0) and then a set of optional header lines, all separated by CRLF (i.e. \r\n). The whole lot is ended with two consecutive CRLF sequences, at which point the server at the other end matches up the request with a resource and sends back an appropriate response. Resources are all identified by a path (e.g. /path/name/item.html), which could be a real file or a dynamic page.
That much of HTTP has stayed pretty much unchanged since it was first invented. However, think about how the client finds the server to connect to. What you give it is a URL, like this:
http://www.example.com/path/name/item.html
From this it looks at the scheme which is http, so it knows it's making a HTTP connection. The next part is the hostname. Under original HTTP the assumption was that each hostname resolved to its own IP address, and then the client connects to that IP address and makes the request. Since every server only had one website in those days, this worked fine.
As the number of websites increased, however, it became difficult to give every website a different IP address, particularly as many websites were so simple that they could easily be shared on the same physical machine. It was easy to point multiple domains at the same IP address (the DNS system makes this really simple), but when the server received the TCP request it would just know it had a request to its IP address - it wouldn't know which website to send back. So, a new Host header was added so that the client could indicate in the request itself which hostname it was requesting. This meant that one server could host lots of websites, and the webserver could use the Host header to tell which one to serve in the response.
These days this is very common - if you don't use the Host header then a number of websites won't know which site you're asking for. What usually happens is they assume some default website from the list they've got, and the chances are this won't have the file you're asking for. Even if you're asking for /, if you don't provide the Host header then the webserver may give you a 404 anyway, if it's configured that way - this isn't unreasonable if there isn't a sensible default website to give you.
You can find the description of the Host header in the HTTP RFC if you want more technical details.
Also, it's possible that websites just plain refuse HTTP/1.0 - I would be slightly surprised if that happened on so many websites, but you never know. Still, try the Host header first.
Contrary to what some people believe there's nothing to stop you using the Host header with HTTP/1.0, although you might still find some servers which don't like that. It's a little easier than supporting full HTTP/1.1, which requires that you understand chunked encoding and other complexities, although for simple example code you could probably get away with just adding the Host header and calling it HTTP/1.1 (I wouldn't suggest this is adequate for production code, however).
Anyway, you can try adding the Host header to make your request like this:
GET /path/name/item.html HTTP/1.0\r\n
Host: www.example.com\r\n
\r\n
I've split it across lines just for easy reading - you can see there's still the blank line at the end.
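To make the exact byte sequence unambiguous, here's a small sketch in Python (host and path are placeholders) that builds and sends the same request over a raw socket - it mirrors what your C proxy needs to emit:

```python
import socket

def build_request(host: str, path: str) -> bytes:
    # Request line, Host header, then the blank line terminator; CRLF throughout
    return (
        "GET " + path + " HTTP/1.0\r\n"
        "Host: " + host + "\r\n"
        "\r\n"
    ).encode("ascii")

def fetch(host: str, path: str = "/", port: int = 80) -> bytes:
    """Send the request and read the raw response until the server closes."""
    with socket.create_connection((host, port)) as s:
        s.sendall(build_request(host, path))
        chunks = []
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)
```

In your C code the equivalent is simply including `Host: <hostname>\r\n` between the request line and the final `\r\n` before calling send().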
Even if this isn't causing the problem you're seeing, the Host header is a really good idea these days, as there are definitely sites that won't work without it. If you're still having problems, then give me an example of a site which doesn't work for you and we can try to work out why.
If anything I've said is unclear or needs more detail, just ask.

How to cancel a persistent connection using NSURLConnection?

Is it possible to destroy a persistent connection that has been created with NSURLConnection? I need to be able to destroy the persistent connection and do another SSL handshake.
As it is now, calling [conn cancel] leaves a persistent connection behind that gets used with the next connection request to that host, which I don't want to happen.
As it turns out, I believe the Secure Transport TLS session cache is to blame.
I also asked the question on the Apple developer forums and got a response from an Apple person. He pointed me to this Apple sample code readme, where it says:
At the bottom of the TLS stack on both iOS and Mac OS X is a component known as Secure Transport. Secure Transport maintains a per-process TLS session cache. When you connect via TLS, the cache stores information about the TLS negotiation so that subsequent connections can connect more quickly. The on-the-wire mechanism is described at the link below.
http://en.wikipedia.org/wiki/Transport_Layer_Security#Resumed_TLS_handshake
This presents some interesting gotchas, especially while you're debugging. For example, consider the following sequence:
You use the Debug tab to set the TLS Server Validation to Disabled.
You connect to a site with a self-signed identity. The connection succeeds because you've disabled TLS server trust validation. This adds an entry to the Secure Transport TLS session cache.
You use the Debug tab to set the TLS Server Validation to Default.
You immediately connect to the same site as you did in step 2. This should fail, because of the change in server trust validation policy, but it succeeds because you never receive an NSURLAuthenticationMethodServerTrust challenge. Under the covers, Secure Transport has resumed the TLS session, which means that the challenge never bubbles up to your level.
On the other hand, if you delay for 11 minutes between steps 3 and 4, things work as expected (well, fail as expected :-). This is because Secure Transport's TLS session cache has a timeout of 10 minutes.
In the real world this isn't a huge problem, but it can be very confusing during debugging. There's no programmatic way to clear the Secure Transport TLS session cache but, as the cache is per-process, you can avoid this problem during debugging by simply quitting and relaunching your application. Remember that, starting with iOS 4, pressing the Home button does not necessarily quit your application. Instead, you should quit the application from the recent applications list.
So, based on that, a user would have to either kill their app and restart it or wait more than 10 minutes before sending another request.
I did another Google search with this new information and found this Apple Technical Q&A article that matches this problem exactly. Near the bottom, it mentions adding a trailing '.' to domain names (and hopefully IP addresses) in requests in order to force a TLS session cache miss (if you can't modify the server in some way, which I can't), so I am going to try this and hopefully it will work. I will post my findings after I test it.
### EDIT ###
I tested adding a '.' to the end of the IP address, and the request still completed successfully.
But I was thinking about the problem in general, and there's really no reason to force another SSL handshake. In my case, the solution to this problem is to keep a copy of the last known SecCertificateRef that was returned from the server. When making another request to the server, if a cached TLS session is used (connection:didReceiveAuthenticationChallenge: was not called), we know that the saved SecCertificateRef is still valid. If connection:didReceiveAuthenticationChallenge: is called, we can get the new SecCertificateRef at that time.
Starting with OS X 10.9, NSURLSession is the solution.
First, you should use
[self.conn cancel]
and second, cancel does just what it says: it cancels the connection. If you don't want to use the NSURLConnection after that, it won't do anything further; if you do use it again, you can just set a different request, which will connect to the given server.
Hope that helps.

Please suggest a secure way to transfer data from one computer to another

I want computer "A" to be responsible for executing the command, and computer "B" to send the command to computer "A". I have some ideas about the implementation.
Way1:
Computer "A" has a while-true loop listening for computer "B"'s command; when a command is received, it executes it.
Way2:
Use an FTP server that stores information about the command.
Computer "A" has a while-true loop checking whether computer "B" has uploaded any new information about the command; if yes, it reconstructs the command and executes it. After execution, the file on the FTP server is deleted and a copy is stored on computer "A".
Way3:
This is similar to way 2, but uses a database for storage. After the command is executed, it will be marked as executed.
What is your suggestion about these 3 ways? Are they secure enough?
A generic way: ssh and scp.
Reliable secure database specific: depends on the platform: Service Broker, Oracle AQ, MQSeries.
Not so good idea: write a socket program w/o knowing anything about security.
You're assuming a trust relationship without giving any clues as to how you know that the payload from computer "B" is benign. How do you plan to prevent computer "B" from sending a task that says "reformat your hard drive after plundering it for all bank accounts and passwords"?
Way 1 amounts to writing a socket listener on computer "A" and letting computer "B" connect to it. No security here.
FTP just saves you from having to write the transport protocol.
Database for persistence instead of file store - nothing new here.
None of your options have any security, so it's hard to say which one is more secure. FTP requires that the user know the URL of your FTP server and, if you aren't allowing anonymous access, a username and password. That makes 2 more secure than 1. 2 and 3 are equally (in)secure.
All can work, but all are risky. I see little or no security here.
Without more details, all I can suggest for security is to use SSL.
Using FTP or a database server will just add needless complexity, potentially without gaining any real value.
If you want a secure solution, you will need to carefully describe your environment, risks, and attackers.
Are you afraid of spoofing? Denial of service? Information disclosure? Interception?
Way 1 has nothing to do with security.
Way 2 uses FTP and not FTPS? Anyone and their grandparents can sniff the username/password. Do not use it.
Way 3 depends on how securely you connect to the database - on both ends. Also, how can you be sure that what's inserted into the database actually came from the party you trust?
What are you really implementing here? With all due respect, it sounds like you should pick up at least an introductory book on information security.
From your examples it seems that you aren't so much interested in protecting against malicious attacks and bit manipulation; what you actually want is reliable delivery.
Have a look at a message queue solution, for example MSMQ.
It depends what kind of security you need.
If it is guaranteed delivery - any approach that stores the message and confirms the storing before deletion will do.
If it's about the identity of the sender and the receiver - you should use certificates.
If it's about securing the line - you should encrypt the message.
All of these can be achieved using WCF if you're in the Microsoft world, and there are other libraries if you're in the Linux world (you can use an HTTPS POST, for example).
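To illustrate the sender/receiver identity point in its simplest form (the names and the shared secret below are hypothetical, and a real deployment would exchange the secret out of band or use certificates instead), computer "B" can attach an HMAC so that computer "A" only executes commands signed with the key they share. This doesn't encrypt the line, but it stops a third party from injecting or altering commands:

```python
import hashlib
import hmac

# Hypothetical shared secret, exchanged out of band; never hardcode in production
SHARED_SECRET = b"exchanged-out-of-band"

def sign_command(command: bytes) -> bytes:
    """Computer B computes a tag over the command before sending it."""
    return hmac.new(SHARED_SECRET, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes) -> bool:
    """Computer A recomputes the tag and compares in constant time."""
    expected = hmac.new(SHARED_SECRET, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# B sends (command, tag); A checks the tag before executing anything
cmd = b"backup /var/data"
tag = sign_command(cmd)
assert verify_command(cmd, tag)
assert not verify_command(b"rm -rf /", tag)  # a forged command is rejected
```

Combine this (or certificates) with TLS for line encryption and you cover both of the concerns above.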