I have two machines on my network. I installed ejabberd on one of the machines (Machine A) and registered two users there. I have Pidgin running on both machines. I signed into Pidgin on Machine A and was able to sign in. When I tried to sign into the second account on Machine B, I got an error that says "Host Unknown". Could someone help me out?
(I checked the ejabberd logs and can see Machine B trying to access it.)
I was able to resolve this. I was putting the IP address of the server in place of the domain. Now I put localhost there, and the IP address in "Connect Server" on the Advanced tab.
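For anyone hitting the same thing, the working account setup in Pidgin looked roughly like this (the IP is a placeholder for Machine A's address):

Basic tab:     Protocol = XMPP, Username = user2, Domain = localhost
Advanced tab:  Connect server = 192.168.1.10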
We are in the process of turning off NTLM in our environment for both inbound and outbound traffic via GPO. In our lab testing we have encountered the following when blocking inbound NTLM on a remote host:
RDP'ing cross-forest to the remote host with inbound NTLM blocked generated a CredSSP error message.
Setting Encryption Oracle Remediation to either Mitigated or Vulnerable as a workaround did not work.
Turning off NLA on the remote host as a workaround does allow cross-forest RDP.
I have tried applying "Allow delegating fresh credentials" via policy on the remote host, but it still gets the CredSSP error.
I have also tried setting the policy on the remote host to use SSL for "Require use of specific security layer for remote (RDP) connections", and I still got the same CredSSP error.
What did work is if I try to RDP from the same forest to the remote host, it will allow the connection and I can confirm it is using Kerberos for RDP instead of NTLM.
Another observation: once a same-forest RDP session has worked on the remote host, cross-forest RDP to that host with inbound NTLM blocked starts working as well.
Has anyone encountered something similar before?
If so, has anyone found a way to make cross-forest RDP work on a remote host with inbound NTLM blocked, without needing to pre-authenticate to the remote host from the same forest first?
The Encryption Oracle Remediation error is a red herring because it uses the same error code as the "NTLM is not available" error. Unless you haven't patched in three years, it is very unlikely to be an actual Encryption Oracle Remediation issue. It's really just that the connection tried to fall back to NTLM and policy said no.
In all likelihood the issue is that the client can't find or communicate with a domain controller to do NLA.
The client must find the user's domain first (domain A). From there it authenticates their password. It then asks for a ticket to the machine. The machine isn't in the user's domain, so the DC creates a referral ticket pointing to where it thinks the machine is (domain B).
The referral is handed back to the client and the client tries to find a DC to where the referral is supposed to go (domain B). The client sends the referral to domain B and asks for a ticket to the machine. The domain controller either finds the machine and issues a ticket for it, or says it doesn't know and offers a referral to another domain (domain C) and you try again, or it just fails saying no machine can be found.
All of this occurs from the client's perspective, not the target machine's perspective. This happens before the client even pings the target machine (ish). This is why disabling NLA appears to resolve the issue.
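One way to watch this from a Windows client is to request the service ticket RDP would use and then dump the ticket cache:

# hostname is a placeholder for the machine you are RDP'ing to
klist get TERMSRV/targethost.domainb.example
klist

If the referral chain breaks, that first command fails on the client before the target machine is ever involved, which matches the behaviour described above.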
So there are a handful of reasons why this happens:
You used an IP address -- this is a straight-to-NTLM scenario. Kerberos doesn't do IP addresses by default. You can turn it on, but it won't scale.
Client can't communicate with a DC in the user's domain (domain A). Networking issue: the client needs line of sight to a domain controller, plus DNS.
Client can't communicate with a DC in the target machine's domain (domain B). Still a networking issue: the client needs line of sight to a domain controller, plus DNS (a quick check is sketched after this list).
You're not providing a proper fully qualified name and the user's DC can't figure out what forest it should refer to. You can enable Forest Search Order and it'll maybe help, or you can type in the fully qualified machine name.
This isn't an exhaustive list but these are the most common causes.
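For the two "can't communicate with a DC" cases, a quick check from the client is whether DNS can even locate domain controllers for each domain:

# domain names are placeholders for your two forests' DNS names
nslookup -type=SRV _kerberos._tcp.dc._msdcs.domaina.example
nslookup -type=SRV _kerberos._tcp.dc._msdcs.domainb.example

If either query returns nothing, it is a DNS or line-of-sight problem before it is anything Kerberos-specific.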
References:
https://syfuhs.net/windows-and-domain-trusts
https://syfuhs.net/how-authentication-works-when-you-use-remote-desktop
I also had a similar issue when using the DOMAIN\username login; using the UPN (username@domaine.com) worked for me.
My understanding is that using the UPN lets the client know the DNS domain name, which then allows it to discover the DC of the remote domain through DNS resolution.
NB: my setup was from a workgroup server, so not exactly the same as yours; YMMV.
I am looking for a way to incorporate a command line interface into my website. Specifically, I have two servers, one running a Linux distro and the other Windows. People can request accounts, and if I approve them they get a user partition on either of the servers.
They can then sign in on the website and access the servers through a command line interface. I saw a couple of repos that do something similar for Amazon EC2 servers, but I was wondering if there is anything more general.
You can use shellinabox. It runs a daemon on the server that can be accessed through a specified port: you simply enter the IP of your server and the port number in a browser and you can log in.
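A minimal sketch of running it, assuming the shellinabox package is installed and port 4200 is free (both are assumptions):

# serve a login shell over HTTP on port 4200; -t disables the built-in SSL, -b runs it in the background
shellinaboxd --port=4200 -t -b

Then open http://<server-ip>:4200 in a browser and log in with a normal system account.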
I've recently purchased a cloud server which has a public IP, and I am using it to host an XMPP server.
My first task was to ensure my users connected using my subdomain, for example m.chat.com.
In my configuration I have the following:
%% Hostname
{hosts, ["m.chat.com"]}.
I then created an admin user with that domain.
In parallel, I have created the following DNS record with my hosting provider, HostGator, for my subdomain m.chat.com:
Name          TTL     Class   Type   Record
m.chat.com    14400   IN      A      [IP of the server]
One thing that puzzled me was my ability to access the ejabberd web admin console. This worked via [IP of the server]:5280/admin; however, I could not access it via m.chat.com:5280/admin.
That aside, inside the web console, under "Virtual Hosts", I could see the host "m.chat.com". I created a user "user@m.chat.com" and tried to connect via Adium.
Inside Adium, simply typing in user@m.chat.com with the password did not work. Instead I also had to specify the "Connect server", which in this case was the [IP of the server].
It has connected fine and I have registered other users to check everything is working and it is.
Then I thought I'd go back to the ejabberd configuration and start messing around. I changed the hostname to the following:
%% Hostname
{hosts, ["m.chat.com", "facebook.com"]}.
I registered a user with that domain and restarted ejabberd. Upon checking the web console, to my surprise, I could see the Virtual host "facebook.com". I tested this user in Adium with the [IP of the server] defined in the "Connect server" section and it connected fine. I asked other people with their own internet connections to use this account on their PCs and they were able to connect too.
Story over - my question to everyone is: how is this possible? Am I missing something? Is there no domain authentication? After searching online, it seems you can even use fake domains.
If I am to operate my own service in the future (iOS chat app) I do not want anyone using my domain names with their own public servers.
Can someone shed some light on this?
Thanks!
Edit: A second question - preferably I do not want to have to define the "Connect Server" when using a client. I would like the client to recognise the @m.chat.com domain and establish a connection to the server's IP automatically. Have I configured my DNS record correctly? For anyone else using HostGator, is there an additional task I must do?
Edit: I can now access the web console via m.chat.com:5280/admin and I no longer have to specify the Connect server when using a client. I didn't do anything; I think it was a case of HostGator updating the DNS, which they say usually takes about 4 hours. However, I am still slightly puzzled as to why I can create accounts with the facebook.com domain. I understand that because I cannot access the DNS admin for that domain I cannot create any records, but that does not prevent me from using the domain and just specifying a Connect server.
Your initial problems (unable to access the server by using m.chat.com) were almost certainly DNS issues, and it seems you have isolated that down to the time taken to update the record.
Your second question - about the fact that you can name virtual hosts without restriction, is simple but interesting. What makes you think there should be any kind of restriction? It would be like you dictating that I can't save "m.chat.com" in a file on my disk, or that I can't send "m.chat.com" in a message across the internet.
This is why DNS exists and is structured the way it is. Although I can tell my server that it hosts facebook.com, nobody will connect to it because the DNS record for facebook.com does not point at my server (users generally don't set the "connect host" manually). Which begs the question... why would I want to tell my server it hosts facebook.com, and if I did, why should Facebook care?
An additional, but relevant, identity layer on top of DNS is certificates, which clients should validate against the virtual host name regardless of any "connect host" that has been set. Since it is not possible for you to obtain a valid certificate for facebook.com, clients should generally pop up warnings or refuse to connect at all. If they don't, they're probably not validating the certificate correctly.
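As for the edit about not wanting to set a "Connect Server" manually: the standard way to let clients discover the server for @m.chat.com on their own is to publish XMPP SRV records alongside the A record. A rough sketch, reusing the TTL from the zone above (adjust names and ports to your setup):

_xmpp-client._tcp.m.chat.com. 14400 IN SRV 0 5 5222 m.chat.com.
_xmpp-server._tcp.m.chat.com. 14400 IN SRV 0 5 5269 m.chat.com.

Clients that find no SRV records typically fall back to resolving the bare domain's A record, which is why things started working for you once the A record had propagated.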
I have a development server (Java servlet container) running on my computer inside my private network (IP range 192.168.0.0 to 192.168.255.255). This development server executes my integration testing environment. This testing environment has its own Facebook App ID. Having the server run in the 192.168.x.y range allows my colleagues to test the website, login to my local website with their Facebook accounts etc..
At https://developers.facebook.com -> in the Facebook Apps settings -> located under "Basic settings" -> in the "Website with Facebook Login" field, I have set http://192.168.2.106:8080, as this is the address-port combination that my development server binds to.
Due to DHCP, my computer now has a slightly different IP address, namely 192.168.2.109. Whenever I start up my server and then try to do anything Facebook-API related (e.g. Facebook Login), I get the following error message from Facebook:
{
    "error": {
        "message": "Invalid redirect_uri: Given URL is not allowed by the Application configuration.",
        "type": "OAuthException",
        "code": 191
    }
}
Is there a way to have a Facebook App allow a "range of IP address websites with Facebook Login"? What other solutions can you suggest?
My colleagues shall be able to also start up the development server on their own machines, with their own private network addresses. Therefore, the same Facebook App ID shall work on different machines with different IP addresses and still be accessible to everybody inside the private network.
Note that setting "Website with Facebook Login" to localhost means the login only works when the site is accessed from the same machine the development server runs on. This unfortunately prevents colleagues from accessing this development server instance.
Update
Filed bug: https://developers.facebook.com/bugs/606277079382609
If you can get away with using localhost, then there is a very simple way:
Make the Facebook app redirect to http://lvh.me:3000/ (or whatever port your server is listening on). A benevolent developer owns the lvh.me domain and has set up its DNS to point to localhost. I've tried this and that's what I used for development testing.
Similarly, you could do something like that yourself and point a DNS record of a domain you own at a local IP. I am not familiar enough with DNS to say exactly how to set that up, or whether a whole range is possible.
If you're running on Windows, you could try editing your hosts file in C:\Windows\System32\drivers\etc:
add your IP against an imaginary domain (your colleagues should do it too) and put that domain in the Facebook settings panel.
I'm not sure about this so tell me if it works :)
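Roughly like this, where the hostname is completely made up and the IP is whatever the dev server currently has; every colleague adds the same line to their own hosts file, and the Facebook "Website with Facebook Login" field would then be http://fbdev.example.test:8080:

# C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on Linux/macOS)
192.168.2.106   fbdev.example.test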
As far as I know you cannot set up an IP range in your application settings; the redirect_uri is typically meant to handle URLs with domain names, which is usually the case with public websites.
The best way to avoid this problem is to make sure your local development server always gets the same IP, which is generally good practice if you are writing a server.
There are several ways to do that depending on the network setup; here are two options:
Set up your DHCP server to always assign the same IP to your dev server's MAC address (a sketch follows after this list).
Bend the rules a little and set up your computer to claim an unused IP address. DHCP servers typically assign IPs in order (starting from 192.168.1.1 and working up towards 192.168.254.254), so have your computer claim an IP in the higher part of the range (ping the IP first to make sure it is not already in use).
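For option 1, the exact steps depend on what serves DHCP on your network. As one concrete example, if it happens to be dnsmasq (purely an assumption; many small routers use it), a static lease is a single line:

# /etc/dnsmasq.conf -- always hand 192.168.2.106 to the dev server's network card
# the MAC address is a placeholder for your machine's
dhcp-host=AA:BB:CC:DD:EE:FF,192.168.2.106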
Instead of using an IP address, use localhost.
So http://192.168.1.20 would become http://localhost
This resolves back to the local machine, whatever its IP address is. I am assuming that your development server is running on your PC, using WAMP or something similar. I draw this assumption because you state that it must run on any laptop in any network environment.
If you own a domain you could create a subdomain test.mydomain.com pointing to 192.168.2.109.
When your address should change again, you can change the entry accordingly.
There is no reason why a DNS entry could not point to an IP address in the 192.168 range. For someone outside your network it will not be of much use, because they cannot reach your IP address from the outside, but for your co-workers within the network it will work.
If a coworker wants to run the web app on his own PC, he can of course override this setting using his own hosts file.
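For example (with mydomain.com standing in for a domain you actually own), the record at your DNS provider would be an ordinary A record:

test.mydomain.com.   3600   IN   A   192.168.2.109

and a coworker running the app on their own PC could shadow it with a hosts file entry pointing at their own machine's IP (the address below is a placeholder):

192.168.2.50   test.mydomain.com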
Recently we got a new server at the office purely for testing purposes. It is set up so that we can access it from any computer.
However, today our IP got blocked by one of our other sites, saying that our IP is suspected of having a virus that sends spam emails. We learned this from the CBL: http://cbl.abuseat.org/
So of course we turned the server off to stop this. The problem is the server must be on so we can continue developing our application and access the database that is installed on it. Our normal admin is on vacation and unreachable, and the rest of us (me included) are idiots in this area.
We believe that the best solution is to stop it from connecting to the internet but still be able to access it on the LAN. If that is a valid solution, how would it be done, or is there a better way, say blocking specific ports or whatever?
I assume that this server is behind a router? You should be able to block WAN connections to the server on the router and still leave it open to accepting LAN connections. Or you could restrict the IPs that can connect to the server to the development machines on the network.
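If you cannot get into the router's admin interface while your admin is away, a host-side approximation is possible too. Assuming the server runs a recent Windows (the question does not say, so treat this as a sketch), the built-in firewall cmdlets can restrict outbound traffic to the local subnet and block everything else, which stops the spam while keeping LAN access working:

# allow outbound traffic to the local subnet only, then block all other outbound traffic
New-NetFirewallRule -DisplayName "Allow LAN outbound" -Direction Outbound -RemoteAddress LocalSubnet -Action Allow
Set-NetFirewallProfile -All -DefaultOutboundAction Block

Undo it later by deleting the rule and setting the default outbound action back to Allow.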