Could not register with unique URL on Red Hat 8

I had a problem where I tried to register against a locally hosted subscription server. The local network was working: I could reach the target with ping and curl, the server's IP was set up in /etc/hosts, and DNS was configured (the entry was in /etc/resolv.conf).
I was in a multi-target environment on Red Hat 8.6, on a virtual machine where I had SSH and root access, but I could not reach the virtual environment handler.
Error message:
Network error, unable to connect to server. Please see ...
The logs did not provide any extra information about the cause.
I cannot provide logs because the problem was handled by others, and I no longer have access to the environment, but I would like to be able to solve it the next time I encounter it.
Every tip is appreciated.
Thank you very much.
I tried:
subscription-manager register
subscription-manager register --name=(url with and without http)
subscription-manager register --serverurl=(url with and without http)
subscription-manager register --baseurl=(url with and without http)
I tried --username=(user) and --password=(passw) in every combination.
subscription-manager could find the site and asked for the credentials, but gave back a network error.
I tried to disable the firewall on the local side, but it did not help.
I was not able to check the server side, but I was assured it works fine.
I also tried:
subscription-manager remove-all
subscription-manager unregister
(dnf/yum returned an error that I have no registered subscription)
I expected:
To register, then be able to choose a pool, attach repos, and install packages.
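For next time, here is a minimal sketch of a registration against a local Satellite/Katello-style server; the hostname, credentials, and the /rhsm prefix below are assumptions, not values from this environment. Note that --serverurl takes host[:port][/prefix] and points at the registration API, while --baseurl points at the content delivery path; they are different endpoints:
# registration API (the /rhsm prefix is the Katello default; an assumption here)
sudo subscription-manager register --serverurl=satellite.example.com:443/rhsm --username=admin --password=secret
# the same values persist in /etc/rhsm/rhsm.conf ([server] hostname/port/prefix); check them if registration keeps failing
grep -A5 '^\[server\]' /etc/rhsm/rhsm.conf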

Related

Local web server on Windows stopped being reachable by devices on the same network

I use a local Python web server on my Windows machine. It’s simple, but good enough while in the static web page development stage. I just run it with something like this on my WSL command line:
python3 -m http.server
I can also access it on mobile devices on the same network by going to my local address, e.g. http://192.168.1.12:8000. All was good until suddenly I could no longer access it from external devices; I got a “server not responding” type of message. Also, I could clearly see that when I refreshed the page on my phone, there was no GET request in the logs.
I immediately tested on the local machine, and it was still working fine. This obviously smelled like a firewall. On Linux I'd know what to do, but it's the first time I've had to deal with this on Windows. This is what I've tried, without resolving the connection problem:
I opened the Event Viewer but could not see any obvious logs to check.
I stopped the server (CTRL+C) and started it again on another port (5000). The Windows Firewall message popped up again asking for permission for Python3 to access the “Public network” and the “Private network”. Normally I just tick the “private network”, but this time I checked both as a troubleshooting step, in case my Wi-Fi was incorrectly being considered “public”.
I went to Windows Firewall and temporarily shut it down on the private network.
I installed and tried running nmap in WSL, but it failed to run and prompted me to install the Windows version instead.
I installed and ran the Windows version of nmap, but it told me that port 5000 was open.
What is the recommended way to troubleshoot and fix this issue?
Still suspecting the firewall, I tried something new: I switched off the “public network” firewall. I tested on my mobile and the page loaded as normal again! I immediately turned the firewall back on and tested the page on my mobile once more; still fine. So the solution was to toggle the public network firewall. To make it more generic, I would toggle all firewall categories on Windows. And of course, I would make sure that the firewall stays on; this was a very quick operation.
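For the record, the same toggle can be scripted from an elevated PowerShell prompt instead of clicking through the Settings UI (a sketch using the built-in profile names):
# turn the Public-profile firewall off, re-test from the phone, then turn it back on
Set-NetFirewallProfile -Profile Public -Enabled False
Set-NetFirewallProfile -Profile Public -Enabled True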
I thought I’d put this here rather than ServerFault or SuperUser as it could potentially be more useful to developers, and it took a precious hour of my time. I still don’t know why it stopped working on its own in the first place. Better troubleshooting steps or suggestions are welcome, but I probably won’t be able to verify it as I don’t know how to purposely induce the issue.
Another solution that worked on another occasion was to delete all instances of Python 3.8 from the list of allowed apps (I don't know why Windows shows the same app multiple times), then (re)start the Python server and allow it through when the firewall prompt pops up again.
In Windows Firewall you have four options to configure your local web server when creating a new inbound connection rule:
1. Program
2. Port
3. Predefined
4. Custom
Choose Port, use the TCP protocol, and enter your custom port.
Allow the connection.
Tick all three profiles: Domain, Private, and Public.
Enter a name.
That's all.
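The equivalent rule can also be added in one line from an elevated prompt (a sketch; the rule name and port are examples matching the dev server above):
netsh advfirewall firewall add rule name="Python dev server" dir=in action=allow protocol=TCP localport=8000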

How to Confirm PostgreSQL on Ubuntu VM is communicating with External Server for Updates

I have an Ubuntu VM installed on a client's VMware system. Recently, the client's IT informed us that his firewall has been detecting consistent potential port scans to our VM's internal IP address (coming from 87.238.57.227). He asked if this was part of a known package update process on our VM.
He sent us a firewall output where we can see several instances of the port scan, but there are also instances of our Ubuntu VM trying to communicate back to the external server on port 37258 (this is dropped by the firewall).
Based on a Google lookup, the hostname of the external IP address is "feris.postgresql.org", with the ASN pointing to a European company called Redpill-Linpro. As far as I can tell, they offer IT consulting services specializing in open source software (like PostgreSQL, which is installed on our VM). I have never heard of them before, though, and have no idea why our VM would be communicating with them or vice versa. I'm also not sure if I'm interpreting the IP lookup information correctly: https://ipinfo.io/87.238.57.227
I'm looking for a way to confirm or disprove that this is just our VM pinging for a standard postgres update. If that's the case I'd like to restrict this behaviour. We would prefer to do these types of updates manually and limit the communication outside of the VM to what is strictly necessary for the functionality of our application.
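A few checks that should confirm or rule out routine apt activity (a sketch assuming a stock PGDG-style apt setup; adjust paths as needed):
# is apt.postgresql.org configured as a package source?
grep -r postgresql /etc/apt/sources.list /etc/apt/sources.list.d/
# is an apt timer or unattended-upgrades checking for updates automatically?
systemctl list-timers 'apt-*'
# watch live traffic to the IP in question while an update check runs
sudo tcpdump -nn host 87.238.57.227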
Update
I sent an email to Redpill's abuse account. They responded quickly, saying that the server should not be port scanning anyone, and if it appears that way, something is wrong.
The server is part of a cluster of machines that serves apt.postgresql.org among other postgres download sites. I don't think we have anything like Ansible or Puppet installed that would automatically check for updates, but I will look into that to make sure. I'm wondering whether Ubuntu reaching out to update the MOTD with the number of available packages would explain why our VM is trying to reach the external postgres server.
The abuse rep said that in any case there should only be outgoing connections from the VM, not incoming. He asked for some additional info, so I will keep communicating with him and try to update this post accordingly.
My communication with the client's IT dropped off so I did not get a definitive answer on this, but I'll provide some new details:
I reached out to the abuse email for Redpill-Linpro. He got back to me and confirmed the server corresponding to the detected IP address is part of a cluster that hosts postgres download sites, including apt.postgresql.org. He was surprised to learn we had detected a port scan from their server and seems eager to figure out why that is happening.
He asked if the client IT could pass along some necessary info for them to set up tracking on that server. But the client IT never got back to me. I think he was satisfied that it wasn't malicious and stopped pursuing it.
Here's one of the messages the abuse rep sent me that may be relevant:
That does look a lot like the TCP to the apt download server, yes. It's strange that your firewall reports that many incoming connections, but they could be fallout from some connection tracking that's not operating as intended. The timing appears to be matching up more or less perfectly. And there should definitely not be any ping-back connections from it.
Since you appear to be using the http version of the server (and not https), bringing the data in cleartext, they should be able to just dump the TCP connection contents and verify exactly what it does. But I bet they are going to see a number of http requests initiated by the apt client that is checking for updates.
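Since the traffic is plain http, the dump the rep describes can be done with tcpdump's ASCII output (a sketch; the interface name is an assumption):
sudo tcpdump -A -s0 -i eth0 'host 87.238.57.227 and tcp port 80'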

Sideloading Word JS Add-in developed on local Docker machine - Can't reach Add-in

I'm having trouble trying to sideload an add-in in MS Word, getting the error
'ADD-IN ERROR: A problem occurred while trying to reach this add-in.' The add-in needs to be hosted in a local Docker environment so it can be integrated with the rest of a web application.
Setup
The add-in files are hosted on a local Docker machine, accessible through both an IP address and a https://dev.local address. The add-in is reachable through Internet Explorer and Edge Chromium without any certificate errors. Whether I use the IP address or the locally mapped dev.local makes no difference: the add-in refuses to load and just crashes. I'm on Word version 2002, build 12527.20194. Another Word add-in that we host externally works fine.
What I've tried
I've been messing around with the settings in Internet Explorer (moving the sites to local zone, trusted zone, enabling and disabling the protection there).
I've upgraded Edge to Edge Chromium. I've tried to use the preview of the Edge Developer Tools, but that crashes when the error occurs.
I've tried using Fiddler and activating the runtime logging, but can't get more information on what's going wrong.
I've used the Yeoman validation on the manifest.xml and everything checks out.
I've also enabled loopback through CheckNetIsolation LoopbackExempt -a -n="microsoft.win32webviewhost_cw5n1h2txyewy"
I'm pretty much at a loss now: what can I do to get more information on what's crashing the add-in?
OK, so I finally managed to get this to work; leaving this here for anyone who might run into the same issues.
Because local sideloading did work, I figured we needed to emulate the localhost situation with Docker. So I instructed the virtual machine to forward localhost:3000 to the Docker Toolbox port 443. I also copied the SSL certificates generated by Yeoman in <userhome>/.office-addin-dev-certs over to the Nginx docker and instructed Nginx to use those SSL certificates for port 443.
I'm not entirely sure whether adapting all of the other settings (such as enabling the loopback interface and using the about:flags page to always allow https on localhost) is also necessary; maybe just emulating the webserver on localhost is enough. Hope this helps someone!
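A sketch of those two steps, assuming Docker Toolbox's default VirtualBox VM (named "default"), an nginx container named "web", and the cert filenames as generated on my machine; all three names are assumptions:
# forward localhost:3000 on the host to port 443 inside the Toolbox VM
VBoxManage controlvm "default" natpf1 "addin,tcp,127.0.0.1,3000,,443"
# copy the Yeoman dev certs into the nginx container (nginx must then be pointed at them for port 443)
docker cp ~/.office-addin-dev-certs/localhost.crt web:/etc/nginx/certs/
docker cp ~/.office-addin-dev-certs/localhost.key web:/etc/nginx/certs/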

Pyro call over WAN

I have a Pyro4 application that needs to be accessed by users globally. Users within the US can access it fine. However, a user on a PC in London has issues connecting to the server. He can ping the nameserver correctly, but gets a Pyro CommunicationError ([Errno 11004] getaddrinfo failed) when executing an actual call on the proxy.
Has anyone seen this problem?
The problem was that when I created the daemon and registered it with the nameserver, I didn't use the fully qualified hostname (aka socket.getfqdn()). As a result, the remote object could be found over the local network but not across the WAN.
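For reference, a minimal sketch of the fix on the nameserver side; the same idea applies when creating the daemon in code with Pyro4.Daemon(host=socket.getfqdn()):
# start the name server bound to the FQDN instead of the bare hostname,
# so the URIs it hands out resolve from outside the local network
python -m Pyro4.naming -n "$(hostname -f)"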

Using any/fake domains with ejabberd

I've recently purchased a cloud server with a public IP, and I am using it to host an XMPP server.
My first task was to ensure my users connected using my subdomain; as an example, m.chat.com.
In my configuration I have the following:
%% Hostname
{hosts, ["m.chat.com"]}.
I then created an admin user with that domain.
In parallel, I created the following DNS record with my hosting provider, HostGator, for my subdomain m.chat.com:
Name TTL Class Type Record
m.chat.com 14400 IN A [IP of the server]
One thing that puzzled me was my ability to access the ejabberd web admin console. This was achieved via [IP of the server]:5280/admin; however, I could not access it via m.chat.com:5280/admin.
That aside, inside the web console, under "Virtual Hosts", I could see the host "m.chat.com". I created a user "user@m.chat.com" and tried to connect via Adium.
Inside Adium, simply typing in user@m.chat.com with the password did not work. Instead, I had to also specify the "Connect server", which in this case was the [IP of the server].
It connected fine, and I have registered other users to check everything is working, and it is.
Then I thought I'd go back to the ejabberd configuration and start messing around. I changed the hostname to the following:
%% Hostname
{hosts, ["m.chat.com", "facebook.com"]}.
I registered a user with that domain and restarted ejabberd. Upon checking the web console, to my surprise I could see the virtual host "facebook.com". I tested this user in Adium with the [IP of the server] defined in the "Connect server" section, and it connected fine. I asked other people with their own internet connections to use this account on their PCs, and they were able to connect too.
Story over; my question to everyone is: how is this possible? Am I missing something? Is there no domain authentication? After searching online, it seems you can even use fake domains.
If I am to operate my own service in the future (an iOS chat app), I do not want anyone using my domain names with their own public servers.
Can someone shed some light?
Thanks!
Edit: A second question. Preferably, I do not want to have to define the "Connect server" when using a client. I would like the client to recognise the @m.chat.com domain and establish a connection to the server's IP automatically. Have I configured my DNS record correctly? For anyone else using HostGator, is there an additional task I must do?
Edit: I can now access the web console via m.chat.com:5280/admin, and I no longer have to specify the Connect server when using a client. I didn't do anything; I think it was a case of HostGator updating the DNS or something (they say it usually takes 4 hours). However, I am still slightly puzzled as to why I can create accounts with the facebook.com domain. I understand that because I cannot access the DNS admin for this domain I cannot create any records, but that does not prevent me from using the domain and just specifying a Connect server.
Your initial problems (unable to access the server by using m.chat.com) were almost certainly DNS issues, and it seems you have isolated that down to the time taken to update the record.
Your second question - about the fact that you can name virtual hosts without restriction, is simple but interesting. What makes you think there should be any kind of restriction? It would be like you dictating that I can't save "m.chat.com" in a file on my disk, or that I can't send "m.chat.com" in a message across the internet.
This is why DNS exists and is structured the way it is. Although I can tell my server that it hosts facebook.com, nobody will connect to it because the DNS record for facebook.com does not point at my server (users generally don't set the "connect host" manually). Which begs the question... why would I want to tell my server it hosts facebook.com, and if I did, why should Facebook care?
An additional, but relevant, identity layer on top of DNS is certificates, which clients should validate against the virtual host name in spite of any "connect host" that is set. Since you cannot obtain a valid certificate for facebook.com, clients should generally pop up warnings or fail to connect at all. If they don't, they're probably not validating the certificate correctly.
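For completeness, the mechanism that lets a client log in as user@m.chat.com without a manual "Connect server" is a DNS SRV record; a sketch of the standard client record and a quick way to verify it (TTL copied from the A record above, priority/weight are example values, 5222 is the standard XMPP client port):
# zone record: _xmpp-client._tcp.m.chat.com. 14400 IN SRV 0 5 5222 m.chat.com.
dig +short SRV _xmpp-client._tcp.m.chat.com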