We have a Java application running on Bluemix that is supposed to submit files over FTP to a server located on the intranet.
Everything works as expected when executing the application locally, but something goes wrong when the application tries to submit files over the Secure Gateway.
The gateway has a destination configured for port 21. Looking through the logs, we can see that the application is able to connect to the server and execute some commands there, but it fails when it comes to the actual file submission (with a timeout in passive mode, and a "connection is closed" error in active mode).
Passive attempt results:
Active attempt results:
We are able to use the gateway to connect to an external DB2 instance successfully.
Is some additional configuration required? Is FTP possible at all over the Secure Gateway?
This question was also asked on dW Answers at the following URL: https://developer.ibm.com/answers/questions/386433/ftp-over-secure-gateway-on-bluemix.html
As stated in response to that question, SFTP doesn't run over port 21; it runs over port 22.
Answer found at: https://developer.ibm.com/answers/questions/386433/ftp-over-secure-gateway-on-bluemix.html
"you need to define two secure gateway destinations - one for command
port 21 and one for data port, which will depend on your connection
mode."
I am looking for a way to incorporate a command line interface into my website. Specifically, I have two servers, one running a Linux distro and the other Windows. People can request accounts, and if I approve them they get a user partition on one of the servers.
They can then sign in on the website and access the servers through a command line interface. I saw a couple of repos that do something similar for Amazon EC2 servers, but I was wondering if there is anything more general?
You can use shellinabox. This runs a daemon on the server and can be accessed through a specified port. You simply have to enter the IP of your server and the port number and you can log in over a browser.
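For example, a rough sketch (the exact flags vary between versions and distributions, so check shellinaboxd --help):

# Serve a login shell over HTTP on port 4200 (SSL disabled here only for brevity)
sudo shellinaboxd --port=4200 --disable-ssl

Then browse to http://<server-ip>:4200 and log in with a normal account on that server.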
I've recently purchased a cloud server which has a public IP, and I am using it to host an XMPP server.
My first task was to ensure my users connected using my subdomain, for example m.chat.com.
In my configuration I have the following:
%% Hostname
{hosts, ["m.chat.com"]}.
I then created an admin user with that domain.
In parallel, I created the following DNS record with my hosting provider, HostGator, for my subdomain m.chat.com:
Name TTL Class Type Record
m.chat.com 14400 IN A [IP of the server]
One thing that puzzled me was my ability to access the ejabberd web admin console. This was achieved via [IP of the server]:5280/admin; however, I could not access it via m.chat.com:5280/admin.
That aside, inside the web console, under "Virtual Hosts" I could see the host "m.chat.com". I created a user "user#m.chat.com" and tried to connect via Adium.
Inside Adium, simply typing in user#m.chat.com with the password did not work. Instead, I also had to specify the "Connect server", which in this case was the [IP of the server].
It connected fine, and I registered other users to check that everything is working, and it is.
Then I thought I'd go back to the ejabberd configuration and start messing around. I changed the hostname to the following:
%% Hostname
{hosts, ["m.chat.com", "facebook.com"]}.
I registered a user with that domain and restarted ejabberd. Upon checking the web console, to my surprise, I could see the Virtual host "facebook.com". I tested this user in Adium with the [IP of the server] defined in the "Connect server" section and it connected fine. I asked other people with their own internet connections to use this account on their PCs and they were able to connect too.
Story over. My question to everyone is: how is this possible? Am I missing something? Is there no domain authentication? After searching online, it seems you can even use fake domains.
If I am to operate my own service in the future (iOS chat app) I do not want anyone using my domain names with their own public servers.
Can someone shed some light on this?
Thanks!
Edit: A second question. Preferably, I do not want to have to define the "Connect Server" when using a client. I would like the client to recognise the #m.chat.com domain and establish a connection to the server's IP automatically. Have I configured my DNS record correctly? For anyone else using HostGator, is there an additional task I must do?
Edit: I can now access the web console via m.chat.com:5280/admin, and I no longer have to specify the Connect server when using a client. I didn't do anything; I think it was a case of HostGator updating the DNS, which they say usually takes about 4 hours. However, I am still slightly puzzled as to why I can create accounts with the facebook.com domain. I understand that because I cannot access the DNS admin for that domain I cannot create any records, but that does not prevent me from using the domain and just specifying a Connect server.
Your initial problems (unable to access the server by using m.chat.com) were almost certainly DNS issues, and it seems you have isolated that down to the time taken to update the record.
Your second question, about the fact that you can name virtual hosts without restriction, is simple but interesting. What makes you think there should be any kind of restriction? It would be like you dictating that I can't save "m.chat.com" in a file on my disk, or that I can't send "m.chat.com" in a message across the internet.
This is why DNS exists and is structured the way it is. Although I can tell my server that it hosts facebook.com, nobody will connect to it because the DNS record for facebook.com does not point at my server (users generally don't set the "connect host" manually). Which begs the question... why would I want to tell my server it hosts facebook.com, and if I did, why should Facebook care?
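For XMPP in particular, this lookup is done through SRV records: a client logging in as someone@m.chat.com queries _xmpp-client._tcp.m.chat.com to discover which host and port to connect to, which is also what removes the need to fill in a "Connect server" by hand. Records of roughly this shape would do it (the TTL is illustrative; 5222 and 5269 are the standard client and server-to-server ports):

_xmpp-client._tcp.m.chat.com. 14400 IN SRV 0 5 5222 m.chat.com.
_xmpp-server._tcp.m.chat.com. 14400 IN SRV 0 5 5269 m.chat.com.

Conversely, no client resolving facebook.com will ever be sent to your machine, because you do not control facebook.com's DNS.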
An additional, but relevant, identity layer on top of DNS is certificates, which clients should validate against the virtual host name regardless of any "connect host" that has been set. Since you cannot obtain a valid certificate for facebook.com, clients should generally pop up warnings or refuse to connect at all; if they don't, they are probably not validating the certificate correctly.
I need to deploy an Azure Worker Role with an input endpoint on port 21 so that it accepts incoming FTP connections. The goal is to be able to connect to the worker role with an FTP client like FileZilla and access Azure Blob storage.
For this I implemented FTP commands like LIST, RETR, STOR, PORT, USER and PASS. All of these work fine in active mode.
But when I switch to passive mode (the client sends a PASV command to the worker role), I run into a problem. Since I am new to Azure, I am not able to trace it. From a few blogs I learned that because Azure worker roles sit behind the load balancer, passive mode needs extra configuration. The blogs I found describe manually configuring a web role for FTP. Since I am working on a worker role, does the configuration differ, and how can I handle it in code? Moreover, since we cannot know in advance which VM the role will be deployed to, how can I handle that configuration?
Ways I tried:
1. In the Azure worker role, I set the following endpoints:
Name        Type    Protocol  Port
FTP         Input   tcp       21
Endpoint1   Input   tcp       1025
Initially, in Start(), I had this line of code:
// Listen on the instance endpoint that backs the public "FTP" input endpoint (port 21)
TcpListener server = SocketHelpers.CreateTcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["FTP"].IPEndpoint);
and on PASV I had the following:
// Listen on the second endpoint (port 1025) and report that address back to the client
TcpListener server = SocketHelpers.CreateTcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint);
so that it listens on the new port (1025), which is then sent back to the client. While sending the response back to the client, I got the following exception:
SocketErrorCode is 10053 and SocketErrorDesc: System.Net.Sockets.SocketError.ConnectionAborted
Unable to write data to the transport connection: An established connection was aborted by the software in your host machine.
2. The other way is to get the external IP address using http://checkip.dyndns.org/. If I get the IP address from that, do I need to get the port in code using
RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint?
I am really confused by the Azure side of this and the FTP configuration.
I went through the following articles but could not find out how to programmatically configure a worker role (setting the port range and retrieving it from code) to work in passive mode.
http://www.itq.nl/blogs/post/Walkthrough-Hosting-FTP-on-IIS-75-in-Windows-Azure-VM.aspx
http://angelolaris.blogspot.com/
Regards,
Vivek
First, I would like to confirm whether or not you are also starting the listener as below:
TcpListener myPortListener = new TcpListener(RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["MY_PORT"].IPEndpoint);
myPortListener.Start();
Next, once you have the above code in your worker role, the port starts accepting incoming requests, and any application bound to that IP/port will receive the packets.
If you really want to understand how to get this working in your worker role, follow that guidance to set it up in a web role first, and then try to replicate the same configuration in your worker role. It is a little complex, but once you understand how things work you will be able to implement it yourself.
Also, your requirement is not entirely clear to me: if your data is located in Azure Blob storage, you can connect to Blob storage directly from the worker role and access the content, so why add FTP connectivity and make things complex? If you revisit your application architecture, you may not need to do this work at all.
I have an application that can connect to the Principal, but can't connect to the Mirror during a failover.
(Note to moderator: please let me know if this question is more appropriate for serverfault. I posted it here because I found more questions similar to this issue than on serverfault.)
This is the error I receive when my application attempts to connect to the Mirror after a failover:
Named Pipes Provider: Could not open a connection to SQL Server [53].
Cannot open database "MY_DB_NAME" requested by the login. The login failed.
I am familiar with the fact that when initially connected to the Principal, the name of the Mirror server is cached to be used during the failover and that the failover partner I specify in my connection string is only used if the initial connection to the Principal fails.
This clearly describes the problem I'm having:
http://blogs.msdn.com/b/spike/archive/2010/12/15/running-a-database-mirror-setup-with-the-sqlbrowser-service-off-may-produce-unexpected-results.aspx
...but the SQL Browser Service is running and I can't figure out why the name won't resolve when connecting to the mirror.
I'm assuming there is a service that must be running to enable NetBIOS name resolution that is not running, because this is what I see in WireShark consistently without a response from the Mirror:
Source Destination Protocol Length Info
10.200.3.111 10.200.5.255 NBNS 92 Name query NB SQL-02-SVR-<00>
Question 1: What could be causing the problem? ;-)
Question 2: I really don't want to enable NetBIOS (for security reasons) and I'm using IP addresses (no FQDNs) in the mirror configuration and in the connection string. Given the caching behavior of the mirror partner when connecting to the Principal, is there a way to force TCP/IP to be used so the value that is cached is the IP address and not the name? Do I need to run the SQL Server Browser/Computer Browser services?
The configuration:
App is Delphi XE2 using SDAC 6.5.9 (I don't think the issue is related to the component I'm using, because it works in other installations with mirroring without issues)
SQL Server 2012 Enterprise installed as a default instance on Principal, Mirror and Witness in a non-domain configuration using certificate authentication.
Windows Server 2008 R2 SP1 64-bit on all machines
Firewalls disabled on Principal, Mirror and Client (where app is running)
TCP/IP and Named Pipes enabled on Principal and Mirror
SQL Server Browser service running on Mirror
Computer Browser service running on Mirror
Mirroring is configured for automatic failover with a witness and works properly (I can fail back and forth between mirror and principal without issue)
SQL Native Client 2012 installed on Client machine
Same app login (with same SID and user rights) exists on both Principal and Mirror
Correct server, failover partner, database name, user name and password verified in my app log
In the connection string, the principal server is 'tcp:10.200.3.15,1433' and the failover partner is 'tcp:10.200.3.16,1433', using the SQL Native Client (an illustrative full string follows this list)
I can ping both servers from the Client machine
NetBIOS over TCP/IP has been enabled in the adapter under the WINS tab (on the Mirror and Client machines)
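For illustration, the connection string is roughly of this shape (keyword spelling shown in ADO.NET style; the exact failover-partner keyword differs between providers, e.g. the ODBC driver uses Failover_Partner, and the login details here are placeholders):

Data Source=tcp:10.200.3.15,1433;Failover Partner=tcp:10.200.3.16,1433;Initial Catalog=MY_DB_NAME;User ID=app_user;Password=<password>;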
I've been able to get the application working with mirroring on several other installations, but this one is baffling me.
I found the problem, which was that the customer had the Principal and Mirror in one VLAN and the Client(s) in another. Although the IP addressing scheme was the same, the policy for communication between the VLANs prevented broadcast messages, which is why the NetBIOS query was failing on the client. A WINS or DNS server will be implemented to resolve this issue.
However, I am still interested in an answer to my Question #2, above.
I am developing an iPhone application which communicates with a remote service over a TCP socket connection (the service actually listens on the telnet port and accepts telnet commands too). The connection is of course insecure, and all requests (with quite a bit of sensitive data, such as passwords) and responses are transmitted as plain text. My first reaction was to consider a web service with SSL, but developing a web service from scratch seems too lengthy.
Because of that, I have been thinking of using an SSH tunnel to secure the traffic. Is it possible to set up an SSH tunnel in an iPhone application (with libssh2, for example) and then use that tunnel to connect securely to the remote service? If so, how should I set up the tunnel and, most importantly, how should I connect to the remote service and send commands/receive responses? Lastly, what should I keep in mind regarding the tunnel?
EDIT: I forgot to mention that the server running the service is using Windows. SSH is achieved via Cygwin.
I am sorry if the question is too basic but this is really my first real brush with ssh.
I think you may introduce more security issues by using an SSH tunnel, because there isn't a secure way to lock down the authentication information inside the app, and if someone gets hold of that login information they could conceivably open their own SSH session and start issuing arbitrary commands. There are ways to lock down an SSH session, but I'd still be very wary of it. A web service, by contrast, acts as a "broker" between the iPhone app and the telnet service, so you can add an extra layer of protection.