FTP does not work from Mesos Docker container - Scala

I have a Scala application with Akka Streams. The flow of my application is like this:
1. Check if the file exists on the FTP server - I do this with org.apache.commons.net.ftp.FTPClient
2. If it exists, stream it via the Alpakka library (and apply some stream transformations)
My application works locally and it can connect to the server.
The problem appears when it is deployed to DC/OS/Mesos. I get this exception:
java.io.IOException: /path/file.txt: No such file or directory
I can say for sure that the file still exists there. Also, when I try to connect over ftp from inside the Docker container locally, I get something like this:
ftp> open some.ftp.address.com
Connected to some.ftp.address.com.
220 Microsoft FTP Service
Name (some.ftp.address.com:root): USER
331 Password required
Password:
230 User logged in.
Remote system type is Windows_NT.
ftp> dir
501 Server cannot accept argument.
ftp: bind: Address already in use
ftp>

Not sure if it's still helpful, but I also got my FTP client transferring data from inside a Docker container after changing the data connection to passive. Active mode requires the client to have open ports to which the server connects when returning file listing results and during data transfer. However, the client's ports are not reachable from outside the Docker container, since requests are not routed through to it (as in a NATed network).
Found this post explaining active/passive FTP connections:
https://labs.daemon.com.au/t/active-vs-passive-ftp/182
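To illustrate, here is a minimal sketch of the existence check using org.apache.commons.net.ftp.FTPClient with passive mode enabled; the host, credentials, and path are placeholders rather than values from the question:

import org.apache.commons.net.ftp.FTPClient

// Minimal sketch: check whether a file exists over FTP using passive mode.
val client = new FTPClient()
client.connect("some.ftp.address.com") // placeholder host
client.login("USER", "PASSWORD")       // placeholder credentials
// Passive mode: the client opens the data connection to the server,
// so no inbound ports on the containerized client are required.
client.enterLocalPassiveMode()
val exists = client.listFiles("/path/file.txt").nonEmpty
client.logout()
client.disconnect()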

So my problem was really weird, but I've managed to fix it this way.
Quick answer: I was using the Alpakka FTP lib this way:
Ftp
  .fromPath(url, user, pass, Paths.get(s"/path/$fileName"))
But this way it works:
val ftpSettings = FtpSettings(
  host = InetAddress.getByName(url),
  port = 21,
  credentials = NonAnonFtpCredentials(user, pass),
  binary = true,
  passiveMode = true
)
Ftp
  .fromPath(Paths.get(s"/path/$fileName"), ftpSettings)
Longer answer: I started investigating the Alpakka lib and discovered that it uses the same library that already worked for me when checking whether the file exists!
https://github.com/akka/alpakka/blob/master/ftp/src/main/scala/akka/stream/alpakka/ftp/impl/FtpOperations.scala
So I started digging, and it seems that setting passive mode to true was most likely the solution. It seemed weird at first, because I'd read that the Windows FTP server does not support passive mode; in fact, Microsoft's IIS FTP service does support PASV, it just needs a passive data port range configured and opened.
I hope someone can clarify my remaining doubts one day, but at the moment I'm happy because it works :)

Related

How to read a file on a remote server from OpenShift

I have an app (Java, Spring Boot) that runs in a container on OpenShift. The application needs to go to a third-party server to read the logs of another application. How can this be done? Can I mount the directory where the logs are stored into the container? Or do I need to use some protocol to remotely access and read the file?
The remote server is a normal Linux server. It runs an old application as a jar, which writes logs to a local folder. The application that runs in a pod (on Linux) needs to read and parse this file.
There are multiple ways to do this.
If continuous access is needed:
A watcher with polling events (the WatchService API)
A stream buffer
A file Observable with RxJava
In that case, creating NFS storage to expose the remote logs and mounting it into the pod as a persistent volume is the better fit for this approach.
Otherwise, if the access is based on polling the logs at, for example, a certain time of day, then a solution consists of using an FTP client such as Apache Commons Net's FTPClient, or an SSH client with an SFTP implementation such as JSch, a native Java library (see the sketch below).
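As a minimal sketch of the SFTP approach with JSch (in Scala, to match the rest of this page; the host, credentials, and log path are hypothetical placeholders):

import com.jcraft.jsch.{ChannelSftp, JSch}
import scala.io.Source

// Minimal sketch: read a remote log file over SFTP with JSch.
val jsch = new JSch()
val session = jsch.getSession("user", "remote.example.com", 22) // placeholders
session.setPassword("password")
// For a quick test only; verify host keys properly in production.
session.setConfig("StrictHostKeyChecking", "no")
session.connect()
val sftp = session.openChannel("sftp").asInstanceOf[ChannelSftp]
sftp.connect()
val in = sftp.get("/var/log/legacy-app/app.log") // hypothetical log path
val lines = Source.fromInputStream(in).getLines().toList
in.close()
sftp.disconnect()
session.disconnect()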

How can I set up a cell and collective in Bluemix

I'm trying to set up a cell and a collective in a WAS for Bluemix service. I've found a few steps online for a generic Liberty setup, but nothing specific to a Bluemix collective or cell. Can someone point me in the right direction?
At a high level, you should be able to do the following for a Cell:
Log in to the Admin Console as wsadmin.
Create a server.
Open all the ports on each host for each server created by running the openFirewallPorts.sh script. Below you will find the standard ports for a new server, given that only one server exists on each host. You may need to open more ports for additional servers on the same host, since ports can be unique per server. Try the following:
cd WAS_HOME/virtual/bin
export serverPorts=2810:TCP,2810:UDP,8880:TCP,8880:UDP,9101:TCP,9101:UDP,9061:TCP,9061:UDP,9080:TCP,9080:UDP,9354:TCP,9354:UDP,9044:TCP,9044:UDP,9443:TCP,9443:UDP,5060:TCP,5060:UDP,5061:TCP,5061:UDP,11005:TCP,11005:UDP,11007:TCP,11007:UDP,9633:TCP,9633:UDP,7276:TCP,7276:UDP,7286:TCP,7286:UDP,5558:TCP,5558:UDP,5578:TCP,5578:UDP
sudo ./openFirewallPorts.sh -ports $serverPorts -persist true
Start your server.
Deploy your application.
There are a few slight differences for a Liberty Collective, but again, at a high level, you should be able to try the following:
Switch your user to wsadmin, or ssh to your host using wsadmin / password.
On each host, create a server and join it to the collective. Be sure to use the full host name of the controller for the --host parameter.
cd WAS_HOME/bin
./server create server
./collective join server --host=yourhostname --port=9443 --user=wsadmin --password=xxxxxxxx --keystorePassword=yyyyyyyy
Accept the chain certificate (y/n) y
Save the output from each join so you can paste it into each host's application server.xml file before deploying your application.
Install the features required by your application on each host. The features listed below are an example.
cd /opt/IBM/WebSphere/Liberty/bin
./featureManager install --acceptLicense ejblite-3.2 websocket-1.0 jsp-2.3 jdbc-4.1 jaxrs-2.0 cdi-1.2 beanValidation-1.1
NOTE: Output from this command will contain messages similar to:
chmod: changing permissions of
`/opt/IBM/WebSphere/Liberty/bin/featureManager': Operation not
permitted
This is OK. You should see this message upon completion:
Product validation completed successfully.
Update your application's server.xml file with the information saved in Step 2.
Start your server.
Deploy your application.
Verify your application is reachable at http://yourhostname:9080/appname.
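For example, from any machine that can reach the host (using the same placeholder host name as in the join command above):

curl http://yourhostname:9080/appname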

Proxy setting in gsutil tool

I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind a corporate proxy, I receive an error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I tried several times to set the proxy settings in the .boto file, but all to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
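For reference, here is a minimal sketch of the relevant .boto entries; the host, port, and credentials are placeholders, and proxy_user / proxy_pass can be omitted if your proxy does not require authentication:

[Boto]
proxy = proxy.example.com
proxy_port = 8080
proxy_user = username
proxy_pass = password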
A change was just merged on GitHub yesterday that fixes some of the proxy support. Please try it out; specifically, overwrite your current copy with this file:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem, with the proxy settings being ignored under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls" it will use the proxy settings specified in .boto for the first POST, and then switch back to attempting to route directly to Google instead of via my proxy server.
Then, if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, I think it works for the initial request again, which suggests some form of caching taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings; that is the GCE metadata service address, which is queried for credentials when running on Google Compute Engine. A grep shows that it's hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable but it appears that there is code that unsets this.

PHP slow to process SOAP request via browser but fine on the command line

I am trying to connect to an external SOAP service using PHP and have written a small PHP test script that just connects to the service and performs a simple request to check everything is working.
This all works correctly, but when I run it via a browser request it is very slow, taking somewhere in the region of 40s to establish the initial connection. When I make the same request using the exact same script on the command line, it goes through straight away.
Does anyone have any ideas as to why this might be?
Cheers
PHP caches the WSDL in /tmp. If you run from the command line first, the cache file will be owned by whatever user you ran the script as, and Apache won't be able to read the cache. The WSDL will then have to be downloaded and parsed on every request, which is slow.
Check the permissions of /tmp/wsdl*; a quick check is shown below.
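For example, assuming the default WSDL cache location:

ls -l /tmp/wsdl*
# if the files are owned by your CLI user, remove them so Apache can rebuild its own cache
rm /tmp/wsdl*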
Maybe the external SOAP service is trying to check your IP, and your server has ICMP allowed while your local network does not.
In any case, this question might be answered more clearly by the administrator of the external SOAP service :)
Is there a difference between the php.inis that are being used?
On a standard Ubuntu server installation:
diff /etc/php5/apache2/php.ini /etc/php5/cli/php.ini
//edit:
Another difference might be in the include paths. I had this trouble myself on a local test server: it didn't actually use the SOAP class that was supposed to be included (it didn't include anything, because the search paths weren't valid), but fell back to the built-in SoapClient class. You can compare the effective include paths as shown below.
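A quick way to compare (include_path is the standard php.ini key; for the web side, check a phpinfo() page, since it can differ from the CLI):

php -i | grep include_path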

Zend_Session SaveHandler for Memcache

I recently discovered that Zend_Session's DbTable SaveHandler is implemented in a way that is not well optimized for high performance, so I've been investigating switching over to Memcache for session management.
I found a decent pattern/class for changing the Zend_Session SaveHandler in my bootstrap from DbTable to Memcache here and added it into my web app.
In my bootstrap, I changed the SaveHandler like so:
FROM:
Zend_Session::setSaveHandler(new Zend_Session_SaveHandler_DbTable($config));
TO:
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
So, my session init looks like this:
Zend_Loader::loadClass('MyApp_Session_SaveHandler_Memcache');
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
Zend_Session::start();
// set up session space
$this->session = new Zend_Session_Namespace('MyApp');
Zend_Registry::set('session', $this->session);
As you can see, the class provided by that site integrates quickly, with a simple loadClass and SaveHandler change in the bootstrap, and it works in my local dev env without error (the web app and memcache are on the same system).
I also tested my web app hosted in my local dev env against a remote memcache server in PROD to see how it performs over the wire, and it also appears to work okay.
However, in my staging environment (which mimics production), my Zend app is hosted on server1 and memcache is hosted on server2, and it seems that nearly every other request completely bombs out with specific error messages.
The error information I capture includes the message "session has already been started by session.auto-start or session_start()", and a second, related message indicates that Zend_Session::start() got a Connection Refused, with an "Error #8 MemcachePool::get()" implicated on line 180 of the framework file ../Zend/Cache/Backend/Memcached.php.
I have confirmed that my php.ini has session.auto_start set to 0 and that the only instance of Zend_Session::start() in my code is in my bootstrap. Also, I init my Cache, Db, and Helpers before I init my Session (to make sure my Zend_Registry::get("cache") argument for instantiating my new SaveHandler is valid).
I found only a couple of valuable resources on how to successfully employ Memcache for Zend_Session, and I have also reviewed ZF's Zend_Cache_Backend and Zend_Session "Advanced Usage" docs, but I haven't been able to identify the cause of this error with Memcache, or why it won't work consistently with a dedicated/remote memcache server.
Does anyone understand this problem?
Does anyone have experience with solving this problem?
Does anyone have Memcache working in their ZF web app for session management in a way that they can recommend?
Please be sure to include any/all Zend_Session and/or Zend_Cache configurations you made or other trickeration or witchcraft you used to get this working.
Thanks!
This one just nearly exploded my head.
First, sorry for the book of a question...I wanted to paint a complete picture of the situation. Unfortunately, I missed a few key details which my wonderful coworker found.
So, once you install, most likely when you are just starting to test out the daemon, you will do this:
root# memcached -d -u nobody -m 512 -l 127.0.0.1 -p 11211
This command will start up memcached using 512MB, bound to the localhost interface and the default port 11211.
Did you see what I did there? That means it's set to only process requests sent to the LOOPBACK network interface.
ugh
My problem was, I couldn't get my web app to work with a REMOTE memcached server.
So, when you actually want to fire up your memcached server to accept requests from remote systems, you execute something like the following:
root# memcached -d -u nobody -m 512 -l 192.168.0.101 -p 11211
This fixed my problem. This starts my memcached daemon, setting it to use 512MB bound to IP 192.168.0.101 and listening on the default port 11211.
Now, any requests SENT to that IP and Port will be accepted by the server and handled as you might expect.
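As a quick sanity check from the web app host (assuming netcat is available), you can confirm that the daemon now answers on the remote interface:

echo stats | nc 192.168.0.101 11211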
Here's a networking doc reference...RTFM...a second time!