FTP timing out on Heroku - Scala

Using Apache Commons FTPClient in a Scala application works as expected on my local machine, but always times out when running on Heroku. Relevant code:
import org.apache.commons.net.ftp.FTPClient

val ftp = new FTPClient
ftp.connect(hostname)
val success = ftp.login(username, pw)
if (success) {
  ftp.changeWorkingDirectory(path)
  // a logging statement here WILL print
  val engine = ftp.initiateListParsing(ftp.printWorkingDirectory)
  // a logging statement here will NOT print
  while (engine.hasNext) {
    val files = engine.getNext(5)
    // do stuff with files
  }
}
By adding some logging I've confirmed that the client successfully connects, logs in, and changes the working directory. But it stops when it tries to start retrieving files (either via the paged access above or unpaged with ftp.listFiles) and throws a connection timeout exception (after about 15 minutes).
Any reason the above code works fine locally but not on Heroku?

Turns out FTP has two modes, active and passive, and Apache Commons' FTPClient defaults to active. Heroku's firewall presumably doesn't allow active FTP (which is why this worked locally but not once deployed) - switching to passive mode by calling ftp.enterLocalPassiveMode() resolved the issue.
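For reference, a minimal sketch of the fixed sequence (hostname, username, pw, and path stand in for real values, as in the question):

import org.apache.commons.net.ftp.FTPClient

val ftp = new FTPClient
ftp.connect(hostname)
// Switch the data channel to passive mode after connecting and before
// any listing or transfer; the control channel is unaffected, which is
// why connect/login/cwd succeeded even in active mode.
ftp.enterLocalPassiveMode()
if (ftp.login(username, pw)) {
  ftp.changeWorkingDirectory(path)
  val engine = ftp.initiateListParsing(ftp.printWorkingDirectory)
  while (engine.hasNext) {
    val files = engine.getNext(5)
    // do stuff with files
  }
}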

Related

EventLog in production

I have a console application that writes an entry to the Windows Event Log, but it does not work on a clean machine, even though the .NET Framework is already installed.
I created an installer that is responsible for creating the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\MyLogEvent.
When I run the installed application, it goes through the whole process without throwing any errors, but nothing is saved in the Event Viewer.
A strong name has already been added.
string origen = "ErrorGeneric";
EventLogEntryType severidad = EventLogEntryType.Error;
if (!EventLog.SourceExists(origen))
{
    EventLog.CreateEventSource(origen, "MyLogEvent");
    // Source registration is not immediate; wait until it shows up.
    while (!EventLog.SourceExists(origen))
    {
        Console.Write(".");
        Thread.Sleep(1000);
    }
}
EventLog log = new EventLog() { Source = origen };
log.WriteEntry(logString.ToString(), severidad); // logString is built elsewhere
I found the error: I needed to add the event source to the installer, so that it is created in the Windows registry.
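For illustration, a minimal sketch of such an installer class (the source and log names are taken from the question; the class name is a placeholder, and this assumes a classic .NET Framework installer project run with installutil or an MSI custom action):

using System.ComponentModel;
using System.Configuration.Install;
using System.Diagnostics;

[RunInstaller(true)]
public class MyEventLogInstaller : Installer
{
    public MyEventLogInstaller()
    {
        // Creates the source under
        // HKLM\SYSTEM\CurrentControlSet\Services\EventLog\MyLogEvent
        // at install time, with the elevated rights the installer runs under.
        var logInstaller = new EventLogInstaller
        {
            Source = "ErrorGeneric",
            Log = "MyLogEvent"
        };
        Installers.Add(logInstaller);
    }
}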

react-native websocket ERR_NAME_NOT_RESOLVED stopping network requests

Every now and then our React Native app has a problem connecting to our WebSocket (we're using SockJS). When this happens it blocks all network requests and essentially stops our app from functioning.
When it occurs the console doesn't stop logging:
GET http://192.168.0.11.xip.io:8080/sock/194/noeli4ep/eventsource
net::ERR_NAME_NOT_RESOLVED
This is our connect script, which runs when the app starts; it also runs when the socket closes, so we can reconnect.
connectToServer() {
  if (!this.flagConnect) {
    this.webSocket = new SockJS(`${config.endpoint}sock`);
    this.webSocket.onopen = this.onOpen;       // Just sets this.opened to true
    this.webSocket.onmessage = this.onMessage; // Just reads the messages
    this.webSocket.onclose = this.onClose;     // Re-calls connectToServer()
    this.webSocket.onerror = this.onError;     // Console.logs the error
  }
}
We're using React Native 0.41, with SockJS on both the app and the backend (Node.js).
I found that removing xip.io fixed this issue.
So using http://<ip>:<port> directly was fine.
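In config terms, the change looked roughly like this (the IP and port are taken from the logged request above; the config shape is an assumption):

// Before: DNS lookups on the xip.io wildcard hostname would
// intermittently fail with ERR_NAME_NOT_RESOLVED.
// const config = { endpoint: 'http://192.168.0.11.xip.io:8080/' };

// After: hitting the IP directly, no name resolution involved.
const config = { endpoint: 'http://192.168.0.11:8080/' };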

Debugging a crashing language server

I apologize if I'm a bit low on details here, but the main issue is actually trying to find the problem with my code. I'm updating an older extension of my own that was based on the Language Server example (https://code.visualstudio.com/docs/extensions/example-language-server). I've run into an issue where, when I run the client part of my code using F5 and the debug window fires, I get:
The CSSLint Language Client server crashed 5 times in the last 3 minutes. The server will not be restarted.
Ok... so... here's the thing. The problems view in my extension client code shows nothing. DevTools for that Code window shows nothing.
The problems view for my server code shows nothing. DevTools, ditto.
For the Extension Developer Host instance, DevTools does show this:
messageService.ts:126 The CSSLint Language Client server crashed 5 times in the last 3 minutes. The server will not be restarted.
e.doShow # messageService.ts:126
But I can't dig into the details to find a bug. So the question is - assuming that my server code is failing, where exactly would the errors be available?
Here is what I usually do to track down server crashes (I assume your server is written in JavaScript / TypeScript).
Use the following server options:
let serverModule = "path to your server";
let debugOptions = { execArgv: ["--nolazy", "--debug=6009"] };
let serverOptions = {
  run: { module: serverModule, transport: TransportKind.ipc },
  debug: { module: serverModule, transport: TransportKind.ipc, options: debugOptions }
};
The key here is to use TransportKind.ipc. Errors that happen in the server and are printed to stdio will then show up in the output channel associated with your server (the name of the output channel is the name passed to the LanguageClient).
If you want to debug the server startup / initialize sequence you can change the debugOptions to:
let debugOptions = { execArgv: ["--nolazy", "--debug-brk=6009"] };
If the extension is started in debug mode (for example, launched from VS Code using F5), the LanguageClient automatically starts the server in debug mode. If the extension is started normally (for example, as a real extension in VS Code), the server is started normally as well.
To make this all work you need a recent version of the LSP Node npm module for both the server and the client (e.g. 2.6.x).
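To show where those options plug in, here is a hedged sketch of the client side, based on the language-server example the question links to (the server path, the 'css' document selector, and the exact constructor signature are assumptions for the vscode-languageclient version in use; the display name is what names the output channel):

import * as path from 'path';
import { ExtensionContext } from 'vscode';
import { LanguageClient, TransportKind } from 'vscode-languageclient';

export function activate(context: ExtensionContext) {
  let serverModule = context.asAbsolutePath(path.join('server', 'server.js'));
  let debugOptions = { execArgv: ['--nolazy', '--debug=6009'] };
  let serverOptions = {
    run: { module: serverModule, transport: TransportKind.ipc },
    debug: { module: serverModule, transport: TransportKind.ipc, options: debugOptions }
  };
  let clientOptions = { documentSelector: ['css'] };
  // The display name is also the output channel where server stdio errors land.
  let client = new LanguageClient('CSSLint Language Client', serverOptions, clientOptions);
  context.subscriptions.push(client.start());
}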

uWSGI, Flask, sqlalchemy, and postgres: SSL error: decryption failed or bad record mac

I'm trying to set up an application web server using uWSGI + Nginx, which runs a Flask application using SQLAlchemy to communicate with a Postgres database.
When I make requests to the webserver, every other response will be a 500 error.
The error is:
Traceback (most recent call last):
File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
context)
File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac
The above exception was the direct cause of the following exception:
sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
The error is triggered by a simple Flask-SQLAlchemy method:
result = models.Event.query.get(id)
uwsgi is being managed by supervisor, which has a config:
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
and uwsgi's config looks like:
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
The furthest I've gotten is that it has something to do with uWSGI's forking, but beyond that I'm not clear on what needs to be done.
The issue ended up being uwsgi's forking.
When working with multiple processes with a master process, uwsgi initializes the application in the master process and then copies the application over to each worker process. The problem is that if you open a database connection when initializing your application, the processes then share the same connection, which causes the error above.
The solution is to set the lazy configuration option for uwsgi, which forces a complete loading of the application in each process:
lazy
Set lazy mode (load apps in workers instead of master).
This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI’s reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.
There's also a lazy-apps option:
lazy-apps
Load apps in each worker instead of the master.
This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master’s behavior on reload.
This uwsgi configuration ended up working for me:
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
# the fix
lazy = true
lazy-apps = true
As an alternative you might dispose the engine. This is how I solved the problem.
Such issues may happen if there is a query during the creation of the app, that is, in the module that creates the app itself. If that happens, the engine allocates a pool of connections and then uWSGI forks.
By invoking engine.dispose(), the connection pool itself is closed, and new connections come up as soon as someone starts making queries again. So if you do that at the end of the module where you create your app, the new connections are created after the uWSGI fork.
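A minimal sketch of that pattern (the module layout, connection URL, and the import-time query are placeholders):

# application.py -- imported by the uWSGI master before forking
from flask import Flask
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine('postgresql://user:pw@localhost/mydb')

# Suppose some import-time setup runs a query, checking out a connection:
with engine.connect() as conn:
    conn.execute(text('SELECT 1'))

# Close the pool before uWSGI forks; each worker then opens fresh
# connections (with their own SSL state) on its first query.
engine.dispose()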
I am running a Flask app using gunicorn on Heroku. My application started exhibiting this problem when I added the --preload option to my Procfile. When I removed that option, my application resumed functioning normally.
Not sure whether to add this as an answer to this question or ask a separate question and put this as an answer there. I was getting this exact same error for reasons slightly different from the people who have posted and answered. In my setup, I was using gunicorn as a WSGI server for a Flask application. In this application, I was offloading some intense database operations to a celery worker. The error would come from the celery worker.
From reading a lot of the answers here and looking at the psycopg2 and sqlalchemy session documentation, it became apparent to me that it is a bad idea to share a SQLAlchemy session between separate processes (the gunicorn worker and the celery worker in my case).
What ended up solving this for me was creating a new session in the celery worker function, so it used a new session each time it was called, and also destroying the session after every web request, so Flask used a session per request. The overall solution looked like this:
Flask_app.py
@app.teardown_appcontext
def shutdown_session(exception=None):
    # 'session' is the SQLAlchemy session created elsewhere in the app.
    session.close()
celery_func.py
@celery_app.task(bind=True, throws=(IntegrityError,))
def access_db(self, entity_dict, tablename):
    # A fresh session per task invocation; ORM_obj is built from
    # entity_dict and tablename elsewhere.
    with Session() as session:
        try:
            session.add(ORM_obj)
            session.commit()
        except IntegrityError as e:
            session.rollback()
            print('primary key violated')
            raise e
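For completeness, the Session factory assumed above would be created per process along these lines (the connection URL is a placeholder; using Session() as a context manager requires SQLAlchemy 1.4+):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Each process (gunicorn worker, celery worker) builds its own engine
# and pool, so no connection is ever shared across a fork.
engine = create_engine('postgresql://user:pw@localhost/mydb')
Session = sessionmaker(bind=engine)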

Squeryl 0.9.5 (with Lift 2.4) not releasing database connections/pools

Following the recommended transaction setup for Squeryl, in my Boot.scala:
import java.sql.DriverManager
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import org.squeryl.adapters.H2Adapter

SquerylRecord.initWithSquerylSession(Session.create(
  DriverManager.getConnection("jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1", "sa", ""),
  new H2Adapter
))
The first startup works fine. I can connect via H2's web interface and if I use my app, it updates the database appropriately. However, if I restart Jetty without restarting the JVM, I get:
java.sql.SQLException: No suitable driver found for jdbc:h2:lift_proto.db;DB_CLOSE_DELAY=-1
I get the same result if I replace "DB_CLOSE_DELAY=-1" with "AUTO_SERVER=TRUE", or remove it entirely.
Following the recommendations on the Squeryl list, I tried C3P0:
import com.mchange.v2.c3p0.ComboPooledDataSource

val cpds = new ComboPooledDataSource
cpds.setDriverClass("org.h2.Driver")
cpds.setJdbcUrl("jdbc:h2:lift_proto")
cpds.setUser("sa")
cpds.setPassword("")

org.squeryl.SessionFactory.concreteFactory =
  Some(() => Session.create(
    cpds.getConnection, new H2Adapter()
  ))
This produces similar behavior:
WARNING: A C3P0Registry mbean is already registered. This probably means that an application using c3p0 was undeployed, but not all PooledDataSources were closed prior to undeployment. This may lead to resource leaks over time. Please take care to close all PooledDataSources.
To be sure it wasn't anything I was doing which was causing this, I started and stopped the server without calling a transaction { } block. No exceptions were thrown. I then added to my Boot.scala:
transaction { /* Do nothing */ }
And the exception was once again thrown (I'm assuming because connections are lazy). So I moved the db initialization code to its own file away from Lift:
SessionFactory.concreteFactory = Some(() =>
  Session.create(
    java.sql.DriverManager.getConnection("jdbc:h2:mem:test", "sa", ""),
    new H2Adapter
  ))
transaction {}
Results were unchanged. What am I doing wrong? I cannot find any mention of needing to explicitly close connections or sessions in the Squeryl documentation, and this is my first time using JDBC.
I found mention of the same issue here on the Lift google group, but no resolution.
Thanks for any help.
When you say you are restarting Jetty, I think what you're actually doing is reloading your webapp within Jetty. Neither the H2 database nor C3P0 will automatically shut down when your app reloads, which explains the errors you receive when Lift tries to initialize them a second time. You don't see the error when you don't create a transaction block because both H2 and C3P0 are initialized lazily, when the first DB connection is retrieved.
I tend to use BoneCP as a connection pool myself. You can configure the minimum number of pooled connections to be > 1, which will stop H2 from shutting down without the need for DB_CLOSE_DELAY=-1. Then you can use:
LiftRules.unloadHooks append { () =>
  yourPool.close() // should destroy the pool and its associated threads
}
That will close all of the connections when Lift shuts down, which should properly shut down the H2 database as well.
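A hedged sketch of that setup in Boot.scala (the pool settings are illustrative; BoneCP's shutdown() is used in place of the generic close() above, and this assumes the 0.7/0.8-era BoneCP API):

import com.jolbox.bonecp.{BoneCP, BoneCPConfig}
import net.liftweb.http.LiftRules
import org.squeryl.{Session, SessionFactory}
import org.squeryl.adapters.H2Adapter

Class.forName("org.h2.Driver") // BoneCP does not load the JDBC driver itself

val config = new BoneCPConfig
config.setJdbcUrl("jdbc:h2:lift_proto")
config.setUsername("sa")
config.setPassword("")
config.setMinConnectionsPerPartition(2) // keep connections open so H2 stays up
val pool = new BoneCP(config)

SessionFactory.concreteFactory = Some(() =>
  Session.create(pool.getConnection, new H2Adapter))

LiftRules.unloadHooks append { () =>
  pool.shutdown() // release all pooled connections when the webapp unloads
}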