Zend_Session SaveHandler for Memcache - zend-framework

I recently discovered that Zend_Session's DbTable SaveHandler is not implemented with high performance in mind, so I've been investigating switching over to Memcache for session management.
I found a decent pattern/class for changing the Zend_Session SaveHandler in my bootstrap from DbTable to Memcache here and added it into my web app.
In my bootstrap, I changed the SaveHandler like so:
FROM:
Zend_Session::setSaveHandler(new Zend_Session_SaveHandler_DbTable($config));
TO:
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
So, my session init looks like this:
Zend_Loader::loadClass('MyApp_Session_SaveHandler_Memcache');
Zend_Session::setSaveHandler(new MyApp_Session_SaveHandler_Memcache(Zend_Registry::get("cache")));
Zend_Session::start();
// set up session space
$this->session = new Zend_Session_Namespace('MyApp');
Zend_Registry::set('session', $this->session);
As you can see, the class provided on that site integrates quickly, with a simple loadClass and SaveHandler change in the bootstrap, and it works in my local dev env without error (web app and memcache are on the same system).
I also tested my web app, hosted in my local dev env, against a remote memcache server in PROD to see how it performs over the wire, and it also appears to work okay.
However, in my staging environment (which mimics production), my Zend app is hosted on server1 and memcache is hosted on server2, and nearly every other request completely bombs out with specific error messages.
The error information I capture includes the message "session has already been started by session.auto-start or session_start()", and a second, related error indicates that Zend_Session::start() got a Connection Refused, with "Error #8 MemcachePool::get()" implicated on line 180 of the framework file ../Zend/Cache/Backend/Memcached.php.
I have confirmed that my php.ini has session.auto_start set to 0 and that the only instance of Zend_Session::start() in my code is in my bootstrap. Also, I init my Cache, Db and Helpers before I init my Session (to make sure the Zend_Registry::get("cache") argument for instantiating my new SaveHandler is valid).
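For context, here is a minimal sketch of how a "cache" registry entry like mine might be built in the bootstrap before the session init; the host, port, and lifetime values below are placeholders rather than my actual config, and the Connection Refused raised from Memcached.php suggests that whatever host is listed in the backend's 'servers' option isn't reachable from the web server:
// Illustrative cache init in the bootstrap (placeholder values)
$frontendOptions = array(
    'lifetime'                => 1800,  // keep roughly in line with the session lifetime
    'automatic_serialization' => true,
);
$backendOptions = array(
    'servers' => array(
        array(
            'host'       => '192.168.0.101', // must be the remote memcached IP, not 127.0.0.1
            'port'       => 11211,
            'persistent' => true,
        ),
    ),
);
$cache = Zend_Cache::factory('Core', 'Memcached', $frontendOptions, $backendOptions);
Zend_Registry::set('cache', $cache);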
I found only a couple of useful resources on how to successfully employ Memcache for Zend_Session, and I have also reviewed ZF's Zend_Cache_Backend and Zend_Session "Advanced Usage" docs, but I haven't been able to identify why I get this error using Memcache or why it won't work consistently with a dedicated/remote memcache server.
Does anyone understand this problem?
Does anyone have experience with solving this problem?
Does anyone have Memcache working in their ZF web app for session management in a way that they can recommend?
Please be sure to include any/all Zend_Session and/or Zend_Cache configurations you made or other trickeration or witchcraft you used to get this working.
Thanks!

This one just nearly exploded my head.
First, sorry for the book of a question...I wanted to paint a complete picture of the situation. Unfortunately, I missed a few key details which my wonderful coworker found.
So, once you install, most likely when you are just starting to test out the daemon, you will do this:
root# memcached -d -u nobody -m 512 -l 127.0.0.1 -p 11211
This command will start up memcached, using 512MB on the localhost and the default port 11211.
Did you see what I did there? That means it's set to only process requests sent to the LOOPBACK network interface.
ugh
My problem was, I couldn't get my web app to work with a REMOTE memcached server.
So, when you actually want to fire up your memcached server to accept requests from remote systems, you execute something like the following:
root# memcached -d -u nobody -m 512 -l 192.168.0.101 -p 11211
This fixed my problem. It starts the memcached daemon, setting it to use 512MB, bound to IP 192.168.0.101 and listening on the default port 11211.
Now, any requests SENT to that IP and Port will be accepted by the server and handled as you might expect.
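A quick sanity check, using the example address above: confirm which interface memcached is actually bound to, and verify you can reach it from the web/app server (exact netstat/nc flags may vary by distro):
# on the memcached host: confirm the bound address and port
root# netstat -tlnp | grep 11211
# from the web/app server: talk to it over the wire
$ printf 'stats\r\nquit\r\n' | nc 192.168.0.101 11211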
Here's a networking doc reference...RTFM...a second time!

Related

Unable to add remote node in Rundeck 4.9.0

I'm following the docs from Rundeck; however, the only button I have under the "Sources" tab is "ResourceModelSource".
When I click that button I get a blank page.
PPS: The issue happened on the previous version too - I'm new to Rundeck, so I can't say that it EVER worked.
I tried adding a manual resources.xml in the project directory (which I had to create manually, which tells me that's another issue) and reloading RD, but that did not seem to work.
While it's not the likely cause, I'll mention it here in case it IS relevant: I'm hosting on port 4440, but I'm using nginx to forward http (not https) requests on 443 to 4440; this is due to corp net sec policy.
I'm sure it's some kind of I/O issue on the local host; however, I'm not seeing anything in the logs.
That is a known issue when you have Rundeck installed behind a proxy server; take a look at this: https://github.com/rundeck/rundeck/issues/6278. The solution is to set grails.serverURL (in the rundeck-config.properties file) to the external URL defined for Rundeck in your proxy server (e.g. grails.serverURL=http://my_domain/rundeck), then restart the Rundeck service, as sketched below.
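A minimal sketch of that change (the domain is a placeholder, and the service name can vary by install):
# rundeck-config.properties on the Rundeck host
grails.serverURL=http://my_domain/rundeck
# then restart the Rundeck service, e.g.
root# systemctl restart rundeckd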

Bootstrapping a Staging Clojure App via REPL incl. fetching dependencies

Deploying Clojure/Java apps is hard, so I had this idea yesterday that I want to understand better. If I spin up a machine that has Clojure and boot-clj installed and run boot wait repl -s -H 0.0.0.0 on the machine (let's ignore auth for now), I should be able to connect to it from my dev box and trigger the retrieval of dependencies over the wire (which will then be cached on the machine), then wire over all the source code and eval until I hit a snag, right?
Let's pretend this is a good idea. Is it possible to do this, and what are the hurdles involved? Right now I'm waiting 5 minutes for CircleCI to package up an uberjar, then fail because some Heroku token expired but all I want to do is see my code running on a staging environment so I can wire some more code and re-eval it.
The first thing that comes to mind is nREPL auth, which I see is not mentioned in any of the nREPL libraries. So let's say that's a higher-level networking concern and I'll do ACL via VPC.
Has anyone done this? Why is it a bad idea? Can you show your recipe for bootstrapping a Clojure app on a remote machine without the use of git or SSH (aside from initial REPL start)?
I should be able to connect to it from my dev box and trigger the retrieval of dependencies over the wire (which will then be cached on the machine), then wire over all the source code and eval until I hit a snag, right?
Is it possible to do this, and what are the hurdles involved?
Yes, but you need to specify -b (address server listens on) instead of -H (host to connect client to):
$ boot wait repl -s -b 0.0.0.0 -p 3000
nREPL server started on port 3000 on host 0.0.0.0 - nrepl://0.0.0.0:3000
Then connect to it however you like, for example with lein repl:
$ lein repl :connect 127.0.0.1:3000
Now you can add a dependency in the REPL and it'll be downloaded on the server/host. In the client REPL:
boot.user=> (set-env! :dependencies #(into % '[[clj-time "0.14.0"]]))
And if you're watching the server console you'll see it downloading dependencies:
Retrieving clj-time-0.14.0.pom from https://repo.clojars.org/ (3k)
Retrieving joda-time-2.9.7.pom from https://repo1.maven.org/maven2/ (32k)
Retrieving clj-time-0.14.0.jar from https://repo.clojars.org/ (22k)
Retrieving joda-time-2.9.7.jar from https://repo1.maven.org/maven2/ (618k)
And then back on the client side:
boot.user=> (require '[clj-time.core :refer [now]])
nil
boot.user=> (now)
#object[org.joda.time.DateTime 0x1f68b743 "2018-03-15T12:16:29.342Z"]
Has anyone done this?
Yes, I've seen people host nREPLs from remote servers and connect to them to tinker with a running system.
Why is it a bad idea?
Generally speaking, we want reproducible builds and stable artifacts to give some degree of certainty about what code is being released. Doing this type of development on-the-fly on-the-server works against those goals, making it harder to determine what code is running where. I'd try to structure the system (and its testing) such that this degree of remote dynamism isn't required for normal development.
It sounds like your primary problem is a cumbersome link (CI/CD) in your dev/test/run feedback loop. I'd explore other options for optimizing that feedback loop before going to dynamic dependency-hot-loading nREPL, if you can avoid it. Of course, it's there if you need it!
Can you show your recipe for bootstrapping a Clojure app on a remote machine
Personally, I only ever deploy JARs to remote machines, and usually in a container. By that time I've already exercised/tested the system locally and have some confidence it'll behave as expected. If most of your system is untestable without deploying, that may be a sign you should break it into smaller, more testable pieces.

Heroku App Only Working On Local Machine

I have something really odd going on with Heroku.
I have an application built in React/JS/Node with Mongo.
If I pull up the link to my app on my local machine (https://obscure-crag-61417.herokuapp.com/), I can see a version of my website, but it does not update with any changes that I push to Heroku.
Even stranger, on a non-local machine, if I visit the aforementioned link, I get the boilerplate 'Express' page.
I've tried clearing the cache and exiting browsers on both PCs, but it's the same old story.
I have the MongoDB config set in Heroku.
Not sure what could be going on here.
Any ideas?
PS--here's my code: https://github.com/pythoncreate/twit-stocks
Okay, I figured this one out. I'm pretty sure it was how I was setting the ports on the backend. Heroku has some specific rules about this:
Heroku + node.js error (Web process failed to bind to $PORT within 60 seconds of launch)
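In practice that usually means binding to the port Heroku assigns via the PORT environment variable instead of a hard-coded one; a minimal Express-style sketch (the 3000 fallback is arbitrary, for local runs only):
const express = require('express');
const app = express();
// Heroku injects the port at runtime via process.env.PORT
const port = process.env.PORT || 3000;
app.listen(port, () => console.log('Listening on ' + port));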

Play framework 2.4.2 client connection remains open

I am using an Activator project which has Play 2.4.2. Just for testing, I deployed a bare project which only listens on port 80 or 9000 and returns Ok("abc").
But when I check the output of
$ sudo lsof -i | wc -l
the number increases gradually with time, and after some time, let's say 24-48 hours, the server crashes with a "too many open files" exception.
I also tested with Apache Bench; after the benchmarking completes, there are still some connections open that never close.
Please, can someone help?
There seemed to be some debate around this issue when I was working with Play Framework some time back.
First, verify whether your client is asking for the connection to be kept alive. In that case Play will honor the client and keep the connection open. See this discussion. The takeaway from the discussion was that Play can handle a lot of requests, which is questionable if you think about DoS attacks.
The other thing is that there seem to be options to close the connection from the action via a response header, but I have never tried those. See this. I am not able to pull up any documentation around this option at the moment.
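For what it's worth, an untested sketch of what that might look like in a Play 2.4 Scala controller (the controller name is made up):
import play.api.mvc._
import play.api.http.HeaderNames

class PingController extends Controller {
  // ask that the connection be closed once this response is sent
  def index = Action {
    Ok("abc").withHeaders(HeaderNames.CONNECTION -> "close")
  }
}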
Edit: It seems to be mentioned in the 2.2 highlights.

Proxy setting in gsutil tool

I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind a corporate proxy, I receive an error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I tried several times to set the proxy settings in the .boto file, but all to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
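For reference, the relevant settings in the .boto file live under the [Boto] section and look roughly like this (host, port, and credentials are placeholders; the user/pass lines are only needed if your proxy requires authentication):
[Boto]
proxy = proxy.example.com
proxy_port = 8080
proxy_user = your_proxy_username
proxy_pass = your_proxy_password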
A change was just merged into GitHub yesterday that fixes some of the proxy support. Please try it out, or, specifically, overwrite your current copy with this file:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem with the proxy settings being ignored, under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls", it will use the proxy settings specified in .boto for the first POST and then switch back to attempting to route directly to Google instead of through my proxy server.
Then if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, I think it will work for the initial request again, so this suggests some form of caching is taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings. A grep shows that it's hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable but it appears that there is code that unsets this.