How to see requests in Sinatra?

I used to be able to see the HTTP requests made to my Sinatra application in the window I started it from.
I think that after a Sinatra upgrade I can no longer see them, and I don't know how to get them back. I don't need to log them to a file.
set :logging, true didn't help.
ruby 1.8.7 (2010-08-16 patchlevel 302) [i386-mingw32]
rack (1.4.0)
rack-protection (1.2.0)
sinatra (1.3.2)
sinatra-advanced-routes (0.5.1)
sinatra-reloader (0.5.0)
sinatra-sugar (0.5.1)

This is a bug introduced in Sinatra 1.3.2. The commit that introduced it was intended to fix another bug where logging was done twice in certain circumstances, but it obviously isn't quite right.
This request logging is done by using the Rack::CommonLogger middleware component, which is now only added in certain cases. The fix/workaround is to simply add it yourself. Add
use Rack::CommonLogger
to the top of your application file (after requiring Sinatra). Note that you might end up with the original problem of seeing requests logged twice in some situations (e.g. if your deployment setup is different from your development setup).
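For example, in a classic-style application file this is all that's needed (just a sketch - the file name and the route are only for illustration):
# app.rb -- minimal sketch of a classic-style Sinatra app
require 'sinatra'

# Re-add the request logger that Sinatra 1.3.2 no longer wires up on its own,
# so each request is printed to the terminal the app was started from.
use Rack::CommonLogger

get '/' do
  'hello'
end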

Unable to integrate CQ5.6.1 with Site Catalyst

I'm having difficulty integrating AEM 5.6.1 with SiteCatalyst. It allows me to connect successfully in the configuration, but it does not work on the framework setup.
I've followed the standard procedure to connect AEM to SiteCatalyst, and it accepts my login in the configuration but fails on the framework setup with the browser message 'We were not able to login to SiteCatalyst. Please check your credentials and try again.'. Behind the scenes, the server log shows:
12.12.2014 14:10:06.967 *WARN* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.SitecatalystHttpClientImpl Data center 'https://api3.omniture.com/admin/1.3/rest/' responded with errors {"error":{"code":500,"message":"Internal Server Error"}}
12.12.2014 14:10:06.967 *ERROR* [0:0:0:0:0:0:0:1 [1418393406764] POST /libs/cq/analytics/sitecatalyst/service.json HTTP/1.1] com.day.cq.analytics.sitecatalyst.impl.servlets.SitecatalystServlet Call to SiteCatalyst method 'Company.GetReportSuites' failed com.day.cq.analytics.sitecatalyst.SitecatalystException: not authenticated
I've tried accessing via the API Explorer and it works.
I've tried the troubleshooting guide without success.
I can log in to Site Catalyst, I'm an admin, I am in the web services access group.
I've tried using a clean install of CQ5.6.1 with geometrixx - it doesn't work either.
I've tried this from a server and from a localhost/dev machine with the same results. No proxy. I've even tried using the shared secret as the password but then it doesn't connect at all, and fails on the configuration screen.
What might cause this to fail?
If it doesn't work with a fresh install and Geometrixx, then it's probably an Adobe bug. That's typically the first thing support will ask you about.
I would also verify using Geometrixx Outdoors, or a more recent demo site, on your fresh install, just to ensure it's not an outdated ClientLib issue.
I know this isn't a direct answer to your question, but honestly, I would approach the integration differently. I've worked with the AEM-SC framework and it's buggy at best. It's very finicky, it doesn't REALLY work the way the documentation claims, and it requires that you're very specific about what Clientlibs are on the page.
Moving forward, I think using Adobe Dynamic Tag Manager is the better approach, for many reasons. My understanding is that it's Adobe's recommendation as well. I'd consider moving to that. In AEM 5.6.1, you'll have to customize your integration with DTM, but it's not very hard.
Solution: add a server property to the SiteCatalyst configuration node (e.g. /etc/cloudservices/sitecatalyst/my-sc-configuration):
server=https://api.omniture.com/admin/1.2/rest/
It also seems to work with newer API endpoints such as https://api3.omniture.com/admin/1.3/rest/.
It appears that 5.6.1 ignores the OSGi configuration, at least for the configuration screens. With this extra property in place, the framework page loads without error and allows selection of the RSID.

Clicking on a link doesn't work but copy-pasting the link into the URL bar works

I'm not entirely sure what is going on here. Where should I start? Is this at the application level, client level, or server level?
Note that no one will be able to try the link themselves because this server is behind a firewall. All the other links work just by clicking; only this one gives a 500 error, but if I copy and paste it exactly into the URL address bar it returns exactly what I want.
http://<company-server>/aggregate?
$project[_id]=0&$project[cmts]=1&
$group[_id][cmts]=$cmts&$group[count][$sum]=1
I am using MongoDB as the backend and HAML as the frontend.
This is system-dependent, but it made the difference for my servers: the bundler versions were different on dev and prod.
gem uninstall bundler
If it asks, choose all.
gem install bundler -v 1.2.2

Google App Engine: Deployed Source doesn't have Local updates

I'm working with Google App Engine in Eclipse w/ JSP pages in Windows 7.
I already have an app deployed and working, but I am unable to make changes to it for some reason.
If I make changes and debug locally, my localhost page is showing the changes that I implement.
While I am not getting any errors in the deployment, the same changes that work on my local debug are no longer showing up, so I can't update my app.
I thought updating the version number might help, but I had no luck with this.
Any ideas? Thanks.
Are you deploying the same version (as specified in appengine-web.xml) as the default version running on your app? If not, you'll have to access your new deployment at http://newversion.appname.appspot.com, or change your app's default version in App Engine to the newly deployed one.
I have had the same problems too, especially when the changes concerned the static pages. Some little things to check:
If you have set an expiration date in your app.yaml, your browser cache could be holding the file.
If it’s specific to the online contents, it could be an intermediary cache (such as a squid server) serving the outdated contents, in which case you’d have to flush the cache to get the new version.
You could start by checking the log on the GAE console to see whether the request reaches the server; that would help you debug.
Another trick: if you're being served an outdated version of http://yourapp.appspot.com/index, try passing a dummy query parameter to force the browser to fetch a fresh copy, for instance: http://yourapp.appspot.com/index?p=1

How to upgrade Wordpress and plugins when deploying using Capistrano?

I'm hoping someone can confirm whether or not the following scenario is an issue with deploying updates to WordPress sites and, if so, do you have a solution on how to best manage this?
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially - they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync these up.
My Concern:
Many updates to plugins and the WordPress core also perform updates to the database when run as an auto update via the admin. I am updating the WordPress core and plugins locally on my development install. The code for these updates ends up being committed, pushed and deployed. However, when the code is deployed it simply adds/deletes/replaces changed files on the staging and production servers. Production and staging therefore miss any database updates, since those are usually part of the auto-update process - e.g. deactivate, update, activate (run any database updates).
My Questions:
Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress, eg, 3.2 to 3.3?
If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when upgrading the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin though, some of them may have an upgrade process built into their upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your three questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes, when updating, if there is a change to the database schema, then WordPress will not function properly unless the new schema exists. When attempting to access the admin side of WordPress, if the db version is lower than your WordPress version expects, it will redirect you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains migration scripts to upgrade the database incrementally from each previous version to the next until the version number is up to date, seen here. Here is a simpler list in a FAQ showing how the revision numbers correlate to WordPress versions.
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose the plugin specifically requires that you do so. If schema changes in WordPress break a plugin, then the plugin developers will need to release a new version. When upgrading that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database from the one it will ultimately be used with.
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress, a plugin you would write which would deactivate/activate plugins, or an existing one that already does it.
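On the Capistrano side, a minimal sketch of that "hit a URI after deploy" idea might look like the following (the task name, URL, query parameters and token are all placeholders - the WordPress endpoint itself is the plugin you'd write):
# config/deploy.rb -- sketch only; assumes a (hypothetical) WordPress plugin
# that deactivates/reactivates plugins when this URL is requested.
require 'net/http'

namespace :wordpress do
  desc "Ask WordPress to re-run plugin activation hooks after a deploy"
  task :reactivate_plugins do
    uri = URI.parse("http://staging.example.com/?run_plugin_reactivation=1&token=SECRET")
    response = Net::HTTP.get_response(uri)
    puts "WordPress reactivation hook responded with #{response.code}"
  end
end

# Run it automatically once the new code is live.
after "deploy:restart", "wordpress:reactivate_plugins"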
It's possible some existing plugins might handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins for each one, so developing one yourself that does exactly what you need might be the way to go. Developing a plugin is possible if you make use of the tools WordPress already provides.
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful. Additionally, check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivate_plugin hooks from wiping out plugin configuration for plugins that clean up after themselves, so consider passing $silent as true when deactivating a plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins were already activated, and only run your script on those (make sure the plugin excludes itself from the process).
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way either. You should just need to be able to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that, obviously, all those functions need to be available, so you can't just run it as an independent script.

Login with Facebook via Rails

As a relative newbie to Rails, I'm not sure how to approach this. I am looking to add a basic "Login with Facebook" feature to a practice site I am developing. I am stuck on two fronts:
Most Rails plugins dealing with Facebook seem out of date or poorly documented. I've encountered Facebooker (which seems to have died off from what I see) and Mini_FB (more recent, but with very little documentation). I tried to install Mini_FB, but I am still very unfamiliar with working with gems. I ran gem install mini_fb, then bundle install, and finally added gem 'mini_fb' to my Gemfile, but my server complains with a "no such file to load" error. Are there any other steps necessary to allow your app to use a gem?
I am confused about how the "Login with Facebook" feature works from an overall bird's-eye view. I understand that my App ID is passed into the login feature, and I ultimately get an access token (after resubmitting with my App Secret Key and an authorization code). But how does this integrate with some kind of user system on a Rails site? Since the access token doesn't last forever, do I need to renew it periodically? Is that done by simply waiting to catch an access-token error from a Graph request and redoing the entire authorization procedure?
Have you tried OmniAuth?
It supports a whole host of external providers, including Facebook.
There are also a number of Railscasts on its use.
The correct order for installing a gem in your application would be first adding it to your Gemfile, then running bundle install in your console. That being said, OmniAuth is probably the best path for you.
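A rough sketch of what that can look like with OmniAuth (the omniauth-facebook provider gem, the environment variable names, the route and the User lookup below are assumptions - adapt them to your app):
# Gemfile
gem 'omniauth'
gem 'omniauth-facebook'

# config/initializers/omniauth.rb
Rails.application.config.middleware.use OmniAuth::Builder do
  # App ID and secret come from your Facebook app settings;
  # the ENV variable names here are placeholders.
  provider :facebook, ENV['FACEBOOK_APP_ID'], ENV['FACEBOOK_SECRET']
end

# config/routes.rb (Rails 3 style)
# match '/auth/facebook/callback', :to => 'sessions#create'

# app/controllers/sessions_controller.rb
class SessionsController < ApplicationController
  def create
    auth = request.env['omniauth.auth']            # uid, info and credentials from Facebook
    user = User.find_or_create_by_uid(auth['uid']) # assumes a User model with a uid column
    session[:user_id] = user.id
    redirect_to root_path
  end
end
OmniAuth takes care of the redirect and token exchange; if you only need sign-in (rather than ongoing Graph API calls), you don't have to store or refresh the access token yourself. If you do need to keep calling the Graph API, the token is available in the auth hash under auth['credentials']['token'], and you re-run the flow when it expires.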