Sinatra error log integration with Sentry

Below is my config.ru file:
require 'raven'
require './managers/log_manager.rb'

logger = LogManager.create_logger('/error.log')
logger.log(Logger::ERROR, "********** just testing **********")
puts logger.inspect

Raven.configure do |config|
  config.dsn = 'https://secrect'
  config.logger = logger
  config.environments = ['development'] # raven-ruby expects an array of environment names
end

use Raven::Rack
Only exceptions get reported to Sentry. My problem is getting notified about error log data as well, but currently that doesn't happen.

Because Ruby doesn't have a consistent logging solution, you'll probably have to write your own handler.
If, for example, the logging helper gives you an event object, you'd probably do something like this:
def my_log_helper(event)
  if event.really_is_an_exception
    # Raven.capture_exception expects an exception object, not a string
    Raven.capture_exception(event.exception)
  else
    Raven.capture_message(event.message)
  end
end
P.S. Sorry about my awful Ruby, I'm not fluent.
The main thing is that Raven tries to be magical when it can, but outside of that it tends toward explicitness.
There are many other things you can do with the integration, such as sending additional local context, and things that are generally environment-specific, but the basics are mostly straightforward.
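To make that concrete in plain Ruby, here is a minimal sketch of such a handler wired into the standard-library Logger. The SentryLogger class and the log path are made up for illustration; Raven.capture_message is the real raven-ruby call:
require 'logger'
require 'raven'

# Hypothetical subclass of Ruby's standard Logger that mirrors every
# ERROR-or-worse message to Sentry while still writing to the log file.
class SentryLogger < Logger
  def add(severity, message = nil, progname = nil, &block)
    if severity && severity >= Logger::ERROR
      text = message || (block && block.call) || progname
      Raven.capture_message(text.to_s)
    end
    super
  end
end

logger = SentryLogger.new('/error.log')
logger.error('written to the file AND reported to Sentry')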

Related

How to switch from Sqlite to Postgres while installing Warden on Sinatra on Heroku

This is partly a problem-solving question, partly an "I'm trying to understand what's going on" question. I hope that's allowed. Basically, I'm trying to get Warden user authentication to work with Ruby/Sinatra and Postgres on Heroku.
I got a lot of help from this handy (but oldish) tutorial.
From some Rails experience I am a bit familiar with Active Record. The tutorial didn't mention anything about creating a migration for the User class. I went ahead and made my own migration, with my own properties ("name", "email", "password"), only to discover later that, lo and behold, the properties I put in that migration weren't being used by (and in fact were rejected by) the actual model in use. When I examined the object instances in the database, I found that they had only the properties Warden provided for me ("username" and "password").
I'm just trying to understand what happened here. I migrated down my (apparently unnecessary and ignored) Users migration, and nothing happened. I mean that I was able to create User instances and log in using them just as before.
Then it occurred to me that this old Warden tutorial (from 2012) uses something called DataMapper, which does what Active Record would do today. Is that right? They are both "ORMs"? I'm still confused about why Sinatra completely ignored the User migration I did. Maybe it's just using a different database--I did notice what might be a new db.sqlite database in my main folder. Pretty sure the one I created for Active Record was db/madlibs.sqlite3.
Although it works on my local machine, I'm pretty sure it won't work on Heroku, since they don't support sqlite (pretty sure). That then means I'll have to go back to the Warden documentation and figure out how to get it to work with my Postgres database...right? Any pointers on how to get started with that? Since this will be my first project using any authentication library like Warden, it's pretty intimidating.
Here's what I have so far (repo):
app.rb:
require 'sinatra'
require 'sinatra/activerecord'
require 'sinatra/base'
require './config/environment'
require 'bundler'
Bundler.require
require './model'

enable :sessions

class Madlib < ActiveRecord::Base
end

class SinatraWardenExample < Sinatra::Base
  register Sinatra::Flash
end

use Warden::Manager do |config|
  config.serialize_into_session { |user| user.id }
  config.serialize_from_session { |id| User.get(id) }
  config.scope_defaults :default,
                        strategies: [:password],
                        action: 'auth/unauthenticated'
  config.failure_app = self
end

Warden::Manager.before_failure do |env, opts|
  env['REQUEST_METHOD'] = 'POST'
end

Warden::Strategies.add(:password) do
  def valid?
    params['user']['username'] && params['user']['password']
  end

  def authenticate!
    user = User.first(username: params['user']['username'])
    if user.nil?
      fail!("The username you entered does not exist.")
    elsif user.authenticate(params['user']['password'])
      success!(user)
    else
      fail!("Could not log in")
    end
  end
end

# ...non-authentication routes...

post '/auth/login' do
  env['warden'].authenticate!
  flash[:success] = env['warden'].message
  if session[:return_to].nil?
    redirect '/'
  else
    redirect session[:return_to]
  end
end

get '/auth/logout' do
  env['warden'].raw_session.inspect
  env['warden'].logout
  flash[:success] = 'Successfully logged out'
  redirect '/'
end

post '/auth/unauthenticated' do
  session[:return_to] = env['warden.options'][:attempted_path]
  puts env['warden.options'][:attempted_path]
  flash[:error] = env['warden'].message || "You must log in"
  redirect '/auth/login'
end

get '/protected' do
  env['warden'].authenticate!
  # current_user = env['warden'].user
  erb :protected
end
model.rb (just the User model):
require 'rubygems'
require 'data_mapper'
require 'dm-sqlite-adapter'
require 'bcrypt'

DataMapper.setup(:default, "sqlite://#{Dir.pwd}/db.sqlite")

class User
  include DataMapper::Resource
  include BCrypt

  property :id,       Serial, :key => true
  property :username, String, :length => 3..50
  property :password, BCryptHash

  def authenticate(attempted_password)
    self.password == attempted_password
  end
end

DataMapper.finalize
DataMapper.auto_upgrade!
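For comparison, a rough ActiveRecord equivalent of that DataMapper model (just a sketch, assuming the bcrypt gem and a users table created by a sinatra/activerecord migration with username and password_digest columns) would look like:
require 'sinatra/activerecord'
require 'bcrypt'

class User < ActiveRecord::Base
  # has_secure_password stores a bcrypt hash in the password_digest
  # column and gives you an authenticate(password) method.
  has_secure_password

  validates :username, length: { in: 3..50 }
end

# user = User.create(username: 'alice', password: 'secret')
# user.authenticate('wrong')  # => false
# user.authenticate('secret') # => the user record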
It seems like this repo might have solved the problems I'm facing now. Should I study that? The Warden documentation itself is pretty forbidding for a relative beginner. For example, it says "Warden must be downstream of some kind of session middleware. It must have a failure application declared, and you should declare which strategies to use by default." I don't understand that. And then it gives some code...which I also don't quite understand. Advice?? (Should I be working with a teacher/mentor, maybe?)
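For what it's worth, that quoted sentence translates into something like the following config.ru (a sketch only; the key point is that Rack::Session::Cookie is added before Warden::Manager, so Warden sits "downstream" of the session, and failure_app is the app that handles failed logins):
require 'rack/session/cookie'
require 'warden'
require './app' # assumed to define SinatraWardenExample

# Session middleware must come first so Warden can persist the user id.
use Rack::Session::Cookie, secret: 'replace_with_a_long_random_string'

use Warden::Manager do |config|
  config.default_strategies :password        # which strategies to use by default
  config.failure_app = SinatraWardenExample  # the declared failure application
end

run SinatraWardenExample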

Redmine REST API called from Ruby is ignoring updates to some fields

I have some code which was working at one point but no longer works, which strongly suggests that the Redmine configuration is involved somehow (I'm not the Redmine admin), but the lack of any error messages makes it hard to determine what is wrong. Here is the code:
#!/usr/bin/env ruby
require "rubygems"
gem "activeresource", "2.3.14"
require "active_resource"

class Issue < ActiveResource::Base
  self.site = "https://redmine.mydomain.com/"
end

Issue.user = "myname"
Issue.password = "mypassword" # Don't hard-code real passwords :-)

issue = Issue.find 19342 # Created manually to avoid messing up real tickets.
field = issue.custom_fields.select { |x| x.name == "Release Number" }.first

issue.notes = "Testing at #{Time.now}"
issue.custom_field_values = { field.id => "Release-1.2.3" }
success = issue.save

puts "field.id: #{field.id}"
puts "success: #{success}"
puts "errors: #{issue.errors.full_messages}"
When this runs, the output is:
field.id: 40
success: true
errors: []
So far so good, except that when I go back to the GUI and look at this ticket, the "notes" part is updated correctly but the custom field is unchanged. I did put some tracing in the ActiveResource code, and that appears to be sending out my desired updates, so I suspect the problem is on the server side.
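One thing that might be worth trying (an untested sketch): Redmine's REST API documentation also shows custom field updates expressed as a custom_fields array of id/value hashes, which can be assigned through ActiveResource instead of custom_field_values:
# Hypothetical alternative payload shape, matching the format shown in
# Redmine's REST API documentation for updating issues.
issue.custom_fields = [{ 'id' => field.id, 'value' => 'Release-1.2.3' }]
issue.notes = "Testing at #{Time.now}"
success = issue.save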
BTW if you know of any good collections of examples of accessing Redmine from Ruby using the REST API that would be really helpful too. I may just be looking in the wrong places, but all I've found are a few trivial ones that are just enough to whet one's appetite for more, and the docs I've seen on the redmine site don't even list all the available fields. (Ideally, it would be nice if the examples also specified which version of redmine they work with.)

Celery - error handling and data storage

I'm trying to better understand common strategies regarding results and errors in Celery.
I see that task results have statuses/states, and that Celery stores results if requested -- when would I use this data? Should error handling and data storage be contained within the task?
Here is a sample scenario, in case it helps better understand my objective:
I have a geocoding task that geocodes user addresses. If the task fails or succeeds, I'd like to update a field in the database letting the user know (error handling). On success, I'd like the geocoded data to be inserted into the database (data storage).
What approach should I take?
Let me preface this by saying that I'm still getting a feel for Celery myself. That being said, I have some general inclinations about how I'd go about tackling this, and since no one else has responded, I'll give it a shot.
Based on what you've written, a relatively simple (though I suspect non-optimized) solution is to follow the broad contours of the blog comment spam task example from the documentation.
app.models.py
class Address(models.Model):
    GEOCODE_STATUS_CHOICES = (
        ('pr', 'pre-check'),
        ('su', 'success'),
        ('fl', 'failed'),
    )
    address = models.TextField()
    ...
    geocode = models.TextField()
    geocode_status = models.CharField(max_length=2,
                                      choices=GEOCODE_STATUS_CHOICES,
                                      default='pr')

class AppUser(models.Model):
    name = models.CharField(max_length=100)
    ...
    address = models.ForeignKey(Address)
app.tasks.py
from celery import task
from app.models import Address, AppUser
from some_module import geocode_function  # assuming this returns a string

@task()
def get_geocode(appuser_pk):
    user = AppUser.objects.get(pk=appuser_pk)
    address = user.address
    try:
        result = geocode_function(address.address)
        address.geocode = result
        address.geocode_status = 'su'  # set address object as successful
        address.save()
        return address.geocode  # optional -- your task doesn't have to return anything
        # On the other hand, you could also choose to decouple the geocode
        # function from the database update for the object instance.
        # Also, if you're thinking about chaining tasks together, you might
        # think about whether it's advantageous to pass a parameter as an
        # input or partial input into the child task.
    except Exception as e:
        address.geocode_status = 'fl'  # address object fails
        address.save()
        # do something_else()
        raise  # re-raise the error, in case you want to trigger retries, etc.
app.views.py
from app.tasks import *
from app.models import *
from django.shortcuts import get_object_or_404

def geocode_for_address(request, app_user_pk):
    app_user = get_object_or_404(AppUser, pk=app_user_pk)
    # ...etc. etc. -- somewhere in here, call your tasks with appropriate args/kwargs
I believe this meets the minimal requirements you've outlined above. I've intentionally left the view undeveloped since I don't have a sense of how exactly you want to trigger it. It sounds like you also may want some sort of user notification when their address can't be geocoded ("I'd like to update a field in the database letting the user know"). Without knowing more about the specifics of this requirement, I would say it sounds like something that might be best accomplished in your HTML templates (if instance.attribute value is X, display q in template) or by using Django signals (set up a signal for when a user.address.geocode_status switches to failure -- say, by emailing the user to let them know, etc.).
In the comments to the code above, I mentioned the possibility of decoupling and chaining the component parts of the get_geocode task. You could also think about decoupling the exception handling from the get_geocode task by writing a custom error handler task and using the link_error parameter (for instance, add.apply_async((2, 2), link_error=error_handler.s()), where error_handler has been defined as a task in app.tasks.py). Also, whether you choose to handle errors via the main task (get_geocode) or via a linked error handler, I would think that you would want to get much more specific about how to handle different sorts of errors (e.g., handle connection errors differently than incorrectly formatted address data).
I suspect there are better approaches, and I'm just beginning to understand how inventive you can get by chaining tasks, using groups and chords, etc. Hope this helps at least get you thinking about some of the possibilities. I'll leave it to others to recommend best practices.

How to silence DataMapper in Sinatra

So, a simple little question. Every time I perform some transaction with DataMapper inside one of my "get" or "post" blocks, I get output looking something like this...
core.local - - [19/Sep/2012:09:04:54 CEST] "GET /eval_workpiece/state?id=24 HTTP/1.1" 200 4
- -> /eval_workpiece/state?id=24
It's a little too verbose for my liking. Can I turn this feedback off?
This isn’t DataMapper logging, this is the logging done by the WEBrick server, which logs all requests using these two formats by default.
(Note this isn’t Rack logging either, although the Rack::CommonLogger uses the same (or at least very similar) format).
The simplest way to stop this would be to switch to another server that doesn’t add its own logging, such as Thin.
If you want to continue using WEBrick, you’ll need to find a way to pass options to it from your Sinatra app. The current released Sinatra gem (1.3.3) doesn’t allow an easy way to do this, but the current master allows you to set the :server_settings setting, which Sinatra will then pass on. So in the future you should be able to do this:
set :server_settings, {:AccessLog => []}
in order to silence WEBrick.
For the time being you can add something like this to the end of your app file (I’m assuming you’re launching your app with something like ruby my_app_file.rb):
disable :run
Sinatra::Application.run! do |server|
  server.config[:AccessLog] = []
end
To cut off all logging:
DataMapper.logger = nil
To change verbosity:
DataMapper.logger.set_log(logger, :warn) # where logger is Sinatra's logging object
Other levels are :fatal => 7, :error => 6, :warn => 4, :info => 3, :debug => 0 (http://rubydoc.info/gems/dm-core/1.1.0/DataMapper/Logger)
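And if you want DataMapper's output to go somewhere quieter rather than away entirely, you can point it at a file using dm-core's logger constructor (a small sketch; the path is just an example):
require 'dm-core'

# Send DataMapper's own log output to a file at :warn verbosity instead
# of mixing it into the server's console output.
DataMapper::Logger.new('log/datamapper.log', :warn)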
If you're running with ActiveSupport, you can use the Kernel extension:
quietly { perform_a_noisy_task }
This temporarily binds STDOUT and STDERR to /dev/null for the duration of the block. Since you may not want to suppress all output, theoretically you can do:
with_warnings(:warn) { # or :fatal or :error or :info or :debug
perform_a_noisy_task
}
and appropriate messages will be suppressed. (NB: I say 'theoretically' because using with_warnings gave me a seemingly unrelated error in my Padrino/DataMapper environment. YMMV.)

Why is it a bad idea to write configuration data in code?

Real-life case (from caff) to exemplify the short question subject:
$CONFIG{'owner'} = q{Peter Palfrader};
$CONFIG{'email'} = q{peter@palfrader.org};
$CONFIG{'keyid'} = [ qw{DE7AAF6E94C09C7F 62AF4031C82E0039} ];
$CONFIG{'keyserver'} = 'wwwkeys.de.pgp.net';
$CONFIG{'mailer-send'} = [ 'testfile' ];
Then in the code: eval `cat $config`, access %CONFIG
Provide answers that lay out the general problems, not only specific to the example.
There are many reasons to avoid configuration in code, and I go through some of them in the configuration chapter in Mastering Perl.
No configuration change should carry the risk of breaking the program. It certainly shouldn't carry the risk of breaking the compilation stage.
People shouldn't have to edit the source to get a different configuration.
People should be able to share the same application without being forced into a common group of settings, instead of re-installing the application just to change the configuration.
People should be allowed to create several different configurations and run them in batches without having to edit the source.
You should be able to test your application under different settings without changing the code.
People shouldn't have to learn how to program to be able to use your tool.
You should only loosely tie your configuration data structures to the source of the information to make later architectural changes easier.
You really want an interface instead of direct access at the application level.
I sum this up in my Mastering Perl class by telling people that the first rule of programming is to create a situation where you do less work and people leave you alone. When you put configuration in code, you spend more time dealing with installation issues and responding to breakages. Unless you like that sort of thing, give people a way to change the settings without causing you more work.
$CONFIG{'unhappy_employee'} = `rm -rf /`
One major issue with this approach is that your config is not very portable. If a functionally identical tool were built in Java, loading configuration would have to be redone. If both the Perl and the Java variation used a simple key=value layout such as:
owner = "Peter Palfrader"
email = "peter@palfrader.org"
...
they could share the config.
Also, calling eval on the config file seems to open this system up to attack. What could a malicious person add to this config file if they wanted to wreak some havoc? Do you realize that ANY arbitrary code in your config file will be executed?
Another issue is that it's highly counter-intuitive (at least to me). I would expect a config file to be read by some config loader, not executed as a runnable piece of code. This isn't so serious but could confuse new developers who aren't used to it.
Finally, while it's highly unlikely that the implementation of constructs like q{...} will ever change, if it did, this could silently stop working.
It's a bad idea to put configuration data in compiled code, because it can't be easily changed by the user. For scripts, just make sure it's separated entirely from the rest and document it nicely.
A reason I'm surprised no one mentioned yet is testing. When config is in the code you have to write crazy, contorted tests to be able to test safely. You can end up writing tests that duplicate the code they test which makes the tests nearly useless; mostly just testing themselves, likely to drift, and difficult to maintain.
Hand in hand with testing is deployment which was mentioned. When something is easy to test, it is going to be easy (well, easier) to deploy.
The main issue here is reusability in an environment where multiple languages are possible. If your config file is in language A, then you want to share this configuration with language B, you will have to do some rewriting.
This is even more complicated if you have more complex configurations (example the apache config files) and are trying to figure out how to handle potential differences in data structures. If you use something like JSON, YAML, etc., parsers in the language will be aware of how to map things with regards to the data structures of the language.
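For instance, the caff settings above could live in a small YAML file that any language with a YAML parser can read. A sketch in Ruby (the file name is hypothetical):
require 'yaml'

# config.yml might contain:
#   owner: Peter Palfrader
#   email: peter@palfrader.org
#   keyserver: wwwkeys.de.pgp.net
config = YAML.load_file('config.yml')
puts config['owner'] # => "Peter Palfrader"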
The one major drawback of not having the configuration in a programming language is that you lose the ability to set config values from dynamic data.
I agree with Tim Anderson. Somebody here confuses configuration in code with configuration not being configurable; that is only true for compiled code.
Both a Perl and a Ruby file are read and interpreted, just as a YAML or XML file with configuration data is. I choose YAML because it is easier on the eye than code, with grouping by environment (test, development, staging, production), which in code would involve more... code.
As a side note, XML contradicts the "easy on the eye" completely. I find it interesting that XML config is extensively used with compiled languages.
Reason 1. Aesthetics. While no one gets harmed by bad smell, people tend to put effort into getting rid of it.
Reason 2. Operational cost. For a team of 5 this is probably ok, but once you have developer/sysadmin separation, you must hire sysadmins who understand Perl (which is $$$), or give developers access to production system (big $$$).
And to make matters worse you won't have time (also $$$) to introduce a configuration engine when you suddenly need it.
My main problem with configuration in the many small scripts I write is that they often contain login data (username and password or auth token) for a service I use. Then later, when the script gets bigger, I start versioning it and want to upload it to GitHub.
So before every commit I need to replace my configuration with some dummy values.
$CONFIG{'user'} = 'username';
$CONFIG{'password'} = '123456';
Also, you have to be careful that those values don't slip into your commit history at some point. This can get very annoying. Once you've been through this one or two times, you will never again try to put configuration into code.
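One common way out (a sketch in Ruby, with hypothetical variable names) is to read secrets from the environment or an untracked file, so they can never end up in a commit:
# Read credentials from environment variables set outside the repo
# (e.g. in the shell, a .env file listed in .gitignore, or the CI).
CONFIG = {
  'user'     => ENV.fetch('MYAPP_USER'),
  'password' => ENV.fetch('MYAPP_PASSWORD'),
}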
Excuse the long code listing. Below is a handy Conf.pm module that I have used in many systems; it allows you to specify different variables for production, staging and dev environments. Then I build my programs to either accept the environment parameter on the command line, or I store this file outside of the source-control tree so that it never gets overwritten.
The AUTOLOAD provides automatic methods for variable retrieval.
# Instructions:
#   use Conf;
#   my $c = Conf->new("production");
#   print $c->root_dir;
#   print $c->log_dir;
package Conf;
use strict;

our $AUTOLOAD;
my $default_environment = "production";
my @valid_environments = qw(
    development
    production
);

#######################################################################################
# You might need to change this.
sub set_vars {
    my ($self) = @_;
    $self->{"access_token"} = 'asdafsifhefh';
    if ( $self->env eq "development" ) {
        $self->{"root_dir"}    = "/Users/patrickcollins/Documents/workspace/SysG_perl";
        $self->{"server_base"} = "http://localhost:3000";
    }
    elsif ( $self->env eq "production" ) {
        $self->{"root_dir"}    = "/mnt/SysG-production/current/lib";
        $self->{"server_base"} = "http://api.SysG.com";
        $self->{"log_dir"}     = "/mnt/SysG-production/current/log";
    }
    else {
        die "No environment defined\n";
    }

    #######################################################################################
    # You shouldn't need to configure this.

    # More dirs. Move these into the dev/prod sections if they're different per env.
    my $r = $self->{'root_dir'};
    my $b = $self->{'server_base'};
    $self->{"working_dir"} ||= "$r/working";
    $self->{"bin_dir"}     ||= "$r/bin";
    $self->{"log_dir"}     ||= "$r/log";

    # Other URLs. Move these into the dev/prod sections if they're different per env.
    $self->{"new_contract_url"}  = "$b/SysG-training-center/v1/contract/new";
    $self->{"new_documents_url"} = "$b/SysG-training-center/v1/documents/new";
}

#######################################################################################
# Code, don't change below here.
sub new {
    my ($class, $env) = @_;
    my $self = {};
    bless($self, $class);
    if ($env) {
        $self->env($env);
    }
    else {
        $self->env($default_environment);
    }
    $self->set_vars;
    return $self;
}

sub AUTOLOAD {
    my ($self, $val) = @_;
    my $type = ref($self) || die "$self is not an object";
    my $field = $AUTOLOAD;
    $field =~ s/.*://;
    # print "field: $field\n";
    unless ( exists $self->{$field} || $field =~ /DESTROY/ ) {
        die "ERROR: {$field} does not exist in object/class $type\n";
    }
    $self->{$field} = $val if ($val);
    return $self->{$field};
}

sub env {
    my ($self, $in) = @_;
    if ($in) {
        die("Invalid environment $in")
            unless grep { $_ eq $in } @valid_environments;
        $self->{"_env"} = $in;
    }
    return $self->{"_env"};
}

1;