Set site/username fields for the ActiveResource-based Highrise gem - Sinatra

I am building a Sinatra app that will use the Highrise CRM gem to access Highrise data. The example code for using this gem, from the wiki:
require 'highrise'
Highrise::Base.site = 'https://your_site.highrisehq.com' # class-level, shared by every thread
Highrise::Base.user = 'api-auth-token'                   # class-level, shared by every thread
I want to change the user and site fields for every request, since each request can be for a different user. Currently these are class variables. Even if I set these fields on every request, wouldn't that cause race conditions when multiple requests are handled concurrently in a multi-threaded server? Could anybody suggest best practices for setting the user/site fields per request in a thread-safe manner?

Related

What is the best way to collect metadata in salesforce using SOAP API?

We have a Java application that consumes the Salesforce partner.wsdl. We log in to the Salesforce instance, then fetch metadata for all the objects and cache it. As the number of Salesforce objects grows, the first call takes longer and longer to fetch and cache the metadata.
What is the best way to reduce this time, even as more objects are introduced in Salesforce?
Is there any SOAP API call I can make to get metadata only for a given object and its dependencies?
Do we need to use only describeSObject to get this information?
Cache the SF responses and flush the cache once a day, not with every request?
Look into the REST API, either as a complete replacement or just to take advantage of "If-Modified-Since"; this header also works per object (see the sketch after this list).
Experiment with queries on the EntityDefinition table to learn the names of only the objects you're interested in (you probably don't care about Apex classes, custom settings, *Share and *History tables...). For example https://stackoverflow.com/a/64276053/313628
Then yes - describe just them, using REST or SOAP's describeSObject. If you have many objects, the network round trips might be an issue; you'd need to profile the app to see where it spends most of its time. Combat it by requesting up to 100 objects at a time, perhaps issuing multiple requests (async processing? threads?) and combining the results later.
Does it have to be the partner WSDL? You could "preload" objects in your app using the enterprise WSDL and combine some of the techniques listed above.
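
A minimal sketch of the "If-Modified-Since" idea against the REST describe resource, written in TypeScript (using the global fetch of Node 18+ or a browser). The instance URL, API version, access token and object name are all placeholders you would take from your own login step:

// Sketch only: conditional describe call via the Salesforce REST API.
// instanceUrl, apiVersion, accessToken and objectName are assumed inputs.
async function describeIfChanged(
  instanceUrl: string,
  apiVersion: string,
  accessToken: string,
  objectName: string,
  lastFetched: Date
): Promise<unknown | null> {
  const res = await fetch(
    `${instanceUrl}/services/data/v${apiVersion}/sobjects/${objectName}/describe`,
    {
      headers: {
        Authorization: `Bearer ${accessToken}`,
        // Ask the server to skip the body if nothing changed since our cached copy.
        'If-Modified-Since': lastFetched.toUTCString(),
      },
    }
  );
  if (res.status === 304) return null; // cache is still fresh, keep using it
  if (!res.ok) throw new Error(`Describe failed with HTTP ${res.status}`);
  return res.json();                   // refresh the cache with this result
}

A null result simply means "reuse what you already cached"; issuing this conditional request per object lets you re-describe only the ones that actually changed.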

Meteor - Why not just publish all the collection data?

This may be quite an easy question to answer, as it may just be my lack of understanding, but if you are having to run the query twice - once on the server and once on the client - why not just publish all the collection data and then run a single query on the client?
Obviously I don't mean doing this for the users collection, but if you have a blog Posts collection, wouldn't this be beneficial?
Publish all the post data, then subscribe to it and run whatever query is necessary on the client to get the data you need.
Publishing everything is fine in a 'development' environment, as Meteor adds the autopublish package by default, but it has some drawbacks in a 'production' environment. I find these two points to be of importance:
Security: The idea is to supply only as much data to the client as required. You can never trust the client, and you don't know what the client may use the data for. For your use case of simple blog posts this may not be a serious risk, but it can be a critical one for an e-commerce application. The last thing you want is a hacker using the data, and a bug in your code, to do nasty stuff.
Data overheads: For subscriptions, waitOn is generally used, so the templates are not rendered until all the data has been made available to the client. If you have a very large amount of data, that will take considerable time. So it is advised to keep the data at the 'only what you need' stage to optimize this time too (see the sketch below).
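
As a rough illustration of 'only what you need', here is a restricted publication sketch in TypeScript; the Posts collection, field names and limit are purely illustrative:

import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Illustrative collection; in a real app this would live in shared/imports code.
const Posts = new Mongo.Collection('posts');

if (Meteor.isServer) {
  // Publish only the fields and documents the list view actually needs,
  // instead of relying on autopublish to ship the whole collection.
  Meteor.publish('recentPosts', function () {
    return Posts.find(
      { published: true },
      { fields: { title: 1, excerpt: 1, createdAt: 1 }, sort: { createdAt: -1 }, limit: 20 }
    );
  });
}

if (Meteor.isClient) {
  // Subscribe, then run whatever local (minimongo) query the template needs.
  Meteor.subscribe('recentPosts');
}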

REST API needs additional operations - how to structure?

My application requires users to sign up before they can use the service, and to do that they create an Application. The initial plan for the interface is as follows...
POST /Users/Applications - Creates an application and returns a unique identifier.
GET /Users/Applications/{id} - Retrieves an existing application.
PUT /Users/Applications/{id} - Updates an existing application.
DELETE /Users/Applications/{id} - Deletes an existing application.
This seems very clean and logical and makes good use of the HTTP verbs. However, what if I now need to perform other operations on an application, e.g.
ActivateApplication - once all of the data is in the system via PUT, I want the users to activate their application. This isn't just a matter of updating a status on the application using PUT; there are several additional jobs that should be done to activate an application, such as emailing the HR department to inform them that a new application has arrived.
PrintApplication - when called from the client, prints the application to the office printer. (Not an ideal example, but you get the idea, I'm sure!)
How would I structure my REST interface to handle this type of request? Maybe something like this...
POST /Users/Applications/{id}/print
POST /Users/Applications/{id}/activate
...for activate I'm changing state, so I believe I need to use POST. I understand REST is about documents, but how do I structure my API when I need to perform operations on documents, not just get and update the documents themselves?
This article by Martin Fowler states that:
Some people incorrectly make a correspondence between POST/PUT and create/update. The choice between them is rather different to that.
When I try to decide between PUT and POST, I follow this rule:
PUT -> Idempotent
POST -> Not Idempotent
Idempotent means that there is no difference between performing the operation once or many times: the data in the DB will be the same after the first operation and after each repetition.
With non-idempotent operations, every call changes the data in the DB.
That's why PUT is usually used for UPDATE operations and POST for CREATE, but that mapping is not the real rule.
Coming back to your question, in my opinion you are using POST correctly as a non-idempotent action, because multiple calls to ActivateApplication will send multiple emails.
Edited
As #elolos has commented, following the Single Responsibility Principle, sending an e-mail should be a separate responsibility, not directly tied to updating the state. A better approach is to raise an event when the property changes and let that event trigger processes such as sending emails. That way the ActivateApplication operation can be idempotent and can be called with the PUT HTTP method.
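
A minimal sketch of that event-driven, idempotent activation (TypeScript with an Express-style route; the in-memory store, event name and HR-notification step are illustrative assumptions, not part of the original question):

import express from 'express';
import { EventEmitter } from 'events';

const app = express();
const events = new EventEmitter();

// Hypothetical in-memory store standing in for the real persistence layer.
const applications = new Map<string, { status: string }>();

// Idempotent activation: repeating the call leaves the application in the same state.
app.put('/Users/Applications/:id/activate', (req, res) => {
  const application = applications.get(req.params.id);
  if (!application) {
    res.sendStatus(404);
    return;
  }
  if (application.status !== 'active') {
    application.status = 'active';
    // Side effects hang off the state transition, not off the HTTP call itself,
    // so calling PUT twice does not send two emails.
    events.emit('application.activated', req.params.id);
  }
  res.sendStatus(204);
});

// Separate responsibility: notifying HR lives in its own listener.
events.on('application.activated', (id: string) => {
  console.log(`Application ${id} activated - notify HR here (e.g. a hypothetical sendHrEmail(id))`);
});

Because the email is triggered by the state transition rather than by the request, repeating the PUT changes neither the data nor the inbox.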

WordPress: Save custom plugin options from the backend

I'm developing a plugin that will pull data from a third-party API. The user inputs a number of options in a normal settings form for the plugin (I used the Redux Framework, which uses the WP Settings API).
The user-provided options will then be used to generate a request to the third-party API.
Now to my problem / question: How can I store the data that's returned from that API? Is there a built-in way to do this in WordPress, or will I have to create a database table of my own? That seems a bit of an overkill... Is there any way to "hack" into the Settings API and set custom settings without having to display them in a form on the front end?
Thank you - and happy holidays to everyone!
It sounds like what you want to do is actually just store the data from the remote API request, rather than "options". If you don't want to create a table for them, I can think of three simple approaches.
Transients API
Save the data returned from the API as transients, i.e. temporary cached data. This is generally good for data that's going to expire anyway and will therefore need to be refreshed. Set an expiry time! Even if you want to hang onto the data "forever", set an expiry time, or the data will be autoloaded on every page load and consume memory even when you don't need it. You can then easily retrieve it with get_transient; once it has expired you'll get false, and that is your trigger to make your API call again.
NB: on hosts with memcached or another object cache, there's a good chance that your transients will be pushed out of the object cache sooner than you intend, forcing your plugin to retrieve the data from the API again. Transients really are about caching, not "data storage" per se.
Options
Save your data as custom options using add_option -- and specify autoload="no" so that they don't fill up script memory when they aren't needed! Beware that update_option will add the data with autoload="yes" if the option doesn't already exist, so I recommend you delete and then add rather than update. You can then retrieve your data easily.
Custom Post Type
You can easily store your data in the wp_posts table by registering a custom post type; you can then use wp_insert_post to save entries and the usual WordPress post queries to retrieve them. Great for long-term data that you want to hang onto. You can make use of the post_title, post_content, post_excerpt and other standard post fields to store some of your data, and if you need more, you can add post meta fields.

Developing with Backbone.js, how can I detect when multiple users (browsers) attempt to update?

I am very new to Backbone.js (and MVC with JavaScript), and while reading several resources about Backbone.js to adopt it in my project, I now have a question: how can I detect when multiple users (browsers) attempt to update the same data? (And prevent it?)
My project is a tool for editing surveys/polls for users who want to create and distribute their own surveys. So far, my web app maintains a list of edit commands fired by the browser and sends it to the server, and the server does a batch update.
What I did was give each survey a version number; the browser must request an update with that version number, and if the request's version number does not match the one on the server, the request fails and the user must reload the page (you know, implementing concurrent editing is not easy for everyone). Of course, when the browser's update is successful, it gets the new version number from the server in the AJAX response, and a browser can issue a new update request only once its previous update request is done.
Now, I am interested in RESTful APIs and MV* patterns, but I'm having a hard time solving this issue. What is the best / most common approach?
There is a common trick: instead of using version numbers, use timestamps in your DB, then try to UPDATE ... WHERE timestamp = model.timestamp. If the update affects zero rows, return an HTTP 409 (Conflict) response and ask the user to refresh the page in the save() error callback. You can even use local storage to merge changes and compare the conflicting versions side by side.
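
A rough sketch of that optimistic-locking flow in TypeScript; db.query is a stand-in for your real database client, the surveys table and its columns are illustrative, and the Backbone error callback assumes a Backbone version where the second argument is the XHR response:

import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical database helper; it resolves with the number of rows the UPDATE touched.
declare const db: { query(sql: string, params: unknown[]): Promise<{ affectedRows: number }> };

app.put('/surveys/:id', async (req, res) => {
  const { title, updatedAt } = req.body;  // updatedAt = the timestamp the client last saw
  const now = new Date().toISOString();
  const result = await db.query(
    'UPDATE surveys SET title = ?, updated_at = ? WHERE id = ? AND updated_at = ?',
    [title, now, req.params.id, updatedAt]
  );
  if (result.affectedRows === 0) {
    res.sendStatus(409);                  // someone else saved first: conflict
    return;
  }
  res.json({ updatedAt: now });           // client stores the new timestamp for its next save
});

On the Backbone side, the conflict then surfaces in the error callback of save():

// survey is a Backbone.Model instance; changes are the attributes being saved.
survey.save(changes, {
  error: (_model, response) => {
    if (response.status === 409) {
      // Tell the user to reload (or merge the two versions, e.g. from local storage).
      alert('This survey was changed by someone else. Please reload and try again.');
    }
  },
});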