What is the best way to collect metadata in salesforce using SOAP API? - soap

We have a Java application that consumes the Salesforce partner.wsdl. We log in to the Salesforce instance, then fetch metadata for all the objects and cache it. As the number of Salesforce objects grows, the first call takes more and more time to fetch and cache the metadata.
What is the best way to reduce this time, even as more objects are introduced in Salesforce?
Is there any SOAP API call I can make to get metadata only for one object and its dependencies?
Do we need to use only describeSObject to get this information?

Cache the SF responses and flush the cache once a day, not with every request?
Look into the REST API, either as a complete replacement or just to take advantage of the "if-modified-since" header. It also works per object.
Experiment with queries on the EntityDefinition table to learn the names of only the objects you're interested in (you probably don't care about Apex classes, custom settings, *Share and *History tables...). For example https://stackoverflow.com/a/64276053/313628
Then yes - describe just them, using REST or SOAP's describeSObjects. If you have many objects, the network round trips might be an issue; you'd need to profile the app to see where it spends most of its time. Combat it by requesting up to 100 objects at a time, maybe issuing multiple requests (async processing? threads?) and combining the results later - see the sketch after this list.
Does it have to be the partner WSDL? You could "preload" objects in your app using the enterprise WSDL and combine some of the techniques listed above.
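Here's a rough sketch of the batching idea, assuming the Salesforce Web Service Connector (WSC) partner client; the 100-object batch size matches the documented limit of describeSObjects, and the class name and cache structure are illustrative, not prescriptive:

import com.sforce.soap.partner.DescribeSObjectResult;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.ws.ConnectionException;

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DescribeCache {

    // describeSObjects accepts at most 100 object names per call
    private static final int BATCH_SIZE = 100;

    private final Map<String, DescribeSObjectResult> cache = new ConcurrentHashMap<>();

    // Describe the given objects in batches of up to 100 and cache the results.
    public void warm(PartnerConnection conn, List<String> objectNames) throws ConnectionException {
        for (int i = 0; i < objectNames.size(); i += BATCH_SIZE) {
            List<String> batch = objectNames.subList(i, Math.min(i + BATCH_SIZE, objectNames.size()));
            // one network round trip describes the whole batch
            for (DescribeSObjectResult result : conn.describeSObjects(batch.toArray(new String[0]))) {
                cache.put(result.getName(), result);
            }
        }
    }
}

Feeding warm() only the names returned by your EntityDefinition query keeps both the call count and the cache size down.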

Related

Which HTTP method to use to build a REST API to perform following operation?

I am looking for a REST API to do the following:
Search based on the parameters sent; if results are found, return them.
If no results are found, create a record based on the search parameters sent.
Can this be accomplished by creating one single API, or are 2 separate APIs required?
I would expect this to be handled by a single request to a single resource.
Which HTTP method to use
This depends on the semantics of what is going on - we care about what the messages mean, rather than how the message handlers are implemented.
The key idea is the uniform interface constraint in REST; because we have a common understanding of what HTTP methods mean, general-purpose connectors in the HTTP application can do useful work (for example, returning a cached response to a request without forwarding it to the origin server).
Thus, when trying to choose which HTTP method is appropriate, we can consider the implications the choice has on general purpose components (like web caches, browsers, crawlers, and so on).
GET announces that the meaning of the request is effectively read only; because of this, general purpose components know that they can dispatch this request at any time (for instance, a user agent might dispatch a GET request before the user decides to follow the link, to make the experience faster).
That's fine when you intend the request to provide the client with a copy of your search results, and the fact that you might end up making changes to server local state is just an implementation detail.
On the other hand, if the client is trying to edit the results of a particular search (but sometimes the server doesn't need to change anything), then GET isn't appropriate, and you should use POST.
A way to think about the difference is to consider what action you want to be taken when an intermediate cache holds a response from an earlier copy of "the same" request. If you want the cache to reuse the response, GET is the best; on the other hand, if you want the cache to throw away the old response (and possibly store the new one), then you should be using POST.
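To make that concrete, here is a minimal sketch of a single search-or-create resource behind POST, written with JAX-RS; the /records/searches path, the in-memory store and the matching rule are assumptions for illustration, and a JSON provider such as Jackson is assumed to map the request body:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/records/searches")
public class RecordSearchResource {

    // in-memory stand-in for a real data store (illustrative only)
    private static final Map<String, Map<String, String>> STORE = new ConcurrentHashMap<>();

    // POST, not GET: a cache must never answer this request from an earlier
    // response, because a miss has the side effect of creating a record.
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response searchOrCreate(Map<String, String> criteria) {
        List<Map<String, String>> hits = new ArrayList<>();
        for (Map<String, String> record : STORE.values()) {
            if (record.entrySet().containsAll(criteria.entrySet())) {
                hits.add(record);
            }
        }
        if (!hits.isEmpty()) {
            return Response.ok(hits).build();   // 200: existing records found
        }
        Map<String, String> created = new HashMap<>(criteria);
        created.put("id", UUID.randomUUID().toString());
        STORE.put(created.get("id"), created);
        return Response.status(Response.Status.CREATED).entity(created).build();   // 201: created
    }
}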

Rest API needs additional operations - how to structure?

My application requires users to sign up before they can use the service, and to do that they create an Application. The initial plan of the interface is as follows...
POST /Users/Applications - Creates an application and returns a unique identifier.
GET /Users/Applications/{id} - Retrieves an existing application.
PUT /Users/Applications/{id} - Updates an existing application.
DELETE /Users/Applications/{id} - Deletes an existing application.
This seems very clean and logical and makes the best use of the HTTP verbs. However, what if I now need to do other operations on an application, e.g.
ActivateApplication - once all of the data is in the system by using PUT, I now want the users to activate their application. This isn't just a matter of updating a status on the application using PUT: there are several additional jobs that should be done to activate an application, such as emailing the HR dept. to inform them a new application has arrived.
PrintApplication - when called from the client prints the application to the office printer. (Not an ideal example but you get the idea I'm sure!)
How would I structure my REST interface to handle this type of request? Maybe something like this...
POST /Users/Applications/{id}/print
POST /Users/Applications/{id}/activate
...for activate I'm changing state so I believe I need to use POST. I understand REST is about documents but how do I structure my API when I need to perform operations on documents, not just get and update the document itself?
This article by Martin Fowler states that:
Some people incorrectly make a correspondence between POST/PUT and create/update. The choice between them is rather different to that.
When I try to decide between PUT and POST I follow this rule:
PUT -> Idempotent
POST -> Not Idempotent
Idempotent means that there's no difference between performing the operation once and performing it multiple times: the DB data will be the same after the first operation and after each subsequent one.
With a non-idempotent operation, every call changes the data in the DB.
That's why PUT is usually used for UPDATE operations and POST for CREATE. But the create/update correspondence itself is not the rule.
Coming back to your question, in my opinion you are using POST correctly as a non-idempotent action, because multiple calls to ActivateApplication will send multiple emails.
Edit:
As @elolos has commented, following the Single Responsibility Principle, sending an e-mail should be a separate responsibility, not directly tied to updating the state. Handling an event when the property changes, in order to trigger processes like sending emails, would be a better approach. That way the ActivateApplication operation may be idempotent and can be called using the PUT HTTP method.
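A small sketch of that event-driven separation; the class, the status enum and the listener wiring are illustrative assumptions, not part of the original design:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class Application {
    public enum Status { DRAFT, ACTIVE }

    private Status status = Status.DRAFT;
    private final List<Runnable> activationListeners = new CopyOnWriteArrayList<>();

    public void onActivated(Runnable listener) {
        activationListeners.add(listener);
    }

    public void activate() {
        if (status == Status.ACTIVE) {
            return; // already active: repeating the call changes nothing, so PUT is safe
        }
        status = Status.ACTIVE;
        // the event fires exactly once; listeners own side effects like the HR email
        activationListeners.forEach(Runnable::run);
    }
}

Registering the email as a listener - e.g. app.onActivated(() -> mailer.notifyHr(app)) with a hypothetical mailer - keeps the state change idempotent while the side effect happens only on the first transition.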

Delete multiple records using REST

What is the REST-ful way of deleting multiple items?
My use case is that I have a Backbone Collection wherein I need to be able to delete multiple items at once. The options seem to be:
Send a DELETE request for every single record (which seems like a bad idea if there are potentially dozens of items);
Send a DELETE where the ID's to delete are strung together in the URL (i.e., "/records/1;2;3");
In a non-REST way, send a custom JSON object containing the ID's marked for deletion.
All options are less than ideal.
This seems like a gray area of the REST convention.
Sending a DELETE request for every single record is a viable RESTful choice, but obviously has the limitations you have described.
Don't string the IDs together in the URL. Intermediaries would construe it as meaning “DELETE the (single) resource at /records/1;2;3” — so a 2xx response may cause them to purge their cache of /records/1;2;3; not purge /records/1, /records/2 or /records/3; proxy a 410 response for /records/1;2;3, or do other things that don't make sense from your point of view.
The third option - a custom JSON object describing the deletion - is best, and can be done RESTfully. If you are creating an API and you want to allow mass changes to resources, you can use REST to do it, but exactly how is not immediately obvious to many. One method is to create a ‘change request’ resource (e.g. by POSTing a body such as records=[1,2,3] to /delete-requests) and poll the created resource (specified by the Location header of the response) to find out if your request has been accepted, rejected, is in progress or has completed. This is useful for long-running operations. Another way is to send a PATCH request to the list resource, /records, the body of which contains a list of resources and actions to perform on those resources (in whatever format you want to support). This is useful for quick operations where the response code for the request can indicate the outcome of the operation.
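For illustration, a minimal JAX-RS sketch of the ‘change request’ variant; the /delete-requests path follows the example above, while the in-memory job map and single worker thread stand in for real persistence and job infrastructure:

import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/delete-requests")
public class DeleteRequestsResource {

    private static final Map<String, String> JOBS = new ConcurrentHashMap<>();   // job id -> status
    private static final ExecutorService WORKER = Executors.newSingleThreadExecutor();

    // POST a list of record IDs; respond 202 Accepted plus a Location to poll
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(List<String> recordIds, @Context UriInfo uriInfo) {
        String jobId = UUID.randomUUID().toString();
        JOBS.put(jobId, "pending");
        WORKER.submit(() -> {
            // delete recordIds against the real store here, then mark the job done
            JOBS.put(jobId, "completed");
        });
        return Response.accepted()
                .location(uriInfo.getAbsolutePathBuilder().path(jobId).build())
                .build();
    }

    // GET /delete-requests/{id} reports whether the request has completed
    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response status(@PathParam("id") String id) {
        String status = JOBS.get(id);
        if (status == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        return Response.ok("{\"status\":\"" + status + "\"}").build();
    }
}

The client POSTs the IDs, receives 202 Accepted with a Location header, and polls that URL until the status reads completed.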
Everything can be achieved whilst keeping within the constraints of REST, and usually the answer is to make the "problem" into a resource, and give it a URL.
So, batch operations, such as delete here, or POSTing multiple items to a list, or making the same edit to a swathe of resources, can all be handled by creating a "batch operations" list and POSTing your new operation to it.
Don't forget, REST isn't the only way to solve any problem. “REST” is just an architectural style and you don't have to adhere to it (but you lose certain benefits of the internet if you don't). I suggest you look down this list of HTTP API architectures and pick the one that suits you. Just make yourself aware of what you lose out on if you choose another architecture, and make an informed decision based on your use case.
There are some bad answers to the question "Patterns for handling batch operations in REST web services?" which have far too many upvotes, but it ought to be read too.
If GET /records?filteringCriteria returns array of all records matching the criteria, then DELETE /records?filteringCriteria could delete all such records.
In this case the answer to your question would be DELETE /records?id=1&id=2&id=3.
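A short sketch of that query-parameter form in JAX-RS; the resource class and the in-memory store are illustrative stand-ins (JAX-RS collects the repeated id parameter into a list):

import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.DELETE;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/records")
public class RecordsResource {

    // in-memory stand-in for the real record store (illustrative only)
    private static final Set<String> STORE = ConcurrentHashMap.newKeySet();

    // DELETE /records?id=1&id=2&id=3
    @DELETE
    public Response deleteMany(@QueryParam("id") List<String> ids) {
        if (ids.isEmpty()) {
            return Response.status(Response.Status.BAD_REQUEST).build();
        }
        ids.forEach(STORE::remove);
        return Response.noContent().build(); // 204 whether or not each id existed
    }
}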
I think Mozilla Storage Service SyncStorage API v1.5 is a good way to delete multiple records using REST.
Deletes an entire collection.
DELETE https://<endpoint-url>/storage/<collection>
Deletes multiple BSOs from a collection with a single request.
DELETE https://<endpoint-url>/storage/<collection>?ids=<ids>
ids: deletes BSOs from the collection whose ids are in the provided comma-separated list. A maximum of 100 ids may be provided.
Deletes the BSO at the given location.
DELETE https://<endpoint-url>/storage/<collection>/<id>
http://moz-services-docs.readthedocs.io/en/latest/storage/apis-1.5.html#api-instructions
This seems like a gray area of the REST convention.
Yes; so far I have only come across one REST API design guide that mentions batch operations (such as a batch delete): the Google API design guide.
This guide mentions the creation of "custom" methods that can be associated with a resource by using a colon, e.g. https://service.name/v1/some/resource/name:customVerb; it also explicitly mentions batch operations as a use case:
A custom method can be associated with a resource, a collection, or a service. It may take an arbitrary request and return an arbitrary response, and also supports streaming request and response. [...] Custom methods should use HTTP POST verb since it has the most flexible semantics [...] For performance critical methods, it may be useful to provide custom batch methods to reduce per-request overhead.
So you could do the following, according to Google's API guide:
POST /api/path/to/your/collection:batchDelete
...to delete a bunch of items of your collection resource.
I've allowed for a wholesale replacement of a collection, e.g. PUT ~/people/123/shoes where the body is the entire collection representation.
This works for small child collections of items where the client wants to review the items, prune some out, add some others in, and then update the server. They could PUT an empty collection to delete all.
This would mean GET ~/people/123/shoes/9 would still remain in cache even though a PUT deleted it, but that's just a caching issue and would be a problem if some other person deleted the shoe.
My data/systems APIs always use ETags as opposed to expiry times so the server is hit on each request, and I require correct version/concurrency headers to mutate the data. For APIs that are read-only and view/report aligned, I do use expiry times to reduce hits on origin, e.g. a leaderboard can be good for 10 mins.
For much larger collections, like ~/people, I tend not to need multiple delete, the use-case tends not to naturally arise and so single DELETE works fine.
In future, and from experience with building REST APIs and hitting the same issues and requirements, like audit, I'd be inclined to use only GET and POST verbs and design around events, e.g. POST a change of address event, though I suspect that'll come with its own set of problems :)
I'd also allow front-end devs to build their own APIs that consume stricter back-end APIs since there's often practical, valid client-side reasons why they don't like strict "Fielding zealot" REST API designs, and for productivity and cache layering reasons.
You can POST a deleted resource :). The URL will be
POST /deleted-records
and the body will be
{"ids": [1, 2, 3]}

WordPress: Save custom plugin options from backend

I'm developing a plugin that will pull data from a third-party API. The user inputs a number of options in a normal settings form for the plugin (built with the Redux Framework, which uses the WP Settings API).
The user provided options will then be used to generate a request to the third party API.
Now to my problem/question: how can I store the data that's returned from that API? Is there a built-in way to do this in WordPress, or will I have to create a database table of my own? That seems a bit overkill... Is there any way to "hack" into the Settings API and set custom settings without having to display them in a form on the front end?
Thank you - and happy holidays to everyone!
It sounds like what you want to do is actually just store the data from the remote API request, rather than "options". If you don't want to create a table for them, I can think of three simple approaches.
Transients API
Save the data returned from the API as transients, i.e. temporary cached data. This is generally good for data that's going to expire anyway and thus will need to be refreshed. Set an expiry time! Even if you want to hang onto the data "forever", set an expiry time, or the data will be autoloaded on every page load and thus consume memory even when you don't need it. You can then easily retrieve the data with get_transient; once it has expired, you'll get false, and that is your trigger to make your API call again.
NB: on hosts with memcached or other object caches, there's a good chance that your transients will be pushed out of the object cache sooner than you intend, thus forcing your plugin to retrieve the data again from the API. Transients really are about caching, not "data storage" per se.
Options
Save your data as custom options using add_option -- and specify autoload="no" so that they don't fill up script memory when they aren't needed! Beware that update_option will add the data with autoload="yes" if the option doesn't already exist, so I recommend you delete and then add rather than update. You can then retrieve your data easily with get_option.
Custom Post Type
You can easily store your data in the wp_posts table by registering a custom post type, then use wp_insert_post to save records and the usual WordPress post queries to retrieve them. Great for long-term data that you want to hang onto. You can make use of the post_title, post_content, post_excerpt and other standard post fields to store your data, and if you need more, you can add post meta fields.

C# ASMX web service semi-permanent storage requirement

I'm writing a mock of a third-party web service to allow us to develop and test our application.
I have a requirement to emulate functionality that allows the user to submit data, and then at some point in the future retrieve the results of processing on the service. What I need to do is persist the submitted data somewhere, and retrieve it later (not in the same session). What I'd like to do is persist the data to a database (simplest solution), but the environment that will host the mock service doesn't allow for that.
I tried using IsolatedStorage (application-scoped), but this doesn't seem to work in my instance. I'm using the following to get the store:
IsolatedStorageFile.GetStore(IsolatedStorageScope.Application |
    IsolatedStorageScope.Assembly, null, null);
I guess my question is (bearing in mind the fact that I understand the limitations of IsolatedStorage) how would I go about getting this to work? If there is no consistent way to do it, I guess I'll have to fall back to persisting to a specific file location on the filesystem, and all the pain of permission setting that entails in our environment.
Self-answer.
For the purposes of dev and test, I realised it would be easiest to limit the lifetime of the persisted objects, and use
HttpRuntime.Cache
to store the objects. This has just enough flexibility to cope with my situation.