I am exploring the opportunity to build a CouchDB connector for Loopback.io.
I know CouchDB has a REST interface, but for some reason, when I put the base URL of my local CouchDB server into a REST connector in LoopBack, CouchDB returns an error about some headers missing in the request.
Since some useful functions could be added to exploit views and so on, I am exploring the creation of a loopback-connector-couchdb.
So the easy question is: what are the methods that a connector needs to implement to map exactly to the standard API endpoints offered by Loopback.io for a model?
Basic example:
POST /models (with payload body) --> all good on the "create" function of the connector
DELETE /models/{id} --> I get an error saying that the destroyAll function is NOT implemented (correct) but the destroy function IS implemented instead...
What is the difference between HEAD /models/{id} and GET /models/{id}/exists in terms of the functions called?
I try to verify the existence of the model created (successfully) in CouchDB via its ID using GET /models/{id}/exists, and instead of the connector's "exists" function being called, another function called "count" is called instead.
It is as if some, but not all, functions are mapped to the connector. (Note: I am not using the DataAccessObject property of the connector, as that seems to be more for additional methods, so to speak... and one of the methods does work!)
...I am confused!
Thanks for any guidance. I am trying to follow this, but I can't easily map the standard API endpoints to the minimum set of connector functions (see point 2 above, for instance):
Building a connector - Loopback.io documentation
I would suggest playing with the API explorer to figure out your endpoints.
Create a sample LoopBack project via slc loopback
Create some models via slc loopback:model
Start the app via slc run
Browse to localhost:3000/explorer
In there you can see all the endpoints that are automatically generated by LoopBack. For example, if you click the GET endpoint for a model, it will show the query as GET /api/<modelname>.
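To make the endpoint-to-method mapping from the question concrete, here is a minimal connector skeleton, assuming the loopback-datasource-juggler contract of that era (CouchDBConnector and the empty method bodies are placeholders, not a definitive implementation):

    'use strict';

    // Hypothetical skeleton for a loopback-connector-couchdb.
    exports.initialize = function initializeDataSource(dataSource, callback) {
      dataSource.connector = new CouchDBConnector(dataSource.settings);
      callback();
    };

    function CouchDBConnector(settings) {
      this.settings = settings;
    }

    // POST /models
    CouchDBConnector.prototype.create = function (model, data, callback) { /* ... */ };

    // GET /models and GET /models/{id} -- findById is routed through `all`
    // with an id filter, so one method serves both endpoints.
    CouchDBConnector.prototype.all = function (model, filter, callback) { /* ... */ };

    // PUT /models/{id}
    CouchDBConnector.prototype.updateAttributes = function (model, id, data, callback) { /* ... */ };

    // DELETE /models/{id} -- the juggler deletes by id through destroyAll with a
    // where clause, which is why the error complains about destroyAll even
    // though destroy is implemented.
    CouchDBConnector.prototype.destroyAll = function (model, where, callback) { /* ... */ };

    // GET /models/count -- and, as observed in the question, also
    // GET /models/{id}/exists and HEAD /models/{id}, both of which check
    // existence via count({id: id}) rather than a dedicated exists method.
    // Note the legacy (model, callback, where) argument order used by
    // connectors of that era.
    CouchDBConnector.prototype.count = function (model, callback, where) { /* ... */ };

In other words, HEAD /models/{id} and GET /models/{id}/exists end up on the same count-based path, which matches the behavior described in the question.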
For securing a frontend application, I created a new Keycloak client with a custom configuration:
mapper which includes "client roles"
scope configuration
client-specific roles (composite and non-composite roles)
This setup works fine in the local development setup. Now we need to transfer this configuration to the other environments like develop/preproduction/production stage.
As far as I understand, Keycloak offers the following exports:
Complete realm
Specific client
It looks as if both approaches have some major drawbacks: either I would need to overwrite the complete realm (which I definitely don't want to do in production), or I can import only the basic client configuration, which is missing all the roles.
And as soon as we add more roles later on, for example, we would need to re-configure all stages manually.
Is there some "good practice" for dealing with this? Does Keycloak offer some kind of "sync" between stages?
I thought this was a hard question to answer.
It comes down to comparing API calls vs. UI configuration.
Disadvantages of API calls: I prefer API calls, but it takes time to figure out the right API functions, the call order matters, some properties missing on the parent have to be set in detail on the child, the API URL paths have a complicated structure (for example id/property/id/property), and it requires deeper knowledge of Keycloak.
Advantages of API calls: fine-tuning is fast, it is easy to organize from top to bottom (for example, configuring the client, auth resources, auth scopes, policies, and permissions into another environment), and you can transfer 100% of the configuration.
Disadvantages of UI configuration: it is not flexible; mismatched ids cause errors; you can't update or add partial data (for example, a client export missing its scopes has to be fixed with a separate API call); you can't move 100% of the configuration from the source to the target environment; and it invites human error.
Advantages of UI configuration: easy and quick, even when done manually.
My preference is API calls: using Postman at the local and develop stages (single API calls, or running a collection for a sequence of API calls, with simple unit tests and HTTP status checks), and curl calls from a Bash shell for the higher stages; a sketch follows below. If you check the state of the target first, you can handle scenario-based transfers (for example, skipping configuration that is already set).
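As a rough sketch of that curl approach, the following copies one client and its client roles between stages via the Admin REST API (hostnames, realm, client name, and the /auth context path are placeholder assumptions for an older Keycloak; composite role links would need additional calls to .../roles/{name}/composites, omitted here):

    #!/usr/bin/env bash
    SRC=https://keycloak-dev.example.com   # placeholder source stage
    DST=https://keycloak-prod.example.com  # placeholder target stage
    REALM=myrealm
    CLIENT=my-frontend

    # 1. Obtain an admin token for the source; get DST_TOKEN the same way against $DST.
    TOKEN=$(curl -s "$SRC/auth/realms/master/protocol/openid-connect/token" \
      -d "client_id=admin-cli" -d "grant_type=password" \
      -d "username=admin" -d "password=$ADMIN_PASSWORD" | jq -r .access_token)

    # 2. Export the client representation; strip the internal id so it can be re-created.
    curl -s -H "Authorization: Bearer $TOKEN" \
      "$SRC/auth/admin/realms/$REALM/clients?clientId=$CLIENT" \
      | jq '.[0] | del(.id)' > client.json

    # 3. Export the client roles separately - they are not part of the client representation.
    SRC_ID=$(curl -s -H "Authorization: Bearer $TOKEN" \
      "$SRC/auth/admin/realms/$REALM/clients?clientId=$CLIENT" | jq -r '.[0].id')
    curl -s -H "Authorization: Bearer $TOKEN" \
      "$SRC/auth/admin/realms/$REALM/clients/$SRC_ID/roles" > roles.json

    # 4. Create the client on the target, look up its new id, then create each role.
    curl -s -X POST -H "Authorization: Bearer $DST_TOKEN" -H "Content-Type: application/json" \
      -d @client.json "$DST/auth/admin/realms/$REALM/clients"
    DST_ID=$(curl -s -H "Authorization: Bearer $DST_TOKEN" \
      "$DST/auth/admin/realms/$REALM/clients?clientId=$CLIENT" | jq -r '.[0].id')
    jq -c '.[] | del(.id)' roles.json | while read -r role; do
      curl -s -X POST -H "Authorization: Bearer $DST_TOKEN" -H "Content-Type: application/json" \
        -d "$role" "$DST/auth/admin/realms/$REALM/clients/$DST_ID/roles"
    done

Checking for existence before each create (an HTTP 409 on the client POST, or a GET per role) turns this into the skip-if-already-set, scenario-based transfer mentioned above.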
One more tip: if you open the debug tools (F12) in Chrome or Firefox, you can see the Admin Console's API calls in the network tab. It saves time figuring out the API call methods and the payload/response JSON data.
We have a Java application which consumes the Salesforce partner.wsdl. We log in to the Salesforce instance, then we get the metadata for all the objects and cache it. As the number of Salesforce objects grows, the first call takes more and more time to fetch and cache that metadata.
What is the best way to reduce this time, even as more objects are introduced in Salesforce?
Is there any SOAP API call I can make to get metadata for only one object and its dependencies?
Or do we need to use only describeSObject to get this information?
Cache the SF responses, flush the cache once a day, not with every request?
Look into the REST API, either as a complete replacement or just to take advantage of "If-Modified-Since"; this header also works per object.
Experiment with queries on the EntityDefinition table to learn the names of only the objects you're interested in (you probably don't care about Apex classes, custom settings, *share and *history tables...). For example https://stackoverflow.com/a/64276053/313628
Then yes - describe just them, using REST or SOAP's describeSObject. If you have many objects, the network round trips might be an issue; you'd need to debug the app to see where it spends most of its time. Combat it by requesting up to 100 objects at a time, maybe issuing multiple requests (async processing? threads?) and combining the results later; a sketch follows below.
Does it have to be the partner WSDL? You could "preload" objects in your app using the enterprise WSDL and combine some of the techniques listed above.
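As a sketch of the EntityDefinition-plus-batched-describe idea over the partner API (class and variable names are illustrative; connection setup, error handling, and paging are trimmed):

    import java.util.ArrayList;
    import java.util.List;

    import com.sforce.soap.partner.DescribeSObjectResult;
    import com.sforce.soap.partner.PartnerConnection;
    import com.sforce.soap.partner.QueryResult;
    import com.sforce.soap.partner.sobject.SObject;

    public class MetadataLoader {

        // Describe only the customizable objects, 100 per round trip
        // (the describeSObjects limit).
        public List<DescribeSObjectResult> loadDescribes(PartnerConnection conn)
                throws Exception {
            // Note: EntityDefinition doesn't support queryMore(); for very
            // large orgs, page with LIMIT/OFFSET instead.
            QueryResult qr = conn.query(
                "SELECT QualifiedApiName FROM EntityDefinition WHERE IsCustomizable = true");

            List<String> names = new ArrayList<>();
            for (SObject row : qr.getRecords()) {
                names.add((String) row.getField("QualifiedApiName"));
            }

            List<DescribeSObjectResult> results = new ArrayList<>();
            for (int i = 0; i < names.size(); i += 100) {
                String[] chunk = names.subList(i, Math.min(i + 100, names.size()))
                        .toArray(new String[0]);
                for (DescribeSObjectResult dr : conn.describeSObjects(chunk)) {
                    results.add(dr);
                }
            }
            return results;
        }
    }

The per-chunk calls are independent, so they are also a natural candidate for the parallel/async processing mentioned above.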
Inside an OSGi component/service I need a JSON representation of a resource (pages, CF, etc.) exactly as retrieved via the Sling model selector (resource.model.json).
Unfortunately, inside an OSGi component or service there is no (Sling) request object available.
Is there a way to get the JSON representation (with all the component's model exporters) without making an HTTP request to localhost?
That's not a problem as long as you have access to the resource.
First you need to make sure that your model can deliver the JSON via a method call. See "Get .model.json as String" for an explanation of how to do this.
Once that is done, use the ModelFactory to create the model from the resource. This will create an instance of your Sling Model for the given resource; just call the method you created before to get your JSON.
See https://sling.apache.org/apidocs/sling10/org/apache/sling/models/factory/ModelFactory.html
Your model should probably have adaptables= {Resource.class} - if you adapt from Request there might be trouble ahead.
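A minimal sketch of wiring both steps together inside a service (ResourceJsonService and MyComponentModel are placeholder names; it assumes the model is registered with the Jackson exporter, i.e. annotated with @Exporter(name = "jackson", extensions = "json"), and here ModelFactory.exportModel plays the role of the JSON-delivering method call):

    import java.util.Collections;

    import org.apache.sling.api.resource.Resource;
    import org.apache.sling.models.factory.ModelFactory;
    import org.osgi.service.component.annotations.Component;
    import org.osgi.service.component.annotations.Reference;

    @Component(service = ResourceJsonService.class)
    public class ResourceJsonService {

        @Reference
        private ModelFactory modelFactory;

        public String toJson(Resource resource) throws Exception {
            // Instantiate the Sling Model directly from the resource - no request needed.
            Object model = modelFactory.createModel(resource, MyComponentModel.class);
            // Run it through the registered Jackson exporter, producing the same
            // JSON the .model.json selector would.
            return modelFactory.exportModel(model, "jackson", String.class,
                    Collections.emptyMap());
        }
    }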
HTH,
OliG
I'm writing a web service in Swift using Vapor framework.
I use FluentSQLite to save data. I have a User model which conforms to SQLiteModel and Migration. I have added routes to create a new user via a POST method and to return the list of users via a GET method, like below.
When I hit the GET API for the first time, it returns an empty array. After I POST some users, I am able to get them. But when I stop the service and run it again, I am unable to get the previously saved users.
Since I am new to Vapor, I can't figure out what I am missing here, and all the online searches and docs didn't help. Initially I did not have the save or query inside a transaction; after seeing that in the docs I tried that as well, but the issue is the same.
What does your configuration for the SQLite database (typically in Sources/App/configure.swift) look like?
Is it actually persisting to disk, or just running an in-memory database (which goes away when you restart)?
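For comparison, here is roughly what a disk-backed setup looks like in a Vapor 3 / FluentSQLite configure.swift (db.sqlite is a placeholder path, and User is the model from the question):

    import FluentSQLite
    import Vapor

    public func configure(_ config: inout Config, _ env: inout Environment,
                          _ services: inout Services) throws {
        try services.register(FluentSQLiteProvider())

        // .memory is wiped on every restart; .file persists between runs.
        let sqlite = try SQLiteDatabase(storage: .file(path: "db.sqlite"))

        var databases = DatabasesConfig()
        databases.add(database: sqlite, as: .sqlite)
        services.register(databases)

        var migrations = MigrationConfig()
        migrations.add(model: User.self, database: .sqlite)
        services.register(migrations)
    }

If your configure.swift currently says SQLiteDatabase(storage: .memory), that alone explains the behavior: the whole database lives in RAM and disappears when the process stops.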
I'm trying to pull off some tests for my RESTful API functions.
For this I did the following:
Installed PHPUnit.
Created a new database for testing.
Created a new environment (test) and changed the Doctrine config for it.
Created a test.
My problem is this:
When performing a request (somedomain.com/api/somemethod), the requested page doesn't know I'm running a test against it, so the data it uses comes from the production/development database and not the 'test' DB I created for the tests.
(The test script uses the test DB; the requested page uses the normal configuration.)
Is there a way to solve this without touching or modifying the API code/behavior?
Thanks.
Since you said you're requesting somedomain.com, I can only suspect you're firing requests over HTTP.
Symfony is made to be easily testable, and you can perform functional tests without ever making a real HTTP request. Instead, it builds a request object and tells its kernel to handle it as if it were coming from a real client.
There is a chapter on this in the Symfony book: Functional tests
If you use the method described there (using the Symfony BrowserKit client and paths instead of complete URLs), Symfony will boot its kernel in the test environment and handle the request that way, as in the sketch below.
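A minimal sketch of such a functional test (the /api/somemethod route is taken from the question; the class name is a placeholder):

    <?php

    use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

    class ApiSomeMethodTest extends WebTestCase
    {
        public function testSomeMethod()
        {
            // createClient() boots the kernel in the "test" environment,
            // so your test Doctrine configuration (and test DB) is used.
            $client = static::createClient();
            $client->request('GET', '/api/somemethod');

            $this->assertEquals(200, $client->getResponse()->getStatusCode());
        }
    }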
If, however, for any reason you are unable to or don't want to do it that way and want to fire real HTTP requests, I suggest you make a file in the web directory called app_test.php. In that file you should boot the kernel in the test environment and make sure your tests actually hit that file (instead of app.php or app_dev.php). Bear in mind, though, that this file will be publicly available, and as such it is a security hole, so make sure to guard it somehow (check app_dev.php for hints). As an idea, you could require a specific key in a request header to let requests through; or, if it will be tested from a single machine, you could guard it by IP, or whatever works for your case. A sketch follows below.
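Something along these lines, assuming a Symfony 2/3-style web/ directory (the X-Test-Key header guard is just one example; pick whatever guard fits your case):

    <?php

    // web/app_test.php - boots the kernel in the "test" environment.
    use Symfony\Component\HttpFoundation\Request;

    // Example guard: require a shared-secret header (replace with your own check).
    if (!isset($_SERVER['HTTP_X_TEST_KEY']) || $_SERVER['HTTP_X_TEST_KEY'] !== 'change-me') {
        header('HTTP/1.0 403 Forbidden');
        exit('Forbidden');
    }

    $loader = require __DIR__.'/../app/autoload.php';

    $kernel = new AppKernel('test', true);
    $request = Request::createFromGlobals();
    $response = $kernel->handle($request);
    $response->send();
    $kernel->terminate($request, $response);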