From https://github.com/servirtium/worldbank-climate-recordings/blob/main/serve.rb I have a wee two-line Sinatra script that serves XML files from a modest directory structure:
require 'sinatra'
set :public_folder, Dir.pwd  # `set :public` was renamed to `set :public_folder` in Sinatra 1.3
I feel I need some way to set response headers globally. Specifically, I want to set a few headers to simulate AWS serving this formerly dynamic API. I only use this for test invocations, or will do once I can finish this fine-grained piece.
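A minimal sketch of one way to do that, assuming a current Sinatra: the built-in static handler runs before the `before` filters, so the files are served from a wildcard route instead, which lets a filter stamp headers on every response. The header names and values below are made-up stand-ins for whatever AWS would send.

require 'sinatra'
require 'securerandom'

# Serve files from a route rather than the public folder, so the
# `before` filter applies to every response.
disable :static

before do
  # Made-up headers to make responses look AWS-served.
  headers 'Server'           => 'AmazonS3',
          'x-amz-request-id' => SecureRandom.hex(8)
end

get '/*' do
  # Test-only sketch: no path-traversal guard.
  path = File.join(Dir.pwd, params['splat'].first)
  halt 404 unless File.file?(path)
  send_file path, type: 'application/xml'
end

Since this replaces the static handler, the `set :public_folder` line is no longer needed.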
For securing a frontend application, I created a new Keycloak client with a custom configuration:
- a mapper which includes "client roles"
- a scope configuration
- client-specific roles (composite and non-composite)
This works fine in local development. Now we need to transfer the configuration to the other environments: develop, preproduction, and production.
As far as I understand, Keycloak offers the following exports:
- the complete realm
- a specific client
It looks as if both approaches have major drawbacks: either I overwrite the complete realm (which I definitely don't want to do in production), or I import the basic client configuration, which is missing all the roles.
And as soon as we add more roles later on, we would need to reconfigure all stages manually.
Is there some good practice for dealing with this? Does Keycloak offer some kind of sync between stages?
This is a hard question to answer definitively. It comes down to comparing API calls against UI configuration.
Disadvantages of API calls: I prefer API calls, but it takes time to figure out the right API functions; call order matters; some properties missing on the parent have to be set in detail on the child; the API URL structure is complicated (e.g. id/property/id/property); and it requires deeper knowledge of Keycloak.
Advantages of API calls: finer tuning; fast; easy to organize from top to bottom (e.g. configure the client, then auth resources, auth scopes, policies, and permissions in the other environment); can transfer 100% of the configuration.
Disadvantages of UI configuration: not flexible; mismatched IDs cause import errors; you can't update or add partial data (for example, a fetched client resource is missing its scopes, which then have to be set via a separate API call); you can't move 100% of the configuration from the source to the target environment; and it invites human error.
Advantages of UI configuration: easy and quick, even though manual.
My preference is API calls: Postman at the local and develop stages (single API calls, or a collection to run a sequence of calls; there you can do simple unit tests and check HTTP status codes), and curl from a Bash shell for the higher stages. If you check the state of the target first, you can handle scenario-based transfers (e.g. if something is already configured, skip it).
One more tip: if you open the browser's debug tools (F12 in Chrome or Firefox) while using the admin console, you can watch the admin API calls in the network tab. That saves time figuring out the API methods and the JSON payload/response data.
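As a concrete illustration of the API-call approach, here is a rough Ruby sketch that authenticates against the admin API and pulls a client plus its roles out of a source realm. The host, realm, and client names are hypothetical, and on older Keycloak versions the paths carry an /auth prefix.

require 'net/http'
require 'json'
require 'uri'

BASE  = 'https://keycloak.example.com'  # hypothetical host
REALM = 'myrealm'                       # hypothetical source realm

# 1. Obtain an admin token (password grant against the admin-cli client).
token_res = Net::HTTP.post_form(
  URI("#{BASE}/realms/master/protocol/openid-connect/token"),
  'grant_type' => 'password', 'client_id' => 'admin-cli',
  'username' => ENV['KC_USER'], 'password' => ENV['KC_PASS'])
token = JSON.parse(token_res.body).fetch('access_token')

def admin_get(path, token)
  uri = URI("#{BASE}#{path}")
  req = Net::HTTP::Get.new(uri, 'Authorization' => "Bearer #{token}")
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
end

# 2. Look up the client by clientId, then fetch its roles -- call order
#    matters, since the roles endpoint needs the client's internal id.
client = JSON.parse(admin_get("/admin/realms/#{REALM}/clients?clientId=frontend", token).body).first
roles  = JSON.parse(admin_get("/admin/realms/#{REALM}/clients/#{client['id']}/roles", token).body)
puts JSON.pretty_generate('client' => client, 'roles' => roles)

Posting the same representations to the target realm's /admin/realms/{realm}/clients and .../roles endpoints is the transfer step; checking for an existing client first gives you the "already configured, skip it" behaviour described above.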
Typically when designing an API I attempt to stick to the following structure:
GET: /resources (get multiple resources)
POST: /resource (create a single resource)
GET: /resource/:id (get a single resource)
PUT: /resource/:id (update a single resource)
DELETE: /resource/:id (delete a single resource)
But sometimes, when you are "getting" data, the parameters being passed in grow beyond what you can sensibly include in a query string. For example, in the GET: /resources case above, there might be a number of filters you want to apply to the resources you are selecting.
In this case is it ok to begin using a POST so that you can include parameters in the request body? What are the drawbacks from breaking away from adherence to the structure I mentioned above?
> In this case is it ok to begin using a POST so that you can include parameters in the request body?

Yes; which is to say that there are trade-offs.
> What are the drawbacks from breaking away from adherence to the structure I mentioned above?
It interferes with the ability of generic components to intelligently participate in the protocol.
A GET request has safe semantics; the agent can take advantage of this to do pre-fetching of resources, crawlers can explore the content freely, and so on.
Successful unsafe methods invalidate cache entries. That gets awkward when you want multiple representations of the same resource; fetching one representation via POST will evict other representations of the same resource from the cache.
If all we really wanted was RPC, we could do everything with POST. See "SOAP", for instance, where all of the messaging is built into the payload and HTTP is just used as a dumb tunnel.
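A common compromise is to keep GET for simple filters and add a dedicated search resource whose filter document travels in a POST body. A minimal Sinatra sketch (resource names and fields are hypothetical):

require 'sinatra'
require 'json'

# Stand-in data for a real store.
RESOURCES = [
  { 'id' => 1, 'status' => 'active' },
  { 'id' => 2, 'status' => 'archived' }
]

# Simple filters stay on GET, so caches, crawlers, and pre-fetching still work.
get '/resources' do
  content_type :json
  wanted = params['status']
  (wanted ? RESOURCES.select { |r| r['status'] == wanted } : RESOURCES).to_json
end

# Complex filter documents go in a POST body; the trade-off is that these
# responses are opaque to generic caches.
post '/resources/search' do
  content_type :json
  criteria = JSON.parse(request.body.read)  # e.g. {"status":"active"}
  RESOURCES.select { |r| criteria.all? { |k, v| r[k] == v } }.to_json
end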
1) REST is something like a design pattern for HTTP communication. It is always good to follow REST, especially when you expose your API for public use or for browser-to-server communication.
2) You can write HTTP requests without following a proper REST pattern, but that will lead to unnecessary problems in browser-to-server communication, because modern browsers are designed around REST conventions and understand the pattern well. GET requests are cached by default; if you use POST instead, responses are not cached, so a new request is fired to the server every time, which costs extra connections and resources.
3) GET: the word itself tells you it is only for getting a resource. Likewise, POST and PUT are for creating and updating records, DELETE is for deleting, and so on.
4) POST vs. GET: with POST you can include a request body, whereas with a GET request you can't. It is better to use GET when fetching resource data.
I'm trying to pull off some tests for my RESTful API functions.
For this I did the following:
- Installed PHPUnit.
- Created a new database for testing.
- Created a new environment (test) and changed the Doctrine config for it.
- Created a test.
My problem is this:
When performing a request (somedomain.com/api/somemethod), the requested page doesn't know I'm running a test against it, so the data it uses comes from the production/development database and not the test DB I created for the tests.
(The test script uses the test DB; the requested page uses the normal configuration.)
Is there a way to solve this without touching or modifying the API code/behavior?
Thanks.
Since you said you're requesting somedomain.com, I can only suspect you're firing requests over HTTP.
Symfony is made to be easily testable, and you can perform functional tests without ever making a real HTTP request. Instead, the test client builds a request object and tells the kernel to handle it as if it had come from a real client.
There is a chapter on this in the Symfony book: Functional tests.
If you use the method described there (the Symfony BrowserKit client with paths instead of complete URLs), Symfony boots its kernel in the test environment and handles the request that way.
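The same in-process idea exists in other stacks; for comparison, here is a minimal sketch of it in Ruby with Rack::Test driving a hypothetical app, where the "request" is handed straight to the application object with no network involved:

require 'sinatra/base'
require 'rack/test'
require 'minitest/autorun'

# Stand-in for the API under test.
class Api < Sinatra::Base
  get('/api/somemethod') { 'ok' }
end

class ApiTest < Minitest::Test
  include Rack::Test::Methods

  def app
    Api  # tells Rack::Test which Rack app to drive
  end

  def test_somemethod
    get '/api/somemethod'  # no HTTP socket: the request goes straight to the app
    assert last_response.ok?
  end
end

Symfony's BrowserKit client plays exactly this role for the kernel booted in the test environment.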
If, however, you are unable or unwilling to do it that way and want to fire real HTTP requests, I suggest creating a file in the web directory called app_test.php. In that file, boot the kernel in the test environment and make sure your tests actually hit that file (instead of app.php or app_dev.php). Keep in mind that this file will be publicly accessible, and as such it is a security hole, so guard it somehow (check app_dev.php for hints). For example, you could require a specific key in a request header to let requests through; or, if the tests run from a single machine, guard it by IP, or whatever else works for your case.
I am building RESTful API services with ZF 1.10.8. As a newbie, I find ZF routing a little confusing.
I need versioning, api_key, and response format in the URL, something like:
/:version/:response_format/:api_key/:controller ...
/1.0/json/1234567890/articles/
Versioning is module-based, with the latest version as the default.
How do I get this done?
Versioning is really not as simple as putting /v1/ in the URI.
In fact, that makes the API non-REST.
To do REST properly, every resource (thing the client wants to access) has one and only one URI.
The URI stays the same for v1, v2, and v3; what changes is how you present that resource to the client.
How do you know which version they want? They set it as a request header.
How do you know which format (json,xml,html,wml,etc) they want it in? They set it as a request header.
How do you know which language they want it in? Request header.
The thing to remember is that the URI they are requesting stays the same.
Because each resource only has 1 URI, you never want a method name in the URI.
This is bad:
- /edit/place/43
Instead, you should use the proper HTTP methods
- to create a place, do an HTTP POST to /place
- to view place 43, do an HTTP GET to /place/43
- to update place 43, do an HTTP PUT to /place/43
- to delete place 43, do an HTTP DELETE to /place/43
When returning the response to the client, you should also include the URIs of all related bits of data the client might want to retrieve next. One of the principles of REST is that once the client has connected, it can find all the URIs it needs within the API itself. It only needs to know one URI to get into the system, and from that point on, all required URIs are provided in responses. This has the benefit of allowing you to change your URIs at will, since the client should never be paying attention to what they are... just using them as needed (i.e. the client knows what the URI points to, but not where it points).
Lastly, keep in mind that you don't want to send success/error markers as XML or JSON. They should be sent back as HTTP response codes: there's a code for creation, one for deletion, one for updating, and so on.
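To make the header-driven approach concrete, here is a minimal Sinatra sketch (the vendor media type is hypothetical): one URI, with the representation chosen by the Accept header.

require 'sinatra'
require 'json'

PLACES = { 43 => { 'id' => 43, 'name' => 'Somewhere' } }  # stand-in data

get '/place/:id' do
  place = PLACES[params['id'].to_i] or halt 404

  # e.g. "Accept: application/vnd.example.v2+json" selects version 2.
  if request.accept?('application/vnd.example.v2+json')
    content_type 'application/vnd.example.v2+json'
    place.merge('links' => [{ 'rel' => 'self', 'href' => "/place/#{place['id']}" }]).to_json
  else
    content_type :json  # default (v1) representation
    place.to_json
  end
end

The v2 representation also carries the related-URI links mentioned above, so the client can discover everything else from the response.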
Here are some fantastic articles on REST in general, and doing REST with the Zend Framework in particular:
http://blog.steveklabnik.com/2011/07/03/nobody-understands-rest-or-http.html
http://timelessrepo.com/haters-gonna-hateoas
http://martinfowler.com/articles/richardsonMaturityModel.html
http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven
http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_1
http://www.techchorus.net/create-restful-applications-using-zend-framework
http://www.techchorus.net/create-restful-applications-using-zend-framework-part-ii-using-http-response-code
http://weierophinney.net/matthew/archives/233-Responding-to-Different-Content-Types-in-RESTful-ZF-Apps.html
http://www.enrise.com/2010/12/rest-style-context-switching/
http://www.enrise.com/2011/01/rest-style-context-switching-part-2/
http://www.informit.com/articles/article.aspx?p=1566460
http://www.chrisdanielson.com/tag/zend_rest_controller/
http://barelyenough.org/blog/2008/05/versioning-rest-web-services/
I particularly recommend the article at weierophinney.net, for implementation details.
This is just an idea, but I would avoid making the code know anything at all about the version (other than its own current version number). Instead, I would make the /:version/ part of your URI the base in your rewrite scheme.
So instead of the base being something like: "http://www.example.com/"
It would be: "http://www.example.com/1.0/"
This way you can keep different branches of your source control deployed side by side on the server, and the web server determines which version to route the URI to. Your code then needs no knowledge of how to handle different versions, and the code base doesn't get polluted with large switch statements doing different things based on version.
To make it a little safer, you can require requests to contain the version number in the header. Then your code can just check if the version number in the header matches the version number of the code it's being routed to and throw an error if they don't match.
For example: Sending a GET to http://www.example.com/2.0/ with a version number in the header of 1.0 would throw a "wrong version" error. Your code would only need to know that header_version != current_version, so it shouldn't need to change as you release new versions.
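A framework-agnostic sketch of that guard as Rack middleware (the header name and version strings are hypothetical):

# Reject requests whose declared version doesn't match the version
# this deployed branch implements.
class VersionGuard
  CURRENT_VERSION = '2.0'  # the branch deployed under /2.0/

  def initialize(app)
    @app = app
  end

  def call(env)
    requested = env['HTTP_X_API_VERSION']  # hypothetical request header
    if requested && requested != CURRENT_VERSION
      body = %({"error":"wrong version","expected":"#{CURRENT_VERSION}"})
      return [400, { 'content-type' => 'application/json' }, [body]]
    end
    @app.call(env)
  end
end

# config.ru:  use VersionGuard  (in front of the app for this branch)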
I am in need of a scalable and performant HTTP application/server that will be used for static file serving and uploading, so I only need support for GET and PUT operations.
However, there are a few extra features that I need:
- Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.
- Support for signed access keys: access to resources via PUT should be signed with a key, like http://uri/?key=foo. The key contains information about the request, like md5(user + path + secret), which lets me block unwanted requests; the application/server should let me verify this (see the sketch after this list).
- Performance: I'd like to avoid piping content as much as possible. Otherwise the whole application could be implemented in a few lines of Perl/etc. as a CGI.
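To illustrate the signed-key scheme named above, a small Ruby sketch (the secret and example values are hypothetical):

require 'digest/md5'

SECRET = 'changeme'  # shared secret known to the server and to signers

# The scheme from the question: key = md5(user + path + secret).
def access_key(user, path)
  Digest::MD5.hexdigest(user + path + SECRET)
end

def authorized?(user, path, key)
  key == access_key(user, path)  # a constant-time compare would be safer
end

# A client would then PUT to /files/report.xml?key=<this value>:
puts access_key('alice', '/files/report.xml')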
Perlbal (in web-server mode) looks nice; however, its single-threaded model doesn't fit my database lookups, and it doesn't support query strings either.
Lighttpd/Nginx/… have modules for some of these tasks, but it doesn't seem feasible to put everything together without ending up writing my own extensions/modules.
So how would you solve this? Are there other lightweight web servers suited to this?
Should I implement an application inside a web server (i.e. CGI)? How can I avoid, or at least speed up, piping content between the web server and my application?
Thanks in advance!
Have a look at Node.js: http://nodejs.org/
There are a few modules for static web servers and database interfaces:
http://wiki.github.com/ry/node/modules
You might have to write your own file upload handler, or use the one from this example: http://www.componentix.com/blog/13
nginx + spawn-fcgi + an FCGI application written in C + memcached + SQLite serves a similar task well; latency is about 20-30 ms for small payloads over fast connections within the same local network. As far as I know, the production server handles about 100-150 requests per second with no problem. On a test server I peaked at 20k requests per second, again with no problem, with average latency around 60 ms. Aggressive caching and UNIX domain sockets are the key.
I don't know how that configuration behaves under frequent PUT requests; in our task they are very rare and typically batched.