I am trying to write a JavaScript client for a web application that exposes a REST API. I want to do this properly, using a proven stack of tools and methodologies available in JavaScript.
Most of the guides about JavaScript client library development I found on the web are application oriented and include a view part (an HTML part). What I need is a client library with methods that can be used to build web applications, so I don't want this library to depend on any other JavaScript library like jQuery, Backbone, etc.
I have gone through a lot of the design patterns available in JavaScript, especially the patterns described in Learning JavaScript Design Patterns, a book by Addy Osmani. After all that I got confused and couldn't decide which one to follow.
What I have in mind is the following:
Initialize the library with a key and secret (this can be compared to instantiating an object of a class in PHP).
There will be a data persistence unit which keeps the authenticated user's identity for a predefined amount of time, like sessions in PHP. User data will be stored in cookies or local storage. There will also be a way to override the methods of this unit so that users can implement their own storage mechanism. A reference to this unit will also be passed in during library initialization.
Keep a global request method which handles all the API requests associated with the library (this can be compared to a method of the main class in PHP).
Encapsulate the API methods in different units according to the area of the application they deal with. Each unit will have a constructor which sets some default properties on the unit (this can be compared to defining models in PHP which fetch or save data from the application through the API). Each of these units can inherit from a super unit which provides some default properties and methods.
After reading some blogs and articles I have decided to use Yeoman for the library development. Maybe I can use one of the community Yeoman generators for JavaScript libraries to get started.
As I described above, I think what I need is a class which keeps a single instance throughout the application and which can be used to reference all the models and the functions on those models. For this I could go with the singleton module pattern, but I am not fully sure how to apply it to my requirement.
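Roughly, this is the shape I have in mind, written as a rough TypeScript sketch. All the names here (ApiClient, TokenStorage, Users) are placeholders I made up, not an existing library:

```typescript
// api-client.ts -- sketch of a single shared client with pluggable storage

// Storage abstraction so consumers can plug in cookies, localStorage, or anything else.
export interface TokenStorage {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

// Default implementation backed by window.localStorage.
const localStorageBackend: TokenStorage = {
  get: (key) => window.localStorage.getItem(key),
  set: (key, value) => window.localStorage.setItem(key, value),
};

// The single shared instance: configuration plus one global request method.
export const ApiClient = {
  apiKey: "",
  apiSecret: "",
  storage: localStorageBackend,

  // Comparable to "constructing" the library: call once at startup.
  init(key: string, secret: string, storage?: TokenStorage) {
    this.apiKey = key;
    this.apiSecret = secret;
    if (storage) this.storage = storage;
    return this;
  },

  // Every model unit funnels its HTTP calls through here.
  async request<T>(method: string, path: string, body?: unknown): Promise<T> {
    const response = await fetch(`https://api.example.com${path}`, {
      method,
      headers: {
        "Content-Type": "application/json",
        "X-Api-Key": this.apiKey,
        Authorization: `Bearer ${this.storage.get("token") ?? ""}`,
      },
      body: body === undefined ? undefined : JSON.stringify(body),
    });
    return response.json() as Promise<T>;
  },
};

// A "unit" grouping related API methods -- one of the model-like objects.
export const Users = {
  me: () => ApiClient.request("GET", "/users/me"),
  signIn: (name: string, password: string) =>
    ApiClient.request("POST", "/auth/sign-in", { name, password }),
};
```

An exported object like this already behaves as a single shared instance per page, which I understand is usually all the "singleton" one needs in JavaScript.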
Any advice is greatly appreciated!
Imagine I have a Razor Page or something similar, and imagine the data used by that Razor Page is not used by any other page at all, so the data retrieval is specific to this page only.
Is it bad practice to just grab the data directly using a database connection from within that Razor Page local to the only place that data is to be used?
If so, why should I abstract the data away into a separate API that isn't re-used anywhere? Why is it good practice?
It seems to me that REST APIs are sometimes used unnecessarily and for no good reason, as if only because every example video shows data being retrieved from a REST API. Correct me if I am wrong.
If your application is purely a server-side app, there is no justification for creating a RESTful API that serves up JSON for it. Those kinds of APIs are usually created for "external" consumers, by which I mean third parties or the browser (via JavaScript). They are commonly implemented for client-side apps, typically single-page apps built with React, Angular or Blazor, where JSON is the data format of choice for the browser.
As to whether you should open database connections in your PageModel class, that's another question. For simple apps, why not? But for apps that need unit testing, it's not a good idea: you will be unable to execute unit tests against the PageModel class without hitting the database.
As a habit, I tend to put the code that connects to a database in a series of separate classes, each one having an interface, and then inject them into the PageModel via dependency injection. That way I can mock the service represented by the interface for unit testing.
You might want to implement services that generate data as JSON within a Razor Pages app if you have some functionality that depends on Ajax requests for data. For those, you could use Web API controllers, minimal request handlers or even named handler methods that return JsonResult objects in the PageModel classes. With all of those, you might still want to put the code that actually calls the database in a separate class that is injected into the handler.
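The pattern the last two paragraphs describe is easy to picture outside of C#. Here is a minimal sketch of the same idea in TypeScript, with made-up names (CountryStore, SqlCountryStore, CountriesPage): the page logic depends only on an interface, so a unit test can hand it an in-memory fake instead of a real database connection.

```typescript
// The page/handler depends on this interface, never on a concrete database client.
interface CountryStore {
  all(): Promise<string[]>;
}

// A real implementation would run a query here; it is stubbed for the sketch.
class SqlCountryStore implements CountryStore {
  async all(): Promise<string[]> {
    // ...execute SQL against the database...
    return [];
  }
}

// The "PageModel": it receives its dependency instead of creating it.
class CountriesPage {
  constructor(private store: CountryStore) {}

  async onGet(): Promise<string[]> {
    return this.store.all();
  }
}

// In production: new CountriesPage(new SqlCountryStore())
// In a unit test: new CountriesPage({ all: async () => ["France", "Japan"] })
```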
After reading the official guide on how to structure projects and going through various examples and projects (1, 2, 3 to name a few), I can't help wondering whether my REST API server app is structured properly.
What is the API meant to do?
POST /auth/sign-in
Accepts a username and password and issues a JWT (JSON Web Token).
GET /auth/sign-out
Adds the JWT to a blacklist to invalidate the auth session.
GET /resources
Retrieves a list of all resources.
POST /resources (requires valid JWT authentication)
Accepts a JSON body, creates a new resource and sends out an email and notification to everyone about the new resource.
What my project looks like
Currently I'm not creating any libraries. Everything sits in the main package, with the overarching server setup, routes etc. all done in main.go. I didn't go for an MVC pattern à la Rails or Django, to avoid overcomplicating things just for the sake of it. My impression was also that it doesn't really fit the recommended structure for commands and libraries described in the guide mentioned above.
auth.go # generates, validates JWT, etc
auth-handler.go # Handles sign-in/out requests; includes middleware for required authentication
mailer.go # Provides methods to send out transactional email, loads correct template etc.
main.go # sets up routes, runs the server; inits mailer and notification instances for the request context
models.go # struct definition for User, Resource
notifications.go # Provides methods to publish push notifications
resource-handler.go # Handles requests for resources, uses the mailer and notifications instances for the POST request
What should it look like?
Should routes be separated? What about middleware? And how do you deal with interfaces to 3rd party code – imagine mailer.go in the outlined sample app talking to Mandrill and notifications.go to Amazon AWS SNS?
I can share a bit from my own experience.
In application code:
as opposed to library code, separating into packages and sub-packages is less important, as long as you don't have too much complexity in your code. I mostly design apps as integrations of self-contained libraries, so the app code itself is usually rather small. In general, try to avoid package separation if you don't really need it. But don't just slap tons of code into one package either - that's also bad.
but don't have general packages like "util", they soon start to accumulate baggage and suck. I have a separate repo for generic utils reusable across projects, and under it each utility API is a sub package. e.g. github.com/me/myutils/countrycodes, github.com/me/myutils/set, github.com/me/myutils/whatevs.
Regardless of the package structure, the most important thing is to separate internal APIs from handler code. The handler code should be a very thin layer that handles input and calls an internal, self-contained API that can be tested without handlers or tied to other handlers. It looks like you're doing this. Whether you then separate your internal API into another package doesn't really matter.
When you're deciding what parts of the code should be separated into libraries, think in terms of code reuse. If this code will be used only by your app, there's no point in that.
I like to wrap integration with third-party APIs in an interface that is defined in a separate package. For example, if you have something like sending emails with AWS SES, I'd create a package called github.com/my_org/mailer with an abstract interface, and under it a github.com/my_org/mailer/ses package that implements the SES integration. The application code imports the mailer package and its interface, and only in main do I inject the SES implementation and wire things together (see the sketch after these points).
re middleware - I usually keep it in the same package as the API itself.
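To make the mailer point concrete, here is a sketch of that shape. The question is about Go, but the sketch below uses TypeScript and made-up names (Mailer, SesMailer, makeCreateResourceHandler); in Go terms the interface, the SES implementation and the wiring would each live in their own package as described above.

```typescript
// In Go terms: Mailer would live in github.com/my_org/mailer,
// SesMailer in github.com/my_org/mailer/ses, and the wiring in main.

// The abstract interface that handler/application code depends on.
interface Mailer {
  send(to: string, subject: string, body: string): Promise<void>;
}

// The only place that knows about the third-party service (just logged here).
class SesMailer implements Mailer {
  constructor(private region: string) {}

  async send(to: string, subject: string, body: string): Promise<void> {
    // A real implementation would call the provider's SDK here.
    console.log(`[SES ${this.region}] to=${to} subject=${subject} body=${body}`);
  }
}

// A thin handler: parse input, call the injected dependency, return a response.
function makeCreateResourceHandler(mailer: Mailer) {
  return async (resourceName: string): Promise<string> => {
    await mailer.send("everyone@example.com", "New resource", resourceName);
    return `created ${resourceName}`;
  };
}

// Only "main" decides which concrete implementation gets wired in.
const createResource = makeCreateResourceHandler(new SesMailer("eu-west-1"));
```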
How can I manipulate other modules without editing them? The very same thing that WordPress modules do.
They add functionality to the core system without changing the core code, and they work together like a charm.
I have always wanted to know how to implement this in my own modular application.
A long time ago I wrote the blog post "Use 3rd party modules in Zend Framework 2", specifically about extending Zend Framework 2 modules. The answer from Bez is technically correct, but it could be a bit more specific about the framework.
Read the full post at https://juriansluiman.nl/article/117/use-3rd-party-modules-in-zend-framework-2, but it gives you a clue about:
Changing a route from a module (say, you want to have the url /account/login instead of /user/login)
Overriding a view script, so you can completely modify the page's rendering
Changing a form object, so you could add new form fields or mark some required field as not required anymore.
This is a long topic, but here is a short gist.
Extensibility in Zend Framework 2 heavily relies on the premise that components can be interchanged, added, and/or substituted.
Read up on SOLID principles: http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)
Modules typically consist of objects working together as a well-oiled machine, designed to accomplish one thing or a bunch of related things, whatever that may be. These objects are called services, and they are managed by the service locator/service manager.
A big part of making your module truly extensible is to expect developers to extend a class or implement a certain interface, which they register as services. You should provide a way for developers to specify which things they want to substitute and/or which services of their own they want to add -- and this is where the application configuration comes in.
Given the application configuration, you should construct your machinery, a.k.a. module services, according to the options the developer has specified, e.g. use the developer-defined Foo\Bar\UserService service as the YourModule\UserServiceInterface within your module, etc. (This is usually delegated to service factories, which have the opportunity to read the application configuration and construct the appropriate object for a particular set of configuration values.)
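The ZF2 service manager and factories do the heavy lifting in practice, but the underlying idea is small enough to sketch outside the framework. The TypeScript sketch below only illustrates "pick the implementation named in the application configuration, fall back to the module's default"; none of the names come from Zend Framework.

```typescript
// What the module needs from "a user service" -- consumers may swap the implementation.
interface UserService {
  findByLogin(login: string): string | null;
}

// The module's own default implementation.
class DefaultUserService implements UserService {
  findByLogin(login: string) {
    return login === "admin" ? "built-in admin user" : null;
  }
}

// The application configuration can point the module at a different implementation.
interface AppConfig {
  services?: { userService?: () => UserService };
}

// A tiny "service factory": read the configuration, build the right object.
function createUserService(config: AppConfig): UserService {
  return config.services?.userService?.() ?? new DefaultUserService();
}

// A consuming developer substitutes their own class purely through configuration:
class LdapUserService implements UserService {
  findByLogin(login: string) {
    return `looked ${login} up in LDAP`;
  }
}

const service = createUserService({ services: { userService: () => new LdapUserService() } });
```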
EDIT:
To add to that, a lot can be accomplished by leveraging Zend's Zend\EventManager component. This allows you to give developers the freedom to hook into and listen to certain operations of your module and act accordingly (see: http://en.wikipedia.org/wiki/Observer_pattern).
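Again, here is a framework-free sketch of the hook idea in TypeScript (not the actual Zend\EventManager API): the core module triggers named events at interesting points, and other modules attach listeners without the core code knowing about them.

```typescript
// A minimal event manager: listeners subscribe to named events by string.
type Listener = (payload: unknown) => void;

class EventManager {
  private listeners = new Map<string, Listener[]>();

  attach(event: string, listener: Listener): void {
    const existing = this.listeners.get(event) ?? [];
    this.listeners.set(event, [...existing, listener]);
  }

  trigger(event: string, payload: unknown): void {
    for (const listener of this.listeners.get(event) ?? []) listener(payload);
  }
}

// The core module fires events; it never needs to know who is listening.
const events = new EventManager();

function registerUser(name: string): void {
  // ...core registration logic...
  events.trigger("user.register.post", { name });
}

// A third-party module hooks in without touching the core code.
events.attach("user.register.post", (payload) => console.log("send welcome mail to", payload));

registerUser("alice");
```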
I developed a REST API with Play 2.2.0. Some controllers expose GET methods, others expose POST methods with authentication, etc.
I developed the client using Play as well, but I have a problem: how can I avoid duplicating the model layer between the two applications?
In the server application, I have a Model Country(code, name).
In the client I am able to list countries and create new ones.
Currently, I have a class Country in both sides. When I get countries I deserialize them. The problem is that if I add a field in Country in the server, I have to maintain the client as well.
How can I share the Country entity between applications ?
PS : I don't want to create a dependency between the API and the client, as the client could have been developed with another language or framework
Thanks
This is not very specific to the Play Framework but is more of a general question. One option is to create reusable representations of the data in your protocol (the actual data structures you send between your nodes) and accept a tight coupling in representation and language. Many projects do it like this, since they know they will have the same platform throughout their architecture.
The other option is to duplicate all of, or only the parts of, the parsing/generation that each part of the architecture needs; this way you get a looser coupling and can use any language in the different parts.
There are also data protocols/tools that define the representation in a protocol-specific way and can then generate representations in various programming languages (Protocol Buffers and Thrift, for example).
So as you see, it's all about pros and cons - neither solution is "the right way (tm)" to do this, you will have to think about your specific system/architecture and what pros are most valuable and what cons are most costly to you.
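As a concrete picture of the first option (shared, tightly coupled representations): both sides depend on one module that defines the wire format, so adding a field touches exactly one file. This is only a TypeScript sketch of the idea, not Play code; Country mirrors the model from the question.

```typescript
// country.ts -- the single shared definition of the wire format.
export interface Country {
  code: string;
  name: string;
}

// Server side: serialize the shared type.
export function toJson(country: Country): string {
  return JSON.stringify(country);
}

// Client side: deserialize back into the same shared type.
export function fromJson(json: string): Country {
  return JSON.parse(json) as Country;
}

// Adding a field to Country now only changes this module; both sides pick it up.
```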
Well, I suggest sending the client a template of what it should display, and on the client side taking advantage of JS template frameworks, so you can tell the client how to render the data dynamically. If clients want to override the templates, well... that's more work for them.
We could call this REST component oriented...
Just suggestions :)
It should work!
Is there a general Cocoa or Cocoa Touch library for interacting with any web service API, or one which can be used as a basis for creating my own library for a web service? For example, I could add some details about how to interact with the Vimeo API (how to verify user details, what URLs to call). I'm not sure how this would work in reality.
If not, can anyone suggest an web service library which I could alter to change the API calls? It would need to be fairly simple (a small API) and easy to adapt. An example is this Cocoa library for Twitter (although it would probably be too complicated to adapt). Would it be easier just to code it up from scratch?
I don't think there is a library that will automagically work with any web API. In fact I don't even think it's possible to write such a library, since you can define your web API any way you want to. That library would have to be pretty smart in order to figure out how to use an arbitrary API.
I think the closest you'll get is something like ASIHTTPRequest, which is a great library for interacting with web services. If you add a JSON and/or XML parser you'll have everything you need to interact with almost any web API.
Found another library for interacting with RESTful web services. It's called RestKit. From their description:
RestKit is a Cocoa framework for interacting with RESTful web services in Objective C on iOS and Mac OS X. It provides a set of primitives for interacting with web services wrapping GET, POST, PUT and DELETE HTTP verbs behind a clean, simple interface. RestKit also provides a system for modeling remote resources by mapping them from JSON (or XML) payloads back into local domain objects. Object mapping functions with normal NSObject derived classes with properties. There is also an object mapping implementation included that provides a Core Data backed store for persisting objects loaded from the web.