How to separate routes, handlers, 3rd-party interfaces and business logic in a real-world Go REST project

After reading the official guide on how to structure projects and going through various examples and projects (1, 2, 3 to name a few), I can't help wondering whether my REST API server app is structured properly.
What is the API meant to do?
POST /auth/sign-in
Accepts a username and password and issues a JWT (JSON Web Token).
GET /auth/sign-out
Adds the JWT to a blacklist to invalidate the auth session.
GET /resources
Retrieves a list of all resources.
POST /resources (requires valid JWT authentication)
Accepts a JSON body, creates a new resource and sends out an email and notification to everyone about the new resource.
What my project looks like
Currently I'm not creating any libraries. Everything sits in the main package, with the overarching server setup, routes, etc. all done in main.go. I didn't go for the MVC pattern found in frameworks like Rails or Django, to avoid overcomplicating things just for the sake of it. My impression was also that it doesn't really fit the recommended structure for commands and libraries described in the guide mentioned above.
auth.go # generates and validates JWTs, etc.
auth-handler.go # handles sign-in/sign-out requests; includes the middleware for routes that require authentication
mailer.go # provides methods to send transactional email, loads the correct template, etc.
main.go # sets up routes and runs the server; initializes the mailer and notification instances for the request context
models.go # struct definitions for User and Resource
notifications.go # provides methods to publish push notifications
resource-handler.go # handles requests for resources; uses the mailer and notification instances for the POST request
What should it look like?
Should routes be separated? What about middleware? And how do you deal with interfaces to 3rd party code – imagine mailer.go in the outlined sample app talking to Mandrill and notifications.go to Amazon AWS SNS?

I can share a bit from my own experience.
In application code:
as opposed to library code, separating an application into packages and sub-packages is less important, as long as you don't have too much complexity in your code. I mostly design apps as integrations of self-contained libraries, so the app code itself is usually rather small. In general, avoid package separation if you don't really need it. But don't just slap tons of code into one package either; that's also bad.
but don't have general packages like "util"; they soon start to accumulate baggage and suck. I have a separate repo for generic utils reusable across projects, and under it each utility API is a sub-package, e.g. github.com/me/myutils/countrycodes, github.com/me/myutils/set, github.com/me/myutils/whatevs.
Regardless of the package structure, the most important thing is to separate internal APIs from handler code. The handler code should be a very thin layer that handles input and calls an internal, self-contained API that can be tested without handlers or wired up to other handlers. It looks like you're doing this. Whether you then move that internal API into another package doesn't really matter.
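To make that concrete, here's a minimal sketch of a thin handler over a self-contained internal API; the names (ResourceService, createResourceHandler) are invented for illustration, not taken from the question:

package app

import (
	"encoding/json"
	"net/http"
)

// ResourceService is the internal, self-contained API. It knows nothing
// about HTTP and can be unit-tested without starting a server.
type ResourceService struct {
	// injected dependencies: DB handle, mailer, notifier, ...
}

// Create holds the business logic: validate, persist, fan out email
// and notifications. It returns the new resource's ID. Stubbed here.
func (s *ResourceService) Create(name string) (int64, error) {
	return 42, nil
}

// createResourceHandler is the thin layer: decode input, call the
// internal API, encode output. Nothing else lives here.
func createResourceHandler(svc *ResourceService) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var in struct {
			Name string `json:"name"`
		}
		if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		id, err := svc.Create(in.Name)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusCreated)
		json.NewEncoder(w).Encode(map[string]int64{"id": id})
	}
}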
When you're deciding which parts of the code should be separated into libraries, think in terms of code reuse. If some code will only ever be used by this app, there's no point.
I like to wrap integrations with third-party APIs in an interface that is defined in a secondary package. For example, if you send emails with AWS SES, I'd create a package called github.com/my_org/mailer with an abstract interface, and under it a github.com/my_org/mailer/ses package that implements the SES integration. The application code imports the mailer package and its interface, and only in main do I inject the SES implementation and wire things together.
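A rough sketch of that layout, with file paths in comments (the package paths are the hypothetical ones from above; the method names and the startApp wiring function are invented):

// github.com/my_org/mailer/mailer.go
package mailer

// Mailer is the abstract interface the application code depends on.
type Mailer interface {
	Send(to, subject, body string) error
}

// github.com/my_org/mailer/ses/ses.go
package ses

import "github.com/my_org/mailer"

// Client implements mailer.Mailer on top of AWS SES.
type Client struct{ /* region, credentials, ... */ }

func New() *Client { return &Client{} }

func (c *Client) Send(to, subject, body string) error {
	// call the SES API here
	return nil
}

var _ mailer.Mailer = (*Client)(nil) // compile-time interface check

// main.go — the only place that knows the concrete implementation.
package main

import (
	"github.com/my_org/mailer"
	"github.com/my_org/mailer/ses"
)

func main() {
	var m mailer.Mailer = ses.New()
	startApp(m) // hypothetical wiring function; the rest of the app sees only the interface
}

The payoff is that swapping Mandrill for SES, or swapping in a test double, touches only main.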
Re middleware: I usually keep it in the same package as the API itself.
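For example, a JWT-checking middleware could sit right next to the auth API it belongs to; validToken here is a stub standing in for real JWT validation:

package auth

import "net/http"

// RequireAuth rejects requests that don't carry a valid JWT and
// passes everything else through to the wrapped handler.
func RequireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !validToken(r.Header.Get("Authorization")) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// validToken is a stub; real code would parse and verify the JWT here.
func validToken(token string) bool { return token != "" }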

Related

JWT authentication for jBASE RESTful API

We are in the process of designing a front-end application with Angular which will call a jBASE server through RESTful APIs. The APIs are created with a jBASE component called jAgent.
Does jAgent support creating and verifying JWTs?
If not, what is the best way to handle authentication/authorization for the Angular application?
If we need to use JWTs, do we have to use an authentication middleware application (.NET Core or Node.js) for that?
Great question! At the moment there is no JWT handler within jAgent; our recommendation is to implement this, and advanced web server/API gateway functionality in general, by way of other applications such as HAProxy or Kong.
Expanding jAgent's functionality to include things like this is something we're still considering. Keep in mind, though, that the power of jBASE lies in its native interaction with the host OS. Since there is no virtual OS layer, it can be easier to plug in off-the-shelf tools for additional functionality, which gives you the flexibility to bring your own tooling.
In summary:
Does jAgent support JWTs? Not at the moment.
Best way to handle authentication/authorization? Use an off-the-shelf package to act as your API gateway.
Do you need a separate middleware application? That depends on the package you choose.
This relegates jAgent to managing the API layer as it exists on the PICK/jBASE side, while the off-the-shelf package manages your API security layer.
One other note for you: I noticed that you included a link to the old jBASE docs hosted on HelpJuice. It's worth mentioning that we've migrated those docs to docs.zumasys.com. You'll find the docs there more up to date and also completely open source; part of the migration included a move to a GitHub repo, where we're happy to take community contributions.
For reference, the article you mentioned is available at https://docs.zumasys.com/jbase/connectivity/jagent/introduction-to-jagent-rest-services/.
Update:
One of our engineers has a program that will use openssl to generate the tokens for you, which you can find at https://github.com/patrickp/wjwt.
You will need openssl installed on the machine and in the path.
The WJWT.TEST program shows the usage. The important piece is SECRET.KEY, the internal key you use to sign the payloads.
When a user first authenticates, you create the token with SIGN. Claims are any items/fields you wish to store; do NOT put sensitive data in them, as the payload is viewable by anybody. The idea is that we sign the payload with our key and give the token back to the client. On subsequent calls the client sends the token back; we then call the VERIFY function, which essentially re-signs the payload and checks that the signatures match. This validates that the payload was not manipulated.
Behavior such as token expiration you would build into your own code.
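The sign/verify round trip described above can be sketched in Go with HMAC-SHA256; this illustrates the concept only and is not the WJWT code:

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

var secretKey = []byte("my-internal-secret") // the equivalent of SECRET.KEY

// sign appends an HMAC signature to the payload: payload + "." + sig.
func sign(payload string) string {
	mac := hmac.New(sha256.New, secretKey)
	mac.Write([]byte(payload))
	return payload + "." + base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
}

// verify re-signs the payload and checks the signatures match, which
// proves the payload was not manipulated.
func verify(token, payload string) bool {
	return hmac.Equal([]byte(token), []byte(sign(payload)))
}

func main() {
	token := sign(`{"user":"alice"}`)                // issued at sign-in
	fmt.Println(verify(token, `{"user":"alice"}`))   // true
	fmt.Println(verify(token, `{"user":"mallory"}`)) // false: payload tampered
}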
Long term we plan to take this library and refactor the code into our MVDB Toolkit library with more functionality. That library is something we provide to jBASE customers at no additional charge.

Public valid REST Api with wolkenkit.io

I am currently evaluating the framework wolkenkit [1] for use in an application. Within this application I will have a user interface for tenant-based data management. Only authenticated users will have access to this application.
Additionally, there should be a public REST API following common standards, callable by the public (tenant security handled by submitting a tenant-based API key in the request headers).
As far as I can tell, the wolkenkit REST API does not fit these standards in terms of HTTP verbs.
But since wolkenkit overall strikes me as a really flexible and easy-to-use framework, I wonder how to implement such a public API at all.
Would it be a valid approach, for example, to create a separate web application which internally connects to the wolkenkit backend? And what about the additional performance overhead?
[1] https://www.wolkenkit.io/
In addition to mattwagl's answer, I'd like to point out a few things you may be interested in.
First of all, since wolkenkit is based on CQRS, the application has separate APIs for writing and reading. That means that if you send a command (whose intent is to change state), it goes to the write API; if you subscribe to events or run a query, that goes to the read API.
This in turn means that if you send a command, it's up to the write side to respond to it. As the write side is not meant to return application state, all it basically says is: "Thanks, I have received the command." To get the actual result you have to wait for the appropriate event, which means subscribing to the read API.
The wolkenkit documentation has a nice diagram which shows this clearly.
If you now add a separate REST API (one that actually fulfills the requirements of REST), you need to handle waiting for the result internally. In other words: clients in wolkenkit are always meant to be asynchronous, while REST is not. Hence it's your job to bridge the asynchronous behavior of the wolkenkit APIs in your REST API. I think this is the hardest part.
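One way to sketch that bridging logic in Go (wolkenkit itself is JavaScript, so treat this purely as an illustration of the waiting pattern; every name here is invented):

package restbridge

import (
	"errors"
	"sync"
	"time"
)

var (
	mu      sync.Mutex
	pending = map[string]chan string{} // correlation ID -> waiting handler
)

// SendCommandAndWait posts a command to the write API, then blocks the
// REST handler until the matching event arrives on the read API.
func SendCommandAndWait(correlationID string, postCommand func() error) (string, error) {
	ch := make(chan string, 1)
	mu.Lock()
	pending[correlationID] = ch
	mu.Unlock()
	defer func() { mu.Lock(); delete(pending, correlationID); mu.Unlock() }()

	if err := postCommand(); err != nil { // fire the command at the write API
		return "", err
	}
	select {
	case result := <-ch: // delivered by the event subscription below
		return result, nil
	case <-time.After(5 * time.Second):
		return "", errors.New("timed out waiting for the result event")
	}
}

// OnEvent is invoked by the read-API subscription for every event and
// wakes up the handler that is waiting for it.
func OnEvent(correlationID, payload string) {
	mu.Lock()
	ch, ok := pending[correlationID]
	mu.Unlock()
	if ok {
		ch <- payload
	}
}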
Once you have done this, you will have a synchronous REST API, and of course it will have some overhead. But since that overhead is limited to passing through and translating network requests, it should be negligible.
Oh, and finally, there is another thing to watch out for: since REST as originally conceived relies on the HTTP verbs to transport semantics, you need to map GET/POST/PUT/DELETE to the semantic commands of wolkenkit. As long as this can be done 1:1, everything's fine; problems start when there are multiple commands that, technically speaking, do an UPDATE.
PS: I'm also one of the developers of wolkenkit.
PPS: However you are going to solve this, I would be highly interested to hear from you! It would be great if you could share your experiences with us, as you are most probably not the last one with this idea. If you want to contact us, the easiest way is via Slack.
wolkenkit applications can be accessed using an HTTP and a WebSocket API. Both APIs are provided by the tailwind module that wolkenkit uses under the hood. In the tailwind repo you can find a very simple documentation of the available HTTP routes.
You're right, the wolkenkit HTTP API is not a classic REST API. It's more RPC-style, which in our experience is a good fit for applications. There are only 3 routes that your clients/tenants need to support:
POST /v1/command is used for issuing commands. The commands you post should follow the command schema.
POST /v1/events can be used for streaming events to clients. These events will follow the event schema.
POST /v1/read/:modelType/:modelName is used to read models.
You can simply use HTTPie to test these routes.
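Any plain HTTP client works just as well as HTTPie. For instance, posting a command from Go; the JSON body here is only a rough guess at the command schema's shape, so check the tailwind docs for the real fields:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Body shape is assumed, not authoritative; consult the command schema.
	body := []byte(`{"context":{"name":"communication"},"aggregate":{"name":"message"},"name":"send","data":{"text":"hi"}}`)
	resp, err := http.Post("http://localhost:3000/v1/command", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}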
Authentication for these APIs is currently done using OpenID Connect. There's a very detailed article on how to set up authentication using Auth0. I'm not quite sure whether this fits your use case, but you could basically use any authentication service that follows this standard or is able to issue JWTs.
Finally, you could also build your own JavaScript client SDK that runs inside browsers, as a module that uses wolkenkit-client-js under the hood. This SDK can use the same API as any other client to connect to your application.
Hope this helps.
PS: Please note that I am one of the authors of wolkenkit.

Should RESTful service in Golang include Client interface?

Suppose I develop a Booking REST service in Go (i.e., in package booking). Is it the "Go way" to create a BookingClient interface (backed by a struct) exposing the allowed business operations, so that clients of my RESTful service would use BookingClient (imported from package booking) instead of sending HTTP requests directly?
In general, no. If you provide a client in a particular language, it's only a convenience so (some) users can use your API more easily. This of course assumes your client is well designed. I wouldn't provide merely an interface in Go just to indicate the set of possible API calls. That would benefit only a very narrow audience, probably people developing a client for your API themselves, in a programming language that just happens to be the same as your server's implementation language. And even then they might not like the idea of using the interface (e.g. they might only need a specific subset of methods).
If you want to provide a client for your API, go ahead and do it, but separate it from the actual server (a different package, maybe even a different repo). In general, one develops APIs over HTTP to allow a wide range of clients, written in any language, to access them. Instead of providing interfaces, I would invest the time in writing good documentation.
In my opinion the answer, assuming no more context is provided, should be no different than if you asked yourself whether to provide a client in, say, Python. The whole situation might change, though, if for example your API is used internally by your company and you develop mainly in Go.
It's usually preferable to provide a client, and most companies do, while also providing documentation for working with the API directly. The main use case for the latter is people working in languages other than the ones you targeted.
You can have a look at a new RESTful framework I wrote that includes infrastructure to automatically compile clients from Go templates, although I haven't gotten around to writing a Go client compiler. If you want to write one, it would be greatly appreciated :) https://github.com/EverythingMe/vertex
Testing is important in Go, so writing testable code is something you should do. If your callers issue HTTP requests directly, they will be harder to unit-test than code that calls a struct you can mock.
That said, is there any reason to offer one big Client rather than a group of small functions that call the REST endpoints? It's usually harder to mock a bigger thing, such as a Client struct, than a handful of small functions.
You should put the client at booking.Client to avoid stuttering (booking.BookingClient), and maybe rename Client to something more descriptive.
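A small sketch of how that could look; everything beyond the booking.Client name is invented for the example:

// Package booking exposes Client, addressed by importers as booking.Client.
package booking

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Booker is the narrow interface callers depend on; a test can swap in
// a trivial mock (any struct with a Book method) instead of real HTTP.
type Booker interface {
	Book(roomID string) error
}

// Client implements Booker against the REST endpoints.
type Client struct {
	BaseURL string
	HTTP    *http.Client
}

func (c *Client) Book(roomID string) error {
	payload, _ := json.Marshal(map[string]string{"roomId": roomID})
	resp, err := c.HTTP.Post(c.BaseURL+"/bookings", "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}

var _ Booker = (*Client)(nil) // compile-time interface check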

Javascript client for REST API

I am trying to write a JavaScript client for a web application that provides a REST API. I want to do this properly, with a proven stack of tools and methodologies available in JavaScript.
Most of the guides on JavaScript client library development I found on the web are application-oriented and include a view (i.e. HTML) part. What I need is a client library with methods that can be used to develop web applications, so I don't want this library to depend on any other JavaScript library like jQuery, Backbone, etc.
I have gone through a lot of the design patterns available in JavaScript, especially those mentioned in Learning JavaScript Design Patterns, a book by Addy Osmani, and after all that I'm confused; I can't decide which one to follow.
What I have in mind is like following:
Initialize the library with some key and secret (comparable to instantiating a class in PHP).
There will be a data-persistence unit which keeps the authenticated user's identity for a predefined amount of time, like sessions in PHP. User data will be stored in cookies or local storage. There will also be provisions to override this unit's methods so that users can implement their own storage mechanism. A reference to this unit will be passed during library initialization.
Keep a global request method which handles all the API requests associated with the library (comparable to a method on the main class in PHP).
Define all the API methods encapsulated in separate units according to the area of the application each one deals with (comparable to models in PHP which fetch or save data from the application via the API). Each unit will have a constructor that defines some default properties, and each can inherit from a super unit providing shared properties and methods.
After reading some blogs and articles I have decided to use Yeoman for the library development. Maybe I can use one of the community JavaScript library generators for Yeoman to get started.
As described above, I think what I need is a class that keeps a single instance throughout the application and is used to reach all the models and their functions. For this I could go with the singleton module pattern, but I'm not fully sure how to apply it to my requirements.
Any advice is greatly appreciated!

Adapter Proxy for Restful APIs

This is a general 'what technologies are available' question.
My company provides a web application with a RESTful API. However, it is too slow for my needs, and some of the results come in an awkward format.
I want to wrap their RESTful server with a proxy/adapter server, so that when you connect to the proxy you get the RESTful API I wish the real one provided.
So it needs to do a few things:
passthrough most requests
cache some requests
do some extra requests on the original server to detect whether a request is cacheable
for instance: a request for a field in a record, GET /records/id/field, might be slow, but there is a fingerprint request, GET /records/id/fingerprint, which is always fast. If there is a cached copy of GET /records/1/field2 for the fingerprint feedbeef, I need to check that the original server still reports the fingerprint feedbeef before serving the cached version (see the sketch after this list)
fix headers for some responses - e.g. content-type, based upon the path
do stream processing on some large content; for instance,
GET /records/id/attachments/1234
returns a 100 MB log file in text format, and the proxy should
remove null characters from files
optionally recode the log to filter out irrelevant lines, reducing the load on the client
cache the filtered version for later requests
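To make the caching requirement concrete, here is a rough Go sketch of the fingerprint check I have in mind; the paths match the example above, everything else (host, helpers) is invented:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"sync"
)

type entry struct{ fingerprint, body string }

var (
	upstream, _ = url.Parse("http://original-server.example.com") // invented host
	mu          sync.Mutex
	cache       = map[string]entry{} // request path -> cached response
)

// fetchFingerprint issues the cheap GET /records/<id>/fingerprint request.
func fetchFingerprint(id string) (string, error) {
	resp, err := http.Get(upstream.String() + "/records/" + id + "/fingerprint")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return strings.TrimSpace(string(b)), err
}

func main() {
	// The reverse proxy streams response bodies; nothing is buffered whole.
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	http.HandleFunc("/records/", func(w http.ResponseWriter, r *http.Request) {
		id := strings.Split(strings.TrimPrefix(r.URL.Path, "/records/"), "/")[0]
		fp, err := fetchFingerprint(id) // cheap request to test cacheability
		mu.Lock()
		e, ok := cache[r.URL.Path]
		mu.Unlock()
		if err == nil && ok && e.fingerprint == fp {
			io.WriteString(w, e.body) // fingerprint unchanged: serve the cache
			return
		}
		proxy.ServeHTTP(w, r) // pass through (and repopulate the cache here)
	})
	fmt.Println(http.ListenAndServe(":8080", nil))
}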
While I could modify the client to achieve this functionality, such code would not be reusable for clients in other languages, and it would complicate the client logic.
I had a look at whether Clojure/Ring could do it; while there is a nice little proxy middleware for it, it doesn't handle streaming content as far as I can tell, so the whole 100 MB would have to be downloaded. It also doesn't include any cache logic yet.
I took a look at whether Squid could do it, but I'm not familiar with the technology, and it seems mostly concerned with passing requests through rather than modifying them on the fly.
I'm looking for hints on where I might find the right technology to implement this. I'm mostly language-agnostic, if learning a new language gets me a really simple way to do it.
I believe you should choose a platform that makes it easy for you to implement your custom business logic. The following web application frameworks provide easy connectivity with REST APIs and allow you to create a web application that can act as a REST proxy:
Play framework (Java + Scala)
Express + Node.js (JavaScript)
Sinatra (Ruby)
I'm most familiar with Play, which I know provides caching utilities you may find useful and is also extensible via a number of plugins.
If you are familiar with Scala, you could also have a look at Finagle. It is a framework built by Twitter's infrastructure team to provide protocol-agnostic connectivity. It might be overkill for a REST-to-REST proxy, but it provides abstractions you might find useful.
You could also look at third-party services like Apitools, which lets you create a proxy programmatically (in Lua). Apirise is a similar service (of which I'm a co-founder) that aims to provide similar functionality with a user-friendly UI.
Beeceptor does exactly what you want. It plugs in between your web app and the original API to route requests.
For your use case of caching a few responses, you can create a rule so that those requests never hit the original endpoint.
Requests to the original APIs can be mocked, and you can inspect the responses.
You can simulate delays.
(Note: it is a shameless plug, I am the author of Beeceptor and thought it should help you and other developers.)
https://github.com/nodejitsu/node-http-proxy looks useful, although I don't yet know whether it can do stream processing for transcoding.