We are looking for a CMS that is built to expose content over an API, like Contentful or Prismic. However, our requirement is that it needs to be multi-tenant: one set of fields, but many clients and multiple languages per client, in a structure like this:
fields/pages/container -> Client 1 -> English
                                   -> Greek
                       -> Client 2 -> Japanese
                                   -> Mandarin
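For concreteness, here is a hypothetical sketch of that shape as a TypeScript type (all names made up): one shared set of field definitions, with values keyed per client and per language.

```typescript
// Hypothetical sketch of the multi-tenant content shape described above.
// One shared set of field definitions; values are stored per client, per locale.
interface FieldDefinition {
  type: "text" | "richText" | "image";
  required?: boolean;
}

interface Container {
  fields: Record<string, FieldDefinition>; // one set of fields for everyone
  clients: {
    [clientId: string]: {
      // each client picks its own languages
      [locale: string]: Record<string, unknown>; // field values for that locale
    };
  };
}

// e.g. container.clients["client-1"]["en"] and container.clients["client-2"]["ja"]
```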
Happy to do workarounds and hacks. A cloud-based service would also work nicely.
Suggestions?
Prismic.io seemed to be a good candidate for this anyway!
I'm trying to figure out what the latest best practice is when it comes to REST APIs and finding an elegant way to "tell" the client what the response will look like. I'm no web expert, but I just recently joined a new team and I've noticed that the client code has hardcoded URIs to APIs... and any time the schema of the returned data changes, they have to patch their client code.
Trying to find a way to make things more dynamic by:
introducing patterns to "discover" API servers.
looking into HATEOAS.
More than anything else, though, what I'm trying to improve is having to change the client code each time the server-side logic changes the body of a GET response.
I've been reading this:
https://www.programmableweb.com/news/rest-api-design-put-type-content-type/2011/11/18
In particular, the following comments (under the WRML heading) stood out to me:
this media type communicates, directly to clients, distinct and complementary bits of information regarding the content of a message. The Web Resource Modeling Language (WRML, www.wrml.org) provides this "pluggable" media type to give rich web applications direct access to structural information and format serialization code. The media type's self-descriptive and pluggable design reduces the need for information to be communicated out-of-band and then hard-coded by client developers.
Questions
Is WRML still a thing? The book I'm reading is from 2011, and I'm assuming a lot has changed since then.
Can I cheaply build my own in-house solution where we use Content-Type or some other header to provide clients with schema information?
Can you point me to an example or sample code where someone is using custom values in Content-Type or other headers to accomplish something similar?
Or if you have any other suggestions, I'm all ears.
Thank you.
I don't know much about WRML, but I would look into:
HATEOAS formats like HAL/HAL Forms and Siren, which are self-describing.
JSON-Schema to describe responses and requests (and yes they can be linked from HATEOAS responses).
If you don't want to go the hypermedia route: OpenAPI and RAML.
I've been developing Ketting for the last 5 years, and HATEOAS has been nothing short of magic lately as all the tools have been falling into place.
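To make the schema-in-a-header idea concrete, here is a minimal sketch using Node's built-in http module; the schema URL and field names are invented. The Content-Type's profile parameter and a describedby link both point the client at a JSON-Schema it can fetch at runtime:

```typescript
import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.method === "GET" && req.url === "/users/42") {
    // The profile parameter names the schema that describes this representation.
    res.writeHead(200, {
      "Content-Type":
        'application/hal+json; profile="https://example.com/schemas/user.json"',
    });
    res.end(
      JSON.stringify({
        _links: {
          self: { href: "/users/42" },
          // describedby lets a client fetch the JSON-Schema at runtime
          // instead of hard-coding the response shape.
          describedby: { href: "https://example.com/schemas/user.json" },
        },
        id: 42,
        name: "Ada",
      })
    );
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080);
```

A client can then follow describedby to validate responses or generate types, rather than baking the schema in at build time.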
I've been building a lot of quick prototypes on Netlify lately. I love the service for its ease of setup and deployment. But I keep running into this conflict between their JAMstacky conventions around API endpoints and my own background in RESTful API design.
To be more specific, say I am building a basic CRUD API in which I can create, fetch one, fetch all, and update some resource type. Let's say a User. If I were designing those endpoints from a RESTful perspective, they would look like this:
POST /users -> Create a user
GET /users -> Fetch all users
GET /users/{id} -> Fetch one user
PUT /users/{id} -> Update a user
Now, if I were setting this up on AWS, perhaps with the Serverless Framework, each of those endpoints would be its own lambda. But Netlify offers no such configuration options. Which is mostly nice; I hate configuration. But it makes it difficult to achieve these endpoints at all with Netlify.
Specifically in this case, Netlify automatically creates endpoints which match filenames. So if you have a file named users.js, that creates a /users endpoint. The problem is, that file will be used for every possible permutation of /users. Every HTTP method. Every subroute. They all go to this one lambda. So in order to achieve a RESTful API design, I have to put everything in a single lambda and essentially make it a router. Which seems to defeat the whole idea of serverless.
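For illustration, a minimal sketch of that single-lambda router, using the Netlify Functions handler shape (event.httpMethod, event.path); the user logic itself is stubbed out:

```typescript
import type { Handler } from "@netlify/functions";

// One function file (users.ts) receives every method and subroute of /users,
// so it has to dispatch on method + path itself.
export const handler: Handler = async (event) => {
  // event.path looks like /.netlify/functions/users or .../users/123
  const segments = event.path.split("/").filter(Boolean);
  const last = segments[segments.length - 1];
  const id = last === "users" ? undefined : last;

  if (event.httpMethod === "GET" && !id) {
    return { statusCode: 200, body: JSON.stringify([]) }; // fetch all (stub)
  }
  if (event.httpMethod === "GET" && id) {
    return { statusCode: 200, body: JSON.stringify({ id }) }; // fetch one (stub)
  }
  if (event.httpMethod === "POST" && !id) {
    return { statusCode: 201, body: event.body ?? "{}" }; // create (stub)
  }
  if (event.httpMethod === "PUT" && id) {
    return { statusCode: 200, body: event.body ?? "{}" }; // update (stub)
  }
  return { statusCode: 405, body: "Method Not Allowed" };
};
```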
So usually when you read Netlify examples, which claim to follow JAMstack patterns (something I'm not super familiar with), they do not use RESTful endpoints. Instead they tend to do something like this:
POST /create-user -> Create a user
GET /fetch-users -> Fetch all users
GET /fetch-user?id={id} -> Fetch one user
POST /update-user -> Update a user
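Each of those endpoints maps to its own function file, which seems to be the point; for example, a hypothetical netlify/functions/create-user.ts might look like this:

```typescript
import type { Handler } from "@netlify/functions";

// The filename is the route: this file answers /.netlify/functions/create-user.
export const handler: Handler = async (event) => {
  if (event.httpMethod !== "POST") {
    return { statusCode: 405, body: "Method Not Allowed" };
  }
  const user = JSON.parse(event.body ?? "{}");
  // ...persist the user somewhere (omitted)
  return { statusCode: 201, body: JSON.stringify(user) };
};
```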
So this is in some ways a Netlify question, and in some ways a larger question about JAMstack patterns. Is there something inherent about JAMstack that makes it incompatible with REST? Are there different conventions which tend to replace REST for Netlify/JAMstack projects?
"Is there something inherent about JAMstack that makes it incompatible with REST?"
I would say no, as the two aren't really related. You aren't building an API with the Jamstack; you're using a service (Netlify) that supports serverless functions operating alongside the rest of your site. Remember that Netlify's serverless functions are just one option. You could build your own AWS setup to support the mechanism you want and still use it in conjunction with the rest of your Jamstack site. I like Netlify's serverless stuff, but it's not going to work for 100% of the use cases out there.
I guess my tl;dr is: Netlify tried to make serverless simple for folks building Jamstack sites, but it won't cover every use case. When it doesn't, you can still use your own solutions along with your site.
Why do APIs use different URLs? Are there two different interfaces on the web server, one processing API requests and the other ordinary web HTTP requests? For example, there might be a site called www.joecoffee.com, but then they use the URL www.api.joecoffee.com for their API requests. Why are different URLs being used here?
We separate ours for a couple of reasons, and they won't always apply.
Separation of concerns.
We write API code in one project and deploy it as one unit. When we work on the API, we only worry about that, not about page layout. When we do web work, that's completely separate.
Different authentication mechanisms.
The way you tell a user to log in is quite different from how you tell an API client it's not authenticated (see the sketch after this answer).
Different scalability requirements
It might be that the API does a lot of complex operations while the web server serves more or less static content. So you might want to add hundreds of API servers around the world, but only have ten web servers.
Different Clients
You might have an API for the web client and a separate API for a mobile client. Or perhaps a public one and a private / authenticated one. This might not apply to your example.
Different Technologies
Kind of an extension of separation of concerns, but it allows you to have a Linux server for one and something like an AWS Lambda for the other.
SSL Wrangling
This one is more of an anti-reason (particularly for the specific example you give). Many sites use SSL for both the web and the API; most are going to use SSL for the API at least. SSL certificates are matched to your URL, so there might be a reason there. That said, if you had a *.joecoffee.com certificate you would use api.joecoffee.com, not www.api.joecoffee.com (a wildcard certificate only covers one subdomain level, so that extra '.' would mean buying another certificate).
As #james suggested, there's no one right answer here, and some debate.
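As a sketch of the authentication point above (plain Node, with a deliberately simplistic check): the same unauthenticated request gets a login redirect on the website but a machine-readable 401 on the API:

```typescript
import { createServer } from "http";

createServer((req, res) => {
  const authenticated = Boolean(req.headers.authorization); // simplistic stand-in

  if (!authenticated) {
    if (req.url?.startsWith("/api/")) {
      // API client: machine-readable 401 plus a challenge header.
      res.writeHead(401, {
        "WWW-Authenticate": "Bearer",
        "Content-Type": "application/json",
      });
      res.end(JSON.stringify({ error: "unauthenticated" }));
    } else {
      // Human in a browser: send them to the login page instead.
      res.writeHead(302, { Location: "/login" });
      res.end();
    }
    return;
  }

  res.end("ok");
}).listen(8080);
```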
I have started working at a new company. We have a REST service (an XML exchange with an external system) and a website. The REST service runs on a subdomain, for example rest.mycompany.com; the company site is mycompany.com. The site and the REST service work like this:
REST -> DB <- SITE. This means that REST is not a part of the site; it's an independent system. REST and the site work with one database and use 90% the same code (model, mapper, etc.). The problem for me is the duplicated code, and I wonder why the REST service can't be a part of the site (import/export controller, XML parser, and one logging system)? On the other hand, it may be better to have separate systems in terms of security and load handling, with separated traffic for each subdomain?
The site and the REST service work like this: REST -> DB <- SITE. This means that REST is not a part of the site; it's an independent system. REST and the site work with one database and use 90% the same code (model, mapper, etc.).
That's a big problem, especially since one system might introduce a bug (inconsistent data) that only shows up in the other system. That's quite hard to debug.
The problem for me is the duplicated code, and I wonder why the REST service can't be a part of the site (import/export controller, XML parser, and one logging system)?
The REST service and the website are just UI layers. The actual business logic should be moved to a third project (class library / module / lib) which both UI layers use.
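A minimal sketch of that layering, with hypothetical names: the domain logic lives once in a shared module, and both the REST endpoint and the website import it:

```typescript
// shared/users.ts — the single home for the business logic both UIs reuse.
export interface User {
  id: number;
  name: string;
}

export async function findUser(id: number): Promise<User | null> {
  // One implementation of the DB query / mapping, instead of two copies.
  return { id, name: "example" }; // stub
}

// rest/usersEndpoint.ts — thin XML/REST layer:
//   import { findUser } from "../shared/users";
//   ...serialize the result of findUser(id) to XML...
//
// site/usersPage.ts — thin website layer:
//   import { findUser } from "../shared/users";
//   ...render the result of findUser(id) as HTML...
```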
On the other hand, it may be better to have separate systems in terms of security and load handling, with separated traffic for each subdomain?
I would stick with separate systems. Not for performance, but because they have distinct responsibilities.
Looking for a recommendation of which framework/web server to go with on Linux. The idea is to build database-backed RESTful web services.
I know Java, C++, C# (irrelevant on Linux, I guess) and C. I'm okay with developing in any of those.
Here is a table of frameworks that have varying degrees of support for REST and the languages they use.
You might want to check out RESTx. It is multi-lingual: you can write code in Java or Python (server-side JavaScript coming soon). RESTx is specifically a platform for the creation of RESTful resources and web services; it is NOT a traditional application framework. DB-backed web services are actually a specialty of RESTx: you identify the reusable components you want (in this case, a JDBC-capable DB access component) and then just configure them through the RESTful API or by filling out a small form in a browser. As a result, you get a new RESTful web service which encapsulates the query you specified when creating the new resource.
I'm the lead developer on RESTx, so if you have any questions, please contact me or visit our forums.
If I were you, I would go with Ruby 1.9.2 + Rails 3:
They're fun, and you get to learn something new.
Ubuntu-specific install guide: http://web2linux.com/installing-rails-3-on-ubuntu-10-04-lucid-lynx/
Official RoR intro: http://edgeguides.rubyonrails.org/getting_started.html