Questions about the documentation of the Kafka Protocol - apache-kafka

I am implementing the Kafka protocol for the Dart language. My implementation is based on the documentation in the Kafka Protocol Guide.
I have 2 questions about it:
1.) There are 3 versions of the request header and 2 versions of the response header. Besides that, the requests and responses themselves are also available in multiple versions. What is not documented, from my point of view, is which request version uses which version of the request header (and likewise for responses and the response header). Where is this information documented?
2.) The documentation references KIP-482. Is the KIP implemented as documented, or were changes made during implementation?

The version is up to the client implementation to decide.
If you send no client ID and no tagged fields, use request header v0; if only a client ID, v1; if both, v2.
Similarly for the response header, tagged fields are present only in v1.
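To make that rule concrete, here is a minimal sketch in Java (not taken from the Kafka sources; the fixed buffer size and error handling are simplified) of how a client could serialize the three request header versions:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal sketch, not from the Kafka code base: serializes the request header
// for versions 0-2 following the rule above. v0 carries only api_key,
// api_version and correlation_id; v1 adds the nullable client_id; v2 (the
// "flexible" header) additionally appends a tagged-field buffer.
final class RequestHeaderWriter {

    static ByteBuffer write(short headerVersion, short apiKey, short apiVersion,
                            int correlationId, String clientId) {
        ByteBuffer buf = ByteBuffer.allocate(64);   // fixed size for brevity
        buf.putShort(apiKey);        // api_key        (INT16), all versions
        buf.putShort(apiVersion);    // api_version    (INT16), all versions
        buf.putInt(correlationId);   // correlation_id (INT32), all versions
        if (headerVersion >= 1) {    // client_id (NULLABLE_STRING), v1+
            if (clientId == null) {
                buf.putShort((short) -1);
            } else {
                byte[] bytes = clientId.getBytes(StandardCharsets.UTF_8);
                buf.putShort((short) bytes.length);
                buf.put(bytes);
            }
        }
        if (headerVersion >= 2) {    // tagged fields (TAG_BUFFER), v2+
            buf.put((byte) 0);       // unsigned varint 0 = no tagged fields
        }
        buf.flip();
        return buf;
    }
}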

After the discussion with OneCricketeer I looked into the sources of the Java client. There I found the file ApiMessageTypeGenerator.java.
The code there is used to generate the source of another class containing the logic that determines the header version of a request/response. For the requests it looks like every request that supports flexible versions uses a header of version 2 and all others a header of version 1. For the responses it is header version 1 for responses supporting flexible versions and otherwise header version 0. Additionally, there are 2 hardcoded exceptions in the linked source.
The generator works on files like ProduceRequest.json.
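Condensed into code, the rule described above amounts to the following sketch (the real logic lives in the generated ApiMessageType class of the Java client; the two hardcoded exceptions are deliberately omitted here):

// Sketch of the general rule only, not the generated Kafka code.
final class HeaderVersions {

    static short requestHeaderVersion(boolean requestVersionIsFlexible) {
        return requestVersionIsFlexible ? (short) 2 : (short) 1;
    }

    static short responseHeaderVersion(boolean responseVersionIsFlexible) {
        return responseVersionIsFlexible ? (short) 1 : (short) 0;
    }
}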

Related

Why does Fiddler4 not support Content-Type 'application/x-protobuf'?

Both the request and the response are not showing properly formatted text.
The image shows the info in Fiddler: the example x-protobuf data is not shown properly formatted.
Fiddler has no support for Protobuf-encoded messages. If you want to know why, that can't be answered here; contact Telerik and ask them.
There is an old plugin for Fiddler named ProtoMiddler that extends Fiddler with some basic protobuf support; however, it is quite complicated to install, as it relies on protobuf (AFAIR the old protobuf v2) being installed at a certain absolute path, and the plugin has not been updated in years.
If you really need a proxy with good Protobuf support I can only recommend Charles Proxy. It allows you to view protobuf messages raw as well as decoded using compiled protobuf definitions (which can be loaded/updated at run time). Furthermore, you can assign protobuf definitions to certain request and/or response paths.
From what I know, Fiddler doesn't support Protobuf.
If you need to see decoded Protobuf content, you need the schema from which the protobuf binary was generated.
You can try out Charles Proxy or Proxyman. They both support Protobuf parsing pretty well.

Use REST Docs to document REST APIs with versioning

I am using Spring REST Docs to document my REST APIs. Based on the TestNG methods I generate the snippets that, later on, I use in the asciidoc.
For example, I have a rest endpoint to retrieve people, and also the TestNg method to document it. So then, I can use it in my ascii doc like this:
=== Get People
Get the people registered.
operation::get-people[snippets='http-request,request-fields,http-response,response-fields,error-codes']
But now my API has changed and I introduced versioning, so the API is different from version 1 to version 2. I also want to document that properly, but I don't know how to do it in the least intrusive way possible.
I would like my documentation to have this structure
Resources
  v1
    getPeople
  v2
    getPeople
For the endpoints that really change from one version to another I need to generate a different snippet per version the API supports, since the request fields or response fields may be different.
But I also have some other REST endpoints that have the same API from one version to another, and I think I still need to generate the snippets for each version, since the version is part of the endpoint path.
Do you have any idea how to add versioning to the REST docs?
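One possible approach, sketched below under the assumption of a MockMvc-based Spring REST Docs setup (the class, endpoint and snippet names are made up for illustration), is to pass the version into the documentation call and use it as a prefix of the snippet identifier, so each version gets its own snippet directory:

import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.restdocs.mockmvc.RestDocumentationRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.springframework.test.web.servlet.MockMvc;

// Hypothetical helper: documents the same endpoint once per API version and
// writes the snippets into version-prefixed directories such as
// "v1/get-people" and "v2/get-people".
class VersionedPeopleDocs {

    private final MockMvc mockMvc;

    VersionedPeopleDocs(MockMvc mockMvc) {
        this.mockMvc = mockMvc;
    }

    void documentGetPeople(String version) throws Exception {
        this.mockMvc.perform(get("/{version}/people", version))
            .andExpect(status().isOk())
            .andDo(document(version + "/get-people"));
    }
}

The asciidoc can then include the versioned snippets under separate headings, e.g. operation::v1/get-people[snippets='http-request,http-response'] (or, if the operation macro does not resolve nested directories in your setup, the plain include::{snippets}/v1/get-people/http-request.adoc[] form).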

Obtain the swagger document for a REST service [duplicate]

Is there any spec or convention on URL where one should place swagger.json (or whatever name it is agreed) so that public API of my site can be automatically discovered?
Updated 19 April 2017: The OpenAPI Wiki answer I gave previously is "for a very very very old version of the spec". The same source states that for 2.0 the standard is swagger.json, for 3.0 it changes to openapi.json.
Original answer:
The OpenAPI Wiki recommends using an /api-docs endpoint, at least for server APIs. I've seen several sites in the wild that use that, and it's our shop standard.
Hope that helps.
How about serving the Swagger JSON in an HTTP response body, in response to an OPTIONS request for the URL / ?
This is specifically permitted by the relevant RFC.
Further, consider implementing HATEOAS, as strongly advocated by Roy Fielding.
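A minimal sketch of that idea, using only the JDK's built-in HTTP server (the file name, port, and paths are assumptions for illustration, not a standard):

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;

// Answers an OPTIONS request on "/" with the Swagger/OpenAPI document.
public class OptionsDiscoveryServer {

    public static void main(String[] args) throws Exception {
        byte[] spec = Files.readAllBytes(Path.of("openapi.json")); // assumed location
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            if ("OPTIONS".equals(exchange.getRequestMethod())) {
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, spec.length);
                exchange.getResponseBody().write(spec);
            } else {
                exchange.sendResponseHeaders(404, -1); // not the point of the sketch
            }
            exchange.close();
        });
        server.start();
    }
}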
Okay. Since OpenAPI 3.0 is still lacking an auto-discovery mechanism, I'll try to propose a scheme based on some things that are already working:
https://example.com/.well-known/schema-discovery is a JSON document containing an array of the available schemas:
[
  {
    "schema_url": "/openapi.json",
    "schema_type": "openapi-3.0"
  },
  {
    "schema_url": "/v2/openapi.json",
    "schema_type": "openapi-3.0"
  }
]
If there is only one version of API, then https://example.com/openapi.json should be enough.
HTTP headers. I remember somebody from Google proposed an HTTP header for pointing to the API. If you can find it or remember it, please tell me.

REST versioning - URL vs. header

I am planning to write a RESTful API and I am clueless how to handle versioning.
I have read many discussions and blog articles, which suggest to use the accept header for versioning.
But then I found the following website listing popular REST APIs and their versioning methods, and most of them use the URL for versioning.
Why?
Why are most people saying "Don't use the URL, use the Accept header", while popular APIs use the URL?
Both mechanisms are valid. You need to know your consumer to know which path to follow. In general, working with enterprises and academically-minded folks tends to point developers towards Resource Header versioning. However, if your clients are smaller businesses, then URL versioning approach is more widely used.
The Pros and Cons (I'm sure there are more, and some of the Cons have work-arounds not mentioned here)
It's more explorable. For most requests you can just use a browser, whereas the Resource Header implementation requires a more programmatic approach to testing. However, because not all HTTP requests are explorable in a browser (POST requests, for example), you should use a REST client plugin like Postman or Paw. URI Pro/Header Con
With a URI-versioned API, resource identification and the resource’s representation is munged together. This violates the basic principles of REST; one resource should be identified by one and only one endpoint. In this regard, the Resource Header versioning choice is more academically idealistic. Header Pro/URI Con.
A URI-versioned API is less error prone and more familiar to the client developers. Versioning by URL allows the developer to figure out the version of a service at a glance. If the client developer forgets to include a resource version in the header, you have to decide whether they should be directed to the latest version (which can cause errors when incrementing the version) or receive a 301 (Moved Permanently) redirect. Either way there is more confusion for your more novice client developers. URI Pro/Header Con
URI versioning lends itself to hosting multiple versions in the same application. In this case you do not have to further develop your framework.
Note: If you do this, your directory structure will most likely contain a substantial amount of duplicate code in the v2 directory. Also, deploying updates requires a system restart, so this technique should be avoided if possible. URI Pro/Header Con.
It is easier to add versioning to the HTTP headers for an existing project that didn't have versioning in mind from its inception. Header Pro/URI Con.
According to the RMM Level 3 REST Principle: Hypermedia Controls, you should use the HTTP Accept and Content-Type headers to handle versioning of data as well as describing data. Header Pro/URI Con.
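For concreteness, here is a small sketch of the two mechanisms side by side, written as a hypothetical Spring MVC controller (the paths and the vendor media type are made up for illustration):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative only: the same logical resource exposed via URL versioning
// and via Accept-header (media type) versioning.
@RestController
class PeopleController {

    // URL versioning: the version is part of the path.
    @GetMapping("/v2/people")
    String getPeopleV2ByPath() {
        return "[]";
    }

    // Header versioning: the client sends Accept: application/vnd.example.v2+json
    // and content negotiation selects this handler.
    @GetMapping(value = "/people", produces = "application/vnd.example.v2+json")
    String getPeopleV2ByHeader() {
        return "[]";
    }
}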
Here are some helpful links if you want to do some further reading:
Martin Fowler's description of the Richardson Maturity Model
API Versioning - Pivotal Labs
HATEOAS
Informit.com's Article on Versioning REST Services

OData version 2 and 3 differences

OData protocol documentation (http://www.odata.org/documentation) describes two versions - 2 and 3.
What are the core differences between two versions?
Are both versions supported by existing client libraries, or is version 2 considered to be "legacy"?
To rephrase - are version 2 clients compatible with version 3?
There are lots of differences between the two versions. For example, OData v3 adds support for actions, functions, collection values, navigation properties on derived types, and stream properties. It also introduces a completely new serialization format for JSON ("application/json" means completely different things in the two versions).
When an OData client makes a request to a server, it can (and should) specify the maximum protocol version it can understand via the MaxDataServiceVersion HTTP header. A client written to only understand v2 of the protocol won't be able to understand a v3 payload.
I don't think I'd call v2 "legacy" or unsupported, but individual servers can choose whether or not to support requests that can only understand up to v2 (or v1). I think many existing clients out there support both v2 and v3. I know the WCF Data Services clients (desktop, windows phone, windows store, and silverlight) do support both.
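A small sketch of that negotiation with a plain HTTP client (the service URL is a placeholder; the header names follow the OData v2/v3 convention described above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// The client announces the highest protocol version it understands via
// MaxDataServiceVersion; the service reports the version it actually used
// in the DataServiceVersion response header.
public class ODataVersionNegotiation {

    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.com/odata/People")) // placeholder URL
            .header("MaxDataServiceVersion", "2.0")  // "I only understand up to v2"
            .header("Accept", "application/json")
            .GET()
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.headers()
            .firstValue("DataServiceVersion").orElse("n/a"));
    }
}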
In addition to the previous answer, be aware that some client tools may still support only the OData v2 protocol, so in case you need v3-specific features, you should make sure your client code is not limited by something like auto-generated proxy classes that are not capable of handling array types.
Here's an example when server exposes v3 features, but it's not possible to use them because Visual Studio WCF Data Service client proxy generator only supports v2:
http://bloggingabout.net/blogs/vagif/archive/2012/12/16/using-odata-protocol-v3-with-mongodata-odata-provider.aspx
You can find the list of all the differences between the two versions in the pdf of the Open Data Protocol (OData) Specification.
Specifically, the changelog is at section "1.7 Versioning and Capability Negotiation"