Make Kinesis stream accept multiple consumers - spring-cloud

I have a stream "dest" for which I want two consumers, and the same message should be consumed by both of them.
But if one consumer gets the message, the message is lost and doesn't reach the second one. I want topic semantics here, not queue semantics.
I tried creating 2 different groups for the 2 different consumers, but that didn't help.
How can I configure this?
spring:
  cloud:
    stream:
      bindings:
        input:
          group: group1
          destination: dest
          content-type: application/json

spring:
  cloud:
    stream:
      bindings:
        input:
          group: group2
          destination: dest
          content-type: application/json

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: dest
          content-type: application/json

You need to declare two different bindings for that use case, input1 and input2:
spring:
  cloud:
    stream:
      bindings:
        input1:
          group: group1
          destination: dest
          content-type: application/json
        input2:
          group: group2
          destination: dest
          content-type: application/json
This way two different consumers are going to be created by the binder, and therefore both of them are going to consume the same record from the Kinesis stream.
Of course, you would also need a separate @StreamListener configuration for each binding target, as in the sketch below.
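A minimal sketch of what that could look like with the annotation-based programming model (the interface and class names here are hypothetical):

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;
import org.springframework.messaging.SubscribableChannel;

// Hypothetical binding interface declaring the two input targets.
interface TwoInputs {

    @Input("input1")
    SubscribableChannel input1();

    @Input("input2")
    SubscribableChannel input2();
}

@EnableBinding(TwoInputs.class)
public class DestConsumers {

    // Each listener belongs to its own consumer group, so every record
    // published to "dest" is delivered to both of them.
    @StreamListener("input1")
    public void fromGroup1(Message<?> message) {
        System.out.println("group1 got: " + message.getPayload());
    }

    @StreamListener("input2")
    public void fromGroup2(Message<?> message) {
        System.out.println("group2 got: " + message.getPayload());
    }
}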

Related

Kafka headers are being overwritten by Kafka Source

I want a Kafka message (a CloudEvent) to be passed through Kafka Source -> Broker -> ASP.NET Core service with the headers from the initial Kafka message.
Right now I can put a message with a body and headers on Kafka, and it's consumed by the Kafka Source, but the headers from the Kafka message are replaced somewhere between Kafka and my service.
Initial headers:
correlationid = {guid}
ce-specversion = 1.0
ce-id = {guid}
ce-source = {differentRelativeUriThanBelow}
ce-type = {com.company.product.request.amqp.asynchronous:v1}
Content-Type = application/cloudevents
Received in service:
correlationid =
ce-specversion = 1.0
ce-id = partition:0/offset:52
ce-source = /apis/v1/namespaces/myNamespace/kafkasources/kafka-source-myNamespace#myKafkaTopic
ce-type = dev.knative.kafka.event
Content-Type = application/cloudevents
Is there any way I can prevent this behaviour, or at least configure it so that my headers are included in the HTTP request received by the service?

"DedupeResponseHeader" not working with Greenwich.SR3

DedupeResponseHeader is not working for me in Spring Cloud Greenwich.SR3. I have added CORS configuration in application.yml, and the downstream application is also sending Access-Control-Allow-Origin in the response header, which ends up with:
The 'Access-Control-Allow-Origin' header contains multiple values 'http://localhost:4200, http://localhost:4200', but only one is allowed.
I have used DedupeResponseHeader, but it is not working for me; I still see the same error in the browser console. The following is the config for CORS and DedupeResponseHeader:
spring:
  cloud:
    gateway:
      default-filters:
        - DedupeResponseHeader=Access-Control-Allow-Origin, RETAIN_UNIQUE
      globalcors:
        add-to-simple-url-handler-mapping: true
        corsConfigurations:
          '[/**]':
            allowedOrigins: "http://localhost:4200"
            allowedMethods: "*"
            allowedHeaders: "*"
I also tried it in the route filters, but that didn't work either:
spring:
  cloud:
    gateway:
      routes:
        - id: dedupe_response_header_route
          uri: http://localhost:4200
          predicates:
            - Method=OPTIONS
            - Method=GET
          filters:
            - DedupeResponseHeader=Access-Control-Allow-Origin
I couldn't figure out why it's not working, and I double-checked the Spring Cloud version. I would appreciate it if someone could help me understand why DedupeResponseHeader is not working.
You can use the latest version of Spring Cloud, i.e. 2020.0.2; DedupeResponseHeader works correctly there.

Can't make Dredd use the schema.example as POST body

I'm trying to use Dredd to test my OpenAPI-specified API, but I can't get Dredd to recognize the JSON body of my POST requests; it keeps sending my POST requests with an empty body.
According to Dredd's documentation it uses schema.example for "in": "body" parameters, and that's exactly what I am doing, but Dredd keeps issuing the POST with an empty body.
I've tried both OpenAPI 3 and OpenAPI 2 with the same result. My POST operation in the OpenAPI 2 specification looks like this:
/availableCounters:
  post:
    summary: Get the available counters for a specified time range
    description: This API returns the available counters for the specific time range requested.
    responses:
      '200':
        description: OK
        schema:
          type: object
          properties:
            properties:
              type: array
              items:
                $ref: '#/definitions/property_spec'
      '400':
        description: 'Could not retrieve available Counters: ${error}'
    parameters:
      - required: true
        name: body
        in: body
        schema:
          example: {"searchSpan": {"from": {"dateTime": "2019-01-20T21:50:37.349Z"},"to": {"dateTime": "2019-01-22T21:50:37.349Z"}}}
          type: object
          properties:
            searchSpan:
              $ref: '#/definitions/from_to'
But when I use Dredd to test this OpenAPI definition, for this operation it doesn't send the body as it should:
request:
  method: POST
  uri: /availableCounters
  headers:
    User-Agent: Dredd/8.0.0 (Windows_NT 10.0.17134; x64)
  body:

expected:
  headers:
  statusCode: 200
  bodySchema: {"type":"object","properties":{"properties":{"type":"array","items":{"$ref":"#/definitions/property_spec"}}},"definitions":{"property_spec":{"type":"object","properties":{"name":{"type":"string"},"type":{"type":"string","enum":["Double","String","DateTime"]}}}}}

actual:
  statusCode: 400
  headers:
    connection: close
    date: Tue, 12 Feb 2019 23:22:09 GMT
    content-type: text/plain; charset=utf-8
    server: Kestrel
    content-length: 96
  bodyEncoding: utf-8
  body:
    Could not retrieve available Counters: TypeError: Cannot read property 'searchSpan' of undefined
I've tried using both schema.example and schema.x-example, but Dredd will not send the body. As I said previously, I've also tried OpenAPI 3 and I get the same result.
Any help would be greatly appreciated.
The question is old, but the problem is still there: Dredd seems to ignore the body parameter if the consumes field is missing.
So try:
/availableCounters:
  post:
    summary: Get the available counters for a specified time range
    consumes:
      - application/json
    [...]

Metadata information from Kafka

I am new to Confluent/Kafka and I want to find metadata information from Kafka.
I want to know:
the list of producers
the list of topics
the schema information for a topic
The Confluent version is 5.0.
What are the classes (methods) that can give this information?
Are there any REST APIs for the same?
Also, is a ZooKeeper connection necessary to get this information?
1) I don't think that Kafka brokers are aware of the producers that produce messages in topics, and therefore there is no command-line tool for listing them. However, an answer to this SO question suggests that you can list producers by viewing the MBeans over JMX.
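If you want to script that, here is a hedged sketch using the standard javax.management JMX client; the broker's JMX port and the kafka.server:type=Produce,client-id=* MBean pattern are assumptions that depend on how your broker is configured:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ProducerClientIds {
    public static void main(String[] args) throws Exception {
        // Assumes the broker was started with JMX enabled, e.g. JMX_PORT=9999.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // The broker registers per-client request metrics, so the client-id
            // key of each matching MBean approximates a list of producers.
            Set<ObjectName> names = connection.queryNames(
                    new ObjectName("kafka.server:type=Produce,client-id=*"), null);
            for (ObjectName name : names) {
                System.out.println(name.getKeyProperty("client-id"));
            }
        }
    }
}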
2) In order to list the topics you need to run:
kafka-topics --zookeeper localhost:2181 --list
Otherwise, if you want to list the topics using a Java client, you can call the listTopics() method of KafkaConsumer.
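For example, a minimal sketch of that approach (the bootstrap address is an assumption):

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicLister {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // listTopics() returns a map of topic name -> partition metadata
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            Map<String, List<PartitionInfo>> topics = consumer.listTopics();
            topics.keySet().forEach(System.out::println);
        }
    }
}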
You can also fetch the list of topics through ZooKeeper:
// uses the legacy ZkClient/ZkUtils helpers shipped with older Kafka versions
ZkClient zkClient = new ZkClient("zkHost:zkPort");
List<String> topics = JavaConversions.asJavaList(ZkUtils.getAllTopics(zkClient));
3) To get the schema information for a topic, you can use the Schema Registry API.
In particular, you can fetch all subjects by calling:
GET /subjects HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
which should give a response similar to the one below:
HTTP/1.1 200 OK
Content-Type: application/vnd.schemaregistry.v1+json
["subject1", "subject2"]
You can then get all the versions of a particular subject:
GET /subjects/subject-name/versions HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
And finally, you can get a specific version of the schema registered under this subject:
GET /subjects/subject_name/versions/1 HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
Or just the latest registered schema:
GET /subjects/subject-name/versions/latest HTTP/1.1
Host: schemaregistry.example.com
Accept: application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json
In order to perform such actions in Java, you can either prepare your own GET requests (see how to do it here) or use Confluent's Schema Registry Java Client. You can see the implementation and the available methods in their GitHub repo.
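For example, a small sketch using their CachedSchemaRegistryClient (the registry URL is an assumption):

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaMetadata;

public class SchemaDump {
    public static void main(String[] args) throws Exception {
        // The second argument is the client's identity-map capacity.
        CachedSchemaRegistryClient client =
                new CachedSchemaRegistryClient("http://localhost:8081", 100);
        for (String subject : client.getAllSubjects()) {
            SchemaMetadata latest = client.getLatestSchemaMetadata(subject);
            System.out.println(subject + " (version " + latest.getVersion()
                    + "): " + latest.getSchema());
        }
    }
}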
Regarding your question about ZooKeeper, note that ZooKeeper is a requirement for Kafka:
Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.

Amazon AWS Machine Learning HTTP request

I have created an AWS Machine Learning model with a working real-time endpoint. I want to consume the created service via an HTTP request. For testing purposes I'm using Postman; I've created the request according to Amazon's API documentation, but every time I get the same exception: UnknownOperationException. When I use the Python SDK, the service works fine. Below is an example that gets model info.
This is my request (fake credentials):
POST / HTTP/1.1
Host: realtime.machinelearning.us-east-1.amazonaws.com
Content-Type: application/json
X-Amz-Target: AmazonML_20141212.GetMLModel
X-Amz-Date: 20170714T124250Z
Authorization: AWS4-HMAC-SHA256 Credential=JNALSFNLANFAFS/20170714/us-east-1/AmazonML/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date;x-amz-target, Signature=fiudsf9sdfh9sdhfsd9hfsdkfdsiufhdsfoidshfodsh
Cache-Control: no-cache
Postman-Token: hd9sfh9s-idsfuuf-a32c-31ca-dsufhdso
{
  "MLModelId": "ml-Hfdlfjdof0807",
  "Verbose": true
}
The exception I get:
{
  "Output": {
    "__type": "com.amazon.coral.service#UnknownOperationException",
    "message": null
  },
  "Version": "1.0"
}
After doing research on the AWS forum I found some similar HTTP requests. It turned out I had 3 incorrect parameters.
The host address should be:
Host: machinelearning.us-east-1.amazonaws.com
The content type:
Content-Type: application/x-amz-json-1.1
In the credentials parameter, the target service has to be specified as machinelearning.
A short instruction on how to set up Postman's request:
In the Authorization tab choose AWS Signature and fill in AccessKey and SecretKey. In the Service Name field write machinelearning. Click Update Request; this will update your header.
In the Headers tab add two headers:
Key: X-Amz-Target, Value: AmazonML_20141212.GetMLModel
Key: Content-Type, Value: application/x-amz-json-1.1
Add body:
{ "MLModelId": "YOUR_ML_MODEL_ID", "Verbose": true }
The correct HTTP request is below:
POST / HTTP/1.1
Host: machinelearning.us-east-1.amazonaws.com
X-Amz-Target: AmazonML_20141212.GetMLModel
Content-Type: application/x-amz-json-1.1
X-Amz-Date: 20170727T113217Z
Authorization: AWS4-HMAC-SHA256 Credential=JNALNFAFS/20170727/us-east-1/machinelearning/aws4_request,
SignedHeaders=content-length;content-type;host;x-amz-date;x-amz-target,
Signature=fiudsf9sdfh9sdhfsd9hfsdkfdsiufhdsfoidshfodsh
Cache-Control: no-cache
Postman-Token: hd9sfh9s-idsfuuf-a32c-31ca-dsufhdso
{
  "MLModelId": "ml-Hfdlfjdof0807",
  "Verbose": true
}
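If hand-crafting the signature keeps failing, the SDKs compute SigV4 for you. A hedged sketch with the AWS SDK for Java, assuming the aws-java-sdk-machinelearning artifact is on the classpath (the model id is the one from the question):

import com.amazonaws.services.machinelearning.AmazonMachineLearning;
import com.amazonaws.services.machinelearning.AmazonMachineLearningClientBuilder;
import com.amazonaws.services.machinelearning.model.GetMLModelRequest;
import com.amazonaws.services.machinelearning.model.GetMLModelResult;

public class GetModelInfo {
    public static void main(String[] args) {
        // Credentials come from the default provider chain;
        // the SDK signs the request with SigV4 automatically.
        AmazonMachineLearning client = AmazonMachineLearningClientBuilder.standard()
                .withRegion("us-east-1")
                .build();
        GetMLModelRequest request = new GetMLModelRequest()
                .withMLModelId("ml-Hfdlfjdof0807")
                .withVerbose(true);
        GetMLModelResult result = client.getMLModel(request);
        System.out.println(result.getName() + ": " + result.getStatus());
    }
}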
Please check the following link and validate your SigV4 signature:
http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html