Storm-kafka: set startOffsetTime to kafka.api.OffsetRequest.LatestTime in an Apache Flux YAML topology

I am working on a topology using Apache Flux. Currently, Storm fetches messages from the beginning, but I want it to fetch only the latest messages from Kafka.
I am writing the topology in a YAML file. This is what my spoutConfig looks like:
- id: "stringScheme"
className: "org.apache.storm.kafka.StringScheme"
- id: "stringMultiScheme"
className: "org.apache.storm.spout.SchemeAsMultiScheme"
constructorArgs:
- ref: "stringScheme"
- id: "zkHosts"
className: "org.apache.storm.kafka.ZkHosts"
constructorArgs:
- "172.25.33.191:2181"
- id: "spoutConfig"
className: "org.apache.storm.kafka.SpoutConfig"
constructorArgs:
- ref: "zkHosts"
- "blockdata"
- ""
- "myId"
properties:
- name: "scheme"
ref: "stringMultiScheme"
- name: "ignoreZkOffsets"
value: true
- name: "startOffsetTime"
ref: "XXXXXXXXX"
Now I am stuck. How do I set startOffsetTime to the proper value so that it fetches only the latest messages from Kafka?
I have tried ref: "LatestTime", but no matter what I put in there, it gives me this error:
java.lang.IllegalArgumentException: Can not set long field org.apache.storm.kafka.KafkaConfig.startOffsetTime to null value

I believe Flux can handle calling static factory methods:

- id: "startingOffsetTime"
  className: "kafka.api.OffsetRequest"
  factory: "LatestTime"
and then use it in your SpoutConfig definition like this:

properties:
  - name: "startOffsetTime"
    ref: "startingOffsetTime"
I haven't tested this, but I think it should work. The ability to call static factory methods was merged a while back (https://issues.apache.org/jira/browse/STORM-2796), but it seems to be missing from the documentation. I've raised an issue to update the docs (https://issues.apache.org/jira/browse/STORM-3086).
In case you'd like to see an example of this feature, take a look at https://github.com/apache/storm/blob/master/flux/flux-core/src/test/resources/configs/config-methods-test.yaml#L38
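Putting it together, the spoutConfig component would reference the factory component by id in its startOffsetTime property. This is an untested sketch assembled only from the pieces above:

- id: "startingOffsetTime"
  className: "kafka.api.OffsetRequest"
  factory: "LatestTime"

- id: "spoutConfig"
  className: "org.apache.storm.kafka.SpoutConfig"
  constructorArgs:
    - ref: "zkHosts"
    - "blockdata"
    - ""
    - "myId"
  properties:
    # ignoreZkOffsets makes the spout skip offsets stored in ZooKeeper,
    # so startOffsetTime actually takes effect on restart
    - name: "ignoreZkOffsets"
      value: true
    - name: "startOffsetTime"
      ref: "startingOffsetTime"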

Related

Creating two ALBs conditionally when an instance ID is supplied, but not able to pass two instance ID values in the parameter section

Please help me with the template I'm using to create two ALBs. The intent is that if a value is put in one of the parameters, an ALB is created for that instance ID. But if I pass two instance IDs in the parameter ELB1InstanceIds, it gives me this error:
Properties validation failed for resource HTTPTG1 with message: #/Targets/0/Id: expected type: String, found: JSONArray
Earlier I was using the parameter Type: 'List<AWS::EC2::Instance::Id>', but when I didn't select any instance in ELB2InstanceIds I got an error that a selected value is required before the stack can run.
So I created the parameters with type String and passed an instance ID. It works with one instance ID, but not when I specify two instance IDs in one of the parameters.
Parameters:
  ELB1InstanceIds:
    Type: String
    Default: i-12345678
  ELB2InstanceIds:
    Type: String
    Default: i-12345678
Conditions:
  IsELB1InstanceIdsNotEmpty: !Not [!Equals [!Ref ELB1InstanceIds, "i-12345678"]]
  IsELB2InstanceIdsNotEmpty: !Not [!Equals [!Ref ELB2InstanceIds, "i-12345678"]]
Resources:
  HTTPTG1:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    Properties:
      Targets:
        - Id: !If
            - IsELB1InstanceIdsNotEmpty
            - - !Split [",", !Ref ELB1InstanceIds]
            - !Ref 'AWS::NoValue'
  HTTPTG2:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    Properties:
      Targets:
        - Id: !If
            - IsELB2InstanceIdsNotEmpty
            - - !Split [",", !Ref ELB2InstanceIds]
            - !Ref 'AWS::NoValue'
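For context on the error itself: each Id under Targets expects a single string, while !Split returns a list; that mismatch is what produces "found: JSONArray". One common pattern for a fixed maximum of two IDs, sketched here and untested against this template, is to pick each element out of the split list with !Select:

HTTPTG1:
  Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
  Properties:
    Targets:
      # Each Id gets exactly one string pulled from the comma-separated parameter
      - Id: !Select [0, !Split [",", !Ref ELB1InstanceIds]]
      - Id: !Select [1, !Split [",", !Ref ELB1InstanceIds]]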

serverless-appsync-plugin 'pipeline' deployment error

I am using Serverless to deploy an AppSync API using the 'PIPELINE' kind, backed by Lambda functions. The plugin https://github.com/sid88in/serverless-appsync-plugin is used to deploy AppSync with the ability to use pipeline resolvers. I followed the description in the documentation, but when I try to deploy it I get an error:
Error: The CloudFormation template is invalid: Template error: instance of Fn::GetAtt references undefined resource GraphQlDsmeInfo
functions:
  graphlql:
    handler: handler.meInfo
    name: meInfo

custom:
  accountId: testId
  appSync:
    name: test-AppSync
    authenticationType: API_KEY
    mappingTemplates:
      - type: Query
        field: meInfo
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
        kind: PIPELINE
        functions:
          - meInfo
    functionConfigurations:
      - dataSource: meInfo
        name: 'meInfo'
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
Could somebody help me configure serverless-appsync-plugin with the pipeline kind?
You need to specify the data source used in your function.
It seems you've deployed the handler as a Lambda function. If not, you should first have a separate serverless.yml config for your Lambda and deploy it. Then you need to attach this Lambda as an AppSync data source, so your AppSync config would look like this:
custom:
  accountId: testId
  appSync:
    name: test-AppSync
    authenticationType: API_KEY
    dataSources:
      - type: AWS_LAMBDA
        name: Lambda_Name
        description: 'Lambda Description'
        config:
          lambdaFunctionArn: 'arn:aws:lambda:xxxx'
          serviceRoleArn: 'arn:aws:iam::xxxx'
    mappingTemplates:
      - type: Query
        field: meInfo
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
        kind: PIPELINE
        functions:
          - meInfo
    functionConfigurations:
      - dataSource: Lambda_Name
        name: 'meInfo'
        request: 'meInfo-request-mapping-template.vtl'
        response: 'meInfo-response-mapping-template.vtl'
There is an article which describes the process in detail that might be useful: https://medium.com/hackernoon/running-a-scalable-reliable-graphql-endpoint-with-serverless-24c3bb5acb43
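As a reference for the "deploy the Lambda first" step, a minimal standalone serverless.yml for the function could look something like this. The service name and runtime are placeholders; only the handler and function name come from the question:

# Hypothetical standalone config, deployed before the AppSync stack
service: me-info-lambda

provider:
  name: aws
  runtime: nodejs8.10   # assumption; use whatever runtime the handler targets

functions:
  meInfo:
    handler: handler.meInfo
    name: meInfo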

Is there a way to override properties' description and example in OAS3?

I have been looking for resources about inheritance in OAS3, but the closest I've found is https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.2.md, and it does not have the answer I am looking for.
This is the working example:
components:
  schemas:
    Pet:
      properties:
        no_legs:
          description: "Number of legs"
          type: number
          example: 4
    Duck:
      allOf:
        - $ref: '#/components/schemas/Pet'
        - type: object
          properties:
            no_legs:
              example: 2
      properties:
        no_legs:
          description: 'Number of webbed feet'
This is the failing example that was inspired by the spec:

Duck:
  allOf:
    - $ref: '#/components/schemas/Pet'
    - type: object
      properties:
        no_legs:
          description: 'Number of webbed feet'
          example: 2
My questions are:
Is the overriding feature I am looking for available/supported?
If so, what is the appropriate way of doing it?
I understand that I could use composition to tackle this issue, but then a lot of the definition would be repeated.
Yes, that's essentially the way to go about overriding properties. What sort of error are you getting on your failing example? It works just fine at https://editor.swagger.io/ at least. Did you remember to specify openapi: 3.0.0 at the root of your document?
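For instance, a minimal complete document around the Pet schema above, with the version declared at the root, would start like this (a sketch; the info block and empty paths are filler added to make it self-contained):

openapi: 3.0.0
info:
  title: Pet example   # filler metadata
  version: 1.0.0
paths: {}
components:
  schemas:
    Pet:
      properties:
        no_legs:
          description: "Number of legs"
          type: number
          example: 4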
Note that the following two definitions are identical:
Duck:
  allOf:
    - $ref: '#/components/schemas/Pet'
    - type: object
      properties:
        no_legs:
          description: 'Number of webbed feet'
          example: 2

Duck:
  allOf:
    - $ref: '#/components/schemas/Pet'
  properties:
    no_legs:
      description: 'Number of webbed feet'
      example: 2

Cannot deploy AWS SAM stack due to Handler not found error

I am having issues deploying a Lambda with a handler in a nested directory using SAM.
I perform the following steps:
package:
sam package --template template.yaml --output-template-file packaged.yaml --s3-bucket
This creates a packaged.yaml that I use in the next step.
deploy:
aws cloudformation deploy --template-file /Users/localuser/Do/learn-sam/dynamo-stream-lambda/packaged.yaml --stack-name barkingstack
ERROR
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Transform AWS::Serverless-2016-10-31 failed with: Invalid Serverless Application Specification document. Number of errors found: 1. Resource with id [PublishNewBark] is invalid. Missing required property 'Handler'.
CloudFormation/SAM Template

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Globals:
  Function:
    Runtime: nodejs8.10
    Timeout: 300
Resources:
  PublishNewBark:
    Type: AWS::Serverless::Function
    FunctionName: publishNewBark
    CodeUri: .
    Handler: src/index.handler
    Role: "<ROLE_ARN>"
    Description: Reads from the DynamoDB Stream and publishes to an SNS topic
    Events:
      - ReceiveBark:
          Type: DynamoDB
          Stream: !GetAtt BarkTable.StreamArn
          StartingPosition: TRIM_HORIZON
          BatchSize: 1
  BarkTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: BarkTable
      KeySchema:
        - KeyType: HASH
          AttributeName: id
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      ProvisionedThroughput:
        WriteCapacityUnits: 5
        ReadCapacityUnits: 5
  WooferTopic:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: wooferTopic
      TopicName: wooferTopic
      Subscription:
        - Endpoint: <my_email>
          Protocol: email
DIRECTORY STRUCTURE

root_directory/
  events/      (for sample events)
  policies/    (for the IAM role to be created for the Lambda using the CLI)
  src/index.js
  package.json
  node_modules
  template.yaml
HANDLER CODE

async function handler (event, context) {
  console.log(JSON.stringify(event, null, 2))
  return {}
}

module.exports = { handler }
I believe you have to put everything except the resource type under "Properties".
Your function declaration should be:
PublishNewBark:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: publishNewBark
    CodeUri: .
    Handler: src/index.handler
    Role: "<ROLE_ARN>"
    Description: Reads from the DynamoDB Stream and publishes to an SNS topic
    Events:
      - ReceiveBark:
          Type: DynamoDB
          Stream: !GetAtt BarkTable.StreamArn
          StartingPosition: TRIM_HORIZON
          BatchSize: 1
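As a quick sanity check before packaging, the SAM CLI can also lint the template locally, which can catch schema problems like a missing Properties block without a round trip through CloudFormation (assuming a reasonably recent SAM CLI):

sam validate --template template.yaml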

!ImportValue in Serverless Framework not working

I'm attempting to export a DynamoDB StreamArn from a stack created in CloudFormation, then reference the export using !ImportValue in the serverless.yml.
But I'm getting this error message:
unknown tag !<!ImportValue> in "/codebuild/output/src/serverless.yml"
The CloudFormation template and serverless.yml are defined below. Any help appreciated.
StackA.yml
AWSTemplateFormatVersion: 2010-09-09
Description: Resources for the registration site
Resources:
  ClientTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: client
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 2
        WriteCapacityUnits: 2
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
Outputs:
  ClientTableStreamArn:
    Description: The ARN for My ClientTable Stream
    Value: !GetAtt ClientTable.StreamArn
    Export:
      Name: my-client-table-stream-arn
serverless.yml
service: my-service
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeStream
        - dynamodb:GetRecords
        - dynamodb:GetShardIterator
        - dynamodb:ListStreams
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:*:*:table/client
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: !ImportValue my-client-table-stream-arn
          batchSize: 1
Solved by using ${cf:stackName.outputKey}
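The cf variable resolves a named output of another stack at deploy time. A minimal sketch, assuming the exporting stack above was deployed under the hypothetical name stack-a:

functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          # ${cf:stackName.outputKey} reads the stack's output key, not the export name
          arn: ${cf:stack-a.ClientTableStreamArn}
          batchSize: 1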
I struggled with this as well, and what did the trick for me was:
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn:
            !ImportValue my-client-table-stream-arn
          batchSize: 1

Note that the intrinsic function !ImportValue is on a new line and indented; otherwise the whole event is ignored when cloudformation-template-update-stack.json is generated.
It appears that you're using the !ImportValue shorthand for CloudFormation YAML. My understanding is that when CloudFormation parses the YAML, !ImportValue is simply an alias for Fn::ImportValue. According to the Serverless Function documentation, they should support the Fn::ImportValue form of imports.
Based on the documentation for Fn::ImportValue, you should be able to reference your export like this:

- stream:
    type: dynamodb
    arn: {"Fn::ImportValue": "my-client-table-stream-arn"}
    batchSize: 1
Hope that helps solve your issue.
I couldn't find this clearly documented anywhere, but what seemed to resolve the issue for me is that the variables which need to be exposed/exported in Outputs must have an "Export" property with a "Name" sub-property:
In serverless.ts
resources: {
  Resources: resources["Resources"],
  Outputs: {
    // For eventbus
    EventBusName: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME",
      },
      Value: {
        Ref: "UNIQUE_EVENTBUS_NAME",
      },
    },
    // For something like sqs, or anything else, it would be the same
    IDVerifyQueueARN: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_SQS_NAME",
      },
      Value: { "Fn::GetAtt": ["UNIQUE_SQS_NAME", "Arn"] },
    },
  },
}
Once this is deployed, you can check that the exports are present by running the following in the terminal (using your associated AWS credentials):
aws cloudformation list-exports
Then there should be a Name property in the returned list, matching the export name given above (populated with your actual service and stage):

{
  "ExportingStackId": "***",
  "Name": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME",
  "Value": "***"
}
And then, if the above is successful, you can reference the export with "Fn::ImportValue", e.g.:

"Resource": {
  "Fn::ImportValue": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME"
}