Infinispan - Cannot create more than one cache at startup using helm template - kubernetes

I want to create more than one cache using Helm; my YAML is the following:
deploy:
  infinispan:
    cacheContainer:
      distributedCache:
        - name: "mycache"
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
        - name: "mycache1"
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
When I install the Helm chart, I get the following error:
Red Hat Data Grid Server failed to start org.infinispan.commons.configuration.io.ConfigurationReaderException: Missing required attribute(s): name[86,1]
I don't know if it is possible to create more than one cache. I have followed this documentation: https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.3/html/building_and_deploying_data_grid_clusters_with_helm/configuring-servers
Thanks for your help.
Alexis

Yes, it's possible to define multiple caches. You have to use the following format:
deploy:
  infinispan:
    cacheContainer:
      <1st-cache-name>:
        <cache-type>:
          <cache-definition>
      ...
      <2nd-cache-name>:
        <cache-type>:
          <cache-definition>
So in your case that will be:
deploy:
  infinispan:
    cacheContainer:
      mycache: # mycache definition follows
        distributedCache:
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
      mycache1: # mycache1 definition follows
        distributedCache:
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
See here for an example of how to define multiple caches in json/xml/yaml formats.

Related

Sale cannot be executed in Kadena Pact Marmalade

The error occurs under the following conditions:
- Running on a local server (Ubuntu on Windows).
- I deployed these files.
- The token is minted.
- The YAML file is generated via Django.
- The generated file is then run through the API request formatter to generate JSON.
Template:
code: |-
  (marmalade.ledger.sale
    (read-msg 'token-id)
    (read-msg 'account)
    (read-decimal 'amount)
    (read-integer 'timeout)
  )
data:
  token-id: "${token_id}"
  account: "${account}"
  amount: ${amount}
  timeout: ${timeout}
  quote:
    price: "${price}"
    recipient-guard:
      keys: [ "${recipient_key}" ]
    recipient: ${recipient}
keyPairs:
  - public: ${marmalade_pub}
    secret: ${marmalade_sec}
    caps:
      - name: "marmalade.ledger.SALE"
        args: ["${token_id}", "${account}", ${amount}, ${timeout}]
Generated File:
code: |-
  (marmalade.ledger.sale
    (read-msg 'token-id)
    (read-msg 'account)
    (read-decimal 'amount)
    (read-integer 'timeout)
  )
data:
  token-id: "92496e13-4414-4c58-aceb-0c7d77120af2"
  account: "a0fc037522ae0202052775f773e3cf823e3bd7640a12e80694b2dd76"
  amount: 1.0
  timeout: 100
  quote:
    price: "1.0"
    recipient-guard:
      keys: [ "52d9d996e6cc77f48a1ff1b734224a2ce6cee3af84c3f6e1720047645e9c2c31" ]
    recipient: a0fc037522ae0202052775f773e3cf823e3bd7640a12e80694b2dd76
keyPairs:
  - public: b63680e667576818b713d5d398ad395610b522fbc53e08afe46a719a0a128fb5
    secret: 4242d66fac539660702ce4f9c01a39bb16f1d6b6c7110c6647010cb4dd041dd8
    caps:
      - name: "marmalade.ledger.SALE"
        args: ["92496e13-4414-4c58-aceb-0c7d77120af2", "a0fc037522ae0202052775f773e3cf823e3bd7640a12e80694b2dd76", 1.0, 100]
JSON:
b'{"cmds":[{"hash":"cD5gqQVmRj08xqOdqfKwLLL2eb76_j_eZDdzV4Xdkjg","sigs":[{"sig":"2fb4bbb496271a138c33a4c8b94cf4511c8ad494b5c0cca94a2d1f98a37025e865b1414116fe77e841fbe71c6430e5da72209123a63e958b992f4e8ba1770103"}],"cmd":"{\\"networkId\\":null,\\"payload\\":{\\"exec\\":{\\"data\\":{\\"amount\\":1.0,\\"token-id\\":\\"92496e13-4414-4c58-aceb-0c7d77120af2\\",\\"account\\":\\"a0fc037522ae0202052775f773e3cf823e3bd7640a12e80694b2dd76\\",\\"quote\\":{\\"recipient-guard\\":{\\"keys\\":[\\"52d9d996e6cc77f48a1ff1b734224a2ce6cee3af84c3f6e1720047645e9c2c31\\"]},\\"price\\":\\"1.0\\",\\"recipient\\":\\"a0fc037522ae0202052775f773e3cf823e3bd7640a12e80694b2dd76\\"},\\"timeout\\":100},\\"code\\":\\"(marmalade.ledger.sale (read-msg \'token-id) (read-msg \'account) (read-decimal \'amount) (read-integer \'timeout) )\\"}},\\"signers\\":[{\\"pubKey\\":\\"b63680e667576818b713d5d398ad395610b522fbc53e08afe46a719a0a128fb5\\",\\"clist\\":[{\\"args\\":[\\"92496e13-4414-4c58-aceb-0c7d77120af2\\",\\"a0fc037522ae0202052775f773e3cf823e3bd7640a12e80694b2dd76\\",1,100],\\"name\\":\\"marmalade.ledger.SALE\\"}]}],\\"meta\\":{\\"creationTime\\":0,\\"ttl\\":0,\\"gasLimit\\":0,\\"chainId\\":\\"\\",\\"gasPrice\\":0,\\"sender\\":\\"\\"},\\"nonce\\":\\"2022-07-20 09:53:05.975872 UTC\\"}"}]}\n'
The following error occurred when I posted the JSON data with requests.post to 'localhost:8080/api/v1/poll':
{'cD5gqQVmRj08xqOdqfKwLLL2eb76_j_eZDdzV4Xdkjg': {'gas': 0, 'result': {'status': 'failure', 'error': {'callStack': [], 'type': 'EvalError', 'message': 'Managed capability not installed: (marmalade.ledger.OFFER "92496e13-4414-4c58-aceb-0c7d77120af2" "a0fc037522ae0202052775f773e3cf823e3bd7640a12e80694b2dd76" 1.0 100)', 'info': ''}}, 'reqKey': 'cD5gqQVmRj08xqOdqfKwLLL2eb76_j_eZDdzV4Xdkjg', 'logs': None, 'metaData': None,
'continuation': None, 'txId': None}}
How do I get these to work properly?
P.S. I am Japanese and not an English speaker. I used a translation site to ask this question. Sorry if any part of the description is missing or unclear.

Using jhipster framework to configure mongodb prompt not authorized

I used scaffolding to generate a new microservice, then I added the following configuration for MongoDB:
logging:
  level:
    ROOT: DEBUG
    io.github.jhipster: DEBUG
    com.fzai.fileservice: DEBUG

eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/

spring:
  profiles:
    active: dev
    include:
      - swagger
      # Uncomment to activate TLS for the dev profile
      #- tls
  devtools:
    restart:
      enabled: true
      additional-exclude: static/**
    livereload:
      enabled: false # we use Webpack dev server + BrowserSync for livereload
  jackson:
    serialization:
      indent-output: true
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
  mail:
    host: localhost
    port: 25
    username:
    password:
  messages:
    cache-duration: PT1S # 1 second, see the ISO 8601 standard
  thymeleaf:
    cache: false
  sleuth:
    sampler:
      probability: 1 # report 100% of traces
  zipkin: # Use the "zipkin" Maven profile to have the Spring Cloud Zipkin dependencies
    base-url: http://localhost:9411
    enabled: false
    locator:
      discovery:
        enabled: true

server:
  port: 8081

# ===================================================================
# JHipster specific properties
#
# Full reference is available at: https://www.jhipster.tech/common-application-properties/
# ===================================================================
jhipster:
  cache: # Cache configuration
    hazelcast: # Hazelcast distributed cache
      time-to-live-seconds: 3600
      backup-count: 1
      management-center: # Full reference is available at: http://docs.hazelcast.org/docs/management-center/3.9/manual/html/Deploying_and_Starting.html
        enabled: false
        update-interval: 3
        url: http://localhost:8180/mancenter
  # CORS is disabled by default on microservices, as you should access them through a gateway.
  # If you want to enable it, please uncomment the configuration below.
  cors:
    allowed-origins: "*"
    allowed-methods: "*"
    allowed-headers: "*"
    exposed-headers: "Authorization,Link,X-Total-Count"
    allow-credentials: true
    max-age: 1800
  security:
    client-authorization:
      access-token-uri: http://uaa/oauth/token
      token-service-id: uaa
      client-id: internal
      client-secret: internal
  mail: # specific JHipster mail property, for standard properties see MailProperties
    base-url: http://127.0.0.1:8081
  metrics:
    logs: # Reports metrics in the logs
      enabled: false
      report-frequency: 60 # in seconds
  logging:
    use-json-format: false # By default, logs are not in Json format
    logstash: # Forward logs to logstash over a socket, used by LoggingConfiguration
      enabled: false
      host: localhost
      port: 5000
      queue-size: 512
  audit-events:
    retention-period: 30 # Number of days before audit events are deleted.

oauth2:
  signature-verification:
    public-key-endpoint-uri: http://uaa/oauth/token_key
    #ttl for public keys to verify JWT tokens (in ms)
    ttl: 3600000
    #max. rate at which public keys will be fetched (in ms)
    public-key-refresh-rate-limit: 10000
  web-client-configuration:
    #keep in sync with UAA configuration
    client-id: web_app
    secret: changeit
An error occurred while I was running the project:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [com/fzai/fileservice/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1771)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at com.fzai.fileservice.FileServiceApp.main(FileServiceApp.java:70)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: com.mongodb.MongoQueryException: Query failed with error code 13 and error message 'not authorized on fileService to execute command { find: "system.indexes", filter: { ns: "fileService.dbchangelog", key: { changeId: 1, author: 1 } }, limit: 1, singleBatch: true, $db: "fileService" }' on server 42.193.124.204:27017
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:706)
at com.mongodb.operation.FindOperation$1.call(FindOperation.java:695)
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:462)
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:406)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:695)
at com.mongodb.operation.FindOperation.execute(FindOperation.java:83)
at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:179)
at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:198)
at com.github.mongobee.dao.ChangeEntryIndexDao.findRequiredChangeAndAuthorIndex(ChangeEntryIndexDao.java:35)
at com.github.mongobee.dao.ChangeEntryDao.ensureChangeLogCollectionIndex(ChangeEntryDao.java:121)
at com.github.mongobee.dao.ChangeEntryDao.connectMongoDb(ChangeEntryDao.java:61)
at com.github.mongobee.Mongobee.execute(Mongobee.java:143)
at com.github.mongobee.Mongobee.afterPropertiesSet(Mongobee.java:126)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1830)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1767)
... 19 common frames omitted
But in my other simple Spring Boot project, I used the same configuration, and it runs successfully:
spring:
  application:
    name: springboot1
  data:
    mongodb:
      host: 42.193.124.204
      port: 27017
      username: admin
      password: admin123
      authentication-database: fileService
      database: fileService
This is the user and role I created:
{
"_id" : "fileService.admin",
"userId" : UUID("03f75395-f129-4273-b6a6-b2dc3d1f7974"),
"user" : "admin",
"db" : "fileService",
"roles" : [
{
"role" : "dbOwner",
"db" : "fileService"
},
{
"role" : "readWrite",
"db" : "fileService"
}
],
"mechanisms" : [
"SCRAM-SHA-1",
"SCRAM-SHA-256"
]
}
I want to know what's wrong.

How to pass extra configuration to RabbitMQ with Helm?

I'm using this chart: https://github.com/helm/charts/tree/master/stable/rabbitmq to deploy a cluster of 3 RabbitMQ nodes on Kubernetes. My intention is to have all the queues mirrored across 2 nodes in the cluster.
Here's the command I use to run Helm: helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq
And here's the content of rabbitmq-values.yaml:
persistence:
  enabled: true
resources:
  requests:
    memory: 256Mi
    cpu: 100m
replicas: 3
rabbitmq:
  extraConfiguration: |-
    {
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
However, the nodes fail to start due to some parsing errors, and they stay in a crash loop. Here's the output of kubectl logs rabbitmq-local-0:
BOOT FAILED
===========
Config file generation failed:
=CRASH REPORT==== 23-Jul-2019::15:32:52.880991 ===
crasher:
initial call: lager_handler_watcher:init/1
pid: <0.95.0>
registered_name: []
exception exit: noproc
in function gen:do_for_proc/2 (gen.erl, line 228)
in call from gen_event:rpc/2 (gen_event.erl, line 239)
in call from lager_handler_watcher:install_handler2/3 (src/lager_handler_watcher.erl, line 117)
in call from lager_handler_watcher:init/1 (src/lager_handler_watcher.erl, line 51)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [lager_handler_watcher_sup,lager_sup,<0.87.0>]
message_queue_len: 0
messages: []
links: [<0.90.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 228
neighbours:
15:32:53.679 [error] Syntax error in /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf after line 14 column 1, parsing incomplete
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681369 ===
supervisor: {local,gr_counter_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.97.0>},
{id,gr_lager_default_tracer_counters},
{mfargs,{gr_counter,start_link,
[gr_lager_default_tracer_counters]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681514 ===
supervisor: {local,gr_param_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.96.0>},
{id,gr_lager_default_tracer_params},
{mfargs,{gr_param,start_link,[gr_lager_default_tracer_params]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
If I remove the rabbitmq.extraConfiguration part, the nodes start properly, so it must be something wrong with the way I'm typing in the policy. Any idea what I'm doing wrong?
Thank you.
According to https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions, it is possible to load a JSON definitions file via extraConfiguration. So we ended up with this setup, which works:
rabbitmq-values.yaml:
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: rabbitmq-load-definition
  extraConfiguration: |-
    management.load_definitions = /app/load_definition.json
rabbitmq-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ],
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
The secret must be loaded into Kubernetes before the Helm chart is installed, which goes something like the sketch below.
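Putting the steps together, the rollout order looks something like this, reusing the helm install invocation from the question and the file names above:

kubectl apply -f ./rabbitmq-secret.yaml
helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq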
You can also do this with the chart's own values. If needed, use extraSecrets to let the chart create the secret for you, so you don't need to create it manually before deploying a release. For example:
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json
https://github.com/helm/charts/tree/master/stable/rabbitmq
Instead of using extraConfiguration, use advancedConfiguration: all of this information belongs in that section, as it is for the classic (Erlang terms) config format.
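If you go the advancedConfiguration route, here is a minimal sketch of what it might look like, assuming the same /app/load_definition.json path as in the answers above; load_definitions under the rabbitmq_management app is the classic-format counterpart of management.load_definitions:

rabbitmq:
  advancedConfiguration: |-
    [
      {rabbitmq_management, [
        {load_definitions, "/app/load_definition.json"}
      ]}
    ].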

How to provide resource limits in kubernetes go client pod spec?

Spec: v1.PodSpec{
    Containers: []v1.Container{
        v1.Container{
            Name:            podName,
            Image:           deploymentName,
            ImagePullPolicy: "IfNotPresent",
            Ports:           []v1.ContainerPort{},
            Env: []v1.EnvVar{
                v1.EnvVar{
                    Name:  "RASA_NLU_CONFIG",
                    Value: os.Getenv("RASA_NLU_CONFIG"),
                },
                v1.EnvVar{
                    Name:  "RASA_NLU_DATA",
                    Value: os.Getenv("RASA_NLU_DATA"),
                },
            },
            Resources: v1.ResourceRequirements{},
        },
    },
    RestartPolicy: v1.RestartPolicyOnFailure,
},
I want to provide resource limits corresponding to:
resources:
  limits:
    cpu: "1"
  requests:
    cpu: "0.5"
args:
  - -cpus
  - "2"
How do I do that? I tried adding Limits and its map key-value pair, but it seems to be quite a nested structure. There doesn't seem to be any example of how to provide resources in client-go.
I struggled with the same thing when I was creating a StatefulSet. Maybe my code snippet will help you:
// Imports needed for this snippet:
//   apiv1 "k8s.io/api/core/v1"
//   "k8s.io/apimachinery/pkg/api/resource"
Resources: apiv1.ResourceRequirements{
    Limits: apiv1.ResourceList{
        "cpu":    resource.MustParse(cpuLimit),
        "memory": resource.MustParse(memLimit),
    },
    Requests: apiv1.ResourceList{
        "cpu":    resource.MustParse(cpuReq),
        "memory": resource.MustParse(memReq),
    },
},
The variables cpuLimit, memLimit, cpuReq and memReq are supposed to be strings in Kubernetes quantity format (e.g. "1", "500m", "128Mi").
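Plugged into the PodSpec from the question, the whole thing would look something like this — a minimal, self-contained sketch assuming standard client-go imports; the "1" and "0.5" quantities mirror the YAML from the question, and the values passed in main are illustrative:

package main

import (
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// buildPodSpec fills in the question's PodSpec with CPU requests/limits.
func buildPodSpec(podName, deploymentName string) v1.PodSpec {
	return v1.PodSpec{
		Containers: []v1.Container{
			{
				Name:            podName,
				Image:           deploymentName,
				ImagePullPolicy: "IfNotPresent",
				Args:            []string{"-cpus", "2"},
				Env: []v1.EnvVar{
					{Name: "RASA_NLU_CONFIG", Value: os.Getenv("RASA_NLU_CONFIG")},
					{Name: "RASA_NLU_DATA", Value: os.Getenv("RASA_NLU_DATA")},
				},
				// resource.MustParse panics on malformed quantity strings,
				// so invalid values fail fast at startup.
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						v1.ResourceCPU: resource.MustParse("1"),
					},
					Requests: v1.ResourceList{
						v1.ResourceCPU: resource.MustParse("0.5"),
					},
				},
			},
		},
		RestartPolicy: v1.RestartPolicyOnFailure,
	}
}

func main() {
	// Hypothetical pod name and image, just to exercise the function.
	spec := buildPodSpec("rasa-nlu", "rasa-nlu-image:latest")
	fmt.Printf("%+v\n", spec.Containers[0].Resources)
}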
Here you can find the definition of v1.ResourceRequirements{}:
// ResourceRequirements describes the compute resource requirements.
type ResourceRequirements struct {
    // Limits describes the maximum amount of compute resources allowed.
    // More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
    // +optional
    Limits ResourceList `json:"limits,omitempty" protobuf:"bytes,1,rep,name=limits,casttype=ResourceList,castkey=ResourceName"`
    // Requests describes the minimum amount of compute resources required.
    // If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
    // otherwise to an implementation-defined value.
    // More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
    // +optional
    Requests ResourceList `json:"requests,omitempty" protobuf:"bytes,2,rep,name=requests,casttype=ResourceList,castkey=ResourceName"`
}
ResourceList:
// ResourceList is a set of (resource name, quantity) pairs.
type ResourceList map[ResourceName]resource.Quantity
Here you can find a test file with an example of its use.
The Sourcegraph plugin for Chrome or Firefox can be very helpful for working with source code on GitHub.

!ImportValue in Serverless Framework not working

I'm attempting to export a DynamoDB StreamArn from a stack created in CloudFormation, then reference the export using !ImportValue in serverless.yml.
But I'm getting this error message:
unknown tag !<!ImportValue> in "/codebuild/output/src/serverless.yml"
The cloudformation and serverless.yml are defined as below. Any help appreciated.
StackA.yml
AWSTemplateFormatVersion: 2010-09-09
Description: Resources for the registration site
Resources:
  ClientTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: client
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 2
        WriteCapacityUnits: 2
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
Outputs:
  ClientTableStreamArn:
    Description: The ARN for My ClientTable Stream
    Value: !GetAtt ClientTable.StreamArn
    Export:
      Name: my-client-table-stream-arn
serverless.yml
service: my-service
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeStream
        - dynamodb:GetRecords
        - dynamodb:GetShardIterator
        - dynamodb:ListStreams
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:*:*:table/client
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: !ImportValue my-client-table-stream-arn
          batchSize: 1
Solved by using ${cf:stackName.outputKey}, as in the sketch below.
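For the stack above, that looks something like the following — assuming the CloudFormation template was deployed under the (hypothetical) stack name StackA; note that the cf: variable references the stack's output key (ClientTableStreamArn), not the export name:

functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: ${cf:StackA.ClientTableStreamArn}
          batchSize: 1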
I struggled with this as well, and what did the trick for me was:
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn:
            !ImportValue my-client-table-stream-arn
          batchSize: 1
Note that the intrinsic function !ImportValue is on a new line and indented; otherwise the whole event is ignored when cloudformation-template-update-stack.json is generated.
It appears that you're using the !ImportValue shorthand for CloudFormation YAML. My understanding is that when CloudFormation parses the YAML, !ImportValue is treated as an alias for Fn::ImportValue. According to the Serverless Function documentation, it appears that they should support the Fn::ImportValue form of imports.
Based on the documentation for Fn::ImportValue, you should be able to reference your export like:
- stream:
    type: dynamodb
    arn: {"Fn::ImportValue": "my-client-table-stream-arn"}
    batchSize: 1
Hope that helps solve your issue.
I couldn't find this clearly documented anywhere, but what resolved the issue for me is: the variables which need to be exposed/exported in Outputs must have an "Export" property with a "Name" sub-property:
In serverless.ts
resources: {
  Resources: resources["Resources"],
  Outputs: {
    // For an event bus
    EventBusName: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME",
      },
      Value: {
        Ref: "UNIQUE_EVENTBUS_NAME",
      },
    },
    // For something like SQS, or anything else, it would be the same
    IDVerifyQueueARN: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_SQS_NAME",
      },
      Value: { "Fn::GetAtt": ["UNIQUE_SQS_NAME", "Arn"] },
    },
  },
}
Once this is deployed, you can check that the exports are present by running the following in the terminal (using your associated AWS credentials):
aws cloudformation list-exports
The output list should then contain an entry with a Name property:
{
  "ExportingStackId": "***",
  "Name": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but will be populated with your service and stage)
  "Value": "***"
}
If the above is successful, you can then reference it with "Fn::ImportValue", e.g.:
"Resource": {
"Fn::ImportValue": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but will be populated with your service and stage)
}