I'm unable to get a Spring Cloud-based AWS Lambda function with an SQS message trigger to work. I'm using the Spring Cloud Function AWS adapter version 2.0.1.RELEASE and attempting to deploy to the AWS eu-west-2 region.
My SpringBootRequestHandler is defined as follows:
import org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
public class ReplicationHandler extends SpringBootRequestHandler<SQSEvent, String> {
}
My @Bean function looks as follows:
import java.util.function.Function;
import org.springframework.context.annotation.Bean;

@Bean
public Function<SQSEvent, String> handleEvent() {
    return value -> processEvent(value);
}
I feed this with the following test event:
{
"Records": [
{
"messageId": "02a4e04b-a1d2-417a-b073-56123be35ac6",
"receiptHandle": "AQEB0fsSc76vU9Y6vQEz",
"body": "hello world",
"attributes": {
"ApproximateReceiveCount": "1",
"SentTimestamp": "1553860061037",
"SenderId": "AIDAIVEA3AGEU7NF6DRAG",
"ApproximateFirstReceiveTimestamp": "1553860061042"
},
"messageAttributes": {},
"md5OfBody": "a4d19d8b1019e01bb875eea6232bf2f1",
"eventSource": "aws:sqs",
"eventSourceARN": "arn:aws:sqs:eu-west-2:XXXXX:YYYYY",
"awsRegion": "eu-west-2"
}
]
}
When I run this, I get the following error:
{
"errorMessage": "reactor.core.publisher.FluxJust cannot be cast to com.amazonaws.services.lambda.runtime.events.SQSEvent",
"errorType": "java.lang.ClassCastException",
"stackTrace": [
"org.springframework.cloud.function.adapter.aws.SpringFunctionInitializer.apply(SpringFunctionInitializer.java:132)",
"org.springframework.cloud.function.adapter.aws.SpringBootRequestHandler.handleRequest(SpringBootRequestHandler.java:48)"
]
}
Does anyone have suggestions about what's going wrong here? Alternatively, pointers to any working samples for this exact scenario would be appreciated.
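One direction worth trying, given that the stack trace shows the adapter handing the function a reactor.core.publisher.Flux rather than a raw SQSEvent: declare the function with reactive type signatures so the wrapper is consumed directly instead of cast. This is only a sketch built from the snippets above (it assumes the existing processEvent method), not a confirmed fix for 2.0.1.RELEASE:
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;
import reactor.core.publisher.Flux;

@Bean
public Function<Flux<SQSEvent>, Flux<String>> handleEvent() {
    // Map each incoming event through the existing processEvent method;
    // the adapter's Flux wrapper is accepted as-is, so no cast is needed.
    return events -> events.map(this::processEvent);
}
The handler class above should not need to change for this, though I have not verified that against 2.0.1 specifically.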
I am trying to connect to MongoDB using the Node.js MongoDB driver, and I'm doing this in a Cypress project. I get the error MongoRuntimeError: Unable to parse localhost:27017 with URL. Below is a simplified version of my code.
import {MongoClient} from 'mongodb';
export class SomeRepository {
static insertSomething(): void {
// Error in the line below: MongoRuntimeError Unable to parse localhost:27017 with URL
const client = new MongoClient('mongodb://localhost:27017');
}
}
MongoDB is running (I can connect from the terminal). I also tried replacing localhost with 127.0.0.1 and adding the authSource parameter to the connection string.
The reason I'm mentioning Cypress is that in a simple Node project that only connects to MongoDB, everything works as expected. My package.json is below:
{
"name": "e2e",
"version": "1.0.0",
"description": "",
"main": "index.js",
"dependencies": {
"cypress": "10.8.0",
"cypress-wait-until": "1.7.2",
"headers-utils": "3.0.2",
"mongodb": "4.10.0",
"otplib": "12.0.1",
"pg": "8.7.3",
"pg-native": "3.0.1",
"typescript": "4.9.3"
}
}
The error is in the way you are passing the URL; it needs to follow a specific pattern. To connect to MongoDB, the connection string has the pattern below:
Format:
mongodb://<user>:<password>@<host>
Format with filled values:
mongodb://root:mypassword@localhost:27017/
The reason it's not working is that you're calling a Node.js library in a Cypress test. Cypress tests run inside a browser and cannot run Node.js libraries.
If you want to execute Node.js code from Cypress, you must create a Cypress task: https://docs.cypress.io/api/commands/task#Syntax
// cypress.config.js
const { defineConfig } = require('cypress');
const { SomeRepository } = require('./file/somewhere');

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      // Tasks run in the Node.js process, so Node-only libraries work here.
      on('task', {
        insertSomething() {
          return SomeRepository.insertSomething();
        }
      });
    }
  }
});
// to call in a cypress test
it('test', function () {
  cy.task('insertSomething').then(value => {
    /* do something */
  });
});
I'm trying to connect IBM Cloud Functions with a Watson Assistant dialog as a web_action, so I have specified the web_action as follows in the Watson dialog JSON editor.
"actions": [
{
"name": "rajesh#heltha.co_dev/default/callKinvey",
"type": "web_action",
"parameters": {
},
"credentials": "$private.mycredential",
"result_variable": "context.my_input_returned"
}
]
Now, the issue is that while testing the assistant I'm getting the following error:
Internal error: Content-type can not be retrieved. (and there is 1 more error in the log)
The following is my function, created on IBM Cloud and enabled as a web action:
/**
*
* main() will be run when you invoke this action
*
* @param Cloud Functions actions accept a single parameter, which must be a JSON object.
*
* @return The output of this action, which must be a JSON object.
*
*/
function main(params) {
return { message: 'Hello World' };
}
The curl command for my function is:
curl -u API-KEY -X POST https://us-south.functions.cloud.ibm.com/api/v1/namespaces/rajesh@heltha.co_dev/actions/callKinvey?blocking=true
The easiest way to solve this type of error is to append .json to your endpoint.
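For example (a sketch based on the dialog JSON above, with everything else unchanged), the web_action name would become:
"actions": [
  {
    "name": "rajesh@heltha.co_dev/default/callKinvey.json",
    "type": "web_action",
    "parameters": {
    },
    "credentials": "$private.mycredential",
    "result_variable": "context.my_input_returned"
  }
]
The same .json suffix convention applies when invoking a web action by its public URL.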
I'm using Serverless to deploy my AWS cloudformation stack. On one of my tables, I enable streams via "StreamEnabled": true. When this is enabled, I get an error on deployment: Encountered unsupported property StreamEnabled.
If I remove the property, I get a validation exception: ValidationException: Stream StreamEnabled was null.
I found a GitHub issue that was addressed and apparently fixed (here), but after upgrading to v1.3, I'm still getting the same errors on deployment.
Can anyone lend insight as to what the issue may be?
It is enabled by default. You can check it from the shell:
aws dynamodbstreams list-streams
{
"Streams": [
{
"TableName": "MyTableName-dev",
"StreamArn": "arn:aws:dynamodb:eu-west-2:0000000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995",
"StreamLabel": "2018-10-26T15:06:25.995"
}
]
}
And:
aws dynamodbstreams describe-stream --stream-arn "arn:aws:dynamodb:eu-west-2:00000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995"
{
"StreamDescription": {
"StreamLabel": "2018-10-26T15:06:25.995",
"StreamStatus": "ENABLED",
"TableName": "MyTableName-dev",
"Shards": [
{
"ShardId": "shardId-000000000000000-0000000f",
"SequenceNumberRange": {
"StartingSequenceNumber": "00000000000000000000000"
}
}
],
"CreationRequestDateTime": 1540566385.987,
"StreamArn": "arn:aws:dynamodb:eu-west-2:0000000000000000:table/MyTableName-dev/stream/2018-10-26T15:06:25.995",
"KeySchema": [
{
"KeyType": "HASH",
"AttributeName": "application_id"
}
],
"StreamViewType": "KEYS_ONLY"
}
}
This is not a solution as such, but having found that fact, I realized that I don't actually have an issue.
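For anyone hitting the same pair of errors, a possibly relevant detail: on an AWS::DynamoDB::Table resource in raw CloudFormation, the StreamSpecification block accepts only StreamViewType (StreamEnabled is an SDK/API-level flag, not a template property). A sketch of a table resource consistent with the KEYS_ONLY stream shown above (the resource name, key attribute type, and throughput values are assumptions):
"MyTableName": {
  "Type": "AWS::DynamoDB::Table",
  "Properties": {
    "AttributeDefinitions": [
      { "AttributeName": "application_id", "AttributeType": "S" }
    ],
    "KeySchema": [
      { "AttributeName": "application_id", "KeyType": "HASH" }
    ],
    "ProvisionedThroughput": {
      "ReadCapacityUnits": 1,
      "WriteCapacityUnits": 1
    },
    "StreamSpecification": {
      "StreamViewType": "KEYS_ONLY"
    }
  }
}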
I am using the VMware vCenter REST API to deploy new virtual machines from OVF library items. Part of the API allows for additional_parameters, but I am unable to get it to function properly. Specifically, I would like to set the PropertyParams for custom OVF template properties.
When deploying VM from OVF, I am using the following REST API:
POST https://{server}/rest/com/vmware/vcenter/ovf/library-item/id:{ovf_library_item_id}?~action=deploy
I have tried many structures, and I either end up with the POST succeeding but the parameters being completely ignored, or with a 500 Internal Server Error with a message about failing to convert the properties structure:
Could not convert field 'properties' of structure 'com.vmware.vcenter.ovf.property_params'
The payload that seems correct from the documentation (but fails with the error above):
deployment_spec : {
/* ... */
additional_parameters : [
{
type : 'PropertyParams',
properties : [
{
id : 'my_property_name',
value : 'foo',
}
]
}
]
}
Given an OVF that contains the following:
<ProductSection>
<Info>Information about the installed software</Info>
<Product>MyProduct</Product>
<Vendor>MyCompany</Vendor>
<Version>1.0</Version>
<Category>Config</Category>
<Property ovf:userConfigurable="true" ovf:type="string" ovf:key="my_property_name" ovf:value="">
<Label>My Property</Label>
<Description>A custom property</Description>
</Property>
</ProductSection>
This also fails for other property types such as boolean.
Note that I have posted on the vCenter forums as well.
I had the same issue; I managed to solve it by browsing the vAPI structure at /com/vmware/vapi/metadata/metamodel/structure/id:<idstructure>
Here are my findings:
First, get your properties structure by using the filter API:
https://{{vc}}/rest/com/vmware/vcenter/ovf/library-item/id:300401a5-4561-4c3d-ac67-67bc7a1a6
Then, to deploy, use the class com.vmware.vcenter.ovf.property_params. It will be clearer with this example:
{
"deployment_spec": {
"accept_all_EULA": true,
"name": "clientok",
"default_datastore_id": "datastore-10",
"additional_parameters": [
{
"#class": "com.vmware.vcenter.ovf.property_params",
"properties":
[
{
"instance_id": "",
"class_id": "",
"description": "The gateway IP for this virtual appliance.",
"id": "gateway",
"label": "Default Gateway Address",
"category": "LAN",
"type": "ip",
"value": "10.1.2.1",
"ui_optional": true
}
],
"type": "PropertyParams"
}
]
  }
}
I have already created a Service Fabric cluster with Azure Diagnostics, and it is currently functional with my services deployed into it. One of my services has an ETW EventSource that my service code already uses to write service-related events, and I would like to start collecting those events. Since the cluster is already enabled for Azure Diagnostics and my services are already deployed, I think it is simply a matter of adding my EventSource to the ETW providers in this cluster. Here is the exported template (only the partial relevant to Azure Diagnostics is shown):
{
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": "50000",
"EtwProviders": {
"EtwEventSourceProviderConfiguration": [
{
"provider": "Microsoft-ServiceFabric-Actors",
"scheduledTransferKeywordFilter": "1",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableActorEventTable"
}
},
{
"provider": "Microsoft-ServiceFabric-Services",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
},
{
"provider": "Bb.ServiceFabric.Infrastructure.Container",
"scheduledTransferPeriod": "PT1M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
}
],
"EtwManifestProviderConfiguration": [
{
"provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
"scheduledTransferLogLevelFilter": "Information",
"scheduledTransferKeywordFilter": "4611686018427387904",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricSystemEventTable"
}
}
]
}
}
},
"StorageAccount": "sfdgsmsraghuplaygrou6827"
}
},
"name": "VMDiagnosticsVmExt_vmNodeType0Name"
}
I would like to update the EtwProviders/EtwEventSourceProviderConfiguration above to contain the following section (MyCompany.MyServices.MyStatelessService is the name of my service's EventSource):
{
"provider": "MyCompany.MyServices.MyStatelessService",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
}
Here are my questions:
Is this the correct way of inserting an ETW provider/EventSource (from my service) into an existing cluster (one that is already enabled with Azure Diagnostics)?
Can I add this event source (as an ETW event source provider) using PowerShell command(s)?
If so, what is the exact PowerShell command (using all the information from the above code fragment)?
Note: I am using .NET Framework 4.5.2.
All seems good with the added configuration above. Just be aware that for EtwProviders the eventDestination cannot contain hyphens (-); yours doesn't, so you are OK.
To update the Windows Azure Diagnostics (WAD) agent configuration, you can use either PowerShell or Cloud Explorer in Visual Studio.
For the former, simply update the ARM template and use the New-AzureRmResourceGroupDeployment cmdlet. See here for further information: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-diagnostics-how-to-setup-wad/#update-diagnostics-to-collect-and-upload-logs-from-new-eventsource-channels
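For example (a sketch only; the resource group and template file names below are placeholders for your own):
# Redeploy the updated ARM template so the WAD extension picks up the new provider.
New-AzureRmResourceGroupDeployment -ResourceGroupName "my-sf-cluster-rg" `
    -TemplateFile .\cluster-template.json `
    -TemplateParameterFile .\cluster-template.parameters.json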
For Cloud Explorer in Visual Studio: browse to your virtual machine scale set (as this is the Azure resource that holds the WAD configuration), right-click, and choose Update Diagnostics. In the dialog shown, you have the option to upload a private and a public configuration file. Simply take a .json document containing the {"WadCfg": {}} element, and upload that as the public configuration.
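As a sketch, a public configuration document assembled from the fragments above could look like the following (only the new provider is shown; in practice, keep the existing providers alongside it):
{
  "WadCfg": {
    "DiagnosticMonitorConfiguration": {
      "overallQuotaInMB": "50000",
      "EtwProviders": {
        "EtwEventSourceProviderConfiguration": [
          {
            "provider": "MyCompany.MyServices.MyStatelessService",
            "scheduledTransferPeriod": "PT5M",
            "DefaultEvents": {
              "eventDestination": "ServiceFabricReliableServiceEventTable"
            }
          }
        ]
      }
    }
  },
  "StorageAccount": "sfdgsmsraghuplaygrou6827"
}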
If you need to update it, the private configuration specifies the storage account name and access key:
{
"storageAccountName": "",
"storageAccountKey": "",
"storageAccountEndPoint": "https://core.windows.net",
}
Hope this helps.
Mikkel