Setting up AWS IoT using Serverless Framework for Multiple IoT Devices - aws-cloudformation

My goal is to create a system on AWS using the serverless framework for multiple IoT devices to send JSON payloads to AWS IoT, which in turn will be saved to DynamoDB.
I am very new to using AWS outside of creating EC2 servers and this is my first project using the serverless framework.
After referring to an example, the modified version that I came up with is posted below.
Problem: It appears that the example is for just 1 device connecting to AWS IoT, which I concluded from the hardcoded IoT Thing certificate, for example:
SensorPolicyPrincipalAttachmentCert:
  Type: AWS::IoT::PolicyPrincipalAttachment
  Properties:
    PolicyName: { Ref: SensorThingPolicy }
    Principal: ${self:custom.iotCertificateArn}

SensorThingPrincipalAttachmentCert:
  Type: "AWS::IoT::ThingPrincipalAttachment"
  Properties:
    ThingName: { Ref: SensorThing }
    Principal: ${self:custom.iotCertificateArn}
If this conclusion is correct and serverless.yml is configured for only 1 Thing, what modifications can we make so that more than 1 Thing can be used?
Maybe set up all the Things outside of serverless.yml, which would mean removing just SensorPolicyPrincipalAttachmentCert and SensorThingPrincipalAttachmentCert?
Also, what should the Resource properties in SensorThingPolicy be set to? They are currently set to "*"; is this too broad, or is there a way to limit them to just Things?
serverless.yml
service: garden-iot

provider:
  name: aws
  runtime: nodejs6.10
  region: us-east-1

# load custom variables from a file
custom: ${file(./vars-dev.yml)}

resources:
  Resources:
    LocationData:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: location-data-${opt:stage}
        AttributeDefinitions:
          - AttributeName: ClientId
            AttributeType: S
          - AttributeName: Timestamp
            AttributeType: S
        KeySchema:
          - AttributeName: ClientId
            KeyType: HASH
          - AttributeName: Timestamp
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    SensorThing:
      Type: AWS::IoT::Thing
      Properties:
        AttributePayload:
          Attributes:
            SensorType: soil
    SensorThingPolicy:
      Type: AWS::IoT::Policy
      Properties:
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: ["iot:Connect"]
              Resource: ["${self:custom.sensorThingClientResource}"]
            - Effect: "Allow"
              Action: ["iot:Publish"]
              Resource: ["${self:custom.sensorThingSoilTopicResource}"]
    SensorPolicyPrincipalAttachmentCert:
      Type: AWS::IoT::PolicyPrincipalAttachment
      Properties:
        PolicyName: { Ref: SensorThingPolicy }
        Principal: ${self:custom.iotCertificateArn}
    SensorThingPrincipalAttachmentCert:
      Type: "AWS::IoT::ThingPrincipalAttachment"
      Properties:
        ThingName: { Ref: SensorThing }
        Principal: ${self:custom.iotCertificateArn}
    IoTRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - iot.amazonaws.com
              Action:
                - sts:AssumeRole
    IoTRolePolicies:
      Type: AWS::IAM::Policy
      Properties:
        PolicyName: IoTRole_Policy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:PutItem
              Resource: "*"
            - Effect: Allow
              Action:
                - lambda:InvokeFunction
              Resource: "*"
        Roles: [{ Ref: IoTRole }]

EDIT 05/09/2018: I've found this blog post, which describes my approach pretty well: Ensure Secure Communication with AWS IoT Core Using the Certificate Vending Machine Reference Application
--
You could take a look at Just-in-Time Provisioning or build your own solution based on Programmatic Provisioning.
I have dealt with this topic many times and have come to realize that which approach makes more sense depends a lot on the use case. Security is also an aspect to keep an eye on: you don't want a public API responsible for JIT device registration to be reachable from the whole Internet.
A simple scenario based on Programmatic Provisioning could look like this: you build a thing (maybe a sensor) that should be able to connect to AWS IoT, and you have an in-house provisioning process.
Simple provisioning process:
Thing built
Thing has a serial number
Thing registers itself via an internal server
The registration code running on the server could look something like this (JS + AWS JS SDK):
// Modules
const AWS = require('aws-sdk')

// AWS
const iot = new AWS.Iot({ region: process.env.region })

// Config
const templateBodyJson = require('./register-thing-template-body.json')

// registerThing
const registerThing = async ({ serialNumber = null } = {}) => {
  if (!serialNumber) throw new Error('`serialNumber` required!')

  const {
    certificateArn = null,
    certificateId = null,
    certificatePem = null,
    keyPair: {
      PrivateKey: privateKey = null,
      PublicKey: publicKey = null
    } = {}
  } = await iot.createKeysAndCertificate({ setAsActive: true }).promise()

  const registerThingParams = {
    templateBody: JSON.stringify(templateBodyJson),
    parameters: {
      ThingName: serialNumber,
      SerialNumber: serialNumber,
      CertificateId: certificateId
    }
  }
  const { resourceArns = null } = await iot.registerThing(registerThingParams).promise()

  return {
    certificateArn,
    certificateId,
    certificatePem,
    privateKey,
    publicKey,
    resourceArns
  }
}

const unregisterThing = async ({ serialNumber = null } = {}) => {
  if (!serialNumber) throw new Error('`serialNumber` required!')

  try {
    const thingName = serialNumber
    const { principals: thingPrincipals } = await iot.listThingPrincipals({ thingName }).promise()
    const certificates = thingPrincipals.map((tp) => ({ certificateId: tp.split('/').pop(), certificateArn: tp }))

    for (const { certificateId, certificateArn } of certificates) {
      await iot.detachThingPrincipal({ thingName, principal: certificateArn }).promise()
      await iot.updateCertificate({ certificateId, newStatus: 'INACTIVE' }).promise()
      await iot.deleteCertificate({ certificateId, forceDelete: true }).promise()
    }

    await iot.deleteThing({ thingName }).promise()

    return {
      deleted: true,
      thingPrincipals
    }
  } catch (err) {
    // Already deleted!
    if (err.code && err.code === 'ResourceNotFoundException') {
      return {
        deleted: true,
        thingPrincipals: []
      }
    }
    throw err
  }
}
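A hypothetical call site for the functions above (the serial number is made up) could hand the generated credentials over to the manufacturing/provisioning line like this:

// Hypothetical usage sketch: provision one device and collect its credentials.
const provisionDevice = async () => {
  const { certificatePem, privateKey, resourceArns } = await registerThing({ serialNumber: 'SN-0001' })
  // certificatePem and privateKey need to be stored on the device itself.
  console.log('Registered:', resourceArns)
}

provisionDevice().catch(console.error)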
register-thing-template-body.json:
{
  "Parameters": {
    "ThingName": {
      "Type": "String"
    },
    "SerialNumber": {
      "Type": "String"
    },
    "CertificateId": {
      "Type": "String"
    }
  },
  "Resources": {
    "thing": {
      "Type": "AWS::IoT::Thing",
      "Properties": {
        "ThingName": {
          "Ref": "ThingName"
        },
        "AttributePayload": {
          "serialNumber": {
            "Ref": "SerialNumber"
          }
        },
        "ThingTypeName": "NewDevice",
        "ThingGroups": ["NewDevices"]
      }
    },
    "certificate": {
      "Type": "AWS::IoT::Certificate",
      "Properties": {
        "CertificateId": {
          "Ref": "CertificateId"
        }
      }
    },
    "policy": {
      "Type": "AWS::IoT::Policy",
      "Properties": {
        "PolicyName": "DefaultNewDevicePolicy"
      }
    }
  }
}
Make sure you have the "NewDevice" Thing type, the "NewDevices" Thing group, and the "DefaultNewDevicePolicy" policy in place. Also keep in mind that ThingName = SerialNumber (important for unregisterThing).
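If those prerequisites don't exist yet, a one-time bootstrap with the same SDK could look roughly like the sketch below. The policy document is a placeholder assumption, not part of the original answer; tighten its Action/Resource before real use.

// Assumed one-time setup for the thing type, thing group and policy the template references.
const AWS = require('aws-sdk')
const iot = new AWS.Iot({ region: process.env.region })

const bootstrap = async () => {
  await iot.createThingType({ thingTypeName: 'NewDevice' }).promise()
  await iot.createThingGroup({ thingGroupName: 'NewDevices' }).promise()
  await iot.createPolicy({
    policyName: 'DefaultNewDevicePolicy',
    // Placeholder policy document; scope this down for production.
    policyDocument: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{ Effect: 'Allow', Action: ['iot:Connect', 'iot:Publish', 'iot:Subscribe', 'iot:Receive'], Resource: '*' }]
    })
  }).promise()
}

bootstrap().catch(console.error)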

Related

Get IP address from Azure Private Endpoint using Pulumi TypeScript API

I have a Private Endpoint created in my Azure subscription. If I look into the Azure Portal I can see that the private IP assigned to my Private Endpoint NIC is 10.0.0.4.
But how can I get the IP address value using the Pulumi TypeScript API, so I can use it in my scripts?
const privateEndpoint = new network.PrivateEndpoint("privateEndpoint", {
    privateLinkServiceConnections: [{
        groupIds: ["sites"],
        name: "privateEndpointLink1",
        privateLinkServiceId: backendApp.id,
    }],
    resourceGroupName: resourceGroup.name,
    subnet: {
        id: subnet.id,
    }
});

export let ipc = privateEndpoint.networkInterfaces.apply(networkInterfaces => networkInterfaces[0].ipConfigurations)
console.log(ipc)
This is the current output for that ipc variable:
OutputImpl {
  __pulumiOutput: true,
  resources: [Function (anonymous)],
  allResources: [Function (anonymous)],
  isKnown: Promise { <pending> },
  isSecret: Promise { <pending> },
  promise: [Function (anonymous)],
  toString: [Function (anonymous)],
  toJSON: [Function (anonymous)]
}
You can't log a Pulumi Output until it's resolved, so if you change your code slightly, this will work:
const privateEndpoint = new network.PrivateEndpoint("privateEndpoint", {
    privateLinkServiceConnections: [{
        groupIds: ["sites"],
        name: "privateEndpointLink1",
        privateLinkServiceId: backendApp.id,
    }],
    resourceGroupName: resourceGroup.name,
    subnet: {
        id: subnet.id,
    }
});

export let ipc = privateEndpoint.networkInterfaces.apply(networkInterfaces => console.log(networkInterfaces[0].ipConfigurations))
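If you only need the address itself, a hedged sketch along the same lines (assuming the azure-native NetworkInterfaceResponse shape, where each IP configuration exposes a privateIPAddress field) would be:

// Sketch: drill down to the first private IP; read it later with `pulumi stack output privateIp`.
export const privateIp = privateEndpoint.networkInterfaces.apply(
    nics => nics[0].ipConfigurations?.[0]?.privateIPAddress
);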
I found the solution to my problem.
I was publishing a Static Web App to an incorrect Private DNS Zone. The correct one should be privatelink.1.azurestaticapps.net.
Once that was fixed, I got the data that I needed.
{
  name: 'config1',
  privateDnsZoneId: '/subscriptions/<subscription>/resourceGroups/rg-static-webappc0811aae/providers/Microsoft.Network/privateDnsZones/privatelink.1.azurestaticapps.net',
  recordSets: [
    {
      fqdn: 'thankful-sand-084c7860f.privatelink.1.azurestaticapps.net',
      ipAddresses: [Array],
      provisioningState: 'Succeeded',
      recordSetName: 'thankful-sand-084c7860f',
      recordType: 'A',
      ttl: 10
    }
  ]
}
{
  fqdn: 'thankful-sand-084c7860f.privatelink.1.azurestaticapps.net',
  ipAddresses: [ '10.0.0.4' ],
  provisioningState: 'Succeeded',
  recordSetName: 'thankful-sand-084c7860f',
  recordType: 'A',
  ttl: 10
}

Mongoose schema for location in app created with @react-google-maps/api

What should the mongoose schema look like for a part of my dataset that looks like this:
"location": {
"lat": 59.369761,
"lng": 13.4867216
},
The format above is chosen to match #react-google-maps/api when used it as in this tutorial https://medium.com/#allynak/how-to-use-google-map-api-in-react-app-edb59f64ac9d
I have tried the variants below without success (either the app breaks or MongoDB skips the location key when seeding the database).
location: {
  type: {
    type: Schema.Types.Decimal128,
    type: Schema.Types.Decimal128 }
}

location: {
  type: {
    type: Decimal128,
    type: Decimal128 }
}

location: {
  type: {
    type: mongoose.Decimal128,
    type: mongoose.Decimal128 }
}

location: {
  type: {
    type: mongoose.Types.Decimal128,
    type: mongoose.Types.Decimal128 }
}

location: {
  type: {
    type: Number,
    type: Number }
}

location: {
  type: mongoose.Types.Decimal128,
  type: mongoose.Types.Decimal128
}
This works:
location: { type: Object }
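For a stricter alternative, a minimal sketch of an explicitly typed sub-document (field names reused from the question, assuming plain Number precision is acceptable for coordinates) would be:

// Sketch: nested lat/lng typed as Numbers instead of a loose Object.
const mongoose = require('mongoose');

const placeSchema = new mongoose.Schema({
  location: {
    lat: { type: Number, required: true },
    lng: { type: Number, required: true }
  }
});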

Enabling CORS for AWS API Gateway with the AWS CDK

I'm trying to build an application with the AWS CDK. If I were building it by hand using the AWS Console, I would normally enable CORS in API Gateway.
Even though I can export the Swagger out of API Gateway and have found numerous options to generate a Mock endpoint for the OPTIONS method, I don't see how to do this with the CDK. Currently I am trying:
const apigw = require('@aws-cdk/aws-apigateway');
where:
var api = new apigw.RestApi(this, 'testApi');
and defining the OPTIONS method like:
const testResource = api.root.addResource('testresource');

var mock = new apigw.MockIntegration({
    type: "Mock",
    methodResponses: [
        {
            statusCode: "200",
            responseParameters: {
                "Access-Control-Allow-Headers": "string",
                "Access-Control-Allow-Methods": "string",
                "Access-Control-Allow-Origin": "string"
            }
        }
    ],
    integrationResponses: [
        {
            statusCode: "200",
            responseParameters: {
                "Access-Control-Allow-Headers": "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'",
                "Access-Control-Allow-Origin": "'*'",
                "Access-Control-Allow-Methods": "'GET,POST,OPTIONS'"
            }
        }
    ],
    requestTemplates: {
        "application/json": "{\"statusCode\": 200}"
    }
});

testResource.addMethod('OPTIONS', mock);
But this doesn't deploy. The error message I get from the CloudFormation stack deploy when I run "cdk deploy" is:
Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: Access-Control-Allow-Origin] (Service: AmazonApiGateway; Status Code: 400; Error Code: BadRequestException;
Ideas?
The recent change has made enabling CORS simpler:
const restApi = new apigw.RestApi(this, `api`, {
    defaultCorsPreflightOptions: {
        allowOrigins: apigw.Cors.ALL_ORIGINS
    }
});
Haven't tested this myself, but based on this answer, it seems like you would need to use a slightly different set of keys when you define your MOCK integration:
const api = new apigw.RestApi(this, 'api');

const method = api.root.addMethod('OPTIONS', new apigw.MockIntegration({
    integrationResponses: [
        {
            statusCode: "200",
            responseParameters: {
                "method.response.header.Access-Control-Allow-Headers": "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'",
                "method.response.header.Access-Control-Allow-Methods": "'GET,POST,OPTIONS'",
                "method.response.header.Access-Control-Allow-Origin": "'*'"
            },
            responseTemplates: {
                "application/json": ""
            }
        }
    ],
    passthroughBehavior: apigw.PassthroughBehavior.Never,
    requestTemplates: {
        "application/json": "{\"statusCode\": 200}"
    },
}));

// since "methodResponses" is not supported by apigw.Method (https://github.com/awslabs/aws-cdk/issues/905)
// we will need to use an escape hatch to override the property
const methodResource = method.findChild('Resource') as apigw.cloudformation.MethodResource;
methodResource.propertyOverrides.methodResponses = [
    {
        statusCode: '200',
        responseModels: {
            'application/json': 'Empty'
        },
        responseParameters: {
            'method.response.header.Access-Control-Allow-Headers': true,
            'method.response.header.Access-Control-Allow-Methods': true,
            'method.response.header.Access-Control-Allow-Origin': true
        }
    }
]
Would be useful to be able to enable CORS using a more friendly API.
Edit: With updates to CDK it is no longer necessary to use an escape hatch. Please see other answers as they are much cleaner.
Original answer:
This version, originally created by Heitor Vital on GitHub, uses only native constructs.
export function addCorsOptions(apiResource: apigateway.IResource) {
    apiResource.addMethod('OPTIONS', new apigateway.MockIntegration({
        integrationResponses: [{
            statusCode: '200',
            responseParameters: {
                'method.response.header.Access-Control-Allow-Headers': "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,X-Amz-User-Agent'",
                'method.response.header.Access-Control-Allow-Origin': "'*'",
                'method.response.header.Access-Control-Allow-Credentials': "'false'",
                'method.response.header.Access-Control-Allow-Methods': "'OPTIONS,GET,PUT,POST,DELETE'",
            },
        }],
        passthroughBehavior: apigateway.PassthroughBehavior.NEVER,
        requestTemplates: {
            "application/json": "{\"statusCode\": 200}"
        },
    }), {
        methodResponses: [{
            statusCode: '200',
            responseParameters: {
                'method.response.header.Access-Control-Allow-Headers': true,
                'method.response.header.Access-Control-Allow-Methods': true,
                'method.response.header.Access-Control-Allow-Credentials': true,
                'method.response.header.Access-Control-Allow-Origin': true,
            },
        }]
    })
}
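Hypothetical usage (the resource name is made up), applied once per resource that should answer preflight requests:

// Sketch: attach the OPTIONS mock to an example resource.
const api = new apigateway.RestApi(this, 'api');
const items = api.root.addResource('items'); // hypothetical resource
addCorsOptions(items);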
I also ported the same code to python using his version as a guidepost.
def add_cors_options(api_resource):
    """Add response to OPTIONS to enable CORS on an API resource."""
    mock = apigateway.MockIntegration(
        integration_responses=[{
            'statusCode': '200',
            'responseParameters': {
                'method.response.header.Access-Control-Allow-Headers':
                    "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,X-Amz-User-Agent'",
                'method.response.header.Access-Control-Allow-Origin': "'*'",
                'method.response.header.Access-Control-Allow-Credentials': "'false'",
                'method.response.header.Access-Control-Allow-Methods': "'OPTIONS,GET,PUT,POST,DELETE'",
            }
        }],
        passthrough_behavior=apigateway.PassthroughBehavior.NEVER,
        request_templates={
            "application/json": "{\"statusCode\": 200}"
        }
    )
    method_response = apigateway.MethodResponse(
        status_code='200',
        response_parameters={
            'method.response.header.Access-Control-Allow-Headers': True,
            'method.response.header.Access-Control-Allow-Methods': True,
            'method.response.header.Access-Control-Allow-Credentials': True,
            'method.response.header.Access-Control-Allow-Origin': True
        }
    )
    api_resource.add_method(
        'OPTIONS',
        integration=mock,
        method_responses=[method_response]
    )
BACKGROUND
I came across this answer while trying to implement aws_api_gateway_integration_response in Terraform and stumbled on the solution by accident.
PROBLEM
I was getting this error message:
Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: POST,GET,OPTIONS]
In the aws_api_gateway_integration_response resource I had the response_parameters key as:
response_parameters = {
  "method.response.header.Access-Control-Allow-Headers" = "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token"
  "method.response.header.Access-Control-Allow-Origin"  = "*"
  "method.response.header.Access-Control-Allow-Methods" = "POST,GET,OPTIONS"
  # "method.response.header.Access-Control-Allow-Credentials" = "false"
}
I thought everything was fine as I assumed the double quotes were all that Terraform needed. However, that was not the case.
SOLUTION
I had to add single quotes around the values inside the double quotes, like this:
response_parameters = {
  "method.response.header.Access-Control-Allow-Headers" = "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
  "method.response.header.Access-Control-Allow-Origin"  = "'*'"
  "method.response.header.Access-Control-Allow-Methods" = "'POST,GET,OPTIONS'"
  # "method.response.header.Access-Control-Allow-Credentials" = "false"
}

Non-unique add() to a many-to-many connection

Is it possible to add multiple instances of the same object to a many-to-many connection? The current setup I am running gives me an Error: Trying to '.add()' an instance which already exists! every time I try to add a speaker multiple times. For example, maybe speaker X speaks for 20 min, speaker Y takes over, and then it's speaker X's turn again. How can I solve this?
This is my event model:
attributes: {
  id: {
    type: "integer",
    primaryKey: true,
    autoIncrement: true
  },
  name: {
    type: "string",
    required: true
  },
  speakers: {
    collection: "speaker",
    via: 'events',
    dominant: true
  }
},

addSpeaker: function (options, cb) {
  Event.findOne(options.id).exec(function (err, event) {
    if (err) return cb(err);
    if (!event) return cb(new Error('Event not found.'));
    event.speakers.add(options.speaker);
    event.save(cb);
  });
}
And also a Speaker model:
attributes: {
  name: {
    type: "string",
    required: true
  },
  title: {
    type: "string"
  },
  event: {
    model: "event"
  },
  events: {
    collection: "event",
    via: "speakers"
  }
}

Updating specific attribute for a specific user

I am trying to update a specific attribute for a specific user in MongoDB. I am having trouble understanding the nesting. Any help would be appreciated.
My specific problem is that I want to update the activeID for a specific _cID that belongs to a user (a user can have many _cIDs, hence the array type in settings).
Here is the server call that I currently have:
'updateActive': function (p_id, c_id) {
  Collections.Users.update({ _id: Meteor.userId(), _cID: c_id }, { $set: { 'settings.$.activeID': p_id } });
}
and the schema:
Schemas.CSettings = new SimpleSchema({
  _id: {
    type: String
  },
  _cID: {
    type: String
  },
  activeID: {
    type: String,
    optional: true
  }
});

Schemas.User = new SimpleSchema({
  _id: {
    type: String
  },
  createdAt: {
    type: Date
  },
  profile: {
    type: Schemas.UserProfile
  },
  settings: {
    type: [Schemas.CSettings],
    optional: true
  }
});
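For what it's worth, a hedged sketch of the usual pattern with MongoDB's positional $ operator (untested against this exact schema): the array field itself has to appear in the selector so that settings.$ knows which element matched.

// Sketch: match the embedded settings element by its _cID, then update only that element.
'updateActive': function (p_id, c_id) {
  Collections.Users.update(
    { _id: Meteor.userId(), 'settings._cID': c_id },
    { $set: { 'settings.$.activeID': p_id } }
  );
}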