We are deploying Databricks into a VNet (VNet injection) through our Bicep deployment.
Intermittently, we are seeing the error:
{"code":"PrepareSubnetError","message":"Failed to prepare subnet 'DataBricksPrivateSubnet'. Please try again later. Error details: 'Failed to prepare subnet 'DataBricksPrivateSubnet'. Please try again later'"}
Trying again later does usually work, but that is not a very satisfactory solution.
Some extracts from our Bicep templates:
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
  properties: {
    subnets: [
      {
        name: 'DataBricksPublicSubnet'
        properties: {
          delegations: [
            {
              name: 'DataBricksPublicSubnetDelegation'
              properties: {
                serviceName: 'Microsoft.Databricks/workspaces'
              }
            }
          ]
        }
      }
      {
        name: 'DataBricksPrivateSubnet'
        properties: {
          delegations: [
            {
              name: 'DataBricksPrivateSubnetDelegation'
              properties: {
                serviceName: 'Microsoft.Databricks/workspaces'
              }
            }
          ]
        }
      }
    ]
  }
}
resource ws 'Microsoft.Databricks/workspaces@2018-04-01' = {
  properties: {
    parameters: {
      customPublicSubnetName: {
        value: 'DataBricksPublicSubnet'
      }
      customPrivateSubnetName: {
        value: 'DataBricksPrivateSubnet'
      }
    }
  }
}
I'm using the Azure DevOps Pipeline Run API documented here. It works fine, except that it does not seem to support passing complex objects via the templateParameters in the request body.
E.g.
parameters:
- name: myObject
  type: object
  default:
  - val1
Calling the API with this request body:
{
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/main"
      }
    }
  },
  "templateParameters": {
    "myObject": [
      "val2"
    ]
  }
}
The pipeline runs with myObject set to the default val1.
The body should look like this instead, with the object parameter serialized as a YAML string:
{
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/main"
      }
    }
  },
  "templateParameters": {
    "myObject": "- val2"
  }
}
The following results were verified with - powershell: Write-Host "${{ parameters.myObject[0] }}".
With pipeline parameters like:
parameters:
- name: myObject
  type: object
  default:
  - val1
- name: myObject2
  type: object
  default:
  - Name: toto
    Value: tata
  - Name: toto2
    Value: tata2
You should use:
$RunPipelineBody = @{
    "templateParameters" = @{
        "myObject" = "- val1new"
        "myObject2" = "- Name: totonew`n  Value: tatanew`n- Name: toto2new`n  Value: tata2new"
    }
}
And pass it with:
-Body $( $RunPipelineBody | ConvertTo-JSON -Depth 10 -Compress)
What should the mongoose schema look like for a part of my dataset that looks like this:
"location": {
  "lat": 59.369761,
  "lng": 13.4867216
},
The format above is chosen to match @react-google-maps/api when using it as in this tutorial: https://medium.com/@allynak/how-to-use-google-map-api-in-react-app-edb59f64ac9d
I have tried the schemas below without success (either the app breaks or MongoDB skips the location key when seeding the database).
location: {
  type: {
    type: Schema.Types.Decimal128,
    type: Schema.Types.Decimal128
  }
}

location: {
  type: {
    type: Decimal128,
    type: Decimal128
  }
}

location: {
  type: {
    type: mongoose.Decimal128,
    type: mongoose.Decimal128
  }
}

location: {
  type: {
    type: mongoose.Types.Decimal128,
    type: mongoose.Types.Decimal128
  }
}

location: {
  type: {
    type: Number,
    type: Number
  }
}

location: {
  type: mongoose.Types.Decimal128,
  type: mongoose.Types.Decimal128
}
This works:
location: { type: Object }
I would like to use a GitHub repository for posts in my Gatsby site. Right now I'm using two queries. The first gets the names of the files:
{
  viewer {
    repository(name: "repository-name") {
      object(expression: "master:") {
        id
        ... on Tree {
          entries {
            name
          }
        }
      }
      pushedAt
    }
  }
}
And the second gets the contents of the files:
{
  viewer {
    repository(name: "repository-name") {
      object(expression: "master:file.md") {
        ... on Blob {
          text
        }
      }
    }
  }
}
Is there any way to get information about when each file was created and last updated with GraphQL? Right now I can get only pushedAt for the whole repository and not individual files.
You can use the following query to get the file content and, at the same time, the last commit for this file. This way you also get the pushedDate, committedDate and authoredDate fields, depending on what you need:
{
  repository(owner: "torvalds", name: "linux") {
    content: object(expression: "master:Makefile") {
      ... on Blob {
        text
      }
    }
    info: ref(qualifiedName: "master") {
      target {
        ... on Commit {
          history(first: 1, path: "Makefile") {
            nodes {
              author {
                email
              }
              message
              pushedDate
              committedDate
              authoredDate
            }
            pageInfo {
              endCursor
            }
            totalCount
          }
        }
      }
    }
  }
}
Note that we also need to fetch the endCursor field in order to reach the first commit on the file (to get the file creation date).
For instance, on the Linux repo, for the Makefile file it gives:
"pageInfo": {
  "endCursor": "b29482fde649c72441d5478a4ea2c52c56d97a5e 0"
},
"totalCount": 1806
So there are 1806 commits for this file.
In order to get the first commit, run a query referencing the cursor just before the oldest commit, which would be b29482fde649c72441d5478a4ea2c52c56d97a5e 1804 (totalCount - 2):
{
  repository(owner: "torvalds", name: "linux") {
    info: ref(qualifiedName: "master") {
      target {
        ... on Commit {
          history(first: 1, after: "b29482fde649c72441d5478a4ea2c52c56d97a5e 1804", path: "Makefile") {
            nodes {
              author {
                email
              }
              message
              pushedDate
              committedDate
              authoredDate
            }
          }
        }
      }
    }
  }
}
which returns the first commit of this file.
I don't have any source about the cursor string format "b29482fde649c72441d5478a4ea2c52c56d97a5e 1804". I've tested some other repositories with files with more than 1000 commits, and it seems to always be formatted as:
<static hash> <incremented number>
This avoids having to iterate over all the commits when more than 100 commits reference your file.
Here is an implementation in JavaScript using graphql.js:
const graphql = require('graphql.js');

const token = "YOUR_TOKEN";
const queryVars = { name: "linux", owner: "torvalds" };
const file = "Makefile";
const branch = "master";

const graph = graphql("https://api.github.com/graphql", {
  headers: {
    "Authorization": `Bearer ${token}`,
    "User-Agent": "My Application"
  },
  asJSON: true
});

graph(`
query ($name: String!, $owner: String!) {
  repository(owner: $owner, name: $name) {
    content: object(expression: "${branch}:${file}") {
      ... on Blob {
        text
      }
    }
    info: ref(qualifiedName: "${branch}") {
      target {
        ... on Commit {
          history(first: 1, path: "${file}") {
            nodes {
              author {
                email
              }
              message
              pushedDate
              committedDate
              authoredDate
            }
            pageInfo {
              endCursor
            }
            totalCount
          }
        }
      }
    }
  }
}
`)(queryVars).then(function (response) {
  console.log(JSON.stringify(response, null, 2));
  const totalCount = response.repository.info.target.history.totalCount;
  if (totalCount > 1) {
    // Cursors are formatted "<static hash> <index>", so the cursor just
    // before the oldest commit is "<hash> <totalCount - 2>".
    const cursorPrefix = response.repository.info.target.history.pageInfo.endCursor.split(" ")[0];
    const nextCursor = `${cursorPrefix} ${totalCount - 2}`;
    console.log(`total count : ${totalCount}`);
    console.log(`cursorPrefix : ${cursorPrefix}`);
    console.log(`get element after cursor : ${nextCursor}`);
    graph(`
    query ($name: String!, $owner: String!) {
      repository(owner: $owner, name: $name) {
        info: ref(qualifiedName: "${branch}") {
          target {
            ... on Commit {
              history(first: 1, after: "${nextCursor}", path: "${file}") {
                nodes {
                  author {
                    email
                  }
                  message
                  pushedDate
                  committedDate
                  authoredDate
                }
              }
            }
          }
        }
      }
    }`)(queryVars).then(function (response) {
      console.log("first commit info");
      console.log(JSON.stringify(response, null, 2));
    }).catch(function (error) {
      console.log(error);
    });
  }
}).catch(function (error) {
  console.log(error);
});
My goal is to create a system on AWS, using the Serverless Framework, for multiple IoT devices to send JSON payloads to AWS IoT, which will then be saved to DynamoDB.
I am very new to using AWS outside of creating EC2 servers, and this is my first project using the Serverless Framework.
After referring to an example, I came up with the modified version posted below.
Problem: it appears that the example is for just one device connecting to AWS IoT, which I concluded from the hardcoded IoT Thing certificate being used, such as:
SensorPolicyPrincipalAttachmentCert:
  Type: AWS::IoT::PolicyPrincipalAttachment
  Properties:
    PolicyName: { Ref: SensorThingPolicy }
    Principal: ${self:custom.iotCertificateArn}
SensorThingPrincipalAttachmentCert:
  Type: AWS::IoT::ThingPrincipalAttachment
  Properties:
    ThingName: { Ref: SensorThing }
    Principal: ${self:custom.iotCertificateArn}
If this conclusion is correct and serverless.yml is configured for only one Thing, what modifications can we make so that more than one Thing can be used?
Maybe set up all the Things outside of serverless.yml, which would mean removing just SensorPolicyPrincipalAttachmentCert and SensorThingPrincipalAttachmentCert?
Also, what should we set the Resource properties to in SensorThingPolicy? They are currently set to "*"; is this too broad, or is there a way to limit them to just Things?
serverless.yml
service: garden-iot

provider:
  name: aws
  runtime: nodejs6.10
  region: us-east-1

# load custom variables from a file
custom: ${file(./vars-dev.yml)}

resources:
  Resources:
    LocationData:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: location-data-${opt:stage}
        AttributeDefinitions:
          - AttributeName: ClientId
            AttributeType: S
          - AttributeName: Timestamp
            AttributeType: S
        KeySchema:
          - AttributeName: ClientId
            KeyType: HASH
          - AttributeName: Timestamp
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    SensorThing:
      Type: AWS::IoT::Thing
      Properties:
        AttributePayload:
          Attributes:
            SensorType: soil
    SensorThingPolicy:
      Type: AWS::IoT::Policy
      Properties:
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action: ["iot:Connect"]
              Resource: ["${self:custom.sensorThingClientResource}"]
            - Effect: Allow
              Action: ["iot:Publish"]
              Resource: ["${self:custom.sensorThingSoilTopicResource}"]
    SensorPolicyPrincipalAttachmentCert:
      Type: AWS::IoT::PolicyPrincipalAttachment
      Properties:
        PolicyName: { Ref: SensorThingPolicy }
        Principal: ${self:custom.iotCertificateArn}
    SensorThingPrincipalAttachmentCert:
      Type: AWS::IoT::ThingPrincipalAttachment
      Properties:
        ThingName: { Ref: SensorThing }
        Principal: ${self:custom.iotCertificateArn}
    IoTRole:
      Type: AWS::IAM::Role
      Properties:
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - iot.amazonaws.com
              Action:
                - sts:AssumeRole
    IoTRolePolicies:
      Type: AWS::IAM::Policy
      Properties:
        PolicyName: IoTRole_Policy
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:PutItem
              Resource: "*"
            - Effect: Allow
              Action:
                - lambda:InvokeFunction
              Resource: "*"
        Roles: [{ Ref: IoTRole }]
EDIT 05/09/2018: I've found this blog post, which describes my approach pretty well: Ensure Secure Communication with AWS IoT Core Using the Certificate Vending Machine Reference Application
--
You could take a look at Just-in-Time Provisioning or build your own solution based on Programmatic Provisioning.
I have dealt with this topic many times and have found that which one makes more sense depends a lot on the use case. Security is also an aspect to keep an eye on: you don't want a public API responsible for JIT device registration to be accessible to the whole Internet.
A simple Programmatic Provisioning-based scenario could look like this: you build a thing (maybe a sensor) which should be able to connect to AWS IoT, and you have an in-house provisioning process.
Simple provisioning process:
Thing built
Thing has a serial number
Thing registers itself via an internal server
The registration code running on the server could look something like this (JS + AWS JS SDK):
// Modules
const AWS = require('aws-sdk')

// AWS
const iot = new AWS.Iot({ region: process.env.region })

// Config
const templateBodyJson = require('./register-thing-template-body.json')

// registerThing: creates a fresh certificate + key pair, then registers the
// thing from the provisioning template
const registerThing = async ({ serialNumber = null } = {}) => {
  if (!serialNumber) throw new Error('`serialNumber` required!')
  const {
    certificateArn = null,
    certificateId = null,
    certificatePem = null,
    keyPair: {
      PrivateKey: privateKey = null,
      PublicKey: publicKey = null
    } = {}
  } = await iot.createKeysAndCertificate({ setAsActive: true }).promise()
  const registerThingParams = {
    templateBody: JSON.stringify(templateBodyJson),
    parameters: {
      ThingName: serialNumber,
      SerialNumber: serialNumber,
      CertificateId: certificateId
    }
  }
  const { resourceArns = null } = await iot.registerThing(registerThingParams).promise()
  return {
    certificateArn,
    certificateId,
    certificatePem,
    privateKey,
    publicKey,
    resourceArns
  }
}

// unregisterThing: detaches and deletes all certificates attached to the
// thing, then deletes the thing itself
const unregisterThing = async ({ serialNumber = null } = {}) => {
  if (!serialNumber) throw new Error('`serialNumber` required!')
  try {
    const thingName = serialNumber
    const { principals: thingPrincipals } = await iot.listThingPrincipals({ thingName }).promise()
    const certificates = thingPrincipals.map((tp) => ({ certificateId: tp.split('/').pop(), certificateArn: tp }))
    for (const { certificateId, certificateArn } of certificates) {
      await iot.detachThingPrincipal({ thingName, principal: certificateArn }).promise()
      await iot.updateCertificate({ certificateId, newStatus: 'INACTIVE' }).promise()
      await iot.deleteCertificate({ certificateId, forceDelete: true }).promise()
    }
    await iot.deleteThing({ thingName }).promise()
    return {
      deleted: true,
      thingPrincipals
    }
  } catch (err) {
    // Already deleted!
    if (err.code && err.code === 'ResourceNotFoundException') {
      return {
        deleted: true,
        thingPrincipals: []
      }
    }
    throw err
  }
}
register-thing-template-body.json:
{
  "Parameters": {
    "ThingName": {
      "Type": "String"
    },
    "SerialNumber": {
      "Type": "String"
    },
    "CertificateId": {
      "Type": "String"
    }
  },
  "Resources": {
    "thing": {
      "Type": "AWS::IoT::Thing",
      "Properties": {
        "ThingName": {
          "Ref": "ThingName"
        },
        "AttributePayload": {
          "serialNumber": {
            "Ref": "SerialNumber"
          }
        },
        "ThingTypeName": "NewDevice",
        "ThingGroups": ["NewDevices"]
      }
    },
    "certificate": {
      "Type": "AWS::IoT::Certificate",
      "Properties": {
        "CertificateId": {
          "Ref": "CertificateId"
        }
      }
    },
    "policy": {
      "Type": "AWS::IoT::Policy",
      "Properties": {
        "PolicyName": "DefaultNewDevicePolicy"
      }
    }
  }
}
Make sure you have the "NewDevice" thing type, the "NewDevices" thing group and the "DefaultNewDevicePolicy" policy in place. Also keep in mind that ThingName = SerialNumber (important for unregisterThing).
Series is one document. It has an array of series inside it (in this case 'Revenge' and 'Raines').
Each series has a cast array with names, and I need a query to get those names. How can I get a list of all the names from both cast arrays?
My best approach was this query, db.series.find( {}, { _id: 0, cast: 1 } ), where I get a cursor with the two cast JSON arrays.
{
  series: [
    {
      name: 'Revenge',
      user_rating: 7.9,
      duration: 44,
      genres: [ ' Drama', ' Mystery', ' Thriller' ],
      year_start: '2011',
      year_end: '',
      cast: [
        { name: 'Madeleine Stowe' },
        { name: 'Emily VanCamp' },
        { name: 'Gabriel Mann' },
        { name: 'Nick Wechsler' },
        { name: 'Henry Czerny' },
        { name: 'Joshua Bowman' },
        { name: 'Christa B. Allen' },
        { name: 'Ashley Madekwe' },
        { name: 'Connor Paolo' },
        { name: 'Barry Sloane' },
        { name: 'Margarita Levieva' }
      ],
      seasons: [ { number: '3' }, { number: '2' }, { number: '1' } ]
    },
    {
      name: 'Raines',
      user_rating: 7.4,
      duration: 45,
      genres: [ ' Crime', ' Drama' ],
      year_start: '2007',
      year_end: '',
      cast: [
        { name: 'Jeff Goldblum' },
        { name: 'Matt Craven' },
        { name: 'Nicole Sullivan' },
        { name: 'Linda Park' },
        { name: 'Dov Davidoff' },
        { name: 'Malik Yoba' },
        { name: 'Madeleine Stowe' }
      ],
      seasons: [ { number: '1' } ]
    }
  ]
}
I need an output like this:
{ name: 'Madeleine Stowe' },
{ name: 'Emily VanCamp' },
{ name: 'Gabriel Mann' },
{ name: 'Nick Wechsler' },
{ name: 'Henry Czerny' },
{ name: 'Joshua Bowman' },
{ name: 'Christa B. Allen' },
{ name: 'Ashley Madekwe' },
{ name: 'Connor Paolo' },
{ name: 'Barry Sloane' },
{ name: 'Margarita Levieva' },
{ name: 'Jeff Goldblum' },
{ name: 'Matt Craven' },
{ name: 'Nicole Sullivan' },
{ name: 'Linda Park' },
{ name: 'Dov Davidoff' },
{ name: 'Malik Yoba' },
{ name: 'Madeleine Stowe' }
You can use the aggregation framework for this:
db.series.aggregate([
  { $unwind: "$series" },
  { $unwind: "$series.cast" },
  { $group: {
      _id: "$_id",
      cast: { $push: "$series.cast" }
  } }
]);
If you want to consolidate multiple appearances of the same actor into one, replace $push with $addToSet.