I have an Event Streams instance called myKafka (with an appropriate topic called myTopic) and an IBM Cloud Functions action called myAction. I want to trigger myAction when a message arrives on myKafka's topic, and I need to express this relationship in Terraform. I have checked this documentation, but it only shows examples for an alarm trigger, not one based on Event Streams. So my question is: how do I create it with Terraform?
I am trying with the following:
resource "ibm_function_trigger" "myTrigger" {
  name      = "myTrigger"
  namespace = "myNameSpace"
  feed {
    name       = "???"
    parameters = <<EOF
[
  {
    "key":"???",
    "value":"???"
  },
  {
    "key":"???",
    "value":"???"
  }
]
EOF
  }
}
I don't really know what to put in place of the question marks. I expect the myKafka instance and myTopic should be passed along with myAction, but I could not determine the feed's name and the keys with their appropriate values.
I finally made it with this configuration:
resource "ibm_function_trigger" "myTrigger" {
  name      = "myTrigger"
  namespace = "myNameSpace"
  feed {
    name       = "/whisk.system/messaging/messageHubFeed"
    parameters = <<EOF
[
  {
    "key":"kafka_brokers_sasl",
    "value":<MY_KAFKA_BROKERS_SASL>
  },
  {
    "key":"user",
    "value":"<MY_USERNAME>"
  },
  {
    "key":"password",
    "value":"<MY_PASSWORD>"
  },
  {
    "key":"topic",
    "value":"myTopic"
  },
  {
    "key":"kafka_admin_url",
    "value":"<MY_KAFKA_ADMIN_URL>"
  }
]
EOF
  }
}
The keys and the /whisk.system/messaging/messageHubFeed feed name are the important parts.
I am learning Terraform and trying to translate Kubernetes infrastructure over to Terraform.
I have a Terraform script that creates a given namespace and then creates secrets from local files. Most of the secrets fail to create because the namespace has not been created yet.
Is there a correct method to create the namespace and wait for confirmation of it before continuing within the Terraform script, such as depends_on, etc.?
My current approach:
resource "kubernetes_namespace" "namespace" {
  metadata {
    name = "specialNamespace"
  }
}

resource "kubernetes_secret" "api-env" {
  metadata {
    name      = var.k8s_name_api_env
    namespace = "specialNamespace"
  }
  data = {
    ".api" = file("${path.cwd}/${var.local_dir_path_api_env_file}")
  }
}

resource "kubernetes_secret" "password-env" {
  metadata {
    name      = var.k8s_name_password_env
    namespace = "specialNamespace"
  }
  data = {
    ".password" = file("${path.cwd}/${var.local_dir_path_password_env_file}")
  }
}

resource "kubernetes_secret" "tls-crt-env" {
  metadata {
    name      = var.k8s_name_tls_crt_env
    namespace = "specialNamespace"
  }
  data = {
    "server.crt" = file("${path.cwd}/${var.local_dir_path_tls_crt_env_file}")
  }
}

resource "kubernetes_secret" "tls-key-env" {
  metadata {
    name      = var.k8s_name_tls_key_env
    namespace = "specialNamespace"
  }
  data = {
    "server.key" = file("${path.cwd}/${var.local_dir_path_tls_key_env_file}")
  }
}
Since there is a way to get the name property of the metadata from the kubernetes_namespace resource, I would advise going with that. Referencing the namespace resource creates an implicit dependency, so Terraform will create the namespace before the secrets. For example, for the kubernetes_secret resource:
resource "kubernetes_secret" "api-env" {
  metadata {
    name      = var.k8s_name_api_env
    namespace = kubernetes_namespace.namespace.metadata[0].name
  }
  data = {
    ".api" = file("${path.cwd}/${var.local_dir_path_api_env_file}")
  }
}
Also, note that most of the resources also have a _v1 version (e.g., namespace [1], secret [2], etc.), so I would strongly suggest using those.
[1] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/namespace_v1
[2] https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/secret_v1
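As a sketch using the question's names, the _v1 variants combined with an implicit dependency could look like this (the attribute reference is what makes Terraform create the namespace before the secret):

resource "kubernetes_namespace_v1" "namespace" {
  metadata {
    name = "specialNamespace"
  }
}

resource "kubernetes_secret_v1" "api-env" {
  metadata {
    name = var.k8s_name_api_env
    # Referencing the namespace resource below is an implicit dependency:
    # Terraform creates the namespace before this secret.
    namespace = kubernetes_namespace_v1.namespace.metadata[0].name
  }
  data = {
    ".api" = file("${path.cwd}/${var.local_dir_path_api_env_file}")
  }
}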
Such as depends_on, etc.?
Exactly. Here, you should use depends_on:
resource "kubernetes_secret" "api-env" {
  depends_on = [kubernetes_namespace.namespace]
  ...
}
...
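Spelled out for one of the question's secrets (a sketch; the other three follow the same pattern), the explicit dependency would look like:

resource "kubernetes_secret" "api-env" {
  # Explicitly wait for the namespace before creating this secret
  depends_on = [kubernetes_namespace.namespace]

  metadata {
    name      = var.k8s_name_api_env
    namespace = "specialNamespace"
  }
  data = {
    ".api" = file("${path.cwd}/${var.local_dir_path_api_env_file}")
  }
}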
This should be fairly easy, or I might be doing something wrong, but after digging into it for a while I couldn't find a solution.
I have a Terraform configuration that contains a Kubernetes Secret resource whose data comes from Vault. The resource configuration looks like this:
resource "kubernetes_secret" "external-api-token" {
  metadata {
    name      = "external-api-token"
    namespace = local.platform_namespace
    annotations = {
      "vault.security.banzaicloud.io/vault-addr" = var.vault_addr
      "vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
      "vault.security.banzaicloud.io/vault-role" = "reader"
    }
  }
  data = {
    "EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
  }
}
Everything is working fine so far, but every time I run terraform plan or terraform apply, it marks that resource as "changed" and updates it, even when I haven't touched the resource or anything related to it. E.g.:
... (other actions to be applied, unrelated to the offending resource) ...
  # kubernetes_secret.external-api-token will be updated in-place
  ~ resource "kubernetes_secret" "external-api-token" {
      ~ data = (sensitive value)
        id   = "platform/external-api-token"
        type = "Opaque"

        metadata {
            annotations      = {
                "vault.security.banzaicloud.io/vault-addr" = "https://vault.infra.megacorp.io:8200"
                "vault.security.banzaicloud.io/vault-path" = "kubernetes/gke-pipe-stg-2"
                "vault.security.banzaicloud.io/vault-role" = "reader"
            }
            generation       = 0
            labels           = {}
            name             = "external-api-token"
            namespace        = "platform"
            resource_version = "160541784"
            self_link        = "/api/v1/namespaces/platform/secrets/external-api-token"
            uid              = "40e93d16-e8ef-47f5-92ac-6d859dfee123"
        }
    }
Plan: 3 to add, 1 to change, 0 to destroy.
It says that the data for this resource has changed, yet the data in Vault remains the same; nothing has been modified there. This update now happens every single time.
I was thinking of using the ignore_changes lifecycle feature, but I assume that would make Terraform ignore any changes made to the Vault secret as well, which I also don't want. I would like the resource to be updated only when the secret in Vault has changed.
Is there a way to do this? What am I missing or doing wrong?
You need to add the Terraform lifecycle ignore_changes meta-argument to your code. For data containing API token values, and for some reason annotations as well, Terraform seems to assume the data changes every time a plan, apply, or even destroy is run. I had a similar issue with Azure Key Vault.
Here is the code with the lifecycle ignore_changes meta-argument included:
resource "kubernetes_secret" "external-api-token" {
  metadata {
    name      = "external-api-token"
    namespace = local.platform_namespace
    annotations = {
      "vault.security.banzaicloud.io/vault-addr" = var.vault_addr
      "vault.security.banzaicloud.io/vault-path" = "kubernetes/${var.name}"
      "vault.security.banzaicloud.io/vault-role" = "reader"
    }
  }
  data = {
    "EXTERNAL_API_TOKEN" = "vault:secret/gcp/${var.env}/micro-service#EXTERNAL_API_TOKEN"
  }
  lifecycle {
    ignore_changes = [
      # Ignore changes to data and annotations, e.g. because a management agent
      # updates these based on some ruleset managed elsewhere.
      data, annotations,
    ]
  }
}
Link to the lifecycle meta-arguments documentation:
https://www.terraform.io/language/meta-arguments/lifecycle
I'm using the AWS CDK. When I deploy, according to the CloudFormation Events in the AWS console, the resources (CfnTrigger) in createWorkflow are initialized before those in createDDBCrawler and createS3Crawler. This causes a CREATE_FAILED (Entity not found) error, since createWorkflow depends on resources in those two.
So I am wondering:
What determines the AWS CDK resource creation sequence? (I see no async or promises in the functions, so TypeScript executes the code in sequence; is the ordering then a CDK/CloudFormation behavior?)
How can I avoid this, or control the resource creation sequence, other than creating two stacks?
export class QuicksightGlue extends Construct {
  constructor(scope: Construct, name: string, props: QuicksightGlueProps) {
    super(scope, name);
    this.createGlueDb(props.glueDb);
    for (let zone of zones) {
      const ddbCrawler = this.createDDBCrawler(...);
      const etlJob = this.createEtlJob(...);
      // crawler for processed data
      this.createS3Crawler(...);
      // create workflow that uses the crawlers
      this.createWorkflow(...);
    }
  }

  private createS3Crawler(...) {
    return new glue.CfnCrawler(this, 's3Crawler_' + zone, {
      name: 's3Crawler_' + zone,
      databaseName: glueDb,
      role: roleArn,
      targets: {
        s3Targets: [{ path: s3 }]
      }
    });
  }

  private createWorkflow(...) {
    const extracDdbWorkflow = new glue.CfnWorkflow(this, `ExtractDdb_` + zone, {
      name: `udhcpExtractDdb_` + zone.toLowerCase(),
      description: "Workflow to extract and process data from DDB"
    });
    const scheduledTriggerDdbCrawler = new glue.CfnTrigger(this, 'DdbTrigger_' + zone, {
      workflowName: extracDdbWorkflow.name,
      type: "SCHEDULED",
      schedule: scheduleCronExpression, // "cron(0 * * * ? *)"
      description: "Trigger to start the workflow every hour to update ddb data",
      actions: [{
        crawlerName: ddbCrawler,
      }],
    });
...
You can cause a construct to become dependent on another construct by calling addDependency on the construct's node property, like this:
// Normally these two constructs would be created in parallel
const construct1 = ...;
const construct2 = ...;

// But with this line, construct2 will not be created until construct1 is
construct2.node.addDependency(construct1);
Here is a practical example.
Probably, you'd want to save the return value of createS3Crawler to a variable and then pass that variable as an argument to createWorkflow. Then createWorkflow can call .node.addDependency(...) with that crawler construct on each construct it creates internally that depends on the S3 crawler.
I want to create multiple entries with an internal service call, but for external transports (REST, websockets) this functionality should still be blocked.
I know that the multi option can be set to true or ['create'] in the service options, but this does not solve the problem, because external transports could then also create multiple entries.
My first solution was this:
someService.hooks.js
...
before: {
  create: [
    context => {
      if (!context.params.provider) {
        context.service.options.multi = true;
      }
      return context;
    }
  ],
}
...
But this completely overwrites the service options for all service calls.
The only other solution I came up with is to set service.multi to true and validate each external service call with a hook.
Is this the only solution that would work, or did I miss something?
What you can currently do is enable multi: [ 'create' ] and check in a hook whether it is an external call, throwing an error for arrays in that case:
const { BadRequest } = require('@feathersjs/errors');

// ...
create: [
  async context => {
    if (context.params.provider && Array.isArray(context.data)) {
      throw new BadRequest('Not allowed');
    }
    return context;
  }
],
In upcoming versions this will be possible by just passing the multi option in params (tracked in this issue).
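The hook above boils down to a small predicate: allow arrays only when the call is internal. Here is a sketch of that decision logic outside of Feathers (the function name is illustrative, not part of the Feathers API); in Feathers, context.params.provider is undefined for internal calls and set to e.g. 'rest' or 'socketio' for external ones:

```javascript
// Illustrative helper (not a Feathers API): decide whether a create call
// with this context should be allowed to pass an array of entries.
function allowsBulkCreate(context) {
  const isExternal = Boolean(context.params.provider); // undefined => internal call
  const isBulk = Array.isArray(context.data);          // array => multi create
  return !(isExternal && isBulk);
}

console.log(allowsBulkCreate({ params: {}, data: [{ a: 1 }, { a: 2 }] }));            // true: internal bulk create passes
console.log(allowsBulkCreate({ params: { provider: 'rest' }, data: [{ a: 1 }] }));    // false: external array is rejected
console.log(allowsBulkCreate({ params: { provider: 'rest' }, data: { a: 1 } }));      // true: external single create passes
```

In the real hook, the false case is where you would throw BadRequest.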
There are RESTful APIs, for instance:
/players - to get a list of all players
/players{/playerName} - to get info for a specific player
and I already have a function using ngResource like:
function Play() {
  return $resource('/players');
}
Can I reuse this function for a specific player, like:
function Play(name) {
  return $resource('/players/:name', {
    name: name
  });
}
So I want to:
send a request to /players if I didn't pass a name parameter;
send a request to /players/someone if I passed the name parameter with someone.
Otherwise, do I have to write another function for a specific player?
Using ngResource it's very, very simple (it's basically a two-liner). You don't even need to create any custom actions here*.
I've posted a working Plunker here (just open the Chrome Developer Tools and go to the Network tab to see the results).
Service body:
return $resource('/users/:id/:name', { id: '@id', name: '@name' })
Controller:
function($scope, Users) {
  Users.query();              // GET /users (expects an array)
  Users.get({ id: 2 });       // GET /users/2
  Users.get({ name: 'Joe' }); // GET /users/Joe
}
* Of course, you could, if you really wanted to :)
This is how I did it. This way you don't have to write a custom resource function for each of your endpoints; you just add the endpoint to your resources list. I defined the list of endpoints I wanted to use like this:
var constants = {
  "serverAddress": "foobar.com/",
  "resources": {
    "Foo": {
      "endpoint": "foo"
    },
    "Bar": {
      "endpoint": "bar"
    }
  }
}
Then I created resources out of each one of them like this:
var service = angular.module('app.services', ['ngResource']);
var resourceObjects = constants.resources;

for (var resourceName in resourceObjects) {
  if (resourceObjects.hasOwnProperty(resourceName)) {
    addResourceFactoryToService(service, resourceName, resourceObjects[resourceName].endpoint);
  }
}

function addResourceFactoryToService(service, resourceName, resourceEndpoint) {
  service.factory(resourceName, function($resource) {
    return $resource(
      constants.serverAddress + resourceEndpoint + '/:id',
      {
        id: '@id',
      },
      {
        update: {
          method: 'PUT',
          params: { id: '@id' }
        },
      }
    );
  });
}
The nice thing about this is that it takes two seconds to add a new endpoint, and I even threw in a PUT method for you. Then you can inject any of your resources into your controllers like this.
.controller('homeCtrl', function($scope, Foo, Bar) {
  $scope.foo = Foo.query();
  $scope.bar = Bar.get({ id: 4 });
});
Use Play.query() to find all players.
Use Play.get({name: $scope.name}) to find one player.