I have used Terraform in an IBM Cloud multi-account setup to define Activity Tracker Event Routing targets and routes, hoping that I could stream from LogDNA to LogDNA across account boundaries.
resource "ibm_atracker_target" "atracker_logdna_target" {
target_type = "logdna"
logdna_endpoint {
target_crn = data.ibm_resource_instance.testacc_atracker_resource_instance.id
ingestion_key = "some-ingestion-key"
}
name = "my-logdna-target"
region = "eu-de"
}
resource "ibm_atracker_route" "atracker_route" {
name = "my-a2a-route"
rules {
target_ids = [ ibm_atracker_target.atracker_logdna_target.id ]
locations = [ "global", "us-south", "eu-de" ]
}
}
I don't seem to have incoming events. How can I check that events are routed successfully?
The Terraform code in my question works, but I had to change something in my test setup:
enable VRF and service endpoints
change the service plan to at least 7 days of data retention; it did not work with just streamed logs (Lite plan)
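For illustration, the target LogDNA instance ended up looking roughly like the sketch below (a minimal sketch only - the instance name is hypothetical, and the exact plan name and service-endpoints parameter should be verified against the IBM Cloud catalog):
# Sketch of the LogDNA instance used as the Activity Tracker target.
resource "ibm_resource_instance" "logdna_target_instance" {
  name     = "my-logdna-target-instance" # hypothetical name
  service  = "logdna"
  plan     = "7-day"  # a plan with log retention; the Lite/streaming-only plan did not work
  location = "eu-de"

  # Assumption: private service endpoints are enabled via a parameter like this;
  # check the exact parameter name for the LogDNA service.
  parameters = {
    "service-endpoints" = "private"
  }
}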
I need to create many MongoDB Atlas endpoint connections using Terraform.
I successfully created the first one using this code:
#Private endpoint connection
resource "mongodbatlas_private_endpoint" "dbpe" {
  project_id    = var.prj_id
  provider_name = "AWS"
  region        = var.aws_region
}

#AWS endpoint for secure connect to mongo db
resource "aws_vpc_endpoint" "ec2" {
  vpc_id = var.sh_vpc
  #service_name = "com.amazonaws.${var.aws_region}.ec2"
  service_name      = mongodbatlas_private_endpoint.dbpe.endpoint_service_name
  vpc_endpoint_type = "Interface"
  security_group_ids = [
    aws_security_group.lb_sg.id,
  ]
  subnet_ids = [
    aws_subnet.subnet1.id,
    var.sh_subnet
  ]
  tags = {
    "Name" = local.tname
  }
  #private_dns_enabled = true
}
But when I try to use this code a second time in another folder (another tfstate), it fails with this error:
Error: error creating MongoDB Private Endpoints Connection: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/***/privateEndpoint: 409 (request "Conflict") A PrivateLink Endpoint Service already exists for AWS region US_EAST_2.
As I understand it, the second "mongodbatlas_private_endpoint" "dbpe" tries to create another endpoint service. But when I create the second endpoint manually through the web UI, it uses the same service as the first endpoint.
How can I tell the second endpoint to use the existing service?
Or is my approach wrong altogether?
Please help!
Thank you!
I found the solution.
Creating the "Endpoint Connection" really creates Endpoint only when you do it at first time. All of next times is creating an only association between Atlas endpoint and new AWS Endpoint.
In terraform I tried to create an Atlas endpoint second time and catch an error (because of limit - 1 endpoint per region). All I need to do - is create "Basic Endpoint" one time (by separate folder with own tfstate) and don't delete it. And for each new AWS endpoint need to create a new link from AWS Endpoint to "Basic". I do it by a terraform resource:
mongodbatlas_private_endpoint_interface_link
Resource "mongodbatlas_private_endpoint" is not need now. A "service_name" parameter in "aws_vpc_endpoint" you can hardcoded from "Basic" Endpoint. Use "output" to see mongodbatlas_private_endpoint.test.private_link_id - this is what you need.
We're trying to use Nextflow in a k8s namespace other than our default; the namespace we're using is nextflownamespace. We've created our PVC and ensured the default service account has an admin rolebinding. We're getting an error that Nextflow can't access the PVC:
"message": "persistentvolumeclaims \"my-nextflow-pvc\" is forbidden:
User \"system:serviceaccount:mynamespace:default\" cannot get resource
\"persistentvolumeclaims\" in API group \"\" in the namespace \"nextflownamespace\"",
In that error we see that system:serviceaccount:mynamespace:default is incorrectly pointing to our default namespace, mynamespace, not nextflownamespace, which we created for Nextflow use.
We tried adding debug.yaml = true to our nextflow.config but couldn't find the YAML it submits to k8s to validate the error. Our config file looks like this:
profiles {
    standard {
        k8s {
            executor = "k8s"
            namespace = "nextflownamespace"
            cpus = 1
            memory = 1.GB
            debug.yaml = true
        }
        aws {
            endpoint = "https://s3.nautilus.optiputer.net"
        }
    }
}
We did verify that when we change the namespace to another arbitrary value, the error message uses the new arbitrary namespace, but the service account name continues to point to the user's default namespace erroneously.
We've tried every variant of profiles.standard.k8s.serviceAccount = "system:serviceaccount:nextflownamespace:default" that we could think of but didn't get any change with those attempts.
I think it's best to avoid using nested config profiles with Nextflow. I would either remove the 'standard' layer from your profile or just make 'standard' a separate profile:
profiles {
    standard {
        process.executor = 'local'
    }
    k8s {
        executor = "k8s"
        namespace = "nextflownamespace"
        cpus = 1
        memory = 1.GB
        debug.yaml = true
    }
    aws {
        endpoint = "https://s3.nautilus.optiputer.net"
    }
}
Summary
I am unable to issue a valid certificate for my Terraform Kubernetes cluster on Azure AKS. The domain and certificate are successfully created (the cert is created according to crt.sh); however, the certificate is not applied to my domain and my browser reports "Kubernetes Ingress Controller Fake Certificate" as the applied certificate.
The Terraform files are converted to the best of my abilities from a working set of YAML files (which issue certificates just fine). See my Terraform code here.
UPDATE! In the original question I was also unable to create certificates. This was fixed by using the "tls_cert_request" resource from here. The change is included in my updated code below.
Here are some things I have checked and found NOT to be the issue:
The number of issued certificates from ACME Let's Encrypt is not above the rate limits for either staging or prod.
I get the same "Fake certificate" error using both the staging and the prod certificate server.
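(For reference, the staging and prod servers I refer to are the standard Let's Encrypt directories, shown here assuming the ACME v2 endpoints; in my code the value comes in via var.context.cert_server:)
provider "acme" {
  # Staging - higher rate limits, but issues an untrusted chain:
  # server_url = "https://acme-staging-v02.api.letsencrypt.org/directory"

  # Production:
  server_url = "https://acme-v02.api.letsencrypt.org/directory"
}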
Here are some areas that I am currently investigating as potential sources for the error.
I do not see a Terraform equivalent of the Let's Encrypt YAML input "privateKeySecretRef", and consequently I don't know what the value of my deployment ingress annotation "certmanager.k8s.io/cluster-issuer" should be.
If anyone has any other suggestions, I would really appreciate hearing them (as this has been bugging me for quite some time now)!
Certificate Resources
provider "acme" {
server_url = var.context.cert_server
}
resource "tls_private_key" "reg_private_key" {
algorithm = "RSA"
}
resource "acme_registration" "reg" {
account_key_pem = tls_private_key.reg_private_key.private_key_pem
email_address = var.context.email
}
resource "tls_private_key" "cert_private_key" {
algorithm = "RSA"
}
resource "tls_cert_request" "req" {
key_algorithm = "RSA"
private_key_pem = tls_private_key.cert_private_key.private_key_pem
dns_names = [var.context.domain_address]
subject {
common_name = var.context.domain_address
}
}
resource "acme_certificate" "certificate" {
account_key_pem = acme_registration.reg.account_key_pem
certificate_request_pem = tls_cert_request.req.cert_request_pem
dns_challenge {
provider = "azure"
config = {
AZURE_CLIENT_ID = var.context.client_id
AZURE_CLIENT_SECRET = var.context.client_secret
AZURE_SUBSCRIPTION_ID = var.context.azure_subscription_id
AZURE_TENANT_ID = var.context.azure_tenant_id
AZURE_RESOURCE_GROUP = var.context.azure_dns_rg
}
}
}
Pypiserver Ingress Resource
resource "kubernetes_ingress" "pypi" {
metadata {
name = "pypi"
namespace = kubernetes_namespace.pypi.metadata[0].name
annotations = {
"kubernetes.io/ingress.class" = "inet"
"kubernetes.io/tls-acme" = "true"
"certmanager.k8s.io/cluster-issuer" = "letsencrypt-prod"
"ingress.kubernetes.io/ssl-redirect" = "true"
}
}
spec {
tls {
hosts = [var.domain_address]
}
rule {
host = var.domain_address
http {
path {
path = "/"
backend {
service_name = kubernetes_service.pypi.metadata[0].name
service_port = "http"
}
}
}
}
}
}
Let me know if more info is required, and I will update my question text with whatever is missing. And lastly, I will leave the Terraform code Git repo up to serve as help for others.
The answer to my question was that I had to add cert-manager to my cluster, and as far as I can tell there are no native Terraform resources to create it. I ended up using Helm for both my ingress controller and cert-manager.
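The cert-manager part of that Helm setup boils down to something like the sketch below (a minimal sketch, not my exact code - the namespace, chart version and installCRDs value are assumptions; see the repo linked below for the real thing):
# Sketch: install cert-manager with the Terraform Helm provider.
resource "kubernetes_namespace" "cert_manager" {
  metadata {
    name = "cert-manager"
  }
}

resource "helm_release" "cert_manager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = kubernetes_namespace.cert_manager.metadata[0].name

  # Assumption: newer chart versions can install their own CRDs this way.
  set {
    name  = "installCRDs"
    value = "true"
  }
}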
The setup ended up a bit more complex than I initially imagined, and as it stands now it needs to be run twice. This is due to the kubeconfig not being updated (you have to run "set KUBECONFIG=.kubeconfig" before running "terraform apply" a second time). So it's not pretty, but it "works" as a minimum example to get your deployment up and running.
There definitely are ways of simplifying the pypi deployment part using native Terraform resources, and there is probably an easy fix for the kubeconfig not being updated, but I have not had time to investigate further.
If anyone has tips for a more elegant, functional and (probably most of all) secure minimum Terraform setup for a k8s cluster, I would love to hear them!
Anyway, for those interested, the resulting Terraform code can be found here.
I am trying to integrate Azure Application Insights into a Service Fabric app for logging and instrumentation. I am running the Service Fabric code on my local VM. I followed the document here exactly [scenario 2]. Other resources on learn.microsoft.com also seem to indicate the same steps [e.g. https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-diagnostics-event-aggregation-eventflow].
For some reason, I don't see any event entries in Application Insights. There are no errors in the code when I do this:
ServiceEventSource.Current.ProcessedCountMetric("synced", sw.ElapsedMilliseconds, crc.DataTable.Rows.Count);
eventflowconfig.json contents
{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "Microsoft-ServiceFabric-Services" },
        { "providerName": "Microsoft-ServiceFabric-Actors" },
        { "providerName": "mystatefulservice" }
      ]
    }
  ],
  "filters": [
    {
      "type": "drop",
      "include": "Level == Verbose"
    }
  ],
  "outputs": [
    {
      "type": "ApplicationInsights",
      // (replace the following value with your AI resource's instrumentation key)
      "instrumentationKey": "XXXXXXXXXXXXXXXXXXXXXX",
      "filters": [
        {
          "type": "metadata",
          "metadata": "metric",
          "include": "ProviderName == mystatefulservice && EventName == ProcessedCountMetric",
          "operationProperty": "operation",
          "elapsedMilliSecondsProperty": "elapsedMilliSeconds",
          "recordCountProperty": "recordCount"
        }
      ]
    }
  ],
  "schemaVersion": "2016-08-11"
}
In ServiceEventSource.cs
[Event(ProcessedCountMetricEventId, Level = EventLevel.Informational)]
public void ProcessedCountMetric(string operation, long elapsedMilliSeconds, int recordCount)
{
    if (IsEnabled())
        WriteEvent(ProcessedCountMetricEventId, operation, elapsedMilliSeconds, recordCount);
}
EDIT
Adding the diagnostics pipeline code from Program.cs in the Service Fabric stateful service:
using (var diagnosticsPipeline =
    ServiceFabricDiagnosticPipelineFactory.CreatePipeline($"{ServiceFabricGlobalConstants.AppName}-mystatefulservice-DiagnosticsPipeline"))
{
    ServiceRuntime.RegisterServiceAsync("mystatefulserviceType",
        context => new mystatefulservice(context)).GetAwaiter().GetResult();

    ServiceEventSource.Current.ServiceTypeRegistered(Process.GetCurrentProcess().Id,
        typeof(mystatefulservice).Name);

    // Prevents this host process from terminating so services keep running.
    Thread.Sleep(Timeout.Infinite);
}
EventSource is a tricky technology; I have been working with it for a while and I always have problems. The configuration looks good, but it is very hard to investigate without access to the environment, so I will make my suggestions.
There are a few catches you must be aware of:
If you are listening to ETW events from a different process, your process must be running as a user that belongs to the 'Performance Log Users' group. Check which identity your service is running under and whether it is part of Performance Log Users, which has permission to create event sessions to listen for these events.
Ensure the events are being emitted correctly and that you can see them in the Diagnostic Events window; if they are not showing there, there is a problem in the provider.
For testing purposes, comment out the line if (IsEnabled()). It is an internal check to validate whether your events should be emitted. I had situations where it was always false and skipped emitting the events; it probably caches the result for a while, and the docs are not clear on how it should work.
Whenever possible, use the EventSource from the NuGet package instead of the framework one; the framework version is full of bugs and lacks fixes found in the NuGet version.
Application Insights is not real-time; sometimes it takes a few minutes to process your events. I would recommend outputting the events to a console or file first and checking that the pipeline is listening correctly; afterwards, enable the Application Insights output.
The link you provided is quite outdated and there's actually a much better way to log application error and exception info to Application Insights. For example, the above won't help with tracking the call hierarchy of an incoming request between multiple services.
Have a look at the Microsoft Application Insights Service Fabric NuGet packages. They work great for:
Sending error and exception info
Populating the application map with all your services and their dependencies (including database)
Reporting on app performance metrics
Tracing service call dependencies end-to-end
Integrating with native as well as non-native SF applications
I am trying to provision some AWS resources, specifically an API Gateway which is connected to a Lambda. I am using Terraform v0.8.8.
I have a module which provisions the Lambda and returns the lambda function ARN as an output, which I then provide as a parameter to the following API Gateway provisioning code (which is based on the example in the TF docs):
provider "aws" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
# Variables
variable "myregion" { default = "eu-west-2" }
variable "accountId" { default = "" }
variable "lambdaArn" { default = "" }
variable "stageName" { default = "lab" }
# API Gateway
resource "aws_api_gateway_rest_api" "api" {
name = "myapi"
}
resource "aws_api_gateway_method" "method" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}"
http_method = "GET"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "integration" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
resource_id = "${aws_api_gateway_rest_api.api.root_resource_id}"
http_method = "${aws_api_gateway_method.method.http_method}"
integration_http_method = "POST"
type = "AWS"
uri = "arn:aws:apigateway:${var.myregion}:lambda:path/2015-03-31/functions/${var.lambdaArn}/invocations"
}
# Lambda
resource "aws_lambda_permission" "apigw_lambda" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = "${var.lambdaArn}"
principal = "apigateway.amazonaws.com"
source_arn = "arn:aws:execute-api:${var.myregion}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*/${aws_api_gateway_method.method.http_method}/resourcepath/subresourcepath"
}
resource "aws_api_gateway_deployment" "deployment" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
stage_name = "${var.stageName}"
}
When I run the above from scratch (i.e. when none of the resources exist) I get the following error:
Error applying plan:
1 error(s) occurred:
* aws_api_gateway_deployment.deployment: Error creating API Gateway Deployment: BadRequestException: No integration defined for method
status code: 400, request id: 15604135-03f5-11e7-8321-f5a75dc2b0a3
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
If I perform a second terraform apply it consistently succeeds, but every time I destroy and re-apply I receive the above error on the first application.
This caused me to wonder if there's a dependency that I need to explicitly declare somewhere. I discovered #7486, which describes a similar pattern (although relating to an aws_api_gateway_integration_response rather than an aws_api_gateway_deployment). I tried manually adding an explicit dependency from the aws_api_gateway_deployment to the aws_api_gateway_integration, but this had no effect.
Grateful for any thoughts, including whether this may indeed be a TF bug, in which case I will raise it in the issue tracker. I thought I'd check with the community before doing so in case I'm missing something obvious.
Many thanks,
Edd
P.S. I've asked this question on the Terraform user group, but that seems to get very little in the way of responses; I've yet to figure out the cause of the issue, hence now asking here.
You are right about the explicit dependency declaration.
Normally Terraform would be able to figure out the relationships and schedule create/update/delete operations accordingly - this is mostly possible because of the interpolation mechanisms under the hood (${resource_type.ref_name.attribute}). You can display the relationships affecting this in a graph via terraform graph.
Unfortunately, in this specific case there's no direct relationship between API Gateway Deployments and Integrations - meaning the API for managing API Gateway resources doesn't require you to reference an integration ID or anything like that to create a deployment, and the aws_api_gateway_deployment resource in turn doesn't require it either.
The documentation for aws_api_gateway_deployment does mention this caveat at the top of the page. Admittedly, the Deployment not only requires the method to exist, but the integration too.
Here's how you can modify your code to get around it:
resource "aws_api_gateway_deployment" "deployment" {
rest_api_id = "${aws_api_gateway_rest_api.api.id}"
stage_name = "${var.stageName}"
depends_on = ["aws_api_gateway_method.method", "aws_api_gateway_integration.integration"]
}
Theoretically the "aws_api_gateway_method.method" is redundant since the integration already references the method in the config:
http_method = "${aws_api_gateway_method.method.http_method}"
so it will be scheduled for creation/update prior to the integration either way, but if you were to change that to something like
http_method = "GET"
then it would become necessary.
I have submitted a PR to update the docs accordingly.