Can APIM policy fragments be imported/exported?

I've read the documentation, and while the policy fragment idea seems good for code reuse, the service doesn't seem to provide a way to deploy fragments in an automated way.
I've even exported the entire configuration of the APIM instance to Git and could not find my policy fragment.

It seems to be a very recent feature. We had the same problem, and as a first approach we decided to use Terraform to deploy policy fragments from the dev environment to the staging and production environments.
https://learn.microsoft.com/es-mx/azure/templates/microsoft.apimanagement/2021-12-01-preview/service/policyfragments?pivots=deployment-language-terraform
$computer> cat main.tf
terraform {
  required_providers {
    azapi = {
      source = "azure/azapi"
    }
  }
}
provider "azapi" {
}
resource "azapi_resource" "symbolicname" {
  type = "Microsoft.ApiManagement/service/policyFragments#2021-12-01-preview"
  name = “fragmentpolicyname”
  parent_id = "/subscriptions/[subscriptionid]/resourceGroups/[resourcegroupname]/providers/Microsoft.ApiManagement/service/[apimanagementservicename]”
  body = jsonencode({
    properties = {
      description = “fragment policy description”
      format = "xml" # it could also be rawxml
      value = <<EOF
<!--
    IMPORTANT:
    - Policy fragments are included as-is whenever they are referenced.
    - If using variables, ensure they are set up before use.
    - Copy and paste your code here or simply start coding
 -->
 <fragment>
        <!-- some magical code here that you will use in a lot of policies -->
 </fragment>
EOF
    }
  })
}
terraform init
terraform plan
terraform apply
You can integrate this step into your Azure DevOps pipeline.
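For example, a minimal sketch of the inline script for an Azure DevOps Bash task (assuming Terraform is installed on the agent and the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID credentials are supplied as pipeline variables; the flags and file names here are illustrative, not part of the original answer):
# Hypothetical inline script for an Azure DevOps Bash task.
# Assumes Terraform is on the agent and ARM_* credentials are exported as pipeline variables.
set -euo pipefail
terraform init
terraform plan -out=tfplan
terraform apply -auto-approve tfplan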

Related

Duplicate builds leading to wrong commit status in GitHub

This issue is also described in https://issues.jenkins.io/browse/JENKINS-70459
When using Jenkins, we noticed that the wrong pipeline status is often reported in GitHub PRs.
Further investigation showed very odd behavior. We have not yet found the cause of this problem (random?).
The 'Detail' link leads to the build which is successful.
Now comes the odd thing: the Jenkins log shows that the same build id was built twice!
First, it ran successfully (trigger: PR Update). Here is an excerpt from the log:
{
   build_number: 2
   build_url: job/(...)/PR-2906/2/
   event_tag: job_event
   job_duration: 1108.635
   job_name: (...)/PR-2906
   job_result: SUCCESS
   job_started_at: 2023-01-19T14:41:14Z
   job_type: Pipeline
   label: master
   metadata: { ... }
   node: (built-in)
   queue_id: 1781283
   queue_time: 5.063
   scm: git
   test_summary: { ... }
   trigger_by: Pull request #2906 updated
   type: completed
   upstream:
   user: anonymous
}
Then another run, under the exact same build id/URL, appears in the log:
{
   build_number: 2
   build_url: job/(...)/PR-2906/2/
   event_tag: job_event
   job_duration: 1.959
   job_name: (...)/PR-2906
   job_result: FAILURE
   job_started_at: 2023-01-20T07:14:50Z
   job_type: Pipeline
   label: master
   node: (built-in)
   queue_id: 2261495
   queue_time: 7.613
   test_summary: { ... }
   trigger_by: Branch indexing
   type: completed
   upstream:
   user: anonymous
}
Notice that the trigger is now "Branch indexing". We do not know why this build happens but it is likely the root cause of this issue.
The failed build is not displayed in the Jenkins UI and the script console also returns #2 as the last successful build. We assume that this "corrupt" build is reported to GitHub. Does anyone have any ideas how this may happen? Any ideas are very welcome!
We checked our logs and tried to reproduce this behaviour, unsuccessfully so far.
Are you using the Multibranch Pipeline plugin?
By default, Jenkins will not automatically re-index the repository for branch additions or deletions (unless using an Organization Folder), so it is often useful to configure a Multibranch Pipeline to periodically re-index in the configuration.
Source: https://www.jenkins.io/doc/book/pipeline/multibranch/
Maybe this can also help: What are "Branch indexing" activities in Jenkins BlueOcean

GCP: using image from one account's Artifact Registry on other account

Hello, and I wish you a great time!
I've got the following Terraform service account definition:
resource "google_service_account" "gke_service_account" {
project = var.context
account_id = var.gke_account_name
display_name = var.gke_account_description
}
I use it in a GCP Kubernetes node pool:
resource "google_container_node_pool" "gke_node_pool" {
name = "${var.context}-gke-node"
location = var.region
project = var.context
cluster = google_container_cluster.gke_cluster.name
management {
auto_repair = "true"
auto_upgrade = "true"
}
autoscaling {
min_node_count = var.gke_min_node_count
max_node_count = var.gke_max_node_count
}
initial_node_count = var.gke_min_node_count
node_config {
machine_type = var.gke_machine_type
service_account = google_service_account.gke_service_account.email
metadata = {
disable-legacy-endpoints = "true"
}
# Needed for correctly functioning cluster, see
# https://www.terraform.io/docs/providers/google/r/container_cluster.html#oauth_scopes
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/servicecontrol",
"https://www.googleapis.com/auth/cloud-platform"
]
}
}
However, the current solution requires the prod and dev environments to live in different GCP accounts while using the same image from the prod Artifact Registry.
For now I have a JSON key file for a service account in prod that has access to its registry. Maybe there's a clean way to use the JSON file as a second service account for Kubernetes, or to update the current k8s service account with the JSON file so it gains additional permissions on the remote registry?
I've seen solutions like putting the key into a secret or using a cross-account service account (the secret-based approach is sketched below for reference).
But that's not the way I want to resolve it, since we have some internal restrictions.
Hope someone has faced a similar task and has a solution to share; it will save me real time.
Thanks in advance!
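For reference, the secret-based approach mentioned above (which the question prefers to avoid) usually looks roughly like the sketch below; the registry host, secret name, namespace and key file name are placeholders, not values from the question:
# Sketch of the "put the key into a secret" approach (not the asker's preferred route).
# _json_key is the fixed username for key-file authentication against Artifact Registry.
kubectl create secret docker-registry prod-registry-pull \
  --namespace=default \
  --docker-server=europe-docker.pkg.dev \
  --docker-username=_json_key \
  --docker-password="$(cat prod-sa-key.json)" \
  --docker-email=unused@example.com
The secret is then referenced via imagePullSecrets in the pod spec or attached to the namespace's default service account.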

Azure DevOps REST api - Run pipeline with variables

I have a pipeline on Azure DevOps that I'm trying to run programmatically/headless using the REST API: https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run%20pipeline?view=azure-devops-rest-6.0
So far so good: I can auth and start a run. I would like to pass data to this pipeline, which the docs suggest is possible using variables in the request body. My request body:
{
  "variables": {
    "HELLO_WORLD": {
      "isSecret": false,
      "value": "HelloWorldValue"
    }
  }
}
My pipeline YAML looks like this:
trigger: none
pr: none
pool:
  vmImage: 'ubuntu-latest'
steps:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      KEY=$(HELLO_WORLD)
      echo "Hello world key: " $KEY
This, however, gives me the error "HELLO_WORLD: command not found".
I have tried adding a "HELLO_WORLD" variable to the pipeline and enabling the "Let users override this value when running this pipeline" setting. This results in the HELLO_WORLD variable no longer being unknown, but instead it's stuck on its initial value and not set when I trigger a run with the REST API.
How do you pass variables to a pipeline using the REST API? It is important that the variable value is set only for a specific run/build.
I found another API to run a build, but it seems you cannot use Personal Access Token auth with it like you can with the pipeline API, only OAuth2: https://learn.microsoft.com/en-us/rest/api/azure/devops/build/builds/queue?view=azure-devops-rest-6.0
You can do it with both the Runs API and the Build Queue API; both work with Personal Access Tokens. For which one is the better/preferred option, see this question: Difference between Azure Devops Builds - Queue vs run pipeline REST APIs. In short, the Runs API is the more future-proof option.
Option 1: Runs API
POST https://dev.azure.com/{{organization}}/{{project}}/_apis/pipelines/{{PipelineId}}/runs?api-version=6.0-preview.1
Your body will be of type application/json (the Content-Type HTTP header is set to application/json) and similar to the one below; just replace resources.repositories.self.refName with the appropriate value. A curl sketch follows the body.
{
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/main"
      }
    }
  },
  "variables": {
    "HELLO_WORLD": {
      "isSecret": false,
      "value": "HelloWorldValue"
    }
  }
}
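As a rough sketch, the same request with curl and a PAT (ORG, PROJECT, PIPELINE_ID and PAT are placeholder environment variables; any username works for basic auth, the PAT is the password):
# Sketch: trigger a run via the Runs API, authenticating with a PAT over basic auth.
curl -X POST \
  -u "user:${PAT}" \
  -H "Content-Type: application/json" \
  -d '{
        "resources": { "repositories": { "self": { "refName": "refs/heads/main" } } },
        "variables": { "HELLO_WORLD": { "isSecret": false, "value": "HelloWorldValue" } }
      }' \
  "https://dev.azure.com/${ORG}/${PROJECT}/_apis/pipelines/${PIPELINE_ID}/runs?api-version=6.0-preview.1"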
Option 2: Build API
POST https://dev.azure.com/{{organization}}/{{project}}/_apis/build/builds?api-version=6.0
Your body will be of type application/json (the Content-Type HTTP header is set to application/json), similar to the one below; just replace definition.id and sourceBranch with appropriate values. Please also note the "stringified" content of the parameters section (it must be a string representation of a JSON map). A curl sketch follows the body.
{
  "parameters": "{\"HELLO_WORLD\":\"HelloWorldValue\"}",
  "definition": {
    "id": 1
  },
  "sourceBranch": "refs/heads/main"
}
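And a rough curl sketch for this option (same placeholder variables as above); note that the escaped quotes inside parameters are passed through literally by the single-quoted shell string, which is what the API expects:
# Sketch: queue a build via the Build API; "parameters" is a JSON string, not a JSON object.
curl -X POST \
  -u "user:${PAT}" \
  -H "Content-Type: application/json" \
  -d '{
        "parameters": "{\"HELLO_WORLD\":\"HelloWorldValue\"}",
        "definition": { "id": 1 },
        "sourceBranch": "refs/heads/main"
      }' \
  "https://dev.azure.com/${ORG}/${PROJECT}/_apis/build/builds?api-version=6.0"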
Here's the way I solved it....
The REST call:
POST https://dev.azure.com/<myOrg>/<myProject>/_apis/pipelines/17/runs?api-version=6.0-preview.1
 
The body of the request:
{
    "resources": {
        "repositories": {
            "self": {
                "refName": "refs/heads/main"
            }
        }
    },
    "templateParameters": {
        "A_Parameter": "And now for something completely different."
    }
}
Note: I added an authorization header with basic auth containing a username (any name will do) and password (your PAT token value). Also added a Content-Type application/json header.
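In curl terms, that request looks roughly like this (PAT is a placeholder; any username will do as the basic-auth user):
# Sketch of the request described above, with basic auth and the templateParameters body.
curl -X POST \
  -u "anyname:${PAT}" \
  -H "Content-Type: application/json" \
  -d '{
        "resources": { "repositories": { "self": { "refName": "refs/heads/main" } } },
        "templateParameters": { "A_Parameter": "And now for something completely different." }
      }' \
  "https://dev.azure.com/<myOrg>/<myProject>/_apis/pipelines/17/runs?api-version=6.0-preview.1"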
 
Here's the entire yaml pipeline I used:
 
parameters:
- name: A_Parameter
  displayName: A parameter
  default: noValue
  type: string
 
trigger:
- none
 
pool:
  vmImage: ubuntu-latest
 
steps:
 
- script: |
    echo '1 - using dollar sign parens, p dot A_Parameter is now: ' $(parameters.A_Parameter)
    echo '2 - using dollar sign double curly braces, p dot A_Parameter is now::' ${{ parameters.A_Parameter }} '::'
    echo '3 - using dollar sign and only the var name: ' $(A_Parameter)
  displayName: 'Run a multi-line script'
 
 
And here's the output from the pipeline log. Note that only the second way properly displayed the value.  
 
1 - using dollar sign parens, p dot A_Parameter is now: 
2 - using dollar sign double curly braces, p dot A_Parameter is now:: And now for something completely different. :: 
3 - using dollar sign and only the var name:

Terraform: module outputs not being recognised as variables

I think this is just a quick sanity check; maybe my eyes are getting confused. I'm breaking a monolithic Terraform file into modules.
My main.tf calls just two modules: gke for Google Kubernetes Engine, and storage, which creates a persistent volume on the cluster created previously.
Module gke has an outputs.tf which outputs the following:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
Then in the main.tf for the storage module, I have:
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
host = "${var.host}"
Then in the root main.tf I have the following:
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
host = "${module.gke.host}"
From what I can see, it looks right. The values for the certs, key and host should be output from the gke module by outputs.tf, picked up by the root main.tf, and then delivered to storage as regular variables.
Have I got it the wrong way around? Or am I just going crazy? Something doesn't seem right.
Terraform prompts me for those variables, saying they are not filled, when I run a plan.
EDIT:
Adding some additional information including my code.
If I manually add dummy entries for the variables it's asking for, I get the following error:
Macbook: $ terraform plan
var.client_certificate
Enter a value: 1
var.client_key
Enter a value: 2
var.cluster_ca_certificate
Enter a value: 3
var.host
Enter a value: 4
...
(filtered out usual text)
...
* module.storage.data.google_container_cluster.kube-cluster: 1 error(s) occurred:
* module.storage.data.google_container_cluster.kube-cluster: data.google_container_cluster.kube-cluster: project: required field is not set
It looks like it's complaining that the data.google_container_cluster data source needs the project attribute. But it shouldn't; that's not a field I'd expect to set on the resource. It is for the provider, and it's already filled out for the provider.
Code below:
Folder structure:
root-folder/
├── gke/
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
├── storage/
│ ├── main.tf
│ └── variables.tf
├── main.tf
├── staging.json
├── terraform.tfvars
└── variables.tf
root-folder/gke/main.tf:
provider "google" {
credentials = "${file("staging.json")}"
project = "${var.project}"
region = "${var.region}"
zone = "${var.zone}"
}
resource "google_container_cluster" "kube-cluster" {
name = "kube-cluster"
description = "kube-cluster"
zone = "europe-west2-a"
initial_node_count = "2"
enable_kubernetes_alpha = "false"
enable_legacy_abac = "true"
master_auth {
username = "${var.username}"
password = "${var.password}"
}
node_config {
machine_type = "n1-standard-2"
disk_size_gb = "20"
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring"
]
}
}
root-folder/gke/outputs.tf:
output "client_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_certificate}"
sensitive = true
}
output "client_key" {
value = "${google_container_cluster.kube-cluster.master_auth.0.client_key}"
sensitive = true
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.kube-cluster.master_auth.0.cluster_ca_certificate}"
sensitive = true
}
output "host" {
value = "${google_container_cluster.kube-cluster.endpoint}"
sensitive = true
}
root-folder/gke/variables.tf:
variable "region" {
description = "GCP region, e.g. europe-west2"
default = "europe-west2"
}
variable "zone" {
description = "GCP zone, e.g. europe-west2-a (which must be in gcp_region)"
default = "europe-west2-a"
}
variable "project" {
description = "GCP project name"
}
variable "username" {
description = "Default admin username"
}
variable "password" {
description = "Default admin password"
}
/root-folder/storage/main.tf:
provider "kubernetes" {
host = "${var.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
}
data "google_container_cluster" "kube-cluster" {
name = "${var.cluster_name}"
zone = "${var.zone}"
}
resource "kubernetes_storage_class" "kube-storage-class" {
metadata {
name = "kube-storage-class"
}
storage_provisioner = "kubernetes.io/gce-pd"
parameters {
type = "pd-standard"
}
}
resource "kubernetes_persistent_volume_claim" "kube-claim" {
metadata {
name = "kube-claim"
}
spec {
access_modes = ["ReadWriteOnce"]
storage_class_name = "kube-storage-class"
resources {
requests {
storage = "10Gi"
}
}
}
}
/root-folder/storage/variables.tf:
variable "username" {
description = "Default admin username."
}
variable "password" {
description = "Default admin password."
}
variable "client_certificate" {
description = "Client certificate, output from the GKE/Provider module."
}
variable "client_key" {
description = "Client key, output from the GKE/Provider module."
}
variable "cluster_ca_certificate" {
description = "Cluster CA Certificate, output from the GKE/Provider module."
}
variable "cluster_name" {
description = "Cluster name."
}
variable "zone" {
description = "GCP Zone"
}
variable "host" {
description = "Host endpoint, output from the GKE/Provider module."
}
/root-folder/main.tf:
module "gke" {
source = "./gke"
project = "${var.project}"
region = "${var.region}"
username = "${var.username}"
password = "${var.password}"
}
module "storage" {
source = "./storage"
host = "${module.gke.host}"
username = "${var.username}"
password = "${var.password}"
client_certificate = "${module.gke.client_certificate}"
client_key = "${module.gke.client_key}"
cluster_ca_certificate = "${module.gke.cluster_ca_certificate}"
cluster_name = "${var.cluster_name}"
zone = "${var.zone}"
}
/root-folder/variables.tf:
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}
variable "gc_disk_size" {}
variable "kpv_vol_size" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
variable "cluster_name" {}
variable "zone" {}
I won't paste the contents of my staging.json and terraform.tfvars for obvious reasons :)
In your /root-folder/variables.tf, delete the following entries:
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}
Those are not variables per se that the Terraform code at the root level needs; instead, they are passed directly as one module's outputs into the second module's inputs.

Having trouble getting usable results from Watson's Document Conversion service

When I try to convert this document
https://public.dhe.ibm.com/common/ssi/ecm/po/en/poq12347usen/POQ12347USEN.PDF
with Watson's Document Conversion service, all I get is four answer units, one for each level-4 heading. What I really need is 47 answer units, one for each FAQ question. How can I achieve this?
Often a custom configuration can be used to produce more usable results for a document such as this one.
The custom configuration can be passed to Document Conversion in a config form part on the request (a rough curl sketch of such a request follows the example configuration below).
Please refer to the documentation (https://www.ibm.com/watson/developercloud/doc/document-conversion/customizing.shtml)
for more details on the available options. In this particular case, the following seems to give improved results:
{
  "conversion_target": "ANSWER_UNITS",
  "pdf": {
    "heading": {
      "fonts": [
        {"level": 1, "min_size": 14, "max_size": 80},
        {"level": 2, "min_size": 11, "max_size": 12, "bold": true},
        {"level": 3, "min_size": 9, "max_size": 11, "bold": true}
      ]
    }
  }
}
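For illustration only, a request passing this configuration as a config form part might look like the sketch below; the endpoint URL, version date and credential style are assumptions based on the Document Conversion documentation of that era, so check the linked docs for the exact values:
# Hedged sketch: send the PDF plus the custom configuration as multipart form parts.
# The endpoint, version date and credentials below are placeholders/assumptions.
curl -X POST \
  -u "${DC_USERNAME}:${DC_PASSWORD}" \
  -F "config=@config.json;type=application/json" \
  -F "file=@POQ12347USEN.PDF;type=application/pdf" \
  "https://gateway.watsonplatform.net/document-conversion/api/v1/convert_document?version=2015-12-15"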