Getting started with KrakenD - kubernetes

I need some beginner help with KrakenD. I am running it on Ubuntu. The config is provided below.
I am able to reach the /healthz API without a problem.
My challenge is that the /hello path returns a 500 error. I want this path to route to a Quarkus app that runs at http://getting-started36-getting-going.apps.bamboutos.hostname.us/.
Why is this not working? If I modify the /hello backend and use a fake host, I get the exact same result. This suggests that KrakenD is not even trying to connect to the backend.
In the logs, KrakenD says:
Error #01: invalid character 'H' looking for beginning of value
kraken.json:
{
  "version": 2,
  "port": 9080,
  "extra_config": {
    "github_com/devopsfaith/krakend-gologging": {
      "level": "DEBUG",
      "prefix": "[KRAKEND]",
      "syslog": false,
      "stdout": true,
      "format": "default"
    }
  },
  "timeout": "3000ms",
  "cache_ttl": "300s",
  "output_encoding": "json",
  "name": "KrakenD API Gateway Service",
  "endpoints": [
    {
      "endpoint": "/healthz",
      "extra_config": {
        "github.com/devopsfaith/krakend/proxy": {
          "static": {
            "data": { "status": "OK" },
            "strategy": "always"
          }
        }
      },
      "backend": [
        {
          "url_pattern": "/",
          "host": ["http://fake-backend"]
        }
      ]
    },
    {
      "endpoint": "/hello",
      "extra_config": {},
      "backend": [
        {
          "url_pattern": "/hello",
          "method": "GET",
          "host": [
            "http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
          ]
        }
      ]
    }
  ]
}
What am I missing?

add "encoding": "string" to the backend section.
"backend": [
{
"url_pattern": "/hello",
"method": "GET",
"encoding": "string" ,
"host": [
"http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
]
}
]
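In context, the /hello endpoint from the kraken.json above would then read as follows (a sketch; only the encoding line is new). With this in place KrakenD no longer tries to parse the plain-text reply as JSON, so the 500 should go away.
{
  "endpoint": "/hello",
  "extra_config": {},
  "backend": [
    {
      "url_pattern": "/hello",
      "method": "GET",
      "encoding": "string",
      "host": [
        "http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
      ]
    }
  ]
}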

Related

How do I use previous step outputs as inputs in Argo Workflows?

I am trying to format my workflow per these instructions (https://argoproj.github.io/argo-workflows/workflow-inputs/#using-previous-step-outputs-as-inputs) but cannot seem to get it right. Specifically, I am trying to imitate "Using Previous Step Outputs As Inputs"
I have included my workflow below. In this version, I have added a path to the inputs.artifacts because the error requests one. The error I am now receiving is:
FATA[2022-02-28T14:14:45.933Z] Failed to submit workflow: templates.entrypoint.tasks.print1 templates.print1.inputs.artifacts.result.from not valid in inputs
Can someone please tell me how to correct this workflow so that it works?
---
{
  "apiVersion": "argoproj.io/v1alpha1",
  "kind": "Workflow",
  "metadata": {
    "annotations": {
      "workflows.argoproj.io/description": "Building from the ground up",
      "workflows.argoproj.io/version": ">= 3.1.0"
    },
    "labels": {
      "workflows.argoproj.io/archive-strategy": "false"
    },
    "name": "data-passing",
    "namespace": "sandbox"
  },
  "spec": {
    "artifactRepositoryRef": {
      "configMap": "my-config",
      "key": "data"
    },
    "entrypoint": "entrypoint",
    "nodeSelector": {
      "kubernetes.io/os": "linux"
    },
    "parallelism": 3,
    "securityContext": {
      "fsGroup": 2000,
      "fsGroupChangePolicy": "OnRootMismatch",
      "runAsGroup": 3000,
      "runAsNonRoot": true,
      "runAsUser": 1000
    },
    "templates": [
      {
        "container": {
          "args": [
            "Hello World"
          ],
          "command": [
            "cowsay"
          ],
          "image": "docker/whalesay:latest",
          "imagePullPolicy": "IfNotPresent"
        },
        "name": "whalesay",
        "outputs": {
          "artifacts": [
            {
              "name": "msg",
              "path": "/tmp/raw"
            }
          ]
        },
        "securityContext": {
          "fsGroup": 2000,
          "fsGroupChangePolicy": "OnRootMismatch",
          "runAsGroup": 3000,
          "runAsNonRoot": true,
          "runAsUser": 1000
        }
      },
      {
        "inputs": {
          "artifacts": [
            {
              "from": "{{tasks.whalesay.outputs.artifacts.msg}}",
              "name": "result",
              "path": "/tmp/raw"
            }
          ]
        },
        "name": "print1",
        "script": {
          "command": [
            "python"
          ],
          "image": "python:alpine3.6",
          "imagePullPolicy": "IfNotPresent",
          "source": "cat {{inputs.artifacts.result}}\n"
        },
        "securityContext": {
          "fsGroup": 2000,
          "fsGroupChangePolicy": "OnRootMismatch",
          "runAsGroup": 3000,
          "runAsNonRoot": true,
          "runAsUser": 1000
        }
      },
      {
        "dag": {
          "tasks": [
            {
              "name": "whalesay",
              "template": "whalesay"
            },
            {
              "arguments": {
                "artifacts": [
                  {
                    "from": "{{tasks.whalesay.outputs.artifacts.msg}}",
                    "name": "result",
                    "path": "/tmp/raw"
                  }
                ]
              },
              "dependencies": [
                "whalesay"
              ],
              "name": "print1",
              "template": "print1"
            }
          ]
        },
        "name": "entrypoint"
      }
    ]
  }
}
...
In the artifact argument of print1, you should only include the name and from parameters.
E.g.:
- name: print1
  arguments:
    artifacts: [{name: result, from: "{{tasks.whalesay.outputs.artifacts.msg}}"}]
and then in your template declaration, you should put name and path in your artifact input, as follows:
- name: input1
  inputs:
    artifacts:
      - name: result
        path: /tmp/raw
...
This works because in the argument of your task (in the dag declaration) you tell the program what you want that input to be called and where to take it from, and in the template declaration you receive the input by name and tell the program where to place it temporarily. (This is how I understand it in my own words.)
Another problem I see is in print1: instead of printing to stdout or using a module to run the cat command, you run cat directly inside a Python script, which (I think) is not possible.
You should instead do something like:
import sys
sys.stdout.write("{{inputs.artifacts.result}}\n")
or
import os
os.system("cat {{inputs.artifacts.result}}\n")
A very similar workflow from the Argo developers/maintainers can be found here:
https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#artifacts
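Putting both corrections together, the relevant pieces of the workflow from the question might look like this in YAML (a sketch using the names from the question; the script simply reads the artifact from the path where Argo unpacks it):
# dag task: only name and from on the artifact argument
- name: print1
  template: print1
  dependencies: [whalesay]
  arguments:
    artifacts:
      - name: result
        from: "{{tasks.whalesay.outputs.artifacts.msg}}"
# template: only name and path on the input artifact
- name: print1
  inputs:
    artifacts:
      - name: result
        path: /tmp/raw
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      import os
      os.system("cat /tmp/raw")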

AWS CloudFormation: calling a Step Function synchronously from API Gateway v1

I am trying to synchronously execute an AWS Step Function via API Gateway. The problem is that with API Gateway V1 I have to use OpenAPI syntax (i.e. Swagger) in order to specify the integrationSubtype parameter, but something just doesn't work. Here is the CloudFormation template I am using:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "restApiName": {
      "Type": "String",
      "Default": "stepApi"
    }
  },
  "Resources": {
    "MyStepFunction": {
      "Type": "AWS::StepFunctions::StateMachine",
      "Properties": {
        "StateMachineName": "HelloWorld-StateMachine",
        "StateMachineType": "EXPRESS",
        "DefinitionString": "{\"Comment\": \"A Hello World example of the Amazon States Language using Pass states\", \"StartAt\": \"Hello\", \"States\": {\"Hello\": { \"Type\": \"Pass\", \"Result\": \"Hello\", \"Next\": \"World\" }, \"World\": { \"Type\": \"Pass\", \"Result\": \"World\", \"End\": true } } }",
        "RoleArn": {
          "Fn::GetAtt": [
            "StepFunctionRole",
            "Arn"
          ]
        }
      }
    },
    "StepFuncGateway": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": {
          "Ref": "restApiName"
        },
        "Body": {
          "openapi": "3.0.1",
          "info": {
            "title": "processFormExample",
            "version": "2020-11-06 15:32:29UTC"
          },
          "paths": {
            "/step": {
              "post": {
                "responses": {
                  "200": {
                    "description": "Pet updated.",
                    "content": {
                      "application/json": {},
                      "application/xml": {}
                    }
                  },
                  "405": {
                    "description": "Method Not Allowed",
                    "content": {
                      "application/json": {},
                      "application/xml": {}
                    }
                  }
                },
                "parameters": [],
                "x-amazon-apigateway-integration": {
                  "integrationSubtype": "StepFunctions-StartSyncExecution",
                  "credentials": {
                    "Fn::GetAtt": [
                      "APIGatewayRole",
                      "Arn"
                    ]
                  },
                  "RequestTemplates": {
                    "application/json": {
                      "Fn::Join": [
                        "",
                        [
                          "#set( $body = $util.escapeJavaScript($input.json('$')) ) \n\n{\"input\": \"$body\",\"name\": \"$context.requestId\",\"stateMachineArn\":\"",
                          {
                            "Ref": "MyStepFunction"
                          },
                          "\"}"
                        ]
                      ]
                    }
                  },
                  "httpMethod": "POST",
                  "payloadFormatVersion": "1.0",
                  "passthroughBehavior": "NEVER",
                  "type": "AWS_PROXY",
                  "connectionType": "INTERNET"
                }
              }
            }
          },
          "x-amazon-apigateway-cors": {
            "allowMethods": [
              "*"
            ],
            "maxAge": 0,
            "allowCredentials": false,
            "allowOrigins": [
              "*"
            ]
          }
        }
      },
      "DependsOn": [
        "APIGatewayRole",
        "MyStepFunction"
      ]
    },
    "APIGatewayRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "apigateway.amazonaws.com"
                ]
              },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "Path": "/",
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs",
          "arn:aws:iam::aws:policy/AWSStepFunctionsFullAccess"
        ]
      }
    },
    "StepFunctionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "",
              "Effect": "Allow",
              "Principal": {
                "Service": "states.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "Path": "/",
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AWSLambdaRole"
        ]
      }
    }
  },
  "Outputs": {
    "HelloWorldApi": {
      "Description": "Sync WF API endpoint",
      "Value": {
        "Fn::Sub": "https://${StepFuncGateway}.execute-api.${AWS::Region}.amazonaws.com/step"
      }
    }
  }
}
The error I am seeing is the following:
Errors found during import: Unable to put integration on 'POST' for
resource at path '/step': Invalid integration URI specified (Service:
AmazonApiGateway; Status Code: 400; Error Code: BadRequestException;
Request ID: 0c74acf9-147f-4561-9f4f-e457096c5533; Proxy: null)
I am out of ideas. Please help me to fix it.
UPDATE:
I had to add the following code to the x-amazon-apigateway-integration section and change type to AWS:
"uri": {
"Fn::Join": [
"",
[
"arn:aws:apigateway:",
{
"Ref": "AWS::Region"
},
":states:action/StartSyncExecution"
]
]
},
Another thing I had to fix is RequestTemplates: it should start with a lowercase r (requestTemplates). After these changes the stack deployed correctly, but now I have a throttling problem to solve.
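For reference, with those fixes applied the integration block might look roughly like this (a sketch assembled from the template above and this update; the requestTemplates value is the same Fn::Join mapping template as above, abbreviated here):
"x-amazon-apigateway-integration": {
  "type": "AWS",
  "httpMethod": "POST",
  "uri": {
    "Fn::Join": [
      "",
      [
        "arn:aws:apigateway:",
        { "Ref": "AWS::Region" },
        ":states:action/StartSyncExecution"
      ]
    ]
  },
  "credentials": {
    "Fn::GetAtt": ["APIGatewayRole", "Arn"]
  },
  "requestTemplates": {
    "application/json": "... same Fn::Join mapping template as above ..."
  },
  "passthroughBehavior": "NEVER"
}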
x-amazon-apigateway-integration is missing the uri property.
From the Amazon Developer Guide, the URI property is defined as:
The endpoint URI of the backend. For integrations of the aws type,
this is an ARN value. For the HTTP integration, this is the URL of the
HTTP endpoint including the https or http scheme.
For example:
"x-amazon-apigateway-integration": {
"type": "AWS_PROXY",
"httpMethod": "POST",
"uri": "http://petstore.execute-api.us-west-1.amazonaws.com/petstore/pets",
"payloadFormatVersion": 1.0,
"otherPropterties": "go here"
}
Amazon has additional information on URI definitions here. (Copied for convenience)
For HTTP or HTTP_PROXY integrations, the URI must be a fully formed, encoded HTTP(S) URL according to the RFC-3986 specification, for either standard integration, where connectionType is not VPC_LINK, or private integration, where connectionType is VPC_LINK. For a private HTTP integration, the URI is not used for routing.
For AWS or AWS_PROXY integrations, the URI is of the form arn:aws:apigateway:{region}:{subdomain.service|service}:path|action/{service_api}. Here, {Region} is the API Gateway region (e.g., us-east-1); {service} is the name of the integrated AWS service (e.g., s3); and {subdomain} is a designated subdomain supported by certain AWS service for fast host-name lookup. action can be used for an AWS service action-based API, using an Action={name}&{p1}={v1}&p2={v2}... query string. The ensuing {service_api} refers to a supported action {name} plus any required input parameters. Alternatively, path can be used for an AWS service path-based API. The ensuing service_api refers to the path to an AWS service resource, including the region of the integrated AWS service, if applicable. For example, for integration with the S3 API of GetObject, the uri can be either arn:aws:apigateway:us-west-2:s3:action/GetObject&Bucket={bucket}&Key={key} or arn:aws:apigateway:us-west-2:s3:path/{bucket}/{key}
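Applied to this question, that action-based form is what the update above uses for the synchronous Step Functions call, e.g. (region hard-coded here only for illustration; the template builds it from AWS::Region):
"uri": "arn:aws:apigateway:us-east-1:states:action/StartSyncExecution"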

Envoy External Authorization with OPA - evaluate fail with large JSON body

I have a k8s pod running 3 containers: my app, OPA, and Envoy.
My whole setup follows this guide: https://www.openpolicyagent.org/docs/latest/envoy-authorization/
Everything went well until I sent a ~15 KB JSON body.
Checking the OPA container log, I see only about half of the JSON in request.http.body.
{
  "decision_id": "",
  "error": {},
  "input": {
    "attributes": {
      "destination": {
        "address": {
          "Address": {
            "SocketAddress": {
              "PortSpecifier": {
                "PortValue": 8000
              },
              "address": "10.244.8.102"
            }
          }
        }
      },
      "request": {
        "http": {
          "body": "only half of JSON body come here",
          "headers": {
            ":authority": "api-service.com",
            ":method": "PUT",
            ":path": "/api",
            "accept": "application/json",
            "content-length": "14822",
            "content-type": "application/json",
            "x-envoy-decorator-operation": "....",
            "x-envoy-internal": "true",
            "x-forwarded-for": "10.244.6.0",
            "x-forwarded-proto": "https",
            "x-istio-attributes": "..."
          },
          "host": "....com",
          "id": "12114967460600931537",
          "method": "PUT",
          "path": "/api",
          "size": 14822
        }
      },
      "source": {
        "address": {
          "Address": {
            "SocketAddress": {
              "PortSpecifier": {
                "PortValue": 34670
              },
              "address": "10.244.3.164"
            }
          }
        }
      }
    },
    "parsed_path": [
      "api"
    ],
    "parsed_query": {}
  },
  "level": "info",
  "msg": "Decision Log",
  "query": "data.app.allow",
  "type": "openpolicyagent.org/decision_logs"
}
I tried increasing with_request_body:
http_filters:
- name: envoy.ext_authz
  config:
    with_request_body:
      max_request_bytes: 819200
      allow_partial_message: true
    failure_mode_allow: false
Is there anything else I missed?
Thanks a lot for your help.
Are there any errors in the Envoy logs?
What is the data that you are trying to send? Does it need to be part of OPA's input document, or can you leverage OPA's bundle feature?
I finally made it work by increasing max_request_bytes:
- name: envoy.ext_authz
  config:
    with_request_body:
      max_request_bytes: 819200
I had configured this before in the ConfigMap but forgot to restart the pod. After redeploying everything with the new max_request_bytes, it works now.
Reference: https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/buffer/v3/buffer.proto.html?highlight=max_request_bytes
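For newer Envoy versions that have removed the legacy config field, the same settings go under typed_config; a minimal sketch, assuming the v3 ext_authz filter and an OPA-Envoy gRPC listener on 127.0.0.1:9191 (adjust to your actual setup):
http_filters:
- name: envoy.filters.http.ext_authz
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
    transport_api_version: V3
    failure_mode_allow: false
    with_request_body:
      max_request_bytes: 819200      # must cover the largest expected body
      allow_partial_message: true
    grpc_service:
      google_grpc:
        target_uri: "127.0.0.1:9191"  # assumed OPA-Envoy address
        stat_prefix: ext_authz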
Thank you all

Problems POSTing chaincode (smart contract) to Hyperledger Fabric using the API

I've deployed the hyperledger-fabric service on Bluemix and obtained the credentials from there, one line looks like this:
{"enrollId":"user_type1_0","enrollSecret":"XXXXX","group":"group1","affiliation":"0001","username":"user_type1_0","secret":"XXXXX"}
I post the following to the "registrar" REST endpoint:
Secret: { "enrollId": "user_type1_0", "enrollSecret": "xxxxx" }
I get this response:
{ "OK": "Login successful for user 'user_type1_0'." }
Then I try to register some chaincode by POSTing the following to the chaincode REST endpoint:
QuerySpec {
  "jsonrpc": "2.0",
  "method": "deploy",
  "params": {
    "type": 1,
    "chaincodeID": {
      "path": "https://github.com/ibm-blockchain/learn-chaincode/finished"
    },
    "ctorMsg": {
      "function": "init",
      "args": [
        "hi there"
      ]
    },
    "secureContext": "user_type1_0_xxxxx"
  },
  "id": 1
}
I get this response:
{ "jsonrpc": "2.0", "error": {
"code": -32000,
"message": "Registration missing",
"data": "User not logged in. Use the '/registrar' endpoint to obtain a security token." }, "id": 1 }
Any idea?
Fabric expects you to provide the enrollment ID as the security context, but you are trying to use "ID+Pass".
Can you try running your deploy command with a different secureContext value?
QuerySpec { "jsonrpc": "2.0", "method": "deploy", "params": { "type": 1, "chaincodeID": { "path": "https://github.com/ibm-blockchain/learn-chaincode/finished" }, "ctorMsg": { "function": "init", "args": [ "hi there" ] }, "secureContext": "user_type1_0" }, "id": 1 }

Simple REST API Call from logic app - Azure

First of all, I want you to know that I am new to Azure.
Recently, I have been trying to work with Azure Logic Apps.
My goal is to make a simple REST API call from the HTTP trigger (from Microsoft) and mail the response JSON via the Office 365 connector.
Here is my code:
{
  ..
  .
  .
  "triggers": {
    "http": {
      "recurrence": {
        "frequency": "Day",
        "interval": 1
      },
      "type": "Http",
      "inputs": {
        "method": "POST",
        "headers": {
          "Content-Type": "application/json"
        },
        "uri": "http://xxx/wcf/myrestservice.svc/is_online"
      }
    }
  },
  "actions": {
    "office365connector": {
      "type": "ApiApp",
      "inputs": {
        "apiVersion": "2015-01-14",
        "host": {
          "id": "/subscriptions/xxx/resourcegroups/resourcegroup1/providers/Microsoft.AppService/apiapps/office365connector",
          "gateway": "https://xxx.azurewebsites.net"
        },
        "operation": "SendMail",
        "parameters": {
          "message": {
            "To": "xxx#example.com",
            "Subject": "My Service Status",
            "Importance": "High",
            "Body": "Hi #{triggers().outputs.body.Is_OnlineResult}"
          }
        },
        "authentication": {
          "type": "Raw",
          "scheme": "Zumo",
          "parameter": "#parameters('/subscriptions/xxx/resourcegroups/resourcegroup1/providers/Microsoft.AppService/apiapps/office365connector/token')"
        }
      },
      "conditions": []
    }
  },
  "outputs": {}
}
I am wondering how I can get the response of the HTTP call.
Then I want to send it in the mail body.
Please correct me if I am going in the wrong direction. Any response from you will be very helpful to me.
Manish!
Have you tried using "Content"?
#triggers().outputs.body.Content
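If the trigger output does expose the body under Content, it would be referenced from the SendMail parameters in the question like this (a sketch; the exact property name depends on what the HTTP trigger actually returns):
"Body": "Hi #{triggers().outputs.body.Content}"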