Ory Kratos can't start on Kubernetes with Helm

Whenever I try to install Ory Kratos with Helm on Kubernetes, it doesn't work.
Here is my values.yaml file
kratos:
  config:
    dsn: postgres://admin:Strongpassword#10.43.90.243:5432/postgres_db
    secrets:
      cookie:
        - randomsecret
      cipher:
        - randomsecret
      default:
        - randomsecret
    identity:
      default_schema_id: default
      schemas:
        - id: default
          url: file:///etc/config/identity.default.schema.json
    courier:
      smtp:
        connection_uri: smtps://username:password#smtp.gmail.com
    selfservice:
      default_browser_return_url: http://127.0.0.1:4455/
  automigration:
    enabled: true
identitySchemas:
  'identity.default.schema.json': |
    {
      "$id": "https://schemas.ory.sh/presets/kratos/identity.email.schema.json",
      "$schema": "http://json-schema.org/draft-07/schema#",
      "title": "Person",
      "type": "object",
      "properties": {
        "traits": {
          "type": "object",
          "properties": {
            "email": {
              "type": "string",
              "format": "email",
              "title": "E-Mail",
              "ory.sh/kratos": {
                "credentials": {
                  "password": {
                    "identifier": true
                  }
                },
                "recovery": {
                  "via": "email"
                },
                "verification": {
                  "via": "email"
                }
              }
            }
          },
          "required": [
            "email"
          ],
          "additionalProperties": false
        }
      }
    }
I run the command helm install kratos -f values.yaml ory/kratos. It pauses for a while and then outputs:
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
It then creates a job that repeatedly spawns kratos-automigrate pods; each pod crashes with status "Error" within a couple of minutes and is replaced by a new one.
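For reference, Kratos expects dsn and courier.smtp.connection_uri to be ordinary URLs with an @ between the credentials and the host, so if the # characters above are literal rather than just masking, the automigration job would never be able to reach Postgres. A minimal sketch of the two values in the expected shape, reusing the placeholders from the file above:
kratos:
  config:
    dsn: postgres://admin:Strongpassword@10.43.90.243:5432/postgres_db
    courier:
      smtp:
        connection_uri: smtps://username:password@smtp.gmail.com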

Related

How do I use Argo Workflows Using Previous Step Outputs As Inputs?

I am trying to format my workflow per these instructions (https://argoproj.github.io/argo-workflows/workflow-inputs/#using-previous-step-outputs-as-inputs), but cannot seem to get it right. Specifically, I am trying to imitate "Using Previous Step Outputs As Inputs".
I have included my workflow below. In this version, I have added a path to the inputs.artifacts because the error requests one. The error I am now receiving is:
FATA[2022-02-28T14:14:45.933Z] Failed to submit workflow: templates.entrypoint.tasks.print1 templates.print1.inputs.artifacts.result.from not valid in inputs
Can someone please tell me how to correct this workflow so that it works?
---
{
  "apiVersion": "argoproj.io/v1alpha1",
  "kind": "Workflow",
  "metadata": {
    "annotations": {
      "workflows.argoproj.io/description": "Building from the ground up",
      "workflows.argoproj.io/version": ">= 3.1.0"
    },
    "labels": {
      "workflows.argoproj.io/archive-strategy": "false"
    },
    "name": "data-passing",
    "namespace": "sandbox"
  },
  "spec": {
    "artifactRepositoryRef": {
      "configMap": "my-config",
      "key": "data"
    },
    "entrypoint": "entrypoint",
    "nodeSelector": {
      "kubernetes.io/os": "linux"
    },
    "parallelism": 3,
    "securityContext": {
      "fsGroup": 2000,
      "fsGroupChangePolicy": "OnRootMismatch",
      "runAsGroup": 3000,
      "runAsNonRoot": true,
      "runAsUser": 1000
    },
    "templates": [
      {
        "container": {
          "args": [
            "Hello World"
          ],
          "command": [
            "cowsay"
          ],
          "image": "docker/whalesay:latest",
          "imagePullPolicy": "IfNotPresent"
        },
        "name": "whalesay",
        "outputs": {
          "artifacts": [
            {
              "name": "msg",
              "path": "/tmp/raw"
            }
          ]
        },
        "securityContext": {
          "fsGroup": 2000,
          "fsGroupChangePolicy": "OnRootMismatch",
          "runAsGroup": 3000,
          "runAsNonRoot": true,
          "runAsUser": 1000
        }
      },
      {
        "inputs": {
          "artifacts": [
            {
              "from": "{{tasks.whalesay.outputs.artifacts.msg}}",
              "name": "result",
              "path": "/tmp/raw"
            }
          ]
        },
        "name": "print1",
        "script": {
          "command": [
            "python"
          ],
          "image": "python:alpine3.6",
          "imagePullPolicy": "IfNotPresent",
          "source": "cat {{inputs.artifacts.result}}\n"
        },
        "securityContext": {
          "fsGroup": 2000,
          "fsGroupChangePolicy": "OnRootMismatch",
          "runAsGroup": 3000,
          "runAsNonRoot": true,
          "runAsUser": 1000
        }
      },
      {
        "dag": {
          "tasks": [
            {
              "name": "whalesay",
              "template": "whalesay"
            },
            {
              "arguments": {
                "artifacts": [
                  {
                    "from": "{{tasks.whalesay.outputs.artifacts.msg}}",
                    "name": "result",
                    "path": "/tmp/raw"
                  }
                ]
              },
              "dependencies": [
                "whalesay"
              ],
              "name": "print1",
              "template": "print1"
            }
          ]
        },
        "name": "entrypoint"
      }
    ]
  }
}
...
In the artifact argument of print1, you should only put the name and from parameters (the name must match the input artifact declared in the template, result here).
E.g.:
- name: print1
  arguments:
    artifacts: [{name: result, from: "{{tasks.whalesay.outputs.artifacts.msg}}"}]
and then in your template declaration, you should put name and path in your artifact input, as follows:
- name: print1
  inputs:
    artifacts:
      - name: result
        path: /tmp/raw
...
This works because in the argument of your task (in the dag declaration) you tell Argo what the input is called and where to take it from, while in the template declaration you receive the input by name and tell Argo where to place it inside the container. (This is how I understand it in my own words.)
Another problem I see is in print1: instead of printing to stdout or using sys/os to run the cat command, you run cat directly as Python source, which (I think) is not possible.
You should instead do something like
import sys
sys.stdout.write("{{inputs.artifacts.result}}\n")
or
import os
os.system("cat {{inputs.artifacts.result}}\n")
A very similar workflow from the Argo developers/maintainers can be found here:
https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#artifacts
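Putting all of the above together, a corrected version of the whole workflow might look like the following. This is a sketch in YAML with the same names as the JSON above; note the added tee, so that whalesay actually writes the file its msg output artifact points at, and that print1 reads the artifact from its mounted path rather than relying on template substitution:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: data-passing
  namespace: sandbox
spec:
  entrypoint: entrypoint
  artifactRepositoryRef:
    configMap: my-config
    key: data
  templates:
    - name: whalesay
      container:
        image: docker/whalesay:latest
        command: [sh, -c]
        # cowsay only prints to stdout, so tee the output into the artifact's path
        args: ['cowsay "Hello World" | tee /tmp/raw']
      outputs:
        artifacts:
          - name: msg
            path: /tmp/raw
    - name: print1
      inputs:
        artifacts:
          # the artifact is materialized as a file at this path
          - name: result
            path: /tmp/raw
      script:
        image: python:alpine3.6
        command: [python]
        source: |
          with open("/tmp/raw") as f:
              print(f.read())
    - name: entrypoint
      dag:
        tasks:
          - name: whalesay
            template: whalesay
          - name: print1
            template: print1
            dependencies: [whalesay]
            arguments:
              artifacts:
                - name: result
                  from: "{{tasks.whalesay.outputs.artifacts.msg}}"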

Getting started with KrakenD

I need some beginner help with KrakenD. I am running it on Ubuntu. The config is provided below.
I am able to reach the /healthz API without a problem.
My challenge is that the /hello path returns error 500. I want this path to proxy to a Quarkus app that runs at http://getting-started36-getting-going.apps.bamboutos.hostname.us/.
Why is this not working? If I modify the /hello backend and use a fake host, I get the exact same result. This suggests that KrakenD is not even trying to connect to the backend.
In the logs, KrakenD is saying:
Error #01: invalid character 'H' looking for beginning of value
kraken.json:
{
  "version": 2,
  "port": 9080,
  "extra_config": {
    "github_com/devopsfaith/krakend-gologging": {
      "level": "DEBUG",
      "prefix": "[KRAKEND]",
      "syslog": false,
      "stdout": true,
      "format": "default"
    }
  },
  "timeout": "3000ms",
  "cache_ttl": "300s",
  "output_encoding": "json",
  "name": "KrakenD API Gateway Service",
  "endpoints": [
    {
      "endpoint": "/healthz",
      "extra_config": {
        "github.com/devopsfaith/krakend/proxy": {
          "static": {
            "data": { "status": "OK" },
            "strategy": "always"
          }
        }
      },
      "backend": [
        {
          "url_pattern": "/",
          "host": ["http://fake-backend"]
        }
      ]
    },
    {
      "endpoint": "/hello",
      "extra_config": {},
      "backend": [
        {
          "url_pattern": "/hello",
          "method": "GET",
          "host": [
            "http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
          ]
        }
      ]
    }
  ]
}
What am I missing?
add "encoding": "string" to the backend section.
"backend": [
{
"url_pattern": "/hello",
"method": "GET",
"encoding": "string" ,
"host": [
"http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
]
}
]

AWS CloudFormation: calling a Step Function synchronously from API Gateway v1

I am trying to synchronously execute an AWS Step Function via API Gateway. The problem is that with API Gateway V1 I have to use OpenAPI syntax (i.e. Swagger) in order to specify the integrationSubtype parameter, but something just doesn't work. Here is the CloudFormation template I am using:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "restApiName": {
      "Type": "String",
      "Default": "stepApi"
    }
  },
  "Resources": {
    "MyStepFunction": {
      "Type": "AWS::StepFunctions::StateMachine",
      "Properties": {
        "StateMachineName": "HelloWorld-StateMachine",
        "StateMachineType": "EXPRESS",
        "DefinitionString": "{\"Comment\": \"A Hello World example of the Amazon States Language using Pass states\", \"StartAt\": \"Hello\", \"States\": {\"Hello\": { \"Type\": \"Pass\", \"Result\": \"Hello\", \"Next\": \"World\" }, \"World\": { \"Type\": \"Pass\", \"Result\": \"World\", \"End\": true } } }",
        "RoleArn": {
          "Fn::GetAtt": [
            "StepFunctionRole",
            "Arn"
          ]
        }
      }
    },
    "StepFuncGateway": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": {
          "Ref": "restApiName"
        },
        "Body": {
          "openapi": "3.0.1",
          "info": {
            "title": "processFormExample",
            "version": "2020-11-06 15:32:29UTC"
          },
          "paths": {
            "/step": {
              "post": {
                "responses": {
                  "200": {
                    "description": "Pet updated.",
                    "content": {
                      "application/json": {},
                      "application/xml": {}
                    }
                  },
                  "405": {
                    "description": "Method Not Allowed",
                    "content": {
                      "application/json": {},
                      "application/xml": {}
                    }
                  }
                },
                "parameters": [],
                "x-amazon-apigateway-integration": {
                  "integrationSubtype": "StepFunctions-StartSyncExecution",
                  "credentials": {
                    "Fn::GetAtt": [
                      "APIGatewayRole",
                      "Arn"
                    ]
                  },
                  "RequestTemplates": {
                    "application/json": {
                      "Fn::Join": [
                        "",
                        [
                          "#set( $body = $util.escapeJavaScript($input.json('$')) ) \n\n{\"input\": \"$body\",\"name\": \"$context.requestId\",\"stateMachineArn\":\"",
                          {
                            "Ref": "MyStepFunction"
                          },
                          "\"}"
                        ]
                      ]
                    }
                  },
                  "httpMethod": "POST",
                  "payloadFormatVersion": "1.0",
                  "passthroughBehavior": "NEVER",
                  "type": "AWS_PROXY",
                  "connectionType": "INTERNET"
                }
              }
            }
          },
          "x-amazon-apigateway-cors": {
            "allowMethods": [
              "*"
            ],
            "maxAge": 0,
            "allowCredentials": false,
            "allowOrigins": [
              "*"
            ]
          }
        }
      },
      "DependsOn": [
        "APIGatewayRole",
        "MyStepFunction"
      ]
    },
    "APIGatewayRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "apigateway.amazonaws.com"
                ]
              },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "Path": "/",
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs",
          "arn:aws:iam::aws:policy/AWSStepFunctionsFullAccess"
        ]
      }
    },
    "StepFunctionRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "",
              "Effect": "Allow",
              "Principal": {
                "Service": "states.amazonaws.com"
              },
              "Action": "sts:AssumeRole"
            }
          ]
        },
        "Path": "/",
        "ManagedPolicyArns": [
          "arn:aws:iam::aws:policy/service-role/AWSLambdaRole"
        ]
      }
    }
  },
  "Outputs": {
    "HelloWorldApi": {
      "Description": "Sync WF API endpoint",
      "Value": {
        "Fn::Sub": "https://${StepFuncGateway}.execute-api.${AWS::Region}.amazonaws.com/step"
      }
    }
  }
}
The error I am seeing is the following:
Errors found during import: Unable to put integration on 'POST' for
resource at path '/step': Invalid integration URI specified (Service:
AmazonApiGateway; Status Code: 400; Error Code: BadRequestException;
Request ID: 0c74acf9-147f-4561-9f4f-e457096c5533; Proxy: null)
I am out of ideas. Please help me fix it.
UPDATE:
I had to add the following code to the x-amazon-apigateway-integration section and change the type to AWS:
"uri": {
"Fn::Join": [
"",
[
"arn:aws:apigateway:",
{
"Ref": "AWS::Region"
},
":states:action/StartSyncExecution"
]
]
},
Another thing I had to fix is RequestTemplates: it should start with a lowercase r (requestTemplates). After these changes the stack deployed correctly, but now I have a throttling problem to solve.
x-amazon-apigateway-integration is missing the uri property.
From the Amazon Developer Guide, the URI property is defined as:
The endpoint URI of the backend. For integrations of the aws type,
this is an ARN value. For the HTTP integration, this is the URL of the
HTTP endpoint including the https or http scheme.
For example:
"x-amazon-apigateway-integration": {
"type": "AWS_PROXY",
"httpMethod": "POST",
"uri": "http://petstore.execute-api.us-west-1.amazonaws.com/petstore/pets",
"payloadFormatVersion": 1.0,
"otherPropterties": "go here"
}
Amazon has additional information on URI definitions here. (Copied for convenience)
For HTTP or HTTP_PROXY integrations, the URI must be a fully formed, encoded HTTP(S) URL according to the RFC-3986 specification, for either standard integration, where connectionType is not VPC_LINK, or private integration, where connectionType is VPC_LINK. For a private HTTP integration, the URI is not used for routing.
For AWS or AWS_PROXY integrations, the URI is of the form arn:aws:apigateway:{region}:{subdomain.service|service}:path|action/{service_api}. Here, {Region} is the API Gateway region (e.g., us-east-1); {service} is the name of the integrated AWS service (e.g., s3); and {subdomain} is a designated subdomain supported by certain AWS service for fast host-name lookup. action can be used for an AWS service action-based API, using an Action={name}&{p1}={v1}&p2={v2}... query string. The ensuing {service_api} refers to a supported action {name} plus any required input parameters. Alternatively, path can be used for an AWS service path-based API. The ensuing service_api refers to the path to an AWS service resource, including the region of the integrated AWS service, if applicable. For example, for integration with the S3 API of GetObject, the uri can be either arn:aws:apigateway:us-west-2:s3:action/GetObject&Bucket={bucket}&Key={key} or arn:aws:apigateway:us-west-2:s3:path/{bucket}/{key}

Envoy External Authorization with OPA - evaluation fails with a large JSON body

I have a k8s pod running 3 containers: my app, OPA, and Envoy.
All my setup follows this guide: https://www.openpolicyagent.org/docs/latest/envoy-authorization/
Everything went well until I sent a 15 kB JSON body.
Checking the OPA container log, I see that request.http.body contains only about half of the JSON:
{
  "decision_id": "",
  "error": {},
  "input": {
    "attributes": {
      "destination": {
        "address": {
          "Address": {
            "SocketAddress": {
              "PortSpecifier": {
                "PortValue": 8000
              },
              "address": "10.244.8.102"
            }
          }
        }
      },
      "request": {
        "http": {
          "body": "only half of JSON body come here",
          "headers": {
            ":authority": "api-service.com",
            ":method": "PUT",
            ":path": "/api",
            "accept": "application/json",
            "content-length": "14822",
            "content-type": "application/json",
            "x-envoy-decorator-operation": "....",
            "x-envoy-internal": "true",
            "x-forwarded-for": "10.244.6.0",
            "x-forwarded-proto": "https",
            "x-istio-attributes": "..."
          },
          "host": "....com",
          "id": "12114967460600931537",
          "method": "PUT",
          "path": "/api",
          "size": 14822
        }
      },
      "source": {
        "address": {
          "Address": {
            "SocketAddress": {
              "PortSpecifier": {
                "PortValue": 34670
              },
              "address": "10.244.3.164"
            }
          }
        }
      }
    },
    "parsed_path": [
      "api"
    ],
    "parsed_query": {}
  },
  "level": "info",
  "msg": "Decision Log",
  "query": "data.app.allow",
  "type": "openpolicyagent.org/decision_logs"
}
I tried increasing with_request_body:
http_filters:
  - name: envoy.ext_authz
    config:
      with_request_body:
        max_request_bytes: 819200
        allow_partial_message: true
      failure_mode_allow: false
Is there anything else I missed?
Thanks a lot for your help.
Are there any errors in the Envoy logs?
What is the data that you are trying to send? Does it need to be part of OPA's input document, or can you leverage OPA's bundle feature?
I finally made it work by increasing max_request_bytes. With allow_partial_message: true, Envoy forwards a truncated copy of any body larger than max_request_bytes instead of rejecting the request, which is why only about half of the JSON was reaching OPA:
- name: envoy.ext_authz
  config:
    with_request_body:
      max_request_bytes: 819200
I had configured this before in the ConfigMap but forgot to restart the pod. After redeploying everything with the new max_request_bytes, it works now.
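For clarity, here is the relevant filter block with the two body-buffering fields commented (same values as above):
http_filters:
  - name: envoy.ext_authz
    config:
      with_request_body:
        # how many bytes of the request body Envoy buffers and sends to the authorizer
        max_request_bytes: 819200
        # true: a body larger than max_request_bytes is forwarded truncated;
        # false: such requests are rejected instead
        allow_partial_message: true
      # deny requests when the authorization service itself is unavailable
      failure_mode_allow: false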
Reference: https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/buffer/v3/buffer.proto.html?highlight=max_request_bytes
Thank you all

Why is my Jelastic environment not working when using postgres9 in a JPS?

I have created a JPS file using the documentation at https://docs.jelastic.com/application-manifest, but there is no clear documentation on using PostgreSQL.
Jelastic JPS Node:
{
  "nodeType": "postgres9",
  "restart": false,
  "database": {
    "name": "xxxx",
    "user": "xxx",
    "dump": "xxx.sql"
  }
}
Error while configuring the environment:
"data": {
"result": 11005,
"source": "marketplace",
"error": "database query error: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=10.101.3.225)(port=3306)(type=master) : Connection refused (Connection refused)"
}
I have provided the whole JPS file content here. The error occurs when importing the database; the other entries in the configs object work fine.
{
  "jpsVersion": "0.1",
  "jpsType": "install",
  "application": {
    "id": "xxx",
    "name": "xxx",
    "version": "0.0.1",
    "logo": "http://example.com/img/logo.png",
    "type": "php",
    "homepage": "http://example.com/",
    "description": {
      "en": "xxx"
    },
    "env": {
      "topology": {
        "ha": false,
        "engine": "php7.2",
        "ssl": false,
        "nodes": [
          {
            "extip": false,
            "count": 1,
            "cloudlets": 16,
            "nodeType": "nginxphp"
          },
          {
            "extip": false,
            "count": 1,
            "cloudlets": 16,
            "nodeType": "postgres9"
          }
        ]
      },
      "upload": [
        {
          "nodeType": "nginxphp",
          "sourcePath": "https://example.com/xxx.conf",
          "destPath": "${SERVER_CONF_D}/xxx.conf"
        }
      ],
      "deployments": [
        {
          "archive": "https://example.com/xxx.zip",
          "name": "xxx.zip",
          "context": "ROOT"
        }
      ],
      "configs": [
        {
          "nodeType": "nginxphp",
          "restart": true,
          "path": "${SERVER_CONF_D}/xxx.conf",
          "replacements": [
            {
              "pattern": "/usr/share/nginx/html",
              "replacement": "${SERVER_WEBROOT}"
            }
          ]
        },
        {
          "nodeType": "postgres9",
          "restart": false,
          "database": {
            "name": "xxx",
            "user": "xxx",
            "dump": "https://example.com/xxx.sql"
          }
        },
        {
          "restart": false,
          "nodeType": "nginxphp",
          "path": "${SERVER_WEBROOT}/ROOT/server/php/config.inc.php",
          "replacements": [
            {
              "replacement": "${nodes.postgres9.address}",
              "pattern": "localhost"
            },
            {
              "replacement": "${nodes.postgres9.database.password}",
              "pattern": "xxx"
            }
          ]
        }
      ]
    },
    "success": {
      "text": "Installation completed. username: admin and password: xxx"
    }
  }
}
Since database actions are disabled for Postgres so far (the action is executed only for mysql5, mariadb, and mariadb10 containers, which is why your log shows a connection attempt on the MySQL port 3306), we've improved your manifest based on the recent updates. YAML was used because it's clearer to read and understand:
jpsVersion: 0.1
jpsType: install
name: xxx
version: 0.0.1
logo: http://example.com/img/logo.png
engine: php7.2
nodes:
  - cloudlets: 16
    nodeType: nginxphp
  - cloudlets: 16
    nodeType: postgres9
onInstall:
  - upload [nginxphp]:
      sourcePath: https://example.com/xxx.conf
      destPath: ${SERVER_CONF_D}/xxx.conf
  - deploy:
      archive: https://example.com/xxx.zip
      name: xxx.zip
      context: ROOT
  - replaceInFile [nginxphp]:
      path: ${SERVER_CONF_D}/xxx.conf
      replacements:
        - pattern: /usr/share/nginx/html
          replacement: ${SERVER_WEBROOT}
  - restartNodes [nginxphp]
  - replaceInFile [nginxphp]:
      path: ${SERVER_WEBROOT}/ROOT/server/php/config.inc.php
      replacements:
        - pattern: localhost
          replacement: ${nodes.postgres9.address}
        - pattern: xxx
          replacement: ${nodes.postgres9.password}
  - cmd [postgres9]: printf "PGPASSWORD=${nodes.postgres9.password};\nexport PGPASSWORD;\npsql postgres webadmin -c \"CREATE DATABASE Jelastic;\"\n" > /tmp/createDb
  - cmd [postgres9]: chmod +x /tmp/createDb && /tmp/createDb
success: Installation completed. username admin and password xxx
Please note that you can debug every action in the /console tab.
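Following the same pattern, the dump from the original manifest could be imported with one more pair of cmd steps. This is only a sketch: it assumes curl is available in the postgres9 container, and note that the unquoted CREATE DATABASE Jelastic above actually creates a database named jelastic, since Postgres folds unquoted identifiers to lowercase:
  - cmd [postgres9]: curl -fsSL https://example.com/xxx.sql -o /tmp/dump.sql
  - cmd [postgres9]: PGPASSWORD=${nodes.postgres9.password} psql jelastic webadmin -f /tmp/dump.sql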