I have created a JPS file using the documentation at https://docs.jelastic.com/application-manifest, but there is no clear documentation on how to use PostgreSQL.
Jelastic JPS Node:
{
"nodeType": "postgres9",
"restart": false,
"database": {
"name": "xxxx",
"user": "xxx",
"dump": "xxx.sql"
}
}
Error while configuring the environment:
"data": {
"result": 11005,
"source": "marketplace",
"error": "database query error: java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=10.101.3.225)(port=3306)(type=master) : Connection refused (Connection refused)"
}
I have provided the whole JPS file content below. The error occurs when importing the database; the other entries in the configs object work fine.
{
"jpsVersion": "0.1",
"jpsType": "install",
"application": {
"id": "xxx",
"name": "xxx",
"version": "0.0.1",
"logo": "http://example.com/img/logo.png",
"type": "php",
"homepage": "http://example.com/",
"description": {
"en": "xxx"
},
"env": {
"topology": {
"ha": false,
"engine": "php7.2",
"ssl": false,
"nodes": [
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "nginxphp"
},
{
"extip": false,
"count": 1,
"cloudlets": 16,
"nodeType": "postgres9"
}
]
},
"upload": [
{
"nodeType": "nginxphp",
"sourcePath": "https://example.com/xxx.conf",
"destPath": "${SERVER_CONF_D}/xxx.conf"
}
],
"deployments": [
{
"archive": "https://example.com/xxx.zip",
"name": "xxx.zip",
"context": "ROOT"
}
],
"configs": [
{
"nodeType": "nginxphp",
"restart": true,
"path": "${SERVER_CONF_D}/xxx.conf",
"replacements": [
{
"pattern":"/usr/share/nginx/html",
"replacement":"${SERVER_WEBROOT}"
}
]
},
{
"nodeType": "postgres9",
"restart": false,
"database": {
"name": "xxx",
"user": "xxx",
"dump": "https://example.com/xxx.sql"
}
}, {
"restart": false,
"nodeType": "nginxphp",
"path": "${SERVER_WEBROOT}/ROOT/server/php/config.inc.php",
"replacements": [{
"replacement": "${nodes.postgres9.address}",
"pattern": "localhost"
}, {
"replacement": "${nodes.postgres9.database.password}",
"pattern": "xxx"
}
]
}
]
},
"success": {
"text": "Installation completed. username: admin and password: xxx"
}
}
}
Since the database action is not supported for Postgres so far (the action is executed only for mysql5, mariadb, and mariadb10 containers), we've improved your manifest based on the recent updates. YAML was used because it is clearer to read and understand:
jpsVersion: 0.1
jpsType: install
name: xxx
version: 0.0.1
logo: http://example.com/img/logo.png
engine: php7.2
nodes:
- cloudlets: 16
nodeType: nginxphp
- cloudlets: 16
nodeType: postgres9
onInstall:
- upload [nginxphp]:
sourcePath: https://example.com/xxx.conf
destPath: ${SERVER_CONF_D}/xxx.conf
- deploy:
archive: https://example.com/xxx.zip
name: xxx.zip
context: ROOT
- replaceInFile [nginxphp]:
path: ${SERVER_CONF_D}/xxx.conf
replacements:
- pattern: /usr/share/nginx/html
replacement: ${SERVER_WEBROOT}
- restartNodes [nginxphp]
- replaceInFile [nginxphp]:
path: ${SERVER_WEBROOT}/ROOT/server/php/config.inc.php
replacements:
- pattern: localhost
replacement: ${nodes.postgres9.address}
- pattern: xxx
replacement: ${nodes.postgres9.password}
- cmd [postgres9]: printf "PGPASSWORD=${nodes.postgres9.password};\nexport PGPASSWORD;\npsql postgres webadmin -c \"CREATE DATABASE Jelastic;\"\n" > /tmp/createDb
- cmd [postgres9]: chmod +x /tmp/createDb && /tmp/createDb
success: Installation completed. username admin and password xxx
Please note that you can debug every action in the /console tab.
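If the SQL dump from the original manifest also needs to be imported, a similar cmd action can be appended to onInstall. This is only a sketch: it assumes curl is available inside the postgres9 container, that the dump URL is reachable, and that the target is the jelastic database created by the action above (PostgreSQL folds unquoted identifiers to lowercase):
- cmd [postgres9]: |
    export PGPASSWORD=${nodes.postgres9.password}
    curl -fsSL https://example.com/xxx.sql -o /tmp/dump.sql
    psql jelastic webadmin -f /tmp/dump.sql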
Whenever I try to install Ory Kratos with Helm on Kubernetes, it doesn't work.
Here is my values.yaml file:
kratos:
config:
dsn: postgres://admin:Strongpassword#10.43.90.243:5432/postgres_db
secrets:
cookie:
- randomsecret
cipher:
- randomsecret
default:
- randomsecret
identity:
default_schema_id: default
schemas:
- id: default
url: file:///etc/config/identity.default.schema.json
courier:
smtp:
connection_uri: smtps://username:password#smtp.gmail.com
selfservice:
default_browser_return_url: http://127.0.0.1:4455/
automigration:
enabled: true
identitySchemas:
'identity.default.schema.json': |
{
"$id": "https://schemas.ory.sh/presets/kratos/identity.email.schema.json",
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Person",
"type": "object",
"properties": {
"traits": {
"type": "object",
"properties": {
"email": {
"type": "string",
"format": "email",
"title": "E-Mail",
"ory.sh/kratos": {
"credentials": {
"password": {
"identifier": true
}
},
"recovery": {
"via": "email"
},
"verification": {
"via": "email"
}
}
}
},
"required": [
"email"
],
"additionalProperties": false
}
}
}
I run the command $ helm install kratos -f values.yaml ory/kratos. It pauses for a while and then outputs:
Error: INSTALLATION FAILED: failed pre-install: timed out waiting for the condition
It then creates one job that repeatedly spawns kratos-automigrate pods; each one crashes within a couple of minutes with status "Error" and is replaced by a new pod.
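The pre-install timeout only says that the auto-migration job never completed; the underlying reason is usually in the migration pods' logs. A quick way to dig further is sketched below — the job name is inferred from the pod names mentioned above and may differ in your release, and the namespace should match wherever the chart was installed:
kubectl get pods -n <namespace> | grep kratos-automigrate
kubectl logs -n <namespace> job/kratos-automigrate
kubectl describe pod -n <namespace> <one-of-the-crashed-pods>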
I am trying to format my workflow per these instructions (https://argoproj.github.io/argo-workflows/workflow-inputs/#using-previous-step-outputs-as-inputs) but cannot seem to get it right. Specifically, I am trying to imitate "Using Previous Step Outputs As Inputs"
I have included my workflow below. In this version, I have added a path to the inputs.artifacts because the error requests one. The error I am now receiving is:
FATA[2022-02-28T14:14:45.933Z] Failed to submit workflow: templates.entrypoint.tasks.print1 templates.print1.inputs.artifacts.result.from not valid in inputs
Can someone please tell me how to correct this workflow so that it works?
---
{
"apiVersion": "argoproj.io/v1alpha1",
"kind": "Workflow",
"metadata": {
"annotations": {
"workflows.argoproj.io/description": "Building from the ground up",
"workflows.argoproj.io/version": ">= 3.1.0"
},
"labels": {
"workflows.argoproj.io/archive-strategy": "false"
},
"name": "data-passing",
"namespace": "sandbox"
},
"spec": {
"artifactRepositoryRef": {
"configMap": "my-config",
"key": "data"
},
"entrypoint": "entrypoint",
"nodeSelector": {
"kubernetes.io/os": "linux"
},
"parallelism": 3,
"securityContext": {
"fsGroup": 2000,
"fsGroupChangePolicy": "OnRootMismatch",
"runAsGroup": 3000,
"runAsNonRoot": true,
"runAsUser": 1000
},
"templates": [
{
"container": {
"args": [
"Hello World"
],
"command": [
"cowsay"
],
"image": "docker/whalesay:latest",
"imagePullPolicy": "IfNotPresent"
},
"name": "whalesay",
"outputs": {
"artifacts": [
{
"name": "msg",
"path": "/tmp/raw"
}
]
},
"securityContext": {
"fsGroup": 2000,
"fsGroupChangePolicy": "OnRootMismatch",
"runAsGroup": 3000,
"runAsNonRoot": true,
"runAsUser": 1000
}
},
{
"inputs": {
"artifacts": [
{
"from": "{{tasks.whalesay.outputs.artifacts.msg}}",
"name": "result",
"path": "/tmp/raw"
}
]
},
"name": "print1",
"script": {
"command": [
"python"
],
"image": "python:alpine3.6",
"imagePullPolicy": "IfNotPresent",
"source": "cat {{inputs.artifacts.result}}\n"
},
"securityContext": {
"fsGroup": 2000,
"fsGroupChangePolicy": "OnRootMismatch",
"runAsGroup": 3000,
"runAsNonRoot": true,
"runAsUser": 1000
}
},
{
"dag": {
"tasks": [
{
"name": "whalesay",
"template": "whalesay"
},
{
"arguments": {
"artifacts": [
{
"from": "{{tasks.whalesay.outputs.artifacts.msg}}",
"name": "result",
"path": "/tmp/raw"
}
]
},
"dependencies": [
"whalesay"
],
"name": "print1",
"template": "print1"
}
]
},
"name": "entrypoint"
}
]
}
}
...
In the artifact argument of print1, you should only put the name and from parameters, e.g.:
- name: print1
arguments:
artifacts: [{name: result, from: "{{tasks.whalesay.outputs.artifacts.msg}}"}]
and then in your template declaration, you should put name and path in your artifact input, as follows:
- name: print1
inputs:
artifacts:
- name: result
path: /tmp/raw
...
This works because in the argument of your task (in the dag declaration) you tell the program what you want that input to be called and where to take it from, while in the template declaration you receive the input by name and tell the program where to place it temporarily. (This is how I understand it, in my own words.)
Another problem I see is in print1: instead of printing to stdout with sys or running the cat command through os, you run cat directly in the Python script, which (I think) is not possible.
You should instead do something like
import sys
sys.stdout.write("{{inputs.artifacts.result}}\n")
or
import os
os.system("cat {{inputs.artifacts.result}}\n")
A very similar workflow from the Argo developers/maintainers can be found here:
https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#artifacts
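Putting both pieces together, a minimal, untested sketch of the corrected print1 template and its DAG task could look like this (it reads the artifact from its declared path, /tmp/raw, rather than trying to interpolate the artifact itself into the script source):
# template: declare the input artifact by name and where it gets unpacked
- name: print1
  inputs:
    artifacts:
      - name: result
        path: /tmp/raw
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      # the artifact has already been placed at its declared path, so read the file
      with open("/tmp/raw") as f:
          print(f.read())
# DAG task: only name and from, wired to the whalesay output
- name: print1
  template: print1
  dependencies: [whalesay]
  arguments:
    artifacts:
      - name: result
        from: "{{tasks.whalesay.outputs.artifacts.msg}}"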
I need some beginner help with KrakenD. I am running it on Ubuntu. The config is provided below.
I am able to reach the /healthz API without a problem.
My challenge is that the /hello path returns error 500. I want this path to redirect to a Quarkus app that runs at http://getting-started36-getting-going.apps.bamboutos.hostname.us/.
Why is this not working? If I modify the /hello backend and use a fake host, I get the exact same result. This suggests that KrakenD is not even trying to connect to the backend.
In the logs, KrakenD says:
Error #01: invalid character 'H' looking for beginning of value
kraken.json:
{
"version": 2,
"port": 9080,
"extra_config": {
"github_com/devopsfaith/krakend-gologging": {
"level": "DEBUG",
"prefix": "[KRAKEND]",
"syslog": false,
"stdout": true,
"format": "default"
}
},
"timeout": "3000ms",
"cache_ttl": "300s",
"output_encoding": "json",
"name": "KrakenD API Gateway Service",
"endpoints": [
{
"endpoint": "/healthz",
"extra_config": {
"github.com/devopsfaith/krakend/proxy": {
"static": {
"data": { "status": "OK"},
"strategy": "always"
}
}
},
"backend": [
{
"url_pattern": "/",
"host": ["http://fake-backend"]
}
]
},
{
"endpoint": "/hello",
"extra_config": {},
"backend": [
{
"url_pattern": "/hello",
"method": "GET",
"host": [
"http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
]
}
]
}
]
}
What am I missing?
add "encoding": "string" to the backend section.
"backend": [
{
"url_pattern": "/hello",
"method": "GET",
"encoding": "string" ,
"host": [
"http://getting-started36-getting-going.apps.bamboutos.hostname.us/"
]
}
]
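Once the encoding is set and KrakenD reloaded, the endpoint can be checked directly (port 9080 comes from the config above):
curl -i http://localhost:9080/hello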
I tried to connect Strapi to mLab with this database.js config but it doesn't work. I get the error:
ConnectorError: connector "strapi-hook-mongoose" not found: Cannot find module 'strapi-connector-strapi-hook-mongoose'
Here is my database.js config file:
{
"defaultConnection": "default",
"connections": {
"default": {
"connector": "strapi-hook-mongoose",
"settings": {
"database": "strapi-test",
"host": "ds131914.mlab.com",
"srv": false,
"port": "31914",
"username": "root",
"password": "root010101"
},
"options": {
"authenticationDatabase": "strapi-test"
}
}
}
}
What should I do?
After some searching, it appears that this database.js config was from an old tutorial (this one). To solve the problem, you first need to run npm i -S strapi-connector-mongoose in order to install the right connector.
Now you need to change your database.js config for the desired environment. In my case it was production, so edit config/environments/production/database.js like this:
{
"defaultConnection": "default",
"connections": {
"default": {
"connector": "mongoose",
"settings": {
"client": "mongo",
"host": "ds131914.mlab.com",
"port": "31914",
"srv": false,
"database": "strapi-test",
"username": "root",
"password": "root010101"
},
"options": {
"authenticationDatabase": "strapi-test",
"ssl": false
}
}
}
}
Like this, it should work!
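Since that file lives under the production environment, Strapi only reads it when NODE_ENV is set to production. A quick way to check locally, assuming the project's start script runs strapi:
NODE_ENV=production npm start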
I am trying to run the wazuh/wazuh Docker container on ECS. I was able to register the task definition and launch the container using Terraform. However, I am facing an issue with the "Volume" (data volume) while registering the task definition using the AWS CLI command.
Command: aws ecs --region eu-west-1 register-task-definition --family hids --cli-input-json file://task-definition.json
Error:
ParamValidationError: Parameter validation failed:
Unknown parameter in volumes[0]: "dockerVolumeConfiguration", must be one of: name, host
2019-08-29 07:31:59,195 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
{
"containerDefinitions": [
{
"portMappings": [
{
"hostPort": 514,
"containerPort": 514,
"protocol": "udp"
},
{
"hostPort": 1514,
"containerPort": 1514,
"protocol": "udp"
},
{
"hostPort": 1515,
"containerPort": 1515,
"protocol": "tcp"
},
{
"hostPort": 1516,
"containerPort": 1516,
"protocol": "tcp"
},
{
"hostPort": 55000,
"containerPort": 55000,
"protocol": "tcp"
}
],
"image": "wazuh/wazuh",
"essential": true,
"name": "chids",
"cpu": 1600,
"memory": 1600,
"mountPoints": [
{
"containerPath": "/var/ossec/data",
"sourceVolume": "ossec-data"
},
{
"containerPath": "/etc/filebeat",
"sourceVolume": "filebeat_etc"
},
{
"containerPath": "/var/lib/filebeat",
"sourceVolume": "filebeat_lib"
},
{
"containerPath": "/etc/postfix",
"sourceVolume": "postfix"
}
]
}
],
"volumes": [
{
"name": "ossec-data",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
},
{
"name": "filebeat_etc",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
},
{
"name": "filebeat_lib",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
},
{
"name": "postfix",
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
}
]
}
I tried adding the "host" parameter (although it supports bind mounts only), but got the same error.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/var/ossec/data"
},
"dockerVolumeConfiguration": {
"scope": "shared",
"driver": "local",
"autoprovision": true
}
}
]
ECS should register the task definition with the 4 data volumes and their associated mount points.
Found the issue.
I removed the "dockerVolumeConfiguration" parameter from the "volumes" configuration and it worked.
"volumes": [
{
"name": "ossec-data",
"host": {
"sourcePath": "/ecs/ossec-data"
}
},
{
"name": "filebeat_etc",
"host": {
"sourcePath": "/ecs/filebeat_etc"
}
},
{
"name": "filebeat_lib",
"host": {
"sourcePath": "/ecs/filebeat_lib"
}
},
{
"name": "postfix",
"host": {
"sourcePath": "/ecs/postfix"
}
}
]
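After updating task-definition.json with the host-based volumes above, the task definition can be re-registered with the same command used in the question:
aws ecs --region eu-west-1 register-task-definition --family hids --cli-input-json file://task-definition.json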
Can you check on your version of awscli?
aws --version
According to all the documentation, your first task definition should work fine and I tested it locally without any issues.
It might be that you are using an older AWS CLI version where the syntax or the accepted parameters were different at the time.
Could you try updating your aws cli to the latest version and try again?
--
Some additional info I found:
Checking on the aws ecs CLI command, they added docker volume configuration as part of the CLI in v1.80.
The main aws-cli releases are updated periodically to refresh the commands, but the changelog doesn't provide much info on which specific version each command changed in:
https://github.com/aws/aws-cli/blob/develop/CHANGELOG.rst
If you update your aws-cli version, things should work.
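For reference, one common way to upgrade a pip-based install of the v1 CLI and confirm the result (adjust if the CLI was installed via a bundled installer or an OS package manager):
pip install --upgrade awscli
aws --version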