Chef: Trying to get build-essential to install on our node before Postgres

Here's our node configuration:
{
  "run_list": [
    "recipe[apt]",
    "recipe[build-essential]",
    [
      "rackbox"
    ]
  ],
  "rackbox": {
    "jenkins": {
      "job": "job1",
      "git_repo": "https://github.com/hayesmp/railsgirls-app.git",
      "command": "bundle exec rake",
      "ip_address": "192.237.181.154",
      "host": "subocean-southerner"
    },
    "ruby": {
      "versions": [
        "2.0.0-p247"
      ],
      "global_version": "2.0.0-p247"
    },
    "apps": {
      "unicorn": [
        {
          "appname": "app1",
          "hostname": "app1"
        }
      ]
    },
    "db_root_password": "iloverandompasswordsbutthiswilldo",
    "databases": {
      "postgresql": [
        {
          "database_name": "app1_production",
          "username": "app1",
          "password": "app1_pass"
        }
      ]
    }
  }
}
I'm just not sure where to insert the build-essential compiletime = true attribute in my configuration.
For reference, this is the sample code from the Stack Overflow post "Chef: Why are resources in an 'include_recipe' step being skipped?":
name "myapp"
run_list(
"recipe[build-essential]",
"recipe[myapp]"
)
default_attributes(
"build_essential" => {
"compiletime" => true
}
)

Paste this into your node configuration:
"build_essential": {
"compiletime": true
}
BTW: you should use recipe[rackbox] instead of [rackbox] in your run_list
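Putting both fixes together, the top of the node JSON would look something like this (just a sketch; the rest of your rackbox attributes stay exactly as they are):
{
  "run_list": [
    "recipe[apt]",
    "recipe[build-essential]",
    "recipe[rackbox]"
  ],
  "build_essential": {
    "compiletime": true
  },
  "rackbox": {
    ... your existing rackbox attributes ...
  }
}
The "build_essential" key sits at the top level of the node object, alongside "run_list" and "rackbox", since top-level keys in node JSON become normal node attributes.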

How do I use Argo Workflows Using Previous Step Outputs As Inputs?

I am trying to format my workflow per these instructions (https://argoproj.github.io/argo-workflows/workflow-inputs/#using-previous-step-outputs-as-inputs), but cannot seem to get it right. Specifically, I am trying to imitate "Using Previous Step Outputs As Inputs".
I have included my workflow below. In this version, I have added a path to the inputs.artifacts because the error message requests one. The error I am now receiving is:
FATA[2022-02-28T14:14:45.933Z] Failed to submit workflow: templates.entrypoint.tasks.print1 templates.print1.inputs.artifacts.result.from not valid in inputs
Can someone please tell me how to correct this workflow so that it works?
---
{
  "apiVersion": "argoproj.io/v1alpha1",
  "kind": "Workflow",
  "metadata": {
    "annotations": {
      "workflows.argoproj.io/description": "Building from the ground up",
      "workflows.argoproj.io/version": ">= 3.1.0"
    },
    "labels": {
      "workflows.argoproj.io/archive-strategy": "false"
    },
    "name": "data-passing",
    "namespace": "sandbox"
  },
  "spec": {
    "artifactRepositoryRef": {
      "configMap": "my-config",
      "key": "data"
    },
    "entrypoint": "entrypoint",
    "nodeSelector": {
      "kubernetes.io/os": "linux"
    },
    "parallelism": 3,
    "securityContext": {
      "fsGroup": 2000,
      "fsGroupChangePolicy": "OnRootMismatch",
      "runAsGroup": 3000,
      "runAsNonRoot": true,
      "runAsUser": 1000
    },
    "templates": [
      {
        "container": {
          "args": [
            "Hello World"
          ],
          "command": [
            "cowsay"
          ],
          "image": "docker/whalesay:latest",
          "imagePullPolicy": "IfNotPresent"
        },
        "name": "whalesay",
        "outputs": {
          "artifacts": [
            {
              "name": "msg",
              "path": "/tmp/raw"
            }
          ]
        },
        "securityContext": {
          "fsGroup": 2000,
          "fsGroupChangePolicy": "OnRootMismatch",
          "runAsGroup": 3000,
          "runAsNonRoot": true,
          "runAsUser": 1000
        }
      },
      {
        "inputs": {
          "artifacts": [
            {
              "from": "{{tasks.whalesay.outputs.artifacts.msg}}",
              "name": "result",
              "path": "/tmp/raw"
            }
          ]
        },
        "name": "print1",
        "script": {
          "command": [
            "python"
          ],
          "image": "python:alpine3.6",
          "imagePullPolicy": "IfNotPresent",
          "source": "cat {{inputs.artifacts.result}}\n"
        },
        "securityContext": {
          "fsGroup": 2000,
          "fsGroupChangePolicy": "OnRootMismatch",
          "runAsGroup": 3000,
          "runAsNonRoot": true,
          "runAsUser": 1000
        }
      },
      {
        "dag": {
          "tasks": [
            {
              "name": "whalesay",
              "template": "whalesay"
            },
            {
              "arguments": {
                "artifacts": [
                  {
                    "from": "{{tasks.whalesay.outputs.artifacts.msg}}",
                    "name": "result",
                    "path": "/tmp/raw"
                  }
                ]
              },
              "dependencies": [
                "whalesay"
              ],
              "name": "print1",
              "template": "print1"
            }
          ]
        },
        "name": "entrypoint"
      }
    ]
  }
}
...
In the artifact argument of print1, you should only put the name and from parameters.
E.g.:
- name: print1
  arguments:
    artifacts: [{name: result, from: "{{tasks.whalesay.outputs.artifacts.msg}}"}]
and then in your template declaration, you should put name and path in your artifact input, as follows:
- name: print1
  inputs:
    artifacts:
    - name: result
      path: /tmp/raw
...
This works because in the arguments of your task (in the dag declaration) you tell Argo what the input should be called and where to take it from, and in the template declaration you receive the input by name and tell Argo where to place it temporarily. (This is how I understand it, in my own words.)
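Applied to the JSON workflow above, the two fragments would look roughly like this (a sketch showing only the parts that change). In the print1 template:
"inputs": {
  "artifacts": [
    {
      "name": "result",
      "path": "/tmp/raw"
    }
  ]
}
and in the dag task for print1:
"arguments": {
  "artifacts": [
    {
      "name": "result",
      "from": "{{tasks.whalesay.outputs.artifacts.msg}}"
    }
  ]
}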
Another problem I see is in print1: instead of printing to stdout or using os.system to run the cat command, you run cat directly in a Python script, which (I think) is not possible.
You should instead do something like:
import sys
sys.stdout.write("{{inputs.artifacts.result}}\n")
or
import os
os.system("cat {{inputs.artifacts.result}}\n")
A very similar workflow from the Argo developers/maintainers can be found here:
https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#artifacts

Configuring wch3 hash function in mcrouter

I came across this page discussing the different hash functions that mcrouter can use, but could not find an example of how a hash function can be configured if you do not want to use the default ch3. In my case, I would like to use "wch3" with a balanced (50/50) weight between two nodes in a pool. How can I manually change the default to configure wch3?
Thanks in advance.
Here is an example that can help you:
{
  "pools": {
    "cold": {
      "servers": [
        "memc_1:11211",
        "memc_2:11211"
      ]
    },
    "warm": {
      "servers": [
        "memc_11:11211",
        "memc_12:11211"
      ]
    }
  },
  "route": {
    "type": "OperationSelectorRoute",
    "operation_policies": {
      "get": {
        "type": "WarmUpRoute",
        "cold": "PoolRoute|cold",
        "warm": "PoolRoute|warm",
        "exptime": 20
      }
    },
    "default_policy": {
      "type": "AllSyncRoute",
      "children": [
        {
          "type": "PoolRoute",
          "pool": "cold",
          "hash": {
            "hash_func": "WeightedCh3",
            "weights": [
              1,
              1
            ]
          }
        },
        {
          "type": "PoolRoute",
          "pool": "warm",
          "hash": {
            "hash_func": "WeightedCh3",
            "weights": [
              1,
              1
            ]
          }
        }
      ]
    }
  }
}
You can adjust each weight in the range [0.0, 1.0].
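If all you need is a single pool split 50/50 across two nodes, a minimal config could look like this (a sketch based on the structure above; the pool name "mypool" is just a placeholder, and with two equally weighted servers each node gets half of the keyspace):
{
  "pools": {
    "mypool": {
      "servers": [
        "memc_1:11211",
        "memc_2:11211"
      ]
    }
  },
  "route": {
    "type": "PoolRoute",
    "pool": "mypool",
    "hash": {
      "hash_func": "WeightedCh3",
      "weights": [0.5, 0.5]
    }
  }
}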

Populating only vertex from CSV file

I need help understanding how to populate my vertex class in OrientDB from a CSV file. The format of the CSV file is:
name,type,status
xxxxx,ABC,3
yyyyy,ABC,1
zzzzz,123,5
--
I have vertex and edge classes extended in OrientDB, where the vertex has 3 properties: name, type, and status. I only want the vertices to be populated from the CSV; the edges will be created dynamically via the API.
I tried to create an ETL file as below:
{
  "source": { "file": { "path": "/tmp/ientdb-community-2.2.18/config/data.csv" } },
  "extractor": { "csv": {} },
  "transformers": [
    { "vertex": { "class": "MyObject" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "remote:localhost/mydb",
      "dbUser": "root",
      "dbPassword": "root",
      "dbType": "graph",
      "classes": [
        { "name": "MyObject", "extends": "V" }
      ],
      "indexes": [
        { "class": "MyObject", "fields": ["name:string"], "type": "UNIQUE" }
      ]
    }
  }
}
I find that if I use plocal, the root/root credentials do not work. Also, the classes are not the same as when logged in with remote (after starting the server).
I tried your code and it works for me. The only changes that I made to your code are the credentials, and the dbURL (plocal instead of remote):
{
  "source": { "file": { "path": "mypath/config/data.csv" } },
  "extractor": { "csv": {} },
  "transformers": [
    { "vertex": { "class": "MyObject" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:mypath/databases/mydb",
      "dbType": "graph",
      "dbUser": "<user name>",
      "dbPassword": "<user password>",
      **BEGIN UPDATE**
      "serverUser": "<server administrator user name, usually root>",
      "serverPassword": "<server administrator user password that is provided at server startup>",
      **END UPDATE**
      "classes": [
        { "name": "MyObject", "extends": "V" }
      ],
      "indexes": [
        { "class": "MyObject", "fields": ["name:string"], "type": "UNIQUE" }
      ]
    }
  }
}
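For reference, a config like this is normally run with the oetl script that ships in OrientDB's bin directory (the config file path below is just a placeholder for wherever you saved the JSON):
$ cd <orientdb home>/bin
$ ./oetl.sh /path/to/my-etl-config.json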
By the way, I noticed that your path is called ientdb-community-2.2.18. Is that correct?
Hope it helps.
Regards.

"No nodes configured for partition" after creating a database via ETL

I've just created a custom database using the following ETL config:
{
  "source": { "file": { "path": "./mydata.csv" } },
  "extractor": { "row": {} },
  "transformers": [
    { "csv": {} },
    { "vertex": { "class": "MyClass" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/opt/orientdb/databases/MyData",
      "dbUser": "root",
      "dbPassword": "qrefhiuqwriouhwqv",
      "dbType": "graph",
      "classes": [
        { "name": "MyClass", "extends": "V" }
      ]
    }
  }
}
Now, when I go to the web console, I can see I have 433k records of type MyClass created in the database MyData.
When I try to query it with "select from MyClass", I get the error:
2015-04-06 23:56:25:541 SEVERE Internal server error:
com.orientechnologies.orient.server.distributed.ODistributedException:
No nodes configured for partition 'MyClass.[]' request:
id=-1 from=node1428362873334 task=command_sql(select from MyClass) userName= [ONetworkProtocolHttpDb]
What am I doing wrong?

I cannot see the properties/values using spring cloud config and git

I am using the sample project
https://github.com/spring-cloud-samples/configserver
I run the project, and when I point my browser to
http://localhost:8888/foo/development/
I get the following values:
{
  "name": "foo",
  "profiles": [
    "development"
  ],
  "label": "master",
  "propertySources": [
    {
      "name": "overrides",
      "source": {
        "eureka.instance.nonSecurePort": "${CF_INSTANCE_PORT:${PORT:${server.port:8080}}}",
        "eureka.instance.hostname": "${CF_INSTANCE_IP:localhost}",
        "eureka.client.serviceUrl.defaultZone": "http://localhost:8761/eureka/"
      }
    }
  ]
}
But I do not get the values from the file foo-development.properties in
https://github.com/spring-cloud-samples/config-repo
I am new to Spring Cloud Config. Could somebody point me in the right direction to the values of the property file?
Thank you
I ran the config server on Ubuntu and everything works there as expected. This must be a problem on Windows only. The output I get on Ubuntu is the following:
{
  "name": "foo",
  "profiles": [
    "development"
  ],
  "label": "master",
  "propertySources": [
    {
      "name": "overrides",
      "source": {
        "eureka.instance.nonSecurePort": "${CF_INSTANCE_PORT:${PORT:${server.port:8080}}}",
        "eureka.instance.hostname": "${CF_INSTANCE_IP:localhost}",
        "eureka.client.serviceUrl.defaultZone": "http://localhost:8761/eureka/"
      }
    },
    {
      "name": "https://github.com/spring-cloud-samples/config-repo/foo-development.properties",
      "source": {
        "bar": "spam"
      }
    },
    {
      "name": "https://github.com/spring-cloud-samples/config-repo/foo.properties",
      "source": {
        "foo": "bar"
      }
    },
    {
      "name": "https://github.com/spring-cloud-samples/config-repo/application.yml",
      "source": {
        "info.description": "Spring Cloud Samples",
        "info.url": "https://github.com/spring-cloud-samples",
        "eureka.client.serviceUrl.defaultZone": "http://user:${eureka.password:}@localhost:8761/eureka/",
        "invalid.eureka.password": "<n/a>"
      }
    }
  ]
}