Spec: v1.PodSpec{
    Containers: []v1.Container{
        v1.Container{
            Name:            podName,
            Image:           deploymentName,
            ImagePullPolicy: "IfNotPresent",
            Ports:           []v1.ContainerPort{},
            Env: []v1.EnvVar{
                v1.EnvVar{
                    Name:  "RASA_NLU_CONFIG",
                    Value: os.Getenv("RASA_NLU_CONFIG"),
                },
                v1.EnvVar{
                    Name:  "RASA_NLU_DATA",
                    Value: os.Getenv("RASA_NLU_DATA"),
                },
            },
            Resources: v1.ResourceRequirements{},
        },
    },
    RestartPolicy: v1.RestartPolicyOnFailure,
},
I want to provide resource limits corresponding to something like this:
resources:
  limits:
    cpu: "1"
  requests:
    cpu: "0.5"
args:
- -cpus
- "2"
How do I go about doing that? I tried adding Limits and its map of key-value pairs, but it seems to be quite a nested structure. There doesn't seem to be any example of how to provide resources with the Kubernetes client-go.
I struggled with the same thing when I was creating a StatefulSet. Maybe my code snippet will help you:
Resources: apiv1.ResourceRequirements{
    Limits: apiv1.ResourceList{
        "cpu":    resource.MustParse(cpuLimit),
        "memory": resource.MustParse(memLimit),
    },
    Requests: apiv1.ResourceList{
        "cpu":    resource.MustParse(cpuReq),
        "memory": resource.MustParse(memReq),
    },
},
The vars cpuReq, memReq, cpuLimit, and memLimit are supposed to be strings, e.g. "500m" or "1Gi".
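For completeness, here is a minimal sketch of the same idea matching the YAML from the question (assuming a recent client-go, where the core types live in k8s.io/api/core/v1; the typed constants v1.ResourceCPU and v1.ResourceMemory can be used instead of the raw "cpu"/"memory" strings):

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// Limit cpu to "1" and request "0.5" (equivalently "500m"), as in the YAML above.
// Note: resource.MustParse panics on malformed input; use resource.ParseQuantity
// for values that come from users or the environment.
requirements := v1.ResourceRequirements{
    Limits: v1.ResourceList{
        v1.ResourceCPU: resource.MustParse("1"),
    },
    Requests: v1.ResourceList{
        v1.ResourceCPU: resource.MustParse("500m"),
    },
}

Assign this to the Resources field of the container in the PodSpec above.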
Here is the definition of v1.ResourceRequirements:
// ResourceRequirements describes the compute resource requirements.
type ResourceRequirements struct {
    // Limits describes the maximum amount of compute resources allowed.
    // More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
    // +optional
    Limits ResourceList `json:"limits,omitempty" protobuf:"bytes,1,rep,name=limits,casttype=ResourceList,castkey=ResourceName"`
    // Requests describes the minimum amount of compute resources required.
    // If Requests is omitted for a container, it defaults to Limits if that is explicitly specified,
    // otherwise to an implementation-defined value.
    // More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
    // +optional
    Requests ResourceList `json:"requests,omitempty" protobuf:"bytes,2,rep,name=requests,casttype=ResourceList,castkey=ResourceName"`
}
ResourceList:
// ResourceList is a set of (resource name, quantity) pairs.
type ResourceList map[ResourceName]resource.Quantity
Here you can find a test file with an example of its use.
The Sourcegraph plugin for Chrome or Firefox can be very helpful for working with source code on GitHub.
I want to create more than one cache using helm, my yaml is the following
deploy:
  infinispan:
    cacheContainer:
      distributedCache:
        - name: "mycache"
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
        - name: "mycache1"
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
When I install the Helm chart, I get the following error:
Red Hat Data Grid Server failed to start org.infinispan.commons.configuration.io.ConfigurationReaderException: Missing required attribute(s): name[86,1]
I don't know if it is possible to create more than one cache. I have followed this documentation: https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.3/html/building_and_deploying_data_grid_clusters_with_helm/configuring-servers
Thanks for your help.
Alexis
Yes, it's possible to define multiple caches. You have to use this format:
deploy:
  infinispan:
    cacheContainer:
      <1st-cache-name>:
        <cache-type>:
          <cache-definition>:
            ...
      <2nd-cache-name>:
        <cache-type>:
          <cache-definition>:
So in your case that will be:
deploy:
  infinispan:
    cacheContainer:
      mycache: # mycache definition follows
        distributedCache:
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
      mycache1: # mycache1 definition follows
        distributedCache:
          mode: "SYNC"
          owners: "2"
          segments: "256"
          capacityFactor: "1.0"
          statistics: "false"
          encoding:
            mediaType: "application/x-protostream"
          expiration:
            lifespan: "3000"
            maxIdle: "1001"
          memory:
            maxCount: "1000000"
            whenFull: "REMOVE"
          partitionHandling:
            whenSplit: "ALLOW_READ_WRITES"
            mergePolicy: "PREFERRED_NON_NULL"
See here for an example of how to define multiple caches in JSON/XML/YAML formats.
I've created an Output for environment variables in Pulumi just like https://github.com/pulumi/examples/blob/master/aws-ts-airflow/index.ts#L61 but I need to add one entry to these env vars for one of the containers I'm spinning up.
I'd like to do something like when declaring a container similar to https://github.com/pulumi/examples/blob/master/aws-ts-airflow/index.ts#L79-L85
"webserver": {
image: awsx.ecs.Image.fromPath("webserver", "./airflow-container"),
portMappings: [airflowControllerListener],
environment: environment + {name: "ANOTHER_ENV", value: "value"},
command: [ "webserver" ],
memory: 128,
},
I've tried playing around with pulumi.all (pulumi.all([environment, {name: "FLASK_APP", value: "server/__init.py"}])) and environment.apply, but haven't been able to figure out how to concat to an Output.
Is this possible? If so, how?
You should be able to
const newEnvironment = environment.apply(env =>
    env.concat({ name: "ANOTHER_ENV", value: "value" }));

// ...

"webserver": {
    image: awsx.ecs.Image.fromPath("webserver", "./airflow-container"),
    portMappings: [airflowControllerListener],
    environment: newEnvironment,
    command: [ "webserver" ],
    memory: 128,
},
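If the extra entry is itself an Output (closer to the pulumi.all attempt in the question), here is a sketch under the same assumptions: pulumi.all combines the two Outputs and apply concatenates once both have resolved. The extraVar name below is hypothetical and stands for any Input/Output-wrapped env entry.

import * as pulumi from "@pulumi/pulumi";

// extraVar is hypothetical, e.g. pulumi.output({ name: "ANOTHER_ENV", value: someOutput }).
const newEnvironment = pulumi.all([environment, extraVar])
    .apply(([env, extra]) => env.concat(extra));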
I'm using this chart: https://github.com/helm/charts/tree/master/stable/rabbitmq to deploy a cluster of 3 RabbitMQ nodes on Kubernetes. My intention is to have all the queues mirrored within 2 nodes in the cluster.
Here's the command I use to run Helm: helm install --name rabbitmq-local -f rabbitmq-values.yaml stable/rabbitmq
And here's the content of rabbitmq-values.yaml:
persistence:
  enabled: true
resources:
  requests:
    memory: 256Mi
    cpu: 100m
replicas: 3
rabbitmq:
  extraConfiguration: |-
    {
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
However, the nodes fail to start due to parsing errors and stay in a crash loop. Here's the output of kubectl logs rabbitmq-local-0:
BOOT FAILED
===========
Config file generation failed:
=CRASH REPORT==== 23-Jul-2019::15:32:52.880991 ===
crasher:
initial call: lager_handler_watcher:init/1
pid: <0.95.0>
registered_name: []
exception exit: noproc
in function gen:do_for_proc/2 (gen.erl, line 228)
in call from gen_event:rpc/2 (gen_event.erl, line 239)
in call from lager_handler_watcher:install_handler2/3 (src/lager_handler_watcher.erl, line 117)
in call from lager_handler_watcher:init/1 (src/lager_handler_watcher.erl, line 51)
in call from gen_server:init_it/2 (gen_server.erl, line 374)
in call from gen_server:init_it/6 (gen_server.erl, line 342)
ancestors: [lager_handler_watcher_sup,lager_sup,<0.87.0>]
message_queue_len: 0
messages: []
links: [<0.90.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 610
stack_size: 27
reductions: 228
neighbours:
15:32:53.679 [error] Syntax error in /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf after line 14 column 1, parsing incomplete
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681369 ===
supervisor: {local,gr_counter_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.97.0>},
{id,gr_lager_default_tracer_counters},
{mfargs,{gr_counter,start_link,
[gr_lager_default_tracer_counters]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
=SUPERVISOR REPORT==== 23-Jul-2019::15:32:53.681514 ===
supervisor: {local,gr_param_sup}
errorContext: child_terminated
reason: killed
offender: [{pid,<0.96.0>},
{id,gr_lager_default_tracer_params},
{mfargs,{gr_param,start_link,[gr_lager_default_tracer_params]}},
{restart_type,transient},
{shutdown,brutal_kill},
{child_type,worker}]
If I remove the rabbitmq.extraConfiguration part, the nodes start properly, so it must be something wrong with the way I'm typing in the policy. Any idea what I'm doing wrong?
Thank you.
According to https://github.com/helm/charts/tree/master/stable/rabbitmq#load-definitions, it is possible to link a JSON definitions file via extraConfiguration. So we ended up with this setup, which works:
rabbitmq-values.yaml:
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: rabbitmq-load-definition
  extraConfiguration: |-
    management.load_definitions = /app/load_definition.json
rabbitmq-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definition
type: Opaque
stringData:
  load_definition.json: |-
    {
      "vhosts": [
        {
          "name": "/"
        }
      ],
      "policies": [
        {
          "name": "queue-mirroring-exactly-two",
          "pattern": "^ha\.",
          "vhost": "/",
          "definition": {
            "ha-mode": "exactly",
            "ha-params": 2
          }
        }
      ]
    }
The secret must be loaded into Kubernetes before the Helm chart is installed, which goes something like this: kubectl apply -f ./rabbitmq-secret.yaml.
You can use the chart's built-in config for this. If needed, you can use extraSecrets to let the chart create the secret for you; this way, you don't need to create it manually before deploying a release. For example:
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ]
      }
rabbitmq:
  loadDefinition:
    enabled: true
    secretName: load-definition
  extraConfiguration: |
    management.load_definitions = /app/load_definition.json
https://github.com/helm/charts/tree/master/stable/rabbitmq
Instead of using extraConfiguration, use advancedConfiguration; all of this info should go in that section, since it is for the classic (Erlang) config format.
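As an untested sketch (assuming the chart writes rabbitmq.advancedConfiguration verbatim into advanced.config), pointing the management plugin at the definitions file in classic Erlang terms could look like:

rabbitmq:
  advancedConfiguration: |-
    [
      {rabbitmq_management, [
        {load_definitions, "/app/load_definition.json"}
      ]}
    ].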
I have Node.js code running inside a pod. From inside the pod I want to find the zone of the node where this pod is running. What is the best way to do that? Do I need extra permissions?
I have not been able to find a library, but I post the code that does it below. The getContent function was slightly adapted from this post. This code should work inside a GKE pod or on a GCE host.
Use it as follows:
const gcp = require('./gcp.js')
gcp.zone().then(z => console.log('Zone is: ' + z))
Module: gcp.js
const getContent = function(lib, options) {
    // return new pending promise
    return new Promise((resolve, reject) => {
        // select http or https module, depending on requested url
        const request = lib.get(options, (response) => {
            // handle http errors
            if (response.statusCode < 200 || response.statusCode > 299) {
                reject(new Error('Failed to load page, status code: ' + response.statusCode));
            }
            // temporary data holder
            const body = [];
            // on every content chunk, push it to the data array
            response.on('data', (chunk) => body.push(chunk));
            // we are done, resolve promise with those joined chunks
            response.on('end', () => resolve(body.join('')));
        });
        // handle connection errors of the request
        request.on('error', (err) => reject(err))
    })
};

exports.zone = () => {
    return getContent(
        require('http'),
        {
            hostname: 'metadata.google.internal',
            path: '/computeMetadata/v1/instance/zone',
            headers: {
                'Metadata-Flavor': 'Google'
            },
            method: 'GET'
        })
}
You can use the failure-domain.beta.kubernetes.io/region and failure-domain.beta.kubernetes.io/zone labels of the pod to get its region and AZ.
But, please keep in mind, that:
Only GCE and AWS are currently supported automatically (though it is easy to add similar support for other clouds or even bare metal, by simply arranging for the appropriate labels to be added to nodes and volumes).
To get access to the labels, you can use the Downward API to attach a volume with the current labels and annotations of the pod. You don't need any extra permissions to use it; just mount them as a volume.
Here is an example from the documentation:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args:
        - while true; do
            if [[ -e /etc/podinfo/labels ]]; then
              echo -en '\n\n'; cat /etc/podinfo/labels; fi;
            if [[ -e /etc/podinfo/annotations ]]; then
              echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
            sleep 5;
          done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
When you have the volume with labels mounted, you can read the file /etc/podinfo/labels, which will contain information about the AZ and region as key-value pairs, like this:
failure-domain.beta.kubernetes.io/region=us-east-1
failure-domain.beta.kubernetes.io/zone=us-east-1c
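If you'd rather read those labels from Node.js than from a shell, here is a minimal sketch, assuming the Downward API volume from the example above is mounted at /etc/podinfo (label values are written quoted in that file):

const fs = require('fs');

// Parse the Downward API labels file into a plain object.
// Each line looks like: some/label="value"
function podLabels(path = '/etc/podinfo/labels') {
  const labels = {};
  for (const line of fs.readFileSync(path, 'utf8').split('\n')) {
    if (!line) continue;
    const idx = line.indexOf('=');
    labels[line.slice(0, idx)] = line.slice(idx + 1).replace(/^"|"$/g, '');
  }
  return labels;
}

console.log(podLabels()['failure-domain.beta.kubernetes.io/zone']);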
I'm attempting to export a DynamoDB StreamArn from a stack created in CloudFormation, then reference the export using !ImportValue in serverless.yml.
But I'm getting this error message:
unknown tag !<!ImportValue> in "/codebuild/output/src/serverless.yml"
The CloudFormation template and serverless.yml are defined below. Any help appreciated.
StackA.yml
AWSTemplateFormatVersion: 2010-09-09
Description: Resources for the registration site
Resources:
  ClientTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: client
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 2
        WriteCapacityUnits: 2
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
Outputs:
  ClientTableStreamArn:
    Description: The ARN for My ClientTable Stream
    Value: !GetAtt ClientTable.StreamArn
    Export:
      Name: my-client-table-stream-arn
serverless.yml
service: my-service
frameworkVersion: ">=1.1.0 <2.0.0"
provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeStream
        - dynamodb:GetRecords
        - dynamodb:GetShardIterator
        - dynamodb:ListStreams
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource: arn:aws:dynamodb:*:*:table/client
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: !ImportValue my-client-table-stream-arn
          batchSize: 1
Solved by using ${cf:stackName.outputKey}
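For reference, a sketch of how that looks in serverless.yml. The stack name below is hypothetical, and note that cf: references the stack output's logical name (ClientTableStreamArn), not the export name:

functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn: ${cf:StackA.ClientTableStreamArn}
          batchSize: 1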
I struggled with this as well, and what did the trick for me was:
functions:
  foo:
    handler: foo.main
    events:
      - stream:
          type: dynamodb
          arn:
            !ImportValue my-client-table-stream-arn
          batchSize: 1
Note that the intrinsic function !ImportValue is on a new line and indented; otherwise, the whole event is ignored when cloudformation-template-update-stack.json is generated.
It appears that you're using the !ImportValue shorthand for CloudFormation YAML. My understanding is that when CloudFormation parses the YAML, !ImportValue is treated as an alias for Fn::ImportValue. According to the Serverless Function documentation, it appears that they should support the Fn::ImportValue form of imports.
Based on the documentation for Fn::ImportValue, you should be able to reference your export like:
- stream:
    type: dynamodb
    arn: {"Fn::ImportValue": "my-client-table-stream-arn"}
    batchSize: 1
Hope that helps solve your issue.
I couldn't find it clearly documented anywhere, but what seemed to resolve the issue for me is this: variables which need to be exposed/exported in outputs must have an "Export" property with a "Name" sub-property:
In serverless.ts
resources: {
  Resources: resources["Resources"],
  Outputs: {
    // For eventbus
    EventBusName: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME",
      },
      Value: {
        Ref: "UNIQUE_EVENTBUS_NAME",
      },
    },
    // For something like sqs, or anything else, it would be the same
    IDVerifyQueueARN: {
      Export: {
        Name: "${self:service}-${self:provider.stage}-UNIQUE_SQS_NAME",
      },
      Value: { "Fn::GetAtt": ["UNIQUE_SQS_NAME", "Arn"] },
    },
  },
}
Once this is deployed, you can check whether the exports are present by running the following in the terminal (using your associated AWS credentials):
aws cloudformation list-exports
There should then be an entry with a Name property in the list:
{
    "ExportingStackId": "***",
    "Name": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but will be populated with your service and stage)
    "Value": "***"
}
And then, if the above is successful, you can reference it with "Fn::ImportValue", e.g.:
"Resource": {
"Fn::ImportValue": "${self:service}-${self:provider.stage}-UNIQUE_EVENTBUS_NAME", <-- same as given above (but will be populated with your service and stage)
}