k8s SCDF2: how to configure a volumeMount in a task (no freetext)

When deploying a task, as a user, I need to configure Kubernetes parameters the way I do with "freetext".
The Kubernetes config is the following:
Secret:
"kind": "Secret",
"apiVersion": "v1",
"metadata": {"name": "omni-secret", "namespace": "default",
bootstrap.yml:
spring:
  application:
    name: mk-adobe-analytics-task
  cloud:
    kubernetes:
      config:
        enabled: false
      secrets:
        enabled: true
        namespace: default
        paths:
          - /etc/secret-volume
In the task code, I log the recovered property:
log.info(AdobeAnalyticsConstants.LOG_RECOVERING_SECRET, env.getProperty("aws.bucketname"));
Deploying the task:
task launch test-007 --properties "deployer.*.kubernetes.volumeMounts=[{name: secret-volume, mountPath: '/etc/secret-volume'}], deployer.*.kubernetes.volumes=[{name: 'secret-volume', secret: {secretName: 'omni-secret'}}]"
Result:
2019-06-10 10:32:50.852 INFO 1 --- Recovering property "aws.bucketname": null
How can I map Kubernetes volumes into a task? A plain Kubernetes deployment is fine, and it works using streams.

It's not clear where to start with your issue, but please take a look at the Kubernetes PropertySource implementations.
Inside "Secrets PropertySource - Table 3.2. Properties" you can find other settings like:
- spring.cloud.kubernetes.secrets.name
- spring.cloud.kubernetes.secrets.labels
- spring.cloud.kubernetes.secrets.enableApi
So please refer to the documentation.
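For example, a minimal bootstrap.yml sketch that reads the secret by name through the API instead of a mounted path (an illustration only, assuming the secret is named omni-secret, as in your manifest):
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        name: omni-secret
        namespace: default
        enableApi: true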
It's also possible that your environment variable aws.bucketname wasn't configured properly.
Hope this helps.

Related

loki-stack helm chart not able to disable kube-system logs

I am using the loki-stack Helm chart, and I am applying the following configuration to disable kube-system namespace logs in Promtail so that Loki doesn't ingest them:
promtail:
  enabled: true
  #
  # Enable Promtail service monitoring
  # serviceMonitor:
  #   enabled: true
  #
  # User defined pipeline stages
  pipelineStages:
    - docker: {}
    - drop:
        source: namespace
        expression: "kube-.*"
Please help me solve this; inside the container these values are not getting updated. The configuration is the one shown above.
I had the same issue with this configuration, and it seems that pipelineStages at this level is ignored. I solved my problem by moving it under config.snippets.
promtail:
  enabled: true
  config:
    snippets:
      pipelineStages:
        - docker: {}
        - drop:
            source: namespace
            expression: "kube-.*"
This worked for me and I hope it helps someone else who might run into the same problem. For more details, please check out this link: https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml
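To apply the updated values, something like this should work (a sketch; it assumes the release is named loki and the chart comes from the grafana Helm repository):
helm upgrade loki grafana/loki-stack -f values.yaml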

Zipkin tracing not working for docker-compose and Dapr

Traces that should have been sent by the Dapr runtime to the Zipkin server somehow fail to reach it.
The situation is the following:
I'm using Docker Desktop on my Windows PC. I have downloaded the sample from the Dapr repository (https://github.com/dapr/samples/tree/master/hello-docker-compose), which runs perfectly out of the box with docker-compose up.
Then I added Zipkin support as per the Dapr documentation:
- added this service at the bottom of docker-compose.yml:
zipkin:
  image: "openzipkin/zipkin"
  ports:
    - "9411:9411"
  networks:
    - hello-dapr
- added config.yaml in the components folder:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    exporterType: zipkin
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
When the application runs, it should send traces to the server, but nothing shows up in the Zipkin UI or logs.
A strange message starts appearing in the logs of the nodeapp-dapr_1 service: error while reading spiffe id from client cert
pythonapp-dapr_1 | time="2021-03-15T19:14:17.9654602Z" level=debug msg="found mDNS IPv4 address in cache: 172.19.0.7:34549" app_id=pythonapp instance=ce32220407e2 scope=dapr.contrib type=log ver=edge
nodeapp-dapr_1 | time="2021-03-15T19:14:17.9661792Z" level=debug msg="error while reading spiffe id from client cert: unable to retrieve peer auth info. applying default global policy action" app_id=nodeapp instance=773c486b5aac scope=dapr.runtime.grpc.api type=log ver=edge
nodeapp_1 | Got a new order! Order ID: 947
nodeapp_1 | Successfully persisted state.
Additional info: the Dapr version used is 1.0.1. I made sure that security (mTLS) is disabled in the config file.
The configuration file is supposed to be in a different folder than the components:
1. Create a new folder, e.g. dapr, next to the components folder.
2. Move the components folder into the newly created dapr folder.
3. Create config.yaml in the dapr folder.
4. Update docker-compose accordingly (the resulting layout is sketched below).
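The resulting layout would look roughly like this (a sketch; the component files are whatever the sample ships with):
.
├── docker-compose.yml
└── dapr/
    ├── config.yaml
    └── components/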
docker-compose.yml:
services:
  nodeapp-dapr:
    image: "daprio/daprd:edge"
    command: ["./daprd",
              "-app-id", "nodeapp",
              "-app-port", "3000",
              "-placement-host-address", "placement:50006",
              "-dapr-grpc-port", "50002",
              "-components-path", "/dapr/components",
              "-config", "/dapr/config.yaml"]
    volumes:
      # mount the whole dapr folder so both config.yaml and components/ are available
      - "./dapr/:/dapr"
    depends_on:
      - nodeapp
    network_mode: "service:nodeapp"
config.yaml:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: http://host.docker.internal:9411/api/v2/spans
I had an issue with localhost and 127.0.0.1 in the URL, which I resolved by using host.docker.internal as the hostname.
PS: Don't forget to kill all *-dapr_1 containers so they load the new configuration.
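One way to do that (a sketch; --force-recreate simply recreates the containers so the sidecars pick up the new configuration):
docker-compose up -d --force-recreate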

gke cluster deployment with custom network

I am trying to create a YAML file to deploy a GKE cluster in a custom network I created. I get an error:
JSON payload received. Unknown name \"network\": Cannot find field."
I have tried a few names for the resources, but I am still seeing the same issue:
resources:
- name: myclus
  type: container.v1.cluster
  properties:
    network: projects/project-251012/global/networks/dev-cloud
    zone: "us-east4-a"
    cluster:
      initialClusterVersion: "1.12.9-gke.13"
      currentMasterVersion: "1.12.9-gke.13"
      ## Initial NodePool config.
      nodePools:
      - name: "myclus-pool1"
        initialNodeCount: 3
        version: "1.12.9-gke.13"
        config:
          machineType: "n1-standard-1"
          oauthScopes:
          - https://www.googleapis.com/auth/logging.write
          - https://www.googleapis.com/auth/monitoring
          - https://www.googleapis.com/auth/ndev.clouddns.readwrite
          preemptible: true
## Duplicates node pool config from v1.cluster section, to get it explicitly managed.
- name: myclus-pool1
  type: container.v1.nodePool
  properties:
    zone: us-east4-a
    clusterId: $(ref.myclus.name)
    nodePool:
      name: "myclus-pool1"
I expect it to place the cluster nodes in this network.
The network field needs to be part of the cluster spec. The top level of properties should contain just zone and cluster; network should be at the same indentation as initialClusterVersion. See the container.v1.cluster API reference page for more.
Your manifest should look more like the one below.
EDIT: There is some confusion in the API reference docs concerning deprecated fields. I originally offered a YAML that applies to the new API, not the one you are using. I've updated with the correct syntax for the basic v1 API, and further down I've added the newer API (which currently relies on gcp-types to deploy):
resources:
- name: myclus
  type: container.v1.cluster
  properties:
    projectId: [project]
    zone: us-central1-f
    cluster:
      name: my-clus
      zone: us-central1-f
      network: [network_name]
      subnetwork: [subnet]    ### leave this field blank if using the default network
      initialClusterVersion: "1.13"
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 0
        config:
          imageType: cos
- name: my-pool-1
  type: container.v1.nodePool
  properties:
    projectId: [project]
    zone: us-central1-f
    clusterId: $(ref.myclus.name)
    nodePool:
      name: my-clus-pool2
      initialNodeCount: 0
      version: "1.13"
      config:
        imageType: ubuntu
The newer API (which provides more functionality and allows you to use more features including the v1beta1 API and beta features) would look something like this:
resources:
- name: myclus
  type: gcp-types/container-v1:projects.locations.clusters
  properties:
    parent: projects/shared-vpc-231717/locations/us-central1-f
    cluster:
      name: my-clus
      zone: us-central1-f
      network: shared-vpc
      subnetwork: local-only    ### leave this field blank if using the default network
      initialClusterVersion: "1.13"
      nodePools:
      - name: my-clus-pool1
        initialNodeCount: 0
        config:
          imageType: cos
- name: my-pool-2
  type: gcp-types/container-v1:projects.locations.clusters.nodePools
  properties:
    parent: projects/shared-vpc-231717/locations/us-central1-f/clusters/$(ref.myclus.name)
    nodePool:
      name: my-clus-separate-pool
      initialNodeCount: 0
      version: "1.13"
      config:
        imageType: ubuntu
Another note: you may want to modify your scopes. The current scopes will not allow you to pull images from gcr.io, so some system pods may not spin up properly, and if you are using Google's repository, you will be unable to pull those images.
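For instance, a scope list that also permits image pulls from gcr.io might look like this (a sketch; devstorage.read_only is the scope that grants read access to the storage buckets backing the registry):
config:
  oauthScopes:
  - https://www.googleapis.com/auth/devstorage.read_only
  - https://www.googleapis.com/auth/logging.write
  - https://www.googleapis.com/auth/monitoring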
Finally, you don't want to repeat the node pool resource both in the cluster spec and separately. Instead, create the cluster with a basic (default) node pool, and create all additional node pools as separate resources so they can be managed without going through the cluster. There are very few updates you can perform on a node pool, aside from resizing.
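Once the manifest is ready, it can be deployed with something like the following (assuming it is saved as cluster.yaml; the deployment name is arbitrary):
gcloud deployment-manager deployments create my-gke-deployment --config cluster.yaml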

Consul overrides spring profiles

I am moving configuration files to Consul. The configuration files are held as YAML in Consul. This is part of the configuration YAML file (as you can see, there are two profiles, DEV and DEV2):
---
spring:
  profiles: DEV2
environment:
  current: DEV2
urls:
  de: http://10.11.22.44
  be: http://10.11.22.44
---
spring:
  profiles: DEV
environment:
  current: DEV
urls:
  de: http://10.11.22.33
  be: http://10.11.22.33
The problem is that when I run the application with profile DEV2, the URLs from profile DEV are always taken (because they are lower in the YAML file). Is there a way to force Consul to read data from the DEV2 profile? Here is my bootstrap YAML config:
spring:
  cloud:
    consul:
      host: 10.11.22.33
      port: 8500
      config:
        name: config
        acl-token: sometoken
        prefix: someprefix
        format: yaml
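For context, the profile itself is activated at application startup, e.g. (a generic Spring Boot invocation, not specific to Consul):
java -jar app.jar --spring.profiles.active=DEV2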

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my Spring Boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here: https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine; I have a Docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable in my GKE setup; these are the steps I completed, thanks to "How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes":
1. Create the secret (in my case, in my deploy step on GitLab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Set up the volume:
...
volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
      items:
        - key: application-credentials.json # default name created by the create secret from-file command
          path: application-credentials.json
3. Set up the volume mount:
spec:
  containers:
    - name: my-service
      volumeMounts:
        - name: google-application-credentials-volume
          mountPath: /etc/gcp
          readOnly: true
4. Set up the environment variable:
spec:
  containers:
    - name: my-service
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gcp/application-credentials.json
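To verify that the variable is visible inside the container, something like this can be used (a sketch; substitute your actual pod name):
kubectl exec -it <my-service-pod> -- printenv GOOGLE_APPLICATION_CREDENTIALS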
That means you are trying to access a service that is not enabled or that you are not authenticated to use. Are you sure that you enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard or navigate to APIs & Services from the menu.
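The API can also be enabled from the command line (assuming the gcloud CLI is configured for the right project):
gcloud services enable vision.googleapis.com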
Would it help if you added the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables, as described in the Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"