Consul overrides spring profiles - spring-cloud

I am moving my configuration files to Consul. The configuration is held in YAML on Consul. This is part of the configuration YAML file (as you can see, there are 2 profiles, DEV and DEV2):
---
spring:
  profiles: DEV2
environment:
  current: DEV2
  urls:
    de: http://10.11.22.44
    be: http://10.11.22.44
---
spring:
  profiles: DEV
environment:
  current: DEV
  urls:
    de: http://10.11.22.33
    be: http://10.11.22.33
The problem is that when I run the application with profile DEV2, the URLs from profile DEV are always taken (because they are lower in the YAML file). Is there a way to force Consul to read data from the DEV2 profile? Here is my bootstrap YAML config:
spring:
  cloud:
    consul:
      host: 10.11.22.33
      port: 8500
      config:
        name: config
        acl-token: sometoken
        prefix: someprefix
        format: yaml
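One option, not from the original post but grounded in how Spring Cloud Consul resolves contexts: instead of relying on the multi-document spring.profiles separators inside a single key, store each profile's settings under its own Consul key. With format: yaml, the config is also read from profile-specific keys such as someprefix/config,DEV2/data (assuming the default data-key of data and the default profile separator ,), so only the active profile's document is loaded. A sketch of the value stored at someprefix/config,DEV2/data:
# Value of the Consul key someprefix/config,DEV2/data (sketch)
# The key name selects the profile, so no spring.profiles separator is needed.
environment:
  current: DEV2
  urls:
    de: http://10.11.22.44
    be: http://10.11.22.44
Settings shared by all profiles can stay under someprefix/config/data.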

Related

How to set up kong service and routes from yaml file in Azure devops pipeline

So I have this yaml file with kong service, routes and plugins for a microservice:
_format_version: "1.1"
_info:
  defaults: {}
  select_tags:
    - ms-planning-and-finance
services:
  - connect_timeout: 60000
    enabled: true
    host: ms-planning-and-finance-svc.pdgr-business-services.svc.cluster.local
    name: planning-and-finance-api
    path: /api/planning-and-finance
    port: 4002
    protocol: http
    read_timeout: 60000
    retries: 5
    routes:
      - https_redirect_status_code: 426
        name: planning-and-finance
        path_handling: v0
        paths:
          - /api/planning-and-finance
        plugins:
          - config:
              bearer_only: "yes"
              client_id: ...
              client_secret: ...
              ...
...
and I have its CI/CD pipeline configured in Azure DevOps (a YAML pipeline), which has a kong step that creates the service, routes and plugins using curl (HTTP PUT and POST requests).
Now I'm trying to simplify that step: I would like to use the kong.yaml file above to create everything "at once". I'm still researching this, but I haven't found anything useful so far...
How can I "call" that kong.yaml file from my Azure YAML pipeline in order to create those kong resources?
After some better research, we configured deck on the agent we're using.
So now the pipeline calls deck to sync the changes in the YAML file with the ones currently in the Kong gateway. More specifically, it uses deck to:
Ping the connection to Kong (tests whether it can reach the endpoint successfully);
Validate the state YAML file with the configuration to update/create in Kong;
Sync the changes in the state file with the current Kong configuration.
(deck CLI reference)
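For reference, a rough sketch of what such a pipeline step could look like; the admin-URL variable, state file name and displayName are placeholders rather than values from the original answer, and the exact deck flags may differ between deck versions:
- script: |
    deck ping --kong-addr $(KONG_ADMIN_URL)                     # test connectivity to the Kong admin API
    deck validate --state kong.yaml                             # validate the declarative state file
    deck sync --state kong.yaml --kong-addr $(KONG_ADMIN_URL)   # apply the state file to the gateway
  displayName: Sync Kong configuration with deck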

Zipkin tracing not working for docker-compose and Dapr

Traces that should have been sent by the dapr runtime to the zipkin server somehow fail to reach it.
The situation is the following:
I'm using Docker Desktop on my Windows PC. I have downloaded the sample from dapr repository (https://github.com/dapr/samples/tree/master/hello-docker-compose) which runs perfectly out of the box with docker-compose up.
Then I've added Zipkin support as per the dapr documentation:
added this service at the bottom of docker-compose.yml
zipkin:
  image: "openzipkin/zipkin"
  ports:
    - "9411:9411"
  networks:
    - hello-dapr
added config.yaml in the components folder
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprsystem
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    exporterType: zipkin
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: "http://zipkin:9411/api/v2/spans"
When the application runs, it should send traces to the server, but nothing shows up in the zipkin UI or logs.
Strange things start to appear in the logs of the nodeapp-dapr_1 service: error while reading spiffe id from client cert
pythonapp-dapr_1 | time="2021-03-15T19:14:17.9654602Z" level=debug msg="found mDNS IPv4 address in cache: 172.19.0.7:34549" app_id=pythonapp instance=ce32220407e2 scope=dapr.contrib type=log ver=edge
nodeapp-dapr_1 | time="2021-03-15T19:14:17.9661792Z" level=debug msg="error while reading spiffe id from client cert: unable to retrieve peer auth info. applying default global policy action" app_id=nodeapp instance=773c486b5aac scope=dapr.runtime.grpc.api type=log ver=edge
nodeapp_1 | Got a new order! Order ID: 947
nodeapp_1 | Successfully persisted state.
Additional info: the current dapr version used is 1.0.1. I made sure that security (mtls) is disabled in the config file.
The configuration file is supposed to be in a different folder than the components.
Create a new folder, e.g. dapr, next to the components folder.
Move the components folder into the newly created dapr folder.
Then create config.yaml in the dapr folder.
Update docker-compose accordingly.
docker-compose
services:
  nodeapp-dapr:
    image: "daprio/daprd:edge"
    command: ["./daprd",
      "-app-id", "nodeapp",
      "-app-port", "3000",
      "-placement-host-address", "placement:50006",
      "-dapr-grpc-port", "50002",
      "-components-path", "/dapr/components",
      "-config", "/dapr/config.yaml"]
    volumes:
      # mount the whole ./dapr folder so that both /dapr/components and /dapr/config.yaml exist in the container
      - "./dapr/:/dapr"
    depends_on:
      - nodeapp
    network_mode: "service:nodeapp"
config.yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: daprConfig
spec:
  mtls:
    enabled: false
  tracing:
    enabled: true
    samplingRate: "1"
    expandParams: true
    includeBody: true
    zipkin:
      endpointAddress: http://host.docker.internal:9411/api/v2/spans
I had an issue with localhost and 127.0.0.1 in the URL, which I resolved by using host.docker.internal as the hostname.
PS: Don't forget to kill all *-dapr_1 containers so they pick up the new configuration.

How to wait until env for appid is created in jelastic manifest installation?

I have the following manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: shopozor-k8s-cluster
  name: Shopozor k8s cluster
  version: 0.0
  baseUrl: https://raw.githubusercontent.com/shopozor/services/dev
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        default: shopozor
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: version
        type: string
        caption: Version
        default: v1.16.3
  onInstall:
    - installKubernetes
    - enableSubDomains
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cmd
          cmd: |-
            curl -fsSL ${baseUrl}/scripts/install_k8s.sh | /bin/bash
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.version}
          jaeger: false
    enableSubDomains:
      - jelastic.env.binder.AddDomains[cp]:
          domains: staging,api-staging,assets-staging,api,assets
Unfortunately, when I run that manifest, the k8s cluster gets installed, but the subdomains cannot be created (yet), because:
[15:26:28 Shopozor.cluster:3]: enableSubDomains: {"action":"enableSubDomains","params":{}}
[15:26:29 Shopozor.cluster:4]: api [cp]: {"method":"jelastic.env.binder.AddDomains","params":{"domains":"staging,api-staging,assets-staging,api,assets"},"nodeGroup":"cp"}
[15:26:29 Shopozor.cluster:4]: ERROR: api.response: {"result":2303,"source":"JEL","error":"env for appid [5ce25f5a6988fbbaf34999b08dd1d47c] not created."}
What jelastic API methods can I use to perform the necessary waiting until subdomain creation is possible?
My current workaround is to split that manifest into two manifests: one cluster installation manifest and one update manifest creating the subdomains. However, I'd like to have everything in the same manifest.
Please change this:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      domains: staging,api-staging,assets-staging,api,assets
to:
enableSubDomains:
  - jelastic.env.binder.AddDomains[cp]:
      envName: ${settings.envName}
      domains: staging,api-staging,assets-staging,api,assets

k8s scdf2: how to configure a volumeMount in a task (no freetext)

Deploying a task, as a user, I need to configure k8s params like I do using "freetext".
The k8s config is the following:
Secret: "kind": "Secret","apiVersion": "v1","metadata": {"name": "omni-secret","namespace": "default",
bootstrap.yml:
spring:
  application:
    name: mk-adobe-analytics-task
  cloud:
    kubernetes:
      config:
        enabled: false
      secrets:
        enabled: true
        namespace: default
        paths:
          - /etc/secret-volume
log.info(AdobeAnalyticsConstants.LOG_RECOVERING_SECRET, env.getProperty("aws.bucketname"));
Deploying task:
task launch test-007 --properties "deployer.*.kubernetes.volumeMounts=[{name: secret-volume, mountPath: '/etc/secret-volume'}], deployer.*.kubernetes.volumes=[{name: 'secret-volume', secret: {secretName: 'omni-secret' }}]"
Result:
2019-06-10 10:32:50.852 INFO 1 --- Recovering property "aws.bucketname": null
How can I map the k8s volumes into a task? With a plain k8s deployment it is OK, and it works when using streams.
It's not clear where to start with your issue, but please take a look at the Kubernetes PropertySource implementations.
Under "Secrets PropertySource - Table 3.2. Properties" you can find other settings like:
- spring.cloud.kubernetes.secrets.name
- spring.cloud.kubernetes.secrets.labels
- spring.cloud.kubernetes.secrets.enableApi
So please refer to the documentation.
It's also possible that your aws.bucketname property wasn't configured properly.
Hope this helps.
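For reference, a minimal bootstrap.yml sketch using the properties listed above; reading the secret by name via the API (enableApi) is an assumption on my part rather than something the original answer shows:
spring:
  application:
    name: mk-adobe-analytics-task
  cloud:
    kubernetes:
      secrets:
        enabled: true
        namespace: default
        name: omni-secret   # the secret from the question
        enableApi: true     # read the secret through the Kubernetes API instead of a mounted volume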

Spring Cloud Config keeps checking out master branch

I am running into an issue with Spring Cloud Config. I have a remote git repo with my config files, which is cloned down locally for local testing.
My project has 4 bootstrap files, as shown below.
bootstrap.yml
spring:
  application:
    name: ConfigurationService
  profiles:
    active: dev, local
  cloud:
    config:
      fail-fast: true
      server:
        git:
          clone-on-start: true
          search-paths: '{application}'
          username: USERNAME
          password: PASSWORD
        bootstrap: true
        enabled: true
bootstrap-dev.yml
spring:
  application:
    name: ConfigurationService
  profiles:
    active: dev, local
  cloud:
    config:
      label: develop
server:
  port: 0
bootstrap-local.yml
spring:
  cloud:
    config:
      server:
        git:
          uri: file:///${user.home}/Projects/project
          clone-on-start: false
bootstrap-remote.yml
spring:
  cloud:
    config:
      server:
        git:
          uri: https://bitbucket.org/
          clone-on-start: true
The remote repo has a master branch and a develop branch. When I check out develop locally and start my config service, it checks out the master branch.
Why is this happening and how do I stop it? I am starting the config service with the dev and local profiles, which use the 'develop' label as seen in bootstrap-dev.yml.
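Not part of the original post, but one detail worth noting: spring.cloud.config.label controls which label a config client requests from the server, while the config server itself checks out its own default label (master unless configured otherwise), which would explain the local repo being switched back to master. A minimal sketch of bootstrap-local.yml with the server's default label pointed at develop (the default-label property comes from Spring Cloud Config Server's git settings; verify it against your version):
spring:
  cloud:
    config:
      server:
        git:
          uri: file:///${user.home}/Projects/project
          clone-on-start: false
          default-label: develop   # branch the config server checks out by default instead of master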