How to configure the vcap.services property for MTAR deployment of a Spring Boot application on SAP Cloud Foundry?

I am working on a Spring Boot application using the SAP Cloud SDK for Java and deploying the application on SAP Cloud Foundry.
The application depends on a user-provided service for configuration, which we can access through the vcap.services property.
An example of a manifest.yml that works for me is below.
---
applications:
  - name: some-app-name
    buildpacks:
      - sap_java_buildpack
    env:
      SPRING_CLOUD_CONFIG_URI: ${vcap.services.config-server-uri.credentials.uri}
    services:
      - config-server-uri
In the manifest.yml above, config-server-uri is the name of the service, and the application environment variable SPRING_CLOUD_CONFIG_URI derives its value from ${vcap.services.config-server-uri.credentials.uri}.
Now I want to deploy the same application as an MTAR on SAP Cloud Platform Cloud Foundry.
To achieve this, I configured mta.yml as below.
ID: some_id
_schema-version: '2.1'
description: some description
version: 0.0.1
parameters:
  keep-existing-routes: true
modules:
  - name: some-app-name
    type: java
    properties:
      SPRING_CLOUD_CONFIG_URI: ${vcap.services.config-server-uri.credentials.uri}
    requires:
      - name: config-server-uri
resources:
  - name: config-server-uri
    type: org.cloudfoundry.user-provided-service
However, deployment with the above mta.yml fails because ${vcap.services.config-server-uri.credentials.uri} for SPRING_CLOUD_CONFIG_URI cannot be resolved.
If I instead replace ${vcap.services.config-server-uri.credentials.uri} with the actual URL, the MTAR deployment of the application works fine.
Can anyone please guide me on what I am doing wrong and how to configure/access the vcap.services property in mta.yml for MTAR deployment?

I would recommend not reading/mapping this environment variable in the manifest.yml or the mta.yml, but instead reading the value inside your Spring application, for example using a bootstrap.yml.
The following link also provides an example of how to configure it for Cloud Foundry:
https://cloud.spring.io/spring-cloud-config/multi/multi__spring_cloud_config_client.html#_security_2
If you deploy your apps on Cloud Foundry, the best way to provide the password is through service credentials (such as in the URI, since it does not need to be in a config file). The following example works locally and for a user-provided service on Cloud Foundry named configserver:
bootstrap.yml
spring:
  cloud:
    config:
      uri: ${vcap.services.configserver.credentials.uri:http://user:password@localhost:8888}
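Applied to the service name from the question, the same pattern would look like the sketch below; the localhost fallback is only an assumption for local runs, not something the question requires:
bootstrap.yml
spring:
  cloud:
    config:
      uri: ${vcap.services.config-server-uri.credentials.uri:http://localhost:8888}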

Related

Prisma 1 + MongoDB Atlas deploy to Heroku returns error 404

I've deployed a Prisma 1 GraphQL server app on Heroku, connected to a MongoDB Atlas cluster.
Running prisma deploy locally with the default endpoint http://localhost:4466, the action runs successfully and all the schemas are generated correctly.
But if I change the endpoint to the Heroku remote host https://<myapp>.herokuapp.com, prisma deploy fails, returning this exception:
ERROR: GraphQL Error (Code: 404)
{
  "error": "\n<html lang=\"en\">\n\n<meta charset=\"utf-8\">\nError\n\n\nCannot POST /management\n\n\n",
  "status": 404
}
I think it could be related to an authentication problem, but I'm getting confused because I've defined both the service secret in prisma.yml and the management API secret key in docker-compose.yml.
Here are my current configs, if they help:
prisma.yml
# The HTTP endpoint for your Prisma API
# Tried with https://<myapp>.herokuapp.com only too with the same result
endpoint: https://<myapp>.herokuapp.com/dinai/staging
secret: ${env:PRISMA_SERVICE_SECRET}
# Points to the file that contains your datamodel
datamodel: datamodel.prisma
databaseType: document
# Specifies language & location for the generated Prisma client
generate:
  - generator: javascript-client
    output: ../src/generated/prisma-client
# Ensures Prisma client is re-generated after a datamodel change.
hooks:
  post-deploy:
    - prisma generate
docker-compose.yml
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
        databases:
          default:
            connector: mongo
            uri: mongodb+srv://${MONGO_DB_USER}:${MONGO_DB_PASSWORD}@${MONGO_DB_CLUSTER}/myapp?retryWrites=true&w=majority
            database: myapp
Plus, something weird happens in both cases: if I try to navigate the resulting API with GraphQL Playground, clicking the "Schema" tab returns an error, while the "Docs" tab is populated correctly. Apparently the exception is preventing the script from finishing generating the rest of the schemas.
A little help from someone experienced with Prisma/Heroku would be awesome.
Thanks in advance.
To date, I'm still not clear on what exactly was causing the exception. But looking more deeply at the Prisma docs, I discovered that in version 1 the app needs to be proxied through Prisma Cloud.
So deploying straight to Heroku without it was probably the main issue: basically there wasn't any Prisma container service running on the server.
What I did was follow, step by step, the official doc on how to deploy your server on Prisma Cloud (here's the video version). As in the example shown in the guide, I already have my own project, which is actually split into two apps: one for the client (front-end) and one for the API (back-end). So, instead of generating a new one, I pointed the back-end API endpoint to the remote URL of the Prisma server generated by the cloud (the Heroku container created by following the tutorial). Then, keeping the management API secret key only in the Prisma server container configuration (which was generated automatically by the cloud) and the service secret only in the back-end app, I was finally able to run prisma deploy correctly and run my project remotely.
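For illustration, the back-end's prisma.yml then points at the cloud-provisioned server rather than the manually deployed app. This is just a sketch: <prisma-server-host> is a hypothetical placeholder for the Heroku app that Prisma Cloud provisioned.
# prisma.yml after the switch
endpoint: https://<prisma-server-host>.herokuapp.com/dinai/staging
secret: ${env:PRISMA_SERVICE_SECRET}  # service secret stays only in the back-end app
datamodel: datamodel.prisma
databaseType: document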

serverless framework AWS pseudo parameters stack name

Question
What is the correct way to get the output of a cloudformation stack in a serverless.yml file without hardcoding the stack name?
Steps
I have a serverless.yml file where I import a CloudFormation template to create an ElastiCache cluster.
When I try to do so, I get this error:
Serverless Error ---------------------------------------
Invalid variable reference syntax for variable AWS::StackName. You can only reference env vars, options, & files. You can check our docs for more info.
In my file, I'd like to expose the ElastiCacheAddress output from the CloudFormation stack as an environment variable. I am using the serverless-pseudo-parameters plugin to get the output:
# Here is where I try to reference the CF output value
service: hello-world

provider:
  name: aws
  # ...
  environment:
    cacheUrl: ${cf:#{AWS::StackName}.ElastiCacheAddress}

# Reference to the CF template
resources:
  - '${file(./cf/cf-elasticache.yml)}'
The CF template is the one from the AWS Samples GitHub repository.
The snippet with the output is here:
ElastiCacheAddress:
  Description: ElastiCache endpoint address
  Value: !If [ IsRedis, !GetAtt ElastiCacheCluster.RedisEndpoint.Address, !GetAtt ElastiCacheCluster.ConfigurationEndpoint.Address ]
  Export:
    Name: !Sub ${AWS::StackName}-ElastiCacheAddress
You can use a workaround to get around these syntax caveats.
In this case, I would suggest you create a custom section to hold the variables you want to reuse. You can then reference those variables using Serverless Framework syntax only, avoiding that error, like so:
# Here is where I try to reference the CF output value
service: hello-world

custom:
  stackName: '#{AWS::StackName}'

provider:
  name: aws
  # ...
  environment:
    cacheUrl: ${cf:${self:custom.stackName}.ElastiCacheAddress}

# Reference to the CF template
resources:
  - '${file(./cf/cf-elasticache.yml)}'
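For context, the ${cf:...} variable resolves the outputs of an already-deployed stack by name, so a second service could consume the same output directly. A minimal sketch, assuming (hypothetically) that the stack above was deployed as hello-world-dev:
provider:
  name: aws
  environment:
    cacheUrl: ${cf:hello-world-dev.ElastiCacheAddress}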

set up an AWS API Gateway with Serverless

I built out my dev environment manually; I wanted to focus on logic and skip the learning curve on Serverless. But before deploying to prod I want to standardize and parameterize my stack.
Setting up my DynamoDB tables has been straightforward, but I'm running into snags deploying a new API Gateway.
I've been using AWS CodeBuild to package layers for Lambda functions and an S3 bucket to store my Lambda code.
Let's take my dev-rest-auth API (custom authentication) as an example.
I have several resources for login/out, passwords, and registration; most are protected by a custom authorizer (login/logout aren't) and all have CORS policies. I'm using a custom domain, account-api.dev.example.com. I use several DynamoDB tables for housing user data (let's avoid security discussions please; I'm not storing raw passwords and am encrypting using leading industry standards) and temporary codes (password reset & account verification).
To test a Serverless implementation, I'd like to build a YAML file that recreates my existing infrastructure. So the first question is: is that possible? Can I parameterize the deployment of an API Gateway, with a custom authorizer, a custom domain, and several Lambdas?
The next question is how?
Organizationally, I'm breaking out my .yml files by resource:
I have several DynamoDB .yml files that look like this:
Resources:
  UserTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: ${self:custom.resource-prefix}-UserTable-${self:custom.stage}
      AttributeDefinitions:
        - AttributeName: email
          AttributeType: S
      KeySchema:
        - AttributeName: email
          KeyType: HASH
      # Set the capacity to auto-scale
      BillingMode: PAY_PER_REQUEST
This was from a much earlier attempt (several months ago, from googling, but I don't remember where I found it or what it does) at standing up an API Gateway:
Resources:
  SharedGW:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: SharedGW

Outputs:
  apiGatewayRestApiId:
    Value:
      Ref: SharedGW
    Export:
      Name: SharedGW-restApiId
  apiGatewayRestApiRootResourceId:
    Value:
      Fn::GetAtt:
        - SharedGW
        - RootResourceId
    Export:
      Name: SharedGW-rootResourceId
I pull everything together in a serverless.yml file that references the resource files like this:
...
resources:
  # S3 Bucket
  - ${file(resources/s3/s3-static-host.yml)}
  - ${file(resources/s3/s3-CodeBuildResults.yml)}
  # DynamoDB
  - ${file(resources/dynamodb/dynamodb-mealtable.yml)}
  - ${file(resources/dynamodb/dynamodb-ziptable.yml)}
  - ${file(resources/dynamodb/dynamodb-usertable.yml)}
  - ${file(resources/dynamodb/dynamodb-passwordresettable.yml)}
  - ${file(resources/dynamodb/dynamodb-accountregistrationtable.yml)}
  - ${file(resources/dynamodb/dynamodb-restaurant_table.yml)}
  # DNS Records (Route 53)
  # TODO: Determine why DNS hangs
  # - ${file(resources/route_53/dev_dns.yml)}
  # Gateways
  - ${file(resources/api_gateway/local_rest_auth.yml)}
  # - ${file(resources/api_gateway/rest_auth.yml)}
...
I've seen several examples of connecting a Lambda to a gateway, but it's not clear where the gateway is being created. It's also not clear how the Lambda is being created, or whether I'd be able to reference layers/function code in S3. (A sketch of this wiring follows below.)
I've seen some tutorials for doing this with AWS Amplify via the CLI, but my dream state would be that I could effectively create a new AWS account, deploy this Serverless stack, and have my site up and running automatically, with just a little Route 53 work to point to a new domain.
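For what it's worth, here is a minimal sketch of how that wiring usually looks in a serverless.yml: the framework creates the REST API implicitly from http events, or can attach functions to an existing gateway via provider.apiGateway. The handler paths and authorizer name are hypothetical, and it assumes the SharedGW stack above is deployed separately so its exports can be imported:
provider:
  name: aws
  apiGateway:
    # reuse the gateway exported by the SharedGW stack instead of creating one
    restApiId:
      Fn::ImportValue: SharedGW-restApiId
    restApiRootResourceId:
      Fn::ImportValue: SharedGW-rootResourceId

functions:
  customAuthorizer:
    handler: src/auth.authorize      # hypothetical custom authorizer handler
  login:
    handler: src/auth.login          # hypothetical handler path
    events:
      - http:
          path: login
          method: post
          cors: true                 # login/logout stay unprotected
  passwordReset:
    handler: src/auth.passwordReset  # hypothetical handler path
    events:
      - http:
          path: password/reset
          method: post
          cors: true
          authorizer: customAuthorizer  # protected by the authorizer function above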

FeignClient with absolute URL doesn't work using Spring Cloud Loadbalancer in Hoxton.RELEASE

I have a Feign client with an absolute host URL (the target host is not registered in Eureka):
@FeignClient(name = "feedback-client", url = "http://some.absolute.url")
interface FeedbackClient {
    @RequestMapping(value = ["/feedback"], method = [RequestMethod.GET])
    fun findAll(@RequestParam("page") page: Int): Page<Feedback>
}
Everything works fine with Spring Boot 2.2.1, until I disable Ribbon using the configuration below in application.yml:
spring:
  cloud:
    loadbalancer:
      ribbon:
        enabled: false
The Feign client then starts giving the error below:
s.c.o.l.FeignBlockingLoadBalancerClient : Load balancer does not contain an instance for the service <absolute http url>
I am trying to use Spring Cloud LoadBalancer instead of Ribbon, which is causing this issue.
I have added the dependency below to build.gradle:
compile('org.springframework.cloud:spring-cloud-starter-loadbalancer')
Any help? Does Feign support Spring Cloud LoadBalancer with a static list of servers?
GitHub repo to reproduce the scenario:
https://github.com/cancerian0684/demo-openfeign
Dependencies:
Spring Boot 2.2.1
Spring Cloud Hoxton.RELEASE
Update 21st Dec 2019
This bug (https://github.com/spring-cloud/spring-cloud-openfeign/issues/259) has been fixed in Spring Boot 2.2.2.RELEASE and Spring Cloud Hoxton.SR1.
The load balancer works fine with these releases.
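Regarding the static-list-of-servers question: Spring Cloud LoadBalancer can resolve instances through Spring Cloud Commons' SimpleDiscoveryClient, which reads a static instance list from configuration. A minimal sketch, assuming the url attribute is removed from @FeignClient so the client name goes through the load balancer:
spring:
  cloud:
    discovery:
      client:
        simple:
          instances:
            feedback-client:
              - uri: http://some.absolute.url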

How do I set environment properties in AWS CodeStar?

I created a Spring project in AWS CodeStar.
I would like to pass environment properties to my application (e.g. DATA_SOURCE_URL). I can do it in Elastic Beanstalk under "Configuration" -> "Software" -> "Modify" by adding the properties, but whenever a new deployment is triggered this configuration gets reset.
I was wondering what the proper way of setting environment properties is when using AWS CodeStar.
As it may help other people searching for a solution:
I finally got it to work by using the Saved Configuration function in Beanstalk and referencing it in the CloudFormation template.yml via EBConfigurationTemplate (from the template.yml auto-generated by CodeStar):
EBConfigurationTemplate:
  [...]
  SourceConfiguration:
    ApplicationName: !Ref 'EBApplication'
    TemplateName: "Saved Configuration Name"
After that, my Django application was able to read os.environ['ENV_VAR_NAME'], and django.config was able to connect to an RDS instance (not managed by Beanstalk) to run the migration as a container_command.
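A related approach that also survives redeployments (an alternative suggestion, not what the answer above used) is committing the properties in an .ebextensions option-settings file; the file name and value below are hypothetical examples:
# .ebextensions/env.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    DATA_SOURCE_URL: jdbc:postgresql://db.example.com:5432/appdb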