spring-cloud-deployer - specifying maven remote repository URLs - spring-cloud

Spring Cloud Data Flow - Cloud Foundry Server (v1.0.0.M4)
While working to externalize configuration into a properties file, and then using Spring Cloud Config Server to provide those environment settings at installation time, I've got a question about some of the values I would normally define in the YML manifest.
First, in a YML manifest, I might define them this way:
JAVA_OPTS: -Dhttp.keepAlive=false
MAVEN_REMOTE_REPOSITORIES_SNAPSHOTS_URL: <nexus url>
MAVEN_REMOTE_REPOSITORIES_RELEASES_URL: <another nexus url>
So how would I put these into a properties file? This is my guess:
java.opts=-Dhttp.keepAlive=false
maven.remote.repositories.snapshots.url=<nexus url>
maven.remote.repositories.releases.url=<another nexus url>

If you're looking to configure custom Maven repository mirrors, please review this section.
The Spring Cloud Deployer's Maven resolution strategy looks for the naming conventions defined in that section. Once you have the right set of key/value pairs, you can list them either in a properties file or as environment variables.
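
For example, a minimal sketch of both forms, assuming the deployer's maven.remote-repositories.* naming convention and Spring Boot's relaxed binding (double-check the exact keys against the section linked above):

# properties-file form (note remote-repositories, hyphenated, not remote.repositories)
maven.remote-repositories.snapshots.url=<nexus url>
maven.remote-repositories.releases.url=<another nexus url>

# equivalent environment-variable form
MAVEN_REMOTE_REPOSITORIES_SNAPSHOTS_URL=<nexus url>
MAVEN_REMOTE_REPOSITORIES_RELEASES_URL=<another nexus url>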

Related

Pass environment variable values to spring boot profiles (application.properties file) when deploying from Github

I have a simple Spring Boot REST app that uses Mongo Atlas as the database, and I have a couple of environment variables to pass to the project for the connection URL. I have defined this in src/main/resources/application.properties, which is the standard Spring location for such properties. Here's the property name and value.
spring.data.mongodb.uri=mongodb+srv://${mongodb.username}:${mongodb.password}#....
I use VSCode for local dev with a launch.json file (not committed to my GitHub repo) to pass these values, and I can run the app locally. I was able to deploy this app successfully to Heroku, set these two values in the Heroku console under my app settings, and it all works fine on Heroku as well.
I am now trying to deploy the same app to GCP App Engine, but I could not find an easy way to pass these values. All the help articles seem to indicate that I need to use some GCP datastore and cloud-provider-specific code in my app, or some kind of GitHub Actions script. That seems a bit involved, and I wanted to know if there is an easy way of passing these values to the app via GCP settings (just like in Heroku) without polluting my repo with cloud-provider-specific code or YML files.
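
For what it's worth, the placeholder style shown above already decouples the secrets from the repo: Spring Boot's relaxed binding resolves ${mongodb.username} and ${mongodb.password} from environment variables, so a sketch of the runtime settings (values invented) is simply:

MONGODB_USERNAME=appuser   # resolves ${mongodb.username}
MONGODB_PASSWORD=s3cret    # resolves ${mongodb.password}

Any mechanism the platform offers for setting environment variables should then work unchanged, as it did on Heroku.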

Skaffold config dependencies with profiles

I have a microservice application in one repo that communicates with another service that's managed by another repo.
This is not an issue when deploying to the cloud; however, when developing locally, the other service needs to be deployed too.
I've read this documentation: https://skaffold.dev/docs/design/config/#remote-config-dependency and this seems like a clean solution, but I only want it to depend on the git skaffold config if deploying locally (i.e. current context is "minikube").
Is there a way to do this?
Profiles can be automatically activated based on criteria such as environment variables, kube-context names, and the Skaffold command being run.
Profiles are processed after resolving the config dependencies, though. But you could have your remote config include a profile that is contingent on kubeContext: minikube.
Another alternative is to have several skaffold.yamls: one for prod, one for dev.
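
To illustrate the activation mentioned above, a context-activated profile in skaffold.yaml looks roughly like this (profile name and manifest paths are invented; the schema version may differ for your install):

apiVersion: skaffold/v2beta29
kind: Config
profiles:
  - name: local
    activation:
      - kubeContext: minikube    # auto-activates only when the current context is minikube
    deploy:
      kubectl:
        manifests:
          - k8s/local/*.yaml     # extra manifests for the locally-deployed dependency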

How to include config files when deploying to Vercel

I have a NextJS project I want to deploy to Vercel. The server needs a config file which is a typescript file containing an object, and is ignored from version control. Obviously when Vercel clones my repo it doesn't get the config file. Is there any way to sideload this config file into Vercel or do I need to fork my own repo privately so I can include the config file?
I've done some research, and the quickest way I found is to push directly to Vercel using the CLI.
Here's the doc: https://vercel.com/docs/cli
Another way could be to create two repositories: a private one linked to your Vercel project, and a public one without your config file (as you said).
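
A rough sketch of the CLI route (assuming the standard npm package; note that, depending on CLI version, files matched by .gitignore may still be skipped unless a .vercelignore re-includes them):

npm i -g vercel    # install the Vercel CLI
vercel login
vercel --prod      # deploys the current local directory, config file included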

How to get s2i to connect to a private NuGet feed

I have OpenShift set up to build an ASP.NET Core application. I've succeeded in configuring OpenShift so it pulls in the latest source code. I see in the logs that it starts to build, but it immediately stops at the restore step.
OpenShift doesn't have access to our private NuGet feeds.
I know I can add credentials to the NuGet.config file, but that would mean committing sensitive information to the repository, which we don't want.
I've tried adding Input Secrets, as mentioned in the docs. I did this by creating a secret that contains the NuGet.config contents and adding the secret to my BuildConfig. I still get the same error (an HTTP 401).
Can I somehow tell OpenShift how to connect to the private NuGet feeds? Maybe using the secrets feature perhaps?
In the case of NuGet configuration, you will need to specify where the NuGet.config build input secret gets mounted. This can be done by setting the destinationDir parameter of the build input secret to a valid configuration location.
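
A minimal sketch of what that can look like in the BuildConfig (the build name, repo URL, and secret name nuget-config are invented):

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-build
spec:
  source:
    git:
      uri: https://example.com/myapp.git
    secrets:
      - secret:
          name: nuget-config   # secret holding your NuGet.config contents
        destinationDir: .      # mount into the source root so the restore step finds it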
As for being able to keep the config file in your repository itself, you can do this by making use of environment variable references in the config, for example <add key="ClearTextPassword" value="%NUGET_REPO_PASSWORD%" />. The NUGET_REPO_PASSWORD environment variable can then be configured in your build configuration, with its value referenced from an OpenShift secret.
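
As a sketch of that combination (the feed name, secret name, and key are invented):

<!-- NuGet.config: credentials reference environment variables -->
<packageSourceCredentials>
  <myPrivateFeed>
    <add key="Username" value="%NUGET_REPO_USER%" />
    <add key="ClearTextPassword" value="%NUGET_REPO_PASSWORD%" />
  </myPrivateFeed>
</packageSourceCredentials>

# BuildConfig strategy: supply the variable from a secret
strategy:
  sourceStrategy:
    env:
      - name: NUGET_REPO_PASSWORD
        valueFrom:
          secretKeyRef:
            name: nuget-creds
            key: password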
Hope this gets you going. If all else fails, you can definitely override the s2i assemble script with your own by adding an executable script at .s2i/bin/assemble of your project repository.

What is server.xml for in the Java DB Web Starter Git code?

I've created a Liberty Bluemix project, and Bluemix created the Git project. I've downloaded it in Eclipse and now I want to enable more features.
There's a server.xml there, but no matter what features I add to it, the Bluemix logs say I am still using the default ones.
I am just pushing the changes to Git (so Jazz will push them to Bluemix).
What am I doing wrong?
From my understanding, the server.xml from the starter is for your local Liberty runtime, which you can also fire up from within the Maven plugin. If you want to change your Bluemix Liberty feature set, you can do so by setting cf environment variables.
See my recent blogpost on how I did this.
https://bluemixdev.wordpress.com/2016/02/07/bootstrap-a-websphere-liberty-webapp/
I added the following to the build script in my deployment pipeline.
cf set-env blueair-web JBP_CONFIG_LIBERTY "app_archive: {features: [servlet-3.1]}"
cf push "${CF_APP}"
Alternatively, you can set the Liberty feature set within your manifest; see this blog post on how to do so: https://bluemixdev.wordpress.com/2016/02/21/specify-liberty-app-featureset-in-manifest/
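
For example, a minimal manifest.yml along those lines (reusing the same app name and feature list as above):

applications:
- name: blueair-web
  env:
    JBP_CONFIG_LIBERTY: "app_archive: {features: [servlet-3.1]}"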
If all you're trying to do is update the feature list, then setting the JBP_CONFIG_LIBERTY environment variable is the easiest way.
But if you're looking to provide more config in the server.xml then you'll need to provide a server package.
For example, for this case I can either:
1. Issue a cf push myBluemixApp directly from the "videoServer" directory.
2. Package the server using the wlp/bin/server package videoServer --include=usr command and then push the resulting zip file with cf push myBluemixApp -p wlp/usr/servers/videoServer/videoServer.zip (see https://developer.ibm.com/bluemix/2015/01/06/modify-liberty-server-xml-configurations-ibm-bluemix/).
3. Manually, or using your build, create a wlp directory structure keeping only the files you want to upload, as I've done in the deploy directory here: https://hub.jazz.net/project/rvennam/Microservices_Shipping/overview. You can then push that directory as I'm doing (see manifest.yml). This will work with Jazz/DevOps Services.
Packaging the server.xml within a war file is not the correct way.