Keycloak.X (Quarkus-based) on CloudFoundry - keycloak

We want to deploy Keycloak.X to Cloud Foundry. I found older approaches (How to deploy keycloak to cloudfoundry) with two options:
wrapping it as a Spring application (the corresponding project on GitHub seems abandoned; I guess this is because Keycloak switched to Quarkus?)
using the Docker image (diego_docker is disabled in my target environment)
So I am stuck with the Quarkus distribution.
Ideally, I don't want to change too much on the application itself (risk assessment ...) but only wrap it for Cloud Foundry.
The start script targets a class named QuarkusEntryPoint, but I don't know how to put it into a buildpack.
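For reference, a rough sketch of what such a wrapper might look like: push the unpacked Quarkus distribution with the Java buildpack and a custom start command. The buildpack name, directory layout, memory setting and Keycloak flags below are assumptions, not a verified recipe.

    # Hypothetical manifest for pushing the unpacked Keycloak.X distribution;
    # all values are placeholders/assumptions.
    cat > manifest.yml <<'EOF'
    applications:
    - name: keycloak
      memory: 1G
      path: ./keycloak-distribution      # folder containing bin/kc.sh and lib/quarkus-run.jar
      buildpacks:
      - java_buildpack                   # assumed: accepts the dist-zip style bin/ + lib/ layout
      command: ./bin/kc.sh start --http-enabled=true --http-port=8080 --hostname-strict=false
    EOF
    cf push -f manifest.yml

Whether the Java buildpack actually stages the Keycloak layout as a dist-zip application would need to be verified against the buildpack version on your foundation.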

Related

How to run SCDF using WebLogic?

I am trying to use WebLogic instead of Tomcat to bring up SCDF locally. I am unable to find the respective guide on spring.io. Any pointer would help.
Spring Cloud Data Flow builds on the Spring Boot foundation. We ship the uber-jar binary through Maven Central and/or as a container image on DockerHub or Bitnami.
You would start/run the shipped binary standalone, either on bare-metal VMs or on a container orchestration platform like Kubernetes.
WebLogic doesn't fit the functional and non-functional requirements that we (in SCDF) expect for a production setting. Simply put, you won't be able to run SCDF on WebLogic.
Please consider experimenting with SCDF using Docker Compose or Kubernetes instead.
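As a minimal illustration of the "run the shipped binary standalone" route (the version is a placeholder; a database and the Skipper server are needed for full stream support):

    # Download the SCDF server uber-jar from Maven Central and run it locally.
    VERSION=<version>   # placeholder; pick a release listed on Maven Central
    wget "https://repo.maven.apache.org/maven2/org/springframework/cloud/spring-cloud-dataflow-server/${VERSION}/spring-cloud-dataflow-server-${VERSION}.jar"
    java -jar "spring-cloud-dataflow-server-${VERSION}.jar"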

Is it possible to modify the test server configuration in each separate microservice project?

I am developing a number of microservices which will run on Open Liberty. I have set up a test server in my Eclipse environment which is configured to use all the features required by all the services I am currently working on.
Whilst this works, it seems a heavy-handed approach, and it would be good to test each service in an environment which closely resembles the target server. The services can differ in the set of features they require as well as in the JVM settings necessary.
Each service will run in its own Docker container, and the Docker configuration is defined in each project.
Is there a way to better test these services without explicitly setting up a new server for each individual service?
I am not aware of any way to segment the Liberty runtime (its features) or the JVM (for different JVM settings) for different applications running in a single Liberty instance.
You can set app-specific variables and retrieve them using MP Config, but that's not the same as JVM settings, and certainly not the same as trying to segment specific features of the runtime to a specific application.
However, in general when testing, I would highly recommend mimicking your production environment as much as possible. Since you're planning on deploying into Docker, I would do the same locally when testing. Given Liberty's lightweight, composable nature, it's unlikely that you'll hit resource issues locally when doing this (you should only enable the features that each app is actually using, to minimize the size of that Liberty instance). This approach is one of the big benefits of containers and Liberty.
In other words, even if you could segment one Liberty instance per application, I would not recommend it for your testing because, as you said, "it would be good to test each service in an environment which closely resembles the target server".
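As a hedged sketch of that per-service approach, following the commonly documented Open Liberty container pattern; the base image tag, feature list, port, JVM option and file names are illustrative assumptions, not taken from the question:

    # Give each microservice its own minimal Liberty config and its own container.
    cat > server.xml <<'EOF'
    <server description="orders-service">
      <featureManager>
        <feature>restfulWS-3.0</feature>   <!-- only the features this one service needs -->
        <feature>jsonb-2.0</feature>
      </featureManager>
      <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080"/>
    </server>
    EOF
    printf -- '-Xmx256m\n' > jvm.options          # per-service JVM settings
    cat > Dockerfile <<'EOF'
    FROM open-liberty:kernel-slim
    COPY --chown=1001:0 server.xml jvm.options /config/
    RUN features.sh                                # install only the features named in server.xml
    COPY --chown=1001:0 target/orders-service.war /config/dropins/
    RUN configure.sh
    EOF
    docker build -t orders-service .
    docker run --rm -p 9080:9080 orders-service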

How are Karaf and Fabric containers related?

I have installed jboss-fuse-karaf-6.3.0 and created a project in developer studio.
I'm not able to figure out certain concepts around it.
In Fuse, how are Karaf and Fabric containers related? What I understood is that Karaf provides the runtime environment for the project to run, and Fabric is for managing deployments. Is that correct?
I have started the Karaf container by running FuseInstall/bin/start.bat. How do I start the Fabric container?
Is http://localhost:8181/hawtio the Fabric console?
Is there a way to directly deploy a project to the Karaf container using Maven, or do we need to deploy the project to Fabric?
Thanks!
Fuse is an ESB product by Red Hat. And yes, you understood it correctly: Karaf provides an OSGi runtime, whereas Fabric is for managing multi-container deployments.
You don't start a Fabric container as such; you need a Fabric agent or something similar for that. I am not very familiar with it, but you can refer to Fuse's documentation here and here regarding this.
Hawtio is basically a visual management console for various containers.
You can definitely deploy your OSGi bundle directly into a Karaf container, either with console commands such as osgi:install or by placing the bundle in FuseInstallDir/deploy. The documentation explains it much better.
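For illustration, a sketch of both routes from this answer; the bundle coordinates and paths are placeholders:

    # From the Karaf/Fuse console: install (and start) a bundle from a Maven repository.
    osgi:install -s mvn:com.example/my-bundle/1.0.0
    osgi:list | grep my-bundle

    # Or, from the host shell: drop the jar into the hot-deploy directory.
    cp my-bundle-1.0.0.jar /opt/jboss-fuse-6.3.0/deploy/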
A Fabric is just a group of commonly managed Karaf containers. It lets you manage your containers using Profiles instead of just features and bundles.
Once you have started a Karaf container you can CREATE a Fabric. Follow these instructions: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.2.1/html-single/Fabric_Guide/index.html#Deploy-Fabric-Create. Any other Karaf containers you start can then be JOINED to the existing Fabric.
Once the Fabric has been created, localhost:8181/hawtio will have Fabric-specific content.
If you are using Fabric, then you can use the fabric8 Maven plugin to deploy your application to a Profile directly. See more details here: https://fabric8.io/gitbook/mavenPlugin.html. Basically, you can just run mvn fabric8:deploy and it will update the Fabric to use your new code. Be careful here: this tells Fabric where to find your new code in its list of Maven repositories. If you have not deployed your code to a central or shared repository and it is only on your local machine, while the container receiving the deployment is on a separate machine, it will not work.
Be sure to read up on how Profiles work as well, because adding your code to a profile does not add it to a container unless that container is already set up to include the profile you are updating. The Fabric guide I linked first explains this well.
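A condensed sketch of that flow; the profile and container names are placeholders, and the exact commands come from the Fuse/Fabric tooling referenced above, so check the linked guides for your version:

    # In the Karaf console: turn this container into the first Fabric container.
    fabric:create --clean

    # From the project directory: publish the bundle into a Fabric profile.
    mvn fabric8:deploy -Dfabric8.profile=my-app-profile

    # Back in the console: assign the profile to a container so the code actually runs there.
    fabric:container-add-profile child1 my-app-profile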

Creating custom boilerplates in Bluemix

I created an application on Bluemix and I want to create a private Boilerplate from it in order to automatically deploy it when required through a web interface. Is there any possible way to create that boilerplate?
Boilerplates are not publicly documented, so it is not possible to create your own in the catalog.
But you can check the "Deploy to Bluemix" button, which perhaps covers your requirement of being able to deploy an app together with its runtime and required services.
I think that IBM Containers can be used to achieve that goal. A container is basically an application with all its dependencies, stored in a portable, platform-independent module (the container). The structure of a container is defined by an image, and from a single image you can then instantiate all the containers you want.
So, if you create an image composed of your application and its dependencies and push it to Bluemix, you can automatically instantiate and deploy new containers (with your application up and running inside) when required through a web interface, as you requested.
IBM Containers are based on Docker containers, which wrap up a piece of software in a complete filesystem that contains everything it needs to run. This guarantees that it will always run the same, regardless of the environment it is running in.
Please refer to the IBM Containers docs to understand how to use Docker on Bluemix, and to the Docker training to learn the basics.
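As a rough sketch of that push-then-instantiate flow with the (now legacy) IBM Containers plugin for the cf CLI; the registry host, namespace and image name are placeholders, and the exact commands depended on the plugin version:

    # Log the Docker CLI in against the Bluemix registry via the IBM Containers plugin.
    cf ic login

    # Build the image locally and push it to your private Bluemix registry namespace.
    docker build -t registry.ng.bluemix.net/mynamespace/myapp .
    docker push registry.ng.bluemix.net/mynamespace/myapp

    # Instantiate a container from the pushed image (also possible from the web UI).
    cf ic run -p 8080 --name myapp registry.ng.bluemix.net/mynamespace/myapp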

Heroku-like services for Scala?

I love Heroku but I would prefer to develop in Scala rather than Ruby on Rails.
Does anyone know of any services like Heroku that work with Scala?
UPDATE: Heroku now officially supports Scala - see answers below for links
As of October 3rd 2011, Heroku officially supports Scala, Akka and sbt.
http://blog.heroku.com/archives/2011/10/3/scala/
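As a small illustration of what that looks like in practice (the app name is a placeholder; this assumes an sbt project with a Procfile, and Cedar was the stack current at the time of the announcement):

    # Create a Cedar-stack app and deploy the sbt project by pushing to Heroku's git remote.
    heroku create my-scala-app --stack cedar
    git push heroku master
    heroku ps:scale web=1
    heroku logs --tail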
Update
Heroku has just announced support for Java.
Update 2
Heroku has just announced support for Scala.
Also
Check out Amazon Elastic Beanstalk.
To deploy Java applications using Elastic Beanstalk, you simply:
Create your application as you normally would using any editor or IDE (e.g. Eclipse).
Package your deployable code into a standard Java Web Application Archive (WAR file).
Upload your WAR file to Elastic Beanstalk using the AWS Management Console, the AWS Toolkit for Eclipse, the web service APIs, or the Command Line Tools.
Deploy your application. Behind the scenes, Elastic Beanstalk handles the provisioning of a load balancer and the deployment of your WAR file to one or more EC2 instances running the Apache Tomcat application server.
Within a few minutes you will be able to access your application at a customized URL (e.g. http://myapp.elasticbeanstalk.com/).
Once an application is running, Elastic Beanstalk provides several management features such as:
Easily deploy new application versions to running environments (or roll back to a previous version).
Access built-in CloudWatch monitoring metrics such as average CPU utilization, request count, and average latency.
Receive e-mail notifications through Amazon Simple Notification Service when application health changes or application servers are added or removed.
Access Tomcat server log files without needing to log in to the application servers.
Quickly restart the application servers on all EC2 instances with a single command.
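For a concrete flavour of those steps using today's AWS CLI (which postdates this answer); the bucket, application, environment and version names are placeholders, and the solution-stack string must match one returned by aws elasticbeanstalk list-available-solution-stacks:

    # Upload the WAR, register it as an application version, and spin up an environment.
    aws s3 cp target/myapp.war s3://my-eb-bucket/myapp-v1.war
    aws elasticbeanstalk create-application --application-name myapp
    aws elasticbeanstalk create-application-version --application-name myapp \
        --version-label v1 --source-bundle S3Bucket=my-eb-bucket,S3Key=myapp-v1.war
    aws elasticbeanstalk create-environment --application-name myapp \
        --environment-name myapp-env --version-label v1 \
        --solution-stack-name "<a Tomcat solution stack>"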
Another strong contender is Cloud Foundry. One of the nice features of Cloud Foundry is the ability to have a local version of "the cloud" running on your laptop so you can deploy and test offline.
I started working on the exact same thing as what you said a few weeks ago. I use Lift, which is a great framework and has a lot of potential, on top of a Linux chroot environment.
I'm done with a demo version, but Linux chroot is not that stable (or secure), so I'm now switching to FreeBSD jail on Amazon EC2, and hopefully it'll be done soon.
http://lifthub.net/
There are also other Java hosting environments, including VMForce mentioned above.
If you are looking for a custom setup which also has the ease of deployment that Heroku offers: http://dotcloud.com. They are invite-only right now, but I was given access in under three days. I am working on a Lift/MongoDB project there and it works well.
Off the top of my head, only VMForce comes to mind, but it's not available yet. This will be a Java-oriented service, so that probably means you'll have to spend a wee bit of time figuring out how to package the app.
For more discussion, there was a debate about this in 2008.
I'm not entirely sure if it's really suitable or not, but people have deployed Scala applications to Google App Engine, for example http://mawson.wordpress.com/2009/04/10/first-steps-with-scala-on-google-app-engine/
Actually, you can run Scala on Heroku right now. Don't believe it?
https://github.com/lstoll/heroku-playframework-scala
I'm not sure the tricks lstoll has used are legit, but using the new Cedar platform (where you can run custom processes) and some ingenious Gemfile hacking, he has managed to bootstrap the Java Play platform into a process. It seems to work, as he has a live site running a test page.
The Stax cloud service offers a preconfigured Lift project skeleton. Also, there is a tutorial on how to deploy a Lift project to App Engine.