Encrypt the Docker images and ship them to the client - MongoDB

We have a Spring Boot application that uses MongoDB.
We need to stand up the complete application on a machine that is owned by our client and installed at their premises. We need to encrypt the application in such a way that nothing can be extracted from it.
We are planning to do this using Docker. As of now the plan is to create a docker-compose file and give it to the client; we will build the images on our end and push them to a repository.
Since the containers can be extracted and their data read, this approach will not work for us. Is there any way to achieve this with Docker itself so that the files cannot be extracted?
The files we need to hide are our JAR files and the database.
We have already created a compose file that brings up two containers, one for the Spring Boot application and another for MongoDB.
We have also tried extracting the container: we can easily get the JAR out of it, as well as the DB credentials that we put in the script copied into /docker-entrypoint-initdb.d/.
We need to do something so that the credentials and JAR files cannot be extracted.
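For reference, a minimal sketch of the kind of compose file we are describing (the image reference, port and credentials below are placeholders, not our real values):

version: "3"
services:
  app:
    # Placeholder image reference; in reality this is our Spring Boot image
    # pushed to a private repository.
    image: registry.example.com/our-app:latest
    ports:
      - "8080:8080"
    depends_on:
      - mongo
  mongo:
    image: mongo:4.4
    environment:
      # Placeholder credentials; these are exactly the values we need to hide.
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: changeme
    volumes:
      # The init script that also contains credentials.
      - ./init.js:/docker-entrypoint-initdb.d/init.js:ro

Anyone with root access on the host can run docker save on the images or docker cp against the running containers, so the compose setup by itself provides no confidentiality.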

Related

Configure Keycloak with PostgreSQL

I am developing a Spring Boot REST API project using JDBC, and the database is PostgreSQL. I added authorization with Keycloak. I want to use User Federation because I would like to use the users in my PostgreSQL DB. How can I do this, and are there other ways that don't require User Federation?
I faced the same problem recently. I have different clients with different RDBMSs, so I decided to address the problem in a way that lets me reuse my solution across multiple clients.
I published my solution as a multi-RDBMS implementation (Oracle, MySQL, PostgreSQL, SQL Server) to solve simple database federation needs, supporting bcrypt and several types of hashes.
Just build and deploy this solution on Keycloak and configure it through the admin console, providing the JDBC connection string, login, password, the required SQL queries and the type of hash used.
Feel free to clone, fork or do whatever you need to solve your issue.
GitHub repo:
https://github.com/opensingular/singular-keycloak-database-federation
I'm doing similar development, but with Oracle and JSF.
I created a project with three classes:
one implementing UserStorageProvider, UserLookupProvider and CredentialInputValidator
one implementing UserStorageProviderFactory
one extending AbstractUserAdapter
Then I created another project which builds an EAR file containing the JAR generated by the previous project plus the driver JAR (PostgreSQL in your case) inside a lib folder.
Finally, the EAR file is copied into the /opt/jboss/keycloak/standalone/deployments/ folder of the Keycloak server and gets auto-deployed as an SPI. It's necessary to add this provider in the User Federation section of the Keycloak administration console.
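For orientation, a minimal skeleton of those three classes might look like the following. The names are illustrative and the method signatures follow the older WildFly-era user storage SPI, so check them against your Keycloak version:

import org.keycloak.component.ComponentModel;
import org.keycloak.credential.CredentialInput;
import org.keycloak.credential.CredentialInputValidator;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.RealmModel;
import org.keycloak.models.UserModel;
import org.keycloak.storage.UserStorageProvider;
import org.keycloak.storage.UserStorageProviderFactory;
import org.keycloak.storage.adapter.AbstractUserAdapter;
import org.keycloak.storage.user.UserLookupProvider;

// Looks users up in the external database and validates their passwords.
public class DbUserStorageProvider implements UserStorageProvider,
        UserLookupProvider, CredentialInputValidator {

    private final KeycloakSession session;
    private final ComponentModel model;

    public DbUserStorageProvider(KeycloakSession session, ComponentModel model) {
        this.session = session;
        this.model = model;
    }

    @Override
    public UserModel getUserById(String id, RealmModel realm) {
        // Parse the external id out of the Keycloak storage id, then query the DB.
        return null;
    }

    @Override
    public UserModel getUserByUsername(String username, RealmModel realm) {
        // SELECT ... FROM users WHERE username = ?; wrap the row in DbUserAdapter.
        return null;
    }

    @Override
    public UserModel getUserByEmail(String email, RealmModel realm) {
        return null;
    }

    @Override
    public boolean supportsCredentialType(String credentialType) {
        return "password".equals(credentialType);
    }

    @Override
    public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) {
        return supportsCredentialType(credentialType);
    }

    @Override
    public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) {
        // Compare input.getChallengeResponse() against the hash stored in the DB.
        return false;
    }

    @Override
    public void close() {
        // Release any JDBC resources opened by this provider instance.
    }
}

// Keycloak instantiates the provider through this factory; the id is the
// name that appears in the User Federation section of the admin console.
class DbUserStorageProviderFactory implements UserStorageProviderFactory<DbUserStorageProvider> {

    @Override
    public DbUserStorageProvider create(KeycloakSession session, ComponentModel model) {
        return new DbUserStorageProvider(session, model);
    }

    @Override
    public String getId() {
        return "db-user-storage";
    }
}

// Read-only view of a database row as a Keycloak user.
class DbUserAdapter extends AbstractUserAdapter {

    private final String username;

    public DbUserAdapter(KeycloakSession session, RealmModel realm,
                         ComponentModel model, String username) {
        super(session, realm, model);
        this.username = username;
    }

    @Override
    public String getUsername() {
        return username;
    }
}

The JAR containing these classes is what goes inside the EAR, next to the JDBC driver, as described above.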

How to deploy an EAR into a WAS cluster and IBM HTTP Server using the Monitored Directory Deployment functionality

My goal is to automatically deploy an EAR file to a WebSphere Application Server cluster via Monitored Directory Deployment, so my deployment target is the cluster. The WAS version is 9.0.0.10. Everything works fine if I drop the EAR file into the monitored directory, for example '/{monitored_directory_defined_in_WAS}/clusters/my_cluster_name/': the application is deployed and started. But I also want to deploy that application to the IBM HTTP Server (which resides in the same WebSphere cell as the cluster) in the same automatic process via Monitored Directory Deployment.
I tried to manually predefine the deployment targets (cluster and IBM HTTP Server) in a deployment.xml file, put it in the EAR file, and drop the EAR into '/{monitored_directory_defined_in_WAS}/clusters/my_cluster_name/', but WAS deploys the EAR only to the cluster. As a consequence I must manually map all modules from the EAR to the IBM HTTP Server via the WAS console, and I do not want to do that manually.
My second idea/attempt was to create a separate monitored directory for the IBM HTTP Server, '/{monitored_directory_defined_in_WAS}/servers/my_ibm_http_server_name/'.
First I drop the EAR into '/{monitored_directory_defined_in_WAS}/clusters/my_cluster_name/' and right after that I drop the EAR into '/{monitored_directory_defined_in_WAS}/servers/my_ibm_http_server_name/'. The result is that the EAR modules are deployed only to the web server, which is not my goal.
Is that even possible with the Monitored Directory Deployment functionality?
Is it allowed to manually create the deployment.xml file and add it to the EAR file?
First of all, installing via a monitored directory is not recommended in production environments, as it lacks control.
As you correctly suspected, it is not possible to install to both the cluster and the web server this way; check Installing enterprise application files by adding them to a monitored directory:
Because you can use only one server directory, drag and drop to map applications to combinations of servers is limited. Scenarios requiring use of more than one server, such as mapping to an application server and a web server, are not supported by direct drag and drop of an application file.
However, if you still want to use it, you may deploy a properties file into the monitored directory. That properties file can fully customize your deployment, e.g. also configuring the module-to-web-server mapping.
Check Installing enterprise application files by adding properties files to a monitored directory for more details.
UPDATE
If you have issues, I'd suggest the following approach: install your application 'classically' via the admin console and map it to both the web server and the cluster. Then run a wsadmin command to extract the properties:
AdminTask.extractConfigProperties('[-propertiesFileName myApp.props -configData Deployment=MyApplication -options [[SimpleOutputFormat true]]]')
Try to use the format from that exported file for your own properties file.
I had time to run it in my environment. I have an app with two modules inside; one module is mapped just to the cluster, the other is mapped to both the cluster and the web server. Here is the relevant part of the properties file:
taskName=MapModulesToServers
row1={ module=HelloTestUI #readonly
uri=HelloTestUI.war,WEB-INF/web.xml #readonly
server=WebSphere:cell=!{cellName},cluster=!{clusterName} }
row0={ module=HelloTestWeb #readonly
uri=HelloTestWeb.war,WEB-INF/web.xml #readonly
server=WebSphere:cell=!{cellName},cluster=!{clusterName}+WebSphere:cell=!{cellName},node=!{nodeName},server=!{serverName} }
I didn't try to use that properties file to deploy the app via the monitored directory, but as you can see the entry is created and the mapping is done via a + sign that connects the cluster and the web server.
If you don't see the mapping to your web server, make sure you saved the changes made in the console before connecting via wsadmin; otherwise wsadmin will not have the current data.
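If the extracted file looks right, the standard counterpart command applies it back through wsadmin (shown here against the same myApp.props exported above; try it on a test system first):

AdminTask.applyConfigProperties('[-propertiesFileName myApp.props]')
AdminConfig.save()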

Swift Perfect server deployment in Amazon AWS Buildpack

I am trying to deploy my web service written in Swift. I don't see the webroot folder and I am not sure where to create it. If anybody has tried this, please help me copy the source code over and start the server.
Can you tell me what you have done so far in relation to getting the AWS instance up and running? Did you use the AWS instructions from here?
Once you have the instance up and running, there are several ways to deploy your code, such as a git pull. By default, the webroot folder for the PerfectServer is created in the current working directory when you run the app. There are several command line options to define the location of libraries, the server port and the location of webroot.
With more information, I'd be happy to help get you running.

Two different data sources in a Weblogic server

I have a WebLogic server configured in Eclipse with a local database as the data source. When debugging issues it would be nice to be able to connect to the database the test group is using. I thought I would be able to clone the default "myserver" in the default mydomain and create new data sources pointing to the test group's database. I've done this, but now I'm trying to figure out how to start this new server and deploy my application to it through Eclipse.
I don't really care how it works; I just need to be able to easily switch between the two data sources, either through the WebLogic admin console or through Eclipse via multiple servers. Being able to clone the current server would be nice, since its configuration is rather complex, or just switching the sources out.
Any ideas on how to accomplish this would be much appreciated.
The JNDI names must be different, because the application connects through the JNDI name. Every data source should have a unique JNDI name.
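As a hypothetical illustration (the class and JNDI names below are placeholders), the application resolves its DataSource through the JNDI name, which is why the name is the thing you switch:

import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookup {

    // Inside the server, the no-arg InitialContext resolves names
    // against WebLogic's JNDI tree.
    public static DataSource lookup(String jndiName) throws Exception {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup(jndiName);
    }

    public static void main(String[] args) throws Exception {
        // Placeholder names: switching between the local and the
        // test-group database is just a matter of which name you resolve.
        DataSource local = lookup("jdbc/localDS");
        DataSource test  = lookup("jdbc/testGroupDS");
    }
}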

Spring Batch application integration with Spring Batch Admin

I developed a Spring Batch application which is deployed as an executable JAR launched by a batch/shell script. It works fine.
Recently I read about the Spring Batch Admin application release. According to their docs, you point it at your job-context.xml and that allows the Spring Batch app to be started, restarted and stopped from the admin app. My question is: do I have to keep my job-context.xml outside the JAR, and what are the exact steps? I am confused about this configuration.
Any insight on this would be very useful; by the way, I am using Spring Batch 2.1.
Thanks
The Spring Batch Admin application is a good reference implementation and is highly customizable. All interface implementations may be replaced via Spring DI with your own classes. The UI is also template driven (FreeMarker, I think) and may therefore be customized to display relevant information, change the skin, etc.
I had a similar need to yours: I needed the admin functionality included in an app built as a JAR. I did not quite like the fact that I had to package my jobs as a .war file. Instead I extracted the relevant configurations from the Spring Batch Admin source and created a deployment that works off the file system and runs on an embedded Jetty server.
See screen shots here: https://github.com/regunathb/Trooper/wiki/Trooper-Batch-Web-Console
Source, configurations etc. are available here: https://github.com/regunathb/Trooper/tree/master/batch-core . This project actually creates a .jar, not a .war.
If your application has custom classes and is deployed as a runnable JAR rather than contained within Spring Batch Admin, you cannot start jobs; you can only view the status of jobs and "kill" their status in the database.
If you look at http://static.springsource.org/spring-batch-admin/reference/reference.xhtml, at the end of the Configuration Upload section it states:
You can see a new entry in the job registry ("test-job") which is launchable in-process because the application has a reference to the Job. (Jobs which are not launchable were executed out of process, but used the same database for its JobRepository, so they show up with their executions in the UI.)
If your jobs are strictly configuration-driven jobs, meaning you use only XML to define them and do not need any customized item readers/processors/writers or other custom classes, then you can upload the job XML and it will be runnable from within the admin site. If you have custom classes then, from my experience, you will have to deploy the Spring Batch application within your web application and then upload an XML that contains the jobs you want to run separately.
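For example, a minimal XML-only job of the kind that can be uploaded might look like this (the itemReader and itemWriter bean names are placeholders and must resolve to beans defined in the uploaded context):

<job id="test-job" xmlns="http://www.springframework.org/schema/batch">
    <!-- One chunk-oriented step; reader and writer are placeholder bean refs. -->
    <step id="step1">
        <tasklet>
            <chunk reader="itemReader" writer="itemWriter" commit-interval="10"/>
        </tasklet>
    </step>
</job>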
I personally just used the Admin tool to view job status and provide statistics through some custom pages. I left the scheduler to run the jobs, and I didn't want those with access to the admin site to kick off a job they knew nothing about. Basically, I used it to give the users a warm fuzzy without allowing them to muck things up. (Leave it to a user to find an edge case you didn't account for.)