Error on DB connection when trying to build forms and bpm from deployment->docker - formsflow.ai

When I tried to build formsflow-forms, formsflow-bpm, keycloak, etc. from deployment/docker, I got an error saying "could not connect to database". But I can build all of these separately.

It looks from the error logs like you are missing the database environment variables required to set up formsflow.ai in Docker. Check out the installation steps for the formsflow.ai solution in Docker and follow them to resolve the issue.
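For reference, here is a minimal sketch of the kind of database settings the Docker setup reads from its .env file. The variable names and values below are assumptions for illustration; check them against the sample .env shipped in the deployment/docker folder:

# hypothetical .env sketch -- these names/values are assumptions, not the official ones
FORMSFLOW_API_DB_URL=postgresql://postgres:changeme@forms-flow-webapi-db:5432/webapi
CAMUNDA_JDBC_URL=jdbc:postgresql://forms-flow-bpm-db:5432/formsflow-bpm
CAMUNDA_JDBC_USER=admin
CAMUNDA_JDBC_PASSWORD=changeme

If variables like these are unset, the containers build and start but cannot reach their databases, which matches the "could not connect to database" symptom.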

Related

DBeaver cannot connect to PostgreSQL due to driver error

I am getting the following error when I try to connect to a PostgreSQL database in a Docker container, during the download-and-install phase of the PostgreSQL drivers:
Driver file download failed. Do you want to retry?
Reason: Maven artifact 'maven:/net.postgis:postgis-jdbc:RELEASE' not found
On the other hand, when I run DBeaver from the command line as admin, I can connect to the database. But I don't want to have to run DBeaver from a terminal; I want to fix the problem itself. I tried adding the drivers as a JAR file, but that does not fix it. So, how can I fix this problem?

`gcloud run deploy` raises "Revision <revision_name> is not ready and cannot serve traffic."

Command
gcloud run deploy api --region=$REGION --image=$IMAGE
Logs
Deploying container to Cloud Run service [api] in project [[MASKED]] region [[MASKED]]
Deploying...
Creating Revision...........interrupted
Deployment failed
ERROR: (gcloud.run.deploy) Revision [[MASKED]] is not ready and cannot serve traffic.
I've searched the Google Cloud documentation, but it does not mention this problem.
How do I solve "Revision is not ready and cannot serve traffic."?
Try waiting a few minutes and then just re-launch the procedure. The good old "let's retry without changing anything" worked for me! :)
EDIT: I talked with a Cloud Architect who works with me, and he told me that this is the actual solution: if you retry the deploy too quickly, GCP may still have pending operations from the previous one!
I faced the same error in Cloud Run after getting the container working correctly locally. In my case the revisions weren't showing as failing; they had a grey checkmark,
and when hovering over it I got the message
The revision is healthy but not currently serving traffic.
I just needed to click Manage Traffic and route 100% of the traffic to the new revision.
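If you prefer the CLI, the same fix can be applied with gcloud; the service name api is taken from the question above, and the region is an assumption to adjust for your setup:

# send all traffic to the most recently created revision
gcloud run services update-traffic api --region=$REGION --to-latest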
I faced this problem as well. In my case I checked the "Cloud Run" section from the hamburger menu of the Google Cloud console. The "Logs" section there should give you a better idea of what went wrong. I was missing a Python library, and adding the correct dependency to my requirements.txt solved the issue for me. Somehow my local testing went fine without hitting this. I hope this helps. :)
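As a sketch, the same logs can also be pulled from the CLI instead of the console; the filter below assumes a managed Cloud Run service named api:

# read recent logs for the service's revisions (service name is an assumption)
gcloud logging read 'resource.type="cloud_run_revision" AND resource.labels.service_name="api"' --limit=50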
I faced this problem too; in my case my Docker image was missing a required dependency package at the build stage, because my Dockerfile missed some steps to copy the files needed to install that package.
To find your problem, if the Cloud Build logs don't make sense to you, I think you should:
From the Google Cloud console, go to the "Container Registry" service > Images
Select your repository name
From the image version (maybe latest) that you want to check > more actions > show pull command > then copy that command, e.g. docker pull gcr.io/..
From the Google Cloud console header, select Activate Cloud Shell
At the Cloud Shell terminal, pull the Docker image of your latest build by running the "pull command" you copied before.
Start a container from this image to see what exactly happens with your run revision; a sketch of these last two steps follows.
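A minimal sketch of those last two steps; the image path here is hypothetical, so substitute the pull command you copied from Container Registry:

# pull the image you copied from Container Registry (hypothetical path)
docker pull gcr.io/my-project/api:latest
# run it locally, exposing the port Cloud Run expects, and watch the startup logs for the real failure
docker run --rm -p 8080:8080 gcr.io/my-project/api:latest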

GCR Cloud Run says "Image [name] not found"

I'm trying to take my first baby steps with podman (instead of Docker) and Google Cloud Run. I've managed to build an image with a gcr.io tag and push it to Google. I then create a new service, and I can select the image in the "Select Image URL" pop-up dialog. But then the service fails to start, saying "Image [full name] not found".
I can't find anything on Google's support pages, or anywhere else. I can pull the image, I can push new versions, and they appear on the pop-up dialog. But the service still reports that they can't be found.
What am I doing wrong?
Edit in answer to DazWilkin's questions below:
Can you run the podman-created container locally using Docker?
I can't run Docker locally because it is not compatible with Fedora 31 (hence podman). But I can run it locally using podman run.
Can you deploy a Docker-created container in Cloud Run?
As above: F31. However podman is supposed to be a drop-in replacement.
Is the container registry in the same project as Cloud Run?
Yes. I did have a problem with that, but I got a permissions message rather than "not found".
Have you tried deploying via gcloud rather than the console?
Yes.
$ podman push eu.gcr.io/my-project/hs-hello-world
Getting image source signatures
Copying blob c7f3d2e0289b done
Copying blob def7032cea8e done
Copying config f1c2e2615f done
Writing manifest to image destination
Storing signatures
$ gcloud run deploy --image eu.gcr.io/my-project/hs-hello-world --platform managed
Service name (hs-hello-world):
Deploying container to Cloud Run service [hs-hello-world] in project [my-project] region [europe-west1]
X Deploying... Image 'eu.gcr.io/my-project/hs-hello-world' not found.
X Creating Revision... Image 'eu.gcr.io/my-project/hs-hello-world' not found.
. Routing traffic...
Deployment failed
ERROR: (gcloud.run.deploy) Image 'eu.gcr.io/my-project/hs-hello-world' not found.
When I used a Google-built container it worked fine.
Update: 5 March 2020
In the end I just carried on with the Google build service, and it works fine. My initial wish for local builds was in large part because a build on Google was taking over half an hour (lots of Haskell libraries to import), but now I've figured out how to use staged builds and multi-processor VMs to avoid this. I appreciate the efforts of those who have tried to help, but right now it's not broke so I'm not going to try to fix it.
I had the same issue: it seems Cloud Run is picky about the kind of manifest it can pull.
By building my images with --format docker and pushing them with --remove-signatures (inspired by this issue), podman creates and pushes Docker-style manifests to the Container Registry, and everything ran smoothly!
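For anyone who wants to reproduce this, a sketch using the image name from the question:

# build a Docker-format manifest instead of podman's default OCI format
podman build --format docker -t eu.gcr.io/my-project/hs-hello-world .
# push without signatures, as suggested in the linked issue
podman push --remove-signatures eu.gcr.io/my-project/hs-hello-world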
Too bad I spent a lot of time thinking it was a permissions problem.
I had the same error. My issue was that I was using the docker/setup-buildx-action in a GitHub action. When this was removed, Cloud Run was happy with the resulting manifest / container image.
Thanks to @André-Breda for providing the direction.
I've been having the same issue today. I'm using buildah to create the new image. I realized that the image I used successfully yesterday was built as root. So I built the new one as root and pushed it successfully.
Wish I knew why. The images built as my username ran fine locally with rootless podman.

Has anyone tried the HLF 2.0 feature "External Builders and Launchers" and wants to get in touch?

I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through?
Cheers from Germany
I'm also trying to use the ExternalBuilder. I set up core.yaml and rebuilt the containers to use it. On "peer lifecycle chaincode install .tgz...", I get an error that the path to the scripts configured in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and in docker-compose-cli.yaml, and am using the first-network setup. I dropped the part of byfn.sh that would connect to the cli container so that I do that part manually; the create, join, and update-anchors steps succeed, and then the install fails. On the install, I'm failing on /bin/detect, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally", which I think means in the cli container. Otherwise, I've tried to walk the code to see how the Docker containers are created dynamically, and from what image, but haven't been able to nail that down yet.
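A quick way to check whether the peer container can actually see the builder scripts is to exec into it and list them; the container name below comes from the first-network sample and the mount path is an assumption, so adjust both to match your compose files:

# verify the external builder scripts are mounted inside the peer container (path is hypothetical)
docker exec peer0.org1.example.com ls -l /opt/external-builder/bin
# detect, build and release must exist and be executable for the peer to fork/exec them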

maven was6:installApp TransferFailed

I'm trying to deploy my EAR to a remote WebSphere 8.5 with the Maven plugin was6 via SOAP.
I set it up with the correct EAR and the correct host, node, cell, server, port, etc.
But when I try to install the app I get WASX7017E: ConnectException - Connection refused, with a TransferFailedException.
There is nothing in the logs, neither local nor remote. As I can see with netstat, when I run mvn installApp there are many connection attempts on the server side which have status TIME_WAIT.
The plugin seems to be working with v8.5, because other functions, for example wsListApp, work.
I tried to google it but with no results.
Has anyone gotten this error before me? Or does anyone have an idea of what I should do?
Thank you.
was6-maven-plugin generates a temporary Ant file under the "target\was6-maven-plugin\" folder and then calls the $WAS_HOME\bin\ws_ant.sh/bat utility, using the -f option with the previously generated Ant file.
If you don't run the clean target, those files should still be there, so you can use ws_ant to find the real error and determine whether the problem is in your configuration or in the plugin.
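As a sketch, something along these lines; the generated file name here is hypothetical, so use whatever you actually find under that folder:

# re-run the generated Ant script directly to surface the underlying error (file name is an assumption)
$WAS_HOME/bin/ws_ant.sh -f target/was6-maven-plugin/generated-build-file.xml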
If you find an error in the plugin, please open an issue here: http://jira.codehaus.org/browse/MWAS
In this link you'll find all the options available in the wsInstallApp target:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp?topic=%2Fcom.ibm.websphere.javadoc.doc%2Fweb%2Fapidocs%2Fcom%2Fibm%2Fwebsphere%2Fant%2Ftasks%2FInstallApplication.html
Regards,
Javier Murciego