Unable to pull dpage/pgadmin - postgresql

I'm trying to link PostgreSQL and pgAdmin 4 with Docker on Ubuntu 22.04. To do this I need to pull dpage/pgadmin4, but for some reason it never finishes pulling and I can't figure out why. I attached a picture of what I see; nothing changes no matter how long I leave it running. All of the layers pull fine except the last one, which is stuck on "Waiting".
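For reference, the kind of setup being attempted usually looks roughly like the compose sketch below. Only the dpage/pgadmin4 image name comes from the question; the postgres image, service names, ports, and credentials are placeholders:

version: "3.8"
services:
  db:
    image: postgres                                # assumed database image
    environment:
      POSTGRES_PASSWORD: example                   # placeholder password
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com     # placeholder login
      PGADMIN_DEFAULT_PASSWORD: example
    ports:
      - "8080:80"                                  # pgAdmin UI on http://localhost:8080
    depends_on:
      - db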


Postgres, Prisma Working Fine One Day, 'P1001 Error: Can't Reach Database' the next

For this project I am using a Prisma / Postgres database. I have made no changes to my code, and I have pulled a coworker's working version of the code to no avail. I am unable to do anything with the database: I cannot migrate, I cannot run mutations, and I cannot even open the psql console, as every command is met with
P1001: Can't reach database server at localhost:5432
Please make sure your database server is running at localhost:5432
I am not sure what I could possibly have done; I don't know enough about ports, or even the contents of app.json, to have messed anything up. Now no mutations can go through.
Interestingly enough, this all happened after I ran npx prisma migrate deploy on the deployed database, which is on an EC2 VM from AWS. Since then, the native app associated with the database refuses to work, though it is worth noting that the web app connects to the deployed database just fine. That said, nothing works locally, as the database / port / server don't exist anymore according to my machine, which makes no sense. I have no idea how to re-spin it, or why every single query / mutation from my native app now only returns Response not successful: Received status code 400 despite having the exact same syntax it did when it worked, with the web app having the same syntax and server (ExpressJS). Does anyone have any idea what could be causing this?
Error code 400 refers to a bad request coming from the client: a request that is too large, malformed syntax, invalid request message framing, etc.
First step: make sure that your database server is indeed running. Try connecting to it with other SQL clients or libraries. Sometimes Prisma is just being difficult.
Second step: are you hosting the database on the local machine? I assume you are because of the localhost address. Make sure no other program is using that port or waiting on it.
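A couple of quick checks along those lines, assuming a local PostgreSQL on the default port (adjust user and port as needed):

pg_isready -h localhost -p 5432        # reports whether anything is accepting connections on that port
psql -h localhost -p 5432 -U postgres  # try a direct connection outside Prisma
sudo lsof -i :5432                     # show which process, if any, is bound to the port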
Sorry if this doesn't help. Good luck!

Docker - under Windows 10 Pro - Need to map volumes and have them work, not quietly fail

I ran various containers on two different Windows 10 Pro machines and thought I had the data drives mapped correctly, but now I'm finding out that the data isn't being written there at all. One example was MongoDB, where I mapped /mongodb/database:/data/db. I upgraded Docker, and when it restarted MongoDB... POOF! No data. I thought that was weird, looked in /mongodb/database, and the directory is empty. Thankfully the app is still in the development phase, so it isn't critical that the data was lost...
The lines from the docker-compose file:
volumes:
- /mongodb/database:/data/db
Different machine:
I installed the gogs/gogs image, mapping the data:
docker run --name=Gogs-Git -p 10022:22 -p 10080:3000 -v /var/gogs:/Docker/Gogs-GitServer/Data gogs/gogs
It seemed to work perfectly, so I thought everything was fine and pushed a repo up to it... and today I looked in \Docker\Gogs-gitserver\data and there are no files... so where did it write the data?
I also installed TeamCity, mapping that data... nope, it has no logs, no data...
This feature just seems not to work at all. I found a reference from 2016 saying I need to look at the 'shared' tab (below General) and check C: to be shared, but no, that isn't a tab, so it isn't that.
There is no way someone would write a system that just quietly wrote the data somewhere else, or didn't actually map it, without giving an error - that would be nuts.
So there must be some other explanation... One of the machines has Hyper-V enabled in the BIOS; the other one doesn't even support it as far as I know.
I think some of the images are Linux, and some are Windows (TeamCity, I'm pretty sure, is).
OK, this is interesting... If I look at the volumes and enter one that is in use, I get this:
The Target looks about like the right path, but I'm not sure about the /backup and the /data entries on the last two lines; if those are supposed to be directories under that path, they don't exist. But if I click on the Data tab, I can see the data: it is inside Docker, hidden and not shared, in spite of there being a 'Target' that points at the right directory... how do I get it to start writing this data correctly to that folder??
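For what it's worth, the same mount information can also be read from the CLI instead of the Desktop UI, e.g. (using the gogs container name from above purely as an illustration):

docker inspect --format "{{ json .Mounts }}" Gogs-Git   # show the source and target of every mount for the container
docker volume ls                                         # list named volumes Docker is managing on its own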
I've not confirmed this yet with the above configuration, but I found that for other containers I needed to specify the path as 'c:/data/MongoDb/Database'. When I created the container using that as the path, it worked, and I have data there now. I just need to go back and fix all these VMs so they have their data mapped correctly...
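In compose form, that working mapping would presumably look like this (the host path is the one from the comment above; drive letter and directory are just that example):

volumes:
  - c:/data/MongoDb/Database:/data/db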

VSCode Remote Server stuck on initializing server

I'm currently trying to access a remote server using VS Code's Remote SSH extension. I hadn't had a problem using it before (that was around a month ago), but today when I tried to access the server I ran into some trouble.
I have the hostname and everything configured in a config file, so I just click on that option and type in the password. However, VS Code has been stuck on "Opening Remote..." for the past hour or so. The output I get in the terminal is as follows:
username@host's password:
Running remote connection script
Acquiring lock on /home/username/.vscode-server/bin/abcdefghijklmnop1234567989/
vscode-remote-lock.abcdefghijklmnop1234567989
Installing to /home/username/.vscode-server/bin/abcdefghijklmnop1234567989...
Downloading with wget
Does anybody know what the problem might be? Is this normal?
EDIT
As soon as I posted this, the connection was successfully made. However, I would still like to know what the problem was, whether it normally takes around an hour, and what this process might be doing. I also believe it would be helpful to the community overall.
Thank you.
I've faced the same issue just now and realized that firewall protection has something to do with it.
As soon as I disabled it, the remote connection was established and I managed to see my code again.
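Depending on where that firewall sits (it could equally be a desktop firewall on the local machine), on an Ubuntu-style host the check would look something like this; ufw here is an assumption, so substitute whatever firewall you actually run:

sudo ufw status     # see whether a firewall is active and what it allows
sudo ufw disable    # temporarily disable it to test, then re-enable it afterwards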

Whatsapp Business API production setup not working

I am trying to configure or set up the production environment of the WhatsApp Business API as described at https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
I have done everything mentioned there, and my Docker containers are running on port 9090, as can be seen in the image.
Still, I can't access it. Whenever I try to call https://localhost:9090, I get the error "This site can't be reached". The WhatsApp Business API does not have good documentation or tutorials so far, so this site is my last resort.
I had a similar problem which could be your case: I saw the Docker containers were OK, but nothing was working. After a day of searching I saw where it happened. My problem was that I had installed MySQL MANUALLY (not as a Docker container) on the same instance where Docker is running, and in db.env I just used 127.0.0.1, which was passed literally to the containers. Looking at the wait_on_mysql.sh script, the WhatsApp containers were waiting until the MySQL IP had connectivity before actually doing anything, printing "MySQL is not up yet - sleeping" every second; of course they would never find any connectivity.
Since my installation is for development, and I am already using that database for other things, my solution was to use 172.17.0.1 (the Docker gateway of the containers) instead, and then add two iptables rules on the host to redirect traffic from the containers to the IP MySQL is bound to on that port (3306, the default in my case). After that everything works well. I think there are better solutions, but I didn't want to go further with it; evaluate whether it applies to your case.
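As a rough sketch of the kind of redirect described, assuming MySQL is bound to 127.0.0.1 on the host and the containers talk to 172.17.0.1:3306 (the interface name, port, and route_localnet step are assumptions about this setup, not taken from the answer):

sysctl -w net.ipv4.conf.docker0.route_localnet=1      # allow DNAT to 127.0.0.1 for traffic arriving on docker0
iptables -t nat -A PREROUTING -i docker0 -p tcp --dport 3306 -j DNAT --to-destination 127.0.0.1:3306
iptables -A INPUT -i docker0 -p tcp --dport 3306 -j ACCEPT   # let the redirected traffic through the host firewall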
check the command:
docker-compose logs > debug_output.txt
That gives you insight into what's happening; hope it helps someone.
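To narrow that down to a single container and follow it live, something like this also works:

docker-compose ps                  # list the service names
docker-compose logs -f <service>   # follow one service's logs (replace <service> with a name from the list)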
I think your setup is already complete. You just need to go through the registration process and start sending messages. The containers are up and running, but calling https://localhost:9090 won't give you any response, as that is not a defined API endpoint.
Since you're using the prod single instance, the documentation can be found here, and it seems pretty straightforward: https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
You seem to have completed up to step 7. The next step would be to perform a health check to make sure everything is healthy. The API endpoint for that is https://localhost:9090/v1/health (see https://developers.facebook.com/docs/whatsapp/api/health).
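A quick way to try that endpoint from the host (the -k flag skips verification of the self-signed certificate; the endpoint may also expect the auth token you receive after registration):

curl -k https://localhost:9090/v1/health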
Has your DB also been set up?
I cannot see it in the Docker screenshot.
Also - you have to accept the certificate, as it is not issued by a public CA.

How to keep zopectl and Plone running?

Last week there was a power outage in my lab and the web server went down.
Since then, my web page doesn't work anymore. My web page uses Plone and Zope.
So I first went to the directory /Plone/zinstance/bin,
typed ./instance,
then did zopectl start,
then typed ./plonectl start.
But the problem is the following: every time I start zopectl and plonectl, the daemon process soon dies.
The command line looks like this.
I don't know what the problem is or what I should do. Anyone who knows Plone and zopectl well, please help me.
Try ./instance fg. If you have an error it will be displayed in the console.
(fg - means running it in the foreground)
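From the directory mentioned in the question, that would be something like:

cd /Plone/zinstance/bin
./instance fg    # runs the instance in the foreground so startup errors print to the console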