Save NestJS local project settings without committing to Github - postgresql

I have created two NestJS projects using Visual Studio Code that are 99.9% identical, one on each of two distinct host machines. Each project relies on a docker-compose.yaml file and a separate .env file to connect to a Postgres database. The issue is that on one host machine I can use the default Postgres port (5432), while on the other host machine that port is unavailable, so I changed the port value to 5433. The project runs successfully on each machine without issues.
My question is: how do I keep the local project settings in the .env file and docker-compose.yaml without committing them to GitHub?
I modified .gitignore to include both files, but Git still expects me to commit my newest changes, which amount to only the port number (5432 <--> 5433). The edited .gitignore is now also a change, so I am expected to commit these three changed files to GitHub.
I have also tried 'git stash', which, according to the Git documentation, rolls back to the HEAD of the last commit and therefore just undoes my changes. Committing to GitHub means that 'git pull' will be necessary on the 2nd host, after which the port number must be changed back to the expected value. This is a cycle I have been unable to get past, and 'git stash' does not appear to provide any options that resolve it. As mentioned, I am using a Docker container which I start with the compose YAML file.
I am seeking help to resolve this. The relevant code is included below.
docker-compose.yaml
ports:
  - "5432:5434"    # on the 2nd host: "5433:5435"
environment:
  POSTGRES_USER: postgres
.env file
DATABASE_URL="postgresql://postgres:XXXXxxx@localhost:5432/nest?schema=public"
# on the 2nd host the port is set to 5433
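A commonly suggested approach (not part of the original question, just a sketch) is to keep the machine-specific port only in the untracked .env file and stop Git from picking up local changes. Files that are already tracked are not affected by .gitignore, so they must first be removed from the index, or marked so that local edits are ignored:
# Option A: untrack the machine-specific file so .gitignore applies to it
# (the file stays on disk, it is only removed from the index)
git rm --cached .env
git commit -m "Stop tracking machine-specific .env"
# Option B: keep the file tracked but ignore local edits to it
git update-index --skip-worktree docker-compose.yaml
With the .env file untracked, docker-compose can still read the port from it through variable substitution (for example a hypothetical POSTGRES_PORT variable referenced as ${POSTGRES_PORT:-5432} in the committed compose file), so only the local values differ between hosts.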

Related

Datalore local installation to Docker postgresql error

I'm following the directions from here and I've gotten this far...
Prerequisites:
Docker Compose version v2.10.2
Clone or download the content of this repository.
Do the following to set up your database:
Open docker-compose.yaml in the [repository_folder]/docker-compose folder in any text editor and replace the values of DB_PASSWORD and POSTGRES_PASSWORD properties with any random string (both properties must have the same value). This string will be used as your database password. Make sure you keep it secret.
Run the following command and wait for Datalore to start up: docker compose up
It's at this point I get the following message:
[+] Running 0/2
- postgresql Error 0.8s
- datalore Error 0.8s
Error response from daemon: manifest for jetbrains/datalore-server:2022.3 not found: manifest unknown: manifest unknown
I'm completely out of ideas for what to try or where to look for more details, other than emailing JetBrains support directly (which I've done). The only thing I can think of is that there is some unspoken prerequisite I'm not aware of, because the instructions don't really seem that complicated up to this point.
You cloned the master branch, which references datalore-server 2022.3, and that version is not released yet. You need to either clone an older version (like 2022.2.3) or edit your /docker-compose/docker-compose.yaml and change the image tags there:
datalore:
  image: jetbrains/datalore-server:2022.2.3
[...]
postgresql:
  image: jetbrains/datalore-postgres:2022.2.3
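After editing the tags, re-pulling should fetch the released images; a minimal sketch, assuming the commands are run from the folder containing the compose file:
docker compose pull   # fetch the 2022.2.3 images referenced by the edited file
docker compose up     # start Datalore again with the pulled images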

Set up a PostgreSQL connection to an already existing project in Docker

I had never used PostgreSQL or Docker before. I set up an already-developed project that uses these two technologies in order to modify it.
To get the project running on my Linux (Pop!_OS 20.04) machine I was given these instructions (sorry if some of this is irrelevant, but I don't know what is and isn't important for stating my problem):
Installed Docker CE and Docker Compose.
Cloned the project with git and ran the commands git submodule init and git submodule update.
Initialized the container with: docker-compose up -d
Generated the application configuration file: ./init.sh
After all of that the app was available at http://localhost:8080/app/, and I could see the project's subdirectories, including dbdata.
Now I need to modify the DB, and that's where the difficulty arose, since I don't know how to set up the connection with PostgreSQL inside Docker.
In a project without Docker which uses MySQL I would
Create the local project's database "dbname".
Import the project's DB: mysql -u username -ppassword dbname < /path/to/dbdata.sql
Connect a DB client (DBeaver in my case) to the local DB and perform the necessary modifications.
In an endeavour to do something like that with PostgreSQL, I have read that I need to
Install and configure an Ubuntu 20.04 server.
Install PostgreSQL.
Configure Postgres “roles” to handle authentication and authorization.
Create a new Database.
And then what?
How can I set up the connection in order to be able to modify the DB from DBeaver and see the changes reflected on http://localhost:8080/app/ when Docker is involved?
Do I really need an Ubuntu server?
Do I need a program other than psql to connect to Postgres from the command line?
I have found many articles about setting up PostgreSQL with Docker locally, but all of them address the topic from scratch; none of them talk about how to connect to the DB of an "old" project inside Docker. I hope someone here can give a newbie directions on what to do, or recommend an article explaining from scratch how to configure PostgreSQL and then connect to a DB in Docker. Thanks in advance.
Edit:
Here's the output of docker ps
You have 2 options to get into known waters pretty fast:
Publish the postgres port on the docker host machine, install any postgres client you like on the host, and connect to the database hosted in the container as you would traditionally. You will use localhost:5433 to reach the DB. << Update: 5433 is the port where the postgres container is published on your host, according to the screenshot.
Another option is to add another service in your docker-compose file to host the client itself in a container.
Here's a minimal example in which I am launching two containers: the postgres and an adminer that is exposed on the host machine on port 9999.
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 9999:8080
then I can access the adminer at localhost:9999 (password is example):
Once I'm connected to my postgres through adminer, I can import and execute any SQL query I need:
A kind piece of advice is to read thoroughly about how data is persisted in a Docker context. Performance and security are also topics that, as a novice in the field, you will want to get under your belt sooner rather than later.
If you're running your PostgreSQL container on your own machine, you don't need anything else to connect using a database client. That's because, to the host machine, all the containers are accessible on their own subnet.
That means that if you do this:
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 341164c5050f
it will output a list of IPs that you can configure in your DBeaver to access the container instance directly.
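For example, a command-line connection to one of those addresses could look like the sketch below; the IP is purely illustrative, and the user and database depend on the project's configuration:
psql -h 172.17.0.2 -p 5432 -U postgres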
If you're not fond of connecting from the host like that (or you prefer to use the CLI), you can always use the psql inside the PostgreSQL container to achieve something like point no. 2 of your MySQL workflow:
docker exec -i 341164c5050f bash -c 'psql -U $POSTGRES_USER' < /path/to/your/schema.sql
It's important to pass -i, otherwise it will not read the schema from stdin. If you're looking for psql in interactive mode, use -it instead.
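For instance, an interactive session could look like this sketch, assuming the same container ID and the default postgres user:
docker exec -it 341164c5050f psql -U postgres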
Last but not least, you can always edit the docker-compose.yml file to publish the port and connect to the instance using the public IP/loopback device.

GitLab Runner won't connect to another EC2 instance via scp. ERROR: debug1: read_passphrase: can't open /dev/tty: No such device or address

Currently, I'm trying to use a Docker image in my GitLab CI file in order to connect to my production server and overwrite my code in production on deployment. While I can ssh from my local machine with the private key I was given, whenever I copy the private key into a CI variable and try to connect, I consistently get this error:
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
...
debug1: read_passphrase: can't open /dev/tty: No such device or address
Host key verification failed.
lost connection
I've verified that /dev/tty exists on both machines and that my .pem can be read on the GitLab runner after copying. I've set the appropriate permissions with chmod and have also tried multiple permutations of calling the scp command. I'm currently running the connection within the before_script of my gitlab.yml file, to avoid the delay of building the Docker images, and the relevant portion is included below.
EDIT: /dev/tty also has the correct permissions. I've looked at the previous Stack Overflow posts related to this issue, and they either weren't relevant to the problem or didn't contain the solution.
image: docker:19.03.5
services:
  - docker:19.03.1-dind
before_script:
  - docker info
  - apk update
  - apk add --no-cache openssh
  - touch $SSH_KEY_NAME
  - echo "$SSH_KEY" > "./$SSH_KEY_NAME"
  - chmod 700 $SSH_KEY_NAME
  - ls -la /dev/tty
  - scp -v -P 22 $SSH_KEY_NAME -i $SSH_KEY_NAME $PROD_USER@$SERVER_URL:.
Apologies if this feels dumb, but I have little experience with setting up a private key from another machine. Currently I'm unsure whether I need to reference the private key within my GitLab runner in a specific way, or whether the echo isn't saving the .pem as a valid private key. My inbound rules for the AWS instance allow all traffic on port 22, and copying this key and connecting from my PC works fine. It's just the runner that has problems. Thanks for your help!
The best solution I found for this is to either run an Ubuntu GitLab image and manually call Docker inside it with a volume tied to SSH, or use an AWS instance with a password stored in GitLab secrets if you're hard-pressed to keep the dind image in GitLab. Neither is truly optimal, but because containerization is so effective at isolation, you have to resolve it with one of the two methods.
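Not part of the answer above, but for reference: the "Host key verification failed" line in the log is commonly dealt with by adding the server's host key to known_hosts in the before_script, roughly as in this sketch (assuming $SERVER_URL holds the server's hostname):
- mkdir -p ~/.ssh
- ssh-keyscan -H "$SERVER_URL" >> ~/.ssh/known_hosts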

Click to deploy MEAN Stack on Google Compute Engine Clone Repo Locally

On Compute Engine, using the click-to-deploy option for MEAN, how can we clone the repo of the sample app it locally creates so that we can start editing and pushing changes?
I tried gcloud init my-project, however all it seems to do is initialize an empty repo. And indeed, when I go to the "source code" section for that project, there is nothing there.
How do I get the source code for this particular instance, setup a repo locally for it and then deploy changes to the same instance? Any help would be greatly appreciated.
OK, well, I have made some progress. Once you click to deploy, GCE will present you with a command to access your MEAN stack application through an SSH tunnel.
It will look something like this:
gcloud compute ssh --ssh-flag=-L3000:localhost:3000 --project=project-id --zone us-central1-f instance-name
You can change the port numbers as long as your firewall rules allow that specific port.
https://console.developers.google.com/project/your-project-id/firewalls/list
Once you SSH in, you will see the target directory, named after whatever you told mean-io to use as the application name when you ran mean init.
I first made a copy of this folder (mine was named "flow") with cp -r flow flow-bck, and then I removed some unnecessary directories with:
cd flow-bck && rm -rf node_modules bower_components .bower* .git
All of this was to set up copying that folder to my local machine using gcloud compute copy-files, available after installing the Google Cloud SDK.
On my local machine, I ran the following:
gcloud compute copy-files my-instance-name:/remote/path/to/flow-bck /local/path/to/destination --zone the-instance-region
Above, 'my-instance-name', '/remote/path/to', '/local/path/to', and 'the-instance-region' obviously need to be changed to your deployment's info, etc.
This copied all the files from the remote instance to a folder called flow-bck at the defined local path. I renamed this folder to match its name on the remote, flow, and then did:
cd flow && npm install
This installed all the needed modules for MEAN.io. Now, the important part is that you have to kill your remote ssh connection before you start running the local version of the app, because the ssh tunnel will already be using that same port (3000), unless you changed it when you tunneled in.
Then, in my local app directory flow, I ran gulp to start the local version of the app on port 3000. It loads up and runs just fine. I needed to create a new user, as it's obviously not the same database.
Also, I know this is basic stuff, but not too long ago I would have forgotten to start the mongodb process by running mongod beforehand. In any case, mongo must be running before you can start the app locally.
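Putting those last steps together, the local startup order might look like this sketch (the flow folder name comes from the description above, and the mongod call assumes a default local MongoDB setup):
mongod &      # MongoDB must already be running
cd flow
gulp          # serves the local copy of the app on port 3000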
Now, the two things I haven't done yet are editing and deploying a new version based on this... and there's the nagging question of whether this is all even necessary. It would be great to find out that this can all be done with a few simple commands.

Rubymine :: Git is not working, getting fatal: Could not read from remote repository

RubyMine was working fine; I was able to perform all kinds of git-related operations with it seamlessly. But suddenly it's not working. When I want to pull, it says:
fatal: Could not read from remote repository.
It also fails for all other read/write operations.
But git is working fine on my machine; I can push/pull using the command line. It's just not working in RubyMine.
I am using Mac.
First make sure that you can use git from command line. If you can then,
Go to settings -> Version Control -> Git
Select "SSH executable" -> Native
And then restart Rubymine.
Did you init a local git repository, into which this remote is supposed to be added?
Does your local directory have a .git folder?
Try git init...
Step 1 - Attempt adding your public key to Heroku
heroku keys:add ~/.ssh/id_rsa.pub
Step 2 - Generate a new set of SSH keys, then attempt the first step again
https://help.github.com/articles/generating-ssh-keys
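The GitHub article linked above walks through key generation; the usual command is along these lines (a sketch, with the email address as a placeholder):
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"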
Step 3 - Verify and/or modify your config file
vim ~/.ssh/config
Host heroku.com
  Hostname heroku.com
  Port 22
  IdentitiesOnly yes
  IdentityFile ~/.ssh/id_rsa   <--- should point to your private SSH key
  TCPKeepAlive yes
  User jsmith@gmail.com
Step 4 - Remove the heroku remote from git, then recreate the connection
Adding the remote via heroku create will only be an option for new repositories. Be sure to delete the old repo that you originally attempted to create.
$ git remote rm heroku
$ heroku create
Step 5 - Reinstall the Heroku Toolkit
http://hayley.ws/2010/12/04/getting-jekyll-running.html