TypeORM - generate migrations from existing entities - PostgreSQL

I have a Nest.js/TypeORM/Postgres application that I have been developing for the last year. I have been creating/adding/removing tables and columns, but never used migrations.
Now it is time to deploy, and whenever I run typeorm migration:generate, a migration file is created only for the last columns that I added/removed from one or two tables.
Is it possible to generate a migration for all existing entities that are already part of my db schema?
Basically, the up would create each table with FKs, indexes, constraints, etc and the down would drop all of that.
Note: I noticed this post. Is this the correct approach for my issue?

I solved this by doing the following:
dropped and recreated my database in Postgres
ran npm run typeorm:reset
then npm run typeorm:migrate
then npm run typeorm:run
Scripts for reference:
package.json
"typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js",
"typeorm:sync": "npm run typeorm schema:sync",
"typeorm:drop": "npm run typeorm schema:drop",
"typeorm:reset": "npm run typeorm:drop && npm run typeorm:sync",
"typeorm:migrate": "env NODE_ENV=development npm run typeorm migration:generate -- -n",
"typeorm:create": "env NODE_ENV=development npm run typeorm migration:create -- -n",
"typeorm:run": "ts-node -r tsconfig-paths/register $(yarn bin typeorm) migration:run"
ormconfig.js
const config = {
  type: 'postgres',
  host: process.env.RDS_HOST,
  port: Number(process.env.RDS_PORT),
  username: process.env.RDS_USERNAME,
  password: process.env.RDS_PASSWORD,
  database: process.env.RDS_DB_NAME,
  synchronize: process.env.NODE_ENV !== 'production',
  dropSchema: false,
  logging: process.env.NODE_ENV === 'development',
  entities: [`${__dirname}/src/**/**.entity{.ts,.js}`],
  migrations: [`${__dirname}/src/migrations/**/*{.ts,.js}`],
  cli: {
    migrationsDir: 'src/migrations',
  },
}

module.exports = config
Note:
in my TypeORM scripts, I use tsconfig-paths/register because I have path aliases in my tsconfig.json that I use in my *.entity.ts files.
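Chained together, the reset-and-baseline steps above can be sketched as a small shell function (hypothetical; "InitialSchema" is just a placeholder name appended after the `-n` flag of the typeorm:migrate script):

```shell
# Drop the schema, re-sync it from the entities, generate a baseline
# migration from the now-empty diff target, then record it as applied.
# Destructive: run only against a disposable database.
reset_and_baseline() {
  npm run typeorm:drop &&
  npm run typeorm:sync &&
  npm run typeorm:migrate -- InitialSchema &&
  npm run typeorm:run
}
# Usage: reset_and_baseline
```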


VS Code builtin Node binary like Atom

Hi, in Atom we get builtin node and npm binaries, without installing them in the OS, that can be called from extensions.
These are the paths of the default binaries in Atom:
/usr/share/atom/resources/app/apm/bin/npm
/usr/share/atom/resources/app/apm/bin/npx
Does VS Code provide this?
We have a strict rule not to clutter our development computers with any unnecessary binaries (e.g. we use Docker for PHP, and virtualenv for Python and Node related projects).
In our case, for our web development (PHP and Python mainly), we use Babel to transpile our .js files on save. In Atom we use the language-babel extension, which lets us transpile using Atom's node but with the project's node_modules packages. So our Babel dependencies are installed inside the project, don't clutter the OS, and don't disturb other projects.
I checked the Babel extensions for VS Code and they don't have this capability. Any info on this, or is this not doable in VS Code?
You can install and run VS Code without Node, which does suggest it has Node baked in. However, at this time (June 2022) you can't debug or run JavaScript on Node until you install Node separately.
Separate installation decouples your editor dependencies from your code dependencies.
Your code can debug/run on a version of Node that differs from the version VS Code uses.
Those not using Node are not obliged to install the CLI toolchain for it.
VS Code supports a multitude of languages. Baking in their toolchains would make it enormous.
This is the closest setup in VS Code / Codium for a non-Node.js project:
The OS only needs Node.js installed for running Babel (npm and npx are not required).
All Node packages inside the project are installed using Docker.
Example using a Makefile:
SHELL := /bin/bash
THIS_FILE := $(lastword $(MAKEFILE_LIST))
PROJECT_NAME := "$$(basename `pwd` | cut -d. -f1 )"

yarn:
	docker run --rm -it \
		-v $$(pwd)/${PROJECT_NAME}:/srv/${PROJECT_NAME} \
		-w /srv/${PROJECT_NAME} \
		-e NODE_ENV=development \
		--user $$(id -u):$$(id -g) \
		node:lts-slim yarn $(filter-out $@,$(MAKECMDGOALS))
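One detail worth noting: PROJECT_NAME keeps only the part of the current directory's name before the first dot. A minimal sketch of that derivation (the function name is mine):

```shell
# Derive a project name the same way the Makefile does: basename of the
# path, truncated at the first dot.
project_name() {
  basename "$1" | cut -d. -f1
}
# e.g. a checkout at /home/me/mysite.example.com yields "mysite"
```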
Init and install Babel inside the project:
make yarn init
make yarn -- add -D @babel/cli
make yarn -- add -D @babel/core
make yarn -- add -D @babel/preset-react
make yarn -- add -D babel-preset-minify
Create ${workspaceRoot}/.babelrc
{
  "comments": false,
  "sourceMaps": true,
  "only": [
    "./asset/js/src"
  ],
  "presets": [
    "@babel/preset-react",
    ["minify", {
      "mangle": true,
      "builtIns": false,
      "keepClassName": true
    }]
  ]
}
Create ${workspaceRoot}/.vscode/tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Babel Watch",
      "type": "shell",
      "group": "none",
      "command": "${workspaceRoot}/node_modules/.bin/babel",
      "args": [
        "${workspaceRoot}/asset/js/src/",
        "--config-file=${workspaceRoot}/.babelrc",
        "--out-dir=${workspaceRoot}/asset/js/dist/",
        "--watch"
      ],
      "presentation": {
        "echo": true,
        "reveal": "always",
        "focus": false,
        "panel": "shared",
        "showReuseMessage": true,
        "clear": false
      },
      "runOptions": {
        "runOn": "folderOpen"
      }
    }
  ]
}
Then enable automatic tasks in the folder so the task runs on folder open:
Press F1
Search for Tasks: Manage Automatic Tasks in Folder
Select Tasks: Manage Automatic Tasks in Folder
Select Allow Automatic Tasks in Folder
On the next reopen of the folder/project, Babel will watch the folder specified in the task.
It looks like there is no way to allow automatic tasks in a folder from .vscode/settings.json, so this needs to be enabled once per project on each developer's machine.

ModuleNotFoundError: No module named 'azure.mgmt.network.version' [duplicate]

After upgrading Ansible to version 2.10.5 and Python to 3.8.10, my playbook.yml fails with this error:
ModuleNotFoundError: No module named 'azure.mgmt.monitor.version'
fatal: [localhost]: FAILED! => {"attempts": 1, "changed": false, "msg": "Failed to import the required Python library (ansible[azure] (azure >= 2.0.0)) on certrenewplay's Python /usr/bin/python3"
The module is there if I run python3 -c "import azure.mgmt.monitor", and if I run pip3 list I see it installed as azure-mgmt-monitor==2.0.0.
The exact part of the playbook code that is erroring is this:
- name: Create _acme-challenge record for zone "{{ env_name_dot }}"
  azure_rm_dnsrecordset:
    subscription_id: "{{ mgmt_subscription }}"
    client_id: "{{ mgmt_vault_azure_client_id }}"
    tenant: "{{ mgmt_vault_azure_tenant_id }}"
    secret: "{{ mgmt_vault_azure_client_secret }}"
    resource_group: "{{ mgmt_rg }}"
    relative_name: "_acme-challenge.{{ env_name }}"
    zone_name: "{{ dns_zone_name }}.{{ dns_zone_domain }}"
    record_type: TXT
    state: present
    records:
      - entry: "{{ cn_challenge_data }}"
    time_to_live: 60
  when: dns_zone_name != 'activedrop'
  register: add_record
  retries: 1
  delay: 10
  until: add_record is succeeded
I'm not sure what I'm doing wrong. Can anyone advise or help me with this, please?
Thanks
This same issue happened to me because Ansible now ships with its own version of the Azure collection and it was conflicting with the version I had manually installed in my own playbook using the "ansible-galaxy collection" command.
What I suggest you do is only use the version that ships with Ansible and then install its requirements like so:
pip install -r /usr/lib/python3/dist-packages/ansible_collections/azure/azcollection/requirements-azure.txt
It is easier to set up correctly on a freshly installed system (e.g. in Docker) than it is to fix a broken system.
I think you did not follow the instructions about installing the azure collection from https://github.com/ansible-collections/azure
Installing the collection itself does not install its Python dependencies; these are installed with pip, and you need to be sure you install them into the same Python (v)env where Ansible is installed, or Ansible will give you the error you have seen when it tries to load the module.
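A sketch of that "same env" point, wrapped in a function so the requirements file that ships with the collection is installed by the interpreter that runs Ansible. The function name is mine, and the default path is the Debian/Ubuntu location quoted in the earlier answer; yours may differ:

```shell
# Install the azure collection's Python deps into the same interpreter
# that runs Ansible. Pass a path to override the assumed default.
install_azure_reqs() {
  reqs="${1:-/usr/lib/python3/dist-packages/ansible_collections/azure/azcollection/requirements-azure.txt}"
  pip install -r "$reqs"
}
# Usage: install_azure_reqs
#        install_azure_reqs ~/.ansible/collections/ansible_collections/azure/azcollection/requirements-azure.txt
```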
Unfortunately, the azure-mgmt-monitor package is bugged, even on 3.0.0: it does not properly create a version submodule. I haven't been able to track down exactly where in the code it's broken, but there is a direct import of that submodule in the Ansible Galaxy module, which causes it to fail. For now, you should use the Azure CLI and forget about azure_rm.

Symfony 3.4 LTS, PostgreSQL 10, DoctrineDBAL could not find driver

I use a MAMP server with PHP 7.1.5, Symfony Framework 3.4.2, and PostgreSQL 10 for a new project. I also use PostGIS to store spatial data with the geometry data type in PostgreSQL. Therefore I installed and used: "jsor/doctrine-postgis": "^1.5"
The following is part of my composer.json:
"require": {
    "php": ">=5.5.9",
    "doctrine/doctrine-bundle": "^1.6",
    "doctrine/orm": "^2.5",
    "incenteev/composer-parameter-handler": "^2.0",
    "jsor/doctrine-postgis": "^1.5",
    "sensio/distribution-bundle": "^5.0.19",
    "sensio/framework-extra-bundle": "^5.0.0",
    "symfony/monolog-bundle": "^3.1.0",
    "symfony/polyfill-apcu": "^1.0",
    "symfony/swiftmailer-bundle": "^2.6.4",
    "symfony/symfony": "3.4.*",
    "twig/twig": "^1.0||^2.0"
}
The parameters.yml:
server_ver: 10.0
database_driver: pdo_pgsql
database_host: localhost
database_port: 5432
database_name: qtqg
database_path: ~
database_user: postgres
database_password: admin
The config.yml:
doctrine:
dbal:
default_connection: default
connections:
default:
mapping_types:
_text: string
server_version: %server_ver%
driver: %database_driver%
host: %database_host%
port: %database_port%
dbname: %database_name%
path: %database_path%
user: %database_user%
password: %database_password%
persistent: true
charset: UTF8
logging: %kernel.debug%
profiling: %kernel.debug%
# if using pdo_sqlite as your database driver:
# 1. add the path in parameters.yml
# e.g. database_path: '%kernel.project_dir%/var/data/data.sqlite'
# 2. Uncomment database_path in parameters.yml.dist
# 3. Uncomment next line:
#path: '%database_path%'
types:
geography:
class: 'Jsor\Doctrine\PostGIS\Types\GeographyType'
commented: false
geometry:
class: 'Jsor\Doctrine\PostGIS\Types\GeometryType'
commented: false
raster:
class: 'Jsor\Doctrine\PostGIS\Types\RasterType'
commented: false
Everything works well; I can use the console to generate entities:
php bin/console doctrine:mapping:import --force AppBundle annotation
And after that, generate the CRUD:
php bin/console generate:doctrine:crud AppBundleMonitoringAdminBundle:coquan -n --format=annotation --with-write
The running PHP is 7.1.5, and I also checked the php.ini file in C:\Windows and the loaded php.ini in the MAMP server. The php -m command shows:
PDO
pdo_mysql
PDO_ODBC
pdo_pgsql
pdo_sqlite
I don't think there is any problem with the database driver, because it can connect and generate entities, CRUD, etc.
But after generating the CRUD and trying to access the controller to list all items in one entity, I get this error:
An exception occurred in driver: could not find driver
One of the error lines is:
AbstractPostgreSQLDriver->convertException('An exception occurred in driver: could not find driver', object(PDOException)) in vendor\doctrine\dbal\lib\Doctrine\DBAL\DBALException.php (line 176)
I tried many things, including switching from MAMP to XAMPP and WampServer, and every recommendation about how to configure DBAL, but the error is still there.
Could anyone help me?
The problem is with PostgreSQL 10.
Inside vendor/doctrine/dbal/lib/Doctrine/DBAL/Schema/PostgreSqlSchemaManager.php, at line 292, change
$data = $this->_conn->fetchAll('SELECT min_value, increment_by FROM ' . $this->_platform->quoteIdentifier($sequenceName));
to
$version = floatval($this->_conn->getWrappedConnection()->getServerVersion());
if ($version >= 10) {
    $data = $this->_conn->fetchAll('SELECT min_value, increment_by FROM pg_sequences WHERE schemaname = \'public\' AND sequencename = ' . $this->_conn->quote($sequenceName));
} else {
    $data = $this->_conn->fetchAll('SELECT min_value, increment_by FROM ' . $this->_platform->quoteIdentifier($sequenceName));
}
I had the same issue with Postgres 9.6.4 and WAMP on Windows 10.
phpinfo() would display pdo_pgsql as loaded, and it worked fine in another PHP application, but php -m from Git Bash would not show pdo_pgsql.
In the end, I had to edit the php.ini that was actually loaded and uncomment these two lines:
extension=php_pdo_pgsql.dll
extension=php_pgsql.dll
Restart Apache and now it works!
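The symptom above (phpinfo() showing the extension while php -m does not) usually means the CLI and the web server load different php.ini files. A small check, sketched as a function (the function name is mine):

```shell
# Report which ini the CLI loads and whether pdo_pgsql is visible to it.
# The web server's view is in phpinfo() under "Loaded Configuration File".
check_pdo_pgsql() {
  php -r 'echo php_ini_loaded_file(), PHP_EOL;'
  php -m | grep -i pdo_pgsql || echo "pdo_pgsql NOT loaded for the CLI"
}
```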

Heroku Review Apps: copy DB to review app

I'm trying to fully automate Heroku's Review Apps (beta) for an app. Heroku wants us to use db/seeds.rb to seed the freshly spun-up instance's DB.
We don't have a db/seeds.rb with this app. We'd like to set up a script to copy the existing DB from the current parent (staging) and use that as the DB for the new app under review.
This I can do manually:
heroku pg:copy myapp::DATABASE_URL DATABASE_URL --app myapp-pr-1384 --confirm myapp-pr-1384
But I can't figure out how to get the app name that Heroku creates into the postdeploy script.
Anyone tried this and know how it might be automated?
I ran into this same issue and here is how I solved it.
Set up the database URL you want to copy from as an environment variable on the base app for the pipeline. In my case this is STAGING_DATABASE_URL. The URL format is postgresql://username:password@host:port/db_name.
In your app.json file, make sure to copy that variable over.
In your app.json, provision a new database, which will set the DATABASE_URL environment variable.
Use the following script to copy over the database: pg_dump $STAGING_DATABASE_URL | psql $DATABASE_URL
Here is my app.json file for reference:
{
    "name": "app-name",
    "scripts": {
        "postdeploy": "pg_dump $STAGING_DATABASE_URL | psql $DATABASE_URL && bundle exec rake db:migrate"
    },
    "env": {
        "STAGING_DATABASE_URL": {
            "required": true
        },
        "HEROKU_APP_NAME": {
            "required": true
        }
    },
    "formation": {
        "web": {
            "quantity": 1,
            "size": "hobby"
        },
        "resque": {
            "quantity": 1,
            "size": "hobby"
        },
        "scheduler": {
            "quantity": 1,
            "size": "hobby"
        }
    },
    "addons": [
        "heroku-postgresql:hobby-basic",
        "papertrail",
        "rediscloud"
    ],
    "buildpacks": [
        {
            "url": "heroku/ruby"
        }
    ]
}
An alternative is to share the database between review apps. You can inherit DATABASE_URL in your app.json file.
PS: This is enough for my case, which is a small team; keep in mind that it may not be enough for yours. Also, I keep my production and test (or staging, or dev, whatever you call it) data separated.
Alternatively:
Another solution using pg_restore, thanks to
https://gist.github.com/Kalagan/1adf39ffa15ae7a125d02e86ede04b6f
{
    "scripts": {
        "postdeploy": "pg_dump -Fc $DATABASE_URL_TO_COPY | pg_restore --clean --no-owner -n public -d $DATABASE_URL && bundle exec rails db:migrate"
    }
}
I ran into problem after problem trying to get this to work. This postdeploy script finally worked for me:
pg_dump -cOx $STAGING_DATABASE_URL | psql $DATABASE_URL && bundle exec rails db:migrate
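Sketched as a guarded shell function (names mine): the :? expansions make the step fail fast with a clear message when either URL is missing, instead of piping an empty dump:

```shell
# Copy the staging database into the review app's database, then migrate.
# Fails immediately if either URL is unset or empty.
postdeploy() {
  : "${STAGING_DATABASE_URL:?set this on the pipeline's base app}"
  : "${DATABASE_URL:?provisioned by the review app's postgres addon}"
  pg_dump -cOx "$STAGING_DATABASE_URL" | psql "$DATABASE_URL" &&
    bundle exec rails db:migrate
}
```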
I see && bundle exec rails db:migrate as part of the postdeploy step in a lot of these responses.
Should that actually just be bundle exec rails db:migrate in the release section of app.json?

mongo --shell file.js and "use" statement

I can't find a solution to a simple question:
I have a file, text.js:
use somedb
db.somecollection.findOne()
When I run this file in cmd with input redirection:
mongo < text.js
it works properly.
But when I try
mongo text.js or mongo --shell test.js
I get this error message:
MongoDB shell version: 2.2.0
connecting to: test
type "help" for help
Wed Dec 05 16:05:21 SyntaxError: missing ; before statement pathToFile\test.js.js:1
failed to load: pathToFile\test.js.js
It fails on use somedb. If I remove this line, it runs without error, but the console output is empty.
Any idea what this is and how to fix it?
I'm trying to find a solution for this to create a build tool for Sublime Text 2.
The default build file was
{
    "cmd": ["mongo", "$file"]
}
but in this case I get the error above.
PS: right after posting this question I found a solution for Sublime Text 2:
{
    "selector": "source.js",
    "shell": true,
    "cmd": ["mongo < ${file}"]
}
PPS: right after posting this question I found a solution for Sublime Text 3:
{
    "selector": "source.js",
    "shell": true,
    "cmd": ["mongo", "<", "$file"]
}
This build tool works properly.
use dbname is a helper function in the interactive shell; it does not work when you use the mongo shell with a JS script file as you are.
There are multiple solutions to this. One is to explicitly pass the DB name, along with the host and port, to mongo like this:
mongo hostname:27017/dbname mongoscript.js // replace 27017 with your port number
A better way to do this would be to define the DB at the beginning of your script:
mydb=db.getSiblingDB("yourdbname");
mydb.collection.findOne();
etc.
The latter is preferable as it allows you to interact with multiple DBs in the same script if you need to do so.
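A sketch of the getSiblingDB approach as a script you generate and then run non-interactively (the file name and the db/collection names are placeholders):

```shell
# Write a script that selects the database itself, so no `use` helper
# is needed, then run it with a non-interactive mongo invocation.
cat > /tmp/find_one.js <<'EOF'
var mydb = db.getSiblingDB("somedb");
printjson(mydb.somecollection.findOne());
EOF
# mongo localhost:27017 /tmp/find_one.js
```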
You can specify the database while starting the mongo client:
mongo somedb text.js
To get the output from the client to stdout just use the printjson function in your script:
printjson(db.somecollection.findOne());
Mongo needs to be invoked from a shell to get that mode; with Ansible you would have this:
- name: mongo using different databases
  action: shell /usr/bin/mongo < text.js
Instead of this:
- name: mongo breaking
  command: /usr/bin/mongo < text.js
This is what finally worked for me on Windows + Sublime Text 2 + MongoDB 2.6.5
{
    "selector": "source.js",
    "shell": true,
    "cmd": ["mongo", "<", "$file"],
    "working_dir": "C:\\MongoDB\\bin"
}