Hi guys, I am using Superset with a Postgres database that has 6 million rows, and I am not able to load the complete data. I have changed SQL_MAX_ROW = 7000000, QUERY_SEARCH_LIMIT = 7000000, and DEFAULT_SQLLAB_LIMIT = 7000000 in config.py, but I am still unable to load the complete data. After changing the settings I also rebuilt the Docker setup using these steps:
docker-compose down
sudo docker-compose -f docker-compose-non-dev.yml pull
sudo docker-compose -f docker-compose-non-dev.yml up
Please let me know if I am missing any steps
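For reference, here is exactly what the overrides look like in my config.py (I am not sure whether ROW_LIMIT also needs to be raised for charts, so it is only a commented-out guess):

SQL_MAX_ROW = 7000000            # hard cap on rows SQL Lab is allowed to return
DEFAULT_SQLLAB_LIMIT = 7000000   # default LIMIT applied to SQL Lab queries
QUERY_SEARCH_LIMIT = 7000000
# ROW_LIMIT = 7000000            # guess: might also be needed for charts/dashboards?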
I'm currently in the process of switching my cloud server from Heroku to DigitalOcean. However, is there a way to migrate the database from the Heroku server to the DigitalOcean one? I use PostgreSQL for my database.
I hope you already got a solution, but in case you didn’t, I’ll provide a simple guide on how I did it. I am going to assume that you have already created a Postgres database on DigitalOcean. You also need to navigate to your project directory and log in to Heroku using the Heroku CLI, and you need to have PostgreSQL installed or at least a psql client; installing PostgreSQL will do, as it comes with psql.
Step 1: Create a backup and download the backup from heroku postgres
heroku pg:backups:capture --app <app_name>
heroku pg:backups:download --app <app_name>
The first command creates a backup of your database and the second downloads it to your current directory; it's a .dump file. If you would like to read more, here is an article.
Step 2: Connect to your remote (digital ocean’s) database using psql
Before you can do this, you need to add the machine you are connecting from to the database's list of trusted sources. If you don't, you'll get a Connection Timed Out error, because the database's firewall does not allow connections from untrusted machines (for security reasons).
Step 3: Import the Database
pg_restore -d "postgresql://<database_username>:<database_password>@<host>:<port>/<database>?sslmode=require" --jobs 4 -c "/path/to/dump_file.dump"
This will import your database from the dump file. Just substitute the placeholders with the connection parameters you get from your dashboard. If you would like to read more, here is another article for this step.
One more thing to be clear about: sometimes you will see harmless error messages when running this command, but it will push through anyway. To learn more about pg_restore, read this article.
And that’s it, your database has been migrated. How can you confirm it worked? Well, as for me, I used pgAdmin to connect to the remote database and I saw the tables and data as expected.
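If you would rather confirm from code instead of pgAdmin, a minimal check with psycopg2 (assuming the psycopg2 package is installed; the placeholders are the same as in the pg_restore command above) could look like this:

import psycopg2

# Same placeholder connection parameters as in the pg_restore command above.
conn = psycopg2.connect(
    "postgresql://<database_username>:<database_password>@<host>:<port>/<database>?sslmode=require"
)
with conn.cursor() as cur:
    cur.execute(
        "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
    )
    for (table_name,) in cur.fetchall():
        print(table_name)  # every restored table should show up here
conn.close()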
Hope this helps anyone with the same problem :)
I'm trying to connect the Tarantool Docker image to a local PostgreSQL instance, to replicate some test data, and I ran into the following problems:
1. It seems there is no CLI (except the Tarantool console) to check which files are in place (exec bin/bash fails).
2. pg = require('pg') leads to an error: "init.lua:4: module 'pg.driver' not found", despite the presence of the pg module in the Docker description.
3. I have doubts about how to efficiently replicate 4 tables, and the relations between them, to the container from the outside Postgres.
Does anyone know sources to dig in and find solutions to those problems? Any direction would be greatly appreciated.
1. Use docker exec -ti tnt_container sh to get a shell inside the container.
2. The pg driver appears to be missing from the current image, which causes the issue. You should find an older base image or build it yourself.
3. This is more of a PostgreSQL question. You may pass batches of data to pg functions, or use an intermediate application to transfer the data via COPY; it looks like Tarantool's pg driver does not support COPY.
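A rough sketch of such an intermediate script in Python, assuming the psycopg2 and tarantool packages and hypothetical table, column, and space names (it reads with a server-side cursor; COPY ... TO STDOUT via copy_expert would be a faster alternative):

import psycopg2
import tarantool

# Connection parameters are placeholders; adjust to your setup.
pg_conn = psycopg2.connect("dbname=testdb user=postgres host=localhost")
tnt = tarantool.connect("localhost", 3301)  # the container's mapped binary-protocol port

# A named (server-side) cursor streams rows instead of loading everything at once.
with pg_conn.cursor(name="transfer") as cur:
    cur.itersize = 1000  # fetch from Postgres in batches of 1000 rows
    cur.execute("SELECT id, name FROM source_table")  # hypothetical table/columns
    for row in cur:
        # Space name and tuple layout are assumptions; match your Tarantool schema.
        tnt.insert("target_space", tuple(row))

pg_conn.close()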
I'd like to create a docker image with data.
My attempt is as follows:
FROM postgres
COPY dump_testdb /image
RUN pg_restore /image
RUN rm -rf /image
Then I run docker build -t testdb . and docker run -d testdb.
When I connect to the container I don't see the restored db. How do I get an image with the restored data?
COPY the dump file, with a .sql extension, into /docker-entrypoint-initdb.d/. Do not try to RUN anything. The postgres image will run everything in that directory the first time a container is started on a particular data directory.
You generally can’t RUN commands that interact with the database in a Dockerfile because the database won’t be running at that point. (There is a script in the base image that goes through some complicated gymnastics to do the first-time setup.) In any case, because of the mechanics of Docker’s volume system, you can’t create an image that contains prepopulated database data; you have to use a mechanism like this to cause the image to restore a dump or otherwise set itself up at first start.
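A minimal sketch of that approach, assuming the dump is in plain SQL format (e.g. produced by pg_dump without the custom format; a custom-format dump would need a small wrapper script calling pg_restore instead):

FROM postgres
# The official image's entrypoint runs any *.sql, *.sql.gz, or *.sh files in this
# directory the first time a container starts with an empty data directory.
COPY dump_testdb.sql /docker-entrypoint-initdb.d/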
Currently we have a single all-in-one Docker container for our production GitLab, which uses the bundled Postgres and Redis, so everything runs in the same container. We want to use an external Postgres DB and a separate container for Redis as well, to follow production standards.
How can we migrate from the internal Postgres DB to an external Postgres DB? If anyone can provide the process and steps, that would be really helpful; we are new to this process.
Thank you everyone for your inputs,
PRS
You can follow the article "Migrating GitLab from internal to external PostgreSQL", which involves:
a database dump/reload, using pg_dumpall
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dumpall \
--username=gitlab-psql --host=/var/opt/gitlab/postgresql > /var/lib/pgsql/database.sql
sudo -u postgres psql -f /var/lib/pgsql/database.sql
Note: you can also use a backup of the database, but only if the external PostgreSQL version exactly matches the embedded one.
setting the gitlab database user's password
sudo -u postgres psql -c "ALTER USER gitlab ENCRYPTED PASSWORD '***' VALID UNTIL 'infinity';"
and modifying the GitLab configuration (gitlab.rb), that is:
# Disable the built-in Postgres
postgresql['enable'] = false
# Fill in the connection details
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
gitlab_rails['db_host'] = '127.0.0.1'
gitlab_rails['db_port'] = 5432
gitlab_rails['db_database'] = "gitlabhq_production"
gitlab_rails['db_username'] = 'gitlab'
gitlab_rails['db_password'] = '***'
apply your changes:
gitlab-ctl reconfigure && gitlab-ctl restart
@VonC
Hi, please let me know about the process I have followed below.
We currently have a single all-in-one Docker GitLab container that uses the bundled Postgres and Redis. To follow production standards, we are looking to maintain separate Postgres and Redis instances for our production GitLab. We already have data in the bundled DB, so we took a backup of the current GitLab (with bundled Postgres), which generated a .tar file. Next, we changed gitlab.rb to point to the external Postgres DB (same version); after that we could connect to GitLab but did not see any data, because the external DB was fresh and empty. Later we ran the restore against the external Postgres DB, and now we can see all the restored data. Can we do it this way? Our GitLab is now attached to the external Postgres and all the data is visible. Will this process work? Are there any downsides?
How is this process different from a pg_dump and import?
I have been following the Flask book by Miguel Grinberg and I am thinking about how to deploy my app and use the Postgres DB.
In my local production config I have to manually go in and run Role.insert_roles() before any roles can be assigned.
How do I do this on Heroku with Postgres? In fact, how do you connect to the Postgres DB at all? It is not really clear where in the code Postgres takes over using the environment variable:
https://github.com/miguelgrinberg/flasky/blob/master/config.py
I have a feeling my app is just running SQLite, and the book isn't really clear on how to switch over.
SOLUTION:
If you have deployed to Heroku and you have not set the environment variables:
DATABASE_URL (which config.py maps to SQLALCHEMY_DATABASE_URI)
FLASK_CONFIG = heroku
FLASKY_ADMIN = your email
and have not then run in your shell:
heroku run python manage.py shell
db.create_all()
db.session.commit()
Role.insert_roles()
then you are probably still running the development config against the SQLite database!
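For context, the relevant part of the linked config.py works roughly like this (paraphrased; check the repository for the exact code), which is where Postgres takes over via the environment variable:

import os

basedir = os.path.abspath(os.path.dirname(__file__))

class Config:  # stand-in for the book's base Config class
    pass

class ProductionConfig(Config):
    # Heroku's Postgres add-on sets DATABASE_URL; if it is missing, or if
    # FLASK_CONFIG does not select a production-style config, the app
    # falls back to the local SQLite file.
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'data.sqlite')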
If you want to connect manually, you could use the http://initd.org/psycopg/ library directly. Flask-SQLAlchemy itself uses psycopg under the hood, see here; in your case it may be easier to continue using SQLAlchemy. More information here.
I had the same problem and solved it as follows:
1. Provision a database:
$: heroku addons:create heroku-postgresql:hobby-dev
......
2. Add the DATABASE_URL:
$: heroku config -s | grep HEROKU_POSTGRESQL
(this shows HEROKU_POSTGRESQL_RED_URL=.....)
$: heroku pg:promote HEROKU_POSTGRESQL_RED
Also install the psycopg2 package.
3. Change config.py:
config = {
.....
'default' : ProductionConfig
}