Data getting automatically deleted - mongodb

I am self-hosting a simple MongoDB database on Ubuntu 20.04, and the data I store is being deleted daily. I have no idea why.
I am relatively new to MongoDB, so if anyone has run into something similar, please let me know how I can fix this.

Related

AssertionError for feast materialize command

I am trying to configure Feast with PostgreSQLSource as both the online and offline source. I have created a table in the database and edited the feature_store.yaml file with the proper credentials. I can successfully generate feature views and deploy the infrastructure.
But when I run the feast materialize command, it throws an AssertionError for offline_stores. What might be the possible error/mistake, and how can I resolve it?
Thank you
I faced the same issue recently when I tried using PostgreSQL as the data source, online store, and offline store by editing the feature_store.yaml file. Postgres support as a registry, online store, and offline store is officially available as of Feast version 0.21.0.
So if you use an older version with PostgreSQL you will hit this issue; instead of editing feature_store.yaml by hand, use the postgres template when running feast init.
Refer:
https://github.com/feast-dev/feast/releases
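For reference, here is roughly what that looks like on the command line. This is a minimal sketch assuming Feast 0.21 or later with the postgres extra installed; the repository name and the materialize timestamps are just placeholders.
    # Minimal sketch, assuming Feast >= 0.21, where the postgres template ships with feast init.
    pip install "feast[postgres]"
    feast init my_feature_repo -t postgres   # generates a feature_store.yaml wired for Postgres
    cd my_feature_repo                       # feature_store.yaml may sit in a subdirectory, depending on the version
    feast apply                              # register feature views and deploy infrastructure
    feast materialize 2022-01-01T00:00:00 2022-06-01T00:00:00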

Restoring data to couchbase due to fresh couchbase installation

We have a Couchbase server that somehow got a fresh installation, and all the data that was there is lost. I had managed to take a backup of /opt/couchbase/var/lib/couchbase/data/ … Now when I try to copy the data back, it is not showing up in the Couchbase server. Any help is appreciated.
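One way to attempt the "copy the data back" step is to feed the backed-up data files into the running cluster with cbtransfer rather than copying them into place by hand. This is only a hedged sketch: the backup path, bucket name, and credentials are placeholders, and the couchstore-files source assumes the files are in the 2.x+ couchstore format.
    # Hedged sketch: load backed-up data files into a live cluster with cbtransfer.
    # The backup path, bucket name, and credentials are placeholders; couchstore-files
    # assumes the backup came from Couchbase 2.x or later.
    /opt/couchbase/bin/cbtransfer \
        couchstore-files:///backup/couchbase/data/default \
        http://127.0.0.1:8091 \
        -b default -B default -u Administrator -p password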

APIC Cannot GET /apim while updating schema

I am new to Stack Overflow, and I thought of sharing this regarding an IBM product called APIC.
I did the whole deployment correctly, as recommended by IBM, on an Ubuntu environment with MongoDB and MySQL, using an Azure virtual machine as the server. Whenever I try to update the schema of the database with the new models, I get an error saying:
Cannot GET /apim/dataSources/partials/dataSourceMigrate.html
Please help, or ask me anything in case you need more info, and tell me whether it's an error from Azure, IBM, or me.
Thanks
The exact same thing happened to me once, and after several weeks of back and forth with IBM it turned out to be a bug on their side, not on the cloud side or your side :)
Check it here: https://stackoverflow.com/a/40016171/4694311
In this case, go back to using StrongLoop until they get it fixed.
Note that this is a bug in the operating system itself; it works on iOS, but that would be useless in the cloud.

Couchbase - Run on bucket file to get all keys of specific node

I am currently working with Couchbase Server 1.8.1 and am in the process of upgrading to version 2.2.
We want to dump all the keys in Couchbase 1.8.1 to a text file, then run over this file and copy all the data to the new Couchbase 2.2.
The reason we chose this method instead of backup and restore is that our server does not respond well to backups and there is a risk of the server failing.
Can you help me figure out how to create this dump file from couchbase bucket files?
In addition to what Dave posted, I recommend reading this blog post: http://blog.couchbase.com/Couchbase-rolling-upgrades
Also, there are some unique considerations when upgrading from 1.8.1 to 2.x, so make sure you read the documentation Dave linked to.
Note you can upgrade an existing cluster online (without having to manually copy data to a new 2.2 cluster) - see http://docs.couchbase.com/couchbase-manual-2.5/cb-install/#upgrading
We use this script: CouchbaseDump
It works and helped us get the keys from the SQLite files.
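If you want to attempt something similar by hand, a rough sketch with the sqlite3 CLI is below. Note that the data directory, file glob, and the kv_* table and k column names are assumptions about the 1.8.x on-disk layout, so inspect your own files with .schema before relying on any of it.
    # Rough sketch only: verify the table/column names with `sqlite3 <file> .schema` first,
    # since the kv_* tables and the k column are assumptions about the 1.8.x layout.
    DATA_DIR=/opt/couchbase/var/lib/couchbase/data/default-data
    for f in "$DATA_DIR"/default*; do
        sqlite3 "$f" ".tables" | tr -s ' ' '\n' | grep '^kv' | while read -r t; do
            sqlite3 "$f" "SELECT k FROM $t;"
        done
    done > all_keys.txt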

Is there a way to upgrade from a Heroku shared database to a production grade database like Basic or Crane?

I have been using the Heroku shared database for a while now in an app, and I would like to upgrade to their new Basic/Crane/etc. production-grade databases. I don't, however, see a clear path to do that.
The options as I see them are:
1. Use db:pull/db:push to migrate the data/schema from the current production database to the new database: go into maintenance mode, move the data, then update the config to point to the new database. Not terrible, but I fear that the old schema from the shared database is not v9 compatible? Maybe I'm wrong. This could also take a long time, resulting in some major downtime. Not cool.
2. Use pg:backups to create a backup, and use heroku pg:restore to move the data over. Again, I fear the same schema issues, but this would be much faster.
3. Start with a Basic/Crane database and use their Followers concept. This feels like the right way to do it, but I don't know if it works with the shared databases. If it does, I do not understand how.
All of these options, I feel, require me to upgrade to Postgres v9 at some point, since all the new databases are v9. Is there a way to do this in the shared environment, and then maybe migrating will be less painful... maybe.
Any ideas or suggestions?
Their Migrating Between Databases document points out that your option (3), using Followers for a fast changeover, doesn't work when you're starting from a shared instance. What you should do here is use pg:backups to make a second database to test your application against, and confirm the new database behaves correctly there. If that works, turn on maintenance mode and do it again for the real migration.
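For reference, the capture-and-restore flow looks roughly like the sketch below. The app name, database color, plan name, and backup ids are placeholders, and the exact subcommand spellings vary between Heroku CLI versions (older toolbelts spelled these heroku pgbackups:capture / pgbackups:restore and addons:add).
    # Rough sketch of the copy-and-test flow; names, plan, and backup ids are placeholders,
    # and older Heroku CLIs spell these commands pgbackups:capture / pgbackups:restore.
    heroku addons:create heroku-postgresql:crane --app myapp        # provision the new database
    heroku pg:backups:capture --app myapp                           # prints a backup id, e.g. b101
    heroku pg:backups:restore b101 HEROKU_POSTGRESQL_CRIMSON --app myapp
    # test the app against the new database, then repeat under maintenance mode for real:
    heroku maintenance:on --app myapp
    heroku pg:backups:capture --app myapp
    heroku pg:backups:restore b102 HEROKU_POSTGRESQL_CRIMSON --app myapp
    heroku pg:promote HEROKU_POSTGRESQL_CRIMSON --app myapp
    heroku maintenance:off --app myapp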
There are very few schema-level incompatibility issues going from PostgreSQL 8.4 to 9.1. That migration doc warns about one of them, the bytea changes. If you're using bytea, you probably know it; it's not that common. Most of the other things that changed between those versions are also features you're unlikely to use, such as the PL/pgSQL modifications made in PostgreSQL 9.0. Again, if you're using PL/pgSQL server-side functions, you'll probably know that too, and most people on Heroku don't.
Don't guess whether your database is compatible; test. You can run multiple copies of PostgreSQL locally; they just need different data directories and ports configured. That way you can test against 9.1 and 8.4 at will.
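A minimal sketch of standing up a second local server on its own port is shown below; the binary paths assume a Debian/Ubuntu-style install, so adjust them for your system.
    # Minimal sketch: a second local PostgreSQL 9.1 server on its own port.
    # Binary paths assume a Debian/Ubuntu-style install; adjust for your system.
    /usr/lib/postgresql/9.1/bin/initdb -D ~/pg91-data
    /usr/lib/postgresql/9.1/bin/pg_ctl -D ~/pg91-data -o "-p 5433" -l ~/pg91.log start
    psql -p 5433 postgres    # connect to the 9.1 instance; the 8.4 one can keep port 5432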
You usually use the pg_dump from 9.1 to dump the 8.4 database - pg_dump knows about older versions, but not newer (obviously).
Unless you're doing something unusual with the database (presumably not, since you're on Heroku), there are unlikely to be any problems just dumping and restoring between versions.
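A hedged sketch of that dump-and-restore, with the hosts, ports, and database name as placeholders:
    # Sketch: dump the old 8.4 database using the newer 9.1 client tools, then restore into 9.1.
    # Hosts, ports, and the database name are placeholders.
    /usr/lib/postgresql/9.1/bin/pg_dump -Fc -h old-host -p 5432 -f mydb.dump mydb
    createdb -h new-host -p 5433 mydb
    /usr/lib/postgresql/9.1/bin/pg_restore -h new-host -p 5433 -d mydb --no-owner mydb.dump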