AssertionError for feast materialize command - PostgreSQL

I am trying to configure Feast with PostgreSQLSource as both the online and offline source. I have created a table in the database and edited the feature_store.yaml file with the proper credentials. I can successfully generate feature views and deploy the infrastructure.
But when I run the feast materialize command, it throws an AssertionError for offline_stores. What might be the possible error/mistake, and how can I resolve it?
Thank you

I faced the same issue recently when I tried using PostgreSQL as the data source, online store, and offline store by editing the feature_store.yaml file. Official support for Postgres as a registry, online store, and offline store arrived in Feast version 0.21.0.
So if you are on an older version, PostgreSQL will fail like this; upgrade, and instead of hand-editing feature_store.yaml, use the postgres template when running feast init.
Refer:
https://github.com/feast-dev/feast/releases
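For orientation, a feature_store.yaml for Feast 0.21+ with Postgres as the online and offline store looks roughly like the sketch below. The host, port, database, schema, and credential values are placeholders, and the exact field names should be confirmed against the repo that feast init generates with the postgres template (the registry can also be Postgres-backed in 0.21+, but a local file registry is kept here for brevity):

```yaml
project: my_feature_repo
registry: data/registry.db
provider: local
online_store:
  type: postgres
  host: localhost
  port: 5432
  database: feast
  db_schema: public
  user: feast
  password: feast
offline_store:
  type: postgres
  host: localhost
  port: 5432
  database: feast
  db_schema: public
  user: feast
  password: feast
```

If materialize still asserts on offline_stores after upgrading, comparing your file against the generated template is usually the quickest way to spot a mistyped or misplaced key.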

Related

What are the APIs Heroku uses to manage Postgres DBs?

How does the Heroku CLI tool manage DBs? What APIs does it use? The tasks I am trying to do from the app are: create/delete a Postgres DB, create a dump, and import a dump, using Python code rather than the console or CLI.
There is no publicly defined API for the Heroku Data products, unfortunately. That said, in my experience, the paths are fairly stable and can mostly be reasoned out. This CLI plugin might give you a head start on trying to work out the routes you'd need to hit in order to achieve your goals.
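For the create/delete part specifically, provisioning and destroying the heroku-postgresql add-on goes through the documented Platform API rather than the undocumented Data API, so that much can be done from Python today. A minimal sketch, assuming a Platform API token; the app name, plan name, and token are placeholders, and the dump/restore operations remain the part you would have to reverse-engineer from the CLI plugin:

```python
import requests

HEROKU_API = "https://api.heroku.com"
HEADERS = {
    "Accept": "application/vnd.heroku+json; version=3",  # Platform API v3
    "Authorization": "Bearer YOUR_HEROKU_API_TOKEN",      # placeholder token
    "Content-Type": "application/json",
}

def create_postgres(app_name: str, plan: str = "heroku-postgresql:mini"):
    """Provision a Heroku Postgres add-on on the given app (plan name is a placeholder)."""
    resp = requests.post(
        f"{HEROKU_API}/apps/{app_name}/addons",
        json={"plan": plan},
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()  # includes the add-on id and the config vars it sets

def delete_postgres(app_name: str, addon_id: str):
    """Destroy the add-on, which deletes its database."""
    resp = requests.delete(
        f"{HEROKU_API}/apps/{app_name}/addons/{addon_id}",
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()
```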

How to share a PostgreSQL database?

I'm currently working on a project with some colleagues, and one of them linked a database created on her PostgreSQL server to our Visual Studio project, but we don't know how she can share the database with the rest of us, or how we can modify the database without having it ourselves.
We're using PostgreSQL 14.
One option is to create your database in a cloud provider such as AWS.
You can take a look at: https://aws.amazon.com/rds/postgresql/
This way, all of you will be able to access the same database.
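Once the database lives on a shared host such as RDS, every colleague connects with the same host and credentials instead of a local server. A minimal sketch using psycopg2; the endpoint, database name, and credentials are placeholders for your own instance:

```python
import psycopg2

# Connection details for the shared instance -- placeholders for your RDS endpoint
conn = psycopg2.connect(
    host="my-project-db.abc123xyz.eu-west-1.rds.amazonaws.com",
    port=5432,
    dbname="projectdb",
    user="project_user",
    password="project_password",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # everyone connects to the same PostgreSQL 14 server

conn.close()
```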

How do I choose a local MySQL version that will be compatible with our future switch to CloudSQL?

For simplicity and cost, we are starting our project using local MySQL running on our GCE instances. We will want to switch to CloudSQL some months down the road.
Any advice on avoiding MySQL version conflicts/challenges would be much appreciated!
The majority of the documentation is for MySQL 5.7, so my advice is to use this version and to review the Migrating to Cloud SQL concept page, a guide that walks you through how to migrate safely, which migration methods exist, and how to prepare your MySQL database.
Another piece of advice: work through the Migrating MySQL to Cloud SQL using an automated workflow tutorial. That guide notes that any MySQL database running version 5.6 or 5.7 can take advantage of the Cloud SQL automated migration workflow, and it shows how the workflow operates and how to deploy a source MySQL database on Compute Engine. The Cloud SQL documentation page lists more tutorials if you want to learn more.
Finally, I suggest you check the Cloud SQL pricing to stay aware of the billing, and also that you create a workspace; with it you get more transparency and more control over your billing charges by identifying and tuning the services that are producing the most log entries.
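As a quick sanity check before planning the migration, you can confirm that the MySQL running on your GCE instance really is on 5.6 or 5.7, the versions the automated workflow covers. A small sketch using pymysql; the host and credentials are placeholders:

```python
import pymysql

# Connect to the local MySQL running on the GCE instance -- placeholders
conn = pymysql.connect(host="127.0.0.1", user="root", password="secret")

with conn.cursor() as cur:
    cur.execute("SELECT VERSION();")
    version = cur.fetchone()[0]  # e.g. '5.7.36'

major_minor = ".".join(version.split(".")[:2])
if major_minor in ("5.6", "5.7"):
    print(f"MySQL {version}: eligible for the Cloud SQL automated migration workflow")
else:
    print(f"MySQL {version}: plan for a different upgrade or dump/restore path")

conn.close()
```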
I hope all the information I'm giving you is helpful.

Heroku Release Phase - PG Backup Before Migrations

With the Heroku Release Phase, is it possible to run pg:backups:capture? Or is there another way to create a database backup before running migrations?
Technically this is possible, but you must have the Heroku CLI installed on your dyno and you need to authenticate it somehow. So one solution is to find or write a buildpack that installs the CLI tool, and to add a config variable with the authentication credentials.
Another approach is to use a library such as https://github.com/kjohnston/pgbackups-archive. There is a problem though: it uses the old Heroku API, which will be disabled in April 2017. I don't know of any similar library that uses the new API.
If you just want to back up your data and don't necessarily need pg:backups:capture, you can write a simple script that runs pg_dump against DATABASE_URL with some additional options and uploads the dump file to S3 or any other location. I think this is the easiest solution. Then just add this script as the release command in your Procfile; a sketch of this follows below.
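A minimal sketch of that idea in Python, assuming the pg_dump binary and boto3 are available on the dyno and that the bucket name and AWS credentials are provided via config vars (all names here are placeholders):

```python
import os
import subprocess
from datetime import datetime, timezone

import boto3

def backup_to_s3():
    database_url = os.environ["DATABASE_URL"]    # set by the Heroku Postgres add-on
    bucket = os.environ["BACKUP_S3_BUCKET"]      # placeholder config var
    key = f"backups/{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}.dump"
    dump_path = "/tmp/db.dump"

    # Custom-format dump (-Fc) so it can be restored selectively with pg_restore
    subprocess.run(
        ["pg_dump", "--format=custom", "--no-owner", "--file", dump_path, database_url],
        check=True,
    )

    # Upload the dump; AWS credentials come from the usual environment variables
    boto3.client("s3").upload_file(dump_path, bucket, key)
    print(f"uploaded backup to s3://{bucket}/{key}")

if __name__ == "__main__":
    backup_to_s3()
```

The Procfile release entry would then look something like release: python backup_before_migrate.py && python manage.py migrate, with the script name and migration command adjusted to your app and framework.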

Couchbase - Run over the bucket files to get all keys of a specific node

I am currently working with Couchbase Server 1.8.1 and am in the process of upgrading to version 2.2.
We want to dump all the keys of Couchbase 1.8.1 to a text file and then iterate over this file and copy all the data to the new Couchbase 2.2 cluster.
The reason we chose this method instead of backup and restore is that our server does not respond well to backups and there is a risk of the server failing.
Can you help me figure out how to create this dump file from the Couchbase bucket files?
In addition to what Dave posted, I recommend reading this blog post: http://blog.couchbase.com/Couchbase-rolling-upgrades
Also, there are some unique considerations when upgrading from 1.8.1 to 2.x, so make sure you read the documentation Dave linked to.
Note you can upgrade an existing cluster online (without having to manually copy data to a new 2.2 cluster) - see http://docs.couchbase.com/couchbase-manual-2.5/cb-install/#upgrading
We use this script: CouchbaseDump.
It works and helped us get the keys from the sqlite files.
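If you want to roll your own instead of (or alongside) CouchbaseDump, the general idea is the same: open each per-bucket sqlite file on the node and read the key column out of its kv tables. A rough sketch in Python; the data path, the kv_* table naming, and the k column are assumptions about the 1.8.x sqlite schema, so check them with .schema in the sqlite3 shell against your own files first:

```python
import glob
import sqlite3

# Placeholder path to the per-bucket sqlite data files on a 1.8.1 node
DATA_GLOB = "/opt/couchbase/var/lib/couchbase/data/default-data/default*.mb"

with open("keys.txt", "w") as out:
    for path in sorted(glob.glob(DATA_GLOB)):
        db = sqlite3.connect(path)
        try:
            # Assumption: each vbucket's data lives in a table named kv_<n> with key column 'k'
            tables = [row[0] for row in db.execute(
                "SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'kv_%'")]
            for table in tables:
                for (key,) in db.execute(f"SELECT k FROM {table}"):
                    out.write(key.decode() if isinstance(key, bytes) else str(key))
                    out.write("\n")
        finally:
            db.close()
```

Running this against copies of the files (rather than the live data directory) avoids putting extra load on the struggling node while you build the key list.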