Anything similar to MySQL Proxy for PostgreSQL? [closed]

I am looking for something similar to MySQL Proxy. The purpose is to modify incoming queries on the server. I am not looking for alternative ways to achieve the same thing. My best guess at the moment is to modify GridSQL, but that adds complexity and takes time. I asked this question before in a vastly different way and got no relevant results, so I deleted that question and posted this one.
Edit: It is important that the client can continue to utilize the PostgreSQL protocol, so the package I am looking for needs to communicate using it.
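In other words, what's wanted is a man-in-the-middle that still speaks the PostgreSQL wire protocol to the client. A minimal Go sketch of just that plumbing (addresses and ports here are placeholders) is below; an actual rewriter would additionally have to decode protocol messages and rewrite Query ('Q') payloads rather than copy bytes blindly.

    package main

    import (
        "io"
        "log"
        "net"
    )

    // Bare TCP pass-through between a PostgreSQL client and server.
    // Addresses are assumptions; a real rewriter must parse the wire
    // protocol and modify query messages instead of blindly copying.
    func main() {
        ln, err := net.Listen("tcp", ":6432") // port the clients connect to
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(c net.Conn) {
                defer c.Close()
                server, err := net.Dial("tcp", "localhost:5432") // real backend
                if err != nil {
                    log.Println(err)
                    return
                }
                defer server.Close()
                go io.Copy(c, server) // server -> client, untouched
                // client -> server: this is where incoming queries
                // would be intercepted and rewritten.
                io.Copy(server, c)
            }(client)
        }
    }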

You might take a look at sqlrelay, which can route and filter queries:
http://sqlrelay.sourceforge.net/sqlrelay/router.html
If you want to rewrite the queries, though, I think sqlrelay falls short.
You might otherwise look into PostgreSQL's rules, which can be used to substitute or rewrite queries:
http://www.postgresql.org/docs/8.4/interactive/rules.html
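As a rough illustration of what a rule can do (the driver, connection string, and table/view names here are all made up for the example, not from the question): a rule can transparently turn one statement into another, e.g. redirecting inserts against a view into a real table.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // PostgreSQL driver (one common choice)
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Rewrite INSERTs on the view recent_orders into the orders table.
        // View and table are hypothetical and assumed to already exist.
        _, err = db.Exec(`
            CREATE RULE recent_orders_ins AS
            ON INSERT TO recent_orders
            DO INSTEAD
            INSERT INTO orders VALUES (NEW.id, NEW.placed_at, NEW.total)`)
        if err != nil {
            log.Fatal(err)
        }
    }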

You can refer to the following postgresql-async driver project:
https://github.com/mauricio/postgresql-async

Related

Switch from RabbitMQ to Kafka [closed]

How easy is it to switch from RabbitMQ to Kafka in an existing solution, replacing one implementation (RabbitMQ) with the other (Kafka)? We are about to use RabbitMQ in our implementation, but we want to know whether it would be possible to replace it with Kafka in the future.
It is possible, and I've seen people do it - but it is a big project.
Not only are the APIs different, the semantics are different as well. So you need to rethink your data model, scaling model, error handling, etc. And then there's testing.
If you don't have tons of code to update, the code is localized, and you have both RabbitMQ and Kafka experts on the team, you may be able to get it done in a month or two.
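To make the "APIs are different" point concrete, here is roughly what publishing a single message looks like with two common Go clients (broker addresses, queue and topic names are placeholders; streadway/amqp for RabbitMQ, segmentio/kafka-go for Kafka):

    package main

    import (
        "context"
        "log"

        kafka "github.com/segmentio/kafka-go"
        amqp "github.com/streadway/amqp"
    )

    func publishRabbit() {
        // RabbitMQ: connections, channels, exchanges, routing keys;
        // messages are acked and removed once consumed.
        conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        ch, err := conn.Channel()
        if err != nil {
            log.Fatal(err)
        }
        err = ch.Publish("", "events", false, false,
            amqp.Publishing{ContentType: "text/plain", Body: []byte("hello")})
        if err != nil {
            log.Fatal(err)
        }
    }

    func publishKafka() {
        // Kafka: an append-only log with topics and partitions;
        // consumers track their own offsets instead of acking deletions.
        w := &kafka.Writer{Addr: kafka.TCP("localhost:9092"), Topic: "events"}
        defer w.Close()
        err := w.WriteMessages(context.Background(),
            kafka.Message{Value: []byte("hello")})
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        publishRabbit()
        publishKafka()
    }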

Kibana equivalent for MongoDB [closed]

We're fed up with the instability and unpredictability of the ELK stack but still in love with the Kibana dashboards.
Hence I'm looking for some potential migration paths. MongoDB looks very promising: a huge track record, lots of docs, the ability to cope with JSON easily, etc.
Is there some equivalent to Kibana working on top of MongoDB? Some web app which lets you easily run search queries over indexed data, make them into dashboards, add nice maps and diagrams etc.
I've looked into https://docs.mongodb.org/ecosystem/tools/administration-interfaces/ but those seem to be more about managing MongoDB itself than about playing with the data in it.
You could have a look at MongoDB Compass.
If you want more, the new MongoDB 3.2 has features to connect to any BI tool, like Talend.

What package to use for database migrations in Go? [closed]

I am fairly new to Go and am trying to identify the best tools for the job. Currently I am evaluating the following packages:
https://github.com/mattes/migrate
https://github.com/DavidHuie/gomigrate
https://bitbucket.org/liamstask/goose/
I was wondering if anyone had any experience with these (or other packages) and could provide some comments.
We use mattes/migrate at work and are very happy with it. It works with plain SQL files, handles file naming by itself, and can easily be automated via its CLI. It doesn't do anything Go-specific.
With gomigrate you need to create the files yourself and write the code that executes the migrations.
Take a look at https://github.com/pressly/goose, a maintained fork of https://bitbucket.org/liamstask/goose/.
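For what it's worth, mattes/migrate itself later moved to the golang-migrate organization. Applying pending migrations from Go code looks roughly like this (the migrations path and connection string are placeholders):

    package main

    import (
        "log"

        "github.com/golang-migrate/migrate/v4"
        _ "github.com/golang-migrate/migrate/v4/database/postgres" // db driver
        _ "github.com/golang-migrate/migrate/v4/source/file"       // file:// source
    )

    func main() {
        // Migrations live as paired plain-SQL files, e.g.
        //   migrations/0001_create_users.up.sql
        //   migrations/0001_create_users.down.sql
        m, err := migrate.New(
            "file://migrations",
            "postgres://user:pass@localhost:5432/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        // Apply everything not yet applied; ErrNoChange means "already up to date".
        if err := m.Up(); err != nil && err != migrate.ErrNoChange {
            log.Fatal(err)
        }
    }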

Best DB for datalogging [closed]

I have a lot of logged data stored in a database by a data logger. Basically I have a lot of rows with a timestamp and some values. I want to store this data in a database that performs well and can scale across multiple nodes for fault tolerance (and to balance requests). Typically I use MySQL, but I find it is not simple to scale for this type of application, so this time I want to consider other databases.
So: Mongo, Redis, CouchDB?
Thanks all.
This is a hard question to answer and not something we can really give answers to on SO.
Redis is quick for getting the data in, but you cannot query on the values of the keys, so searching would be harder.
MongoDB & CouchDB would both work well as they are document stores and can be used to store any format for the logs.
There are other options. I know Cassandra is used a lot for this task, but there is also ElasticSearch, as in the ELK stack (ElasticSearch, Logstash, Kibana), which is a great solution for central logging.
In the end it probably comes down to what you want to do with the data.
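To give a feel for the document-store route (the database, collection, and field names here are invented for the example), storing one timestamped reading per document with the official Go MongoDB driver is about this much code:

    package main

    import (
        "context"
        "log"
        "time"

        "go.mongodb.org/mongo-driver/bson"
        "go.mongodb.org/mongo-driver/mongo"
        "go.mongodb.org/mongo-driver/mongo/options"
    )

    func main() {
        ctx := context.Background()
        client, err := mongo.Connect(ctx,
            options.Client().ApplyURI("mongodb://localhost:27017"))
        if err != nil {
            log.Fatal(err)
        }
        defer client.Disconnect(ctx)

        // One document per reading; an index on "ts" keeps range queries fast.
        readings := client.Database("datalog").Collection("readings")
        _, err = readings.InsertOne(ctx, bson.M{
            "ts":     time.Now(),
            "sensor": "temp1",
            "value":  21.5,
        })
        if err != nil {
            log.Fatal(err)
        }
    }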

Duplicates in a QIF File? [closed]

Does anyone have a good way of deleting duplicate transactions (same date, amount, biller, etc.) in a QIF file? I looked at Perl's Finance::QIF, but it does not appear to have a delete-a-record function.
Alternatively, does someone have a good QIF --> CSV converter?
Although I am looking at a Perl solution, I am open to other ideas.
Finance::QIF doesn't really need a delete() method (although it would be handy), because you can access all the transactions as a list and manipulate it yourself. The source code is very straightforward; it would be pretty easy to add an as_csv() method to Finance::QIF::Transaction (the module used to store one transaction's data), after which you can apply your own sort-for-uniqueness method (e.g. plain old "sort -u").
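The question leans Perl, but the deduplication itself is small in any language. Here is a Go sketch of the same idea (the input file name and the choice of key fields are assumptions): read each record up to its '^' terminator, key it on date, amount, and payee, and emit only the first occurrence of each key.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // Key a QIF record on its D (date), T (amount) and P (payee) lines.
    func key(record []string) string {
        var d, t, p string
        for _, line := range record {
            if line == "" {
                continue
            }
            switch line[0] {
            case 'D':
                d = line[1:]
            case 'T':
                t = line[1:]
            case 'P':
                p = line[1:]
            }
        }
        return d + "|" + t + "|" + p
    }

    func main() {
        f, err := os.Open("register.qif") // input file name is a placeholder
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        seen := make(map[string]bool)
        var record []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "!") { // header, e.g. !Type:Bank
                fmt.Println(line)
                continue
            }
            record = append(record, line)
            if line == "^" { // end of one transaction
                if k := key(record); !seen[k] {
                    seen[k] = true
                    fmt.Println(strings.Join(record, "\n"))
                }
                record = record[:0]
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
    }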