Number of Databases based on Tier [closed] - google-cloud-sql

Is there a document or reference we can look at to gauge the recommended maximum number of databases per instance based on the tier type?
e.g.:
db-n1-standard-1
1 vCPU, 3.75 GB
or
db-n1-standard-2
2 vCPU, 7.5 GB

Number of databases is not a good indicator for choosing your tier.
You can have a small instance with 100 databases that see little activity, or a single large database that needs a lot of memory, and so forth.
You need to take into consideration how big you expect each database to be, how much data you expect to be kept in the cache, how many read/write queries you expect to be handling and so on.
The usual recommendation is to run load tests using your expected workloads and determine the machine requirements based on that.
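As a rough illustration of that approach, here is a minimal Python load-test sketch, assuming the pymysql driver and a hypothetical MySQL Cloud SQL instance; the host, credentials, table, and query are placeholders you would replace with your real workload.

```python
# Minimal load-test sketch (assumptions: pymysql driver, a reachable MySQL
# Cloud SQL instance, and a placeholder query; swap in your real workload).
import time
import threading
import pymysql

HOST, USER, PASSWORD, DB = "10.0.0.3", "app", "secret", "appdb"  # hypothetical
THREADS, DURATION_S = 8, 30

def worker(stats):
    conn = pymysql.connect(host=HOST, user=USER, password=PASSWORD, database=DB)
    cur = conn.cursor()
    deadline = time.time() + DURATION_S
    count = 0
    while time.time() < deadline:
        # Placeholder query: replace with the read/write mix you expect.
        cur.execute("SELECT COUNT(*) FROM orders WHERE status = %s", ("open",))
        cur.fetchall()
        count += 1
    conn.close()
    stats.append(count)

stats = []
threads = [threading.Thread(target=worker, args=(stats,)) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"~{sum(stats) / DURATION_S:.0f} queries/sec across {THREADS} threads")
```

Run a sketch like this against each candidate tier with realistic data volumes and concurrency, and watch CPU, memory, and cache hit rates rather than counting databases.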

Related

Switch from RabbitMQ to Kafka [closed]

How easy is it to switch from RabbitMQ to Kafka in an existing solution, i.e. to replace one implementation (RabbitMQ) with the other (Kafka)? We are about to use RabbitMQ in our implementation, but we want to know whether it would be possible to replace it with Kafka in the future.
It is possible, and I've seen people do it, but it is a big project.
Not only are the APIs different, the semantics are different too. So you need to rethink your data model, scaling model, error handling, etc. And then there's testing.
If you don't have tons of code to update, the code is localized, and you have both RabbitMQ and Kafka experts on the team, you may be able to get it done in a month or two.
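To illustrate why keeping the code localized matters, here is a minimal sketch (not from the question) of a thin publisher abstraction with a RabbitMQ (pika) and a Kafka (kafka-python) implementation; the queue/topic names and broker addresses are hypothetical.

```python
# Sketch of hiding the broker behind a small interface so a later swap stays local.
# Assumes the pika and kafka-python packages; names and addresses are placeholders.
import pika
from kafka import KafkaProducer

class RabbitPublisher:
    def __init__(self, host="localhost", queue="events"):
        self._conn = pika.BlockingConnection(pika.ConnectionParameters(host))
        self._channel = self._conn.channel()
        self._channel.queue_declare(queue=queue, durable=True)
        self._queue = queue

    def publish(self, payload: bytes) -> None:
        # RabbitMQ routes via exchange/routing key; messages disappear once consumed and acked.
        self._channel.basic_publish(exchange="", routing_key=self._queue, body=payload)

class KafkaPublisher:
    def __init__(self, bootstrap="localhost:9092", topic="events"):
        self._producer = KafkaProducer(bootstrap_servers=bootstrap)
        self._topic = topic

    def publish(self, payload: bytes) -> None:
        # Kafka appends to a partitioned log and consumers track their own offsets,
        # one of the semantic differences you have to rethink when switching.
        self._producer.send(self._topic, payload)
        self._producer.flush()

# Application code depends only on .publish(), so swapping brokers becomes a
# wiring change rather than a rewrite of every call site.
publisher = RabbitPublisher()          # or KafkaPublisher()
publisher.publish(b'{"event": "user_signed_up"}')
```

An abstraction like this does not remove the semantic differences (ordering, retention, consumer offsets), but it keeps the surface area you have to rework small.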

Lightweight cryptographic hash function for motes [closed]

I need a cryptographic hash function for security and message integrity purposes on resource-constrained devices like the Tmote Sky or Z1 mote. I am using the Cooja simulator in Contiki.
I tried to use Quark and BLAKE2, but the build fails with "Section TEXT will not fit in region ROM". I am sure the rest of the code is minimal with respect to memory requirements, so I need an even more lightweight hash function.
Excessive memory use is causing the issue; in the case of Quark, for example, these lines demand more memory than can be allocated.
Do you know of any hash function lightweight enough to fit on the motes?

Kibana equivalent for MongoDB [closed]

We're fed up with the instability and unpredictability of the ELK stack, but we're still in love with the Kibana dashboards.
Hence I'm looking for potential migration paths. MongoDB looks very promising: a huge track record, lots of docs, the ability to cope with JSON easily, etc.
Is there an equivalent to Kibana that works on top of MongoDB? Some web app that lets you easily run search queries over indexed data, turn them into dashboards, add nice maps and diagrams, etc.?
I've looked at https://docs.mongodb.org/ecosystem/tools/administration-interfaces/ but this seems to be more about managing MongoDB itself rather than playing with the data in it.
You could have a look at MongoDB Compass.
If you want more, the new MongoDB 3.2 has features to connect to any BI tool, such as Talend.
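For quick, scripted exploration while evaluating tools, a minimal pymongo sketch like the following shows the kind of dashboard-style query you can run directly against MongoDB; the "logs" collection and field names are hypothetical.

```python
# Exploration sketch (assumptions: a local MongoDB and a hypothetical "logs"
# collection with "level" and "ts" fields; adjust names to your own data).
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["appdb"]["logs"]

since = datetime.utcnow() - timedelta(hours=24)

# Count log entries per level over the last 24 hours, the kind of number a
# Kibana-style dashboard widget would display.
pipeline = [
    {"$match": {"ts": {"$gte": since}}},
    {"$group": {"_id": "$level", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in logs.aggregate(pipeline):
    print(row["_id"], row["count"])
```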

Best DB for datalogging [closed]

I have a lot of logged data stored in a database by a data logger: basically many rows with a timestamp and some values. I want to store this data in a DB that performs well and can scale across multiple nodes to support fault tolerance (and balance requests). I typically use MySQL, but I find its scalability not simple for this type of application, so this time I want to consider other database options.
So: Mongo, Redis, Couchdb?
Thanks all.
This is a hard question to answer and not something we can really give answers to on SO.
Redis is quick for getting the data in, but you cannot query on the values of the keys, so searching would be harder.
MongoDB and CouchDB would both work well, as they are document stores and can store logs in any format.
There are other options too: I know Cassandra is used a lot for this task, and there is also Elasticsearch, as in the ELK stack (Elasticsearch, Logstash, Kibana), which is a great solution for centralized logging.
In the end it probably comes down to what you want to do with the data.
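As a rough illustration of the document-store approach, here is a minimal sketch that writes timestamped data-logger rows into MongoDB and queries a time window over an index; the collection and field names are hypothetical.

```python
# Sketch of storing data-logger rows in MongoDB and querying a time range.
# Assumptions: a local MongoDB and a hypothetical "readings" collection/fields.
from datetime import datetime, timedelta
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
readings = client["logger"]["readings"]

# Index on the timestamp so range queries stay fast as the collection grows.
readings.create_index([("ts", ASCENDING)])

readings.insert_many([
    {"ts": datetime.utcnow(), "sensor": "temp-1", "value": 21.7},
    {"ts": datetime.utcnow(), "sensor": "temp-2", "value": 19.3},
])

# Fetch the last hour of readings for one sensor, oldest first.
since = datetime.utcnow() - timedelta(hours=1)
cursor = readings.find({"sensor": "temp-1", "ts": {"$gte": since}}).sort("ts", ASCENDING)
for doc in cursor:
    print(doc["ts"], doc["value"])
```

The same insert/range-query pattern maps naturally onto Cassandra or Elasticsearch as well; which one fits depends on the queries and retention you need.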

Anything similar to MySQL Proxy for PostgreSQL? [closed]

I am looking for something similar to MySQL Proxy. The purpose is to modify incoming queries on the server. I am not looking for alternative ways to achieve the same. My best guess at the moment is to modify GridSQL, but this adds complexity and it takes time. I have asked this question before in a vastly different way and got no relevant results, so I deleted that question and added this one.
Edit: It is important that the client can continue to utilize the PostgreSQL protocol, so the package I am looking for needs to communicate using it.
You might take a look at SQL Relay, which has the ability to route and filter queries:
http://sqlrelay.sourceforge.net/sqlrelay/router.html
If you want to rewrite the queries, though, I think SQL Relay falls short.
You might otherwise look into PostgreSQL's rules, which can be used to substitute or rewrite queries:
http://www.postgresql.org/docs/8.4/interactive/rules.html
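As a rough sketch of the rules approach (not a protocol-level proxy), the following uses the psycopg2 driver to install a rule that rewrites inserts on a view into inserts on the base table; the table, view, and connection string are hypothetical.

```python
# Sketch of PostgreSQL's rule system rewriting a statement on the server side.
# Assumptions: psycopg2 driver, a reachable database, hypothetical table/view names.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app password=secret host=localhost")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS users (id serial PRIMARY KEY, name text, active boolean);
    CREATE OR REPLACE VIEW active_users AS SELECT id, name FROM users WHERE active;
""")

# The rule rewrites INSERTs on the view into INSERTs on the base table, so the
# client keeps speaking plain SQL over the normal PostgreSQL protocol.
cur.execute("""
    CREATE OR REPLACE RULE active_users_ins AS
    ON INSERT TO active_users
    DO INSTEAD INSERT INTO users (name, active) VALUES (NEW.name, true);
""")

cur.execute("INSERT INTO active_users (name) VALUES (%s)", ("alice",))
cur.execute("SELECT id, name FROM active_users")
print(cur.fetchall())
```

Rules only rewrite statements inside the server, so they do not help if you need to inspect or alter arbitrary incoming queries before they reach PostgreSQL; for that you are back to a proxy that speaks the PostgreSQL protocol.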
You can also refer to the following postgresql-async driver project:
https://github.com/mauricio/postgresql-async