Separate Cloudant Service from Bluemix - mongodb

I was hosting my node app on Heroku connected to a MongoLab database.
Thanks to a few tips from here, I've deployed my app code to Bluemix. I moved mainly because I'm changing databases from MongoDB to CouchDB, and am hoping that Bluemix might have a faster connection to Cloudant given that both are IBM services.
Ok, now for the questions :p
First, since Cloudant was added to my app as a "service", it was generically provisioned and the username/password/etc. were generated. I assume I could instead create my own separate Cloudant account and enter all of its settings into my Bluemix app manually. If I did that, to link a separate Cloudant DB to my Bluemix app, would it be slightly slower or have any other downsides? I'm asking because when it's auto-provisioned (and all done together), maybe it is configured so that there are fewer network hops or firewalls between the connections. Remember, my initial motivation for trying out Bluemix was its relationship with Cloudant.
Second, if I decide to stick with the auto-provisioned Cloudant DB, how can I change the username? I've been playing with the interface and don't see that option anywhere. Either way, I assume I should be able to point my separate Heroku app at it using the same credentials Bluemix uses, and it should work the same (as a separate DB, just as I do with my single MongoLab instance and various PaaS providers).
Thanks for the info!
Paul

You can certainly create your own separate Cloudant account and then enter all of its settings into your Bluemix app manually. The connection speed would depend on the data center location (SoftLayer, Rackspace or Azure) you chose when you created your Cloudant account. A SoftLayer data center would be faster than Rackspace or Azure thanks to the high-speed private network between all the SoftLayer data centers.
I'm not aware of a way to change the username after the service has been provisioned. You should be able to point your Heroku app at it using those same credentials.
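To make the two configurations concrete, here is a small sketch (not from the original posts) of how a Bluemix-hosted Node app could read the auto-provisioned Cloudant credentials from the VCAP_SERVICES environment variable, falling back to your own variables for a separately created Cloudant account. The fallback variable names (CLOUDANT_USERNAME etc.) are assumptions for illustration.

```javascript
// Read Cloudant credentials: prefer the Bluemix-injected VCAP_SERVICES
// entry (service label "cloudantNoSQLDB"), otherwise fall back to
// hand-configured environment variables for a self-provisioned account.
function getCloudantCredentials(env) {
  if (env.VCAP_SERVICES) {
    const services = JSON.parse(env.VCAP_SERVICES);
    const cloudant = (services.cloudantNoSQLDB || [])[0];
    if (cloudant) return cloudant.credentials; // { username, password, url, ... }
  }
  // Fallback: a Cloudant account you created yourself (variable names assumed)
  return {
    username: env.CLOUDANT_USERNAME,
    password: env.CLOUDANT_PASSWORD,
    url: env.CLOUDANT_URL,
  };
}
```

The same credentials object can then be handed to whatever Cloudant/CouchDB client the app uses, which is also how a Heroku app could reuse the Bluemix-generated credentials.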

Related

Keeping users out of specific MarkLogic databases

My question is similar to this one, but not quite:
Hide a marklogic database to specific user (permissions)
Background: up until now, developers who use database X were all admins on the server (a historic config that we recently inherited), but now we want to add new developers to the server who definitely won't be admins, along with a new database Y.
What we want is for several groups of developers to share the same MarkLogic 10 server, with developer group X able to work only in their database X and developer group Y only in database Y. We don't care if they can see all databases on the server.
Does this mean we have to apply permissions to every document in every database, or can we control this via roles that limit access to specific databases?
Can someone suggest the right way to achieve this please?
Thanks in advance.
You have two tools to work with:
Granular privileges, which allow you to limit the scope of a privilege to a specific resource (such as a database or forest)
Document permissions set per document to reflect its intended users in each database, as you already mentioned
However, in my experience, I've generally found this use case is better served by having many small dev clusters rather than one large one as resource contention (one app team pushing CPU to 100%) can become too much of an issue. It is pretty quick and painless to spin up and tear down dev clusters on AWS or Azure. Or, if you're self-hosting, you could look at running multiple MarkLogic Containers on a single host.
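As a rough sketch of the document-permissions approach, the helpers below build a per-team role definition and the permissions array to attach to every document inserted into that team's database. The role names ("dev-x" etc.) and the exact payload shape are assumptions for illustration, not MarkLogic-confirmed schemas; a role without admin and without read/update permissions on database X's documents cannot work in database X.

```javascript
// Hypothetical per-team role definition (e.g. for the Management REST API);
// field names are assumptions for illustration.
function teamRole(team) {
  return {
    'role-name': `dev-${team}`,
    description: `Developers allowed to work in database ${team}`,
  };
}

// Permissions to attach to every document inserted into the team's database,
// so only that team's role can read or update them.
function teamPermissions(team) {
  return [
    { 'role-name': `dev-${team}`, capability: 'read' },
    { 'role-name': `dev-${team}`, capability: 'update' },
  ];
}
```

With this in place, inserts into database X would always carry teamPermissions('x'), and group Y's role simply never appears in those permission lists.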

Integrate MongoDB with Firebase

I have a Flutter app (still in development) that currently uses Firebase for the backend. More specifically, I use Firebase Authentication, Storage, Cloud Functions, Firestore and in the future I am willing to use Remote Config, Dynamic Links, Cloud Messaging and more of Firebase's features.
I got to a point where Firestore is not enough anymore for my purposes: full-text search, geographical querying and advanced queries in general. I know I can use third-party services like Algolia for this, but it's too expensive and I wanted something already integrated with my database.
I was thinking of starting to use MongoDB as my database (while keeping all the other Firebase services), but before I do that I need to understand the best way to do it.
Can I host MongoDB on Firebase Hosting (I don't know if this possible at all?) or just use MongoDB Atlas and access it directly (See my next question) from my application?
What is the best way to connect my application to MongoDB? From the app directly (using Rest API) or using Firebase Cloud Functions (so I won't expose my database)?
Can I use Firebase Authentication tokens to access MongoDB or do I have to use MongoDB's authentication service?
If there are more things I need to consider before I start switching to MongoDB, please point them out.
Firebase Hosting is a CDN for hosting static websites, so it is not possible to host a server application like MongoDB there; you can't host MongoDB on any Firebase service. You have to deploy it somewhere else, and there are several options. You could get a VPS and install MongoDB server on it, but then you have to manage your own DB, which can be difficult and can take quite some time. Another option is a cloud database like MongoDB Atlas, which is a faster and more secure solution; however, pricing can be high. So you have to decide depending on your needs.
Once you have a running MongoDB server, you need to write an API for client apps to communicate with it securely; client apps should never talk to a DB instance directly. In this case you can use Firebase Cloud Functions to create that API.
You can use the Firebase Auth service with Firebase Cloud Functions. Have a look at Firebase Callable Functions, which pass the auth context into the function body. There you can ensure the user is authenticated or perform access-control logic depending on your authorization needs.
Overall, you are going to add another layer to your architecture. It is possible, but it will take time to set things up and you will lose some Firestore benefits like offline persistence.
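To illustrate the token flow the answer describes: the client sends its Firebase ID token (a JWT) to your API layer, which checks it before touching MongoDB. In production you would use the Firebase Admin SDK's verifyIdToken() (or a Callable Function, where verification is handled for you); the minimal decoder below only parses the payload and does NOT verify the signature, so treat it purely as a sketch of what the token carries.

```javascript
// Decode (NOT verify) a Firebase ID token's payload. A JWT is three
// base64url-encoded segments joined by dots: header.payload.signature.
// The payload contains the user id ("sub"), email, expiry ("exp"), etc.,
// which your API layer can map to MongoDB access rules after real
// verification via the Firebase Admin SDK.
function decodeIdTokenPayload(idToken) {
  const parts = idToken.split('.');
  if (parts.length !== 3) throw new Error('not a JWT');
  const payloadJson = Buffer.from(parts[1], 'base64url').toString('utf8');
  return JSON.parse(payloadJson);
}
```

This is why you don't need MongoDB's own authentication for end users: only your Cloud Functions hold MongoDB credentials, and they trust the verified Firebase identity instead.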

AWS platform. Picking the right technologies

I am building an app that allows people to share items with other people in the community. I wanted to use AWS as my platform.
My idea was to use React Native for the app, AWS Cognito for authentication, AWS Lambda for the server calls, a relational database for data about the items and user data such as geolocation, and DynamoDB for real-time chat, borrow requests and transaction data between users. My primary focus is low cost, and I was thinking of using PostgreSQL for the relational database.
What do you guys think of my database choices, in particular PostgreSQL on RDS? Is there a flaw in the database plan so far? Any help would be greatly appreciated.
I would probably just use DynamoDB for everything in your application; I don't see a real need to store some of your data in an RDS database here. However, if you definitely need a relational database, I would suggest AWS Aurora Serverless so that your entire application uses serverless AWS services. Also, normal relational database connection pools don't work well in AWS Lambda, so I would suggest using the new Data API.
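If you go all-in on DynamoDB, the usual approach is a single-table design where the partition key groups an entity's rows and the sort key orders them. A sketch for two of the entities mentioned (chat messages and borrow requests) might look like this; the key prefixes and attribute names are assumptions for illustration, not from the original post.

```javascript
// Hypothetical single-table DynamoDB keys. ISO-8601 timestamps sort
// lexicographically, so a Query on PK = "CHAT#<id>" with SK beginning
// with "MSG#" returns a chat's messages in time order.
function chatMessageKey(chatId, sentAtIso) {
  return { PK: `CHAT#${chatId}`, SK: `MSG#${sentAtIso}` };
}

// All borrow requests against one owner live under that owner's partition.
function borrowRequestKey(ownerId, requestId) {
  return { PK: `USER#${ownerId}`, SK: `REQ#${requestId}` };
}
```

Geolocation queries are the main thing this design does not give you out of the box, which is worth checking before dropping PostgreSQL entirely.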

Google Cloud SQL Read replica's in other regions

We are currently investigating the options for a partial switch to Google Cloud SQL. What we are searching for is a setup in which data is available for reading in multiple regions to increase the speed of the web application. Writing from multiple regions would of course be great, but that's not really something MySQL does when you also want speed on your side :-)
What we would like to setup is a master-slave setup through which the Master would be in Europe and slaves (for reading) would be available in the US and Asia. This way we can provide information to our customers from a VM + SQL instance in Asia without having to connect to a database in Europe.
As far as I am aware it is not possible to currently add a read-instance outside of the region of the master. Is that correct?
Or, would it be possible to create our own MySQL read-only instance and let it replicate from a Google Cloud SQL instance? This would not be preferable (database administration, server administration) but is of course an option.
You can do cross-region replication in Cloud SQL, although it is not straightforward and the performance will not be great. You have to create a master in Cloud SQL, then create a replica with an external master pointing at the master you created: https://cloud.google.com/sql/docs/replication#external-master
You can go in the other direction as well: https://cloud.google.com/sql/docs/replication#replication-external
These features are only supported for the first generation of Cloud SQL.
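Whatever replication route you take, the application side of the master-slave setup described in the question is a read/write split: writes always go to the single master in Europe, reads go to the nearest replica. A minimal routing sketch (host names and regions are made up for illustration):

```javascript
// Route queries: MySQL replicas are read-only, so every write must hit the
// master; reads prefer the replica in the caller's region.
const MASTER = 'sql-master.europe.example.internal';
const REPLICAS = {
  asia: 'sql-replica.asia.example.internal',
  us: 'sql-replica.us.example.internal',
  europe: MASTER, // in the master's own region, just read from the master
};

function hostFor(operation, region) {
  if (operation === 'write') return MASTER;
  return REPLICAS[region] || MASTER; // unknown region: fall back to master
}
```

Keep in mind that replicas lag the master, so a user in Asia may briefly read stale data right after a write.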
Cloud Spanner is a relational database that supports transactional consistency at global scale. It is a SQL database that works well in a multi-region environment, so it could be a good choice for your case. For more info, see https://cloud.google.com/spanner/

A Local version of Azure Table Storage

Ok first of all I love Azure and table storage.
We're starting a new greenfield project which will be hosted as a SaaS model in the cloud. Azure Table Storage is ideal for what we need, but one thing stopping us from taking this route is the possibility of someone requiring the application to be deployed to their local web server rather than the cloud.
This is something I'd rather avoid personally, but unfortunately some people insist that their local setup is more secure than any data centre out there.
What I'd really like to know is whether someone has created a local implementation of Azure Table Storage. I know Microsoft has the storage emulator, which in theory could be used (it stores the data in SQL, which may be slow).
Anyone used the emulator for an internal deployment?
I'm happy to look at creating a wrapper for Azure Table Storage using their REST APIs, but didn't want to do something that's already been done.
Alternatively, can anyone recommend an alternative? I know there's RavenDB and MongoDB, which also look good, but I've had no exposure to how well they handle load or when to scale them out.
The emulator is designed to simplify testing - it is definitely not intended to be used as part of a production deployment.
Is it possible to embrace both a cloud-only design (Azure web role and storage) and a hybrid design whereby your application can be hosted on your own web server yet still use Azure Storage?
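One way to keep both deployment options open is to code against a minimal table-storage abstraction and swap the backend per deployment: Azure Table Storage in the cloud, a local implementation on-premise. A sketch with a trivial in-memory backend follows; the interface shape is an assumption for illustration, not an Azure API, and a real local backend would persist to disk or a database.

```javascript
// Minimal table-store abstraction keyed the way Azure Table Storage is:
// by table, partition key and row key. Swap this class for an Azure-backed
// implementation with the same methods in cloud deployments.
class InMemoryTableStore {
  constructor() {
    this.tables = new Map();
  }

  insert(table, partitionKey, rowKey, entity) {
    if (!this.tables.has(table)) this.tables.set(table, new Map());
    this.tables.get(table).set(`${partitionKey}|${rowKey}`, entity);
  }

  retrieve(table, partitionKey, rowKey) {
    const t = this.tables.get(table);
    return t ? t.get(`${partitionKey}|${rowKey}`) : undefined;
  }
}
```

The application only ever sees insert/retrieve, so an on-premise customer's deployment differs from the cloud one by a single wiring choice rather than a code fork.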
Jason