We are working on a Sitecore deployment in Azure.
Sitecore Experience Platform 8.0 rev. 160115
MongoDB - 3.0.4
We installed MongoDB, and we can connect to localhost using Robomongo, but we can only see the “Analytics” database and its collections.
Our connection string setup is:
ConnectionStrings.config
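(For reference, the stock Sitecore 8.0 entries look like this, pointing at the local MongoDB:)

<add name="analytics" connectionString="mongodb://localhost/analytics" />
<add name="tracking.live" connectionString="mongodb://localhost/tracking_live" />
<add name="tracking.history" connectionString="mongodb://localhost/tracking_history" />
<add name="tracking.contact" connectionString="mongodb://localhost/tracking_contact" />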
But the other three databases and their collections are not created:
Tracking.live
Tracking.history
Tracking.contact
In the Sitecore.Analytics.config file, the setting “Analytics.Enabled” is set to true.
Sitecore.Analytics.config
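The relevant line looks like this:

<setting name="Analytics.Enabled" value="true" />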
In the logs we found some references to xDB Cloud initialization failures, so we disabled xDB Cloud.
Are we missing any configurations? Any help or suggestions are appreciated.
Thank you
Keep in mind that MongoDB is schemaless. Of course, in a production environment you would probably have to create these databases manually - to ensure that access rights are assigned correctly. But in a development environment, any database can be created on the fly.
The only reason the analytics database was created for you is that Sitecore creates indexes for its Interactions collection. Otherwise, you wouldn't see this database until xDB wrote some data into it. The same goes for any MongoDB collection: it won't appear until either data is written or an index is created.
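You can see this behavior for yourself in the mongo shell (the index field here is just an arbitrary example):

> use tracking_live
> show dbs                                       // tracking_live is not listed yet
> db.Interactions.createIndex({ ContactId: 1 })
> show dbs                                       // now tracking_live appears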
The other three databases will be created once the aggregation/processing logic executes, i.e. when your instance actually starts to collect and process visit data.
In conclusion: don't worry about these databases missing for now. Just verify that xDB functionality is working properly.
I am trying to follow the quickstart to set up SQL Server (not the LocalDb version of SQL Server that comes with Visual Studio) as my data store. It looks like two databases will be needed: one for configuration and the other for operation. But my problem is that I couldn't figure out what database names I should use. I created two databases using names I came up with and ran the scripts I downloaded from the quickstart to create all the tables. Now, when I try to make a connection, I think I will need to specify the database names in my connection string, don't I? What should I use to replace the original connection string provided by the quickstart, "Data Source=(LocalDb)\MSSQLLocalDB;database=IdentityServer4.Quickstart.EntityFramework-4.0.0;trusted_connection=yes;"?
You can have one database for both. But in general I would keep the configuration part in memory if the number of clients is small. Why spend hours keeping the config in a database for just a few clients and resources?
Better to just keep the users and persisted grants in a database.
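As for the connection string itself: there is no required database name - point it at whichever database(s) you created. Below is a minimal sketch of this split, assuming hypothetical database/class names in the style of the quickstart (the extension methods come from the IdentityServer4 and IdentityServer4.EntityFramework packages):

// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // Hypothetical name - any database you created works,
    // as long as the connection string points at it.
    const string connectionString =
        "Data Source=.;database=IdentityServerGrants;trusted_connection=yes;";

    services.AddIdentityServer()
        // Configuration data (clients, resources) defined in code, held in memory.
        .AddInMemoryClients(Config.GetClients())
        .AddInMemoryIdentityResources(Config.GetIdentityResources())
        .AddInMemoryApiResources(Config.GetApiResources())
        // Operational data (persisted grants) stored in SQL Server.
        .AddOperationalStore(options =>
            options.ConfigureDbContext = builder =>
                builder.UseSqlServer(connectionString));
}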
We are using version 2018.3 of Tableau Server. Server stats like user logins and other data are logged into the PostgreSQL DB, and the same data is cleared regularly after one week.
Is there any API available in Tableau to connect to the DB and back up the data somewhere like HDFS, or anywhere on a Linux server?
Kindly let me know if there is any way other than an API as well.
Thanks.
You can enable access to the underlying PostgreSQL repository database with the tsm command. Here is a link to the documentation for your (older) version of Tableau:
https://help.tableau.com/v2018.3/server/en-us/cli_data-access.htm#repository-access-enable
It would be good security practice to limit access to only the whitelisted machines that need it, to create or use an existing read-only account to access the repository, and ideally to disable access again when your admin programs are complete (i.e. enable access, do your query, disable access).
This way you can have any SQL client code you wish query the repository, create a mirror, create reports, run auditing procedures - whatever you like.
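For example, the enable/query/disable cycle might look like this (a sketch: the repository listens on port 8060 and its database is named "workgroup"; the host and password are placeholders, and enabling access may require a server restart):

# enable external access for the built-in read-only account
tsm data-access repository-access enable --repository-username readonly --repository-password <PASSWORD>
# dump the repository database from your backup machine, then ship the
# file to HDFS or wherever you keep backups
pg_dump -h <tableau-server-host> -p 8060 -U readonly -Fc workgroup > workgroup.dump
# close access again when finished
tsm data-access repository-access disable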
Personally, before writing significant custom code, I'd first see if the info you want is already available another way: in one of the built-in admin views, via the REST API, through the public-domain LogShark or TabMon tools, with the Server Management Add-on (for more recent versions of Tableau), or possibly through the new Data Catalog.
I know at least one server admin who somehow clones the whole Postgres repository database periodically so he can analyze stats offline. Not sure what approach he uses to clone. So you have several options.
Situation:
I have created a new Compose for MongoDB service instance in Bluemix today.
Need:
I have to access this MongoDB DIRECTLY with tools (e.g. Mongo Management Studio Pro, mongo.exe, etc.) for bulk loading, testing, ad-hoc data fixes, etc.
Problem:
I have not found any docs, samples, nor a CLEAR statement that
a) gives me some confirmation that THIS is possible, and
b) gives me COMPLETE information (not just some technical fragments that might have worked a year ago) on how to do it.
Maybe I am looking in the wrong places or do not know the right people. However, I am stuck on this, and before quitting Bluemix MongoDB, maybe somebody has a copy/paste solution or a hands-on, step-by-step manual.
Any help welcome. Thanks!
Connecting to the MongoDB service in Bluemix from an application is possible. For this answer I have used the application Robo3T; here are the steps:
Access your MongoDB service in your Bluemix account, usually under "Cloud Foundry Services".
Open the "Manage" section; from "Connection Settings", copy the connection address and port listed under "HTTPS". In this example: "sl-eu-lon-2-portal.5.dblayer.com" and "20651".
In Robo3T, create a new connection with the connection address from the previous step.
In the "Authentication" tab, configure the database name, username, and password. The credentials are found as in step 1.
From "Connection Settings" copy the SSL Certificate into a text file and save locally.
In Robo3T Add the certificate to the connection in the "SSL" tab
Test the connection and save the settings
Answer
YES, Bluemix-hosted Compose for MongoDB instances can be connected to from the mongo shell and some updated DB management tools.
However, if you are running the newest DB versions, you have to make sure that your tools (shell and DB management GUIs) support the newest DB features, such as encryption.
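For example, with a sufficiently recent mongo shell, the connection looks something like this (a sketch reusing the host, port, and certificate file from the Robo3T answer above; username and password come from your service credentials):

mongo --ssl --sslCAFile compose-cert.pem sl-eu-lon-2-portal.5.dblayer.com:20651/admin -u <USERNAME> -p <PASSWORD>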
Origin of the Problem
My problem was due to older, and therefore incompatible, versions of the mongo shell and DB management tools running against the newest MongoDB versions, with their particularities around encryption and multiple servers in the URI.
At least two DB management tools are not compatible with the newest DB version and will take their time to get fixed. The problem is that neither will tell you about this. They just do not connect. No logs on either side. Period.
So my advice here: look for tool providers who express dedicated compliance with the specific version of your DB.
Advice to the Bluemix Team
It might not take much time to provide some sample connection strings for the most common tools, like the mongo shell, MongoBooster, etc., to take the hassle and guesswork out of interpreting the environment variables and figuring out what is needed for a specific connection string and what is not.
For instance, MongoDB Atlas provides ready-made connection strings for every cluster, for many tools; you can just copy/paste them and be done!
Connecting to Atlas took me 5 minutes. For Bluemix I have lost hours! Not because it is complex, but because the documentation and the generated info are somewhat incomplete and messy - at least for those who do not do connection strings for a living!
I've been following a tutorial with express, node and mongo.
I have in a config file on the server side:
production: {
    db: 'mongodb://MYUSERNAME:MYPASSWORD@ds033307.mongolab.com:33307/dbname',
    rootPath: rootPath,
    port: process.env.PORT || 80
}
So, I have my username and password in clear text in a server-side JavaScript file. Should I be worried about this? If yes, where else can I put them?
Thanks.
Edit: I went back and had a look at the MongoLab and Heroku (where my site is hosted) docs.
There I found: "The MongoLab add-on contributes one config variable to your Heroku environment: MONGOLAB_URI", and so I was able to put the MONGOLAB_URI env var into my config and move the password out of the source code.
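The production entry now looks something like this:

production: {
    db: process.env.MONGOLAB_URI,   // set by the MongoLab add-on; no credentials in source
    rootPath: rootPath,
    port: process.env.PORT || 80
}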
With regard to the same datacenter: am I right to assume Heroku would not be hosting my MongoLab database in its own datacenter, but would instead be calling out to a cloud-hosted Mongo database? Not much I can do then, is there, if I want to stick with MongoLab and Heroku?
I know this question is old but according to Heroku's docs they currently use 2 datacenters (https://devcenter.heroku.com/articles/regions#data-center-locations).
Their US server is 'amazon-web-services::us-east-1' and their EU alternative is 'amazon-web-services::eu-west-1'.
Both of these data centers are available when launching Mongo instances on MongoLab, so you can choose to have both your app and your DB in the same datacenter, giving much improved security.
I think you should always be concerned about storing passwords in source code files. Generally you would be much better off keeping them in a configuration file that is managed separately. This gives you the flexibility to use the same code with a different configuration file pointing to development or QA databases.
Of bigger concern perhaps - are you hosting your application in the same datacenter that MongoLab is hosting your database? If not, that user name and password, along with your data, will traverse the internet in the clear.
MongoLab does not currently support SSL (other than for their RestAPI) so even they recommend being in the same data center:
Do you support SSL?
Not yet, but it is on our roadmap to be available in Summer 2014. In the meantime, we highly recommend that you run your application and database in the same datacenter. If you have a Dedicated plan, we also highly recommend that you configure custom firewall rules for your database(s).
REST API:
Each MongoLab account comes with a REST API that can be used to access the databases, collections, and documents belonging to that account. The API exposes most of the operations you would find in the MongoDB driver, but offers them as a RESTful interface over HTTPS.
I would definitely read the security documentation fairly closely:
https://docs.mongodb.com/manual/security/
I'm in the process of developing a deployment system for a new web app and I'm wondering where the best point in the process to manage database migrations is (the question of how to do the migrations is another problem entirely).
It seems there are two ways to go:
1. Use a migration script that can be run either manually from the command line or as part of the automatic deployment/build process.
2. Run the migrations when the app starts up (I'm using ASP.NET, so this can be done easily enough without causing a long-running user request).
Does anyone have any suggestions/insight/experience with these approaches? Any other suggestions?
I can see why #1 might be more attractive - it gives me complete control over when the DB is updated. However, I quite like #2 as it allows me to quickly iterate between deployments and reduces the manual process. #2 could also be used on my development machine to allow even quicker iterations. Hmm, starting to think having both might be a good thing...
We have a sales-force system with ~100 clients, and we update the database at application startup (true, ours is a desktop application). I like this approach; it's safe and iterative when the starting point is nondeterministic (is the client database new, or only updated to version x.y.z?).
But on the server side I prefer your option #1: we create a SQL query file on our virtual machine (based on a copy of the original database) and run this query against the real server.
So IMHO:
Disconnected clients: startup, iterative scripts
Server: query created on a VM based on a copy of the actual, real database
I'm interested in this problem too, and found some (half-)frameworks, such as RikMigrations. After some googling, a good starting place on DB versioning/migration frameworks is the .NET Database Migration Tool Roundup. Not necessarily for the documentation, but the team blogs can be interesting.
I like option #1 better as it seems much more flexible. In lieu of actually performing migrations on each app start, I think I would verify that the database schema (version number?) matches the code, and if not, throw a warning or error about a mismatched database schema.
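A minimal sketch of that check, assuming a hypothetical SchemaVersion table and a version constant bumped alongside each migration script:

using System;
using System.Configuration;
using System.Data.SqlClient;

public static class SchemaGuard
{
    // Hypothetical: incremented with every migration script you ship.
    private const int ExpectedSchemaVersion = 42;

    // Call this from Application_Start in Global.asax.
    public static void VerifySchemaVersion()
    {
        var connectionString =
            ConfigurationManager.ConnectionStrings["AppDb"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT MAX(Version) FROM SchemaVersion", connection))
        {
            connection.Open();
            var current = (int)command.ExecuteScalar();
            if (current != ExpectedSchemaVersion)
            {
                throw new InvalidOperationException(string.Format(
                    "Database schema is v{0} but the code expects v{1}.",
                    current, ExpectedSchemaVersion));
            }
        }
    }
}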
I'd prefer option #1 for a number of reasons. First, integration tests usually require your DB schema to be up to date, and launching a website just to upgrade the schema is a huge time waster. Second, you cannot change the database schema while your site is running (say, to add a couple of indexes to speed things up).
As for the production side of things, upgrading your database transactionally in an MSI-style installation is much better than attempting to upgrade at each app startup, since with the latter you can end up with desynchronized database and application versions.
And if you're looking for the migration framework, take a look at Wizardby.
If the application ever has to run on a customer's machine, then migrating at startup can prevent a lot of support calls - assuming you can do seamless migration without user intervention (I hope you aren't normally running your web app with permission to modify the database).
If the application always runs under your control, automatic migration is less of an issue - but it can still be a good feature, especially if you want to minimize downtime and manual deployment steps.