I'm relatively new to Chef and am in the process of using the edelight mongodb cookbook. I've got the process of actually creating a standalone MongoDB instance working fine. What I'm struggling with is how to use the subsequent user_management recipe to create the initial admin user and regular users.
When I add "default['mongodb']['config']['auth'] = true" to the attributes/default.rb file, and run the mongodb::default recipe, the db is created and authentication is on.
However, when I run the mongodb::user_management recipe I get this error every time. Clearly I'm doing something wrong, but being new to editing Chef/Ruby files I can't determine what's failing. It looks like I might need to work within the users.rb attribute file?
===================================================
Error executing action add on resource 'mongodb_user[admin]'
NameError
uninitialized constant Mongo::MongoClient
The edelight cookbook has been unmaintained for quite some time now. Chef-Brigade is attempting to take over maintenance on the cookbook until a new owner can be found.
https://github.com/chef-brigade/mongodb-cookbook
There is work being implemented to fix some of the user_management issues. I am not 100% sure of the current state of the user_management fixes, but you would likely be better off starting with that cookbook and reporting any issues to the team there so they can work to resolve them. There is active development taking place.
I would be glad to help you debug the issue if it persists on the chef-brigade flavor of the cookbook as we can actively make changes to resolve any issues.
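In the meantime, note that "uninitialized constant Mongo::MongoClient" typically means the 2.x mongo Ruby driver got installed on the node: Mongo::MongoClient only exists in the 1.x driver, and 2.x replaced it with Mongo::Client. A minimal, hedged workaround, assuming the cookbook is otherwise intact, is to pin the gem below 2.0 before the user_management recipe runs (the exact version constraint here is illustrative):

chef_gem "mongo" do
  # The 1.x series still provides Mongo::MongoClient, which the
  # cookbook's user resources expect; the 2.x driver removed it.
  version "~> 1.12"
  action :install
end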
We are working on Sitecore deployment in Azure.
Sitecore Experience Platform 8.0 rev. 160115
MongoDB - 3.0.4
We installed MongoDB, and we can connect to localhost using Robomongo. We can only see the “Analytics” database and its collections.
Our connection string setup is:
Connectionstring.config
But the other 3 databases and collections are not created.
Tracking.live
Tracking.history
Tracking.contact
In the Sitecore.Analytics.config file, the setting “Analytics.Enabled” is set to true.
Sitecore.Analytics.config
In the logs we found some references to xDB Cloud initialization failures, so we disabled it.
Are we missing any configurations? Any help or suggestions are appreciated.
Thank you
Keep in mind that MongoDB is schemaless. Of course, in a production environment you would probably have to create these databases manually - to ensure that access rights are assigned correctly. But in a development environment, any database can be created on the fly.
The only reason the analytics database was created for you is because Sitecore creates indexes for the Interactions collection. Otherwise, you wouldn't see this database until xDB wrote some data into it. Same goes for any MongoDB collection - those won't appear until there's either data being written or an index created.
The other three databases will be created once the aggregation/processing logic is executed, i.e. when your instance starts to actually collect and process visit data.
In conclusion, don't worry about these databases missing (for now). Just verify that xDB functionality is working properly.
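You can see the lazy-creation behavior for yourself by writing a single document and watching the database appear. A quick sketch using the 1.x MongoDB Ruby driver (database and collection names here are purely illustrative; the same behavior holds from the .NET driver Sitecore uses):

require "mongo" # mongo gem 1.x

client = Mongo::MongoClient.new("localhost", 27017)
db = client.db("tracking_live_test")     # nothing is created yet
db["interactions"].insert("visit" => 1)  # database and collection exist now
puts client.database_names               # "tracking_live_test" is in the list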
I have a problem trying to implement a mongodb replica set as a worker role instance in Windows Azure. In the Windows Azure portal, one of the instances is shown as busy with the status:
Waiting for role to start... Calling OnRoleStart()
I have checked all the settings and everything seems to be ok, what could the problem be?
Denis Markelov's blog post helped me solve this problem. The solution is mainly his, however I had to take an extra step to get it to work and thought others might find it useful.
Solution from blog:
Windows Azure reuses virtual machines for roles, so after a fresh deployment you can find files on the hard drive that were created during previous sessions. If MongoDB was terminated improperly, there might be a lock file (a "persisted mutex" analogue) because of which MongoDB refuses to start. It is located on the drive with the label "WindowsAzureDrive" (say it is F:), at the path:
F:\data\mongod.lock
In the case of production use, this situation might require recovery procedures, but if you are just in the process of initial setup, it is safe to remove this file, letting MongoDB start again.
I was having this problem and did as suggested, however the problem persisted. So I took a look at the log file at
C:\Resources\Directory\.MongoDB.WindowsAzure.MongoDBRole.MongodLogDir\mongod.txt
and saw that another file was also giving an error. To fix the problem, you also have to delete the file local.ns in the same directory as mongod.lock.
We're currently using Chef to provision our servers and we want our recipe/cookbook to automatically add some data to the Mongo database once it's installed and running.
This is where we start to run into problems. We were using an execute resource to run the mongo script like this:
execute "install-mongodb-config" do
command "mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
action :run
end
This part of the recipe always failed no matter what we tried! I won't get into the details of everything we tried here (unless I need to), but let's just say that I've exhausted all possibilities of subscribes and notifies (I think).
The problem originates from the fact that we are using mongodb::10gen_repo to install MongoDB. The recipe exits when apt-get installs the package, and then Chef continues on to execute more resources.
We have tried executing the above resource directly after mongodb::10gen_repo but it doesn't seem like mongodb is available and the mongo shell cannot connect and run the script. The error we see is somewhat like this:
MongoDB shell version: 2.0.2
Thu Sep 6 18:40:45 ReferenceError: setTimeout is not defined mongotest.js:2
failed to load: mongoAddConfig.js
Nothing we have tried has been able to get around this in a nice Chef way. What we resorted to was replacing the execute resource with the following:
execute "install-mongodb-config" do
command "sleep 60; mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
action :run
end
This just makes the command sleep for 60 seconds before the mongo script runs. I know this isn't the Right way to do this, but it works for now.
Can anyone suggest the Right way to do this? I have a feeling that I will need to talk to the guys that created the mongodb chef script and request a feature!
First of all, remove the "sleep 60". This can be done by Chef: all resources have common attributes, and "retries" and "retry_delay" are among them. So the easiest way would be:
execute "install-mongodb-config" do
command "mongo some_command"
action :run
retries 6
retry_delay 10
end
If you have more than 2-3 places where you have to run some command on the Mongo database, consider creating an LWRP similar to the one in this mongodb cookbook (in particular, check the libraries/mongodb.rb file). You can hide the logic that waits for the server to respond there.
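For example, a library helper along these lines (a sketch; the method name and file path are assumptions, not the cookbook's API) could block until mongod accepts connections before any setup commands run:

# libraries/mongodb_helpers.rb (hypothetical)
require "socket"
require "timeout"

def wait_for_mongodb(host = "localhost", port = 27017, max_wait = 60)
  Timeout.timeout(max_wait) do
    begin
      TCPSocket.new(host, port).close # succeeds once mongod is listening
    rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH
      sleep 1
      retry
    end
  end
end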
Is it important that the same Chef run that installs the software also injects the initial configuration? The 'chefly' method of constructing cookbooks and recipes is to make them idempotent, ensuring that they can be run over and over again without producing unintended results.
In this particular case, I would limit the first recipe to only installing and starting up MongoDB. This recipe would do nothing if it saw that MongoDB was already running on the host. Then, I'd have another recipe that would run only if it saw that Mongo had been set up and was running. It would query MongoDB to see if the initial configuration had been done. If so, it would simply return. If not, it would run your configuration routine.
In this way, these recipes could run all the time, anytime, on your machine. Even if someone uninstalled mongodb, chef would get around to ensuring that it was set back up again and pristine.
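As a sketch of that guard (this assumes your config script writes some sentinel document; the chef_markers collection here is hypothetical, not part of any cookbook's API):

execute "install-mongodb-config" do
  command "mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} \"#{node[:mongodb][:mongo_add_config_script]}\""
  # Skip the run entirely when the sentinel written by the script exists.
  not_if "mongo #{node[:mongodb][:mongo_db_host]}/#{node[:mongodb][:mongo_db]} --quiet --eval 'db.chef_markers.count({_id: \"initial_config\"})' | grep -q '^1$'"
  action :run
end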
So, I don't know much at all about chef. But your problem seems to be that you try to immediately connect after bringing the server up.
Servers are not immediately available when you bring them up, since there is a bit of overhead that goes into electing a primary, getting all the server statuses, etc.
You can recreate this without chef by trying to bring up a replica set and immediately trying to connect to it in a simple script. So it's not chef specific.
Not sure if there is a way around the server startup lag since bringing up a primary is expected to be a relatively infrequent occurrence compared to just adding nodes to a set.
The only potential solution I see that is cleaner is adding a longer timeout for the connection to be formed in the configuration. You can find how to do this in the MongoDB documentation here: http://www.mongodb.org/display/DOCS/Connections
The flag of interest for you is likely connectTimeoutMS
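For example, a connection string that raises the connect timeout to 30 seconds might look like this (host and database names are placeholders):

mongodb://myhost:27017/mydb?connectTimeoutMS=30000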
I came across this error that is apparently pretty common among Linux Systems.
"Too many files Open"
In my code I tried to set the Python open file limit to unlimited and it threw an error saying that I could not exceed the system limit.
import resource

try:
    # Soft limit of 500; hard limit unlimited. Only root may raise the
    # hard limit (RLIM_INFINITY, i.e. -1), which is why this raises an
    # error about exceeding the system limit for a normal user.
    resource.setrlimit(resource.RLIMIT_NOFILE, (500, resource.RLIM_INFINITY))
except ValueError as err:
    print err
So...I Googled around a bit and followed this tutorial.
However, I set everything to 9999999, which I thought would be as close to unlimited as I could get. Now I cannot open a session as root on that machine. I can't log in as root at all and am pretty much stuck. What can I do to get this machine working again? I need to be able to log in as root! I am running CentOS 6 and it's as up to date as possible.
Did you try turning it off and on?
If this doesn't help, you can supply init=/bin/bash as a kernel boot parameter to enter a root shell. Or boot from a live CD and revert your changes.
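For reference, on CentOS 6 (GRUB legacy) that means interrupting the boot menu, pressing 'e' on the kernel line, and appending the parameter, ending up with something like this (the kernel version and root device here are illustrative):

kernel /vmlinuz-2.6.32-642.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root init=/bin/bash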
After performing an 'strace su -', I looked for the 'No such file or directory' errors. When comparing the output, I found that some of those errors are expected; however, there were other files missing on my problem system that existed on a comparison system. Ultimately, it led me to a faulty line in /etc/pam.d/system-auth-ac referencing an invalid shared object.
So, my recommendation is to go through your /etc/pam.d config files and validate the existence of the shared object libraries, or look in /var/log/secure, which should also give some clues about missing shared objects.
I'm in the process of developing a deployment system for a new web app and I'm wondering where the best point in the process to manage database migrations is (the question of how to do the migrations is another problem entirely).
It seems there are two ways to go:
1. Use a migration script that can either be run manually from the command line or as part of the automatic deployment/build process.
2. Run the migrations when the app starts up (I'm using ASP.NET, so this can be done easily enough without causing a long-running user request).
Does anyone have any suggestions/insight/experience with these approaches? Any other suggestions?
I can see why #1 might be more attractive - it gives me complete control over when the DB is updated. However, I quite like #2 as it allows me to quickly iterate between deployments and reduces the manual process. #2 could also be used on my development machine to allow even quicker iterations. Hmm, starting to think having both might be a good thing...
We have a sales-force system with ~100 clients and we update the database at application startup (true, ours is a desktop application). I like this approach: it's safe and iterative when we have an indeterminate starting point (is the client database new, or only updated to version x.y.z?).
But on the server side I prefer your #1 option: we create a SQL query file on our virtual machine (based on a copy of the original database) and run this query against the real server.
So IMHO:
Disconnected clients: startup, iterative scripts
Server: query created on a VM based on a copy of the real database
I'm interested in this problem too, and have found some (half-)frameworks such as RikMigrations. After some googling, a good starting place on DB versioning/migration frameworks is the .NET Database Migration Tool Roundup. Not necessarily for the documentation, but the team blogs can be interesting.
I like option #1 better as it seems much more flexible. In lieu of actually performing migrations on each app start, I think I would verify that the database schema (version number?) matches the code, and if not, throw a warning or error about a mismatched database schema.
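As a sketch of that startup check (the question's stack is ASP.NET, but the pattern is language-agnostic; SQLite and the schema_version table here are purely illustrative, and the table is assumed to exist):

require "sqlite3"

EXPECTED_SCHEMA_VERSION = 42 # bumped in code alongside every migration

db = SQLite3::Database.new("app.db")
current = db.get_first_value("SELECT version FROM schema_version") || 0
unless current == EXPECTED_SCHEMA_VERSION
  raise "Schema mismatch: database at #{current}, code expects #{EXPECTED_SCHEMA_VERSION}"
end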
I'd prefer option #1 for a number of reasons. First, integration tests usually require your DB schema to be up to date, and launching a website to upgrade the schema would be a huge time-waster. Second, you cannot change the database schema while your site is running (say, to add a couple of indexes to speed things up).
As for the production side of things, upgrading your database transactionally as part of an MSI-style installation is much better than attempting to upgrade at each app startup, since with the latter you can potentially end up with desynchronized database and application versions.
And if you're looking for the migration framework, take a look at Wizardby.
If the application ever has to run on a customer's machine, then migrating at startup can prevent a lot of support calls - assuming you can do seamless migration without user intervention (I hope you aren't normally running your web app with permission to modify the database).
If the application always runs under your control automatic migration is less of an issue - but still can be a good feature, especially if you want to minimize downtime and manual deployment steps.