Insert of data into MongoDB failed: localhost:27017: cannot use 'j' option - mongodb

Inserting data into MongoDB fails with "localhost:27017: cannot use 'j' option when a host does not have journaling enabled" in CodeIgniter with MongoDB.
Controller
maincontroller.php
function createUser() {
    $this->load->library('mongo_db');
    $user = array("name" => "tutorialspoint3");
    $options = array(
        "w" => 1,
        "j" => true,
    );
    $this->mongo_db->insert('tutorialspoint', $user, $options);
}

Had the same problem; you are probably using a 32-bit mongod, which has journaling disabled by default.
Just start the database with --journal, or add "journal=true" to your mongod.conf (on Debian you can find it in /etc/mongod/), to start the database with journaling enabled.
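To confirm whether journaling is actually on, a quick check from the mongo shell is to look at serverStatus (a minimal sketch; the dur section is only reported when journaling is enabled on the MMAPv1 storage engine):
// Run in the mongo shell: "dur" is present only when journaling is enabled.
var status = db.serverStatus();
if (status.dur) {
    print("journaling is enabled");
} else {
    print("journaling is disabled - start mongod with --journal");
}
Alternatively, dropping the "j" => true entry from $options avoids the error if you do not need a journal acknowledgement for this insert.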

Related

mongo shell does not accept the use db command

I am trying to pull records from a Mongo database in JSON format. To do that I am running the JS below.
Content of get_DRMPhysicalresources_Data.js:
cursor = db.physicalresources.find().limit(10)
while ( cursor.hasNext() ){
    print( JSON.stringify(cursor.next()) );
}
The command I run to get the records:
mongo host_name:port/data_base_name -u user -p 'password' --eval "load(\"get_DRMPhysicalresources_Data.js\")" > DRMPhysicalresources.json
and I am able to get the result as JSON format inside DRMPhysicalresources.json. Now I want to switch to another database using the use command, so I tried adding use db as below.
Content of get_DRMPhysicalresources_Data.js:
use db2_test
cursor = db.physicalresources.find().limit(10)
while ( cursor.hasNext() ){
    print( JSON.stringify(cursor.next()) );
}
The command I run to get the records:
mongo host_name:port/data_base_name -u user -p 'password' --eval "load(\"get_DRMPhysicalresources_Data.js\")" > DRMPhysicalresources.json
but I am getting the errors below:
MongoDB shell version v4.2.3
connecting to: "some data base info"
Implicit session: session { "id" : UUID("8c85c6af-ebed-416d-9ab8-d6739a4230cb") }
MongoDB server version: 4.4.11
WARNING: shell and server versions do not match
2022-04-11T13:39:30.121+0300 E QUERY [js] uncaught exception: SyntaxError: unexpected token: identifier :
@(shell eval):1:1
2022-04-11T13:39:30.122+0300 E QUERY [js] Error: error loading js file: get_DRMPhysicalresources_Data.js :
@(shell eval):1:1
2022-04-11T13:39:30.122+0300 E - [main] exiting with code -4
Is there any way to add use db2_test without breaking it?
You can try this:
db = new Mongo().getDB("myOtherDatabase");
or
db = db.getSiblingDB('myOtherDatabase')
(this is the exact JS equivalent of the 'use database' helper)
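Applied to the script in the question, a minimal sketch (assuming db2_test is the database you want to read from) would be:
// get_DRMPhysicalresources_Data.js, switching databases with getSiblingDB()
// instead of the "use" shell helper, which is not valid JavaScript in a loaded file.
db = db.getSiblingDB("db2_test");
var cursor = db.physicalresources.find().limit(10);
while (cursor.hasNext()) {
    print(JSON.stringify(cursor.next()));
}
The same mongo ... --eval "load(...)" invocation should then work unchanged.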

Connect with mongodb server on digital ocean

I followed the DigitalOcean MongoDB install guide:
sudo ufw allow from your_other_server_ip/32 to any
I set your_other_server_ip to 127.0.0.1, since I will be connecting from a local Express app at localhost:3000:
sudo ufw allow from 127.0.0.1/32 to any
and created an admin user.
I have also updated mongodb.conf to
logappend=true
bind_ip = 127.0.0.1,139.**.*.**
port = 27017
How can I make the connection with Mongoose now?
I tried with a GUI over an SSH connection and it worked.
How can I connect to it with an HTTP URL?
EDIT - 1
I installed MongoDB with
https://www.digitalocean.com/community/tutorials/how-to-install-mongodb-on-ubuntu-18-04
and followed
https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-mongodb-on-ubuntu-16-04#part-two-securing-mongodb
Step 3 — Testing the Remote Connection.
I am getting:
MongoDB shell version v3.6.3
Enter password:
connecting to: mongodb://139.**.*.***:27017/
MongoDB server version: 3.6.3
And running
> show users
{
    "_id" : "testdb.testusr",
    "user" : "testusr",
    "db" : "testdb",
    "roles" : [
        {
            "role" : "readWrite",
            "db" : "testdb"
        }
    ]
}
If I try to connect with Mongoose with the code below:
var connectionString = "mongodb://testusr:testpwd@139.**.*.***:27017/testdb";
mongoose
    .connect(connectionString, {
        keepAlive: 1,
        useUnifiedTopology: true,
        useNewUrlParser: true,
    })
    .then(() => console.log('DB Connected!'))
    .catch(err => {
        console.log(`DB Connection Error: ${err.message}`);
    });
I am getting the output below:
DB Connection Error: Server selection timed out after 30000 ms

sync mongo data to elastic using logstash

I want to sync my MongoDB data (local MongoDB) to Elasticsearch (local Elastic) using the Logstash MongoDB input plugin.
I have installed the Logstash plugin using
bin/logstash-plugin install logstash-input-mongodb
Then I created a mongodata.conf file in the /usr/share/logstash directory.
When I execute the conf file it shows
--> Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
My config file is:
input {
    mongodb {
        uri => "mongodb://localhost:27017/reporterDB"
        placeholder_db_dir => "/opt/logstash-mongodb/"
        placeholder_db_name => "logstash_sqlite.db"
        collection => "iam_ms_test"
        batch_size => 5000
    }
}
filter {
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        action => "index"
        hosts => "localhost:9200"
        user => elastic
        password => changeme
        index => "mongo_log"
        document_type => "document_type"
        document_id => "%{id}"
    }
}
I am getting the lines below in the logstash-plain.log file:
[2019-11-01T15:41:00,869][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-11-01T15:41:00,871][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>750, :thread=>"#<Thread:0x351f7fd1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 run>"}
[2019-11-01T15:41:01,068][INFO ][logstash.inputs.mongodb ] Registering MongoDB input
[2019-11-01T15:41:01,116][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::MongoDB uri=>\"mongodb://localhost:27017/anchorReports\", placeholder_db_dir=>\"/opt/logstash-mongodb/\", placeholder_db_name=>\"logstash_sqlite.db\", collection=>\"hi_p5m\", batch_size=>5000, id=>\"ec7682e8c6c5676deca84d5072c5f7865120a107ffce81ce21caa878c6e4ed09\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_441f95b8-cc8a-4b9e-a45f-657ed2011e2b\", enable_metric=>true, charset=>\"UTF-8\">, since_table=>\"logstash_since\", since_column=>\"_id\", since_type=>\"id\", parse_method=>\"flatten\", isodate=>false, retry_delay=>3, generateId=>false, unpack_mongo_id=>false, message=>\"Default message...\", interval=>1>", :error=>"Java::JavaSql::SQLException: path to '/opt/logstash-mongodb/logstash_sqlite.db': '/opt/logstash-mongodb' does not exist", :thread=>"#<Thread:0x351f7fd1@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 run>"}
[2019-11-01T15:41:01,869][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Sequel::DatabaseConnectionError: Java::JavaSql::SQLException: path to '/opt/logstash-mongodb/logstash_sqlite.db': '/opt/logstash-mongodb' does not exist>, :backtrace=>["org.sqlite.core.CoreConnection.open(org/sqlite/core/CoreConnection.java:190)", "org.sqlite.core.CoreConnection.<init>(org/sqlite/core/CoreConnection.java:74)", "org.sqlite.jdbc3.JDBC3Connection.<init>(org/sqlite/jdbc3/JDBC3Connection.java:24)", "org.sqlite.jdbc4.JDBC4Connection.<init>(org/sqlite/jdbc4/JDBC4Connection.java:23)", "org.sqlite.SQLiteConnection.<init>
"(org/sqlite/SQLiteConnection.java:45)",
"org.sqlite.JDBC.createConnection(org/sqlite/JDBC.java:114)",
"org.sqlite.JDBC.connect(org/sqlite/JDBC.java:88)"
I want the records in my Elasticsearch under the index "mongo_log".
I also want to know the purpose of placeholder_db_dir and placeholder_db_name, and what these values should be when MongoDB is the input database.
Problem solved! Actually the directory /opt/logstash-mongodb was not created, so I manually created that folder under /opt. After that I gave write permission to it, so that when the Logstash command runs it can create its file inside this folder.

Set smallfiles in ShardingTest

I know there is a ShardingTest() object that can be used to create a testing sharding environment (see https://serverfault.com/questions/590576/installing-multiple-mongodb-versions-on-the-same-server), eg:
mongo --nodb
cluster = new ShardingTest({shards : 3, rs : false})
However, given that the disk space in my testing machine is limited and I'm getting "Insufficient free space for journal files" errors when using the above command, I'd like to set the smallfiles option. I have tried the following with no luck:
cluster = new ShardingTest({shards : 3, rs : false, smallfiles: true})
How can smallfiles be enabled for a sharding test? Thanks!
A good way to determine how to use a MongoDB shell command is to type the command without the parentheses into the shell; instead of running, it will print the source code for the command. So if you run
ShardingTest
at the command prompt you will see all of the source code. Around line 30 you'll see this comment:
// Allow specifying options like :
// { mongos : [ { noprealloc : "" } ], config : [ { smallfiles : "" } ], shards : { rs : true, d : true } }
which gives you the correct syntax to pass configuration parameters for mongos, config and shards (which apply to the non-replica-set mongods for all the shards). That is, instead of specifying a number for shards you pass in an object. Digging further in the code:
else if( isObject( numShards ) ){
    tempCount = 0;
    for( var i in numShards ) {
        otherParams[ i ] = numShards[i];
        tempCount++;
    }
    numShards = tempCount;
This will take an object and use the subdocuments within the object as option parameters for each shard. This leads to, using your example:
cluster = new ShardingTest({shards : {d0:{smallfiles:''}, d1:{smallfiles:''}, d2:{smallfiles:''}}})
which from the output I can see is starting the shards with --smallfiles:
shell: started program mongod --port 30000 --dbpath /data/db/test0 --smallfiles --setParameter enableTestCommands=1
shell: started program mongod --port 30001 --dbpath /data/db/test1 --smallfiles --setParameter enableTestCommands=1
shell: started program mongod --port 30002 --dbpath /data/db/test2 --smallfiles --setParameter enableTestCommands=1
Alternatively, since you now have the source code in front of you, you could modify the javascript to pass in smallfiles by default.
A thorough explanation of the invocation modes of ShardingTest() can be found in the source code of the function itself.
E.g., you could set smallfiles for two shards as follows:
cluster = new ShardingTest({shards: {d0:{smallfiles:''}, d1:{smallfiles:''}}})
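Going by the option format quoted from the ShardingTest source above, the flag can presumably also be passed to the config servers; a hedged sketch following that comment (key names taken from the quoted source, not verified against every shell version):
// smallfiles for both shard mongods and the config server, using the
// { mongos: [...], config: [...], shards: {...} } format shown in the source comment.
cluster = new ShardingTest({
    shards: { d0: { smallfiles: "" }, d1: { smallfiles: "" } },
    config: [ { smallfiles: "" } ]
});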

What does "fastmod" means in mongodb logs

I thought fastmod indicates certain operations such as an in-place update.
In my app I'm doing updates by _id using '$' modifiers, for example:
$collection->update(
    array('_id' => $id),
    array(
        '$inc' => array('hits' => new MongoInt32(1)),
        '$set' => array(
            'times.gen' => gettimeofday(true),
            'http.code' => new MongoInt32(200)
        )
    ),
    array('safe' => false, 'multiple' => false, 'upsert' => false)
);
I get logs like this:
Wed Jul 25 11:08:36 [conn7002912] update mob.stat_pages query: { _id: BinData } update: { $inc: { hits: 1 }, $set: { times.gen: 1343203715.684896, http.code: 200 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:342973 342ms
As you can see, in the logs I don't have any "fastmod" flag. There is no "moved" flag either, because I set the fields 'times.gen' and 'http.code' on insert, so the padding factor is 1.0.
Am I doing something wrong, or have I misunderstood the meaning of fastmod?
You are correct that "fastmod" in the logs means an in-place update. Some possible reasons for the omission of logged fastmod/in-place operations:
You are actually setting or incrementing a field that doesn't exist, so it must be added, which is not an in-place operation
The logs only show slow queries (default >100ms), so the in-place ones are probably happening too fast to be logged
You seem to be using 2.1 or 2.2 judging by the log - did the messages disappear if/when you switched to the new version?
In terms of looking into this further:
Have a look at the profiler and try different settings; note that profiling adds load, so use it carefully.
You can also try setting the slowms value lower, either on startup or:
> db.setProfilingLevel(0,20) // slow threshold=20ms
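After lowering the threshold (or turning the profiler on), one way to see whether an update was applied in place is to read back the recent profile entries for the collection; a minimal sketch, assuming the mob.stat_pages namespace from the log line above:
// Profile operations slower than 20 ms on the current database, then look at
// the most recent update entries for the collection (older servers include a
// fastmod field in these documents).
db.setProfilingLevel(1, 20)
db.system.profile.find({ ns: "mob.stat_pages", op: "update" })
  .sort({ ts: -1 })
  .limit(5)
  .pretty()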