Connecting to shards in MongoDB mongo shell

I'm following the tutorial in the MongoDB: The Definitive Guide, 2nd edition, for a databases course, and it appears not to work in version 3.6.2.
Basically I have two mongo shells opened using mongo --nodb.
Then, in the first, I run cluster = new ShardingTest({"shards": 3, "chunksize": 1}) (which works and produces a steady stream of output).
In the second shell, the book says to run db = (new Mongo("localhost:30999")).getDB("test"), which fails. A colleague told me to instead run db = (new Mongo("localhost:20000")).getDB("test"), which worked.
Then I inserted data, which also worked. However, when running sh.status(), I got the message: printShardingStatus: this db does not have sharding enabled. be sure you are connecting to a mongos from the shell and not to a mongod.
After searching online, I figured I'd run sh.enableSharding(db), which also gave me the following error:
2018-03-01T11:05:22.654-0500 E QUERY [thread1] Error: not connected to a mongos :
sh._checkMongos#src/mongo/shell/utils_sh.js:8:15
sh._adminCommand#src/mongo/shell/utils_sh.js:18:9
sh.enableSharding#src/mongo/shell/utils_sh.js:98:12
#(shell):1:1
I'm running on a Windows 10 machine, and I have the correct environment variables set up and the db folder created, so any help/pointers would be much appreciated!
EDIT 1:
This error persists even if db.collection.ensureIndex() is run first.

Try connecting to the shell on port 20006 by opening a new mongo --nodb, then run db = (new Mongo("localhost:20006")).getDB("test")
This connects you to the mongos for all the shards, so sh.status() should now work, as should other commands such as setting the balancer state and starting the balancer.
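If you are unsure whether a given connection points at a mongos or a mongod, a quick check (a minimal sketch; isdbgrid is a standard command that only a mongos answers) is:
db.adminCommand({ isdbgrid: 1 })   // returns ok: 1 only on a mongos
// on a plain mongod this fails with a "no such command" error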

The commands below show how to run 3 shard instances and 3 config instances on your localhost. Each shard is itself a 3-member replica set (three mongod instances). Maybe it can help you:
# clean everything up
echo "killing mongod and mongos"
killall mongod
killall mongos
echo "removing data files"
rm -rf /data/config
rm -rf /data/shard*
# start a replica set and tell it that it will be shard0
echo "starting servers for shard 0"
mkdir -p /data/shard0/rs0 /data/shard0/rs1 /data/shard0/rs2
mongod --replSet s0 --logpath "s0-r0.log" --dbpath /data/shard0/rs0 --port 37017 --fork --shardsvr
mongod --replSet s0 --logpath "s0-r1.log" --dbpath /data/shard0/rs1 --port 37018 --fork --shardsvr
mongod --replSet s0 --logpath "s0-r2.log" --dbpath /data/shard0/rs2 --port 37019 --fork --shardsvr
sleep 5
# connect to one server and initiate the set
echo "Configuring s0 replica set"
mongo --port 37017 << 'EOF'
config = { _id: "s0", members:[
{ _id : 0, host : "localhost:37017" },
{ _id : 1, host : "localhost:37018" },
{ _id : 2, host : "localhost:37019" }]};
rs.initiate(config)
EOF
# start a replica set and tell it that it will be shard1
echo "starting servers for shard 1"
mkdir -p /data/shard1/rs0 /data/shard1/rs1 /data/shard1/rs2
mongod --replSet s1 --logpath "s1-r0.log" --dbpath /data/shard1/rs0 -port 47017 --fork --shardsvr
mongod --replSet s1 --logpath "s1-r1.log" --dbpath /data/shard1/rs1 --port 47018 --fork --shardsvr
mongod --replSet s1 --logpath "s1-r2.log" --dbpath /data/shard1/rs2 --port 47019 --fork --shardsvr
sleep 5
echo "Configuring s1 replica set"
mongo --port 47017 << 'EOF'
config = { _id: "s1", members:[
{ _id : 0, host : "localhost:47017" },
{ _id : 1, host : "localhost:47018" },
{ _id : 2, host : "localhost:47019" }]};
rs.initiate(config)
EOF
# start a replica set and tell it that it will be shard2
echo "starting servers for shard 2"
mkdir -p /data/shard2/rs0 /data/shard2/rs1 /data/shard2/rs2
mongod --replSet s2 --logpath "s2-r0.log" --dbpath /data/shard2/rs0 --port 57017 --fork --shardsvr
mongod --replSet s2 --logpath "s2-r1.log" --dbpath /data/shard2/rs1 --port 57018 --fork --shardsvr
mongod --replSet s2 --logpath "s2-r2.log" --dbpath /data/shard2/rs2 --port 57019 --fork --shardsvr
sleep 5
echo "Configuring s2 replica set"
mongo --port 57017 << 'EOF'
config = { _id: "s2", members:[
{ _id : 0, host : "localhost:57017" },
{ _id : 1, host : "localhost:57018" },
{ _id : 2, host : "localhost:57019" }]};
rs.initiate(config)
EOF
# now start 3 config servers
echo "Starting config servers"
mkdir -p /data/config/config-a /data/config/config-b /data/config/config-c
mongod --replSet csReplSet --logpath "cfg-a.log" --dbpath /data/config/config-a --port 57040 --fork --configsvr
mongod --replSet csReplSet --logpath "cfg-b.log" --dbpath /data/config/config-b --port 57041 --fork --configsvr
mongod --replSet csReplSet --logpath "cfg-c.log" --dbpath /data/config/config-c --port 57042 --fork --configsvr
echo "Configuring configuration server replica set"
mongo --port 57040 << 'EOF'
config = { _id: "csReplSet", members:[
{ _id : 0, host : "localhost:57040" },
{ _id : 1, host : "localhost:57041" },
{ _id : 2, host : "localhost:57042" }]};
rs.initiate(config)
EOF
# now start the mongos on a standard port
mongos --logpath "mongos-1.log" --configdb csReplSet/localhost:57040,localhost:57041,localhost:57042 --fork
echo "Waiting 60 seconds for the replica sets to fully come online"
sleep 60
echo "Connnecting to mongos and enabling sharding"
add shards and enable sharding on the test db
mongo <<'EOF'
use admin
db.runCommand( { addshard : "s0/localhost:37017" } );
db.runCommand( { addshard : "s1/localhost:47017" } );
db.runCommand( { addshard : "s2/localhost:57017" } );
db.runCommand( { enableSharding: "test" } );
db.runCommand( { shardCollection: "test.some_collection", key: { some_id:1 } } );
EOF
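Once the script finishes, a quick sanity check (a sketch; the mongos above was started without --port, so it listens on the default 27017) is to connect to it and print the cluster state:
mongo <<'EOF'
sh.status()
EOF
sh.status() should list the shards s0, s1 and s2, and show test as a sharded database.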

Related

ERROR Configuring mongoDB using Ansible (MongoNetworkError: connect ECONNREFUSED)

I'm trying to configure a MongoDB replica set using Ansible. I succeeded in installing MongoDB on the primary server and created the replica-set configuration file, but when I launch the playbook, I get an error of type: MongoNetworkError: connect ECONNREFUSED 3.142.150.62:28041
Does anyone have an idea how to solve this?
Attached are the playbook and the error from the Jenkins console.
Playbook:
---
- name: Play1
  hosts: hhe
  #connection: local
  become: true
  #remote_user: ec2-user
  #remote_user: root
  tasks:
    - name: Install gnupg
      package:
        name: gnupg
        state: present
    - name: Import the public key used by the package management system
      shell: wget -qO - https://www.mongodb.org/static/pgp/server-5.0.asc | sudo apt-key add -
    - name: Create a list file for MongoDB
      shell: echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list
    - name: Reload local package database
      command: sudo apt-get update
    - name: Installation of mongodb-org
      package:
        name: mongodb-org
        state: present
        update_cache: yes
    - name: Start mongodb
      service:
        name: mongod
        state: started
        enabled: yes
- name: Play2
  hosts: hhe
  become: true
  tasks:
    - name: create directories on all the EC2 instances
      shell: mkdir -p replicaset/member
- name: Play3
  hosts: secondary1
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary1
      shell: nohup mongod --port 28042 --bind_ip localhost,ec2-18-191-39-71.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
- name: Play4
  hosts: secondary2
  become: true
  tasks:
    - name: Start mongoDB with the following command on secondary2
      shell: nohup mongod --port 28043 --bind_ip localhost,ec2-18-221-31-81.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
- name: Play5
  hosts: arbiter
  become: true
  tasks:
    - name: Start mongoDB with the following command on arbiter
      shell: nohup mongod --port 27018 --bind_ip localhost,ec2-13-58-35-255.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
- name: Play6
  hosts: primary
  become: true
  tasks:
    - name: Start mongoDB with the following command on primary
      shell: nohup mongod --port 28041 --bind_ip localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com --replSet replica_demo --dbpath replicaset/member &
    - name: Create replicaset initialize file
      copy:
        dest: /tmp/replicaset_conf.js
        mode: "u=rw,g=r,o=rwx"
        content: |
          var cfg =
          {
            "_id" : "replica_demo",
            "version" : 1,
            "members" : [
              { "_id" : 0, "host" : "3.142.150.62:28041" },
              { "_id" : 1, "host" : "18.191.39.71:28042" },
              { "_id" : 2, "host" : "18.221.31.81:28043" }
            ]
          }
          rs.initiate(cfg)
    - name: Pause for a while
      pause: seconds=20
    - name: Initialize the replicaset
      shell: mongo /tmp/replicaset_conf.js
The error on the Jenkins console:
PLAY [Play6] *******************************************************************
TASK [Gathering Facts] *********************************************************
ok: [primary]
TASK [Start mongoDB with the following command on primary] *********************
changed: [primary]
TASK [Create replicaset initialize file] ***************************************
ok: [primary]
TASK [Pause for a while] *******************************************************
Pausing for 20 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [primary]
TASK [Initialize the replicaset] ***********************************************
fatal: [primary]: FAILED! => {"changed": true, "cmd": "/usr/bin/mongo 3.142.150.62:28041 /tmp/replicaset_conf.js", "delta": "0:00:00.146406", "end": "2022-08-11 09:46:07.195269", "msg": "non-zero return code", "rc": 1, "start": "2022-08-11 09:46:07.048863", "stderr": "", "stderr_lines": [], "stdout": "MongoDB shell version v5.0.10\nconnecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :\nconnect#src/mongo/shell/mongo.js:372:17\n#(connect):2:6\nexception: connect failed\nexiting with code 1", "stdout_lines": ["MongoDB shell version v5.0.10", "connecting to: mongodb://3.142.150.62:28041/test?compressors=disabled&gssapiServiceName=mongodb", "Error: couldn't connect to server 3.142.150.62:28041, connection attempt failed: SocketException: Error connecting to 3.142.150.62:28041 :: caused by :: Connection refused :", "connect#src/mongo/shell/mongo.js:372:17", "#(connect):2:6", "exception: connect failed", "exiting with code 1"]}
You already start the service with
service:
  name: mongod
  state: started
  enabled: yes
thus shell: nohup mongod ... & is pointless. You cannot start the mongod service multiple times, unless you use a different port and dbPath. You should prefer to start mongod as a service, i.e. systemctl start mongod or similar, instead of nohup mongod ... &. I prefer to use the configuration file (typically /etc/mongod.conf) rather than command line options.
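For illustration, a minimal /etc/mongod.conf for the primary might look like the sketch below. The port, bind addresses and replica set name are taken from the playbook above; the dbPath is a hypothetical absolute path, since a relative path like replicaset/member depends on the service's working directory:
# /etc/mongod.conf -- minimal sketch for the primary node
net:
  port: 28041
  bindIp: localhost,ec2-3-142-150-62.us-east-2.compute.amazonaws.com
replication:
  replSetName: replica_demo
storage:
  dbPath: /var/lib/mongodb/replicaset/member   # hypothetical absolute path
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
With that in place, systemctl restart mongod brings the member up with the right port and replica set name.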
The plain mongo command uses the default port 27017, i.e. it does not connect to the MongoDB instances you started in the task above.
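So the initialization task would have to target the instance explicitly, e.g. (a sketch using the primary's port from the playbook):
- name: Initialize the replicaset
  shell: mongo --port 28041 /tmp/replicaset_conf.js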
You should wait until the replica set is initiated. You can do it like this:
content: |
  var cfg =
  {
    "_id" : "replica_demo",
    "version" : 1,
    "members" : [
      { "_id" : 0, "host" : "3.142.150.62:28041" },
      { "_id" : 1, "host" : "18.191.39.71:28042" },
      { "_id" : 2, "host" : "18.221.31.81:28043" }
    ]
  }
  rs.initiate(cfg)
  while (! db.hello().isWritablePrimary ) { sleep(1000) }
You configured an ARBITER. However, an arbiter node is useful only with an even number of Replica Set members. With 3 members it does not make much sense. Anyway, you don't add the arbiter to your Replica Set, so what is the reason to define it?
Just a note: you don't have to create a temp file; you can execute the script directly, e.g. similar to this:
shell:
  cmd: mongo --eval '{{ script }}'
  executable: /bin/bash
vars:
  script: |
    var cfg =
    {
      "_id" : "replica_demo",
      ...
    }
    rs.initiate(cfg)
    while (! db.hello().isWritablePrimary ) { sleep(1000) }
    print(rs.status().ok)
register: ret
failed_when: ret.stdout_lines | last != "1"
Be aware of correct quoting.

MongoDB Replication addition failing

I'm trying to add 2 secondaries to a MongoDB replica set after successful initialization, but unfortunately it is failing.
repset_init.js file details
rs.add( { host: "10.0.1.170:27017" } )
rs.add( { host: "10.0.2.157:27017" } )
rs.add( { host: "10.0.3.88:27017" } )
Command which I executed for the replica set addition:
mongo -u xxxxx -p yyyy --authenticationDatabase admin --port 27017 repset_init.js
The command hangs in the terminal, and below is the log output:
{"t":{"$date":"2021-06-09T12:29:34.939+00:00"},"s":"I", "c":"REPL", "id":21393, "ctx":"conn2","msg":"Found self in config","attr":{"hostAndPort":"MongoD-1:27017"}}
{"t":{"$date":"2021-06-09T12:29:34.939+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn2","msg":"Slow query","attr":{"type":"command","ns":"local.system.replset","appName":"MongoDB Shell","command":{"replSetReconfig":{"_id":"Shard_0","version":2,"protocolVersion":1,"writeConcernMajorityJournalDefault":true,"members":[{"_id":0,"host":"MongoD-1:27017","arbiterOnly":false,"buildIndexes":true,"hidden":false,"priority":1.0,"tags":{},"slaveDelay":0,"votes":1},{"host":"10.0.2.157:27017","_id":1.0}],"settings":{"chainingAllowed":true,"heartbeatIntervalMillis":2000,"heartbeatTimeoutSecs":10,"electionTimeoutMillis":10000,"catchUpTimeoutMillis":-1,"catchUpTakeoverDelayMillis":30000,"getLastErrorModes":{},"getLastErrorDefaults":{"w":1,"wtimeout":0},"replicaSetId":{"$oid":"60c0b3566991d93637465f55"}}},"lsid":{"id":{"$uuid":"263568b4-ec31-4ea6-8f72-69cec80c1a7c"}},"$db":"admin"},"numYields":0,"reslen":38,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":3}},"ReplicationStateTransition":{"acquireCount":{"w":5}},"Global":{"acquireCount":{"r":1,"w":4}},"Database":{"acquireCount":{"w":2,"W":1}},"Collection":{"acquireCount":{"w":2}},"Mutex":{"acquireCount":{"r":2}}},"flowControl":{"acquireCount":2,"timeAcquiringMicros":3},"storage":{},"protocol":"op_msg","durationMillis":151}}
{"t":{"$date":"2021-06-09T12:29:34.940+00:00"},"s":"I", "c":"REPL", "id":21215, "ctx":"ReplCoord-1","msg":"Member is in new state","attr":{"hostAndPort":"10.0.2.157:27017","newState":"STARTUP"}}
{"t":{"$date":"2021-06-09T12:29:34.941+00:00"},"s":"I", "c":"REPL", "id":4508702, "ctx":"conn2","msg":"Waiting for the current config to propagate to a majority of nodes"}
{"t":{"$date":"2021-06-09T12:33:55.701+00:00"},"s":"I", "c":"CONTROL", "id":20712, "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"ShardingStateNotInitialized: sharding state is not yet initialized"}}
{"t":{"$date":"2021-06-09T12:33:55.701+00:00"},"s":"I", "c":"CONTROL", "id":20714, "ctx":"LogicalSessionCacheRefresh","msg":"Failed to refresh session cache, will try again at the next refresh interval","attr":{"error":"ShardingStateNotInitialized: sharding state is not yet initialized"}}
{"t":{"$date":"2021-06-09T12:34:35.029+00:00"},"s":"I", "c":"CONNPOOL", "id":22572, "ctx":"MirrorMaestro","msg":"Dropping all pooled connections","attr":{"hostAndPort":"10.0.2.157:27017","error":"ShutdownInProgress: Pool for 10.0.2.157:27017 has expired."}}
Additional details:
Shard_0:PRIMARY> rs.printSlaveReplicationInfo()
WARNING: printSlaveReplicationInfo is deprecated and may be removed in the next major release. Please use printSecondaryReplicationInfo instead.
source: 10.0.2.157:27017
syncedTo: Thu Jan 01 1970 00:00:00 GMT+0000 (UTC) 1623243005 secs (450900.83 hrs) behind the primary
Able to reach the node via port 27017
telnet 10.0.2.157 27017
Trying 10.0.2.157...
Connected to 10.0.2.157.
Escape character is '^]'.
My config file
net:
  bindIp: 0.0.0.0
  port: 27017
  ssl: {}
processManagement:
  fork: "true"
  pidFilePath: /var/run/mongodb/mongod.pid
replication:
  replSetName: Shard_0
security:
  authorization: enabled
  keyFile: /etc/zzzzzkey.key
setParameter:
  authenticationMechanisms: SCRAM-SHA-256
sharding:
  clusterRole: shardsvr
storage:
  dbPath: /data/dbdata
  engine: wiredTiger
systemLog:
  destination: file
  path: /data/log/mongodb.log
I'm initializing the replica set using the command below:
mongo --host 127.0.0.1 --port {{mongod_port}} --eval 'printjson(rs.initiate())'
Not sure what is causing this issue. Could you please help me?
The command looks a bit strange:
{
  "replSetReconfig": {
    "_id": "Shard_0",
    "members": [
      { "_id": 0, "host": "MongoD-1:27017", "arbiterOnly": false, "hidden": false, "priority": 1.0, "slaveDelay": 0, "votes": 1 },
      { "_id": 1.0, "host": "10.0.2.157:27017" }
    ]
  }
}
Why do you name your replica set Shard_0? Are you trying to set up a sharded cluster?
You add _id: 0, host: "MongoD-1:27017" and _id: 1.0, host: "10.0.2.157:27017", which is not consistent, i.e. you mixed an IP address and a hostname. The _id values 0 and 1.0 are also confusing.
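For illustration, a consistent member list could look like the sketch below (hypothetical in that it simply picks the IP addresses from your repset_init.js and plain integer _id values; rs.conf() and rs.reconfig() are the standard shell helpers, and rs.reconfig() must run on the primary):
cfg = rs.conf()
cfg.members = [
  { _id: 0, host: "10.0.1.170:27017" },
  { _id: 1, host: "10.0.2.157:27017" },
  { _id: 2, host: "10.0.3.88:27017" }
]
rs.reconfig(cfg)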
What do your config files look like, and how did you start the MongoDB services?

Mongod Replica set aborting after invariant() failure due to Stable timestamp Timestamp does not equal appliedThrough timestamp

I am a newbie to MongoDB. I was doing a POC on consuming documents using the Java client.
I am using version 4.2.5.
I have 3 instances of mongod running locally as a replica set, started as below.
mongod --port 27017 --dbpath /data/d1/ --replSet rs0 --bind_ip localhost
mongod --port 27018 --dbpath /data/d2/ --replSet rs0 --bind_ip localhost
mongod --port 27019 --dbpath /data/d3/ --replSet rs0 --bind_ip localhost
After a certain time, one or two of the instances abort, and when I try to start them again, I see the same error. I am not sure what causes this error.
Any help would be appreciated.
Error:
2020-05-25T19:37:47.126+0530 I REPL [initandlisten] Rollback ID is 1
2020-05-25T19:37:47.128+0530 F - [initandlisten] Invariant failure !stableTimestamp || stableTimestamp->isNull() || appliedThrough.isNull() || *stableTimestamp == appliedThrough.getTimestamp() Stable timestamp Timestamp(1590410112, 1) does not equal appliedThrough timestamp { ts: Timestamp(1590410172, 1), t: 5 } src/mongo/db/repl/replication_recovery.cpp 412
2020-05-25T19:37:47.128+0530 F - [initandlisten]
***aborting after invariant() failure
2020-05-25T19:37:47.137+0530 F - [initandlisten] Got signal: 6 (Abort trap: 6).
0x109e10cc6 0x109e1054d 0x7fff5d3c9b5d 0xa00 0x7fff5d2836a6 0x109e04d4a 0x1083597af 0x1083722ba 0x108376eb9 0x108077c6c 0x108071744 0x108070999 0x7fff5d1de3d5 0x9
----- BEGIN BACKTRACE -----
"backtrace":[{"b":"10806F000","o":"1DA1CC6","s":"_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE"},{"b":"10806F000","o":"1DA154D","s":"_ZN5mongo12_GLOBAL__N_110abruptQuitEi"},{"b":"7FFF5D3C5000","o":"4B5D","s":"_sigtramp"},{"b":"0","o":"A00"},
...
...
...
...
{ "path" : "/System/Library/PrivateFrameworks/BackgroundTaskManagement.framework/Versions/A/BackgroundTaskManagement", "machType" : 6, "b" : "7FFF41F95000", "vmaddr" : "7FFF3C6CD000", "buildId" : "2A396FC07B7930889A82FB93C1181A57" }, { "path" : "/usr/lib/libxslt.1.dylib", "machType" : 6, "b" : "7FFF5C842000", "vmaddr" : "7FFF56F7A000", "buildId" : "EC50E503AEEE3F50956F55E4AF4584D9" }, { "path" : "/System/Library/PrivateFrameworks/AppleSRP.framework/Versions/A/AppleSRP", "machType" : 6, "b" : "7FFF4177E000", "vmaddr" : "7FFF3BEB6000", "buildId" : "EDD16B2E4F353E13B389CF77B3CAD4EB" } ] }}
mongod(_ZN5mongo15printStackTraceERNSt3__113basic_ostreamIcNS0_11char_traitsIcEEEE+0x36) [0x109e10cc6]
mongod(_ZN5mongo12_GLOBAL__N_110abruptQuitEi+0xBD) [0x109e1054d]
libsystem_platform.dylib(_sigtramp+0x1D) [0x7fff5d3c9b5d]
??? [0xa00]
libsystem_c.dylib(abort+0x7F) [0x7fff5d2836a6]
mongod(_ZN5mongo22invariantFailedWithMsgEPKcRKNSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEES1_j+0x33A) [0x109e04d4a]
mongod(_ZN5mongo4repl23ReplicationRecoveryImpl16recoverFromOplogEPNS_16OperationContextEN5boost8optionalINS_9TimestampEEE+0x43F) [0x1083597af]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl21_startLoadLocalConfigEPNS_16OperationContextE+0x3AA) [0x1083722ba]
mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl7startupEPNS_16OperationContextE+0xE9) [0x108376eb9]
mongod(_ZN5mongo12_GLOBAL__N_114_initAndListenEi+0x28FC) [0x108077c6c]
mongod(_ZN5mongo12_GLOBAL__N_111mongoDbMainEiPPcS2_+0xDA4) [0x108071744]
mongod(main+0x9) [0x108070999]
libdyld.dylib(start+0x1) [0x7fff5d1de3d5]
??? [0x9]
----- END BACKTRACE -----
Abort trap: 6
There appears to be a ticket for this issue experienced by another user. You may consider engaging with MongoDB developers in that ticket to provide the requested information.

Set smallfiles in ShardingTest

I know there is a ShardingTest() object that can be used to create a testing sharding environment (see https://serverfault.com/questions/590576/installing-multiple-mongodb-versions-on-the-same-server), e.g.:
mongo --nodb
cluster = new ShardingTest({shards : 3, rs : false})
However, given that the disk space on my testing machine is limited and I'm getting "Insufficient free space for journal files" errors when using the above command, I'd like to set the smallfiles option. I have tried the following with no luck:
cluster = new ShardingTest({shards : 3, rs : false, smallfiles: true})
How can smallfiles be enabled for a sharding test, please? Thanks!
A good way to determine how to use a MongoDB shell command is to type the command without the parentheses into the shell; instead of running, it will print the source code of the command. So if you run
ShardingTest
at the command prompt you will see all of the source code. Around line 30 you'll see this comment:
// Allow specifying options like :
// { mongos : [ { noprealloc : "" } ], config : [ { smallfiles : "" } ], shards : { rs : true, d : true } }
which gives you the correct syntax to pass configuration parameters for mongos, config and shards (the latter apply to the non-replica-set mongods for all the shards). That is, instead of specifying a number for shards, you pass in an object. Digging further into the code:
else if( isObject( numShards ) ){
    tempCount = 0;
    for( var i in numShards ) {
        otherParams[ i ] = numShards[i];
        tempCount++;
    }
    numShards = tempCount;
This will take an object and use the subdocuments within the object as option parameters for each shard. This leads to, using your example:
cluster = new ShardingTest({shards : {d0:{smallfiles:''}, d1:{smallfiles:''}, d2:{smallfiles:''}}})
which from the output I can see is starting the shards with --smallfiles:
shell: started program mongod --port 30000 --dbpath /data/db/test0 --smallfiles --setParameter enableTestCommands=1
shell: started program mongod --port 30001 --dbpath /data/db/test1 --smallfiles --setParameter enableTestCommands=1
shell: started program mongod --port 30002 --dbpath /data/db/test2 --smallfiles --setParameter enableTestCommands=1
Alternatively, since you now have the source code in front of you, you could modify the JavaScript to pass in smallfiles by default.
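The same options comment also shows that the config servers take an array of option objects, so (a sketch following that comment) smallfiles can be enabled for the config server as well:
cluster = new ShardingTest({
    shards : { d0 : { smallfiles : '' }, d1 : { smallfiles : '' }, d2 : { smallfiles : '' } },
    config : [ { smallfiles : '' } ]
})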
A thorough explanation of the invocation modes of ShardingTest() can be found in the source code of the function itself.
E.g., you could set smallfiles for two shards as follows:
cluster = new ShardingTest({shards: {d0:{smallfiles:''}, d1:{smallfiles:''}}})

Can 1 shard server have a replica set while the rest don't

Is the below valid for MongoDB sharding?
sh.addShard( "shard01.local:27017" );
sh.addShard( "shard02.local:27017" );
sh.addShard( "rs0/shard03.local:27017,shard04.local:27017,shard05.local:27017" );
Meaning, I am setting up 3 shards, but only the third shard is a replica set. It is not working, as the config server does not understand rs0.
Does MongoDB handle this, or if one of the shards is a replica set, do all shards need to be replica sets?
Although it is not the best idea (each non-replicated shard is a single point of failure, so you might as well create all of them as non-replicated), you can mix replicated and non-replicated shards. A minimal working example is below. My guess is that you have an error somewhere else in your configuration.
# Create some directories
mkdir -p ./s0/ ./s1/rs0 ./s1/rs1 ./s1/rs2 ./cfg/
# Start first shard
mongod --logpath "s0.log" --dbpath ./s0/ --port 37017 --fork --shardsvr
# Start second shard
mongod --replSet s1 --logpath "s1-r0.log" --dbpath ./s1/rs0 --port 47017 --fork --shardsvr
mongod --replSet s1 --logpath "s1-r1.log" --dbpath ./s1/rs1 --port 47018 --fork --shardsvr
mongod --replSet s1 --logpath "s1-r2.log" --dbpath ./s1/rs2 --port 47019 --fork --shardsvr
# Start config server and mongos
mongod --logpath "cfg.log" --dbpath ./cfg/ --port 57040 --fork --configsvr
mongos --logpath "mongos.log" --configdb localhost:57040 --fork
# Configure rs
mongo --port 47017 << 'EOF'
rs.initiate(
{ _id: "s1", members:[
{ _id : 0, host : "localhost:47017" },
{ _id : 1, host : "localhost:47018" },
{ _id : 2, host : "localhost:47019" }]
});
EOF
# Configure sharding
mongo <<'EOF'
db.adminCommand( { addshard : "localhost:37017" } );
db.adminCommand( { addshard : "s1/"+"localhost:47017,localhost:47018,localhost:47019" } );
db.adminCommand({enableSharding: "test"})
db.adminCommand({shardCollection: "test.foo", key: {bar: 1}});
EOF
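To confirm the mixed setup, connect to the mongos and list the shards (a quick sketch; listShards is a standard mongos admin command):
mongo <<'EOF'
db.adminCommand( { listShards : 1 } );
EOF
The standalone shard should appear as a bare host (localhost:37017), while the replicated one shows its set name (s1/localhost:47017,localhost:47018,localhost:47019).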