I am running into this problem for the first time. I am running my app on a device with a distribution + Ad-Hoc provisioning profile, but I cannot launch the app the first time on the device; I keep getting this error continuously:
Mar 1 18:07:58 My-iPhon kernel[0] : launchd[276] Builtin profile: container (sandbox)
Mar 1 18:07:58 My-iPhon kernel[0] : launchd[276] Container: /private/var/mobile/Applications/E142C3CE-F6E0-4C77-ABE8-1B764DA216FE (sandbox)
Mar 1 18:07:58 My-iPhon com.apple.debugserver-189[261] : 1 +0.000000 sec [0105/0303]: error: ::task_for_pid ( target_tport = 0x0103, pid = 276, &task ) => err = 0x00000005 ((os/kern) failure) err = ::task_for_pid ( target_tport = 0x0103, pid = 276, &task ) => err = 0x00000005 ((os/kern) failure) (0x00000005)
Mar 1 18:07:58 My-iPhon mobile_house_arrest[280] : Max open files: 125
Mar 1 18:07:59 My-iPhon com.apple.debugserver-189[261] : 2 +0.417620 sec [0105/0303]: error: ::task_for_pid ( target_tport = 0x0103, pid = 276, &task ) => err = 0x00000005 ((os/kern) failure) err = ::task_for_pid ( target_tport = 0x0103, pid = 276, &task ) => err = 0x00000005 ((os/kern) failure) (0x00000005)
Mar 1 18:07:59 My-iPhon mobile_house_arrest[281] : Max open files: 125
Mar 1 18:07:59 My-iPhon mobile_house_arrest[282] : Max open files: 125
After launch, the app crashes, and in the device console I get this error:
Mar 1 18:11:44 My-iPhon backboardd[52] : BKSendGSEvent ERROR sending event type 50: (ipc/send) invalid destination port (0x10000003)
Mar 1 18:11:44 My-iPhon com.apple.launchd[1] (UIKitApplication:com.xxx.myApp[0x3077][276]) : (UIKitApplication:com.xxxx.myapp[0x3077]) Exited: Killed: 9
Mar 1 18:11:44 My-iPhon com.apple.debugserver-189[261] : 21 +216.166834 sec [0105/0303]: RNBRunLoopLaunchInferior DNBProcessLaunch() returned error: 'failed to get the task for process 276'
Mar 1 18:11:44 My-iPhon com.apple.debugserver-189[261] : error: failed to launch process (null): failed to get the task for process 276
Mar 1 18:11:44 My-iPhon backboardd[52] : Application 'UIKitApplication:com.xxxxx.myApp[0x3077]' quit with signal 9: Killed: 9
However, on the third attempt it runs normally!
I have tried many things:
Recreated my provisioning profile and also added an Entitlements.plist for Ad-Hoc distribution
Set the scheme's build configuration to Debug
Restarted my device
No matter what I try, I get this error the first time I run the app on my device. How can I solve it? Can anyone explain what is going on?
Try using a development certificate and provisioning profile while running from Xcode; the "failed to get the task for process" error means debugserver cannot attach to a distribution-signed build. Installing the Ad-Hoc IPA on the device and launching it by hand will work fine.
Use Ad-Hoc and distribution provisioning profiles when you are going to distribute the app outside Xcode or upload it to the App Store.
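If you want to confirm how a particular build is signed before installing it, one quick check (a minimal sketch run on the Mac that built the app; the .app path is a placeholder) is to dump the embedded entitlements and look at get-task-allow, which is what debugserver needs in order to attach:

codesign -d --entitlements :- "/path/to/Build/Products/Release-iphoneos/MyApp.app"
# A development-signed build contains <key>get-task-allow</key><true/>;
# Ad-Hoc / distribution builds do not, so task_for_pid() fails for the debugger.
security cms -D -i "/path/to/MyApp.app/embedded.mobileprovision"   # inspect the embedded profile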
I have these settings in the keepalived.conf file, but when I stop the HAProxy service the notify script is not executed, whereas when I restart the keepalived service it gets executed every time. Here are the details:
HAProxy: 1.8.8
Keepalived: 2.0.18
OS: Ubuntu 18.04
Python: 2.7
Cloud Service Provider: Hetzner
/etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
# Requires keepalived-1.1.13
script "/usr/bin/pkill -0 haproxy" # cheaper than pidof
interval 2 # check every 2 seconds
weight 2 # add 2 points of priority if OK
}
vrrp_instance real {
interface eth0
state MASTER
virtual_router_id 51
priority 101 # 101 on primary, 100 on secondary
virtual_ipaddress {
11.23.10.19/32 dev eth0 label eth0:1
}
track_script {
chk_haproxy
}
notify "/etc/keepalived/master.sh"
#notify_backup "/etc/keepalived/master.sh"
#notify_fault "/etc/keepalived/master.sh"
}
/etc/keepalived/master.sh
#!/bin/bash
export API_TOKEN='<api_token>'
export MASTER_SERVER_ID='<master_server_id>'
export BACKUP_SERVER_ID='<backup_server_id>'
BASE_API='https://api.hetzner.cloud/v1'
FLOATING_IP_ID='<floating_ip_id>'
INSTANCE="Load-Balancer-Master"
if [ "$HOSTNAME" = "$INSTANCE" ]; then
SERVER_ID=$BACKUP_SERVER_ID # switch to the backup server if
# master gets down
else
SERVER_ID=$MASTER_SERVER_ID # vice-versa
fi
echo "Server ID: " $SERVER_ID
HAS_FLOATING_IP=$(curl -H "Authorization: Bearer $API_TOKEN" -s 'https://api.hetzner.cloud/v1/servers/'$SERVER_ID|python -c "import sys,json; print( True if json.load(sys.stdin)['server']['public_net']['floating_ips'] else False)")
echo "Has Floating Ip: " $HAS_FLOATING_IP
if [ $HAS_FLOATING_IP = "False" ]; then
n=0
while [ $n -lt 10 ]
do
python /usr/local/bin/assign-ip $FLOATING_IP_ID $SERVER_ID && break
n=$((n+1))
sleep 3
done
fi
/usr/local/bin/assign-ip
#!/usr/bin/python
import os
import sys
import requests
import json
api_base = 'https://api.hetzner.cloud/v1'
def usage():
print('{0} [Floating IP] [Server ID]'.format(sys.argv[0]))
print('\nYour Hetzner API token must be in the "API_TOKEN"'
' environmental variable.')
def main(floating_ip_id, server_id):
payload = {'server': server_id}
headers = {'Authorization': 'Bearer {0}'.format(os.environ['API_TOKEN']),
'Content-type': 'application/json'}
url = api_base + "/floating_ips/{0}/actions/assign".format(floating_ip_id)
r = requests.post(url, headers=headers, data=json.dumps(payload))
resp = r.json()
if resp['action']['error']:
        print('{0}: {1}'.format(resp['action']['command'], resp['action']['error']['message']))  # the error details are nested under 'action'
sys.exit(1)
else:
print('Moving IP address to server: {0} with status:{1}'.format(server_id, resp['action']['status']))
if __name__ == "__main__":
if 'API_TOKEN' not in os.environ or not len(sys.argv) > 2:
usage()
sys.exit()
main(sys.argv[1], sys.argv[2])
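For reference, the helper can be tested by hand, separately from keepalived (a minimal sketch based on its own usage() message; the token and IDs are placeholders):

export API_TOKEN='<api_token>'
python /usr/local/bin/assign-ip '<floating_ip_id>' '<server_id>'
echo "exit status: $?"   # non-zero means the Hetzner API reported an error for the assign action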
When I stop HAProxy using sudo service haproxy stop and check its status, I get this response:
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2019-09-28 22:12:57 IST; 1s ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 26434 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE $EXTRAOPTS (code=exited, stat
Process: 26423 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SU
Main PID: 26434 (code=exited, status=143)
Sep 28 00:44:18 Load-Balancer-Master haproxy[26434]: Proxy nginx_pool started.
Sep 28 00:44:18 Load-Balancer-Master haproxy[26434]: Proxy nginx_pool started.
Sep 28 00:44:18 Load-Balancer-Master systemd[1]: Started HAProxy Load Balancer.
Sep 28 22:12:57 Load-Balancer-Master haproxy[26434]: [WARNING] 270/004418 (26434) : Exiting Master pr
Sep 28 22:12:57 Load-Balancer-Master haproxy[26434]: [ALERT] 270/004418 (26434) : Current worker 2643
Sep 28 22:12:57 Load-Balancer-Master haproxy[26434]: [WARNING] 270/004418 (26434) : All workers exite
Sep 28 22:12:57 Load-Balancer-Master systemd[1]: Stopping HAProxy Load Balancer...
Sep 28 22:12:57 Load-Balancer-Master systemd[1]: haproxy.service: Main process exited, code=exited, s
Sep 28 22:12:57 Load-Balancer-Master systemd[1]: haproxy.service: Failed with result 'exit-code'.
Sep 28 22:12:57 Load-Balancer-Master systemd[1]: Stopped HAProxy Load Balancer.
and in /var/log/syslog I see this:
Sep 28 18:35:41 Load-Balancer-Master systemd[1]: Started Session 114 of user driveu.
Sep 28 18:42:57 Load-Balancer-Master systemd[1]: Stopping HAProxy Load Balancer...
Sep 28 18:42:57 Load-Balancer-Master systemd[1]: haproxy.service: Main process exited, code=exited, status=143/n/a
Sep 28 18:42:57 Load-Balancer-Master systemd[1]: haproxy.service: Failed with result 'exit-code'.
Sep 28 18:42:57 Load-Balancer-Master systemd[1]: Stopped HAProxy Load Balancer.
Sep 28 18:42:57 Load-Balancer-Master Keepalived_vrrp[26884]: Script `chk_haproxy` now returning 1
Sep 28 18:42:57 Load-Balancer-Master Keepalived_vrrp[26884]: VRRP_Script(chk_haproxy) failed (exited with status 1)
Sep 28 18:42:57 Load-Balancer-Master Keepalived_vrrp[26884]: (real) Changing effective priority from 103 to 101
But the notify script does not get called, and the floating IP does not get assigned to the BACKUP instance. As I am really new to Keepalived, could anyone please help me fix this issue?
Update: I have solved this problem.
The interface should be the private network, and the private IPs of the MASTER and the BACKUP have to be specified with unicast_src_ip and unicast_peer (most likely because VRRP multicast does not make it between the cloud instances, so the peers never saw each other's advertisements until the traffic was switched to unicast). The modified configuration is here:
vrrp_script chk_haproxy {
# Requires keepalived-1.1.13
script "/usr/bin/pkill -0 haproxy" # cheaper than pidof
interval 2 # check every 2 seconds
weight 2 # add 2 points of priority if OK
}
vrrp_instance real {
interface ens10 # changed it from eth0
state MASTER
virtual_router_id 51
priority 101 # 101 on primary, 100 on secondary
unicast_src_ip 192.168.0.3
unicast_peer {
192.168.0.2
}
authentication {
auth_type PASS
auth_pass password
}
virtual_ipaddress {
11.23.10.19/32 dev eth0 label eth0:1
}
track_script {
chk_haproxy
}
notify "/etc/keepalived/master.sh"
#notify_backup "/etc/keepalived/master.sh"
#notify_fault "/etc/keepalived/master.sh"
}
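Two further notes in case anyone hits the same thing: keepalived runs notify only on an actual state transition (not on a mere priority change), and it passes the transition to the script as arguments ($1 = GROUP or INSTANCE, $2 = the instance name, $3 = the new state). So a handler can branch on the reported state instead of on $HOSTNAME; a minimal sketch (the script name is hypothetical, and it simply reuses master.sh from above):

#!/bin/bash
# /etc/keepalived/notify.sh -- keepalived calls it as: <script> INSTANCE <name> <state>
TYPE=$1     # "INSTANCE"
NAME=$2     # "real"
STATE=$3    # "MASTER", "BACKUP" or "FAULT"

case "$STATE" in
    MASTER)
        # This node just took over, so claim the floating IP.
        /etc/keepalived/master.sh
        ;;
    BACKUP|FAULT)
        # Nothing to do here; the new MASTER claims the IP itself.
        exit 0
        ;;
esac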
Why do my On-Failure/On-Success emails repeat their text, sometimes hundreds of times? The problem affects email tasks both On-Success and On-Failure.
I have a session with an On-Failure email to be sent. I created an Email Task and am referencing it under the session's Components.
The email arrives, but the text of the email has been repeated, in this case 41 times. In other emails, I have seen the text content repeated 250 times. Ultimately, I will see an error message in the session log.
Why would the text repeat itself?
In the Task definition the email text is:
** %e **
Folder: %n
Workflow: %w
Session: %s
Status: %e
%b
%c
%i
Mapping: %m
%l
%r
%g
Here is the email itself. I am only showing a portion where you can see the repeating text blocks. In this email, the text repeated 41 times. The text starts with my first line: '** Failed **'.
Why? Is this a bug? We use Informatica 10.0.
** Failed **
Folder: Davidson
Workflow: wf_m_Event_Wait_AUS
Session: s_m_Event_Wait_AUS_bo
Start Time: Mon Jun 26 10:25:14 2017
Completion Time: Mon Jun 26 10:25:17 2017
Elapsed time: 0:00:02 (h:m:s)
Mapping: m_Event_Wait_AUS_bo [version 6]
Total Rows Loaded = 0
Total Rows Rejected = 0** Failed **
Folder: Davidson
Workflow: wf_m_Event_Wait_AUS
Session: s_m_Event_Wait_AUS_bo
Start Time: Mon Jun 26 10:25:14 2017
Completion Time: Mon Jun 26 10:25:17 2017
Elapsed time: 0:00:02 (h:m:s)
Mapping: m_Event_Wait_AUS_bo [version 6]
Total Rows Loaded = 0
Total Rows Rejected = 0** Failed **
Folder: Davidson
Workflow: wf_m_Event_Wait_AUS
Session: s_m_Event_Wait_AUS_bo
Start Time: Mon Jun 26 10:25:14 2017
Completion Time: Mon Jun 26 10:25:17 2017
Elapsed time: 0:00:02 (h:m:s)
Mapping: m_Event_Wait_AUS_bo [version 6]
Total Rows Loaded = 0
Total Rows Rejected = 0
I've got an app that is served up by Hypnotoad, with no reverse proxy. It has 15 workers, with 2 clients allowed apiece. The app is launched via hypnotoad in foreground mode.
I am seeing the following in the log/production.log:
[Wed Apr 1 16:28:12 2015] [error] Worker 119914 has no heartbeat, restarting.
[Wed Apr 1 16:28:21 2015] [error] Worker 119910 has no heartbeat, restarting.
[Wed Apr 1 16:28:21 2015] [error] Worker 119913 has no heartbeat, restarting.
[Wed Apr 1 16:28:22 2015] [error] Worker 119917 has no heartbeat, restarting.
[Wed Apr 1 16:28:22 2015] [error] Worker 119909 has no heartbeat, restarting.
[Wed Apr 1 16:28:27 2015] [error] Worker 119907 has no heartbeat, restarting.
[Wed Apr 1 16:28:34 2015] [error] Worker 119905 has no heartbeat, restarting.
[Wed Apr 1 16:28:42 2015] [error] Worker 119904 has no heartbeat, restarting.
[Wed Apr 1 16:30:12 2015] [error] Worker 119912 has no heartbeat, restarting.
[Wed Apr 1 16:31:23 2015] [error] Worker 119918 has no heartbeat, restarting.
[Wed Apr 1 16:32:18 2015] [error] Worker 119911 has no heartbeat, restarting.
[Wed Apr 1 16:32:22 2015] [error] Worker 119916 has no heartbeat, restarting.
However, the workers are never restarted.
When I run an strace, the manager process appears to be valiantly trying to kill the (now expired) workers:
Process 119878 attached - interrupt to quit
restart_syscall(<... resuming interrupted call ...>) = 0
kill(119906, SIGKILL) = 0
kill(119917, SIGKILL) = 0
kill(119905, SIGKILL) = 0
kill(119910, SIGKILL) = 0
kill(119904, SIGKILL) = 0
kill(119914, SIGKILL) = 0
kill(119916, SIGKILL) = 0
kill(119908, SIGKILL) = 0
kill(119913, SIGKILL) = 0
kill(119915, SIGKILL) = 0
kill(119918, SIGKILL) = 0
kill(119912, SIGKILL) = 0
kill(119909, SIGKILL) = 0
kill(119911, SIGKILL) = 0
kill(119907, SIGKILL) = 0
stat("/xxx/xxx/xxx/hypnotoad.pid", {st_mode=S_IFREG|0644, st_size=6, ...}) = 0
poll([{fd=4, events=POLLIN|POLLPRI}], 1, 1000) = 0 (Timeout)
kill(119906, SIGKILL) = 0
kill(119917, SIGKILL) = 0
kill(119905, SIGKILL) = 0
kill(119910, SIGKILL) = 0
kill(119904, SIGKILL) = 0
kill(119914, SIGKILL) = 0
kill(119916, SIGKILL) = 0
kill(119908, SIGKILL) = 0
kill(119913, SIGKILL) = 0
kill(119915, SIGKILL) = 0
kill(119918, SIGKILL) = 0
kill(119912, SIGKILL) = 0
kill(119909, SIGKILL) = 0
kill(119911, SIGKILL) = 0
kill(119907, SIGKILL) = 0
stat("/xxx/xxx/xxx/hypnotoad.pid", {st_mode=S_IFREG|0644, st_size=6, ...}) = 0
poll([{fd=4, events=POLLIN|POLLPRI}], 1, 1000^C <unfinished ...>
Process 119878 detached
How can I troubleshoot this further to determine:
Why does Hypnotoad think it still needs to kill non-existent processes?
Why isn't it starting new ones?
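One thing worth checking first (a minimal sketch using PIDs taken from the strace output above): whether those workers are actually gone or are lingering as zombies. kill() on a zombie still returns 0, which would explain the manager re-sending SIGKILL forever without ever reaping the old workers or spawning new ones.

for pid in 119904 119905 119907 119909; do
    if kill -0 "$pid" 2>/dev/null; then
        # A STAT of "Z" means the process is a zombie waiting to be reaped by its parent.
        ps -o pid,stat,etime,cmd -p "$pid"
    else
        echo "PID $pid no longer exists (the manager's state is stale)"
    fi
done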
What does "Worker 31842 has no heartbeat, restarting" mean?
As long as they are accepting new connections, worker processes of all built-in preforking web servers send heartbeat messages to the manager process at regular intervals, to signal that they are still responsive. A blocking operation such as an infinite loop in your application can prevent this, and will force the affected worker to be restarted after a timeout. This timeout defaults to 20 seconds and can be extended with the attribute "heartbeat_timeout" in Mojo::Server::Prefork if your application requires it.
http://mojolicio.us/perldoc/Mojolicious/Guides/FAQ#What-does-Worker-31842-has-no-heartbeat-restarting-mean
MongoDB has unexpectedly crashed with the following stack trace:
Sun Dec 29 11:30:43 [conn410] build index XXX { referral: 1 }
Sun Dec 29 11:30:43 [conn410] build index done 26597 records 0.056 secs
Sun Dec 29 11:30:43 Invalid access at address: 0
Sun Dec 29 11:30:43 Got signal: 11 (Segmentation fault).
Sun Dec 29 11:30:43 Backtrace:
0xa83fc9 0xa845a0 0x7fdad2200490 0x54d7dc 0x83d551 0x83d756 0x83d8b0 0x9493e3 0x8c0b66 0x8cd4b0 0x8cdb06 0x8d21ce 0x8d3be5 0x8d40e0 0x8d6200 0x94360d 0x948375 0x8823bc 0x885405 0xa96a46
/usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x399) [0xa83fc9]
/usr/bin/mongod(_ZN5mongo24abruptQuitWithAddrSignalEiP7siginfoPv+0x220) [0xa845a0]
/lib64/libpthread.so.0(+0xf490) [0x7fdad2200490]
/usr/bin/mongod(_ZN5mongo24FieldRangeVectorIterator7advanceERKNS_7BSONObjE+0x4c) [0x54d7dc]
/usr/bin/mongod(_ZN5mongo11BtreeCursor29skipOutOfRangeKeysAndCheckEndEv+0x81) [0x83d551]
/usr/bin/mongod(_ZN5mongo11BtreeCursor12skipAndCheckEv+0x26) [0x83d756]
/usr/bin/mongod(_ZN5mongo11BtreeCursor7advanceEv+0x100) [0x83d8b0]
/usr/bin/mongod(_ZN5mongo8UpdateOp4nextEv+0x253) [0x9493e3]
/usr/bin/mongod(_ZN5mongo12QueryPlanSet6Runner6nextOpERNS_7QueryOpE+0x56) [0x8c0b66]
/usr/bin/mongod(_ZN5mongo12QueryPlanSet6Runner4nextEv+0x110) [0x8cd4b0]
/usr/bin/mongod(_ZN5mongo12QueryPlanSet6Runner22runUntilFirstCompletesEv+0x56) [0x8cdb06]
/usr/bin/mongod(_ZN5mongo12QueryPlanSet5runOpERNS_7QueryOpE+0x11e) [0x8d21ce]
/usr/bin/mongod(_ZN5mongo16MultiPlanScanner9runOpOnceERNS_7QueryOpE+0x525) [0x8d3be5]
/usr/bin/mongod(_ZN5mongo11MultiCursor10nextClauseEv+0x70) [0x8d40e0]
/usr/bin/mongod(_ZN5mongo11MultiCursorC1EPKcRKNS_7BSONObjES5_N5boost10shared_ptrINS0_8CursorOpEEEb+0x220) [0x8d6200]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES2_bbbRNS_7OpDebugEPNS_11RemoveSaverE+0x35d) [0x94360d]
/usr/bin/mongod(_ZN5mongo13updateObjectsEPKcRKNS_7BSONObjES2_bbbRNS_7OpDebugE+0x125) [0x948375]
/usr/bin/mongod(_ZN5mongo14receivedUpdateERNS_7MessageERNS_5CurOpE+0x47c) [0x8823bc]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x1105) [0x885405]
/usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x76) [0xa96a46]
Logstream::get called in uninitialized state
Sun Dec 29 11:30:43 ERROR: Client::~Client _context should be null but is not; client:conn
Logstream::get called in uninitialized state
Sun Dec 29 11:30:43 ERROR: Client::shutdown not called: conn
How can I find out what caused the crash? Is there another log that describes more details?
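(For anyone reading the trace above: the frame names are mangled C++ symbols, and they can be made readable with c++filt; for example, one frame from the list demangles like this.)

echo '_ZN5mongo24FieldRangeVectorIterator7advanceERKNS_7BSONObjE' | c++filt
# -> mongo::FieldRangeVectorIterator::advance(mongo::BSONObj const&)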
I have a replica set of 3 MongoDB instances. The instances have 8GB of RAM and Dual Core 2.27 GHz CPUs. All instances are running version 2.2.2 (I saw the same behavior from 2.0.1).
Here's my issue: Our primary instance (master of the replica set) recently acquired the habit of crawling to 100% CPU every 2 days. Tracking down the cause, I decided to run the MongoDB profiler. I found hundreds of extremely slow queries. Here is an example:
> db.system.profile.find()
{
"ts" : ISODate("2012-12-16T20:31:39.078Z"),
"op" : "command",
"ns" : "stylesaint.$cmd",
"command" : {
"count" : "tears",
"query" : {
"_id" : { "$gt" : ObjectId("50cdeadeaf58d3de96000294") },
"active" : true,
"is_image_processed" : true,
"hidden_from_feed" : false,
"hidden_from_public_feeds" : false
},
"fields" : null
},
"ntoreturn" : 1,
"responseLength" : 48,
"millis" : 13930,
"client" : "#########"
}
From what I've read about mongodb, the natural next step in these situations is to try explain()ing those queries. However, explain() does not explain the slowness of the query:
> db.tears.find({ "_id" : { "$gt" : ObjectId("50cdeadeaf58d3de96000294") }, "active" : true, "is_image_processed" : true, "hidden_from_feed" : false, "hidden_from_public_feeds" : false }).explain()
{
"cursor" : "BtreeCursor id",
"isMultiKey" : false,
"n" : 4,
"nscannedObjects" : 5,
"nscanned" : 5,
"nscannedObjectsAllPlans" : 23,
"nscannedAllPlans" : 25,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
"_id" : [
[
ObjectId("50cdeadeaf58d3de96000294"),
ObjectId("ffffffffffffffffffffffff")
]
]
},
"server" : "#########"
}
Scanning 5 documents should not take 13 seconds. Something else is going on that is slowing down the query. Maybe some other query is starving the server's resources? However, I don't know where to look. Any advice you can offer is appreciated.
MongoDB Logs
I couldn't find any warnings in the startup process:
***** SERVER RESTARTED *****
Sun Dec 16 21:02:56 [initandlisten] MongoDB starting : pid=...
Sun Dec 16 21:02:56 [initandlisten] db version v2.2.2, pdfile version 4.5
Sun Dec 16 21:02:56 [initandlisten] git version: ...
Sun Dec 16 21:02:56 [initandlisten] build info: Linux 2.6.21.7-2 ...
Sun Dec 16 21:02:56 [initandlisten] options: { config: "/etc/mongodb.conf", dbpath: "/data/mongodb", logappend: "true", logpath: "/var/log/mongodb/mongodb.log", replSet: "...", rest: "true" }
Sun Dec 16 21:02:56 [initandlisten] journal dir=/data/mongodb/journal
Sun Dec 16 21:02:56 [initandlisten] recover : no journal files present, no recovery needed
Sun Dec 16 21:02:56 [initandlisten] waiting for connections on port ...
Sun Dec 16 21:02:56 [websvr] admin web console waiting for connections on port ...
Sun Dec 16 21:02:56 [initandlisten] connection accepted from ...
Sun Dec 16 21:02:56 [conn1] end connection ... (0 connections now open)
Sun Dec 16 21:02:56 [initandlisten] connection accepted from ... #2 (1 connection now open)
Sun Dec 16 21:02:56 [rsStart] replSet I am ...
Sun Dec 16 21:02:56 [rsStart] replSet STARTUP2
Sun Dec 16 21:02:56 [rsHealthPoll] replSet member ... is up
Sun Dec 16 21:02:56 [rsHealthPoll] replSet member ... is now in state SECONDARY
Sun Dec 16 21:02:57 [initandlisten] connection accepted from ... #3 (2 connections now open)
Sun Dec 16 21:02:57 [rsSync] replSet SECONDARY
Sun Dec 16 21:02:58 [initandlisten] connection accepted from ... #4 (3 connections now open)
Sun Dec 16 21:02:58 [initandlisten] connection accepted from ... #5 (4 connections now open)
Sun Dec 16 21:02:58 [conn5] end connection ... (3 connections now open)
Sun Dec 16 21:02:58 [rsHealthPoll] replSet member ... is up
Sun Dec 16 21:02:58 [rsHealthPoll] replSet member ... is now in state PRIMARY
Sun Dec 16 21:02:59 [initandlisten] connection accepted from ... #6 (4 connections now open)
Sun Dec 16 21:03:00 [initandlisten] connection accepted from ... #7 (5 connections now open)
Sun Dec 16 21:03:02 [conn7] end connection ... (4 connections now open)
Sun Dec 16 21:03:03 [rsBackgroundSync] replSet syncing to: ...
Sun Dec 16 21:03:04 [rsSyncNotifier] replset setting oplog notifier to ...
Sun Dec 16 21:03:06 [conn2] end connection ... (3 connections now open)
Sun Dec 16 21:03:06 [initandlisten] connection accepted from ... #8 (4 connections now open)
Sun Dec 16 21:03:08 [initandlisten] connection accepted from ... #9 (5 connections now open)
Sun Dec 16 21:03:13 [initandlisten] connection accepted from ... #10 (6 connections now open)
Sun Dec 16 21:03:13 [conn10] end connection ... (5 connections now open)
Sun Dec 16 21:03:13 [initandlisten] connection accepted from ... #11 (6 connections now open)
Sun Dec 16 21:03:15 [conn3] end connection ... (5 connections now open)
Sun Dec 16 21:03:16 [rsHealthPoll] replSet member .... is now in state SECONDARY
Sun Dec 16 21:03:16 [rsMgr] replSet info electSelf 1
Sun Dec 16 21:03:16 [rsMgr] replSet PRIMARY
Re: Request for more info
At the moment, MongoDB is functioning normally; there are no queries above 100ms. As soon as 100% CPU happens again, I'll post more info about system resources.
First off, I think the queries are probably a red herring. Are you running these servers under a NUMA architecture? You might read over the Mongo docs for usage on NUMA systems.
If you are running on a NUMA system, then using numactl to run your daemon with an interleave policy will probably fix your issue.
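If it does turn out to be a NUMA box, the usual shape of the fix (a sketch of the standard recommendation; the config path is the one shown in your startup log) is to start mongod with interleaved memory and to turn off zone reclaim:

numactl --interleave=all mongod --config /etc/mongodb.conf
echo 0 | sudo tee /proc/sys/vm/zone_reclaim_mode    # MongoDB recommends zone_reclaim_mode = 0 on NUMA hosts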
You can check to see if you have any startup warnings. They will appear in your log while you're booting the daemon, and you can also find them after the fact while the daemon is running, though I don't recall the exact command off the top of my head.
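For what it's worth, something like this should work on 2.2 and later for pulling them from a running instance (a sketch only; double-check it against the docs for your version):

mongo --eval 'printjson(db.adminCommand({ getLog: "startupWarnings" }))'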
Failing that, you might check your IO operations while making those queries. If I had to guess, you're hitting your disk and not operating with your working set in memory. What do your memory usage stats (free -h and the memory usage metrics from inside the mongo console) look like?
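A quick way to capture all of that in one go while the CPU is pegged (a minimal sketch; iostat needs the sysstat package):

free -h                                            # overall memory / cache usage
mongo --eval 'printjson(db.serverStatus().mem)'    # resident vs. mapped memory as mongod sees it
iostat -x 5 3                                      # per-device I/O utilisation, three 5-second samples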