How do I check the expiration date on a memcached entry?

I'm running Apache/PHP/memcached on Ubuntu 12.04.
The keys I am setting are persisting beyond their expiration. We are using the same code that worked with the memcached PaaS we are migrating from.
How do I confirm that the expiration is actually being set on a key?
If I telnet in and run get [mykey], it simply shows the value, not the expiration.

You can verify the expiration time of an item in memcached with the stats cachedump command. cachedump requires two numeric arguments: the slab class ID and the number of items to display. Memcached returns a list of items, each with its size (in bytes) and a Unix timestamp (the time at which the item expires).
Example telnet session
// Add something to cache
>> set apples 0 130 3
>> Foo
<< STORED
// List ITEM stats
>> stats items
<< STAT items:1:number 1
<< STAT items:1:age 7
<< STAT items:1:evicted 0
<< STAT items:1:evicted_nonzero 0
<< STAT items:1:evicted_time 0
<< STAT items:1:outofmemory 0
<< STAT items:1:tailrepairs 0
<< STAT items:1:reclaimed 0
<< STAT items:1:expired_unfetched 0
<< STAT items:1:evicted_unfetched 0
<< END
// Verify expiration time
>> stats cachedump 1 1
<< ITEM apples [3 b; 1393944062 s]
<< END
And 1393944062 => Tue, 04 Mar 2014 08:41:02 -0600
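The same check can be scripted. A minimal Python sketch (the line format is exactly the one shown in the session above) that parses a cachedump ITEM line and converts the expiry into a readable UTC time:

```python
import re
from datetime import datetime, timezone

def parse_cachedump_line(line):
    """Parse an 'ITEM <key> [<size> b; <expiry> s]' line from stats cachedump."""
    m = re.match(r"ITEM (\S+) \[(\d+) b; (\d+) s\]", line)
    if m is None:
        return None
    key, size, expiry = m.group(1), int(m.group(2)), int(m.group(3))
    # The timestamp is the absolute Unix time at which the item expires.
    return key, size, datetime.fromtimestamp(expiry, tz=timezone.utc)

key, size, expires = parse_cachedump_line("ITEM apples [3 b; 1393944062 s]")
print(key, size, expires.isoformat())  # apples 3 2014-03-04T14:41:02+00:00
```

which agrees with the -0600 local time shown above.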

Showing Google Drive files in Flutter App using Google Apps Script

This is my first question on Stack Overflow, so I hope I am able to do it properly.
I have been trying to solve this problem for a few weeks now. My needs are as follows:
Administrator able to save files in Google Drive in a specific folder for the app.
This folder will have subfolders and each subfolder can have files (Docs, Slides, Drawings, Spreadsheets).
An Excel spreadsheet that lists all these files with their IDs can easily be generated using Apps Script.
The spreadsheet can then be imported into Flutter App. Example here.
Now, when a user selects a file list item in the Flutter app, I want to show the selected file's content inside the Flutter app without having to download it as a separate file. (This is where I am having issues.)
I have tried and managed to get the blob of the file, but that's where I get stuck: I don't know how to use that blob in Flutter.
Here is my trial Google Apps Script code (it's not complete, just an example to get the blob):
function doGet(e) {
  //var params = JSON.stringify(e);
  var appFolderId = "1HsJZytwvtNZJK9x7Q6AZNosN6_addjLo";
  //var appFolderName = "";
  var appFolder = DriveApp.getFolderById(appFolderId);
  Logger.log(appFolder.getName());
  var filesJsonString = JSON.stringify(getFilesInFolder(appFolder));
  Logger.log(filesJsonString);
  //return ContentService.createTextOutput('Hello, world!');
}
//Return json object containing array of file objects.
function getFilesInFolder(folder) {
  var filesArray = [];
  var files = folder.getFiles();
  while (files.hasNext()) {
    var fileObj = {};
    var file = files.next();
    fileObj['id'] = file.getId();
    fileObj['mime'] = file.getMimeType();
    fileObj['name'] = file.getName();
    fileObj['desc'] = file.getDescription();
    fileObj['url'] = file.getUrl();
    fileObj['download'] = file.getDownloadUrl();
    var fileBlob = file.getBlob();
    var raw = fileBlob.getDataAsString();
    fileObj['blob'] = raw;
    filesArray.push(fileObj);
  }
  return { filesArray: filesArray };
}
The result of this code is as follows (the blob it contains is huge):
[20-10-17 04:30:35:653 PDT] MyApp
[20-10-17 04:30:37:325 PDT] Logging output too large. Truncating output. {"filesArray":[{"id":"1wzR-7fWT_vA28j82scazk7jVSUnDwSjkwjvup8Ovr9k","mime":"application/vnd.google-apps.document","name":"MApp Readme","desc":null,"url":"https://docs.google.com/document/d/1wzR-7fWT_vA28j82scazSjkwjvuk7jVSUnDwp8Ovr9k/edit?usp=drivesdk","download":null,"blob":"%PDF-1.5\n... (several kilobytes of raw PDF binary data omitted) ...
Interestingly, the contents of this file are just a few lines of text and links in a Google Doc, but the blob is huge.
Script.google.com - Script project created
https://developers.google.com/apps-script/guides/content
https://developers.google.com/apps-script/guides/services/authorization
https://developers.google.com/apps-script/guides/web
https://developers.google.com/apps-script/guides/web#permissions
https://developers.google.com/apps-script/guides/services/advanced#enabling_advanced_services
https://developers.google.com/apps-script/advanced/drive
https://developers.google.com/apps-script/reference/drive
https://developers.google.com/apps-script/guides/services/quotas
At this stage, I am wondering if I am on the right track. I know Flutter can use Firebase as a database to store app data, which might be easy to search, filter and scale up. But from an admin's point of view, it's easier to add files to Google Drive. Flutter cannot directly show the admin's Google Drive files using the Google Drive API (it gives access to the logged-in user's Drive files instead). Hence, I am taking the Google Apps Script approach.
Edit:
I found another page that does a similar thing, but it relies on the web browser to show the file or allow downloading it. Is it possible to display the blob in the Flutter app itself? Or is there a dependency that will automatically show a Google Drive file in a Flutter app as-is?
Flutter web - display pdf file generated in application. (Uint8List format)

WinDbg: How do I include a thread id and time value in a breakpoint .printf _without_ using pseudo registers?

I have some breakpoint "pairs," and I'd like to measure the time in between when they are hit.
The simplest thing that would allow me to do this is to include some sort of timestamp (even if it's just clock ticks or something) in the .printf I use when the breakpoint is hit.
I could use the pseudo registers $tid and $dbgtime in the breakpoint code. When I do, the performance really suffers.
bp1000 ucrtbase!malloc ".printf \"[0x%08x] [ucrtbase] [0x%04x] [0x%08x] malloc(%d): \", $dbgtime, $tid, dwo(#esp), dwo(#esp+4); gc "
When the same code is used (without using meaningful values for timestamp and thread id), things work much better.
bp1000 ucrtbase!malloc ".printf \"[0x%08x] [ucrtbase] [0x%04x] [0x%08x] malloc(%d): \", 0, 0, dwo(#esp), dwo(#esp+4); gc "
Is there some other (high-performance) way to get this information? The current time is more valuable than the thread ID. I can always make the breakpoint only apply to a specific thread so that emitting the ID is only sugar.
Try this:
0:000> bp ucrtbase!malloc "~# ; .echotime ; dd #$csp l2 ; gc ;"
0:000> bl
0 e 00007ff8`ab61c9e0 0001 (0001) 0:**** ucrtbase!malloc "~# ; .echotime ; dd #$csp l2 ; gc ;"
0:000> g
. 0 Id: 1a84.1f14 Suspend: 1 Teb: 00000018`f49d1000 Unfrozen
Start: cdb!wmainCRTStartup (00007ff6`efd2bbf0)
Priority: 0 Priority class: 32 Affinity: f
Debugger (not debuggee) time: Wed Aug 7 22:17:44.992 2019
00000018`f47eeb58 ab622762 00007ff8
. 0 Id: 1a84.1f14 Suspend: 1 Teb: 00000018`f49d1000 Unfrozen
Start: cdb!wmainCRTStartup (00007ff6`efd2bbf0)
Priority: 0 Priority class: 32 Affinity: f
Debugger (not debuggee) time: Wed Aug 7 22:17:44.992 2019 (UTC + 5:30)
00000018`f47eeb08 ab622762 00007ff8

Data transmission Pi/MCP3008

I have a question regarding data transmission from a Raspberry Pi to an MCP3008. It is just a theoretical one. When they exchange bytes, does the master send 1 byte and receive 1 byte, then send the 2nd byte and receive the 2nd byte, OR does the master send 3 bytes and then receive 3 bytes? From my understanding it is the first one, am I right?
Adafruit's library for the MCP3008 has your answer. Check out the read_adc() function:
def read_adc(self, adc_number):
    """Read the current value of the specified ADC channel (0-7). The values
    can range from 0 to 1023 (10-bits).
    """
    assert 0 <= adc_number <= 7, 'ADC number must be a value of 0-7!'
    # Build a single channel read command.
    # For example channel zero = 0b11000000
    command = 0b11 << 6                  # Start bit, single channel read
    command |= (adc_number & 0x07) << 3  # Channel number (in 3 bits)
    # Note the bottom 3 bits of command are 0, this is to account for the
    # extra clock to do the conversion, and the low null bit returned at
    # the start of the response.
    resp = self._spi.transfer([command, 0x0, 0x0])
    # Parse out the 10 bits of response data and return it.
    result = (resp[0] & 0x01) << 9
    result |= (resp[1] & 0xFF) << 1
    result |= (resp[2] & 0x80) >> 7
    return result & 0x3FF
It sends a three-byte command, where only the first byte is non-zero:
resp = self._spi.transfer([command, 0x0, 0x0])
The response is three bytes, which contain the packed 10-bit ADC value:
resp = self._spi.transfer([command, 0x0, 0x0])
# Parse out the 10 bits of response data and return it.
result = (resp[0] & 0x01) << 9
result |= (resp[1] & 0xFF) << 1
result |= (resp[2] & 0x80) >> 7
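Note that SPI is full-duplex: each byte is shifted out on MOSI while a byte is simultaneously shifted in on MISO, so the three bytes are exchanged one at a time rather than in a separate send-then-receive phase. The bit packing used by the driver can be isolated into two small pure-Python helpers (a sketch mirroring the Adafruit code above, not part of the library):

```python
def mcp3008_command(channel):
    """Build the command byte: start bit, single-ended bit, then the
    3-bit channel number, left-aligned as in the Adafruit driver."""
    assert 0 <= channel <= 7, 'ADC number must be a value of 0-7!'
    return (0b11 << 6) | ((channel & 0x07) << 3)

def mcp3008_decode(resp):
    """Unpack the 10-bit reading from the three bytes clocked back in."""
    result = (resp[0] & 0x01) << 9   # bit 9 arrives at the end of byte 0
    result |= (resp[1] & 0xFF) << 1  # bits 8..1 fill byte 1
    result |= (resp[2] & 0x80) >> 7  # bit 0 is the top bit of byte 2
    return result & 0x3FF

print(bin(mcp3008_command(0)))             # 0b11000000
print(mcp3008_decode([0x01, 0xFF, 0x80]))  # 1023 (full scale)
```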

ZeroMQ push/pull pattern

I decided to write some test code to see how a one-pusher / many-pullers bundle works, and my suspicions came true.
Pullers receive messages in the order they were connected; for example, the 1st message is received by the 1st puller connected, the 2nd by the 2nd, etc. I simulated a situation where one of the pullers stays busy after receiving a message; when its turn came to receive another message, the message was queued to it anyway, so I have a 'lost' message. That's bad. I want that message to be received by the next free puller. Is that possible?
My test code (I use zmqpp as the bindings):
#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <zmqpp/zmqpp.hpp>

int main()
{
    auto _socket = sIpcContext->CreateNewSocket(zmqpp::socket_type::push);
    _socket->bind("tcp://*:4242");
    for (auto i = 0; i < 3; ++i)
    {
        new std::thread([&](int _idx)
        {
            auto idx = _idx;
            auto sock = sIpcContext->CreateNewSocket(zmqpp::socket_type::pull);
            sock->connect("tcp://127.0.0.1:4242");
            for (;;)
            {
                std::string msg;
                sock->receive(msg);
                std::cout << idx << " received: " << msg << std::endl;
                if (idx == 1)
                {
                    std::cout << "Puller 1 is now busy" << std::endl;
                    std::this_thread::sleep_for(std::chrono::seconds(10000));
                }
            }
        }, i);
    }
    for (auto i = 0;; ++i)
    {
        _socket->send(std::to_string(i));
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}
I get this output:
0 received: 0
0 received: 1
1 received: 2
Puller 1 is now busy
2 received: 3
0 received: 4
2 received: 6
0 received: 7
2 received: 9
0 received: 10
2 received: 12
0 received: 13
2 received: 15
As you can see, 5, 8, and so on are 'missed': they are actually queued to puller #1.
Yes, push/pull sockets are dumb enough to let that happen. You could use other sockets, such as router/dealer, to send work to a free worker.
The 0MQ Guide explains this case (calls it the Post Office Analogy):
It's the post office analogy. If you have one queue per counter, and
you have some people buying stamps (a fast, simple transaction), and
some people opening new accounts (a very slow transaction), then you
will find stamp buyers getting unfairly stuck in queues. Just as in a
post office, if your messaging architecture is unfair, people will get
annoyed.
The solution in the post office is to create a single queue so that
even if one or two counters get stuck with slow work, other counters
will continue to serve clients on a first-come, first-serve basis.
Long story short, you should use a ROUTER socket when dealing with slow-running workers.
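The difference between the two strategies can be illustrated without any sockets at all. Here is a small, purely illustrative Python simulation (no ZeroMQ API involved; the message numbers mirror the output above) contrasting PUSH-style round-robin assignment with a single shared queue:

```python
from collections import deque

def round_robin(messages, busy_worker, n_workers=3):
    """PUSH-style: each message is assigned to the next worker in turn,
    whether or not that worker is free. Messages assigned to the busy
    worker sit in its queue unprocessed."""
    processed, stuck = [], []
    for i, msg in enumerate(messages):
        worker = i % n_workers
        (stuck if worker == busy_worker else processed).append(msg)
    return processed, stuck

def single_queue(messages, busy_worker, n_workers=3):
    """Post-office style: workers pull from one shared queue, so a busy
    worker simply stops taking messages and nothing gets stranded."""
    queue = deque(messages)
    processed = []
    while queue:
        # any currently free worker takes the next message
        processed.append(queue.popleft())
    return processed

msgs = list(range(9))
done, stranded = round_robin(msgs, busy_worker=1)
print(stranded)                            # [1, 4, 7] -- stuck behind the busy puller
print(single_queue(msgs, busy_worker=1))   # all nine messages processed
```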

register multiple instances of application in the same host to same Net-SNMP agent

I've been struggling with this for a couple of days, and none of the solutions I've found works the way I'd like (I may be completely wrong; I haven't used SNMP in a very long time, though).
This is existing code in my company: a Perl application that connects to the Net-SNMP AgentX master using POE::Component::NetSNMP::Agent. The MIB for this application is defined with a base OID ending in .154. The MIB file defines 3 tables under it: status (.1), statistics (.2) and performance (.3). Everything works fine: registration with the agent goes fine, snmpwalk shows data being updated, etc.
But now a requirement has been implemented allowing multiple (up to 32) instances of the application to run on the same host. Monitoring shall be supported too, which brings the first issue: when connecting to AgentX more than once with the same OID, only one instance connects; the others are refused.
I've thought of making something like this:
.154
.1 (instance 1):
.1 (status table)
.2 (statistics table)
.3 (performance table)
.2 (instance 2):
.1 (status table)
.2 (statistics table)
.3 (performance table)
.3 (instance 3):
.1 (status table)
.2 (statistics table)
.3 (performance table)
[...]
.32 (instance 32):
.1 (status table)
.2 (statistics table)
.3 (performance table)
With this approach, each instance (they know their own id) can register to AgentX with no problems (nice!). Following the model above, tables for status, statistics and performance would be common to all instances.
querying to .154 would show the model above.
querying data for each specific instance by walking to .154.1, .154.2, etc would be possible too.
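The intended addressing is then just OID concatenation. A small Python sketch of the layout (illustrative only; the 1.3.6.1.1.1.2.154 prefix is the one that appears in the snmpwalk commands later in this question):

```python
BASE = "1.3.6.1.1.1.2.154"   # application base OID, ending in .154
TABLES = {"status": 1, "statistics": 2, "performance": 3}

def instance_oid(instance, table=None):
    """OID of one instance's subtree, or of a single table inside it."""
    assert 1 <= instance <= 32, "up to 32 instances per host"
    oid = "%s.%d" % (BASE, instance)
    if table is not None:
        oid = "%s.%d" % (oid, TABLES[table])
    return oid

print(instance_oid(2))                # 1.3.6.1.1.1.2.154.2
print(instance_oid(2, "statistics"))  # 1.3.6.1.1.1.2.154.2.2
```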
But I'm unable to get this running properly: smilint, snmpwalk and iReasoning complain about unexpected data types, data not being shown the right way, etc.
What I've tried so far:
Arrays: a main index per instance, with status, statistics and performance indexed by { main index, subindex }. Like this: SNMP: ASN.1 MIB Definitions. Referencing a table within a table.
Multiple definitions: re-define every table and component for the 32 instances, with different indices in the names. This mostly works, but not exactly the way I was expecting: an snmpwalk of the parent does not show any child, so the walk must be performed per subtree (…154.1, …154.2, etc.).
I've considered this solution as well: Monitoring multiple java processes on the same host via SNMP. But in my case does not work, as the instances connect to a common agent, they don't have their own agent running in a different port.
I have to admit I'm running out of ideas. Again, I could be completely wrong and could be facing the problem from a wrong perspective.
Is it possible to implement this the way I'm looking for? In SNMPv3 this could possibly be a good use for contexts, but they are not available in net-snmp to my knowledge.
edit
Solution number two from my list above is by far the one that works best.
From the parent MIB, 32 new child OIDs are defined:
sampleServer MODULE-IDENTITY
    LAST-UPDATED "201210101200Z"
    [...]
    DESCRIPTION  "Sample Server MIB module"
    REVISION     "201211191200Z" -- 19 November 2012
    DESCRIPTION  "Version 0.1"
    ::= { parentMibs 154 }
instance1 OBJECT IDENTIFIER ::= { sampleServer 1 }
instance2 OBJECT IDENTIFIER ::= { sampleServer 2 }
instance3 OBJECT IDENTIFIER ::= { sampleServer 3 }
instance4 OBJECT IDENTIFIER ::= { sampleServer 4 }
instance5 OBJECT IDENTIFIER ::= { sampleServer 5 }
instance6 OBJECT IDENTIFIER ::= { sampleServer 6 }
[...]
And the tables are repeated for each instance ID; a Python script wrote the big MIB file for this (I know...):
-- the table contains static information for instance number 1
-- this includes version, start time etc
sampleStatusTable1 OBJECT-TYPE
    SYNTAX      SEQUENCE OF SampleStatusEntry1
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION "sample Server instance1 Status, table"
    ::= { instance1 1 }
[...]
-- this table contains statistics and sums that change constantly
-- please note that depending on sample_server configuration not all
-- of these will be filled in
sampleStatisticsTable1 OBJECT-TYPE
    SYNTAX      SEQUENCE OF SampleStatisticsEntry1
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION "sample Server Statistics, table"
    ::= { instance1 2 }
[...]
-- performance figures that reflect the current load of sample_server
samplePerformanceTable1 OBJECT-TYPE
    SYNTAX      SEQUENCE OF SamplePerformanceEntry1
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION "sample Server Performance, table"
    ::= { instance1 3 }
[...]
snmpwalk output for each instance:
snmpwalk -M +/opt/sample_server/docs/mibs -m +COMPANY-SAMPLE-MIB -v2c -cpublic localhost 1.3.6.1.1.1.2.154.1
COMPANY-SAMPLE-MIB::sampleStatusInstance1 = INTEGER: 1
COMPANY-SAMPLE-MIB::sampleStatusVersion1 = STRING: "3.58"
COMPANY-SAMPLE-MIB::sampleStatusStartTime1 = STRING: "2014-12-13T00:06:27+0000"
COMPANY-SAMPLE-MIB::sampleStatisticsInstance1 = INTEGER: 1
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPInputTransactions1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPInputErrors1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputTransactions1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputErrorsRecoverable1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputErrors1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsEntry1.7 = INTEGER: 0
COMPANY-SAMPLE-MIB::samplePerformanceInstance1 = INTEGER: 1
COMPANY-SAMPLE-MIB::samplePerformanceQueueLoad1 = INTEGER: 0
COMPANY-SAMPLE-MIB::samplePerformanceThroughput1 = INTEGER: 0
snmpwalk -M +/opt/sample_server/docs/mibs -m +COMPANY-SAMPLE-MIB -v2c -cpublic localhost 1.3.6.1.1.1.2.154.2
COMPANY-SAMPLE-MIB::sampleStatusInstance2 = INTEGER: 1
COMPANY-SAMPLE-MIB::sampleStatusVersion2 = STRING: "3.58"
COMPANY-SAMPLE-MIB::sampleStatusStartTime2 = STRING: "2014-12-13T00:06:27+0000"
COMPANY-SAMPLE-MIB::sampleStatisticsInstance2 = INTEGER: 1
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPInputTransactions2 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPInputErrors2 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputTransactions2 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputErrorsRecoverable2 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputErrors2 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsEntry2.7 = INTEGER: 0
COMPANY-SAMPLE-MIB::samplePerformanceInstance2 = INTEGER: 1
COMPANY-SAMPLE-MIB::samplePerformanceQueueLoad2 = INTEGER: 0
COMPANY-SAMPLE-MIB::samplePerformanceThroughput2 = INTEGER: 0
But the result is not as good as I was expecting: an snmpwalk of the parent .154 shows only the data for .154.1 (instance 1), instead of every instance. I'm not sure if this is the expected behavior.
snmpwalk -M +/opt/sample_server/docs/mibs -m +COMPANY-SAMPLE-MIB -v2c -cpublic localhost 1.3.6.1.1.1.2.154
COMPANY-SAMPLE-MIB::sampleStatusInstance1 = INTEGER: 1
COMPANY-SAMPLE-MIB::sampleStatusVersion1 = STRING: "3.58"
COMPANY-SAMPLE-MIB::sampleStatusStartTime1 = STRING: "2014-12-13T00:06:27+0000"
COMPANY-SAMPLE-MIB::sampleStatisticsInstance1 = INTEGER: 1
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPInputTransactions1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPInputErrors1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputTransactions1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputErrorsRecoverable1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsHTTPOutputErrors1 = INTEGER: 0
COMPANY-SAMPLE-MIB::sampleStatisticsEntry1.7 = INTEGER: 0
COMPANY-SAMPLE-MIB::samplePerformanceInstance1 = INTEGER: 1
COMPANY-SAMPLE-MIB::samplePerformanceQueueLoad1 = INTEGER: 0
COMPANY-SAMPLE-MIB::samplePerformanceThroughput1 = INTEGER: 0