Check programmatically the status of an action in an Oozie workflow from another Oozie workflow - scala

I am running some code in an Oozie workflow named WF1, in an action named AC1. This workflow is not scheduled but runs continuously; usually action AC1 gets its turn 4 times a day, and the time at which it runs is not known in advance.
Now, there is another Oozie workflow, WF2, scheduled to run at 4:00 AM using an Oozie coordinator. WF2 runs for only 3-4 minutes, as it is a small job that needs to run in off-peak hours.
In WF2, we want to check the status of the workflow action AC1 (running as part of WF1; every time an AC1 instance runs, a new id gets assigned to it). Is it possible to get the status of AC1 using its name only, without knowing the id?
I know I have a workaround: I can store the status of AC1 in a Hive table and keep querying it to learn the status. But if something is offered out of the box, that would be helpful.

There are several ways to do it (as you mention).
The built-in way is to use the job information from the Oozie Web Services API.
You can issue a simple GET and receive a response containing the job status of every action; in the example below, you would walk the actions array, find your action by name, and check its status:
HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
.
{
id: "0-200905191240-oozie-W",
appName: "indexer-workflow",
appPath: "hdfs://user/bansalm/indexer.wf",
externalId: "0-200905191230-oozie-pepe",
user: "bansalm",
status: "RUNNING",
conf: "<configuration> ... </configuration>",
createdTime: "Thu, 01 Jan 2009 00:00:00 GMT",
startTime: "Fri, 02 Jan 2009 00:00:00 GMT",
endTime: null,
run: 0,
actions: [
{
id: "0-200905191240-oozie-W#indexer",
name: "AC1",
type: "map-reduce",
conf: "<configuration> ...</configuration>",
startTime: "Thu, 01 Jan 2009 00:00:00 GMT",
endTime: "Fri, 02 Jan 2009 00:00:00 GMT",
status: "OK",
externalId: "job-123-200903101010",
externalStatus: "SUCCEEDED",
trackerUri: "foo:8021",
consoleUrl: "http://foo:50040/jobdetailshistory.jsp?jobId=...",
transition: "reporter",
data: null,
errorCode: null,
errorMessage: null,
retries: 0
},
...
]
}
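A minimal sketch of this approach in Python (the Oozie URL is a placeholder, and the response shape is assumed to match the example above). Since AC1's workflow id changes on every run, you can first filter the jobs list by the workflow app name, then inspect the actions of the returned job:

```python
import json
import urllib.request

OOZIE = "http://oozie-host:11000/oozie"  # placeholder Oozie server URL


def action_status(job_info, action_name):
    """Return the status of the named action in a job-info dict, or None."""
    for action in job_info.get("actions", []):
        if action.get("name") == action_name:
            return action.get("status")
    return None


def fetch_ac1_status():
    # Find the most recent WF1 job by app name (the job id changes per run).
    url = f"{OOZIE}/v1/jobs?jobtype=wf&filter=name%3DWF1"
    with urllib.request.urlopen(url) as resp:
        jobs = json.load(resp)["workflows"]
    if not jobs:
        return None
    # Fetch full info (including actions) for the newest matching job.
    with urllib.request.urlopen(f"{OOZIE}/v1/job/{jobs[0]['id']}?show=info") as resp:
        return action_status(json.load(resp), "AC1")
```

`action_status` can be exercised directly against the sample response above; `fetch_ac1_status` is the network round trip.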

Related

Older oplog entries are not getting truncated

I have a mongo instance running with oplogMinRetentionHours set to 24 hours and the max oplog size set to 50 GB. But despite these config settings, oplog entries seem to be retained indefinitely: the oplog has entries older than 24 hours, and it has grown to 1.4 TB logically and 0.34 TB on disk.
db.runCommand( { serverStatus: 1 } ).oplogTruncation.oplogMinRetentionHours
24 hrs
db.getReplicationInfo()
{
"logSizeMB" : 51200,
"usedMB" : 1464142.51,
"timeDiff" : 3601538,
"timeDiffHours" : 1000.43,
"tFirst" : "Fri Mar 19 2021 14:15:49 GMT+0000 (Greenwich Mean Time)",
"tLast" : "Fri Apr 30 2021 06:41:27 GMT+0000 (Greenwich Mean Time)",
"now" : "Fri Apr 30 2021 06:41:28 GMT+0000 (Greenwich Mean Time)"
}
MongoDB server version: 4.4.0
OS: Windows Server 2016 DataCenter 64bit
What I have noticed is that even a super user with the root role is not able to access replset.oplogTruncateAfterPoint; I am not sure if this is by design.
mongod.log
{"t":{"$date":"2021-04-30T06:35:51.308+00:00"},"s":"I", "c":"ACCESS",
"id":20436, "ctx":"conn8","msg":"Checking authorization
failed","attr":{"error":{"code":13,"codeName":"Unauthorized","errmsg":"not
authorized on local to execute command { aggregate:
"replset.oplogTruncateAfterPoint", pipeline: [ { $indexStats: {} }
], cursor: { batchSize: 1000.0 }, $clusterTime: { clusterTime:
Timestamp(1619764547, 1), signature: { hash: BinData(0,
180A28389B6BBA22ACEB5D3517029CFF8D31D3D8), keyId: 6935907196995633156
} }, $db: "local" }"}}}
I am not sure why MongoDB would not delete older entries from the oplog.
MongoDB oplog truncation seems to be triggered by inserts: the oplog only gets truncated as and when new inserts happen, so if no inserts are happening, older entries can linger well past the configured window.
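The arithmetic behind the numbers in the question can be checked directly; a minimal sketch that reproduces the timeDiffHours value from db.getReplicationInfo() using the tFirst and tLast timestamps shown above:

```python
from datetime import datetime, timezone


def oplog_window_hours(t_first: datetime, t_last: datetime) -> float:
    """Hours of history in the oplog: the equivalent of timeDiffHours."""
    return round((t_last - t_first).total_seconds() / 3600, 2)


# The tFirst/tLast values reported in the question.
first = datetime(2021, 3, 19, 14, 15, 49, tzinfo=timezone.utc)
last = datetime(2021, 4, 30, 6, 41, 27, tzinfo=timezone.utc)

# ~1000 hours of history, far beyond the configured 24-hour minimum
# retention -- consistent with truncation simply not having run because
# no new inserts were arriving to trigger it.
print(oplog_window_hours(first, last))  # 1000.43
```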

Unable to deploy java chain code which is provided by IBM Bluemix

Request from restclient:
POST http://localhost:7050/chaincode
Request:
{
"jsonrpc": "2.0",
"method": "deploy",
"params": {
"type": 1,
"chaincodeID":{
"name": "raja"
},
"ctorMsg": {
"args":["init", "a", "100", "b", "200"]
}
},
"id": 5
}
Register java chain code with chaincode id name:
rajasekhar#rajasekhar-VirtualBox:~/mychaincode/src/github.com/hyperledger/fabric/examples/chaincode/java/chaincode_example02/build/distributions/chaincode_example02/bin$ CORE_CHAINCODE_ID_NAME=raja ./chaincode_example02
Jun 13, 2017 1:24:06 PM org.hyperledger.fabric.shim.ChaincodeBase newPeerClientConnection
INFO: Configuring channel connection to peer.
Jun 13, 2017 1:24:09 PM org.hyperledger.fabric.shim.ChaincodeBase chatWithPeer
INFO: Connecting to peer.
Jun 13, 2017 1:24:09 PM io.grpc.internal.TransportSet$1 call
INFO: Created transport io.grpc.netty.NettyClientTransport#599c4539(/127.0.0.1:7051) for /127.0.0.1:7051
Jun 13, 2017 1:24:10 PM io.grpc.internal.TransportSet$TransportListener transportReady
INFO: Transport io.grpc.netty.NettyClientTransport#599c4539(/127.0.0.1:7051) for /127.0.0.1:7051 is ready
Jun 13, 2017 1:24:10 PM org.hyperledger.fabric.shim.ChaincodeBase chatWithPeer
INFO: Registering as 'raja' ... sending REGISTER
java.lang.RuntimeException: [raja]Chaincode handler org.hyperledger.fabric.shim.fsm cannot handle message (INIT) with payload size (23) while in state: established
at org.hyperledger.fabric.shim.impl.Handler.handleMessage(Handler.java:493)
at org.hyperledger.fabric.shim.ChaincodeBase$1.onNext(ChaincodeBase.java:188)
at org.hyperledger.fabric.shim.ChaincodeBase$1.onNext(ChaincodeBase.java:181)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:305)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$2.runInContext(ClientCallImpl.java:423)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:54)
at io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
You will need to provide more information about what you have done so far; with just the error message, it is not possible to pinpoint the cause of the failure.
There is excellent documentation available on how to write Java chaincode for blockchain: https://www.ibm.com/developerworks/library/j-chaincode-for-java-developers/index.html
I hope you have seen the above documentation. Do go through the steps one by one; it is extensive, covering everything from setting up your environment to writing your first chaincode in Java.
Hope this helps.
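For what it's worth, the deploy request from the question can be scripted rather than issued from a REST client; a minimal sketch (the endpoint and payload are exactly the ones shown in the question, so adjust the host/port for your peer):

```python
import json
import urllib.request

# The JSON-RPC deploy payload shown in the question.
payload = {
    "jsonrpc": "2.0",
    "method": "deploy",
    "params": {
        "type": 1,
        "chaincodeID": {"name": "raja"},
        "ctorMsg": {"args": ["init", "a", "100", "b", "200"]},
    },
    "id": 5,
}


def deploy(url="http://localhost:7050/chaincode"):
    """POST the deploy payload to the peer's REST endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```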

How to get the timestamp difference between the first and last records inserted into a MongoDB collection

How can we use this particular code in the execution process:
now = new Date();
current_date = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate(), now.getUTCHours(), now.getUTCMinutes(), now.getUTCSeconds()));
end_date = obj.end_date; // e.g. Mon, 02 Apr 2012 20:16:35 GMT
var millisDiff = end_date.getTime() - current_date.getTime();
console.log(millisDiff / 1000);
I have already inserted my records into a collection. Each record has many parameters. The first record says the status is Started and the last record says the process has Ended; every record also carries a timestamp for when it started or ended.
How can I get the time difference between the two, and how can I use the above code to do it?
db.trials.insert( { _id: ObjectId("51d2750d16257024e046c0d7"), Application: "xxx", ProcessContext: "SEARCH", InteractionID: "I001", opID: "BBB", buID: "Default", HostName: "xxx-ttt-tgd-002", InstanceName: "Instance1", EventTime: ISODate("2011-11-03T14:23:00Z"), HOPID: "", HOPName: "", HOPStatus: "", INTStatus: "Started" } );
db.trials.insert( { _id: ObjectId("51d2750d16257024e046c0d8"), Application: "xxx", ProcessContext: "SEARCH", InteractionID: "I001", opID: "BBB", buID: "Default", HostName: "xxx-ttt-tgd-002", InstanceName: "Instance1", EventTime: ISODate("2011-11-03T14:23:58Z"), HOPID: "", HOPName: "", HOPStatus: "", INTStatus: "Ended" } );
These are the two records for which I want to know the timestamp difference using the above logic. How can I apply that logic to get the difference?
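One way to get that difference without any client-side date juggling is to have MongoDB aggregate the earliest and latest EventTime; a minimal sketch in Python (the collection and field names are the ones from the inserts above; the pymongo connection itself is left out):

```python
from datetime import datetime, timezone

# Aggregation pipeline: group the whole collection, take the earliest and
# latest EventTime, then subtract (subtracting two dates yields milliseconds).
pipeline = [
    {"$group": {"_id": None,
                "first": {"$min": "$EventTime"},
                "last": {"$max": "$EventTime"}}},
    {"$project": {"diffMillis": {"$subtract": ["$last", "$first"]}}},
]
# With pymongo this would be run as: db.trials.aggregate(pipeline)


def seconds_between(first: datetime, last: datetime) -> float:
    """Client-side equivalent of the pipeline's diffMillis / 1000."""
    return (last - first).total_seconds()


# The two EventTime values from the inserts above differ by 58 seconds.
t1 = datetime(2011, 11, 3, 14, 23, 0, tzinfo=timezone.utc)
t2 = datetime(2011, 11, 3, 14, 23, 58, tzinfo=timezone.utc)
print(seconds_between(t1, t2))  # 58.0
```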

assertion 10320 BSONElement: bad type 113 when querying profile collection, db.system.profile.find()

I am running Mongo 2.2.1 in EC2. I have enabled profiling and I am sending a slow-op summary to Graphite every 180 seconds. Every now and again the script reports an error (BSONElement: bad type 113), and if I log in to the Mongo shell and run db.system.profile.find() I get a more detailed report:
Mon Feb 18 09:12:48 Assertion: 10320:BSONElement: bad type 113
0x6073f1 0x5d1aa9 0x4b0d98 0x5c17a6 0x6b3f35 0x6b6a2c 0x69be0a 0x6aa13f 0x668e46 0x668ec2 0x66a2ce 0x5cbcc4 0x4a4a14 0x4a67e6 0x7f1519bb776d 0x49f669
mongo(_ZN5mongo15printStackTraceERSo+0x21) [0x6073f1]
mongo(_ZN5mongo11msgassertedEiPKc+0x99) [0x5d1aa9]
mongo(_ZNK5mongo11BSONElement4sizeEv+0x1d8) [0x4b0d98]
mongo(_ZN5mongo16resolveBSONFieldEP9JSContextP8JSObjectljPS3_+0x146) [0x5c17a6]
mongo(js_LookupPropertyWithFlags+0x3f5) [0x6b3f35]
mongo(js_GetProperty+0x7c) [0x6b6a2c]
mongo(js_Interpret+0x10ea) [0x69be0a]
mongo(js_Execute+0x36f) [0x6aa13f]
mongo(JS_EvaluateUCScriptForPrincipals+0x66) [0x668e46]
mongo(JS_EvaluateUCScript+0x22) [0x668ec2]
mongo(JS_EvaluateScript+0x6e) [0x66a2ce]
mongo(_ZN5mongo7SMScope4execERKNS_10StringDataERKSsbbbi+0x144) [0x5cbcc4]
mongo(_Z5_mainiPPc+0x26c4) [0x4a4a14]
mongo(main+0x26) [0x4a67e6]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed) [0x7f1519bb776d]
mongo(__gxx_personality_v0+0x2a1) [0x49f669]
Error: BSONElement: bad type 113
In the logs I can see when the script has run and reported the error:
Mon Feb 18 09:26:21 [conn577444] Assertion: 10320:BSONElement: bad type 113
0xaf8c41 0xabedb9 0x570aab 0x7fc84c 0x7fe2ca 0x8057a7 0x806268 0x651171 0x82c71e 0x82c7d4 0x8318f6 0x8345f3 0x7b0b0d 0x7b20e2 0x56fe42 0xae6ed1 0x7f0eb2526e9a 0x7f0eb183c4bd
/opt/mongodb/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xaf8c41]
/opt/mongodb/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x99) [0xabedb9]
/opt/mongodb/bin/mongod(_ZNK5mongo11BSONElement4sizeEv+0x1cb) [0x570aab]
/opt/mongodb/bin/mongod(_ZNK5mongo7Matcher13matchesDottedEPKcRKNS_11BSONElementERKNS_7BSONObjEiRKNS_14ElementMatcherEbPNS_12MatchDetailsE+0x153c) [0x7fc84c]
/opt/mongodb/bin/mongod(_ZNK5mongo7Matcher7matchesERKNS_7BSONObjEPNS_12MatchDetailsE+0xfa) [0x7fe2ca]
/opt/mongodb/bin/mongod(_ZNK5mongo19CoveredIndexMatcher7matchesERKNS_7BSONObjERKNS_7DiskLocEPNS_12MatchDetailsEb+0xc7) [0x8057a7]
/opt/mongodb/bin/mongod(_ZNK5mongo19CoveredIndexMatcher14matchesCurrentEPNS_6CursorEPNS_12MatchDetailsE+0xa8) [0x806268]
/opt/mongodb/bin/mongod(_ZN5mongo6Cursor14currentMatchesEPNS_12MatchDetailsE+0x41) [0x651171]
/opt/mongodb/bin/mongod(_ZN5mongo20QueryResponseBuilder14currentMatchesERNS_12MatchDetailsE+0x1e) [0x82c71e]
/opt/mongodb/bin/mongod(_ZN5mongo20QueryResponseBuilder8addMatchEv+0x44) [0x82c7d4]
/opt/mongodb/bin/mongod(_ZN5mongo23queryWithQueryOptimizerEiRKSsRKNS_7BSONObjERNS_5CurOpES4_S4_RKN5boost10shared_ptrINS_11ParsedQueryEEES4_RKNS_17ShardChunkVersionERNS7_10scoped_ptrINS_25PageFaultRetryableSectionEEERNSG_INS_19NoPageFaultsAllowedEEERNS_7MessageE+0x376) [0x8318f6]
/opt/mongodb/bin/mongod(_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x1a93) [0x8345f3]
/opt/mongodb/bin/mongod() [0x7b0b0d]
/opt/mongodb/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x3a2) [0x7b20e2]
/opt/mongodb/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x82) [0x56fe42]
/opt/mongodb/bin/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x411) [0xae6ed1]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f0eb2526e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f0eb183c4bd]
Mon Feb 18 09:26:21 [conn577444] assertion 10320 BSONElement: bad type 113 ns:mydb.system.profile query:{ ts: { $gte: new Date(1361179280953), $lte: new Date(1361179580953) } }
Mon Feb 18 09:26:21 [conn577444] problem detected during query over mydb.system.profile : { $err: "BSONElement: bad type 113", code: 10320 }
The script queries the profile collection for slow operations since the last time it ran (ts: { $gte: new Date(1361179280953), $lte: new Date(1361179580953) }).
I am fairly new to MongoDB; any help appreciated.
Thanks,
Simone
This generally means you have data corruption, possibly caused by an unclean shutdown. If you do not have too much data, you could run a repair on the database; or, preferably, if you have a backup somewhere, restore your data from it.
(It is always recommended that you run with replication, partially so that if you experience corruption you have a data backup.)

MongoMapper Parent Inheritance

I am trying to get better, more organized results from using class inheritance with MongoMapper, but I am having some trouble.
class Item
include MongoMapper::Document
key :name, String
end
class Picture < Item
key :url, String
end
class Video < Item
key :length, Integer
end
When I run the following commands, they don't quite return what I am expecting.
>> Item.all
=> [#<Item name: "Testing", created_at: Sun, 03 Jan 2010 20:02:48 PST -08:00, updated_at: Mon, 04 Jan 2010 13:01:31 PST -08:00, _id: 4b416868010e2a04d0000002, views: 0, user_id: 4b416844010e2a04d0000001, description: "lorem?">]
>> Video.all
=> [#<Video name: "Testing", created_at: Sun, 03 Jan 2010 20:02:48 PST -08:00, updated_at: Mon, 04 Jan 2010 13:01:31 PST -08:00, _id: 4b416868010e2a04d0000002, views: 0, user_id: 4b416844010e2a04d0000001, description: "lorem?">]
>> Picture.all
=> [#<Picture name: "Testing", created_at: Sun, 03 Jan 2010 20:02:48 PST -08:00, updated_at: Mon, 04 Jan 2010 13:01:31 PST -08:00, _id: 4b416868010e2a04d0000002, views: 0, user_id: 4b416844010e2a04d0000001, description: "lorem?">]
They all return the same result. I would expect Item.all to list all of the results, including Item, Picture, and Video; but if an item is actually a Picture, I would like it to be returned by Picture.all and not by Video.all. Do you see what I mean?
Am I misunderstanding how inheritance works here? If so, what is the best way to replicate this sort of behavior? I am trying to follow this (point 2) as a guideline for how I want this to work. I assume he can run Link.all to find all the links, without including every other class that inherits from Item. Am I wrong?
The example you link to is a little misleading (or maybe just hard to follow) in that it doesn't show the full definition of the Item model. In order to use inheritance in your models, you'll need to define a key _type on the parent model. MongoMapper will then automatically set that key to the class name of the actual class of that document. So, for instance, your models would now look like this:
class Item
include MongoMapper::Document
key :name, String
key :_type, String
end
class Picture < Item
key :url, String
end
class Video < Item
key :length, Integer
end
and the output of your searches (assuming you created a Picture object) will turn into:
>> Item.all
=> [#<Picture name: "Testing", _type: "Picture", created_at: Sun, 03 Jan 2010 20:02:48 PST -08:00, updated_at: Mon, 04 Jan 2010 13:01:31 PST -08:00, _id: 4b416868010e2a04d0000002, views: 0, user_id: 4b416844010e2a04d0000001, description: "lorem?">]
>> Video.all
=> []
>> Picture.all
=> [#<Picture name: "Testing", _type: "Picture", created_at: Sun, 03 Jan 2010 20:02:48 PST -08:00, updated_at: Mon, 04 Jan 2010 13:01:31 PST -08:00, _id: 4b416868010e2a04d0000002, views: 0, user_id: 4b416844010e2a04d0000001, description: "lorem?">]