Scoping Issue with SparkContext.sequenceFile(...).foreach in Scala

My objective is to process a series of SequenceFile folders generated by calling org.apache.spark.rdd.RDD[_].saveAsObjectFile(...). My folder structure is similar to this:
\MyRootDirectory
    \Batch0001
        _SUCCESS
        part-00000
        part-00001
        ...
        part-nnnnn
    \Batch0002
        _SUCCESS
        part-00000
        part-00001
        ...
        part-nnnnn
    ...
    \Batchnnnn
        _SUCCESS
        part-00000
        part-00001
        ...
        part-nnnnn
I need to extract some of the persisted data; however, my collection, whether I use a ListBuffer, a mutable.Map, or any other mutable type, loses scope and appears to be newed up on each iteration of sequenceFile(...).foreach.
The following proof of concept generates a series of "Processing directory..." lines, each followed by a repeated "1 : 1" that never increases, although I expected counter and intList.size to grow.
private def proofOfConcept(rootDirectoryName: String) = {
  val intList = ListBuffer[Int]()
  var counter: Int = 0
  val config = new SparkConf().setAppName("local").setMaster("local[1]")
  new File(rootDirectoryName).listFiles().map(_.toString).foreach { folderName =>
    println(s"Processing directory $folderName...")
    val sc = new SparkContext(config)
    sc.setLogLevel("WARN")
    sc.sequenceFile(folderName, classOf[NullWritable], classOf[BytesWritable]).foreach { f =>
      counter += 1
      intList += counter
      println(s" $counter : ${intList.size}")
    }
    sc.stop()
  }
}
Output:
"C:\Program Files\Java\jdk1.8.0_111\bin\java" ...
Processing directory C:\MyRootDirectory\Batch0001...
17/05/24 09:30:25.228 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[Stage 0:> (0 + 0) / 57] 1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
Processing directory C:\MyRootDirectory\Batch0002...
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
Processing directory C:\MyRootDirectory\Batch0003...
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1
1 : 1

The function passed to foreach is run in a Spark worker JVM, not in the client JVM where the variable is defined. The worker gets a local copy of that variable, increments it, and prints it. My guess is that you are testing this locally; if you were running this in a production, distributed Spark environment, you wouldn't even see the output of those prints.
More generally, pretty much any function you pass into one of RDD's methods will be executed remotely and will not have mutable access to any local variables; it gets an essentially immutable snapshot of them.
If you want to move data from Spark's distributed storage back to the client, use RDD's collect method. The reverse is done with sc.parallelize. But note that both of these are usually done very rarely, since they do not happen in parallel.
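To illustrate one driver-side alternative, here is a minimal sketch (countRecords and its wrapper object are illustrative names, not from the question) that lets the executors count each batch with count(); the result of count() comes back to the driver, so the running total is only ever mutated locally:
import java.io.File

import org.apache.hadoop.io.{BytesWritable, NullWritable}
import org.apache.spark.{SparkConf, SparkContext}

object CountRecords {
  // Each count() job runs on the executors, but its result is returned to
  // the driver, so `total` is only ever updated in the driver JVM.
  def countRecords(rootDirectoryName: String): Unit = {
    val config = new SparkConf().setAppName("local").setMaster("local[1]")
    val sc = new SparkContext(config)
    var total = 0L // driver-side state: never referenced inside an RDD closure
    new File(rootDirectoryName).listFiles().map(_.toString).foreach { folderName =>
      val batchCount = sc
        .sequenceFile(folderName, classOf[NullWritable], classOf[BytesWritable])
        .count()
      total += batchCount
      println(s"$folderName : $batchCount records (running total: $total)")
    }
    sc.stop()
  }
}
If you genuinely need a counter updated from inside a distributed action, Spark's accumulators (e.g. sc.longAccumulator in Spark 2.x) are the supported mechanism; a plain driver-side var is not.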

Daikon failing to run: "Error at line 1 in file example.dtrace: No declaration was provided for program point program.point:::POINT"

I am attempting to run Daikon on a .decls and a .dtrace file that I generated from a CSV file using an open-source Perl script. The contents of both files are provided below. The daikon.jar file is held within a directory that has a sub-directory "scripts", where I keep the .dtrace and .decls files.
I am attempting to call daikon using the following command from within the directory containing the daikon.jar file:
java -cp daikon.jar daikon.Daikon scripts/example.dtrace scripts/example.decls
The program response is the following:
Daikon version 5.8.10, released November 1, 2021; http://plse.cs.washington.edu/daikon.
(read 1 decls file)
Processing trace data; reading 1 dtrace file:
Error at line 1 in file scripts/example.dtrace: No declaration was provided for program point program.point:::POINT
I am confused as to why it can't find the declarations file I provided, which contains the declaration for the program.point program point. Below I have provided the contents of both the example.dtrace and the example.decls files.
example.dtrace
program.point:::POINT
a
1
1
b
1
1
c
2
1
d
2
1
e
4
1
aprogram.point:::POINT
a
3
1
b
3
1
c
4
1
d
4
1
e
5
1
example.decls
DECLARE
aprogram.point:::POINT
a
double
double
1
b
double
double
1
c
double
double
1
d
double
double
1
e
double
double
1
Your example.decls file declares a program point named aprogram.point:::POINT, which starts with an a. Your example.dtrace file contains samples for a program point named program.point:::POINT, which does not start with an a.
So, the message is right: there is no declaration for a program point named program.point:::POINT, though there is a declaration for a program point named aprogram.point:::POINT.
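Concretely, adding a single character a to the beginning of your example.dtrace file makes its first record header match the declaration:
aprogram.point:::POINT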
Making the program point names consistent between the two files should resolve your problem. With that one-character change in place, I was able to get Daikon to produce output:
Daikon version 5.8.11, released November 2, 2021; http://plse.cs.washington.edu/daikon.
(read 1 decls file)
Processing trace data; reading 1 dtrace file:
[2021-11-17T10:13:50.284232]: Finished reading example.dtrace
===========================================================================
aprogram.point:::POINT
a == b
c == d
a one of { 1.0, 3.0 }
c one of { 2.0, 4.0 }
e one of { 4.0, 5.0 }
Exiting Daikon.

How do I tell my graph coloring problem program to only assign color 1 one time?

Basically, I have a graph coloring program where any two nodes connected by an edge must have different colors. Here is my code:
node(1..4).
edge(1,2).
edge(2,3).
edge(3,4).
edge(4,1).
edge(2,4).
color(1..3).
{ assign(N,C) : color(C) } = 1 :- node(N).
1 { assign(N,1) : color(1) } 1 :- node(N). %line in question
:- edge(N,M), assign(N,C), assign(M,C).
How would I tell the program to assign color 1 only once? The line labeled %line in question is the line giving me problems. Here is another solution I tried that didn't work:
node(1..4).
edge(1,2).
edge(2,3).
edge(3,4).
edge(4,1).
edge(2,4).
color(1..3).
{ assign(N,C) : color(C) } = 1 :- node(N).
:- edge(N,M), assign(N,C), assign(M,C).
vtx(Node, Color) :- node(Node), color(Color).
1 { vtx(N, 1) : color(1) } 1 :- node(N).
#show vtx/2.
If anyone could help me out it would be much appreciated.
In this simple case of restricting a single color to be used once, you can write a single constraint:
:- assign(N, 1), assign(M, 1), node(N), node(M), N!=M.
Actually, the line you marked as the line in question:
1 { assign(N,1) : color(1) } 1 :- node(N). %line in question
can be translated as:
If N is a node, we will (and we must) assign color 1 to node N exactly once; i.e., if node(i) is true, we will have exactly one assign(i,1).
Therefore, with this rule and your facts node(1..4), you immediately get assign(1,1), assign(2,1), assign(3,1), assign(4,1). This is definitely unsatisfiable for the coloring problem (given the last constraint).
Back to your requirement:
How would I tell the program to only assign color 1, once?
The problem is that the constraint you set in that line, "color 1 is assigned exactly once", applies to each node(i), i = 1,2,3,4, separately, instead of to all nodes together.
To make it clearer, consider how this line is instantiated:
1 { assign(1,1) : color(1) } 1 :- node(1).
1 { assign(2,1) : color(1) } 1 :- node(2).
1 { assign(3,1) : color(1) } 1 :- node(3).
1 { assign(4,1) : color(1) } 1 :- node(4).
With node(1..4) all true, we get assign(1,1), assign(2,1), assign(3,1), assign(4,1).
What you want is for assign(N,1) to appear once and only once in the answer set, so in your rule this should hold unconditionally, with no premise.
Therefore, change the problem line into:
{ assign(N,1): node(N), color(1) } = 1. %problem line changed
You will get the proper assignment:
clingo version 5.4.0
Reading from test.lp
Solving...
Answer: 1
assign(2,2) assign(1,3) assign(3,3) assign(4,1)
Answer: 2
assign(1,2) assign(2,3) assign(3,2) assign(4,1)
Answer: 3
assign(2,1) assign(1,3) assign(3,3) assign(4,2)
Answer: 4
assign(2,1) assign(1,2) assign(3,2) assign(4,3)
SATISFIABLE
Intuitively, the changed line says that assign(N,1) must be in the answer set exactly once, unconditionally, with N ranging over all nodes. The cardinality bound now counts across all nodes instead of being applied to each node separately.
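For reference, here is the complete corrected program, which is just the facts and rules from the question with the problem line replaced:
node(1..4).
edge(1,2). edge(2,3). edge(3,4). edge(4,1). edge(2,4).
color(1..3).
{ assign(N,C) : color(C) } = 1 :- node(N).
{ assign(N,1) : node(N), color(1) } = 1.
:- edge(N,M), assign(N,C), assign(M,C).
Running clingo on this file yields the four answers shown above.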

Depth First Search Implementation - understanding Swift code

I was going through a few tutorials on the tree data structure and I found this code, which is really confusing to understand. Please explain:
public func forEachDepthFirst(visit: (TreeNode) -> Void) {
    visit(self) // 1
    children.forEach { // 2
        $0.forEachDepthFirst(visit: visit)
    }
}
Why do we have visit(self) here?
I see an explanation here https://forums.raywenderlich.com/t/help-understanding-the-recursion-for-depth-first-traversal/56552/2 but it's still not clear.
Any recursive method has:
1. A base case, which ends the recursion. Here it is children.forEach: when the children property is empty (a leaf node), there is nothing left to recurse into.
2. A recursive case: $0.forEachDepthFirst(visit: visit), which calls the same method on each child.
Your method takes a closure (a completion handler) that is called for every node under the root node.
So suppose you have this root:
0
- 1
  - 1.1, 1.2, 1.3
- 2
  - 2.1, 2.2, 2.3
When you run the function on node 0:
visit(0)
children.forEach { // children = 1, 2
For 0 > 1:
visit(1)
children.forEach { // children = 1.1, 1.2, 1.3
For 0 > 2:
visit(2)
children.forEach { // children = 2.1, 2.2, 2.3
Inner case, for 0 > 1 > 1.1:
visit(1.1)
children.forEach { // ends here, as a leaf node has no children
The same happens for 1.2 and 1.3, and for 0 > 2 > 2.1 / 2.2 / 2.3.
How to call it: the method is an instance method on the tree node, so every node can call it. If you want to traverse the nodes under 0, do this:
zeroNode.forEachDepthFirst { (item) in
    print(item.name) // suppose the node object has a name
}
Then you will get:
0, 1, 1.1, 1.2, 1.3, 2, 2.1, 2.2, 2.3
That is because visit is called for the root node and, recursively, for all of its children.
Why do we have visit(self) here?
Because if we didn't, we would never actually do anything to any of the nodes on the tree!
Consider this tree:
n1 -> n2 -> n3 -> n4
We now call our method forEachDepthFirst on n1. If we didn't have visit(self), we would immediately call forEachDepthFirst on n2, which would call it on n3, which would call it on n4. And then we'd stop. But at no time would we have called visit, so we would have looped through every node in the tree without doing anything to those nodes.
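To make that concrete, here is a minimal, self-contained Swift sketch; the TreeNode class is an assumed stand-in for the tutorial's type, and forEachDepthFirstBroken is a hypothetical variant added only to show what omitting visit(self) does:
public class TreeNode {
    public let name: String
    public var children: [TreeNode] = []

    public init(name: String) { self.name = name }

    // The method from the question: visit the current node first,
    // then recurse into each child.
    public func forEachDepthFirst(visit: (TreeNode) -> Void) {
        visit(self)
        children.forEach { $0.forEachDepthFirst(visit: visit) }
    }

    // Hypothetical variant WITHOUT visit(self): it still recurses through
    // the whole tree, but the closure is never invoked on any node.
    public func forEachDepthFirstBroken(visit: (TreeNode) -> Void) {
        children.forEach { $0.forEachDepthFirstBroken(visit: visit) }
    }
}
On the chain n1 -> n2 -> n3 -> n4, calling n1.forEachDepthFirst { print($0.name) } prints all four names, while n1.forEachDepthFirstBroken { print($0.name) } walks the same nodes and prints nothing.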

Scalable HEVC encoding: How to setup cfg files for quality scalability?

I downloaded SHM12.3 and I started scalable encoding.
This is the script I use in terminal:
/TAppEncoderStatic -c cfg/encoder_scalable_journal_B2.cfg -c cfg/per-sequence-svc/C_L-SNR.cfg -c cfg/layers_journal.cfg -b C_L_SSIM_B2.bin -o0 rec/C_L_B2_l0_rec.yuv -o1 rec/C_L_B2_l1_rec.yuv >> results_B2_26_06_2017.txt
This is the example script given in the software description.
I need to perform scalable encoding that produces the same video at several quality levels, i.e. at different bitrates.
Can anyone help me edit the configuration files to support quality scalability?
Thank you in advance!
I found the solution. The first configuration file is encoder_scalable_journal_B2.cfg.
My setup uses 3 layers for SNR scalability.
#======== File I/O =====================
BitstreamFile : str.bin
#ReconFile : rec.yuv
#======== Profile ================
NumProfileTierLevel : 3
Profile0 : main # Profile for BL (NOTE01: this profile applies to whole layers but only BL is outputted)
# (NOTE02: this profile has no effect when NonHEVCBase is set to 1)
Profile1 : main # Profile for BL (NOTE01: this profile applies to HEVC BL only)
# (NOTE02: When NonHEVCBase is set to 1, this profile & associated level should be updated appropriately)
Profile2 : scalable-main # Scalable profile
#======== Unit definition ================
#MaxCUWidth : 64 # Maximum coding unit width in pixel
#MaxCUHeight : 64 # Maximum coding unit height in pixel
#MaxPartitionDepth : 16 # Maximum coding unit depth
#QuadtreeTULog2MaxSize : 5 # Log2 of maximum transform size for
# quadtree-based TU coding (2...6)
#QuadtreeTULog2MinSize : 2 # Log2 of minimum transform size for
# quadtree-based TU coding (2...6)
#QuadtreeTUMaxDepthInter : 3
#QuadtreeTUMaxDepthIntra : 3
#======== Coding Structure =============
MaxNumMergeCand : 2
#IntraPeriod : 4 # Period of I-Frame ( -1 = only first)
DecodingRefreshType : 2 # Random Access 0:none, 1:CRA, 2:IDR, 3:Recovery Point SEI
GOPSize : 6 # GOP Size (number of B slice = GOPSize-1)
# Type POC QPoffset CbQPoffset CrQPoffset QPfactor tcOffsetDiv2 betaOffsetDiv2 temporal_id #ref_pics_active #ref_pics reference pictures predict deltaRPS #ref_idcs reference idcs
Frame1: B 1 2 0 0 0.4624 0 0 0 1 1 -1 0
Frame2: B 2 1 0 0 0.4624 0 0 0 1 1 -2 2 1
Frame3: P 3 0 0 0 0.4624 0 0 0 1 1 -3 2 2
Frame4: B 4 2 0 0 0.4624 0 0 0 1 1 -1 2 2
Frame5: B 5 1 0 0 0.4624 0 0 0 1 1 -2 2 3
Frame6: P 6 0 0 0 0.4624 0 0 0 1 1 -3 2 3
#=========== Motion Search =============
FastSearch : 1 # 0:Full search 1:TZ search
SearchRange : 25 # (0: Search range is a Full frame)
BipredSearchRange : 4 # Search range for bi-prediction refinement
HadamardME : 1 # Use of hadamard measure for fractional ME
FEN : 1 # Fast encoder decision
FDM : 1 # Fast Decision for Merge RD cost
#======== Quantization =============
#QP : 32 # Quantization parameter(0-51)
MaxDeltaQP : 0 # CU-based multi-QP optimization
#MaxCuDQPDepth : 0 # Max depth of a minimum CuDQP for sub-LCU-level delta QP
DeltaQpRD : 0 # Slice-based multi-QP optimization
RDOQ : 1 # RDOQ
RDOQTS : 1 # RDOQ for transform skip
SliceChromaQPOffsetPeriodicity: 0 # Used in conjunction with Slice Cb/Cr QpOffsetIntraOrPeriodic. Use 0 (default) to disable periodic nature.
SliceCbQpOffsetIntraOrPeriodic: 0 # Chroma Cb QP Offset at slice level for I slice or for periodic inter slices as defined by SliceChromaQPOffsetPeriodicity. Replaces offset in the GOP table.
SliceCrQpOffsetIntraOrPeriodic: 0 # Chroma Cr QP Offset at slice level for I slice or for periodic inter slices as defined by SliceChromaQPOffsetPeriodicity. Replaces offset in the GOP table.
#=========== Deblock Filter ============
#DeblockingFilterControlPresent: 0 # Dbl control params present (0=not present, 1=present)
LoopFilterOffsetInPPS : 0 # Dbl params: 0=varying params in SliceHeader, param = base_param + GOP_offset_param; 1=constant params in PPS, param = base_param)
LoopFilterDisable : 0 # Disable deblocking filter (0=Filter, 1=No Filter)
LoopFilterBetaOffset_div2 : 0 # base_param: -6 ~ 6
LoopFilterTcOffset_div2 : 0 # base_param: -6 ~ 6
DeblockingFilterMetric : 0 # blockiness metric (automatically configures deblocking parameters in bitstream)
#=========== Misc. ============
#InternalBitDepth : 8 # codec operating bit-depth
#=========== Coding Tools =================
#SAO : 0 # Sample adaptive offset (0: OFF, 1: ON)
AMP : 0 # Asymmetric motion partitions (0: OFF, 1: ON)
TransformSkip : 0 # Transform skipping (0: OFF, 1: ON)
TransformSkipFast : 0 # Fast Transform skipping (0: OFF, 1: ON)
SAOLcuBoundary : 0 # SAOLcuBoundary using non-deblocked pixels (0: OFF, 1: ON)
#============ Slices ================
SliceMode : 0 # 0: Disable all slice options.
# 1: Enforce maximum number of LCUs in a slice,
# 2: Enforce maximum number of bytes in an 'slice'
# 3: Enforce maximum number of tiles in a slice
SliceArgument : 1500 # Argument for 'SliceMode'.
# If SliceMode==1 it represents max. SliceGranularity-sized blocks per slice.
# If SliceMode==2 it represents max. bytes per slice.
# If SliceMode==3 it represents max. tiles per slice.
LFCrossSliceBoundaryFlag : 1 # In-loop filtering, including ALF and DB, is across or not across slice boundary.
# 0:not across, 1: across
#============ PCM ================
PCMEnabledFlag : 0 # 0: No PCM mode
PCMLog2MaxSize : 5 # Log2 of maximum PCM block size.
PCMLog2MinSize : 3 # Log2 of minimum PCM block size.
PCMInputBitDepthFlag : 1 # 0: PCM bit-depth is internal bit-depth. 1: PCM bit-depth is input bit-depth.
PCMFilterDisableFlag : 0 # 0: Enable loop filtering on I_PCM samples. 1: Disable loop filtering on I_PCM samples.
#============ Tiles ================
TileUniformSpacing : 0 # 0: the column boundaries are indicated by TileColumnWidth array, the row boundaries are indicated by TileRowHeight array
# 1: the column and row boundaries are distributed uniformly
NumTileColumnsMinus1 : 0 # Number of tile columns in a picture minus 1
TileColumnWidthArray : 2 3 # Array containing tile column width values in units of CTU (from left to right in picture)
NumTileRowsMinus1 : 0 # Number of tile rows in a picture minus 1
TileRowHeightArray : 2 # Array containing tile row height values in units of CTU (from top to bottom in picture)
LFCrossTileBoundaryFlag : 1 # In-loop filtering is across or not across tile boundary.
# 0:not across, 1: across
#============ WaveFront ================
#WaveFrontSynchro : 0 # 0: No WaveFront synchronisation (WaveFrontSubstreams must be 1 in this case).
# >0: WaveFront synchronises with the LCU above and to the right by this many LCUs.
#=========== Quantization Matrix =================
#ScalingList : 0 # ScalingList 0 : off, 1 : default, 2 : file read
#ScalingListFile : scaling_list.txt # Scaling List file name. If the file does not exist, the default matrix is used.
#============ Lossless ================
#TransquantBypassEnableFlag : 0 # Value of PPS flag.
#CUTransquantBypassFlagForce: 0 # Force transquant bypass mode, when transquant_bypass_enable_flag is enabled
#============ Rate Control ======================
#RateControl : 0 # Rate control: enable rate control
#TargetBitrate : 1000000 # Rate control: target bitrate, in bps
#KeepHierarchicalBit : 2 # Rate control: 0: equal bit allocation; 1: fixed ratio bit allocation; 2: adaptive ratio bit allocation
#LCULevelRateControl : 1 # Rate control: 1: LCU level RC; 0: picture level RC
#RCLCUSeparateModel : 1 # Rate control: use LCU level separate R-lambda model
#InitialQP : 0 # Rate control: initial QP
#RCForceIntraQP : 0 # Rate control: force intra QP to be equal to initial QP
### DO NOT ADD ANYTHING BELOW THIS LINE ###
### DO NOT DELETE THE EMPTY LINE BELOW ###
The second configuration file is C_L-SNR.cfg. All three layers read the same input at the same resolution; the quality (SNR) scalability comes from the per-layer QP values (31, 26, and 23).
FrameSkip : 0 # Number of frames to be skipped in input
FramesToBeEncoded : 480 # Number of frames to be coded
Level0 : 3 # Level of the whole bitstream
Level1 : 3 # Level of the base layer
Level2 : 3 # Level of the first enhancement layer
Level3 : 3 # Level of the second enhancement layer
#======== File I/O ===============
InputFile0 : C_L_560x448_40.yuv
FrameRate0 : 40 # Frame Rate per second
InputBitDepth0 : 8 # Input bitdepth for layer 0
SourceWidth0 : 560 # Input frame width
SourceHeight0 : 448 # Input frame height
RepFormatIdx0 : 0 # Index of corresponding rep_format() in the VPS
IntraPeriod0 : 96 # Period of I-Frame ( -1 = only first)
ConformanceMode0 : 1 # conformance mode
QP0 : 31
LayerPTLIndex0 : 1
InputFile1 : C_L_560x448_40.yuv
FrameRate1 : 40 # Frame Rate per second
InputBitDepth1 : 8 # Input bitdepth for layer 1
SourceWidth1 : 560 # Input frame width
SourceHeight1 : 448 # Input frame height
RepFormatIdx1 : 1 # Index of corresponding rep_format() in the VPS
IntraPeriod1 : 96 # Period of I-Frame ( -1 = only first)
ConformanceMode1 : 1 # conformance mode
QP1 : 26
LayerPTLIndex1 : 2
InputFile2 : C_L_560x448_40.yuv
FrameRate2 : 40 # Frame Rate per second
InputBitDepth2 : 8 # Input bitdepth for layer 2
SourceWidth2 : 560 # Input frame width
SourceHeight2 : 448 # Input frame height
RepFormatIdx2 : 2 # Index of corresponding rep_format() in the VPS
IntraPeriod2 : 96 # Period of I-Frame ( -1 = only first)
ConformanceMode2 : 1 # conformance mode
QP2 : 23
LayerPTLIndex2 : 3
And the last configuration file is layers_journal.cfg.
NumLayers : 3
NonHEVCBase : 0
ScalabilityMask1 : 0 # Multiview
ScalabilityMask2 : 1 # Scalable
ScalabilityMask3 : 0 # Auxiliary pictures
AdaptiveResolutionChange : 0 # Resolution change frame (0: disable)
SkipPictureAtArcSwitch : 0 # Code higher layer picture as skip at ARC switching (0: disable (default), 1: enable)
MaxTidRefPresentFlag : 1 # max_tid_ref_present_flag (0=not present, 1=present(default))
CrossLayerPictureTypeAlignFlag: 1 # Picture type alignment across layers
CrossLayerIrapAlignFlag : 1 # Align IRAP across layers
SEIDecodedPictureHash : 1
#============= LAYER 0 ==================
#QP0 : 22
MaxTidIlRefPicsPlus10 : 7 # max_tid_il_ref_pics_plus1 for layer0
#============ Rate Control ==============
RateControl0 : 0 # Rate control: enable rate control for layer 0
TargetBitrate0 : 1000000 # Rate control: target bitrate for layer 0, in bps
KeepHierarchicalBit0 : 1 # Rate control: keep hierarchical bit allocation for layer 0 in rate control algorithm
LCULevelRateControl0 : 1 # Rate control: 1: LCU level RC for layer 0; 0: picture level RC for layer 0
RCLCUSeparateModel0 : 1 # Rate control: use LCU level separate R-lambda model for layer 0
InitialQP0 : 0 # Rate control: initial QP for layer 0
RCForceIntraQP0 : 0 # Rate control: force intra QP to be equal to initial QP for layer 0
#============ WaveFront ================
WaveFrontSynchro0 : 0 # 0: No WaveFront synchronisation (WaveFrontSubstreams must be 1 in this case).
# >0: WaveFront synchronises with the LCU above and to the right by this many LCUs.
#=========== Quantization Matrix =================
ScalingList0 : 0 # ScalingList 0 : off, 1 : default, 2 : file read
ScalingListFile0 : scaling_list0.txt # Scaling List file name. If the file does not exist, the default matrix is used.
#============= LAYER 1 ==================
#QP1 : 20
NumSamplePredRefLayers1 : 1 # number of sample pred reference layers
SamplePredRefLayerIds1 : 0 # reference layer id
NumMotionPredRefLayers1 : 1 # number of motion pred reference layers
MotionPredRefLayerIds1 : 0 # reference layer id
NumActiveRefLayers1 : 1 # number of active reference layers
PredLayerIds1 : 0 # inter-layer prediction layer index within available reference layers
#============ Rate Control ==============
RateControl1 : 0 # Rate control: enable rate control for layer 1
TargetBitrate1 : 1000000 # Rate control: target bitrate for layer 1, in bps
KeepHierarchicalBit1 : 1 # Rate control: keep hierarchical bit allocation for layer 1 in rate control algorithm
LCULevelRateControl1 : 1 # Rate control: 1: LCU level RC for layer 1; 0: picture level RC for layer 1
RCLCUSeparateModel1 : 1 # Rate control: use LCU level separate R-lambda model for layer 1
InitialQP1 : 0 # Rate control: initial QP for layer 1
RCForceIntraQP1 : 0 # Rate control: force intra QP to be equal to initial QP for layer 1
#============ WaveFront ================
WaveFrontSynchro1 : 0 # 0: No WaveFront synchronisation (WaveFrontSubstreams must be 1 in this case).
# >0: WaveFront synchronises with the LCU above and to the right by this many LCUs.
#=========== Quantization Matrix =================
ScalingList1 : 0 # ScalingList 0 : off, 1 : default, 2 : file read
ScalingListFile1 : scaling_list1.txt # Scaling List file name. If the file does not exist, the default matrix is used.
#============= LAYER 2 ==================
#QP2 : 20
NumSamplePredRefLayers2 : 1 # number of sample pred reference layers
SamplePredRefLayerIds2 : 0 # reference layer id
NumMotionPredRefLayers2 : 1 # number of motion pred reference layers
MotionPredRefLayerIds2 : 0 # reference layer id
NumActiveRefLayers2 : 1 # number of active reference layers
PredLayerIds2 : 0 # inter-layer prediction layer index within available reference layers
#============ Rate Control ==============
RateControl2 : 0 # Rate control: enable rate control for layer 2
TargetBitrate2 : 1000000 # Rate control: target bitrate for layer 2, in bps
KeepHierarchicalBit2 : 1 # Rate control: keep hierarchical bit allocation for layer 2 in rate control algorithm
LCULevelRateControl2 : 1 # Rate control: 1: LCU level RC for layer 2; 0: picture level RC for layer 2
RCLCUSeparateModel2 : 1 # Rate control: use LCU level separate R-lambda model for layer 2
InitialQP2 : 0 # Rate control: initial QP for layer 2
RCForceIntraQP2 : 0 # Rate control: force intra QP to be equal to initial QP for layer 2
#============ WaveFront ================
WaveFrontSynchro2 : 0 # 0: No WaveFront synchronisation (WaveFrontSubstreams must be 1 in this case).
# >0: WaveFront synchronises with the LCU above and to the right by this many LCUs.
#=========== Quantization Matrix =================
ScalingList2 : 0 # ScalingList 0 : off, 1 : default, 2 : file read
ScalingListFile2 : scaling_list1.txt # Scaling List file name. If the file does not exist, the default matrix is used.
NumLayerSets : 3 # Include default layer set, value of 0 not allowed
NumLayerInIdList1 : 2 # 0-th layer set is default, need not specify LayerSetLayerIdList0 or NumLayerInIdList0
LayerSetLayerIdList1 : 0 1
NumLayerInIdList2 : 3 # 0-th layer set is default, need not specify LayerSetLayerIdList0 or NumLayerInIdList0
LayerSetLayerIdList2 : 0 1 2
NumAddLayerSets : 0
NumOutputLayerSets : 3 # Include default OLS, value of 0 not allowed
DefaultTargetOutputLayerIdc : 2
NumOutputLayersInOutputLayerSet : 1 1 # The number of layers in the 0-th OLS should not be specified,
# ListOfOutputLayers0 need not be specified
ListOfOutputLayers1 : 1
ListOfProfileTierLevelOls1 : 1 2
ListOfOutputLayers2 : 2
ListOfProfileTierLevelOls2 : 1 2 2

MongoDB benchmarking inserts

I am trying to benchmark MongoDB inserts with the JS benchRun harness, following the example given on the MongoDB website.
The insert operation itself works totally fine, but it reports the wrong queries/sec.
ops = [{op: "insert", ns: "benchmark.bench", safe: false, doc: {"a": 1}}]
The above works fine. Then I ran the following in the mongo shell:
for ( x = 1; x <= 128; x *= 2 ) {
    res = benchRun( { parallel : x,
                      seconds : 5,
                      ops : ops } )
    print( "threads: " + x + "\t queries/sec: " + res.query )
}
It prints:
threads: 1 queries/sec: 0
threads: 2 queries/sec: 0
threads: 4 queries/sec: 0
threads: 8 queries/sec: 0
threads: 16 queries/sec: 0
threads: 32 queries/sec: 1.4
threads: 64 queries/sec: 0
threads: 128 queries/sec: 0
I don't understand why queries/sec is 0, as if not a single doc had been inserted. Is this the right way of testing insert performance?
Answering because I just encountered a similar problem.
Try replacing your print statement with printjson(res).
You will see that res has the following fields:
{
    "note" : "values per second",
    "errCount" : NumberLong(0),
    "trapped" : "error: not implemented",
    "insertLatencyAverageMicros" : 8.173300153139357,
    "totalOps" : NumberLong(130600),
    "totalOps/s" : 25366.173139864142,
    "findOne" : 0,
    "insert" : 25366.173139864142,
    "delete" : 0,
    "update" : 0,
    "query" : 0,
    "command" : 0
}
As you can see, the query count is 0, hence printing res.query gives 0. To get the number of insert operations per second, print res.insert instead. I believe res.query corresponds to the "find" operation.
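So a minimal fix to the loop from the question is to keep the benchRun call and change only the print line:
for ( x = 1; x <= 128; x *= 2 ) {
    res = benchRun( { parallel : x, seconds : 5, ops : ops } )
    print( "threads: " + x + "\t inserts/sec: " + res.insert )
}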