I'm trying to come up with a chart visualization like this in Power BI:
X axis - time (month);
Y axis - % of consumption and remaining %;
Would this be possible to do?
Also, I'm in doubt about how to create a table to represent this goal; I was thinking of something like this:
Date        Consumption  Capacity  Remaining  %Com  %Rem  %Extra needed  %Com accum
01/07/2022  50           150       100        33%   67%   0%             33%
20/07/2022  20           150       80         13%   53%   0%             47%
30/07/2022  10           150       70         7%    47%   0%             53%
04/08/2022  40           150       30         27%   20%   0%             80%
05/08/2022  35           150       -5         23%   -3%   3%             103%
10/08/2022  10           150       -15        7%    -10%  10%            110%
However, I am not seeing a way to calculate the last column, %Com accum, which is the running total of %Com: each row adds its own %Com to the previous row's %Com accum (since it's a row-by-row calculation).
For the last row: 103% (previous %Com accum) + 7% (%Com) = 110%.
Also, the available capacity is a function of the time period we are in; it could be quarters or semesters, for example:
1Q 150, 2Q 200, 3Q 150, 4Q 200
1H 350, 2H 350
This can be achieved using the Deneb custom visual: https://deneb-viz.github.io/
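For the %Com accum column specifically, the logic is just a running total of %Com. Here is a minimal Python sketch of the row calculations (an illustration of the logic only, not DAX or Deneb syntax; capacity and consumption values are taken from the sample table):

capacity = 150
consumption = [50, 20, 10, 40, 35, 10]

remaining = capacity
com_accum = 0.0
for c in consumption:
    com = c / capacity          # %Com: this row's share of capacity
    com_accum += com            # %Com accum: previous accum + this row's %Com
    remaining -= c              # Remaining (units)
    rem_pct = 1 - com_accum     # %Rem
    extra = max(0.0, -rem_pct)  # %Extra needed once capacity is exceeded
    print(f"{c:>3} {remaining:>4} {com:5.0%} {rem_pct:5.0%} {extra:5.0%} {com_accum:5.0%}")

In Power BI itself, the same running total would typically be a measure or calculated column that sums %Com over all dates up to the current row's date.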
In a TSV file, I have a varying number of rows that precede the actual header row.
The number of preceding rows depends on the type of data collection.
The data looks like the text blob below.
I need to be able to start the import at a specific row number, or ideally at the row where the headers appear, in MongoDB using Compass if possible (a preprocessing sketch follows the sample data below).
Peak Name: OZopiclone
Internal Standard: Zolpidem D-6
Q1/Q3 Masses: 389.10/244.80 Da
Fit Quadratic Weighting 1 / x Iterate No
a0 0.00109
a1 0.0121
a2 -1.53e-006
Correlation coefficient 0.9998
Use Area
Sample Name Sample ID Analyte Peak Name Sample Type Sample Comment Set Number Acquisition Method Rack Type Rack Position Vial Position Plate Type Plate Position Analyte Concentration (ng/mL) Calculated Concentration (ng/mL) Accuracy (%) Weight To Volume Ratio Sample Annotation Disposition Analyte Units Analyte Peak Area for DAD (mAU x min) Analyte Peak Height (cps) Analyte Peak Height for DAD (mAU) Analyte Expected RT (min) Analyte RT Window (sec) Analyte Centroid Location (min) Analyte Start Scan Analyte Start Time (min) Analyte Stop Scan Analyte Stop Time (min) Analyte Integration Type Analyte Signal To Noise Analyte Peak Width (min) Standard Query Status Analyte Mass Ranges (Da) Analyte Wavelength Ranges (nm) Height Ratio Analyte Annotation Analyte Channel Analyte Peak Width at 50% Height (min) Analyte Slope of Baseline (%/min) Analyte Processing Alg. Analyte Peak Asymmetry IS Peak Name IS Units IS Peak Area for DAD (mAU x min) IS Peak Height (cps) IS Peak Height for DAD (mAU) IS Concentration (ng/mL) IS RT Window (sec) IS Centroid Location (min) IS Start Scan IS Start Time (min) IS Stop Scan IS Stop Time (min) IS Integration Type IS Signal To Noise IS Peak Width (min) IS Mass Ranges (Da) IS Wavelength Ranges (nm) IS Channel IS Peak Width at 50% Height (min) IS Slope of Baseline (%/min) IS Processing Alg. IS Peak Asymmetry Use Record Record Modified Calculated Concentration for DAD (ng/mL) Relative Retention Time Dilution Factor Analyte Peak Area (counts) IS Peak Area (counts) Area Ratio Analyte Retention Time (min) IS Retention Time (min) IS Expected RT (min) File Name Acquisition Date Response Factor Analyte Integration Quality IS Integration Quality
WATER O6 MAM Unknown 0 190128 OF Panel 63 PRO1.dam MTP 96 Standard 1 1 MTP 96 Standard 1 N/A No Peak N/A 0.00 ng/mL N/A 0.00 N/A 2.54 30.0 0.00 0 0.00 0 0.00 No Peak N/A 0.00 N/A 328.200/165.200 Da N/A 0.00e+000 N/A 0.00 0.00e+000 Specify Parameters - MQIII 0.00 Tapentadol D-3 ng/mL N/A 9.18e+002 N/A 1.00 30.0 3.31 55 3.25 67 3.38 Base To Base N/A 0.132 225.100/107.000 Da N/A N/A 8.78e-002 -5.51e+001 Specify Parameters - MQIII 3.66 0 N/A 0.000 1.00 0.00 3210. 0.00 0.0 3.28 3.22 211005 MS1 P63 OF JP 699613-700684.wiff 10/5/2021 7:08:49 PM N/A 0.00 0.391
WATER O7-Aminoclonazepam Unknown 0 190128 OF Panel 63 PRO1.dam MTP 96 Standard 1 1 MTP 96 Standard 1 N/A No Peak N/A 0.00 ng/mL N/A 0.00 N/A 3.00 30.0 0.00 0 0.00 0 0.00 No Peak N/A 0.00 N/A 286.100/121.000 Da N/A 0.00e+000 N/A 0.00 0.00e+000 Specify Parameters - MQIII 0.00 7-Aminoclonazepam d-4 2 ng/mL N/A 7.40e+002 N/A 1.00 30.0 2.95 27 2.88 36 3.00 Base To Base N/A 0.115 290.100/121.100 Da N/A N/A 4.33e-002 0.00e+000 Specify Parameters - MQIII 0.517 0 N/A 0.000 1.00 0.00 2070. 0.00 0.0 2.96 3.03 211005 MS1 P63 OF JP 699613-700684.wiff 10/5/2021 7:08:49 PM N/A 0.00 0.937
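As far as I know, Compass has no option to skip a variable number of preamble rows, so one workaround is to trim the file before importing. A minimal Python sketch (assumptions: the header row is the first line starting with "Sample Name", and the file names are made up):

# trim_preamble.py: drop everything before the TSV header row
with open("export.tsv", encoding="utf-8") as src:
    lines = src.readlines()

# find the first line that looks like the header
# (assumption: the real header starts with "Sample Name")
start = next(i for i, line in enumerate(lines)
             if line.startswith("Sample Name"))

with open("export_trimmed.tsv", "w", encoding="utf-8") as dst:
    dst.writelines(lines[start:])

The trimmed file can then be imported normally with Compass or mongoimport.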
Initial Table
company time value
-------------------------
a 00:00:15.000 100
a 00:00:30.000 100
b 00:01:00.000 100
a 00:01:10.000 100
a 00:01:15.000 100
a 00:01:20.000 300
a 00:01:25.000 100
b 00:01:30.000 400
a 00:01:50.000 100
a 00:02:00.000 100
a 00:03:00.000 200
Let t = 1 hour.
For each row, I would like to look back t time.
Entries falling within t form a time window. I would like to get (max(time window) - min(time window)) / number of events.
For example, if it is 12:00 now, and there are a total of five events, 12:00, 11:50, 11:40, 11:30, 10:30, four of which fall in the window of t, i.e. 12:00, 11:50, 11:40, 11:30, the result will be (12:00 - 11:30) / 4.
Additionally, the window should only account for rows with the same value and company name.
Resultant Table
company  time          value  x
-------------------------------------------------------
a        00:00:15.000  100    0 (first event of company a)
a        00:00:30.000  100    15/2 = 7.5 ((0:30 - 0:15) / 2 events)
b        00:01:00.000  100    0 (first event of company b)
a        00:01:10.000  100    55/3 = 18.33 ((1:10 - 0:15) / 3 events)
a        00:01:15.000  100    60/4 = 15 ((1:15 - 0:15) / 4 events)
a        00:01:20.000  300    0 (different value)
a        00:01:25.000  100    55/4 = 13.75 ((1:25 - 0:30) / 4 events)
b        00:01:30.000  400    0 (different value and company)
a        00:01:50.000  100    40/4 = 10 ((1:50 - 1:10) / 4 events)
a        00:02:00.000  100    50/5 = 10 ((2:00 - 1:10) / 5 events)
a        00:03:00.000  200    0 (different value)
Any help will be greatly appreciated. If it helps, I asked a similar question, which worked splendidly: Sum values from the previous N number of days in KDB?
Table Query
([] company:`a`a`b`a`a`a`a`b`a`a`a; time: 00:00:15.000 00:00:30.000 00:01:00.000 00:01:10.000 00:01:15.000 00:01:20.000 00:01:25.000 00:01:30.000 00:01:50.000 00:02:00.000 00:03:00.000; v: 100 100 100 100 100 300 100 400 100 100 200)
You may wish to use the following:
q)update x:((time-time[time binr time-01:00:00])%60000)%count each v where each time within/:flip(time-01:00:00;time) by company,v from t
company time v x
---------------------------------
a 00:15:00.000 100 0
a 00:30:00.000 100 7.5
b 01:00:00.000 100 0
a 01:10:00.000 100 18.33333
a 01:15:00.000 100 15
a 01:20:00.000 300 0
a 01:25:00.000 100 13.75
b 01:30:00.000 400 0
a 01:50:00.000 100 10
a 02:00:00.000 100 10
a 03:00:00.000 200 0
It uses time binr time-01:00:00 to get, for each time, the index of the minimum time within the previous 1 hour.
Then (time-time[time binr time-01:00:00])%60000 gives the respective time range (i.e., time - min time) for each time, in minutes.
count each v where each time within/:flip(time-01:00:00;time) gives the number of rows within each window.
Dividing the two, and applying it all by company,v, restricts the calculation to rows with the same company and v values.
Hope this helps.
Kevin
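For readers who don't use q, here is a plain-Python sketch of the same per-row window logic (an illustration of the calculation only, not of the q code above; it uses the hour-scale times from the output tables):

from datetime import timedelta
from collections import defaultdict

rows = [
    ("a", timedelta(minutes=15), 100), ("a", timedelta(minutes=30), 100),
    ("b", timedelta(minutes=60), 100), ("a", timedelta(minutes=70), 100),
    ("a", timedelta(minutes=75), 100), ("a", timedelta(minutes=80), 300),
    ("a", timedelta(minutes=85), 100), ("b", timedelta(minutes=90), 400),
    ("a", timedelta(minutes=110), 100), ("a", timedelta(minutes=120), 100),
    ("a", timedelta(minutes=180), 200),
]

window = timedelta(hours=1)
seen = defaultdict(list)  # (company, value) -> times observed so far
for company, time, value in rows:
    group = seen[(company, value)]
    group.append(time)
    in_win = [u for u in group if u >= time - window]  # entries within the window
    x = (time - min(in_win)) / timedelta(minutes=1) / len(in_win)
    print(company, time, value, round(x, 2))  # 0.0, 7.5, 0.0, 18.33, 15.0, ...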
If your table is ordered by time, then the solution below will give you the required result. If it is not already sorted, you can order it by time using xasc.
I have also modified the table so that the times span different hour values.
q) t:([] company:`a`a`b`a`a`a`a`b`a`a`a; time: 00:15:00.000 00:30:00.000 01:00:00.000 01:10:00.000 01:15:00.000 01:20:00.000 01:25:00.000 01:30:00.000 01:50:00.000 02:00:00.000 03:00:00.000; v: 100 100 100 100 100 300 100 400 100 100 200)
q) f:{(`int$x-x i) % 60000*1+til[count x]-i:x binr x-01:00:00}
q) update res:f time by company,v from t
Output
company time v res
---------------------------------
a 00:15:00.000 100 0
a 00:30:00.000 100 7.5
b 01:00:00.000 100 0
a 01:10:00.000 100 18.33333
a 01:15:00.000 100 15
a 01:20:00.000 300 0
a 01:25:00.000 100 13.75
b 01:30:00.000 400 0
a 01:50:00.000 100 10
a 02:00:00.000 100 10
a 03:00:00.000 200 0
You can modify the function f to change the time window value, or change f to accept the window as an input parameter.
Explanation:
We pass the time vector, grouped by company and value, to a function f. It deducts 1 hour from each time value and then uses binr to get, for each input time, the index of the first time entry within its 1-hour window.
q) show i:x binr x-01:00:00
0 0 0 0 1 2 2
After that, it uses those indexes to calculate the count of entries in each window. The count is multiplied by 60000 because the time differences, once cast to int, are in milliseconds; dividing by 60000*count then gives minutes per event in one step.
q) 60000*1+til[count x]-i
60000 120000 180000 240000 240000 240000 300000
Finally, we subtract the min time from each time and divide by the above counts. Since the time vector is ordered (ascending), each input time is itself the max of its window, and the min is at the index given by i.
q) (`int$x-x i) % 60000*1+til[count x]-i
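For comparison, binr on a sorted vector behaves like Python's bisect_left. A small sketch of the same index and count arithmetic for the (a, 100) group (an illustration only, not the q code):

from bisect import bisect_left
from datetime import timedelta

x = [timedelta(minutes=m) for m in (15, 30, 70, 75, 85, 110, 120)]  # a/100 times

# like i:x binr x-01:00:00  ->  [0, 0, 0, 0, 1, 2, 2]
i = [bisect_left(x, t - timedelta(hours=1)) for t in x]

# like 1+til[count x]-i  ->  [1, 2, 3, 4, 4, 4, 5]
counts = [k - j + 1 for k, j in enumerate(i)]

# (max - min) in minutes, divided by the count  ->  [0, 7.5, 18.33, 15, 13.75, 10, 10]
res = [(t - x[j]) / timedelta(minutes=1) / c for t, j, c in zip(x, i, counts)]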
I'm having regular freezes in Emacs (every 15 seconds, lasting 2 to 3 seconds each).
I started the profiler with CPU:
- ... 4588 44%
Automatic GC 4588 44%
+ redisplay_internal (C function) 3405 33%
+ command-execute 1887 18%
+ timer-event-handler 384 3%
+ flyspell-post-command-hook 18 0%
+ undo-auto--add-boundary 8 0%
+ internal-echo-keystrokes-prefix 8 0%
+ delete-selection-pre-hook 7 0%
+ eldoc-schedule-timer 3 0%
tooltip-hide 3 0%
internal-timer-start-idle 2 0%
+ gui-set-selection 2 0%
+ global-visual-line-mode-check-buffers 1 0%
+ deactivate-mark 1 0%
How can I proceed from here to identify the root cause?
This kind of freeze is annoying when typing longer code or text.
Any help is highly appreciated.
Thanks
I want to try to draw a stress-strain curve for copper nanoparticles with LAMMPS.
This is my code.
I don't know whether it is correct or not.
Can anybody help me?
It has fix nve for relaxation, but before that it has fix nvt, and these two fixes can't be applied together.
# tensile test
echo both
dimension 3
boundary s p p
units metal
atom_style atomic
##########create box#######
region copperbox block -80 80 -40 40 -40 40 units box
create_box 1 copperbox
lattice fcc 3.61
region copper block -60 60 -20 20 -20 20 units box
create_atoms 1 region copper
mass 1 63.546
thermo_modify lost ignore
region left block -60 -50 -20 20 -20 20 units box
group left region left
region right block 50 60 -20 20 -20 20 units box
group right region right
group middle subtract all left right
timestep 0.002
pair_style eam
pair_coeff * * cu_u3.eam
velocity all create 300 4928459 mom yes rot yes dist uniform
velocity left create 0 4928459 mom yes rot yes dist uniform
velocity right create 0 4928459 mom yes rot yes dist uniform
fix 1 all nvt temp 300 300 0.01   # thermostat all atoms at 300 K
fix 2 left setforce 0 0 0         # zero forces on the left grip
fix 3 right setforce 0 0 0        # zero forces on the right grip
fix 4 all nve                     # note: integrates the same atoms as fix 1 (nvt)
thermo 100
run 1000
#####################################
compute 1 middle stress/atom
compute 2 middle reduce sum c_1[1]
dump 1 all custom 100 stress.txt mass x y z c_1[1] c_1[2] c_1[3] &
c_1[4] c_1[5] c_1[6]
dump 2 all xyz 100 dump.xyz
#####################################
variable a loop 2
label loopa
fix 8 right move linear 0.2 0 0 units box
fix 9 left move linear 0 0 0 units box
run 1000
#####################################
unfix 8
unfix 9
fix 8 right move linear 0 0 0 units box
fix 9 left move linear 0 0 0 units box
run 40000
#####################################
fix 10 all nve
thermo 100
run 10000
#####################################
variable sigma equal "c_2/(40000)*(10^4)"
variable l2 equal xcm(right,x)
variable l0 equal ${l2}
variable strain equal "(v_l2-v_l0)/(v_l0)*100"
next a
jump in.copper99 loopa
restart 1000 restart.*
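To actually draw the stress-strain curve, the script would also need to write v_strain and v_sigma out somewhere; it currently only defines them. Assuming they were logged as two columns (strain, then stress) to a file named stress_strain.txt (a hypothetical name; the script above does not write it yet), a minimal Python sketch to plot the curve:

import matplotlib.pyplot as plt

strain, stress = [], []
with open("stress_strain.txt") as f:  # hypothetical log of strain/stress pairs
    for line in f:
        s, sig = line.split()
        strain.append(float(s))
        stress.append(float(sig))

plt.plot(strain, stress)
plt.xlabel("strain (%)")
plt.ylabel("stress")
plt.show()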
Consider a magnetic tape drive with density = 1666 bpi and an IBG (inter-block gap) of 0.5 inch. How many inches of tape are required to store 50,000 records, each record being 120 bytes? The blocking factor is 45.
I don't understand how to convert the units into an answer...
You don't specify how many bits per byte, but if you assume 8 bits per byte, then each 120-byte record will be 120 * 8 = 960 bits.
The total number of bits in 50,000 records will be 50,000 * 960 = 48000000 bits. At a density of 1666 bpi, this is 48000000 / 1666 = 28811.5 inches.
Then, as your blocking factor (I'm assuming this means the number of records per data block) is 45, and you have 50,000 records, you will need 50,000 / 45 = 1112 blocks (rounded up), so there will be 1111 inter-block gaps (IBGs). Therefore, you will have 1111 * 0.5 inches of IBG = 555.5 inches.
In total, you will have 28811.5 + 555.5 = 29367 inches of tape.
You'll need to double-check my assumption of the terminology and also the underlying size of a single byte in bits.
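A minimal Python sketch of the same arithmetic, with those assumptions made explicit:

import math

density_bpi   = 1666   # assumption: bpi = bits per inch
ibg_inches    = 0.5
records       = 50_000
record_bytes  = 120
bits_per_byte = 8      # assumption: 8 bits per byte
blocking      = 45     # assumption: records per block

data_inches = records * record_bytes * bits_per_byte / density_bpi  # 28811.5
blocks = math.ceil(records / blocking)                              # 1112
gap_inches = (blocks - 1) * ibg_inches                              # 555.5
print(round(data_inches + gap_inches, 1))                           # ~29367.0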