I am trying to create a chart with bars and lines.
My data:
Typ    Value1 (bar)    Value2 (bar)    Target (line)
A1     95              32              50
A2     56              28              30
A3     17              58              40
The result should be a chart with Value1 and Value2 drawn as bars and Target overlaid as a line.
I spent hours playing around with Grafana, but I can't get this working. It looks so simple that I cannot imagine that it is not possible.
I am using Grafana 9.2.4 and the database is MSSQL.
OBJECTIVE: My plan is to run my model for ~60 ticks, and use the outcome of that run (i.e. the changes to the patches) as the starting point for all future runs. The idea behind this is that the first 60 ticks simulate a landscape from the past up until today (without any policy interventions). Then, from today on, I want to explore a range of policy scenarios, all starting with the same base conditions.
QUESTION: Do you know if there is a smart way to take stock of / save the outcomes of a run so that I can use them as a starting point for future runs, or do I need to assess the conditions after 60 ticks manually and then build an alternative setup-button that replicates those conditions?
I agree with Charles that export-world and import-world should work.
An alternative (see code below) would be to use a fixed random seed for the first 60 ticks of your alternative setup, then switch to a run-specific random seed, which would also work in a web-based run. (I suspect export-world doesn't work over the web.)
Here's an example of changing the random seed mid-flight. Be sure to generate and save the run-specific seed before you set the fixed seed, or every run will be identical!
Load this code and hit the setup and go buttons a few times to confirm it's working.
globals [ variable-random-seed fixed-ticks ]

to setup
  clear-all
  set variable-random-seed random 999999999  ;; nine nines works
  random-seed 123456789                      ;; any fixed number to use for low ticks
  set fixed-ticks 10                         ;; or 60 in your case
  print " ----------- fixed ------------- ===== -------- varies by run ----------- "
  reset-ticks
end

to go
  if ticks > 20 [ print "\n" stop ]
  write random 100
  if ticks = fixed-ticks [ write "=====" random-seed variable-random-seed ]
  tick
end
Sample output of three runs
----------- fixed ------------- ===== -------- varies by run
66 68 42 59 14 1 34 20 3 15 86 "=====" 1 80 87 54 85 51 37 53 94 69
----------- fixed ------------- ===== -------- varies by run
66 68 42 59 14 1 34 20 3 15 86 "=====" 94 72 60 26 18 90 65 50 65 18
----------- fixed ------------- ===== -------- varies by run
66 68 42 59 14 1 34 20 3 15 86 "=====" 23 93 75 68 17 44 17 30 99 94
For the following typical case:
n = 1000000;
r = randi(n,n,2);
(assume about 0.05% of the numbers are common between rows; n could be even tens of millions)
I am looking for a CPU- and memory-efficient solution to merge rows based on any common items (here, integer numbers). A list of sample codes in Python is available here, and a quick try at translating one into MATLAB can be found here.
In my attempts they take ages (minutes to hours), so I am looking for a faster solution.
For the above example, the typical output should look like (cell):
{
[1 90 34 67 ... 9]
[35 89]
[45000 23 828 130 8999 45326 ... 11]
...
}
Note also that I have tried to compile this as a MEX function, but it failed because MATLAB Coder does not support cell arrays.
Edit: A tiny demonstration example
%---------------------------------------
clc
n = 100;
r = randi(n,n,2); % random integers in [1,n], size(n,2)
%---------------------------------------
>> r
r =
82 17 % (1) 82 17
91 13 % (2) 91 13
13 32 % (3) 91 13 32 merged with (2), common 13
82 53 % (4) 82 17 53 merged with (1), common 82
64 17 % (5) 82 17 53 64 merged with (4), common 17
...
94 45
13 31 % (77) 91 13 32 31 merged with (3), common 13
57 51
47 52
2 13 % (80) 91 13 32 31 2 merged with (77), common 13
34 80
%---------------------------------------
c = merge(r); % the CPU- and memory-friendly solution being sought
%---------------------------------------
c =
[82 17 53 64]
[91 13 32 31 2]
...
You need an index.
In Python, use a dict. As for MATLAB: I'd not use MATLAB, because open source is the future, and MATLAB is dying out.
But Python is quite slow. You can likely get a 10x speedup by using e.g. Cython to translate and optimize the code in C. Avoid Python data types such as a list of int, because they are very memory intensive; NumPy has memory-efficient integer arrays.
When you get a new pair (a, b), you can use this dictionary to find the existing groups to merge, then update the dict after the merge.
Actually, for integers you should use an array instead of a dict.
The trickiest part is handling the case where both a and b already exist but belong to two large, different groups. There are some neat optimizations possible here if that isn't fast enough yet.
It's not clustering, but connected components.
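Here is a minimal sketch of that index idea in Python, assuming the integers run from 1 to n as in the question; it uses a union-find (disjoint-set) array with path compression, which is the "array instead of a dict" suggestion made concrete:

import numpy as np

def merge(pairs, n):
    # parent[i] == i means i is currently the root of its group.
    parent = np.arange(n + 1)

    def find(x):
        # Walk up to the root, compressing the path on the way back.
        root = x
        while parent[root] != root:
            root = parent[root]
        while parent[x] != root:
            parent[x], x = root, parent[x]
        return root

    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra  # merge the two groups

    # Collect the members of each component.
    groups = {}
    for x in range(1, n + 1):
        groups.setdefault(find(x), []).append(x)
    return [g for g in groups.values() if len(g) > 1]

pairs = [(82, 17), (91, 13), (13, 32), (82, 53), (64, 17)]
print(merge(pairs, 100))
# [[13, 32, 91], [17, 53, 64, 82]]

For input on the scale in the question, building a sparse adjacency matrix and calling scipy.sparse.csgraph.connected_components would likely beat a pure-Python loop over the pairs.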
What I would like to do is create a choropleth map which is darker or lighter based on the number of data points in a particular area.
I have the following data:
RO-B, 9
PL-MZ, 24
SE-C, 3
DE-NI, 5
PL-DS, 14
ES-CM, 11
RO-IS, 2
DE-BY, 51
SE-Z, 18
CH-BE, 10
PL-WP, 1
ES-IB, 1
DE-BW, 21
DE-BE, 24
DE-BB, 1
IE-M, 26
ES-PV, 1
DE-SN, 6
CH-ZH, 31
ES-GA, 1
NL-GE, 2
IE-U, 1
ES-AN, 4
FR-J, 82
DE-HH, 34
PL-PD, 1
PL-LD, 6
GB-WLS, 60
GB-ENG, 8619
RO-BV, 45
CH-VD, 2
PL-SL, 1
DE-HE, 17
SE-I, 1
HU-PE, 4
PL-MA, 4
SE-AB, 3
CH-BS, 20
ES-CT, 31
DE-TH, 25
IE-C, 1
CZ-ST, 1
DE-NW, 29
NL-NH, 3
DE-RP, 9
CZ-PR, 4
IE-L, 134
HU-BU, 10
RO-CJ, 1
GB-NIR, 29
ES-MD, 33
CH-LU, 11
GB-SCT, 172
CH-GE, 3
BE-BRU, 30
BE-VLG, 25
Each entry references the ISO3166-2 code of a country subdivision, and the number corresponds to the amount of data points affiliated with that region.
I've seen this project on GitHub which seems to also use the same ISO3166-2 to reference countries.
I'm trying to figure out how I could modify their code to display my data points, such that regions with higher numbers are darker and regions with lower numbers are lighter.
It seems like it should be possible. The first thing I tried was modifying this jsfiddle code, which seems very close to what I need, but I couldn't get it to work.
For instance this line:
osmeRegions.geoJSON('RU-MOW',
seems to directly reference an ISO3166-2 code, but it's not as simple as just changing that (or maybe it is, but I couldn't get it to work properly).
Does anyone know if I could possibly adapt the code from that project to create the map rendering I've described?
Or perhaps there's a different way to achieve the same goal?
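Independent of the mapping library, the count-to-color step itself seems simple; here is a rough Python sketch of what I mean (the function name is mine, and it log-scales the counts because GB-ENG's 8619 would otherwise wash out all the regions with single-digit counts):

import math

counts = {"GB-ENG": 8619, "GB-SCT": 172, "IE-L": 134, "FR-J": 82, "PL-WP": 1}

def shade(count, max_count):
    # Map a count to a hex color on a blue ramp: light for 1, dark
    # for max_count. Log scaling keeps small regions distinguishable.
    t = math.log(count + 1) / math.log(max_count + 1)  # 0..1
    level = int(255 * (1 - t))  # 255 = near white, 0 = darkest
    return "#{0:02x}{0:02x}ff".format(level)

max_count = max(counts.values())
for region, count in counts.items():
    print(region, shade(count, max_count))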
I'm using Stata and trying to compute conditional means based on time/date. For each store I want to calculate mean (inventory) per year. If there are missing year gaps, then I want to take the mean from the closest two dates' inventory values.
I've used the line below to get overall means per store, but I need more granularity.
egen mean_inv = mean(inventory), by (store)
I've also tried this loop with similar results:
by id, sort: gen v1'=_n'
forvalues x = 1/'=n'{
by store: sum inventory if v1==`x'
replace mean_inv= r(mean) if v1==`x'
}
Visually, I want mean inventory per store: (store id is not sequential)
5/1/2003 2/3/2006 8/9/2006 3/5/2007 6/9/2007 2/1/2008
13 18 12 15 24 11
[mean1] [mean2] [mean3] [mean4] [mean5]
store date inventory
1 16750 17
1 18234 16
1 15844 13
1 17111 14
1 17870 13
1 16929 13.5
1 17503 13
4 15987 18
4 15896 16
4 18211 16
4 17154 18
4 17931 24
4 16776 23
12 16426 26
12 17681 17
12 16386 17
12 16603 18
12 17034 16
12 17205 16
42 15798 18
42 16022 18
42 17496 16
42 17870 18
42 16204 18
42 16778 14
33 18053 23
33 16086 13
33 16450 21
33 17374 19
33 16814 19
33 15834 16
33 16167 16
56 17686 16
56 17623 18
56 17231 20
56 15978 16
56 16811 15
56 17861 20
It is hard to relate your code to the verbal description of your problem.
Your egen call calculates means by store, not by year.
Your longer fragment does not make complete sense, given the lack of definitions and at least one typo.
Note that your variable v1 contains identifiers that run from 1 upwards within groups of store, so it does not distinguish different values of store, as you (seem to) imply. It strains credulity that it produces results anywhere near those from the egen call.
n is not defined and the code evaluating it is presumably intended to be
`=n'
If you calculate
by store: sum inventory if v1 == `x'
several means will be calculated in turn but only the last to be calculated will be accessible as r(mean).
The sample data are unrelated to the problem: there is no year variable, and if the numeric dates are Stata daily dates, they all fall between 2003 and 2009.
Setting all that aside, suppose you have variables store, inventory and year. You can try
collapse inventory, by(store year)
fillin store year
ipolate inventory year, gen(inventory2) by(store)
The collapse produces a reduced dataset of means. The ipolate interpolates across gaps, as you ask. fillin may not be adequate to give all the store and year combinations you want and you may need to add further years manually before the interpolation. If you want to put these results back with the original data, that's a merge.
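If it helps to see the same collapse-then-interpolate logic outside Stata, here is a pandas sketch (my translation, with toy numbers, not code from the question):

import pandas as pd

# Toy stand-in for the store/year/inventory panel.
df = pd.DataFrame({
    "store":     [1, 1, 1, 4, 4],
    "year":      [2003, 2004, 2007, 2003, 2005],
    "inventory": [13.0, 15.0, 24.0, 18.0, 16.0],
})

# collapse inventory, by(store year): mean inventory per store-year.
means = df.groupby(["store", "year"])["inventory"].mean()

# fillin store year, then ipolate ..., by(store): complete the
# store-by-year grid and interpolate across the year gaps.
full_years = range(df["year"].min(), df["year"].max() + 1)
filled = (
    means.unstack("year")              # stores as rows, years as columns
         .reindex(columns=full_years)  # insert missing years as NaN
         .interpolate(axis=1)          # linear interpolation across years
         .stack()                      # back to long (store, year) form
)
print(filled)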
In total, this is a pretty messy question.
Cluster the given data and use any retrieval algorithm to show output as shown below.
(any clustering algorithm)
Euclidean distance may be used for finding closest cases.
Let a data file contain input vectors like:
caseid f1 f2 f3 f4
1 30 45 9.5 1500
2 35 45 8 1600
3 38 47 10 1550
4 32 50 9.5 1800
..
..
..
t1 30 45 9.5 1500 (target)
The output should look like:
NO. f1 f2 f3 f4
t1 30 45 9.5 1500 (target)
21 35 45 10 1500 (1st closest to target)
39 35 50 8 1500 (2nd closest)
56 35 42 9.5 1500 (3rd closest)
This looks like a classic nearest-neighbor query to me, not like clustering.
Also, I'd be careful with using Euclidean distance here. A difference of 1 in attribute f1 does not look like it is equal to a difference of 1 in attribute f4; the values seem to have completely different magnitudes.
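A minimal sketch of that in Python (my illustration, not code from the question): standardize each feature so that f1 and f4 contribute comparably, then rank the cases by Euclidean distance to the target:

import numpy as np

# Columns: f1, f2, f3, f4; rows are the cases.
data = np.array([
    [30, 45, 9.5, 1500],
    [35, 45, 8.0, 1600],
    [38, 47, 10.0, 1550],
    [32, 50, 9.5, 1800],
], dtype=float)
target = np.array([30, 45, 9.5, 1500], dtype=float)

# Standardize per feature so a difference of 1 in f1 is not
# swamped by the much larger magnitudes of f4.
mean = data.mean(axis=0)
std = data.std(axis=0)
z = (data - mean) / std
zt = (target - mean) / std

# Euclidean distance of every case to the target, then rank.
dist = np.linalg.norm(z - zt, axis=1)
for rank, i in enumerate(np.argsort(dist)[:3], start=1):
    print("closest #{}: case {}, distance {:.3f}".format(rank, i + 1, dist[i]))

For larger files, a KD-tree (for example scipy.spatial.cKDTree) built on the standardized data avoids scanning every case per query.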