I'm trying to plot data out of a CSV file, and my x range is not monotonic: it rises from 0.5 to 1.2, then falls back to 0.1 as the dataset progresses. Gnuplot lays the x axis out from 0.1 to 1.2 ascending, and I need to tell it to just take the data in the order it reads it. How do I do that? The data is a temperature plot against a varying water flow rate...
Thanks!
Here's a sample of my data to be plotted, sorry it's long:
0.558 34.327
0.698 34.429
1.264 34.577
1.258 34.690
1.252 34.864
1.274 35.010
1.271 35.097
1.286 38.223
1.306 38.186
1.291 38.114
1.288 38.100
1.294 38.049
1.288 38.005
1.297 37.467
1.297 37.464
1.299 37.437
1.298 37.399
1.281 37.406
0.606 37.456
0.607 37.449
0.601 37.483
0.594 37.495
0.594 37.587
0.607 37.625
0.607 37.737
0.596 37.798
0.599 37.918
0.334 38.015
0.348 38.073
0.355 38.171
0.345 38.259
0.348 38.386
0.142 39.230
0.137 39.305
0.126 39.374
0.115 39.371
0.131 39.423
0.132 39.369
Further into this data, the x and y variables both increment and decrement as the series goes on. I just need the x axis to show up in the order it's read from the file. I have attached an image of the Excel-generated graph that I'm trying to replace, and the gnuplot version (only one plot line) for comparison.
Try
set xtics rotate by -90
plot "-" using 0:2:xtic(1) with lines
0.558 34.327
0.698 34.429
1.264 34.577
1.258 34.690
1.252 34.864
1.274 35.010
1.271 35.097
1.286 38.223
1.306 38.186
1.291 38.114
1.288 38.100
1.294 38.049
1.288 38.005
1.297 37.467
1.297 37.464
1.299 37.437
1.298 37.399
1.281 37.406
0.606 37.456
0.607 37.449
0.601 37.483
0.594 37.495
0.594 37.587
0.607 37.625
0.607 37.737
0.596 37.798
0.599 37.918
0.334 38.015
0.348 38.073
0.355 38.171
0.345 38.259
0.348 38.386
0.142 39.230
0.137 39.305
0.126 39.374
0.115 39.371
0.131 39.423
0.132 39.369
EOF
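This works because column 0 is the row number, so the points are plotted in file order, and xtic(1) uses the flow-rate values from column 1 only as tick labels. To read from your actual file instead of inline data, something along these lines should work (a sketch; the filename is an assumption):
set datafile separator ","          # only needed if the file is comma-separated
set xtics rotate by -90
plot "data.csv" using 0:2:xtic(1) with lines
With many rows the labels will crowd; with a recent gnuplot you can label, say, only every fifth row with xtic(int($0) % 5 == 0 ? strcol(1) : "").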
I'm looking for some help with selecting a value from a weighted list based on a randomly drawn number.
Think of my list as an empirical cumulative distribution function for a random variable, where the random variable is flood-prob. The first number in each pair of the list represents a depth and the last number in the pair is the weight/probability.
Suppose the random number drawn for the flood-prob variable is 0.66; how would I then select the corresponding depth from my list, or the approximate value?
extensions [ rnd ]
globals [probabilities depths ecdf flood-prob]
to-report create-ecdf
set probabilities [0.056 0.061 0.073 0.083 0.087 0.095 0.103 0.110 0.115 0.117 0.121 0.125 0.132 0.135 0.141 0.144 0.146 0.153 0.158 0.161 0.166 0.171 0.176 0.183 0.186 0.193 0.199 0.205 0.206 0.211 0.219 0.225 0.229 0.232 0.242 0.246 0.253 0.256 0.261 0.270 0.273 0.280 0.290 0.293 0.300 0.309 0.315 0.317 0.328 0.332 0.335 0.342 0.348 0.357 0.360 0.369 0.379 0.383 0.392 0.399 0.403 0.415 0.424 0.429 0.432 0.442 0.447 0.450 0.462 0.463 0.472 0.486 0.491 0.499 0.507 0.520 0.525 0.540 0.548 0.561 0.571 0.583 0.596 0.605 0.612 0.628 0.636 0.646 0.661 0.679 0.685 0.695 0.715 0.726 0.735 0.752 0.762 0.771 0.790 0.800 0.814 0.826 0.836 0.847 0.859 0.872 0.880 0.883 0.898 0.904 0.911 0.920 0.926 0.935 0.939 0.953 0.957 0.967 0.974 0.979 0.986 0.987 0.993 0.997 0.999]
set depths [2.80399990081787 2.82999992370605 2.88599991798401 2.92899990081787 2.94400000572205 2.97499990463257 3.00300002098083 3.02500009536743 3.04200005531311 3.04800009727478 3.06100010871887 3.07200002670288 3.09299993515015 3.1010000705719 3.11800003051758 3.125 3.13100004196167 3.14800000190735 3.16100001335144 3.17000007629395 3.18099999427795 3.19400000572205 3.20600008964539 3.22300004959106 3.22900009155273 3.24499988555908 3.2590000629425 3.27099990844727 3.27300000190735 3.28299999237061 3.29999995231628 3.31200003623962 3.3199999332428 3.32699990272522 3.34599995613098 3.35400009155273 3.36800003051758 3.37400007247925 3.3840000629425 3.3989999294281 3.40599989891052 3.41899991035461 3.43600010871887 3.44099998474121 3.4539999961853 3.46900010108948 3.48000001907349 3.48399996757507 3.50200009346008 3.50799989700317 3.51399993896484 3.52500009536743 3.53500008583069 3.54999995231628 3.5550000667572 3.5699999332428 3.58599996566772 3.59299993515015 3.60700011253357 3.61800003051758 3.62400007247925 3.64299988746643 3.65700006484985 3.66499996185303 3.66899991035461 3.68499994277954 3.69300007820129 3.69700002670288 3.71499991416931 3.71799993515015 3.73099994659424 3.75200009346008 3.75999999046326 3.77300000190735 3.78500008583069 3.80399990081787 3.81299996376038 3.83500003814697 3.84800004959106 3.86800003051758 3.88299989700317 3.90199995040894 3.92199993133545 3.93600010871887 3.94799995422363 3.97300004959106 3.98699998855591 4.00299978256226 4.02799987792969 4.05800008773804 4.06799983978271 4.08500003814697 4.11999988555908 4.1399998664856 4.15799999237061 4.19000005722046 4.20900011062622 4.22800016403198 4.26599979400635 4.28800010681152 4.31799983978271 4.34700012207031 4.37099981307983 4.39900016784668 4.42999982833862 4.46600008010864 4.49100017547607 4.5 4.55100011825562 4.57000017166138 4.59600019454956 4.63199996948242 4.65500020980835 4.69799995422363 4.7189998626709 4.7979998588562 4.82399988174438 4.89799976348877 4.95699977874756 5.02099990844727 5.11899995803833 5.13100004196167 5.27099990844727 5.44799995422363 5.64300012588501]
set ecdf (map list depths probabilities) ;; First number is depth, second number is probability.
set flood-prob random-normal 0.26 0.22
;; Need help from here onwards in selecting the appropriate value...
end
I have tried using rnd:weighted-one-of-list, but it is unsuitable. I think I need some code that uses the drawn value of flood-prob to find the list index whose probability is approximately equal to it, from which I can extract the corresponding depth.
I've been stuck on this for a while so any help is greatly appreciated, thanks in advance!
I second Luke's remarks on (1) making sure you actually want random-normal rather than just a uniform 0-to-1 sample, considering that the values in the probabilities list already shape the distribution (there is more to this; I'll come back to it at the end of the answer), and (2) moving the creation of your global values to setup so that it happens only once.
That said, here I propose a different, lightweight approach to achieve your goal, one that relies only on the two existing lists and might also have the advantage of readability.
In English: create your random value and compare it to each value in the probabilities list, in order, until you find a value in the probabilities list that is not smaller than the random value. Keep track of the position of the item (from probabilities) being compared. When you find the probability value that is not smaller than the random value, extract the item from depths that is found in the same position.
In NetLogo:
globals [ depths probabilities ]
to setup
clear-all
set depths [2.80399990081787 2.82999992370605 2.88599991798401 2.92899990081787 2.94400000572205 2.97499990463257 3.00300002098083 3.02500009536743 3.04200005531311 3.04800009727478 3.06100010871887 3.07200002670288 3.09299993515015 3.1010000705719 3.11800003051758 3.125 3.13100004196167 3.14800000190735 3.16100001335144 3.17000007629395 3.18099999427795 3.19400000572205 3.20600008964539 3.22300004959106 3.22900009155273 3.24499988555908 3.2590000629425 3.27099990844727 3.27300000190735 3.28299999237061 3.29999995231628 3.31200003623962 3.3199999332428 3.32699990272522 3.34599995613098 3.35400009155273 3.36800003051758 3.37400007247925 3.3840000629425 3.3989999294281 3.40599989891052 3.41899991035461 3.43600010871887 3.44099998474121 3.4539999961853 3.46900010108948 3.48000001907349 3.48399996757507 3.50200009346008 3.50799989700317 3.51399993896484 3.52500009536743 3.53500008583069 3.54999995231628 3.5550000667572 3.5699999332428 3.58599996566772 3.59299993515015 3.60700011253357 3.61800003051758 3.62400007247925 3.64299988746643 3.65700006484985 3.66499996185303 3.66899991035461 3.68499994277954 3.69300007820129 3.69700002670288 3.71499991416931 3.71799993515015 3.73099994659424 3.75200009346008 3.75999999046326 3.77300000190735 3.78500008583069 3.80399990081787 3.81299996376038 3.83500003814697 3.84800004959106 3.86800003051758 3.88299989700317 3.90199995040894 3.92199993133545 3.93600010871887 3.94799995422363 3.97300004959106 3.98699998855591 4.00299978256226 4.02799987792969 4.05800008773804 4.06799983978271 4.08500003814697 4.11999988555908 4.1399998664856 4.15799999237061 4.19000005722046 4.20900011062622 4.22800016403198 4.26599979400635 4.28800010681152 4.31799983978271 4.34700012207031 4.37099981307983 4.39900016784668 4.42999982833862 4.46600008010864 4.49100017547607 4.5 4.55100011825562 4.57000017166138 4.59600019454956 4.63199996948242 4.65500020980835 4.69799995422363 4.7189998626709 4.7979998588562 4.82399988174438 4.89799976348877 4.95699977874756 5.02099990844727 5.11899995803833 5.13100004196167 5.27099990844727 5.44799995422363 5.64300012588501]
set probabilities [0.056 0.061 0.073 0.083 0.087 0.095 0.103 0.110 0.115 0.117 0.121 0.125 0.132 0.135 0.141 0.144 0.146 0.153 0.158 0.161 0.166 0.171 0.176 0.183 0.186 0.193 0.199 0.205 0.206 0.211 0.219 0.225 0.229 0.232 0.242 0.246 0.253 0.256 0.261 0.270 0.273 0.280 0.290 0.293 0.300 0.309 0.315 0.317 0.328 0.332 0.335 0.342 0.348 0.357 0.360 0.369 0.379 0.383 0.392 0.399 0.403 0.415 0.424 0.429 0.432 0.442 0.447 0.450 0.462 0.463 0.472 0.486 0.491 0.499 0.507 0.520 0.525 0.540 0.548 0.561 0.571 0.583 0.596 0.605 0.612 0.628 0.636 0.646 0.661 0.679 0.685 0.695 0.715 0.726 0.735 0.752 0.762 0.771 0.790 0.800 0.814 0.826 0.836 0.847 0.859 0.872 0.880 0.883 0.898 0.904 0.911 0.920 0.926 0.935 0.939 0.953 0.957 0.967 0.974 0.979 0.986 0.987 0.993 0.997 0.999]
end
to-report depth-based-on-probability
let this-random-number random-normal 0.26 0.22
let index 0
while [this-random-number > item index probabilities] [
set index index + 1
]
report item index depths
end
However, note that while this is a fast and light approach, a few aspects still need fixing, and they require some further consideration. See below.
Notes regarding the way you draw your random number
random-normal can generate values that are outside of your range of probabilities. This is true in general, but also very much in particular for your random-normal 0.26 0.22.
It can give negative numbers, but negative numbers are not covered by your probabilities list. You would not get an error in that case, because this-random-number > item index probabilities will evaluate as FALSE on the first iteration of while and the procedure will just report the first element of depths. However, this makes me wonder whether occasionally having negative values is something you planned for, considering that its effect is to increase the probability that the first element of depths is extracted (i.e. increase it beyond the level specified in probabilities, which is 0.056). Using random-float 1 instead of random-normal will eliminate this issue while still following the shape of your distribution, because the shape is given by the values in probabilities.
I see that your probabilities reach 0.999. There is no guarantee that random-normal 0.26 0.22 will give values lower than 0.999; it can give values that are not only greater than 0.999 but even greater than 1. This will give you an error: at some point the index will grow to the point that item index probabilities does not exist, and the model will stop with a runtime error. If my understanding is correct, your intention is for probabilities and depths to map all the possible events. Again, the correct approach is to use random-float 1 (which guarantees a value 0 <= x < 1) AND to set the last item of your probabilities list to 1 (not 0.999).
Summarising the above, the resulting code would look like:
globals [ depths probabilities ]
to setup
clear-all
set depths [2.80399990081787 2.82999992370605 2.88599991798401 2.92899990081787 2.94400000572205 2.97499990463257 3.00300002098083 3.02500009536743 3.04200005531311 3.04800009727478 3.06100010871887 3.07200002670288 3.09299993515015 3.1010000705719 3.11800003051758 3.125 3.13100004196167 3.14800000190735 3.16100001335144 3.17000007629395 3.18099999427795 3.19400000572205 3.20600008964539 3.22300004959106 3.22900009155273 3.24499988555908 3.2590000629425 3.27099990844727 3.27300000190735 3.28299999237061 3.29999995231628 3.31200003623962 3.3199999332428 3.32699990272522 3.34599995613098 3.35400009155273 3.36800003051758 3.37400007247925 3.3840000629425 3.3989999294281 3.40599989891052 3.41899991035461 3.43600010871887 3.44099998474121 3.4539999961853 3.46900010108948 3.48000001907349 3.48399996757507 3.50200009346008 3.50799989700317 3.51399993896484 3.52500009536743 3.53500008583069 3.54999995231628 3.5550000667572 3.5699999332428 3.58599996566772 3.59299993515015 3.60700011253357 3.61800003051758 3.62400007247925 3.64299988746643 3.65700006484985 3.66499996185303 3.66899991035461 3.68499994277954 3.69300007820129 3.69700002670288 3.71499991416931 3.71799993515015 3.73099994659424 3.75200009346008 3.75999999046326 3.77300000190735 3.78500008583069 3.80399990081787 3.81299996376038 3.83500003814697 3.84800004959106 3.86800003051758 3.88299989700317 3.90199995040894 3.92199993133545 3.93600010871887 3.94799995422363 3.97300004959106 3.98699998855591 4.00299978256226 4.02799987792969 4.05800008773804 4.06799983978271 4.08500003814697 4.11999988555908 4.1399998664856 4.15799999237061 4.19000005722046 4.20900011062622 4.22800016403198 4.26599979400635 4.28800010681152 4.31799983978271 4.34700012207031 4.37099981307983 4.39900016784668 4.42999982833862 4.46600008010864 4.49100017547607 4.5 4.55100011825562 4.57000017166138 4.59600019454956 4.63199996948242 4.65500020980835 4.69799995422363 4.7189998626709 4.7979998588562 4.82399988174438 4.89799976348877 4.95699977874756 5.02099990844727 5.11899995803833 5.13100004196167 5.27099990844727 5.44799995422363 5.64300012588501]
set probabilities [0.056 0.061 0.073 0.083 0.087 0.095 0.103 0.110 0.115 0.117 0.121 0.125 0.132 0.135 0.141 0.144 0.146 0.153 0.158 0.161 0.166 0.171 0.176 0.183 0.186 0.193 0.199 0.205 0.206 0.211 0.219 0.225 0.229 0.232 0.242 0.246 0.253 0.256 0.261 0.270 0.273 0.280 0.290 0.293 0.300 0.309 0.315 0.317 0.328 0.332 0.335 0.342 0.348 0.357 0.360 0.369 0.379 0.383 0.392 0.399 0.403 0.415 0.424 0.429 0.432 0.442 0.447 0.450 0.462 0.463 0.472 0.486 0.491 0.499 0.507 0.520 0.525 0.540 0.548 0.561 0.571 0.583 0.596 0.605 0.612 0.628 0.636 0.646 0.661 0.679 0.685 0.695 0.715 0.726 0.735 0.752 0.762 0.771 0.790 0.800 0.814 0.826 0.836 0.847 0.859 0.872 0.880 0.883 0.898 0.904 0.911 0.920 0.926 0.935 0.939 0.953 0.957 0.967 0.974 0.979 0.986 0.987 0.993 0.997 1]
end
to-report depth-based-on-probability
let this-random-number random-float 1
let index 0
while [this-random-number >= item index probabilities] [
set index index + 1
]
report item index depths
end
Note above the use of random-float and the new last item of probabilities.
However, also note an important difference in the condition for while: I used >= instead of simply >. This has to do with how probability conditions built on random-float (or random, when working with integers) are written in NetLogo. random x reports a value greater than or equal to 0, but strictly less than x. This means a 50% chance is represented by random 2 < 1, and you can write a 12% chance as random 100 < 12 or random-float 1 < 0.12.
It follows that, looking for example at the first element of your lists, you want to extract the depth of 2.80399990081787 if random-float 1 < 0.056. That means you want the condition to evaluate as FALSE when this-random-number < item index probabilities, which means it has to evaluate as TRUE when this-random-number >= item index probabilities.
Having the last item of probabilities as 1 ensures that this-random-number >= item index probabilities is always FALSE for the last item (because random-float 1 is always strictly less than 1), which is exactly what you need when using while loops in order to avoid infinite loops or, in your case, runtime errors (as explained in point 2 above).
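As a quick sanity check (a sketch; it assumes setup and depth-based-on-probability are defined as above), you can run a few draws from the Command Center and confirm that the loop always terminates and only reports values from depths:
setup
repeat 10 [ show depth-based-on-probability ]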
If you're using this ecdf the way I'm imagining (to generate random points within your empirical distribution), you may want to confirm that random-normal is the correct distribution to sample for your generation, rather than a uniform draw from 0 to 1 used to sample the inverse CDF; but I may be misunderstanding your purpose here (and I haven't looked at the shape of your provided points).
If you just want to match the closest values between your flood-prob and the list of possible probabilities, you could compute the absolute difference between flood-prob and each list entry, take the minimum, and use its index to pull the corresponding value from depths:
globals [ probabilities depths ]
to setup
ca
set probabilities [0.056 0.061 0.073 0.083 0.087 0.095 0.103 0.110 0.115 0.117 0.121 0.125 0.132 0.135 0.141 0.144 0.146 0.153 0.158 0.161 0.166 0.171 0.176 0.183 0.186 0.193 0.199 0.205 0.206 0.211 0.219 0.225 0.229 0.232 0.242 0.246 0.253 0.256 0.261 0.270 0.273 0.280 0.290 0.293 0.300 0.309 0.315 0.317 0.328 0.332 0.335 0.342 0.348 0.357 0.360 0.369 0.379 0.383 0.392 0.399 0.403 0.415 0.424 0.429 0.432 0.442 0.447 0.450 0.462 0.463 0.472 0.486 0.491 0.499 0.507 0.520 0.525 0.540 0.548 0.561 0.571 0.583 0.596 0.605 0.612 0.628 0.636 0.646 0.661 0.679 0.685 0.695 0.715 0.726 0.735 0.752 0.762 0.771 0.790 0.800 0.814 0.826 0.836 0.847 0.859 0.872 0.880 0.883 0.898 0.904 0.911 0.920 0.926 0.935 0.939 0.953 0.957 0.967 0.974 0.979 0.986 0.987 0.993 0.997 0.999]
set depths [2.80399990081787 2.82999992370605 2.88599991798401 2.92899990081787 2.94400000572205 2.97499990463257 3.00300002098083 3.02500009536743 3.04200005531311 3.04800009727478 3.06100010871887 3.07200002670288 3.09299993515015 3.1010000705719 3.11800003051758 3.125 3.13100004196167 3.14800000190735 3.16100001335144 3.17000007629395 3.18099999427795 3.19400000572205 3.20600008964539 3.22300004959106 3.22900009155273 3.24499988555908 3.2590000629425 3.27099990844727 3.27300000190735 3.28299999237061 3.29999995231628 3.31200003623962 3.3199999332428 3.32699990272522 3.34599995613098 3.35400009155273 3.36800003051758 3.37400007247925 3.3840000629425 3.3989999294281 3.40599989891052 3.41899991035461 3.43600010871887 3.44099998474121 3.4539999961853 3.46900010108948 3.48000001907349 3.48399996757507 3.50200009346008 3.50799989700317 3.51399993896484 3.52500009536743 3.53500008583069 3.54999995231628 3.5550000667572 3.5699999332428 3.58599996566772 3.59299993515015 3.60700011253357 3.61800003051758 3.62400007247925 3.64299988746643 3.65700006484985 3.66499996185303 3.66899991035461 3.68499994277954 3.69300007820129 3.69700002670288 3.71499991416931 3.71799993515015 3.73099994659424 3.75200009346008 3.75999999046326 3.77300000190735 3.78500008583069 3.80399990081787 3.81299996376038 3.83500003814697 3.84800004959106 3.86800003051758 3.88299989700317 3.90199995040894 3.92199993133545 3.93600010871887 3.94799995422363 3.97300004959106 3.98699998855591 4.00299978256226 4.02799987792969 4.05800008773804 4.06799983978271 4.08500003814697 4.11999988555908 4.1399998664856 4.15799999237061 4.19000005722046 4.20900011062622 4.22800016403198 4.26599979400635 4.28800010681152 4.31799983978271 4.34700012207031 4.37099981307983 4.39900016784668 4.42999982833862 4.46600008010864 4.49100017547607 4.5 4.55100011825562 4.57000017166138 4.59600019454956 4.63199996948242 4.65500020980835 4.69799995422363 4.7189998626709 4.7979998588562 4.82399988174438 4.89799976348877 4.95699977874756 5.02099990844727 5.11899995803833 5.13100004196167 5.27099990844727 5.44799995422363 5.64300012588501]
reset-ticks
end
to-report pull-depth-from-dist
; Random flood dist value
let flood-prob random-normal 0.26 0.22
; Select the closest prob (could also average the depths values instead if preferred)
let dif-vals map [ i -> abs ( i - flood-prob ) ] probabilities
let dif-index position ( min dif-vals ) dif-vals
let closest-depth item dif-index depths
print word "flood-prob: " flood-prob
print word "closest prob: " item dif-index probabilities
report closest-depth
end
Note that I've moved the definition of probabilities / depths to the setup so that it's only being called on setup, rather than within the reporter.
I have two reflection coefficient equations, R1 and R2, derived from K, with the condition that the absolute value must be below 1. I use an if statement for this, but when I plot the graph, the absolute reflection coefficient is still above 1. (K is a matrix with 1 column and 201 rows.)
R1=K+sqrt(K.^2-1);
R2=K-sqrt(K.^2-1);
if abs(R1)<1
r=R1;
else
r=R2;
end
This is the K data from Excel:
real      imaginary
-0.7536 0.0512
-0.802 0.0426
-0.8496 0.0408
-0.8872 0.0327
-0.927 0.0338
-0.9575 0.0242
-0.979 0.0174
-0.9977 0.0113
-1.0031 0.0029
-1.0012 -0.007
-0.9876 -0.0167
-0.9654 -0.0249
-0.9299 -0.0401
-0.8797 -0.0488
-0.8176 -0.0623
-0.7297 -0.0782
-0.6458 -0.0865
-0.5351 -0.1051
-0.4098 -0.1197
-0.2701 -0.1349
-0.1177 -0.1489
0.0536 -0.1699
0.213 -0.1853
0.3933 -0.1921
0.5519 -0.1857
0.7128 -0.1896
0.8511 -0.1712
0.9468 -0.1452
1.0222 -0.0943
1.0375 -0.04
1.0134 0.0365
0.9361 0.1255
0.8122 0.2168
0.6622 0.3108
0.4657 0.3774
0.2577 0.4497
0.0431 0.4775
-0.1463 0.5093
-0.3442 0.4999
-0.5203 0.4782
-0.6692 0.4417
-0.7781 0.3822
-0.8856 0.3293
-0.9703 0.2615
-1.0187 0.193
-1.0524 0.1254
-1.0614 0.0557
-1.0539 -0.0016
-1.0297 -0.0698
-0.9879 -0.1212
-0.9355 -0.1829
-0.8721 -0.2298
-0.8011 -0.2783
-0.7232 -0.325
-0.6401 -0.3586
-0.5455 -0.4008
-0.4429 -0.43
-0.3524 -0.4433
-0.2455 -0.4769
-0.1336 -0.4863
-0.0391 -0.5073
0.0779 -0.5105
0.1776 -0.5196
0.2869 -0.5152
0.3893 -0.5084
0.4831 -0.4978
0.5888 -0.4907
0.6822 -0.4574
0.7614 -0.4381
0.8484 -0.4017
0.9098 -0.3585
0.9771 -0.3172
1.0268 -0.2607
1.0667 -0.2102
1.0969 -0.1464
1.1115 -0.0724
1.1141 -0.0019
1.0981 0.0838
1.0645 0.1546
1.0135 0.2457
0.9409 0.3332
0.8657 0.4061
0.7519 0.4973
0.6426 0.5635
0.5072 0.6302
0.3633 0.6782
0.2148 0.7161
0.0382 0.7573
-0.1051 0.7395
-0.273 0.7359
-0.4273 0.7154
-0.5653 0.6794
-0.6971 0.6279
-0.8202 0.555
-0.905 0.493
-0.9996 0.4155
-1.0716 0.3239
-1.1006 0.2549
-1.1444 0.1479
-1.1464 0.0722
-1.1493 -0.0031
-1.1282 -0.0814
-1.1040 -0.1603
-1.0645 -0.2219
-1.0187 -0.2787
-0.9514 -0.3223
-0.8878 -0.3841
-0.8225 -0.42
-0.7415 -0.4606
-0.6607 -0.4889
-0.5577 -0.5319
-0.482 -0.5512
-0.3775 -0.5614
-0.2918 -0.5798
-0.1621 -0.5712
-0.0979 -0.5917
0.0149 -0.5559
0.1062 -0.5734
0.2142 -0.5648
0.3159 -0.5363
0.3844 -0.5302
0.5019 -0.5066
0.5805 -0.4709
0.6626 -0.4506
0.7482 -0.4117
0.8005 -0.363
0.8799 -0.3378
0.9349 -0.2889
0.9883 -0.2449
1.0306 -0.1946
1.0643 -0.1373
1.0870 -0.1025
1.0935 -0.0389
1.0840 0.0184
1.0732 0.0639
1.0333 0.1274
0.9906 0.1739
0.9243 0.2293
0.8455 0.2752
0.7527 0.3035
0.6292 0.3394
0.5384 0.3524
0.3808 0.3845
0.2509 0.4067
0.0931 0.4004
-0.0423 0.3839
-0.2123 0.377
-0.3666 0.3537
-0.4838 0.3309
-0.6157 0.288
-0.7211 0.2604
-0.8322 0.2172
-0.8947 0.1791
-0.9618 0.1366
-1.0024 0.0932
-1.0299 0.0493
-1.0415 0.0099
-1.0333 -0.0243
-1.0092 -0.0612
-0.9798 -0.0906
-0.9321 -0.1302
-0.8796 -0.1472
-0.8121 -0.17
-0.7414 -0.1886
-0.6649 -0.2019
-0.5907 -0.2149
-0.4793 -0.2271
-0.4011 -0.2224
-0.3121 -0.2408
-0.1948 -0.2343
-0.0997 -0.2322
0.008 -0.2328
0.1304 -0.2224
0.2662 -0.2213
0.4093 -0.2298
0.553 -0.2406
0.7094 -0.3018
0.8613 -0.383
0.9745 -0.5634
0.9796 -0.8226
0.7781 -0.9412
0.6424 -0.8495
0.6264 -0.8147
0.6071 -0.6706
0.6682 -0.6029
0.6759 -0.5596
0.71 -0.5218
0.7479 -0.4825
0.7691 -0.4476
0.8264 -0.4056
0.8412 -0.3912
0.8511 -0.3813
0.8689 -0.3425
0.899 -0.3375
0.8827 -0.3198
0.9024 -0.3164
0.929 -0.2876
0.9106 -0.2855
0.9695 -0.2079
1.0342 -0.5353
0.8692 -0.5046
I am not 100% sure exactly what you are asking, but I believe the problem you are experiencing is that abs(r) is still above 1?
K is a complex number, where the first column is the real part and the second column is the imaginary part; do I have that right? So the first K value is -0.7536+0.0512i, right?
OK, so did you perhaps intend to cycle through each position of the R1 vector and check whether each element is less than 1? Because right now, if abs(R1)<1 is true only when every value in the entire R1 vector has magnitude less than 1; if even one does not, the else branch runs and r is set to the entire R2 vector.
If you want to go through each position in the vector, you should do this:
R1=K+sqrt(K.^2-1);
R2=K-sqrt(K.^2-1);
l=length(R1);
r=zeros(l,1);          % preallocate the result vector
for p=1:l
    if abs(R1(p))<1    % keep whichever root has magnitude below 1
        r(p)=R1(p);
    else
        r(p)=R2(p);
    end
end
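If you prefer to avoid the explicit loop, a logically equivalent vectorized version uses logical indexing (a sketch, assuming R1 and R2 are column vectors as above):
r = R1;                  % start with the first root everywhere
mask = abs(R1) >= 1;     % positions where R1 violates |r| < 1
r(mask) = R2(mask);      % use the second root there
Logical indexing applies both branches of the if in one pass, element by element.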
What should the VIF limit be (e.g. 4, 5, 6, 7, ...) for a linear regression model that has 30 discrete input variables, 4 continuous input variables, and 1 continuous dependent variable?
It's confusing to see that different researchers recommend different VIF cutoffs.
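For reference, the VIF of predictor j comes from regressing that predictor on all the other predictors: VIF_j = 1 / (1 - R_j^2), where R_j^2 is the coefficient of determination of that auxiliary regression. A VIF of 5 corresponds to R_j^2 = 0.8 and a VIF of 10 to R_j^2 = 0.9, which is why the commonly cited cutoffs of 5 and 10 are rules of thumb rather than hard limits.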
I have tried it in SPSS, creating dummy variables for the discrete variables. Here is the result:
Coefficients
(B, Std. Error: unstandardized coefficients; Beta: standardized coefficient; Tolerance, VIF: collinearity statistics)
Variable              B   Std. Error   Beta       t   Sig.   Tolerance     VIF
(Constant) .076 1.262 .060 .952
absences .014 .012 .020 1.170 .243 .776 1.289
G1 .129 .039 .109 3.326 .001 .214 4.665
G2 .857 .036 .773 23.541 .000 .215 4.645
age .027 .050 .010 .548 .584 .649 1.540
school_new -.170 .135 -.025 -1.265 .206 .588 1.702
sex_new .150 .121 .023 1.239 .216 .680 1.471
address_new -.119 .127 -.017 -.937 .349 .712 1.405
famsize_new .038 .118 .005 .320 .749 .830 1.205
pstatus_new .004 .169 .000 .025 .980 .786 1.272
schoolsup_new .197 .178 .019 1.105 .269 .811 1.234
famsup_new -.070 .110 -.011 -.632 .528 .836 1.197
paid_new .147 .222 .011 .659 .510 .865 1.156
activities_new -.009 .108 -.001 -.087 .931 .830 1.204
nursery_new .070 .132 .009 .531 .596 .879 1.137
higher_new -.124 .189 -.012 -.655 .513 .712 1.404
internet_new -.115 .134 -.015 -.858 .391 .755 1.324
romantic_new .022 .112 .003 .200 .842 .832 1.202
M_prim_edu -.046 .556 -.006 -.083 .934 .046 21.942
M_5th_TO_9th -.114 .560 -.016 -.203 .839 .038 26.474
M_secon_edu -.143 .566 -.018 -.253 .801 .045 22.328
M_higher_edu -.309 .583 -.042 -.529 .597 .036 27.719
F_prim_edu -.454 .518 -.062 -.875 .382 .046 21.795
F_5th_TO_9th -.318 .522 -.046 -.608 .543 .041 24.624
F_secon_edu -.300 .532 -.037 -.563 .574 .053 18.873
F_higher_edu -.269 .547 -.033 -.492 .623 .051 19.613
M_health_job -.195 .253 -.025 -.770 .441 .229 4.373
M_other_job .050 .256 .004 .197 .844 .541 1.849
M_services_job -.273 .225 -.041 -1.211 .226 .199 5.016
M_teacher_job -.013 .226 -.002 -.055 .956 .286 3.496
F_health_job .470 .335 .036 1.400 .162 .355 2.814
F_other_job .003 .362 .000 .008 .993 .539 1.854
F_services_job .151 .269 .023 .563 .574 .136 7.336
F_teacher_job .015 .275 .002 .054 .957 .159 6.293
reason_school_repu .239 .194 .031 1.235 .217 .364 2.746
reason_course_pref .176 .202 .023 .873 .383 .347 2.886
reason_other .364 .175 .056 2.074 .039 .320 3.129
guard_mother -.030 .129 -.004 -.234 .815 .699 1.431
guard_other .311 .259 .023 1.204 .229 .612 1.635
tra_time_15_TO_30min .043 .120 .006 .356 .722 .764 1.309
tra_time_30_TO_60min .274 .206 .023 1.327 .185 .745 1.342
tra_time_GT_60min .791 .351 .038 2.254 .025 .816 1.225
study_2_TO_5hrs_time .171 .129 .026 1.325 .186 .584 1.713
study_5_TO_10hrs_time .151 .177 .017 .853 .394 .605 1.654
study_GT_10hrs_time .073 .253 .005 .290 .772 .743 1.347
failure_1_time -.532 .189 -.051 -2.814 .005 .704 1.421
failure_2_time -.691 .362 -.033 -1.906 .057 .766 1.305
failure_3_time -.428 .375 -.019 -1.140 .255 .813 1.230
family_rela_bad -.002 .381 .000 -.004 .997 .391 2.558
family_rela_avg .012 .322 .001 .038 .970 .177 5.642
family_rela_good .011 .303 .002 .037 .971 .106 9.470
family_rela_excel -.101 .308 -.014 -.329 .743 .127 7.885
freetime_low .105 .236 .012 .447 .655 .315 3.172
freetime_avg -.038 .217 -.006 -.174 .862 .217 4.600
freetime_high -.026 .231 -.004 -.111 .911 .228 4.384
freetime_very_high -.153 .266 -.014 -.572 .567 .363 2.753
go_out_low .095 .223 .012 .424 .672 .280 3.576
go_out_avg .135 .218 .019 .619 .536 .236 4.244
go_out_high .186 .232 .024 .801 .423 .264 3.781
go_out_very_high -.132 .246 -.015 -.537 .591 .284 3.521
Dalc_low -.157 .156 -.019 -1.006 .315 .655 1.527
Dalc_avg .274 .250 .021 1.097 .273 .628 1.592
Dalc_high -.877 .352 -.043 -2.488 .013 .763 1.310
Dalc_very_high .102 .407 .005 .250 .802 .571 1.751
Walc_low .031 .144 .004 .213 .831 .656 1.526
Walc_avg -.148 .164 -.018 -.901 .368 .594 1.683
Walc_high .000 .205 .000 .002 .998 .495 2.020
Walc_very_high -.059 .309 -.005 -.190 .849 .393 2.542
health_low -.065 .205 -.006 -.314 .754 .542 1.845
health_avg -.125 .185 -.015 -.677 .499 .459 2.179
health_high -.088 .190 -.010 -.465 .642 .482 2.075
health_very_high -.234 .169 -.035 -1.381 .168 .357 2.801
a. Dependent Variable: G3
This is some code I wrote to search for the peaks of a very clean (noise-free) signal, where fun is an array containing evenly sampled data of a sine wave.
J=[fun(1)];
K=[1];
count=1;
for i=2:1.0:(length(fun)-2)
if fun(i-1)<fun(i) && fun(i)>fun(i+1)
J=[J,fun(i+1)];
K=[K,count+1];
end
count=count+1;
end
Included below is the data that I am trying to process.
The code found the peaks at the 664th and 991st entries, but none of the ones in between. I wrote the same algorithm in C++ and got the same result, so it is an algorithm problem, not a language-specific one.
Please help me find the error or give me another solution.
fun = -1*pi/180*[-90.15
-90.00
-89.70
-89.10
-88.50
-87.75
-86.70
-85.65
-84.30
-82.95
-81.45
-79.80
-78.15
-76.35
-74.55
-72.30
-70.20
-67.80
-65.40
-62.70
-60.00
-57.15
-54.30
-51.15
-48.00
-44.85
-41.40
-37.95
-34.50
-30.90
-27.30
-23.55
-19.80
-16.05
-12.15
-8.25
-4.95
-1.50
1.95
4.80
7.80
10.65
13.95
17.40
20.70
23.85
27.15
30.30
33.45
36.45
39.45
42.45
45.30
48.00
50.70
53.40
55.95
58.35
60.75
63.15
65.25
67.35
69.45
71.40
73.20
74.85
76.50
78.15
79.50
80.85
82.05
83.25
84.15
85.05
85.95
86.70
87.45
88.05
88.50
88.95
89.10
89.25
89.40
89.25
89.10
88.95
88.50
88.05
87.45
86.70
86.10
85.20
84.30
83.25
82.20
81.00
79.65
78.15
76.65
75.00
73.35
71.55
69.60
67.50
65.40
63.30
60.90
58.65
56.10
53.55
51.00
48.30
45.45
42.60
39.75
36.75
33.75
30.60
27.45
24.30
21.00
17.70
14.40
11.10
7.65
4.80
1.95
-0.90
-4.35
-7.65
-11.10
-14.85
-18.75
-22.35
-26.10
-29.70
-33.30
-36.75
-40.20
-43.50
-46.80
-49.95
-52.95
-55.95
-58.65
-61.35
-63.90
-66.45
-68.85
-70.95
-73.05
-75.00
-76.80
-78.45
-80.10
-81.60
-82.95
-84.15
-85.20
-86.10
-87.00
-87.60
-88.05
-88.50
-88.80
-88.80
-88.80
-88.80
-88.50
-88.05
-87.60
-87.00
-86.25
-85.50
-84.45
-83.25
-82.05
-80.55
-79.05
-77.40
-75.60
-73.65
-71.55
-69.45
-67.20
-64.65
-62.25
-59.55
-56.70
-53.85
-50.85
-47.70
-44.55
-41.25
-37.95
-34.50
-30.90
-27.30
-23.70
-19.95
-16.20
-12.45
-8.55
-5.25
-1.95
1.50
4.35
7.20
10.05
13.35
16.65
19.95
23.10
26.40
29.55
32.55
35.55
38.55
41.40
44.25
47.10
49.80
52.35
54.90
57.30
59.70
61.95
64.05
66.30
68.25
70.20
72.00
73.65
75.30
76.80
78.30
79.65
80.85
81.90
82.95
83.85
84.75
85.50
86.10
86.55
87.00
87.45
87.60
87.75
87.75
87.75
87.60
87.30
87.00
86.55
85.95
85.35
84.60
83.70
82.80
81.75
80.55
79.35
78.00
76.50
75.00
73.35
71.70
69.75
67.95
65.85
63.75
61.50
59.25
56.85
54.45
51.90
49.35
46.65
43.80
40.95
38.10
35.10
32.10
28.95
25.95
22.65
19.50
16.20
13.05
9.75
6.90
4.05
1.05
-1.80
-5.10
-8.40
-11.70
-15.45
-19.20
-22.95
-26.55
-30.15
-33.60
-37.05
-40.35
-43.65
-46.80
-49.95
-52.80
-55.65
-58.50
-61.05
-63.60
-66.00
-68.25
-70.50
-72.45
-74.40
-76.20
-77.85
-79.35
-80.70
-81.90
-83.10
-84.15
-85.05
-85.80
-86.40
-86.85
-87.15
-87.45
-87.45
-87.45
-87.30
-87.00
-86.55
-85.95
-85.35
-84.45
-83.55
-82.50
-81.30
-79.95
-78.45
-76.95
-75.15
-73.35
-71.40
-69.30
-67.05
-64.65
-62.25
-59.70
-57.00
-54.15
-51.30
-48.30
-45.15
-41.85
-38.55
-35.25
-31.80
-28.20
-24.60
-21.00
-17.25
-13.65
-9.90
-6.60
-3.30
0.15
2.85
5.70
8.55
11.40
14.70
17.85
21.15
24.30
27.45
30.45
33.45
36.45
39.30
42.15
44.85
47.70
50.25
52.80
55.20
57.60
59.85
62.10
64.20
66.30
68.10
70.05
71.70
73.35
75.00
76.35
77.70
79.05
80.25
81.30
82.20
83.10
83.85
84.45
85.05
85.50
85.95
86.10
86.40
86.40
86.40
86.25
86.10
85.65
85.35
84.75
84.15
83.40
82.65
81.75
80.70
79.50
78.30
77.10
75.60
74.10
72.45
70.80
69.00
67.05
65.10
63.15
60.90
58.65
56.40
54.00
51.45
48.90
46.20
43.50
40.65
37.80
34.95
31.95
28.95
25.80
22.65
19.50
16.35
13.05
9.90
7.05
4.20
1.35
-1.50
-4.65
-7.95
-11.25
-15.00
-18.75
-22.35
-25.95
-29.40
-32.85
-36.30
-39.60
-42.75
-45.90
-49.05
-51.90
-54.75
-57.45
-60.15
-62.55
-64.95
-67.20
-69.30
-71.40
-73.20
-75.00
-76.65
-78.15
-79.50
-80.70
-81.90
-82.80
-83.70
-84.45
-85.05
-85.50
-85.80
-85.95
-86.10
-86.10
-85.80
-85.50
-85.05
-84.60
-83.85
-82.95
-82.05
-81.00
-79.65
-78.30
-76.95
-75.30
-73.65
-71.70
-69.75
-67.65
-65.40
-63.15
-60.60
-58.05
-55.35
-52.50
-49.65
-46.65
-43.50
-40.35
-37.05
-33.60
-30.15
-26.70
-23.10
-19.50
-15.90
-12.15
-8.55
-5.25
-1.95
1.35
4.05
6.90
9.75
12.45
15.75
18.90
22.05
25.05
28.20
31.20
34.20
37.05
39.90
42.60
45.30
48.00
50.55
53.10
55.35
57.75
60.00
62.10
64.20
66.15
67.95
69.75
71.40
73.05
74.55
75.90
77.10
78.30
79.50
80.55
81.30
82.20
82.95
83.55
84.00
84.45
84.75
84.90
85.05
85.05
84.90
84.75
84.45
84.15
83.55
83.10
82.35
81.60
80.70
79.65
78.60
77.55
76.20
74.85
73.35
71.85
70.20
68.40
66.60
64.65
62.55
60.45
58.35
55.95
53.70
51.15
48.75
46.05
43.35
40.65
37.80
34.95
32.10
29.10
25.95
22.95
19.80
16.65
13.50
10.20
7.05
4.20
1.50
-1.35
-4.50
-7.80
-11.10
-14.70
-18.30
-21.90
-25.50
-28.95
-32.40
-35.70
-39.00
-42.15
-45.30
-48.30
-51.15
-54.00
-56.70
-59.25
-61.65
-64.05
-66.30
-68.40
-70.35
-72.30
-73.95
-75.60
-77.10
-78.45
-79.65
-80.70
-81.60
-82.50
-83.10
-83.70
-84.15
-84.45
-84.60
-84.75
-84.60
-84.45
-84.15
-83.70
-83.10
-82.35
-81.45
-80.55
-79.35
-78.15
-76.80
-75.30
-73.65
-72.00
-70.05
-68.10
-66.00
-63.75
-61.35
-58.95
-56.40
-53.70
-50.85
-47.85
-44.85
-41.85
-38.70
-35.40
-32.10
-28.65
-25.05
-21.60
-18.00
-14.40
-10.80
-7.05
-3.90
-0.60
2.55
5.40
8.10
10.95
14.10
17.25
20.25
23.40
26.40
29.40
32.40
35.25
38.10
40.95
43.65
46.20
48.75
51.30
53.70
55.95
58.20
60.30
62.40
64.35
66.30
68.10
69.75
71.40
72.90
74.25
75.60
76.80
77.85
78.90
79.80
80.70
81.45
82.05
82.50
82.95
83.25
83.55
83.70
83.70
83.70
83.55
83.25
82.95
82.50
81.90
81.30
80.55
79.65
78.75
77.70
76.50
75.30
73.95
72.45
70.95
69.30
67.65
65.85
63.90
61.95
59.85
57.60
55.35
53.10
50.70
48.15
45.60
42.90
40.20
37.50
34.65
31.80
28.80
25.80
22.80
19.65
16.65
13.50
10.20
7.05
4.35
1.65
-1.20
-4.35
-7.50
-10.80
-14.40
-18.00
-21.45
-25.05
-28.50
-31.80
-35.10
-38.40
-41.55
-44.55
-47.55
-50.40
-53.25
-55.80
-58.35
-60.90
-63.15
-65.40
-67.35
-69.30
-71.25
-72.90
-74.55
-75.90
-77.25
-78.45
-79.50
-80.40
-81.30
-81.90
-82.50
-82.95
-83.25
-83.40
-83.40
-83.25
-83.10
-82.80
-82.35
-81.75
-81.00
-80.10
-79.05
-78.00
-76.65
-75.30
-73.80
-72.15
-70.50
-68.55
-66.60
-64.50
-62.25
-59.85
-57.30
-54.75
-52.05
-49.35
-46.35
-43.35
-40.35
-37.05
-33.90
-30.60
-27.15
-23.70
-20.25
-16.65
-13.05
-9.45
-6.30
-3.15
0.15
2.85
5.55
8.25
10.95
14.10
17.25
20.25
23.40
26.40
29.25
32.25
35.10
37.80
40.50
43.20
45.90
48.30
50.85
53.10
55.35
57.60
59.70
61.80
63.75
65.55
67.35
69.00
70.50
72.00
73.35
74.70
75.90
76.95
77.85
78.75
79.65
80.25
80.85
81.45
81.75
82.05
82.35
82.50
82.50
82.35
82.20
81.90
81.45
81.00
80.40
79.80
78.90
78.15
77.10
76.05
74.85
73.65
72.30
70.80
69.30
67.65
65.85
64.05
62.10
60.15
58.05
55.80
53.55
51.30
48.90
46.35
43.80
41.10
38.40
35.70
32.85
30.00
27.00
24.00
21.00
18.00
14.85
11.70
8.70
6.00
3.30
0.45
-2.25
-5.40
-8.55
-11.70
-15.30
-18.75
-22.20
-25.65
-29.10
-32.40
-35.70
-38.85
-41.85
-44.85
-47.85
-50.55
-53.25
-55.95
-58.35
-60.75
-63.00
-65.10
-67.05
-69.00
-70.80
-72.45
-73.95
-75.30
-76.50
-77.70
-78.75
-79.65
-80.40
-81.00
-81.45
-81.75
-82.05
-82.20
-82.05
-82.05
-81.75
-81.30
-80.70
-80.10];
Look at your data
First of all, you should look carefully at your input data when your algorithm does not work as expected. Maybe it does what it is designed to do, but that is not what you expect. Some of your maxima are not clean local maxima: you have samples with exactly equal function values. I have drawn your data and magnified the first maximum to demonstrate it.
There are four consecutive values around index 165 that have identical numerical values. Your algorithm cannot recognize a maximum of this shape.
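You can locate these flat runs directly in MATLAB/Octave (a one-liner, assuming fun as given in the question):
flatIdx = find(diff(fun) == 0);   % indices where consecutive samples are equal
Each of the broad maxima shows up as a short run of such indices.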
Solutions
I have three suggestions for you.
Add precision to your data
First, look deeper into your data: it may have more precision if you keep all significant digits. With a closer look, your peaks might turn out to be real local maxima.
Don't re-invent the wheel
If you can solve it in MATLAB/Octave, you could just use an existing solution that is already able to deal with a complicated situation like this:
[J,K]=findpeaks(fun,'DoubleSided')
This will give the expected result:
J =
-1.5603
1.5499
-1.5315
1.5263
-1.5080
1.5027
-1.4844
1.4792
-1.4608
1.4556
-1.4399
1.4347
K =
83
165
249
332
415
499
581
664
745
827
909
991
Use an improved algorithm
If you need to implement this method yourself, you have to adapt your criterion for peak finding. For example, you could use two single-sided criteria and mark rising, falling, and flat areas:
c(i) = (fun(i-1) < fun(i)) - (fun(i+1) < fun(i))
In MATLAB/Octave this expression produces 1 for rising parts of the signal, -1 for falling parts, and 0 for flat parts (and for strict local extrema, where the rise and the fall cancel out).
Now you can search this array for some conditions: if you find a run without rise or fall that comes after a rise and before a falling part, you have found a maximum (a plateau peak). You have also found a maximum if a fall follows a rise immediately.
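A minimal sketch of that scan (assuming fun is the vector from the question; it reports maxima only, and plateau peaks are assigned their middle index):
n = length(fun);
c = zeros(n,1);
for i = 2:n-1
    c(i) = (fun(i-1) < fun(i)) - (fun(i+1) < fun(i));  % 1 rise, -1 fall, 0 flat/extremum
end
K = [];          % indices of detected maxima
lastRise = 0;    % index of the most recent rising sample
for i = 2:n-1
    if c(i) == 1
        lastRise = i;
    elseif c(i) == -1 && lastRise > 0
        K(end+1) = round((lastRise + i)/2);  % the peak sits between rise and fall
        lastRise = 0;
    end
end
J = fun(K);      % the peak values
Resetting lastRise after each detection prevents the falling flank of one peak from pairing with the rise into the next.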
I have the following time series:
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
1948 24.786 24.767 25.117 25.514 26.565 27.374 27.778 28.022 27.827 27.308 26.545 25.620
1949 25.191 24.962 25.038 25.591 26.325 27.044 27.719 28.059 28.077 27.541 26.501 25.568
1950 24.713 24.461 24.682 25.122 26.157 26.965 27.688 28.072 28.089 27.429 26.318 25.194
1951 24.423 24.114 24.335 25.153 26.399 27.474 28.143 28.547 28.441 27.854 26.904 25.842
1952 25.025 24.812 25.317 25.734 26.660 27.615 28.069 28.468 28.384 27.738 26.640 25.402
1953 24.619 24.614 25.048 25.917 26.870 27.485 28.021 28.363 28.311 27.687 26.768 25.829
1954 25.176 24.804 25.089 25.768 26.579 27.222 27.684 28.053 27.971 27.164 26.141 25.162
1955 24.553 24.340 24.679 25.266 26.182 26.959 27.649 28.108 28.053 27.453 26.594 25.483
1956 24.492 24.361 24.791 25.446 26.295 26.867 27.505 27.946 27.889 27.420 26.347 25.504
1957 24.928 24.811 25.110 25.690 26.677 27.566 28.221 28.486 28.459 27.793 26.889 25.794
1958 24.982 24.616 25.134 25.940 26.842 27.887 28.421 28.758 28.712 28.085 27.185 26.233
1959 25.432 25.376 25.469 25.987 26.726 27.424 28.042 28.242 28.349 27.988 26.962 26.049
1960 25.364 25.060 25.195 25.877 26.907 27.645 28.187 28.427 28.344 28.002 27.124 25.859
1961 25.194 25.116 25.415 25.907 26.755 27.363 27.960 28.269 28.224 27.672 26.792 25.849
1962 25.335 25.174 25.257 25.696 26.708 27.386 28.144 28.467 28.398 27.828 26.593 25.624
1963 25.103 24.899 25.371 25.914 26.646 27.481 27.953 28.341 28.277 27.652 26.733 25.661
1964 25.023 24.864 25.188 25.678 26.622 27.434 27.753 27.950 27.859 27.217 26.348 25.285
1965 24.478 24.373 24.636 25.399 26.180 26.973 27.499 27.912 27.954 27.550 26.709 25.768
1966 24.885 24.625 24.797 25.494 26.307 27.183 27.877 28.186 28.225 27.728 26.615 25.466
1967 24.794 24.639 24.872 25.375 26.345 27.023 27.564 27.914 27.973 27.529 26.550 25.478
1968 24.521 24.180 24.466 25.344 26.259 27.241 27.929 28.349 28.283 27.681 26.692 25.643
1969 24.838 24.668 25.022 26.111 27.064 27.970 28.415 28.547 28.431 27.852 26.705 25.604
1970 24.849 24.565 24.964 25.809 26.502 27.206 27.793 28.065 28.101 27.570 26.367 25.412
1971 24.756 24.505 24.660 25.258 26.161 26.984 27.564 27.832 27.747 27.238 26.304 25.473
1972 24.903 24.638 25.025 25.624 26.355 27.086 27.629 27.938 28.112 27.757 26.969 26.070
1973 25.217 24.837 25.237 25.591 26.594 27.422 27.919 28.030 27.989 27.440 26.454 25.131
1974 24.783 24.371 24.839 25.495 26.199 27.103 27.608 27.938 27.907 27.200 26.276 25.229
1975 24.622 24.564 24.867 25.532 26.407 27.202 27.545 27.947 27.777 27.236 26.326 25.045
1976 24.058 23.905 24.467 25.101 25.954 26.744 27.458 27.854 27.899 27.539 26.537 25.631
1977 24.768 24.636 25.078 25.422 26.202 27.195 27.869 28.085 28.096 27.613 26.797 25.758
1978 24.876 24.496 24.901 25.673 26.661 27.311 27.762 28.034 28.025 27.584 26.828 25.851
1979 24.920 24.658 24.853 25.461 26.285 27.295 27.820 28.137 28.090 27.640 26.608 25.577
1980 24.917 24.588 25.038 25.697 26.828 27.554 28.094 28.295 28.269 27.653 26.622 25.470
My time series has an annual cycle, and I want to see how it changes at other temporal scales (e.g. 5 or 6 years), but I don't really know how to do it. I should do it using digital filters.
Assuming you have the data in a matrix M:
%Averaging period
k = 12*5;
%How often you want to have a label; you're going to need to play with this
timeUnits = 10*12;
startYear = 1948;
endYear = 1980;
startMonNum = 1;
endMonNum = 12;
%Get all the months and years
[year, mon] = meshgrid(startYear:endYear, 1:12);
%Convert them to a single column and put them in a nice format
dates = reshape(arrayfun(@(x, y) datestr(datenum(x,y,1),'mmm yyyy'), year, mon, 'uni', 0), [], 1);
dates = dates(startMonNum:end-endMonNum+12);
%Dummy data standing in for your matrix, one value per month (replace with your own M)
M = reshape((1:length(dates)), [], 12);
%Smooth
MPrime = smooth(reshape(M', [], 1), k);
plot(1:length(MPrime), MPrime)
%This is where you'll need to play around some
%Put strings at the bottom of the axis
set(gca, 'XTick', 1:timeUnits:length(MPrime))
set(gca, 'XTickLabel', dates(1:timeUnits:end))
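If you would rather use an explicit digital filter than smooth (which needs the Curve Fitting Toolbox), a k-month moving average is one line with filter; this is a sketch under the same assumptions (M reshaped to a single column, k as above):
b = ones(k,1)/k;                           % k-point moving-average kernel
MPrime = filter(b, 1, reshape(M', [], 1)); % causal moving average
Note that filter is causal, so the smoothed curve lags by about k/2 samples compared with the centered average that smooth computes.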