NMISS on all variables - macros

I am trying to count the missings in my dataset, but I have a problem.
My first question is named Q2, and the last one is Q55A7. Thus, I cannot use NMISS(Q2 to Q55A7). Is there a way to do it by the ID instead of the name? Or how can I get this working?

Actually, NMISS(Q2 to Q55A7) should work fine - when you refer to variables with the TO keyword, SPSS doesn't look at the structure of the names; it simply takes all the variables that lie between those two, according to their order in the data.
The problem you may run into is the existence of other variables that you don't want included (e.g. an open-ended question in between the other questions). Here are a couple of ways to work around that:
Let's say your variables are ordered as follows: Q2 Q3 Q4 Q4Other Q5 Q6 Q55A7. Using Q2 to Q55A7 will include Q4Other, an open-ended text variable that you can't include in the calculation. In this case you can use:
nmiss(Q2 to Q4, Q5 to Q55A7)
Now if there are many more variables, and many of them need to be skipped, the above method becomes as bothersome as listing all the variable names. Another way to get back to nmiss(Q2 to Q55A7) is to change the order of the variables so that only the numeric variables you want in the analysis are actually placed between these two. One way to do that is:
add files /file=* /keep Q4Other Q17Other SomeOtherOpenQ all.
Running this will move all the specified variables to the beginning of the dataset, leaving only the variables you want in the analysis between Q2 and Q55A7.
If you still want to improve on that, you can look up the SPSSINC SELECT VARIABLES extension command (see my answer here for an example), which enables you to define variable lists according to their attributes and then run the analysis on such a list.
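Putting the two workarounds together, a minimal SPSS syntax sketch (the variable names are the hypothetical ones from the example above):

```
* Move the open-ended variable out of the way, then count missings per case.
ADD FILES /FILE=* /KEEP=Q4Other ALL.
COMPUTE n_missing = NMISS(Q2 TO Q55A7).
EXECUTE.
```

After the ADD FILES step, Q2 TO Q55A7 spans only the numeric questionnaire items, so n_missing holds each case's count of missing values.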

Related

Why use result sets rather than variables in Sybase?

In a Sybase database I am working with, result sets are used (misused?) as variables.
For example, one often finds lines such as the following:
select SOMETHING = 'bla'
"SOMETHING" is technically a result set ... and the content of the result set is used by the application accessing the database. Since "SOMETHING" is not a variable, it does not get declared anywhere.
I have never seen this kind of hack before (and colleagues of mine couldn't explain why it was done that way), and I have not found anything about it on Google.
Is there some reference available that explains why one would want to use such a hack as opposed to "normal" variables?
I think you are not reading this correctly. The query simply produces a one-column result set with the column named SOMETHING. It is equivalent to:
SELECT 'bla' AS SOMETHING
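The distinction is easier to see side by side; a short T-SQL sketch (the variable name @something is made up for illustration):

```sql
-- Both statements return the same one-column result set:
SELECT SOMETHING = 'bla'         -- T-SQL column-alias syntax
SELECT 'bla' AS SOMETHING        -- standard alias syntax

-- An actual local variable, by contrast, must be declared:
DECLARE @something VARCHAR(10)
SELECT @something = 'bla'        -- assignment: produces no result set
SELECT @something AS SOMETHING   -- explicitly return it to the client
```

So `SELECT SOMETHING = 'bla'` is not assigning to a variable at all; it is just the T-SQL way of naming a column in the output.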

Change a Variable's Name in Matlab's Workspace

I have a problem that has more than likely been asked before but I have yet to find a solution that works for me.
I would like to know if it's possible to change the name of variable within a workspace in Matlab?
The need for this is as follows:
I have a small section of code that imports over 60 columns of data from Excel into Matlab; each column has over 10,000 rows, so a fair amount of data. Row 1 of each column holds the variable name, i.e. R1C1 = "Time from X to Z". I would like to know if it's possible for Matlab to create a variable called "Time_from_X_to_Z" and then store all the data from that Excel column in it. I am able to store all the data in separate variables; I just need a bit of help naming them.
I would like to name the variables like this because, when I come to re-use the code, the order of the columns may change from time to time, hence why I can't just hard-code Variable1, Variable2, etc.
Here is some of the code I have tried from my research.
VariableName = txt(1,i);
VariableName = num2str(cell2mat(VariableName));
VariableName = Variable
clear Variable
This was my most successful one but is nowhere near what I expected it to be.
I hope that all makes sense.
Thanks for any help you are able to provide.
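No answer appears above, but the usual advice is to avoid creating individually named workspace variables and to use a struct with dynamic field names instead. A sketch, assuming the headers and data come from xlsread and that the file name data.xlsx is made up:

```matlab
% xlsread returns the numeric block and the text cells (headers in row 1).
[num, txt] = xlsread('data.xlsx');
data = struct();
for i = 1:size(num, 2)
    % "Time from X to Z" -> "Time_from_X_to_Z", sanitised to a valid identifier.
    name = matlab.lang.makeValidName(strrep(txt{1, i}, ' ', '_'));
    data.(name) = num(:, i);   % store the whole column under that field
end
```

After this, data.Time_from_X_to_Z holds the column regardless of where it sits in the sheet, which is exactly what is needed when the column order changes between runs.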

Octopus Output Variables and Values

I'm looking through the Octopus PowerShell library, trying to identify a way to output all the variable names and their values used in a deployment - not for the project overall, but only for a single deployment.
So say I have 3 variables like the below
VariableOne Value1
VariableTwo Value2
VariableThree Value3
And I only use the first and third and want those printed with their names (VariableOne, VariableThree) and their values (Value1, Value3).
There is an option for outputting all the variables into the deployment log for debugging purposes.
Set one (or both) of the following in your project variables list:
OctopusPrintVariables True
OctopusPrintEvaluatedVariables True
I find that the latter of the two is generally sufficient.
This feature is written up at https://octopus.com/docs/how-to/debug-problems-with-octopus-variables
TL;DR
No, it can't.
It's something we tried as well, but Octopus Deploy has so many ways in which variables can be used - XPath expressions into .config files, JsonPath into JSON files, direct references and inline scripts in the workflows, as well as the #{var} syntax.
None of these options track which variables were actually transformed or referenced; plus, some optional expansion may short-circuit.
I've asked Octopus whether they could extend the object model to detect requests for a variable's value, so we could see which values have actually been read, but that is currently not in place.
They also came back with the problem that step scripts may change or override the values of variables between steps, so a value may change during the workflow, making tracking even harder.
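As a partial workaround inside your own steps, you can at least log what a script step was handed. A sketch assuming a PowerShell step, where $OctopusParameters is the variable dictionary Octopus exposes to scripts:

```powershell
# Dump every variable this step received (name and value).
# Note: this will also print sensitive values into the deployment log.
foreach ($kv in $OctopusParameters.GetEnumerator()) {
    Write-Host ("{0} = {1}" -f $kv.Key, $kv.Value)
}
```

This still shows everything the step could have used rather than only the variables the deployment actually consumed, which is the limitation the answer above describes.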

automatically get the name vectors saved in the workspace in Matlab

I have multiple vectors (100+) that have been loaded into the MATLAB workspace. I would like to write a script that can plot and save them all, but for that I need their names. My question is: is there a way to automatically get the names of the vectors saved in the workspace?
Thanks in advance.
Step one: whoever gave you a *.mat file with 100+ named variables in it, [censored for strong language and scenes some viewers may find upsetting]. I am only partly joking here; if you find yourself in this sort of situation normally it is because something has gone terribly wrong upstream. We can work around it, though.
Step two: use who with the filename to get a list of variables in that file
names = who('-file', 'all');
Step three: load the variables (or a subset of them) into a struct
data = load('all.mat');
Step four: use dynamic structure naming to extract data:
for n = 1:length(names)
plot(data.(names{n})); % or whatever you want to do with this data
end
I would probably just use the loop to dump the data in a cell array so as to make further processing simpler and avoid further use of dynamic field names or worse, eval.
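That loop-into-a-cell-array version might look like this (a sketch reusing the names and data variables from the snippets above):

```matlab
vecs = cell(1, numel(names));
for n = 1:numel(names)
    vecs{n} = data.(names{n});  % one dynamic-field lookup per variable
end
% From here on, plain cell indexing: plot(vecs{3}), cellfun(@max, vecs), ...
```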
You can use who, which lists all variables alphabetically in the active workspace.

loading parameter files for different data sets

I need to analyse several sets of data which are associated with different parameter sets (one single set of parameters for each set of data). I'm currently struggling to find a good way to store these parameters such that they are readily available when analysing a specific dataset.
The first thing I tried was saving them in a script file parameters.m in the data directory and loading them with run([path_to_data,'/parameters.m']). I understand, however, that this is not good coding practice, and it also gave me scoping problems (I think), as changes in parameters.m were not always reflected in my workspace variables. (Workspace variables only changed after clear all and rerunning the code.)
A clean solution would be to define a function parameters() in each data directory, but then again I would need to add the directory to the search path. Also I fear I might run into namespace collisions if I don't give the functions unique names. Using unique names is not very practical on the other hand...
Is there a better solution?
So define a struct or cell array called parameters and store it in the data directory it belongs in. I don't know what your parameters look like, but ours might look like this:
parameters.relative_tolerance = 10e-6
parameters.absolute_tolerance = 10e-6
parameters.solver_type = 3
.
.
.
and I can write
save('parameter_file', 'parameters')
or even
save('parameter_file', '-struct', 'parameters', *fieldnames*)
The online help reveals how to use -struct to store fields from a structure as individual variables should that be useful to you.
Once you've got the parameters saved you can load them with the load command.
To sum up: create a variable (most likely a struct or cell array) called parameters and save it in the data directory for the experiment it refers to. You then have all the usual Matlab tools for reading, writing and investigating the parameters as well as the data. I don't see a need for a solution more complicated than this (though your parameters may be complicated themselves).
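Reading the parameters back in the analysis script might look like this (a sketch using the file and field names from the example above):

```matlab
% Load the saved struct from the data set's directory.
S = load('parameter_file.mat');          % S.parameters is the struct saved above
tol = S.parameters.relative_tolerance;   % pull out whatever the analysis needs
```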