Defining a natural variable n in TI-Nspire CAS

I'm wondering if it's possible to define a natural variable n in TI-Nspire CAS. For example I'd like to write:

You can't define your own natural variables. However, Nspire has the following special variables you can use:
#n0...#n255: Restricted to natural numbers
#c0...#c255: Restricted to real numbers
You can replace the original variables with them by hand, or for convenience just put |x=#n0 and y=#n1 at the end of the line.
Example: You are calculating Fourier coefficients and know that the variable k will only take natural numbers from the Σ operation. Replacing k with #n1 will simplify the function.
(Picture of the example calculation omitted.)
(Calculator needs to be in RAD mode if you want to try)

The answer is no. Variables in Nspire just store a value; a variable has no type. Solve might return #n1 in a result to indicate an arbitrary natural number, but you can tell solve to look for integer solutions only.

Related

Modelica annotation derivative: noDerivative versus zeroDerivative

I have successfully used annotation(derivative) in Modelica functions. Now I have reached a point where I think I need to use zeroDerivative or noDerivative, but from the specification I just do not understand what the difference is, and when to use which.
https://specification.modelica.org/v3.4/Ch12.html#declaring-derivatives-of-functions
It seems zeroDerivative is for time-constant parameters??
Does somebody have a simple example?
Use zeroDerivative to refer to inputs that are non-varying, i.e. parameters or constant values.
Use noDerivative for signals that do not have a derivative value. For example if an input signal comes from an external function.
The important case for noDerivative is when the input is "redundant".
As an example consider the computation of density for some media in MSL:
The density computation is found in Modelica.Media.R134a.R134a_ph.density_ph (note this does not contain any derivative in itself):
algorithm
  d := rho_props_ph(
         p,
         h,
         derivsOf_ph(
           p,
           h,
           getPhase_ph(p, h)));
where the top function called is:
function rho_props_ph
  "Density as function of pressure and specific enthalpy"
  extends Modelica.Icons.Function;
  input SI.Pressure p "Pressure";
  input SI.SpecificEnthalpy h "Specific enthalpy";
  input Common.InverseDerivatives_rhoT derivs
    "Record for the calculation of rho_ph_der";
  output SI.Density d "Density";
algorithm
  d := derivs.rho;
  annotation (
    derivative(noDerivative=derivs) = rho_ph_der ...);
end rho_props_ph;
So the derivs argument is sort of redundant: it is given by p and h, and we don't need to differentiate it again. Sending in a derivs argument that isn't computed in this way may give unpredictable results, but describing this in detail would be too complicated. (There was some idea of noDerivative=something, but even just specifying it turned out to be too complicated.)
For zeroDerivative the corresponding requirement is that the arguments have zero derivative; that is straightforward to verify, and if the derivative is non-zero we cannot use this specific derivative (it is possible to specify multiple derivatives and use another one for that case).

Coupled variables in hyperparameter optimization in MATLAB

I would like to find optimal hyperparameters for a specific function; I am using the bayesopt routine in MATLAB.
I can set the variables to optimize like the following:
a = optimizableVariable('a',[0,1],'Type','integer');
But I have coupled variables, i.e., variables whose existence depends on the value of other variables, e.g., a={0,1}, and b={0,1} iff a=1.
Meaning that b has an influence on the function if a==1.
I thought about creating a single variable that encompasses all the possibilities, i.e., c=1 if a=0, c=2 if a=1,b=0, c=3 if a=1,b=1. The problem is that I am interested in optimizing continuous variables and the above approach does not hold anymore.
I tried something along the lines of
b = a * optimizableVariable('b',[0,1],'Type','integer');
But MATLAB threw an error.
Undefined operator '*' for input arguments of type 'optimizableVariable'.
After three months almost to the day, buried deep down in the MATLAB documentation, the answer was to use constrained variables.
https://www.mathworks.com/help/stats/constraints-in-bayesian-optimization.html#bvaw2ar
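For later readers: one mechanism described on that page that matches this question is a conditional constraint, passed to bayesopt through the ConditionalVariableFcn name-value argument. Below is a rough, untested sketch of how it could look for the a/b situation above; the objective function is a made-up toy, and only the variable names a and b come from the question.

a = optimizableVariable('a',[0,1],'Type','integer');
b = optimizableVariable('b',[0,1]);                 % continuous, only meaningful when a == 1

results = bayesopt(@objfun, [a, b], 'ConditionalVariableFcn', @condfun);

function X = condfun(X)
    % Conditional constraint: when a == 0, b has no influence, so mark it NaN;
    % the optimizer then does not treat different b values as different points.
    X.b(X.a == 0) = NaN;
end

function loss = objfun(X)
    % X is a one-row table with fields a and b (b is NaN whenever a == 0).
    if X.a == 1
        loss = (X.b - 0.3)^2;    % toy objective: b matters only when a == 1
    else
        loss = 0.25;             % toy constant loss when a == 0
    end
end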

Distinguishing cryptographic properties: hiding and collision resistance

I saw the following definitions in another question, which clarifies things somewhat:
Collision-resistance:
Given: x and h(x)
Hard to find: y that is distinct from x and such that h(y)=h(x).
Hiding:
Given: h(r|x), where r|x is the concatenation of r and x
Secret: x and a highly-unlikely-and-randomly-chosen r
Hard to find: y such that h(y)=h(r|x).
This is different from collision-resistance in that it doesn’t matter whether or not y=r|x.
My question:
Does this mean that any hash function h(x) is non-hiding if there is no secret r, that is, if the hash is h(x), not h(r|x)?
Example:
Say I make a simple hash function h(x) = g^x mod n, where g is a generator of the group. The hash should be collision resistant with P(x_1 != x_2, h(x_1) = h(x_2)) = 1/(2^(n/2)), but I would think it is hiding as well?
Hash functions can kind of offer collision resistance.
Commitments have to be hiding.
Contrary to popular opinion, these primitives are not the same!
Very strictly speaking, the thing that you think of as a hash function cannot offer collision resistance: there always ARE collisions. The input space is infinite in theory, yet the function always produces a fixed-length output. The terminology should actually be “H is randomly drawn from a family of collision-resistant functions”. In practice, however, we will just call that function collision-resistant and ignore that it technically isn't.
A commitment has to offer two properties: hiding and binding. Binding means that you can only open it to one value (this is where the relation to collision resistance comes in). Hiding means that it is impossible to learn anything about the element that is contained in it. This is why a secure commitment MUST use randomness (or nonces, but after all is said and done, those boil down to the same thing). Imagine any hash function, no matter how perfect you want it to be: you can use a random oracle if you want. If I give you a hash H(m) of a value m, you can compute H(0), compare the result and learn whether m = 0, meaning it is not hiding.
This is also why g^x is not a hiding commitment scheme. Whether it is binding depends on what you allow as the message space: if you allow all integers, then the simple attack y = x + phi(n) produces H(y)=H(x), since g^(x + phi(n)) = g^x · g^(phi(n)) ≡ g^x (mod n) by Euler's theorem. If you define the message space as ℤ_p, where p is the group order, then it is perfectly binding, as it is an information-theoretically collision-resistant one-way function. (Since message space and target space are of the same size, this time a single function actually CAN be truly collision-resistant!)
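To make the non-hiding argument concrete, here is a tiny, purely illustrative MATLAB sketch (toy parameters; nothing in it is meant as a real construction). Because H(x) = g^x mod n involves no secret randomness r, anyone who sees H(m) can simply try candidate messages and compare.

% Deterministic toy hash H(x) = g^x mod n with tiny made-up parameters.
g = 5; n = 23;
H = @(x) toy_powmod(g, x, n);

m  = 7;        % the secret value being committed to
hm = H(m);     % the published hash

% No randomness was used, so candidate messages can simply be tested:
for cand = 0:10
    if H(cand) == hm
        fprintf('recovered m = %d\n', cand);   % prints: recovered m = 7
    end
end

function y = toy_powmod(g, x, n)
    % naive modular exponentiation, fine for tiny toy values
    y = 1;
    for k = 1:x
        y = mod(y*g, n);
    end
end

A hiding commitment prepends a random r that the attacker does not know, which is exactly what defeats this kind of dictionary check.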

String input to anonymous function in MATLAB

In MATLAB I know I can convert a string into an anonymous function with str2func.
For example:
s = '@(x) x.^2';
h = str2func(s);
h(2) would be 4
But what if I do not know the number of unknowns? Let's say the user of this program will enter lots of functions to get a numerical solution of a system. When the user enters x^2, I should add @(x) to its beginning and then convert it to a function. But at programming time I do not know how many functions the user will enter, or with how many unknowns. @(x) may need to be @(x,y) as well as @(x,y,z). If the user enters the number of unknowns, how can I create and add the necessary prefix at runtime?
PS: the number of unknowns can be any integer.
You need to know not only the quantity of variables but also their names and order. An expression may read x(c). Even if you know that the expression has two variables in it and are able to parse out x and c, you won't be able to tell if the user intended to define something like @(x, c) x(c), @(c, x) x(c) or even something like @(c, d) x(c) where x is actually a function.
Parsing the expressions just to get the names they use is something that you shouldn't have to do.
Restricting the variable names that are allowed can be messy. If the user is expecting MATLAB syntax and you are parsing as MATLAB, why make your life harder? Also, when you introduce a restriction like one-letter variable names only, you have to ask yourself if there will ever be a situation where you need more than 27 variables.
It would be much safer all around to have the user list the names of the variables they plan on using before the function, e.g. (x, y, pi) pi*(x^2 + y). Now all you have to do is prepend @ and not worry about whether pi is a built-in or an argument. In my opinion the notation is quite clean.
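A minimal sketch of that suggestion (the concrete expressions below are just examples): the user supplies the argument list together with the expression, and the program only prepends @. If instead you only know the number of unknowns and impose the names x1..xn, you can build the prefix at runtime with sprintf and strjoin.

% User supplies the argument list and the expression as one string:
userStr = '(x, y) x.^2 + y';           % example of what the user might type
f = str2func(['@' userStr]);           % becomes @(x, y) x.^2 + y
f(2, 1)                                % returns 5

% If only the number of unknowns n is known and the names x1..xn are imposed:
n = 3;
names = arrayfun(@(k) sprintf('x%d', k), 1:n, 'UniformOutput', false);
g = str2func(['@(' strjoin(names, ',') ') x1 + x2.*x3']);   % expression here is just an example
g(1, 2, 3)                             % returns 7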

Maxima simplify expression with diff

Let's assume a is a constant and x is a variable that depends on time, so basically x(t).
Then in Maxima, what is the best way to replace 'diff(a*x,t) with a*'diff(x,t) automatically, without using the subst command?
The reason I don't want to use subst is that I have many variables and higher-order derivatives. It is not efficient to use subst to replace all the occurrences.
Thanks.
UPDATE
I have tried the depends(x,t) command, but it only works in the simple case. Here is a minimal example of my situation.
depends([x,y],t);
eq1:diff(x,t)-b=c;
eq2:subst([x=a*y],eq1);
sol_dy: solve(eq2, diff(y,t));
Of course, here a, b, c are constants and x, y are variables depending on t.
Maxima cannot solve for diff(y,t) directly. How do I deal with it?
I see that 'diff(...) (i.e. derivative noun expression) isn't linear (doesn't distribute over + and doesn't factor out constants) but diff(...) (verb expression) is linear. That's a misfeature, at least.
I was going to suggest declare(nounify(diff), linear) but that makes derivatives come out as 0 in your example ... this is probably a bug, I'll have to think more about it.
Try ev(eq2, nouns); to re-evaluate the derivatives as verbs -- I think that should cause the constant to factor out.