Does F* support linear types?

According to Wikipedia's article on substructural type systems, F* supports some kind of linear types. Is this true? If so, how? I can't find any information about it in the F* tutorial.

Earlier versions of F* had affine types (closely related to linear types), as described in this paper from 2011: https://www.microsoft.com/en-us/research/publication/secure-distributed-programming-with-value-dependent-types/
However, versions of F* since 2015 dropped affine types in favor of other constructs, notably monadic effects, to model stateful resources.

Related

GPy and GPflow mathematical background

Do GPy and GPflow share a common mathematical background? I'm asking this because I'm using GPy but I cannot find its references. However, GPflow provides references in its examples.
Is it OK to keep using GPy, or would you suggest switching to GPflow immediately for Gaussian process purposes?
GPy and GPflow definitely share a common mathematical background: Gaussian processes (Rasmussen and Williams), and many of the concepts are very similar in both frameworks: kernels, likelihoods, mean functions, inducing points, etc. For me, the biggest difference between GPy and GPflow is the computational backend: AFAIK GPy uses plain Python and numpy to perform all its computations, whereas GPflow relies on TensorFlow. This gives GPflow multiple nice features for free: GPU acceleration, automatic gradients, compatibility with the TF ecosystem, etc. Depending on your use case, these features can be crucial or simply nice-to-have.
Here is more information on the technical differences between the two frameworks:
https://gpflow.readthedocs.io/en/master/intro.html#what-s-the-difference-between-gpy-and-gpflow
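To make the API comparison concrete, here is a minimal GP regression sketch in both frameworks. This is a hedged illustration, not canonical usage: the class and function names follow my reading of GPy 1.x and GPflow 2.x, and may differ in other versions.

    # Minimal GP regression on the same toy data in GPy and GPflow.
    # Sketched against GPy 1.x / GPflow 2.x; API details are version-dependent.
    import numpy as np

    X = np.random.rand(50, 1)
    Y = np.sin(6 * X) + 0.1 * np.random.randn(50, 1)

    # GPy: plain Python/numpy backend
    import GPy
    m_gpy = GPy.models.GPRegression(X, Y, GPy.kern.RBF(input_dim=1))
    m_gpy.optimize()                      # gradients computed internally in numpy
    mean, var = m_gpy.predict(X)

    # GPflow: TensorFlow backend (automatic gradients, optional GPU)
    import gpflow
    m_gpf = gpflow.models.GPR(data=(X, Y), kernel=gpflow.kernels.SquaredExponential())
    gpflow.optimizers.Scipy().minimize(m_gpf.training_loss, m_gpf.trainable_variables)
    mean_tf, var_tf = m_gpf.predict_f(X)

Note how the models, kernels, and predictions line up almost one-to-one; the backend is the main thing that changes.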
That would depend on what you are actually doing.
The very basic GPs should be similar; the main difference is that GPflow relies on TensorFlow for the gradients (if used), plus probably some technical implementation differences.
For the other, more advanced models, both libraries provide references to the respective papers in the docs. In my opinion, GPflow's design is mainly centered around the SVGP framework from [1] and [2] (and many other extensions; I can really recommend [2] if you are interested in the theory); see the sketch after the references below.
But they still do provide some other implementations.
I use GPflow since it works on the GPU and offers a lot of state-of-the-art implementations. However, the disadvantage is that it is still under a lot of change.
If you want to use classic GPs and are not too concerned with performance or very up-to-date methods, I'd say GPy should be sufficient and the more stable variant.
[1] Hensman, James, Alexander G. de G. Matthews, and Zoubin Ghahramani. "Scalable variational Gaussian process classification." AISTATS, 2015.
[2] Matthews, Alexander G. de G. Scalable Gaussian process inference using variational methods. PhD thesis, University of Cambridge, 2017.
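To make the SVGP point above concrete, here is a minimal sketch of sparse variational GP classification in the spirit of [1]. Treat the names as assumptions against the GPflow 2.x API; given how quickly the library changes, details may differ in your version.

    # Sparse variational GP (SVGP) classification as in Hensman et al. [1],
    # sketched against the GPflow 2.x API.
    import numpy as np
    import gpflow

    X = np.random.rand(500, 1)
    Y = (np.sin(6 * X) > 0).astype(float)     # toy binary labels
    Z = X[::25].copy()                        # 20 inducing point locations

    model = gpflow.models.SVGP(
        kernel=gpflow.kernels.SquaredExponential(),
        likelihood=gpflow.likelihoods.Bernoulli(),
        inducing_variable=Z,
        num_data=X.shape[0],                  # scales the minibatch ELBO
    )
    # Full-batch training for brevity; in practice you'd minibatch via tf.data.
    gpflow.optimizers.Scipy().minimize(
        model.training_loss_closure((X, Y)), model.trainable_variables
    )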

Complex inverse and complex pseudo-inverse in Scala?

I'm considering learning Scala for my algorithm development, but first I need to know whether the language has implemented (or is implementing) complex inverse and complex pseudo-inverse functions. I looked at the documentation (here, here), and although it states these functions are for real matrices, looking at the code, I don't see why it wouldn't accept complex matrices.
There's also the following comment left in the code:
pinv for anything that can be transposed, multiplied with that transposed, and then solved
Is this just my wishful thinking, or will it not accept complex matrices?
Breeze implementer here:
I haven't implemented inv etc. for complex numbers yet, because I haven't figured out a good way to store complex numbers unboxed in a way that is compatible with BLAS and LAPACK and doesn't break the current API. You can set the call up yourself using netlib-java, following a similar recipe to the code you linked.
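For what it's worth, the "transposed, multiplied, solved" recipe in that comment is just the normal-equations form of the pseudo-inverse, and the math goes through for complex matrices provided the transpose is the conjugate transpose. A quick numpy sketch of the idea (illustrating the math only, not Breeze's API; assumes A has full column rank):

    # Normal-equations pseudo-inverse, the recipe the Breeze comment describes:
    # pinv(A) = (A^H A)^{-1} A^H, valid when A has full column rank.
    # For complex matrices the transpose must be the conjugate transpose A^H.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))

    AH = A.conj().T                        # conjugate transpose, not plain A.T
    pinv_A = np.linalg.solve(AH @ A, AH)   # solve (A^H A) X = A^H

    # Sanity check against numpy's SVD-based pseudo-inverse:
    assert np.allclose(pinv_A, np.linalg.pinv(A))

So nothing in the math blocks complex matrices; the obstacle the implementer describes is purely about unboxed storage and the BLAS/LAPACK bindings.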

Are there any implementations available online for filter based feature selection methods?

The selection methods I am looking for are the ones based on subset evaluation (i.e., ones that do not simply rank individual features). I prefer implementations in Matlab or based on WEKA, but implementations in any other language will still be useful.
I am aware of the existence of CfsSubsetEval and ConsistencySubsetEval in WEKA, but they did not lead to good classification performance, probably because they suffer from the following limitations:
CfsSubsetEval is biased toward small feature subsets, which may prevent locally predictive features from being included in the selected subset, as noted in [1].
ConsistencySubsetEval uses the min-features bias [2], which, similarly to CfsSubsetEval, results in the selection of too few features.
I know it is "too few" because I have built classification models with larger subsets and their classification performance was considerably better.
[1] Hall, M. A. Correlation-based Feature Subset Selection for Machine Learning. PhD thesis, University of Waikato, 1999.
[2] Liu, Huan, and Lei Yu. "Toward integrating feature selection algorithms for classification and clustering." IEEE Transactions on Knowledge and Data Engineering, 2005.
Check out Python's scikit-learn: simple and efficient tools for data mining and data analysis. There are various implemented methods for feature selection, classification, and evaluation, plus a lot of documentation and tutorials.
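As one concrete option there, scikit-learn's SequentialFeatureSelector evaluates whole candidate subsets by cross-validated model performance. Strictly speaking it is a wrapper method rather than a filter, but it does subset evaluation instead of individual ranking; a minimal sketch (the dataset and estimator are arbitrary choices for illustration):

    # Greedy forward subset selection in scikit-learn.
    # Note: this is a wrapper method (subsets are scored by a cross-validated
    # estimator), not a pure filter, but it evaluates feature subsets rather
    # than ranking features individually.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)
    selector = SequentialFeatureSelector(
        LogisticRegression(max_iter=5000),
        n_features_to_select=10,    # or "auto" to stop when the score plateaus
        direction="forward",
        cv=5,
    )
    selector.fit(X, y)
    print(selector.get_support(indices=True))   # indices of the selected subset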
My search has led me to the following implementations:
FEAST toolbox: it is an interesting toolbox, developed at the University of Manchester, which provides implementations of Shannon information theory functions. The implementations can be downloaded from THIS webpage, and they can be used to evaluate individual features as well as subsets of features; a rough Python sketch of one such criterion follows below.
I have also found THIS Matlab code, which is an implementation of a selection algorithm based on interaction information.
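If you want to experiment with an mRMR-style information-theoretic criterion (one of the family FEAST covers) outside of Matlab, here is a rough greedy sketch in Python. The quantile binning and the relevance-minus-redundancy trade-off are illustrative assumptions, not FEAST's exact estimators:

    # Greedy mRMR-style filter selection: at each step pick the feature with
    # maximum relevance I(f; y) minus mean redundancy I(f; f_s) with respect
    # to the already-selected features.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import mutual_info_score

    X, y = load_breast_cancer(return_X_y=True)
    # Discretize each feature into quartile bins so discrete MI applies.
    Xd = np.stack(
        [np.digitize(c, np.quantile(c, [0.25, 0.5, 0.75])) for c in X.T], axis=1
    )

    n_select = 10
    relevance = np.array([mutual_info_score(Xd[:, j], y) for j in range(Xd.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(Xd.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected])
            score = relevance[j] - redundancy   # the mRMR trade-off
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    print(selected)   # indices of the selected subset, in selection order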
PY_FS: A Python Package for Feature Selection
I came across this package, which was just released (2021) and contains many methods, with references to their original papers.

Is there a MATLAB version of partial Schur decomposition?

I'm quite surprised not to find one in the standard library. Is there some reason it is missing, or do I just need to use a specific toolbox?
Implementing it myself would be very problematic because of the complexity of the algorithms involved.

iOS5 Objective-C library for numerical analysis or GNU Octave wrapper class?

I'm doing some numerical estimation and correction with the Kalman filter, and would like to better estimate the Q and R parameters, preferably dynamically.
http://en.wikipedia.org/wiki/Kalman_filter#Estimation_of_the_noise_covariances_Qk_and_Rk
That article mentions that GNU Octave is currently the best way of determining these parameters from data:
http://en.wikipedia.org/wiki/GNU_Octave#C.2B.2B_integration
Unfortunately it is written for Matlab, and there's supposedly a C++ implementation. I'm very weak in C++ and would not even know how to import a C++ library and link it properly in Xcode. All of my C++ libraries to date have been wrapped in 3rd-party Objective-C classes.
Has anyone used the C++ implementation for scientific computing or engineering applications on iPhone? I'd appreciate any pointers or tutorials on how to do this kind of analysis with Objective-C.
Additional keywords:
estimating covariance from data
Autocovariance Least-Squares (ALS) technique
noise covariance
Thank you!
I do not know of any such C++ library. If you fancy doing numerical analysis on iOS, the best way to go is the Accelerate framework, specifically (from this description):
Linear Algebra: LAPACK and BLAS
The Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK) libraries contain—as you would expect—functions to perform linear algebra computations such as solving simultaneous linear equations, least squares solutions of linear equations, and eigenvalue problems. The BLAS library serves as a building block for the LAPACK library. The BLAS and LAPACK libraries are widely distributed and industry standard computational libraries. They are available on a number of different platforms and architectures. So, if you are already using these libraries you should feel right at home, as the APIs are exactly the same on Mac OS X.
You'll need a fairly good grounding in C, pointers, arrays and such though; no way around it, I feel. There is a detailed description of how to use these linear algebra primitives to implement Kalman filtering (although it is using R, so probably not of much use to you).
This is an SO post on Kalman filtering which expresses my opinion quite well. I'm afraid I think the chances of finding a magic Objective-C wrapper for Kalman filtering are fairly low, though I would be very happy to be proven wrong!
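For reference, here is the predict/update cycle those BLAS/LAPACK primitives would implement, showing exactly where Q and R enter. This is an illustrative numpy sketch, not iOS code; in an Accelerate port each matrix product maps to a BLAS call (e.g. cblas_dgemm) and the solve to a LAPACK routine.

    # One Kalman filter predict/update cycle. Q is the process-noise
    # covariance, R the measurement-noise covariance -- the two matrices
    # the question is about estimating.
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        # Predict: propagate state and covariance; Q injects process noise.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: the innovation covariance S depends on R.
        S = H @ P_pred @ H.T + R
        K = np.linalg.solve(S.T, (P_pred @ H.T).T).T   # gain, no explicit inverse
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy usage: constant-velocity model with a position-only measurement.
    F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
    Q = 0.01 * np.eye(2); R = np.array([[0.5]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kalman_step(x, P, np.array([1.2]), F, H, Q, R)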