In the meantime, is there a way to tell MATLAB or Paraview or any other application that uses OpenGL to do its calculations in double precision? I could use a workaround for my problems, but I'd prefer not to :) Thanks!
EDIT:
I'll try to be more specific about the problem/issue. First, two images:
The first one is rendered using OpenGL; the second (fine) one is rendered after typing the "opengl neverselect" command, which switches to another renderer. Since I experience quite similar rendering problems in Paraview as well, I am fairly sure that this is OpenGL specific and not the "fault" of MATLAB or Paraview. When I shift the values as mentioned in the comment below, I also get smoothly rendered images. I assume that is because my data range has a huge offset from zero, and the precision in the rendering routine is not high enough, so it produces serious rounding errors in the rendering calculations.
Thus, I would like to know if you know of some way (in MATLAB, Paraview, or the OS settings) to set the rendering precision higher (I read that GPUs/OpenGL usually calculate in single-precision float).
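For reference, the shift workaround looks roughly like this (just a sketch; the variable names are placeholders for my data):

% Remove the large offset before plotting so the values handed to the
% renderer stay well within single-precision resolution.
offset = mean(z(:));        % z holds the data with the big offset (placeholder name)
surf(x, y, z - offset);     % plot the shifted data
% If needed, relabel the ticks / colorbar afterwards to show the original values.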
First off, this has nothing to do with OpenGL. The part of MATLAB actually doing the plotting is written in some compiled language, and relies on OpenGL just for displaying stuff to the screen.
The precision used (double/float) is hard coded into the program. You can't have the OS or something force the program to use different data types. In certain cases you might be able to make the relevant changes to the source code of a program and then recompile, but this doesn't sound like it is applicable in your case.
This doesn't mean that there isn't a way to do what you want in MATLAB. In fact, since the program is specifically designed to do numeric computation there almost certainly is a way to specify the precision. You would need to provide more detailed information on your issue (screenshot?) if you want to get further guidance.
I need lots of pieces of cable in various small sizes (each under 100 meters), and cable is only sold in lengths of 100 meters.
So, to optimize my purchase, I would like some code where I can input the lengths of all the pieces of cable that I need. The code should combine my inputs, under the constraint that each combination sums to at most 100 meters, while minimizing the total number of 100 m cables I need to buy.
If anyone could help with a code in VBA, Matlab or Python I would be very grateful.
This is known as a bin-packing problem, and it's actually very difficult (computationally speaking) to find the optimal solution.
However, it is a problem that is practically useful to solve (as you have seen for yourself) and so there are several approaches that seek to find an approximate solution--one that is "good enough" without guaranteeing that it's the best possible solution. I did a quick search and found this course website, which has some examples that may help you out.
If you are looking for an exact solution, you can ask the related question "will I be able to fit the cables I need into N 100-meter cables?". This feasibility problem can be expressed as a "binary program", which is a special case of a "mixed-integer linear program", for which MATLAB has a solver called intlinprog (requires the optimization toolbox).
I'm sorry that I don't have any code to solve your problem, but I hope that this at least gives you some keywords to help you find more resources!
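For illustration, here is a minimal first-fit decreasing sketch in MATLAB, one of the classic approximate approaches (the piece lengths are made-up example values, and the result is not guaranteed to be optimal):

pieces   = [40 70 15 55 30 25 90 10];    % lengths you need, in meters (example values)
capacity = 100;                          % each purchased cable is 100 m

pieces    = sort(pieces, 'descend');     % place the longest pieces first
remaining = [];                          % free length left in each opened cable

for p = pieces
    idx = find(remaining >= p, 1);       % first opened cable with enough room
    if isempty(idx)
        remaining(end+1) = capacity - p; %#ok<AGROW> open a new 100 m cable
    else
        remaining(idx) = remaining(idx) - p;
    end
end

fprintf('Cables to buy: %d\n', numel(remaining));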
I believe this is like the cutting stock problem. There are some very good methods to solve this. Here is an implementation and some background. It is not too difficult to write an Excel front-end for this (see here).
If you google for "cutting stock problem" you will find lots of references.
I have a question about speeding up an application built in MATLAB. I need to know the effect of using vectorization and parallel computation on speeding up the application, and whether there is a better method than either of these in such a case. Thanks.
The first thing you need to do when your MATLAB code runs too slow is to run it in the profiler. In recent versions of MATLAB, this can be done by pressing the "Run and Time" button on the main toolbar. This way, you will know which functions, and which lines in those functions, take up the most time. Once you know this, you may do one of the following, depending on your circumstances and the nature of the particular piece of code:
Think about whether your algorithm is optimal in terms of O() complexity.
Try turning loops into vector operations (see the sketch after this list). The benefit of this has declined in recent versions of MATLAB because of improvements in how loops are executed.
If you have a multi-core CPU, try using the Parallel Computing Toolbox. If your code parallelizes well, you will get a speed-up nearly equal to the number of cores.
If you have an nVidia GPU, try using the GPU support. You can get a speed-up by a factor of 10 or more for some problems, but not all problems are amenable to this sort of optimization.
If everything else fails, you may rewrite the slowest piece of your code in a low-level language like C. See here for how to do this. You could then use low-level profiling tools like Intel VTune to get the absolute maximum speed from the low-level code.
If it is still too slow, you may need to buy an FPGA. See here for a brief tutorial.
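As a rough illustration of the vectorization point above (absolute timings will vary with your machine and MATLAB version):

n = 1e6;
x = rand(n, 1);

tic                              % explicit loop
y1 = zeros(n, 1);                % preallocate the result
for k = 1:n
    y1(k) = x(k)^2;
end
tLoop = toc;

tic                              % vectorized equivalent
y2 = x.^2;
tVec = toc;

fprintf('loop: %.4f s, vectorized: %.4f s\n', tLoop, tVec);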
Basically I have some hourly and daily data like
Day 1
Hours,Measure
(1,21)
(2,22)
(3,27)
(4,24)
Day 2
Hours,Measure
(1,23)
(2,26)
(3,29)
(4,20)
Now I want to find outliers in the data by considering the hourly variations as well as the daily variations, using bivariate analysis on (hour, measure).
So which clustering algorithm is best suited to finding outliers in this scenario?
One piece of 'good' advice (:P) I can give you is that (based on my experience) it is NOT a good idea to treat time like a spatial feature. So beware of solutions that do this. You could probably start by searching the literature on outlier detection for time-series data.
You really should use a different representation for your data.
Why don't you use an actual outlier detection method, if you want to detect outliers?
Other than that, just read through some literature. k-means, for example, is known to have problems with outliers. DBSCAN, on the other hand, is designed to be used on data with "Noise" (the N in DBSCAN), which is essentially outliers.
Still, the way you are representing your data will make none of these work very well.
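As a minimal MATLAB illustration of what a residual-based (non-clustering) check could look like, here is a sketch that removes the hourly pattern first; it is not a full time-series model, and the 3-sigma threshold is an arbitrary choice:

M = [21 23; 22 26; 27 29; 24 20];            % rows = hours, columns = days (data from the question)

hourlyMean = mean(M, 2);                      % typical value for each hour of the day
resid      = bsxfun(@minus, M, hourlyMean);   % remove the hourly pattern
sigma      = std(resid(:));

isOutlier    = abs(resid) > 3*sigma;          % flag points far from the hourly pattern
[hr, dayIdx] = find(isOutlier)                % indices of flagged points (may well be empty)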
You should use time series based outlier detection method because of the nature of your data (it has its own seasonality, trend, autocorrelation etc.). Time series based outliers are of different kinds (AO, IO etc.) and it's kind of complicated but there are applications which make it easy to implement.
Download the latest build of R from http://cran.r-project.org/. Install the packages "forecast" & "TSA".
Use the auto.arima function of the forecast package to derive the best model fit for your data, and pass those variables along with your data to the detectAO and detectIO functions of the TSA package. These functions will report any outliers present in the data, with their time indexes.
R is also easy to integrate with other applications, or you can simply run it as a batch job. Hope that helps!
I'm getting some new students soon, who will be writing MATLAB code. They're new to MATLAB, but they have experience coding in Java and C++.
I'm going to have them go through the Getting Started section of the MATLAB help. In addition, I want to give a small tutorial with the goal to prevent them from making some of the most common mistakes people make when switching to MATLAB (e.g. "MATLAB starts counting at 1"), and show them some features that they may not be aware of when coming from other languages (e.g. "you can subtract a scalar directly from an array, and for vectors, there's bsxfun").
What are the most important things I should tell them?
I agree with the previous answers, but I'd say indexing is the first, most important, and most complex concept when studying MATLAB. I've seen many C programmers starting with MATLAB just write loops, lots of loops, something ridiculous like
for i = 1:10
    a(i) = i;
end
instead of the simple a = 1:10;.
So I'd suggest they read about matrix-programming concepts (a short illustration follows this list):
How to create simple vectors and matrices
Which variables can be used for indexing
How to create and apply indexes
Logical operations and functions, logical and numeric indexes (find function)
Indexing right and left side of expression
Difference between indexing numerical matrices and cell arrays
How to use indexes as output from different functions, like sort, unique, ismember, etc.
The fact that you cannot apply indexes to intermediate results (e.g. sort(a)(1:3) is not valid MATLAB)
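A few of these idioms in one place (just a quick illustration):

a = 1:10;                        % simple vector, no loop needed
m = magic(4);                    % example matrix

evens = a(mod(a, 2) == 0);       % logical indexing: pick the even entries
idx   = find(m > 10);            % linear indices of the large elements
m(m > 10) = 0;                   % logical indexing on the left-hand side of an assignment
col2  = m(:, 2);                 % all rows of column 2
[s, order] = sort(a, 'descend'); % use the index output of sort
top3  = a(order(1:3));           % the three largest values via the returned indexes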
As for productivity, I would add that knowing how to use the editor's cell mode is very useful.
Enough snippy comments, here's something of an answer too:
The Matlab desktop: what all the windows are for, dragging code from the history back into the command window, the variable inspector, etc.
Plotting: not just the plot command, but how to use the plot GUI tools, and how to create an M-file from a graphic.
M-files for scripts and functions, and the key differences between them.
M-Lint, the profiler.
Use Matlab as a vehicle for teaching the perils and pitfalls of floating-point arithmetic.
Getting help: at the command line, on the web, documentation, the file exchange, ...
Set path and the current working directory.
Importing data from files, exporting data to files, loading and saving.
That should be enough to keep them busy for an hour or so.
To clarify, I propose these topics to help you teach your students to avoid common Matlab errors, including:
Unproductive use of the tool: retyping commands that can easily be recalled from the history.
Using C (or Java) style file-reading commands instead of uiimport.
Slowly typing scripts to draw graphics when Matlab can do it for you.
Wondering what all the little orange lines in the editor's right margin and the squiggly underlines mean.
Trying to figure things out for themselves when the help facilities could tell them.
Tons of other stuff that much more experienced Matlab users have taken ages to learn.
Floating point arithmetic is not real.
and probably a lot of other stuff too.
For those coming from C-family languages, the element-wise operators are new. It took me a couple of months to discover the ./ and .* operators. Before that, I used to write for loops for element-wise operations. So perhaps that's something that should be pointed out.
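For example:

x = [1 2 3];
y = [4 5 6];

x .* y      % element-wise product: [4 10 18]
x ./ y      % element-wise division
x .^ 2      % element-wise power:   [1 4 9]
x * y'      % matrix product (1x3 times 3x1): the scalar 32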
With respect to unexpected or non-intuitive MATLAB features that may cause them confusion, there are some good pointers in this question:
Corner Cases, Unexpected and Unusual MATLAB
With respect to cool time-saving/efficiency tricks, this other question has some nice examples:
What are your favourite MATLAB/Octave programming tricks?
And for a few potentially more advanced topics, you can refer to the answers to this question:
MATLAB interview questions?
Now for my $0.02. Based on the sorts of questions I've seen asked most frequently on SO, I'd say you will want to make sure they have a good understanding of the following concepts:
Reading and writing data files of different formats, such as using CSVREAD, DLMREAD, TEXTREAD, FREAD, FSCANF, LOAD, and all their write equivalents.
How to deal effectively with cell arrays.
The different image formats, how these are represented, and how to modify them (which will involve a discussion of various data types and how to deal with multi-dimensional arrays).
How to use handle graphics to control the appearance of various graphics objects.
And here are some neat features that are already implemented in MATLAB that may save them some time and effort (a few tiny examples follow this list):
Functions for performing various array operations, like KRON, DIAG, and TRIU.
Functions to create specialized matrices, like HANKEL and TOEPLITZ.
Predefined dialog boxes, like UIGETFILE and INPUTDLG.
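A couple of tiny examples of the items above (illustrative only):

A = toeplitz(1:4);              % constant-diagonal matrix built from a vector
B = triu(magic(4));             % upper-triangular part of a matrix
C = kron(eye(2), [1 2; 3 4]);   % block-diagonal matrix via the Kronecker product

h = plot(1:10, (1:10).^2);      % handle graphics: restyle the line after creation
set(h, 'LineWidth', 2, 'Color', 'r');
set(gca, 'YScale', 'log');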
MATLAB is conceptually in some ways very different from the other languages you mentioned (a small sketch of two of these features follows the list):
cell arrays are used where Java uses upcasting
global and persistent variables correspond to static variables in Java
GUI handles are just numbers of type double
nested functions are closures; neither Java nor C/C++ has such a feature
the seldom-used private and @TYPE folders for visibility scoping
array handling tricks
very easy interoperability with Java/COM/.Net using MATLAB syntax
variadic function arguments, handling of function arguments with varargin / varargout
memory management
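Here is a tiny sketch of two of these features, variadic arguments and nested-function closures, assuming it is saved as a function file named demoFeatures.m (a hypothetical name):

function demoFeatures(varargin)          % variadic input arguments
    fprintf('got %d inputs\n', nargin);

    counter = makeCounter();             % nested function used as a closure
    counter();                           % prints 1
    counter();                           % prints 2
end

function f = makeCounter()
    n = 0;
    f = @step;                           % the handle keeps this workspace (and n) alive
    function step()
        n = n + 1;
        disp(n);
    end
end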
I want to know how to perform "spectral change detection" for classifying the vocal and non-vocal segments of a song. We need to find the spectral changes from a spectrogram. Can anyone provide more detailed information about this, particularly involving MATLAB?
Separating out distinct signals from audio is a very active area of research, and it is a very hard problem. This is often called Blind Signal Separation in the literature. (There is some MATLAB demo code in the previous link.)
Of course, if you know that there are vocals in the music, you can use one of the many vocal separation algorithms.
As others have noted, solving this problem using only raw spectrum analysis is a dauntingly hard problem, and you're unlikely to find a good solution to it. At best, you might be able to extract some of the vocals and a few extra crossover frequencies from the mix.
However, if you can be more specific about the nature of the audio material you are working with here, you might be able to get a little bit further.
In the worst case, your material would be normal mp3's of regular songs -- ie, a full band + vocalist. I have a feeling that this is the case you are probably looking at given the nature of your question.
In the best case, you have access to the multitrack studio recordings and have at least a full mixdown and an instrumental track, in which case you could extract the vocal frequencies from the mix. You would do this by generating an impulse response from one of the tracks and applying it to the other.
In the middle case, you are dealing with simple music to which you could apply some sort of algorithm tuned to the parameters of the music. For instance, if you are dealing with electronic music, you can use the stereo width of the track to your advantage to eliminate all mono elements (i.e., basslines + kicks), extract the vocals + other panned instruments, and then apply some type of filtering and spectrum analysis from there.
In short, if you are planning on making an all-purpose algorithm to generate clean acapella cuts from arbitrary source material, you're probably biting off more than you can chew here. If you can specifically limit your source material, then you have a number of algorithms at your disposal depending on the nature of those sources.
This is hard. If you can do this reliably you will be an accomplished computer scientist. The most promising method I read about used the lyrics to generate a voice only track for comparison. Again, if you can do this and write a paper about it you will be famous (amongst computer scientists). Plus you could make a lot of money by automatically generating timings for karaoke.
If you just want to decide whether a block of music is clean a-capella or has an instrumental background, you could probably do that by comparing the bandwidth of the signal to that of a typical human singer. Also, you could check the fundamental frequency, which can only lie in a fairly limited range for human voices.
Still, it probably won't be easy. However, hearing aids do this all the time, so it is clearly doable. (Though they typically search for speech, not singing)
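A very rough MATLAB sketch of that bandwidth idea (illustration only; the file name and the 5 kHz / 5% thresholds are made up):

[x, Fs] = audioread('segment.wav');      % hypothetical input file
x = mean(x, 2);                          % mix down to mono

X = abs(fft(x)).^2;                      % power spectrum
f = (0:numel(x)-1)' * Fs / numel(x);
half = f < Fs/2;                         % keep the non-mirrored half

highEnergy = sum(X(half & f > 5000)) / sum(X(half));
if highEnergy < 0.05                     % very little energy above ~5 kHz
    disp('probably a voice-only (a cappella) segment');
else
    disp('probably voice plus instruments');
end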
First sync the instrumental with the original: make sure they are the same length and bitrate, start and end at exactly the same time, and convert them both to .wav.
then do something like
% Both tracks must be sample-aligned, the same length and the same sample rate.
[I, Fs]  = wavread('instrumental.wav');
[N, Fs2] = wavread('normal.wav');
assert(Fs == Fs2 && isequal(size(I), size(N)), 'the two tracks must match');
A = N - I;                        % simple phase cancellation: full mix minus instrumental
                                  % if nothing cancels, the instrumental may be inverted; try N + I
wavwrite(A, Fs, 'acapella.wav');  % newer MATLAB versions use audioread/audiowrite instead
That should do it... a little linear algebra goes a long way.