C: skip an element with scanf

I'm trying to scan a file that contains 13 ints which are to be stored in 13 variables. Is there a way to loop over this while skipping the i-th element? I'm anticipating there might be a solution, which has so far eluded me, perhaps similar to the code below:
int i;
for (i = 0; i < 13; i++)
fscanf(file, "%d", &variables[i]); // somehow apply i to %d
instead of the obvious but lengthy and unclean:
fscanf(file, "%d", &variable1);
fscanf(file, "%*d %d", &variable2);
fscanf(file, "%*d %*d %d", &variable3); // etc.
thanks

int *variables[] = { &variable1, &variable2, &variable3, ... };
for (int i = 0; i < 13; i++) {
fscanf(file, "%d", variables[i]);
}
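If the goal really is to discard one particular value rather than store it, scanf's assignment-suppression flag ('*') can do that inside the same loop. A minimal sketch, where numbers.txt and skipIndex are made-up stand-ins for the real file and the element to skip:

#include <stdio.h>

int main(void)
{
    int values[13] = {0};
    int skipIndex = 6;                       /* hypothetical: which value to discard */
    FILE *file = fopen("numbers.txt", "r");  /* hypothetical file name */
    if (file == NULL)
        return 1;

    for (int i = 0; i < 13; i++) {
        if (i == skipIndex)
            fscanf(file, "%*d");             /* '*' reads and discards; no destination needed */
        else
            fscanf(file, "%d", &values[i]);
    }

    fclose(file);
    return 0;
}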

CUDA class with multidimensional pointers

I have been struggling with this class implementation now for quite a while and hope someone can help me with it.
class Material_Properties_Class_device
{
public:
int max_variables;
Logical * table_prop;
Table_Class ** prop_table;
};
The implementation for the pointers looks like this
Material_Properties_Class **d_material_prop = new Material_Properties_Class* [4];
Logical *table_prop;
for (int k = 1; k <= 3; k++ )
{
cutilSafeCall(cudaMalloc((void**)&(d_material_prop[k]),sizeof(Material_Properties_Class)));
cutilSafeCall(cudaMemcpy(d_material_prop[k], material_prop[k], sizeof(Material_Properties_Class ), cudaMemcpyHostToDevice));
}
for( int i = 1; i <= 3; i++ )
{
cutilSafeCall(cudaMalloc((void**)&(table_prop), sizeof(Logical)));
cudaMemcpy(&(d_material_prop[i]->table_prop), &(table_prop), sizeof(Logical*),cudaMemcpyHostToDevice);
cudaMemcpy(table_prop, material_prop[i]->table_prop, sizeof(Logical),cudaMemcpyHostToDevice);
}
cutilSafeCall(cudaMalloc((void ***)&material_prop_device, (4) * sizeof(Material_Properties_Class *)));
cutilSafeCall(cudaMemcpy(material_prop_device, d_material_prop, (4) * sizeof(Material_Properties_Class *), cudaMemcpyHostToDevice));
This implementation works, but I can't get it working for the **prop_table.
I assume it must somehow follow the same principle, but I just can't get my head around it.
I have already tried
Table_Class_device **prop_table = new Table_Class_device*[3];
and inserting another loop inside the second for loop:
for (int k = 1; k <= 3; k++ )
{
cutilSafeCall(cudaMalloc((void**)&(prop_table[k]), sizeof(Table_Class)));
cutilSafeCall(cudaMemcpy( prop_table[k], material_prop[i]->prop_table[k], sizeof( Table_Class *), cudaMemcpyHostToDevice));
}
Help would be much appreciated.
Some magic. Maybe it'll help:
struct fading_coefficient
{
double* frequency_array;
double* temperature_array;
int frequency_size;
int temperature_size;
double** fading_coefficients;
};
struct fading_coefficient* cuda_fading_coefficient;
double* frequency_array = NULL;
double* temperature_array = NULL;
double** fading_coefficients = NULL;
// Host-side scratch list that will hold the device pointer of each row
// (one entry per row, i.e. temperature_size of them).
double** fading_coefficients1 = (double **)malloc(fading_coefficient->temperature_size * sizeof(double *));
// Copy the two 1-D arrays to the device and keep their device pointers.
cudaMalloc((void**)&frequency_array, fading_coefficient->frequency_size * sizeof(double));
cudaMemcpy( frequency_array, fading_coefficient->frequency_array, fading_coefficient->frequency_size * sizeof(double), cudaMemcpyHostToDevice );
free(fading_coefficient->frequency_array);
cudaMalloc((void**)&temperature_array, fading_coefficient->temperature_size * sizeof(double));
cudaMemcpy( temperature_array, fading_coefficient->temperature_array, fading_coefficient->temperature_size * sizeof(double), cudaMemcpyHostToDevice );
free(fading_coefficient->temperature_array);
// Device-side array of row pointers for the 2-D table.
cudaMalloc((void***)&fading_coefficients, fading_coefficient->temperature_size * sizeof(double*));
for (int i = 0; i < fading_coefficient->temperature_size; i++)
{
// Copy each row to the device and remember its device pointer on the host.
cudaMalloc((void**)&(fading_coefficients1[i]), fading_coefficient->frequency_size * sizeof(double));
cudaMemcpy( fading_coefficients1[i], fading_coefficient->fading_coefficients[i], fading_coefficient->frequency_size * sizeof(double), cudaMemcpyHostToDevice );
free(fading_coefficient->fading_coefficients[i]);
}
// Now copy the list of row pointers into the device pointer array.
cudaMemcpy( fading_coefficients, fading_coefficients1, fading_coefficient->temperature_size * sizeof(double*), cudaMemcpyHostToDevice );
// Patch the host struct so its pointer members hold device addresses...
fading_coefficient->frequency_array = frequency_array;
fading_coefficient->temperature_array = temperature_array;
fading_coefficient->fading_coefficients = fading_coefficients;
// ...then copy the whole struct to the device in one go.
cudaMalloc((void**)&cuda_fading_coefficient, sizeof(struct fading_coefficient));
cudaMemcpy( cuda_fading_coefficient, fading_coefficient, sizeof(struct fading_coefficient), cudaMemcpyHostToDevice );
This question comes up frequently. Multidimensional pointers are especially challenging.
If possible, it's recommended that you flatten multidimensional pointer usage (**) to single-dimensional pointer usage (*), and as you've seen, even that is somewhat cumbersome.
The single-dimensional case (*) is further described here, although you seem to have already figured that out.
If you really want to handle the 2-dimensional (**) case, look here.
An example implementation for the 3-dimensional (***) case is here. ("madness!")
Working with 2 and 3 dimensions this way is quite difficult. Thus the recommendation to flatten.
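To make the flattening recommendation concrete, here is a hedged sketch (width, height and hostTable are made-up names, not the poster's classes) that stores a 2-D table as one contiguous device allocation and indexes it as row * width + col, so only a single pointer ever needs to be embedded in a device-side struct:

#include <cuda_runtime.h>
#include <vector>

int main()
{
    const int width  = 4;   // assumption: columns per row
    const int height = 3;   // assumption: number of rows

    std::vector<float> hostTable(width * height, 0.0f);  // flattened 2-D table on the host

    float *devTable = NULL;
    cudaMalloc((void**)&devTable, width * height * sizeof(float));
    cudaMemcpy(devTable, &hostTable[0],
               width * height * sizeof(float), cudaMemcpyHostToDevice);

    // In a kernel, element (row, col) is devTable[row * width + col];
    // this one pointer replaces the whole pointer-to-pointer table.

    cudaFree(devTable);
    return 0;
}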

ios parse issue nstimer tutorial

I have followed the tutorial:
www.edumobile.org/iphone/iphone-programming-tutorials/a-simple-stopwatch-for-iphone
and I get 1 error and 1 warning, both on the same line 71
for (int i = [timeArray count] – 1; i >= 0; i–) {
error – a parse issue Expected )
warning – Unused entity issue Expression result unused
Any ideas what is wrong?
Change this,
for (int i = [timeArray count] – 1; i >= 0; i–) {
to,
for (int i = [timeArray count] - 1; i >= 0; i--) {
The compiler is saying that it cannot parse the character '–', which is an en dash rather than a minus sign (note that the '– 1' needs a plain '-' as well; this is probably a copy-paste artifact from the tutorial page). If it cannot recognize and parse the for-loop syntax, it throws this error.
As ACB mentioned, the expression needs to be i-- instead of i–.
Just a couple of notes: Douglas Crockford actually recommends avoiding -- and ++ in favor of i -= 1. While a smidgen more verbose, it leaves no room for doubt over what it actually does, whereas something like
int example = --i + b;
may leave some readers unsure of the value of i after the expression.
Also, as a minor optimization, you should compute the loop bound once in a local variable instead of calling [timeArray count] on every iteration:
int lastIndex = [timeArray count] - 1;
for (int i = lastIndex; i >= 0; i -= 1) {
Hope that helps!

Memory Access Exceedingly Slow For a Simple Loop Through an Array

I am taking about 50 times as long as expected to loop through a simple assignment. My first reaction was that I had disordered my memory access in the arrays, resulting in cache misses. This doesn't seem to be the case, however.
The pixel value assignment and updating of the arrays takes a dog's age. Do any of you folks have an inkling as to why this is happening? (I am compiling for an iPod with an A4.)
memset(columnSumsCurrentFrameA, 0, sizeof(unsigned int) * (_validImageWidth/numSubdivisions) );
memset(rowSumsCurrentFrameA, 0, sizeof(unsigned int) * (_validImageHeight/numSubdivisions) );
int pixelValue = 0;
int startingRow = 0;
int startingColumn = 0;
for (int i = 0; i < _validImageHeight/numSubdivisions; i++)
{
int index = (i + startingRow) * _imageWidth;
for( int j = 0; j < (_validImageWidth/numSubdivisions); j++)
{
pixelValue = imageData[index + startingColumn + j];
columnSumsCurrentFrameA[j] += pixelValue;
rowSumsCurrentFrameA[i] += pixelValue;
}
}
The result of _validImageWidth/numSubdivisions must be an integer; are you sure that is always the case?
Also, you should calculate _validImageWidth/numSubdivisions and _validImageHeight/numSubdivisions before entering the double loop; it's not safe to assume your compiler takes care of it.
int rowLimit = _validImageHeight/numSubdivisions;
int colLimit = _validImageWidth/numSubdivisions;
for (int i = 0; i < rowLimit; i++)
{
int index = (i + startingRow) * _imageWidth;
for( int j = 0; j < colLimit; j++)
{
pixelValue = imageData[index + startingColumn + j];
columnSumsCurrentFrameA[j] += pixelValue;
rowSumsCurrentFrameA[i] += pixelValue;
}
}

Carefully deleting N items from a "circular" vector (or perhaps just an NSMutableArray)

Imagine a std::vector, say, with 100 things in it (0 to 99) currently. You are treating it as a loop. So the 105th item is index 4; forward 7 from index 98 is index 5.
You want to delete N items after index position P.
So, delete 5 items after index 50; easy.
Or 5 items after index 99: you delete index 0 five times (items 0 through 4), noting that after those deletions position 99 itself no longer exists.
Worst of all, 5 items after index 97: you have to deal with both modes of deletion.
What's the elegant and solid approach?
Here's a boring routine I wrote
-(void)knotRemovalHelper:(NSMutableArray*)original
after:(NSInteger)nn howManyToDelete:(NSInteger)desired
{
#define ORCO ((NSInteger)[original count])
static NSInteger kount, howManyUntilLoop, howManyExtraAferLoop;
if ( ... our array is NOT a loop ... )
// trivial, if messy...
{
for ( kount = 1; kount<=desired; ++kount )
{
if ( (nn+1) >= ORCO )
return;
[original removeObjectAtIndex:( nn+1 )];
}
return;
}
else // our array is a loop
// messy, confusing and inelegant. how to improve?
// here we go...
{
howManyUntilLoop = (ORCO-1) - nn;
if ( howManyUntilLoop > desired )
{
for ( kount = 1; kount<=desired; ++kount )
[original removeObjectAtIndex:( nn+1 )];
return;
}
howManyExtraAferLoop = desired - howManyUntilLoop;
for ( kount = 1; kount<=howManyUntilLoop; ++kount )
[original removeObjectAtIndex:( nn+1 )];
for ( kount = 1; kount<=howManyExtraAferLoop; ++kount )
[original removeObjectAtIndex:0];
return;
}
#undef ORCO
}
Update!
Invariant's second answer leads to the following excellent, very simple solution: "starting with" is much better than "starting after", so the routine now uses "start with".
N times do if P < currentsize remove P else remove 0
-(void)removeLoopilyFrom:(NSMutableArray*)ra
startingWithThisOne:(NSInteger)removeThisOneFirst
howManyToDelete:(NSInteger)countToDelete
{
// exception if removeThisOneFirst > ra highestIndex
// exception if countToDelete is > ra size
// so easy thanks to Invariant:
for ( NSInteger k = 0; k < countToDelete; ++k )
{
if ( removeThisOneFirst < [ra count] )
[ra removeObjectAtIndex:removeThisOneFirst];
else
[ra removeObjectAtIndex:0];
}
}
Update!
Toolbox has pointed out the excellent idea of working to a new array - super KISS.
Here's an idea off the top of my head.
First, generate an array of integers representing the indices to remove. So "remove 5 from index 97" would generate [97,98,99,0,1]. This can be done with the application of a simple modulus operator.
Then, sort this array descending giving [99,98,97,1,0] and then remove the entries in that order.
Should work in all cases.
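A hedged sketch of that idea, written against std::vector<int> rather than NSMutableArray (the names are illustrative): build the wrapped index list with the modulus, sort it descending, and erase from the highest index down so earlier erasures don't shift the later targets.

#include <algorithm>
#include <functional>
#include <vector>

void removeWrapped(std::vector<int> &vec, size_t start, size_t count)
{
    // Wrapped indices to remove, e.g. start 97, count 5 -> 97, 98, 99, 0, 1.
    std::vector<size_t> doomed;
    for (size_t k = 0; k < count; ++k)
        doomed.push_back((start + k) % vec.size());

    // Highest index first, so removals don't shift the remaining targets.
    std::sort(doomed.begin(), doomed.end(), std::greater<size_t>());
    for (size_t k = 0; k < doomed.size(); ++k)
        vec.erase(vec.begin() + doomed[k]);
}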
This solution seems to work, and it copies all remaining elements in the vector only once (to their final destination).
Assume kNumElements, kStartIndex, and kNumToRemove are defined as const size_t values.
vector<int> my_vec(kNumElements);
for (size_t i = 0; i < my_vec.size(); ++i) {
my_vec[i] = i;
}
for (size_t i = 0, cur = 0; i < my_vec.size(); ++i) {
// What is the "distance" from the current index to the start, taking
// into account the wrapping behavior?
size_t distance = (i + kNumElements - kStartIndex) % kNumElements;
// If it's not one of the ones to remove, then we keep it by copying it
// into its proper place.
if (distance >= kNumToRemove) {
my_vec[cur++] = my_vec[i];
}
}
my_vec.resize(kNumElements - kNumToRemove);
There's nothing wrong with two loop solutions as long as they're readable and don't do anything redundant. I don't know Objective-C syntax, but here's the pseudocode approach I'd take:
if (Len <= after + 1 + howManyToDelete) //deletions wrap past the end; will need a second loop
firstPass = Len - (after + 1); //handle the tail in the first loop, the head in the second
else
firstPass = howManyToDelete; //the first loop will get them all
for (kount = 0; kount < firstPass; kount++)
remove after+1
for ( ; kount < howManyToDelete; kount++) //if firstPass < howManyToDelete, clean up leftovers
remove 0
This solution doesn't use mod, does the limit calculation outside the loop, and touches the relevant samples once each. The second for loop won't execute if all the samples were handled in the first loop.
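For comparison, here is a hedged C++ rendering of that pseudocode against std::vector<int> (names are illustrative); it keeps the two-loop shape and the up-front limit calculation:

#include <vector>

void removeAfterWrapped(std::vector<int> &vec, size_t after, size_t howManyToDelete)
{
    size_t len = vec.size();
    size_t firstPass;
    if (len <= after + 1 + howManyToDelete)    // deletions wrap past the end
        firstPass = len - (after + 1);         // tail handled here, head in the second loop
    else
        firstPass = howManyToDelete;           // the first loop gets them all

    size_t kount = 0;
    for (; kount < firstPass; ++kount)
        vec.erase(vec.begin() + after + 1);    // later elements keep sliding into this slot
    for (; kount < howManyToDelete; ++kount)
        vec.erase(vec.begin());                // leftovers wrap to the front
}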
The common way to do this in DSP is with a circular buffer. This is just a fixed length buffer with two associated counters:
//make sure BUFSIZE is a power of 2 for quick mod trick
#define BUFSIZE 1024
int CircBuf[BUFSIZE];
int InCtr, OutCtr;
void PutData(int *Buf, int count) {
int srcCtr;
int destCtr = InCtr & (BUFSIZE - 1); // if BUFSIZE is a power of 2, equivalent to and faster than destCtr = InCtr % BUFSIZE
for (srcCtr = 0; (srcCtr < count) && (destCtr < BUFSIZE); srcCtr++, destCtr++)
CircBuf[destCtr] = Buf[srcCtr];
for (destCtr = 0; srcCtr < count; srcCtr++, destCtr++)
CircBuf[destCtr] = Buf[srcCtr];
InCtr += count;
}
void GetData(int *Buf, int count) {
int srcCtr = OutCtr & (BUFSIZE - 1);
int destCtr = 0;
for (destCtr = 0; (srcCtr < BUFSIZE) && (destCtr < count); srcCtr++, destCtr++)
Buf[destCtr] = CircBuf[srcCtr];
for (srcCtr = 0; destCtr < count; srcCtr++, destCtr++)
Buf[destCtr] = CircBuf[srcCtr];
OutCtr += count;
}
int BufferOverflow() {
return ((InCtr - OutCtr) > BUFSIZE);
}
This is pretty lightweight, but effective. And aside from the ctr = BigCtr & (SIZE-1) stuff, I'd argue it's highly readable. The only reason for the & trick is that in old DSP environments mod was an expensive operation, so for something that ran often, like every time a buffer was ready for processing, you'd find ways to remove it. And if you were doing FFTs, your buffers were probably a power of 2 anyway.
These days, of course, you have 1 GHz processors and magically resizing arrays. You kids get off my lawn.
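As a quick sanity check on that comment (a throwaway sketch, not part of the original answer): for a power-of-2 size, masking and the modulus really do agree.

#include <assert.h>

int main(void)
{
    // 1024 is a power of 2, so x & (1024 - 1) picks out exactly x % 1024.
    for (unsigned x = 0; x < 100000; x++)
        assert((x & (1024 - 1)) == (x % 1024));
    return 0;
}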
Another method:
N times do {remove entry at index P mod max(ArraySize, P)}
Example:
N=5, P=97, ArraySize=100
1: max(100, 97)=100 so remove at 97%100 = 97
2: max(99, 97)=99 so remove at 97%99 = 97 // array size is now 99
3: max(98, 97)=98 so remove at 97%98 = 97
4: max(97, 97)=97 so remove at 97%97 = 0
5: max(96, 97)=97 so remove at 97%97 = 0
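A hedged sketch of that recipe, again with std::vector<int> standing in for the container (names are illustrative):

#include <algorithm>
#include <vector>

void removeLoopily(std::vector<int> &vec, size_t p, size_t n)
{
    // Remove at P while P is still a valid index, otherwise at 0, as in the worked example.
    for (size_t k = 0; k < n && !vec.empty(); ++k)
        vec.erase(vec.begin() + p % std::max(vec.size(), p));
}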
I don't program for the iPhone, so I'll imagine std::vector; it's quite easy, simple and elegant enough:
#include <iostream>
using std::cout;
#include <vector>
using std::vector;
#include <cassert> //no need for using, assert is macro
template<typename T>
void eraseCircularVector(vector<T> & vec, size_t position, size_t count)
{
assert(count <= vec.size());
if (count > 0)
{
position %= vec.size(); //normalize position
size_t positionEnd = (position + count) % vec.size();
if (positionEnd < position)
{
vec.erase(vec.begin() + position, vec.end());
vec.erase(vec.begin(), vec.begin() + positionEnd);
}
else
vec.erase(vec.begin() + position, vec.begin() + positionEnd);
}
}
int main()
{
vector<int> values;
for (int i = 0; i < 10; ++i)
values.push_back(i);
cout << "Values: ";
for (vector<int>::const_iterator cit = values.begin(); cit != values.end(); cit++)
cout << *cit << ' ';
cout << '\n';
eraseCircularVector(values, 5, 1); //remains 9: 0,1,2,3,4,6,7,8,9
eraseCircularVector(values, 16, 5); //remains 4: 3,4,6,7
cout << "Values: ";
for (vector<int>::const_iterator cit = values.begin(); cit != values.end(); cit++)
cout << *cit << ' ';
cout << '\n';
return 0;
}
However, you might consider:
creating a new loop_vector class, if you use this kind of functionality often enough
using a list if you perform many deletions, or even a few deletions on a large array (unless they are from the end, which is a simple pop_back)
If your container (NSMutableArray or whatever) is not a list but a vector (i.e. a resizable array), you most definitely don't want to delete items one by one, but as a whole range (e.g. std::vector's erase(begin, end))!
Edit: reacting to a comment, to fully appreciate what a vector has to do when you erase any element other than the last one: it must copy (move) every value that comes after it. For example, with 1000 items in the array, erasing the first one means 999 copies, which is very costly.
Example:
#include <iostream>
#include <vector>
#include <ctime>
using namespace std;
int main()
{
clock_t start, end;
vector<int> vec;
const int items = 64 * 1024;
cout << "using " << items << " items in vector\n";
for (size_t i = 0; i < items; ++i) vec.push_back(i);
start = clock();
while (!vec.empty()) vec.erase(vec.begin());
end = clock();
cout << "Inefficient method took: "
<< (end - start) * 1.0 / CLOCKS_PER_SEC << " s\n";
for (size_t i = 0; i < items; ++i) vec.push_back(i);
start = clock();
vec.erase(vec.begin(), vec.end());
end = clock();
cout << "Efficient method took: "
<< (end - start) * 1.0 / CLOCKS_PER_SEC << " s\n";
return 0;
}
Produces output:
using 65536 items in vector
Inefficient method took: 1.705 s
Efficient method took: 0 s
Note that it's very easy to end up with the inefficient version; see e.g. http://www.cplusplus.com/reference/stl/vector/erase/

Bottom-up mergesort problems!

I am having problems with bottom-up mergesort, specifically with the sorting/merging. My current code is:
public void mergeSort(long[] a, int len) {
long[] temp = new long[a.length];
int length = 1;
while (length < len) {
mergepass(a, temp, length, len);
length *= 2;
}
}
public void mergepass(long[] a, long[] temp, int blocksize, int len) {
int k = 0;
int i = 1;
while(i <= (len/blocksize)){
if(blocksize == 1){break;}
int min = a.length;
for(int j = 0; j < blocksize; j++){
if(a[i*j] < min){
temp[k++] = a[i*j];
count++;
}
else{
temp[k++] = a[(i*j)+1];
count++;
}
}
for(int n = 0; n < this.a.length; n++){
a[n] = temp[n];
}
}
}
Obvious problems:
i is never incremented.
At no point do you compare two elements in the array. (Is that what if(a[i*j] < min) is supposed to be doing? I can't tell.)
Why are you multiplying i and j?
What's this.a.length?
Style problems:
mergeSort() takes len as an argument, even though arrays have an implicit length. To make matters worse, the function also uses a.length and length.
Generally poor variable names.
Nitpicks:
If you're going to make a second array of the same size, it is common to make one the "source" and the other the "destination" and swap them between passes, instead of sorting into a temporary array and copying them back again.
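To illustrate that last point, here is a hedged sketch of the source/destination swap, written in C++ with std::merge standing in for the hand-written merge (the question's code is Java, so treat this purely as a shape to imitate):

#include <algorithm>
#include <utility>
#include <vector>

void bottomUpMergeSort(std::vector<long> &a)
{
    std::vector<long> buf(a.size());
    std::vector<long> *src = &a, *dst = &buf;     // "source" and "destination" roles

    for (size_t width = 1; width < a.size(); width *= 2) {
        for (size_t lo = 0; lo < a.size(); lo += 2 * width) {
            size_t mid = std::min(lo + width, a.size());
            size_t hi  = std::min(lo + 2 * width, a.size());
            // Merge the pair of sorted runs [lo, mid) and [mid, hi) from src into dst.
            std::merge(src->begin() + lo,  src->begin() + mid,
                       src->begin() + mid, src->begin() + hi,
                       dst->begin() + lo);
        }
        std::swap(src, dst);                      // swap roles instead of copying back
    }
    if (src != &a)
        a = *src;                                 // at most one final copy if the result ended up in buf
}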