How to use textscan to read this txt file in MATLAB/Octave
Time:
11:00
Day:
2019-11-05
Company:
Hyperdrones
Drones:
Jupiter, alvalade, 20, 2000, 500.0, 20.0, 2019-11-05, 10:15
Terra, ameixoeira, 15, 1500, 400.0, 20.0, 2019-11-05, 10:20
V125, ameixoeira, 20, 2000, 350.0, 20.0, 2019-11-05, 10:20
Saturno, lumiar, 10, 1000, 600.0, 20.0, 2019-11-05, 10:30
Neptuno, lumiar, 15, 1500, 600.0, 15.0, 2019-11-05, 10:30
Mercurio, alvalade, 25, 2500, 200.0, 20.0, 2019-11-05, 10:40
Marte, campogrande, 10, 1500, 100.0, 10.0, 2019-11-05, 10:50
Regarding the Octave version specifically, I would recommend something like this:
pkg load io % required for `csv2cell` function
tmp = fileread('data'); % read file as string
tmp = strsplit( tmp, '\n' ); % split on line endings to create rows
tmp = tmp(1:6); % keep only rows 1 to 6
Headers = struct( tmp{:} ); % create struct from comma-separated-list
Headers.Drones = csv2cell('data', 7) % use csv2cell to read csv starting from row 7
Result:
Headers =
scalar structure containing the fields:
Time:: 1x5 sq_string
Day:: 1x10 sq_string
Company:: 1x11 sq_string
Drones: 8x8 cell
MATLAB does not come with an equivalent csv2cell function, but there are similar ones on the MATLAB File Exchange; here's one that seems to have a similar syntax to the Octave version.
In general I'd use csv2cell for non-numeric CSV data of this nature; it's much easier to work with than textscan. I'd only use textscan as a last resort, for non-CSV files with lines that are unusual but otherwise consistent in some way.
PS. If your csv file ends with an empty line, csv2cell might result in an extra 'empty' cell row. Remove this manually if you don't want it.
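For comparison, the same strategy (treat the first lines as alternating "Key:" / value pairs, then parse the rest as CSV) can be sketched in plain Python; the inline sample mirrors the file shown above.

```python
import csv

sample = """Time:
11:00
Day:
2019-11-05
Company:
Hyperdrones
Drones:
Jupiter, alvalade, 20, 2000, 500.0, 20.0, 2019-11-05, 10:15
Terra, ameixoeira, 15, 1500, 400.0, 20.0, 2019-11-05, 10:20"""

lines = sample.splitlines()
# Lines 0-5 alternate between a "Key:" line and its value.
headers = {lines[i].rstrip(':'): lines[i + 1] for i in range(0, 6, 2)}
# Line 6 is the "Drones:" label; the CSV body starts at line 7.
drones = [[cell.strip() for cell in row] for row in csv.reader(lines[7:])]

print(headers)       # {'Time': '11:00', 'Day': '2019-11-05', 'Company': 'Hyperdrones'}
print(drones[0][0])  # Jupiter
```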
Assuming you want to read the matrix portion:
fileID = fopen('data');  % same file as above
C = textscan(fileID, '%s %s %d %d %f %f %{yyyy-MM-dd}D %{HH:mm}D', 'HeaderLines', 7, 'Delimiter', ',')
fclose(fileID);
The 7 assumes there aren't really blank lines in your file. If you do have blank lines, adjust it to 14 or whatever is appropriate.
I am trying to read from a file and display the data in rows 6, 11, 111 and 127 in MATLAB. I could not figure out how to do it, and I have been searching the MATLAB forums and this platform for an answer. I used fscanf, textscan and other functions, but they did not work as intended. I also tried a for loop, but again the output was not what I wanted. Right now I can only read and display one row. Simply put, I want to display all of them (the data in the rows given above) at the same time. How can I do that?
MATLAB code:
n = [0 :1: 127];
%% Problem 1
figure
x1 = cos(0.17*pi*n)
%it creates file and writes content of x1 to the file
fileID = fopen('file.txt','w');
fprintf(fileID,'%d \n',x1);
fclose(fileID);
%line number can be changed in order to obtain wanted values.
fileID = fopen('file.txt');
line = 6;
C = textscan(fileID,'%s',1,'delimiter','\n', 'headerlines',line-1);
celldisp(C)
fclose(fileID);
And this is the file:
1
8.607420e-01
4.817537e-01
-3.141076e-02
-5.358268e-01
-8.910065e-01
-9.980267e-01
-8.270806e-01
-4.257793e-01
9.410831e-02
5.877853e-01
9.177546e-01
9.921147e-01
7.901550e-01
3.681246e-01
-1.564345e-01
-6.374240e-01
-9.408808e-01
-9.822873e-01
-7.501111e-01
-3.090170e-01
2.181432e-01
6.845471e-01
9.602937e-01
9.685832e-01
7.071068e-01
2.486899e-01
-2.789911e-01
-7.289686e-01
-9.759168e-01
-9.510565e-01
-6.613119e-01
-1.873813e-01
3.387379e-01
7.705132e-01
9.876883e-01
9.297765e-01
6.129071e-01
1.253332e-01
-3.971479e-01
-8.090170e-01
-9.955620e-01
-9.048271e-01
-5.620834e-01
-6.279052e-02
4.539905e-01
8.443279e-01
9.995066e-01
8.763067e-01
5.090414e-01
-4.288121e-15
-5.090414e-01
-8.763067e-01
-9.995066e-01
-8.443279e-01
-4.539905e-01
6.279052e-02
5.620834e-01
9.048271e-01
9.955620e-01
8.090170e-01
3.971479e-01
-1.253332e-01
-6.129071e-01
-9.297765e-01
-9.876883e-01
-7.705132e-01
-3.387379e-01
1.873813e-01
6.613119e-01
9.510565e-01
9.759168e-01
7.289686e-01
2.789911e-01
-2.486899e-01
-7.071068e-01
-9.685832e-01
-9.602937e-01
-6.845471e-01
-2.181432e-01
3.090170e-01
7.501111e-01
9.822873e-01
9.408808e-01
6.374240e-01
1.564345e-01
-3.681246e-01
-7.901550e-01
-9.921147e-01
-9.177546e-01
-5.877853e-01
-9.410831e-02
4.257793e-01
8.270806e-01
9.980267e-01
8.910065e-01
5.358268e-01
3.141076e-02
-4.817537e-01
-8.607420e-01
-1
-8.607420e-01
-4.817537e-01
3.141076e-02
5.358268e-01
8.910065e-01
9.980267e-01
8.270806e-01
4.257793e-01
-9.410831e-02
-5.877853e-01
-9.177546e-01
-9.921147e-01
-7.901550e-01
-3.681246e-01
1.564345e-01
6.374240e-01
9.408808e-01
9.822873e-01
7.501111e-01
3.090170e-01
-2.181432e-01
-6.845471e-01
-9.602937e-01
-9.685832e-01
-7.071068e-01
-2.486899e-01
2.789911e-01
Assuming the file is not exceedingly large, the simplest way would probably be to read the entire file and index the output to your desired lines.
line = [6 11 111 127];
fileID = fopen('file.txt');
C = textscan(fileID,'%s','delimiter','\n');
fclose(fileID);
disp(C{1}(line))
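The same "read the whole file, then index" idea can be sketched in Python; here the file is regenerated first, much as the script in the question writes it (one value per line; '%e' is used explicitly, since MATLAB's '%d' falls back to exponential notation for non-integer values anyway, as the file above shows).

```python
import math

# Recreate file.txt: one cosine value per line for n = 0..127.
values = [math.cos(0.17 * math.pi * k) for k in range(128)]
with open('file.txt', 'w') as fh:
    fh.write('\n'.join('%e' % v for v in values))

# Read everything back, then pick the wanted rows (1-based, as in MATLAB).
wanted = [6, 11, 111, 127]
with open('file.txt') as fh:
    rows = fh.read().splitlines()
selected = [rows[i - 1] for i in wanted]  # Python indexing is 0-based
print(selected[0])  # -8.910065e-01
```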
I'm using PostGIS 2.4 and PostgreSQL 9.6, and I am facing the following error while trying to insert polygon data:
INSERT INTO aalis.mv_l1_parcelownership_aalis (geometry) VALUES
(st_Polygonfromtext ('polygon(482449.20552234241,
999758.79058533313,.....)',20137));
You're close :-)
The geometry you provided in your INSERT statement is invalid. Make sure that your polygons are really correct and try one of these statements (using ST_GeomFromText or ST_PolygonFromText):
INSERT INTO aalis.mv_l1_parcelownership_aalis
VALUES (ST_GeomFromText('POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))',20137));
or
INSERT INTO aalis.mv_l1_parcelownership_aalis
VALUES (ST_PolygonFromText('POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))',20137));
To check if your geometries are correct you can use ST_IsValid:
SELECT ST_IsValid(ST_GeomFromText('POLYGON((0 0, 1 1, 1 2, 1 1, 0 0))'));
NOTICE:  Self-intersection at or near point 0 0
st_isvalid
------------
f
(1 row)
SELECT ST_IsValid(ST_GeomFromText('POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))'));
st_isvalid
------------
t
(1 row)
Keep in mind also that the WKT standard expects double parentheses (( even for polygons with zero interior rings, and yours has only one: 'polygon(482449.20552234241, 999758.79058533313,.....)'. Also, the x and y coordinates are separated by a space, not by a comma; commas separate coordinate pairs instead.
Example:
SELECT ST_IsValid('POLYGON((30 10, 40 40, 20 40, 10 20, 30 10))');
st_isvalid
------------
t
(1 row)
SELECT ST_IsValid('POLYGON(30 10, 40 40, 20 40, 10 20, 30 10)');
ERROR:  parse error - invalid geometry
LINE 1: SELECT ST_IsValid('POLYGON(30 10, 40 40, 20 40, 10 20, 30 10...
^
HINT:  "POLYGON(30 " <-- parse error at position 11 within geometry
Your polygon text is way off, and includes the characters ....., which are not valid:
polygon(482449.20552234241, 999758.79058533313,.....)
Not sure what your coordinates are, but the polygon text is generally in the form:
polygon((1.000 1.000, 2.000 1.500, 3.000 2.000, 1.000 1.000))
Note that the x-y pairs are in the form x y, and there are commas between the pairs.
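To make the rules concrete, here is a hypothetical helper (illustrative only, not part of PostGIS) that builds valid polygon WKT from (x, y) pairs: a space inside each pair, commas between pairs, double parentheses, and a closed ring.

```python
def polygon_wkt(coords):
    ring = list(coords)
    if ring[0] != ring[-1]:
        ring.append(ring[0])        # a polygon ring must be closed
    pairs = ', '.join('%g %g' % (x, y) for x, y in ring)
    return 'POLYGON((%s))' % pairs  # note the double parentheses

print(polygon_wkt([(30, 10), (40, 40), (20, 40), (10, 20)]))
# POLYGON((30 10, 40 40, 20 40, 10 20, 30 10))
```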
I am trying to find the indices of 2D max pooling in Lasagne:
network = batch_norm(Conv2DLayer(
network, num_filters=filter_size, filter_size=(kernel, kernel),pad=pad,
nonlinearity=lasagne.nonlinearities.rectify,
W=lasagne.init.GlorotUniform(),name="conv"), name="BN")
pool_in = lasagne.layers.get_output(network)
network = MaxPool2DLayer(network, pool_size=(pool_size, pool_size),stride=2,name="pool")
pool_out = lasagne.layers.get_output(network)
ind1 = T.grad(T.sum(pool_out), wrt=pool_in)
When I try to build the model, it raises an error:
DisconnectedInputError: grad method was asked to compute the gradient with respect to a variable that is not part of the computational graph of the cost, or is used only by a non-differentiable operator: Elemwise{mul,no_inplace}.0
Backtrace when the node is created:
File "//anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2871, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "//anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 2975, in run_ast_nodes
if self.run_code(code, result):
File "//anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 3035, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-28-0b136cc660e2>", line 1, in <module>
network = build_model()
File "<ipython-input-27-20acc3fe0d98>", line 8, in build_model
pool_in = lasagne.layers.get_output(network)
File "//anaconda/lib/python2.7/site-packages/lasagne/layers/helper.py", line 191, in get_output
all_outputs[layer] = layer.get_output_for(layer_inputs, **kwargs)
File "//anaconda/lib/python2.7/site-packages/lasagne/layers/special.py", line 52, in get_output_for
return self.nonlinearity(input)
File "//anaconda/lib/python2.7/site-packages/lasagne/nonlinearities.py", line 157, in rectify
return theano.tensor.nnet.relu(x)
What is the right way to compute functions of the intermediate outputs of Lasagne layers?
I had a similar problem a while ago; check out my solution for 2D and 3D max pooling indices:
Theano max_pool_3d
(It's based on the same Google Groups post, I guess.)
Using the MNIST tutorial of TensorFlow, I am trying to build a convolutional network for face recognition with the "Database of Faces".
The image size is 112x92; I use three more convolutional layers to reduce it to 6 x 5, as advised here.
I'm very new to convolutional networks, and most of my layer declarations were made by analogy to the TensorFlow MNIST tutorial, so they may be a bit clumsy; feel free to advise me on this.
x_image = tf.reshape(x, [-1, 112, 92, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_conv3 = weight_variable([5, 5, 64, 128])
b_conv3 = bias_variable([128])
h_conv3 = tf.nn.relu(conv2d(h_pool2, W_conv3) + b_conv3)
h_pool3 = max_pool_2x2(h_conv3)
W_conv4 = weight_variable([5, 5, 128, 256])
b_conv4 = bias_variable([256])
h_conv4 = tf.nn.relu(conv2d(h_pool3, W_conv4) + b_conv4)
h_pool4 = max_pool_2x2(h_conv4)
W_conv5 = weight_variable([5, 5, 256, 512])
b_conv5 = bias_variable([512])
h_conv5 = tf.nn.relu(conv2d(h_pool4, W_conv5) + b_conv5)
h_pool5 = max_pool_2x2(h_conv5)
W_fc1 = weight_variable([6 * 5 * 512, 1024])
b_fc1 = bias_variable([1024])
h_pool5_flat = tf.reshape(h_pool5, [-1, 6 * 5 * 512])
h_fc1 = tf.nn.relu(tf.matmul(h_pool5_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder("float")
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
print orlfaces.train.num_classes # 40
W_fc2 = weight_variable([1024, orlfaces.train.num_classes])
b_fc2 = bias_variable([orlfaces.train.num_classes])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
My problem appears when the session runs the "correct_prediction" op, which is
tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
At least I think so, given the error message:
W tensorflow/core/common_runtime/executor.cc:1027] 0x19369d0 Compute status: Invalid argument: Incompatible shapes: [8] vs. [20]
[[Node: Equal = Equal[T=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"](ArgMax, ArgMax_1)]]
Traceback (most recent call last):
File "./convolutional.py", line 133, in <module>
train_accuracy = accuracy.eval(feed_dict = {x: batch[0], y_: batch[1], keep_prob: 1.0})
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 405, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2728, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 345, in run
results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 419, in _do_run
e.code)
tensorflow.python.framework.errors.InvalidArgumentError: Incompatible shapes: [8] vs. [20]
[[Node: Equal = Equal[T=DT_INT64, _device="/job:localhost/replica:0/task:0/cpu:0"](ArgMax, ArgMax_1)]]
Caused by op u'Equal', defined at:
File "./convolutional.py", line 125, in <module>
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 328, in equal
return _op_def_lib.apply_op("Equal", x=x, y=y, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 633, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1710, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 988, in __init__
self._traceback = _extract_stack()
It looks like y_conv outputs a matrix of shape 8 x batch_size instead of number_of_classes x batch_size.
If I change the batch size from 20 to 10, the error message stays the same, but instead of [8] vs. [20] I get [4] vs. [10]. From that I conclude that the problem may come from the y_conv declaration (the last line of the code above).
The declarations of the loss function, optimizer, training, etc. are the same as in the MNIST tutorial:
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
sess.run((tf.initialize_all_variables()))
for i in xrange(1000):
batch = orlfaces.train.next_batch(20)
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict = {x: batch[0], y_: batch[1], keep_prob: 1.0})
print "Step %d, training accuracy %g" % (i, train_accuracy)
train_step.run(feed_dict = {x: batch[0], y_: batch[1], keep_prob: 0.5})
print "Test accuracy %g" % accuracy.eval(feed_dict = {x: orlfaces.test.images, y_: orlfaces.test.labels, keep_prob: 1.0})
Thanks for reading, have a good day
Well, after a lot of debugging, I found that my issue was due to a bad instantiation of the labels: instead of creating arrays full of zeros and replacing one value with a one, I created them with random values! A stupid mistake. In case someone is wondering what I did wrong and how I fixed it, that is the change I made.
Anyway, during all the debugging I did to find this mistake, I found some useful information for debugging this kind of problem:
For the cross-entropy declaration, TensorFlow's MNIST tutorial uses a formula that can lead to NaN values.
That formula is
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
Instead of this, I found two ways to declare it in a safer fashion:
cross_entropy = -tf.reduce_sum(y_ * tf.log(tf.clip_by_value(y_conv, 1e-10, 1.0)))
or also:
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logit, y_))
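A small plain-Python illustration of why the clipped form is safer: if the softmax ever outputs an exact 0 for the true class, log(0) blows up, while clipping to [1e-10, 1.0] keeps the loss finite. The helper name here is for illustration only.

```python
import math

def cross_entropy(y_true, y_pred, eps=None):
    if eps is not None:
        # Clip probabilities like tf.clip_by_value(y_conv, 1e-10, 1.0).
        y_pred = [min(max(p, eps), 1.0) for p in y_pred]
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

y_true = [0.0, 1.0, 0.0]
y_pred = [1.0, 0.0, 0.0]   # pathological: probability 0 for the true class

print(cross_entropy(y_true, y_pred, eps=1e-10))  # ~23.026, finite
# cross_entropy(y_true, y_pred) with no eps raises a math domain error
```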
As mrry says, printing the shapes of the tensors can help to detect shape anomalies.
To get the shape of a tensor, just call its get_shape() method, like this:
print "W shape:", W.get_shape()
user1111929 in this question uses a debug print that helped me determine where the problem came from.
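Along the same lines, feature-map sizes can be predicted on paper instead of printed from the graph. A plain-Python sketch, assuming 2x2 pooling with stride 2 and 'SAME' padding (which rounds up), as in the tutorial's max_pool_2x2; the 112x92 input matches the question above.

```python
import math

def pooled_size(dim, n_pools):
    # Halve a dimension n_pools times, rounding up as 'SAME' padding does.
    for _ in range(n_pools):
        dim = math.ceil(dim / 2)
    return dim

# Sizes of a 112x92 input after each of five 2x2/stride-2 pools:
sizes = [(pooled_size(112, k), pooled_size(92, k)) for k in range(1, 6)]
print(sizes)  # [(56, 46), (28, 23), (14, 12), (7, 6), (4, 3)]
```

Comparing the final pair against the hard-coded flatten size of the first fully connected layer is a quick way to catch a mismatch before a tf.reshape with -1 silently absorbs it into the batch dimension.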
I have the following data in a csv file:
00:1A:1E:35:81:01, -36, -37, -36
00:1A:1E:35:9D:61, -69, -69, -69
00:1A:1E:35:7E:C1, -95, -95, -71
00:1A:1E:35:9D:65, -66, -67, -67
00:1A:1E:35:9D:60, -67, -68, -68
00:1A:1E:35:9D:63, -66, -68, -68
I am unable to read the first column, which contains strings, with MATLAB.
You can use
[num, txt, raw] = xlsread('file.csv');
instead of csvread. It returns [num, txt, raw], where num contains all cells parsed to double (NaN where parsing was not possible), txt contains all cells as text ('' where the conversion to a number succeeded), and raw contains all cells in their unprocessed form.
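For comparison, the same mixed-type CSV can be read in plain Python, keeping the MAC address as a string and parsing the remaining columns as numbers; the inline sample mirrors the file in the question.

```python
import csv

sample = """00:1A:1E:35:81:01, -36, -37, -36
00:1A:1E:35:9D:61, -69, -69, -69
00:1A:1E:35:7E:C1, -95, -95, -71"""

# First column stays a string; the rest are converted to floats.
rows = [[row[0].strip()] + [float(v) for v in row[1:]]
        for row in csv.reader(sample.splitlines())]
print(rows[0])  # ['00:1A:1E:35:81:01', -36.0, -37.0, -36.0]
```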