How do you open a .txt file containing a matrix in MATLAB?

I am currently working on a project and am having difficulty opening a .txt file in MATLAB.
This .txt file contains rainfall radar data as a numeric matrix with [m,n] = [360,90].
Below is the data I am having trouble with.
Projection Metadata:
Radar number 54
Number of radials in scan 360
Number of range bins in scan 90
Starting range 127500.000000
Maximum range 150000.000000
Azimuth of first radial -0.500000
Azimuth of last radial 359.500000
Range bin size 250.000
Separation between radials 1.000
Projection POLAR
Units METRES DEGREES
Spheroid GRS80
Parameters :
Latitude of radar (degrees) -34.017000
Longitude of radar (degrees) 151.230000
Beam elevation angle (0 for CAPPI) 0.000
Scan metadata :
Time (seconds since 01/01/1970) 973199701
Time (dd/mm/yyyy hh:mm:ss) 02/11/2000 21:15:01
Time zone (seconds ahead of UTC) 0
Time zone (hours ahead of UTC) +0.0
Data type flag 9
Data type Reflectivity
Data units dBZ
Not forecast data
Not simulated data
Scan data :
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ..>1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ....
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ....
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ....
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ....
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ....
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ....
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ....
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 ..>360
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . .
. . . . . . . . . 90
End of scan data.
As you can see, the first 29 text lines as well as the last text line need to be skipped.
I would really appreciate it if anybody could help me open this file in MATLAB as a
matrix with [row,column] = [360,90].
Thank you very much for your time.
Regards,
NadeaR

You can use MATLAB's importdata function, which reads the file and returns a structure whose textdata field holds the header lines as a cell array and whose data field holds the numeric matrix that follows.
input = importdata('datafile.txt', ' ', 29)
The arguments in this example are the input file name, a space as the delimiter character, and the number of header lines.
The 360x90 matrix would then be stored as input.data.
You can pass a variable as the header-length argument if the number of header lines varies but is known for each file. If you don't know how many header lines to expect, you could do some fancy footwork in MATLAB to parse the .txt file, but at that point I would pre-process the .txt file with sed, perl, etc. and then read just the numerical portion into MATLAB.
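If importdata trips over the trailing "End of scan data." line, a textscan-based read should also work. A minimal sketch, assuming 29 header lines as above and that the numeric block really is 360 rows of 90 values (the file name is a placeholder):

```matlab
% Minimal sketch, not tested against the actual radar files.
fid = fopen('datafile.txt', 'r');
C = textscan(fid, '%f', 360*90, 'HeaderLines', 29);   % read exactly 360*90 numbers
fclose(fid);
rain = reshape(C{1}, 90, 360).';   % values are written row by row: 360 radials x 90 bins
size(rain)                         % should report 360 90
```

Because textscan is told exactly how many values to read, the trailing text line is never parsed.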

Related

How to convert a Float64 Matrix into a RGB channel Matrix in Julia?

Suppose I have the following Matrix:
img = [
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.5 1.0 1.0 1.0 1.0 1.0 1.0 0.5 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
]
I want to convert it to a Matrix in which each element represents the value as an RGB channel. I guess some sort of translation has to happen there. E.g., what is the equivalent RGB representation of 0.0? You might say this can't happen or isn't possible. But I can convert the same Matrix into a black-and-white image:
julia> using Images
julia> Gray.(img)
And this shows an image as the output. So if it can translate 0.0 into black and 1.0 into white through the Gray channel, then I can expect an equivalent RGB representation to exist.
Q1: What is my expected output?
Ans: A Matrix like this (here, with arbitrary numbers):
10×12 Array{RGB{Float64},2} with eltype RGB{Float64}:
RGB{Float64}(0.412225,0.412225,0.412225) … RGB{Float64}(0.755344,0.755344,0.755344)
RGB{Float64}(0.0368978,0.0368978,0.0368978) RGB{Float64}(0.198098,0.198098,0.198098)
Q2: Why do I need this?
Ans: The whole idea is to be able to change the color of some pixels. So first I have a Matrix with the Float64 element type. Then I want to get the equivalent Matrix with the RGB{Float64} element type, change the value of some elements (to show a pixel in green, for example), and then view the final image.
Q3: What have I tried?
Ans: I tried a straightforward approach, but it failed:
julia> RGB.(img)
julia> typeof(RGB.(img))
Matrix{RGB{N0f8}} (alias for Array{RGB{Normed{UInt8, 8}}, 2})
This command renders the image. It doesn't give me the expected Matrix. Then I tried this:
julia> channelview(RGB.(img))
3×10×12 reinterpret(reshape, N0f8, ::Array{RGB{N0f8},2}) with eltype N0f8:
[:, :, 1] =
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
;;; …
[:, :, 12] =
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
But it's not what I expect.
Update:
Now tried another way:
julia> convert(Array{RGB, 2}, img)
# Just renders the result and doesn't show the matrix
julia> typeof(convert(Array{RGB, 2}, img))
Matrix{RGB} (alias for Array{RGB, 2})
Also, I tried collecting it, but it led to the same result.
P.S. I need this to work in VS Code, not the REPL.
Issue solved by disabling the plot pane:
To understand what you're seeing, realize that Julia calls display(obj) on the value obj returned from any command you execute. So when you do RGB.(img) (which is indeed the answer to your original question), that value gets returned; you could assign it to a variable and manipulate it.
But since you're working in VSCode, the default is to display anything that is a matrix of Colorant as an image, rather than a standard matrix. If you want to see it as a numeric matrix, you need to do a bit more work to specify that. Here's a demo in VSCode's Julia REPL:
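(The demo itself isn't reproduced here; the following is a minimal sketch of the same idea, with the recolored pixel chosen purely for illustration.)

```julia
using Images

imgrgb = RGB.(img)                    # the plot pane renders this as an image
eltype(imgrgb)                        # RGB{Float64}
imgrgb[5, 2] = RGB(0.0, 1.0, 0.0)     # manipulate it like any matrix, e.g. recolor a pixel
display(MIME("text/plain"), imgrgb)   # force the textual, matrix-style output
```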
Just as you found, after the imgrgb = RGB.(img) line, the plot pane showed a representation of this image. But as the later lines show, you can still manipulate and display imgrgb as a regular matrix. The MIME("text/plain") forces it to display in text-mode rather than image-mode.
I'm unable to understand why RGB.(img) is not the expected result; it looks fine to me.
| This command renders the image. It doesn't give me the expected Matrix.
julia> using Images
[ Info: Precompiling Images [916415d5-f1e6-5110-898d-aaa5f9f070e0]
julia> img = [
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.5 1.0 1.0 1.0 1.0 1.0 1.0 0.5 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5
]
10×12 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.5 1.0 1.0 1.0 1.0 1.0 1.0 0.5 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.5
julia> RGB.(img)
10×12 Array{RGB{Float64},2} with eltype RGB{Float64}:
RGB{Float64}(0.0,0.0,0.0) … RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) … RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.0,0.0,0.0)
RGB{Float64}(0.0,0.0,0.0) RGB{Float64}(0.5,0.5,0.5)

Is there any way to view an STL file in a GitHub README, similar to an image?

Howdy, I am trying to show a preview of an STL file from my GitHub repo in the README file. Is there any way to do this?
So far I have tried using
<script src="https://embed.github.com/view/3d/<username>/<repo>/<ref>/<path_to_file>"></script>
For example my file would be
<script src="https://github.com/view/3d/UNCG-DAISY/SediNetCam/blob/main/hardware/cameracover.stl"></script>
GitHub now renders STL models when you put the ASCII STL code in a fenced code block with stl syntax highlighting:
https://docs.github.com/en/enterprise-cloud@latest/get-started/writing-on-github/working-with-advanced-formatting/creating-diagrams
```stl
solid cube_corner
facet normal 0.0 -1.0 0.0
outer loop
vertex 0.0 0.0 0.0
vertex 1.0 0.0 0.0
vertex 0.0 0.0 1.0
endloop
endfacet
facet normal 0.0 0.0 -1.0
outer loop
vertex 0.0 0.0 0.0
vertex 0.0 1.0 0.0
vertex 1.0 0.0 0.0
endloop
endfacet
facet normal -1.0 0.0 0.0
outer loop
vertex 0.0 0.0 0.0
vertex 0.0 0.0 1.0
vertex 0.0 1.0 0.0
endloop
endfacet
facet normal 0.577 0.577 0.577
outer loop
vertex 1.0 0.0 0.0
vertex 0.0 1.0 0.0
vertex 0.0 0.0 1.0
endloop
endfacet
endsolid
```

iostat -k in Solaris (SunOS 5.10)

In Linux iostat -k displays kB_read kB_wrtn fields, which are total data read/written during the measured interval.
#iostat -k
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.59 3.69 10.46 12418161 35236147
Is there any possibility to display the same in Solaris?
try iostat -xn... I think the output is similar to Linux.
iostat -sndx 1 will produce this output:
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
1.0 1.1 82.1 8.5 0.0 0.0 0.0 9.7 0 1 c2t0d0
36.9 0.6 4576.8 14.2 0.0 0.6 0.0 16.2 0 14 c2t1d0
6.6 0.3 695.1 64.2 0.0 0.0 0.0 5.7 0 2 c2t2d0
0.0 7.7 0.8 2438.0 2.1 1.0 270.4 128.0 33 33 c1t0d0
0.0 7.8 0.8 2438.1 2.1 1.0 271.6 128.2 33 33 c1t0d1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0 c1t0d2
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 85.0 0.0 360.5 0.0 0.0 0.0 0.4 0 1 c2t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c2t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c2t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d2
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c2t0d0
12.0 118.0 24.5 460.5 0.0 0.5 0.0 3.7 0 44 c2t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c2t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d1
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d2
The 1 argument specifies emitting one set of output per second. Add a second numerical value to specify how many sets of output are emitted. The first set of output is the I/O statistics since the counters were last reset, probably at boot.
Add -z to suppress rows with all zeros.
Add -p to see per-partition statistics.
Add -I to see raw counts instead of rate numbers. (Might only be Solaris 11)

Categorical Variables in TensorFlow

I'm trying to use TensorFlow on a dataset which has a few categorical variables. I've encoded them with dummies, but it looks like that is causing trouble and TF is complaining that the dataset is not dense.
Or is the reason for the error something totally different?
I'm trying to run a simple neural network model with 1 hidden layer with stochastic gradient descent. The code was working when the input was numeric variables (images of digits from MNIST).
Thanks
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
 in ()
37 return(test_acc,round(l,5))
38
---> 39 define_batch(0.005)
40 run_batch()
in define_batch(beta)
11 shape=(batch_size, num_var))
12 tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
---> 13 tf_valid_dataset = tf.constant(valid_dataset)
14 tf_test_dataset = tf.constant(test_dataset)
15
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/constant_op.pyc
in constant(value, dtype, shape, name)
159 tensor_value = attr_value_pb2.AttrValue()
160 tensor_value.tensor.CopyFrom(
--> 161 tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
162 dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
163 const_tensor = g.create_op(
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/tensor_util.pyc
in make_tensor_proto(values, dtype, shape)
320 nparray = np.array(values, dtype=np_dt)
321 if list(nparray.shape) != _GetDenseDimensions(values):
--> 322 raise ValueError("Argument must be a dense tensor: %s" % values)
323 # python/numpy default float type is float64. We prefer float32 instead.
324 if (nparray.dtype == np.float64) and dtype is None:
ValueError: Argument must be a dense tensor:         Tuesday  Wednesday  Thursday  Friday  Saturday  Sunday  CENTRAL  \
736114       0.0        0.0       0.0     0.0       1.0     0.0      0.0
437148       0.0        0.0       1.0     0.0       0.0     0.0      0.0
605041       0.0        0.0       0.0     0.0       0.0     0.0      0.0
444608       0.0        0.0       0.0     0.0       1.0     0.0      0.0
695549       0.0        0.0       0.0     0.0       1.0     0.0      0.0
662807       0.0        0.0       0.0     1.0       0.0     0.0      0.0
238635       0.0        0.0       0.0     0.0       0.0     1.0      0.0
549524       0.0        0.0       0.0     1.0       0.0     0.0      0.0
705478       1.0        0.0       0.0     0.0       0.0     0.0      0.0
557716       0.0        0.0       0.0     1.0       0.0     0.0      0.0
41808        0.0        0.0       0.0     0.0       0.0     1.0      0.0
227235       1.0        0.0       0.0     0.0       0.0     0.0      0.0
848719       0.0        0.0       0.0     0.0       0.0     0.0      0.0
731202       0.0        0.0       0.0     0.0       1.0     0.0      0.0
467516       1.0        0.0       0.0     0.0       0.0     0.0      1.0
Here is an excerpt from the code
# Adding regularization to the 1 hidden layer network
def define_batch(beta):
    batch_size = 128
    num_RELU = 256
    graph1 = tf.Graph()
    with graph1.as_default():
        # Input data. For the training data, we use a placeholder that will be fed
        # at run time with a training minibatch.
        tf_train_dataset = tf.placeholder(tf.float32,
                                          shape=(batch_size, num_var))
        tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
        tf_valid_dataset = tf.constant(valid_dataset)
        tf_test_dataset = tf.constant(test_dataset)
        # Variables.
        weights_RELU = tf.Variable(
            tf.truncated_normal([num_var, num_RELU]))
        biases_RELU = tf.Variable(tf.zeros([num_RELU]))
        weights_layer1 = tf.Variable(
            tf.truncated_normal([num_RELU, num_labels]))
        biases_layer1 = tf.Variable(tf.zeros([num_labels]))
        # Training computation.
        logits_RELU = tf.matmul(tf_train_dataset, weights_RELU) + biases_RELU
        RELU_vec = tf.nn.relu(logits_RELU)
        logits_layer = tf.matmul(RELU_vec, weights_layer1) + biases_layer1
        # loss = tf.reduce_mean(
        #     tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels))
        cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits_layer, tf_train_labels, name="cross_entropy")
        l2reg = tf.reduce_sum(tf.square(weights_RELU)) + tf.reduce_sum(tf.square(weights_layer1))
        beta = 0.005
        loss = tf.reduce_mean(cross_entropy + beta*l2reg)
        # Optimizer.
        optimizer = tf.train.GradientDescentOptimizer(0.3).minimize(loss)
        # Predictions for the training, validation, and test data.
        train_prediction = tf.nn.softmax(logits_layer)
        valid_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_valid_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)
        test_prediction = tf.nn.softmax(
            tf.matmul(tf.nn.relu((tf.matmul(tf_test_dataset, weights_RELU) + biases_RELU)), weights_layer1) + biases_layer1)

import datetime
startTime = datetime.datetime.now()
num_steps = 301  # change to 3001

def run_batch():
    with tf.Session(graph=graph1) as session:
        tf.initialize_all_variables().run()
        print("Initialized")
        for step in range(num_steps):
            # Pick an offset within the training data, which has been randomized.
            # Note: we could use better randomization across epochs.
            offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
            # Generate a minibatch.
            batch_data = train_dataset[offset:(offset + batch_size), :]
            batch_labels = train_labels[offset:(offset + batch_size), :]
            # Prepare a dictionary telling the session where to feed the minibatch.
            # The key of the dictionary is the placeholder node of the graph to be fed,
            # and the value is the numpy array to feed to it.
            feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
            _, l, predictions, logits = session.run(
                [optimizer, loss, train_prediction, logits_RELU], feed_dict=feed_dict)
            if (step % 500 == 0):
                print("Minibatch loss at step %d: %f" % (step, l))
                print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
                print("Validation accuracy: %.1f%%" % accuracy(
                    valid_prediction.eval(), valid_labels))
        test_acc = accuracy(test_prediction.eval(), test_labels)
        print("Test accuracy: %.1f%%" % test_acc)
        print('loss=%s' % l)
        x = datetime.datetime.now() - startTime
        print(x)
        return(test_acc, round(l, 5))

define_batch(0.005)
run_batch()
EDIT:
@gdhal thanks for looking at it
train_dataset is a pandas dataframe
train_dataset.columns
Index([u'Tuesday', u'Wednesday', u'Thursday', u'Friday', u'Saturday',
u'Sunday', u'CENTRAL', u'INGLESIDE', u'MISSION', u'NORTHERN', u'PARK',
u'RICHMOND', u'SOUTHERN', u'TARAVAL', u'TENDERLOIN', u' 3H - 4H',
u' 5H - 6H', u' 7H - 8H', u' 9H - 10H', u'11H - 12H', u'13H - 14H',
u'15H - 16H', u'17H - 18H', u'19H - 20H', u'21H - 22H', u'23H - 0H',
u'Xnorm', u'Ynorm', u'Hournorm'],
dtype='object')
All the variables are dummies (taking 0 or 1 values) except the last 3 variables (Xnorm, Ynorm, and Hournorm), which are numerical values normalized to the [0,1] interval. valid_dataset and test_dataset have the same format.
train_labels is a pandas series
train_labels.describe()
count 790184
unique 39
top LARCENY/THEFT
freq 157434
Name: Category, dtype: object
valid_labels, and test_labels have the same format
Try feeding a numpy array instead of a pandas dataframe.
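For example, a minimal sketch (variable names follow the question; illustrative only, not tested against the original data) that converts the frames to dense float32 arrays before they reach tf.constant and feed_dict:

```python
import numpy as np

# Convert the pandas DataFrames to plain float32 ndarrays so that
# tf.constant and feed_dict receive dense numeric tensors.
train_dataset = np.asarray(train_dataset, dtype=np.float32)
valid_dataset = np.asarray(valid_dataset, dtype=np.float32)
test_dataset  = np.asarray(test_dataset,  dtype=np.float32)
```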

Fastest way to read an f06 file (Nastran) with MATLAB

I have to read several files of results from Nastran with MATLAB.
I need to import the information about the displacement vector, which is located in the middle of the file (but I know the lines where the section about displacements starts and ends).
Those files are formatted as follows:
They are divided into "pages" with a fixed number of rows per page
Each page has a header, which I do not need to import
Data columns have fixed width
(Below is an example of a "page" of the file.)
I have written a function in MATLAB, but it is extremely slow, and this process is the bottleneck of my code (those files are very big and I need to process a lot of them). I'm searching for the fastest way to read them.
Do you have any ideas?
Thank you
Edit: I have added my code
function [TIME,T1,T2,T3,R1,R2,R3] = importdisp(D,L,cst,cst2,filename)
% D   first row of the section
% L   last row+1 of the section
page = floor((L-D)/cst);
q = zeros(page*cst2,1);
TIME = q; T1 = q; T2 = q; T3 = q; R1 = q; R2 = q; R3 = q;
for i = 0:page-1
    startRow = D + i*cst;
    endRow   = startRow + cst - 1;
    qq = (1+i*cst2:(i+1)*cst2);
    [TIME(qq),~,T1(qq),T2(qq),T3(qq),R1(qq),R2(qq),R3(qq)] = importfile2(filename, startRow, endRow);
end
% last (possibly partial) page
i = page;
startRow = D + i*57;
endRow = L - 1;
[aTIME,~,aT1,aT2,aT3,aR1,aR2,aR3] = importfile2(filename, startRow, endRow);
TIME = [TIME;aTIME];
T1 = [T1;aT1];
T2 = [T2;aT2];
T3 = [T3;aT3];
R1 = [R1;aR1];
R2 = [R2;aR2];
R3 = [R3;aR3];
end

function [TIME,TYPE,T1,T2,T3,R1,R2,R3] = importfile2(filename, startRow, endRow)
%% Read columns of data as strings:
% For more information, see the TEXTSCAN documentation.
formatSpec = ' %12f %*s %13f %13f %13f %13f %13f %13f';
%% Open the text file.
fileID = fopen(filename,'r');
dataArray = textscan(fileID, formatSpec, endRow-startRow-6, 'HeaderLines', startRow+6, 'ReturnOnError', false);
fclose(fileID);
TIME = cell2mat(dataArray(1));
TYPE = [];
T1 = cell2mat(dataArray(2));
T2 = cell2mat(dataArray(3));
T3 = cell2mat(dataArray(4));
R1 = cell2mat(dataArray(5));
R2 = cell2mat(dataArray(6));
R3 = cell2mat(dataArray(7));
end
1 MSC.NASTRAN JOB CREATED ON 23-JUL-13 AT 11:37:55 JULY 30, 2013 MSC.NASTRAN 11/25/11 PAGE 375
TIME_DIPENDENT
0 SUBCASE 1
POINT-ID = 51
D I S P L A C E M E N T V E C T O R
TIME TYPE T1 T2 T3 R1 R2 R3
1.010000E+00 G 3.575517E-05 0.0 -2.498832E-05 0.0 -1.368603E-06 0.0
1.010200E+00 G 3.615527E-05 0.0 5.931119E-05 0.0 2.523460E-08 0.0
1.010400E+00 G 3.643431E-05 0.0 1.400531E-04 0.0 1.428176E-06 0.0
1.010600E+00 G 3.690420E-05 0.0 2.124308E-04 0.0 1.886763E-06 0.0
1.010800E+00 G 3.727554E-05 0.0 2.720885E-04 0.0 1.029395E-06 0.0
1.011000E+00 G 3.753303E-05 0.0 3.154415E-04 0.0 -5.155680E-07 0.0
1.011200E+00 G 3.799178E-05 0.0 3.399170E-04 0.0 -1.612602E-06 0.0
1.011400E+00 G 3.847007E-05 0.0 3.440528E-04 0.0 -1.544716E-06 0.0
1.011600E+00 G 3.878193E-05 0.0 3.275930E-04 0.0 -3.747878E-07 0.0
1.011800E+00 G 3.927647E-05 0.0 2.914786E-04 0.0 1.095575E-06 0.0
1.012000E+00 G 3.994424E-05 0.0 2.378519E-04 0.0 1.759643E-06 0.0
1.012200E+00 G 4.034076E-05 0.0 1.699633E-04 0.0 1.095494E-06 0.0
1.012400E+00 G 4.074808E-05 0.0 9.188905E-05 0.0 -3.884768E-07 0.0
1.012600E+00 G 4.135053E-05 0.0 8.304626E-06 0.0 -1.629274E-06 0.0
1.012800E+00 G 4.170949E-05 0.0 -7.576412E-05 0.0 -1.752396E-06 0.0
1.013000E+00 G 4.199858E-05 0.0 -1.552350E-04 0.0 -6.461957E-07 0.0
1.013200E+00 G 4.248216E-05 0.0 -2.252832E-04 0.0 8.903658E-07 0.0
1.013400E+00 G 4.278283E-05 0.0 -2.817524E-04 0.0 1.804066E-06 0.0
1.013600E+00 G 4.301732E-05 0.0 -3.213745E-04 0.0 1.573536E-06 0.0
1.013800E+00 G 4.358916E-05 0.0 -3.417496E-04 0.0 3.442360E-07 0.0
1.014000E+00 G 4.405503E-05 0.0 -3.414665E-04 0.0 -1.139860E-06 0.0
1.014200E+00 G 4.437157E-05 0.0 -3.203563E-04 0.0 -1.834472E-06 0.0
1.014400E+00 G 4.499020E-05 0.0 -2.795874E-04 0.0 -1.277272E-06 0.0
1.014600E+00 G 4.558231E-05 0.0 -2.215368E-04 0.0 2.333646E-08 0.0
1.014800E+00 G 4.597261E-05 0.0 -1.495850E-04 0.0 1.129674E-06 0.0
1.015000E+00 G 4.643634E-05 0.0 -6.810582E-05 0.0 1.298742E-06 0.0
1.015200E+00 G 4.685981E-05 0.0 1.765095E-05 0.0 4.973296E-07 0.0
1.015400E+00 G 4.722315E-05 0.0 1.022117E-04 0.0 -5.859554E-07 0.0
1.015600E+00 G 4.764707E-05 0.0 1.804387E-04 0.0 -1.191458E-06 0.0
1.015800E+00 G 4.799638E-05 0.0 2.476246E-04 0.0 -9.108963E-07 0.0
1.016000E+00 G 4.831509E-05 0.0 2.995918E-04 0.0 1.032815E-07 0.0
1.016200E+00 G 4.872533E-05 0.0 3.330215E-04 0.0 1.085160E-06 0.0
1.016400E+00 G 4.917991E-05 0.0 3.459135E-04 0.0 1.281773E-06 0.0
1.016600E+00 G 4.956298E-05 0.0 3.376274E-04 0.0 5.597767E-07 0.0
1.016800E+00 G 4.989865E-05 0.0 3.085430E-04 0.0 -5.647819E-07 0.0
1.017000E+00 G 5.048857E-05 0.0 2.602277E-04 0.0 -1.257802E-06 0.0
1.017200E+00 G 5.116086E-05 0.0 1.957408E-04 0.0 -1.047838E-06 0.0
1.017400E+00 G 5.149745E-05 0.0 1.192598E-04 0.0 -1.662111E-07 0.0
1.017600E+00 G 5.192366E-05 0.0 3.549325E-05 0.0 7.763596E-07 0.0
1.017800E+00 G 5.251750E-05 0.0 -5.053262E-05 0.0 1.131500E-06 0.0
1.018000E+00 G 5.280536E-05 0.0 -1.334662E-04 0.0 6.140833E-07 0.0
1.018200E+00 G 5.301622E-05 0.0 -2.081115E-04 0.0 -3.166196E-07 0.0
1.018400E+00 G 5.345660E-05 0.0 -2.701173E-04 0.0 -8.439085E-07 0.0
1.018600E+00 G 5.390675E-05 0.0 -3.158737E-04 0.0 -5.923507E-07 0.0
1.018800E+00 G 5.422348E-05 0.0 -3.423101E-04 0.0 1.950982E-07 0.0
1.019000E+00 G 5.467689E-05 0.0 -3.474040E-04 0.0 9.392950E-07 0.0
1.019200E+00 G 5.521453E-05 0.0 -3.307139E-04 0.0 1.020770E-06 0.0
1.019400E+00 G 5.565249E-05 0.0 -2.932987E-04 0.0 2.693299E-07 0.0
1.019600E+00 G 5.617935E-05 0.0 -2.372591E-04 0.0 -8.091097E-07 0.0
1.019800E+00 G 5.678378E-05 0.0 -1.657268E-04 0.0 -1.504755E-06 0.0
Nastran f06 files are formatted to provide adequate readability for users. For programming purposes, on the other hand, there is a better option in my opinion: the punch file (.pch). It contains all the results in a fixed-length text table format. You do not have to deal with page breaks, empty lines or repetitive identifiers at each page like you do in an f06 file. The good thing is that once you have written code that reads some data successfully, you can use almost the same pattern to read other kinds of data, because the only things that change are the title and the row numbers. You have to examine the format a little to get into it anyway...
The punch file must be requested in the BDF file. For example, if your request is:
DISPLACEMENT (PRINT) = ALL
then you add a "PUNCH"
DISPLACEMENT (PUNCH,PRINT) = ALL
and it will generate the pch file as well as the f06 file.
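As a rough starting point only (the file name below is a placeholder, and the exact fixed-width layout of the records depends on what was requested, so inspect one .pch file in an editor before writing the final parsing format), reading the punch file in MATLAB can follow this pattern:

```matlab
% Load the punch file, drop the $-prefixed metadata lines, and keep the
% fixed-width result records for a later textscan/sscanf pass once the
% column layout of the requested output is known.
raw     = strsplit(fileread('model.pch'), {'\r\n', '\n'});
keep    = ~strncmp(raw, '$', 1) & ~cellfun('isempty', raw);
records = raw(keep);
```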