MATLAB unable to format string to path

In the original script there is this piece of code:
filepathsource='C:\Users\...';
strFP=dir([filepathsource '*.txt']);
Which works nicely.
The problem is that I need to make filepathsource more dynamic, so I did this:
filepathsource=string(strcat(app.folderPath,'\',string(app.currentCount),'\'));
strFP=dir([filepathsource '*.txt']);
But I end up with this error:
Error using dir
Name must be a text scalar.
And I need the original formatting because I get errors later in the script. Is there any way to fix it? I've tried everything, but something is always wrong somewhere, and I just don't understand what the difference is between the original filepathsource and the one that I created; both are strings. (The output of my filepathsource is correct when I display it.)
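For what it's worth, the likely difference: the original filepathsource is a char vector (single quotes), while string(strcat(...)) returns a string object, so [filepathsource '*.txt'] builds a 1x2 string array instead of concatenating text, which is why dir complains that the name is not a text scalar. A minimal sketch of one way to keep the original char-vector behaviour, assuming app.folderPath holds text and app.currentCount is a number:
% Build the path as a char vector, like the original hard-coded one
filepathsource = [char(app.folderPath) '\' num2str(app.currentCount) '\'];
strFP = dir([filepathsource '*.txt']);   % the argument is now a single char row vector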

Related

Cimplicity Screen - one object/button that is dependent on hundreds of points

So I have created a huge screen that essentially just shows the robot status for every robot in this factory (individually)… At the very end of the project, they decided they want one object on the screen that blinks if any of the 300 robots fault. I am trying to think of a way to make this work. Maybe a global script of some kind? Problem is, I do not do much scripting in Cimplicity, so any help is appreciated.
All the points that are currently used on this screen (to indicate a fault) have very similar names… as in, the beginning is the same… so I was thinking of a script that could maybe recognize whether a bit is high based on PART of its name. The end will change a little each time, but I am sure there is a way to only look at part of a string and ignore the rest. If the end has to be hard coded, that's fine.
You can use a Python script in Cimplicity.
I will not go into detail on the use of Python in Cimplicity, which is well described in the documentation.
Here's an example of what can be done. Note that I don't have a way to test it, and of course it will only work if the robot names in the declaration follow the format Robot_1, Robot_2, Robot_3 ... Robot_10 ... Robot_300. It also depends on the name and the type of the fault variable; since you didn't define it, I assume it is an integer, with ZERO indicating no error. If you use something other than that, you can easily change it.
import cimplicity
(...)

OneRobotWithFault = False

# Here you get the values and check for a fault
for i in range(1, 301):  # robots are declared as Robot_1 ... Robot_300
    pointName = f'MyFactory.Robot_{i}.FaultCode'
    robotFaultCode = cimplicity.point_get(pointName)
    if robotFaultCode > 0:
        OneRobotWithFault = True
        break

# Set the status to the variable "WeHaveRobotWithFault"
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)

Convert PSS/E .raw file to Pandapower

I'm trying to find a possible way to convert PSS/E native .raw files to Pandapower format.
My objective is to take advantage of the network plotting capabilities that are available in Pandapower.
For that, I first have to be able to load my grid data into Pandapower, which means somehow bridging the gap between the PSS/E .raw format and Pandapower.
Literature says that a possible way of doing this is by using the 'psse2mpc' function available in Matpower.
I've tried to use it but I get the following error message:
(quote)
>> psse2mpc('RED1523.raw')
Reading file 'RED1523.raw' ............................................. done.
Splitting into individual lines ...error: regexp: the input string is invalid UTF-8
error: called from
psse_read at line 60 column 9
psse2mpc at line 68 column 21
(unquote)
I was informed that maybe I should save my .raw file (natively generated with PSS/E v33) in an older .raw format (corresponding to previous PSS/E versions).
I've tried this as well but still have the same error message.
Apart from this error, which so far prevents me from reaching my objective, I've been unable to work out the Pandapower "equivalent .raw" structure. Does anybody know what this input structure looks like in Pandapower?
If I knew how Pandapower needs to receive the input data, I could even try to code a tailor-made Python script that converts my .raw file into whatever Pandapower requires.
If somebody could help me get out of this labyrinth, I would be most grateful!
Thanks.
Eneko.
You need to check your .raw file to determine the other inputs of the psse2mpc function. For instance, if I have the file case39.raw and I want to convert it to Matpower format as case39mpc.m, then I enter something like this:
psse2mpc ('case39.raw', 'case39mpc.m', '1', '29')
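For the second part of the question (what Pandapower expects as input): as far as I know, Pandapower does not read .raw files directly, but it can import a Matpower case. One possible route, sketched and untested here, is to save the struct returned by psse2mpc as a .mat file and load it with Pandapower's Matpower converter; the file names, the 50 Hz frequency and the exact keyword arguments are assumptions you may need to adapt:
# In Octave/Matpower first (assumed):
#   mpc = psse2mpc('case39.raw');
#   save('-v7', 'case39mpc.mat', 'mpc')
import pandapower.converter as pc
import pandapower.plotting as plot

# from_mpc reads a Matpower case saved as a .mat file and builds a pandapower net
net = pc.from_mpc('case39mpc.mat', f_hz=50)
print(net)               # quick sanity check of buses, lines, generators, ...
plot.simple_plot(net)    # the plotting capability mentioned in the question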

LibreOffice: italics breaking off (messes with the replace function)

I have been trying to use the automatic replace function to place entries starting with italics on their own lines, so for example
lähteä60*F【动】
1. 离开, 出发, 走掉líkāi, chūfā, zǒu diào(poistua). Vieraat lähtevät.客人走了kèrén zǒule.Juna lähtee raiteelta kaksi. 火车两点离站huǒchē liǎng diǎn lí zhàn.
would turn into
1. 离开, 出发, 走掉líkāi, chūfā, zǒu diào(poistua).
Vieraat lähtevät.客人走了kèrén zǒule.
Juna lähtee raiteelta kaksi. 火车两点离站huǒchē liǎng diǎn lí zhàn.
However, for some reason unknown to me, the replace function (cf. screenshot) breaks up the lines like this:
lähteä60*F【动】
1. 离开, 出发, 走掉líkāi, chūfā, zǒu diào(poistua).
Vieraat lähtev
ä
t.客人走了kèrén zǒule.
Juna lähtee raiteelta kaksi. 火车两点离站huǒchē liǎng diǎn lí zhàn.
(so the first line "Vieraat lähtevät.客人走了kèrén zǒule." is broken up.)
As far as I can tell, it should be all italics, so I have no idea why it breaks up and no way to examine what the problem is. Trying to save to different formats doesn't seem to help either. There are thousands of pages of this stuff, so the automatic function is really required.
A small sample file of the stuff can be downloaded from here:
http://shakki.info/test.docx.
Some screenshots of the problem:

Grok filter for a time counter HH:MM

I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the amount of minutes could be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree here trying to parse it with date patterns such as TIMESTAMP_ISO8601, but then, I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able to parse the log info text (by cutting away the square bracket parts) with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I could tell it can only parse timestamps with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure I understand: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch, so that the result would look like:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I've got that right, you can do it in one of the following ways:
define your own custom pattern file and add the pattern there, or
just use the expression in the filter part of the logstash conf file (see the sketch below).
Hope it helps.
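For illustration only (untested, and the MINSEC pattern name, patterns directory and field names are just placeholders), the custom-pattern route could look something like this, with a one-line pattern file referenced from the grok filter in the logstash conf:
# contents of ./patterns/extra (custom pattern file)
MINSEC %{NUMBER}:%{NUMBER}

# filter section of the logstash conf file
filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{MINSEC:TimeSpent}\] \[%{MINSEC:TimeStarted}\] \[%{MINSEC:TimeSinceDown}\] %{GREEDYDATA:LogInfo}" }
  }
}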
Ah, there was a space. Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was; the line causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts with two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
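In case it helps later, the merge can probably be done in the same logstash pipeline with a ruby filter; a rough sketch, using the field names from the pattern above (the *TotalSeconds names are just placeholders):
filter {
  ruby {
    code => "
      event.set('TimeSpentTotalSeconds',     event.get('TimeSpentMinutes').to_i * 60 + event.get('TimeSpentSeconds').to_i)
      event.set('TimeStartedTotalSeconds',   event.get('TimeStartedMinutes').to_i * 60 + event.get('TimeStartedSeconds').to_i)
      event.set('TimeSinceDownTotalSeconds', event.get('TimeSinceDownMinutes').to_i * 60 + event.get('TimeSinceDownSeconds').to_i)
    "
  }
}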
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)

For loop to open files in Python

I am relatively new to Python and need to run a Python macro through Abaqus. I am opening files, e.g. "nonsym1", "nonsym2", "nonsym3". I'm trying to do this with a loop. The code opens nonsym1 (in Abaqus) and performs some operations on it, then is supposed to loop back and do the same to the other files. Here is the code I'm trying...
for i in range(1, 10):
    filename = 'nonsym(i)'
    step = mdb.openStep(
        'C:/Users/12345678/Documents/Inventor/Aortic Dissection/%s.stp' % filename,
        scaleFromFile=OFF)
I think my main issue is coming from the %s in the directory path? (See the error message when trying to run this macro.) I don't know how best to approach this, so any help would be great, thanks! Still learning!
Instead of using filename = nonsym1, nonsym2, nonsym3, ..., name the step files with integers (1.stp, 2.stp, 3.stp) and then convert the loop integer to a string with str(i). Then use the code below:
for i in range(1, 10):
    step = mdb.openStep(
        'C:/Users/12345678/Documents/Inventor/Aortic Dissection/%s.stp' % str(i),
        scaleFromFile=OFF)
To obtain the same number of .odb files, modify the Job code line in a similar way.
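Alternatively, if the original nonsym1, nonsym2, ... file names should be kept, the loop index just needs to be formatted into the string itself; a small sketch under that assumption:
for i in range(1, 10):
    filename = 'nonsym%d' % i   # 'nonsym1', 'nonsym2', ... instead of the literal 'nonsym(i)'
    step = mdb.openStep(
        'C:/Users/12345678/Documents/Inventor/Aortic Dissection/%s.stp' % filename,
        scaleFromFile=OFF)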