Import Beckhoff TwinCAT Scope View CSV format into MATLAB

I am trying to import data from TwinCAT into MATLAB for analysis.
I have tried it with the code:
Tb = readtable('LogTestScopeView1.csv');
But that didn't work. I guess the header of a TwinCAT CSV file is a bit more complicated.
Does anyone have experience with this?
Many thanks in advance.

This might depend on the header configuration you choose in the TwinCAT Scope Export Wizard. E.g. when choosing "Name only header", the readtable() command worked fine for me. I also selected "Comma" as the CSV separator and "Point" as the decimal mark (see this Measurement Export Wizard screenshot).
If you just need to quickly analyze the data and don't need code to automate the process, you can also use MATLAB's Import Tool (Home -> Import Data), where you can interactively select the data to import.

Related

Draw.io import diagrams from CSV using an API

In draw.io there is a very nice option to create a diagram using the CSV import utility (Arrange -> Insert -> Advanced -> CSV). It is very simple and straightforward.
I was trying to find a way to do it using an API (REST, for example). Is there a way to do it?
One more question:
Does anybody know if there's a way to create a draw.io file with multiple pages using the CSV import utility?
Thanks
Danny
Absolutely possible. Working example here: https://github.com/GanizaniSitara/drawio/
pyMX.py is the file you want to have a look at first.
It creates the file in XML, then encodes it and packs it into the drawio format.
It needs input data in CSV in this format:
Level0,Level1,Level2,AppName,TC,StatusRAG,Status,HostingPercent,HostingPattern1,HostingPattern2,Arrow1,Arrow2,Link
Cool Division,Some Department,Some Department2,SomeString,Zero,25,red,green,0,Azure,Linux,up,up,http://www.gooogle.com
Rinse and repeat for anything else you need to create. It's rough code; ping me here or on GitHub if anything needs clarification.
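For reference, here is a minimal sketch of the same idea (this is not the pyMX.py code itself): it reads rows from a CSV with the layout above and emits uncompressed draw.io XML, which draw.io can open directly, so the encode-and-pack step is optional. Only the AppName column is used, and the file names are made up for illustration.
import csv
import xml.etree.ElementTree as ET

def csv_to_drawio(csv_path, out_path):
    # Build the bare mxGraphModel skeleton that a drawio page consists of.
    mxfile = ET.Element('mxfile')
    diagram = ET.SubElement(mxfile, 'diagram', name='Page-1')
    model = ET.SubElement(diagram, 'mxGraphModel')
    root = ET.SubElement(model, 'root')
    ET.SubElement(root, 'mxCell', id='0')
    ET.SubElement(root, 'mxCell', id='1', parent='0')

    with open(csv_path, newline='') as f:
        for i, row in enumerate(csv.DictReader(f)):
            # One box per row, labelled with the AppName column, laid out on a grid.
            cell = ET.SubElement(root, 'mxCell', id=str(i + 2),
                                 value=row['AppName'], style='rounded=0',
                                 vertex='1', parent='1')
            ET.SubElement(cell, 'mxGeometry',
                          x=str(40 + (i % 5) * 160), y=str(40 + (i // 5) * 100),
                          width='120', height='60', **{'as': 'geometry'})

    ET.ElementTree(mxfile).write(out_path)

csv_to_drawio('apps.csv', 'apps.drawio')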

How can I create a directed tree graph in Gephi instead of a spherical one?

I want to make a network graph which shows the distribution of our documents in our folder structure.
I have the nodefile, edgefile and gephi graph file in this location:
https://1drv.ms/f/s!AuVfRBdVHkO7hgs5K9r9f7jBBAUH
What I do is:
Run the ForceAtlas2 algorithm with scaling 10-20, "Dissuade Hubs" and "Prevent Overlap" enabled, and all other settings at their defaults.
What I get is a graph with the groups distributed radially/spherically. However, what I want is a directed tree network graph.
Does anyone know how I can adjust Gephi to produce this?
Thanks!
I just found a solution.
I tested the file format as shown on the yEd site's "import excel file" page:
http://yed.yworks.com/support/manual/import_excel.html
This gave me the yEd import dialog (it took me forever to figure out that it's a pop-up dialog and not selectable through the standard menu).
Anyway, it worked, and I adjusted the test files with the data I had prepared for Gephi. This was pretty easy; I could reuse the source/target IDs etc. Just copy and paste.
I loaded it into yEd and used some directed and radial clustering algorithms on it. It works fine!
Below you can find the Excel node/edge file used to import into yEd and the graph file you can open with yEd to see the final radial result.
https://1drv.ms/f/s!AuVfRBdVHkO7hg6DExK_eVkm5_mR
The only thing left to figure out is how to map the weight (which represents the number of documents) to the node size.
Unfortunately, as of version 0.9.0, Gephi no longer supports hierarchical graphs. Maybe try using a previous version?
Other alternatives involve more complex software, such as Graphviz, but you need a .dot file instead of your .csv. I looked all over, but could not find an easy-to-use csv to dot converter.
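That said, the missing CSV-to-DOT step is small enough to script yourself. Below is a minimal sketch, assuming a Gephi-style edge list with Source and Target columns (adjust the column names to match your edge file); the result can then be rendered with Graphviz:
import csv

def edges_csv_to_dot(csv_path, dot_path, graph_name='documents'):
    # Translate each Source,Target row into a DOT edge.
    with open(csv_path, newline='') as f, open(dot_path, 'w') as out:
        out.write('digraph %s {\n' % graph_name)
        for row in csv.DictReader(f):
            out.write('  "%s" -> "%s";\n' % (row['Source'], row['Target']))
        out.write('}\n')

edges_csv_to_dot('edges.csv', 'documents.dot')
# then render it, e.g.: dot -Tpng documents.dot -o documents.png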
You could try looking at d3-hierarchy, a node.js program, but then again you need to use the not-so-user-friendly npm. If you look at the link, it looks like it can produce the kind of diagram you're looking for.

Data.c file generation

I'm very new to MATLAB. I'm working on a piece of software which needs the following files as input for a particular Simulink model: model.c, model.h, and model_data.c. I have a model for which I can't generate the model_data file using RTW. I have tried to find information on the files generated by RTW, but I didn't get sufficient info. If there is anybody who knows about RTW, please let me know which blocks are required to generate model_data.c.
Thank you.
model_data.c is a conditionally created file (i.e. it is only created if it is needed, which depends on the way the model is set up for code generation).
For a discussion of the Simulink Coder build process, and what files get generated when, search the doc for the section titled "Files and Folders Created by Build Process".
For others who need help in the future:
Open the Configuration Parameters pane. Go to Code Generation -> Optimization and make sure that Default parameter behavior is set to Tunable.

programmatically add cells to an ipython notebook for report generation

I have seen a few of the talks by IPython developers about how to convert an IPython notebook to a blog post, a PDF, or even to an entire book (~min 43). The PDF-to-X converter interprets the IPython cells, which are written in markdown or code, and spits out a newly formatted document in one step.
My problem is that I would like to generate a large document where many of the figures and sections are programmatically generated - something like this. For this to work in IPython using the methods above, I would need to be able to write a function that writes other IPython code blocks. Does this capability exist?
# some pseudocode to give an idea
for variable in list:
    image = make_image(variable)
    write_iPython_Markdown_Cell(variable)
    write_iPython_Image_cell(image)
I think this might be useful, so I am wondering:
whether generating IPython cells through IPython is possible
whether there is a reason this is a bad idea and I should stick to a 'classic' solution like a templating library (Jinja).
thanks,
zach cp
EDIT:
As per Thomas' suggestion, I posted on the ipython mailing list and got some feedback on the feasibility of this idea. In short, there are some technical difficulties that make it less than ideal. For a repetitive report where you would like to generate markdown cells and corresponding images/tables, it is more complicated to work through the IPython kernel/browser than to generate the report directly with a templating system like Jinja.
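To make the templating alternative concrete, here is a minimal, untested Jinja2 sketch that renders a markdown report directly; the file names and fields are invented for illustration:
from jinja2 import Template

# Hypothetical report skeleton: one markdown section per generated item.
template = Template(
    "{% for item in items %}"
    "## Variable: {{ item.name }}\n\n"
    "![image]({{ item.image }})\n\n"
    "{% endfor %}"
)
report_md = template.render(items=[
    {'name': 'alpha', 'image': 'alpha.png'},
    {'name': 'beta', 'image': 'beta.png'},
])
with open('report.md', 'w') as f:
    f.write(report_md)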
There's a Notebook gist by Fernando Perez here that demonstrates how to programmatically create new cells. Note that you can also pass metadata in, so if you're generating a report and want to turn the notebook into a slideshow, you can easily indicate whether the cell should be a slide, sub-slide, fragment, etc.
You can add any kind of cell, so what you want is straightforward now (though it probably wasn't when the question was asked!). E.g., something like this (untested code) should work:
from IPython.nbformat import current as nbf

nb = nbf.new_notebook()
cells = []
for var in my_list:
    # Assume make_image() saves an image to file and returns the filename
    image_file = make_image(var)
    text = "Variable: %s\n![image](%s)" % (var, image_file)
    cell = nbf.new_text_cell('markdown', text)
    cells.append(cell)

nb['worksheets'].append(nbf.new_worksheet(cells=cells))
with open('my_notebook.ipynb', 'w') as f:
    nbf.write(nb, f, 'ipynb')
I won't judge whether it's a good idea, but if you call get_ipython().set_next_input(s) in the notebook, it will create a new cell with the string s. This is what IPython uses internally for its %load and %recall commands.
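A minimal illustration of that call (the queue_cell wrapper here is just for readability and not part of IPython):
# A tiny (untested) helper around set_next_input; running this in a notebook
# cell puts the generated source into a new input cell, ready to run.
def queue_cell(source):
    get_ipython().set_next_input(source)

queue_cell("print('generated by the previous cell')")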
Note that the accepted answer by Tal is a little deprecated and getting more deprecated: in IPython v3 you can (and should) import nbformat directly, and after that you need to specify which version of notebook you want to create.
So,
from IPython.nbformat import current as nbf
becomes
from nbformat import current as nbf
becomes
from nbformat import v4 as nbf
However, in this final version, compatibility breaks because the write method is in the parent module nbformat, whereas all of the other methods used by Fernando Perez are in the v4 module, although some of them have different names (e.g. new_text_cell('markdown', source) becomes new_markdown_cell(source)).
Here is an example of the v3 way of doing things: see generate_examples.py for the code and plotstyles.ipynb for the output. IPython 4 is, at the time of writing, so new that using the web interface and clicking 'New Notebook' still produces a v3 notebook.
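For comparison, here is a short, untested sketch of the v4 equivalent, using the renamed helpers described above:
import nbformat
from nbformat import v4 as nbf

nb = nbf.new_notebook()
nb.cells.append(nbf.new_markdown_cell("Variable: x\n![image](x.png)"))
nb.cells.append(nbf.new_code_cell("print('hello')"))

# write() lives in the parent nbformat module, not in v4
with open('my_notebook_v4.ipynb', 'w') as f:
    nbformat.write(nb, f)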
Below is the code of a function which will load the contents of a file and insert it into the next cell of the notebook:
from IPython.display import display_javascript

def make_cell(s):
    # Escape the source so it survives being embedded in a JavaScript string,
    # then ask the notebook front end to write it into the selected cell.
    text = s.replace('\n', '\\n').replace("\"", "\\\"").replace("'", "\\'")
    text2 = """var t_cell = IPython.notebook.get_selected_cell()
    t_cell.set_text('{}');
    var t_index = IPython.notebook.get_cells().indexOf(t_cell);
    IPython.notebook.to_code(t_index);
    IPython.notebook.get_cell(t_index).render();""".format(text)
    display_javascript(text2, raw=True)

def insert_file(filename):
    with open(filename, 'r') as content_file:
        content = content_file.read()
    make_cell(content)
See details in my blog.
Using magics can be another solution, e.g.:
get_ipython().run_cell_magic(u'HTML', u'', u'<font color=red>heffffo</font>')
Now that you can programmatically generate HTML in a cell, you can format it any way you wish. Images are of course supported. If you want to repetitively generate output to multiple cells, just do the above several times with the string as a placeholder.
P.S. I once had this need and reached this thread. I wanted to render a table (not the ASCII output of lists and tuples) at the time. Later I found that pandas.DataFrame is amazingly suited for the job: it generates HTML-formatted tables automatically.
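For example (a small illustrative snippet, not from the original code):
import pandas as pd

# In a notebook, a DataFrame left as the last expression of a cell is
# rendered as an HTML table by the rich display machinery.
df = pd.DataFrame({'variable': ['a', 'b'], 'value': [1, 2]})
df

# Or push the HTML into a cell explicitly via the magic shown above:
get_ipython().run_cell_magic(u'HTML', u'', df.to_html())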
from IPython.display import display, Javascript

def add_cell(text, type='code', direct='above'):
    text = text.replace('\n', '\\n').replace("\"", "\\\"").replace("'", "\\'")
    display(Javascript('''
    var cell = IPython.notebook.insert_cell_{}("{}")
    cell.set_text("{}")
    '''.format(direct, type, text)))

for i in range(3):
    add_cell(f'# heading{i}', 'markdown')
    add_cell(f'code {i}')
The code above will insert three pairs of markdown and code cells above the current cell.
@xingpei Pang's solution is perfect, especially if you want to create customized code for each dataset that has several groups, for instance. However, the main issue with the JavaScript code is that if you run it in a trusted notebook, it runs again every time the notebook is loaded.
The solution I came up with is to clear the cell output after execution. The JavaScript code is stored in the cell output, so by clearing the output the code is gone and nothing is left to be executed in trusted mode again. Using the code from here, the solution is the code below.
from IPython.display import display, Javascript, clear_output

def add_cell(text, type='code', direct='above'):
    text = text.replace('\n', '\\n').replace("\"", "\\\"").replace("'", "\\'")
    display(Javascript('''
    var cell = IPython.notebook.insert_cell_{}("{}")
    cell.set_text("{}")
    '''.format(direct, type, text)))

# create cells
for i in range(3):
    add_cell(f'# heading{i}', 'markdown')
    add_cell(f'code {i}')

# clean the javascript code from the current cell output
for i in range(10):
    clear_output(wait=True)
Note that clear_output() needs to be run several times to make sure the output is cleared.
As a slight update incorporating Tal's answer above, the updates from Chris Barnes, and a little digging in the nbformat docs, the following worked for me:
import nbformat
from nbformat import v4 as nbf

nb = nbf.new_notebook()
cells = [
    nbf.new_code_cell(f"""print("Doing the thing: {i}")""")
    for i in range(10)
]
nb.cells.extend(cells)

with open('generated_notebook.ipynb', 'w') as f:
    nbformat.write(nb, f)
You can then start up the new artificial notebook and cut-n-paste cells where ever you need them.
This is unlikely to be the best way to do anything, but it's useful as a dirty hack. 🐱‍💻
This worked with the following versions:
Package Version
-------------------- ----------
ipykernel 5.3.0
ipython 7.15.0
jupyter 1.0.0
jupyter-client 6.1.3
jupyter-console 6.1.0
jupyter-core 4.6.3
nbconvert 5.6.1
nbformat 5.0.7
notebook 6.0.3
...
Using the command line, go to the directory where the myfile.py file is located
and execute (example):
C:\MyDir> pip install p2j
Then execute:
C:\MyDir> p2j myfile.py -t myfile.ipynb
Run in the Jupyter notebook:
!pip install p2j
Then, using the command line, go to the corresponding directory where the file is located and execute:
python p2j <myfile.py> -t <myfile.ipynb>

What is the best file parsing solution for converting files?

I am looking for the best solution for custom file parsing for our enterprise import routines. I basically want to change one file format into a standard file format and have one routine that imports that data into the database. I need to be able to create custom scripts for each client, since it's difficult to get the customer to comply with a standard or template format. I have looked at PowerShell and IronPython to do this so far, but I am not sure that is the route I want to go. I have also looked at some tools such as Talend, a drag-and-drop style tool which may or may not give me the flexibility I want. We are a .NET shop and have created custom code to do this in the past, but I need something that is quicker to create than coding custom parsing functions each time we get a new file format in.
Depending on the complexity and variability of your work, you should consider an ETL tool like SSIS (SQL Server Integration Services).
Python is wonderful for this kind of thing. That's why we use it. Each new customer transfer is a new adventure, and Python gives us the flexibility to respond quickly.
Edit: All Python scripts that read files are "custom file parsers". Without an actual example, it's not sensible to provide a detailed one.
with open( "some file", "r" ) as source:
for line in source:
process( line )
That's about all there is to a "custom file parser". If you're parsing .csv or .xml files, then Python has modules for that. If you're parsing fixed-format files, you'd use string slicing operations. If you're parsing other files (X12? JSON? YAML?) you'll need appropriate parsers.
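For instance, here is a rough sketch of the "convert the client file into the standard format" step using the csv module; the column names are invented for illustration:
import csv

# Map each client-specific header to the standard column name.
CLIENT_TO_STANDARD = {'Cust No': 'customer_id', 'Amt': 'amount', 'Dt': 'date'}

def convert(client_path, standard_path):
    with open(client_path, newline='') as src, \
         open(standard_path, 'w', newline='') as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(CLIENT_TO_STANDARD.values()))
        writer.writeheader()
        for row in reader:
            writer.writerow({std: row[cli] for cli, std in CLIENT_TO_STANDARD.items()})

convert('client_file.csv', 'standard_file.csv')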
Tab-Delim.
from collections import namedtuple
RecordLayout = namedtuple('RecordLayout', ['field1', 'field2', 'field3', ...])

def process(aLine):
    # unpack the tab-separated columns into the namedtuple
    record = RecordLayout(*aLine.rstrip('\n').split('\t'))
    ...
Fixed Layout.
from collections import namedtuple
RecordLayout = namedtuple('RecordLayout', ['field1', 'field2', 'field3', ...])

def process(aLine):
    # slice out the fixed-width columns, then unpack them into the namedtuple
    fields = (aLine[:10], aLine[10:20], aLine[20:30], ...)
    record = RecordLayout(*fields)
    ...