UE5: import CSV for a data-driven animation

I was wondering if UE5 can support 50k+ lines of a DB/CSV, as they represent the parameters of the whole animation (coordinates [x, y, z], TimeDelta, Speed, Brake).
Any documentation would be very much appreciated.

There is no existing functionality in the engine itself for this extremely specific use case. Of course, it can "support" it if you write a custom solution using the many available tools within the engine.
You can use IFileHandle to stream in a file (your CSV): link
You can then parse the incoming data to create an FVector from your coordinates, a float from your TimeDelta, and so on. For example, FVector::InitFromString may help: link
However, this depends very much on the format of your data. Parsing strings/text into values is not specific to Unreal; you can find a lot of info on converting streams of binary/character data into the values you need.
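To make the parsing step concrete, here is a minimal sketch, assuming each row is laid out as X,Y,Z,TimeDelta,Speed,Brake (the struct, function name, and file layout are illustrative, not something the engine prescribes). For brevity it loads the whole file with FFileHelper::LoadFileToStringArray; with IFileHandle you would read the buffer in chunks instead, but parse each line the same way:

```cpp
// Minimal sketch (not production code): parse a CSV whose rows are assumed
// to look like "X,Y,Z,TimeDelta,Speed,Brake" into an array of samples.
#include "CoreMinimal.h"
#include "Misc/FileHelper.h"

struct FAnimSample
{
    FVector Position = FVector::ZeroVector;
    float   TimeDelta = 0.f;
    float   Speed = 0.f;
    float   Brake = 0.f;
};

static bool LoadAnimSamples(const FString& FilePath, TArray<FAnimSample>& OutSamples)
{
    TArray<FString> Lines;
    if (!FFileHelper::LoadFileToStringArray(Lines, *FilePath))
    {
        return false;
    }

    OutSamples.Reserve(Lines.Num());
    for (const FString& Line : Lines)
    {
        TArray<FString> Cells;
        Line.ParseIntoArray(Cells, TEXT(","), /*InCullEmpty=*/true);
        if (Cells.Num() < 6)
        {
            continue; // skip header or malformed rows
        }

        FAnimSample Sample;
        // FVector::InitFromString expects "X=.. Y=.. Z=.." text, so for a
        // plain CSV it is simpler to convert each cell directly.
        Sample.Position = FVector(FCString::Atof(*Cells[0]),
                                  FCString::Atof(*Cells[1]),
                                  FCString::Atof(*Cells[2]));
        Sample.TimeDelta = FCString::Atof(*Cells[3]);
        Sample.Speed     = FCString::Atof(*Cells[4]);
        Sample.Brake     = FCString::Atof(*Cells[5]);
        OutSamples.Add(Sample);
    }
    return OutSamples.Num() > 0;
}
```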
Applying the animation as the data is read is a separate, quite big, task. Since you provide no details on what the animation data represents, or what you need to apply it to, I cannot really help.
In general, though, it can help you a lot to break your question down into 3-4 separate, more specific questions. In any case, this is a task that will require a lot of research and work.
And even before that, it might be good to research alternative approaches and consider changing the pipeline, to avoid using such a non-standard file structure for animation.

Related

How to use VTK to efficiently write time-varying field data on a fixed mesh?

I am working on physics simulation research. I have a large fixed grid in one of my projects that does not vary with time. The fields on the grid, on the other hand, vary with time in the simulation. I need to use VTK to record the field data in each step for visualization (Paraview).
The method I am using is to write a separate *.vtu file to disk at each time step. This basically serves the purpose, but actually writes a lot of duplicate data (re-recording the geometry of the mesh at each step), which not only consumes more disk space, but also wastes time on encoding and parsing.
I would like to have a way to write the mesh information only once, and the rest of the time only new field data is written, while being able to guarantee the same visualization. Please let me know if VTK and Paraview provide such an interface and how to implement it.
Using a .pvtu and referring to the same .vtu as the Piece for each step should do the trick.
See this similar post on the ParaView Discourse, and the pvtu doc.
EDIT
This seems to be a side effect of the format; it is not supported by the writer.
The correct solution is to use another file format ...
Let me provide my own research findings for reference.
As Nico said, with a combination of pvtu/vtu files, we could theoretically store the geometry in a separate vtu file, referenced by a pvtu file. Setting the NumberOfPieces attribute of the pvtu file to 1 would mean only a single separate vtu file is constructed.
However, the VTK library does not expose a dedicated interface to control the writing process of vtu files. No matter how it is configured, as long as the writer's input contains geometry, the writer will write the geometry information to disk, and this step cannot be skipped through the exposed interface.
However, it is indeed possible to make multiple pvtu files point to the same vtu file by manually editing the Piece node in the pvtu file, and ParaView can recognize and visualize such a file group properly. (A rough sketch of what such a hand-written pvtu group looks like follows the conclusions below.)
I did not proceed to try adding arrays to the unstructured grid and using pvtu output.
So, I think the conclusion is:
If you don't want to dive into VTK's library code and XML implementation, then this approach doesn't make sense.
If you are willing to write the full series of files, delete most of the vtu files, and then point every pvtu's Piece node at the only surviving vtu file by editing the pvtu files, you can save a lot of disk space, but you will not shorten the write, read, and parse times.
If you implement an XML writer yourself, you can in theory achieve all the requirements, but it requires a lot of coding work.
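For reference, here is a rough C++ sketch of the hand-written-pvtu idea mentioned above (file names and the declared Float32 point type are illustrative and must be adjusted to match what the vtu writer actually produced). It writes the geometry-bearing vtu once and then emits a tiny pvtu per step whose Piece node points back at that single file; as noted, ParaView will open such a group, but every step will show whatever arrays are stored in that one vtu, so this does not by itself give you per-step fields:

```cpp
// Sketch only: one geometry-bearing vtu plus hand-written per-step pvtu
// headers that all reference it.  Names and declared types are illustrative.
#include <vtkSmartPointer.h>
#include <vtkUnstructuredGrid.h>
#include <vtkXMLUnstructuredGridWriter.h>
#include <cstdio>
#include <fstream>

void WriteSeries(vtkUnstructuredGrid* grid, int numSteps)
{
    // 1. Write the geometry-bearing file once, the normal way.
    auto writer = vtkSmartPointer<vtkXMLUnstructuredGridWriter>::New();
    writer->SetFileName("mesh.vtu");
    writer->SetInputData(grid);
    writer->Write();

    // 2. Per step, write only a tiny .pvtu header whose Piece node points
    //    back at that single file.  The P* declarations must match the
    //    header of mesh.vtu (check the type the writer actually emitted).
    for (int step = 0; step < numSteps; ++step)
    {
        char name[64];
        std::snprintf(name, sizeof(name), "step_%04d.pvtu", step);
        std::ofstream pvtu(name);
        pvtu << "<?xml version=\"1.0\"?>\n"
                "<VTKFile type=\"PUnstructuredGrid\" version=\"0.1\" byte_order=\"LittleEndian\">\n"
                "  <PUnstructuredGrid GhostLevel=\"0\">\n"
                "    <PPoints>\n"
                "      <PDataArray type=\"Float32\" Name=\"Points\" NumberOfComponents=\"3\"/>\n"
                "    </PPoints>\n"
                "    <Piece Source=\"mesh.vtu\"/>\n"
                "  </PUnstructuredGrid>\n"
                "</VTKFile>\n";
    }
}
```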

kdb - customized data streaming/ticker plant?

We've been using kdb to handle a number of calculations focused more on traditional desktop sources. We have deployed our web application and are looking to make the leap as to how best to pick up data changes and re-calculate them in kdb to render a "real-time" view of the data as it changes.
From what I've been reading, using data loaders (feed handlers) feeding into our own equivalent of a "ticker plant" as a data store is the most documented, ideal solution. So far, we have been "pushing" data into kdb directly and calculating as part of a script, so we are trying to make the leap from calculation-on-demand to "live" calculation as data inputs are edited by the user.
I'm trying to understand how to manage the feed handlers and the timing of updates. We really only want to move data when it changes (it's a web front end, so we're trying to figure out how best to "trigger" when things change, such as on save or on losing focus in an editable data grid, for example). We are also thinking of using our database as the "ticker plant" itself, which may minimize feed handlers.
I found the reference below, and it looks like it's running a forever loop, which feels excessive, but I understand the original use case for kdb and streaming data.
Feedhandler - sending data to tickerplant
Does this sound like a solid workflow?
Many thanks in advance!
Resources we've been referencing:
Official manual: https://github.com/KxSystems/kdb/blob/master/d/tick.htm
kdb+ Tick overview: http://www.timestored.com/kdb-guides/kdb-tick-data-store
Source code: https://github.com/KxSystems/kdb-tick
There's a lot to parse here but some general thoughts/ideas:
Yes, most examples of feedhandlers are set up as forever loops, but this is often just for convenience when demoing.
Ideally, a live data flow should work based on event handling, aka on-event triggers. kdb+/q has this out of the box in the form of the .z handlers. Other languages should have similar concepts of event handling.
Some more examples of Python/Java feeders are here: https://github.com/exxeleron
There are also some details on the official Kx site: https://code.kx.com/q/wp/capi/#publishing-to-a-kdb-tickerplant (a minimal sketch of that pattern follows these notes).
It still might be a viable option to have a forever loop, or at least a short timer, in the event you want to batch data.
Depending on the amount of dataflow, a tickerplant might be overkill for your use case, but a tickerplant is still useful for (a) separating your processing from the processing of the dataflow (i.e. data can still flow through the tickerplant while another process is consuming/calculating) and (b) logging data for recovery purposes.
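Loosely following the pattern in the Kx C API whitepaper linked above, here is a minimal sketch of an event-driven feedhandler in C++ using the kdb+ C API (k.h and the c.o library). It assumes a standard kdb-tick setup where the tickerplant's .u.upd takes a table name plus row data; the trade table, its columns, the port, and the credentials are all illustrative:

```cpp
// Minimal sketch of an event-driven feedhandler using the kdb+ C API
// (k.h / c.o from KxSystems/kdb).  Table name, columns, host, port and
// credentials are illustrative; call publishTrade() from whatever
// "data changed" event your front end raises, not from a forever loop.
#define KXVER 3
#include "k.h"
#include <cstdio>

static I tpHandle = 0;  // connection handle to the tickerplant

bool connectTickerplant()
{
    tpHandle = khpu((S)"localhost", 5010, (S)"user:password");
    return tpHandle > 0;
}

// Publish a single row to .u.upd on the tickerplant.
void publishTrade(const char* sym, double price, long long size)
{
    K row = knk(3, ks((S)sym), kf(price), kj(size));
    K result = k(tpHandle, (S)".u.upd", ks((S)"trade"), row, (K)0);
    if (!result)
        std::fprintf(stderr, "tickerplant connection lost\n");
    else
        r0(result);  // release the response object
}

int main()
{
    if (!connectTickerplant())
        return 1;

    // In a real feedhandler this call would sit inside your event handler
    // (e.g. on save / on lost focus of the data grid).
    publishTrade("ABC", 101.5, 200);

    kclose(tpHandle);
    return 0;
}
```

The design point is simply that publishTrade runs inside your "data changed" handler rather than a polling loop; batching, as mentioned above, would just mean buffering rows and flushing them on a short timer.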

Where to store real-time strategy data?

I'm trying to make a basic RTS, but I have no idea where I can store data, for example units, buildings, etc. I'd like to avoid making hundreds of .txt files (or one very big .txt file). Well, I could just write a header with a class for every single object, but wouldn't that be too much? I mean, if I make about 20 units (in total, of course) with similar stats (range, attack value, health, etc.) and only different special abilities, it seems quite strange to set everything up in 20 constructors, doesn't it?
Another problem is storing the map. I think I'll try the .txt solution here, but I'm probably going to write some kind of map editor in WinAPI or something like that, since setting up the map by hand in a .txt file would be a torment. So I know how to represent tiles (I want the map to be a tiled one; it will be much easier to implement, I suppose), but what if there is a unit that takes up more than one tile? How can I deal with that?
Txt and XML are not great solutions, and writing to and reading from disk isn't the cheapest operation you can do in real time. The way to do this in Unity is through serialization: basically, you write a class that allows you to store data without instantiating a GameObject for it, and you can save or load it at runtime whenever you'd like. There is also a great tutorial about data persistence on the Unity Tutorials page. (Link Here)
I highly recommend the Easy Save plugin. I'd set it up so it only saves to disk every few seconds, not as a constant stream. Also, with Easy Save you can save just bits and pieces to a larger save file rather than saving everything with each pass. If the game crashes, you might lose a couple of seconds of progress, but that should be an acceptable loss in the case of a crash or quit.

How to do FIX+FAST using QuickFIX?

I am considering buying this connector:
MICEX FIX/FAST Market Data Adaptor http://www.b2bits.com/trading_solutions/market-data-solutions/micex-fixfast.html
But I don't like proprietary software for various reasons and would prefer to replace this connector with QuickFIX + DIY code.
A 100 usec performance difference is not critical for me, but I do care about features.
In particular, MICEX uses FIX+FAST, and the referenced connector automatically decodes FAST: "Hides FAST functionality from user, automatically applies FAST decoding."
The question is how to do the same with QuickFIX. Is it a good idea at all? How easy would it be to implement the referenced connector using QuickFIX?
Have you looked at http://code.google.com/p/quickfast/ ? I have used it and it mostly works, but it is not the best library.
I don't believe QuickFIX supports FAST. FAST is a complex compression specification for FIX messages, and implementing FAST on top of QuickFIX, or any FIX engine, in a performant way can be tricky.
You want to choose a FAST engine that can generate template-decoding source code; in other words, it reads the template XML file from the exchange and spits out code to parse each template. Done this way it's automatic, easy, and crucial for speed, as the generated code avoids the recursive calls otherwise necessary to parse repeating groups.
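To illustrate why that matters (a toy sketch, not the MICEX templates and not an actual QuickFAST or CoralFIX API): FAST packs fields with stop-bit encoding, 7 data bits per byte with the high bit marking the final byte, and a generated decoder simply reads each template's fields in their fixed order as straight-line code instead of walking the template definition recursively at run time.

```cpp
// Toy sketch of the "generated template decoder" idea (not a real FAST
// engine: presence maps, operators and dictionaries are omitted, and the
// field list below is invented, not a real MICEX template).
#include <cstdint>
#include <string>

// FAST stop-bit encoding: 7 data bits per byte, high bit set on the final byte.
static uint64_t decodeUInt(const uint8_t*& p)
{
    uint64_t value = 0;
    while (true)
    {
        uint8_t byte = *p++;
        value = (value << 7) | (byte & 0x7F);
        if (byte & 0x80)        // stop bit -> this was the last byte
            return value;
    }
}

static std::string decodeAscii(const uint8_t*& p)
{
    std::string s;
    while (true)
    {
        uint8_t byte = *p++;
        s.push_back(static_cast<char>(byte & 0x7F));
        if (byte & 0x80)
            return s;
    }
}

// What a code generator would emit for one (hypothetical) template:
// straight-line reads in the template's field order, no recursion and no
// run-time walk of the template XML.
struct TradeMsg { uint64_t seqNum; std::string symbol; uint64_t price; uint64_t size; };

static TradeMsg decodeTradeTemplate(const uint8_t*& p)
{
    TradeMsg m;
    m.seqNum = decodeUInt(p);
    m.symbol = decodeAscii(p);
    m.price  = decodeUInt(p);
    m.size   = decodeUInt(p);
    return m;
}
```

A real engine additionally has to handle presence maps, field operators (copy, delta, increment, etc.) and dictionaries, which is exactly the per-template work the code generator automates.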
Take a look at CoralFIX, which is an intuitive FIX engine with support for FAST decoding.
Disclaimer: I am one of the developers of CoralFIX.

How can I build a generic dataset-handling Perl library?

I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one which can, most certainly, be used on any kind of dataset that contains a mixture of categorical (A, B, C, ...), continuous (1.2, 3, 881, ...), and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data.
By structure of the data I mean which variable is in which place, and its name/type. And this is where I need some enlightenment. I am baffled by how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this.
The solutions I can think of are:
Create a configuration file in XML or a similar format with a prespecified structure.
Pass the information during initialization of the module.
Use the first row of the data as headers and try to guess the types (ouch).
Surely there must be a "canonical" way of doing this that is also usable and efficient.
This doesn't answer your question directly, but have you checked CPAN? It might have the module you need already. If not, it might have similar modules -- related either to biomedical data or simply to delimited data handling -- that you can mine for good ideas, both concerning formats for metadata and your module's API.
Any of the approaches you've listed could make sense. It all depends on how complex the data structures and their definitions are. What will make something like this useful to people is whether it saves them time and effort. So, your decision will have to be based on which approach best satisfies the need to make:
use of the module easy
reuse of data definitions easy
the data definition language sufficiently expressive to describe all known use cases
the data definition language sufficiently simple that an infrequent user can spend minimal time with the docs before getting real work done.
For example, if I just need to enter the names of the columns and their types (and there are only 4 well defined types), doing this each time in a script isn't too bad. Unless I have 350 columns to deal with in every file.
However, if large, complicated structure definitions are common, then a more modular, reuse-oriented approach is better.
If your data description language is difficult to work with, you can mitigate the issue a bit by providing a configuration tool that allows one to create and edit data schemas.
rx might be worth looking at, as well as the Data::Rx module on the CPAN. It provides schema checking for JSON, but there is nothing inherent in the model that makes it JSON-only.