I'm trying to make a basic RTS, but I have no idea where I can store data for things like units and buildings. I'd like to avoid making hundreds of .txt files (or one very big .txt file). I could just write a header with a class for every single object, but wouldn't that be too much? I mean, if I make about 20 units (in total, of course) with similar stats (range, attack value, health, etc.) that differ only in their special abilities, it seems odd to set everything up in 20 separate constructors, doesn't it?
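What I have in mind (just a sketch, with made-up names) is a single description type plus a table of values, instead of 20 hand-written constructors:

```cpp
#include <string>
#include <vector>

// One parameterized description instead of one class/constructor per unit type.
struct UnitType
{
    std::string name;
    int range;
    int attack;
    int health;
    std::string specialAbility; // the only part that really differs between units
};

// The 20 unit types then become 20 rows of data, which could live in one file
// (or, at worst, in a single table like this) instead of 20 constructors.
const std::vector<UnitType> kUnitTypes = {
    {"Rifleman", 4, 10, 100, "None"},
    {"Sniper",   8, 25,  60, "Camouflage"},
    // ...
};
```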
Another problem is storing the map. I think I'll try the .txt solution here, but I'll probably write some kind of map editor in WinAPI or something like that, since laying out the map by hand in a .txt file would be a torment. So I know how to represent tiles (I want the map to be tiled, which will be much easier to implement, I suppose), but what if a unit takes up more than one tile? How can I deal with that?
.txt and XML are not great solutions here, and writing to and reading from disk isn't the cheapest operation you can do in real time. The way to do this in Unity is through serialization: basically, you write a class that lets you store data without instantiating a GameObject for it, and you can save or load it at runtime whenever you like. There is also a great tutorial about data persistence on the Unity Tutorials page. (Link Here)
I highly recommend the Easy Save plugin. I'd set it up so it only saves to disk every few seconds, not a constant stream. Also, with Easy Save you can save just bits and pieces to a larger save file rather than saving everything with each pass. If the game crashes, you might lose a couple seconds of progress, but that should be an acceptable loss in the case of a crash or quit.
I am working on physics simulation research. In one of my projects I have a large fixed grid that does not vary with time. The fields on the grid, on the other hand, vary with time during the simulation. I need to use VTK to record the field data at each step for visualization in ParaView.
The method I am using is to write a separate *.vtu file to disk at each time step. This basically serves the purpose, but it writes a lot of duplicate data (the mesh geometry is re-recorded at every step), which not only consumes more disk space but also wastes time on encoding and parsing.
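For context, my per-step output is essentially the following (simplified sketch; file and variable names are just placeholders):

```cpp
// Simplified sketch of the current per-step output: the writer receives the
// whole grid, so geometry and fields are both serialized on every call.
#include <string>
#include <vtkNew.h>
#include <vtkUnstructuredGrid.h>
#include <vtkXMLUnstructuredGridWriter.h>

void WriteStep(vtkUnstructuredGrid* grid, int step)
{
    vtkNew<vtkXMLUnstructuredGridWriter> writer;
    const std::string fileName = "fields_" + std::to_string(step) + ".vtu";
    writer->SetFileName(fileName.c_str());
    writer->SetInputData(grid); // grid carries the (unchanging) mesh plus the updated fields
    writer->Write();
}
```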
I would like a way to write the mesh information only once, and from then on write only the new field data, while still getting the same visualization. Please let me know if VTK and ParaView provide such an interface and how to use it.
Using a .pvtu file and referring to the same .vtu as a Piece for each step should do the trick.
See this similar post on the ParaView discourse, and the pvtu doc
EDIT
This seems to be a side effect of the format; it is not supported by the writer.
The correct solution is to use another file format ...
Let me provide my own research findings for reference.
As Nico said, with a combination of pvtu/vtu files we could theoretically store the geometry in a separate vtu file that is referenced by a pvtu file. Setting the NumberOfPieces attribute of the pvtu file to 1 would result in only one separate vtu file being constructed.
However, the VTK library does not expose a dedicated interface to control the writing of the vtu files. No matter how it is configured, as long as the writer's input contains geometry, the writer will write the geometry information to disk, and this step cannot be skipped through the exposed interface.
However, it is indeed possible to make multiple pvtu files point to the same vtu file by manually editing the Piece node in each pvtu file, and ParaView can recognize and visualize such a file group properly.
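For reference, this is roughly what such a hand-edited .pvtu looks like (file and array names here are invented for illustration); the Piece element is the part you point at the one surviving .vtu, so every per-step .pvtu can reference the same piece file:

```xml
<?xml version="1.0"?>
<VTKFile type="PUnstructuredGrid" version="0.1" byte_order="LittleEndian">
  <PUnstructuredGrid GhostLevel="0">
    <PPoints>
      <PDataArray type="Float64" NumberOfComponents="3"/>
    </PPoints>
    <PPointData Scalars="field">
      <PDataArray type="Float64" Name="field"/>
    </PPointData>
    <!-- Hand-edited: every per-step .pvtu references the same piece file. -->
    <Piece Source="step_0000.vtu"/>
  </PUnstructuredGrid>
</VTKFile>
```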
I did not proceed to try adding arrays to the unstructured grid and using pvtu output.
So, I think the conclusion is:
If you don't want to dive into VTK's library code and its XML implementation, this approach doesn't make sense.
If you are willing to write the full series of files, delete most of the vtu files, and then point every pvtu's Piece node at the single surviving vtu file by editing the pvtu files, you can save a lot of disk space, but you will not shorten the write, read, and parse times.
If you implement an XML writer yourself, you can in theory meet all the requirements, but it takes a lot of coding work.
I was wondering whether UE5 can handle a DB/CSV with 50k+ lines, where the rows represent the parameters of the whole animation (coordinates [x, y, z], TimeDelta, Speed, Brake).
Any documentation is very much appreciated.
There is no existing functionality in the engine itself for this extremely specific use case. Of course, it can "support" it if you write a custom solution using the many available tools within the engine.
You can use IFileHandle to stream in a file (your csv): link
You can then parse the incoming data to create an FVector from your coordinates, a float from your TimeDelta, etc. For example, FVector::InitFromString may help: link
However, this depends very much on the format of your data. Parsing strings/text into values is not specific to UE4; you can find a lot of information on converting streams of binary/character data into the values you need.
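As a very rough sketch of that idea (the FAnimSample struct and the comma-separated column order X,Y,Z,TimeDelta,Speed,Brake are assumptions, not something from your project), reading the file through IFileHandle and parsing each row could look like:

```cpp
// Minimal sketch: read the whole CSV via IFileHandle, then parse each line into
// a sample struct. Assumes one sample per line, columns X,Y,Z,TimeDelta,Speed,Brake.
#include "CoreMinimal.h"
#include "HAL/PlatformFileManager.h"
#include "GenericPlatform/GenericPlatformFile.h"

struct FAnimSample // hypothetical container for one CSV row
{
    FVector Position = FVector::ZeroVector;
    float TimeDelta = 0.f;
    float Speed = 0.f;
    float Brake = 0.f;
};

bool LoadSamples(const FString& Path, TArray<FAnimSample>& OutSamples)
{
    IFileHandle* Handle = FPlatformFileManager::Get().GetPlatformFile().OpenRead(*Path);
    if (!Handle)
    {
        return false;
    }

    // Read the whole file into memory; for truly huge files you could read in chunks instead.
    TArray<uint8> Bytes;
    Bytes.SetNumUninitialized(static_cast<int32>(Handle->Size()));
    Handle->Read(Bytes.GetData(), Bytes.Num());
    delete Handle;
    Bytes.Add(0); // null-terminate so the buffer can be converted to an FString

    const FString Text = UTF8_TO_TCHAR(reinterpret_cast<const char*>(Bytes.GetData()));

    TArray<FString> Lines;
    Text.ParseIntoArrayLines(Lines);
    OutSamples.Reserve(Lines.Num());

    for (const FString& Line : Lines)
    {
        TArray<FString> Cols;
        Line.ParseIntoArray(Cols, TEXT(","));
        if (Cols.Num() < 6)
        {
            continue; // skip headers / malformed rows
        }

        FAnimSample Sample;
        Sample.Position = FVector(FCString::Atof(*Cols[0]),
                                  FCString::Atof(*Cols[1]),
                                  FCString::Atof(*Cols[2]));
        Sample.TimeDelta = FCString::Atof(*Cols[3]);
        Sample.Speed     = FCString::Atof(*Cols[4]);
        Sample.Brake     = FCString::Atof(*Cols[5]);
        OutSamples.Add(Sample);
    }
    return true;
}
```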
Applying the animation as the data is read is a separate, quite big, task. Since you provide no details on what the animation data represents, or what you need to apply it to, I cannot really help.
In general though, it can help you a lot to break down your question into 3-4 separate, more specific, questions. In any case though, this is a task that will require a lot of research and work.
And even before that, it might be good to research alternative approaches and consider changing the pipeline, to avoid using such a non-standard file structure for animation.
I would like to set up the configuration of each level of my iPhone game through a plist or some kind of flat file. One drawback of doing this is that a user can potentially open up the app bundle and change the flat file. I am now thinking of hard-coding it as an instance of, say, a Config class. Is that a good idea? What is the conventional approach for saving/loading/configuring levels?
The approach I've often used is two-fold. First, write the data out as binary data rather than text, simply to make it a bit less obvious what it is; if you're using a plist, you can serialize it as an NSData element. Second, create a hash (SHA-1, etc.) of the data, salted and/or concatenated with some value internal to your program, and store the hash either alongside the data or somewhere else. Then when the data is read back in, you can validate that it hasn't been tampered with.
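A minimal sketch of that check, written here in C++ against CommonCrypto's SHA-1 (equally callable from Objective-C); the salt constant and helper names are made up for illustration:

```cpp
// Hash the serialized level data concatenated with an app-internal salt,
// then compare the stored digest against a freshly computed one on load.
#include <CommonCrypto/CommonDigest.h>
#include <array>
#include <string>
#include <vector>

using Digest = std::array<unsigned char, CC_SHA1_DIGEST_LENGTH>;

static const std::string kSalt = "some-internal-secret"; // hypothetical value baked into the binary

Digest HashLevelData(const std::vector<unsigned char>& data)
{
    std::vector<unsigned char> salted(data);
    salted.insert(salted.end(), kSalt.begin(), kSalt.end()); // concatenate the salt
    Digest digest{};
    CC_SHA1(salted.data(), static_cast<CC_LONG>(salted.size()), digest.data());
    return digest;
}

// On load: recompute and compare with the digest stored alongside the data.
bool IsUntampered(const std::vector<unsigned char>& data, const Digest& storedDigest)
{
    return HashLevelData(data) == storedDigest;
}
```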
You could obfuscate the data in various ways, or actually encrypt it before storing it. Encryption has export ramifications however.
I have an object that contains about half a dozen properties. I expect to save maybe a dozen or more of these objects to my documents folder. My plan is to save the data using NSCoding and NSKeyedArchiver/NSKeyedUnarchiver. Does anyone have a better strategy or approach?
NSKeyedArchiver/NSKeyedUnarchiver will work fine. If each object only has half a dozen properties, you might consider whether putting the entire object list in one file would be simpler for you to load/save/keep consistent.
SqlDataReader is said to be a faster way to process the results of a stored procedure. What are some of the advantages/disadvantages of using SqlDataReader?
I assume you mean "instead of loading the results into a DataTable"?
Advantages: you're in control of how the data is loaded. You can ask for specific data types, and you don't end up loading the whole set of data into memory all at the same time unless you want to. Basically, if you want the data but don't need a data table (e.g. you're going to populate your own kind of collection) you don't get the overhead of the intermediate step.
Disadvantages: you're in control of how the data is loaded, which means it's easier to make a mistake and there's more work to do.
What's your use case here? Do you have a good reason to believe that the overhead of using a normal (or strongly typed) data table is significantly hurting performance? I'd only use SqlDataReader directly if I had a good reason to do so.
The key advantage is obviously speed; that's the main reason you'd choose a SqlDataReader.
One potential disadvantage not already mentioned is that a SqlDataReader is forward-only, so you can only go through the records once, in sequence; that's one of the things that allows it to be so fast. In many cases that's fine, but if you need to iterate over the records more than once or add/edit/delete data, you'll need to use one of the alternatives.
It also remains connected until you've worked through all the records and close the reader (of course, you can opt to close it earlier, but then you can't access any of the remaining records). If you're going to perform any lengthy processing on the records as you iterate over them, you may find that you impact other connections to the database.
It depends on what you need to do. If you get back a page of results from the database (say 20 records), it would be better to use a data adapter to fill a DataSet, and bind that to something in the UI.
But if you need to process many records, 1 at a time, use SqlDataReader.
Advantages: Faster, less memory.
Disadvantages: Must remain connected, must remember to close the reader.