I'm hoping to develop an IDE that can handle large Verilog files (up to 5 GB) using Eclipse Theia. Since Theia is built on VS Code and assuming they can handle the same file sizes, what is the largest file size that VS Code can handle, and how can its configuration be changed to increase the maximum file size?
I tried using the command code --max-memory [file-size], but it didn't work.
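(One suggestion I have seen but not confirmed myself: the flag takes an equals sign and a value in megabytes, and there is a matching user-settings key for the same limit.)

code --max-memory=6144 1gb-file

and, in settings.json:

"files.maxMemoryForLargeFilesMB": 6144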
In order to create a dummy 1GB text file for testing, use the following command:
dd if=/dev/zero of=1gb-file bs=1000000000 count=1
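Note that /dev/zero produces a gigabyte of null bytes rather than text; if the test needs printable lines, an alternative (my own suggestion, not part of the original recipe) is:

base64 /dev/urandom | head -c 1000000000 > 1gb-file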
Thanks for the help!
Clarification, simpler question: how do I display files of size ~1 GB in VS Code?
Related
I want to test an application with a file containing 10000 lines of records (plus header and footer lines). I have a test file with 10 lines now, so I want to duplicate those lines 1000 times. I don't want to write C# code in my app to generate that file (it's only for testing), so I am looking for a different, simple way to do it.
What kind of tool can I use to do that? CMD? A Visual Studio/VS Code extension? Any thoughts?
If your data is textual, load the 10 records from your test file into an editor. Select all, copy, and paste at the end of the file. Repeat until the file is 10000+ lines long.
Each cycle doubles the line count, so the procedure requires ceil(log_2(<target_number_of_lines>/<base_number_of_lines>)) cycles; in your case that is ceil(log_2(1000)) = 10.
Alternative (large files)
Modern editors should not have performance problems here, but the same principle can be applied with the cat command on the CLI. Assuming you copy the original file to a file named dup0.txt, proceed as follows:
cat dup0.txt dup0.txt >dup1.txt
cat dup1.txt dup1.txt >dup0.txt
leaving you with four times the original number of lines in dup0.txt. Keep alternating the two commands until you reach the target count.
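If you'd rather not repeat that by hand, here is a minimal shell sketch of the same doubling loop, using the file names above and the 10000-line target (the final head just trims the overshoot; header and footer lines would still be added separately):

target=10000
while [ "$(wc -l < dup0.txt)" -lt "$target" ]; do
    cat dup0.txt dup0.txt > dup1.txt   # double the file
    mv dup1.txt dup0.txt               # make the doubled copy the new source
done
head -n "$target" dup0.txt > records.txt   # trim to exactly 10000 record lines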
I want to open a large text file (400 MB) which contains 20 million lines of domain addresses.
The file opens fine in an ordinary text editor, but when I open it in Eclipse I get an error.
I tried changing -Xms6000M, but it's not working!
Does anyone have a solution to this problem?
Use the current version of Eclipse instead of the outdated one you have.
If required, increase -Xmx in the eclipse.ini file.
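Note that -Xms only sets the initial heap size; -Xmx sets the ceiling, which is why raising Xms6000M alone did not help. The VM arguments go after the -vmargs line in eclipse.ini; a minimal sketch, where the 6 GB value is only an example:

-vmargs
-Xms512m
-Xmx6g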
I have some results in text format, including a text header. The files are about 15-50 GB. I want to import them into Matlab for processing. Could you give me some advice on which commands I should use for such big files?
You may use the ROOT software that is available on the CERN website.
It was developed to work with very large files (say, > 1 TB).
I solved this problem via the textscan function, but my text files were around 5 GB. The computer had 16 GB of RAM, which wasn't enough, so it had to fall back on pagefile.sys. Reading time was roughly 60 minutes.
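A minimal sketch of that chunked textscan approach, assuming a whitespace-delimited file of three numeric columns with a single header line (the file name, format string, and block size are placeholders):

fid = fopen('results.txt', 'r');
fgetl(fid);                                 % skip the text header
while ~feof(fid)
    block = textscan(fid, '%f %f %f', 1e6); % read up to a million rows per pass
    % process block{1}, block{2}, block{3} here before the next pass overwrites them
end
fclose(fid);

Reading block by block keeps only one chunk in RAM at a time, which avoids the pagefile thrashing described above.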
I'm trying to open a .mat file in a Windows environment, but it is failing. It was created in a Unix environment. Also note that this file was first put into a .tar file and transferred via FTP in binary mode. The file opens in Unix, and I don't think it was corrupted in any way.
The *.mat file format is platform agnostic. The OS does not matter.
There are a number of variants of the *.mat file format in use, and older Matlab versions cannot always read files saved by newer versions. You can save to an older version using flags available in the save command. The format has been updated as the Matlab feature set has demanded more flexibility and as other technologies have advanced, most notably HDF5 in recent versions.
Finally, the save command supports an ASCII formatted option. I suspect this is your current problem, based on your comment regarding the error message received.
To address your current problem:
First, check whether the file is an ASCII file. The easiest way is to simply open it in Notepad, WordPad, or even the Matlab editor. If the file is text, then this becomes a file-parsing problem, and appropriate use of fscanf should fix it.
If the file is actually a binary *.mat file, then you probably have a Matlab version incompatibility. You can either go back to the source Unix environment and save to an older version (e.g. save ... -v7) or update the Matlab version on the reading computer.
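If it is the version problem, the round trip looks roughly like this; the file name is a placeholder, the first two lines run on the Unix machine and the last on Windows after a binary-mode transfer:

load('results.mat');             % readable there, since it was written there
save('results_v7.mat', '-v7');   % rewrite all variables in the older v7 format
data = load('results_v7.mat');   % on the Windows machine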
I would like to view my CSV files in a column-aligned format from the command line, with something like less, but my CSV files are sometimes gigabytes in size, and I'm using a small computer (a netbook with 1 GB RAM, an 8 GB HD, and a 1 GHz processor), so I don't want to waste a lot of memory or processing power viewing the file.
I mention that I'd like to use something like less because I would like to be able to navigate around within the file.
cat FILE | column -s, -t | less is one thought, but cat is still going to try to print the whole file and I'm not sure how much buffering the pipes will use (if any) or what sort of caching less employs.
This question is similar to this other question, but I'm specifically interested in viewing large files using minimal resources preferably already on the machine. I don't presently use VI or EMACS, and think they'd both be overkill here. VI, for instance, would be a 27MB install for a utility acting merely as a viewer.
First of all, less can open very large files without loading them entirely into memory. Second, both vim (which I use with the LargeFile plugin on files over 8 GB) and emacs can do it.
But... most of the time, viewing a big file in an 80x40 (or slightly bigger) terminal is useless, so you should filter it with something like (f)grep or process it with awk. If you want only the start or end, there are head and tail.
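One detail worth knowing: column -t has to read its entire input before it can compute column widths, so feed it a slice or a filtered subset rather than the raw file. A sketch, assuming a comma-separated file named big.csv (the awk filter value is just a placeholder):

head -n 1000 big.csv | column -s, -t | less -S    # align only the first 1000 rows
awk -F, '$3 == "example.com"' big.csv | column -s, -t | less -S    # filter first, then align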
HTH
Check the tail / head commands.
Or even better, download the Vim source and compile it. That should be easy enough. The version 5.8 source is 1 MB before decompressing (4 MB after). Enjoy.