I uploaded Fabian Vogt's micropython port to my TI Nspire CX CAS, together with a couple of *.py.tns files to try. I can't find a way to load/launch those files.
As micropython does not include the os module, I can't use os.chdir to change the current directory and load the *.py files from the Python shell. From the Python shell I tried: open("documents/mydirectory/myfile")
with different extensions .py or .py.tns, without success.
I don't think the Nspire has anything like a terminal command line either.
Thanks for your help,
There are 2 ways that you could do this, one easy way and one tedious way.
1. Map .py to micropython in your ndless.cfg
(ndless.cfg should be at /documents/ndless/ndless.cfg)
Like so:
ext.xxx=program-name
ext.xxx=program-name
ext.txt=nTxt
ext.py=micropython
ext.xxx=program-name
ext.xxx=program-name
You can edit this file either by copying it back and forth from your computer using TiLP or the official software, or you can edit it on-calc using nTxt. (This requires a bit of fiddling: make a copy of ndless.cfg named ndless.txt so that the existing .txt mapping lets you open the copy in nTxt.)
Ndless should come with a standard ndless.cfg containing basic bindings for nTxt and a few popular emulators. If you don't have one, get the standard one here. It will scan all directories (at least /documents/*, AFAIK) for programs. I've found that removing lines related to programs not on your Nspire will decrease load time.
2. Proper way to run a file in Python
To run a file in Python, you should do something like this:
with open("/documents/helloworld.py.tns","r") as file:
exec(file.read())
This will properly close the file after executing, which I've noticed is quite important on the Nspire, as leaving files open has given me trouble before. Of course, if you'd like, you can do exec(open("...","r").read()) and then handle closing the file yourself, but be warned: bad things can happen if you forget.
Also, you must remember to add the leading / and the .tns extension, or else strange things will happen, especially with writing to files.
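If you run files this way often, a small helper can save some typing. This is just a minimal sketch, untested on-calc; the name run_tns and the default /documents directory are assumptions, not part of MicroPython or Ndless:
def run_tns(name, directory="/documents"):
    # Hypothetical helper: builds an absolute path, adds the .tns suffix if missing,
    # and executes the file, closing it afterwards as in the example above.
    path = directory + "/" + name
    if not path.endswith(".tns"):
        path = path + ".tns"
    with open(path, "r") as f:
        exec(f.read(), globals())   # run in the global namespace so defined names stay available
Usage would look like run_tns("helloworld.py"), which executes /documents/helloworld.py.tns.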
That's about it! Feel free to ask more questions if needed, I'll be watching the ti-nspire tag.
(Just realized this question is quite old, but I guess it still might be helpful for others who end up on empty questions months later while trying to figure something out :P)
I have a bunch of Archives that I want to extract. Problem is, there's a lot of them, and it's a lot of info to move around. I'd like to do it all at once. It's probably taken more time to research than to do it manually, but research is more interesting.
TL;DR: Would like help with the 7-Zip command line to extract multiple archives into their own directories. AutoHotkey, PowerShell, and batch file answers would also be nice if you are feeling extra helpful.
Win10, latest update and all that. I've been using 7-Zip, so if there's a better extractor for this it might be a helpful suggestion. I have a little experience with coding, so I can usually parse an example and apply it to my project, but I can't come up with code on my own. So with that said, I'm comfortable using cmd, AutoHotkey, PowerShell, batch files, and a few others, but I need an example before I can do anything. haha
So, in my research, I found
(7z x -o"...\Stellaris\mod\Examples\" "...\content\281990\*")
for cmd, which works, except that it extracts everything to the same dir, since the archive files are in the root archive dir (I think that's why; if they were one folder down, it should work like I want, right?). I don't think you can use environment variables in the path(?). Not sure what would make it work here...
PowerShell: I only recently started tinkering with it, so the one script I found didn't make any sense to me. And I never found anyone using AutoHotkey for this.
And finally, a **batch file** I found here seemed to come closest (normally I'd comment on that thread because apparently it's still active, but I don't have 50 rep), but I wasn't sure how to modify it for my purposes:
@echo off
SET "filename=%~1" & REM Where does the working dir path go?
SET "dirName=%filename:~0,-4%" & REM How/where would you put in wildcards?
7z x -o"%dirName%" "%filename%"
I don't mind using any method, though I might prefer AHK? I'm probably most experienced there.
If you made it this far, wow, I'm impressed! I hope it was coherent enough to understand (probably not at first?). And maybe a little entertaining? I think I'm funny. Let me know if I should add or remove anything for the future. I know it's probably way too much context, but I would rather have too much than not enough, and I'm never sure what would be relevant and what would not. I'm not happy with my code format here, but I didn't quite understand what the help was saying about whitespace and I'm not familiar enough with Markdown yet (I wanted comments to be in line). Also, I'm honestly not sure about the tags.
EDIT: Added TL;DR at the top, and...
Found an answer via a program that does this. I'll post it in an answer as well: ExtractNow seems to be a bit outdated, last update was in '17, but it did what I wanted it to.
For interactive use at the command prompt:
for %z in ("\path\to\dir\subdir\*.zip") do @echo 7z x "-o\path\to\extracted\%~nz" "%~z"
This won't run 7z, but it will print out the commands. Once you are satisfied that the printed commands look fine, remove the @echo to execute them.
In a batch script you must of course duplicate the % signs.
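For reference, a minimal sketch of the same loop inside a .bat file, with the % signs doubled (the paths are placeholders, as above):
@echo off
REM Same command as the interactive version, but %z becomes %%z inside a script.
for %%z in ("\path\to\dir\subdir\*.zip") do 7z x "-o\path\to\extracted\%%~nz" "%%~z"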
Found an answer via a program that does this. ExtractNow seems to be a bit outdated, last update was in '17, but it did what I wanted it to with only a few settings changes.
So, in my research, I found
(7z x -o"...\Stellaris\mod\Examples" "...\content\281990\*")
for cmd, which works, except that it extracts everything to the same dir...
Assuming you are using Windows, 7-Zip would have worked fine for what you wanted. The only thing you were missing is the * character, which 7-Zip expands to the archive name when used with the -o switch:
7z x "dir\subdir\*.*" -o"dir\*"
So 7z x -o"...\Stellaris\mod\Examples" "...\content\281990\*"
becomes:
7z x -o"...\Stellaris\mod\Examples\*" "...\content\281990\*"
Also be aware that *.* does not mean any file under 7-Zip. 7-Zip takes *.* to be the name of any file that has an extension. To process all files, just use "dir\subdir\*" without the extra .*.
Emacs 24 on Ubuntu 14.
I have the file open only in Emacs, and it gives me this warning constantly, after each save. That is annoying.
This is strange, because earlier everything worked fine. I can hardly guess what I could have broken in the meantime. I'm a total newbie in Ubuntu, using it according to instructions found on the internet.
Now I'm using Emacs 23 and everything is fine. I guess I need auto-synchronization of the open buffer with the saved file right after saving. Anyway, how can I fix it?
It sounds like some other program on your computer is reading the file when it changes, and possibly even introducing changes (perhaps just to the modification time, rather than to the contents). It's hard to say off-hand just what that would be.
As a workaround, try M-x global-auto-revert-mode. It will only auto-revert if you have no local modifications since the last save. This is generally a nice mode to turn on if you use multiple editors, and I keep it enabled all the time.
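If it helps, a one-line sketch for making that permanent in your init file (~/.emacs or ~/.emacs.d/init.el), assuming you want it enabled on every start:
(global-auto-revert-mode 1)   ; revert buffers automatically when their files change on disk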
Other ideas:
Check if any other process currently has the file open using fuser /path/to/filename.txt (note: it only shows open file descriptors, not processes that hold the file content in memory and write it later)
Do you use any non-standard filesystem? (check with df -h /path/to/filename.txt and mount)
Is your system time stable? (Manually check date, scan the output of dmesg for obvious errors concerning timekeeping, and look for errors related to NTP in the log files in /var/log/.)
My problem is as described. My script downloads files through an external call to cmd (using the system function and then .NET to make keypresses). The issue is that when it tries to fopen the files I downloaded (filenames from a text file I write as I download), it doesn't find them, causing an error. When I run the script again after seeing it fail, it works, but only up to the point where it tries to download/call new files again, where it runs into the same problem.
Are new files downloaded while a script is running somehow not visible on the search path? Because the folder is most definitely in my search path (seeing as it works outside of during-script downloads). It's not that it isn't getting the files fast enough either, because they appear in my folder almost instantly, and I've tried adding a delay to let it recognize them, but that didn't work either.
I'm not sure if it's important to note that the script calls an external function which tries to read the files from the .txt list I create in the main script.
Any ideas?
The script to download the files looks like so:
NET.addAssembly('System.Windows.Forms');
sendkey = @(strkey) System.Windows.Forms.SendKeys.SendWait(strkey);  % anonymous function that sends keystrokes
system('start cygwinbatch.bat')   % launch the batch file that starts the download
pause(.1)
sendkey(callStr1)                 % callStr1/callStr2 are command strings built earlier in the script
sendkey('{ENTER}')
pause(.1)
sendkey(callStr2)
sendkey('{ENTER}')
pause(.1)
sendkey('exit')
pause(2)
sendkey('{ENTER}')
But that is not the main reason I am asking: I am confident that the downloads are occurring when the script calls them, because I see them appearing in my folder as they are called. I am more confused as to why MATLAB doesn't seem to know they are there while the script is running, and why I have to stop it and run it again for it to recognize the ones I've downloaded already.
Thank you,
Aaron
The answer here is probably to run the 'rehash' function. Matlab does not look for new files while executing an operation, and in some environments misses new files even during interactive activity.
Running the rehash function forces Matlab to search through its full path and determine if there are any new files.
I've never tried to run rehash in the middle of an operation though. ...
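To make the suggestion concrete, here is a minimal, untested sketch of where a rehash call could go in the question's script; newFile stands in for whichever filename the script just downloaded:
system('start cygwinbatch.bat');   % kick off the external download, as in the question
pause(2);                          % give the external process a moment to finish writing
rehash;                            % force MATLAB to rescan its path for newly created files
fid = fopen(newFile, 'r');         % newFile is a placeholder for the just-downloaded filename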
My guess is that the MATLAB interpreter is trying to look ahead and is throwing errors based on a snapshot of what the filesystem looked like before the files were downloaded. Do you get different behavior if you run it one line at a time using F9? If that's the case then you may be able to prevent the interpreter from looking ahead by using eval().
Alright, here's what I'm dealing with (you can skip to TLDR if all you need to see is what I want to run):
I'm having an issue with file formatting for a nasty conglomeration of several ancient programs I've strung together. I have some data in .CSV format, and I need to put it into .SPC format. I've tried a set of proprietary MATLAB programs called 'GS tools' for fast and easy conversion, but fast and easy doesn't look like it's gonna happen here, since there are discrepancies in how .spc files are organized now and how they were organized back when my ancient programs were written.
If I could find the source code for the old programs I could probably alter the GS tools code to write my .spc files appropriately, but all I can find are broken links circa 2002 and earlier. Seeing as I don't know what my programs are looking for, I have no choice but to try resaving my data with other programs until one of them produces something workable.
I found my Cinderella program: if I open the data I have in a program called Spekwin and save the file with a .spc extension... voila! Everything else runs on those files. The problem is that I have hundreds of these files and I'd like to automate the conversion process.
I either need to extract the writing rubric Spekwin uses for .spc files (I believe that info is stored in a dll file within the program, but I'm not sure if that actually makes sense) and use it as a rule to write a file from my input data, or I need a piece of code that will open a file with Spekwin, tell Spekwin to save that file under the .spc extension, and terminate Spekwin.
TLDR: I need a command that tells the computer to open a file with a certain program, save that file under a different extension through that program (essentially open *.csv > save as > *.spc), then terminate the program.
OR--I need a way to tell MATLAB to write a file according to rules specified by a .dll, but I'm not sure I fully understand what that entails.
Of course I'm open to suggestions on other ways to handle this.
I have several files in a GridFS document store, and what I'd like to do is pipe this data into a zip file via stdin in Node.js, so that I end up with a zip file containing all these files.
Now my question is: how can I give the files a valid filename inside the zip file? I think I need to emulate/fake a file header containing the filename?
Any help is appreciated!
Thanks
I had problems when writing zip files with Node.js not long ago. I ended up doing something similar to what is described in Zip archives in node.js
I can't help you directly with your problem, but at least I hope I can point out some things:
Don't try to use node-archive. Even if the description says it allows creating zip files, the moment I read the source code (since the documentation is nonexistent) I realized that's just a lie. It only exposes methods for reading.
Using zip by spawning a process, as recommended in the linked post, seems to be the best way. Something that would work is copying the files to a local folder with whatever names you desire, then calling the zip command, and deleting the files afterwards (see the sketch after this list).
The other option, which seems OK, is to use zipper (https://github.com/rubenv/zipper, although it's better to just use npm). The reason I'm not really keen on using it is that there's not that much flexibility; it seems to have been done in a day and it hasn't been modified since the first commit, so I'm not sure it will receive maintenance (sure, you could just fork it...).
I swear, the day I have an entire free weekend with no work I will write a freaking module that does this as completely as possible. It's silly that there isn't one, and it shouldn't be that much of a struggle. blablablarant.
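To illustrate the spawn-a-process option above, here is a minimal sketch, not a drop-in solution: it assumes a Unix-like system with the zip CLI installed, and that the GridFS files have already been copied into a staging folder under the filenames you want inside the archive (stagedPaths and the output path are placeholders).
var spawn = require('child_process').spawn;
var fs = require('fs');

// Placeholder paths: these are the staged copies of the GridFS files.
var stagedPaths = ['/tmp/zip-staging/report.pdf', '/tmp/zip-staging/data.csv'];

// -j stores just the file names (no directory prefix) inside the archive.
var zip = spawn('zip', ['-j', '/tmp/out.zip'].concat(stagedPaths));

zip.on('close', function (code) {
  if (code !== 0) { return console.error('zip failed with exit code ' + code); }
  // Clean up the staged copies afterwards, as suggested above.
  stagedPaths.forEach(function (p) { fs.unlinkSync(p); });
  console.log('created /tmp/out.zip');
});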
Edit:
Not sure if it was there before, but now I've been using the node-compress module (also using gzippo). It works fine.