Can Racket get free disk space statistics without executing and parsing the output of external executables?

I'm looking for a native Racket way to get statistics about the host machine the application is running on, such as free disk space, memory use, and processor use. So far I haven't found a library that reports this information within Racket; is there an idiomatic method to get it, or is the only way to find external executables and parse their output on each platform the application runs on?

I'm confident that you're not going to find this functionality in a library that's bundled with Racket. On the other hand, I see no reason why this couldn't be implemented as a Racket library that does not use an external executable. I'm not saying that it has been, only that it almost certainly could be implemented as a library. Does that answer your question?
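For what it's worth, the underlying OS facilities are simple enough that such a library would mostly be a thin FFI wrapper. On POSIX systems, for example, free disk space comes from statvfs(2); here is a minimal C sketch of the call a Racket FFI binding could wrap (the "/" mount point is just an example):

    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(void)
    {
        struct statvfs vfs;

        /* statvfs(2) reports filesystem statistics directly from the
           kernel, with no external process involved */
        if (statvfs("/", &vfs) != 0) {
            perror("statvfs");
            return 1;
        }

        /* blocks available to unprivileged users times the fragment
           size gives the free space in bytes */
        unsigned long long free_bytes =
            (unsigned long long)vfs.f_bavail * vfs.f_frsize;
        printf("free: %llu bytes\n", free_bytes);
        return 0;
    }

Memory and CPU statistics would need a different source on each platform (e.g. /proc on Linux, sysctl on the BSDs), which is presumably why no portable library exists yet.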

Related

Interact with a locally long-running Common Lisp image, possibly daemonized, from the command line

How could one interact with a locally long-running Common Lisp image, possibly daemonized, from the command line?
I know it is possible to run a Common Lisp function from a terminal command prompt.
I would need to do a similar thing, but with a local, already long-running Common Lisp image, being able to poll available functions from the CLI or shell scripts.
Is there a way to do that from a CLI, for example calling a function from a bash script, and receiving back whatever the function returns?
I thought one could, for example, create a primitive web service, perhaps using woo or Hunchentoot, calling functions and fetching returned values via curl or wget, but it feels a little convoluted.
Of course, that is one of the many features of Emacs' SLIME, but I would need to call functions just from the CLI, without invoking Emacs.
Is there perhaps a way to reach a swank backend, outside of SLIME?
If possible at all, what would be the idiomatic Lisp way of doing that?
I would be grateful for any pointers.
Update
Additional note
Many years ago, I was intrigued by being able to telnet into a long-running LISP image (I believe in this case uppercasing the name should be fine). If I remember correctly, it was available at prompt.franz.com. A somewhat related article: telnet for remote access to a running application
Telnet is of course quite unsafe, but the usefulness of being able to access the Lisp application(s) in that way, for whatever reason, cannot be overstated, at least to some people.
Some additional pointers, and thanks
I would like to thank Basile Starynkevitch for his elaborate and thorough answer, especially on the theoretical aspect. I was looking for a more practical direction, specifically connected to Common Lisp. Still, his answer is very instructive.
I was all ready to start writing a local server, perhaps using one of the fine Common Lisp libraries, like:
usocket: Universal socket library for Common Lisp
iolib: Common Lisp I/O library
cl-async: Asynchronous IO library for Common Lisp
But, thanks to Stanislav Kondratyev, I didn't have to. He pointed out an already existing solution that nicely answers my question, ScriptL: Shell scripting made Lisp-like
I tested it with success on Linux, FreeBSD, and OS X; just make sure to install the thin wrapper over POSIX syscalls first. Among its many features (see the README), it allows exposing only selected functions, security is properly handled, and it even supplies a custom C client, which builds as part of the ASDF load operation and, used in place of netcat, supports a number of extra features, such as I/O.
You may find ScriptL useful: http://quickdocs.org/scriptl/. However, it depends on iolib, which depends on some nonstandard C or C++ library, so building it is not quite straightforward.
It is indeed possible to communicate with a swank server if you familiarize yourself with the swank protocol, which seems to be underdocumented (see e.g. https://github.com/astine/swank-client/blob/master/swank-description.markdown). However, this exposes a TCP socket over the network, which could be unsafe. I once tried that too, but I was not satisfied with the IPC speed.
A while ago I wrote a rather naive SBCL-specific server that uses a local domain socket for communication, and a client in C. It's very raw, but you could take a look: https://github.com/quasus/lispserver. In particular, it supports interactive IO and exit codes. The server and the client form the core of a simple framework for deploying Unix style software. Feel free to borrow code and/or contact me for explanations, suggestions, etc.
It certainly is operating system specific, because you want some inter-process communication, and they are provided by the OS.
Let's assume you have a POSIX like OS, e.g. Linux.
Then you could set up a socket(7) or fifo(7) to which you send s-exprs to be evaluated. Of course you need to adapt the Common Lisp program to add such a REPL.
SBCL has some networking facilities; you could build on them.
Of course, you should first understand how IPC works on your OS. If it is Linux, you could read Advanced Linux Programming (it is centered on C programming, which is the low-level way of using OS services on POSIX, but you can adapt what you learn to SBCL). And indeed, the Common Lisp standard does not have an official POSIX interface, so you need to dive into implementation-specific details.
Perhaps you should learn more about BSD sockets. There are tons of tutorials on them. Then you could use TCP sockets (see tcp(7)) or Unix ones (see unix(7)). Advanced users could use the unsafe telnet command. Or you might make your software use SSL, or perhaps some libssh e.g. use ssh as their client.
You could decide and document that the protocol between user apps and your program is: send an s-expr (on a documented socket) from user-app code to your server, terminated by a double newline or by a form feed, and have your server eval it and send back the result or some error message. I did similar things in MELT and it is not a big deal. Be careful about buffering.
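As a concrete sketch of the client side of such a protocol, here is a minimal C program that connects to a Unix-domain socket, sends one s-expr terminated by a form feed, and prints whatever the server sends back. The socket path /tmp/lisp-repl.sock is a made-up example, and the single read() is deliberately naive:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        /* hypothetical path: the server should document its own */
        strncpy(addr.sun_path, "/tmp/lisp-repl.sock",
                sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        /* one s-expr, terminated by a form feed as agreed above */
        const char *req = "(+ 1 2)\f";
        write(fd, req, strlen(req));

        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1); /* naive: one read */
        if (n > 0) { buf[n] = '\0'; printf("%s\n", buf); }
        close(fd);
        return 0;
    }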
I guess you assume that you have responsible and competent users (so don't open such a remote REPL to the wild Internet!). If you care about malicious or stupid use of a remote REPL, things become complex.
Alternatively, make your server a web application (by using some HTTP server library inside it), and ask users to use their browser or some HTTP client program (curl) or library (libcurl) to interact with it.

Custom Programming Language ~ How to interact with the Operating System

I am trying to create my own programming language, but I am already thinking ahead a little.
Of course, when I can first compile a little program, I won't have a standard library yet,
and I'd have to create one myself. Now suppose, for example, I'd like to add some functionality to print a string to the screen; I am pretty sure I'd have to make a few system calls to the operating system to get this displayed.
So to the point: What would be the best way to interact with the Operating System?
Possibilities I came up with myself:
- Generate object files and link those to (for example) the C standard library
- Write the files with embedded assembly language containing system calls
I have a feeling there are better possibilities!
I hope you can help me,
Christian
EDIT: It's a compiled language I am creating!
You have basically two options, like you say yourself. You can link your standard library with the standard C library, so that the I/O functions in your standard library can use C functions. Alternatively you can make system calls to the operating system directly.
The second approach seems like it's going to be more work: The system calls will be different on each operating system, so you'll have to put a lot of work into porting your system. The system calls may not be well documented, causing many frustrations.
You could start by linking your standard library to the C standard library and worry about other aspects of your language for the moment. Later you can look into replacing the C functions you use with syscalls.
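To make the two options concrete, here is what "print a string" looks like at both levels from C; your compiler's generated code (or its runtime library) would ultimately target one of these. Note that write(2) is POSIX-specific, while the libc route is portable:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* option 1: go through the C standard library (portable) */
        fputs("hello via libc\n", stdout);

        /* option 2: call the OS more directly through the write(2)
           syscall wrapper (POSIX; other OSes expose different calls) */
        const char *msg = "hello via syscall\n";
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }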
Stdin and stdout are pretty much the minimum - if you are using a Unix environment you can then gain access to the keyboard and command line, and also pipe text in and out of files.
If you are writing a new language, then stderr may be worth considering too!

Zotero: which export format should I use?

Which of Zotero's export formats would you recommend regarding
- portability with similar programs
- the possibility of reading and adding new entries with a Perl script?
Much of this depends on what other software you will be working with. Any flexible read/write connection to Zotero should probably use the server API; there are already pretty strong client libraries in Python and PHP that you can explore, and it would be reasonable to write one in Perl.
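As a sketch of what talking to the server API looks like (from C with libcurl here, but a Perl client would have the same shape), assuming API version 3 and with a placeholder user ID and key:

    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* 123456 is a placeholder: use your numeric Zotero user ID */
        curl_easy_setopt(curl, CURLOPT_URL,
            "https://api.zotero.org/users/123456/items?format=json");

        /* "..." is a placeholder for a real API key */
        struct curl_slist *hdrs = NULL;
        hdrs = curl_slist_append(hdrs, "Zotero-API-Version: 3");
        hdrs = curl_slist_append(hdrs, "Zotero-API-Key: ...");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

        /* the default write callback prints the JSON body to stdout */
        CURLcode rc = curl_easy_perform(curl);

        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }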
If you just need read access, or read access in addition to write access, there is a Python library, libzotero, that's provided by the wonderful qnotero tool. It opens a read-only connection to a local Zotero installation's underlying sqlite database. If you need quick read access and searching, that library or its approach will serve you well.
Without using the server API, it's also possible to use the Firefox extension MozRepl with the MozRepl CPAN module to get programmatic access to a running local Zotero instance. This is pretty powerful, but it means that you need to send JavaScript to MozRepl. This approach is used with elisp to implement Zotero access for org-mode, zotero-plain.
If you definitely want an export format, the most expressive option is Bibliontology RDF, but not much out there understands it. MODS export from Zotero is also pretty solid, and it can be converted into pretty much anything else using the superb bibutils package.
And the main place for questions like this is the mailing list zotero-dev, where you'll find just about everyone who works on programming in the broader Zotero ecosystem, so it may be worth stopping by there as well.
I made a Perl module for my own purposes that tries to improve the reliability of MozRepl communications. Feel free to reuse anything you need. Source is here

Executing prolog code on an iPhone

I currently have the need to execute prolog code in an application I am making. I am aware that Apple probably never would allow something like this in the App Store, but that is not the intention either. This is more a private project that will never reach the App Store.
Purpose
In this case prolog is used to describe an object (like for example a telephone) and its properties. The object will be drawn with OpenGL using coordinates specified in the prolog script. The reason for using prolog is that I need the ability to query the program about some of the features this object has, and prolog eases this a lot. Bottom line: I "need" to query a prolog script from my app.
Possible solutions
Embed an already existing implementation written in C. I am unsure if this will even work.
Execute the prolog code on another machine and use the network to query prolog.
It seems that it is possible to run some sort of Ruby VM inside the app (shinycocos uses this, as far as I understand); could this be used to run one of the Ruby Prolog implementations?
Find some alternative to Prolog. This needs to give me some of the same possibilities I get with prolog.
Sadly, Google gives me close to no results at all, so I have a feeling that I might be quite alone on this project. If anyone has any experience or clue at all, I would be very thankful.
Having faced similar difficulties calling Prolog code, albeit in a different situation, I'd recommend checking out the Castor C++ library. It allows you to write logic-paradigm code in native C++ without needing to extend the language at all. As Castor is a header-only library, it is easy to compile wherever C++ is available.
Castor website: http://www.mpprogramming.com/cpp/default.aspx
Half a year later, I would just like to provide some insight on this. I ended up writing a server in Java with an interface to Prolog, accepting Prolog calls through TCP. It works almost exactly like the live Prolog interpreter that SWI-Prolog (among others) provides, and it mostly works quite well. However, it is far from an optimal solution, as you can't call functions from inside Prolog: you lose the possibility of two-way communication.
If I were to start all over again, I would certainly try harder to compile one of the pure C implementations for iOS. I gave it a quick go, but my lack of experience stopped me from even clearing all of the errors I got. Judging by the fact that you cannot have Prolog running as a background process on an unmodified version of iOS either, some major rewriting would have to be done. Because of this, one might just have to write a new implementation from scratch (perhaps inspired by some of the more lightweight ones out there) to get the perfect solution.
You can download SWI-Prolog's source code and compile it with Xcode for the iOS platform. I've never done that, but it's certainly technically possible.
Once you do that, there are a lot of examples of how to run Prolog code from C/C++; hence, you will be able to run Prolog from Objective-C.
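For reference, the embedding side looks roughly like this; it is a minimal sketch of SWI-Prolog's C foreign-language interface (these calls come from SWI-Prolog.h, but check the current docs for build flags on each platform):

    #include <stdio.h>
    #include <SWI-Prolog.h>

    int main(int argc, char **argv)
    {
        /* boot the embedded Prolog engine */
        if (!PL_initialise(argc, argv)) {
            fprintf(stderr, "failed to initialise Prolog\n");
            return 1;
        }

        /* query a built-in: current_prolog_flag(bounded, X) */
        predicate_t pred = PL_predicate("current_prolog_flag", 2, "system");
        term_t args = PL_new_term_refs(2);
        PL_put_atom_chars(args, "bounded");

        qid_t q = PL_open_query(NULL, PL_Q_NORMAL, pred, args);
        if (PL_next_solution(q)) {
            char *value;
            if (PL_get_atom_chars(args + 1, &value))
                printf("bounded = %s\n", value);
        }
        PL_close_query(q);

        PL_halt(0); /* clean shutdown; does not return */
        return 0;
    }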
FYI, you can quite easily make bi-directional calls between Java and SWI-Prolog if you use JPL:
http://www.swi-prolog.org/packages/jpl/
It is also fully re-entrant, so you can invoke Prolog code from Java, which in turn invokes Java code, and so on...
I did this for a number of commercial projects a few years ago when I was required to connect a Prolog based Reasoning Engine to a lot of Java code.
It does use JNI (the Java Native Interface), so you need to be careful about how you compile and link to the native API. But if you compile it appropriately for each platform, you can make it work cross-platform. I had it working on OS X, Windows, Linux, and Solaris.
I do not know if this has been tried, but there is the possibility of using the combination of Node.js for Mobile Apps and Tau Prolog:
https://code.janeasystems.com/nodejs-mobile
https://github.com/JaneaSystems/nodejs-mobile
and
http://tau-prolog.org/

Does anyone have first-hand experience with the G-WAN web server?

The only place where I found information on the G-WAN web server was the project web site, and it looked very much like advertisement.
What I would really like to know is, for someone who is proficient with C, whether it is as easy to use and extend as other architectures. For now I would mostly focus on scripting abilities.
Are C scripts on G-WAN easy to write?
Can you easily update and upload new C scripts to the server (say, as easily as some PHP or Java pages on other architectures)? Do you have to restart the server when doing so?
Can you easily extend it with third-party or existing C libraries?
Any other feedback welcome.
Well, now that G-WAN is available under Linux, I have been using it for more than 6 months.
The C scripts are fully ANSI-C compatible, so there is no difference for any seasoned C programmer.
To update them on the server, you can edit them directly in the /csp folder (remotely via SSH) or locally on a test machine (and copy them over later): G-WAN reloads scripts on the fly when they have changed on disk (no server restart required).
G-WAN C scripts can use any existing library (starting with all those under /usr/lib) without any configuration or interface code: you just have to write a '#pragma link' followed by the name of the library at the top of your script.
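From memory, a complete G-WAN C script is just an ordinary main() that returns an HTTP status code; the hello-world below follows the servlet examples bundled with G-WAN (get_reply() and xbuf_cat() are its servlet API as I recall it, and the '#pragma link "z"' line is only there to illustrate the linking syntax):

    // hello.c - a minimal G-WAN servlet, modeled on G-WAN's bundled examples
    #pragma link "z"   /* illustration only: pull in an existing library (zlib) */
    #include "gwan.h"  /* G-WAN servlet API */

    int main(int argc, char *argv[])
    {
        /* append the response body to the server-provided reply buffer */
        xbuf_cat(get_reply(argv), "Hello, World!");
        return 200;  /* the HTTP status code sent to the client */
    }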
What I found really useful is the ability to edit C scripts and refresh the view in the Internet browser to see how my code works.
If there is a compilation error, then G-WAN outputs the line in the source code (just like any C compiler).
But where it enters extraordinary territory is when a C script crashes: here also it gives you THE LINE NUMBER IN THE SOURCE CODE (with the faulty call and the backtrace).
Kind of black magic when you are used to Apache modules.
My experience with G-WAN and its C scripts is:
The G-WAN community is very small. Questions you have are mostly answered by its single developer.
I consider the API not mature: it's not as "clean" as Java APIs.
The limitation, but at the same time the power, of C: it's a systems programming language. So writing application logic in it must be done carefully.
You generally need to be a good developer to get good results: if you do something wrong, the server crashes fast and hard (Unix-style).
I've written some scripts now, to try out G-WAN. Overall, it's been very "productive": not many bugs, and it works if you follow the guidelines and don't expect too much of the funky stuff that mature web servers have. However, I get the feeling I'm reinventing the wheel a lot of the time.
G-WAN also supports scripts written in other programming languages (C++, Objective-C, Java, etc.), so you will benefit from whatever native libraries each language implements.
For C scripts, well, the /usr/lib directory lists more than 1,500 libraries that G-WAN can re-use with a simple #pragma link "library".
I found it neat to be able to write a Web application with a part in C, another in C++ and a third one in Java!
This benchmark shows G-WAN faring poorly at handling these tests:
http://joshitech.blogspot.sg/2012/04/performance-nginx-netty-cppcms.html
I have been using G-WAN for about two years. I consider it highly stable and production-ready for static files. I have a number of static sites that have been running for over a year with no issues.
I have built some small-scale dynamic sites in C with it as demos/test projects: a BitTorrent tracker and a real-time analytics platform, both using the KV store for data backing.
In my view, building large-scale dynamic sites in G-WAN is possible, but only with a significant investment in development and support. G-WAN is better suited to building robust, highly scalable, "enterprise grade" applications than to tossing something together over a weekend.
I use G-WAN for a CMS, http://solicms.com, but for now I use Ruby as the primary language.
I have used G-WAN for some preliminary testing and it does benchmark well. However, I have found a few points of concern that make it unlikely that I will use it for any of my projects. It seems to cache responses for about 0.5 seconds to boost the responses/second figure, and I can't have only some of the responses hit the application code. Also, the key/value store is great for caching and temporary data storage, but I'm not sure how well it would work as a real back-end storage method.