What should I be careful to avoid in order for my CoffeeScript code to run on both Node.js and in the browser? The obvious answer is "don't use Node.js functions," but I was wondering if there are other minor gotchas that would break porting the code between the two.
Assuming you don't rely on any APIs beyond the language itself (e.g. you don't use any functions other than setTimeout/clearTimeout and setInterval/clearInterval and those attached to Math), there are just two things to worry about:
You can rely on newer JS features like Array::forEach and Array::indexOf being around in Node, but not in the browser. CoffeeScript helps you avoid these two gotchas with the for x in arr and if x in arr syntaxes, respectively.
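For example, a quick sketch of both forms (the descriptions of the compiled output are paraphrased from what the CoffeeScript compiler emits):

# `for x in arr` compiles to a plain indexed for loop, no Array::forEach needed
sum = 0
sum += x for x in [1, 2, 3]

# `if x in arr` compiles to an indexOf-style membership test with a built-in
# fallback helper, so it works even where Array::indexOf is missing
console.log 'found' if 2 in [1, 2, 3]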
In the browser, the global object is window; in Node, the global object is global, but you usually want to export things instead. So the usual solution, as demonstrated by Underscore.js and others, is to write root = this at the top of your module and attach everything to root. In the outermost scope, this points to window in browsers and exports in Node.
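A minimal sketch of that pattern (the myLibrary name is just a placeholder):

# Outermost scope: `this` is `window` in browsers and `exports` in Node
root = this

root.myLibrary =
  greet: (name) -> "Hello, #{name}!"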
I'm assuming here that you're defining your module in a single script. If not, you should look at a tool like sstephenson's stitch, which lets you write a set of modules that can require each other in Node, then "stitch" them together for browsers.
I would like to have a DML device with interfaces and register banks as the TOP-level of my device but offload processing to Python. Is there a lightweight method of calling into Python from DML?
The post "How can I unit test a specific DML method?" addresses calling from Python into DML, but I am interested in the reverse.
I think I can create a bunch of custom interfaces to do this, but I'm interested to know if there's a better way.
Implementing parts of a device in Python can make sense, in particular for code that is seldom invoked (user interaction comes to mind), for tasks like string manipulation where the shortcomings of a C-like language are particularly painful, or when code sharing with CLI commands is desired.
You can use the SIM_call_python_function API to call out to Python from DML. The function applies the equivalent of eval to the passed string, so you can reach a module-local function via __import__:
// Build an empty argument list to pass along with the call
local attr_value_t a = SIM_make_attr_list(0);
// The string argument is evaluated as Python, so __import__ can be
// used to reach a function in any module
local attr_value_t result = SIM_call_python_function(
    "__import__('simics').SIM_version", &a);
log info: "%s", SIM_attr_string(result);
// attr_value_t values own their contents and must be freed
SIM_attr_free(&a);
SIM_attr_free(&result);
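The same trick reaches functions in your own module's accompanying Python code. A sketch, with made-up module and function names, and assuming the Python function returns a string:

// Pass a single integer argument (hypothetical module and function)
local attr_value_t args = SIM_make_attr_list(1, SIM_make_attr_uint64(42));
local attr_value_t res = SIM_call_python_function(
    "__import__('my_module_commands').process_request", &args);
log info: "result: %s", SIM_attr_string(res);
SIM_attr_free(&args);
SIM_attr_free(&res);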
This API is admittedly not very pretty. Parts of the standard lib for DML 1.2 are in fact written in Python and use the internal function VT_call_python_module_function for this task instead. That API is nicer, but we cannot recommend using VT_ functions outside the core Simics team.
This question is really more about the Simics simulator framework than about DML. Given how Simics works, Python code lives in a separate context. There is no obvious lightweight way; rather, you need to put the Python code in a separate Python class and a separate runtime object, and use a Simics simulator interface to orchestrate the calling.
The Python code would in general not be able to access the state of the object anyway, so it would have to be a strict "compute this and return all computed values" operation, which is not very convenient for device behavior that is really all about mutating the state of the object.
Even with a bunch of interfaces, there is still the matter of state handling. I guess you need an interface for that as well.
I want to outsource some code for a plugin system. Inside my project, I have a trait called Provider which is the code for my plugin system. If you activate the feature "consumer" you can use plugins; if you don't, you are an author of plugins.
I want plugin authors to get their code into my program by compiling it to a shared library. Is a shared library a good design decision? Plugins are limited to being written in Rust anyway.
Does the plugin host have to go the C way for loading the shared library: loading an unmangled function?
I just want authors to use the trait Provider for implementing their plugins and that's it.
After taking a look at sharedlib and libloading, it seems impossible to load plugins in an idiomatic Rust way.
I'd just like to load trait objects into my ProviderLoader:
// lib.rs
pub struct Sample { /* ... */ }

pub trait Provider {
    fn get_sample(&self) -> Sample;
}

pub struct ProviderLoader {
    plugins: Vec<Box<Provider>>,
}
When the program is shipped, the file tree would look like:
.
├── fancy_program.exe
└── providers
├── fp_awesomedude.dll
└── fp_niceplugin.dll
Is that possible if plugins are compiled to shared libs? This would also affect the decision of the plugins' crate-type.
Do you have other ideas? Maybe I'm on the wrong path and shared libs aren't the holy grail.
I first posted this on the Rust forum. A friend advised me to give it a try on Stack Overflow.
UPDATE 3/27/2018:
After using plugins this way for some time, I have to caution that in my experience things do get out of sync, and it can be very frustrating to debug (strange segfaults, weird OS errors). Even in cases where my team independently verified the dependencies were in sync, passing non-primitive structs between the dynamic library binaries tended to fail on OS X for some reason. I'd like to revisit this, find what cases it happens in, and perhaps open an issue with Rust, but I'm going to advise caution with this going forward.
LLDB and valgrind are near-essential to debug these issues.
Intro
I've been investigating things along these lines myself, and I've found there's little official documentation for this, so I decided to play around!
First let me note: as there is little official word on these properties, please do not rely on any code here if you're trying to keep planes in the air or nuclear missiles from errantly launching, at least not without doing far more comprehensive testing than I've done. I'm not responsible if the code here deletes your OS and emails an erroneous tearful confession of committing the Zodiac killings to your local police; we're on the fringes of Rust here, and things could change from one release or toolchain to another.
I have personally tested this on Rust 1.20 stable in both debug and release configurations on Windows 10 (stable-x86_64-pc-windows-msvc) and CentOS 7 (stable-x86_64-unknown-linux-gnu).
Approach
The approach I took was a shared common crate, listed as a dependency by both the host and the plugin, defining the common struct and trait definitions. At first I was also going to test having a struct with the same structure, or a trait with the same definition, defined independently in both libraries, but I opted against it because it's too fragile and you wouldn't want to do it in a real design. That said, if anybody wants to test this, feel free to do a PR on the repository above and I will update this answer.
In addition, the Rust plugin was declared dylib. I'm not sure how compiling as cdylib would interact, since I think it would mean that upon loading the plugin there are two versions of the Rust standard library hanging around (since I believe cdylib statically links the Rust stdlib into the shared object).
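For reference, the relevant Cargo.toml setting for the plugin crate (a sketch; the path is illustrative):

# plugin/Cargo.toml
[lib]
crate-type = ["dylib"]   # cdylib would instead bundle the Rust stdlib statically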
Tests
General Notes
The structs I tested were not declared #[repr(C)]. This could provide an extra layer of safety by guaranteeing a layout, but I was most curious about writing "pure" Rust plugins with as little "treating Rust like C" fiddling as possible. We already know you can use Rust via FFI by wrapping things in opaque pointers, manually dropping, and such, so it's not very enlightening to test this.
The function signature I used was pub fn foo(args) -> output with the #[no_mangle] directive. It turns out that rustfmt automatically changes extern "Rust" fn to simply fn. I'm not sure I agree with this in this case, since they are most certainly "extern" functions here, but I will choose to abide by rustfmt.
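To make those conventions concrete, here is a sketch of an exported plugin function; the crate layout and every name below are illustrative rather than taken from the tested repository, and Sample is assumed to implement Default:

// plugin/src/lib.rs -- compiled with crate-type = ["dylib"]
extern crate common;
use common::{Provider, Sample};

struct AwesomeDude;

impl Provider for AwesomeDude {
    fn get_sample(&self) -> Sample {
        // assumes the common crate gives Sample a Default impl
        Sample::default()
    }
}

// The only unmangled symbol the host needs to find; everything else
// is reached through the returned trait object
#[no_mangle]
pub fn get_provider() -> Box<Provider> {
    Box::new(AwesomeDude)
}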
Remember that even though this is Rust, this has elements of unsafety, because libloading (or the unstable DynamicLib functionality) will not type-check the symbols for you. At first I thought my Vec test was proving you couldn't pass Vecs between host and plugin, until I realized that on one end I had Vec<i32> and on the other I had Vec<usize>.
Interestingly, there were a few times I pointed an optimized test build at an unoptimized plugin and vice versa and it still worked. However, I still can't in good faith recommend building plugins and host applications with different toolchains, and even if you do, I can't promise that rustc/LLVM won't decide to do certain optimizations on one version of a struct and not another. In addition, I'm not sure whether passing types through FFI prevents certain optimizations, such as null pointer optimizations, from occurring.
You're still limited to calling bare functions, no Foo::bar because of the lack of name mangling. In addition, due to the fact that functions with trait bounds are monomorphized, generic functions and structs are also out. The compiler can't know you're going to call foo<i32> so no foo<i32> is going to be generated. Any functions over the plugin boundary must take only concrete types and return only concrete types.
Similarly, you have to be careful with lifetimes for similar reasons: since there's no static lifetime checking, Rust is forced to believe you when you say a function returns &'a when it's really &'b.
Native Rust
The first tests I performed used no custom structures, just pure, native Rust types. This would give a baseline for whether this is even possible. I chose three baseline types: &mut i32, &mut Vec, and Option<i32> -> Option<i32>. These were all chosen for very specific reasons: &mut i32 because it tests a reference, &mut Vec because it tests growing the heap from memory allocated in the host application, and the Option for the dual purpose of testing passing by move and matching a simple enum.
All three work as expected. Mutating the reference mutates the value, pushing to a Vec works properly, and the Option works properly whether Some or None.
Shared Struct Definition
This was meant to test whether you could pass a non-builtin struct with a common definition on both sides between plugin and host. This works as expected, but as mentioned in the "General Notes" section, I can't promise that Rust won't optimize a structure definition on one side and not the other. Always test your specific use case and use CI in case it changes.
Boxed Trait Object
This test uses a struct whose definition exists only on the plugin side, but which implements a trait defined in a common crate, and returns a Box<Trait>. This works as expected: calling trait_obj.fun() works properly.
At first I actually anticipated there would be issues with dropping without making the trait explicitly have Drop as a bound, but it turns out Drop is properly called as well (this was verified by setting the value of a variable declared on the test stack via raw pointer from the struct's drop function). (Naturally I'm aware drop is always called even with trait objects in Rust, but I wasn't sure if dynamic libraries would complicate it).
NOTE:
I did not test what would happen if you load a plugin, create a trait object, then drop the plugin (which would likely close it). I can only assume this is potentially catastrophic. I recommend keeping the plugin open as long as the trait object persists.
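For reference, a sketch of the host side using libloading, written so the Library stays alive alongside the trait object (get_provider and the common crate carry over from the illustrative plugin sketch above):

extern crate common;
extern crate libloading;

use common::Provider;
use libloading::{Library, Symbol};

fn load_provider(path: &str) -> (Library, Box<Provider>) {
    let lib = Library::new(path).expect("failed to open plugin");
    let provider = unsafe {
        // libloading cannot type-check this signature; getting it
        // wrong is undefined behavior
        let ctor: Symbol<fn() -> Box<Provider>> =
            lib.get(b"get_provider").expect("symbol not found");
        ctor()
    };
    // Return the Library too: dropping it would unload the plugin
    // while the trait object still points into its code
    (lib, provider)
}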
Remarks
Plugins work much as you'd expect from linking a crate naturally, albeit with some restrictions and pitfalls. As long as you test, I think this is a very natural way to go. It makes symbol loading more bearable, for instance, if you only need to load a new function and then receive a trait object implementing an interface. It also avoids nasty C memory leaks from a drop/free function you forgot, or weren't able, to load. That said, be careful, and always test!
There is no official plugin system, and you cannot do plugins loaded at runtime in pure Rust. I saw some discussions about doing a native plugin system, but nothing is decided for now, and maybe there will never be any such thing. You can use one of these solutions:
You can extend your code with native dynamic libraries using FFI. To use the C ABI, you have to use repr(C), the no_mangle attribute, extern, etc. You will find more information by searching for Rust FFI on the internet. With this solution, you must use raw pointers: they come with no safety guarantees (i.e. you must use unsafe code).
Of course, you can write your dynamic library in Rust, but to load it and call the functions, you must go through the C ABI. This means that the safety guarantees of Rust do not apply there. Furthermore, you cannot use the highest-level Rust functionality, such as traits, enums, etc., between the library and the binary.
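A sketch of what that C-ABI surface tends to look like; every name here is made up for illustration:

#[repr(C)]
pub struct PluginInfo {
    pub api_version: u32,
}

#[no_mangle]
pub extern "C" fn plugin_info() -> *mut PluginInfo {
    // hand ownership to the host as a raw pointer
    Box::into_raw(Box::new(PluginInfo { api_version: 1 }))
}

#[no_mangle]
pub extern "C" fn plugin_info_free(info: *mut PluginInfo) {
    // the host must remember to call this, or the struct leaks
    if !info.is_null() {
        unsafe { drop(Box::from_raw(info)); }
    }
}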
If you do not want this complexity, you can use a language adapted to extending Rust: one that lets you dynamically add functions to your code and execute them with the same guarantees as in Rust. This is, in my opinion, the easier way to go: if you have the choice, and if execution speed is not critical, use this to avoid tricky C/Rust interfaces.
Here is a (not exhaustive) list of languages that can easily extend Rust:
Gluon, a functional language like Haskell
Dyon, a small but powerful scripting language intended for video games
Lua with rlua or hlua
You can also use Python or JavaScript, or see the list in awesome-rust.
It's all in the title, really.
In the Babel documentation, there is the following line on the page describing babel-runtime
Another purpose of this transformer is to create a sandboxed environment for your code. Built-ins such as Promise, Set and Map are aliased to core-js so you can use them seamlessly without having to require a globally polluting polyfill.
The polyfill is just that: a separate JavaScript file, included once, which shims up some missing things.
I've tested the polyfill vs. using babel-runtime with my build tools (webpack), and my files are slightly smaller when I use babel-runtime.
I'm not developing a library or plugin, just a web application, and also do not care about the global scope being polluted. Knowing this, other than the slightly smaller final filesize, are there any other practical benefits or points in using the runtime over the polyfill?
If you don't care about polluting global scope, the polyfill is the better option: the runtime does not work with instance methods such as [0, 1, 2].includes(1), while the polyfill will.
The main difference between the two is that the polyfill pollutes global scope, and works with instance methods, while the runtime does not pollute global scope and does not work with instance methods.
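A quick sketch of that difference in practice (the import below is the standard babel-polyfill entry point):

// with the polyfill, imported once at the app's entry point
import 'babel-polyfill';

// works: the polyfill patches Array.prototype globally
[0, 1, 2].includes(1);

// babel-runtime, by contrast, only rewrites static references it can
// recognize at compile time (Promise, Object.assign, ...); it cannot
// know the type of an arbitrary expression, so the .includes call
// above is left untouched and throws on engines that lack it.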
The polyfill also will not allow you to include two separate versions of itself into your code. This is only a problem if two separate polyfills are being required/imported somewhere in your code.
I have different versions of some libraries I'm developing, and want to, from within various plperl functions I've written, load a certain version based on current_database().
(IIRC using use rather than require is preferred, I think because it might cache the library?)
However, my fear is that different databases on the same server will have problems, in either way that I'm thinking of doing it:
1) use lib and then use -- if more than one path gets stuck on @INC, it may not be the right one that gets used
2) require -- even if this means the right one is always used in the current script, does it mean that the library gets reloaded every time? And either way, if libraries are kept loaded once used, is it possible that namespace pollution from different versions could result in bugs? (E.g., if I have something branch based on whether a variable is defined, and in one version it is by default, and in another it's not, will all the versions now act as if it is, unless I explicitly undefine it rather than just not defining it?)
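For concreteness, roughly what I have in mind (paths and module names are illustrative; note that trusted plperl disallows require, so this assumes plperlu):

# inside a plperlu function body
my $db = spi_exec_query('SELECT current_database() AS db')->{rows}[0]{db};

# pick a version-specific directory for this database, then load
unshift @INC, "/opt/mylibs/$db";
require My::Library;   # cached in %INC after the first load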
If plperl is not loaded through shared_preload_libraries, each database session will have its own interpreter freshly initialized at first use, so the libraries included by one session cannot possibly interfere with another session.
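You can check which situation you're in from psql (a sketch):

-- empty output means plperl is not being preloaded
SHOW shared_preload_libraries;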
See PL/Perl Under the Hood in the manual for more.
I want to create a big file for all the cool functions I find somehow reusable and useful, and put them all into that single file. Well, for now I don't have many, so it's not worth thinking much about splitting them into several files, I guess. I would use pragma marks to separate them visually.
But the question: Would those unused methods bother in any way? Would my application explode or have less performance? Or is the compiler / linker clever enough to know that function A and B are not needed, and thus does not copy their "code" into my resulting app?
This sounds like an absolute architectural and maintenance nightmare. As a matter of practice, you should never make a huge blob file with a random set of methods you find useful. Add the methods to the appropriate classes or categories. See here for information on the blob anti-pattern, which is what you are doing here.
To directly answer your question: no, methods that are never called will not affect the performance of your app.
No, they won't directly affect your app. Keep in mind, though, that all that unused code is going to make your functions file harder to read and maintain. Plus, writing functions you're not actually using at the moment makes it easy to introduce bugs that won't become apparent until much later, when you start using those functions. That can be very confusing, because you've forgotten how they're written and will probably assume they're correct, since you haven't touched them in so long.
Also, in an object oriented language like Objective-C global functions should really only be used for exceptional, very reusable cases. In most instances, you should be writing methods in classes instead. I might have one or two global functions in my apps, usually related to debugging, but typically nothing else.
So no, it's not going to hurt anything, but I'd still avoid it and focus on writing the code you need now, at this very moment.
The code would still be compiled and linked into the project; it just wouldn't be used by your code, meaning your resultant executable will be larger.
I'd probably split the functions into separate files, depending on the common areas they address, so I'd have a library of image functions separate from a library of string-manipulation functions, then include whichever are pertinent to the project at hand.
I don't think having unused functions in the .h file will hurt you in any way. If you compile all the corresponding .m files containing the unused functions in your build target, then you will end up making a bigger executable than is required. Same goes for if you include the code via static libraries.
If you do use a function but you didn't include the right .m file or library, then you'll get a link error.