Build Scala against different versions of an external API

I'm writing a small library which I'd like to be backwards compatible with older versions of an API, yet use features of the latest API when possible.
So for example, I have a project which uses an external API, which I'll call FooFoo_v1.
Initially, my code looked like this:
// in Widget.scala
val f = new Foo
f.bar
Foo has since released a new version of their API, FooFoo_v2, which adds the bat method. So long as I'm compiling against the new version, this works fine:
// in Widget.scala
val f = new Foo
f.bar
f.bat
But if you try to build against FooFoo_v1, the build obviously fails. Since the bat feature is truly optional, I'd like to allow folks to build my code against either FooFoo_v1 or FooFoo_v2.
Ignoring the details of the dependency management, what's the right high level approach for something like this? My aim is to keep it as simple as possible.

I think you should split your library into two pieces: one using only the features of FooFoo_v1, and another that depends on the first piece and on FooFoo_v2 and uses the FooFoo_v2 features. How to accomplish this depends on your code... If it's too difficult, it's better to follow Rex Kerr's advice and maintain two branches.
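For illustration, a minimal sketch of such a split might look like this (the module comments and Widget class names are made up; only Foo, bar and bat come from the question):
// in widget-core, built against FooFoo_v1 only
class Widget(f: Foo) {
  def run(): Unit = f.bar
}

// in widget-v2, built against widget-core and FooFoo_v2
class WidgetV2(f: Foo) extends Widget(f) {
  override def run(): Unit = { f.bar; f.bat }
}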

I would simply keep separate branches of the project in a repository (one which is sufficiently robust to allow you to edit one and merge effortlessly into the others--git would be my first choice).
If you must do the selection at runtime, then you're limited to using reflection for any new methods.
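For example, a runtime check via plain JVM reflection might look like this (a sketch only; it assumes bat takes no arguments):
val f = new Foo
f.bar
try {
  // look up the method that only exists in FooFoo_v2
  val bat = f.getClass.getMethod("bat")
  bat.invoke(f)
} catch {
  case _: NoSuchMethodException => // running against FooFoo_v1, skip the optional feature
}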

Related

Absolute and relative path conflict in Modelica

I want to build up a tests library and keep it separated from the libraries under development. My first thought is to go for a structure like the following:
PensLib
--Variants
----BallPoint
----FountainPen
----Tests
------TB_BallPoint
HammocksLib
--Variants
----SingleHammock
----DoubleHammock
----Tests
------TB_DoubleHammock
--Systems
----IndoorWalls
----OutdoorWallAndTree
----CoconutPalms
----Tests
------TB_IndoorWalls
Tests
--PensLib
----Variants
------Test_BallPoint // extends PensLib.Variants.Tests.TB_BallPoint
--HammocksLib
----Variants
------Test_DoubleHammock // extends HammocksLib.Variants.Tests.TB_DoubleHammock
----Systems
------Test_IndoorWalls // extends HammocksLib.Systems.Tests.TB_IndoorWalls
For now let's assume that the way I structure my libraries makes sense (which it most likely doesn't). I will soon ask more questions on good practices in setting up the testing environment in Dymola and with the Testing Library.
My question is about the correct way to handle relative and absolute paths within models, if possible at all.
The model PensLib.Variants.Tests.TB_BallPoint is used for developing the variant BallPoint
The model Tests.PensLib.Variants.Test_BallPoint is used for automated testing
I want the model Test_BallPoint to extend the model TB_BallPoint, but I cannot link them. I guess the absolute path PensLib.Variants.Tests.TB_BallPoint is treated as a relative one, since PensLib is found "on the way out" of the Tests library, and from there it goes looking for the rest of the path. Is there perhaps a way to control the path, something like ..\..\..\PensLib\Variants\Tests\TB_BallPoint?
As you already noted, such a setup causes trouble. There are ways around it, namely global name lookup and imports, which I explain briefly below.
Both solutions are fine when such a case comes up only occasionally. But if you have to use them all the time, your setup becomes unnecessarily complicated.
Hence, I suggest you make your life easier and change your package structure:
Either create a dedicated test library for every library, maybe PensLib_Tests and HammocksLib_Tests
Or rename the packages in the Tests library and don't use the exact library names
Global name lookup
You can use absolute class paths. They are denoted with a leading ., so this should work:
extends .PensLib.Variants.Tests.TB_BallPoint;
See Modelica Specification chapter 5: Scoping, Name Lookup, and Flattening for details, especially 5.3.3 Global Name Lookup
Importing
You can simply import the library. Lookup of imports is always performed globally.
import PensLib;
extends PensLib.Variants.Tests.TB_BallPoint;

Kaitai Struct Parameter Type

I am trying to pass a parameter to a ksy file. The parameter is of a type defined in another ksy file. The reason is that I need to access all the fields of the ksy type passed as a parameter.
Is that possible?
If yes, would you please provide me with a syntax code snippet so I can mimic it.
If no, what would be another solution?
Thank You.
Affiliate disclaimer: I'm a Kaitai Struct maintainer (see my GitHub profile).
First, I recommend always using the development version of the Kaitai Struct Web IDE (https://ide.kaitai.io/devel/), not the stable one. The stable IDE deployed at https://ide.kaitai.io/ ships the KS compiler version 0.8, which is indeed the latest stable version, but it is already 2 years old at the moment. The project is under active development, with new bug fixes and improvements coming every week, so the stable Web IDE is pretty much outdated. And thanks to a recent infrastructure enhancement, the devel Web IDE now gets rebuilt every time the compiler is updated, so you can use even the most recent features.
However, you won't be able to simulate the particular situation you describe in the Web IDE, because it can't currently handle top-level parametric types (there is no hook where you can pass your own values as arguments). But it should work in a local environment. You can compile the commontype.ksy and pty.ksy specs in the Web IDE to the target language you want to use (the manual shows how to do it). The code putting it together could look like this (Java):
Commontype ct = new Commontype(new ByteBufferKaitaiStream(new byte[] { 80, 75 }));
Pty r = new Pty(
    new ByteBufferKaitaiStream(new byte[] { 80 }), // IO stream
    ct // commonword
);
Note that the actual parameter order of the Pty constructor may differ between target languages; in Python, for example, the custom params (commonword) come first and then the IO object. Check the generated code in your particular language.
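For reference, the parameter declaration in pty.ksy could look roughly like this (a hedged sketch, not taken from the question; the seq field is made up, and the exact rules for user-defined parameter types should be checked against the current Kaitai Struct documentation):
# pty.ksy (sketch)
meta:
  id: pty
params:
  - id: commonword   # an instance of the type from commontype.ksy
    type: commontype
seq:
  - id: code
    type: u1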

Remove/Add References and Compile antique VB6 application using Powershell

I've been given the task of researching whether one can use PowerShell to automate the managing of references in a VB6 application and then compile its projects afterwards.
There are 3 projects. The requirement is to remove a specific reference in each project, then compile the projects from the bottom up (server > client > interface) and add the references back in along the way (remove references, compile server.dll > add the client's reference to server.dll, compile client.dll > add the interface's reference to client.dll, compile interface.exe).
I'm thinking no, but I was still given the task of finding out for sure. Of course, where does one go to find this out? Why here of course, StackOverflow.
References are stored in the project .VBP files which are just text files. A given reference takes up exactly one line of the file.
For example, here is a reference to DAO database components:
Reference=*\G{00025E01-0000-0000-C000-000000000046}#5.0#0#C:\WINDOWS\SysWow64\dao360.dll#Microsoft DAO 3.6 Object Library
The most important info is everything to the left of the path, i.e. the part containing the GUID (the unique identifier of the library, more or less). The filespec and description text are unimportant, as VB6 will update them to whatever it finds in the registry for the referenced DLL.
An alternate form of reference is for GUI controls, such as:
Object={BDC217C8-ED16-11CD-956C-0000C04E4C0A}#1.1#0; tabctl32.ocx
which for whatever reason never seem to have a path anyway. Most likely you will not need to modify this type of reference, because removing it would almost certainly break forms in the project that rely on those controls.
So in your PowerShell script, the key task would be to either add or remove the individual reference lines mentioned in the question. Unless you are using no form of compatibility at all, the GUID will remain stable. Therefore, you could essentially hardcode the strings you need to add/remove.
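As an illustration, a minimal PowerShell sketch of that edit (the project path and the compile step are hypothetical; the reference line is the DAO example from above):
# Sketch only: remove one reference line from a .vbp, rebuild, then add it back.
$vbp     = 'C:\src\Server\Server.vbp'   # hypothetical path
$refLine = 'Reference=*\G{00025E01-0000-0000-C000-000000000046}#5.0#0#C:\WINDOWS\SysWow64\dao360.dll#Microsoft DAO 3.6 Object Library'

# Drop the line, matching on the GUID since the path and description may vary per machine
(Get-Content $vbp) | Where-Object { $_ -notmatch '00025E01-0000-0000-C000-000000000046' } | Set-Content $vbp

# ... compile here, e.g. with VB6.EXE /make $vbp ...

# Put the reference back before building the next project in the chain
Add-Content -Path $vbp -Value $refLine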
Aside from all that, it's worth thinking through why you need to take this approach at all. Normally, to build a VB6 solution it is totally unnecessary to add/remove references along the way. Also, depending on your choice of deployment techniques, you are probably using either project or binary compatibility, which tends to keep the references stable.
Lastly, I'll mention that there are existing tools, such as Kinook's Visual Build Pro, which already know how to build groups of VB6 projects; if using a 3rd-party tool like that is an option, it could save you a lot of work.

Zend Framework: own functions and classes

Now I have some experience in using the Zend Framework. I want to go deeper into the topic and rewrite some old PHP projects.
What is the best place to keep my own functions and classes?
And how do I tell Zend where they are? Or is there already a folder for my own stuff? Can I have different folders for different files?
For example, I want to save a PHP file named math_b.php which contains several special calculation functions, and another one, date_b.php, which handles datetime stuff. Is that possible, or should I have a separate file for every function?
I would also like to reuse the functions in other projects and then just copy the folders.
There is no single "right" answer for this. However, there are several general guidelines/principles that I commonly employ.
Do not pollute global scope
Namespace your code and keep all functions in classes. So, rather than:
function myFunction($x) {
    // do stuff with $x and return a value
}
I would have:
namespace MyVendorName\SomeComponent;

class SomeUtils
{
    public static function myFunction($x)
    {
        // do stuff with $x and return a value
    }
}
Usage is then:
use MyVendorName\SomeComponent\SomeUtils;
$val = SomeUtils::myFunction($x);
Why bother with all this? Without this kind of namespacing, as you bring more code into your projects from other sources - and as you share/publish your code for others to consume in their projects - you will eventually encounter name conflicts between their functions/variables and yours. Good fences make good neighbors.
Use an autoloader
The old days of having tons of:
require '/path/to/class.php';
in your consumer code are long gone. A better approach is to tell PHP - typically during some bootstrap process - where to find the class MyVendor\MyComponent\MyClass. This process is called autoloading.
Most code these days conforms to the PSR-0/PSR-4 standards, which map namespaced class names to file paths relative to a root directory.
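For instance, in a Composer-based project the mapping is declared in composer.json (a sketch; the src/ directory is an assumption):
{
    "autoload": {
        "psr-4": {
            "MyVendor\\": "src/"
        }
    }
}
With that mapping, MyVendor\MyComponent\MyClass resolves to src/MyComponent/MyClass.php.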
In ZF1, one typically adds the ./library folder to the PHP include_path in ./public/index.php and then adds the vendor namespace to the autoloaderNameSpaces array in ./application/config.ini:
autoloaderNameSpaces[] = 'MyVendor';
and places a class like MyVendor\MyComponent\MyClass in the file:
./library/MyVendor/MyComponent/MyClass.php
You can then reference a class of the form MyVendor\MyComponent\MyClass simply with:
// At top of consuming file
use MyVendor\MyComponent\MyClass;
// In the consuming page/script/class.
$instance = new MyClass(); // instantiation
$val = MyClass::myStaticMethod(); // static method call
Determine the scope of usage
If I have functionality that is required only for a particular class, then I keep it as a method (or a collection of methods) in the class in which it is used.
If I have some functionality that will be consumed in multiple places in a single project, then I might break it out into a single class in my own library namespace, perhaps MyVendor.
If I think that a function/class will be consumed by multiple projects, then I break it out into its own project with its own repo (on GitHub, for example), make it accessible via Composer, ideally registering it with Packagist, and pay close attention to semantic versioning so that consumers of my package receive a stable and predictable product.
Copying folders from one project into another is do-able, of course, but it often runs into problems as you fix bugs, add functionality, and (sometimes) break backward-compatibility. That's why it is usually preferable to have those functions/classes in a separate, semantically-versioned project that serves as a single source-of-truth for that code.
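A consuming project would then pin the dependency with a semantic-version constraint in its composer.json; as a sketch (the package name is made up):
{
    "require": {
        "myvendor/some-utils": "^1.2"
    }
}
The caret constraint accepts bug fixes and new features within the 1.x series, but not the breaking changes that a 2.0 release would signal.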
Conclusion
Breaking functionality out into separate, namespaced classes that are autoloaded in a standard way gives plenty of "space" in which to develop custom functionality that is more easily consumed, more easily re-used, and more easily tested (a large topic for another time).

Sinatra coffeescript --bare?

I've done some searching on this, but I cannot find info. I'm building an application inside Sinatra, and using the CoffeeScript templating engine. By default the compiled code is wrapped as such:
(function() {
  // code
}).call(this);
I'd like to remove that using the --bare flag, so different files can access classes and so forth that I'm defining. I realize that having it more contained helps against variable conflicts and so forth, but I'm working on two main pieces here. One is the business logic, and arrangement of data in class structures. The other is the view functionality using raphaeljs. I would prefer to keep these two pieces in separate files. Since the two files wrapped as such cannot access the data, it obviously won't work. However, if you can think of a better solution than using the --bare option, I'm all ears.
Bare compilation is simply a bad practice. Each file should export to the global scope only the public objects that matter to the rest of your app.
# foo.coffee
class Foo
  constructor: (@abc) ->
  privateVar = 123

window.Foo = Foo # export
Foo is now globally available. If that pattern isn't practical, maybe you should rethink your structure a bit. If you have to export too many things, you should nest and namespace things better, so that more data can be exposed through fewer global variables.
I support Alex's answer, but if you absolutely must do this, I believe my answer to the same question for Rails 3.1 is applicable here as well: Put the line
Tilt::CoffeeScriptTemplate.default_bare = true
somewhere in your application.
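In a classic-style Sinatra app, that could look like this (a sketch; the route and template names are made up, and depending on your Tilt version you may also need to require tilt/coffee explicitly):
require 'sinatra'
require 'coffee-script'

# Compile CoffeeScript templates without the top-level function wrapper
Tilt::CoffeeScriptTemplate.default_bare = true

get '/js/app.js' do
  coffee :app   # renders views/app.coffee
end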