How to load external in Pure Data 0.46-7 (Mac) correctly? - puredata

I'm having some trouble trying to load zexy and iemlib into Pd Vanilla 0.46-7. I had no problems compiling and installing cyclone from https://github.com/electrickery/pd-cyclone. It works fine. So I tried installing iemlib and zexy from https://github.com/iem-projects/pd-iem using their binaries but there's something wrong going on. When I turn on "verbose" under path preferences, PD seems to be looking for a file with the same name as the object I'm trying to use. Using [zexy/multiplex] in a patch gives:
tried ~/Library/Pd/zexy/multiplex.d_fat and failed
tried ~/Library/Pd/zexy/multiplex.pd_darwin and failed
tried ~/Library/Pd/zexy/multiplex/multiplex.d_fat and failed
But there's no multiplex.d_fat, only zexy.d_fat. Same with iemlib: there's no dollarg.d_fat or dollarg.pd_darwin, only iem_mp3.d_fat, iem_t3_lib.d_fat, iemlib1.d_fat, and iemlib2.d_fat. I'm guessing these files are the ones the externals were compiled into.
I tried using deken; iemlib installs the .pd_darwin files, but I guess this is an older version(?), and zexy still installs zexy.d_fat, so I can't load its objects.
I also tried loading the lib "zexy/zexy" under startup preferences and it loads OK, but then I get messages like:
warning: class 'abs~' overwritten; old one renamed 'abs~_aliased'
and I seem to lose namespace functionality: I can no longer refer to [zexy/multiplex] and need to use only [multiplex], which I guess is the correct behaviour.
How does Pd know how to look for objects in files with different names?
Any advice?
This thread is marked as solved http://forum.pdpatchrepo.info/topic/9677/having-trouble-with-deken-plugin-and-zexy-library-solved and sounds like a similar problem but I haven't been successful.

zexy is built as a multi-object library, so there is no separate binary for zexy/multiplex.
As you have correctly guessed, the correct way to load zexy is as a whole (either by using [declare -lib zexy] in your patch or by adding zexy to the startup libs; there is no need to use zexy/zexy), and to ignore the warning about abs~.
As for how loading works:
Pd maintains a list of objects it knows how to create. E.g. whenever you create [pack], Pd will look up pack in its list of known objects and use the information found there to actually create the object.
If you try to create an object that Pd doesn't know about yet (e.g. [foo]), then Pd will look for a library named foo (e.g. foo.pd_linux) and, if found, it will "load" it.
Loading a library means that Pd calls a special function in the library (this special function is the entry point of the library and is called foo_setup() in our case).
After that, Pd checks whether it now has foo in the list of known objects. If it does, it creates the object.
Now the magic is done in the special function that is called when Pd loads the library: this function's main purpose is to tell Pd about new objects (basically saying: "if somebody asks for object foo, I can make one for you").
When zexy's special function is called, it tells Pd about all zexy objects (including multiplex), so after Pd has loaded zexy, it knows how to create the [multiplex] object.
If the special function registers an object that Pd already knows about (e.g. in the case of zexy it tries to register a new object abs~ even though Pd already has a built-in object of the same name), then Pd will rename the original object by appending _aliased and the newly registered object will take over the name.
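To make the lookup mechanism concrete, here is a minimal sketch in C of such a setup function, using a hypothetical single-object library foo (not actual zexy source); a multi-object library like zexy simply makes many class_new() calls from its one setup function, which is why a single zexy.d_fat binary can provide [multiplex] and friends:

#include "m_pd.h"

static t_class *foo_class;

typedef struct _foo {
    t_object x_obj;
} t_foo;

static void *foo_new(void)
{
    /* allocate an instance of the class registered below */
    return (void *)pd_new(foo_class);
}

/* Pd calls this entry point once when it loads foo.pd_darwin / foo.d_fat;
   every class_new() call here adds one entry to Pd's list of known objects */
void foo_setup(void)
{
    foo_class = class_new(gensym("foo"),
                          (t_newmethod)foo_new,
                          0,                    /* no free method needed */
                          sizeof(t_foo),
                          CLASS_DEFAULT,
                          0);                   /* no creation arguments */
}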

Related

dynamic import with interpolated string

I'm trying out parcel in a hobby project, having worked with create-react-app (i.e. webpack) before. I have had a great experience with dynamic imports of the following sort:
const Page = React.lazy(() => import(`./${page}`));
This is in a wrapper component that takes care of the suspense etc. and gets page as a prop (always a literal string, never a variable/expression; not sure if that makes a difference).
With webpack this works wonderfully, even though I'm not sure how. Each such page I hit in the app gets loaded the first time, then it's available instantly. I understand this is quite hard for the bundler to figure out, but yeah, it works.
When I try the same with parcel, it still builds but fails at runtime. If I dynamically import e.g. './SomePage', that is exactly what is requested from the server (GET /SomePage), which of course serves index.html. This happens both on the dev server and with a build. The build also only produces one .js file, so it doesn't split at all.
Is it even possible with parcel to import like this? Am I missing some configuration (I don't have any at the moment)?
I think there is an unfixed bug in webpack where dynamic import with interpolation has issues because webpack is not able to pinpoint the exact file path.
Source
Webpack collects all the dependencies, then checks the import statement and parses the parameter into a regex, like this:
import('./app'+path+'/util') => /^\.\/app.*\/util$/
Webpack looks for modules that meet the criteria based on that regex. So if the parameter is a variable, webpack will use the wrong regex to look for modules; in other words, it will look for modules globally.
Adding an empty string and appending the interpolation might help you with this:
const Page = React.lazy(() => import("" + `./${page}`));

ILE RPG Bind by reference using CRTSQLRPGI

I've been trying to find a solution for this, but I cannot find it.
What I'm trying to do is work with the "bind by reference" ability, but in ILE RPG written with embedded SQL.
I can use the BNDDIR ctl-opt in my source, and everything works correctly.
But that means a "bind by copy" method. I checked by deleting the SRVPGM and even the BNDDIR, and the caller program still works.
So, is there any way to use "bind by reference" in an ILE RPG SQL program?
To illustrate my question, an example:
Program SNILOG is a module that contains several procedures, some of them exported.
In QSRVSRC I set the exported procedures, in a source member with the same name, SNILOG. Something like this:
STRPGMEXP PGMLVL(*CURRENT)
/*********************************************************************/
/* *MODULE SNILOG INIGREDI 04/10/21 15:25:30 */
/*********************************************************************/
EXPORT SYMBOL("GETDIAG_TOSTRING")
EXPORT SYMBOL("GETDIAGNOSTICS")
EXPORT SYMBOL("GRABAR_LOG")
EXPORT SYMBOL("SNILOG")
ENDPGMEXP
As some of the procedures are written with embedded SQL, the compilation must be done with CRTSQLRPGI, using the parameter OBJTYPE(*SRVPGM).
So, I finally get a SRVPGM called SNILOG, with those 4 procedures exported.
Once I've got the SRVPGM, I add it to a BNDDIR called SNI_BNDDIR.
Ok, let's go to the caller program: SNI600V.
Defined with dftactgrp(*no), of course! And compiled with CRTSQLRPGI and parameter OBJTYPE(*PGM).
Here, if I use the control spec bnddir('SNI_BNDDIR'), it works fine.
But not fine enough, as this is a "bind by copy" method (I can delete the SRVPGM or the BNDDIR, and it is still working fine).
When I'm not working with SQL, I can use the CRTPGM command and set the BNDSRVPGM parameter to specify the SRVPGM whose procedures the program is going to call.
But I cannot find any similar option in the CRTSQLRPGI command.
Nor among the keywords of the ctl-opt statement (we have BNDDIR, but no BNDSRVPGM option).
Any idea?
I'm running V7R3M0 with TR level: 6
Thanks in advance!
The use of bnddir('SNI_BNDDIR') is the way to bind by reference OR bind by copy.
The key is what your BNDDIR looks like.
If you want to bind by reference, then it should include *SRVPGM objects.
If you want to bind by copy, then it should include *MODULE objects.
Generally, you want a *BNDDIR for every *SRVPGM, containing the modules (and maybe a utility *SRVPGM or two) needed to build that specific *SRVPGM.
Then one or more *BNDDIRs that include just the *SRVPGM objects used to build the programs that use those *SRVPGMs.
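As a hedged illustration (the library name MYLIB is an assumption; adjust it to your environment), binding by reference to the service program would look roughly like this:

CRTBNDDIR  BNDDIR(MYLIB/SNI_BNDDIR)                              /* create the binding directory     */
ADDBNDDIRE BNDDIR(MYLIB/SNI_BNDDIR) OBJ((MYLIB/SNILOG *SRVPGM))  /* add the *SRVPGM, not the *MODULE */

With the *SRVPGM entry in the binding directory, the same bnddir('SNI_BNDDIR') control spec should produce a bound-by-reference program; DSPPGM SNI600V DETAIL(*SRVPGM) should then list SNILOG, and deleting the service program should break the call at run time instead of going unnoticed.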

Cucumber's AfterConfiguration can't access helper modules

I have a modular Sinatra app without a DB and in order to test memcache, I have some test files that need to be created and deleted on the file system. I would like to generate these files in an AfterConfiguration hook using some helper methods (which are in a module shared with rspec, which also needs to create/delete these files for testing). I only want to create them once at the start of Cucumber.
I do not seem to be able to access the helpers from within AfterConfiguration, which lives in "support/hooks.rb." The helpers are accessible from Cucumber's steps, so I know they have been loaded properly.
This previous post seems to have an answer: Want to load seed data before running cucumber
The second example in this answer seems to say my modules should be accessible to my AfterConfiguration block, but I get "undefined method `foo' for nil:NilClass" when attempting to call helper method "foo".
I can pull everything out into a rakefile and run it that way, but I'd like to know what I'm missing here.
After digging around in the code, it appears that AfterConfiguration not only runs before any features are loaded, but before World is instantiated. Running self.class inside of the AfterConfig block returns NilClass. Running self.class inside of any other hook, such as a Before, will return MyWorldName. In retrospect, this makes sense as every feature is run in a separate instance of World.
This is why helpers defined as instance methods (i.e. def method_name) are unknown. Changing my methods to module methods (i.e. def ModuleName.method_name) allows them to function, since they really are module methods anyway.
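A minimal sketch of that change (the module and method names here are hypothetical, not the actual helpers from the question):

# features/support/file_helpers.rb -- helper module shared with RSpec
module FileHelpers
  # module method: callable as FileHelpers.create_test_files even where
  # no World instance exists, such as inside AfterConfiguration
  def self.create_test_files
    # ... create the memcache test files on disk ...
  end
end

# features/support/hooks.rb
AfterConfiguration do |config|
  FileHelpers.create_test_files
end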

Strange behaviour when importing types in Scala 2.10

Today I cleared my .ivy cache and cleaned my project output targets. Since then I have been getting really strange behaviour when running tests with SBT or editing in the Scala IDE.
Given the following:
package com.abc.rest
import com.abc.utility.IdTLabel
I will get the following error:
object utility is not a member of package com.abc.rest.com.abc
Notice that com.abc appears twice, so it seems that the compiler uses the context of the current package when doing the import (maybe it's supposed to do this, but I never noticed it before).
Also, if I try to access classes in package com.abc from anywhere inside com.abc.rest (even using the full path) the compiler will complain that the type can not be found.
It appears that the errors only occur when I try to include files from parent packages. What I do find strange is that my code used to work. It only started happening after I cleaned up my project and my ivy cache, so maybe a later version of the compiler is more strict than the previous one.
I would love some ideas on what I can be doing wrong, or how I can go about troubleshooting this.
Update:
By first importing the parent classes and then defining the current package, the problem goes away:
import com.abc.utility.IdTLabel
import com.abs._
package com.abc.rest {
// Define classes belonging to com.abc.rest here
}
So this works, but I would still love to know why on earth the other way around worked, and then stopped working, and how on earth I can fix it. I had a good look, and could find no packages, objects or traits by the name of com anywhere inside the parent package.
Update relating to Worksheets:
Scala worksheets belonging to the same package share the same scope, which sounds obvious, but wasn't. Worksheets are not sand-boxed - they can see the project, and the project can see them. So all the 'test' object, traits, and classes you create inside the worksheet files, also becomes visible in the rest of the project.
I have so many worksheets that I did not even try to see where the problem came in. I simply moved them all to their own package, and like magic, the problem went away.
So, lesson learned for the day: If you create stuff inside worksheets, it's visible from outside the worksheet.
Anyway, this new-found knowledge will come in handy, meaning anything 'interesting' can be built, monitored and tweaked inside the worksheet, while the rest of the project can actually use it. Quite cool actually.
It's still interesting to think how an sbt clean and a cleaned-up ivy cache managed to highlight a problem that was hidden before, but hey, that's another story....
(At the request of JacobusR, I'm making a proper answer out of my earlier comments).
This can happen if you have defined some class/trait/object inside package com.abc.rest.com. As soon as package com.abc.rest.com exists, and given that you are in package com.abc.rest, com would designate com.abc.rest.com as opposed to _root_.com. Fastest (but non-conclusive) way to check, without even scanning the source files, is to look for any .class files in the "com/abc/rest/com" sub-folder.
In particular you would get this behaviour if any of your files has duplicate package definitions (as in package com.abc.rest; package com.abc.rest; ...). If you have this duplicate package clause somewhere in the same file where you get the error, you wouldn't even see anything fishy with the .class files, as the failure at compiling the file would prevent the generation of .class files for any class definition inside the file.
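A hypothetical reproduction of that failure mode (file and class names invented for illustration):

// Stray.scala -- an accidental duplicate package clause; the two clauses nest,
// so class Stray ends up in package com.abc.rest.com.abc.rest, which silently
// brings a package com.abc.rest.com into existence
package com.abc.rest
package com.abc.rest
class Stray

// Service.scala -- any file in com.abc.rest is now broken, because the bare
// name "com" resolves to com.abc.rest.com instead of _root_.com
package com.abc.rest
import com.abc.utility.IdTLabel  // read as com.abc.rest.com.abc.utility.IdTLabel
// error: object utility is not a member of package com.abc.rest.com.abc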
The final bit of useful information is that as you found out the scala Worksheets are not sandboxed, and what you define in the worksheets affects your project's code (rather than only having the project's code affecting the worksheet). So a duplicate package clause in a worksheet could very well cause the error you got.
If package names conflict, there might be a custom error message for that. See if specifying the full path resolves the issue by starting from _root_. E.g. import _root_.com.foo.bar._

import and compile axapta 2009 xpo by commandline

I'm looking for a way to import an existing XPO export via command line into the AX 2009 AOT and afterwards compile just this imported XPO. Google tells me how to compile the whole AOT by command line, which takes quite long.
So is there a way to import an XPO (shared project) and compile just these objects?
What possibilities are available if the objects to be imported are version-controlled by AX and checked in?
I'm hoping for an easy way to automate the optional check-out, the import (avoiding overwrite? questions), the compile and the run ;)
Thanks in advance!
You can make your own startup command:
Make a new class and extend SysStartupCmd.
Change the construct method of SysStartupCmd to call your class.
Do whatever you need; this includes parsing the parm variable.
Also you will have to deal with version control by calling checkin/checkout in your code, handling compile errors, etc.
There is no easy way; this is complicated stuff.
Over the last two years I have introduced and refined a command line process for deploying XPOs to AX 4.0 with great success. The class SysAutoRun is key as mentioned above. The following is a brief explanation of the resulting process:
1. Developers export AX objects from the AOT to a corresponding folder (layer), i.e. CUS, VAR, etc. For the most part the file name is the default file name set by AX.
2. Developers commit using SVN in this scenario. This would have to be evaluated to meet your needs.
3. A console application for the build process reads all file names from each directory (layer) and creates corresponding AX project definition files.
4. The console application reads all file names from each directory (again) and creates an import definition file for each corresponding layer (folder). The project definition created above is also instructed to be imported after all other objects are loaded and finally compiled. The import definition contains some specialized elements that are recognized by the SysAutoRun.execCommand(XmlNode _command) method.
5. A call is made to ax32.exe "config.axc" -StartupCmd=AUTORUN_ImportDefinitionMentionedAbove.xml -lazyclassloading -lazytableloading -nocompileonimport -internal=noModalBoxes
6. AX parses this import definition file, invoking customizations as instructed. Logging is added to the process for outputting compilation results to an XML log file. Finally, step 3's project definition file is compiled.
7. The console application validates the outputted XML log and handles it appropriately.
8. Steps 5-7 are repeated for each folder (layer).
I understand this is very vague. The intent of this post is to get feedback on interest before I invest more time on describing the process. The import definition file is probably of most interest as it is responsible for loading the objects in the right order, synchronizing the ORM, compiling, repeating, etc...
Thanks M#