I'm relying on shell calls to 7z (LGPL) for an important part of a project I'm working on: specifically, opening .cbr files. The problem is that there's no guarantee I'll be able to find it on a user's computer, assuming it's installed at all.
Is there some way to keep its binaries inside my compiled tool, so I don't have to worry about calling them externally? (I have the impression that this is what jar files are for, but I'm not sure.)
Or if that's not possible, what is the standard way of going about this?
Typically, this is where you would reach for a library dependency to handle the unpacking of files. Some people use Apache Commons Compress, which would require this library dependency in your sbt build definition:
libraryDependencies += "org.apache.commons" % "commons-compress" % "1.5" // Or whatever version you need
Alternatively, you can ship the exe as a resource file that gets included with your build, assuming the executable doesn't need to be installed at the system level. This can be as simple as creating the src/main/resources directory and putting the file in there. Your jar will then only work on compatible system architectures, though, so think twice before going this route. Unless there is a specific reason that 7-Zip must be used to unpack the file, it's better to use a Java- or Scala-compatible library and avoid the shell calls entirely. (One caveat: true .cbr files are RAR archives, which Commons Compress does not extract - .cbz files are plain ZIP - so for RAR input you would need a RAR-capable library such as junrar.)
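If you do go the bundled-binary route, the sbt side is just a directory convention. A minimal sketch - the binary name is hypothetical, and the printf line stands in for copying the real executable:

```shell
# Stage a per-platform helper binary where sbt will package it into the jar.
mkdir -p src/main/resources/bin
printf 'stand-in for the real 7z binary\n' > src/main/resources/bin/7z-linux-x86_64
ls src/main/resources/bin
# At runtime you would copy the resource out of the jar to a temp file, mark
# it executable, and only then shell out to it - it cannot be exec'd in place.
```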
After ~8 years of using Scala, I took a detour to a small language you might have heard of called Go. It's not without its flaws (grow a real type system, boi!), but it does some things much better than Scala could ever hope to.
Go manages dependencies in source form, which any sensible engineer would consider terrifying until she discovers that storing one's dependencies in a vendor/ directory under source control is a "get out of jail free" card for cases when dependency resolution either becomes too complicated for its own good, or depends on flaky 3rd party resources, such as the network.
The latest version of Go's CLI tooling comes with a command called go mod vendor, which does the legwork of downloading the current module's dependencies into a vendor/ directory inside the project, which can subsequently be checked into source control. Setting aside discussions regarding the merits of aggressively and preemptively caching dependencies in this fashion, I would like to state for the record that this command is very convenient.
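In concrete terms the workflow is tiny. A sketch, guarded so it is a no-op outside an actual Go module checkout:

```shell
# Vendor a Go module's dependencies (requires the Go toolchain).
# Run at the root of a module, i.e. next to its go.mod.
if command -v go >/dev/null 2>&1 && [ -f go.mod ]; then
  go mod vendor               # copies all dependencies into ./vendor
  go build -mod=vendor ./...  # build strictly from the vendored copies
else
  echo "no Go toolchain or go.mod here; nothing to vendor"
fi
```

The vendor/ directory can then be checked into source control like any other tree of files.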
SBT is notorious for downloading dependencies into ~/.ivy2, which is less a per-project cache than a free-for-all shared by all of a user's projects. There's a smaller cache in ~/.sbt, which SBT itself uses as a Humpty Dumpty / Mr Potato Head scratch space. Both directories are created and populated automatically if they don't exist, but neither is meant to be explicitly managed by the user: both are internal implementation details of SBT and/or Ivy, not to be messed with "unless you know what you're doing".
What I want (and now I'll be asking for things) is an sbt vendor command that would do the legwork of populating the unmanaged classpath with all of my project's dependencies. If it can also download all that's needed to run SBT itself into the same directory, that would be just peachy.
Is there an SBT plugin or some sequence of arcane incantations that can be used to accomplish that which I seek?
Before this question gets closed forever by an overzealous posse of moderators, I'm going to post here the hack which got me over this particular hump. As usual, the solution ended up being a shell script:
#!/bin/bash
root="$(readlink -f "$(dirname "$0")")"
sbt="$root/.sbt-launcher"
if [ ! -x "$sbt" ]; then
  echo "$sbt does not exist or is not executable"
  exit 1
fi
exec "$sbt" \
  -ivy "$root/.ivy2" \
  -sbt-dir "$root/.sbt" \
  -sbt-boot "$root/.sbt/boot" \
  -sbt-launch-dir "$root/.sbt/launchers" \
  "$@"
Let's unpack this really quickly. First, the shell script is a wrapper for the real SBT launcher, which is located in the same directory and named .sbt-launcher. Any recent version should work; you too can download one from http://git.io/sbt.
My wrapper ensures that four flags are always passed to the real SBT launcher:
-ivy specifies a custom location for the Ivy cache.
-sbt-dir, -sbt-boot, and -sbt-launch-dir together force SBT to stop using the account-wide ~/.sbt directory as a dumping ground for SBT JARs and other things.
I saved this shell script as sbt inside my project, placed .sbt-launcher from http://git.io/sbt right next to it, and began using the wrapper instead of the real SBT. I then checked into source control the directories .ivy2 and .sbt which were created inside my project.
That's all it took. It's not an elegant solution, but it does a good job of isolating my CI/CD process from volatility in Internet artifact repositories.
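For the other half of the wish - getting the dependency jars themselves onto a classpath under source control - sbt's built-in unmanaged classpath can serve: jars dropped into unmanagedBase are picked up without any resolution. A minimal build.sbt sketch (unmanagedBase is a real sbt setting whose default is baseDirectory / "lib"; the vendor/ name is my own choice):

```scala
// build.sbt: serve dependency jars straight from a directory in the repo.
unmanagedBase := baseDirectory.value / "vendor"
```

You still have to copy the jars into that directory yourself (or script it); stock sbt has no vendor command to do the legwork.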
We are getting ready to deploy a Tcl application, but I'm having trouble figuring out how to do it. Currently I'm experimenting with tclkit and sdx.kit. I can pack a single Tcl file and run it, but the whole application is structured as folders containing Tcl files, C files, images, and other resources that work together. How would I go about wrapping the whole thing? What tools do you recommend other than tclkit, and why?
The main way that you're recommended to distribute applications is as a tclkit. There are a few alternatives (e.g., TOBE, ActiveState's commercial tooling) but they're pretty similar as they all build on top of Tcl's virtual filesystem layer. (NB. This isn't the same as the Linux VFS stuff; this is a VFS in a single application.) Indeed, the ActiveState tooling actually is a rebadged tclkit (plus some other stuff like code obfuscation). I believe that TOBE uses ZIP archives instead of metakit databases.
The advantage of a VFS-based solution is that lots of things just work inside it, in particular both source (for pulling in another .tcl file) and load (for loading a binary library). In fact, you can put your application, the packages it depends on, and its resources (images, etc.) inside the VFS and be fairly sure that things will work. The only real problem areas we know of are when you want to exec something in the archive (the VFS mount is process-local, so you have to copy the subsidiary file out if you want subprocesses to see it) and when you want to load certificates or private keys with the tls package (because the underlying OpenSSL library doesn't delegate that part of its I/O to Tcl, AIUI).
When you're building these things, effectively you make a directory (and its subdirectories) that have everything laid out right. Then you run the packager (sdx for tclkits) and it builds the overall application for you. Attach the result to a runtime (the standard tclkit) and you're ready to test and deploy.
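A sketch of that layout, laid out with plain shell; "myapp" is a placeholder name, and main.tcl is the entry point the tclkit runtime sources on startup:

```shell
# Build the directory tree sdx expects before wrapping.
mkdir -p myapp.vfs/lib/app-myapp
cat > myapp.vfs/main.tcl <<'EOF'
package require starkit
starkit::startup
package require app-myapp
EOF
# Tcl packages, images, and compiled C extensions all go under myapp.vfs/lib/
ls myapp.vfs
```

Wrapping is then a matter of running sdx's wrap command with a separate copy of the tclkit runtime attached, which yields a single self-contained executable.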
We don't generally do tool recommendations here on Stack Overflow, but the ActiveState Tcl Dev Kit is actually rather widely used. Many other people use sdx/tclkit. TOBE is quite a lot rarer. (There are other packaging techniques, but I wouldn't recommend them these days; a packaged VFS works very well indeed.)
I am working on a Perl module that I would like to submit to CPAN, but I have a small query regarding the module's directory structure.
As per the PerlMonks article, the module's directory structure should be as below:
Foo-Bar-0.01/Bar.pm
Foo-Bar-0.01/Makefile.PL
Foo-Bar-0.01/MANIFEST
Foo-Bar-0.01/Changes
Foo-Bar-0.01/test.pl
Foo-Bar-0.01/README
But when I use the command, the structure is generated as below:
h2xs -AX Foo::Bar
Writing Foo-Bar/lib/Foo/Bar.pm
Writing Foo-Bar/Makefile.PL
Writing Foo-Bar/README
Writing Foo-Bar/t/Foo-Bar.t
Writing Foo-Bar/Changes
Writing Foo-Bar/MANIFEST
The article in question advocates a considerably older module structure. It could still be used, but it loses a lot of the advancements that have since been made in testing, building, and distribution practices.
To break down the differences:
modules have moved from the top level to the lib/ directory. This unifies the location where your module "lives" (i.e., the place where you work on the code and create the baseline modules to be tested and eventually distributed). It also makes it easier to set up any hierarchy you need (e.g. subclasses or helper modules); the newer setup will pick these up automatically. The older one may as well, but I'm not familiar enough with it to say.
Makefile.PL in the newer setup will, when make is run, create a directory called blib, the *b*uild *lib*rary - this is where the code is staged for actual testing. It will pretty much be a copy of lib/ unless you have XS code, in which case this is where the compiled XS ends up. This makes building and testing simpler: if you update a file in lib/, the Makefile will rebuild it into blib/ before trying to test it.
the t/ directory replaces test.pl; "make test" will execute all the *.t files in t/, as opposed to you having to put all your tests in test.pl. This makes it far easier to write tests, as you can be sure you have a consistent state at the beginning of each test.
MANIFEST and Changes are the same in both: MANIFEST (built by "make manifest") is used to determine which files in the build library should be redistributed when the module is packaged for upload, and used to verify that a package is complete when it's downloaded and unpacked for building. Changes is simply a changelog, which you edit by hand to record the changes made in each distributed version.
As recommended in the comments on your question, using Module::Starter or Dist::Zilla (be warned that Dist::Zilla is Moose-based and will install a lot of prereqs) is a better approach to building modules in a modern way. Of the two layouts you show, the h2xs one is closer to modern packaging standards, but you're really better off starting from one of the recommended module-starter tools (and probably Module::Build, which uses a Build script written in Perl instead of a Makefile to build the code).
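For reference, the conventional build cycle against the newer layout looks like this - run inside the generated Foo-Bar/ directory; the sketch is guarded so it degrades cleanly where perl or the distribution directory is absent:

```shell
# The standard ExtUtils::MakeMaker cycle for the h2xs-generated layout.
if [ -f Makefile.PL ] && command -v perl >/dev/null 2>&1; then
  perl Makefile.PL   # writes the Makefile; blib/ is the staging area
  make               # copies lib/ into blib/ (and compiles any XS code)
  make test          # runs every t/*.t file against blib/
  make manifest      # regenerates MANIFEST from the files present
  make dist          # rolls Foo-Bar-0.01.tar.gz, ready for CPAN upload
else
  echo "run this inside a distribution directory containing Makefile.PL"
fi
```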
I have developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.
Abdul Khaliq, I have seen your code; you are probably missing the XML files in the folder where the executable is located. Paste them there and then run as ./your-executable.
I recommend that you use a makefile to recompile on your target machine which will ensure that your program is deployed properly.
You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
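If you'd rather not depend on the NetBeans-generated files, a hand-written Makefile can be quite small. A sketch - the file names are placeholders for your project's actual sources:

```make
# Minimal Makefile sketch; replace main.o/util.o with your object files.
CXX ?= g++
CXXFLAGS ?= -O2 -Wall
OBJS = main.o util.o

myapp: $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $(OBJS) $(LDFLAGS)

clean:
	rm -f myapp $(OBJS)
```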
Typically, once compiled, your executable will need several shared libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to do the copy, run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
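A sketch of that check, using /bin/ls as a stand-in for your executable and guarded for systems without ldd:

```shell
# List the shared libraries a binary needs and flag any that are missing.
if command -v ldd >/dev/null 2>&1; then
  ldd /bin/ls | tee deps.txt
  # Any line containing "not found" names a library missing on this system.
  grep 'not found' deps.txt || echo "all libraries resolved"
else
  echo "ldd not available on this system"
fi
```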
Of course, you also have the option to statically build all libraries into your executable. However, this is not recommended since it makes the executable too large and complicates matters.
What type of package is your NetBeans build creating - deb, rpm? If you are moving the package to a different Linux distribution, you will need to use that distribution's package format. Ubuntu - deb
Fedora/Redhat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search should help you further.
With Perl, what is an easy way to handle different library paths for development versus production? I want to use my local box's path for testing, but when I deploy I want the code to automatically point to the production lib path.
Take a look at the PERL5LIB environment variable, or for an even easier time, look at local::lib.
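One hedged sketch of how the PERL5LIB route might look in the app's launch script - APP_ENV and both paths are made-up names, so adapt them to your own layout:

```shell
# Choose the lib tree per environment before starting the application.
if [ "$APP_ENV" = "production" ]; then
  PERL5LIB=/srv/myapp/lib
else
  PERL5LIB="$HOME/dev/myapp/lib"
fi
export PERL5LIB               # perl prepends each path here to @INC
echo "PERL5LIB=$PERL5LIB"
```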
I think your dev box should really be a VM with an identical configuration to production, so you won't need to change the library path.
Libraries should be installed by the same mechanism so everything's consistent.
Not doing this is likely to risk you releasing non-working code to production due to library version differences.