NetBeans C++ deployment

I have developed a small program in NetBeans using C++. I need to know how I can deploy/run the package on another Linux system.

I have seen your code; you are probably missing the XML files in the current folder where the executable is located. Paste them there and then run ./your-executable

I recommend that you use a makefile to recompile on your target machine, which will ensure that your program is deployed properly.

You should use a makefile as suggested. I know that NetBeans can generate one, but it's been a while since I last did so. Maybe this can help: http://forums.netbeans.org/topic3071.html
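For instance, a minimal hand-written makefile for a small single-file C++ program might look like this (myapp and main.cpp are placeholder names, not from NetBeans):
CXX = g++
CXXFLAGS = -O2 -Wall
myapp: main.cpp
	$(CXX) $(CXXFLAGS) -o myapp main.cpp
clean:
	rm -f myapp
Copy the sources and the makefile to the target machine and run make there; the program is then built against that system's own libraries.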

Typically, once compiled, your executable will need several libraries. Chances are that those libraries will also be available on the target Linux system.
Thus, you can simply copy your executable over to the other system. If you run ldd on your executable, you should see the list of libraries your executable is dynamically loading. Those libraries should be available on the target system as well.
In case your executable makes use of resources such as images and other binary files, you can use a resource system (e.g. Qt Resource System) and compile those binary files into your executable.
The easiest way to test is to do the copy, run
ldd yourExecutable
on the target system. It will tell you if you are missing any library. Install those libraries using the system package manager.
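For example (the output and the package name below are illustrative; they vary by program and distribution):
$ ldd yourExecutable
	linux-vdso.so.1 (0x00007ffc...)
	libstdc++.so.6 => not found
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)
$ sudo apt-get install libstdc++6   # Debian/Ubuntu; use dnf or yum on Fedora/Red Hat
A "not found" line is what tells you which library still needs to be installed.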
Of course, you also have the option to statically build all libraries into your executable. However, this is not recommended since it makes the executable too large and complicates matters.
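If you do want to try static linking anyway, GCC supports it directly (a sketch; myapp and main.cpp are placeholders):
g++ -static -o myapp main.cpp
ldd myapp   # should now report "not a dynamic executable"
Note that fully static linking against glibc has its own caveats (e.g. around NSS), which is part of why it is not recommended.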

What type of package is your NetBeans compiler creating? deb, rpm? If you are moving the package to a different Linux install, you will need to use that distribution's package type:
Ubuntu - deb
Fedora/Red Hat - rpm
etc...
I'm not sure how you change this in NetBeans, but I'm pretty sure it has the ability to. A Google search could help you more.
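If you end up packaging the binary by hand for a Debian-based target, the outline is roughly this (the package name myapp and the paths are illustrative):
mkdir -p myapp/DEBIAN myapp/usr/local/bin
cp your-executable myapp/usr/local/bin/
# write myapp/DEBIAN/control with at least Package, Version, Architecture, Maintainer and Description fields
dpkg-deb --build myapp
sudo dpkg -i myapp.deb
On rpm-based systems the equivalent is writing a .spec file and building with rpmbuild.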

Related

Swift toolchain location on Linux

I'm looking into running Swift on an Ubuntu 16.04 server. However, I want to be certain about where I should install the toolchain.
From swift.org:
If you installed the Swift toolchain on Linux to a directory other than the system root, you will need to run the following command, using the actual path of your Swift installation...
Then from Kitura's Setting Up instructions:
After extracting the .tar.gz file, update your PATH environment variable so that it includes the extracted tools:
$ export PATH=<path to uncompressed tar contents>/usr/bin:$PATH
Where is the best place to install these types of things? In the past I would rely on apt-get or installation scripts provided by maintainers, but that doesn't seem to be available for Swift.
Are there any benefits or disadvantages to not installing it at the system root?
Note: This question borders on "best practices", which I believe is frowned upon here. I'm sorry about that; I've googled around and this seems to be something that people know implicitly. However, I don't yet, and I need some guidance.
The versions of the software in your system root - in /usr/bin, /usr/share, /usr/lib, etc. - are carefully coordinated by the maintainers of your distribution to handle all reasonable dependencies. The maintainers also keep the software up-to-date with bug fixes.
When you need to install software that isn't supplied by your distribution, it's best to install it in a separate directory, such as /opt (in your case, one possibility is /opt/swift-3.1.1). This will avoid overwriting existing installed software (in your case, /usr/bin/lldb and /usr/lib/lldb) with something that's possibly incompatible with other software. And it will make it easy to uninstall (just rm -r /opt/swift-3.1.1 rather than having to get a list of files from the original tarball that are potentially strewn all over /usr).
There is some extra effort: you'll need to add /opt/swift-3.1.1/usr/bin to your PATH [1]. With some software, you'll need to add the directory containing dynamic library files to LD_LIBRARY_PATH. The software's installation instructions typically explain what you need to do.
[1] An alternative to changing PATH is to add a symlink to each new executable, in a directory that's already in your PATH. GNU Stow can help you do this.
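For instance, assuming the tarball from swift.org (the version and file name below are illustrative):
tar xzf swift-3.1.1-RELEASE-ubuntu16.04.tar.gz
sudo mv swift-3.1.1-RELEASE-ubuntu16.04 /opt/swift-3.1.1
export PATH=/opt/swift-3.1.1/usr/bin:$PATH   # add this line to ~/.profile to make it permanent
swift --version
Uninstalling later is then just rm -r /opt/swift-3.1.1 plus removing that PATH line.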

What is the difference between installing a Perl module and copying the whole folder?

I have installed a Perl module, say XYZ, and a folder was created that contains many .pm files. I copied the folder and put it on another system where XYZ is not installed, and I'm able to use the methods of the XYZ module on both systems. I can't figure out the difference between these two approaches, but I think there must be one. What I do know is that when we install a Perl module, its dependencies also get installed. Am I right? Can anyone mention other differences between the two, if any?
A few off the top of my head:
In case of an XS module, the code is compiled for the local platform.
Installing a module via cpan usually runs the test suite, so if there is any reason beyond dependencies why it wouldn't work, you're told so (I guess that's very rare, though).
Regular installation automatically goes to a directory where your perl can find modules.
Of course, you can take care of all these yourself. These days chances are pretty good you're running either Linux or Windows on something x86-ish, and as long as you only copy Linux to Linux and Windows to Windows, and to the same place as on the source system, you'll be fine. Basically that's what binary Linux distributions and ActivePerl packages do, too, and it may make sense, e.g., if you want to avoid installing a whole bunch of compile-time dependencies on all target systems. Just make sure you don't get yourself into a mess by writing to system directories (e.g. /usr/share/perl5) that are supposed to be managed by your system's package manager.
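As a sketch of the copy approach: if you put the copied module tree in a non-standard location, you have to tell perl where to find it yourself (the paths and the module name XYZ are illustrative):
mkdir -p ~/perllib
cp -r XYZ.pm XYZ/ ~/perllib/                 # the .pm files copied from the source system
export PERL5LIB=~/perllib
perl -MXYZ -e 'print $XYZ::VERSION, "\n"'    # works only if XYZ is pure Perl and its dependencies are present
A regular installation makes this unnecessary because the files land in a directory that is already in @INC.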

Inputs for Porting Perl

I am very new to porting.
I am trying to port Perl to a NetBSD system. Since it's a custom-made build, we won't be able to run configure or make on the target NetBSD system. So we are trying to cross-compile it on a host PC and copy the binary over to the target machine. In order to do so, we have to write a makefile from scratch, since the makefile format in our build is different.
I have some basic doubts regarding this:
Firstly, in order to create a Perl makefile for my custom build, what are the basic things that will be needed, such as ccflags, library paths, etc.?
There are some files like DynaLoader, uudmap.h, myConfig, and Config.pm which get generated during "make". How can I generate them using a custom makefile?
How do I set the various library paths, and what are they?
@INC shows the Perl search paths; how can I create it?
Where exactly do Perl modules get installed, and when does that happen?
A perl build normally involves building a stripped down version of perl named miniperl, which is then used extensively in the remainder of the process of building perl and the bundled modules.
There are two basic approaches to cross-compiling: to build miniperl for the target machine and build the modules, etc., there, or to build miniperl for the host and use it to build perl and modules for the target.
The WinCE port uses the latter approach; the rudimentary (last I knew, anyway) support for a -Dusecrosscompile switch to Configure uses the former.
I recommend you ask for advice and help on the perl5-porters mailing list: http://lists.perl.org/list/perl5-porters.html
And be prepared for hard work.
NetBSD's pkgsrc system has perl in it already and has the ability to generate binary packages that you can then install on a target machine.
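A sketch of that route on the build machine (the paths follow pkgsrc conventions but are illustrative here):
cd /usr/pkgsrc/lang/perl5
make package                     # builds perl and creates a binary package under /usr/pkgsrc/packages/All
# copy the resulting package file to the target machine, then install it there:
pkg_add perl-5.*.tgz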

Cross-compiling Makefile: dealing with test programs

I'm trying to cross-compile several libraries from OSX to iOS. I've successfully cross-compiled libjpeg and libogg.
But I can't compile libvorbis because configure insists on creating and running a small test program. This obviously fails, because it creates an armv7 binary, fails to run it, and then interprets this as missing ogg libraries.
How do you usually deal with this kind of problem? I'm tempted to hack the configure script to work around these issues, but because of this kind of failure some features may be disabled. I'm also thinking of letting configure generate a native Makefile and then convert it to use the iOS toolchain, but this seems too error prone.
Any advice?
If you are cross-compiling anything that has more dependencies than libc (glibc) it becomes much more complicated. You need to have already cross-compiled all the dependencies. And the cross-compiler toolchain and all helper build programs and scripts need to know how to find those dependencies (the cross-compiled libraries and headers).
You need to have already cross-compiled libogg (and its dependencies) and installed them into the cross-compile root directory. The headers and libraries from your build system can't be used for the host (arm7) system. They must be kept separate.
Also, if you want to have shared object libraries (*.so) and not just static libraries then there is a whole new set of complications. For example, while a cross-compiler toolchain contains a cross-compiled libc as part of the toolchain, you still need a libc for the host system. The libc that is part of the toolchain can be used for this, but the way it is structured is different than on the host system. Sometimes people copy and re-arrange the files, but often people just compile and install a new glibc for the root.
Anyways, all that to say: the two errors you are seeing are because the configure script is not able to find a cross-compiled libogg library. If you haven't already, you need to cross-compile libogg (and its dependencies) and install them into your target root. Then you need to tell the configure script where your cross-compiled headers (yes, headers are architecture-specific) and libraries are in your target root, usually using CFLAGS, LDFLAGS, CXXFLAGS, etc. (NOT --prefix), though there may be other environment variables you need to set to affect things like pkg-config. After you have built each dependency, you need to get the makefile to install the dependency to the root. Usually this is done with make DESTDIR=[root] install, but some makefiles have their own mechanism (or no proper alternate install mechanism).
You may also need to override certain configure checks (using environment variables) that are poorly written and don't have good cross-compile defaults. These variables usually start with ac_cv_*
So the basic process is to do this for packages that you need (in dependency order):
export CFLAGS=-I[root]/usr/include LDFLAGS=-L[root]/usr/lib CXXFLAGS=-I[root]/usr/include
export ac_cv_[test1]=[yes|no] ac_cv_[test2]=[yes|no] ...
./configure --host=[arm7-blah-blah]
make
make DESTDIR=[root] install
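As a concrete (illustrative) instance for an armv7 iOS target with [root] at /usr/local/ios, this might look like:
export CFLAGS=-I/usr/local/ios/usr/include
export LDFLAGS=-L/usr/local/ios/usr/lib
export CXXFLAGS=-I/usr/local/ios/usr/include
export ac_cv_func_malloc_0_nonnull=yes   # example override for a test that cannot run when cross-compiling
./configure --host=armv7-apple-darwin
make
make DESTDIR=/usr/local/ios install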
Good luck. Once you feel comfortable with standard cross-compiling, then you will be ready to take on the real black art, the Canadian cross ;-)
I finally figured it out. I tricked configure by explicitly making it link with ogg (LDFLAGS="/usr/local/ios/lib/libogg-armv7.a" ./configure ...) and then removed the explicit reference to the library from the generated makefile.

Running Java without installing a JRE?

As asked and answered here, Python has a useful way of deploying without installers. Can Java do the same thing?
Is there any way to run a Java jar file without installing a JRE?
Is there a tool something like java2exe (win32), java2bin (linux) or java2app (mac)?
You can use Launch4j for this. Well documented and easy to use. While the resulting program still needs a JRE to run, you don't have to install the JRE on the target system. You can just copy it with your application and tell Launch4j where to find it, or just wrap it up with everything else.
For creating native executables, you can use Excelsior JET, which compiles Java to native code. We used it for a project at work, and we had to perform zero modifications to the original source code (which targeted Sun's JDK).
You can embed the JRE inside your application and create a setup or installer for your application.
You can have a look at
http://www.bearcave.com/software/java/comp_java.html
You might find what you want there.
You might want to check out how Eclipse does it - it has a native .exe that can use a local (to the installation) JRE.
You might be able to get some luck with GCJ - haven't tried it myself.
You can do it with NetBeans and a couple of tools. The result is a standalone installer that packages everything you need, so your software can run without installing a JRE. It is also completely portable, because it installs your software in AppData; that is, it does not need privileges to be installed. Maybe you can even configure the installation path, or you can install it on your own PC, locate the folder, and copy it to distribute your software that way.
Check the answer I made on a different post.
You can use jlink to create your own customized JRE which contains only those dependencies that are needed for execution. This deployment method is really efficient. Please follow this link for one such example.
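A minimal sketch with a recent modular JDK, assuming your application ships as app.jar (the module names here are illustrative):
jdeps --print-module-deps app.jar              # lists the modules the jar actually uses
jlink --add-modules java.base,java.logging --output myjre
myjre/bin/java -jar app.jar                    # runs with no system-wide JRE installed
The myjre directory can then be shipped alongside the jar.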