I have successfully configured Perl for cross-compiling using these Configure options:
./Configure -des -Dusecrosscompile \
-Dtargethost=172.17.185.91 \
-Dtargetdir=/home/perl/ \
-Dtargetuser=root \
-Dtargetarch=arm-linux \
-Dcc=arm-linux-gcc \
-Dusrinc=/opt/Mozart_Toolchain/arm-eabi-uclibc/include/ \
-Dincpth=/opt/Mozart_Toolchain/arm-eabi-uclibc/include/ \
-Dlibpth=/opt/Mozart_Toolchain/arm-eabi-uclibc/lib/
The Configure script tells me "Now you must run 'make'.", but I encounter errors like these when I run make:
`sh cflags "optimize='-O2'" miniperlmain.o` miniperlmain.c
CCCMD = arm-linux-gcc -DPERL_CORE -c -DOVR_DBL_DIG=14 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -Wall
In file included from perl.h:38,
from miniperlmain.c:40:
config.h:4425:12: error: operator '==' has no left operand
In file included from miniperlmain.c:40:
perl.h:713:14: error: operator '>=' has no left operand
... ...
In config.h, some macros are left blank, for example:
#define INTSIZE /**/
#define LONGSIZE /**/
#define SHORTSIZE /**/
... much more ...
I think it is these undefined macros that cause the make errors, but I have no idea how to fix them. Why are the macros blank even though Configure succeeded?
Are there any guides to cross-compiling Perl?
The Perl source tree has a Cross directory containing a README file with the following instructions for arm-linux:
1) You should be reading me (README) in perl-5.x.y/Cross
2) Make sure you are in the Cross directory.
3) Edit the file 'config' to contain your target platform information.
4) make patch ## This will patch the existing source-tree.
5) make perl ## Will make perl
(Read the whole thing.)
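In shell form, and only as a sketch (replace 5.x.y with your actual Perl version and point the 'config' file at your own toolchain), the steps boil down to:
cd perl-5.x.y/Cross
# edit 'config' here: set the target architecture and compiler prefix for your cross environment
make patch   # patches the existing source tree for cross-building
make perl    # builds perl for the target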
I found the easiest way to cross-compile Perl for arm-linux. Please refer to Cross-compiling perl. It's great work, and it saved my life.
Just follow the instructions given there and you will get what you want. You may encounter an error like this during 'make':
pp_sys.c:78: error: non-thread-local declaration of 'h_errno' follows thread-local declaration
Simply comment out that line in pp_sys.c.
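If you prefer to script the edit, and assuming the offending line is the bare declaration extern int h_errno; (check your pp_sys.c first, as the exact text can differ between Perl versions), something like this does the commenting for you:
# assumes pp_sys.c:78 reads 'extern int h_errno;' -- verify before running
sed -i 's|^extern int h_errno;|/* extern int h_errno; */|' pp_sys.c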
Enjoy it!
While I can use several turtle scripts in the same directory
(having e.g. pretty.hs and srv.hs interpreted), I learned that I can have
only one of them compiled, e.g. with
ghc -no-user-package-db -package-db .cabal-sandbox/*-packages.conf.d -O2 -threaded -outputdir=. -o srv srv.hs
as this implicitly builds Main.o and Main.hi as well, and srv and pretty would
need two different object files, obviously.
What's the story of Turtle and the Main module anyway: wouldn't it
have been nicer if one could use (and thus choose) a module name, like so:
module Whatever where
import Turtle
I tried to compile the .o files separately, but with no luck:
$ ghc -no-user-package-db -package-db .cabal-sandbox/*-packages.conf.d -O2 -threaded -outputdir=. -c -o MainPretty.o pretty.hs
no complaints so far, but then:
$ ghc -no-user-package-db -package-db .cabal-sandbox/*-packages.conf.d -O2 -threaded -outputdir=. -o pretty MainPretty.o
MainPretty.o: In function `rdyO_info':
(.text+0x40e): undefined reference to `transzuGZZTjP9K5WFq01xC9BAGQpF_ControlziMonadziIOziClass_zdfMonadIOIO_closure'
MainPretty.o: In function `rdyQ_info':
(.text+0x4d6): undefined reference to `transzuGZZTjP9K5WFq01xC9BAGQpF_ControlziMonadziIOziClass_zdfMonadIOIO_closure'
MainPretty.o: In function `cfxy_info':
(.text+0x712): undefined reference to `optpazuFpNJ7fLofFNEy3rK4ZZnBoD_OptionsziApplicativeziTypes_AltP_con_info'
MainPretty.o: In function `cfxy_info':
(.text+0x72e): undefined reference to `systezu0e3pMPmZZzzix21iFp2U03Lc_FilesystemziPathziRules_posixFromText_closure'
MainPretty.o: In function `cfyR_info':
(.text+0x92a): undefined reference to `optpazuFpNJ7fLofFNEy3rK4ZZnBoD_OptionsziApplicativeziTypes_AltP_con_info'
and so on...
Is it nevertheless possible to compile two different turtle scripts in the same directory? How?
Thanks.
Ah, to answer my own question: I just have to remove the Main.o/Main.hi files after compiling (so that fresh ones are created for the next script),
like so:
ghc -no-user-package-db -package-db .cabal-sandbox/*-packages.conf.d -O2 -threaded -outputdir=. -o pretty pretty.hs
rm -f Main.o Main.hi
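An alternative sketch that avoids the clean-up step is to give each script its own -outputdir, so their Main.o/Main.hi never collide (the build-* directory names are arbitrary):
mkdir -p build-pretty build-srv
ghc -no-user-package-db -package-db .cabal-sandbox/*-packages.conf.d -O2 -threaded -outputdir=build-pretty -o pretty pretty.hs
ghc -no-user-package-db -package-db .cabal-sandbox/*-packages.conf.d -O2 -threaded -outputdir=build-srv -o srv srv.hs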
Sorry for the noise
I have an autotools-based BitBake recipe for which I would like binaries installed in /usr/local/bin and libraries installed in /usr/local/lib (instead of /usr/bin and /usr/lib, which are the default target directories).
Here's a part of the autotools.bbclass file which I found important.
CONFIGUREOPTS = " --build=${BUILD_SYS} \
--host=${HOST_SYS} \
--target=${TARGET_SYS} \
--prefix=${prefix} \
--exec_prefix=${exec_prefix} \
--bindir=${bindir} \
--sbindir=${sbindir} \
--libexecdir=${libexecdir} \
--datadir=${datadir} \
--sysconfdir=${sysconfdir} \
--sharedstatedir=${sharedstatedir} \
--localstatedir=${localstatedir} \
--libdir=${libdir} \
...
I thought that the easiest way to accomplish what I wanted to do would be to simply change ${bindir} and ${libdir}, or perhaps change ${prefix} to /usr/local, but I haven't had any success in this area. Is there a way to change these installation variables, or am I thinking about this in the wrong way?
Update:
Strategy 1
As per Ross Burton's suggestion, I've tried adding the following to my recipe:
prefix="/usr/local"
exec_prefix="/usr/local"
but this causes the build to fail during that recipe's do_configure() task, and returns the following:
| checking for GLIB... no
| configure: error: Package requirements (glib-2.0 >= 2.12.3) were not met:
|
| No package 'glib-2.0' found
This package can be found during a normal build without these modified variables. I thought that adding the following line might allow the system to find the package metadata for glib:
PKG_CONFIG_PATH = " ${STAGING_DIR_HOST}/usr/lib/pkgconfig "
but this seems to have made no difference.
Strategy 2
I've also tried Ross Burton's other suggestion to add these variable assignments to my distribution's configuration file, but this causes a failure during meta/recipes-extended/tzdata's do_install() task: it reports that DEFAULT_TIMEZONE is set to an invalid value. Here's the source of the error from tzdata_2015g.bb:
# Install default timezone
if [ -e ${D}${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ]; then
install -d ${D}${sysconfdir}
echo ${DEFAULT_TIMEZONE} > ${D}${sysconfdir}/timezone
ln -s ${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ${D}${sysconfdir}/localtime
else
bberror "DEFAULT_TIMEZONE is set to an invalid value."
exit 1
fi
I'm assuming that I've got a problem with ${datadir}, which references ${prefix}.
Do you want to change paths for everything or just one recipe? Not sure why you'd want to change just one recipe to /usr/local, but whatever.
If you want to change all of them, then the simple way is to set prefix in your local.conf or distro configuration (prefix = "/usr/local").
If you want to do it in a particular recipe, then just assigning prefix="/usr/local" and exec_prefix="/usr/local" in the recipe will work.
These variables are defined in meta/conf/bitbake.conf, where you can see that bindir is $exec_prefix/bin, which is probably why assigning prefix didn't work for you.
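For the distro-wide route, the addition is just a couple of lines in local.conf or your distro .conf; a sketch (untested as written):
# conf/local.conf (or your distro configuration file)
prefix = "/usr/local"
exec_prefix = "/usr/local"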
Your first strategy was on the right track, but by changing only "prefix" you were clobbering more than you wanted. If you look in sources/poky/meta/conf/bitbake.conf you'll find everything you are clobbering when you set "prefix" to something other than "/usr" (as it was in my case). In order to modify only the install path (what would manually be the "--prefix" option to configure), I needed to set all of the following variables in that recipe:
prefix="/your/install/path/here"
datadir="/usr/share"
sharedstatedir="/usr/com"
exec_prefix="/usr"
I'm trying to link a library using mex from the command line, or more exactly, from a Makefile, which I post here:
BDDM_MATLAB = @matlabhome@
MEXCC = $(BDDM_MATLAB)/bin/mex
MEXFLAGS = -v -largeArrayDims -O
MEXEXT = mexa64
TDIR = $(abs_top_srcdir)/test
IDIR = $(abs_top_srcdir)/src
LDIR = $(abs_top_srcdir)/lib
LOP1 = $(CUDA_LDFLAGS) $(LIBS)
SOURCES := $(wildcard *.cpp)
OBJS = $(SOURCES:.cpp=.o)
mTESTS = $(addprefix $(TDIR)/, $(SOURCES:.cpp=.$(MEXEXT)))
all: $(TDIR) $(mTESTS)
$(OBJS) : %.o : %.cpp
$(MEXCC) $(MEXFLAGS) -c -outdir ./ -output $@ $(CUDA_CFLAGS) -I$(IDIR) CFLAGS="\$$CFLAGS -std=c99" $^
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o
$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ $(LOP1) -lmpdcm LDFLAGS="-lcudart -lcuda"
.PHONY = $(TDIR)
$(TDIR):
$(MKDIR_P) $@
clean:
$(RM) *.o
libmpdcm is a static library that includes calls to two shared libraries libcuda and libcudart. My environment has
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH:
My make rule produces
/usr/local/MATLAB/R2014a/bin/mex -v -largeArrayDims -O -L/home/eaponte/projects/test_cpp/lib -outdir /home/eaponte/projects/test_cpp/test test_LayeredEEG.o -L/usr/local/cuda/lib64 -lcudart -lcuda -lmpdcm LDFLAGS="-lcudart -lcuda"
This produces the following gcc command:
/usr/bin/gcc -lcudart -lcuda -shared -O -Wl,--version-script,"/usr/local/MATLAB/R2014a/extern/lib/glnxa64/mexFunction.map" test_LayeredEEG.o -lcudart -lcuda -lmpdcm -L/home/eaponte/projects/test_cpp/lib -L/usr/local/cuda/lib64 -L"/usr/local/MATLAB/R2014a/bin/glnxa64" -lmx -lmex -lmat -lm -lstdc++ -o /home/eaponte/projects/test_cpp/test/test_LayeredEEG.mexa64
The problem is that afterwards I get a linking error in Matlab:
Invalid MEX-file '/home/eaponte/projects/test_cpp/test/test_Fmri.mexa64': /home/eaponte/projects/test_cpp/test/test_Fmri.mexa64: undefined symbol: cudaFree
I know that the solution is simply to put the CUDA libraries at the end of the gcc command:
/usr/bin/gcc -lcudart -lcuda -shared -O -Wl,--version-script,"/usr/local/MATLAB/R2014a/extern/lib/glnxa64/mexFunction.map" test_LayeredEEG.o -lmpdcm -L/home/eaponte/projects/test_cpp/lib -L/usr/local/cuda/lib64 -L"/usr/local/MATLAB/R2014a/bin/glnxa64" -lmx -lmex -lmat -lm -lstdc++ -lcudart -lcuda -o /home/eaponte/projects/test_cpp/test/test_LayeredEEG.mexa64
How can I achieve that when running mex from the command line (or from a Makefile)?
Just to illuminate the problem and solution and offer some help in avoiding the like:
The fundamental rule of linkage with the GNU linker
that your problem makefile transgressed is: In the commandline sequence of entities to be linked, ones that need symbol definitions
must appear before the ones that provide the definitions.
An object file (.o) in the linkage sequence will be incorporated entire in the output executable,
regardless of whether or not it defines any symbols that the executable uses. A library
on the other hand, is merely examined to see if it provides any definitions for symbols
that are thus-far undefined, and only such definitions as it provides are linked into
the executable (I am simplifying somewhat). Thus, linkage doesn't get started until some object file is seen,
and any library must appear after everything that needs definitions from it.
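A tiny illustration with hypothetical names, where main.o calls a function defined in the static library libfoo.a:
gcc -lfoo main.o -o prog   # wrong: libfoo.a is scanned before any symbols are undefined, so it is skipped
gcc main.o -lfoo -o prog   # right: main.o introduces the undefined symbols, libfoo.a then resolves them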
Breaches of this principle usually arise from inept bundling of some linker flag-options
and some library-options together into a make-variable and its placement in the linkage recipe,
with the result that the bundled options are interpolated at a position that is valid for
the flags but not valid for libraries. This was so in your problem makefile, with LOP1 the
bad bundle.
In the typical case, the bundling causes all of the libraries to be placed before all the object files,
and never mentioned again. So the object files yield undefined symbol errors, because the libraries
they require were seen by the linker before it had discovered any undefined symbols, and were ignored.
In your untypical case, it resulted in libcudart and libcuda being seen later than your only
object file test_LayeredEEG.o - which however required no symbols from them - but earlier than
the only thing that did require symbols from them, the library libmpdcm. So they were ignored,
and you built a .mex64 shared library that had not been linked with them.
Long ago - pre-GCC 4.5 - shared libraries (like libcudart and libcuda) were exempt
from the requirement that they should be needed, at the point when the linker sees them,
in order to be linked. They were linked regardless, like object files, and the belief that
this is so has not entirely died out. It is not so. By default, shared libraries and
static libraries alike are linked if and only if needed-when-seen.
To avoid such traps it is vastly helpful to understand the canonical nomenclature of
the make variables involved in compilation and linkage and their semantics, and
their canonical use in compilation and linkage recipes for make. Mex is a
manipulator of C/C++/Fortran compilers that adds some commandline options of its own:
for make purposes, it is another compiler. For the options that it inherits from and
passes to the underlying compiler, you want to adhere to the usage for that compiler in make recipes.
These are the make variables most likely to matter to you and their meanings:
CC = Your C compiler, e.g. gcc
FC = Your Fortran compiler, e.g. gfortran
CXX = Your C++ compiler, e.g. g++.
LD = Your linker, e.g. ld. But you should know that only for specialized uses
should the linker be directly invoked. Normally, the real linker is invoked on your
behalf by the compiler. It can deduce from the options that you pass it whether you
want compiling done or linking done, and will invoke the appropriate tool. When you
want linking done, it will quietly augment the linker options that you pass with
additional ones that it would be very tiresome to specify, but which ensure
that the linkage acquires all the correct flags and libraries for the language of the
program you are linking. Consequently almost always, specify your compiler as your
linker.
AR = Your archiving tool (static library builder)
CFLAGS = Options for C compilation
FFLAGS = Options for Fortran compilation
CXXFLAGS = Options for C++ compilation
CPPFLAGS = Options for the C preprocessor, for any compiler that uses it. Avoid the common mistake of writing CPPFLAGS when you mean CXXFLAGS
LDFLAGS = Options for linkage, N.B. excluding library options, -l<name>
LDLIBS = Library options for linkage, -l<name>
And the canonical make rules for compiling and linking:
C source file $< to object file $@:
$(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<
Free-form Fortran file $< to object file $@, with preprocessing:
$(FC) $(CPPFLAGS) $(FFLAGS) -c -o $@ $<
(without preprocessing, remove $(CPPFLAGS))
C++ source file $< to object file $@:
$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $<
Linking object files $^ into an executable $@:
$(<compiler>) $(LDFLAGS) -o $@ $^ $(LDLIBS)
If you can as much as possible write makefiles so that a) you have assigned the right options to the right variables from
this glossary, and b) used the canonical make recipes, then your path will be much smoother.
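As a sketch under those conventions (prog and prog.cpp are placeholders, and the -I/-L paths are illustrative; the libraries are the ones from your link line):
CXX      = g++
CPPFLAGS = -Iinclude
CXXFLAGS = -O2 -Wall
LDFLAGS  = -Llib
LDLIBS   = -lmpdcm -lcudart -lcuda

# link rule: compiler as linker, libraries last
prog: prog.o
	$(CXX) $(LDFLAGS) -o $@ $^ $(LDLIBS)

# compile rule: preprocessor and compile options only
prog.o: prog.cpp
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c -o $@ $<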
And BTW...
Your makefile has the following bug:
.PHONY = $(TDIR)
This is apparently an attempt to make $(TDIR) a phony target,
but the syntax is wrong. It should be:
.PHONY: $(TDIR)
What the assignment does is simply create a make variable called .PHONY with the value of $(TDIR);
it does not make $(TDIR) a phony target.
Which is fortunate, because $(TDIR) is your output directory and not a phony
target.
You wish to ensure that make creates $(TDIR) before you need to output anything into
it, but you do not want it to be a normal prerequisite of those artefacts, which would oblige
make to rebuild them whenever the timestamp of $(TDIR) was touched. That is presumably
why you thought to make it a phony target.
What you actually want $(TDIR) to be is an order-only prerequisite
of the $(mTESTS) that will be output there. The way to do that is to amend the $(mTESTS) rule to be:
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o | $(TDIR)
This will cause $(TDIR) to be made, if needed, before $(mTESTS) is made, but
nevertheless $(TDIR) will not be considered in determining whether $(mTESTS) does
need to be made.
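In sketch form, the amended rule sits alongside the directory rule like this (the recipe body is yours, with the library order following the rule discussed above):
# $(TDIR) after the '|' must exist before the tests are built,
# but its timestamp never triggers a rebuild of them
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o | $(TDIR)
	$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ -lmpdcm $(LOP1)

$(TDIR):
	$(MKDIR_P) $@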
On the other hand, the targets all and clean are phony targets: no such artefacts
are to be made, so you should tell make so with:
.PHONY: all clean
As pointed out in the comments, the problem was the order of the dynamic libraries in the link flags. After searching for the reason I found on SO that libraries need to be linked taking the order of dependency into account. In my case, the library libmpdcm had dependencies on libcuda and libcudart but appeared before them on the command line. The solution is to swap the order in the makefile from:
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o
$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ $(LOP1) -lmpdcm LDFLAGS="-lcudart -lcuda"
to
$(mTESTS) : $(TDIR)/%.$(MEXEXT) : %.o
$(MEXCC) $(MEXFLAGS) -L$(LDIR) -outdir $(TDIR) $^ -lmpdcm $(LOP1)
I am a beginner in iPhone programming. I am trying to compile the following on Ubuntu:
#import <Foundation/Foundation.h>
int main (void)
{
NSLog (#"Executing");
return 0;
}
I compiled it but get the following error:
subhash@subhash-Lenovo-G570:~/grit/iphone/mac$ gcc -lgnustep-base -lpthread -lobjc -lm -I/usr/local/include/GNUstep -I/usr/include/GNUstep -fconstant-string-class=NSConstantString hello.m -o hello
In file included from /usr/include/GNUstep/Foundation/NSClassDescription.h:30:0,
from /usr/include/GNUstep/Foundation/Foundation.h:50,
from hello.m:1:
/usr/include/GNUstep/Foundation/NSException.h:42:2: error: #error The current setting for native-objc-exceptions does not match that of gnustep-base ... please correct this.
I followed http://ubuntuforums.org/showthread.php?p=5593608 as a reference.
I commented out the #error directive in NSException.h and that problem went away. Now I am getting a new error:
/tmp/ccQlI9wJ.o: In function `main':
hello.m:(.text+0x11): undefined reference to `NSLog'
/tmp/ccQlI9wJ.o: In function `__objc_gnu_init':
hello.m:(.text+0x2a): undefined reference to `__objc_exec_class'
/tmp/ccQlI9wJ.o:(.data+0x40): undefined reference to `__objc_class_name_NSConstantString'
collect2: ld returned 1 exit status
In Compile Objective-C Programs Using gcc there is the following:
Also note that if you did not include -D_NATIVE_OBJC_EXCEPTIONS, you
may run into the following error:
/usr/include/GNUstep/Foundation/NSException.h:42:2: error: #error The
current setting for native-objc-exceptions does not match that of
gnustep-base ... please correct this.
I had the same error as the original poster did, and passing the -D_NATIVE_OBJC_EXCEPTIONS flag fixed the problem for me.
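For concreteness, here is a sketch of the question's command with that flag added; I have also moved the libraries after hello.m, which is the usual cure for the undefined NSLog / __objc_exec_class references above on toolchains that link with --as-needed:
gcc -fconstant-string-class=NSConstantString -D_NATIVE_OBJC_EXCEPTIONS \
    -I/usr/local/include/GNUstep -I/usr/include/GNUstep \
    hello.m -o hello -lgnustep-base -lobjc -lpthread -lm
The gnustep-config output quoted further down also passes -fexceptions -fobjc-exceptions, so adding those as well is probably wise.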
I was trying to do something pretty non-standard, so it might not work for everyone.
Note that shalki's answer may also fix the issue. In case the link referenced there vanishes,
the blog post in question,
Compile Objective-C Programs on Linux
is in Chinese or Japanese or something like that, so I don't know exactly what it is saying, but I think the upshot is
to pass
`gnustep-config --objc-flags`
as an argument to gcc. The post has
gcc `gnustep-config --objc-flags` hello.m -o hello -I /usr/include/GNUstep/ -L /usr/lib/GNUstep/ -lgnustep-base
at the end. Now, on my machine, gnustep-config --objc-flags expands to
-MMD -MP -DGNUSTEP -DGNUSTEP_BASE_LIBRARY=1 -DGNU_GUI_LIBRARY=1 -DGNU_RUNTIME=1 -DGNUSTEP_BASE_LIBRARY=1 -D_REENTRANT -fPIC -Wall -DGSWARN -DGSDIAGNOSE -Wno-import -g -O2 -fno-strict-aliasing -fexceptions -fobjc-exceptions -D_NATIVE_OBJC_EXCEPTIONS -fgnu-runtime -fconstant-string-class=NSConstantString -I. -I/home/faheem/GNUstep/Library/Headers -I/usr/local/include/GNUstep -I/usr/include/GNUstep
Yowza. Notice that this list of flags does contain -D_NATIVE_OBJC_EXCEPTIONS, along with lots of other stuff. For the record, my machine is running Debian squeeze. This might be a Debian/Ubuntu-specific problem. I'm not sure.
Better write a GNUmakefile.
http://www.gnustep.it/nicola/Tutorials/WritingMakefiles/index.html
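For the hello.m above, a minimal GNUmakefile in the gnustep-make style would look roughly like this (assuming gnustep-make is installed and GNUstep.sh has been sourced so that GNUSTEP_MAKEFILES is set):
include $(GNUSTEP_MAKEFILES)/common.make

TOOL_NAME = hello
hello_OBJC_FILES = hello.m

include $(GNUSTEP_MAKEFILES)/tool.make
Then just run make in that directory; the tool ends up under obj/ by default.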
And better stop using gcc and switch to clang.
I want to stop optimization in debug builds in Eclipse CDT, and I read an article about this:
http://husks.wordpress.com/2012/03/29/hardware-debugging-the-arduino-using-eclipse-and-the-avr-dragon/
It is supposed to show the tool settings in Eclipse Indigo, but I didn't see them.
What is the problem?
See this for more info:
https://sourceforge.net/projects/cmusphinx/forums/forum/5471/topic/5170910
This is my makefile:
TOP=../../..
DIRNAME=src/programs/init_gau
BUILD_DIRS =
ALL_DIRS= $(BUILD_DIRS)
SRCS = \
accum.c \
init_gau.c \
main.c \
parse_cmd_ln.c
H = \
accum.h \
init_gau.h \
mk_sseq.h \
parse_cmd_ln.h
FILES = Makefile $(SRCS) $(H)
TARGET = init_gau
ALL = $(BINDIR)/$(TARGET)
include $(TOP)/config/common_make_rules
I found this config file:
# -*- makefile -*-
#
# This file is automatically generated by configure.
# Do not hand edit.
CC = gcc
CFLAGS = -g -O0 -Wall -fPIC -DPIC
CPPFLAGS = -I/media/sda5/sphinx/tutorial/SphinxTrain/../sphinxbase/include -I/media/sda5/sphinx/tutorial/SphinxTrain/../sphinxbase/include
DEFS = -DPACKAGE_NAME=\"SphinxTrain\" -DPACKAGE_TARNAME=\"sphinxtrain\" -DPACKAGE_VERSION=\"1.0.99\" -DPACKAGE_STRING=\"SphinxTrain\ 1.0.99\" -DPACKAGE_BUGREPORT=\"\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_LIBM=1
LIBS = -lm -lsphinxbase
LDFLAGS = -L/media/sda5/sphinx/tutorial/SphinxTrain/../sphinxbase/src/libsphinxad -L/media/sda5/sphinx/tutorial/SphinxTrain/../sphinxbase/src/libsphinxbase -L/media/sda5/sphinx/tutorial/SphinxTrain/../sphinxbase/src/libsphinxbase/.libs
AR = ar
RANLIB = ranlib
FESTIVAL = /usr/bin/festival
PERL = /usr/bin/perl
The options are under Project Properties, as explained in the first tutorial. If you are building a project with an existing makefile, then you need to edit the makefile instead.
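For the existing-makefile route, note that the config file shown above already sets CFLAGS = -g -O0, i.e. optimization is already off. If you do need different flags, one sketch that avoids editing generated files is a command-line override, which in make takes precedence over assignments inside the makefile:
# command-line variables override the CFLAGS set in the generated config;
# adjust the -O level here as needed
make CFLAGS="-g -O0 -Wall -fPIC -DPIC"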
You don't typically need to change the project properties. The Debug configuration builds without optimization by default. You just need to make sure you have it selected. This is done using the icon (sundial? - the one next to CDT's build icon, the hammer).