How can I tell whether Eclipse is 32-bit (x32) or 64-bit (x64) on OS X?
You can use the file command to get more information about a file:
$ file /Applications/Preview.app/Contents/MacOS/Preview
/Applications/Preview.app/Contents/MacOS/Preview: Mach-O universal binary with 2 architectures
/Applications/Preview.app/Contents/MacOS/Preview (for architecture x86_64): Mach-O 64-bit executable x86_64
/Applications/Preview.app/Contents/MacOS/Preview (for architecture i386): Mach-O executable i386
As you can see, /Applications/Preview.app/Contents/MacOS/Preview (the executable file for Preview) is both 32- and 64-bit. Substitute the path to Eclipse’s executable and you should be able to find out.
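For example, a quick check might look like this (the path below assumes a typical Eclipse install under /Applications; substitute the actual location of your Eclipse.app):
$ file /Applications/eclipse/Eclipse.app/Contents/MacOS/eclipse
If the output lists x86_64, the launcher is 64-bit; if it only lists i386, it is 32-bit.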
Run Activity Monitor while Eclipse is running. There is a column called 'Kind' that shows whether an app is running in 32-bit or 64-bit mode.
I have a 64-bit Mac running OS X 10.8.5, and I have Xcode installed. I can also verify that gcc works from the command line. When I type mex -setup I get:
The options files available for mex are:
1: /Applications/MATLAB_R2013a.app/bin/mexopts.sh :
Template Options file for building MEX-files
0: Exit with no changes
This is unhelpful. And when I type make, with all of the relevant libsvm files in my folder of choice, I get
make
xcodebuild: error: SDK "macosx10.7" cannot be located.
xcrun: error: unable to find utility "clang", not a developer tool or in PATH
mex: compile of ' "libsvmread.c"' failed.
If make.m fails, please check README about detailed instructions.
Is anyone able to help me with this?
The quickest thing is to edit the mexopts.sh file directly, using your favorite text editor (you may need to do this with "Administrator Privileges"). The file:
/Applications/MATLAB_R2013a.app/bin/mexopts.sh
defines a bunch of paths and flags for invoking the C/C++ compiler on your system. It tends not to keep up with revisions to Mac OS X.
On my system, I had to make the following changes:
lines 258-260
CC='gcc'
SDKROOT='/Developer/SDKs/MacOSX10.6.sdk'
MACOSX_DEPLOYMENT_TARGET='10.6'
line 273
CXX=g++
There will be many references to "CC=" in the file; you're looking for the ones that follow the line
maci64)
But the correct values for your system depend on which gcc/g++ you have and where they are installed. As you can see, I have the Mac OS X 10.6 Developer tools installed under /Developer. You will need an install of the Developer tools (Xcode) - see
How to use/install gcc on Mac OS X 10.8 / Xcode 4.4
In more recent versions of the Xcode tools, the path might look more like:
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk
But compiling MEX code with more recent versions of Xcode might cause other problems - I had issues with char16_t, see:
MEX compile error: unknown type name 'char16_t'
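To make the kind of edit described above concrete, the maci64 variables pointed at a newer Xcode might look roughly like this (a sketch only - the exact compiler names, SDK version, and deployment target depend on your Xcode install, so check what your machine actually has):
CC='gcc'
CXX='g++'
SDKROOT='/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk'
MACOSX_DEPLOYMENT_TARGET='10.9'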
This might seem like a very specific question, but the central idea is quite broad.
I have a simple hello world console application in C. I've compiled it on Mac OS X using the following commands:
$ export PLATFORM=/Developer/Platforms/iPhoneOS.platform
$ $PLATFORM/Developer/usr/bin/arm-apple-darwin10-llvm-gcc-4.2 -o hello hello.c -isysroot $PLATFORM/Developer/SDKs/iPhoneOS5.0.sdk/
It compiles successfully but gives this warning:
ld: warning: -force_cpusubtype_ALL will become unsupported for ARM architectures
Now, when I run lipo -info hello I get Non-fat file: hello is architecture: arm
Which specific ARM architecture is it, and how do I compile it specifically for armv7?
A) "Lipo" is only for fat binaries (that is , multi-architecture). You're running it on a Mach-O file, single architecture. If you tried "file hello" it would tell you "mach-o Executable arm".
B) "arm" is , if memory serves, armv6. You can compile to armv6 by specifying "-arch armv7". You also specify "armv7s" (for Apple A6 devices), and now also arm64 (technically armv8) for 5S/iPad Air/Mini 2. Though technically, all ARM architectures are also v6 compatible, and the v7/v7s only makes a difference for NEON/SIMD instructions.
C) You can compile multiple times for different architectures (even x86_64) with different -arch specifiers, then use lipo -create to fuse all the binaries together to one big binary (hence the name "fat" binary), which would work on all devices.
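A rough sketch of that workflow, reusing the compiler invocation from the question (the architecture list and output names are illustrative):
$ CC=$PLATFORM/Developer/usr/bin/arm-apple-darwin10-llvm-gcc-4.2
$ SYSROOT=$PLATFORM/Developer/SDKs/iPhoneOS5.0.sdk/
$ $CC -arch armv6 -o hello_armv6 hello.c -isysroot $SYSROOT
$ $CC -arch armv7 -o hello_armv7 hello.c -isysroot $SYSROOT
$ lipo -create hello_armv6 hello_armv7 -output hello
$ lipo -info hello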
Say I choose mexopts.sh as the configuration file for mex.
How does mex decide which of the options listed in mexopts.sh is used for compiling?
For example, using 32-bit MATLAB on 64-bit Mac OS X,
mexopts.sh looks like:
maci)
........
maci64)
......
Which one, maci or maci64, is used when compiling?
What commands or settings can I use in order to compile a 32-bit lib instead of a 64-bit lib?
Further explanation of my process and the error message I got:
I am using Mac OS X 10.8 (64-bit) with MATLAB R2010a (32-bit) to produce a binary MEX-file.
Xcode is version 4.6, and I have the Command Line Tools installed. I also downloaded the MATLAB patch for the 2011 and 2012 versions anyway (if I don't install the patch, I get a lot of link errors saying some header files are missing).
After I installed the patch (I believe it changes my mexopts.sh file), when I run mex a.cpp I get an error message saying that /Applications/MATLAB_R2010a.app/bin/maci64 cannot be found. Of course it cannot find the maci64 folder; my MATLAB is 32-bit, so there is only a maci folder.
Does anyone know what I should do to make MATLAB look for the maci folder instead of the maci64 folder? Thanks a lot!
MATLAB does not support cross compilation of MEX files. So your 32-bit MATLAB installation should be producing 32-bit MEX files even though the OS is 64-bit.
Also, from the article I've linked
Further, beginning with R2010b, a 32-bit version of MATLAB is no longer produced for the Mac.
If you're running R2010b or later, your MATLAB is not 32-bit anyway.
To see what switches the MEX script is invoking the compiler with, use the -v option.
You can also use the file tool to check whether the generated binary is 32 or 64-bit.
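For instance (yourmex.c is a placeholder name, and the .mexmaci / .mexmaci64 extensions correspond to 32-bit and 64-bit Mac MEX files respectively):
>> mex -v yourmex.c
$ file yourmex.mexmaci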
As has been explained, MATLAB produces MEX files of the same bitness as itself, not that of the OS. This is true at least on Windows with recent MATLAB versions, where you can have either 32-bit or 64-bit MATLAB running on 64-bit Windows. Other platforms are moving towards 64-bit-only versions.
Here is another way to get the configured mex switches:
>> cc = mex.getCompilerConfigurations
>> cc.Details
In my case I get:
>> cc = mex.getCompilerConfigurations
cc =
CompilerConfiguration with properties:
Name: 'Microsoft Visual C++ 2010'
Manufacturer: 'Microsoft'
Language: 'C++'
Version: '10.0'
Location: 'C:\Program Files\Microsoft Visual Studio 10.0'
Details: [1x1 mex.CompilerConfigurationDetails]
LinkerName: 'Microsoft Visual C++ 2010'
LinkerVersion: '10.0'
>> cc.Details
ans =
CompilerConfigurationDetails with properties:
CompilerExecutable: 'cl'
CompilerFlags: [1x115 char]
OptimizationFlags: '/O2 /Oy- /DNDEBUG'
DebugFlags: '/Z7'
LinkerExecutable: 'link'
LinkerFlags: [1x327 char]
LinkerOptimizationFlags: ''
LinkerDebugFlags: '/debug /PDB:"%OUTDIR%%MEX_NAME%%MEX_EXT%.pdb"'
To answer my own question, just for those who may be interested:
I checked the contents of mexopts.sh and modified the maci section (specifically, I set ARCH=i386), then compiled. The error message is gone.
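For reference, the edit amounted to something like this inside the maci) block of mexopts.sh (a sketch; the surrounding flags are omitted and exact contents vary between MATLAB releases):
maci)
    # force 32-bit Intel output; the other flags in this block already reference $ARCH
    ARCH='i386'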
Here is the tool stack: Installed on Windows 7 (x64) is Eclipse (Juno x64) with CDT and the SConsolidator plugin. Underneath is the TDM-GCC (x64) bundle installed with 64-bit support.
If I build a 64-bit application and debug it using Eclipse (which uses gdb bundled with GCC), it builds without error and debugs fine.
When I build a 32-bit application and try to debug it with Eclipse, it builds fine but gdb fails:
gdb: unknown target exception 0x4000001f...
Debugging it with the same gdb via command line works fine.
Any ideas on how to work around this?
FYI: Here are some warnings leading up to the gdb exception:
warning: `C:\Windows\system32\ntdll.dll': Shared library architecture i386:x86-64 is not compatible with target architecture i386.
warning: `C:\Windows\SYSTEM32\wow64.dll': Shared library architecture i386:x86-64 is not compatible with target architecture i386.
warning: `C:\Windows\SYSTEM32\wow64win.dll': Shared library architecture i386:x86-64 is not compatible with target architecture i386.
warning: `C:\Windows\SYSTEM32\wow64cpu.dll': Shared library architecture i386:x86-64 is not compatible with target architecture i386.
warning: Could not load shared library symbols for ntdll32.dll.
Do you need "set solib-search-path" or "set sysroot"?
warning: Could not load shared library symbols for WOW64_IMAGE_SECTION.
Do you need "set solib-search-path" or "set sysroot"?
warning: Could not load shared library symbols for WOW64_IMAGE_SECTION.
Do you need "set solib-search-path" or "set sysroot"?
warning: Could not load shared library symbols for NOT_AN_IMAGE.
Do you need "set solib-search-path" or "set sysroot"?
warning: Could not load shared library symbols for NOT_AN_IMAGE.
Do you need "set solib-search-path" or "set sysroot"?
I had a similar problem in a different scenario, but I think the solution should be applicable here too.
I was using gdb.exe downloaded from http://www.equation.com/servlet/equation.cmd?fa=gdb to debug a C++ program.
I first tried the 64-bit version because my PC is 64-bit, but I got the same error as Paul did above. Then I tried the 32-bit gdb.exe and it worked.
I also followed the links given by Paul, and there is a bundle available for 32-bit as well. So I assume the choice of bundle depends on the type of application rather than on the platform. That said, I doubt the 64-bit bundle will work on a 32-bit architecture; I haven't tried that and can't say for sure.
I suggest installing the bundle that supports 32-bit in order to debug a 32-bit application.
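Whichever bundle you pick, one quick sanity check is to run the gdb you intend to use from a command prompt, load your executable, and ask it which architecture it sees (myapp.exe is a placeholder for your own binary):
$ gdb myapp.exe
(gdb) show architecture
The architecture reported should match your application: i386 for a 32-bit build, i386:x86-64 for a 64-bit one, matching the names in the warnings above.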
I just installed CUDA 5.0 Preview (Mac OS X Lion) and I'm having trouble with Nsight.
The toolkit seems to be installed correctly. (Driver loads, nvcc -V works in bash, samples work fine).
When I create a new project I get warnings:
Error launching external scanner info generator (nvcc -dryrun ...)
Program 'nvcc' is not found in $PATH
In Preferences -> CUDA Toolkit I get "no CUDA-compatible devices detected", which is strange because I have an NVIDIA GT 650M in my machine. So why doesn't Nsight recognize it?
If I try to build a project I get 2 errors:
/bin/sh: nvcc: command not found
make: *** [src/test.o] Error 127
How do you start Nsight? Do you use /usr/local/cuda/bin/nsight? Unfortunately, it is not currently possible to launch Nsight by double-clicking the application on Mac OS X.
In the CUDA 5.0 Preview build we had a bug where the shell script did not properly set up the paths. This is how the script looks in the latest internal toolkit builds (you may need to adjust the paths depending on your toolkit install location; in the final release the installer will handle this):
#!/bin/sh
PATH="$PATH:/Developer/NVIDIA/CUDA-5.0/bin" DYLD_LIBRARY_PATH="$DYLD_LIBRARY_PATH:/Developer/NVIDIA/CUDA-5.0/lib" "/Developer/NVIDIA/CUDA-5.0/libnsight/nsight.app/Contents/MacOS/nsight" $#