memory command is not available even after compiling with the TCL_MEM_DEBUG flag

I have a memory-related problem in my application on Solaris 9, where Tcl_DeleteInterp() ends up calling free() and mutex_unlock() a great many times. To debug the problem I followed the steps below to build Tcl on the Solaris server (with the TCL_MEM_DEBUG flag), but I still cannot use the 'memory' command in my interpreter.
1. Ran the configure script on the server (./configure --prefix=<directory to install into> --enable-symbols=mem).
2. make clean all
3. make install (the Tcl libraries and the tclsh executable are copied to the path specified in step 1).
4. The build produced two libraries (libtcl8.4g.so and libtclstub8.4g.a); I copied libtcl8.4g.so into my application as libtcl8.4.so.
5. Copied the tcl8.4 directory as well.
6. Copied tclsh8.4 to $PROVHOME/bin and created a soft link tclsh -> tclsh8.4.
7. In my application I linked against the debug-symbol-enabled libraries at exactly the place where I create the Tcl interpreter.
8. Initialized the Tcl interpreter using the Tcl_InitMemory() function (so that the memory command is registered in the supplied interpreter).
When I use the interpreter executable (tclsh) on its own I can execute the memory command, but when the same build is used from my application it doesn't work. Can someone help me understand what the possible reason for this could be?
Also, how can I cross-verify that the libraries really were compiled with the TCL_MEM_DEBUG flag?
Finally, does the Tcl source tarball contain a Solaris directory in which I have to build the libraries, or should I use the unix source directory for the Solaris platform as well?
Thanks

Are you using [mem] interactively (which does expansion of unambiguous short command names) and forgetting to use the full name ([memory]) in your scripts?

You're using Tcl embedded in your code? You need to call Tcl_InitMemory (passing in the handle to the interpreter where you want the memory command created) after creation of the interpreter and before you run user scripts, i.e., straight after the Tcl_CreateInterp gives you the handle (which should in turn come after the Tcl_FindExecutable call that initializes the shared parts of the library).
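As a rough sketch of that ordering (assuming your embedding code is C and is itself compiled with -DTCL_MEM_DEBUG, so that ckalloc/ckfree map to the debugging allocator), it looks something like this:

    #include <stdio.h>
    #include <tcl.h>

    int main(int argc, char *argv[])
    {
        Tcl_Interp *interp;

        Tcl_FindExecutable(argv[0]);   /* initialise the shared parts of the library */
        interp = Tcl_CreateInterp();   /* get the interpreter handle */
        Tcl_InitMemory(interp);        /* register the [memory] command in that interpreter */

        /* ...only now source/run user scripts... */

        /* quick sanity check: this fails if [memory] never got registered */
        if (Tcl_Eval(interp, "memory info") != TCL_OK) {
            fprintf(stderr, "no [memory] command: %s\n", Tcl_GetStringResult(interp));
        }

        Tcl_DeleteInterp(interp);
        return 0;
    }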
You must also make sure that everything is built with that flag set so that the correct memory allocation APIs are used both in your code and in Tcl where the two integrate, and you must make sure that you are linking against the debugging build. It's probably the linking that has gone wrong, but I've not done that level of development on Solaris for many years.
I think you'll find that “Getting a list of used libraries by a running process (unix)” is relevant to your problems.
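From outside the process you can check which libraries are really in use with ldd on the executable or, on Solaris, pldd <pid> on the running process. From inside the process, a small sketch like the following (dladdr should be available on Solaris and Linux, though the cast of a function pointer may draw a compiler warning) reports which shared object actually supplies Tcl and whether the [memory] command ended up registered:

    #include <dlfcn.h>
    #include <stdio.h>
    #include <tcl.h>

    static void reportTclLibrary(Tcl_Interp *interp)
    {
        Dl_info info;

        /* which shared object provides Tcl_CreateInterp in this process? */
        if (dladdr((void *) Tcl_CreateInterp, &info) != 0 && info.dli_fname != NULL) {
            printf("Tcl library in use: %s\n", info.dli_fname);
        }

        /* an empty result means the [memory] command was never registered */
        if (Tcl_Eval(interp, "info commands memory") == TCL_OK
                && *Tcl_GetStringResult(interp) != '\0') {
            printf("[memory] is available in this interpreter\n");
        } else {
            printf("[memory] is NOT available in this interpreter\n");
        }
    }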

Related

gtk+ without msys2 mingw

On Windows, I am trying something with gtk+. I have downloaded MSYS2, along with gtk+3.0.
I successfully compiled all the gtk+3.0 examples in the MSYS2 mingw-w64 terminal.
Now I want to move a bit further and try to work without the MSYS environment.
I opened up cmd and navigated to where the example executables were compiled, then fired them up by typing "example.exe".
libgio-2.0-0.dll was missing; I was not surprised. I went back to check the PATH of the MSYS environment: PATH=/mingw64/bin/:/usr/local/bin:/usr/bin/:/bin:/c/Windows/System32:... and so on.
So in the cmd environment I did set PATH=%PATH%;pathto/mingw64/bin; and ran example.exe again.
This time it gave a very strange error: cannot find entrypoint inflateValidate (in dll libpng16-16.dll).
So I checked, and indeed there is no inflateValidate function in that DLL. It seems that something expected the function to be in the DLL and tried to call it, but failed because it doesn't exist. What I don't understand is why it did not fail in the MSYS environment but failed in the Windows environment. And does this have any impact on me if I am going to ship a gtk application? I thought simply distributing the relevant DLLs would be enough.
Instead of adding the mingw64/bin path to the PATH variable, I have also tried copying the required DLLs to the executable's location one by one, but in the end it still gave the same error.
I have also searched for other libpng*.dll files on my computer; none of them contained the inflateValidate function.
If anyone knows what's going on, please shed some light on it.
I might be very late to the party, but I ran into the same issue yesterday (the missing inflateValidate symbol), and after checking the contents of the zlib1.dll file I could confirm that the function is simply not there.
I downloaded another version (specifically this one: https://sourceforge.net/projects/uqm-mods/files/latest/download, though I am in no way affiliated with this project) and saw that the inflateValidate symbol was indeed declared, so I suppose that the zlib bundled with your files is not up to date with what libpng requires.
This solved my problem. I hope it solves yours too.
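If you want to check this kind of thing for yourself, a small Win32 sketch along these lines shows which copy of each DLL the loader picks up and whether zlib actually exports the symbol (the DLL and symbol names are just the ones from this question; running it from cmd and from the MSYS2 shell will show the effect of the different PATH values):

    #include <windows.h>
    #include <stdio.h>

    static void whereFrom(const char *dllName)
    {
        char path[MAX_PATH];
        HMODULE mod = LoadLibraryA(dllName);   /* roughly the search order example.exe uses: exe dir, system dirs, PATH */

        if (mod == NULL) {
            printf("%s: could not be loaded (error %lu)\n", dllName, GetLastError());
            return;
        }
        GetModuleFileNameA(mod, path, sizeof(path));
        printf("%s was loaded from %s\n", dllName, path);
        FreeLibrary(mod);
    }

    int main(void)
    {
        HMODULE zlib;

        whereFrom("libpng16-16.dll");
        whereFrom("zlib1.dll");

        /* inflateValidate should be exported by zlib1.dll; libpng merely imports it from there */
        zlib = LoadLibraryA("zlib1.dll");
        if (zlib != NULL) {
            printf("inflateValidate is %s in zlib1.dll\n",
                   GetProcAddress(zlib, "inflateValidate") != NULL ? "exported" : "MISSING");
            FreeLibrary(zlib);
        }
        return 0;
    }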

cross-compile postgresql for ARM Sitara AM335x

I'm having trouble cross compiling PostgreSQL for my TI Sitara AM335x EVM SK. My host system is an i386 machine running Ubuntu 12.04.
My application is written in C++ using Qt. When I try to compile, I get an error that libpq.so is incompatible. I believe this is because the cross compiler is trying to use the host's libpq.so instead of one built for the target system (which, as I have found out, doesn't exist).
I've downloaded the source for PostgreSQL with the intention of cross compiling it in order to get a libpq.so that is compatible with my target system; however, there is virtually no information on how to do this.
I have tried passing the CC variable to the configure script to change the compiler, like this: CC=/home/tim/ti-sdk-am335x-evm-06.00.00.00/linux-devkit/sysroots/i686-arago-linux/usr/bin/arm-linux-gnueabihf-gcc, but the configure script gives me this error: configure: error: cannot run C compiled programs. If you meant to cross compile, use --host.
The configure file makes a small reference to the --host option, but the only information in it that I could find refers to mingw and Windows, which isn't what I want.
I've done some quick searching through the configure file; it references the --host option, but with no explanation of what a valid host is. I'm assuming that along with the --host option there will be an associated --target.
What arguments can I give the configure script so that it cross compiles with the correct compiler and generates a library that my target device can use? Are there any resources out there that I haven't found on how --host/--target work or how to use them?
OK, so after fiddling around for a little while, I think I was actually able to cross compile PostgreSQL and answer my own question.
Before I went any further, I had realized I had forgotten to add the path to my cross compiler to the PATH environment variable. I used the command export PATH=/path/to/cross/compiler:$PATH to insert the compiler path to the PATH environment variable.
Next, I did some experimenting with the --host option. To start off with I tried using ./configure --host=arm-linux-gnueabihf and running the configure script. The configure script seemed to accept this as the host argument. I then went to the next step of running the makefile. Running this makefile resulted in errors being generated. The errors were selected processor does not support Thumb mode. I did a quick search to see what information I could find about this error and came to this webpage: http://www.postgresql.org/message-id/E1Ra1sk-0000Pq-EL#wrigleys.postgresql.org.
This webpage gave me a bit more information since it seemed like the person was trying to do something very similar to me. One of the responders to the post mentioned that --disable-spinlocks is intended for processors that aren't supported by default by PostgreSQL. I emulated the arguments that were used in the website listed above and used the command: ./configure --host=arm-linux CC=arm-linux-gnueabihf-gcc AR=arm-linux-gnueabihf-ar CPP=arm-linux-gnueabihf-cpp --without-readline --without-zlib --disable-spinlocks to generate my makefile. This makefile actually generated all of the files, including the libpq.so library file I was needing.
Hope this helps somebody else in the future!
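If it helps anyone doing the same thing: a minimal libpq client like the sketch below is a quick way to confirm that the cross-built library is actually usable. Compile it with the same arm-linux-gnueabihf-gcc, pointing -I at the PostgreSQL headers and -L at the freshly built libpq.so, then copy it to the target and run it there (the connection parameters are only placeholders):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* placeholder connection string; use any server reachable from the board */
        PGconn *conn = PQconnectdb("host=127.0.0.1 dbname=postgres user=postgres");

        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return EXIT_FAILURE;
        }
        printf("connected; server version %d\n", PQserverVersion(conn));
        PQfinish(conn);
        return EXIT_SUCCESS;
    }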

Could not access the MCR component cache

I want to use CGI and the Apache web server to let users run my compiled MATLAB application (an .exe file). I followed the instructions in this guide.
But I received an error in the web server logs:
[error] [client 127.0.0.1] Could not access the MCR component cache., referer: .../standalone.html
I am using MATLAB 2012a. Is there any way to control the MCR cache for applications compiled with MATLAB 2012a? There is no longer a separate CTF file alongside the compiled output in this MATLAB release.
Thanks.
I believe that in recent versions of MATLAB Compiler the CTF archive is embedded in the .exe by default, but that you can change that back, either by selecting an option from within the deploytool settings, or by using the -C parameter with mcc.
The CTF archive would normally expand automatically the first time you run the component, but if you need to manually expand it (I believe there are reasons you need to when calling it from CGI, although I've never done it myself) there's a utility called extractCTF.exe in matlabroot\toolbox\compiler\arch, where arch is your OS type, such as win32 etc.
See here and here for more information.
Hope that helps!

Can't load 'C:/strawberry/perl/site/lib/auto/XML/LibXML/LibXML.dll' for module XML::LibXML

I have downloaded Strawberry Perl and am writing an application with CGI, Perl, and Apache (on WinXP SP3).
One of the libraries (written by someone else) that I am using relies on XML::LibXML. When I load the page it gives an Internal Server Error, and in the Apache error log I can see this error:
Can't load 'C:/strawberry/perl/site/lib/auto/XML/LibXML/LibXML.dll' for module XML::LibXML: load_file:The specified module could not be found at C:/strawberry/perl/lib/DynaLoader.pm line 190.
C:/strawberry/perl/site/lib/auto/XML/LibXML/LibXML.dll exists with all permissions.
Also this library works properly on Linux. My application also works fine if I remove all code that needs LibXML.
Can anyone tell me what the possible issue could be here?
If you peek into the source for DynaLoader you'll find:
Many dynamic extension loading problems will appear to come from
this section of code: XYZ failed at line 123 of DynaLoader.pm.
Often these errors are actually occurring in the initialisation
C code of the extension XS file. Perl reports the error as being
in this perl code simply because this was the last perl code
it executed.
You should also have gotten (but may not have noticed) a Windows error dialog, which provides a more accurate error message.
The problem isn't that perl can't find LibXML.dll; it's that LibXML.dll can't find the real libxml. (The former is just a wrapper that provides Perl bindings for the latter.) To fix that you need to ensure that Strawberry Perl's c\bin folder is in your PATH. In your case, that would be C:\strawberry\c\bin.
You might have to check the environment variable settings in Windows: make sure that the installation path of the module is present in the PATH variable.
The reason it works on Linux is that the makefiles usually set the environment variables for you there; on Windows they may not have been set properly.
For example, go to Control Panel\System and Security\System, click Change settings, then the Advanced tab; in the user variables section, see if there is a variable called PERL5LIB.
If not, create a new PERL5LIB variable and add the path of your library (usually C:\Perl\site\lib, but it may be different in your case).
I had the same issue after installing Strawberry Perl. It was working fine when I ran the script on the server, but not remotely from an automation tool. The issue was that the environment variables were not updated when the script was run remotely, so I rebooted the server, which resolved the issue.
I encountered the same problem recently; in my case it was not related to the PATH variable (it was already correct). The thing is, I was executing my script from the Git Bash console, and as it turned out the Git Bash perl was being used instead of Strawberry's (see "git-bash perl should not be first in path"). Switching to the standard Windows CMD terminal helped.

How to Stop Perl EPIC eclipse plugin showing errors

My Eclipse tries to compile/build the Perl files in my Java project and fails. I installed Perl EPIC just for syntax colouring; how can I get it to ignore errors?
I tried going into Project -> Properties -> Builders and unchecking Perl EPIC, but this didn't change anything.
I'm using Eclipse Helios Service Release 1 (Build id: 20100917-0705) on Windows XP.
I have basically the same issue as this question: "How can I set up Eclipse to edit Perl without the runtime checking?"
I've been looking into a similar issue for quite some time too. Apparently the EPIC Perl plugin checks every folder and file it finds inside the project, so in a project like mine, which has config files and data directories, it goes into them and tries to validate "perl stuff", which is evidently an annoyance: the error log view displays a lot of useless information.
Did you try unchecking the "Perl Auto Builder"?
I can't quite parse this sentence in the context of your question: "My eclipse tries to compile/build perl files in my java project and fails."
Are you saying that you are running Perl as part of a Java project and getting the inevitable error messages because it is not Java? I'm just wondering why you don't set your Perl program up as a Perl project, possibly referenced by your Java project, assuming that is what you are trying to do.
Generally, when I set up a Perl project, I edit its properties and set its include path to match the current directory or local module paths. Suppose there are self-written modules I must call that are not present on this machine (for example, I wouldn't have FOO::smb on a Windows machine -- it makes no sense there; when I am developing for Linux, I put all my functions in it for convenience's sake).
In that case, I create a FOO directory in the workspace and create a dummy FOO::smb module with however many stub functions it takes to get me going, and let my syntax highlighting and error checking do their proper jobs for me. If I write dummy subs that match the real modules well enough, I can debug my scripts somewhat before uploading them. I figure I should be well enough aware of what they are supposed to do anyway.
I will go so far as to dummy out CPAN modules assuming that installing them on my development workstation makes no sense or is impossible. Highlighting and syntax checking are both invaluable tools, and finding a way to make both of them work saves my sanity.