How to use MallocStackLogging on the device?

I have a memory issue in an iPhone app that I'd like to debug with MallocStackLogging. The error involves the gyroscope, so I have to debug on the device, not in the simulator.
I've set the MallocStackLogging environment variable, and the iPhone properly records the malloc stack logs:
MyApp(1856) malloc: recording malloc stacks to disk using standard recorder
MyApp(1856) malloc: stack logs being written into /private/var/mobile/Applications/1FD1F8D2-5D30-4AA7-B426-C52FE20266DE/tmp/stack-logs.1856.MyApp.index
MyApp(1856) malloc: Please issue: cp /private/var/mobile/Applications/1FD1F8D2-5D30-4AA7-B426-C52FE20266DE/tmp/stack-logs.1856.MyApp.e8z3IL.link /tmp/
Now how can I work with them?
I can transfer them to the Mac using the Xcode Organizer. But what should I do with these two files?
stack-logs.1856.MyApp.index
stack-logs.1856.MyApp.e8z3IL.link
I tried moving the files to /tmp on the Mac and ran:
$ malloc_history 1856 -all_events
malloc_history cannot examine process 1856 because the process does not exist.
Clearly, the malloc_history command looks for running processes on the local machine. I'm missing an option to specify the log file manually.
Is there any way to get this to work either directly working with Xcode on the (non-jailbroken) device or after transferring the logs to the Mac?

Here is how I debug an app with malloc stack history on an iDevice. It's really complicated, but I had no other way to deal with an autorelease-pool memory problem.
You need a jailbroken iDevice with the developer tools installed; that gives you gdb.
To enable malloc stack logging, you need to set the environment variables MallocStackLoggingNoCompact and MallocStackLogging, which requires a little trickery.
First, grant your app root privileges:
mv -f /User/Applications/xxxxxxxxxxxxx/YOUR_APP.app /Applications/YOUR_APP.app
cd /Applications
chown -R root:wheel YOUR_APP.app
chmod 4755 YOUR_APP.app/YOUR_APP
Rename your program:
mv YOUR_APP.app/YOUR_APP YOUR_APP.app/BACK_UP_NAME
Use a short shell script to start your program, so the environment variables are preserved. Save it as YOUR_APP.app/YOUR_APP:
#!/bin/bash
export MallocStackLogging=1
export MallocStackLoggingNoCompact=1
exec /Applications/YOUR_APP.app/BACK_UP_NAME
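One detail the steps above gloss over: the wrapper script must itself be executable, or the app won't launch. A minimal sketch (the exact mode is up to you):
chmod 755 /Applications/YOUR_APP.app/YOUR_APP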
Done.
Just start your app, by tapping its icon or using the open command, and you'll see a stack log file in the /tmp directory.
Use ps aux | grep YOUR_APP to find the process ID, attach with gdb -p PROCESS_ID, set a breakpoint, and try info malloc ADDRESS; the malloc history will show up.
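Put together, a session might look roughly like this (the PID, breakpoint name, and address are placeholders):
$ ps aux | grep YOUR_APP        # note the process ID, e.g. 1856
$ gdb -p 1856                   # attach to the running app
(gdb) break SOME_FUNCTION       # any convenient breakpoint
(gdb) continue
(gdb) info malloc 0x13f50a200   # allocation history for the suspect address, per the step above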

In the Instruments application, which can diagnose an app running in the simulator or on a device, the Allocations instrument records memory addresses and allocation histories. You can browse by object/allocation type or specific memory address. This is likely the most straightforward way to accomplish what you want.
Running malloc_history on the device would require either jailbreaking to enable an ssh connection to the device, or running malloc_history from within your code. But I am not certain whether malloc_history exists on an iOS device. And malloc_history's help text does not mention an option for operating on log files rather than an existing process, which you likely already know.

I don't mean to sound flippant, but have you tried plugging the device in and running the app under the debugger whilst connected?
I do extensive debugging whilst running the application on the device. You do need to start the application under the debugger.


save files in MATLAB with user ownership

I am using the savefig() and saveas() functions to save .fig and .jpg files respectively in MATLAB (R2015a, Ubuntu 14.04, personal computer, single account). However, the owner of the generated files is root. I want the owner to be my user account.
I can use chown in the terminal to take ownership afterwards, but I want that to happen directly from MATLAB, i.e. at the time of file creation.
Also, this problem was not occurring before. I just made a fresh installation of OS and all software, and this behaviour started happening.
I agree with previous users that this is more likely an issue of what user starts MATLAB to begin with.
A quick and dirty way of solving this issue is using the system command.
system('chown user:group DIRTOSAVEDFILE');
or
system(sprintf('chown %s:%s %s',USERSTRING, GROUPSTRING, SAVEDFILEDIR));
Please reconsider using system if you plan to distribute this code, as the system command gives access to /bin/sh (maybe even with root privileges, depending on how MATLAB is started).
I have figured out what I was doing wrong.
I was running MATLAB with sudo matlab, which is why the files being saved to disk were owned by root. The reason I was running MATLAB as root was that simply running matlab in the terminal was not working for me: MATLAB raised a Java exception, "Error starting desktop". To resolve that error, I had to take ownership of MATLAB's preferences directory, ~/.matlab/R2015a, by running sudo chown -R username:username ~/.matlab/R2015a/. Now I can run MATLAB without sudo, and the generated files have my ownership as well. I used the following link to solve my ownership problem:
http://in.mathworks.com/matlabcentral/answers/50971-matlab-r2012b-java-exception-error-starting-desktop
Thanks for the comments and answers. I should have done more research I guess.

How can I get a backtrace of this Perl project? (Segmentation fault)

I am testing a web application at my job. I use Debian. I don't know exactly what kind of project it is; I just know that it is built in Perl and uses PostgreSQL. The back end uses Carton, a Perl module dependency manager (aka Bundler for Perl): http://search.cpan.org/~miyagawa/Carton-v1.0.12/lib/Carton.pm. To run the back end I have to start PostgreSQL (sudo su postgres) and then execute carton exec foo, and the back end starts working. But today, after some updates and upgrades, I ran it and got the error message Segmentation fault. I found that to check what was going on I had to get a backtrace, so I found and read this article:
https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
but I still don't understand how to run the project with GDB.
Thanks
I still didn't find out how to get a backtrace: once I start gdb, I don't know what to type to run the back end under it.
It's hard to answer your question because it's not clear exactly where you are getting stuck.
Use the first link you provided to attach GDB to the running back end:
sudo gdb -p pid
(gdb) continue
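If you don't know the pid, pgrep can usually find it; the pattern below is a placeholder for however your back end shows up in ps:
pgrep -f 'carton exec foo'                               # prints the pid(s)
sudo gdb -p "$(pgrep -f 'carton exec foo' | head -n 1)"  # attach to the first one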
Now execute whatever command causes your backend to crash. Once you do, GDB will stop and print something like:
Program received signal SIGSEGV, Segmentation fault.
0x00000000004004c0 in foo (p=0x0) at t.c:1
(gdb)
Now you are ready to obtain the crash stack trace by using the where GDB command.
(gdb) where
#0 0x00000000004004c0 in foo (p=0x0) at t.c:1
#1 0x00000000004004dc in bar (p=0x0) at t.c:2
#2 0x00000000004004ec in main () at t.c:4
You will likely not get file/line info and parameter values (you'll need to install the debuginfo packages described in your second link for that), but you should get function names, which may be sufficient to find the relevant bug.
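On Debian, installing debug symbols for the interpreter usually restores the file/line information. The package name here is an assumption for the stock Debian perl; adjust for your release (newer Debians ship detached -dbgsym packages instead):
sudo apt-get install gdb perl-debug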

breakpoints in eclipse using postgresql

I am using Helios Eclipse to debug my code in PostgreSQL.
My aim is to learn how PostgreSQL uses join algorithms during a join query, so I started to debug nodeNestloop.c, which is in the executor folder.
I set breakpoints in that file, but whenever I try to debug it, control goes to main.c and never comes back. How do I constrain control to that particular file (nodeNestloop.c)?
Below are the following fields which I gave in Debug configurations of Helios Eclipse.
C/C++ Application - src/backend/postgres and
project - pgsql
I followed the steps given in the following link for running the program.
https://wiki.postgresql.org/wiki/Working_with_Eclipse#
I even unchecked the field "Stop on startup at: main", but when I do that, the Step Into and Step Over buttons are not activated, and the following problem popped up.
Could not save master table to file '/home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources'.
/home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources (Permission denied)
So I started Eclipse using sudo, but this time the following error appeared in the Eclipse console:
"root" execution of the PostgreSQL server is not permitted.
The server must be started under an unprivileged user ID to prevent
possible system security compromise. See the documentation for
more information on how to properly start the server.
Could anyone help me with this?
Thank you
Problem 1: User ID mismatch
Reading between the lines, it sounds like you're trying to debug a PostgreSQL instance that's running as the postgres user, or a different user ID to your own anyway. Hence your attempt to use sudo.
That's painful, especially when using an IDE like Eclipse. With plain gdb you can just run gdb under sudo as the desired uid, e.g. sudo -u postgres gdb -p 12345 to attach to pid 12345 running as user postgres. This will not work with Eclipse. In fact, running it with sudo has probably left your workspace with some messed-up file permissions; run:
sudo chown -R ravi /home/ravi/workspace/
to fix file ownership.
If you want to debug processes under other user IDs with Eclipse, you'll need to figure out how to make Eclipse run gdb with sudo. Do not just run all of Eclipse with sudo.
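One common workaround (a sketch, not an official Eclipse feature) is a small wrapper script, made executable and selected as the "GDB debugger" in Eclipse's debug configuration, that runs the real gdb under sudo. It assumes sudo can run gdb without prompting for a password, since Eclipse cannot display the prompt:
#!/bin/sh
# gdb-sudo: pass all of Eclipse's arguments through to a root gdb
exec sudo /usr/bin/gdb "$@"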
Problem 2: Trying to run PostgreSQL under the control of Eclipse
This:
"root" execution of the PostgreSQL server is not permitted. The server must be started under an unprivileged user ID to prevent possible system security compromise. See the documentation for more information on how to properly start the server.
suggests that you're also attempting to let Eclipse start postgres directly. That's very useful if you're trying to debug the postmaster, but since you're talking about the query planner, it's clear you want to debug a particular backend. Launching the postmaster under Eclipse is useless for that; you'd be attached to the wrong process.
I think you probably need to read the documentation on PostgreSQL's internals:
Tour of PostgreSQL Internals
PostgreSQL internals through pictures
Documentation chapter - internals
Doing it right
Here's what you need to do - a rough outline, since I've only used Eclipse for Java development and do my C development with vim and gdb (a consolidated session sketch follows the list):
Compile a debug build of PostgreSQL (compiled with ./configure --enable-debug and preferably also CFLAGS="-ggdb -Og -fno-omit-frame-pointer"). Specify a --prefix within your homedir, like --prefix=$HOME/postgres-debug
Put your debug build's bin directory first on your PATH, e.g. export PATH=$HOME/postgres-debug/bin:$PATH
Create a new instance of PostgreSQL from your debug build with initdb -U postgres -D $HOME/postgres-debug-data
Start the new instance with PGPORT=5599 pg_ctl -D $HOME/postgres-debug-data -l $HOME/postgres-debug-data.log -w start
Connect with PGPORT=5599 psql postgres
Do whatever setup you need to do
Get the backend process ID with SELECT pg_backend_pid() in a psql session. Leave that session open; it's the one you'll be debugging.
Attach Eclipse's debugger to that process ID, using the Eclipse project that contains the PostgreSQL extension source code you're debugging. Make sure Eclipse is configured so it can find the PostgreSQL source code you compiled with too (no idea how to do that, see the manual).
Set any desired breakpoints and resume execution
In the psql session, do whatever you need to do to make your extension run and hit the breakpoint
When execution pauses at the breakpoint in Eclipse, debug as desired.
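Putting those steps together, a minimal terminal session might look like this (the paths and port mirror the list above; it assumes a PostgreSQL source tree is already checked out):
# build and install a debug PostgreSQL into your home directory
./configure --enable-debug --prefix=$HOME/postgres-debug CFLAGS="-ggdb -Og -fno-omit-frame-pointer"
make && make install
export PATH=$HOME/postgres-debug/bin:$PATH
# initialize and start a private instance on its own port
initdb -U postgres -D $HOME/postgres-debug-data
PGPORT=5599 pg_ctl -D $HOME/postgres-debug-data -l $HOME/postgres-debug-data.log -w start
# open the session you will debug and note its backend pid
PGPORT=5599 psql -U postgres postgres
postgres=# SELECT pg_backend_pid();   -- attach Eclipse (or gdb -p) to this pid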
Basic misunderstandings?
Also, in case you're really confused about how all this works: PostgreSQL is a client/server application. If you are attempting to debug a client program that uses libpq or odbc, and expecting a breakpoint to trigger in some PostgreSQL backend extension code, that is not going to happen. The client application communicates with PostgreSQL over a TCP/IP socket. It's a separate program. gdb cannot set breakpoints in the PostgreSQL server when it's connected to the client, because they are separate programs. If you want to debug the server, you have to attach gdb to the server. PostgreSQL uses one process per connection, so you have to attach gdb to the correct server process. Which is why I said to use SELECT pg_backend_pid() above, and attach to the process ID.
See the internals documentation linked above, and:
PostgreSQL site - coding
PostgreSQL wiki - developer resources
Developer FAQ
Attaching gdb to a backend on linux/bsd/unix
I also faced a similar issue and resolved it after some struggle.
I misunderstood the following point under "Debugging with child processes" in the wiki (https://wiki.postgresql.org/wiki/Working_with_Eclipse):
5. "Start postmaster & one instance of the postgresql client (for creating one new postgres)"
The above step should be performed from a terminal, by starting the postgres server and one client.
Once this is done, the debugger in Eclipse needs to be started with "C/C++ Attach to Application".
Hope this helps.

See and clear Postgres caches/buffers?

Sometimes I run a Postgres query and it takes 30 seconds. Then, I immediately run the same query and it takes 2 seconds. It appears that Postgres has some sort of caching. Can I somehow see what that cache is holding? Can I force all caches to be cleared for tuning purposes?
I'm basically looking for a Postgres version of the following SQL Server command:
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
But I would also like to know how to see what is actually contained in that buffer.
You can see what's in the PostgreSQL buffer cache using the pg_buffercache module. I've done a presentation called "Inside the PostgreSQL Buffer Cache" that explains what you're seeing, and it includes some more complicated queries to help interpret that information.
It's also possible to look at the operating system cache on some systems; see pg_osmem.py for one somewhat rough example.
There's no way to clear the caches easily. On Linux you can stop the database server and use the drop_caches facility to clear the OS cache; be sure to heed the warning there to run sync first.
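For reference, a typical pg_buffercache query summarizes which relations occupy the cache. This is a sketch: the database name is a placeholder, and it assumes a reasonably modern server (9.1+ for CREATE EXTENSION, with pg_relation_filenode available):
psql -d mydb <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_buffercache;
-- top 10 relations by number of cached buffers
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
SQL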
I haven't seen any commands to flush the caches in PostgreSQL. What you see is likely just normal index and data pages being read from disk and held in memory, both by PostgreSQL and by the OS caches. To get rid of all that, the only way I know of is:
Shutdown the database server (pg_ctl, sudo service postgresql stop, sudo systemctl stop postgresql, etc.)
echo 3 > /proc/sys/vm/drop_caches
This will clear out the OS file/block caches, which is very important, though I don't know how to do that on other OSs. (If you get "permission denied", try sudo sh -c "echo 3 > /proc/sys/vm/drop_caches".)
Start the database server (e.g. sudo service postgresql start, sudo systemctl start postgresql)
Greg Smith's answer about drop_caches was very helpful. I did find it necessary to stop and start the postgresql service, in addition to dropping the caches. Here's a shell script that does the trick. (My environment is Ubuntu 14.04 and PostgreSQL 9.3.)
#!/usr/bin/sudo bash
service postgresql stop
sync
echo 3 > /proc/sys/vm/drop_caches
service postgresql start
I tested with a query that took 19 seconds the first time, and less than 2 seconds on subsequent attempts. After running this script, the query once again took 19 seconds.
I use this command on my linux box:
sync; /etc/init.d/postgresql-9.0 stop; echo 1 > /proc/sys/vm/drop_caches; /etc/init.d/postgresql-9.0 start
It completely gets rid of the cache.
I had this error.
psql:/cygdrive/e/test_insertion.sql:9: ERROR: type of parameter 53
(t_stat_gardien) does not match that when preparing the plan
(t_stat_avant)
I was looking for a way to flush the current plan and I found this:
DISCARD PLANS
I put this between my inserts and it solved my problem.
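For reference, DISCARD is an ordinary SQL statement, so it can be run from psql or embedded in a script (the database name is a placeholder); DISCARD ALL is the heavier variant that also resets the rest of the session state:
psql -d mydb -c 'DISCARD PLANS;'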
Yes, it is possible to clear both the shared-buffers Postgres cache AND the OS cache. The solution below is for Windows; others have already given the Linux solution.
As many people have already said, to clear the shared buffers you can just restart Postgres (no need to restart the machine). But just doing this won't clear the OS cache.
To clear the OS cache used by Postgres, after stopping the service, use the excellent RamMap (https://technet.microsoft.com/en-us/sysinternals/rammap) from the excellent Sysinternals Suite.
Once you run RamMap, just click "Empty" -> "Empty Standby List" in the main menu.
Restart Postgres and you'll see that your next query is damn slow due to there being no cache at all.
You can also run RamMap without closing Postgres, and you will probably get the "no cache" results you want since, as people have already said, the shared buffers usually have little impact compared to the OS cache. But for a reliable test I would rather stop Postgres before clearing the OS cache, just to be sure.
Note: AFAIK, I don't recommend clearing anything besides the "Standby list" in RamMap, because the other data is in use somehow, and you could potentially cause problems or lose data if you do. Remember that you are clearing memory used not only by Postgres files but by every other app and the OS as well.
Regards, Thiago L.
Yes, PostgreSQL certainly has caching. The size is controlled by the setting shared_buffers. Other than that there is, as the previous answer mentions, the OS file cache, which is also used.
If you want to look at what's in the cache, there is a contrib module called pg_buffercache available (in contrib/ in the source tree, in the contrib RPM, or wherever is appropriate for how you installed it). How to use it is listed in the standard PostgreSQL documentation.
There is no way to clear out the buffer cache other than restarting the server. You can drop the OS cache with the command mentioned in the other answer, provided your OS is Linux.
There is the pg_buffercache module for looking into the shared_buffers cache. At some point I needed to drop the cache to run some performance tests on a 'cold' cache, so I wrote the pg_dropcache extension, which does exactly this. Please check it out.
This is my shortcut:
echo 1 > /proc/sys/vm/drop_caches; echo 2 > /proc/sys/vm/drop_caches; echo 3 > /proc/sys/vm/drop_caches; rcpostgresql stop; rcpostgresql start;
If you have a dedicated test database, you can set the parameter shared_buffers to its minimum of 16 (8 kB pages, i.e. 128 kB). That should all but disable PostgreSQL's own cache for all queries.
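A sketch of doing that with ALTER SYSTEM (PostgreSQL 9.4+; 128kB is the documented minimum for shared_buffers, and a restart is required for the change to take effect):
# shrink the buffer cache on a disposable test instance, then restart
psql -c "ALTER SYSTEM SET shared_buffers = '128kB';"
sudo systemctl restart postgresql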
The original heading was "see and clear" buffers. Postgres 13 with the pg_buffercache extension provides a way to see the buffer contents; see the extension's documentation page.
On OSX there is a purge command for that:
sync && sudo purge
sync - force completion of pending disk writes (flush cache)
purge - force disk cache to be purged (flushed and emptied)
Credit goes to kenorb's answer to the question "echo 3 > /proc/sys/vm/drop_caches on Mac OSX".

Stop Oracle from generating sqlnet.log file

I'm using DBD::Oracle in Perl, and whenever a connection fails, the client generates a sqlnet.log file with the error details.
The thing is, I already have the error trapped by Perl and written to my own log file. I really don't need this extra information.
So, is there a flag or environment variable that stops the creation of sqlnet.log?
As the Oracle Documentation states: To ensure that all errors are recorded, logging cannot be disabled on clients or Names Servers.
You can follow DCookie's suggestion and use /dev/null as the log directory. You can use NUL: on Windows machines.
From Metalink:
The logging is automatic, there is no way to turn logging off, but since you are on Unix server, you can redirect the log file to a null device, thus eliminating the problem of disk space consumption.
In the SQLNET.ORA file, set LOG_DIRECTORY_CLIENT and LOG_DIRECTORY_SERVER equal to a null device.
For example:
LOG_DIRECTORY_CLIENT = /dev/null
LOG_FILE_CLIENT = /dev/null
in SQLNET.ORA suppresses client logging completely.
To disable the listener from logging, set this parameter in the LISTENER.ORA file:
logging_listener = off
Are your clients on Windows or *nix? If on *nix, you can set LOG_DIRECTORY_CLIENT=/dev/null in your sqlnet.ora file. Not sure you can do much for a Windows client.
EDIT: It doesn't look like it's possible on Windows. The best you could do would be to set the sqlnet.ora parameter above to a fixed location and create a scheduled task to delete the file as desired.
Okay, as Thomas points out, there is a null device on Windows; use the same paradigm.
IMPORTANT: DO NOT SET LOG_FILE_CLIENT=/dev/null. This causes the permissions of /dev/null to be reset each time you initialize the Oracle library; when your umask does not permit the world readable/writable bits, those get removed from /dev/null if you have permission to chmod that file, i.e. when running as root.
And running as root may be something trivial, like php --version with the OCI PHP extension present!
full details here:
http://lists.pld-linux.org/mailman/pipermail/pld-devel-en/2014-May/023931.html
you should use a path inside a directory that doesn't exist:
LOG_FILE_CLIENT = /dev/impossible/path
and hope nobody creates the dir /dev/impossible :)
For Windows, NUL is probably fine, as it's not an actual file there...