How do I suppress the bloat of useless information from the DUMP command when using grunt via 'pig -x local'?

I'm working with Pig Latin in grunt, and every time I dump something, my console gets clobbered with blah blah, blah non-info. Is there a way to suppress all that?
grunt> A = LOAD 'testingData' USING PigStorage(':'); dump A;
2013-05-06 19:42:04,146 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2013-05-06 19:42:04,147 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
...
...
--- another ~50 lines of useless, context-clobbering junk here... until ---
...
...
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
now the ~4 lines of info I'm actually looking for:
(daemon,*,1,1,System Services,/var/root,/usr/bin/false)
(uucp,*,,,/var/spool/uucp,/usr/sbin/uucico)
(taskgated,*,13,13,Task Gate Daemon,/var/empty,/usr/bin/false)
(networkd,*,24,24,Network Services,/var/empty,/usr/bin/false)
(installassistant,*,25,25,/usr/bin/false)
grunt>
---> Obviously if it errors, all that info is helpful, but not when everything basically works fine.

You need to set the log4j properties.
For example, in $PIG_HOME/conf/pig.properties, enable the log4j configuration file by uncommenting this line:
log4jconf=./conf/log4j.properties
(If conf/log4j.properties doesn't exist yet, rename log4j.properties.template to log4j.properties.)
Then, in log4j.properties, change the Pig logger's level from info to error:
log4j.logger.org.apache.pig=error, A
You may set the Hadoop-related logging level as well:
log4j.logger.org.apache.hadoop=error, A
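For reference, a minimal quiet log4j.properties along these lines might look like the sketch below (the appender name A is just a label, and the pattern is arbitrary; adjust to taste):
# Sketch: console appender at error level for Pig and Hadoop
log4j.rootLogger=error, A
log4j.logger.org.apache.pig=error, A
log4j.logger.org.apache.hadoop=error, A
log4j.appender.A=org.apache.log4j.ConsoleAppender
log4j.appender.A.layout=org.apache.log4j.PatternLayout
log4j.appender.A.layout.ConversionPattern=%d [%t] %-5p %c - %m%n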

An easy way to do this seems to be to redirect standard error, as below, though note that it suppresses all errors too:
pig -x local 2> /dev/null
I also found that if you remove or rename your Hadoop install directory, making it inaccessible to Pig, all those INFO messages go away. Just so you know, changing the logging levels in Hadoop didn't help.
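If you want to keep real errors visible on the console, one alternative is to filter only the INFO lines out of stderr (a bash-only sketch, assuming the noisy lines all contain ' INFO '):
# bash only: drop INFO lines from stderr, keep warnings and errors
pig -x local 2> >(grep -v ' INFO ' >&2)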

When you start pig, pass it a log4j.properties file with pig -4 <filename>.
In my case there was a log4j.properties in the conf directory, and setting the level of the logger named org.apache.pig to ERROR was sufficient to make the output less verbose.
log4j.logger.org.apache.pig=ERROR, A
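For example (assuming the file lives in ./conf):
pig -x local -4 ./conf/log4j.properties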

Pig has a debug log level that you need to set in the pig.properties file:
# Logging level. debug=OFF|ERROR|WARN|INFO|DEBUG (default: INFO)
#
# debug=INFO
The default INFO level is the reason you get large logs on the console. Uncomment the line and change it to ERROR:
debug=ERROR

Related

How to minimize Eclipse PyDev Console output / tracing

I have multiple installations of Eclipse (2021-12) + PyDev (9.3.0.202203051235), all using IronPython (2.7) and all running on Windows 10. They all run the scripts as expected, but one installation has much more verbose console output when debugging, almost as if a tracing option is enabled. I've tried reinstalling, deleting workspaces, deleting '.metadata' folders, etc. All the project settings seem identical.
Any ideas how to minimize the console output? Something in the registry?
Expected Console output:
pydev debugger: starting (pid: 15312)
Actual Console output:
1.99s - Using GEVENT_SUPPORT: False
0.00s - Using GEVENT_SHOW_PAUSED_GREENLETS: False
0.00s - pydevd __file__: C:\Eclipse-2021-12-R\plugins\org.python.pydev.core_9.3.0.202203051235\pysrc\pydevd.py
0.11s - Initial arguments: (['C:\\Eclipse-2021-12-R\\plugins\\org.python.pydev.core_9.3.0.202203051235\\pysrc\\pydevd.py', '--multiprocess', '--protocol-http', '--print-in-debugger-startup', '--vm_type', 'python', '--client', '127.0.0.1', '--port', '60413', '--file', 'C:\\Test.py'],)
0.00s - Current pid: 8884
pydev debugger: starting (pid: 8884)
Those should only appear if you add an environment variable asking for them to be shown, i.e. something like:
PYDEVD_DEBUG=1
PYDEV_DEBUG=1
Maybe you have such a variable set in your launch configuration, your interpreter configuration, or elsewhere in your system?
You may want to check the os.environ of the running program to see what's set there.
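For instance, a quick snippet (just for inspection, not PyDev-specific) you could drop into the script being debugged to list any PYDEV-related variables:
# Print any environment variables that mention PYDEV
import os
for name, value in sorted(os.environ.items()):
    if 'PYDEV' in name.upper():
        print(name, '=', value)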

How can I hide skipped tasks output in Ansible

I have Ansible role, for example
---
- name: Deploy app1
  include: deploy-app1.yml
  when: 'deploy_project == "{{app1}}"'

- name: Deploy app2
  include: deploy-app2.yml
  when: 'deploy_project == "{{app2}}"'
But I deploy only one app per role call; when I deploy several apps, I call the role several times. Each time there is a lot of output from skipped tasks (the ones that do not pass the condition), which I do not want to see. How can I avoid it?
I'm assuming you don't want to see the skipped tasks in the output while running Ansible.
Set this to false in the ansible.cfg file, under the [defaults] section:
[defaults]
display_skipped_hosts = false
Note: it will still output the name of the task, although it will not display "skipped" anymore.
UPDATE: by the way, you need to make sure ansible.cfg is in the current working directory.
Taken from the ansible.cfg file.
Ansible will read ANSIBLE_CONFIG, ansible.cfg in the current working directory, .ansible.cfg in the home directory, or /etc/ansible/ansible.cfg, whichever it finds first.
So ensure you are setting display_skipped_hosts = false in the right ansible.cfg file.
Let me know how you go
Since Ansible 2.4, a callback plugin named full_skip is available to suppress the task names and the "skipping" keyword for skipped tasks in the Ansible output. You can try the configuration below:
[defaults]
stdout_callback = full_skip
Ansible allows you to control its output by using custom callbacks.
In this case you can simply use the skippy callback, which will not output anything for a skipped task.
That said, skippy is now deprecated and will be removed in Ansible v2.11.
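For example, in ansible.cfg:
[defaults]
stdout_callback = skippy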
If you don't mind losing colours you can elide the skipped tasks by piping the output through sed:
ansible-playbook whatever.yml | sed -nr '/^TASK/{h;n;/^skipping:/{n;b};H;x};p'
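A simpler but cruder variant (a sketch; it drops only the "skipping:" lines themselves and leaves the now-empty TASK headers behind):
ansible-playbook whatever.yml | grep -v '^skipping:'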
If you are using roles, you can use when to cancel the include in main.yml
# roles/myrole/tasks/main.yml
- include: somefile.yml
  when: somevar is defined

# roles/myrole/tasks/somefile.yml
- name: this task will only run (and be seen in the output) if somevar is defined
  debug:
    msg: "Hello World"

The input line is too long. The syntax of the command is incorrect

When I start a Play Scala app in production mode, it throws the error below. Can anyone give me a clear idea of what's wrong?
F:\New_CMS\trunk\server\cms>activator start
[info] Loading project definition from F:\New_CMS\trunk\server\cms\project
[info] Set current project to cms (in build file:/F:/New_CMS/trunk/server/cms/)
[info] Wrote F:\New_CMS\trunk\server\cms\target\scala-2.11\cms_2.11-1.0-SNAPSHOT.pom
Starting server. Type Ctrl+D to exit logs, the server will remain in background
The input line is too long.
The syntax of the command is incorrect.
Windows cmd has a limit on command-line length (about 8191 characters), and the generated start script builds a classpath of long absolute paths that can exceed it, hence "The input line is too long". Follow these steps as a Windows solution:
1. Run activator stage in the command line.
2. Copy the stage directory from target\universal\stage to c:\stage to avoid issues with long file paths.
3. To avoid the "Bad Application Path" issue, create a new .bat file with the following (my project is called proj):
set PROJ_OPTS="-Dconfig.file=../conf/application.conf"
proj.bat
Note: change PROJ_OPTS to YOURPROJECTNAME_OPTS and proj.bat to yourprojectname.bat.

Where to find logs for a cloud-init user-data script?

I'm initializing spot instances running a derivative of the standard Ubuntu 13.04 AMI by pasting a shell script into the user-data field.
This works. The script runs. But it's difficult to debug because I can't figure out where the output of the script is being logged, if anywhere.
I've looked in /var/log/cloud-init.log, which seems to contain a bunch of stuff that would be relevant to debugging cloud-init, itself, but nothing about my script. I grepped in /var/log and found nothing.
Is there something special I have to do to turn logging on?
The default location for cloud-init user-data output is /var/log/cloud-init-output.log, in AWS, DigitalOcean, and most other cloud providers. You don't need to set up any additional logging to see the output.
You could create a cloud-config file (with "#cloud-config" at the top) for your user data, use runcmd to call the script, and then enable output logging like this:
output: {all: '| tee -a /var/log/cloud-init-output.log'}
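A minimal sketch of such a user-data file (the script path here is a placeholder for your own script):
#cloud-config
output: {all: '| tee -a /var/log/cloud-init-output.log'}
runcmd:
  - /usr/local/bin/my-bootstrap.sh  # hypothetical script path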
So I tried to replicate your problem. Usually I work with cloud-config, but here I just created a simple test user-data script like this:
#!/bin/sh
echo "Hello World. The time is now $(date -R)!" | tee /root/output.txt
echo "I am out of the output file...somewhere?"
yum search git # just for fun
ls
exit 0
Notice that, with CloudInit shell scripts, the user-data "will be executed at rc.local-like level during first boot. rc.local-like means 'very late in the boot sequence'"
After logging in into my instance (a Scientific Linux machine) I first went to /var/log/boot.log and there I found:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
I am out of the file. Log file somewhere?
Loaded plugins: changelog, kernel-module, priorities, protectbase, security, tsflags, versionlock
126 packages excluded due to repository priority protections
9 packages excluded due to repository protections
epel/pkgtags | 581 kB 00:00
=============================== N/S Matched: git ===============================
GitPython.noarch : Python Git Library
cgit.x86_64 : A fast web interface for git
...
... (more yum search output)
...
bin etc lib lost+found mnt proc sbin srv tmp var
boot dev home lib64 media opt root selinux sys usr
(other unrelated stuff)
So, as you can see, my script ran and was logged correctly.
Also, as expected, I had my forced log 'output.txt' in /root/output.txt with the content:
Hello World. The time is now Wed, 11 Sep 2013 10:21:37 +0200!
So... I am not really sure what is happening in your script.
Make sure you're exiting the script with
exit 0 #or some other code
If it still doesn't work, you should provide more info, like your script, your boot.log, your /etc/rc.local, and your cloudinit.log.
btw: what is your cloud-init version?
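(If you're not sure, this should print it on most versions:
cloud-init --version)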

Algebra filter error in moodle

I installed Moodle 1.9.12 and now I want to use algebra notation in content. I enabled "TeX Notation" and "Algebra Notation" in the administrator panel, and also installed mimetex, dvips, and ImageMagick on the server. Fortunately TeX Notation works fine, but I get the following error for Algebra:
sh: /var/www/html/moodle/filter/tex/mimetex.linux: not found
The shell command
"/var/www/html/moodle/filter/tex/mimetex.linux" -e "/var/www/moodledata/filter/algebra/de06d6c44d98ba4e42dffca988bf530b.gif" -- '\Large \frac{\sin\left(z\right)}{x^{2}+y^{2}}'
returned status = 127
File size of mimetex executable /var/www/html/moodle/filter/tex/mimetex.linux is 830675
The file permissions are: 100775
The md5 checksum of the file is 56bcc40de905ce92ebd7b083c76e019e
Image not found!
Note: /var/www/html/moodle/filter/tex/mimetex.linux exists on the server and is executable!!!
What is the problem? Any ideas?
From what you have described, calling the general TeX filter debug page works and does not show the same error:
/filter/tex/texdebug.php works, but /filter/algebra/algebradebug.php does not.
A shell status of 127 generally means the command could not be found or executed. If that is the case here, perhaps you could check for open_basedir or safe_mode_exec_dir being set in a way that excludes the current working directory, or otherwise restricts the execution of /var/www/html/moodle/filter/tex/mimetex.linux while the current working directory is /var/www/html/moodle/filter/algebra.
You could look at this by visiting /admin/phpinfo.php at your site, and look carefully at the effective values of open_basedir, safe_mode and safe_mode_exec_dir.
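To reproduce this outside PHP, you could also try running the same command yourself as the web server user, from the algebra filter's working directory (a sketch; www-data is an assumed account name and the output path is a placeholder):
cd /var/www/html/moodle/filter/algebra
sudo -u www-data /var/www/html/moodle/filter/tex/mimetex.linux -e /tmp/test.gif '\Large \frac{\sin\left(z\right)}{x^{2}+y^{2}}'
echo $?  # 127 here would confirm the shell cannot execute the binary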
You could also check the Apache error log, or add the following lines to the top of the algebra debug PHP file, and you might see some extra error messages:
$CFG->debug = 6143;
$CFG->debugdisplay = 1;
Hope that helps