Python's file handling function takes the mode as an optional parameter, which defaults to "r" (read).
open(file, mode) #where mode by default is "r"
What is Scala's analogous default?
When using the concise one-liner scala.io.Source.fromFile, the file is opened in read mode, even when the same method is used only to obtain a reference to the buffered source.
To write text files, on the other hand, we fall back on the Java classes PrintWriter or FileWriter, which open files in write mode. Hence the "default" depends on whether we are reading from the file or writing to it.
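A minimal sketch of both cases (the file names input.txt and output.txt are just placeholders):
import scala.io.Source
import java.io.PrintWriter

object FileModes {
  def main(args: Array[String]): Unit = {
    // Reading: Source.fromFile opens the file for reading only.
    val source = Source.fromFile("input.txt")
    try println(source.getLines().mkString("\n"))
    finally source.close()

    // Writing: a java.io.PrintWriter opens the file in write mode,
    // truncating any existing content by default.
    val writer = new PrintWriter("output.txt")
    try writer.println("hello")
    finally writer.close()
  }
}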
I figured this out by filtering the open files with lsof after opening a file in the sbt console; any additions and/or corrections are appreciated.
I am coding a hashing program in Ada and using Direct_IO to read from and write to a file. I am trying to read from a file that is in the same folder as the executable, as it should be, but the program still raises the exception. Any ideas as to why it's still raising this exception?
(screenshots: the .adb file raising the exception, and the driver .ads file)
The location of the executable has no impact on the interpretation of the names of files to be opened or created. The relevant issue is the current working directory (or folder, if you will) of the process that executes the program. In the common OSes, for a file to be found based on its file-name alone (without any directory path), the file must lie in the current working directory.
You seem to be executing the program from within some IDE, right? Then the IDE probably defines the current working directory to be used when the IDE executes the program. Do you know how the IDE does that, and can you override the default within the IDE? If not, I suggest that you execute the program from the shell command line and manually set the current working directory as needed, in that shell window, using the "cd" command before executing the program.
You could use Ada.Directories (ARM A.16) to work out the location of the data file from the location of the executable:
--  Assumes "with Ada.Directories;" and "with Ada.Command_Line;" in context.
use Ada.Directories;

Program_Name     : constant String := Ada.Command_Line.Command_Name;
Complete_Name    : constant String := Full_Name (Program_Name);
Full_Directory   : constant String := Containing_Directory (Complete_Name);
Source_File_Name : constant String :=
  Compose (Containing_Directory => Full_Directory,
           Name                 => "foo",
           Extension            => "txt");
Note, the use Ada.Directories meant I had to be a bit 'creative' about variable names; without it, I could say e.g.
Full_Name : constant String := Ada.Directories.Full_Name (Program_Name);
I have a program that checks whether the files in my directory are readable, writable, and executable.
I have it set up so it looks like this:
if (-e $file) {
    print "exists";
}
if (-x $file) {
    print "executable";
}
and so on
but my issue is that when I run it, it shows that the text files are executable too. These are plain text files with one word in them. I feel like there is an error. What did I do wrong? I am a complete Perl noob, so forgive me.
It is quite possible for a text file to be executable. It might not be particularly useful in many cases, but it's certainly possible.
In Unix (and your Mac is running a Unix-like operating system), the "executable" setting is just a permission flag stored in the file's metadata. That flag can be set on or off for any file.
There are actually three of these permissions, which record whether you can read, write, or execute a file. You can see these permissions by using the ls -l command in a terminal window (see man ls for more details on what the various ls options mean). There are probably ways to view these permissions in the Finder too (perhaps a "properties" menu item or something like that - I don't have a Mac handy to check).
You can change these permissions with the chmod ("change mode") command. See man chmod for details.
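For example, in a terminal (notes.txt here is just a placeholder file name):
ls -l notes.txt       # the leading "-rwxr--r--" style field shows the permission flags; an "x" means executable
chmod u-x notes.txt   # remove the owner's execute permission
chmod u+x notes.txt   # add it back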
For more information about Unix file modes, see the Wikipedia article on file-system permissions.
But whether or not a file is executable has nothing at all to do with its contents.
The statement if (-x $file) does not check whether the file is an executable, but whether your user has execute privileges on it.
For checking whether a file is actually an executable, I'm afraid there isn't a magic method. You may try to use:
if (-T $file) to check whether the file looks like an ASCII or UTF-8 text file.
if (-B $file) to check whether the file looks like a binary file.
If this is unsuitable for your case, consider the following:
Assuming you are in a Linux environment, note that any file can be executed. The question here is: will executing, for example, test.txt write anything to standard error (STDERR)?
Most likely, it will.
If the file test.txt contains:
Some text
and you launch it from your Perl script with system("./test.txt");, this will produce output on STDERR like:
./test.txt: line 1: Some: command not found
If for some reason you are looking to run all the files in your directory (in a for loop, for instance), be warned that this is pretty dangerous, since you will launch all your files and you may not want to do so, especially if the Perl script is in the same directory that you are checking (this will lead to undesirable script behaviour).
Hope it helps ;)
OK, so here's my issue. I have written a build script in bash that pipes output to tee and sorts different output into different log files (so I can summarize errors/warnings at the end and get some statistics on the files built). I wanted to use the colorgcc Perl script (colorgcc 1.3.2) to colorize the output from gcc, and had found elsewhere that this won't work when piping to tee, since the script checks whether it is writing to something that is not a tty. Having disabled this check, everything was working until I did a full build and discovered that some of the code we receive from another group builds C dependency files (we don't control this code; changing it or its build process isn't really an option).
The problem is that these .d files have the following form:
filename.o filename.d : filename.c \
    dependant_file1.h \
    dependant_file2.h
(and so on for however many dependencies there are)
This output from GCC gets written into the .d file, but since it is close enough to a warning/error message, colorgcc outputs color codes (I believe it's the check for filename:lineno:message, but I'm not 100% sure; it could be the filename:message check in the GCCOUT while loop). I've tried editing the regex so it doesn't match this, but my perl-fu is admittedly pretty weak. So what I end up with is a color code on each line of these dependency files, which obviously causes the build to fail.
I ended up just replacing the check for ! -t STDOUT with a check for a NO_COLOR environment variable that I set and unset in the build script for these directories (this emulates the previous behavior of no color for a non-tty). This works great if I run the full script, but not if I cd into the directory and just run make (obviously setting and unsetting the variable manually would work, but that is a pain to do every time). Does anyone have any ideas on how to prevent this script from writing color codes into dependency files?
Here's how I worked around this. I added the following to colorgcc to search the gcc arguments for the flags that generate the .d files and, in that case, just call the compiler directly. This was inserted in place of the original TTY check.
# If gcc was asked to generate dependency (.d) files (-M or -MM),
# skip the colorizing and exec the real compiler directly.
foreach my $argnum (0 .. $#ARGV)
{
    if ($ARGV[$argnum] =~ m/-M{1,2}/)
    {
        exec $compiler, @ARGV
            or die("Couldn't exec");
    }
}
I don't know if this is the proper 'Perl' way of doing this sort of operation, but it seems to work. Compiling inside directories that build .d files no longer inserts color codes, while the source-file builds still get color (both to the terminal and to my log files, like I wanted). I guess sometimes the answer is more hacks instead of "hey, did you try giving up?".
I have an ETL process set up in Perl to process a number of files and load them into a database.
Recently, for performance reasons, I set the code up to run in parallel processes, through use of a fork() call and a call to system("perl someOtherPerlProcess.pl $arg1 $arg2").
I end up with about 12 instances of someOtherPerlProcess.pl running with different arguments, and each of these processes works through one directory's worth of files (corresponding to a single table in our database).
The application's main functions work, but I am having trouble figuring out how to configure my logging.
Ideally, I would like all the someOtherPerlProcess.pl instances to share the same $log_config value to initialize their loggers, but have each of them create its log file in the directory it is working on.
I haven't been able to figure out how to do that. I also noticed that in the directory from which I am calling these Perl scripts, I see several files (ARRAY(0x260eec), ARRAY(0x313f8), etc.) that contain all my logging messages!
Is there a simple way to change the log4perl.appender.A1.filename value from running code?
Or, failing that, to dynamically configure the file name while taking all other values from a config file?
I came up with a less than ideal solution for this, which is to configure my logger from someOtherPerlProcess.pl directly.
use Log::Log4perl qw(get_logger);

my $FORKED_LOG_CONF = "log4perl.appender.A1.filename=$directory_to_load/log.txt
log4perl.rootLogger=WARN, A1
log4perl.appender.A1=Log::Log4perl::Appender::File
log4perl.appender.A1.mode=append
log4perl.appender.A1.autoflush=1
log4perl.appender.A1.layout=PatternLayout
log4perl.appender.A1.layout.ConversionPattern=[%p] %d{yyyy-MM-dd HH:mm:ss}: %m%n";

# Logger start-up
Log::Log4perl::init( \$FORKED_LOG_CONF );
my $logger = get_logger();
The $directory_to_load is the process-specific portion of the logger; this works in the context of the running Perl process, which has a (local) value for that variable, but the same approach will fail if used in an external config file.
I would be happy to hear of any alternative solutions.
In your config file:
log4perl.appender.A1.filename=__LOGFILE__
In your script:
use File::Slurp;
use Log::Log4perl;

# Read the shared config, then substitute the per-process log file name.
my $log_cfg = read_file( $log_cfgfile );
my $logfile = "$directory_to_load/log.txt";
$log_cfg =~ s/__LOGFILE__/$logfile/;
Log::Log4perl::init( \$log_cfg );
Let's see if I can reach the EmacsW32 users on stackoverflow.
I've just installed the patched version of EmacsW32 from http://ourcomments.org/Emacs/EmacsW32.html
I find it very nice that .txt files are associated with Emacs, so that when you click on one, emacsclient opens it in the running instance of Emacs.
The problem is that, for some reason, the buffer is named with the old-style shortened (8.3) file name, so, for example, the buffer for the file "activities-2008.txt" is named "ACTIV~1.TXT", which I don't like.
How do I get EmacsW32 not to rename the buffer, and to use the whole file name as the buffer name instead?
Ick, that sucks.
Why not just use the emacsclientw that comes with the standard Windows emacs distribution?
It does have a bit of an issue in that you get an annoying "No error" error box if Emacs isn't already running, but any real Emacs user starts Emacs first thing when they log on anyway. :-)
Solved.
The problem is not with Emacs, but with the way Windows runs a program when a file type is associated in the registry.
In my registry, I had this value for the keys that associate .txt files with Emacs:
C:\emacs-23.0.91.1\Emacs\bin\emacsclientw.exe -n "%1"
The problem is the %1, which is replaced by a short file name.
According to this message http://lists.gnu.org/archive/html/help-emacs-windows/2009-05/msg00022.html:
%L is long file names.
%1 is long file names IF
* Explorer can find the exe file (it does not look very hard)
AND
* The file header says it is Win 95 aware Win16 exe, or
* It is a 32 bit program
Else %1 will be a short name.
The solution is to use %L in place of %1 in the registry keys.
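For example, the registry value shown above would then become:
C:\emacs-23.0.91.1\Emacs\bin\emacsclientw.exe -n "%L"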