IntelliJ: can't execute SVN through batch file - PowerShell

When using SVN from IntelliJ I wrapped my SVN client in a batch file in order to do some processing with it; the batch file actually calls a PowerShell script. It works fine when I invoke it from the command line, yet when I configure IntelliJ to use that batch file as the SVN command-line client I get the following error:
Can't use Subversion command line client: mySvn.bat
Probably the path to Subversion executable is wrong. Fix it.

The IDE calls a batch file that calls a PowerShell script... Why use several wrappers in a row instead of a single one around the regular command-line client?
The IDE verifies the executable by trying to execute it and checking that the call neither times out nor returns an error. It looks like one of the wrappers returns a non-zero exit code.
It works fine for me if I wrap the svn calls in, e.g., a .bat file, or an sh script on a Mac.
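If the exit code is indeed the problem, a minimal wrapper along these lines may be all that is needed (the .ps1 path and name below are assumptions); the idea is to forward the arguments and pass PowerShell's exit code back to the IDE:

@echo off
rem Forward all arguments to the PowerShell wrapper and propagate its exit code to the caller (IntelliJ).
powershell -NoProfile -ExecutionPolicy Bypass -File "C:\path\to\mySvn.ps1" %*
exit /b %ERRORLEVEL%

Inside the PowerShell script, the svn call's own exit code also has to be passed on (e.g. with exit $LASTEXITCODE), otherwise the batch layer may report success even when svn fails.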

Related

How do I get the commands executed by Bazel

I was wondering if there is a way to get Bazel to list, output, or display all of the commands it runs during a build (after a clean), in a form I can execute from the command line. I do not care if the output goes to the screen, to a file, etc.; I will massage it into a usable form if necessary.
I have captured the screen output during a run of Bazel, which gives me an idea of what is being done; however, it does not give me a command I can execute on the command line. The commands would have to include all of the options spelled out, not as variables.
If this is not possible, then, since Bazel is open source, where in the code are the lines that represent the commands to be run, so that I can modify Bazel to output the executable commands?
I am aware of the query command within Bazel, and have used it to generate the dependency diagram. If this could be done as a query command it would be even better.
TLDR;
My goal is to build TensorFlow using Bazel on Windows. Yes, I know of all of the problems and reasons NOT to do it, and I have successfully installed TensorFlow on Windows via a virtual machine or Docker. I did take a shot at building Bazel on Windows starting with Cygwin, but that started to get out of hand, as I am used to installing with packages and Cygwin doesn't play nice with packages; then I started trying to build Bazel by hand and that was turning into a quagmire. So I am now trying to just build TensorFlow by hand on Windows by duplicating what Bazel would do to build TensorFlow on Linux.
You are correct, you can use the -s (--subcommands) option:
bazel build -s //foo
See https://docs.bazel.build/versions/master/user-manual.html#flag--subcommands.
For your use case, you'd probably want to redirect the output to a file and then globally replace any library/binary paths with their Windows equivalents.
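For example, something along these lines should capture everything in one file (depending on the Bazel version the subcommand traces may be written to stderr, so redirecting both streams is the safe option):

bazel build -s //foo > build_commands.txt 2>&1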
You might want to track https://github.com/bazelbuild/bazel/issues/276 (Windows support), although it'll probably be a while.
(Disclaimer: This solution does not print the commands that currently get executed but the commands that would get or got executed.)
I'd use aquery (action graph query) (forget about "graph"):
bazel aquery //foo
Advantages:
It's very fast, because it prints the actions without executing the build.
It's a query. It does not have side effects.
You don't have to do a bazel clean beforehand in order to find out the build steps for a library that has already been built.
It prints information about the specific build step that you request. It does not print all the build commands required for the dependencies.
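If you do want the actions for the dependencies as well, the target can, as far as I recall, be wrapped in a query expression, e.g.:

bazel aquery 'deps(//foo)'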

python script for validating files in eclipse

I have a self-defined, text-editable file format ("datafile", which also uses certain Python types such as dicts, tuples, lists, etc.) for providing argument data to my Python scripts. These arguments are later used in my Main Python script.
Currently, at the start of the Main program, I consolidate all such datafiles (using os.walk) and parse them every time, which takes a lot of time.
This is my issue!
Is there a mechanism in Eclipse to run a Python script (an independent one, like a parser) with the above "datafile" as an argument, to check for syntax errors immediately after I save the file? That way I would not have to check for syntax errors while running the Main program.
Is this possible?
I am using the Eclipse IDE with PyDev for my development work.
Regards,
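For illustration only, a standalone checker of the kind described here might look like the sketch below; it assumes each datafile is a single Python literal (dict, tuple, list, ...), and the file name is made up. Something like it could be hooked into Eclipse/PyDev as an external tool or builder that runs on save and reports problems in the console:

# check_datafile.py - hypothetical standalone validator for the "datafile" format.
# Prints a message and exits non-zero if any file is not a valid Python literal.
import ast
import sys

def check(path):
    with open(path) as handle:
        text = handle.read()
    try:
        ast.literal_eval(text)  # accepts dicts, tuples, lists, strings, numbers, ...
    except (SyntaxError, ValueError) as exc:
        print("%s: %s" % (path, exc))
        return False
    return True

if __name__ == "__main__":
    results = [check(path) for path in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)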

Make a Shell script calling Perl scripts executable in Windows

My question is: I have a folder containing 3 scripts (2 Perl and 1 shell) and an inner folder of data that I wish to use in the Perl scripts.
The shell script calls the two Perl scripts consecutively: the first one creates a file and the second one performs an analysis of the data from the created file.
From my reading, I could use ActivePerl or something of the sort to package the Perl scripts so they are executable for a basic Windows user. However, I think it would make more sense if I made a sort of batch file that runs the whole set of programs.
Sorry about my lack of knowledge; I would love some help with this.
Thanks!
Running perl scripts from a .bat in Windows
Installation & Configuration of perl
First, install perl (I prefer Strawberry Perl but ActivePerl will work too).
This should put perl in your path; to check, open up a command line and type:
echo %PATH%
You should see C:\strawberry\perl\bin in there (perl is in that directory).
Adding perl to your path (skip if already present)
Navigate Start -> Settings -> Control Panel -> System -> Advanced -> Environment Variables. Choose Path and append the following to the variable:
;C:\strawberry\c\bin;C:\strawberry\perl\bin
Close your command shell and open a new one. Retry echo %PATH%.
Assuming perl is in your path now...
Make a file called perlscripts.bat and type the following:
perl c:\path\to\scripts\script1.pl
perl c:\path\to\scripts\script2.pl
Running this batch file should run the scripts in order, as desired.
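If the second script must only run when the first one succeeded (hedged, since the original shell script's behaviour isn't shown here), the batch file can check the exit code in between:

perl c:\path\to\scripts\script1.pl
if errorlevel 1 exit /b 1
perl c:\path\to\scripts\script2.pl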
The shell script calls the two perl scripts consecutively the first one creating a file and the second one performing an analysis of the data from the created file.
You can call those Perl scripts directly from Perl itself; it will work on both Windows and Linux. Why use shell/batch wrappers and bother with a different one for each OS?
For example: <script to run>

Run part of a build script on a windows box and the rest on linux

My build script runs on linux and invokes things like gcc, shell scripts, etc.
Part of the solution is written in mono and could be compiled easily on linux.
But I want to obfuscate the code. Not manually, but as part of the build process.
Therefore I need to invoke Dotfuscator, and Dotfuscator so far only runs on Windows.
Is there a good solution for invoking command-line-based workers/build scripts on a Windows machine remotely from Linux? I don't just want to run a command remotely, but also pass files along.
Something like a Windows service that is accessed using simple curl uploads of a tar file, creates a temp folder for each concurrently connected client (or blocks concurrent calls), unpacks the file, invokes something on those files, and packages the result again as a tar file to give back to the caller? And that clears the temp folder even in case of failure?
Maybe someone knows a good solution that saves me from writing this myself!
It should not be so uncommon that a build process spans multiple platforms, yet the common build-server answers I found mainly talk about only one build script.
Also think about running, e.g., the NSIS setup builder from a Linux-driven build script, if part of your solution has a tiny Windows component.
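Purely as a sketch of the shape such a service could take (every name, port, and command below is made up, and this is not a hardened implementation), a small Python handler on the Windows box might look like this; the stock HTTPServer is single-threaded, so concurrent calls are simply serialized, and the temp folder is removed even if the build step fails:

# Hypothetical Windows-side helper: accept a POSTed tar, unpack it into a fresh
# temp folder, run one command over it, and return the results as a tar.gz.
import io
import subprocess
import tarfile
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer

class BuildHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        payload = self.rfile.read(int(self.headers["Content-Length"]))
        with tempfile.TemporaryDirectory() as workdir:  # cleaned up even on failure
            with tarfile.open(fileobj=io.BytesIO(payload)) as tar:
                tar.extractall(workdir)
            # Placeholder for the real obfuscation step (e.g. Dotfuscator's CLI).
            subprocess.run(["dotfuscator.cmd", workdir], check=True)
            out = io.BytesIO()
            with tarfile.open(fileobj=out, mode="w:gz") as tar:
                tar.add(workdir, arcname=".")
            body = out.getvalue()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), BuildHandler).serve_forever()

The Linux side could then upload the sources and fetch the result with something like curl --data-binary @payload.tar http://winbox:8080/ -o result.tar.gz.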

How to make sphinx look for modules in virtualenv while building html?

I want to build html docs using a virtualenv instead of the native environment on my machine.
I've entered the virtualenv, but when I run make html I get errors saying the module can't be imported. I know the errors are due to the module being unavailable in my native environment.
How can I specify which environment should be used when building the docs (e.g. the virtualenv)?
The problem is correctly spotted by Mathijs.
$ which sphinx-build
/usr/local/bin/sphinx-build
I solved this issue by installing Sphinx itself in the virtual environment.
With the environment activated:
$ source /home/migonzalvar/envs/myenvironment/bin/activate
$ pip install sphinx
$ which sphinx-build
/home/migonzalvar/envs/myenvironment/bin/sphinx-build
It seems neat enough.
The problem here is that make html uses the sphinx-build command as a normal shell command, and that file explicitly specifies which Python interpreter to use in its first line (i.e. #!/usr/bin/python). If Python gets invoked in this way, it will not use your virtual environment.
A quick and dirty way around this is by explicitly calling the sphinx-build Python script from an interpreter. In the Makefile, this can be achieved by changing SPHINXBUILD to the following:
SPHINXBUILD = python <absolute_path_to_sphinx-build-file>/sphinx-build
If you do not want to modify your Makefile you can also pass this parameter from the command line, as follows:
make html SPHINXBUILD='python <path_to_sphinx>/sphinx-build'
Now if you execute make html from within your virtualenv environment, it should use the Python interpreter from within your environment and you should see Sphinx finding all the goodies it requires.
I am well aware that this is not a neat solution, as a Makefile like this should not assume any specific location for the sphinx-build file, so any suggestions for a more suitable solution are warmly welcomed.
I had the same problem, but I couldn't use the accepted solution because I didn't use the Makefile. I was calling sphinx-build from within a custom python build file. What I really wanted to do was to call sphinx-build with the exact same environment that I was calling my python build script with. Fiddling with paths was too complicated and error prone, so I ended up with what seems to me like an elegant solution, which is to "manually" load the console script entry point and call it:
# Load the 'sphinx-build' console-script entry point from the installed Sphinx package
# and call it with a normal argv (basepath/destpath are the source and output
# directories defined elsewhere in the build script).
from pkg_resources import load_entry_point
cmd = load_entry_point('Sphinx', 'console_scripts', 'sphinx-build')
cmd(['sphinx-build', basepath, destpath])
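On newer Sphinx releases (roughly 1.7 and later), as far as I know the same thing can be done without pkg_resources by importing the build entry point directly; this is a hedged variant of the snippet above, with basepath/destpath meaning the same source and output directories:

# Assumes Sphinx >= 1.7, where the sphinx-build CLI lives in sphinx.cmd.build.
from sphinx.cmd.build import main
main(['-b', 'html', basepath, destpath])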