Setting [no_vars] theory-wide - presentation

I have an Isabelle theory that collects all my results and only presents them, so I currently apply [no_vars] to all my @{thm} antiquotations. (This suppresses the question marks on schematic variables, which are undesirable in the final presentation.)
Is there a way to make Isabelle use no_vars once for the whole theory?

I think the canonical way (whatever that means) is to add such options to your session ROOT. Either globally, e.g.,
session A = B +
  options [show_question_marks = false]
  theories
    ...
or per theory, e.g.,
session A = B +
  theories [show_question_marks = false]
    T1
  theories
    T2
    ...

The magic invocation is
declare [[show_question_marks = false]]
as documented in the section “Details of printed output” in isar-ref.pdf.
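For context, here is a minimal sketch of where the declaration sits in a theory file (the theory name and the conjI example are illustrative, not from the original question):

theory Presentation
  imports Main
begin

declare [[show_question_marks = false]]

text ‹The rule @{thm conjI} now prints its schematic variables without the ? prefix.›

end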

Allowed approaches for addressing SQL Injection in Fortify

I have the following code for implementing a drop down menu. The user selects two values, and, based on the input, the query selects the relevant columns to be shown to the user:
String sql = "SELECT :first, :second from <table>";
sql = sql.replace(":first", <first_user_input>);
sql = sql.replace(":second", <second_user_input>);
Now, Fortify catches these lines as allowing SQL Injection. My question is, will Fortify accept a RegEx based whitelisting approach as a solution?
I was thinking of adopting the following approach:
if (isValidSQL(<first_user_input>) && isValidSQL(<second_user_input>))
{
    sql = sql.replace(...);
}
else
{
    throw new IllegalSQLInputException();
}
and
public boolean isValidSQL(String param)
{
    // RegEx for matching column names like "FIRST_NAME" or "LNAME",
    // but NOT "DROP<space>TABLE"
    Pattern p = Pattern.compile("[A-Z_]+");
    Matcher m = p.matcher(param);
    return m.matches();
}
So, would Fortify accept this as a valid whitelisting method? If Fortify works on the below grammar:
valid_sql := <immutable_string_literal>  // e.g. "SELECT * FROM <table> WHERE x = ?"
valid_sql := valid_sql + valid_sql       // e.g. "SELECT * FROM <table> " + "WHERE x = ?"
then I don't suppose regex-based whitelisting would work. In that case, only this example would work, since it only appends strings that are fixed ahead of run time. I would rather avoid that approach, since it would result in a massive number of switch-case statements.
Thank You
So, after trying the above-mentioned approach, I found that Fortify still flags the line as a potential threat. However, after reading about how Fortify works, I'm not sure whether the "strictness" comes from Fortify itself or from the corporate rules defined in its XML configuration file. I think that if one's corporate rules allow regular-expression-based whitelisting, then that should do the trick.
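One way to keep the appended strings fixed without a pile of switch-case statements is a lookup map from user input to compile-time column constants. A minimal sketch, assuming Java 9+ (the names ColumnWhitelist, COLUMNS, and buildQuery are illustrative, not from the original post):

import java.util.Map;

public final class ColumnWhitelist {
    // Only these compile-time constants can ever reach the SQL string.
    private static final Map<String, String> COLUMNS = Map.of(
        "first name", "FIRST_NAME",
        "last name",  "LNAME"
    );

    public static String buildQuery(String first, String second) {
        String c1 = COLUMNS.get(first);
        String c2 = COLUMNS.get(second);
        if (c1 == null || c2 == null) {
            throw new IllegalArgumentException("column not in whitelist");
        }
        // User input only selects among fixed literals; it is never concatenated.
        return "SELECT " + c1 + ", " + c2 + " FROM my_table";
    }
}

The map lookup breaks the data flow from user input into the query string, which taint trackers generally recognize, though whether Fortify accepts it still depends on your rule pack.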

Spark caching strategy

I have a Spark driver that goes like this:
EDIT - earlier version of the code was different & didn't work
var totalResult = ... // RDD[(key, value)]
var stageResult = totalResult
do {
  stageResult = stageResult.flatMap(
    // Some code that returns zero or more outputs per input,
    // and updates `acc` to the number of outputs
    ...
  ).reduceByKey((x, y) => x.sum(y))
  totalResult = totalResult.union(stageResult)
} while (stageResult.count() > 0)
I know from properties of my data that this will eventually terminate (I'm essentially aggregating up the nodes in a DAG).
I'm not sure of a reasonable caching strategy here - should I cache stageResult each time through the loop? Am I setting up a horrible tower of recursion, since each totalResult depends on all previous incarnations of itself? Or will Spark figure that out for me? Or should I put each RDD result in an array and take one big union at the end?
Suggestions will be welcome here, thanks.
I would rewrite this as follows:
do {
  stageResult = stageResult.flatMap(
    // Some code that returns zero or more outputs per input
  ).reduceByKey(_ + _).cache()
  totalResult = totalResult.union(stageResult)
} while (stageResult.count() > 0)
I am fairly certain (95%) that the stageResult DAG used in the union will be the correct reference (especially since count should trigger it), but this might need to be double-checked.
Then, when you call an action on totalResult, it will put all of the cached data together.
ANSWER BASED ON UPDATED QUESTION
As long as you have the memory, I would indeed cache everything along the way, storing the data of each stageResult and unioning all of those datasets at the end. In fact, each union does not rely on the past; that is not the semantics of RDD.union, which merely puts the RDDs together at the end. Thanks to RDD immutability, you could just as easily change your code to use a val.
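If you'd rather avoid the chain of pairwise unions entirely, here is a minimal sketch of the buffer variant mentioned in the question (expand, initial, and the key/value types are placeholders, not from the original code):

import scala.collection.mutable.ArrayBuffer
import org.apache.spark.rdd.RDD

val stages = ArrayBuffer.empty[RDD[(String, Long)]]   // placeholder key/value types
var stageResult: RDD[(String, Long)] = initial
do {
  stageResult = stageResult.flatMap(expand).reduceByKey(_ + _).cache()
  stages += stageResult                                // keep a handle on every stage
} while (stageResult.count() > 0)
val totalResult = sc.union(stages)                     // one n-way union at the end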
As a final note, the DAG visualization in the Spark UI may help in understanding why there are no recursive ramifications.

Assigning a whole DataStructure its nullind array

Some context before the question.
Imagine a file, FileA, with around 50 fields of different types. Instead of having every program use the file directly, I tried having a service program, so the file can only be accessed through that service program. The programs calling the service then receive a data structure based on the file structure, defined with ExtName. I use SQL to retrieve the information, so, basically, the procedure goes like this:
Data structure shared by the service program:
D FileADS E DS ExtName(FileA) Qualified
Procedure called by programs:
P getFileADS B Export
D PI N
D PI_IDKey 9B 0 Const
D PO_DS LikeDS(FileADS)
D LocalDS E DS ExtName(FileA) Qualified
D NullInd S 5i 0 Dim(50) <-- since there are 50 fields in FileA
//Code
Clear LocalDS;
Clear PO_DS;
exec sql
SELECT *
INTO :LocalDS :nullind
FROM FileA
WHERE FileA.ID = :PI_IDKey;
If SqlCod <> 0;
Return *Off;
EndIf;
PO_DS = LocalDS;
Return *On;
P getFileADS E
So, that procedure will return a datastructure filled with a record from FileA if it finds it.
Now my question: is there any way I can assign %nullind(field) = *On without specifying each of the 50 fields of my file?
Something like this loop:
i = 1;
DoW (i <= 50);
  If nullind(i) = -1;
    %nullind(datastructure.field) = *On;
  EndIf;
  i += 1;
EndDo;
Because, let's face it, it would be a pain to list every field of every file each time.
I know a simple chain(n) could do the trick
chain(n) PI_IDKey FileA FileADS;
but I really was looking to do it with SQL.
Thank you for your advice!
OS version: 7.1
First, you'll be better off in the long run by eliminating SELECT * and supplying a SELECT list of the 50 field names.
Next, consider these two web pages -- Meaningful Names for Null Indicators and Embedded SQL and null indicators. The first shows an example of assigning names to each null indicator to match the associated field names. It's just a matter of declaring a based DS with names, based on the address of your null indicator array. The second points out how a null indicator array can be larger than needed, so future database changes won't affect results. (Bear in mind that the page shows a null array of 1000 elements, and the memory is actually relatively tiny even at that size. You can declare it smaller if you think it's necessary for some reason.)
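As a rough free-form sketch of that first idea (the field names here are hypothetical, not FileA's actual fields):

dcl-s NullInds   int(5) dim(50);                 // indicator array passed to SQL
dcl-s pNullInds  pointer inz(%addr(NullInds));
dcl-ds NamedNulls based(pNullInds) qualified;    // overlays the same storage
  id        int(5);                              // one named indicator per field,
  firstName int(5);                              // in the file's field order
  lastName  int(5);
  // ...continue for the remaining fields
end-ds;

// After the FETCH/SELECT you can test by name:
if NamedNulls.firstName = -1;
  // FIRSTNAME was null
endif;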
You're creating a proc that you'll only write once. It's not worth saving the effort of listing the 50 fields. Maybe if you had many programs using this proc and you had to create the list each time it'd be a slight help to use SELECT *, but even then it's not a great idea.
A matching template DS for the 50 data fields can be defined in the /COPY member that will hold the proc prototype. The template DS will be available in any program that brings the proc prototype in. Any program that needs to call the proc can simply specify LIKEDS referencing the template to define its version in memory. The template DS should probably include the QUALIFIED keyword, and programs would then use their own DS names as the qualifying prefix. The null indicator array can be handled similarly.
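A minimal sketch of such a /COPY member (the prototype, template name, and key type are illustrative):

// In the /COPY member with the prototype:
dcl-ds FileADS_t extname('FILEA') qualified template;
end-ds;

dcl-pr getFileADS ind;
  idKey  packed(9: 0) const;       // illustrative key type
  outDs  likeds(FileADS_t);
end-pr;

// In a calling program:
dcl-ds myFileA likeds(FileADS_t);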
However, it's not completely clear what your actual question is. You show an example loop and ask if it'll work, but you don't say if you had a problem with it. It's an array, so a loop can be used much like you show. But it depends on what you're actually trying to accomplish with it.
For old-school RPG, just include the null flags in the data structure populated by the SELECT statement:
select col1, case when col1 is null then -1 else 0 end,
       col2, case when col2 is null then -1 else 0 end, ...
into :dsfilewithnull from FileA f where f.id = :id;
For old-school RPG that can't handle nulls, remove them in the SELECT statement:
select coalesce(col1, 0), coalesce(col2, ' '), coalesce(col3, :lowdate)
into :dsfile from FileA f where f.id = :id;
The second method would be easier to use in a legacy environment.
Pass the key by value to the procedure so you can use it like a built-in function.
One answer to your question would be to make the array part of a data structure, and assign *all'0' to the data structure.
dcl-ds nullIndDs;
  nullInd Ind Dim(50);
end-ds;

nullIndDs = *all'0';
The answer by jmarkmurphy is an example of assigning all zeros to an array of indicators. For the example that you show in your question, you can do it this way:
D NullInd         S              5i 0 dim(50)
 /free
   NullInd(*) = 1;
   NullInd(*) = 0;
   *inlr = *on;
   return;
 /end-free
That's a complete program that you can compile and test. Run it in debug and stop at the first statement. Display NullInd to see the initial value of its elements. Step through the first statement and display it again to see how the elements changed. Step through the next statement to see how things changed again.
As for "how to do it in SQL", that part doesn't make sense. SQL sets the values automatically when you FETCH a row. Other than that, the array is used by the host language (RPG in this case) to communicate values back to SQL. When a SQL statement runs, it again automatically uses whatever values were set. So, it either is used automatically by SQL for input or output, or is set by your host language statements. There is nothing useful that you can do 'in SQL' with that array.

Erlang mnesia equivalent of "select * from Tb"

I'm a total erlang noob and I just want to see what's in a particular table I have. I want to just "select *" from a particular table to start with. The examples I'm seeing, such as the official documentation, all have column restrictions which I don't really want. I don't really know how to form the MatchHead or Guard to match anything (aka "*").
A very simple primer on how to just get everything out of a table would be very appreciated!
For example, you can use qlc (note that in a module, qlc:q/1 requires -include_lib("stdlib/include/qlc.hrl"); the shell handles it without the include):
F = fun() ->
        Q = qlc:q([R || R <- mnesia:table(foo)]),
        qlc:e(Q)
    end,
mnesia:transaction(F).
The simplest way to do it is probably mnesia:dirty_match_object:
mnesia:dirty_match_object(foo, #foo{_ = '_'}).
That is, match everything in the table foo that is a foo record, regardless of the values of the fields (every field is '_', i.e. wildcard). Note that since it uses record construction syntax, it will only work in a module where you have included the record definition, or in the shell after evaluating rr(my_module) to make the record definition available.
(I expected mnesia:dirty_match_object(foo, '_') to work, but that fails with a bad_type error.)
To do it with select, call it like this:
mnesia:dirty_select(foo, [{'_', [], ['$_']}]).
Here, the match head is '_', i.e. match anything. The guards are [], an empty list, i.e. no extra limitations. The result spec is ['$_'], i.e. return the entire record. For more information about match specifications, see the match specifications chapter of the ERTS user guide.
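If you want the same full-table read under transaction protection rather than through the dirty API, a minimal sketch (table name foo assumed, as above):

F = fun() -> mnesia:select(foo, [{'_', [], ['$_']}]) end,
{atomic, Rows} = mnesia:transaction(F).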
If an expression is too deep and gets printed with ... in the shell, you can ask the shell to print the entire thing by evaluating rp(EXPRESSION). EXPRESSION can either be the function call once again, or v(-1) for the value returned by the previous expression, or v(42) for the value returned by the expression preceded by the shell prompt 42>.

Get all top level entry values from NotesView

What is the easiest way to get all top-level entry values from a Notes view? I have found the property "TopLevelEntryCount", which returns the number of all categories, but I want the values, too.
Iterating the view entries and reading the column values takes too much time:
Set ecl = view.Allentries
Set ve = ecl.Getfirstentry()
While Not (ve Is Nothing)
    If IsArray(ve.Columnvalues(0)) Then
        If flag = "" Then
            arr = ve.Columnvalues(0)
        Else
            arr = ArrayUnique(ArrayAppend(arr, ve.Columnvalues(0)))
        End If
    Else
        'error if arr is not already an array
        arr = ArrayUnique(ArrayAppend(arr, ve.Columnvalues(0)))
    End If
    flag = "1"
    Set ve = ecl.Getnextentry(ve)
Wend
Anyone know a faster way? It doesn't have to be written in LotusScript; actually, I would prefer Java, JS, or SSJS.
Idea 1: You can use the NotesViewNavigator class, and call the getFirst() and getNextCategory() methods. This should be faster than walking all the entries.
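A rough Java sketch of Idea 1 (assumes view is a categorized lotus.domino.View; variable names are illustrative, and NotesException handling is omitted for brevity):

import lotus.domino.*;
import java.util.*;

ViewNavigator nav = view.createViewNav();
List<String> categories = new ArrayList<String>();
ViewEntry entry = nav.getFirst();               // first top-level category
while (entry != null) {
    if (entry.isCategory()) {
        categories.add(String.valueOf(entry.getColumnValues().elementAt(0)));
    }
    ViewEntry next = nav.getNextCategory();     // skip the documents underneath
    entry.recycle();
    entry = next;
}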
Idea 2: You can use NotesSession.Evaluate() with a formula that does an @Unique on @DbColumn for the first column of the view. That should bring you back an array, and you can get the Ubound of the array. Formulas tend to be very fast, but Evaluate() has to compile them first, so I don't know whether this will be faster or not. The disadvantage of this approach is that for very large views, it could exceed formula language's size limits. But if it does prove to be faster, you could catch the exception and fall back to the slower method of iterating.
Richard's response is perfect! For a Java version it's almost the same: see the Domino help for getNextCategory; the example given under ViewNavigator really answers your need (link here). The evaluate method is also available in the Java Session class.
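A rough sketch of that evaluate variant (the view name "MyView" and column 1 are illustrative; NotesException handling omitted):

Vector<?> unique = session.evaluate(
    "@Unique(@DbColumn(\"\" : \"NoCache\"; \"\"; \"MyView\"; 1))");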