Is it legal to repeat the same value in a MULTIPLECHARVALUE or MULTIPLESTRINGVALUE field? - fix-protocol

Let's assume a FIX field is of type MULTIPLECHARVALUE or MULTIPLESTRINGVALUE, and the enumerated values defined for the field are A, B, C and D. I know that "A C D" is a legal value for this field, but is it legal for a value to be repeated in the field? For example, is "A C C D" legal? If so, what are its semantics?
I can think of three possibilities:
"A C C D" is an invalid value because C is repeated.
"A C C D" is valid and semantically the same as "A C D". In other words, set semantics are intended.
"A C C D" is valid and has multiset/bag semantics.
Unfortunately, I cannot find any clear definition of the intended semantics of MULTIPLECHARVALUE and MULTIPLESTRINGVALUE in FIX specification documents.

The FIX50SP2 spec does not answer your question, so I can only conclude that any of the three interpretations could be considered valid.
Like so many questions with FIX, the true answer depends on the counterparty you are communicating with.
So my answer is:
If you are the client app, ask your counterparty what they want (or check their docs).
If you are the server app, you get to decide. Your docs should tell your clients how to behave.
If it helps, the QuickFIX/n engine treats MultipleCharValue/MultipleStringValue fields as strings, and leaves it to the application code to parse out the individual values. Thus, it's easy for a developer to support any of the interpretations, or even different interpretations for different fields. (I suspect the other QuickFIX language implementations are the same.)
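For illustration, here is a minimal sketch in Scala of what that application-side parsing could look like under each interpretation (the function names are invented for this example; nothing here is mandated by the spec or by QuickFIX):

// Split a raw MultipleCharValue/MultipleStringValue body on whitespace.
def parseMultiValue(raw: String): List[String] =
  raw.trim.split("\\s+").toList                  // multiset/bag semantics: duplicates kept

def parseMultiValueAsSet(raw: String): Set[String] =
  parseMultiValue(raw).toSet                     // set semantics: "A C C D" same as "A C D"

def parseMultiValueStrict(raw: String): Either[String, Set[String]] = {
  val values = parseMultiValue(raw)              // reject-duplicates semantics
  val unique = values.toSet
  if (unique.size == values.size) Right(unique)
  else Left(s"duplicate values in multi-value field: $raw")
}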

The definition of a MultipleValueString field is a string field containing one or more space-delimited values. I haven't got the official spec, but there are a few places where this definition can be found:
https://www.onixs.biz/fix-dictionary/4.2/index.html#MultipleValueString (I know onixs.biz to be very faithful to the standard specification)
String field (see definition of "String" above) containing one or more space delimited values.
https://aj-sometechnicalitiesoflife.blogspot.com/2010/04/fix-protocol-interview-questions.html
12. What is MultipleValueString data type? [...]
String field containing one or more space delimited values.
This leaves it up to each specific field of this type whether duplicates are allowed, though I suspect few, if any, would need them. As far as I can tell, the FIX specification deliberately leaves this open.
E.g. for ExecInst<18> it would be silly to specify the same instruction multiple times. I would also expect each and every implementation to behave differently (for instance, one ignoring duplicates, another balking with an error/rejection).

Subresource and path variable conflicts in REST?

Is it considered bad practice to design a REST API that may have an ambiguity in the path resolution? For example:
GET /animals/{id} // Returns the animal with the given ID
GET /animals/dogs // Returns all animals of type dog
OK, that's contrived, because you would actually just do GET /dogs, but hopefully it illustrates what I mean. From a path resolution standpoint, it seems like you wouldn't know whether you were looking for an animal with id="dogs" or just all the dogs.
Specifically, I'm interested in whether Jersey would have any trouble resolving this. What if you knew the id to be an integer?
"Specifically, I'm interested in whether Jersey would have any trouble resolving this"
No, this would not be a problem. If you look at the JAX-RS spec § 3.7.2, you'll see the algorithm for matching requests to resource methods.
[E is the set of matching methods]...
Sort E using the number of literal characters in each member as the primary key (descending order), the number of capturing groups as a secondary key (descending order) and the number of capturing groups with non-default regular expressions (i.e. not ‘([^ /]+?)’) as the tertiary key (descending order)
So basically it's saying that the number of literal characters is the primary sort key (note that it is short-circuiting: if you win the primary, you win). So for example, if a request goes to /animals/cat, @Path("/animals/dogs") would obviously not be in the set, so we don't need to worry about it. But if the request is to /animals/dogs, then both methods would be in the set. The set is then sorted by the number of literal characters. Since @Path("/animals/dogs") has more literal characters than @Path("/animals/{id}") (the capturing group {id} doesn't count towards literal characters), the former wins.
"What if you knew the id to be an integer?"
The capture group allows for a regex, so you can use @Path("/animals/{id: \\d+}"). Anything that isn't all digits will not match and will lead to a 404, unless of course it is /animals/dogs.
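To make that concrete, here is a minimal sketch of the two competing resource methods (written in Scala against the standard javax.ws.rs annotations; the class and method names are invented):

import javax.ws.rs.{GET, Path, PathParam}

@Path("/animals")
class AnimalsResource {

  // Matches only when the segment is all digits, thanks to the regex.
  @GET
  @Path("/{id: \\d+}")
  def byId(@PathParam("id") id: Int): String = s"animal #$id"

  // "/animals/dogs" has more literal characters than the template above,
  // so this method wins for GET /animals/dogs.
  @GET
  @Path("/dogs")
  def dogs(): String = "all dogs"
}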

How do purely functional compilers annotate the AST with type info?

In the syntax analysis phase, an imperative compiler can build an AST out of nodes that already contain a type field that is set to null during construction, and then later, in the semantic analysis phase, fill in the types by assigning the declared/inferred types into the type fields.
How do purely functional languages handle this, where you do not have the luxury of assignment? Is the type-less AST mapped to a different kind of type-enriched AST? Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?
Are there purely functional programming tricks that help the compiler writer with this problem?
I usually rewrite a source AST (or one already lowered by several steps) into a new form, replacing each expression node with a pair (tag, expression).
Tags are unique numbers or symbols that are then used by the next pass, which derives type equations from the AST. E.g., a + b will yield something like { numeric(Tag_a). numeric(Tag_b). equals(Tag_a, Tag_b). equals(Tag_e, Tag_a). }.
The type equations are then solved (e.g., by simply running them as a Prolog program), and, if successful, all the tags (which are variables in this program) are bound to concrete types; if not, they're left as type parameters.
In the next step, our previous AST is rewritten again, this time replacing tags with all the inferred type information.
The whole process is a sequence of pure rewrites; there is no need to destructively replace anything in your AST. A typical compilation pipeline may take a couple dozen rewrites, some of them changing the AST datatype.
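A minimal sketch of the first two rewrites in Scala (all type and constructor names here are invented for illustration):

sealed trait Exp
case class Var(name: String)   extends Exp
case class Add(l: Exp, r: Exp) extends Exp

sealed trait TExp { def tag: Int }
case class TVar(tag: Int, name: String)     extends TExp
case class TAdd(tag: Int, l: TExp, r: TExp) extends TExp

// First rewrite: give every node a fresh tag, purely, with no mutation.
def tagExp(e: Exp, fresh: Iterator[Int]): TExp = e match {
  case Var(n)    => TVar(fresh.next(), n)
  case Add(l, r) => TAdd(fresh.next(), tagExp(l, fresh), tagExp(r, fresh))
}

// Second pass: derive equations over the tags, e.g. for a + b.
sealed trait Constraint
case class Numeric(tag: Int)      extends Constraint
case class Equals(a: Int, b: Int) extends Constraint

def constraints(e: TExp): List[Constraint] = e match {
  case TVar(_, _)    => Nil
  case TAdd(t, l, r) =>
    Numeric(l.tag) :: Numeric(r.tag) :: Equals(l.tag, r.tag) :: Equals(t, l.tag) ::
      (constraints(l) ++ constraints(r))
}

Solving the resulting constraints (by unification, or by running them as a logic program as described above) then yields the tag-to-type binding used by the final rewrite.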
There are several options to model this. You may use the same kind of nullable data fields as in your imperative case:
data Exp = Var Name (Maybe Type) | ...
parse :: String -> Maybe Exp -- types are Nothings here
typeCheck :: Exp -> Maybe Exp -- turns Nothings into Justs
or even, using a more precise type
data Exp ty = Var Name ty | ...
parse :: String -> Maybe (Exp ())
typeCheck :: Exp () -> Maybe (Exp Type)
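The same idea can be rendered in Scala: one node definition parameterized by the annotation slot, so the "untyped" and "typed" trees are a single datatype at two different instantiations (a sketch with invented names):

sealed trait Type
case object IntT extends Type
case class FunT(from: Type, to: Type) extends Type

sealed trait Exp[T]
case class Var[T](name: String, ann: T)            extends Exp[T]
case class App[T](fn: Exp[T], arg: Exp[T], ann: T) extends Exp[T]

// The parser produces Exp[Unit]; the type checker produces Exp[Type].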
I can't speak for how it is supposed to be done, but I did this in F# for a C# compiler here.
The approach was basically: build an AST from the source, leaving things like type information unconstrained. So AST.fs is basically the AST with strings for the type names, function names, etc.
As the AST starts to be compiled to (in this case) .NET IL, we end up with more type information (we create the types in the source; let's call these type stubs). This then gives us the information needed to create method stubs (the code may have signatures that include type stubs as well as built-in types). From here we now have enough type information to resolve any of the type names or method signatures in the code.
I store that in the file TypedAST.fs. I do this in a single pass; however, the approach may be naive.
Now we have a fully typed AST you could then do things like compile it, fully analyze it, or whatever you like with it.
So in answer to the question "Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?", I can't say definitively that this is the case, but it is certainly what I did, and it appears to be what MS have done with Roslyn (although they have essentially decorated the original tree with type info, IIRC).
"Are there purely functional programming tricks that help the compiler writer with this problem?"
Given that the ASTs are essentially mirrored in my case, it would be possible to make the tree generic and transform it, but the code might end up (even more) horrendous.
i.e.
type 'ty AST =
    | MethodInvoke of 'ty * Name * 'ty list
    | ...
As when dealing with relational databases, in functional programming it is often a good idea not to put everything in a single data structure.
In particular, there may not be a data structure that is "the AST".
Most probably, there will be data structures that represent parsed expressions. One possible way to deal with type information is to assign a unique identifier (like an integer) to each node of the tree already during parsing and have some suitable data structure (like a hash map) that associates those node-ids with types. The job of the type inference pass, then, would be just to create this map.
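A small sketch of that approach in Scala (names invented): the parser stamps each node with an id, and type inference returns a map instead of a modified tree:

final case class NodeId(value: Int)

sealed trait Exp { def id: NodeId }
case class IntLit(id: NodeId, n: Int)       extends Exp
case class Plus(id: NodeId, l: Exp, r: Exp) extends Exp

sealed trait Type
case object IntT extends Type

// The "type annotation" pass: build a side table, leave the tree untouched.
def infer(e: Exp, acc: Map[NodeId, Type] = Map.empty): Map[NodeId, Type] =
  e match {
    case IntLit(id, _)  => acc + (id -> IntT)
    case Plus(id, l, r) => infer(r, infer(l, acc)) + (id -> IntT)
  }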

Motivation for Scala underscore in terms of formal language theory and good style?

Why is it that many people say that using underscore is good practice in Scala and makes your code more readable? They say the motivation comes from formal language theory. Nevertheless, many programmers coming from other languages, especially those that have anonymous functions, prefer not to use underscores, particularly as placeholders.
So what is the point of the underscore? Why does Scala (and some other functional languages, as pointed out by om-nom-nom) have the underscore? And what is the formal underpinning, in terms of complexity and language theory, as to why it is often good style to use it?
Linguistics
The origin and motivation for most of the underscore uses in Scala is to allow one to construct expressions and declarations without the need to give every variable of the language a name (I mean "variable" as in Predicate Calculus, not in programming). We use this all the time in Natural Language; for example, I referred to a concept in the previous sentence in this sentence using "this", and I referred to this sentence using "this", without there being any confusion over what I mean. In Natural Language these words are usually called "pronouns", "anaphors" or "cataphors", the referents are called "antecedents" or "postcedents", and the process of understanding/dereferencing them is called "anaphora".
Algorithmic Information Theory
If we had to name every 'thing' in Natural Language before we could refer to it, and similarly every type of thing in order to quantify over it, as in Predicate Calculus and in most programming languages, then speaking would become extremely long-winded. It is thanks to context that we can infer what is meant by words like "this", "it", "that", etc., and we do it easily.
So why restrict this simple, elegant and efficient means of communication to Natural Language? Hence it was added to Scala.
If we did attempt to name every single 'thing' or 'type of thing', sentences would become so long and complicated that they would be very difficult to understand, due to their verbosity and the introduction of redundant symbols. The more symbols you add to a sentence, the more difficult it becomes to understand; ergo, this is why it's good practice, not only in Natural Language but in Scala too. In fact, one could formalize this assertion in terms of Kolmogorov Complexity and prove that a sequence of sentences adopting placeholders has lower complexity than one that unnecessarily names everything (unless the name is exactly the same in every instance, but that usually doesn't make sense). Therefore we can conclusively say, contrary to some programmers' belief, that the placeholder syntax is simpler and easier to read.
The reason its use meets some resistance is that if one is already a programmer, one must make an effort to retrain the brain not to name everything, just as (if they can remember) they may have found that learning to code in the first place required quite an effort.
Examples
Now let's look at some specific uses more formally:
Placeholder Syntax
Means "it", "them", "that", "their" etc (i.e. pronouns), e.g. 1
lines.map(_.length)
can be read as "map lines to their length", similarly we can read lineOption.map(_.length) as "map the line to it's length". In terms of complexity theory, this is simpler than "for each 'line' in lines, take the length of 'line'" - which would be lines.map(line => line.length).
Can also be read as "the" (definite article) when used with type annotation, e.g.
(_: Int) + 1
"Add 1 to the integer"
Existential Types
Means "of some type" ("some" the pronoun), e.g
foo: Option[_]
means "foo is an Option of some type".
Higher Kinded type parameters
Again, basically means "of some type" ("some" the pronoun), e.g.
class A[K[_],T](a: K[T])
Can be read "class A takes some K of some type ..."
Pattern Match Wildcards
Means "anything" or "whatever" (pronouns), e.g.
case Foo(_) => "hello"
can be read as "for a Foo containing anything, return 'hello'", or "for a Foo containing whatever, return 'hello'"
Import Wildcards
Means "everything" (pronoun), e.g.
import foo._
can be read as "import everything from foo".
Default Values
Now I read this like "a" (indefinite article), e.g. (note that only a var field may be default-initialized like this):
var wine: RedWine = _
"Give me a red wine", the waiter should give you the house red.
Other uses of underscore
The other uses of underscores are not really related to the point of this Q&A; nevertheless, we briefly discuss them.
Ignored Values/Params/Extractions
Allow us to ignore things in an explicit 'pattern safe' way. E.g.
val (x, _) = getMyPoint
This says we are not going to use the second coordinate, so there's no need to complain when you can't find a use for it in the code.
Import Hiding
Just a way to say "except" (preposition).
Function Application
E.g.
val f: String => Unit = println _
This is an interesting one, as it has an exact analogue in linguistics, namely nominalization: "the use of a verb, an adjective, or an adverb as the head of a noun phrase, with or without morphological transformation" (Wikipedia). More simply, it is the process of turning verbs or adjectives into nouns.
Use in special method names
Purely a syntax thing and doesn't really relate to linguistics.

Purpose of Scala's Symbol? [duplicate]

Possible Duplicate:
What are some example use cases for symbol literals in Scala?
What's the purpose of Symbol, and why does it deserve special literal syntax, e.g. 'FooSymbol?
Symbols are used where you have a closed set of identifiers that you want to be able to compare quickly. When you have two String instances they are not guaranteed to be interned[1], so to compare them you must often check their contents by comparing lengths and even checking character-by-character whether they are the same. With Symbol instances, comparisons are a simple eq check (i.e. == in Java), so they are constant time (i.e. O(1)) to look up.
This sort of structure tends to be used more in dynamic languages (notably Ruby and Lisp code tends to make a lot of use of symbols) since in statically-typed languages one usually wants to restrict the set of items by type.
Having said that, if you have a key/value store with a restricted set of keys, where it would be unwieldy to use a statically typed object, a Map[Symbol, Data]-style structure might well be good for you.
A note about String interning in Java (and hence Scala): Java Strings are interned in some cases anyway; in particular, string literals are automatically interned, and you can call the intern() method on a String instance to return an interned copy. Not all Strings are interned, though, which means the runtime still has to do the full check unless they are the same instance; interning makes comparing two equal interned strings faster, but does not improve the runtime of comparing different strings. Symbols benefit from being guaranteed to be interned, so a single reference equality check is sufficient to prove either equality or inequality.
[1] Interning is a process whereby when you create an object, you check whether an equal one already exists, and use that one if it does. It means that if you have two objects which are equal, they are precisely the same object (i.e. they are reference equal). The downsides to this are that it can be costly to look up which object you need to be using, and allowing objects to be garbage collected can require complex implementation.
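A quick REPL-style demonstration of the difference (Scala 2 syntax, matching the question):

val a = 'foo
val b = 'foo
assert(a eq b)        // same interned Symbol instance: one reference check

val s1 = new String("foo")
val s2 = new String("foo")
assert(s1 == s2)      // equal contents...
assert(!(s1 eq s2))   // ...but distinct instances, so == must inspect the contents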
Symbols are interned.
The purpose is that Symbols are more efficient than Strings, and Symbols with the same name refer to the same Symbol object instance.
Have a read about Ruby symbols: http://glu.ttono.us/articles/2005/08/19/understanding-ruby-symbols
You can only get the name of a Symbol:
scala> val aSymbol = 'thisIsASymbol
aSymbol: Symbol = 'thisIsASymbol
scala> assert("thisIsASymbol" == aSymbol.name)
It's not very useful in Scala and thus not widely used. In general, you can use a symbol where you'd like to designate an identifier.
For example, the reflection invocation feature that was planned for 2.8.0 used the syntax obj o 'method(arg1, arg2), where o was a method added to Any and Symbol was given an apply(Any*) method (both via 'pimp my library').
Another example: if you want an easier way to create HTML documents, then instead of using "div" to designate an element you'd write 'div. One can then imagine adding operators to Symbol as syntactic sugar for creating elements, as sketched below.
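A rough sketch of that idea (the SymbolElem wrapper and its apply method are invented for illustration):

implicit class SymbolElem(val tag: Symbol) extends AnyVal {
  def apply(children: String*): String =
    s"<${tag.name}>${children.mkString}</${tag.name}>"
}

'div('p("hello"), 'p("world"))
// yields "<div><p>hello</p><p>world</p></div>"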

What is the difference between a strongly typed language and a statically typed language?

Also, does one imply the other?
What is the difference between a strongly typed language and a statically typed language?
A statically typed language has a type system that is checked at compile time by the implementation (a compiler or interpreter). The type check rejects some programs, and programs that pass the check usually come with some guarantees; for example, the compiler guarantees not to use integer arithmetic instructions on floating-point numbers.
There is no real agreement on what "strongly typed" means, although the most widely used definition in the professional literature is that in a "strongly typed" language, it is not possible for the programmer to work around the restrictions imposed by the type system. This term is almost always used to describe statically typed languages.
Static vs dynamic
The opposite of statically typed is "dynamically typed", which means that
Values used at run time are classified into types.
There are restrictions on how such values can be used.
When those restrictions are violated, the violation is reported as a (dynamic) type error.
For example, Lua, a dynamically typed language, has a string type, a number type, and a Boolean type, among others. In Lua every value belongs to exactly one type, but this is not a requirement for all dynamically typed languages. In Lua, it is permissible to concatenate two strings, but it is not permissible to concatenate a string and a Boolean.
Strong vs weak
The opposite of "strongly typed" is "weakly typed", which means you can work around the type system. C is notoriously weakly typed because any pointer type is convertible to any other pointer type simply by casting. Pascal was intended to be strongly typed, but an oversight in the design (untagged variant records) introduced a loophole into the type system, so technically it is weakly typed.
Examples of truly strongly typed languages include CLU, Standard ML, and Haskell. Standard ML has in fact undergone several revisions to remove loopholes in the type system that were discovered after the language was widely deployed.
What's really going on here?
Overall, it turns out to be not that useful to talk about "strong" and "weak". Whether a type system has a loophole is less important than the exact number and nature of the loopholes, how likely they are to come up in practice, and what are the consequences of exploiting a loophole. In practice, it's best to avoid the terms "strong" and "weak" altogether, because
Amateurs often conflate them with "static" and "dynamic".
Apparently "weak typing" is used by some persons to talk about the relative prevalance or absence of implicit conversions.
Professionals can't agree on exactly what the terms mean.
Overall you are unlikely to inform or enlighten your audience.
The sad truth is that when it comes to type systems, "strong" and "weak" don't have a universally agreed on technical meaning. If you want to discuss the relative strength of type systems, it is better to discuss exactly what guarantees are and are not provided.
For example, a good question to ask is this: "is every value of a given type (or class) guaranteed to have been created by calling one of that type's constructors?" In C the answer is no. In CLU, F#, and Haskell it is yes. For C++ I am not sure—I would like to know.
By contrast, static typing means that programs are checked before being executed, and a program might be rejected before it starts. Dynamic typing means that the types of values are checked during execution, and a poorly typed operation might cause the program to halt or otherwise signal an error at run time. A primary reason for static typing is to rule out programs that might have such "dynamic type errors".
Does one imply the other?
On a pedantic level, no, because the word "strong" doesn't really mean anything. But in practice, people almost always do one of two things:
They (incorrectly) use "strong" and "weak" to mean "static" and "dynamic", in which case they (incorrectly) are using "strongly typed" and "statically typed" interchangeably.
They use "strong" and "weak" to compare properties of static type systems. It is very rare to hear someone talk about a "strong" or "weak" dynamic type system. Except for FORTH, which doesn't really have any sort of a type system, I can't think of a dynamically typed language where the type system can be subverted. Sort of by definition, those checks are bulit into the execution engine, and every operation gets checked for sanity before being executed.
Either way, if a person calls a language "strongly typed", that person is very likely to be talking about a statically typed language.
This is often misunderstood so let me clear it up.
Static/Dynamic Typing
Static typing is where the type is bound to the variable. Types are checked at compile time.
Dynamic typing is where the type is bound to the value. Types are checked at run time.
So in Java for example:
String s = "abcd";
s will "forever" be a String. During its life it may point to different Strings (since s is a reference in Java). It may have a null value but it will never refer to an Integer or a List. That's static typing.
In PHP:
$s = "abcd"; // $s is a string
$s = 123; // $s is now an integer
$s = array(1, 2, 3); // $s is now an array
$s = new DOMDocument; // $s is an instance of the DOMDocument class
That's dynamic typing.
Strong/Weak Typing
(Edit alert!)
Strong typing is a phrase with no widely agreed upon meaning. Most programmers who use this term to mean something other than static typing use it to imply that there is a type discipline that is enforced by the compiler. For example, CLU has a strong type system that does not allow client code to create a value of abstract type except by using the constructors provided by the type. C has a somewhat strong type system, but it can be "subverted" to a degree because a program can always cast a value of one pointer type to a value of another pointer type. So for example, in C you can take a value returned by malloc() and cheerfully cast it to FILE*, and the compiler won't try to stop you—or even warn you that you are doing anything dodgy.
(The original answer said something about a value "not changing type at run time". I have known many language designers and compiler writers and have not known one that talked about values changing type at run time, except possibly some very advanced research in type systems, where this is known as the "strong update problem".)
Weak typing implies that the compiler does not enforce a typing discipline, or perhaps that enforcement can easily be subverted.
The original version of this answer conflated weak typing with implicit conversion (sometimes also called "implicit promotion"). For example, in Java:
String s = "abc" + 123; // "abc123";
This code is an example of implicit promotion: 123 is implicitly converted to a string before being concatenated with "abc". It can be argued that the Java compiler rewrites that code as:
String s = "abc" + new Integer(123).toString();
Consider a classic PHP "starts with" problem:
if (strpos('abcdef', 'abc') == false) {
    // not found
}
The error here is that strpos() returns the index of the match, which is 0 in this case. 0 is coerced into boolean false, and thus the condition is actually true even though the substring was found. The solution is to use === instead of ==, which avoids the implicit conversion.
This example illustrates how a combination of implicit conversion and dynamic typing can lead programmers astray.
Compare that to Ruby:
val = "abc" + 123
which is a runtime error because in Ruby the object 123 is not implicitly converted just because it happens to be passed to a + method. In Ruby the programmer must make the conversion explicit:
val = "abc" + 123.to_s
Comparing PHP and Ruby is a good illustration here. Both are dynamically typed languages but PHP has lots of implicit conversions and Ruby (perhaps surprisingly if you're unfamiliar with it) doesn't.
Static/Dynamic vs Strong/Weak
The point here is that the static/dynamic axis is independent of the strong/weak axis. People confuse them probably in part because strong vs weak typing is not only less clearly defined, there is no real consensus on exactly what is meant by strong and weak. For this reason strong/weak typing is far more of a shade of grey rather than black or white.
So to answer your question: another way to look at this that's mostly correct is to say that static typing is compile-time type safety and strong typing is runtime type safety.
The reason for this is that variables in a statically typed language have a type that must be declared and can be checked at compile time. A strongly-typed language has values that have a type at run time, and it's difficult for the programmer to subvert the type system without a dynamic check.
But it's important to understand that a language can be Static/Strong, Static/Weak, Dynamic/Strong or Dynamic/Weak.
Both are poles on two different axes:
strongly typed vs. weakly typed
statically typed vs. dynamically typed
Strongly typed means that a variable will not be automatically converted from one type to another. Weakly typed is the opposite: Perl can use a string like "123" in a numeric context by automatically converting it into the int 123. A strongly typed language like Python will not do this.
Statically typed means that the compiler figures out the type of each variable at compile time. Dynamically typed languages figure out the types of variables only at runtime.
Strongly typed means that there are restrictions on conversions between types.
Statically typed means that the types are not dynamic: you cannot change the type of a variable once it has been created.
The answer is already given above; this just tries to differentiate between the strong vs. weak and static vs. dynamic concepts.
What is Strongly typed VS Weakly typed?
Strongly Typed: Will not be automatically converted from one type to another
In strongly typed languages like Go or Python, "2" + 8 will raise a type error, because they don't allow "type coercion".
Weakly (loosely) Typed: Will be automatically converted from one type to another:
Weakly typed languages like JavaScript or Perl won't throw an error; in this case JavaScript will produce '28' and Perl will produce 10.
Perl Example:
my $a = "2" + 8;
print $a,"\n";
Save it as main.pl, run perl main.pl, and you will get the output 10.
What is Static VS Dynamic type?
In programming, static typing and dynamic typing are distinguished by the point at which variable types are checked. Statically typed languages are those in which type checking is done at compile time, whereas dynamically typed languages are those in which type checking is done at run time.
Static: Types checked before run-time
Dynamic: Types checked on the fly, during execution
What does this mean?
Go checks types before run time (a static check). This means it doesn't just translate and type-check the code it's executing; it scans through all the code, and a type error is thrown before the code is even run. For example,
package main

import "fmt"

func foo(a int) {
    if a > 0 {
        fmt.Println("I am feeling lucky (maybe).")
    } else {
        fmt.Println("2" + 8)
    }
}

func main() {
    foo(2)
}
Save this file as main.go and run it; you will get a compilation failure:
go run main.go
# command-line-arguments
./main.go:9:25: cannot convert "2" (type untyped string) to type int
./main.go:9:25: invalid operation: "2" + 8 (mismatched types string and int)
But this is not the case for Python. For example, the following block of code will execute for the first foo(2) call and fail for the second foo(0) call, because Python is dynamically typed: it only translates and type-checks code it's actually executing. The else block never executes for foo(2), so "2" + 8 is never even looked at, while the foo(0) call tries to execute that block and fails.
def foo(a):
    if a > 0:
        print 'I am feeling lucky.'
    else:
        print "2" + 8
foo(2)
foo(0)
You will see the following output:
python main.py
I am feeling lucky.
Traceback (most recent call last):
File "pyth.py", line 7, in <module>
foo(0)
File "pyth.py", line 5, in foo
print "2" + 8
TypeError: cannot concatenate 'str' and 'int' objects
Data coercion does not necessarily mean weakly typed, because sometimes it's syntactic sugar:
The example above of Java being weakly typed because of
String s = "abc" + 123;
is not an example of weak typing, because it's really doing:
String s = "abc" + new Integer(123).toString();
Data coercion is also not weak typing if you are constructing a new object.
Java is a very bad example of weak typing (and any language that has good reflection will most likely not be weakly typed), because the runtime of the language always knows what the type is (the exception might be primitive types).
This is unlike C. C is one of the best examples of weak typing: the runtime has no idea whether 4 bytes is an integer, a struct, a pointer, or 4 characters.
The runtime of the language really defines whether or not it's weakly typed; otherwise it's really just opinion.
EDIT:
After further thought, this is not necessarily true, as the runtime does not have to have all the types reified in the runtime system to be a strongly typed system.
Haskell and ML have such complete static analysis that they can potentially omit type information from the runtime.
One does not imply the other. For a language to be statically typed it means that the types of all variables are known or inferred at compile time.
A strongly typed language does not allow you to use one type as another. C is a weakly typed language and is a good example of what strongly typed languages don't allow. In C you can pass a data element of the wrong type and it will not complain. In strongly typed languages you cannot.
Strong typing probably means that variables have a well-defined type and that there are strict rules about combining variables of different types in expressions. For example, if A is an integer and B is a float, then the strict rule about A+B might be that A is cast to a float and the result returned as a float. If A is an integer and B is a string, then the strict rule might be that A+B is not valid.
Static typing probably means that types are assigned at compile time (or its equivalent for non-compiled languages) and cannot change during program execution.
Note that these classifications are not mutually exclusive, indeed I would expect them to occur together frequently. Many strongly-typed languages are also statically-typed.
And note that when I use the word 'probably' it is because there are no universally accepted definitions of these terms. As you will already have seen from the answers so far.
IMHO, it is better to avoid these definitions altogether. Not only is there no agreed-upon definition of the terms, but the definitions that do exist tend to focus on technical aspects, for example: are operations on mixed types allowed, and if not, is there a loophole that bypasses the restrictions, such as working around them with pointers?
Instead, and emphasizing again that this is an opinion, one should focus on the question: does the type system make my application more reliable? This is an application-specific question.
For example: if my application has a variable named acceleration, then clearly if the way the variable is declared and used allows the assignment of the value "Monday" to acceleration, it is a problem, as an acceleration clearly cannot be a weekday (or a string).
Another example: in Ada one can define subtype Month_Day is Integer range 1..31;. The type Month_Day is weak in the sense that it is not a separate type from Integer (because it is a subtype), but it is restricted to the range 1..31. In contrast, type Month_Day is new Integer; will create a distinct type, which is strong in the sense that it cannot be mixed with integers without explicit casting; but it is not restricted and can receive the senseless value -17. So technically it is stronger, but less reliable.
Of course, one can declare type Month_Day is new Integer range 1..31; to create a type which is distinct and restricted.
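For comparison, a rough Scala analogue of that last declaration (only an approximation: the range is enforced at construction time rather than by the type system):

// A distinct, range-restricted day-of-month type, sketched in Scala.
final case class MonthDay(value: Int) {
  require(1 <= value && value <= 31, s"$value is not a valid day of month")
}

// MonthDay(15) is fine; MonthDay(-17) throws IllegalArgumentException,
// and a MonthDay cannot be passed where a plain Int is expected.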