What is monoid homomorphism exactly? - scala

I've read about monoid homomorphism in Monoid Morphisms, Products, and Coproducts and could not understand it 100%.
The author says (emphasis original):
The length function maps from String to Int while preserving the
monoid structure. Such a function, that maps from one monoid to
another in such a preserving way, is called a monoid homomorphism. In
general, for monoids M and N, a homomorphism f: M => N, and all values
x:M, y:M, the following equations hold:
f(x |+| y) == (f(x) |+| f(y))
f(mzero[M]) == mzero[N]
Does he mean that, since the datatypes String and Int are monoids, and the function length maps String => Int while preserving the monoid structure, it is called a monoid homomorphism?

Does he mean that the datatypes String and Int are monoids?
No, neither String nor Int are monoids. A monoid is a 3-tuple (S, ⊕, e) where ⊕ is a binary operator ⊕ : S×S → S such that for all elements a, b, c ∈ S it holds that (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c), and e ∈ S is an "identity element" such that a ⊕ e = e ⊕ a = a for all a ∈ S. String and Int are types, so basically sets of values, but not 3-tuples.
The article says:
Let's take the String concatenation and Int addition as
example monoids that have a relationship.
So the author clearly also mentions the binary operators ((++) in case of String, and (+) in case of Int). The identities (empty string in case of String and 0 in case of Int) are left implicit; leaving the identities as an exercise for the reader is common in informal English discourse.
Now given that we have two monoid structures (M, ⊕, em) and (N, ⊗, en), a function f : M → N (like length) is then called a monoid homomorphism [wiki] if it holds that f(m1 ⊕ m2) = f(m1) ⊗ f(m2) for all elements m1, m2 ∈ M and that the mapping also preserves the identity element: f(em) = en.
For example length :: String -> Int is a monoid homomorphism, since we can consider the monoids (String, (++), "") and (Int, (+), 0). It holds that:
length (s1 ++ s2) == length s1 + length s2 (for all Strings s1 and s2); and
length "" == 0.

A datatype cannot be a monoid on its own. For a monoid, you need a data type T and two more things:
an associative binary operation, let's call it |+|, that takes two elements of type T and produces an element of type T
an identity element of type T, let's call it i, such that for every element t of type T the following holds: t |+| i = i |+| t = t
Here are some examples of a monoid:
set of integers with operation = addition and identity = zero
set of integers with operation = multiplication and identity = one
set of lists with operation = appending and identity = empty list
set of strings with operation = concatenation and identity = empty string
Monoid homomorphism
The string concatenation monoid can be transformed into the integer addition monoid by applying .length to all its elements. Both those sets form a monoid. By the way, remember that we can't just say "set of integers forms a monoid"; we have to pick an associative operation and a corresponding identity element. If we take e.g. division as the operation, we break the first rule (instead of producing an element of type integer, we might produce a fractional value), and division is not associative anyway.
The method length allows us to go from one monoid (string concatenation) to another monoid (integer addition). If such an operation also preserves the monoid structure, it is considered to be a monoid homomorphism.
Preserving the structure means:
length(t1 |+| t2) = length(t1) |+| length(t2)
and
length(i) = i'
where t1 and t2 represent elements of the "source" monoid, i is the identity of the "source" monoid, and i' is the identity of the "destination" monoid. You can try it out yourself and see that length indeed is a structure-preserving operation on a string concatenation monoid, while e.g. indexOf("a") isn't.
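As a quick hedged check in Scala (the sample strings are chosen here; they are not from the answer), length satisfies the law while indexOf("a") breaks it:
val (t1, t2) = ("xyz", "abc")
// length preserves the structure:
assert((t1 + t2).length == t1.length + t2.length)
// indexOf("a") does not: the left-hand side is 3, the right-hand side is -1 + 0 = -1
assert((t1 + t2).indexOf("a") != t1.indexOf("a") + t2.indexOf("a"))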
Monoid isomorphism
As demonstrated, length maps all strings to their corresponding integers, which form a monoid with addition as operation and zero as identity. But we can't go back - for every string, we can figure out its length, but given a length we can't reconstruct the "original" string. If we could, then the operation of "going forward" combined with the operation of "going back" would form a monoid isomorphism.
Isomorphism means being able to go back and forth without any loss of information. For example, as stated earlier, list forms a monoid under appending as operation and empty list as identity element. We could go from "list under appending" monoid to "vector under appending" monoid and back without any loss of information, which means that operations .toVector and .toList together form an isomorphism. Another example of an isomorphism, which Runar mentioned in his text, is String ⟷ List[Char].
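A small Scala round trip (sample value chosen here, just a sketch) showing the loss-free back and forth between String and List[Char]:
val s = "hello"
val cs = s.toList              // List('h', 'e', 'l', 'l', 'o')
assert(cs.mkString == s)       // going back loses nothing
assert(cs.mkString.toList == cs)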

Colloquially a homomorphism is a function that preserves structure. In the example of the length function the preserved structure is the sum of the lengths of two strings being equal to the length of the concatenation of the same strings. Since both strings and integers can be regarded as monoids (when equipped with an identity and an associative binary operation obeying the monoid laws) length is called a monoid homomorphism.
See also the other answers for a more technical explanation.

trait Monoid[T] {
  def op(a: T, b: T): T
  def zero: T
}

val strMonoid = new Monoid[String] {
  def op(a: String, b: String): String = a ++ b
  def zero: String = ""
}

val lcMonoid = new Monoid[List[Char]] {
  def op(a: List[Char], b: List[Char]): List[Char] = a ::: b
  def zero = List.empty[Char]
}
homomorphism via function f
f{M.op(x,y)} = N.op(f(x),f(y))
for example, using toList available on String
//in REPL
scala> strMonoid.op("abc","def").toList == lcMonoid.op("abc".toList,"def".toList)
res4: Boolean = true
isomorphism via functions f and g
given a bi-directional homomorphism between monoids M and N,
f{M.op(x,y)} = N.op(f(x),f(y))
g{N.op(x,y)} = M.op(g(x),g(y))
And if both (f andThen g) and (g andThen f) are identity functions, then monoids M and N are isomorphic via f and g
g{f{M.op(x,y)}} = g{N.op(f(x),f(y))} = M.op(g(f(x)),g(f(y))) = M.op(x,y)
for example, using toList available on String and mkString available on List[Char] (where toList andThen mkString and mkString andThen toList are identity functions)
scala> ( lcMonoid.op("abc".toList,"def".toList) ).mkString == strMonoid.op("abc","def")
res7: Boolean = true

Related

Monoid homomorphism and isomorphism

I am reading the book 'Functional Programming in Scala' (the red book).
In the chapter about Monoids, I understand what a Monoid homomorphism is; for example: the String Monoid M with concatenation and the length function f preserve the monoid structure, and hence f is a homomorphism into the Int addition monoid N.
N.op(f(x), f(y)) == f(M.op(x, y))
// "Lorem".length + "ipsum".length == ("Lorem" + "ipsum").length
Quoting the book (from memory, so correct me if I am wrong):
When this happens in both directions, it is named Monoid isomorphism, which means that for monoids M, N and functions f, g, both f andThen g and g andThen f are the identity function. For example, the String Monoid and the List[Char] Monoid with concatenation are isomorphic.
But I can't see an actual example of this; I can only think of f as the length function, but what would g be?
Note: I have seen this question: What are isomorphism and homomorphisms.
To see the isomorphism between String and List[Char] we have toList: String -> List[Char] and mkString: List[Char] -> String.
length is a homomorphism from the String monoid to the monoid of natural numbers with addition.
A couple of examples of endo-homomorphism of the String monoid are toUpperCase and toLowerCase.
For lists, we have a lot of homomorphisms, many of which are just versions of fold.
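As a hedged illustration of the "versions of fold" remark, here is a small Scala sketch (the function name size is mine): the size of a list, written as a fold, is a monoid homomorphism from (List[A], ++, Nil) to (Int, +, 0):
def size[A](xs: List[A]): Int = xs.foldLeft(0)((n, _) => n + 1)

// Spot-checking the two homomorphism laws on sample lists:
val (xs, ys) = (List(1, 2), List(3, 4, 5))
assert(size(xs ++ ys) == size(xs) + size(ys))
assert(size(List.empty[Int]) == 0)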
Here is siyopao's answer expressed as a ScalaCheck program:
import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

object IsomorphismSpecification extends Properties("f and g") {
  val f: String => List[Char] = _.toList
  val g: List[Char] => String = _.mkString

  property("isomorphism") = forAll { (a: String, b: List[Char]) =>
    (f andThen g)(a) == a && (g andThen f)(b) == b
  }
}
which outputs
+ f and g.isomorphism: OK, passed 100 tests.

What kind of morphism is `filter` in category theory?

In category theory, is the filter operation considered a morphism? If yes, what kind of morphism is it? Example (in Scala)
val myNums: Seq[Int] = Seq(-1, 3, -4, 2)
myNums.filter(_ > 0)
// Seq[Int] = List(3, 2) // result = subset, same type
myNums.filter(_ > -99)
// Seq[Int] = List(-1, 3, -4, 2) // result = identical to original
myNums.filter(_ > 99)
// Seq[Int] = List() // result = empty, same type
One interesting way of looking at this matter involves not picking filter as a primitive notion. There is a Haskell type class called Filterable which is aptly described as:
Like Functor, but it [includes] Maybe effects.
Formally, the class Filterable represents a functor from Kleisli Maybe to Hask.
The morphism mapping of the "functor from Kleisli Maybe to Hask" is captured by the mapMaybe method of the class, which is indeed a generalisation of the homonymous Data.Maybe function:
mapMaybe :: Filterable f => (a -> Maybe b) -> f a -> f b
The class laws are simply the appropriate functor laws (note that Just and (<=<) are, respectively, identity and composition in Kleisli Maybe):
mapMaybe Just = id
mapMaybe (g <=< f) = mapMaybe g . mapMaybe f
The class can also be expressed in terms of catMaybes...
catMaybes :: Filterable f => f (Maybe a) -> f a
... which is interdefinable with mapMaybe (cf. the analogous relationship between sequenceA and traverse)...
catMaybes = mapMaybe id
mapMaybe g = catMaybes . fmap g
... and amounts to a natural transformation between the Hask endofunctors Compose f Maybe and f.
What does all of that have to do with your question? Firstly, a functor is a morphism between categories, and a natural transformation is a morphism between functors. That being so, it is possible to talk of morphisms here in a sense that is less boring than the "morphisms in Hask" one. You won't necessarily want to do so, but in any case it is an existing vantage point.
Secondly, filter is, unsurprisingly, also a method of Filterable, its default definition being:
filter :: Filterable f => (a -> Bool) -> f a -> f a
filter p = mapMaybe $ \a -> if p a then Just a else Nothing
Or, to spell it using another cute combinator:
filter p = mapMaybe (ensure p)
That indirectly gives filter a place in this particular constellation of categorical notions.
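For readers following the Scala side of this thread, here is a rough analogue (a sketch only; it assumes Option plays the role of Maybe, and the names mapOption and filterViaMapOption are made up for illustration):
// mapMaybe analogue for List: apply f and keep only the Some results
def mapOption[A, B](f: A => Option[B])(xs: List[A]): List[B] =
  xs.flatMap(a => f(a).toList)

// filter recovered from it, mirroring the Haskell default definition above
def filterViaMapOption[A](p: A => Boolean)(xs: List[A]): List[A] =
  mapOption[A, A](a => if (p(a)) Some(a) else None)(xs)

// e.g. filterViaMapOption[Int](_ > 0)(List(-1, 3, -4, 2)) == List(3, 2)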
To answer a question like this, I'd like to first understand what the essence of filtering is.
For instance, does it matter that the input is a list? Could you filter a tree? I don't see why not! You'd apply a predicate to each node of the tree and discard the ones that fail the test.
But what would be the shape of the result? Node deletion is not always defined or it's ambiguous. You could return a list. But why a list? Any data structure that supports appending would work. You also need an empty member of your data structure to start the appending process. So any unital magma would do. If you insist on associativity, you get a monoid. Looking back at the definition of filter, the result is a list, which is indeed a monoid. So we are on the right track.
So filter is just a special case of what's called Foldable: a data structure over which you can fold while accumulating the results in a monoid. In particular, you could use the predicate to either output a singleton list, if it's true; or an empty list (identity element), if it's false.
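A hedged Scala sketch of that idea (names chosen here), folding over a list and accumulating in the list monoid: a singleton for an element that passes the predicate, the identity Nil otherwise:
def filterViaFold[A](p: A => Boolean)(xs: List[A]): List[A] =
  xs.foldRight(List.empty[A]) { (a, acc) =>
    (if (p(a)) List(a) else Nil) ++ acc
  }

// e.g. filterViaFold[Int](_ > 0)(List(-1, 3, -4, 2)) == List(3, 2)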
If you want a categorical answer, then a fold is an example of a catamorphism, an example of a morphism in the category of algebras. The (recursive) data structure you're folding over (a list, in the case of filter) is an initial algebra for some functor (the list functor, in this case), and your predicate is used to define an algebra for this functor.
In this answer, I will assume that you are talking about filter on Set (the situation seems messier for other datatypes).
Let's first fix what we are talking about. I will talk specifically about the following function (in Scala):
def filter[A](p: A => Boolean): Set[A] => Set[A] =
  s => s filter p
When we write it down this way, we see clearly that it's a polymorphic function with type parameter A that maps predicates A => Boolean to functions that map Set[A] to other Set[A]. To make it a "morphism", we would have to find some categories first, in which this thing could be a "morphism". One might hope that it's a natural transformation, and therefore a morphism in the category of endofunctors on the "default ambient category-esque structure" usually referred to as "Hask" (or "Scal"? "Scala"?). To show that it's natural, we would have to check that the following diagram commutes for every f: B => A:
                        - o f
Hom[A, Boolean] ---------------------> Hom[B, Boolean]
      |                                      |
      |                                      |
      | filter[A]                            | filter[B]
      |                                      |
      V                 ???                  V
Hom[Set[A], Set[A]] ---------------> Hom[Set[B], Set[B]]
however, here we fail immediately, because it's not clear what to even put on the horizontal arrow at the bottom, since the assignment A -> Hom[Set[A], Set[A]] doesn't even seem functorial (for the same reasons why A -> End[A] is not functorial, see here and also here).
The only "categorical" structure that I see here for a fixed type A is the following:
Predicates on A can be considered to be a partially ordered set with implication, that is p LEQ q if p implies q (i.e. either p(x) must be false, or q(x) must be true for all x: A).
Analogously, on functions Set[A] => Set[A], we can define a partial order with f LEQ g whenever for each set s: Set[A] it holds that f(s) is subset of g(s).
Then filter[A] would be monotonic, and therefore a functor between poset-categories. But that's somewhat boring.
Of course, for each fixed A, it (or rather its eta-expansion) is also just a function from A => Boolean to Set[A] => Set[A], so it's automatically a "morphism" in the "Hask-category". But that's even more boring.
filter can be written in terms of foldRight as:
def filter[A](p: A => Boolean)(ys: List[A]): List[A] =
  ys.foldRight(List.empty[A])((x, xs) => if (p(x)) x :: xs else xs)
foldRight on lists is a map of T-algebras (where here T is the List datatype functor), so filter is a map of T-algebras.
The two algebras in question here are the initial list algebra
[nil, cons]: 1 + A x List(A) ----> List(A)
and, let's say the "filter" algebra,
[nil, f]: 1 + A x List(A) ----> List(A)
where f(x, xs) = if (p(x)) x :: xs else xs.
Let's call filter(p, _) the unique map from the initial algebra to the filter algebra in this case (it is called fold in the general case). The fact that it is a map of algebras means that the following equations are satisfied:
filter(p, nil) = nil
filter(p, x::xs) = f(x, filter(p, xs))
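A quick hedged check of those two equations in Scala, with a sample predicate (p and f here are just the names from the equations above):
val p: Int => Boolean = _ % 2 == 0
def f(x: Int, xs: List[Int]): List[Int] = if (p(x)) x :: xs else xs

assert(List.empty[Int].filter(p) == Nil)                                 // filter(p, nil) = nil
assert((3 :: List(1, 2, 4)).filter(p) == f(3, List(1, 2, 4).filter(p)))  // filter(p, x::xs) = f(x, filter(p, xs))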

What are isomorphism and homomorphisms

I tried to understand isomorphisms and homomorphisms in the context of programming and need some help.
In the book FPiS (Functional Programming in Scala) it explains:
Let's start with a homomorphism:
"foo".length + "bar".length == ("foo" + "bar").length
Here, length is a function from String to Int that preserves the monoid structure.
Why is that a homomorphism?
Why does it preserve the monoid structure?
Is, for example, the map function on lists a homomorphism?
About isomorphism, I have the following explanation, which I took from a book:
A monoid isomorphism between M and N has two homomorphisms
f and g, where both f andThen g and g andThen f are an identity function.
For example, the String and List[Char] monoids with concatenation are isomorphic.
The two Boolean monoids (false, ||) and (true, &&) are also isomorphic,
via the ! (negation) function.
Why are (false, ||) and (true, &&), and the String and List[Char] monoids with concatenation, isomorphic?
Why is that a homomorphism?
By definition.
Why does it preserve the monoid structure?
Because of the == in the expression above.
Is, for example, the map function on lists a homomorphism?
Yes. Replace "foo" and "bar" by two lists, and .length by .map(f). It's then easy to see (and prove) that the equation holds.
Why are (false, ||) and (true, &&), and the String and List[Char] monoids with concatenation, isomorphic?
By definition. The proof is trivial, left as an exercise. (Hint: take the definition of an isomorphism, replace all abstract objects with concrete objects, prove that the resulting mathematical expression is correct)
Edit: Here are the definitions you asked for in the comments:
Homomorphism: a transformation of one set into another that preserves in the second set the relations between elements of the first. Formally f: A → B where both A and B have a * operation such that f(x * y) = f(x) * f(y).
Monoid: algebraic structure with a single associative binary operation and an identity element. Formally (M, *, id) is a Monoid iff (a * b) * c == a * (b * c) && a * id == a && id * a == a for all a, b, c in M.
"foo".length + "bar".length == ("foo" + "bar").length
To be precise, this is saying that length is a monoid homomorphism between the monoid of strings with concatenation, and the monoid of natural numbers with addition. That these two structures are monoids is easy to see if you put in the effort.
The reason length is a monoid homomorphism is because it has the properties that "".length = 0 and x.length ⊕ y.length = (x ⊗ y).length. Here, I've deliberately used two different symbols for the two monoid operations, to stress the fact that we are either applying the addition operation on the results of applying length vs the string concatenation operation on the two arguments before applying length. It is just unfortunate notation that the example you're looking at uses the same symbol + for both operations.
Edited to add: The question poster asked for some further details on what exactly is a monoid homomorphism.
OK, so suppose we have two monoids (A, ⊕, a) and (B, ⊗, b), meaning A and B are our two carriers, ⊕ : A × A → A and ⊗ : B × B → B are our two binary operators, and a ∈ A and b ∈ B are our two neutral elements. A monoid homomorphism between these two monoids is a function f : A → B with the following properties:
f(a) = b, i.e. if you apply f on the neutral element of A, you get the neutral element of B
f(x ⊕ y) = f(x) ⊗ f(y), i.e. if you apply f on the result of the operator of A on two elements, it is the same as applying it twice, on the two A elements, and then combining the results using the operator of B.
The point is that the monoid homomorphism is a structure-preserving mapping (which is what a homomorphism is; break down the word to its ancient Greek roots and you will see that it means 'same-shaped-ness').
OK, you asked for examples, here are some examples!
The one from the above example: length is a monoid homomorphism from the free monoid (A*, ·, ε) to (ℕ, +, 0)
Negation is a monoid homomorphism from (Bool, ∨, false) to (Bool, ∧, true) and vice versa (see the check after this list).
exp is a monoid homomorphism from (ℝ, +, 0) to (ℝ\{0}, *, 1)
In fact, every group homomorphism is also, of course, a monoid homomorphism.
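As promised in the negation example above, here is a tiny Scala check of both laws for ! between (Bool, ∨, false) and (Bool, ∧, true); it is exhaustive, since Bool has only two values:
val bools = List(true, false)
// f(x ⊕ y) = f(x) ⊗ f(y), i.e. De Morgan's law
assert(bools.forall(x => bools.forall(y => !(x || y) == (!x && !y))))
// the identity of the source monoid maps to the identity of the target monoid
assert(!false == true)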

In insufficiently-polymorphic, why are there fewer ways to implement `List a -> List a -> List a` than `List Char -> List Char -> List Char`?

In insufficiently-polymorphic, the author says the following about
def foo[A](fst: List[A], snd: List[A]): List[A]
There are fewer ways we can implement the function. In particular, we
can’t just hard-code some elements in a list, because we have no
ability to manufacture values of an arbitrary type.
I did not understand this, because in the [Char] version we also have no ability to manufacture values of an arbitrary type; the values have to be of type [Char]. So why are there fewer ways to implement the generic version?
In the generic version you know that the output list can only contain some arrangement of the elements contained in fst and snd since there is no way to construct new values of some arbitrary type A. In contrast, if you know the output type is Char you can e.g.
def foo(fst: List[Char], snd: List[Char]) = List('a', 'b', 'c')
In addition you cannot use the values contained in the input lists to make decisions which affect the output, since you don't know what they are. You can do this if you know the input type e.g.
def foo(fst: List[Char], snd: List[Char]) = fst match {
  case Nil => snd
  case 'a' :: fs => snd
  case _ => fst
}
I'm assuming the author means that there's no way to construct a non-empty List a, but there is a way to construct a List Char, e.g. by using a String literal. You could ignore the arguments and just return a hard-coded String.
An example of this would be:
foo :: List Char -> List Char -> List Char
foo a b = "Whatever"
You can't construct a value of an arbitrary type a, but you can construct a value of type Char.
This is a simple case of a property called "parametricity" or "free theorem", which applies to every polymorphic function.
An even simpler example is the following:
fun1 :: Int -> Int
fun2 :: forall a. a -> a
fun1 can be anything: successor, predecessor, square, factorial, etc. This is because it can "read" its input, and act accordingly.
fun2 must be the identity function (or loop forever). This is because fun2 receives its input, but it can not examine it in any useful way: since it is of an abstract, unknown type a, no operations can be performed on it. The input is effectively an opaque token. The output of fun2 must be of type a, for which we do not know any construction means -- we can not create a value of type a from nothing. The only option is to take the input a and use it to craft the output a. Hence, fun2 is the identity.
The above parametricity result holds when you have no way to perform tests on the input or the type a. If we, e.g., allowed if x.instanceOf[Int] ..., or if x==null ..., or type casts (in OOP) then we could write fun2 in other ways.
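A hedged Scala rendering of the contrast (ignoring non-termination and the runtime tests mentioned in the caveat above):
def fun1(n: Int): Int = n * n + 1   // free to inspect the Int and compute anything from it
def fun2[A](a: A): A = a            // parametricity forces this to be the identity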

Gentle Intro to Haskell: " .... there is no single type that contains both 2 and 'b'." Can I not make such a type ?

I am currently learning Haskell, so here are some beginner's questions:
What is meant by single type in the text below?
Is single type a special Haskell term? Does it mean atomic type here?
Or does it mean that I can never make a list in Haskell in which I can put both 1 and 'c'?
I was thinking that a type is a set of values.
So I cannot define a type that contains Chars and Ints?
What about algebraic data types?
Something like: data IntOrChar = In Int | Ch Char? (I guess that should work, but I am confused about what the author meant by that sentence.)
Btw, is that the only way to make a list in Haskell in which I can put both Ints and Chars? Or is there a more tricky way?
A Scala analogy: in Scala it would be possible to write implicit conversions to a type that represents both Ints and Chars (like IntOrChar), and then it would be possible to seamlessly put Ints and Chars into List[IntOrChar]. Is that not possible with Haskell? Do I always have to explicitly wrap every Int or Char into IntOrChar if I want to put them into a list of IntOrChar?
From Gentle Intro to Haskell:
Haskell also incorporates polymorphic types---types that are
universally quantified in some way over all types. Polymorphic type
expressions essentially describe families of types. For example,
(forall a)[a] is the family of types consisting of, for every type a,
the type of lists of a. Lists of integers (e.g. [1,2,3]), lists of
characters (['a','b','c']), even lists of lists of integers, etc., are
all members of this family. (Note, however, that [2,'b'] is not a
valid example, since there is no single type that contains both 2 and
'b'.)
Short answer.
In Haskell there are no implicit conversions. Also there are no union types - only disjoint unions (which are algebraic data types). So you can only write:
someList :: [IntOrChar]
someList = [In 1, Ch 'c']
Longer and certainly not gentle answer.
Note: This is a technique that's very rarely used. If you need it you're probably overcomplicating your API.
There are however existential types.
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

class IntOrChar a where
  intOrChar :: a -> Either Int Char

instance IntOrChar Int where
  intOrChar = Left

instance IntOrChar Char where
  intOrChar = Right

data List = Nil
          | forall a. (IntOrChar a) => Cons a List

someList :: List
someList = (1 :: Int) `Cons` ('c' `Cons` Nil)
Here I have created a typeclass IntOrChar with the single function intOrChar. This way you can convert anything of type forall a. (IntOrChar a) => a to Either Int Char.
And also a special kind of list that uses an existential type in its second constructor.
Here the type variable a is bound (with forall) at the constructor scope. Therefore every time you use Cons you can pass anything of type forall a. (IntOrChar a) => a as the first argument. Consequently, during destruction (i.e. pattern matching) the first argument will still be forall a. (IntOrChar a) => a. The only thing you can do with it is either pass it on or call intOrChar on it and convert it to Either Int Char.
withHead :: (forall a. (IntOrChar a) => a -> b) -> List -> Maybe b
withHead f Nil = Nothing
withHead f (Cons x _) = Just (f x)

intOrCharToString :: (IntOrChar a) => a -> String
intOrCharToString x =
  case intOrChar x of
    Left i  -> show i
    Right c -> show c

someListHeadString :: Maybe String
someListHeadString = withHead intOrCharToString someList
Again note that you cannot write
{- Won't compile
safeHead :: IntOrChar a => List -> Maybe a
safeHead Nil = Nothing
safeHead (Cons x _) = Just x
-}
-- This will
safeHead2 :: List -> Maybe (Either Int Char)
safeHead2 Nil = Nothing
safeHead2 (Cons x _) = Just (intOrChar x)
safeHead will not work because you want a type IntOrChar a => Maybe a with a bound at the safeHead scope, while Just x will have type IntOrChar a1 => Maybe a1 with a1 bound at the Cons scope.
In Scala there are types that include both Int and Char such as AnyVal and Any, which are both supertypes of Char and Int. In Haskell there is no such hierarchy, and all the basic types are disjoint.
You can create your own union types which describe the concept of 'either an Int or a Char' (or you could use the built-in Either type), but there are no implicit conversions in Haskell to transparently convert an Int into an IntOrChar.
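For contrast, here is a hedged Scala sketch of the implicit-conversion approach the question mentions (the names IntOrChar, In and Ch come from the question; the conversions are an illustration, not a recommendation):
sealed trait IntOrChar
case class In(i: Int) extends IntOrChar
case class Ch(c: Char) extends IntOrChar

object IntOrChar {
  import scala.language.implicitConversions
  implicit def fromInt(i: Int): IntOrChar = In(i)
  implicit def fromChar(c: Char): IntOrChar = Ch(c)
}

val mixed: List[IntOrChar] = List(1, 'c')   // the Int and the Char are wrapped implicitly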
You could emulate the concept of 'Any' using existential types:
{-# LANGUAGE ExistentialQuantification #-}
import Data.Hashable (Hashable, hash)

data AnyBox = forall a. (Show a, Hashable a) => AB a

heteroList :: [AnyBox]
heteroList = [AB (1 :: Int), AB 'b']

showWithHash :: AnyBox -> String
showWithHash (AB v) = show v ++ " - " ++ (show . hash) v

strs :: [String]
strs = map showWithHash heteroList
Be aware that this pattern is discouraged however.
I think that the distinction that is being made here is that your algebraic data type IntOrChar is a "tagged union" - that is, when you have a value of type IntOrChar you will know if it is an Int or a Char.
By comparison consider this anonymous union definition (in C):
typedef union { char c; int i; } intorchar;
If you are given a value of type intorchar you don't know (a priori) which selector is valid. That's why most of the time the union constructor is used in conjunction with a struct to form a tagged-union construction:
typedef struct {
    int tag;
    union { char c; int i; } intorchar_u;
} IntOrChar;
Here the tag field encodes which selector of the union is valid.
The other major use of the union constructor is to overlay two structures to get an efficient mapping between sub-structures. For example, this union is one way to efficiently access the individual bytes of a int (assuming 8-bit chars and 32-bit ints):
union { char b[4]; int i; }
Now, to illustrate the main difference between "tagged unions" and "anonymous unions" consider how you go about defining a function on these types.
To define a function on an IntOrChar value (the tagged union) I claim you need to supply two functions - one which takes an Int (in the case that the value is an Int) and one which takes a Char (in case the value is a Char). Since the value is tagged with its type, it knows which of the two functions it should use.
If we let F(a,b) denote the set of functions from type a to type b, we have:
F(IntOrChar, b) = F(Int, b) × F(Char, b)
where × denotes the cross product.
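A hedged Scala sketch of the "pair of functions" point (the tagged union is redefined here so the snippet stands alone): handling the tagged union means supplying exactly one branch per component:
sealed trait IntOrChar
case class In(i: Int) extends IntOrChar
case class Ch(c: Char) extends IntOrChar

def describe(v: IntOrChar): String = v match {
  case In(i) => "int: " + i    // the F(Int, b) component
  case Ch(c) => "char: " + c   // the F(Char, b) component
}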
As for the anonymous union intorchar, since a value doesn't encode anything about its type, the only functions which can be applied are those which are valid for both Int and Char values, i.e.:
F(intorchar, b) = F(Int, b) ∩ F(Char, b)
where ∩ denotes intersection.
In Haskell there is only one function (to my knowledge) which can be applied to both integers and chars, namely the identity function. So there's not much you could do with a list like [2, 'b'] in Haskell. In other languages this intersection may not be empty, and then constructions like this make more sense.
To summarize, you can have integers and characters in the same list if you create a tagged union, and in that case you have to tag each of the values, which will make your list look like:
[ I 2, C 'b', ... ]
If you don't tag your values then you are creating something akin to an anonymous union, but since there aren't any (useful) functions which can be applied to both integers and chars there's not really anything you can do with that kind of union.