Arithmetic division in relational algebra - select

I have an SQL query:

    SELECT table1.nr1 / NULLIF(table2.nr2, 0) AS percentage

and I want to write this in relational algebra.
Is it possible to represent arithmetic division in relational algebra?

According to this course from the University of Rochester, relational algebra can be defined as
a formal system for manipulating relations
Operands of this algebra are relations.
Operations of this algebra include the usual set operations (since relations are sets of tuples), plus special operations defined for relations:
selection
projection
join
It is an algebra on relations, and there is no representation of numbers. If you want to do arithmetic on numbers, you have to use an extended formalism such as the one described in "Safe Database Queries with Arithmetic Relations".
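Many textbooks do, however, define an extended relational algebra with a generalized projection operator that allows arithmetic expressions in the projection list. Under that extension (and assuming a join between table1 and table2, since the original query omits its FROM clause), the query could be sketched as

    \pi_{nr1 / nr2 \rightarrow percentage}\left(\sigma_{nr2 \neq 0}(table1 \bowtie table2)\right)

where the selection \sigma_{nr2 \neq 0} plays roughly the role of NULLIF(nr2, 0): classical relational algebra has no NULL, so filtering out zero divisors is only an approximation.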

Related

Sparse boolean matrix multiplication

Does anybody know an efficient implementation of sparse boolean matrix multiplication? I'm interested in both CPU and GPGPU implementations because I need to multiply matrices of different sizes (from 8×8 up to 10^8×10^8). Currently I use the cuSPARSE library, but it supports only numerical matrices (float, double, etc.), and this leads to huge overhead in both memory and time, which is critical in my task.
Since a boolean matrix can be viewed as the adjacency matrix of some (bipartite) graph, its product with another matrix can be interpreted as the distance-2 connections between the nodes of two subgraphs linked by a common set of nodes.
To avoid wasting space and to exploit some amount of bit parallelism, you could try using some form of succinct data structure for graph storage and manipulation.
One such family of data structures that could be useful in your case is the K2-tree (or Kn-tree in general), which stores the adjacencies using an approach similar to spatial decompositions such as quadtrees and octrees.
Ultimately, the best algorithm and data structure will heavily depend on the dimension and sparsity patterns of your matrices.
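To make the bit-parallelism idea concrete, here is a minimal sketch in plain Python (illustration only, not a cuSPARSE replacement): each row is stored as an integer bitset, and row i of the boolean product C = A*B is the OR of the rows of B selected by the set bits in row i of A. A production version would use packed word arrays in C or CUDA instead of Python ints.

    # a_rows, b_rows: boolean matrices; bit j of row i is entry (i, j)
    def bool_matmul(a_rows, b_rows):
        c_rows = []
        for row in a_rows:
            acc = 0
            r = row
            while r:
                j = (r & -r).bit_length() - 1  # index of the lowest set bit
                acc |= b_rows[j]               # OR in row j of B, word-parallel
                r &= r - 1                     # clear that bit
            c_rows.append(acc)
        return c_rows

    # A = [[1,0],[1,1]], B = [[0,1],[1,0]] -> C = [[0,1],[1,1]]
    print(bool_matmul([0b01, 0b11], [0b10, 0b01]))  # [2, 3]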

Elisp: What is the time complexity for basic arithmetic operations using calc functions

This includes addition, subtraction, multiplication, and division.
I'm asked to analyze some algorithms that rely heavily on calling calc-eval to work. My teacher wants us to account for the complexity of the basic operations when working with large numbers.
How do these arithmetic operations scale as the size of the numbers increases?

How to find the time complexity of the algebra operation in algebraixlib

How can I calculate, using mathematics or Big O notation, the time complexity of the operations used in the algebra of data?
I will use a book example to explain my question. Consider the following example given in the book.
B
In the above example I would like to calculate the time complexity of the transpose and compose operations.
If possible, I would also like to find out the time complexity of the other algebra-of-data operations.
Please let me know if you need more explanation.
@wesholler I edited my question based on your explanation. The following is a real-life example; suppose we want to calculate the time complexity of the operations used below.
Suppose I have algebra-of-data operations as follows.
Could you describe how we would calculate the time complexity in the above example, preferably in Big O?
Thanks
This answer has three parts:
General Time Complexity Analysis
Generally, the time complexity (Big O) can be determined by considering the origin of an operation: that is, which operations were extended from more primitive algebras to derive it?
The following rules describe the upper bound on the time complexity for both unary and binary operations that are extended into their power set algebras.
Unary extension can be thought of like a map operation and so has linear time complexity. Binary extension evaluates the cross product of the operation's arguments and so has worst-case time complexity O(n*m), where n and m are the cardinalities of the two arguments (O(n^2) when they are of similar size). It is important to remember that the real upper bound is this product of cardinalities; it matters in practice because the right-hand argument to a composition or superstriction operation is often a singleton.
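As a rough illustration (hypothetical helper names, not algebraixlib's API), the two extension patterns look like this; the loop structure is what produces the linear and product-of-cardinalities bounds above:

    # Unary extension: apply op to every element of a set -> Theta(|xs|).
    def extend_unary(op, xs):
        return frozenset(op(x) for x in xs)

    # Binary extension: evaluate op over the cross product of xs and ys,
    # keeping only defined results -> up to |xs| * |ys| applications of op.
    def extend_binary(op, xs, ys):
        results = set()
        for x in xs:
            for y in ys:
                r = op(x, y)
                if r is not None:  # None models "undefined for this pair"
                    results.add(r)
        return frozenset(results)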
Time Complexity for algebraixlib Implementations
We can look at a few examples of how extension affects the time complexity, while at the same time analyzing the complexity of the implementations in algebraixlib (the last part talks about other implementations).
Because it is a reference implementation of data algebra, algebraixlib implements the extended operations very literally. For that reason, Big Theta is used below: the formulas represent both the lower and the upper bound of the time complexity.
Here is the unary operation transpose being extended from couplets to relations and then to clans.
Likewise, here is the binary operation compose being extended from couplets to relations and then to clans.
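For intuition, that extension chain can be modeled in a few lines of Python (a hypothetical mini-model with couplets as pairs, relations as frozensets of couplets, and clans as frozensets of relations; this is not algebraixlib's actual code):

    def transpose_couplet(c):        # Theta(1)
        left, right = c
        return (right, left)

    def transpose_relation(rel):     # Theta(|rel|) couplet transposes
        return frozenset(transpose_couplet(c) for c in rel)

    def transpose_clan(clan):        # Theta(total couplets in the clan)
        return frozenset(transpose_relation(r) for r in clan)

Compose follows the binary-extension pattern from the earlier sketch, which is why its clan-level cost picks up a product of cardinalities.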
It is clear that the complexity of both clan operations is influenced both by the number of relations and by the number of couplets in those relations.
Time Complexity for Other Implementations
It is important to note that the above section describes the time complexity that is specific to the algorithms implemented in algebraixlib.
One could imagine implementing e.g. clans.cross_union with a method similar to a sort-merge join or a hash join. In that case the upper bound would remain the same, but the lower-bound (and expected) time complexity would be reduced by one or more degrees.
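For a flavor of how that would work, here is a generic hash-join sketch (illustrative only; the key and combine functions would depend on the operation being implemented). Bucketing one side by key replaces the nested cross-product loop, so the expected cost drops from O(n*m) to roughly O(n + m + output size):

    from collections import defaultdict

    def hash_join(xs, ys, key, combine):
        buckets = defaultdict(list)
        for y in ys:
            buckets[key(y)].append(y)        # index one side by its key
        for x in xs:
            for y in buckets.get(key(x), ()):
                yield combine(x, y)          # only matching pairs are combined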

The connection between Abstract Algebra and programming

I'm a computer science student, and among the things I'm learning is Abstract Algebra, especially group theory.
I've been programming for about 5 years, and I've never used the kinds of things I'm learning in Abstract Algebra.
What is the connection between programming and abstract algebra? I really want to know.
Group theory is very important in cryptography, for instance, especially finite groups in asymmetric encryption schemes such as RSA and ElGamal. These use finite groups that are based on multiplication of integers. However, there are also other, less obvious kinds of groups that are applied in cryptography, such as elliptic curves.
Another application of group theory, or, to be more specific, finite fields, is checksums. The widely used checksum mechanism CRC is based on modular arithmetic in the polynomial ring over the finite field GF(2).
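To see that arithmetic concretely, here is a toy CRC in Python: division with remainder in the polynomial ring GF(2)[x], where subtraction is XOR. (Real CRC implementations add bit reflection, initial register values, and table-driven loops; this sketch keeps only the algebra.)

    def crc_remainder(data: int, data_len: int, poly: int, poly_len: int) -> int:
        # Treat bit strings as polynomials over GF(2) and compute
        # (data * x^(poly_len - 1)) mod poly; subtraction in GF(2) is XOR.
        reg = data << (poly_len - 1)
        for i in range(data_len - 1, -1, -1):
            if reg & (1 << (i + poly_len - 1)):
                reg ^= poly << i
        return reg

    # 4 data bits 0b1101 with generator x^3 + x + 1 (0b1011) -> 3 check bits
    print(crc_remainder(0b1101, 4, 0b1011, 4))  # 1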
Another, more abstract application of group theory is in functional programming. In fact, all of these applications exist in any programming language, but functional programming languages, especially Haskell and Scala(z), embrace them by providing type classes for algebraic structures such as monoids, groups, rings, fields, vector spaces and so on. The advantage, obviously, is that functions and algorithms can be specified in a very generic, high-level way.
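As a small Python sketch of the idea (Haskell's Monoid type class is the canonical version; this is just an illustration): once something is known to be a monoid, a fold can be written once and reused for every instance.

    from functools import reduce

    def mconcat(op, identity, xs):
        # valid for any monoid: op is associative and identity is neutral
        return reduce(op, xs, identity)

    print(mconcat(lambda a, b: a + b, 0, [1, 2, 3]))         # 6 (ints under +)
    print(mconcat(lambda a, b: a + b, "", ["a", "b", "c"]))  # abc (string concat)
    print(mconcat(lambda a, b: a | b, set(), [{1}, {2}]))    # {1, 2} (set union)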
On a meta level, I would also say that an understanding of basic mathematics such as this is essential for any computer scientist (not so much for a computer programmer, but for a computer scientist, definitely), as it shapes your entire way of thinking and is necessary for more advanced mathematics. If you want to do 3D graphics or program an industrial robot, you will need Linear Algebra, and for Linear Algebra you should know at least some Abstract Algebra.
I don't think there is any intrinsic connection between group theory and programming; there are applications of programming to algebra and vice versa, but they are not inherently tied together or mutually beneficial to one another, so to speak.
If you are a computer scientist trying to solve some fun abstract algebra problems, there are numerous enumeration and classification problems that could benefit from a computational approach, notably in geometric group theory, which is a hot topic at the moment. Here is a pretty comprehensive list of researchers and problems (as of at least three years ago):
http://www.math.ucsb.edu/~jon.mccammond/geogrouptheory/people.html
Popular problems include finitely presented groups, the classification of transitive permutation groups, Möbius functions, and polycyclic generating systems,
and these:
http://en.wikipedia.org/wiki/Schreier–Sims_algorithm
http://en.wikipedia.org/wiki/Todd–Coxeter_algorithm
and a problem that gave me many sleepless nights:
http://en.wikipedia.org/wiki/Word_problem_for_groups
Existing computer algebra systems include GAP and MAGMA.
Finally, an excellent reference:
http://books.google.com/books?id=k6joymrqQqMC&printsec=frontcover&dq=finitely+presented+groups+book&hl=en&sa=X&ei=WBWUUqjsHI6-sQTR8YKgAQ&ved=0CC0Q6AEwAA#v=onepage&q=finitely%20presented%20groups%20book&f=false

Why is Lisp so often connected to "Symbolic computation"

We know mathematics has both symbolic and numeric computation. But why is Lisp, as a general-purpose programming language, connected more closely to symbolic computation?
What parts of Lisp make it good for symbolic problems?
At the time, symbols were first-class objects in Lisp, and less so in other languages. Most other languages were focused on numeric computing (1 + 2 + SIN(PI / 2)).
In Lisp, the symbol is a specific language artifact (distinct from a character string) that makes working with Things That Aren't Numbers very easy. Since symbols were first-class objects within the system, Lisp provided "free" parsers, readers, and writers for such objects.
'(A + B / 2) was trivial to represent in off-the-shelf Lisp.
The ease of representation lifted the burden of reading and writing those aspects of a symbolic computing application, making it easier to focus on the core problems (equation reduction, problem solving, theorem proving, etc.).
Even today, few languages have a first-class concept of the symbol. But there are now enough utilities and libraries that this matters less than it did back when it was basically Lisp vs. Fortran vs. Pascal for this kind of work.
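As a rough contrast, here is the same idea in Python, where the expression tree for A + B / 2 must be built and walked by hand; Lisp's reader produces the equivalent structure directly from the source text '(A + B / 2):

    # Hand-built expression tree; "symbols" are just strings here.
    expr = ("+", "A", ("/", "B", 2))

    def simplify(e):
        # toy symbolic rewrite: x / 1 -> x
        if isinstance(e, tuple):
            op, left, right = e[0], simplify(e[1]), simplify(e[2])
            if op == "/" and right == 1:
                return left
            return (op, left, right)
        return e

    print(simplify(("+", "A", ("/", "B", 1))))  # ('+', 'A', 'B')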
The term "symbolic computation", often associated with Lisp, is baffling to millennials who grew up in an age where computers are used in all areas of human and social life. Back when Lisp appeared, computers were expensive and were used primarily in a scientific/accounting context: number crunching, translating known algorithms from mathematics into programs. One area where existing languages had trouble solving problems elegantly was algebraic formulas with polynomial expressions. Lisp provided first-class constructs that enabled the design of computer algebra systems that mapped seamlessly onto traditional mathematical reasoning, hence the term. Symbolic computation is still relevant today, particularly in the fields of logic programming, constraint solving, artificial intelligence, etc.
In Lisp you can enter a symbol without predeclaration. Complex data structures are also supported, as linked lists of arbitrary complexity. Since manipulation of symbols is best accomplished in the context of complex data structures, Lisp is a perfect fit. Additionally, when you use complex data structures to model a problem, other languages require you to spend a lot of effort walking those structures; in Lisp the traversal is largely automated, so you can work at a higher level of abstraction.