Why is the sheaf of Y (i.e. O_Y), as a closed subscheme of X which is not necessarily affine, quasi-coherent?

In Hartshorne, Chapter II, Proposition 5.9, I get stuck with "Hence, J_Y, being the kernel of a morphism of quasi-coherent sheaves, is also quasi-coherent." in the proof. My question is: why is the sheaf of Y (i.e. O_Y), as a closed subscheme of X which is not necessarily affine, quasi-coherent?

Related

How do I approach the "h" parameter to zero in a derivative? [closed]

I have a problem with my derivative because it shows me an error when I make the denominator approach zero.
func derivativeOf(fn: (Double) -> Double, atX x: Double) -> Double {
let h -> 0
return (fn(x + h) - fn(x))/h
}
I know my syntax is wrong, but this is how it is commonly written in calculus and differentiation.
You can't express "a number approaching zero" as let h -> 0 - this is simply invalid syntax in Swift.
There's also no specific operator for "a number approaching zero". But depending on what you need, you could, for example, express "the smallest possible positive number" using Double.leastNonzeroMagnitude:
let h = Double.leastNonzeroMagnitude
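As a side note, the numerical idea itself is language-agnostic: pick a small but finite step h. Below is a minimal sketch in Scala (the step size 1e-6 and the helper name are illustrative choices, not part of the answer above); a step as small as the least nonzero magnitude tends to make x + h round back to x, so the difference becomes 0 for most x, and a moderately small h usually behaves better:
// Minimal sketch: central-difference numerical derivative with a small, finite step.
// The step size 1e-6 is illustrative; too small a step makes fn(x + h) - fn(x - h)
// vanish in floating point, too large a step makes the approximation coarse.
def derivativeOf(fn: Double => Double, x: Double, h: Double = 1e-6): Double =
  (fn(x + h) - fn(x - h)) / (2 * h)

println(derivativeOf(math.sin, 0.0))    // ~1.0  (d/dx sin x at 0 is cos 0 = 1)
println(derivativeOf(x => x * x, 3.0))  // ~6.0  (d/dx x^2 at 3 is 2*3 = 6)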

Permutation of a vector and its sub elements using MATLAB [closed]

I have the vector below:
v={'T','AT','AS','C'};
I would like to see all the possible permutations for this vector. To do so I can use the command below:
p=perms(v)
But I want to go one step further, as each of the elements has a sub-index from 1 to 4, for example T1, T2, T3, T4, ..., C1, C2, C3, C4. I would like to have all the possible permutations with their sub-indices, giving results such as:
T1,AT1,AS1,C1
C3,AT3,AS3,T3
AS2,AT2,C2,T2
.
.
.
Could you please help me with how to do that?
Thanks
You can do this by first using ndgrid to generate a set of indices for all your possible combinations:
v = {'T1', 'AT1', 'AS1', 'C1'; ...
'T2', 'AT2', 'AS2', 'C2'; ...
'T3', 'AT3', 'AS3', 'C3'; ...
'T4', 'AT4', 'AS4', 'C4'};
[ind1, ind2, ind3, ind4] = ndgrid(1:4);
c = [v(ind1(:), 1) v(ind2(:), 2) v(ind3(:), 3) v(ind4(:), 4)];
And c will be a 256-by-4 cell array, as expected (4^4 combinations). Now you can expand each row by its total number of permutations using perms like so:
p = perms(1:4);
p = reshape(c(:, p.').', 4, []).';
And p will be a 6144-by-4 cell array, also as expected (24 permutations times 256 combinations).
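As a sanity check on those sizes in a general-purpose language (a sketch in Scala rather than MATLAB; the names are illustrative), one can build the 4^4 index assignments with a Cartesian product and then take all 4! orderings of each row:
// All 4^4 = 256 ways to attach a sub-index (1..4) to each label, then all
// 4! = 24 orderings of each such row, giving 256 * 24 = 6144 rows in total.
val indices = 1 to 4
val combinations = for {
  i1 <- indices; i2 <- indices; i3 <- indices; i4 <- indices
} yield Vector(s"T$i1", s"AT$i2", s"AS$i3", s"C$i4")

val all = combinations.flatMap(_.permutations)

println(combinations.size)  // 256
println(all.size)           // 6144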

Indexing vectors and arrays with +: [duplicate]

I am seeing a code in SystemVerilog which has something like this:
if(address[2*pointer+:2])
do_something;
How should I understand the +: when indexing this vector?
I found that it is called bit slicing, but I can't find an explanation of it.
Description and examples can be found in IEEE Std 1800-2017 § 11.5.1 "Vector bit-select and part-select addressing". The first IEEE appearance is IEEE 1364-2001 (Verilog) § 4.2.1 "Vector bit-select and part-select addressing". Here is a direct example from the LRM:
logic [31: 0] a_vect;
logic [0 :31] b_vect;
logic [63: 0] dword;
integer sel;
a_vect[ 0 +: 8] // == a_vect[ 7 : 0]
a_vect[15 -: 8] // == a_vect[15 : 8]
b_vect[ 0 +: 8] // == b_vect[0 : 7]
b_vect[15 -: 8] // == b_vect[8 :15]
dword[8*sel +: 8] // variable part-select with fixed width
If sel is 0 then dword[8*(0) +: 8] == dword[7:0]
If sel is 7 then dword[8*(7) +: 8] == dword[63:56]
The value to the left is always the starting index. The number to the right is the width and must be a positive constant. The + and - indicate whether to select bits at a higher or lower index than the starting index.
Assuming address is in little endian ([msb:lsb]) format, then if(address[2*pointer+:2]) is the equivalent of if({address[2*pointer+1],address[2*pointer]})
This is another way to specify the range of the bit-vector.
x +: N: the start position of the vector is given by x, and you count up from x by N.
There is also
x -: N: in this case the start position is x, and you count down from x by N.
N is a constant and x is an expression that can contain iterators.
It has a couple of benefits -
It makes the code more readable.
You can specify an iterator when referencing bit-slices without getting a "cannot have a non-constant value" error.
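If it helps to see the index arithmetic by itself, here is a tiny, purely illustrative sketch (in Scala, not SystemVerilog; the helper names are made up) of the [msb:lsb] bounds that a part-select denotes on a vector declared with a descending range such as logic [31:0] a_vect:
// x +: width  ->  [x + width - 1 : x]   (count up from the start index)
def plusColon(x: Int, width: Int): (Int, Int) = (x + width - 1, x)

// x -: width  ->  [x : x - width + 1]   (count down from the start index)
def minusColon(x: Int, width: Int): (Int, Int) = (x, x - width + 1)

println(plusColon(0, 8))      // (7,0)    a_vect[ 0 +: 8] == a_vect[ 7 : 0]
println(minusColon(15, 8))    // (15,8)   a_vect[15 -: 8] == a_vect[15 : 8]
println(plusColon(8 * 7, 8))  // (63,56)  dword[8*7 +: 8] == dword[63:56]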

Calculating prime numbers in Scala: how does this code work?

So I've spent hours trying to work out exactly how this code produces prime numbers.
lazy val ps: Stream[Int] = 2 #:: Stream.from(3).filter(i =>
ps.takeWhile{j => j * j <= i}.forall{ k => i % k > 0});
I've used a number of printlns etc., but nothing's making it clearer.
This is what I think the code does:
/**
* [2,3]
*
* takeWhile 2*2 <= 3
* takeWhile 2*2 <= 4 found match
* (4 % [2,3] > 1) return false.
* takeWhile 2*2 <= 5 found match
* (5 % [2,3] > 1) return true
* Add 5 to the list
* takeWhile 2*2 <= 6 found match
* (6 % [2,3,5] > 1) return false
* takeWhile 2*2 <= 7
* (7 % [2,3,5] > 1) return true
* Add 7 to the list
*/
But if I change j*j in the code to 2*2, which I assumed would work exactly the same, it causes a StackOverflowError.
I'm obviously missing something fundamental here, and could really use someone explaining this to me like I was a five-year-old.
Any help would be greatly appreciated.
I'm not sure that seeking a procedural/imperative explanation is the best way to gain understanding here. Streams come from functional programming and they're best understood from that perspective. The key aspects of the definition you've given are:
It's lazy. Other than the first element in the stream, nothing is computed until you ask for it. If you never ask for the 5th prime, it will never be computed.
It's recursive. The list of prime numbers is defined in terms of itself.
It's infinite. Streams have the interesting property (because they're lazy) that they can represent a sequence with an infinite number of elements. Stream.from(3) is an example of this: it represents the list [3, 4, 5, ...].
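To make these three points concrete, here is a small snippet using the same Stream API as the question (Scala 2.12; in newer Scala versions this would be LazyList):
// Stream.from(3) represents the infinite sequence 3, 4, 5, ... but nothing
// beyond the head is computed until something forces it.
val nums: Stream[Int] = Stream.from(3)

// Forcing only the first five elements computes only those five.
println(nums.take(5).toList)                      // List(3, 4, 5, 6, 7)

// Filtering an infinite stream is also lazy: this does only as much work as
// is needed to find the first three multiples of 7.
println(nums.filter(_ % 7 == 0).take(3).toList)   // List(7, 14, 21)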
Let's see if we can understand why your definition computes the sequence of prime numbers.
The definition starts out with 2 #:: .... This just says that the first number in the sequence is 2 - simple enough so far.
The next part defines the rest of the prime numbers. We can start with all the counting numbers starting at 3 (Stream.from(3)), but we obviously need to filter a bunch of these numbers out (i.e., all the composites). So let's consider each number i. If i is not a multiple of a lesser prime number, then i is prime. That is, i is prime if, for all primes k less than i, i % k > 0. In Scala, we could express this as
nums.filter(i => ps.takeWhile(k => k < i).forall(k => i % k > 0))
However, it isn't actually necessary to check all lesser prime numbers -- we really only need to check the prime numbers whose square is less than or equal to i (this is a fact from number theory*). So we could instead write
nums.filter(i => ps.takeWhile(k => k * k <= i).forall(k => i % k > 0))
So we've derived your definition.
Now, if you happened to try the first definition (with k < i), you would have found that it didn't work. Why not? It has to do with the fact that this is a recursive definition.
Suppose we're trying to decide what comes after 2 in the sequence. The definition tells us to first determine whether 3 belongs. To do so, we consider the list of primes up to the first one greater than or equal to 3 (takeWhile(k => k < i)). The first prime is 2, which is less than 3 -- so far so good. But we don't yet know the second prime, so we need to compute it. Fine, so we need to first see whether 3 belongs ... BOOM!
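For reference, this is what the two variants look like side by side in Scala (a sketch: defining the bad one is harmless, but forcing its second element recurses forever, which in practice shows up as a StackOverflowError):
// Working variant: only primes up to the first one whose square exceeds i are
// consulted, and those have all been produced already.
lazy val ps: Stream[Int] = 2 #:: Stream.from(3).filter(i =>
  ps.takeWhile(k => k * k <= i).forall(k => i % k > 0))

// Circular variant: deciding whether 3 is prime asks for the primes below 3,
// which forces the very tail that is currently being computed.
lazy val badPs: Stream[Int] = 2 #:: Stream.from(3).filter(i =>
  badPs.takeWhile(k => k < i).forall(k => i % k > 0))

// ps.take(5).toList     // List(2, 3, 5, 7, 11)
// badPs.take(2).toList  // never finishes (StackOverflowError)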
* It's pretty easy to see that if a number n is composite then the square of one of its factors must be less than or equal to n. If n is composite, then by definition n == a * b, where 1 < a <= b < n (we can guarantee a <= b just by labeling the two factors appropriately). From a <= b it follows that a^2 <= a * b, so it follows that a^2 <= n.
Your explanation is mostly correct; you made only two mistakes:
takeWhile doesn't include the last checked element:
scala> List(1,2,3).takeWhile(_<2)
res1: List[Int] = List(1)
You assume that ps always contains only a two and a three, but because Stream is lazy it is possible to add new elements to it. In fact, each time a new prime is found it is added to ps, and in the next step takeWhile will consider this newly added element. Here, it is important to remember that the tail of a Stream is computed only when it is needed; thus takeWhile can't see it before forall evaluates to true.
Keep these two things in mind and you should come up with this:
ps = [2]
i = 3
takeWhile
2*2 <= 3 -> false
forall on []
-> true
ps = [2,3]
i = 4
takeWhile
2*2 <= 4 -> true
3*3 <= 4 -> false
forall on [2]
4%2 > 0 -> false
ps = [2,3]
i = 5
takeWhile
2*2 <= 5 -> true
3*3 <= 5 -> false
forall on [2]
5%2 > 0 -> true
ps = [2,3,5]
i = 6
...
While these steps describe the behavior of the code, they are not fully accurate, because not only is adding elements to the Stream lazy, but so is every operation on it. This means that when you call xs.takeWhile(f), not all values up to the point where f is false are computed at once - they are computed when forall wants to see them (because it is the only function here that needs to look at all elements before it can definitely return true; for false it can abort earlier). Here is the computation order when laziness is taken into account everywhere (the example only looks at 9):
ps = [2,3,5,7]
i = 9
takeWhile on 2
2*2 <= 9 -> true
forall on 2
9%2 > 0 -> true
takeWhile on 3
3*3 <= 9 -> true
forall on 3
9%3 > 0 -> false
ps = [2,3,5,7]
i = 10
...
Because forall aborts as soon as it evaluates to false, takeWhile doesn't compute the remaining possible elements.
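You can watch this happen by adding a println to the divisor candidates (a sketch; the tracing map is added only for illustration and uses the same Stream API as the question):
// Trace which divisor candidates are actually forced when testing i = 9.
val i = 9
val result = Stream.from(2)
  .map { k => println(s"looking at $k"); k }
  .takeWhile(k => k * k <= i)
  .forall(k => i % k > 0)
println(result)
// Output:
//   looking at 2     2*2 <= 9 and 9 % 2 > 0, so forall moves on
//   looking at 3     3*3 <= 9 but 9 % 3 == 0, so forall stops with false
//   false            4 and everything after it is never even looked at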
That code is easier (for me, at least) to read with some variables renamed suggestively, as
lazy val ps: Stream[Int] = 2 #:: Stream.from(3).filter(i =>
ps.takeWhile{p => p * p <= i}.forall{ p => i % p > 0});
This reads left-to-right quite naturally, as
primes are 2, and those numbers i from 3 up such that all of the primes p whose square does not exceed i do not divide i evenly (i.e. each leaves a non-zero remainder).
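For concreteness, forcing a prefix of ps as defined just above computes exactly the primes, and only as many of them as requested:
// Only the first ten elements of the infinite stream are ever computed here.
println(ps.take(10).toList)
// List(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)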
In a true recursive fashion, to understand this definition as defining the ever increasing stream of primes, we assume that it is so, and from that assumption we see that no contradiction arises, i.e. the truth of the definition holds.
The only potential problem after that, is the timing of accessing the stream ps as it is being defined. As the first step, imagine we just have another stream of primes provided to us from somewhere, magically. Then, after seeing the truth of the definition, check that the timing of the access is okay, i.e. we never try to access the areas of ps before they are defined; that would make the definition stuck, unproductive.
I remember reading somewhere (don't recall where) something like the following -- a conversation between a student and a wizard,
student: which numbers are prime?
wizard: well, do you know what number is the first prime?
s: yes, it's 2.
w: okay (quickly writes down 2 on a piece of paper). And what about the next one?
s: well, next candidate is 3. we need to check whether it is divided by any prime whose square does not exceed it, but I don't yet know what the primes are!
w: don't worry, I'll give them to you. It's a kind of magic I know; I'm a wizard, after all.
s: okay, so what is the first prime number?
w: (glances over the piece of paper) 2.
s: great, so its square is already greater than 3... HEY, you've cheated! .....
Here's a pseudocode[1] translation of your code, read partially right-to-left, with some variables again renamed for clarity (using p for "prime"):
ps = 2 : filter (\i-> all (\p->rem i p > 0) (takeWhile (\p->p^2 <= i) ps)) [3..]
which is also
ps = 2 : [i | i <- [3..], and [rem i p > 0 | p <- takeWhile (\p->p^2 <= i) ps]]
which is a bit more visually apparent, using list comprehensions. and checks that all entries in a list of Booleans are True (read | as "for", <- as "drawn from", , as "such that" and (\p-> ...) as "lambda of p").
So you see, ps is a lazy list of 2, and then of numbers i drawn from a stream [3,4,5,...] such that for all p drawn from ps such that p^2 <= i, it is true that i % p > 0. Which is actually an optimal trial division algorithm. :)
There's a subtlety here of course: the list ps is open-ended. We use it as it is being "fleshed out" (which is possible, of course, because it is lazy). When primes p are taken from ps, it could potentially be the case that we run past its end, in which case we'd have a non-terminating calculation on our hands (a "black hole"). It just so happens :) (and needs to / can be proved mathematically) that this is impossible with the above definition. So 2 is put into ps unconditionally, so there's something in it to begin with.
But if we try to "simplify",
bad = 2 : [i | i <- [3..], and [rem i p > 0 | p <- takeWhile (\p->p < i) bad]]
it stops working after producing just one number, 2: when considering 3 as the candidate, takeWhile (\p->p < 3) bad demands the next number in bad after 2, but there aren't yet any more numbers there. It "jumps ahead of itself".
This is "fixed" with
bad = 2 : [i | i <- [3..], and [rem i p > 0 | p <- [2..(i-1)] ]]
but that is a much much slower trial division algorithm, very far from the optimal one.
--
[1] (Haskell actually, it's just easier for me that way :) )

Polynomially larger - confusion

I am studying for my algorithms class. I have a question in the context of the master theorem:
How is n*log2(n) polynomially larger than n^(log4(3))?
(log2(x) = log to the base 2 of x
log4(x) = log to the base 4 of x)
(Note: This is a solved problem on page 95 of 'Introduction to Algorithms' by Cormen et al.)
log4(3) is less than 1, and so n^(log4(3)) is less than n^1 = n, which is less than n*log2(n).
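To make the "polynomially" part explicit (this gloss uses the case 3 condition of the master theorem, f(n) = \Omega(n^{\log_b a + \varepsilon}) for some \varepsilon > 0):
% n log_2 n exceeds n^{log_4 3} by a polynomial factor, not just by a constant:
\[
  \log_4 3 \approx 0.793,
  \qquad
  \varepsilon := 1 - \log_4 3 \approx 0.207 > 0 .
\]
\[
  n \log_2 n \;\ge\; n \;=\; n^{\log_4 3 + \varepsilon}
  \qquad \text{for all } n \ge 2,
\]
% so n log_2 n = \Omega(n^{\log_4 3 + \varepsilon}), i.e. it is polynomially larger.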