Can rrule start at half past hour? - python-dateutil

Is it possible to create an rrule that runs every 30 minutes but starts and ends at the middle of an hour? If rrule accepted non-integer parameters like byhour=range(16.5, 19) that would be great, but unfortunately only integers are accepted.
In my case the times would be 16:30, 17:00, 17:30, 18:00 and 18:30 every weekday. The closest I can get is 16:00 to 18:30, as follows:
from dateutil.rrule import *
from dateutil.parser import parse
list(rrule(MINUTELY, interval=30, count=20, byhour=range(16,19), byminute=(0,30), byweekday=(MO,TU,WE,TH,FR), dtstart=parse("20220602T070000")))
...with the following result:
[datetime.datetime(2022, 6, 2, 16, 0),
datetime.datetime(2022, 6, 2, 16, 30),
datetime.datetime(2022, 6, 2, 17, 0),
datetime.datetime(2022, 6, 2, 17, 30),
datetime.datetime(2022, 6, 2, 18, 0),
datetime.datetime(2022, 6, 2, 18, 30),
datetime.datetime(2022, 6, 3, 16, 0),
datetime.datetime(2022, 6, 3, 16, 30),
datetime.datetime(2022, 6, 3, 17, 0),
datetime.datetime(2022, 6, 3, 17, 30),
datetime.datetime(2022, 6, 3, 18, 0),
datetime.datetime(2022, 6, 3, 18, 30),
datetime.datetime(2022, 6, 6, 16, 0),
datetime.datetime(2022, 6, 6, 16, 30),
datetime.datetime(2022, 6, 6, 17, 0),
datetime.datetime(2022, 6, 6, 17, 30),
datetime.datetime(2022, 6, 6, 18, 0),
datetime.datetime(2022, 6, 6, 18, 30),
datetime.datetime(2022, 6, 7, 16, 0),
datetime.datetime(2022, 6, 7, 16, 30)]
I was hoping rrule had more flexibility than crontab...

I found a solution using rruleset.exrule:
from dateutil.rrule import *
from dateutil.parser import parse
myrrule = rrule(MINUTELY, interval=30, count=20, byhour=range(16,19), byminute=(0,30), byweekday=(MO,TU,WE,TH,FR), dtstart=parse("20220602T070000"))
myexrule = rrule(DAILY, interval=1, count=4, byhour=16, byminute=0, byweekday=(MO,TU,WE,TH,FR), dtstart=parse("20220602T070000"))
rrset = rruleset()
rrset.rrule(myrrule)
rrset.exrule(myexrule)
list(rrset)
with the following result:
[datetime.datetime(2022, 6, 2, 16, 30),
datetime.datetime(2022, 6, 2, 17, 0),
datetime.datetime(2022, 6, 2, 17, 30),
datetime.datetime(2022, 6, 2, 18, 0),
datetime.datetime(2022, 6, 2, 18, 30),
datetime.datetime(2022, 6, 3, 16, 30),
datetime.datetime(2022, 6, 3, 17, 0),
datetime.datetime(2022, 6, 3, 17, 30),
datetime.datetime(2022, 6, 3, 18, 0),
datetime.datetime(2022, 6, 3, 18, 30),
datetime.datetime(2022, 6, 6, 16, 30),
datetime.datetime(2022, 6, 6, 17, 0),
datetime.datetime(2022, 6, 6, 17, 30),
datetime.datetime(2022, 6, 6, 18, 0),
datetime.datetime(2022, 6, 6, 18, 30),
datetime.datetime(2022, 6, 7, 16, 30)]
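An alternative (a sketch, not from the original thread) is to put two rrules into one rruleset instead of excluding occurrences afterwards: one rule for the half-past slots and one for the on-the-hour slots. Since a rruleset has no count parameter, itertools.islice caps the number of occurrences:

```python
from itertools import islice
from dateutil.rrule import rrule, rruleset, DAILY, MO, TU, WE, TH, FR
from dateutil.parser import parse

weekdays = (MO, TU, WE, TH, FR)
start = parse("20220602T070000")

rrset = rruleset()
# Half-past slots: 16:30, 17:30, 18:30
rrset.rrule(rrule(DAILY, byhour=range(16, 19), byminute=30, bysecond=0,
                  byweekday=weekdays, dtstart=start))
# On-the-hour slots: 17:00, 18:00
rrset.rrule(rrule(DAILY, byhour=range(17, 19), byminute=0, bysecond=0,
                  byweekday=weekdays, dtstart=start))

# The set merges both rules in chronological order; islice replaces count=.
occurrences = list(islice(rrset, 16))
```

This yields 16:30 through 18:30 on each weekday without needing an exrule.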


Finetuning LayoutLM on FUNSD-like dataset - index out of range in self

I'm experimenting with Hugging Face transformers to finetune microsoft/layoutlmv2-base-uncased through AutoModelForTokenClassification on my custom dataset, which is similar to FUNSD (pre-processed and normalized). After a few iterations of training I get this error:
Traceback (most recent call last):
  File "layoutlmV2/train.py", line 137, in <module>
    trainer.train()
  File "..../lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train
    return inner_training_loop(
  File "..../lib/python3.8/site-packages/transformers/trainer.py", line 1651, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "..../lib/python3.8/site-packages/transformers/trainer.py", line 2345, in training_step
    loss = self.compute_loss(model, inputs)
  File "..../lib/python3.8/site-packages/transformers/trainer.py", line 2377, in compute_loss
    outputs = model(**inputs)
  File "..../lib/python3.8/site-packages/torch/nn/modules/module.py", line 1131, in _call_impl
    return forward_call(*input, **kwargs)
  File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1228, in forward
    outputs = self.layoutlmv2(
  File "..../lib/python3.8/site-packages/torch/nn/modules/module.py", line 1131, in _call_impl
    return forward_call(*input, **kwargs)
  File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 902, in forward
    text_layout_emb = self._calc_text_embeddings(
  File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 753, in _calc_text_embeddings
    spatial_position_embeddings = self.embeddings._calc_spatial_position_embeddings(bbox)
  File "..../lib/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 93, in _calc_spatial_position_embeddings
    h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])
  File "..../lib/python3.8/site-packages/torch/nn/modules/module.py", line 1131, in _call_impl
    return forward_call(*input, **kwargs)
  File "..../lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
    return F.embedding(
  File "..../lib/python3.8/site-packages/torch/nn/functional.py", line 2203, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
After further inspection (vocab size, bboxes, dimensions, classes...) I noticed that there are negative values inside the input tensor causing the error, while the input tensors of previous, successful iterations contain non-negative integers only. These negative numbers are returned by _calc_spatial_position_embeddings(self, bbox) in modeling_layoutlmv2.py, line 92:
h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])
What may cause the returned input values to be negative?
What could I do to prevent this error from happening?
Example of the input tensor that triggers the error in torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) :
tensor([[ 0, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 9, 9, 9, 9, 9, 9, 9, 9, 9,
9, 9, 9, 9, 9, 9, 9, 10, 10, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12,
12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 12, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,
11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12,
12, 12, 12, 12, 12, 12, 12, 12, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
8, 5, 5, 5, 5, 5, 5, -6, -6, -6, -6, -6, -6, 1, 1, 1, 1, 1,
5, 5, 5, 5, 5, 5, 7, 5, 7, 7, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0]])
After double-checking the dataset, and specifically the coordinates of the labels, I found that some rows' bbox coordinates lead to zero (or negative) width or height. Here's a simplified example:
x1, y1, x2, y2 = dataset_row["bbox"]
print((x2 - x1 < 1) or (y2 - y1 < 1))  # output is sometimes True
After removing these labels from the dataset, the issue was resolved.
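A minimal cleanup pass along those lines could look like the following sketch (`dataset` and the `"bbox"` key are assumptions about the custom dataset's layout, not names from the thread):

```python
def has_valid_bbox(row):
    # Keep only boxes with strictly positive width and height, so that
    # bbox[:, :, 3] - bbox[:, :, 1] (and the width counterpart) never
    # produces a negative embedding index.
    x1, y1, x2, y2 = row["bbox"]
    return (x2 - x1) >= 1 and (y2 - y1) >= 1

# 'dataset' stands in for the custom FUNSD-like dataset rows.
dataset = [
    {"bbox": [10, 10, 50, 30]},  # valid box
    {"bbox": [10, 10, 10, 30]},  # zero width -> dropped
    {"bbox": [10, 30, 50, 10]},  # negative height -> dropped
]
cleaned = [row for row in dataset if has_valid_bbox(row)]
print(len(cleaned))  # 1
```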

Filter Scala Matrix with condition

I have the given matrix in Scala:
val matrix = Array(Array(30, 0, 13, 21, 25, 15),
Array(55, 47, 26, 54, 44, 3),
Array(21, 19, 23, 47, 29, 13),
Array(52, 50, 44, 14, 21, 24),
Array(10, 37, 0, 22, 17, 58),
Array(36, 55, 48, 27, 13, 35))
I need to filter the matrix rows (value in the 2nd column > 40 and value in the 4th column < 45).
Can I do this somehow with the matrix.filter method?
You can try this way:
scala> :paste
// Entering paste mode (ctrl-D to finish)
val matrix = Array(Array(30, 0, 13, 21, 25, 15),
Array(55, 47, 26, 54, 44, 3),
Array(21, 19, 23, 47, 29, 13),
Array(52, 50, 44, 14, 21, 24),
Array(10, 37, 0, 22, 17, 58),
Array(36, 55, 48, 27, 13, 35))
// Exiting paste mode, now interpreting.
matrix: Array[Array[Int]] = Array(Array(30, 0, 13, 21, 25, 15), Array(55, 47, 26, 54, 44, 3), Array(21, 19, 23, 47, 29, 13), Array(52, 50, 44, 14, 21, 24), Array(10, 37, 0, 22, 17, 58), Array(36, 55, 48, 27, 13, 35))
scala> matrix.filter(x => x(1) > 40 && x(3) < 45)
res0: Array[Array[Int]] = Array(Array(52, 50, 44, 14, 21, 24), Array(36, 55, 48, 27, 13, 35))
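For comparison, the same row filter can be sketched in Python with a list comprehension (hypothetical translation, using the same data):

```python
# Same matrix as in the question, as a list of rows.
matrix = [
    [30, 0, 13, 21, 25, 15],
    [55, 47, 26, 54, 44, 3],
    [21, 19, 23, 47, 29, 13],
    [52, 50, 44, 14, 21, 24],
    [10, 37, 0, 22, 17, 58],
    [36, 55, 48, 27, 13, 35],
]

# Keep rows where the 2nd column is > 40 and the 4th column is < 45
# (0-based indices 1 and 3, mirroring x(1) and x(3) in the Scala answer).
filtered = [row for row in matrix if row[1] > 40 and row[3] < 45]
print(filtered)  # [[52, 50, 44, 14, 21, 24], [36, 55, 48, 27, 13, 35]]
```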

Different CRC32 values in Java and swift

I'm in the process of writing a utility in Swift which calculates the CRC32 checksum of input data. A similar utility exists in Java, which we use extensively and which has been working well for us.
The Java utility uses java.util.zip.CRC32 to calculate the checksum. Pseudo code is as follows:
Java code:
private void transferFileData(short index, byte[] data, long dataSize) {
    CRC32 crc32 = new CRC32();
    long crc = crc32.update(data, (int) dataSize);
    System.out.println("CRC32 : " + crc);
}
The Swift utility uses CRC32 from CryptoSwift (import CryptoSwift). The CryptoSwift code for generating the checksum in Swift is as follows:
Swift code:
func crc32Func(_ item: [Int8]) {
    let data = Data(bytes: item, count: item.count)
    let byte = data.bytes
    let crc32 = byte.crc32()
    print("checksum == \(crc32)")
}
The output from the Java code is :
Checksum in Java : 3771181957
The output from the swift code is :
Checksum in swift : 1894162356
Why are the checksum values not the same?
This is the input used by the Swift code. The data type is [Int8] and the data size is 450 bytes:
let item: [Int8] = [45, 35, 76, 70, 67, 68, 95, 70, 79, 84, 65, 95, 70, 87, 95,
70, 85, 76, 76, 10, 80, 75, 71, 95, 86 , 69, 82, 83, 73, 79, 78, 58,
51, 46, 48, 46, 48, 10, 66, 65, 83, 69, 95, 86, 69, 82, 83, 73, 79, 78
, 58, 10, 72, 65, 83, 72, 58, 51, 97, 57, 98, 51, 99, 52, 56, 57, 52,
56, 56, 53, 49, 50, 53, 52, 55 , 48, 51, 49, 54, 48, 57, 56, 48, 101,
98, 101, 51, 54, 54, 10, 80, 75, 71, 95, 83, 73, 90, 69, 58, 49 , 51,
49, 48, 55, 50, 48, 10, 26, 0, 0, 5, 32, -79, 11, 5, 8, -39, -5, 4, 8,
-35, -5, 4, 8, -15, -5, 4, 8, -13, -5, 4, 8, -11, -5, 4, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -31, -15, 10, 8, -9, -5, 4, 8,
0, 0, 0, 0, -127, -14, 10, 8, -27, -14, 10, 8, 1, 12, 5, 8, 1, 12, 5,
8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1,
12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12,
5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8,
-7, -5, 4, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 9, -4,
4, 8, 25, -4, 4, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5,
8, 1, 12, 5, 8, 57, -4, 4, 8, 1, 12, 5, 8, 73, -4, 4, 8, 89, -4, 4, 8,
105, -4, 4, 8, 1, 12, 5, 8, -39, -4, 4, 8, 1, 12, 5, 8, 1, 12, 5, 8,
1, 12, 5, 8, 41, -4, 4, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1,
12, 5 , 8, 1, 12, 5, 8, 1, 12, 5, 8, 121, -4, 4, 8, -119, -4, 4, 8, 1,
12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12,
5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8,
1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, 1, 12, 5, 8, -103, -4]
is the input.
Java input: byte[] data. The data entered is the same as the Swift [Int8] array, and the resulting CRC value is a long.
The CRC you are getting from your Swift code (whatever that actual code is) is correct. The CRC you are getting from your Java code (whatever that actual code is) is not correct.
There is no way to know what you're doing wrong in your Java code without at least being able to see that code.
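One way to settle which result is the standard CRC-32 (a sketch, not from the original thread): Python's zlib.crc32 implements the same polynomial as java.util.zip.CRC32 and CryptoSwift's crc32, so it can serve as an independent reference. The signed Int8 values would first be mapped to unsigned bytes:

```python
import zlib

# First few Int8 values from the array in the question; the full
# 450-element array would be handled the same way.
signed = [45, 35, 76, 70, 67, 68, 95, 70, 79, 84, -79]

# Two's-complement mapping from a signed Int8 to an unsigned byte
# (e.g. -79 becomes 177).
data = bytes(b & 0xFF for b in signed)

print(hex(zlib.crc32(data)))  # independent CRC-32 reference value
```

If all three implementations are fed identical bytes, they must agree; whichever side disagrees with the reference is the one mishandling the input.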

Convert Nepali date to english date using postgresql

I have Nepali dates in the employment table of the kep database and I want to convert these dates into English (Gregorian) dates using PostgreSQL. Please guide me.
Here is my table
id date
1 2071/1/4
2 2071/1/29
3 2069/4/24
SQLFiddle
1) Import the list of Nepali years and the length of each month (I copied the data from here). The first column is the Nepali year; each remaining column is the length of one month in days (so the second column of the table holds the length of the first month of every year).
-- drop table if exists tmpcal;
create table tmpcal (nyear int, a int, b int, c int, d int, e int, f int, g int, h int, i int, j int, k int, l int);
insert into tmpcal values
(2000 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2001 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2002 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2003 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2004 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2005 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2006 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2007 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2008 , 31, 31, 31, 32, 31, 31, 29, 30, 30, 29, 29, 31),
(2009 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2010 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2011 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2012 , 31, 31, 31, 32, 31, 31, 29, 30, 30, 29, 30, 30),
(2013 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2014 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2015 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2016 , 31, 31, 31, 32, 31, 31, 29, 30, 30, 29, 30, 30),
(2017 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2018 , 31, 32, 31, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2019 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2020 , 31, 31, 31, 32, 31, 31, 30, 29, 30, 29, 30, 30),
(2021 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2022 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 30),
(2023 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2024 , 31, 31, 31, 32, 31, 31, 30, 29, 30, 29, 30, 30),
(2025 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2026 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2027 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2028 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2029 , 31, 31, 32, 31, 32, 30, 30, 29, 30, 29, 30, 30),
(2030 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2031 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2032 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2033 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2034 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2035 , 30, 32, 31, 32, 31, 31, 29, 30, 30, 29, 29, 31),
(2036 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2037 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2038 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2039 , 31, 31, 31, 32, 31, 31, 29, 30, 30, 29, 30, 30),
(2040 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2041 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2042 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2043 , 31, 31, 31, 32, 31, 31, 29, 30, 30, 29, 30, 30),
(2044 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2045 , 31, 32, 31, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2046 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2047 , 31, 31, 31, 32, 31, 31, 30, 29, 30, 29, 30, 30),
(2048 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2049 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 30),
(2050 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2051 , 31, 31, 31, 32, 31, 31, 30, 29, 30, 29, 30, 30),
(2052 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2053 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 30),
(2054 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2055 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2056 , 31, 31, 32, 31, 32, 30, 30, 29, 30, 29, 30, 30),
(2057 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2058 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2059 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2060 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2061 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2062 , 30, 32, 31, 32, 31, 31, 29, 30, 29, 30, 29, 31),
(2063 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2064 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2065 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2066 , 31, 31, 31, 32, 31, 31, 29, 30, 30, 29, 29, 31),
(2067 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2068 , 31, 31, 32, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2069 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2070 , 31, 31, 31, 32, 31, 31, 29, 30, 30, 29, 30, 30),
(2071 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2072 , 31, 32, 31, 32, 31, 30, 30, 29, 30, 29, 30, 30),
(2073 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 31),
(2074 , 31, 31, 31, 32, 31, 31, 30, 29, 30, 29, 30, 30),
(2075 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2076 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 30),
(2077 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31),
(2078 , 31, 31, 31, 32, 31, 31, 30, 29, 30, 29, 30, 30),
(2079 , 31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30),
(2080 , 31, 32, 31, 32, 31, 30, 30, 30, 29, 29, 30, 30),
(2081 , 31, 31, 32, 32, 31, 30, 30, 30, 29, 30, 30, 30),
(2082 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 30, 30),
(2083 , 31, 31, 32, 31, 31, 30, 30, 30, 29, 30, 30, 30),
(2084 , 31, 31, 32, 31, 31, 30, 30, 30, 29, 30, 30, 30),
(2085 , 31, 32, 31, 32, 30, 31, 30, 30, 29, 30, 30, 30),
(2086 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 30, 30),
(2087 , 31, 31, 32, 31, 31, 31, 30, 30, 29, 30, 30, 30),
(2088 , 30, 31, 32, 32, 30, 31, 30, 30, 29, 30, 30, 30),
(2089 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 30, 30),
(2090 , 30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 30, 30);
2) Assign a date to every Nepali date:
first convert columns into rows (using union all)
then generate the Nepali days (using generate_series())
at the end, number the rows (using row_number()) and add this number to the date 1943-04-14, subtracting 1 (I used this converter to match the Nepali date 2000/01/01 to a common-era date).
-- drop table if exists cal_conversion;
create table cal_conversion as (
with tmp as (
select nyear, 1::int as nmonth, a as nday from tmpcal union all
select nyear, 2, b from tmpcal union all
select nyear, 3, c from tmpcal union all
select nyear, 4, d from tmpcal union all
select nyear, 5, e from tmpcal union all
select nyear, 6, f from tmpcal union all
select nyear, 7, g from tmpcal union all
select nyear, 8, h from tmpcal union all
select nyear, 9, i from tmpcal union all
select nyear, 10, j from tmpcal union all
select nyear, 11, k from tmpcal union all
select nyear, 12, l from tmpcal
)
select
*,
nyear || '/' || nmonth || '/' || nday as ndate,
'1943-04-14'::date + row_number() over(order by nyear, nmonth, nday)::int - 1 as edate
from (
select
nyear,
nmonth,
generate_series(1, nday) as nday
from tmp) x
);
3) Finally, use our conversion table:
Sample data:
-- drop table if exists test_data;
create table test_data (
id int,
ndate varchar);
insert into test_data values
(1,'2071/1/4'),
(2,'2071/1/29'),
(3,'2069/4/24');
Usage (simple join):
select
ndate,
id,
edate
from
test_data
join cal_conversion using (ndate);
Result:
2069/4/24;3;2012-08-08
2071/1/29;2;2014-05-12
2071/1/4;1;2014-04-17
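The same cumulative-day idea can be sketched outside the database in Python (a sketch: month_lengths is assumed to hold the same {year: [12 month lengths]} data as tmpcal, truncated here to two years, with the same anchor BS 2000/01/01 = 1943-04-14 CE):

```python
from datetime import date, timedelta

# Month lengths copied from the first two rows of the tmpcal table above.
month_lengths = {
    2000: [30, 32, 31, 32, 31, 30, 30, 30, 29, 30, 29, 31],
    2001: [31, 31, 32, 31, 31, 31, 30, 29, 30, 29, 30, 30],
}
EPOCH = date(1943, 4, 14)  # corresponds to BS 2000/01/01

def bs_to_ad(nyear, nmonth, nday):
    # Count the days in every month preceding (nyear, nmonth), then add
    # the day-of-month offset, mirroring the row_number() trick in SQL.
    days = 0
    for y in sorted(month_lengths):
        for m, length in enumerate(month_lengths[y], start=1):
            if (y, m) < (nyear, nmonth):
                days += length
    return EPOCH + timedelta(days=days + nday - 1)

print(bs_to_ad(2000, 1, 1))  # -> 1943-04-14
```

With the full table loaded into month_lengths, this reproduces the conversion table's edate values.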

Calculating eigenvector centrality using NetworkX

I'm using the NetworkX library to work with some small- to medium-sized unweighted, unsigned, directed graphs representing usage of a Web 2.0 site (smallest graph: less than two dozen nodes, largest: a few thousand). One of the things I want to calculate is eigenvector centrality, as follows:
>>> eig = networkx.eigenvector_centrality(my_graph)
>>> eigs = [(v,k) for k,v in eig.iteritems()]
>>> eigs.sort()
>>> eigs.reverse()
However, this gives unexpected results: nodes with 0 outdegree but receiving inward arcs from very central nodes appear at the very back of the list with 0.0 eigenvector centrality (not being a mathematician I may have got this confused, but I don't think that outward arcs should make any difference to a node's centrality to a directed graph). In the course of investigating these results, I noticed from the documentation that NetworkX calculates 'right' eigenvector centrality by default; out of curiosity, I decided to calculate 'left' eigenvector centrality by the recommended method, i.e. reversing the graph before calculating eigenvector centrality (see Networkx documentation). To my surprise, I got exactly the same result: every node was calculated to have exactly the same eigenvector centrality as before. I think this should be a very unlikely outcome (see Wikipedia article), but I have since replicated it with all the graphs I'm working with. Can anyone explain to me what I'm doing wrong?
N.B. Using the NetworkX implementation of the PageRank algorithm provides the results I was expecting, i.e. nodes receiving inward arcs from very central nodes have high centrality even if their outdegree is 0. PageRank is usually considered to be a variant of eigenvector centrality (see Wikipedia article).
Edit: following a request from Aric, I have included some data. This is an anonymised version of my smallest graph. (I couldn't post toy data in case the problem is specific to the structure of my graphs.) Running the code below on my machine (with Python 2.7) appears to reveal (a) that each node's right and left eigenvector centrality are the same, and (b) that nodes with outdegree 0 invariably also have eigenvector centrality 0, even if they are quite central to the graph as a whole (e.g. node 61).
import networkx
anon_e_list = [(10, 59), (10, 15), (10, 61), (15, 32), (16, 31), (16, 0), (16, 37), (16, 54), (16, 45), (16, 56), (16, 10), (16, 8), (16, 36), (16, 24), (16, 30), (18, 34), (18, 36), (18, 30), (19, 1), (19, 3), (19, 51), (19, 21), (19, 40), (19, 41), (19, 30), (19, 14), (19, 61), (21, 64), (26, 1), (31, 1), (31, 3), (31, 51), (31, 62), (31, 33), (31, 40), (31, 23), (31, 30), (31, 18), (31, 13), (31, 46), (31, 61), (32, 3), (32, 2), (32, 33), (32, 6), (32, 7), (32, 9), (32, 15), (32, 17), (32, 18), (32, 23), (32, 30), (32, 5), (32, 27), (32, 34), (32, 35), (32, 38), (32, 40), (32, 42), (32, 43), (32, 46), (32, 47), (32, 62), (32, 56), (32, 57), (32, 59), (32, 64), (32, 61), (33, 0), (33, 31), (33, 2), (33, 7), (33, 9), (33, 10), (33, 12), (33, 64), (33, 14), (33, 46), (33, 16), (33, 17), (33, 18), (33, 19), (33, 20), (33, 21), (33, 22), (33, 23), (33, 30), (33, 26), (33, 28), (33, 11), (33, 34), (33, 32), (33, 35), (33, 37), (33, 38), (33, 39), (33, 41), (33, 43), (33, 45), (33, 24), (33, 47), (33, 48), (33, 49), (33, 58), (33, 62), (33, 53), (33, 54), (33, 55), (33, 60), (33, 57), (33, 59), (33, 5), (33, 52), (33, 63), (33, 61), (34, 58), (34, 4), (34, 33), (34, 20), (34, 55), (34, 28), (34, 11), (34, 64), (35, 18), (35, 60), (35, 61), (37, 34), (37, 48), (37, 49), (37, 18), (37, 33), (37, 39), (37, 21), (37, 42), (37, 26), (37, 59), (37, 44), (37, 12), (37, 11), (37, 61), (41, 3), (41, 50), (41, 18), (41, 52), (41, 33), (41, 54), (41, 19), (41, 22), (41, 5), (41, 46), (41, 25), (41, 44), (41, 13), (41, 62), (41, 29), (44, 32), (44, 3), (44, 18), (44, 33), (44, 40), (44, 41), (44, 30), (44, 23), (44, 61), (50, 17), (50, 37), (50, 62), (50, 41), (50, 25), (50, 43), (50, 27), (50, 28), (50, 29), (54, 33), (54, 41), (54, 10), (54, 59), (54, 63), (54, 61), (58, 62), (58, 46), (59, 31), (59, 34), (59, 30), (59, 49), (59, 18), (59, 33), (59, 9), (59, 10), (59, 8), (59, 13), (59, 24), (59, 61), (60, 34), (60, 16), (60, 35), (60, 50), (60, 4), (60, 6), (60, 59), (60, 24), 
(63, 40), (63, 33), (63, 30), (63, 61), (63, 53)]
my_graph = networkx.DiGraph()
my_graph.add_edges_from(anon_e_list)
r_eig = networkx.eigenvector_centrality(my_graph)
my_graph2 = my_graph.reverse()
l_eig = networkx.eigenvector_centrality(my_graph2)
for nd in my_graph.nodes():
    print 'node: {} indegree: {} outdegree: {} right eig: {} left eig: {}'.format(nd, my_graph.in_degree(nd), my_graph.out_degree(nd), r_eig[nd], l_eig[nd])
These two lines
my_graph2 = my_graph.copy()
my_graph2.reverse()
should be replaced with
my_graph2 = my_graph.reverse()
since the reverse() method by default returns a copy of the graph.
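That default can be checked quickly (a sketch on a toy graph, not the question's data):

```python
import networkx as nx

# DiGraph.reverse() returns a reversed copy by default, so the original
# graph stays untouched and no separate .copy() call is needed.
g = nx.DiGraph([(0, 1), (1, 2)])
rg = g.reverse()

print(sorted(g.edges()))   # original edges unchanged: [(0, 1), (1, 2)]
print(sorted(rg.edges()))  # reversed in the copy: [(1, 0), (2, 1)]
```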