"Error: n() should only be called in a data context" - group-by

I am trying to condense my data from multiple rows into a wider dataset. I have a dataset on bird nests and am trying to wrangle it so that juveniles and parents sharing the same year and nest no longer sit on separate rows.
E.g.
Year  Nest  Sex  Ring_Number
2009  1     M    321
2009  1     F    189
2009  1     J    232
2009  1     J    101
I want my data to instead look as follows:
Year  Nest  M_Ring_Number  F_Ring_Number  J_Ring_Number
2009  1     321            189            232
2009  1     321            189            101
Is anyone able to help me (I am new to using R)?
Thanks
CI <- C3 %>% group_by(Nest) %>% mutate(grouped_id = 1:n())
Error: n() should only be called in a data context
Call rlang::last_error() to see a backtrace

I have faced the same problem. The function is being masked by another package (typically plyr, when it is loaded after dplyr). I detached all the packages, loaded only dplyr, and it worked fine. You can also namespace-qualify the call, e.g. dplyr::mutate(), or use dplyr::row_number() instead of 1:n(), so the masking no longer matters. (Note: the pipe package is spelled magrittr, not margittr, and it only provides %>%, not n().) Try your luck.
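For the reshaping the question actually asks about, here is a base-R sketch (data frame rebuilt from the example in the question; column names assumed as shown):

```r
# Example data, as in the question
d <- data.frame(
  Year        = 2009,
  Nest        = 1,
  Sex         = c("M", "F", "J", "J"),
  Ring_Number = c(321, 189, 232, 101)
)

# Spread the adults (M/F) into one row per Year/Nest ...
adults <- reshape(d[d$Sex != "J", ],
                  idvar     = c("Year", "Nest"),
                  timevar   = "Sex",
                  direction = "wide")
names(adults) <- sub("^Ring_Number\\.(.)$", "\\1_Ring_Number", names(adults))

# ... then attach one row per juvenile
juv <- d[d$Sex == "J", c("Year", "Nest", "Ring_Number")]
names(juv)[3] <- "J_Ring_Number"

out <- merge(adults, juv, by = c("Year", "Nest"))
# out has two rows: the M/F ring numbers repeated, one juvenile each
```

The same result is a pivot_wider() plus a join in the tidyverse; base R is used here only to keep the sketch dependency-free.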

Related

Pivot table export excel for header group

How can I export a pivot table to Excel?
Name  Code  shop  sum1  sum2
A     1     16    2     3
B     2     14    4     3
C     4     13    2     5
D     3     33    1     6
Name, Code => rowGroup / shop => pivot
On screen it then looks like this:
Name  Code  total1  total2  sum1  sum2  sum1  sum2
A     1     6       6       2     3     4     3    ...
but when I export to Excel the layout is different (see the attached pictures). What should I change so the exported file looks the same as the screen?
I was right in my comment on the question: this happens because groupHideOpenParents is not applied to the exported file. It is a known issue, listed among the "Standard Feature Requests" in the ag-Grid pipeline: https://www.ag-grid.com/ag-grid-pipeline/. Ticket key: AG-3756 "[Excel Export] Allow groupHideOpenParents to apply in Excel export".
Unfortunately it is in the backlog, so there is no way to know when it will be fixed and released. Based on this blog entry: https://ag-grid.zendesk.com/hc/en-us/articles/360002213531-Where-Is-My-Ticket-
"Standard Feature Requests are usually fixed within 2 to 4 releases after they have been raised."
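For reference, the setup involved looks roughly like this (a minimal sketch; the option and method names are standard ag-Grid Enterprise API, while the column definitions are my reconstruction from the sample data):

```javascript
// Minimal sketch of the grid setup from the question (ag-Grid Enterprise).
const gridOptions = {
  columnDefs: [
    { field: 'Name', rowGroup: true },
    { field: 'Code', rowGroup: true },
    { field: 'shop', pivot: true },
    { field: 'sum1', aggFunc: 'sum' },
    { field: 'sum2', aggFunc: 'sum' },
  ],
  pivotMode: true,
  // Affects on-screen rendering only; per ticket AG-3756 it is not yet
  // honoured by the Excel export.
  groupHideOpenParents: true,
};

// The export itself; until AG-3756 ships, the exported layout will not
// match the screen when groupHideOpenParents is in use.
// gridOptions.api.exportDataAsExcel();
```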

fuzzy matching when dataframes have different lengths

This question was marked as a duplicate, but the linked question has no answer, so I am trying again.
I have two datasets. df2:
> Page Title ... dummy
> 383 India Companies Act 2013: Five Key Points Abou... ... 1
> 384 Seven Things Every Company Should Know about A... ... 1
> 385 What Is a Low-Carbon Lifestyle, and How Can I ... ... 1
> 386 Top 10 CSR Events of 2010 | Blog | BSR ... 1
> 387 10 Social Media Rules for Social Responsibilit... ... 1
df1
title
0 Building Responsibly Announces Worker Welfare...
1 Announcing a New Collaboration Using Tech to ...
2 Sustainability Standards Driving Impact for W...
3 What the Right to Own Property Means for a La...
4 The Digital Payments Opportunity: A Conversation
5 The US$660 Billion Sustainable Supply Chain F...
6 A New Tool to Assess the Impact of Your Healt...
7 The Global Climate Action Summit: How Busines...
8 Two Ways Responsible Investors Can Promote In...
9 Where BSR Will Be in June 2018
10 Scaling a Renewable Future for Internet Power
11 How Health Training Changed Social Norms in H...
12 A Map to Help Business Collaborate with Anti-...
they have different lengths.
I tried the approach
df2['Page Title'] = df2['Page Title'].apply(lambda x: difflib.get_close_matches(x, df1.title)[0])
but I get the following error, possibly because of the different lengths:
df2['Page Title'] = df2['Page Title'].apply(lambda x: difflib.get_close_matches(x, df1.title)[0])
IndexError: list index out of range
How do I solve this?
This should work (fuzz comes from the fuzzywuzzy package, the deprecated get_value is replaced with .at, and the column names are matched to the correct frames):
from fuzzywuzzy import fuzz

matched_titles = []
for i in df2.index:
    page_title = df2.at[i, "Page Title"]
    for j in df1.index:
        title = df1.at[j, "title"]
        score = fuzz.partial_ratio(page_title, title)
        if score > 80:
            matched_titles.append([page_title, title, score])
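As for the IndexError in the question itself: difflib.get_close_matches() returns an empty list when no candidate scores above its cutoff (0.6 by default), and indexing [0] into that empty list is what raises the error; it is not the differing lengths as such. A small guard fixes it (a sketch; the None fallback is just one possible choice):

```python
import difflib

def closest(x, candidates, cutoff=0.6):
    """Return the best fuzzy match for x, or None if nothing clears the cutoff."""
    matches = difflib.get_close_matches(x, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# Usage on the frames from the question:
# df2['Matched Title'] = df2['Page Title'].apply(lambda x: closest(x, df1['title']))
```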

tensorflow checkpoint missing input tensor node

(Please pardon my long post; I dearly appreciate your help.)
I am training the squeezeDet model on Pascal VOC style custom data, as per the training code from the repository HERE:
train.py
model_definition and HERE
The saved model checkpoint performs well; I can see acceptable performance.
Now I am trying to freeze the model for deployment with Core ML to see how it performs on a mobile platform. The authors of the script only report performance in a GPU environment in their research paper.
I follow the steps recommended by TensorFlow; my commands are below.
First,
I write the graph out from the checkpoint meta file
path_to_ckpt_meta = rootdir + "model.ckpt-355000.meta"
path_to_ckpt_data = rootdir + "model.ckpt-355000"
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
saver = tf.train.import_meta_graph(path_to_ckpt_meta)
saver.restore(sess, path_to_ckpt_data)
tf.train.write_graph(tf.get_default_graph().as_graph_def(), rootdir, "model_ckpt_355000_graph_V2.pb", False)
Now
I check the graph summary and see all the tensors in the model. The output summary file is HERE.
However, when I check the checkpoint file using TensorFlow's inspect_checkpoint.py, I see no image_input node. The output of the inspection is HERE.
Second
I freeze the graph using the tensorflow freeze_graph.py function
python ./tensorflow/python/tools/freeze_graph.py \
--input_graph=path-to-dir/train/model_ckpt_355000_graph.pb \
--input_checkpoint=path-to-dir/train/model.ckpt-355000 \
--output_graph=path-to-dir/train/frozen_sqdt_ckpt_355000.pb \
--output_node_names=bbox/trimming/bbox,probability/score,probability/class_idx
The freeze_graph call completes without error and produces the frozen graph as per the command above.
Now,
when I check the frozen graph using the summarize_graph function call
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb
I get the following
No inputs spotted.
No variables spotted.
Found 3 possible outputs: (name=bbox/trimming/bbox, op=Transpose) (name=probability/score, op=Max) (name=probability/class_idx, op=ArgMax)
Found 2703452 (2.70M) const parameters, 0 (0) variable parameters, and 0 control_edges
Op types used: 130 Const, 68 Identity, 32 BiasAdd, 32 Conv2D, 31 Relu, 15 Mul, 14 Add, 10 ConcatV2, 9 Sub, 5 RealDiv, 5 Reshape, 4 Maximum, 4 Minimum, 3 StridedSlice, 3 MaxPool, 2 Exp, 2 Greater, 2 Cast, 2 Select, 1 Transpose, 1 Softmax, 1 Sigmoid, 1 Unpack, 1 RandomUniform, 1 QueueDequeueManyV2, 1 Pack, 1 Max, 1 Floor, 1 FIFOQueueV2, 1 ArgMax
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/tmp/logs/squeezeDet_NewDataset_test01_March02/train/frozen_sqdt_ckpt_355000.pb --show_flops --input_layer= --input_layer_type= --input_layer_shape= --output_layer=bbox/trimming/bbox,probability/score,probability/class_idx
This output suggests that no input is detected in the frozen graph. I check the summary of the frozen graph and find no image_input tensor. HERE
When I check my original graph (written in step 1) with summarize_graph, it does show inputs.
My troubleshooting
suggests there is some mix-up in the original author's code, where image_input is not provided as an input tensor. The confusing part is that I can see the input image tensor in the summary of the graph written from the checkpoint meta file.
My questions are:
-- Why does the frozen graph drop the input nodes when the original graph has them?
-- What can I do to change this and freeze the graph correctly?
Is there a transformation I need to perform to make this frozen model compatible with the Core ML format?
All your help is much appreciated.
Best
Aman
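Two observations, plus a sketch of a common workaround. First, a checkpoint stores only variable values, not graph nodes, so a placeholder like image_input will never appear in inspect_checkpoint output; that part is expected. Second, the op list above contains FIFOQueueV2 and QueueDequeueManyV2, which suggests the graph is fed from an input queue rather than a placeholder, and a queue-fed graph is exactly what summarize_graph reports as "No inputs spotted." A common fix is to re-import the graph with the queue's dequeue tensor remapped to a fresh placeholder, then freeze from that. A sketch (TF 1.x; the dequeue tensor name and the input shape below are hypothetical, read the real ones off your own graph summary):

```python
import tensorflow as tf  # TF 1.x

# Load the written-out GraphDef.
graph_def = tf.GraphDef()
with tf.gfile.GFile('model_ckpt_355000_graph_V2.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Fresh placeholder to act as the model input.
# NOTE: the shape and the dequeue tensor name below are hypothetical;
# take the real values from your summarize_graph output.
image_input = tf.placeholder(tf.float32, shape=[1, 375, 1242, 3],
                             name='image_input')

# Re-import the graph, splicing the placeholder in where the queue
# output used to feed the network.
tf.import_graph_def(
    graph_def,
    input_map={'fifo_queue_DequeueMany:0': image_input},
    name='')
```

After re-importing, write the remapped graph out with tf.train.write_graph and run freeze_graph on it again; summarize_graph should then report image_input as the input, which is also what the Core ML converter needs.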

AS400 Macro, input fields count

5250 emulator:
Hello everyone. I am looking for a way to count the input fields as shown on the attached picture. In this case I have 5 input fields.
Thanks in advance and best regards
It can be done!
Download this source: http://www.code400.com/ffd.php
You can comment out the GETKEY section from FFDRPG as you won't need it, and it would probably cause a failure anyway.
Also, remember when you use the command to put the record format name in as well as your display file name; don't just leave *FIRST in there, or you'll just get the fields from the first record format in the display file.
EDIT:
You'll need to add an extra field to the ListDs data structure:
D ListDs DS
D SfFld 1 10
D SfType 11 11
D SfUse 12 12
D BufferOut 13 16B 0
D FieldLen 21 24B 0
D Digits 25 28B 0
D Decimals 29 32B 0
D FieldDesc 33 82
If you add the 3rd field SfUse, you can check whether it contains 'I' so you only count Input Capable fields.
Check out the QUSLFLD API https://www.ibm.com/support/knowledgecenter/en/ssw_i5_54/apis/quslfld.htm if you want to see exactly what information can be retrieved by this API.
The example in the download uses the most basic format FLDL0100 but more information can be retrieved if you ask for format FLDL0200 or FLDL0300 but they will take longer to execute and you shouldn't need the extra info to achieve what you're after.
I'm not sure if this is possible, but you might find some joy with the DSM APIs.
QsnQry5250 has a maximum number of input fields return parameter, but it may just show you the maximum allowed on the display rather than the number you have on your screen.
There's an example here https://www.ibm.com/support/knowledgecenter/en/ssw_i5_54/apis/dsm1g.htm
And the API documentation here https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_72/apis/QsnQry5250.htm
Sorry I can't be of more help - I've never used those APIs and can't think of another way to achieve what you're after.
If you tell us the reason you need to know the number of input fields on screen, we may be able to suggest another way to achieve what you want to achieve.
Damian

What class of objects are in the environment? (R)

I wish to know what types of objects I have in my environment.
I can list their names like this:
ls()
But running something like
sapply(ls(), class)
would (obviously) not tell us the class of the objects themselves (function, numeric, factor and so on), because ls() returns the names as a character vector, so class() just sees "character".
Using
ls.str()
will tell me what class my objects are, but I won't be able to (for example) ask for all the objects that are factors/data.frames/functions, and so on.
I could capture.output() of ls.str(), but there is probably a smarter way. Any idea what it is?
This should do the trick:
sapply(ls(), function(x){class(get(x))})
The lsos() function posted in this SO question answers the question too:
> lsos()
Type Size Rows Columns
y data.frame 1864 26 2
r character 320 2 NA
txt character 208 3 NA
x integer 72 10 NA
>
You can also use the mget() function to get all the objects at once:
sapply(mget(ls()), class)
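Building on that, a small base-R sketch for pulling out just the objects of a given class (the x/y/f objects here are a toy environment for illustration):

```r
# Toy environment
x <- 1:10
y <- data.frame(a = 1:3, b = letters[1:3])
f <- function() NULL

# Name -> (first) class lookup for everything in the environment
classes <- sapply(mget(ls()), function(obj) class(obj)[1])

# All data.frames in the environment:
names(classes)[classes == "data.frame"]

# Or, testing by inheritance (catches subclasses too):
Filter(function(nm) inherits(get(nm), "function"), ls())
```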