Taleo Connect Client Script

Summary: I'm trying to write a script that will pull through a candidate's "answer" based on a "question code", and have it pull only one row of data per candidate.
Problems:
The "answer" field brings through the answers to multiple questions (which in turn brings multiple rows; 1 for each question/answer), and I only care about bringing through the answer to the one question.
If I filter the entire script off of the question code (DQ_009), it will only pull through candidates that have answered the question and it does not pull through candidates that have not answered it.
I'm not a developer/code writer by trade. I'm being asked to do this because no one else has any idea what's going on.
What I'm looking for:
I'm trying to figure out how to use a complex projection or a subquery (or something else; I'm not sure of the right approach) within TCC that will bring through only the answer to that specific question, but will also bring through a row for people who have not answered it (leaving the answer blank).
Code explanation:
The code I have provided is copied and pasted from the "source" view of the script. I have it filtering on candidate number 36620, a candidate who has not answered the question and is therefore not pulling through a value.
I'm trying to have it pull through the value for the "Question" section only when it equals "DQ_009" (which ends up being 'Yes' or 'No'); if the candidate has not answered the question, so no answer exists, I want the field to be blank while still returning a row of data for them.
Any help is extremely appreciated.
<quer:query productCode="RC1704" model="http://www.taleo.com/ws/tee800/2009/01" projectedClass="Profile" locale="en" mode="CSV" csvheader="true" largegraph="true" preventDuplicates="false" xmlns:quer="http://www.taleo.com/ws/integration/query">
  <quer:subQueries/>
  <quer:projections>
    <quer:projection alias="Candidate_ID">
      <quer:field path="ProfileInformation,Candidate,Number"/>
    </quer:projection>
    <quer:projection alias="Visa_Needed">
      <quer:field path="ProfileInformation,Candidate,QuestionAnswers,Answer,Description"/>
    </quer:projection>
    <quer:projection alias="Question">
      <quer:field path="ProfileInformation,Candidate,QuestionAnswers,Question,Code"/>
    </quer:projection>
  </quer:projections>
  <quer:projectionFilterings/>
  <quer:filterings>
    <quer:filtering>
      <quer:equal>
        <quer:field path="ProfileInformation,Candidate,Number"/>
        <quer:long>36620</quer:long>
      </quer:equal>
    </quer:filtering>
  </quer:filterings>
  <quer:sortings/>
  <quer:sortingFilterings/>
  <quer:groupings/>
  <quer:joinings/>
</quer:query>

Here is a way to do it using a complex projection. Note that I modified your query to start from the "Candidate" entity instead of "Profile" to shorten the field paths.
With the complex projection you can modify the filters of the main query so it also extracts candidates that did not answer the question.
<?xml version="1.0" encoding="UTF-8"?>
<quer:query productCode="RC1704" model="http://www.taleo.com/ws/tee800/2009/01" projectedClass="Candidate" locale="en" alias="MainQuery" mode="CSV" csvheader="true" largegraph="true" preventDuplicates="false" xmlns:quer="http://www.taleo.com/ws/integration/query">
  <quer:subQueries/>
  <quer:projections>
    <quer:projection alias="Candidate_ID">
      <quer:field path="Number"/>
    </quer:projection>
    <quer:projection alias="Visa_Needed">
      <quer:query projectedClass="Candidate" alias="VisaAnswer">
        <quer:projections>
          <quer:projection alias="AnswerDesc">
            <quer:field path="QuestionAnswers,Answer,Description"/>
          </quer:projection>
        </quer:projections>
        <quer:filterings>
          <quer:filtering>
            <quer:equal>
              <quer:field path="QuestionAnswers,Question,Code"/>
              <quer:string>DQ_009</quer:string>
            </quer:equal>
          </quer:filtering>
          <quer:filtering>
            <quer:equal>
              <quer:field path="Number"/>
              <quer:field ownerQuery="MainQuery" path="Number"/>
            </quer:equal>
          </quer:filtering>
        </quer:filterings>
      </quer:query>
    </quer:projection>
  </quer:projections>
  <quer:projectionFilterings/>
  <quer:filterings>
    <quer:filtering>
      <quer:equal>
        <quer:field path="Number"/>
        <quer:long>36620</quer:long>
      </quer:equal>
    </quer:filtering>
  </quer:filterings>
  <quer:sortings/>
  <quer:sortingFilterings/>
  <quer:groupings/>
  <quer:joinings/>
</quer:query>

Related

Replace null values in Taleo-Connect-Client extract

I am extracting some values from Taleo via a TCC export script and a few rows in a column are blank. I want the blank values to be written out as 'NULL'.
For example, the SignonBonus column looks like this:
1. 20000
2.
3. 30000
4. 250000
How is this possible?
I have tried the code below. I am writing the bonus amount in words, so I have used the concatenate function, but this replaceNull logic is not working. Can someone suggest a fix?
<quer:projection alias="BonusinWords">
<quer:concatenate>
<quer:switchByCriterion>
<quer:cases>
<quer:case>
<quer:replaceNull>
<quer:field path="CurrentOffer,SignOnBonus"/>
<quer:string>Null</quer:string>
</quer:replaceNull>
</quer:case>
<quer:case>
</quer:case>
</quer:cases>
</quer:switchByCriterion>
</quer:concatenate>
</quer:projection>
Thanks a lot in advance.
Shivam
I solved it on my own.
I used a not-null-style comparison at the start, then wrote the query logic below.
<quer:projection alias="BonusinWords">
<quer:switchByCriterion>
<quer:cases>
<quer:case>
<quer:notEqual>
<quer:field path="CurrentOffer,SignOnBonus"/>
<quer:string>1000000000000000000000</quer:string>
</quer:notEqual>
<quer:concatenate>
<!--Between this I wrote the logic for words!>
</quer:concatenate>
</quer:case>
</quer:cases>
</quer:switchByCriterion>
</quer:projection>
It is not exactly a not-null condition, but it serves my purpose since the bonus amount can never be this huge (or I can add more zeros). If you know how a real not-null check can be expressed here, I will try that as well.
Thanks,
Shivam
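One possible cleaner alternative, offered only as an untested sketch that reuses the elements already shown in the question: use replaceNull directly as the projection expression instead of nesting it inside switchByCriterion.
<quer:projection alias="SignonBonus">
  <quer:replaceNull>
    <!-- Untested assumption: replaceNull is accepted here as the top-level projection expression -->
    <quer:field path="CurrentOffer,SignOnBonus"/>
    <quer:string>NULL</quer:string>
  </quer:replaceNull>
</quer:projection>
If TCC accepts this, the concatenate logic for spelling out the amount could then be wrapped around the substituted value instead of relying on the huge sentinel number.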

Negation of osm class or type

If you search for an airport (aeroway=aerodrome) around Brescia, Italy, you will also receive a hit for a military airfield, which happens to be tagged as an aerodrome as well (it's tagged: aeroway=aerodrome, landuse=military, military=airfield). To avoid this I want to search for aeroway=aerodrome but exclude [military]. I've tried [! military] and [military~"^$"]. Any suggestions?
This particular case may be rare, I realize, but the concept of negating multi-classed elements is useful, and multi-classed elements are not a rare occurrence. In general they seem to be complementary, not conflicting, so it's usually not an issue. I also realize that I can weed out conflicting hits with some back-end processing. I just wasn't expecting a military airfield to appear alongside a commercial aerodrome.
In any case, here is a shortened version of my query. I include node, way and relation in the full query:
http://overpass-api.de/api/interpreter?
data=[out:json][timeout:25][bbox:45.400861,9.868469,45.641408,10.542755];
(node[aeroway~%22aero|term|heli%22][! military]; ... ) out etc
or:
http://overpass-api.de/api/interpreter?
data=[out:json][timeout:25][bbox:45.400861,9.868469,45.641408,10.542755];
(node[aeroway~%22aero|term|heli%22][military~%22^$%22]; ... ) out etc
If you try to run it, you'll need to include way and relation.
Also, as you can see I don't exactly ask for aeroway=aerodrome. I include terminal and variations on heliport. My experience has been that some aerodromes are tagged only as "terminal", so if you're looking for an airport, asking for "aerodrome" isn't enough.
The correct syntax for negation is as follows:
[military !~ ".*"]
Please see the documentation on the OSM wiki for details.
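Applied to the shortened query above, the node clause would then look something like this (an untested sketch; the way and relation clauses would take the same filter):
http://overpass-api.de/api/interpreter?
data=[out:json][timeout:25][bbox:45.400861,9.868469,45.641408,10.542755];
(node[aeroway~%22aero|term|heli%22][military!~%22.*%22]; ... ) out etc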

What text (in English) should I use when asking the user to overwrite a document? [closed]

I write a lot of applications in C# and I'm trying to sew up some holes in my standard practices.
Specifically, I'm trying to decide on the best text to use in a message box, and I thought I'd ask the StackOverflow community since I believe that many opinions are always better than one.
What I have currently is:
"Document XXX.docx already exists. Okay to overwrite?"
Buttons for; Yes, No and Cancel
I'm really interested to see which examples turn out to be the most popular.
There are no limits on the style used: formal, casual, humorous, etc. All suggestions are welcome. Aim to err safely on the side of political correctness, though.
On a small side note: It would also be great, but by no means essential, to consider that the same text could also be suitable for a command line program.
Please note: English language only please. For other languages, please raise a new question.
Personally, I like to see a bit more context and slightly different wording. Something like:
"<existing document name>" already exists in "<destination path>".
Would you like to replace it (Y/N)?
or perhaps with even more information:
"<existing document name>" (<bytes>) (<date modified>) already exists in
"<destination path>".
Would you like to replace it with file of size <new bytes>, last modified
<new date modified> (Y/N)?
I think "replace" is a bit more clear than "overwrite" - and (speculation) may translate into other languages, and maintain the intended meaning more often.
...and one last option with new file name/location info:
"<existing document name>" (<bytes>) (<date modified>) already exists in
"<destination path>".
Would you like to replace it with file "<new file name>" in "<new path>" of
size <new bytes>, last modified <new date modified> (Y/N)?
This last one would probably just show a temp file / buffer location for an initial file save--but it is reusable, and more meaningful when doing a file copy.
Hope you find one of these useful.
Cheers,
Hans
Title: Overwrite?
"XXX.docx exists.
Would you like to overwrite XXX.docx?"
Buttons: Overwrite, Keep, Panic
Simple solutions are usually best. For example, a Windows user might find the standard file-copy prompt familiar. It gives 3 options:
Replace
Abort the action
Create with a new name

How to filter issues on GitHub by issue id?

I need to filter issues by id. For instance, I have #1, #2, #3, #4 issues, and I need to show only 2nd and 3rd issues (#2, #3). Is it possible?
To show a specific issue, for example issue #4, you can just write its ID in the URL like this:
https://github.com/unknown-horizons/unknown-horizons/issues/4
I would like to share a full example of how to filter by issue id.
As previously mentioned, you just need to type in the ID numbers, e.g. issue: 1 2 3, and it will filter to those tickets.
Expanding on Jana's answer above, you can also type the issue's ID (or several IDs) into the Filters search bar without the hash symbol (#). For example, if you want to see issues #66, #42 and #17, you can just enter 66 42 17 to show only those issues.
GitHub provides many ways to search for issues, but no direct way to filter by a list of numbers. That's probably because in most cases it doesn't make much sense.
Instead of trying to see just issues #2 and #3, ask yourself what those issues have in common. Why are you trying to see them together, and hide the others? The answer to this question should point you in the right direction.
For example,
if the answer is "they are both assigned to the same person", search by assignee:
assignee:Alex
if the answer is "they are both labelled as important bugs", search by label:
label:bug label:important
There are many other ways that you can search. Finding the commonalities between the issues is the key to effective searching.
If those two issues have something else in common that isn't yet reflected in the issue tracker, you can always add a label to them and then search by label as above.
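For example, the label search can also be written directly into the issues page URL of the repository mentioned earlier (the repository name is just that example; adjust the qualifiers to your own case):
https://github.com/unknown-horizons/unknown-horizons/issues?q=is%3Aissue+label%3Abug+label%3Aimportant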

Unicode character usage statistics [closed]

I am looking for some statistical data on the usage of Unicode characters in textual documents (with any markup). Googling brought no results.
Background: I am currently developing a finite state machine-based text processing tool. Statistical data on characters might help in searching for the right transitions. For instance, Latin characters are probably the most used, so it might make sense to check for those first.
Did anyone by chance gather or see such statistics?
(I'm not focused on specific languages or locales. Think of a general-purpose parser, like an XML parser.)
To sum up current findings and ideas:
Tom Christiansen gathered such statistics for the PubMed Open Access Corpus (see this question). I have asked whether he could share these statistics and am waiting for the answer.
As @Boldewyn and @nwellnhof suggested, I could run the analysis on the complete Wikipedia dump or on CommonCrawl data. I think these are good suggestions; I'll probably go with CommonCrawl.
So sorry, this is not an answer, but a good research direction.
UPDATE: I wrote a small Hadoop job and ran it on one of the CommonCrawl segments. I have posted my results in a spreadsheet here. Below are the first 50 characters:
0x000020 14627262
0x000065 7492745 e
0x000061 5144406 a
0x000069 4791953 i
0x00006f 4717551 o
0x000074 4566615 t
0x00006e 4296796 n
0x000072 4293069 r
0x000073 4025542 s
0x00000a 3140215
0x00006c 2841723 l
0x000064 2132449 d
0x000063 2026755 c
0x000075 1927266 u
0x000068 1793540 h
0x00006d 1628606 m
0x00fffd 1579150
0x000067 1279990 g
0x000070 1277983 p
0x000066 997775 f
0x000079 949434 y
0x000062 851830 b
0x00002e 844102 .
0x000030 822410 0
0x0000a0 797309
0x000053 718313 S
0x000076 691534 v
0x000077 682472 w
0x000031 648470 1
0x000041 624279 A
0x00006b 555419 k
0x000032 548220 2
0x00002c 513342 ,
0x00002d 510054 -
0x000043 498244 C
0x000054 495323 T
0x000045 455061 E
0x00004d 426545 M
0x000050 423790 P
0x000049 405276 I
0x000052 393218 R
0x000044 381975 D
0x00004c 365834 L
0x000042 353770 B
0x000033 334689 3
0x00004e 325299 N
0x000029 302497 )
0x000028 301057 (
0x000035 298087 5
0x000046 295148 F
To be honest, I have no idea whether these results are representative; as I said, I only analysed one segment. They look quite plausible to me. One can also easily spot that the markup has already been stripped off, so the distribution is not directly suitable for my XML parser. But it gives valuable hints on which character ranges to check first.
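In case anyone wants to reproduce something similar on a small scale, the sketch below shows the kind of per-character counting involved. It is not the actual Hadoop job, just a minimal local version, and the input file name is a placeholder:
import collections

counts = collections.Counter()
# Count every code point in a UTF-8 text sample (placeholder file name)
with open("sample.txt", encoding="utf-8", errors="replace") as f:
    for line in f:
        counts.update(line)

# Print the 50 most frequent code points in the same format as the list above
for ch, n in counts.most_common(50):
    label = ch if ch.isprintable() and not ch.isspace() else ""
    print(f"0x{ord(ch):06x} {n} {label}")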
I personally think the link to http://emojitracker.com/ in the near-duplicate question is the most promising resource for this. I have not examined the sources (I don't speak Ruby), but a real-time Twitter feed of character frequencies should give quite a different result than static web pages, and probably a radically different language distribution (I see a lot more Arabic and Turkish on Twitter than in my otherwise ordinary life). It's probably not exactly what you are looking for, but if we just go by the title of your question (which is probably what most visitors followed to get here), then that is what I would suggest as the answer.
Of course, this raises the question of what kind of usage you are trying to model. For static XML, which you seem to be after, maybe the Common Crawl set is a better starting point after all. Text coming out of an editorial process (however informal) looks quite different from spontaneous text.
Out of the suggested options so far, Wikipedia (and/or Wiktionary) is probably the easiest, since it's small enough for local download, far better standardized than a random web dump (all UTF-8, all properly tagged, most of it properly tagged by language and proofread for markup errors, orthography, and occasionally facts), and yet large enough (and probably already overkill by an order of magnitude or more) to give you credible statistics. But again, if the domain is different than the domain you actually want to model, they will probably be wrong nevertheless.