Extract text from a Wordnik API URL in Google Sheets

Can anyone explain how to import/extract a particular field from the following URL into Google Sheets:
Wordnik URL
I'm guessing there is an IMPORTXML query that could do it, but the page doesn't have the nodes that IMPORTXML usually uses. Instead, the response looks like this:
[{"mi":6.720745180909532,"gram1":"pretty","gram2":"much","wlmi":18.953166108085608},{"mi":6.650496643050408,"gram1":"pretty","gram2":"good","wlmi":18.469078820531266},{"mi":9.839004198061549,"gram1":"pretty","gram2":"darn","wlmi":17.298435816698845},{"mi":7.515791105774376,"gram1":"pretty","gram2":"cool","wlmi":15.515791105774376},{"mi":8.233704272151307,"gram1":"pretty","gram2":"impressive","wlmi":15.210984195651225}]
So if cell A2 has the URL that returns this response, how do I get B2 to give me the text after "gram2" (in this case "much", "good", "darn", "cool" and "impressive")?
Thanks
Tardy

I've come up with a workaround but it is kind of messy. I'd still like an answer to the question, but for reference and in case it's of use to anyone I'll post it here:
Using =IMPORTDATA(E10) (where E10 is the cell containing the URL) gave me an array (I guess based on .csv principles) that I could then manipulate using other tools, like REGEXREPLACE, to get at the relevant bits of text.
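A tidier route would be an Apps Script custom function that fetches the URL and parses the JSON directly. This is only a sketch: GRAM2 is a name I made up, and it assumes the URL returns the JSON array shown above.
function GRAM2(url) {
  // Fetch the raw JSON array returned by the Wordnik API URL.
  var response = UrlFetchApp.fetch(url);
  var records = JSON.parse(response.getContentText());
  // Return one gram2 value per row: "much", "good", "darn", ...
  return records.map(function(record) { return [record.gram2]; });
}
With that saved in the script editor, =GRAM2(A2) in B2 spills the gram2 values down column B.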


Issues with "QUERY(IMPORTRANGE)"

Here's my first question on this forum, though I've read through a lot of good answers here.
Can anyone tell me what I'm doing wrong with my attempt to do a query import from one sheet to a column in another?
Here's the formula I've tried, but all my adjustments still get me a parsing error.
=QUERY(IMPORTRANGE("https://docs.google.com/spreadsheets/d/1yGPdI0eBRNltMQ3Wr8E2cw-wNlysZd-XY3mtAnEyLLY/edit#gid=163356401","Master Treatment Log (Responses)!V2:V")"WHERE Col8="'&B2&'")")
Note that IMPORTRANGE is only needed for imports between spreadsheets. If you only import from one sheet into another within the same spreadsheet, I would suggest using filter() or query().
Assuming the value in B2 is actually a string (and not a number), you can try
=QUERY(IMPORTRANGE("https://docs.google.com/spreadsheets/d/1yGPdI0eBRNltMQ3Wr8E2cw-wNlysZd-XY3mtAnEyLLY/edit#gid=163356401","Master Treatment Log (Responses)!V2:V"), "WHERE Col8='"&B2&"'", 0)
Note the added comma before "WHERE". If you want to import a header row, change 0 to 1.
See if that helps. If not, please share a copy of your spreadsheet (sensitive data erased).
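For completeness, a same-spreadsheet filter() version might look like this (a sketch only: it assumes the data lives on the 'Master Treatment Log (Responses)' sheet and that column H is the column being compared against B2):
=FILTER('Master Treatment Log (Responses)'!V2:V, 'Master Treatment Log (Responses)'!H2:H=B2)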

How do I reduce the number of 301 redirect entries using wildcards and variables in Squarespace?

I recently renamed all of the URLs that make up my blog and have written redirects for almost every page, using wildcards where I can. Keep in mind that all I know is the * wildcard at this time.
Here is an example of what I have...
/season-1/2017/1/1/snl-s01e01-host-george-carlin -> /season-1/snl-s01e01-george-carlin 301
I want to write a catch-all that will redirect all 38 seasons of reviews with one redirect entry, but I can't figure out how to get rid of just the word "host" between s01e01- and -george-carlin. I was thinking it would work something like this:
/season-*/*/*/*/snl-s*e*-host-*-* -> /season-*/snl-s*e*[code to remove the word "host"]-*-* 301
Is that even close to being correct? Do I need that many *s?
Thanks in advance for any help...
Unfortunately, you won't be able to reduce the number of individual redirect entries using the redirect features that Squarespace has to offer, namely the wildcard (*) and a single variable ([name]). Multiple variables would be needed, but only [name] is supported.
The closest you can get is:
/season-1/*/*/*/snl-s01e01-host-[name] -> /season-1/snl-s01e01-[name] 301
But, if I'm understanding things, while the above redirect appears more general, it would still need to be copy/pasted for each post individually. So although it demonstrates the best that could be achieved, it is not a technical improvement.
Therefore, there are only two alternatives:
Create a Google Sheet (or other spreadsheet) where the old URLs are pasted into column one, a formula using ARRAYFORMULA and regular expressions to parse the old URL and generate the new URL is added in column two, and a formula that joins the two cells with -> and 301 is added in column three. With that done, you could select all cells in column three, copy, and paste them into the "URL Shortcuts" text area in Squarespace (see the formula sketch after the second alternative). It can be quite time consuming to figure out, write and test the correct formula, but it does avoid having to manually type out every redirect. Whether it is less time/effort in total depends on the number of redirects and one's proficiency with writing spreadsheet formulas. It could be that using the redirect code above would simplify the formula that'd need to be written in the spreadsheet, which may save some time.
Another alternative would be to remove your redirects and instead handle the redirect via JavaScript added to the 404/Page-not-found page. Because it sounds like you already have all of the redirects in place but are simply trying to reduce the overall number, I wouldn't recommend changing to a JavaScript-based approach. There are other drawbacks to using JavaScript, in any case.
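For reference, here is a sketch of the column-two and column-three formulas from the first alternative. It assumes all old URLs follow the /season-N/yyyy/m/d/snl-sNNeNN-host-... shape shown above, with the old URLs in column A starting at row 2:
=ARRAYFORMULA(IF(A2:A="",,REGEXREPLACE(A2:A,"^/season-(\d+)/\d+/\d+/\d+/(snl-s\d+e\d+)-host-","/season-$1/$2-")))
=ARRAYFORMULA(IF(A2:A="",,A2:A&" -> "&B2:B&" 301"))
The first formula strips the date segments and the word "host"; the second joins old URL, new URL, and the 301 code into ready-to-paste redirect lines.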

Importing a website to Google Sheets

I have tried searching everywhere online for a good answer but cannot seem to find anything that matches specifically what I am looking for.
When I use the IMPORTHTML function in Google Sheets, I end up with data that looks like:
${player.name} (${player.position}, ${team.abbrev}) ${opponent.abbrev} #${opponent_rank} ${minutes} ${pts} ${fgm}-${fga} ${ftm}-${fta} ${p3m}-${p3a} ${treb} ${ast} ${stl} ${blk} ${tov} ${pf} ${fp} $${salary} ${ratio}
The formula I am using looks like this:
=IMPORTHTML("", "table",2)
When I use the same as above (=IMPORTHTML("", "table", 2)), only with "0" as my index, it pulls this:
Opp Stats
Player Team Rank Min Pts FGM/A FTM/A 3PM/A Reb Ast Stl Blk Tov Foul FP Cost Value
Basically, I am attempting to pull the table data from this website:
https://www.numberfire.com/nba/fantasy/fantasy-basketball-projections
(because of my rep i cannot post more than two links, however my IMPORTHTML function has the above link input in both functions)
into a Google Sheet. Please help; any feedback is much appreciated. Thanks!
Best advice is to find another Web table you can import. If you do "view source" on the page, you will find that the table content is dynamically populated from a variable named NF_DATA.
You need to create a document script to extract the data you want:
function this_is_test() {
  // Fetch the page HTML; the table itself is filled in client-side from NF_DATA.
  var response = UrlFetchApp.fetch("https://www.numberfire.com/nba/fantasy/fantasy-basketball-projections");
  var raw_content = response.getContentText();
  // Match everything between "daily_projections":[ and the closing ].
  var re = new RegExp('"daily_projections":\\[[^\\]]+', 'i');
  var proj = raw_content.match(re);
  Logger.log(proj);
}
It will extract all the text between "daily_projections":[ and ], which is (as of today):
"daily_projections":[{"nba_player_id":"77","nba_game_id":"20015","date":"2016-01-19","nba_team_id":"21","opponent_id":"7","season":"2016","game_play_probability":"1.00","game_start":"1.00","minutes":36.3,"fgm":"8.8","fga":"17.1","p3m":"1.9","p3a":"4.8","ftm":"6.2","fta":"6.9","oreb":"0.8","dreb":"7.2","ast":"4.7","stl":"1.1","blk":"1.2","tov":"2.7","pf":"1.8","pts":"25.3","ts":"0.628","efg":"0.655","oreb_pct":"2.6","dreb_pct":"21.4","treb_pct":"12.4","ast_pct":"23.4","stl_pct":"1.5","blk_pct":"2.4","tov_pct":"12.1","usg":"27.8","ortg":"122.2","drtg":"101.8","nerd":"22.34","star_street_fp":43.08,"star_street_salary":0,"star_street_ratio":0,"draft_street_daily_fp":39.75,"draft_street_daily_salary":0,"draft_street_daily_ratio":0,"fanduel_fp":43.85,"fanduel_salary":9900,"fanduel_ratio":4.43,"draft_kings_fp":46.55,"draft_kings_salary":9900,"draft_kings_ratio":4.7,"fantasy_feud_fp":39.75,"fantasy_feud_salary":153600,"fantasy_feud_ratio":0.26,"fanthrowdown_fp":45.2,"fanthrowdown_salary":0,"fanthrowdown_ratio":0,"fantasy_aces_fp":44.25,"fantasy_aces_salary":7250,"fantasy_aces_ratio":6.1,"draftday_fp":45.25,"draftday_salary":18800,"draftday_ratio":2.41,"fantasy_score_fp":45.6,"fantasy_score_salary":9600,"fantasy_score_ratio":4.75,"draftster_fp":43.75,"draftster_salary":9400,"draftster_ratio":4.65,"yahoo_fp":44.8,"yahoo_salary":52,"yahoo_ratio":0.86,"treb":8},{"nba_player_id":"397","nba_game_id":"20015","date":"2016-01-19","nba_team_id":"21","opponent_id":"7","season":"2016","game_play_probability":"1.00","game_start":"1.00","minutes":35,"fgm":"8.6","fga":"18.0","p3m":"1.3","p3a":"4.1","ftm":"5.9","fta":"7.2","oreb":"1.3","dreb":"5.0","ast":"8.8","stl":"2.0","blk":"0.4","tov":"3.6","pf":"2.2","pts":"24.4","ts":"0.576","efg":"0.592","oreb_pct":"4.6","dreb_pct":"15.3","treb_pct":"10.2","ast_pct":"44.4","stl_pct":"3.0","blk_pct":"0.8","tov_pct":"14.6","usg":"31.3","ortg":"117.8","drtg":"101.2","nerd":"19.75","star_street_fp":44.48,"star_street_salary":0,"star_street_ratio":0,"draft_street_daily_fp":41.33,"draft_street_daily_salary":0,"draft_street_daily_ratio":0,"fanduel_fp":46.36,"fanduel_salary":10500,"fanduel_ratio":4.42,"draft_kings_fp":49.13,"draft_kings_salary":10700,"draft_kings_ratio":4.59,"fantasy_feud_fp":41.33,"fantasy_feud_salary":169800,"fantasy_feud_ratio":0.24,"fanthrowdown_fp":47.33,"fanthrowdown_salary":0,"fanthrowdown_ratio":0,"fantasy_aces_fp":46.68,"fantasy_aces_salary":7800,"fantasy_aces_ratio":5.98,"draftday_fp":47.1,"draftday_salary":20500,"draftday_ratio":2.3,"fantasy_score_fp":48.48,"fantasy_score_salary":9900,"fantasy_score_ratio":4.9,"draftster_fp":45.38,"draftster_salary":9500,"draftster_ratio":4.78,"yahoo_fp":47.01,"yahoo_salary":59,"yahoo_ratio":0.8,"treb":6.3},{"nba_player_id":"279","nba_game_id":"20016","date":"2016-01-19","nba_team_id":"11","opponent_id":"24","season":"2016","game_play_probability":"1.00","game_start":"1.00","minutes":36.7,"fgm":"7.6","fga":"18.1","p3m":"2.5","p3a":"6.9","ftm":"5.5","fta":"6.5","oreb":"1.1","dreb":"5.8","ast":"5.3","stl":"1.8","blk":"0.4","tov":"3.6","pf":"2.4","pts":"22.5","ts":"0.537","efg":"0.610","oreb_pct":"3.3","dreb_pct":"17.6","treb_pct":"10.5","ast_pct":"25.1","stl_pct":"2.5","blk_pct":"0.9","tov_pct":"15.3","usg":"29.1","ortg":"104.1","drtg":"99.2","nerd":"5.26","star_street_fp":38.55,"star_street_salary":0,"star_street_ratio":0,"draft_street_daily_fp":34.13,"draft_street_daily_salary":0,"draft_street_daily_ratio":0,"fanduel_fp":39.53,"fanduel_salary":8700,"fanduel_ratio":4.54,"draft_kings_fp":42.93,"draft_kings_salary":9200,"d
raft_kings_ratio":4.67,"fantasy_feud_fp":34.13,"fantasy_feud_salary":138800,"fantasy_feud_ratio":0.25,"fanthrowdown_fp":41.13,"fanthrowdown_salary":0,"fanthrowdown_ratio":0,"fantasy_aces_fp":39.88,"fantasy_aces_salary":6500,"fantasy_aces_ratio":6.14,"draftday_fp":41.3,"draftday_salary":16600,"draftday_ratio":2.49,"fantasy_score_fp":41.68,"fantasy_score_salary":8400,"fantasy_score_ratio":4.96,"draftster_fp":39.45,"draftster_salary":8000,"draftster_ratio":4.93,"yahoo_fp":40.78,"yahoo_salary":47,"yahoo_ratio":0.87,"treb":6.9},{"nba_player_id":"2137","nba_game_id":"20014","date":"2016-01-19","nba_team_id":"38","opponent_id":"17","season":"2016","game_play_probability":"1.00","game_start":"1.00","minutes":35,"fgm":"8.0","fga":"16.6","p3m":"0.4","p3a":"1.3","ftm":"4.5","fta":"6.0","oreb":"2.6","dreb":"7.8","ast":"2.2","stl":"1.0","blk":"2.2","tov":"1.9","pf":"2.6","pts":"20.8","ts":"0.541","efg":"0.521","oreb_pct":"8.6","dreb_pct":"24.8","treb_pct":"16.6","ast_pct":"11.5","stl_pct":"1.4","blk_pct":"4.9","tov_pct":"9.4","usg":"27.0","ortg":"107.9","drtg":"103.1","nerd":"5.60","star_street_fp":41.05,"star_street_salary":0,"star_street_ratio":0,"draft_street_daily_fp":36.55,"draft_street_daily_salary":0,"draft_street_daily_ratio":0,"fanduel_fp":41.08,"fanduel_salary":10300,"fanduel_ratio":3.99,"draft_kings_fp":44.25,"draft_kings_salary":10000,"draft_kings_ratio":4.43,"fantasy_feud_fp":36.55,"fantasy_feud_salary":149400,"fantasy_feud_ratio":0.24,"fanthrowdown_fp":41.8,"fanthrowdown_salary":0,"fanthrowdown_ratio":0,"fantasy_aces_fp":41.6,"fantasy_aces_salary":7400,"fantasy_aces_ratio":5.62,"draftday_fp":40.43,"draftday_salary":18200,"draftday_ratio":2.22,"fantasy_score_fp":42.55,"fantasy_score_salary":9700,"fantasy_score_ratio":4.39,"draftster_fp":41.53,"draftster_salary":9200,"draftster_ratio":4.51,"yahoo_fp":41.28,"yahoo_salary":54,"yahoo_ratio":0.76,"treb":10.4},{"nba_player_id":"362","nba_game_id":"20013","date":"2016-01-19","nba_team_id":"15","opponent_id":"16","season":"2016","game_play_probability":"1.00","game_start":"1.00","minutes":34.9,"fgm":"7.3","fga":"15.4","p3m":"1.4","p3a":"3.7","ftm":"4.8","fta":"6.0","oreb":"1.7","dreb":"6.1","ast":"2.3","stl":"0.7","blk":"1.0","tov":"1.7","pf":"2.0","pts":"20.6","ts":"0.571","efg":"0.594","oreb_pct":"6.2","dreb_pct":"19.9","treb_pct":"13.3","ast_pct":"12.1","stl_pct":"1.1","blk_pct":"2.2","tov_pct":"8.5","usg":"26.5","ortg":"115.3","drtg":"104.5","nerd":"11.63","star_street_fp":34.93,"star_street_salary":0,"star_street_ratio":0,"draft_street_daily_fp":30.85,"draft_street_daily_salary":0,"draft_street_daily_ratio":0,"fanduel_fp":35.11,"fanduel_salary":7800,"fanduel_ratio":4.5,"draft_kings_fp":37.05,"draft_kings_salary":7600,"draft_kings_ratio":4.88,"fantasy_feud_fp":30.85,"fantasy_feud_salary":120400,"fantasy_feud_ratio":0.26,"fanthrowdown_fp":36.2,"fanthrowdown_salary":0,"fanthrowdown_ratio":0,"fantasy_aces_fp":35.5,"fantasy_aces_salary":5900,"fantasy_aces_ratio":6.02,"draftday_fp":35.43,"draftday_salary":13950,"draftday_ratio":2.54,"fantasy_score_fp":36.35,"fantasy_score_salary":7000,"fantasy_score_ratio":5.19,"draftster_fp":35.35,"draftster_salary":7200,"draftster_ratio":4.91,"yahoo_fp":35.81,"yahoo_salary":40,"yahoo_ratio":0.9,"treb":7.8},{"nba_player_id":"2249","nba_game_id":"20014","date":"2016-01-19","nba_team_id":"17","opponent_id":"38","season":"2016","game_play_probability":"1.00","game_start":"1.00","minutes":35.7,"fgm":"7.2","fga":"16.6","p3m":"0.9","p3a":"3.2","ftm":"4.6","fta":"6.3","oreb":"1.3","dreb":"2.9","ast":"2.1","stl":"0.9","blk
":"0.5","tov":"2.2","pf":"2.4","pts":"20.2","ts":"0.521","efg":"0.530","oreb_pct":"4.3","dreb_pct":"9.6","treb_pct":"6.9","ast_pct":"10.2","stl_pct":"1.3","blk_pct":"1.0","tov_pct":"10.3","usg":"26.7","ortg":"101.6","drtg":"111.5","nerd":"-7.07","star_street_fp":28.68,"star_street_salary":0,"star_street_ratio":0,"draft_street_daily_fp":23.65,"draft_street_daily_salary":0,"draft_street_daily_ratio":0,"fanduel_fp":28.99,"fanduel_salary":6700,"fanduel_ratio":4.33,"draft_kings_fp":30.75,"draft_kings_salary":6900,"draft_kings_ratio":4.46,"fantasy_feud_fp":23.65,"fantasy_feud_salary":113500,"fantasy_feud_ratio":0.21,"fanthrowdown_fp":29.65,"fanthrowdown_salary":0,"fanthrowdown_ratio":0,"fantasy_aces_fp":29.2,"fantasy_aces_salary":4900,"fantasy_aces_ratio":5.96,"draftday_fp":28.43,"draftday_salary":12000,"draftday_ratio":2.37,"fantasy_score_fp":30.3,"fantasy_score_salary":6000,"fantasy_score_ratio":5.05,"draftster_fp":29.23,"draftster_salary":5900,"draftster_ratio":4.95,"yahoo_fp":29.44,"yahoo_salary":29,"yahoo_ratio":1.02,"treb":4.2},{"nba_player_id":"370","nba_game_id":
Note that even this is not complete. You need to somehow map nba_player_id to the appropriate name. Anyway, a lot of coding will be involved...
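If you do attempt it, the matched text can be parsed and written to a sheet. A rough sketch (it assumes the page still embeds daily_projections as above; the chosen fields and output range are arbitrary examples):
function import_projections() {
  var response = UrlFetchApp.fetch("https://www.numberfire.com/nba/fantasy/fantasy-basketball-projections");
  var raw_content = response.getContentText();
  // Capture the JSON array following "daily_projections": (it contains no nested arrays).
  var match = raw_content.match(/"daily_projections":(\[[^\]]+\])/);
  if (!match) return; // page layout changed; nothing to import
  var projections = JSON.parse(match[1]);
  // Build one row per player from a few example fields.
  var rows = projections.map(function(p) {
    return [p.nba_player_id, p.date, p.minutes, p.pts, p.treb, p.ast];
  });
  SpreadsheetApp.getActiveSheet().getRange(1, 1, rows.length, rows[0].length).setValues(rows);
}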

How to get larger version of Facebook's thumbnail images?

Right now the Facebook API is returning a URL like this with all post/album images 130x130 pixels in size:
https://fbcdn-sphotos-h-a.akamaihd.net/hphotos-ak-xfp1/v/t1.0-9/s130x130/10801504_570625556403546_6496651209845129904_n.jpg?oh=dcf8ab3752522532871d2aaab09b6e7e&oe=54E4402F&gda=1424027679_76464aeeaa5d232b8100d01476af4ec7
How do I retrieve a full (or any bigger size) image based on that URL?
For example this one:
https://fbcdn-sphotos-h-a.akamaihd.net/hphotos-ak-xfp1/v/t1.0-9/p417x417/10801504_570625556403546_6496651209845129904_n.jpg?oh=cd2b5cb0d74f7306c098de9f56dc6e27&oe=54E1F4C1&gda=1423830001_e700bfac39039952bfee55b200c158bf
or anything like that?
All the other suggestions of removing s130x130 from the URL, or /v/t1.0-9/, or replacing _s with _n or anything like that aren't valid any more - I've tried them all (try them yourself if you don't believe me). Is there a way to make this happen? Not sure what Facebook guys have changed to disable that...
After a couple of hours searching and pulling my hair out, I found a solution that works for me. In my case I'm pulling posts from {page-id}/posts, but I'm pretty sure this will work for you too, seeing that I used to get larger images using the same approach as you mention.
This works for me:
bigger_image="https://graph.facebook.com/" + picture_url_from_facebook.match(/_\d+/)[0].slice(1) + "/picture?type=normal";
/_\d+/ matches any substring starting with an underscore (_) followed by digits (\d matches one digit; \d+ matches one or more digits)
match(regex) returns an array with all strings it could find matching the specified regex
we grab the first ([0]) element, as this will be an underscore followed by the ID of the picture in Facebook's database
we slice the string from index 1 to the end, as the underscore is not a part of the ID
we then get a working link from Facebook using the Graph API without the need for any extra calls (you can use this directly in, for example, an img tag)
You can see this working in action at this fiddle or this page we did for a client
Thanks to Er Adhish for leading the way to the solution at https://stackoverflow.com/a/27075503/2908761
https://graph.facebook.com/{object_id}/picture?type={thumbnail|album|normal}
Example (using a bogus object_id of 122233334444555):
https://graph.facebook.com/122233334444555/picture?type=normal
Make sure object_id is not just the id of an item but its object_id (which is typically the number following the underscore in the full id).
I got those type values from a helpful Facebook response message that said:
"message": "(#100) type must be one of the following values: thumbnail, album, normal"
In place of ?type=... you could do height/width as well:
?width=543&height=543
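Combined into one helper (a sketch: biggerImage is my name for it, and 543x543 is just an example size):
function biggerImage(picture_url_from_facebook, width, height) {
  // The digits after the first underscore in the CDN filename are the object_id.
  var object_id = picture_url_from_facebook.match(/_\d+/)[0].slice(1);
  return "https://graph.facebook.com/" + object_id +
      "/picture?width=" + width + "&height=" + height;
}
// e.g. biggerImage(picture_url_from_facebook, 543, 543)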

What is the difference between doc.Content.Text and doc.Range(start, end).Text

Could you please explain the difference between doc.Content.Text and doc.Range(start, end).Text?
Actually, if I extract a string like
doc.Content.Text.SubString(start, lenofText)
and if I do the same with
doc.Range(start, start + lenofText)
I get the correct result with doc.Content.Text but an incorrect result with doc.Range... do you know the reason? I need to find a piece of text and then convert it to a hyperlink, but doc.Range does not give me the correct results.
Your description is a little vague (for instance, how are the results not correct?), but a document is actually composed of as many as 17 story parts, which include things like the main text story (the document area), footers, headers, footnotes, and comments. 'Content' refers specifically to the main text story. 'doc.Range' is broader and can include more than one story. If the results are not correct because the text looks offset by a certain number of characters, it may be counting other stories. If you want to limit the results to the body text, specify one of the following:
doc.Content
doc.StoryRanges(wdMainTextStory)
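For example (a sketch in VBA-style notation to match the constants above), the two expressions cover exactly the same text:
Dim body As Range
Set body = doc.StoryRanges(wdMainTextStory) ' same story as doc.Content
MsgBox body.Text = doc.Content.Text         ' displays True
Offsets computed against doc.Content.Text are therefore only meaningful within the main text story.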