I have searched for $index on many sites, but none of them explain how to get the value of this variable. For now I have to use the Model for this, as shown further below.
Can someone please help me solve this: how do I get $index so I can query Algolia facet counts like this?
$index->searchForFacetValues("age", ['filters' => 'age']);
I have already tried this many times but can't get the correct result:
$results = User::search($request->get('search'))->with(['filters' => 'age:23',]);
I want to get $index.
I think you're looking for this snippet from the docs:
// composer autoload
require __DIR__ . '/vendor/autoload.php';
// if you are not using composer
// require_once 'path/to/algolia/folder/autoload.php';
$client = Algolia\AlgoliaSearch\SearchClient::create(
'YourApplicationID',
'YourAdminAPIKey'
);
$index = $client->initIndex('your_index_name');
You get $index by calling initIndex on your Algolia API client instance.
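For example, once $index exists, the facet query from your question would run against it roughly like this (a minimal sketch; it assumes the "age" attribute is declared as searchable in attributesForFaceting, and the facet query value is just an example):
// Minimal sketch using the $index obtained above via initIndex().
// Assumes "age" is declared searchable in attributesForFaceting.
$results = $index->searchForFacetValues('age', '23');

// Each facet hit carries the matched value and its count.
foreach ($results['facetHits'] as $hit) {
    echo $hit['value'] . ' => ' . $hit['count'] . PHP_EOL;
}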
If that's not what you're looking for, you probably need to rephrase your question :)
I have noticed the new changes in FB Developer: https://developers.facebook.com/roadmap/
I'd like to know what you think I need to change in my code.
I have WordPress and a function that counts the total number of comments, and of course it still needs to work after July 10.
function full_comment_count() {
global $post;
$url = get_permalink($post->ID);
$filecontent = file_get_contents('https://graph.facebook.com/?ids=' . $url);
$json = json_decode($filecontent);
$count = $json->$url->comments;
$wpCount = get_comments_number();
$realCount = $count + $wpCount;
if ($realCount == 0 || !isset($realCount)) {
$realCount = 0;
}
return $realCount;
}
Is it as simple as changing:
$count
to
$total_count
or does something else need to be changed in the code as well?
Thank you
Facebook Roadmap:
We are removing the undocumented 'count' field on the 'comments'
connection in the Graph API. Please request
'{id}/comments?summary=true' explicitly if you would like the summary
field which contains the count (now called 'total_count')
...file_get_contents is VERY bad; cURL would be better, but more complicated. The best way to use the Graph API in this case is the PHP SDK: https://github.com/facebook/facebook-php-sdk
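For reference, with that SDK the comments call (using the Graph object id $id, obtained as described below) could look roughly like this. This is an untested sketch; the app credentials are placeholders:
require_once 'facebook-php-sdk/src/facebook.php';

// Placeholder credentials; use your own app id and secret.
$facebook = new Facebook(array(
    'appId'  => 'YOUR_APP_ID',
    'secret' => 'YOUR_APP_SECRET',
));

// $id is the Graph object id of the permalink.
$comments = $facebook->api('/' . $id . '/comments', 'GET', array('summary' => 'true'));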
Anyway, I guess these changes are needed:
$filecontent = file_get_contents('https://graph.facebook.com/?ids=' . $url);
...this is still correct. With a var_dump right after this line (or after the json_decode) you will see that there is an "id". With that id, you have to make a second call to the Graph API:
$comments = file_get_contents('https://graph.facebook.com/' . $id . '/comments?summary=true');
The rest is easy-peasy basic PHP stuff; just do a var_dump of $comments after using json_decode again.
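Putting those two calls together, the updated function could look roughly like this (an untested sketch; var_dump the responses yourself, since the exact field layout, especially summary->total_count, is based on the roadmap quote above):
function full_comment_count() {
    global $post;
    $url = get_permalink($post->ID);

    // Step 1: resolve the permalink to its Graph object id.
    $object = json_decode(file_get_contents('https://graph.facebook.com/?ids=' . $url));
    if (!isset($object->$url->id)) {
        return get_comments_number(); // fall back to the WordPress-only count
    }
    $id = $object->$url->id;

    // Step 2: request the comment summary explicitly, as the roadmap requires.
    $comments = json_decode(file_get_contents('https://graph.facebook.com/' . $id . '/comments?summary=true'));
    $fbCount = isset($comments->summary->total_count) ? $comments->summary->total_count : 0;

    return $fbCount + get_comments_number();
}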
I am currently attempting to create a Perl webspider using WWW::Mechanize.
What I am trying to do is create a webspider that will crawl the whole site of the URL (entered by the user) and extract all of the links from every page on the site.
But I have a problem with how to spider the whole site to get every link, without duplicates.
What I have done so far (the part I'm having trouble with, anyway):
foreach (@nonduplicates) { #array contains urls like www.tree.com/contact-us, www.tree.com/varieties....
$mech->get($_);
my @list = $mech->find_all_links(url_abs_regex => qr/^\Q$urlToSpider\E/); #find all links on this page that start with http://www.tree.com
#NOW THIS IS WHAT I WANT IT TO DO AFTER THE ABOVE (IN PSEUDOCODE), BUT CAN'T GET IT WORKING
#foreach (@list) {
#  if $_ is already in @nonduplicates
#    then do nothing because that link has already been found
#  } else {
#    append the link to the end of @nonduplicates so that if it has not been crawled for links already, it will be
How would I be able to do the above?
I am doing this to try and spider the whole site to get a comprehensive list of every URL on the site, without duplicates.
If you think this is not the best/easiest method of achieving the same result I'm open to ideas.
Your help is much appreciated, thanks.
Create a hash to track which links you've seen before and put any unseen ones onto @nonduplicates for processing:
$| = 1;
my $scanned = 0;
my @nonduplicates = ( $urlToSpider ); # Add the first link to the queue.
my %link_tracker = map { $_ => 1 } @nonduplicates; # Keep track of what links we've found already.
while (my $queued_link = pop @nonduplicates) {
    $mech->get($queued_link);
    my @list = $mech->find_all_links(url_abs_regex => qr/^\Q$urlToSpider\E/);
    for my $new_link (@list) {
        # Add the link to the queue unless we already encountered it.
        # Increment so we don't add it again.
        push @nonduplicates, $new_link->url_abs() unless $link_tracker{$new_link->url_abs()}++;
    }
    printf "\rPages scanned: [%d] Unique Links: [%s] Queued: [%s]", ++$scanned, scalar keys %link_tracker, scalar @nonduplicates;
}
use Data::Dumper;
print Dumper(\%link_tracker);
use List::MoreUtils qw/uniq/;
...
my @list = $mech->find_all_links(...);
my @unique_urls = uniq( map { $_->url } @list );
Now @unique_urls contains the unique urls from @list.
I'm new to this forum and I'm having some problems with the Perl library Net::Twitter::Stream. I'm following the example in this link: Net::Twitter::Stream.
But it is missing the part where I get a bad response code (anything other than 200) and have to stop my algorithm. So, what can I do in this case? I'm afraid of using it too much and ending up on the Twitter blacklist...
I'm basing my code on the example below:
use Net::Twitter::Stream;
Net::Twitter::Stream->new ( user => $username, pass => $password,
callback => \&got_tweet,
track => 'perl,tinychat,emacs',
follow => '27712481,14252288,972651' );
sub got_tweet {
my ( $tweet, $json ) = @_; # a hash containing the tweet
# and the original json
print "By: $tweet->{user}{screen_name}\n";
print "Message: $tweet->{text}\n";
}
I think you'll want to add connection_closed_cb => \&bad_response; see the last answer to this Stack Overflow question. I'm not sure why that ability isn't documented, but it is available if you check the source code. I also couldn't find that module on CPAN.
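Something like the following might work as a starting point. It is completely untested: the callback name comes from the source code as mentioned above, and the arguments passed to it are an assumption you should verify against the module's source.
use Net::Twitter::Stream;

Net::Twitter::Stream->new(
    user                 => $username,
    pass                 => $password,
    track                => 'perl,tinychat,emacs',
    follow               => '27712481,14252288,972651',
    callback             => \&got_tweet,
    connection_closed_cb => \&bad_response,  # undocumented; check the module source
);

sub bad_response {
    # Assumption: the callback receives the HTTP::Response for the closed
    # connection. Verify this in the module source before relying on it.
    my ($response) = @_;
    warn 'Stream closed: ' . $response->status_line . "\n";
    exit 1;  # stop instead of reconnecting and risking the Twitter blacklist
}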
The Google::Search module, which is based on the AJAX Search API, doesn't seem to work very well, or is it just me?
For example, I use Firefox to search Google for: http://bloggingheads.tv/forum/member.php?u=12129
It brings results.
But when I use the module this way:
$google_search = Google::Search->Web ( q => "http://bloggingheads.tv/forum/member.php?u=12129" );
@result = $google_search->all;
I get nothing in the array.
Any idea?
It seems this API doesn't return the same results as searching manually; am I missing something?
I had a similar problem with Cyrillic queries. Both Google::Search and REST::Google from CPAN didn't work for me: they returned fewer or no results compared to a manual test.
Eventually I wrote a scraping module using WWW::Mechanize and HTML::TreeBuilder.
Here's a sample to get result stats:
my $tree = HTML::TreeBuilder->new_from_content($content);
if (my $div = $tree->look_down(_tag => 'div', id => 'resultStats')) {
my $stats = $div->as_text();
}
else { warn "no stats" }
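For completeness, $content in that snippet would come from fetching the results page first, for example with WWW::Mechanize. This is a rough sketch: the query URL is an assumption, Google's markup changes often, and scraping it may be against their terms of service.
use WWW::Mechanize;
use HTML::TreeBuilder;
use URI::Escape qw(uri_escape);

my $query = 'http://bloggingheads.tv/forum/member.php?u=12129';
my $mech  = WWW::Mechanize->new( agent => 'Mozilla/5.0' );

# Fetch the first page of results for the query.
$mech->get( 'https://www.google.com/search?q=' . uri_escape($query) );
my $content = $mech->content;

# Then hand the HTML to HTML::TreeBuilder as in the snippet above.
my $tree = HTML::TreeBuilder->new_from_content($content);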
Looking at the POD for Google::Search, it looks like it expects you to pass search terms to Web, instead of a URL. I downloaded a test script from CPAN, ran it, and it seems to produce expected results:
use strict;
use warnings;
use Google::Search;
my $search = Google::Search->Web(q => "rock");
my $result = $search->first;
while ($result) {
print $result->number, " ", $result->uri, "\n";
$result = $result->next;
}
print $search->error->reason, "\n" if $search->error;
__END__
0 http://www.rock.com/
1 http://en.wikipedia.org/wiki/Rock_music
2 http://en.wikipedia.org/wiki/Rock_(geology)
3 http://rockyourphone.com/
4 http://rockhall.com/
5 http://www.co.rock.mn.us/
6 http://www.co.rock.wi.us/
7 http://www.rockride.org/
etc...
I realize this does not specifically answer your question, but perhaps it steers you in the right direction.
I have a Zend form with some elements like this:
http://i27.tinypic.com/ogj88i.jpg
I added all the elements this way:
$element = $this->CreateElement('text','lockerComb');
$element->setLabel('Locker');
$element->setAttrib('class','colorbox');
$elements[] = $element;
$element = $this->CreateElement('text','parking');
$element->setLabel('Automobile / Parking');
$element->setAttrib('class','colorbox');
$elements[] = $element;
$element = $this->CreateElement('text','customes');
$element->setLabel('Customes Fields');
$element->setAttrib('class','colorbox');
$elements[] = $element;
But when I try to create an element for file upload, I fail.
Can you give more information about the exact nature of the failure?
It's fairly straightforward to use. From the docs:
$element = new Zend_Form_Element_File('foo');
$element->setLabel('Upload an image:')
->setDestination('/var/www/upload');
Which is basic usage.
It's easy to get the file path wrong, but you should get an error if the path is wrong.
Supplying the code you are using would help!
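For example, sticking to the pattern from your question, a file element could be added like this (an untested sketch; the element name and upload path are just placeholders):
// File upload element, built the same way as the other elements.
$element = $this->CreateElement('file', 'attachment'); // 'attachment' is a placeholder name
$element->setLabel('Upload file');
$element->setDestination('/var/www/upload');           // adjust to a writable path
$elements[] = $element;

// Make sure the form sends multipart data, or the upload will not arrive.
$this->setAttrib('enctype', 'multipart/form-data');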
I have written a tutorial on handling multiple file uploads with Zend Framework; maybe it will help you. Here is the link to the tutorial: http://irmantasplius.blogspot.com/2009/08/zendform-multiple-file-uploads.html