Using wget to download a page within Moodle

I understand that this might be too specific, but I am still quite new and having some difficulty figuring this out, so I am hoping someone can offer a clear command that achieves my goal.
My goal is to download a page from Moodle. Now, this is Moodle 3.9 but I don't think this matters. The page is a wiki which is part of Moodle. So, there are child pages, links, images, etc. Now, each student gets their own wiki and it is available by URL. However, one must be logged in to access the URL or else a login page will show up.
I did review this question, which seemed like it would help. It even has an answer here which seems specific to Moodle.
By investigating each of the answers, I learned how to get the cookies from Firefox. So, I tried logging into Moodle in my browser, then getting the cookie for the session and using that in wget, according to the Moodle answer cited above.
Everything I try results in a response in Terminal like:
Redirecting output to 'wget-log.4'
I then found out where the wget-log files are stored and checked the contents, which turned out to be the Moodle login page. While I am quite sure I got the cookie correct (copy-and-paste), I am not sure I got everything else right.
Try as I might (for a couple of hours), I could not get any of the answers to work.
My best guess for the command is as follows:
wget --no-cookies --header "Cookie: MoodleSession=3q00000000009nr8i79uehukm" https://myfancydomainname.edu/mod/ouwiki/view.php?id=7&user=160456
This just produces another wget-log file (which again contains the Moodle login page).
I should not need any POST data, since once logged into my browser, I can simply paste this URL and it takes me to the page in question.
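One thing worth noting: without quotes around the URL, the shell treats the & as a background operator, which by itself would produce the "Redirecting output" message (wget keeps running in the background and logs to wget-log). The same guess with the URL quoted would be:

wget --no-cookies \
     --header "Cookie: MoodleSession=3q00000000009nr8i79uehukm" \
     "https://myfancydomainname.edu/mod/ouwiki/view.php?id=7&user=160456"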
Any ideas what the command might be? If not, any idea how I could get some clear, step-by-step instructions on how to figure it out?

Related

URI mismatch in Facebook OAuth login

I've read quite a few questions on Stack Overflow about this: this one and this one and quite a few others...
I've tried with and without encoding the redirect URI in the address bar, with and without https, with and without the final slash, and every combination of the above. I've triple-checked the client ID.
Encoded: http%3A%2F%2Flocalhost%3A9000%2F
Decoded: http://localhost:9000/
Full dialog URL: https://www.facebook.com/dialog/oauth?client_id=272730539567323&redirect_uri=http%3A%2F%2Flocalhost%3A9000%2F
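For reference, the encoded form can be produced with any URL-encoder; for example, a quick one-liner from the shell:

python3 -c 'import urllib.parse; print(urllib.parse.quote("http://localhost:9000/", safe=""))'
# prints: http%3A%2F%2Flocalhost%3A9000%2F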
At some point my code worked, and I logged in with it! Then I tried to change the redirect URLs, and it never worked again, even after changing them back.
The worst part is that I'm already logged in, and you can see my name and profile picture on the screen!
I'm running out of ideas...
Note: I'm not including the actual code, as I think it is irrelevant to the question, and Scala/Play specifics would only reduce the number of people trying to answer.
Solution:
I'm not sure why I had been able to log in before, because I do not remember ever setting or unsetting the options under Settings -> Advanced tab -> Client OAuth settings.
But if you encounter this problem, that is where the answer lies.

wget vBulletin forum attachments

I want to download attachments from a vBulletin forum that requires a login. It always gives me an error about unspecified length. In the thread itself only thumbnails are displayed, but I want the full-resolution versions, which are in the attachments. I am targeting only the *.jpg files, not the rest of the forum.
The url looks something like this: http://www.page.com/attachment.php?attachmentid=1234567&d=1234567890
(I think the two numbers, attachmentid and d, are random and independent of each other.)
When I try mirroring the whole page, everything works except the attachments (only the thumbnails are downloaded).
Any ideas how I can solve this issue?
Cheers
PS: HTTrack runs into the same problem; alternative solutions are welcome as well :)
Since you mention downloading attachments from a vBulletin forum with a login, make sure you have done the login part. The steps are as follows.
1) Log in using wget and store the cookies in a text file; the parameter is --save-cookies.
2) Then fetch the attachment with a second wget call, loading the cookie file from step 1 with --load-cookies, as sketched below.
More detail about these wget parameters can be found here.
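A minimal sketch of those two steps (the login URL and the form-field names vb_login_username/vb_login_password are assumptions; vBulletin versions differ, so check your forum's actual login form):

# Step 1: log in and save the session cookies to a file.
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data 'vb_login_username=USER&vb_login_password=PASS&do=login' \
     'http://www.page.com/login.php?do=login'

# Step 2: reuse the stored session to fetch the attachment.
wget --load-cookies cookies.txt \
     'http://www.page.com/attachment.php?attachmentid=1234567&d=1234567890'

Note the quotes around the URLs, so the shell does not treat the & as a background operator, and --keep-session-cookies, without which wget discards session cookies when saving.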
attachmentid and d cannot both be random at the same time; otherwise the forum could not tell which attachment you are asking for.

ColdFusion - OAuthException - This authorization code has expired. [code=100]

I am having a go at getting the Facebook API SDK for ColdFusion working.
https://github.com/affinitiz/facebook-cf-sdk
I have followed all the steps and it seems to work well (using only server-side login).
However, if I leave the page for, say, an hour, then return and refresh it (it was showing my profile name and friends list), I get an error that I am unable to get rid of unless I clear the cookies.
Is there something I am missing with this FB login? Am I meant to be checking against something manually in order to persist the session?
Looking at my cookies, I have the following stored:
fbm_155030275875
fbsr_155030275875
CFID
CFTOKEN
It's all new to me, so I'm a bit lost. I can't see anything about this in the docs for the SDK, and Googling the error brings up nothing.
I have attached a screenshot of the error.
I'd appreciate any help you can offer!
Thanks,
Michael.
I'm not familiar enough with that particular project, but in general your code should be making the various Graph API calls and requesting a token as necessary. If the token has expired, you request a new one. I'd expect facebook-cf-sdk to do this, but again, I'm unfamiliar with it.
The good news is that the Facebook Graph API is just a series of HTTP calls. See my talk at NC DevCon for an example of logging in and making some Graph calls (it's long; skip to about the 1:42:00 mark):
http://textiles.online.ncsu.edu/online/Play/61d0900d63fd4c1cb862622d1c8e13521d?catalog=35211b84-031b-4a18-8875-506f09b9b3a7
GitHub repo:
https://github.com/bdcravens/ncdevcon2012-handson-auth (note the branches - check out the step4 branch)
These don't answer your question 100%, but they may be a good starting point for you.
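Since the Graph API is just HTTP, here is a rough curl sketch of that refresh pattern, using the fb_exchange_token grant that trades a short-lived token for a longer-lived one (APP_ID, APP_SECRET, and the token values are placeholders):

# exchange a short-lived token for a longer-lived one
curl 'https://graph.facebook.com/oauth/access_token?grant_type=fb_exchange_token&client_id=APP_ID&client_secret=APP_SECRET&fb_exchange_token=SHORT_LIVED_TOKEN'

# then make ordinary Graph calls with whatever valid token you hold
curl 'https://graph.facebook.com/me?access_token=ACCESS_TOKEN'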
OK, I figured out how to solve this issue using a solution someone provided on GitHub. I just wanted to post it here in case someone else encounters the issue and isn't sure how to solve it. In my case, however, after applying the solution from that post I also needed to do a page refresh. Link below.
https://github.com/affinitiz/facebook-cf-sdk/issues/31

Why does the Object Debugger say my URL is a Facebook URL and isn't "scrapable"?

In trying to create an "object" page for my first Facebook app, I've run into some difficulty. I followed Facebook's Open Graph Tutorial nearly exactly.
After creating an "object" HTML page with the appropriate <meta property="og:... tags, I tried running the URL through the Debugger Tool as suggested in the tutorial, but I'm given the following error:
"Facebook URLs aren't scrapable by this Debugger. Try your own."
This page is in the same directory on my company's linux box as the canvas page, and is certainly not a "Facebook URL". If it matters, I'm using an IP instead of a domain name: xx.x.x.xxx/app/obj.html
...
I continued with the tutorial anyway, but ultimately it does not seem to want to post a new action/object (is that even right?). I did, however, manage to get something to work: in the app timeline view I apparently actioned one of those objects a couple of hours ago. I assume this happened while I was pasting curl POST commands into the terminal.
I'm pretty new to the whole Open Graph, Facebook APIs, etc., so I'm probably operating under some false assumption. I've been all over trying different things, but this error seems pretty bizarre to me and I can't seem to resolve it.
UPDATE
I just took the object page and put it on my own personal shared hosting account. The debugger (inexplicably) worked fine on it, but I couldn't go much further, since it's a different domain from the one authorized by my app.
Make sure og:url inside your HTML page does not point to Facebook.
Also, check the Open Graph protocol page to see that you formatted the og: tags correctly.
Also, make sure the page is accessible to everyone, not just yourself.
Without knowing the URL it's hard to be sure, but most likely your page either includes an og:url tag pointing to a facebook.com address, or returns an HTTP 301/302 redirect to Facebook.
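A quick way to test for the redirect case is to look at the status line and Location header the page actually returns:

curl -sI 'http://xx.x.x.xxx/app/obj.html' | grep -iE '^(HTTP|Location)'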

Facebook can't read my page

I run a site called http://www.theinspiration.com.
A few days ago my Facebook share button stopped working. I can still share, but I don't get any FB metadata along with the share.
When I try to run it through the linter:
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fwww.theinspiration.com%2F2011%2F11%2Ftime-lapse-view-from-space-by-nasa%2F
I get: Critical Errors That Must Be Fixed
Error Scraping Page: Bad Response Code
If I just copy the source code into a plain HTML file, post it on a server, and run that through the linter, it works with all the metadata. (Really, I just need the FB thumbnail image to work when sharing.)
I run W3 Total Cache and a CDN (Amazon), and I read that this might be the cause, but when I disable W3 Total Cache I still get the error.
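One way to see the response code the scraper is likely getting is to request the page with Facebook's crawler user agent (facebookexternalhit), since cache/CDN layers sometimes serve that agent something different:

curl -s -o /dev/null -w '%{http_code}\n' \
     -A 'facebookexternalhit/1.1' \
     'http://www.theinspiration.com/2011/11/time-lapse-view-from-space-by-nasa/'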
I spent 10 hours today trying to figure it out. Can someone help me?
Thanks.
Daniel
I have had problems with this myself. The errors you receive are fairly useless, and I am not sure the problem really is a bad response code.
I am sorry that I do not remember my earlier fix for this, but I will try to recall it.
Just make sure you have added an app ID and an administrator ID (the fb:app_id and fb:admins meta properties) to your metadata.
Facebook can suddenly change their required parameters, so they might have done just that.
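A quick check that those properties actually appear in the page as served (fb:app_id and fb:admins being the property names in use at the time):

curl -s 'http://www.theinspiration.com/2011/11/time-lapse-view-from-space-by-nasa/' | grep -iE 'fb:(app_id|admins)'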
Good luck!