How to detect the creation date of a webpage from its server

I'm trying to find a way to detect the creation date of a webpage from its server. For example, when was the page www.Amazon.com/fghhggg created? Is there a way to find it and automate it? Thanks for any clues.

In general, the answer is no. Occasionally you'll see a web page that returns this information in the headers, but for a site where the pages are generated from information in the database (like Amazon, or most other sites on the internet), asking for the "creation date" doesn't really make sense.
For example, imagine you're looking up product X on Amazon. Amazon's servers retrieve information from the database, put together an HTML document, and return it to you. What would the "creation date" be? The page didn't exist 5 seconds ago - it was just assembled for you - and it doesn't exist now that it's been sent to you. If you're looking for when the product was added to Amazon's database, that information might be available via Amazon's API.
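For the occasional page that does expose a date, the place to look is the Last-Modified response header. Here is a minimal sketch in Go; note that many servers omit the header entirely, and on dynamically generated pages it usually reflects when the response was assembled, not when the underlying content was created:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// HEAD fetches only the headers, not the body.
	resp, err := http.Head("https://example.com/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Last-Modified is optional; dynamic pages often omit it or set
	// it to the moment the response was generated.
	if lm := resp.Header.Get("Last-Modified"); lm != "" {
		fmt.Println("Last-Modified:", lm)
	} else {
		fmt.Println("no Last-Modified header; no date available")
	}
}
```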

Related

Golang server background notification process

I apologize in advance for my bad English.
I've created a simple training service in Go that supports a login and registration system backed by MongoDB. The service lets a logged-in user scrape rooms for rent in a specified location in London. Now I want to implement notifications for logged-in users about new rooms in their marked locations. My first idea was to make a background process that scrapes rooms every 30 seconds, saves the results (in Mongo, in cookies, or somewhere else; please advise), matches the new results against the previous ones, and saves the differences (the new rooms) in the DB for later delivery to the user in some form (email, or a list on an HTML page).
1) Is my idea about notifications generally correct? If not, please describe a better way to do this or point me to some related examples.
2) What is the best way to build that background process in Go?
3) It would be great if you could point me to some examples relating to this case.
Demo of the service on Heroku
GitHub repo
I appreciate your help.
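On (2): the idiomatic shape for such a background process in Go is a goroutine driven by a time.Ticker. Below is a minimal sketch; scrapeRooms, diffRooms, and notify are hypothetical stand-ins for your own scraper, comparison logic, and notification code:

```go
package main

import (
	"log"
	"time"
)

// Room is a placeholder for whatever your scraper returns.
type Room struct {
	ID       string
	Location string
}

// Hypothetical stand-ins for your scraper, diffing, and notification code.
func scrapeRooms(location string) ([]Room, error) { /* ... */ return nil, nil }
func diffRooms(prev, curr []Room) []Room          { /* ... */ return nil }
func notify(fresh []Room)                         { /* e.g. save to Mongo, send email */ }

func watchLocation(location string) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	var prev []Room
	for range ticker.C {
		curr, err := scrapeRooms(location)
		if err != nil {
			log.Println("scrape failed:", err)
			continue // try again on the next tick
		}
		if fresh := diffRooms(prev, curr); len(fresh) > 0 {
			notify(fresh)
		}
		prev = curr
	}
}

func main() {
	go watchLocation("London") // one goroutine per watched location
	select {}                  // block forever; your HTTP server would live here
}
```

On storage: keep the previous scrape results in Mongo rather than in cookies. Cookies live in the user's browser, so a server-side background process can't read them.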

WWW::Mechanize::Firefox in a high-traffic website

I have a website where I would like the user to enter a search term, and then to scrape two other websites and show the user some parsed results.
Since both websites use a lot of JavaScript to return data, I thought of using WWW::Mechanize::Firefox.
Would it be possible to run several simultaneous instances of a script that would use WWW::Mechanize::Firefox?
Yes, it would be possible to do what you describe; but the internet is not a database.
Have you asked for permission from the owners of the sites that you are hoping to use? I would certainly not agree to an unbounded use of my bandwidth and the contents of my site by a third party.
It would be much more responsible (and easier to implement) to build a system whereby, by agreement, you gather the information that you need, say, once a day, and store it in a database. Thereafter you can serve the information that your own users ask for directly from your own resources.
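One way to arrange that, sketched in Go with hypothetical fetchListings and store functions standing in for the scraper and the database layer, is a one-shot program that cron runs once a day; user searches are then answered entirely from your own database:

```go
// A one-shot fetch-and-store, intended to be run from cron once a day.
package main

import "log"

// Hypothetical stand-ins for the (agreed!) scraping of the source
// sites and for writing into your own database.
func fetchListings() ([]string, error) { /* ... */ return nil, nil }
func store(items []string) error       { /* ... */ return nil }

func main() {
	items, err := fetchListings()
	if err != nil {
		log.Fatal("fetch failed: ", err)
	}
	if err := store(items); err != nil {
		log.Fatal("store failed: ", err)
	}
	log.Printf("stored %d listings", len(items))
}
```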

Generic form for Seblod (Joomla 2.5)

I'm a developer who has taken over a Joomla website that was created using SEBLOD. It is a listings website with over 300 listings.
The purpose of the website is to generate enquiries through the listings.
Currently, enquiries go through a button which opens the visitor's email program to send the email. This is not ideal.
Is there a way to create and attach a generic enquiry box or form to each listing, and include the name of the listing in the form when it's sent?
Is there a way to create a form that can be attached to the frontend of the website page instead of the "Request a quote" button?
To be candid, Seblod is an impressive Joomla app, but I'm afraid you won't get better answers anywhere than on their forum. I've been using it for over a year now and I'm only just coming to terms with some of its functionality. Visit the forums and you should be able to get a good answer from the devs there. It's an expansive suite, so it can present some unique challenges.

Autofill an HTML form

What applications exist that can take a series of fields from my db (or csv output from my db) and insert them into a web-based form and then submit that form?
Big Picture Use Case:
I maintain an in-house registration management system for webinars that we produce/present. Currently we use GoToWebinar.com to host our events but they haven't always been (and may not always continue to be) our vendor.
GoToWebinar.com does not provide an API for creating registrations on behalf of third-party individuals. So when someone decides to attend one of our events, they have to fill out two registration forms: mine and GoToWebinar.com's. I'd like to automate the task of filling in GoToWebinar's registration form.
I am looking into the same thing. I found some bits and pieces here and there and was able to decipher the URL to post to GTW:
https://www.gotowebinar.com/en_US/island/webinar/registration.flow?Template=island/webinar/registration.tmpl&Form=webinarRegistrationForm&WebinarKey=XXX_YOUR_WEBINAR_ID_XXX&Name_First=ViewersFirstName&Name_Last=ViewersLastName&Email=ViewersEmailAddress
If you are using cURL, then be sure to use CURLOPT_FOLLOWLOCATION because there are some redirections on the GTW side and cURL needs to follow them.
So far this seems to work for us.
Good luck!
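For anyone scripting this outside PHP, a rough Go equivalent might look like the sketch below. Go's http.Client follows redirects by default, which covers the CURLOPT_FOLLOWLOCATION point; the field names are copied from the URL above and may well have changed since:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Field names taken from the registration URL above; the
	// webinar key is a placeholder you have to fill in.
	params := url.Values{
		"Template":   {"island/webinar/registration.tmpl"},
		"Form":       {"webinarRegistrationForm"},
		"WebinarKey": {"XXX_YOUR_WEBINAR_ID_XXX"},
		"Name_First": {"ViewersFirstName"},
		"Name_Last":  {"ViewersLastName"},
		"Email":      {"ViewersEmailAddress"},
	}

	// http.Get follows redirects automatically, the equivalent of
	// cURL's CURLOPT_FOLLOWLOCATION.
	resp, err := http.Get("https://www.gotowebinar.com/en_US/island/webinar/registration.flow?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```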
I'm late to the party, but let me offer a way to call the Citrix API via PHP to register a new GoToWebinar attendee, in case somebody else hits this page looking for the answer to your question.
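The PHP itself isn't reproduced here, but the general shape of that call is a JSON POST to the registrants endpoint. Here is a sketch in Go; the endpoint path, auth header, and field names are assumptions based on the Citrix/GoToWebinar REST documentation and should be checked against the current docs before use:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// All three values are placeholders; the endpoint path and the
	// Authorization header below are assumptions to verify against
	// the current GoToWebinar REST API documentation.
	organizerKey := "YOUR_ORGANIZER_KEY"
	webinarKey := "YOUR_WEBINAR_KEY"
	accessToken := "YOUR_OAUTH_ACCESS_TOKEN"

	body, err := json.Marshal(map[string]string{
		"firstName": "Jane",
		"lastName":  "Doe",
		"email":     "jane.doe@example.com",
	})
	if err != nil {
		panic(err)
	}

	endpoint := fmt.Sprintf(
		"https://api.getgo.com/G2W/rest/organizers/%s/webinars/%s/registrants",
		organizerKey, webinarKey)

	req, err := http.NewRequest("POST", endpoint, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json")
	req.Header.Set("Authorization", "Bearer "+accessToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```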

Screenshot-grabbing email tool

I have a web site with various graphs embedded in it that are generated externally. Occasionally those graphs will fail to generate and I would like to catch that when it happens. These graphs are embedded in multiple pages and I would rather not check each page manually. Is there any kind of tool or perhaps a browser addon that could periodically take screenshots of different URLs and email them in a single email? It would be sufficient to have scaled-down screenshots of full pages emailed maybe once a day to me, allowing me to take a quick glance and see that all the graphs are there and look okay.
I'm a big fan of automation. Rather than have emails generated that you then have to look at, take a look at 'replacing custom missing images in jquery'. This will run a piece of JavaScript for each image that fails to load. Extending that to make a request to a URL that you control, which may also include the broken URL (or just the filename that is broken), would not be too hard. That URL would then generate an email and store the broken URL, so that it doesn't send 5000 emails if there's a flurry of hits to your page.
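The endpoint on your side can be tiny. Here is a sketch in Go; the /broken-image path and the email hook are hypothetical, and the main point is the deduplication, so a flurry of hits does not become a flurry of emails:

```go
package main

import (
	"log"
	"net/http"
	"sync"
)

var (
	mu       sync.Mutex
	reported = map[string]bool{} // broken URLs already seen
)

// Hypothetical endpoint the on-page JavaScript would hit, e.g.
// /broken-image?src=<url of the image that failed to load>.
func brokenImage(w http.ResponseWriter, r *http.Request) {
	src := r.URL.Query().Get("src")
	if src == "" {
		http.Error(w, "missing src", http.StatusBadRequest)
		return
	}

	mu.Lock()
	first := !reported[src]
	reported[src] = true
	mu.Unlock()

	if first {
		// Alert only once per broken URL; hook your real email
		// sending (or a DB write) in here.
		log.Println("broken image reported:", src)
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/broken-image", brokenImage)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```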
Another idea, building on the above, is to effectively change the external 404 from the source site into a local one (e.g. /backend/missing-images/) - the full path need not exist; you are just generating a local 404 record in your Apache logs. Logwatch will then email you a list of 404 pages from the Apache log daily (or more often, if you want).