Use GitHub Pages as a domain

I'm using GitHub Pages to create my personal page, but I'm going to need a hosting service because the site will require some database queries. How can I use my GitHub Pages URL as a domain?

GitHub Pages is not really designed for this kind of thing. It's there to serve static pages, where all content on the page is 'hardcoded' (meaning no dynamically generated data). What you're asking for falls along the lines of a web application.
But if you're looking to be a maverick, there might be some options out there for you.
I personally haven't done something like this, but I found a couple of DB services you might want to check out.
Firebase by Google
RdbHost
The above recommendations may be useful if you're expecting data entry from visitors to your page. But if your data is static as well, you might be better off using a JSON file or some alternative where the data can live right in your repo.
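If you go the static route, serving the data is just a fetch of a file in the repo. A minimal sketch, assuming a hypothetical data/members.json committed next to your pages:

```typescript
// Minimal sketch: read static data committed to the GitHub Pages repo.
// The path "data/members.json" and the Member shape are hypothetical.
interface Member {
  name: string;
  joined: string; // e.g. "2019-03-01"
}

async function loadMembers(): Promise<Member[]> {
  // GitHub Pages serves repo files as plain static assets,
  // so a relative fetch works from any page on the site.
  const response = await fetch("data/members.json");
  if (!response.ok) {
    throw new Error(`Failed to load members: ${response.status}`);
  }
  return response.json();
}

loadMembers().then((members) => console.log(members));
```

Updating the data is then just a commit to the repo, which also gives you a change history for free.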

Related

How to automatically fill in proposed code changes in the "Edit" URL for a GitHub file?

EDIT: See comments on Schwern's answer for what I'm looking for in general. It doesn't have to be exactly what I'm asking for in the question.
I have a web app which is an editor. I would like users to be able to give me a GitHub file URL; my app would automatically load in the file from GitHub, and then I'd give them a process for submitting that change back to GitHub that is as easy as possible. Ideally the user wouldn't need to save/upload a file or do any copy/pasting.
GitHub has a URL scheme where you can go to an "Edit" page for a file, make your changes, and then create a PR or create a commit (depending on what you would like to do and your permissions). This is an example:
https://github.com/rails/rails/edit/main/README.md
Looking at the HTML for the form I see that some of the fields have names associated. Using those names I can auto-fill the commit title and description:
https://github.com/rails/rails/edit/main/README.md?message=foo&description=bar
But I can't find a way to automatically fill in/replace the actual contents of the file. Is there a way?
I realize that for some browsers URLs can only be so long (maybe that's not true anymore?), so maybe this isn't perfect. I'd also be open to other suggestions on how to accomplish what I'm looking for.
Don't try to do this via web scraping; it's fragile and slow. Use the GitHub API.
Specifically, you'd get access via OAuth, get the file, let the user edit it, and then send the edited version.
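A rough sketch of that flow against the GitHub REST contents API (the token would come from your OAuth flow; owner, repo, and path are placeholders):

```typescript
// Sketch: read a file, then commit an edited version, via the contents API.
const API = "https://api.github.com";

async function getFile(token: string, owner: string, repo: string, path: string) {
  const res = await fetch(`${API}/repos/${owner}/${repo}/contents/${path}`, {
    headers: { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" },
  });
  const body = await res.json();
  // The API returns base64-encoded content plus the blob SHA;
  // the SHA is required when committing an update to the same file.
  return { text: atob(body.content.replace(/\n/g, "")), sha: body.sha as string };
}

async function putFile(token: string, owner: string, repo: string, path: string,
                       newText: string, sha: string, message: string) {
  await fetch(`${API}/repos/${owner}/${repo}/contents/${path}`, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" },
    // btoa/atob assume ASCII content; real code would handle UTF-8 properly.
    body: JSON.stringify({ message, content: btoa(newText), sha }),
  });
}
```

The edit itself happens entirely inside your app between the two calls, so the user never saves or pastes anything.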
There is no way to do exactly what you want. The ideal tool for this job is an OAuth App. However, creating one with the GitHub API requires that you store a client ID and a client secret, and there is no secure way to store the client secret in a frontend-only app.
Therefore, you'll need to create a backend for the OAuth app so that you can issue the credentials necessary to use the API on behalf of the user, or push data into the repository via the standard protocols.
As Schwern mentioned, you should not try to do this by driving the GitHub web interface. That isn't a stable interface and may break at any time.
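The backend piece is small; here's a sketch of the code-for-token exchange (the client ID and secret are placeholders, and the secret must stay server-side):

```typescript
// Sketch: server-side exchange of the OAuth "code" for an access token.
// CLIENT_ID and CLIENT_SECRET are placeholders; the secret never ships
// to the browser, which is the whole reason this runs on a backend.
const CLIENT_ID = "your-client-id";
const CLIENT_SECRET = "your-client-secret";

async function exchangeCodeForToken(code: string): Promise<string> {
  const res = await fetch("https://github.com/login/oauth/access_token", {
    method: "POST",
    headers: { Accept: "application/json" }, // ask GitHub for JSON back
    body: new URLSearchParams({
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
      code,
    }),
  });
  const body = await res.json();
  return body.access_token;
}
```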

Storing semi-persistent data for a SPA

I have a SPA website set up with a REST API. One of the issues I have come up against is how to access data unrelated to the resource being requested. Specifically, accessing the users account details so I can do things like hide admin actions when the user is not an admin.
I have an API endpoint that can get all this data, so the simplest solution would be to request the profile API on every page load, alongside the resource actually being requested. This seems a little wasteful, though: constantly re-requesting a resource that rarely changes.
Another option would be to store this data in local storage so it only has to be requested once, but then I have the issue where the user updates their username or settings on one device and the other one is left with outdated data in local storage.
I thought Vuex might be able to persist this data when clicking on links and navigating to different pages, but it seems like the data gets lost when the page changes.
From what I understand, GraphQL would solve this problem by allowing the initial request to get the user data along with the other data. I'm not really sure how much more efficient this is than two requests, and rewriting my whole API probably isn't the best solution to this.
Are there any well known solutions to this problem or is one of the options I have come up with the best way to handle this?
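For reference, one common middle ground between the first two options is stale-while-revalidate: render immediately from the cached copy, then refresh it in the background so changes made on another device catch up on the next load. A minimal sketch (the /api/profile endpoint and Profile shape are hypothetical):

```typescript
// Sketch: stale-while-revalidate for the profile. Render from the cached
// copy instantly, then refresh from the API and update if it changed.
interface Profile {
  username: string;
  isAdmin: boolean;
}

function getCachedProfile(): Profile | null {
  const raw = localStorage.getItem("profile");
  return raw ? (JSON.parse(raw) as Profile) : null;
}

async function refreshProfile(onUpdate: (p: Profile) => void): Promise<void> {
  const res = await fetch("/api/profile");
  if (res.ok) {
    const fresh = (await res.json()) as Profile;
    localStorage.setItem("profile", JSON.stringify(fresh));
    onUpdate(fresh); // re-render if the cached copy was stale
  }
}

// On app start: instant UI from cache, background revalidation.
const cached = getCachedProfile();
if (cached) console.log("rendering with cached profile", cached);
refreshProfile((fresh) => console.log("profile refreshed", fresh));
```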

How do I see in Google Analytics which specific Facebook posts are bringing me traffic?

I would like to see in Analytics the specific Facebook posts that are bringing traffic to the site. The 'Referrals' report doesn't provide this info; it only shows how much traffic came from Facebook.com and m.facebook.com.
You can do this with the help of UTM Parameters.
What Makes Up a UTM Link?
Here’s my link for a test post:
www.yourwebsite.com?utm_source=facebook&utm_medium=social&utm_campaign=post_name&utm_content=post_content
Now let’s break this link down, to understand what each metric means, and what it corresponds to in Google Analytics.
First, there is the “utm_source” value, which translates to the “source” dimension in Google Analytics. This is where traffic is coming from; you can name it whatever you want. In this example, my source is “facebook”.
Equally as important is the medium, specifically “utm_medium,” which tells me what type of traffic this is. As this is from social media, I have named it social.
Easily the most important part of the link — “utm_campaign” — is the name of what you’re tracking, for example, “Summer Promotion.” Think of this as another way to roll up all the different posts and sources to see higher-level insights.
The next two metrics — “utm_term” and “utm_content” — are both optional and interchangeable. It’s about personal preference and how granular you want to get with your analysis.
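If you generate these links in code, a small helper keeps the parameter names consistent and handles encoding; a sketch:

```typescript
// Sketch: build a tagged URL with URLSearchParams so values are
// encoded correctly. Parameter values mirror the example link above.
function buildUtmUrl(
  base: string,
  params: { source: string; medium: string; campaign: string; content?: string; term?: string },
): string {
  const query = new URLSearchParams({
    utm_source: params.source,
    utm_medium: params.medium,
    utm_campaign: params.campaign,
  });
  if (params.content) query.set("utm_content", params.content);
  if (params.term) query.set("utm_term", params.term);
  return `${base}?${query}`;
}

console.log(buildUtmUrl("https://www.yourwebsite.com", {
  source: "facebook",
  medium: "social",
  campaign: "post_name",
  content: "post_content",
}));
// -> https://www.yourwebsite.com?utm_source=facebook&utm_medium=social&utm_campaign=post_name&utm_content=post_content
```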
If you don't want to build the URLs manually, you can use the Campaign URL Builder from Google.
Make sure you paste the full URL into the URL Builder so that it generates the correct link, and make sure you paste the whole URL it creates, with all UTM parameters, into your Facebook post.
Hope this helps.

DotNetNuke redirect

Our client needs shortcuts to particular pages.
We need to redirect non-existent URLs like
http://site.com/promotion1
to the actual URL similar to
http://site.com/promotions/promotion1/tabid/799/language/en-AU/Default.aspx
...
I've sent a list of appropriate DNN modules to our client, but it may take them forever to get back to me.
In the meantime they're still submitting requests to us to create redirects for them.
If there's no cost involved, then I won't have to wait for them to get back to me.
So I'm looking for a quick and free way to enable the client to set these up on their own.
I've looked at:
MAS.ActionRedirect
Ventrian Friendly URL Provider
DotNetNuke URL Rewriting HTTP Module
But I haven't had much luck in the small amount of time I have available.
Has anyone got some suggestions on how to achieve our goal with either the above resources or maybe some additional resource i haven't found yet?
(DNN v4.9)
You should be able to use the built-in friendly URL functionality within DNN, or use a URL rewriter module within IIS.
You can read my answer about using the DNN Friendly URL functionality for more details, or look into the IIS URL Rewrite module.
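If you go the IIS route, a rule along these lines in web.config would cover the example from the question. This is a sketch of the URL Rewrite module's rule syntax, not DNN-specific advice; the rule name is made up:

```xml
<!-- Sketch: IIS URL Rewrite rule mapping the short URL from the question
     to the full DNN page. Requires the URL Rewrite module (IIS 7+). -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Promotion1Shortcut" stopProcessing="true">
          <match url="^promotion1$" />
          <action type="Redirect"
                  url="/promotions/promotion1/tabid/799/language/en-AU/Default.aspx"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

Each shortcut is one rule entry, though letting a client hand-edit web.config carries its own risks.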

iPhone web PUT and GET

All,
I need to create an app for work that signs into our website using SSL and returns our member information.
I have figured out how to log in, but am not sure how to find the ID tags that I want to bring into my app and store to be shown in a table view.
I know how to do these three things:
Put in username and password,
authenticate against website,
create session cookie.
I'm not sure how to do these things:
Get information about the member (i.e., how long they've been a member, sustaining member, etc.) from the website, knowing the tags for these fields on the site.
Store the data (Core Data? or a flat file). Shouldn't be that much information.
Format and present data in table view.
Lots of information here about getting files or whole websites, but not so much about picking information off websites for concise viewing.
Thanks.
If your company's site is designed to provide this information through a web service, then it should be as simple as constructing your request URLs appropriately. If it has not been designed to interact with anything but humans, then you're probably going to have to do a great deal of work parsing HTML which no one can really help you with unless said site is publicly accessible.
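For illustration, here's what the "constructing request URLs appropriately" half looks like against a hypothetical JSON endpoint. Shown in TypeScript for brevity; the same request-then-decode flow maps directly onto the iOS networking APIs. The URL, parameters, and field names are all made up and would come from however the company's web service is actually defined:

```typescript
// Sketch: query a hypothetical member-info endpoint and decode the JSON.
interface MemberInfo {
  memberSince: string;
  sustaining: boolean;
}

async function fetchMemberInfo(memberId: string, sessionCookie: string): Promise<MemberInfo> {
  const url = `https://example.com/api/members/${encodeURIComponent(memberId)}`;
  const res = await fetch(url, {
    // Native HTTP clients let you attach the session cookie from your
    // earlier login explicitly (browsers manage cookies automatically).
    headers: { Cookie: sessionCookie },
  });
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status}`);
  }
  return res.json();
}
```

Each decoded record can then be persisted (Core Data or a flat file, as you say, given the small volume) and handed to the table view.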
Web Services should work fine with our website.