Nginx Proxy for GitHub Pages

Nginx is my favorite web server. I find it much easier to configure and use than Apache. If you believe the hype, it’s also faster and consumes fewer resources. That’s really not the point of this article, however.

The point of this article is to show you how to configure Nginx on your server to proxy static content that is actually served by GitHub Pages.


  1. Create a pages repo
  2. Clone it to your local computer
  3. Check in your static files
  4. Set up your Nginx configuration file
  5. git commit -am"..." && git push
  6. Profit!
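The steps above can be sketched end-to-end. Since I can't push to a real GitHub account here, a local bare repository stands in for GitHub; with a real account, the remote would be your own pages repo instead.

```shell
#!/bin/sh
# Steps 1-5 above, sketched locally.  A bare repo stands in for GitHub.
set -e
rm -rf /tmp/pages-remote.git /tmp/pages
cd /tmp

git init -q --bare pages-remote.git        # step 1: create a pages repo (stand-in)
git clone -q pages-remote.git pages        # step 2: clone it to your local computer
cd pages
echo '<h1>hello</h1>' > index.html         # step 3: check in your static files
git add index.html
git -c user.name=demo -c user.email=demo@example.com \
    commit -qm "first page"
git push -q origin HEAD                    # step 5: push -- GitHub would now serve it
```

Step 4 (the Nginx configuration) lives on your server, not in the repo; it's covered below.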

GitHub Pages

GitHub allows users to store static web content (HTML, CSS, and JS) in specific repositories. The only real caveat is that you must have an index.html in your repository root. Of course, your files will also be public if you are using the free service. This service is called GitHub Pages. Here is this blog's repository.
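Since a missing index.html is the one caveat that will silently break your site, a tiny pre-push check is handy. This helper is my own naming, not part of GitHub's tooling:

```shell
#!/bin/sh
# Sanity-check that a pages folder has index.html at its root
# (GitHub Pages requires it).  check_pages is an illustrative helper.
check_pages() {
    if [ -f "$1/index.html" ]; then
        echo "ok: $1/index.html found"
    else
        echo "error: $1 is missing index.html" >&2
        return 1
    fi
}

# demo against a throwaway folder
rm -rf /tmp/demo-pages
mkdir -p /tmp/demo-pages
touch /tmp/demo-pages/index.html
check_pages /tmp/demo-pages
```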


There are lots of ways to create your site. I use Hugo to generate the blog out of easy-to-write Markdown files. Nice and relatively simple. However you do it, you’ll need to be able to generate static content.


GitHub will serve up your pages to the web world just fine. For this blog, you can browse it at the github.io URL. It looks the same as the custom domain, with the exception of the SSL key. If you want users to go to your custom URL, and you want to manage your own SSL keys, you’ll need to set up an Nginx proxy.

NOTE: If you don’t care about SSL and only want a custom domain, read this instead:

Since your content is hosted on GitHub, you only need an Nginx configuration file that proxies requests to it. Here’s the configuration that I use (without the SSL setup):

server {
    listen 443 ssl spdy;
    server_name www.example.com;    # replace with your custom domain

    # ssl_certificate / ssl_certificate_key directives go here (omitted)

    # The site is actually hosted on GitHub Pages.  Using this proxy location
    # allows us to secure the connection with our own SSL keys, instead of the
    # generic SSL keys.
    location / {
        proxy_intercept_errors on;

        # allow GitHub to pass caching headers instead of using our own
        expires off;

        # hand everything off to the GitHub Pages site (use your own username)
        proxy_pass https://username.github.io/;
    }
}

The magic happens when a user navigates to your custom domain. When that happens, the location / block matches, and all of the content is silently served from GitHub Pages. How cool is that?

Users will never know (nor will they care).


This might all seem a bit silly. Really, if you already have a web server running Nginx, why set up the proxy in the first place? Why not just store your content on the server? Well, the way I see it, this method has the following benefits:

  1. git commit -am"..." && git push is all that’s needed to update your site
  2. Your content won’t take up any space on your web server
  3. You get to control your content entirely

Only you can decide if it’s worth it. It is for me. I can move the content off of GitHub at any time, I can create new SSL keys as needed, and all I need to do to update the site is push the new pages back up to GitHub.

Folder Structure

As I’ve mentioned before, I use Hugo to generate the static HTML files that are served by GitHub. That development project is, in itself, a separate repository. I started off with everything in the same repo, but it wasn’t quite right: my development files were in a folder called dev, and the generated HTML files were in the repo root. I decided to make things a little more complex in order to keep the static HTML file repo cleaner.

I currently have two projects: 1) My development files; 2) My GitHub pages static HTML files. The trick here is that my GitHub pages repo is a subtree located inside my other repo. It looks like this:
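Something like this (folder names other than mtik00-pages/, which is described below, are illustrative):

```
blog/                  <- development repo root
├── config.toml        <- hugo configuration
├── content/           <- markdown sources for posts
├── themes/
└── mtik00-pages/      <- GitHub Pages repo, mounted as a subtree
    ├── index.html
    ├── css/
    └── js/
```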


The base folder contains my development environment (batch files, python scripts, hugo binaries, etc). The only thing mtik00-pages/ contains is HTML/CSS/JS files. My normal process goes like this:

  1. hugo new content/post
  2. edit the post in SublimeText2
  3. commit the new content: git add . && git commit -am"adding new post"
  4. build the static pages: hugo -d "mtik00-pages"
  5. deploy the static pages: cd mtik00-pages && git add -A . && git commit -am"new pages" && git push
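Steps 3 through 5 can be wrapped into one helper. This is my own sketch, not part of the post's tooling; the push is commented out so the demo runs against a throwaway local repo instead of GitHub:

```shell
#!/bin/sh
# deploy_pages: commit everything in the pages folder and (optionally) push.
set -e
deploy_pages() {
    cd "$1"
    git add -A .
    git -c user.name=demo -c user.email=demo@example.com \
        commit -m "new pages" || echo "nothing to commit"
    # git push    # enable this for the real GitHub Pages repo
}

# demo against a throwaway repo
rm -rf /tmp/pages-demo
mkdir -p /tmp/pages-demo
cd /tmp/pages-demo && git init -q .
echo '<h1>post</h1>' > index.html
deploy_pages /tmp/pages-demo
```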