
Building a Static Generated Blog with Next.js

When I set out to rebuild my personal website, I had two criteria: it needed to be fast, and it needed to be as simple as possible, meaning no complicated backend. In this post I'm going to go over how I built this website and how it works under the hood - from the JavaScript frameworks I use, to performance optimization, to SEO. I'll include examples and walk through how I did things, but this is meant to be more of a reflection on web optimization than a step-by-step tutorial.

  • JavaScript Frameworks - Third-party frameworks I rely on for the functioning of my website.
  • Content Management - Where I write blog posts and how I organize media content.
  • Deployment - Hosting my website on Vercel, and how new content gets deployed.
  • Performance and SEO - The steps I took to make my website as fast as possible, and publicly discoverable.

JavaScript Frameworks

This website is built using Next.js and hosted on Vercel. I've been using Next.js since its inception for a number of reasons, but the biggest is the ease with which I can set up statically generated pages - meaning each blog post of mine is pre-generated at build time, and requires zero API calls on the client side to fetch its data.
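For context, static generation in Next.js boils down to two functions exported from a page. Here's a minimal sketch of what a blog post page can look like - getAllPostSlugs and getPostBySlug are hypothetical stand-ins for however you load your content:

// pages/blog/[slug].tsx - a minimal sketch of a statically generated post page.
// getAllPostSlugs and getPostBySlug are hypothetical content-loading helpers.
import { GetStaticPaths, GetStaticProps } from 'next';
import { getAllPostSlugs, getPostBySlug } from '../../lib/posts';

type Post = { title: string; html: string };

export const getStaticPaths: GetStaticPaths = async () => ({
  // Tell Next.js every post slug up front so each page is built ahead of time.
  paths: getAllPostSlugs().map((slug: string) => ({ params: { slug } })),
  fallback: false,
});

export const getStaticProps: GetStaticProps = async ({ params }) => ({
  // Runs only at build time - the browser receives fully rendered HTML.
  props: { post: getPostBySlug(params!.slug as string) },
});

export default function BlogPost({ post }: { post: Post }) {
  return <article dangerouslySetInnerHTML={{ __html: post.html }} />;
}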

Next.js works hand-in-hand with React, so I didn't need to learn any additional frameworks. My site is all built with TypeScript, which Next.js supports out of the box. I can't imagine building a website with vanilla JavaScript any more after getting comfortable with TypeScript and the safety it provides. Getting started with Next.js is super easy, so I love using it for quickly throwing together projects, and I know it will scale as my project does.

npx create-next-app@latest --typescript

This is all you need to create a working boilerplate app!

Other than these two main frameworks, I use very little third-party JavaScript in my frontend. At one point I used Luxon for my date formatting, but that wasn't necessary; a little helper like this suffices and saved me a bit of bundle size:

// Typing the options as Intl.DateTimeFormatOptions avoids the need for a cast.
const options: Intl.DateTimeFormatOptions = {
    year: 'numeric',
    month: 'long',
    day: 'numeric',
};

// Formats a millisecond timestamp like "October 18, 2021".
export const formatDate = (ts: number) =>
    new Date(ts).toLocaleDateString('en-US', options);

Content Management

My website's landing page is not managed by a CMS - I have no problem going in and editing code when I want to change the home page text. However, this would be cumbersome and frustrating if I was trying to write blog articles in VS Code. Instead, all of my posts are stored as markdown and committed directly to the repository. Since Markdown is a standard format, I am confident that I won't ever need to re-format or change my content if I ever change my site infrastructure or content management system.

---
published: 2021-10-18T18:19:26.000+00:00
tags:
  - fitness
  - hobby
live: true
title: A Look Into My Connected Fitness Tools
summary: How I spent the last 2 years building a fitness regimen around "smart" fitness technology
cover_image: '/uploads/dsc05878.JPG'

---
In early 2020, right as things started to shut down due to COVID-19, I decided I wanted
to get back into shape and figured with all the lockdowns it was a good time to start.
## The Ecosystem

Here's an example markdown blog post from my previous piece on connected fitness.
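Parsing a file like this at build time is just a filesystem read plus a front matter parse. Here's a minimal sketch using the gray-matter package - the posts/ directory and the Post shape are assumptions for illustration:

// lib/posts.ts - a sketch of loading a markdown post and its front matter.
// The posts/ directory and the Post shape are illustrative assumptions.
import fs from 'fs';
import path from 'path';
import matter from 'gray-matter';

export type Post = {
  title: string;
  summary: string;
  tags: string[];
  live: boolean;
  published: string;
  content: string; // the raw markdown body
};

export const getPostBySlug = (slug: string): Post => {
  const raw = fs.readFileSync(path.join('posts', `${slug}.md`), 'utf8');
  // gray-matter splits the YAML front matter from the markdown body.
  const { data, content } = matter(raw);
  return { ...(data as Omit<Post, 'content'>), content };
};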

Of course, I want to edit these with a WYSIWYG editor, not my code editor. I also want to be able to edit the documents from anywhere, not just when I'm on my laptop with my blog repo cloned. This is where I use Forestry. Forestry takes my markdown files and presents them in a nice structured editor - and it manages my media too!

Here's that same blog post, but viewed from Forestry instead of the markdown. Much better, right?

Forestry is a git-backed CMS, meaning every change I make is committed directly to my git repository. This way, my content is all version controlled, but more importantly, I own and control all the content and metadata. My site is self-contained and can be deployed to any static host. I also get automatic deployments when I make changes, as my site re-deploys on push to master.

Deployment

The site is running on Vercel. It's deployed straight from my git repo on GitHub, so there are no extra steps to take after I make a change. A huge advantage of Vercel is preview builds - if I open a pull request against my repo, Vercel will build that pull request at a unique URL so I can see the changes in a real production environment.

One of my favorite features is continuous performance monitoring. Right on my dashboard, I can see my sites and an overview of their performance based on real traffic from readers!

At the time of writing, these were my performance metrics. My real experience score is at 100, so there's not much optimization left to do! 🥳

Performance and SEO

I didn't get to a performance score of 100 without a little work (although Next.js really did a lot of the heavy lifting). Here are a few of the tricks I used to squeeze out every last bit of bloat and make my site run as fast as possible:

Images

Images can really hurt performance, and my blog is pretty image heavy - nobody wants to read just a wall of text. This means I need to be extra careful that my images are as small as possible, without sacrificing quality. This website uses two approaches.

First, I use the next/image component as much as possible. This is a component provided by Next.js that does a ton of image optimization, leveraging Next.js's ability to manage both frontend and backend code. Essentially, the component creates an image API for my site that takes the image I give it and processes it into WebP or AVIF; I have this site set to use AVIF. However, I can't use this fancy component for my actual blog content, since I render that by converting markdown into HTML. I could walk through the HTML node by node and convert it to React nodes, but that would require a lot more complexity (not to mention maintenance) than I would like.
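For reference, opting into AVIF is a small config change - since Next.js 12, the image optimizer accepts a list of formats in next.config.js. A minimal sketch:

// next.config.js - a sketch of opting the built-in image optimizer into AVIF.
// Next.js falls back to WebP for browsers without AVIF support.
module.exports = {
  images: {
    formats: ['image/avif', 'image/webp'],
  },
};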

That leads to my second image optimization: using <picture> tags to serve more efficient WebP images. This tag can have multiple sources, which lets me serve WebP images to browsers that support them, and fall back on a more common png or jpg when needed (looking at you, Safari). To do this, I do a bit of preprocessing on the article HTML. Note that this happens on the server when the site is generated, so it has absolutely no performance impact for the user.

// Replaces a markdown-rendered <img> with a <picture> element that serves
// a WebP source first, falling back to the original png/jpg.
export const convertImageForExtension = (
  document: Document,
  image: HTMLImageElement,
  extension: string
) => {
  const mimeType = extension === "png" ? "png" : "jpeg";
  const src = image.getAttribute("src");
  const alt = image.getAttribute("alt");
  const title = image.getAttribute("title");
  // Only process locally uploaded images of the target extension.
  if (src?.includes("/uploads") && src?.includes(`.${extension}`)) {
    const picture = document.createElement("picture");
    const newSource = document.createElement("source");
    const initialSource = document.createElement("source");
    const initialImg = document.createElement("img");

    // Browsers pick the first <source> they support, so WebP goes first.
    newSource.setAttribute("type", "image/webp");
    newSource.setAttribute("srcset", src.replace(`.${extension}`, ".webp"));
    initialSource.setAttribute("type", `image/${mimeType}`);
    initialSource.setAttribute("srcset", src);
    initialImg.setAttribute("src", src);
    initialImg.setAttribute("alt", alt ?? "");
    initialImg.setAttribute("title", title ?? "");

    picture.appendChild(newSource);
    picture.appendChild(initialSource);
    picture.appendChild(initialImg);
    image.replaceWith(picture);
  }
};

This is a bunch of boring DOM manipulation, performed against a virtual DOM created by jsdom. Its job is to take an <img> element and convert it into a <picture> element, with <source> tags representing the different file formats. The end result looks like this:

<picture>
  <source type="image/webp" srcset="/uploads/forestry.webp" />
  <source type="image/png" srcset="/uploads/forestry.png" />
  <img src="/uploads/forestry.png" alt="" title="" class="medium-zoom-image" />
</picture>
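The glue that runs this conversion over a rendered article looks roughly like the following - a sketch, since the exact wiring in my build differs:

// A sketch of running the conversion over a rendered article with jsdom.
// convertImageForExtension is the function above; the wiring is illustrative.
import { JSDOM } from "jsdom";

export const convertArticleImages = (html: string): string => {
  const dom = new JSDOM(html);
  const { document } = dom.window;
  // querySelectorAll returns a static list, so replacing nodes mid-loop is safe.
  document.querySelectorAll("img").forEach((img) => {
    convertImageForExtension(document, img, "png");
    convertImageForExtension(document, img, "jpg");
  });
  return document.body.innerHTML;
};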

But wait! Where do the WebP images come from? I'm glad you asked! I have a GitHub Action that handles all my image optimization on commit. It takes any large images (> 2000px wide) and scales them down, then converts them to WebP files and commits them back to the repository. This way, all the files necessary to generate the site are right there in my repo! You can see the full GitHub Action I use in this GitHub Gist.
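The heart of an action like that is a small Node script. The real one lives in the gist; a sketch of the resize-and-convert step using the sharp library (an assumed stand-in here, along with the uploads path) might look like this:

// scripts/optimize-images.ts - a sketch of the resize-and-convert step.
// sharp and the public/uploads path are assumptions made for illustration.
import fs from "fs";
import path from "path";
import sharp from "sharp";

const UPLOADS = "public/uploads";
const MAX_WIDTH = 2000;

const run = async () => {
  const files = fs.readdirSync(UPLOADS).filter((f) => /\.(png|jpe?g)$/i.test(f));
  for (const file of files) {
    const input = path.join(UPLOADS, file);
    const { width = 0 } = await sharp(input).metadata();
    // Scale down anything wider than 2000px, writing over the original.
    if (width > MAX_WIDTH) {
      const resized = await sharp(input).resize({ width: MAX_WIDTH }).toBuffer();
      fs.writeFileSync(input, resized);
    }
    // Emit a WebP sibling for the <picture> sources.
    const webpPath = input.replace(/\.(png|jpe?g)$/i, ".webp");
    await sharp(input).webp().toFile(webpPath);
  }
};

run();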

Search Engine Optimization

When I built the site, I had this blog in mind. I wanted to start writing more content about things I care about, and to make sure it was easily discoverable. To do this, I leverage the next/head component to set all sorts of meta tags, from basic content descriptions to Twitter social sharing cards. I also rely on server-side rendering to generate a sitemap.xml file for better search engine indexing. This file is generated dynamically from my content as I publish new posts.
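The sitemap trick is a common Next.js pattern: a page that renders nothing, but writes XML to the response in getServerSideProps. A minimal sketch, where getAllPosts and the domain are placeholders:

// pages/sitemap.xml.tsx - a sketch of serving a dynamically generated sitemap.
// getAllPosts and the example.com domain are placeholders.
import { GetServerSideProps } from "next";
import { getAllPosts } from "../lib/posts";

// The component never renders; the XML is written to the response below.
const Sitemap = () => null;

export const getServerSideProps: GetServerSideProps = async ({ res }) => {
  const urls = getAllPosts()
    .map((post) => `<url><loc>https://example.com/blog/${post.slug}</loc></url>`)
    .join("");
  res.setHeader("Content-Type", "text/xml");
  res.write(
    `<?xml version="1.0" encoding="UTF-8"?>` +
      `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">${urls}</urlset>`
  );
  res.end();
  return { props: {} };
};

export default Sitemap;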

It's important my blog posts have rich embeds on social media - here's an example of how it looks on Twitter and LinkedIn.
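Those rich embeds come from Open Graph and Twitter meta tags in the document head. A sketch of the kind of component I mean, using next/head - the prop shape and tag set are illustrative, not exhaustive:

// components/Seo.tsx - a sketch of setting social sharing meta tags.
import Head from "next/head";

type SeoProps = { title: string; description: string; image: string };

export const Seo = ({ title, description, image }: SeoProps) => (
  <Head>
    <title>{title}</title>
    <meta name="description" content={description} />
    {/* Open Graph tags drive LinkedIn and most other embeds. */}
    <meta property="og:title" content={title} />
    <meta property="og:description" content={description} />
    <meta property="og:image" content={image} />
    {/* Twitter-specific card tags. */}
    <meta name="twitter:card" content="summary_large_image" />
    <meta name="twitter:title" content={title} />
    <meta name="twitter:description" content={description} />
    <meta name="twitter:image" content={image} />
  </Head>
);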

Conclusions

It was a fun challenge to try to build the most performant website I could! I learned a lot about techniques for optimizing page loading, serving modern image formats, and continuous performance monitoring.

Let me know any thoughts or questions you have over on Twitter, and thanks for reading!