
Deploying Jekyll


Jekyll is great for running blogs and websites, but you need to go the extra mile to optimize it for SEO and Google PageSpeed. Other static site generators like Middleman already offer some of the features that Jekyll users need to handcraft.

I am using Amazon S3 and Amazon CloudFront to distribute my content. This gives me reliable hosting for a low price and high-speed distribution through a CDN.

Uploading the results of "jekyll build" was not enough for me. I wanted Google PageSpeed to show green bars.

To achieve that, I decided to run some additional pre-deploy tasks on the generated _site folder. My build tool of choice is Gulp. You can use any other build tool, like Grunt, but I found Gulp very easy to understand.

npm needs to be installed and available on your PATH.

In addition, I installed the AWS CLI and prepared a dedicated AWS IAM profile that allows me to modify S3 buckets and CloudFront distributions.

First I created a package.json file with some dependencies for my build:

{
    "name" : "grobmeier.de",
    "version" : "1.0.0",
    "dependencies" : {
        "gulp" : "^3.9.0",
        "gulp-sass" : "2.0.1",
        "gulp-concat" : "2.5.2",
        "gulp-rename" : "^1.2.2",
        "gulp-minify-html" : "^1.0.4",
        "gulp-minify-inline" : "^0.1.1",
        "gulp-yuicompressor" : "^0.0.3"
    },
    "private": true
}

As you can see, I added Gulp and some Gulp plugins to my dependency definition file. You can install them as usual with "npm install". This creates a "node_modules" folder, which you can ignore in Git by adding a line to .gitignore.
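The task snippets in the rest of this post reference a few module variables (sass, concat, compress, rename, minifyHTML, minifyInlineJs). A gulpfile.js preamble along these lines wires them up; the local variable names are simply what the snippets assume, not anything the packages prescribe:

```javascript
// gulpfile.js — module preamble assumed by the task snippets below.
var gulp           = require('gulp');
var sass           = require('gulp-sass');
var concat         = require('gulp-concat');
var rename         = require('gulp-rename');
var compress       = require('gulp-yuicompressor');   // used as compress({type: 'css'})
var minifyHTML     = require('gulp-minify-html');
var minifyInlineJs = require('gulp-minify-inline');   // local name only; the module minifies inline JS/CSS
```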

Working with CSS

First, I wrote two build tasks to package my SCSS/CSS files. I avoided Jekyll's built-in SCSS support because I needed more control over what happens in my build.

First, I run Sass on my SCSS files and put the result into my _includes folder. This folder is the only place where Jekyll searches for include files. In many cases it is better to inline your CSS instead of firing another HTTP request to load the necessary stylesheets.

gulp.task('sass', function () {
    return gulp.src('./_scss/*.scss')
        .pipe(sass().on('error', sass.logError))
        .pipe(gulp.dest('_includes/generated/'));
});

Second, I concatenate my generated CSS with third-party CSS, like the files provided by Jekyll or grid frameworks.

Please note, this step also compresses the CSS. The modern-browser package is written to the include folder so it can be inlined, while the old-IE package is written to the css folder to be served as a regular file.

gulp.task('package-css', ['sass'], function () {
    gulp.src([
        '_scss/purecss-grids-responsive-0.6.0.min.css',
        '_scss/syntax.css',
        '_includes/generated/main.css'
    ])
        .pipe(concat('package-modern.css'))
        .pipe(compress({type:'css'}))
        .pipe(rename({ extname: '.min.css' }))
        .pipe(gulp.dest('_includes/generated/'));

    gulp.src([
        './_scss/purecss-grids-responsive-old-ie-0.6.0.min.css',
        '_scss/syntax.css',
        '_includes/generated/main.css'
    ])
        .pipe(concat('package-old-ie.css'))
        .pipe(compress({type:'css'}))
        .pipe(rename({ extname: '.min.css' }))
        .pipe(gulp.dest('./css'));
});

Note that the task above creates two files. The one for modern browsers is served inline; older browsers have to load their own CSS file. I make this decision with IE conditional comments.

It will look like this:

<!--[if lte IE 8]>
<link rel="stylesheet" href="/css/package-old-ie.min.css">
<![endif]-->
<!--[if gt IE 8]><!-->
<style>\{\% include generated/package-modern.min.css \%\}</style>
<!--<![endif]-->

On a side note: consider a small grid framework (like PureCSS). Inline CSS is nice, but you should still be concerned about size.

Another side note: apologies for the weird escaping of the Liquid tag. Unfortunately, it is not trivial to show Liquid tags in blog posts. I envy Middleman users with their plain ERB syntax.

Tips for developing

Of course, I would have to run "gulp package-css" after each change. To make this easier while developing, I created a "watch" task:

gulp.task('watch', ['build'], function() {
    gulp.watch('_scss/*.scss', ['package-css']);
});

I run two separate terminal sessions when coding:

$> jekyll serve -w
$> gulp watch

Deployment preparations

When deployment time comes, I stop both processes, remove the _site folder and run "jekyll build". On top of that, I execute a new task I call "pre-deploy":

gulp.task('pre-deploy', function () {
  var opts = {
    conditionals: true,
    spare:true,
    empty : true,
    quotes: true,
    cdata: true
  };

  return gulp.src('./_site/*.html')
    .pipe(minifyHTML(opts))
    .pipe(minifyInlineJs())
    .pipe(gulp.dest('./_site/'));
});

The pre-deploy task minifies my HTML and any inline JS I might have. The original sources are overwritten, but that is not an issue, as the whole folder is generated anyway.

Uploading your website to S3

Once this task has run, I am ready to upload. I use a shell script for that, similar to the one I demonstrated in my blog post "Autodeploy Jekyll".

Here we go:

#!/bin/sh
SITE_DIR='_site/'

rm -rf ${SITE_DIR}

gulp build
jekyll build
gulp pre-deploy

The first part of the script performs all of the steps above as a clean build.

Now we come to the more Amazon related things:

BUCKET=s3://www.grobmeier.de
STANDARD="--delete --profile webdeploy --region eu-west-1 --acl public-read"

This is some basic configuration. The STANDARD variable holds the options I need for every S3 operation. Most important is the profile setting, which authenticates me using my webdeploy profile.

find ${SITE_DIR} \( -iname '*.html' -o -iname '*.css' -o -iname '*.js' \) -exec gzip -9 -n {} \; -exec mv {}.gz {} \;

After the definitions, I gzip the HTML, CSS and JS content. I tried doing this with Gulp, but it turned out easier on the shell. The -n flag keeps the original file name and timestamp out of the gzip header, so repeated builds produce identical output.
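The gzip-then-rename trick is easy to try on a single throwaway file first. This sketch (using a temp directory) shows that the compressed data ends up back under the original file name:

```shell
# Demonstrate the gzip-in-place pattern on one throwaway file.
tmp=$(mktemp -d)
echo '<html>hello</html>' > "$tmp/index.html"

gzip -9 -n "$tmp/index.html"                # produces index.html.gz, removes the original
mv "$tmp/index.html.gz" "$tmp/index.html"   # restore the original name

# The file is now gzip data but keeps its .html name:
gunzip -c "$tmp/index.html"
rm -rf "$tmp"
```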

# Upload HTML files
aws s3 sync _site/ ${BUCKET} --exclude "*" --include "*.html" --content-encoding "gzip" ${STANDARD}

# Upload Images
aws s3 sync _site/ ${BUCKET} --exclude "*" --include "img/*" --cache-control "max-age=604801" ${STANDARD}

# Upload CSS
aws s3 sync _site/ ${BUCKET} --exclude "*" --include "css/*" --cache-control "max-age=604801" --content-encoding "gzip" ${STANDARD}

Now the time for the upload has come. This is an excerpt of what I do. HTML is uploaded with the content encoding set to gzip, since that is how the files were compressed. Please note the exclude and include definitions: excluding everything first and then including a pattern restricts the sync to the matching files.

Images get a special cache-control header. A long cache time makes Google PageSpeed very happy.

Then I upload some CSS, specifically for older browsers. PageSpeed won't complain, as we serve the modern CSS inline. People using outdated browsers are most likely in trouble anyway, but it doesn't hurt to help them along by uploading the CSS gzipped and with a cache-control header. If a browser is so old that even gzip is not supported... well, those people most likely don't visit programming websites.

Refreshing the CloudFront CDN

Finally, I need to notify the CloudFront CDN of my changes. Using the new AWS CLI features, this is painless.

First, you need to configure the AWS CLI so it actually accepts CloudFront commands. As of today, the CloudFront calls are considered beta.

aws configure set preview.cloudfront true

Then you need to create an invalidation JSON telling CloudFront exactly what you want to invalidate and remove from its edge locations. You also need a unique caller reference; I use the current Unix timestamp.

INVALIDATION_ID=$(date +"%s")  # Unix timestamp; note the lowercase s (%S would only give the current second)
INVALIDATION_JSON="{
    \"DistributionId\": \"YOUR_DISTRIBUTION_ID\",
    \"InvalidationBatch\": {
        \"Paths\": {
            \"Quantity\": 6,
            \"Items\": [
                \"/feed\",
                \"/en/all-posts.html\",
                \"/en/all-posts-by-date.html\",
                \"/rss.xml\",
                \"/sitemap.xml\",
                \"/index.html\"
            ]
        },
        \"CallerReference\": \"$INVALIDATION_ID\"
    }
}"

As you can see, the JSON is a bit... weird. Please note that you are allowed to invalidate whole paths using an asterisk. That can actually save you some money, because an invalidation path containing an asterisk is counted as a single path.
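For illustration, a hypothetical wildcard batch would look like this; one path entry covers the whole distribution (YOUR_DISTRIBUTION_ID remains a placeholder):

```shell
# Hypothetical wildcard invalidation batch: a single "/*" path.
INVALIDATION_ID=$(date +"%s")
WILDCARD_JSON="{
    \"DistributionId\": \"YOUR_DISTRIBUTION_ID\",
    \"InvalidationBatch\": {
        \"Paths\": { \"Quantity\": 1, \"Items\": [ \"/*\" ] },
        \"CallerReference\": \"$INVALIDATION_ID\"
    }
}"

# Sanity-check that the shell escaping produced valid JSON:
echo "$WILDCARD_JSON" | python3 -m json.tool > /dev/null
```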

Once you have created the JSON, you can tell CloudFront about its new job:

aws cloudfront create-invalidation --cli-input-json "$INVALIDATION_JSON"

That's it! In your CloudFront console you should now see an invalidation in progress.

Tags: #Jekyll #PageSpeed #Gulp.js #Deployment
