Tuesday 23 January 2018

Optimising javascript files

In this next great installment of my new development process I want to head back to looking at javascript files, as my last three posts have been about adding stylesheets in my process, lazy loading stylesheets that are non-critical, and using resource hints (although technically that last one applies to any resources, not just stylesheets).

The last time I wrote about javascript files, I had them beautifully minified and ready to go.  My gulp task for javascript files currently looks like this...
gulp.task("js",function(cb) {
  pump([
    browserify(jsFiles).bundle(),
    source("script.js"),
    buffer(),
    uglify(),
    gulp.dest("build/js")
  ],cb);
});

If you have a lot of javascript, especially with immediately-invoked function expressions (IIFEs), it can be a good idea to optimise it.  By default, the browser's javascript engine pre-parses each function lazily during its first pass, then has to parse it fully again the moment it is invoked.  The optimisation works by wrapping immediately-invoked functions in parentheses, which signals to the engine that they should be fully parsed up front, and this stops them from being parsed twice.
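As a simplified illustration (a sketch of the idea, not the plugin's exact output), the transformation boils down to adding parentheses around functions that are invoked immediately - both forms below behave identically...

```javascript
// Without the hint: many engines pre-parse the function body lazily,
// then fully parse it again the moment it is invoked.
var a = function () { return 42; }();

// With the optimize-js style parentheses: a parenthesised function is
// assumed to be invoked immediately, so it is fully parsed just once.
var b = (function () { return 42; })();

console.log(a === b); // the behaviour is identical either way
```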

For more details, I would definitely recommend checking out the GitHub page for the plugin I'm about to talk about, which is optimize-js.  Once again, there is a thin wrapper Gulp plugin, called gulp-optimize-js.

Side note: The naming isn't a coincidence, it's actually a convention!

Anyway, if you've been reading this series of posts, you've probably already picked up on the pattern here.  I installed the plugin from NPM, then required it at the top of my Gulp file...
var optimize = require("gulp-optimize-js");

And then I add the relevant call into my javascript Gulp task...
gulp.task("js",function(cb) {
  pump([
    browserify(jsFiles).bundle(),
    source("script.js"),
    buffer(),
    uglify(),
    optimize(),
    gulp.dest("build/js")
  ],cb);
});
It's very important to make sure that you call this plugin after you've called gulp-uglify, because minifying the code would remove the optimisation that has been added - technically it works by adding additional parentheses to your code.  However, these extra characters should be worth it overall.

I've tried testing with and without this optimisation on my own website, and I can't tell the difference.  The reason for this is that I don't have enough javascript code, but maybe you do.  I like knowing that my development process is as good as it can be though, even if in this case the results are not tangible.

This plugin is covered in my Skillshare course, Optimising your website: A development workflow with Git and Gulp. The relevant video is 12 - Optimising Javascript Files.

Saturday 20 January 2018

Lazy loading stylesheets - resource hints

One thing I forgot to mention in my previous post on lazy loading stylesheets using loadCSS, was how you can improve performance even further by tipping the browser off to what you're going to lazy load in advance.  This is called using resource hints.

There are a few different levels of hint that you can use, so I'll go through them individually, before giving the example that I've used.


DNS Prefetch

This tells the browser that there will be a file required from this domain, and that it should get a head start by doing the DNS lookup (converting the domain name into an IP address).

For example...
<link rel="dns-prefetch" href="https://example.com">

Preconnect

This tells the browser to go a step further and perform the TCP handshake - this is required at the start of each TCP connection.  It will also do the TLS negotiation if the link points to an https: resource - this is where the browser and the server agree on the type of encryption they both understand.

For example...
<link rel="preconnect" href="https://example.com">

Prefetch

This is used to download and cache a particular resource, so can be used if you're sure a particular asset will be required later.  The file is downloaded as relatively low priority, but will still give the browser a head start - this is often used for fetching files for the next anticipated page or future interaction on the current page.

For example...

<link rel="prefetch" href="https://example.com/style.css">

Subresource

This is similar to prefetch, in the sense that it is used to download and cache a particular resource.  However, the file is downloaded as a high priority, meaning it is best used when requesting files for the current page.  Having said that, this has been superseded by "preload" and has been removed from Chrome.
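For completeness, a preload hint for a stylesheet would look something like this (a sketch using the same hypothetical URL as the prefetch example above)...

```html
<!-- Preload: fetch style.css at high priority for use on the current page -->
<link rel="preload" href="https://example.com/style.css" as="style">
```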


So for me, I typically like preconnect.  In my example from my lazy loading stylesheets using loadCSS post, I was lazy loading a Google Font stylesheet.  In this case, there are two domains I want to preconnect to, which can be done like this...
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com">
The first is the location of the stylesheet itself, and the second is the location of the font file that will subsequently be downloaded.  I could use prefetch on the stylesheet file itself, if I wanted to, but this returns dynamic content based on your browser, so it would be impossible to prefetch the web font file itself.

Thursday 18 January 2018

Lazy loading stylesheets using LoadCSS

In my last post, I talked about adding stylesheets into my Gulp file, part of my new development process.  The follow on to this for me was thinking about whether all of those stylesheets were really needed up front.  As I explained in that post, concatenating them and minifying them will certainly reduce the overall filesize, and the number of TCP connections (and therefore time), but what if some of this could be delayed until after the page had even loaded?

This is often referred to as lazy loading.  The Filament Group have created a great plugin for this called loadCSS, which can be found on NPM as fg-loadcss.  Their description of why you should be using it goes like this...
Referencing CSS stylesheets with link[rel=stylesheet] or @import causes browsers to delay page rendering while a stylesheet loads. When loading stylesheets that are not critical to the initial rendering of a page, this blocking behavior is undesirable. The new <link rel="preload"> standard enables us to load stylesheets asynchronously, without blocking rendering, and loadCSS provides a JavaScript polyfill for that feature to allow it to work across browsers. Additionally, loadCSS offers a separate (and optional) JavaScript function for loading stylesheets dynamically.

The "preload" option is not well supported at all currently, so for the time being at least, a polyfill of this nature is definitely required.

As I'm already using Browserify in my javascript Gulp task, this is really easy to add into my javascript file.  Obviously I need to first install the fg-loadcss package, and then I can add the following lines of javascript...
  var loadcss = require("fg-loadcss");
  var reflink = $("head").children("link[rel=stylesheet]").get(-1);
  loadcss.loadCSS("https://fonts.googleapis.com/css?family=Indie+Flower",reflink);


This first requires the package (which Browserify will pull in), then finds a reference element (loadCSS will insert the new <link> tag directly after this one), and then calls the manual "loadCSS" function with the path to the stylesheet.  In my example, this is a Google Font file.

This manual method is actually not part of Filament Group's recommended workflow, but I prefer it, as it keeps the code neat in my opinion, and runs after my javascript, which I think is best for non-critical styles.  If you look at their GitHub repo, they do give other example usages though.

You can then remove the reference to the stylesheet from the <head> section, or better yet, move it to the very bottom of your page with a no-javascript fallback, like this...
<noscript><link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Indie+Flower"></noscript>

This means that even if javascript is disabled, your stylesheet (or font, in my case) will still load, which is great!

I didn't get the chance to cover this in my Optimising your website: A development workflow with Git and Gulp course on Skillshare, but I hope to add it into a future course.

Sunday 14 January 2018

Adding stylesheets into my Gulp file

I have been working on a series of blog posts about my new development process, which so far has focused exclusively on javascript, including concatenation of javascript files, using Browserify to load jQuery and other javascript library files, and minifying (or uglifying) javascript files.  Next it's time to look at adding stylesheets.

Similar to javascript files, stylesheet files can be concatenated, in order to save round trips for individual files.  So the first place I started was copying the javascript task, rewriting it for stylesheet files, and stripping out everything but the call to gulp-concat - the same plugin can be used, as it will concatenate any files.
gulp.task("css",function(cb) {
  pump([
    gulp.src(["css/*.css"]),
    concat("style.css"),
    gulp.dest("build/css")
  ],cb);
});

I then also created a default Gulp task, in order to make it easy to call both my javascript and stylesheet tasks...
gulp.task("default",["js","css"]);

This means that when you run just "gulp" on the command line, it will automatically call the default task, which will then run through the array of related tasks - these are pre-requisites that are run first.  This list of pre-requisites can be set for any task, but is especially useful in this default mode.
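Conceptually (a toy sketch of the idea, not Gulp's real task runner), the pre-requisite mechanism just runs each named task before the task that listed them...

```javascript
// Toy model of Gulp's pre-requisite behaviour: look up each named
// task and run it before the dependent task's own body.
var tasks = {
  js:  function () { return "js built"; },
  css: function () { return "css built"; },
};

function runWithPrereqs(prereqs, body) {
  var results = prereqs.map(function (name) { return tasks[name](); });
  results.push(body());
  return results;
}

var output = runWithPrereqs(["js", "css"], function () { return "default done"; });
console.log(output.join(" | ")); // js built | css built | default done
```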

Also similar to javascript files, stylesheet files can be minified (or uglified), in order to save bandwidth and download time.  However, this time a different plugin is required, and the best I've found is called gulp-clean-css.  Like many Gulp plugins, this is actually a thin wrapper around another package, called clean-css.

First I added the new plugin to the top of my Gulp file...
var cleancss = require("gulp-clean-css");

And then I added the call to my stylesheet task...
gulp.task("css",function(cb) {
  pump([
    gulp.src(["css/*.css"]),
    concat("style.css"),
    cleancss(),
    gulp.dest("build/css")
  ],cb);
});

The plugin has two levels of optimisation...

  1. These operate on single properties only, and are mostly on by default.
  2. These operate on multiple properties at a time, including restructuring and reordering rules, but are off by default.

I have been using the default options for some time now, and these seem to serve me pretty well.  But if you want even more savings, you can play with the options, especially by activating the second level of optimisations.

These are also covered in my Skillshare course, Optimising your website: A development workflow with Git and Gulp.  The relevant videos are 15 - Concatenating Stylesheets and 16 - Minifying Stylesheets.

Wednesday 10 January 2018

Minifying (or uglifying) javascript

Totally in keeping with my New Year's Resolution, here is a lovely new blog post!  And it's a continuation of my new development process.  The last post in the series detailed my switch to using pump instead of pipe in my Gulp task.

Today's post is about minifying javascript, which is sometimes called "uglifying".  The reason being that this beautifully crafted javascript snippet...
//jquery wrapped anonymous function
$(function() {
  //clickjacking protection
  if(self!==top) {
    top.location = self.location; //break out of frame
  }
});

...gets minified to this monstrosity...
$(function(){self!==top&&(top.location=self.location)});

As you can see, the comments and whitespace have all been stripped, and the if statement has been converted to a shorthand variant - a number of operations are performed as part of this process.
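As a quick sanity check that the shorthand really is equivalent (a hypothetical pair of functions, not code from my site)...

```javascript
// Long-hand: assign only when the condition holds.
function longhand(a, b) {
  var result = "unchanged";
  if (a !== b) {
    result = "changed";
  }
  return result;
}

// Short-hand, as the minifier writes it: cond && (assignment).
function shorthand(a, b) {
  var result = "unchanged";
  a !== b && (result = "changed");
  return result;
}

console.log(longhand(1, 2), shorthand(1, 2)); // changed changed
console.log(longhand(1, 1), shorthand(1, 1)); // unchanged unchanged
```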

So why on earth would we want to do this, I hear you cry.  Well essentially because it makes the file size smaller (in some cases, considerably smaller).  In the example above, the character count has gone from 166 to 56, which is approximately a third of the size, but with exactly the same functionality.  This makes it both quicker and cheaper (if you're on any kind of data plan, such as a mobile phone) to download the file for the user, and also saves you bandwidth costs.  This can be better than gzip compression for some files, but even better yet, do both!

If you want to read further on why you should do this, you can read this blog post relating to Drupal.

The best way to achieve this is to use the gulp-uglify package. So I installed this plugin and then included it at the top of my Gulp file...
var uglify = require("gulp-uglify");

I then took the task that I wrote about in my last blog post in this series and added the highlighted line...
gulp.task("js",function(cb) {
  pump([
    browserify(jsFiles).bundle(),
    source("script.js"),
    uglify(),
    gulp.dest("build/js")
  ],cb);
});

This doesn't actually work, so it's not quite that simple.  The problem is to do with the way that Gulp can handle files as streams or buffers.  In this case the browserify plugin outputs the files as streams, but the gulp-uglify plugin requires them as buffers.

Luckily this is easily fixed using a plugin called vinyl-buffer, which once included like so...
var buffer = require("vinyl-buffer");

...can be added to the task so that it looks like this...

gulp.task("js",function(cb) {
  pump([
    browserify(jsFiles).bundle(),
    source("script.js"),
    buffer(),
    uglify(),
    gulp.dest("build/js")
  ],cb);
});

Now we have a process which minifies our files, improving the performance and user experience, as well as reducing bandwidth costs - win win!

This is also covered in my Skillshare course, Optimising your website: A development workflow with Git and Gulp.  The relevant video is 11 - Minifying Javascript Files.

Wednesday 3 January 2018

New Year's resolution

I don't usually do New Year's resolutions, I figure that if you want to achieve something then you should set that goal straight away, rather than arbitrarily doing it once a year.  However, it's been almost 6 months since I put up a new blog post, and that's just not acceptable!

So, my New Year's resolution is... to update this blog at least once a week.  

I've been busy in the last 6 months, very busy!  Here are a few highlights...

I've created a new website for the lovely Emma Malik - world renowned comedian.  This was a really interesting project, working on someone else's personal site, to their requirements, but putting into practice the security and performance tricks I'd learnt working on my own site, along with my many years of professional experience.

I've been freelancing in my spare time on PeoplePerHour.  This has been great fun, as I love taking on new projects and solving problems, and some of the projects have already pushed me outside of my comfort zone - I love learning new things!

I've been getting into WordPress development a lot, largely as part of the freelancing.  As an application, it's come a long way from when I previously looked at it, many years ago.  I'm liking it a lot, and may look at migrating this blog over when I get more time to think through the logistics of it.

I've been working on and published my first course on Skillshare.  This is entitled Optimising your website: A development workflow with Git and Gulp, and it's based on the series of blog posts that I started to write, about my New development process with Git and Gulp and First Gulp task - concatenation.  It goes through many more steps of the process that I've built, but doesn't include everything.  I will be looking to create more courses that go into further detail on optimisation, and also a security-focused workflow course.  I plan to retroactively blog about each part as well, so look out for those posts over the next few weeks.  That should help me keep my resolution for January at least!

Looking forward to a productive and eventful 2018!  I hope you are too.

Tuesday 15 August 2017

Taming Outlook meeting requests (part 2)

Yesterday I published a post about Taming Outlook meeting requests, in which I talked through the writing of a script which checked for working hours and calendar availability, before automatically accepting any meeting which met the requirements.  And then I left you hanging... Sorry about that!

Well here I am to pick up where I left off.  We're going to look at actually running the script!

First of all, you'll need to slightly drop your security settings, whilst you're developing.  We'll come back to fix this at the end, once we're happy the script is working.  The setting I'm referring to can be found in Outlook (I'm using 2016) here...

File > Options > Trust Center > Trust Center Settings > Macro Settings

Hopefully you've either got this set to "Disable all...", which would be the most secure (but possibly a little limiting), or "Notifications for digitally signed..." which is the second most secure.  We haven't digitally signed our script (yet!) so we'll need to set this to "Notifications for all macros".

Please be careful to only click yes on prompts for this script, or others that you know about, and don't blame me if you click yes on something dodgy!

Now we need to go and create a new mail rule.  This can usually be done by choosing "Rules" from the ribbon menu and then "Create a rule", but hopefully this is something you're familiar with - feel free to skip out and Google this if you're not.

Now you've started creating a new rule, you'll want to select the conditions "on this computer only" (because the script only exists on this computer) and "which is a meeting invitation or update" (so that it doesn't run your script for every single email), something like this...

Then click the "Next" button and you'll want to select the action "run a script".  If you're in a newer version of Outlook (anything from 2013 onwards, I believe) then you may find that you don't have this option, in which case, you'll want to add the following registry entry and then restart Outlook...

Entry: HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook\Security
Dword: EnableUnsafeClientMailRules
Value: 1

You can also download the .reg file from my site, if you would like.  It should be noted that this is for Outlook 2016, and it might be different (not version 16.0) for other versions.
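If you'd rather create the file yourself, the contents of that .reg file would look something like this (a sketch based on the entry above - remember to adjust the 16.0 in the path for other Office versions)...

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook\Security]
"EnableUnsafeClientMailRules"=dword:00000001
```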

Once this is done and you've restarted, you should be able to select the "run a script" option.


At the bottom you then have a link to click which says "a script", which will allow you to select the script you've written and compiled in part 1.


If you're following this blog then this is almost certainly going to only give you one script to select, but if not, look for the one that matches the name you used - for me this is "Project1.ThisOutlookSession.AutoAcceptMeetings", and click "OK".

You can then complete the wizard, adding any exceptions you want (none for me) and then setting the name (something massively original like "Auto Accept Meetings") and when it should run, etc.

You can now test, either on existing meeting requests in your inbox, or by waiting for the next one to come in!  If you've left those Debug.Print lines in then it might be worth hitting Alt+F11 in Outlook to get back to the script editor, and then pressing Ctrl+G in there.  This will bring out an "Immediate" panel at the bottom, and this is like a console, which will display all of the debug messages when they happen.

Once you've got your script working, it's time to go back and sort out your security.  The way that we do this, is by self signing the script.  There are a number of guides online about how to do this, but I'm going to quickly walk you through the steps I took on my Windows 10 machine.

Firstly, find "SelfCert.exe" - this is the Office application that we will be using to create the certificate.  For me, this was located here...

C:\Program Files (x86)\Microsoft Office\Office16\SELFCERT.EXE

Run this application and it should look like this...

There's a box at the bottom to enter your certificate name, so just enter whatever you like and click "OK".  You should get a message to say it's been successfully created.

Now go back into the script editor by pressing Alt+F11 in Outlook, and from the "Tools" menu select the "Digital Signature" menu item.  In this screen you can use the "Choose" button to find your certificate and select it, and then click "OK".  That's it, signed!

Now exit Outlook and run it again "as administrator".  You should then run your rule, manually if it doesn't happen automatically, and you'll get a popup asking you if it's ok to run the macro.  Personally I went with "Trust all documents from this publisher", as I am the publisher, and I trust myself.  You can now set your Trust Center settings back, so that only signed macros are prompted for.

You can now exit Outlook and run it again normally.  You should have Outlook meeting request bliss now, as your rule will run, which will run your script, and automatically accept all those meetings requests for you.  As you've trusted yourself, you shouldn't get any more prompts.  

Ahhhh, that's better!

If you're interested, you can download the .cls file from my site, which contains all the code.