Setting up Ubuntu on Digital Ocean

Digital Ocean is an internet hosting service that makes it trivial to spin up virtual servers called Droplets. While the base Ubuntu Droplet images are well configured out of the box, there are a couple of extra steps I take with new Ubuntu Droplets that I’m documenting here as much for my own future reference as to elicit your feedback.

SSH Keys

I’ll typically create a new SSH key pair for each Droplet. Digital Ocean’s community guide is comprehensive if you need a refresher or haven’t done it before.

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
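If both ends run a reasonably modern OpenSSH (6.5 or later), an Ed25519 key is a good alternative to RSA - shorter keys with comparable or better security. A sketch, where the output filename is just an example:

```shell
# Generate an Ed25519 key pair (assumption: OpenSSH 6.5+ on client and server).
# -f names the output files; -N "" sets an empty passphrase for this demo -
# in practice you'd usually want a passphrase.
ssh-keygen -t ed25519 -C "your_email@example.com" -f ./droplet_ed25519 -N ""
ls droplet_ed25519 droplet_ed25519.pub
```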

Droplet creation

After logging into Digital Ocean (or signing up - use this link for an extra $10 USD credit), we click Create Droplet and follow the wizard.

Here are the typical base settings I use:

Distribution: Ubuntu, latest LTS, x64
Size: As per requirements (usually the smallest, $5/mo)
Datacenter region: Best to pick the one closest to the majority of our expected userbase. That might only be us
Select additional options: As per requirements (usually just Monitoring)
Add your SSH keys: Click New SSH Key and paste in the public part of the SSH key generated earlier
Finalise and create: As per requirements

Then we click Create and wait less than a minute while Digital Ocean performs its magic.

Configuration

For convenience we can give our new Droplet a friendly SSH name by adding the following to our local ~/.ssh/config file (I usually make this the same as the Droplet’s name):

# ~/.ssh/config
...
Host {droplet-name}
    User root
    HostName {droplet-ip-address}
    IdentityFile "~/.ssh/{our-new-ssh-private-key}"
...

Now we can SSH into our new Ubuntu Droplet with
ssh {droplet-name}

Set the timezone

dpkg-reconfigure tzdata

Ensure all packages are up-to-date

apt-get update; apt-get -y upgrade; apt-get -y clean

Configure automatic security patches (documentation here and here)

apt-get -y install unattended-upgrades; dpkg-reconfigure unattended-upgrades
Follow the prompts and accept the defaults.
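Accepting the defaults writes a small marker file that turns the periodic runs on; it should end up looking something like this (exact contents can vary slightly between Ubuntu releases):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```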

Lock SSH to keys-only

Edit sshd_config to prevent root SSH login with a password - change PermitRootLogin from yes to without-password (newer OpenSSH releases call the same setting prohibit-password) like so:

# /etc/ssh/sshd_config
...
# Authentication:
LoginGraceTime 120
PermitRootLogin without-password
StrictModes yes
...
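Before logging out it’s worth double-checking the directive took effect - a mistyped value here could lock us out. The sketch below greps a sample config written inline so it’s self-contained; on the Droplet, point grep at /etc/ssh/sshd_config directly, and run sshd -t to validate the file before restarting the service:

```shell
# Write a sample sshd_config fragment so this sketch is self-contained;
# on the real server, grep /etc/ssh/sshd_config instead.
cat > sample_sshd_config <<'EOF'
LoginGraceTime 120
PermitRootLogin without-password
StrictModes yes
EOF
# Confirm the directive is set the way we expect
grep '^PermitRootLogin' sample_sshd_config
```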

And finally, reboot the Droplet to ensure our settings are loaded and that it comes back to us before we start installing or configuring our application stack of choice..
reboot

Is there anything you’d add to this list of initial Ubuntu server setup steps? - Please let us know in the comments!

Creating Thumbnails for the Synology DSM PhotoStation

Having updated my Synology NAS box to the latest Disk Station Manager (DSM) - version 6.0.2 as of December 2016 - I read that the PhotoStation thumbnail filenames had changed, and it now generates fewer of them which takes less time and saves space. Given that we’ve somewhere north of 105k photos and videos, it would take the little 1.6GHz ARM CPU in the Synology weeks (if not months) to recreate them so I started looking for a faster way.

After a brief search I found the sterling work of Matthew Phillips’ synothumb script, which has been tweaked a few times - most recently by one Andrew Harris - and it’s his version I started with.

Prerequisites

  • Synology NAS with PhotoStation installed
  • Any PC, Netbook or Laptop that’s more powerful than our Synology NAS, and that we don’t mind leaving running for long periods of time (possibly days, depending on how fast it is and how many photos we have..), ideally with a wired Gigabit LAN connection
  • A Linux install or a live CD - I recommend the latest Linux Mint Cinnamon (18.1 as of writing)
  • An executable copy of the synothumb Python script in our Linux home directory..

curl -o synothumb.py https://raw.githubusercontent.com/AndrewFreemantle/synothumbs/master/synothumb.py

chmod +x synothumb.py

1. A little Synology configuration..

First we have to configure the NFS settings in the Synology DSM - log in as ‘Admin’, open up the Package Center and disable PhotoStation - it’s in the Action menu..

Next we need to enable NFS, which we’ll find in the Control Panel under File Services.. we tick the Enable NFS box if it isn’t already and then click Apply

And then we need to configure it.. still in the Control Panel, this time in Shared Folder (just above File Services), we select the photo folder and click Edit. In the ‘Edit Shared Folder photo’ window we need the last tab, NFS Permissions, where we Create a new Read/Write privilege for the IP address of our Linux client. Click Apply and then OK

2. Connecting from Linux to our Synology

Now to our Linux client.. first we need NFS and the video libraries installed. In a Terminal we run

sudo apt-get install nfs-common ffmpeg libav-tools ufraw

Then we can check if we’ve configured NFS properly by running

showmount -e {synology-ip-address}

If it returns the IP address of our Linux client then we can mount the photo share like so..

cd; mkdir mnt_photo

sudo mount -t nfs {synology-ip-address}:/volume1/photo mnt_photo/

3. (Optional) Remove existing thumbnails

Matthew’s synothumbs script skips media that already has thumbnails, so if we want to recreate them all rather than just generate the missing ones we need to run this first..

find mnt_photo/ -type d -name "@eaDir" -prune -exec rm -rf {} \;
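To preview what this deletes before pointing it at real data, here’s a self-contained dry run on a throwaway directory tree (the -prune stops find trying to descend into directories it has just deleted, which otherwise produces harmless “No such file or directory” noise):

```shell
# Build a throwaway tree that mimics the Synology layout
mkdir -p demo_photo/album1/@eaDir demo_photo/album2/@eaDir

# Dry run: list the thumbnail directories that would be removed
find demo_photo/ -type d -name "@eaDir" -print

# Actually remove them
find demo_photo/ -type d -name "@eaDir" -prune -exec rm -rf {} \;
```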

4. Generating thumbnails

./synothumb.py mnt_photo/

5. Monitoring progress..

As well as watching the image and video filenames fly past, I like to have the System Monitor running with the graphical Resources tab open..

Watching the System Monitor - if the CPUs aren't pegged at 100% then we need a faster network!

6. Finishing up..

A little while later..

Once the synothumb.py script has finished, we need to modify the ownership and permissions of the thumbnails we’ve generated. For this we need an SSH or terminal / telnet session on our Synology, then we can issue the following:

Note: If we've logged in with Terminal / telnet, and our prompt shows us as the admin user, we can issue the following command to become root (it'll prompt us for our Admin password again)
admin@Synology# sudo -i

root@Synology# cd /volume1/photo
root@Synology# find . -type d -name "@eaDir" -exec chown -R PhotoStation:PhotoStation {} \;
root@Synology# find . -type d -name "@eaDir" -exec chmod -R 770 {} \;
root@Synology# find . -type f -name "SYNOPHOTO_THUMB*" -exec chmod 660 {} \;


Back to our Linux thumbnail processing box, we can unmount the drive..

sudo umount mnt_photo

.. and in the DSM we then

  • Remove the NFS Permission we created (Control Panel > Shared Folder > photo > NFS Permissions)
  • Disable the NFS Service if we enabled it
  • Restart the PhotoStation package (Package Center > PhotoStation > Actions > Run)

Once the PhotoStation package is back up and running it should automatically start re-indexing - that is, searching /volume1/photo for new photos. If we’d just copied a lot of new photos and videos onto our Synology box then this process can take longer than the thumbnail generation! - for every media file found it reads the file’s EXIF data and writes it to a local database.

We can check that the re-indexing has started (and kick it off if it hasn’t) by opening PhotoStation in our browser - https://{synology-ip}/photo - and logging in as Admin.

Then we choose Settings > Photos > and click Re-index

Ensuring that PhotoStation knows about any new photos and videos we've added by Re-indexing


Done

Huge thanks to Matthew Phillips for creating the synothumbs script in the first place, to all the forkers for their tweaks and to Francesco Pipia for his comment that helped me get the NFS connection working! Grazie mille!

Aurelia with a Rails API

At the end of Aurelia’s Contact Manager Tutorial the first ‘Next Step’ is:

Create a real backend for the app and use the http-client or fetch-client to retrieve data
As with most things, there's more than one way to skin a cat. This is not the only way to combine Aurelia and Rails. In all likelihood it's not the best way either - as I write this in December 2016 I'm new to Aurelia, I'm by no means an expert at Rails, and I'm still learning about modern front-end web development.

My aim was to be as in-keeping with both frameworks as possible. This is straightforward because Aurelia borrows some of the core principles from Rails: convention over configuration and clean, simple models.

Installing Rails

Picking up at the end of the Aurelia Contact Manager Tutorial, and assuming our project folder is ~/contact-manager..

Drop back to the parent folder from our Aurelia Contact Manager app

cd ..

gem install rails --no-document

We’ll want to persist the Contacts to a database at some point; although we’re not going to cover that in this post, we can lay the groundwork by specifying our database engine now..

rails new contact-manager -d postgresql --api --skip-action-cable --skip-turbolinks

Side note: It's likely that rails new ... will prompt us to overwrite .gitignore. I'd choose no and just paste in the default Rails .gitignore rules, which I've included here for completeness. It's a shame there's no option to merge..
# See https://help.github.com/articles/ignoring-files for more about ignoring files.
#
# If you find yourself ignoring temporary files generated by your text editor
# or operating system, you probably want to add a global ignore instead:
#   git config --global core.excludesfile '~/.gitignore_global'

# Ignore bundler config.
/.bundle

# Ignore all logfiles and tempfiles.
/log/*
/tmp/*
!/log/.keep
!/tmp/.keep

# Ignore Byebug command history file.
.byebug_history

cd contact-manager

Before we create our database, I always give ./config/database.yml a once-over to make sure I’m happy with the database names chosen (in this case, I’d change the dashes to underscores: contact-manager_development becomes contact_manager_development and so on..)
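That rename can be scripted with sed. The sketch below demonstrates the substitution on a sample file so it’s self-contained; on the real project the target would be ./config/database.yml (GNU sed’s -i is assumed):

```shell
# Sample stand-in for config/database.yml
printf 'database: contact-manager_development\n' > sample_database.yml

# Replace the dashes in the generated database names with underscores
sed -i 's/contact-manager/contact_manager/g' sample_database.yml

cat sample_database.yml   # -> database: contact_manager_development
```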

rake db:create

rails server - check it’s working - “Yay! You’re on Rails!”

Yay!

A Rails Contacts API

First, let’s change the code-only WebAPI from the demo to a Rails API-backed implementation. The first thing we need to define is our routes, and Rails makes that really easy..

# ./config/routes.rb
Rails.application.routes.draw do
  # For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html

  resources :contacts

end

Let’s check our route configuration, which tells us which methods we need to implement..

rake routes (we can leave our Rails server running in another Terminal tab)..

$ rake routes
  Prefix Verb   URI Pattern             Controller#Action
contacts GET    /contacts(.:format)     contacts#index
         POST   /contacts(.:format)     contacts#create
 contact GET    /contacts/:id(.:format) contacts#show
         PATCH  /contacts/:id(.:format) contacts#update
         PUT    /contacts/:id(.:format) contacts#update
         DELETE /contacts/:id(.:format) contacts#destroy

Obviously, we now need a Contacts controller..

rails generate controller Contacts index create show update destroy --skip-routes

Now we can update our newly minted Contacts controller so it mimics the functionality of the Aurelia WebAPI by implementing the index and show methods like so:

# ./app/controllers/contacts_controller.rb
class ContactsController < ApplicationController

  # Like the Aurelia demo, we'll start with an array of contacts
  #  as it's not important where Rails gets the data - only that
  #  Aurelia gets it from Rails.
  CONTACTS = [
    {
      id:1,
      firstName:'John',
      lastName:'Tolkien-Rails',
      email:'tolkien@inklings.com',
      phoneNumber:'867-5309'
    },
    {
      id:2,
      firstName:'Clive',
      lastName:'Lewis-Rails',
      email:'lewis@inklings.com',
      phoneNumber:'867-5309'
    },
    {
      id:3,
      firstName:'Owen',
      lastName:'Barfield-Rails',
      email:'barfield@inklings.com',
      phoneNumber:'867-5309'
    },
    {
      id:4,
      firstName:'Charles',
      lastName:'Williams-Rails',
      email:'williams@inklings.com',
      phoneNumber:'867-5309'
    },
    {
      id:5,
      firstName:'Roger',
      lastName:'Green-Rails',
      email:'green@inklings.com',
      phoneNumber:'867-5309'
    }
  ]

  def index
    render json: { status: :ok, data: CONTACTS }
  end

  def create
  end

  def show
    contact = CONTACTS.select { |c| c[:id].to_s == params[:id] }
    if !contact.empty?
      render json: { status: :ok, data: contact }
    else
      render json: { status: :no_content }
    end
  end

  def update
  end

  def destroy
  end
end

We can check our handiwork by pointing our browser to http://localhost:3000/contacts where we should see our list of contacts rendered as JSON data into the browser like so:

Browser screenshot showing Rails API Contacts List data in JSON format

Side note: I'm using the JSON Formatter Extension for Google Chrome by Callum Locke. Thanks Callum!

Fetching data from a Rails API with Aurelia

Now let’s modify the tutorial’s WebAPI to pull our data. Following the documentation for Aurelia’s HTTP Services we’ll also need a fetch polyfill, as fetch was still being implemented by browsers as of December 2016 - to the command line..

npm install whatwg-fetch aurelia-fetch-client --save

Then we can add them to our dependencies:

// ./aurelia_project/aurelia.json
"build": {
  ...
  "bundles": [
    ...
    "whatwg-fetch",
    "aurelia-fetch-client"
  ]
  ...
 }

Before we start using them, let’s check that the Aurelia bit still builds:

au build == no errors. Good, now we can hook up the front to the back..

cp src/web-api.js src/web-api-fetch.js

Edit our new fetch version of the getContactList() method to look like so:

// ./src/web-api-fetch.js
import { HttpClient } from 'aurelia-fetch-client';
let client = new HttpClient();
...
  getContactList() {
    this.isRequesting = true;
    return new Promise((resolve, reject) => {
      // We'll change this hardcoded URL in a moment..
      client.fetch('http://localhost:3000/contacts')
        .then(response => response.json())
        .then(response => {
          let results = response.data.map(x => { return {
            id: x.id,
            firstName: x.firstName,
            lastName: x.lastName,
            email: x.email
          }});
          resolve(results);
          this.isRequesting = false;
        })
        .catch((ex) => {
          console.log('ERROR', ex);
          reject(ex);
        });
    });
  }
...

Now we can swap out the in-memory WebAPI for our Rails API..

// ./src/contact-list.js
import {WebAPI} from './web-api';         // change this..
import {WebAPI} from './web-api-fetch';   // ..to this
...

If we take a look at our app (http://localhost:9000 - which will have automatically refreshed if we have an Aurelia CLI au run --watch sitting in a terminal somewhere. Start one if you haven’t) we see that it’s..

empty..

Screenshot of our app in the web browser with no data - it's broken

Oops, we've broken it..

A quick look at our browser javascript console shows us the reason.. CORS..

Screenshot of the web browser developer tools console showing the Cross Origin Resource Sharing error

Of CORS! ba-dum tish

Cross-Origin Resource Sharing (CORS)

Essentially, our default Rails 5 API which is being hosted on port 3000 (http://localhost:3000) is refusing to service requests from our Aurelia front-end which is being hosted on port 9000 (http://localhost:9000).

In production this won't be an issue as they'll both be hosted from the same origin, but if we wish to use the Aurelia CLI development toolchain (or WebPack, gulp, grunt, burp, etc..) then we'll need to configure our Rails API to allow CORS in development. Thankfully that's easy too..

First we need to enable the CORS middleware by un-commenting it in our Gemfile, but adding the group: constraint:
# ./Gemfile
...
# Use Rack CORS for handling Cross-Origin Resource Sharing (CORS), making cross-origin AJAX possible
gem 'rack-cors', group: :development
...
Then un-comment the default configuration Rails included for us, but we'll enclose it in an if Rails.env.development? then ... end block..
# config/initializers/cors.rb
if Rails.env.development? then
  Rails.application.config.middleware.insert_before 0, Rack::Cors do
    allow do
      origins 'localhost:9000'

      resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head]
    end
  end
end

That's it. We just need to install the missing gem and restart our server

  1. Stop our rails server with Ctrl+C
  2. Run bundle install
  3. Then restart it: rails server

Refresh, and we see that the Contact List does indeed show the list of contacts from the Rails API - notice that we appended ‘-Rails’ to the last names so we could tell where our data was coming from..

Screenshot of our app working in development - Contact List is from Rails, but the Profile Details aren't yet

Fixed! Our separately hosted Aurelia frontend and Rails API backend can now talk to each other in development

That’s great, but the Profile panel details on the right are showing the non-Rails-backed data, so next we need to make the same two changes to the Aurelia Contact Details component. First, we’ll update the method to fetch the data from our Rails backend..

// ./src/web-api-fetch.js
...
  getContactDetails(id){
    this.isRequesting = true;
    return new Promise((resolve, reject) => {
      // Normally we'd find the record in a locally held array filled by
      //  an earlier call to getContactList() and then we might consider
      //  asking the server if we didn't find it depending on our
      //  applications requirements.
      // For now though, we'll just leave it hitting the server..
      client.fetch('http://localhost:3000/contacts/' + id)
        .then(response => response.json())
        .then(response => {
          let result = response.data.map(x => { return {
            id: x.id,
            firstName: x.firstName,
            lastName: x.lastName,
            email: x.email,
            phoneNumber: x.phoneNumber
          }});
          resolve(result.pop());
          this.isRequesting = false;
        })
        .catch((ex) => {
          console.log('ERROR', ex);
          reject(ex);
        });
    });
  }
...

Then we point src/contact-detail.js to our new Web API..

// ./src/contact-detail.js
import {WebAPI} from './web-api';         // change this..
import {WebAPI} from './web-api-fetch';   // ..to this
...

In our app we see that the Profile panel shows we’re now fetching the details from Rails, which we can verify in the Network tab of our browser development tools, or by watching the output of our running rails server

Screenshot of our app working in development - Contact List and Contact Details are now fetched from our Rails API

Preparing for Release

Now we’ve hooked up the Aurelia front-end to the Rails 5 API backend, it’s about time to ask Rails to serve the Aurelia front-end files as well so we can deploy the entire application to a single server. But before we do that, we really must take care of the hard-coded API URLs in our src/web-api-fetch.js.

As we’re using the Aurelia CLI to build our front-end, we can use the existing dev, stage and prod configuration files they thoughtfully included.

Side note: For the curious.. the Aurelia build environment is handled by a pre-configured gulp task called configureEnvironment() which you'll find in ./aurelia_project/tasks/transpile.js
// ./aurelia_project/environments/dev.js
export default {
  debug: true,
  testing: true,
  apiBaseUrl: 'http://localhost:3000'
};
// ./aurelia_project/environments/stage.js
// ./aurelia_project/environments/prod.js
export default {
  debug: false,
  testing: false,
  apiBaseUrl: ''
};

Next we can use our new apiBaseUrl setting to configure an application-wide HttpClient..

// ./src/main.js
...
import { HttpClient } from 'aurelia-fetch-client';
...

export function configure(aurelia) {
  ...
  // Configure an application-wide HttpClient
  configureHttpContainer(aurelia.container);      // <- add this..

  aurelia.start().then(() => aurelia.setRoot());
}

// And this new function..
function configureHttpContainer(container) {
  let httpClient = new HttpClient();
  httpClient.configure(config => {
    config
      .useStandardConfiguration()
      .withBaseUrl(environment.apiBaseUrl)
  });

  container.registerInstance(HttpClient, httpClient);
}

And the last thing to do to get this working is to update our web-api-fetch.js file to use our configured HttpClient..

// ./src/web-api-fetch.js
import { inject } from 'aurelia-framework';
import { HttpClient } from 'aurelia-fetch-client';

let latency = 200;
let id = 0;

function getId(){
  return ++id;
}

@inject(HttpClient)
export class WebAPI {
  isRequesting = false;

  constructor(httpClient) {
    this.httpClient = httpClient;
  }

  getContactList() {
    this.isRequesting = true;
    return new Promise((resolve, reject) => {
      this.httpClient.fetch('/contacts')
        .then(response => response.json())
        .then(response => {
          let results = response.data.map(x => { return {
            id: x.id,
            firstName: x.firstName,
            lastName: x.lastName,
            email: x.email
          }});
          resolve(results);
          this.isRequesting = false;
        })
        .catch((ex) => {
          console.log('ERROR', ex);
          reject(ex);
        });
    });
  }

  getContactDetails(id){
    this.isRequesting = true;
    return new Promise((resolve, reject) => {
      // Normally we'd find the record in an array filled by the
      //  earlier call to getContactList() and then we might consider
      //  asking the server if we didn't find it depending on our
      //  applications requirements.
      // For now though, we'll just leave it hitting the server..
      this.httpClient.fetch('/contacts/' + id)
        .then(response => response.json())
        .then(response => {
          let result = response.data.map(x => { return {
            id: x.id,
            firstName: x.firstName,
            lastName: x.lastName,
            email: x.email,
            phoneNumber: x.phoneNumber
          }});
          resolve(result.pop());
          this.isRequesting = false;
        })
        .catch((ex) => {
          console.log('ERROR', ex);
          reject(ex);
        });
    });
  }

  // We haven't updated this method to use our Rails API yet
  //  (it still relies on the in-memory contacts array from the
  //  original web-api.js) - that's left as an exercise for the
  //  reader (that's you  :o)
  saveContact(contact){
    this.isRequesting = true;
    return new Promise(resolve => {
      setTimeout(() => {
        let instance = JSON.parse(JSON.stringify(contact));
        let found = contacts.filter(x => x.id == contact.id)[0];

        if(found){
          let index = contacts.indexOf(found);
          contacts[index] = instance;
        }else{
          instance.id = getId();
          contacts.push(instance);
        }
        resolve(instance);
        this.isRequesting = false;
      }, latency);
    });
  }
}

The changes in the file above are:

  1. We imported the aurelia-framework: import {inject} from 'aurelia-framework'
  2. Removed the line let client = new HttpClient();
  3. Added an @inject(HttpClient) decorator to the WebAPI class
  4. Saved a reference to the injected HttpClient in a new constructor method
  5. Updated the 2 getContact...() methods to use this.httpClient.fetch() instead of client.fetch(), and stripped out the hard-coded URLs!

Because we’re going to use the Aurelia CLI to bundle our front-end app and lean on the Rails Asset Pipeline to actually serve these assets, there’s one last Aurelia configuration tweak we need to make - switching the output destination from ./scripts to ./assets:

/* ./aurelia_project/aurelia.json */

{
  ...
  "platform": {
    ...
    "output": "assets",
    ...
  },
  ...
  "build": {
    "targets": [
      {
        ...
        "output": "assets",
        ...
      }
    ]
  ...
  }
}
<!-- ./index.html -->
...
    <script src="assets/vendor-bundle.js" data-main="aurelia-bootstrapper"></script>
...

A little housekeeping..

rm -rf ./scripts && mkdir ./assets

And a quick check to make sure our development configuration is still working..

au build --env dev then check our browser: http://localhost:9000 == Works fine

Now let’s try our production configuration..

au build --env prod

Screenshot of our app working in production - no data is shown because the URLs are now relative

No data, but it's working! - look at the /contacts URL - it's relative and all we did was change the au --env flag

It correctly breaks! (?!!) - bear with me.. the URL is /contacts on the same server (notice the port is still 9000 - the same as in the address bar) - this proves it’s working!

Serving the app with the Rails Asset Pipeline

There are just a few things to do at the backend to get the Rails API to serve our front-end.

Enable the Rails Asset Pipeline

# ./config/application.rb
require "rails"
...
# Pick the frameworks you want:
require "active_model/railtie"
require "active_job/railtie"
require "active_record/railtie"
require "action_controller/railtie"
require "action_mailer/railtie"
require "action_view/railtie"
# require "action_cable/engine"
require "sprockets/railtie"        # <-- we want sprockets! (uncomment this line)
require "rails/test_unit/railtie"
...

Next we need to tell the Asset Pipeline / sprockets which javascript files our app uses:

mkdir -p ./app/assets/javascripts && mkdir -p ./vendor/assets/javascripts
touch ./app/assets/javascripts/application.js

// ./app/assets/javascripts/application.js

// These 2 files are generated by running `./au build --env prod`
// ** DO NOT MODIFY THESE FILES DIRECTLY **
//= require vendor-bundle
//= require app-bundle

Now we can copy these files.. vendor-bundle.js to vendor assets, and app-bundle.js to app assets - that makes me feel warm and fuzzy inside (we’ll recap the steps needed to release the application at the end of the post..)

cp ./assets/vendor-bundle.js ./vendor/assets/javascripts/
cp ./assets/app-bundle.js ./app/assets/javascripts/

Next we need an HTML controller to serve our index.html..

rails g controller Home index --skip-routes

We skipped the automatic route addition because we need to edit ./config/routes.rb to add our site root..

# ./config/routes.rb
Rails.application.routes.draw do
  # For details on the DSL available within this file, see http://guides.rubyonrails.org/routing.html

  root 'home#index'
  resources :contacts

end

Now we need a quick tweak to our HomeController inheritance so it renders HTML instead of JSON by default..

# ./app/controllers/home_controller.rb
class HomeController < ActionController::Base
  def index
  end
end

And what about that #index? The final step! We copy and then edit the index.html..

mkdir -p ./app/views/home
cp ./index.html ./app/views/home/index.html.erb

<!-- ./app/views/home/index.html.erb -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Aurelia</title>
  </head>

  <body aurelia-app="main">
    <%= javascript_include_tag "application", { "data-main" => "aurelia-bootstrapper" } %>
  </body>
</html>

We’ve swapped the <script> tag for the Rails Asset Pipeline generated javascript_include_tag, and now we can test it..

This is the only un-clean part of the integration as it means a manual update to this file if we've changed it during development. I suspect that we could use a gulp plugin (gulp-html-replace at https://www.npmjs.com/package/gulp-html-replace looks promising) to augment the build process so that we don't have to manually edit ./index.html. We could also get our build process to copy the bundled files into our Rails folders too.. If you can help me with this, please leave a comment!

rails server - hit our Rails server at http://localhost:3000 and..

Screenshot of our Aurelia Contact Manager app completely hosted by Rails!

Et voila! - Our Aurelia Contact Manager application hosted by Ruby on Rails

Wrap up..

Development Workflow

For development, we just spin up the frontend and backend in separate terminal tabs like so:

au run --watch
rails s

Then point our editor to ./src/* and our browser to http://localhost:9000

Release / Deployment Workflow

Stop the Aurelia CLI if it’s running (Ctrl+C our au run --watch task), then

rm ./assets/* - remove our development assets
au build --env prod - generate our production assets
cp ./assets/vendor-bundle.js ./vendor/assets/javascripts/ - copy our vendor bundle
cp ./assets/app-bundle.js ./app/assets/javascripts/ - copy our app bundle
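Those four steps are easy to forget in sequence, so they can be captured in a small helper script. This is only a sketch: release.sh is a hypothetical name, and it assumes the Aurelia CLI (au) is on the PATH and that we run it from the project root where both the Aurelia and Rails folders live:

```shell
# Write the (hypothetical) release helper; afterwards run it with ./release.sh
cat > release.sh <<'EOF'
#!/bin/sh
set -e                       # stop on the first failing step
rm -f ./assets/*             # remove our development assets
au build --env prod          # generate our production assets
cp ./assets/vendor-bundle.js ./vendor/assets/javascripts/
cp ./assets/app-bundle.js ./app/assets/javascripts/
echo "Production bundles copied into the Rails asset folders"
EOF
chmod +x release.sh
```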

Remember! If we've changed our ./index.html then we need to copy it and update the <script> tag as we did before:
cp ./index.html ./app/views/home/index.html.erb
<!-- ./app/views/home/index.html.erb -->
<!DOCTYPE html>
<html>
  ...
  <body aurelia-app="main">
    <!-- remove this line -->
    <script src="assets/vendor-bundle.js" data-main="aurelia-bootstrapper"></script>

    <!-- add this line -->
    <%= javascript_include_tag "application", { "data-main" => "aurelia-bootstrapper" } %>
  </body>
</html>

.. and we’re done