Project Gnar

Brien Givens
Sep 24, 2018 · 47 min read

0 to Deployed today: A production-ready open source SaaS starter kit.

Project Gnar is an open source, full-stack, highly scalable microservices framework that includes:

  • Sign up email with secure account activation
  • Secure login and session management
  • Basic account details (name, address, etc.)
  • Password reset email with secure confirmation
  • React / Redux / Saga frontend
  • Python microservice backend
  • Terraform + Kubernetes deployment onto AWS

I was recently tasked with building out a web app for a startup. While there are plenty of “starter kits” for the various layers, I couldn’t find a single resource that provided the end-to-end functionality I needed, so I built it out myself. Don’t get me wrong — this is no one-man show — I use a wide range of proven front-end and back-end libraries and I’m heavily indebted to the open source community that provides these incredible resources as well as an awesome Beta Team that provided invaluable feedback.

As I worked on the startup’s app, I realized that the features I had built are the basic requirements of any web application. The project seemed like the perfect material for an open source project. I’m very fortunate to have the means, and the support of a truly wonderful wife, that enables me to give back to the open source community that has shaped my career over the past few years.

I present Project Gnar, which includes:

  • Base: Lightweight Docker container for Python apps
  • Gear: Python library for building feature-rich, Flask-based Python apps
  • Gear-CI-Base: Docker container for Gear CI runs with Python 3.5–3.7
  • Piste: Primary Python microservice
  • Off-Piste: Ancillary Python microservice
  • Edge: JS library - base64, drain, handleChange, jwt, notifications, redux
  • Powder: Frontend app based on React / Redux / Redux Saga
  • Genesis: Terraform infrastructure configuration for AWS
  • Patrol: Kubernetes deployment configuration for AWS

All of Project Gnar’s code is open-sourced under the MIT license.

A demo site is up; feel free to create an account and poke around.

Join Project Gnar on Slack

Get ready to dig in! This is a hands-on tutorial that will have you up and running your own Gnar-based app in a couple of hours.

Photo by Jörg Angeli on Unsplash

The walkthrough begins with a discussion of the layers and technologies used in Project Gnar. If you’d prefer to jump right in and get your app running, skip straight to the Implementation. In the Road Ahead, I discuss the future possibilities of the project, and in the Acknowledgements, I tip my hat to those who have helped get Project Gnar off the ground.

Target Audience

I specifically had the technically-proficient startup founder in mind when writing up this walkthrough. My goal is to empower new businesses with the ability to start up a new app in a few hours and to provide a strong foundation on which to build out their vision. Project Gnar’s foundation is constructed with proven, cutting-edge technologies and libraries that will appeal to engineers of all levels.


The links above and in the Tech Stack and Implementation sections are hash links which will scroll you directly to that part of the walkthrough. There are more hash links throughout the article that will take you back to the main overview sections.

The essential layers of a web app:

Front End + Back End + Database + CI + Infrastructure + Deployment

The technologies I picked for the layers of Project Gnar:

React + Flask (Python) + Postgres + GitLab + AWS + Kubernetes

The layers of Project Gnar:

Front End (Powder)

React + Redux + Saga + Material UI + Lodash + Moment + Webpack 4


React is the market-leading front end solution, by a 4:1 margin over Angular. If you’re interested in an in-depth comparison, TechMagic has a great article comparing React, Angular, and Vue. I’ve worked with both React and Angular on large projects. I find React to be a lot of fun to work with… Angular, not so much. Vue looks promising, but it’s too young for me at this point.

Photo by Laurent Perren on Unsplash


Redux provides a critical unidirectional data flow to the React application. The alternative is basically a spiderweb of dependencies. Redux is the successor to Flux. Redux was written by Dan Abramov, who has a good Stack Overflow post comparing Redux and Flux. Samer Buna also has a nice article on the subject.

Redux Saga

Side effects with generators! I was a bit skeptical at first about using generators instead of Promises, but I’m pleased with the results. Harkirat Saluja’s article on saga vs. thunk helped influence my decision to go with Redux Saga.

Material UI

Material UI is a CSS framework and a set of React components that implement Google’s Material Design specification. It provides a 5 gnar 🤙🤙🤙🤙🤙 set of Layout, Components, and Style! I also love that it supports tree shaking which helps keep Gnar Powder’s build small.


Lodash is an indispensable library. While ES6+ provides a nice upgrade to the core JS functionality, Lodash still reigns supreme imho. Its best feature is chaining. I push my front-end devs to fully embrace Lodash chaining, to the point that I brag that Lodash (not JS or ES6) is their key competency.

The downside to Lodash is its size. While it does support tree shaking, I feel it’s not worthwhile, for several reasons: 1) it doesn’t significantly reduce the build size, 2) cherry picking imports adds a significant overhead to using Lodash, and 3) chain is not supported when using tree shaking (although it is possible to use compose in place of chain).

Instead of tree shaking, I’ve chosen to mitigate the effect of Lodash’s size on Powder’s build by using Webpack to split it out into a separate chunk of the build. As you’ll see when we get to the Webpack configuration, Powder’s build includes several bundles to take advantage of the browser’s concurrent connections. As a result, bundling the entire Lodash library (31 KB gzipped) has a negligible effect on Powder’s load time and we get to continue using all the awesome functionality of Lodash via the _ shortcut 🤙🤙🤙🤙🤙


Nothing comes close to moment for simplifying date manipulation. It has a similar size issue to Lodash, but I also split it out into a separate bundle (17 KB gzipped) of Powder’s build. To reduce the size of the moment bundle, I’ve removed the locale modules using Webpack’s IgnorePlugin. If you need i18n date strings, look for the comment in the Webpack config.

Webpack 4 + Babel

Ah Webpack, where would we be without you? Front end development simply wouldn’t be as productive or fun without it. Version 4 (Legato 🎶) brings significant improvements to the tool. As a result, Powder’s Webpack config is fairly concise at about 200 lines. Babel’s transpilation allows us to enjoy all the exciting advancements of ES6+ without having to worry about browser compatibility.

Additional Tools

In addition to the main dependencies listed above, Powder also uses:

Powder also uses these Webpack Loaders, Plugins & Tools:

Back to the Tech Stack

Back End (Base + Gear + Piste)


Photo by Mark Kucharski on Unsplash

Base is a lightweight (72 MB compressed) Alpine Linux-based Docker container for Python apps, specifically designed for use with Gnar Gear. Base consists of two layers - Alpine 3.8 plus a layer of Gear dependencies which includes:

  • gcc: GNU Compiler Collection - Allows Python packages to compile C code
  • libev-dev: High-performance event loop library (used by the Bjoern WSGI server)
  • libffi-dev: Portable foreign function interface library
  • musl-dev: C standard library - Allows Python packages to compile C code
  • postgresql-dev: PostgreSQL Driver
  • python3-dev: Header files and a static library for Python 3.6.4

Back to the Tech Stack


Python + Flask + Bjoern + Requests + Postgres + Boto3 + Argon2 + JWT

Photo by Samuel Ferrara on Unsplash

Gear is a small Python package which simplifies and standardizes Gnar microservices. With two lines of code, it sets up a Flask-based app with:

  • Auto blueprint registration
  • Flask WSGI Development Server in fault-tolerant debug mode
  • Bjoern WSGI Production Server
  • Postgres database connection
  • SES client connection via Boto 3
  • JWT configuration via Flask-JWT-Extended
  • Peer requests — microservice intra-communication
  • External requests — Convenience wrapper around requests
  • SQS Message Polling and Sending
  • Flask-Argon2 singleton
  • Logger configuration
  • Error handler with traceback
  • Overridable and extendable class-based design


Python is the world’s fastest-growing programming language, second only to JavaScript in overall popularity, according to Stack Overflow. I chose Python partly because of its popularity, partly because of my familiarity with it, and partly because of the strength of its awesome ecosystem. I feel it’s the right choice for modern web applications.


Flask is an amazingly lightweight, bare-bones server (10K lines of code vs. Django’s 240K). It also has ~40K GitHub Stars (more than Django), so it’s no slouch in popularity. It’s simple to use and perfect for what I set out to build in Gnar.


Flask ships with a WSGI server that is not intended to be used in production. I picked Bjoern for Gnar’s production server due to its outstanding benchmarks. In development mode, Gnar Gear provides a fault-tolerant version of Flask’s WSGI server in debug (i.e. watch and reload) mode.


Kenneth Reitz’s awesome Requests package provides a simple and elegant API for making HTTP requests in Python. It’s the 4th most popular PyPI package and receives over 11 million downloads per month. We use it in Gear as the basis for peer, the microservice intra-communication service and for external, a simple wrapper around requests.


PostgreSQL is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. That sentence is taken right from their home page. I picked Postgres because it’s extremely powerful, has fantastic JSON support, is truly open source, and is available on the AWS RDS free usage tier. Gnar also uses Chad Whitacre’s nice high-value abstraction over Psycopg2.


Boto is the AWS SDK for Python. We use it in Gnar Piste to send email via SES. Boto3 is bundled into Gnar Gear to simplify the process of creating a new SES client - AWS will close a Boto client after a short period of time, so it’s handy to have a convenience method for creating a new connection.
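The idea behind that convenience method can be sketched in plain Python: instead of holding one long-lived client, ask a factory for a fresh one whenever the current one is too old. Everything below (the class name, the 10-minute cutoff) is an illustrative assumption, not Gear's actual implementation; in real code the factory would be something like `lambda: boto3.client('ses')`.

```python
import time

class RefreshingClient:
    """Recreate an expiring client (e.g. a Boto3 SES client) on demand."""

    def __init__(self, factory, max_age_seconds=600):
        self._factory = factory          # callable that builds a new client
        self._max_age = max_age_seconds  # recreate after this many seconds
        self._client = None
        self._created_at = 0.0

    def get(self):
        # Build a fresh client if we have none or the current one is stale.
        if self._client is None or time.time() - self._created_at > self._max_age:
            self._client = self._factory()
            self._created_at = time.time()
        return self._client
```

Callers then always go through `get()`, so a connection AWS has closed is transparently replaced.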


I initially chose BCrypt as Gnar Gear’s password hashing solution. Daniel Boterhoven has a good article listing the advantages of BCrypt over other solutions. While writing this article, I learned of the Password Hashing Competition, which selected Argon2 as the winner and successor to BCrypt in 2015. Flask-Argon2 is a nice convenience wrapper around argon2-cffi. I made the switch to Flask-Argon2 in Gnar-Gear 1.0.3.
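To illustrate what any modern password-hashing scheme (Argon2 included) provides: salted, deliberately slow, constant-time-compared hashes. The stdlib sketch below uses scrypt as a stand-in with deliberately small parameters; argon2-cffi and Flask-Argon2 have their own APIs, so treat this as concept only.

```python
import hashlib, hmac, secrets

def hash_password(password):
    # A random per-user salt makes identical passwords hash differently.
    salt = secrets.token_bytes(16)
    # Small cost parameters for illustration; tune n/r/p for real use.
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**12, r=8, p=1)
    return salt, digest

def check_password(password, salt, digest):
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**12, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

The database stores only the salt and digest, never the password itself.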


In Gnar, we use JWT tokens to authenticate requests on protected routes. A JWT token containing the user’s email address and the token expiration time is returned to the client in the response header of the login request and stored in localStorage. Every request on a protected route will retrieve the current JWT from localStorage, send it in the request header, and then receive a new JWT token in the response header which replaces the old token in localStorage. When the user logs out, their localStorage is cleared. I picked Flask-JWT-Extended for the server-side piece of Gnar’s JWT solution because it provides a lightweight and elegant API — basically three functions plus some error handling.
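To make the token flow concrete, here is a minimal stdlib-only sketch of how an HS256 JWT is signed and verified. Flask-JWT-Extended handles all of this (plus expiry and claims validation) for you; this is an illustration, not production code.

```python
import base64, hashlib, hmac, json

def _b64(data):
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload, secret):
    # A JWT is header.payload.signature, each part base64url-encoded.
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token, secret):
    # Recompute the signature and compare in constant time.
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

Because the payload is only signed, not encrypted, never put secrets in it; the email address and expiration time described above are fine.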

Back to the Tech Stack


Python + Gnar Gear + Pystache + Redis + UltraJSON + Zxcvbn

Photo by David Heslop on Unsplash

Gnar Piste provides the server side functionality of the Gnar app. Essentially, it provides seven blueprint routes and the associated business logic:

  • /user/signup, POST: Creates a user record in the database and sends an activation email which includes a unique token tied to the user’s account.
  • /user/activate, POST: Activates the user account and saves their Argon2-based password hash.
  • /user/login, POST: Logs the user in and returns their whitelisted account information and initial JWT token.
  • /user/get, POST: Retrieves a logged-in user’s whitelisted account information — used when the browser is refreshed.
  • /user/< user id >, PUT: Updates a user’s account information.
  • /user/send-reset-password-email, POST: Sends an email which includes a unique token tied to the user’s account to initiate the password reset process.
  • /user/reset-password, POST: Validates the password reset token and updates the user’s Argon2-based password hash.
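Several of these routes hinge on a unique, unguessable token tied to the user's account. Here is a stdlib sketch of that handshake (illustrative only, not Piste's actual code; in practice the token would live in the database):

```python
import secrets

# Maps one-time activation tokens to user emails (a database column in practice).
_pending = {}

def issue_activation_token(email):
    # token_urlsafe(32) is ~256 bits of entropy, safe to embed in an email link.
    token = secrets.token_urlsafe(32)
    _pending[token] = email
    return token

def activate(token):
    # pop() makes the token single-use; returns the email or None if invalid.
    return _pending.pop(token, None)
```

The same shape serves both the activation and reset-password flows; only the action taken on a valid token differs.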

Python + Gnar Gear

Gnar Piste is designed to use Gnar Gear — all the features of Gnar Gear are listed above. Setting up the app is as simple as setting a few environment variables, defining the blueprint routes, and using these two lines of code:

from gnar_gear import GnarApp

GnarApp('my_gnarly_app', production=True, port=80).run()


Pystache is a Python implementation of Mustache, a framework-agnostic and logic-free templating system. We use it for the email template — the signup and reset password emails use the same template.
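Mustache's logic-free templating boils down to placeholder substitution. A toy renderer shows the shape; Pystache adds sections, partials, and escaping on top of this:

```python
import re

def render(template, context):
    # Replace each {{ name }} with the context value; unknown names become ''.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(context.get(m.group(1), "")),
        template,
    )
```

With a shared email template, the signup and reset-password emails differ only in the context dict passed to the renderer.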


Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. We use it via ElastiCache to cache SQS messages received from a poll in Piste, so they may be picked up in a later request.
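The pattern is: poll once, park the messages, serve them from the cache on a later request. Below is a minimal in-process stand-in for that caching piece (illustrative only; Redis via ElastiCache gives the same semantics across processes and hosts):

```python
import time

class MessageCache:
    """Park polled messages so a later request can pick them up."""

    def __init__(self, ttl_seconds=60):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, messages)

    def put(self, key, messages):
        # Stash the polled messages with an expiry time.
        self._store[key] = (time.time() + self._ttl, list(messages))

    def take(self, key):
        # Return and remove the messages if present and unexpired.
        expires_at, messages = self._store.pop(key, (0, []))
        return messages if time.time() < expires_at else []
```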


The JSON parser that ships with Python is notoriously slow. UltraJSON (ujson) is the most performant replacement I’ve found. Ironically, the best benchmarks I’ve found are on the competing python-rapidjson’s site. The metrics in the ujson column indicate that ujson is generally faster than rapidjson, simplejson, stdlib, and yajl. Here’s another set of benchmarks that support my choice of ujson.
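Because ujson mirrors the stdlib json API, adopting it is a one-line import change, and you can benchmark the swap yourself. Here's a sketch; run it twice, once with `import json` and once with `import ujson as json` (after `pip install ujson`), and compare timings on your own payloads:

```python
import json, timeit

# A representative payload; swap in your own API responses for a fair test.
payload = {"users": [{"id": i, "name": f"user{i}"} for i in range(1000)]}

def roundtrip():
    # Serialize and parse, the hot path in a JSON API.
    return json.loads(json.dumps(payload))

# Timing for 50 round trips; rerun with ujson imported as json to compare.
elapsed = timeit.timeit(roundtrip, number=50)
```

Either module yields identical data, so the swap is behavior-preserving for typical payloads.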


Zxcvbn is a password strength estimator inspired by password crackers and developed by Dropbox. It provides a great initial strength indicator for Gnar’s password 💪 enforcer — we require a 4/4 on the zxcvbn scale in addition to no matching records at
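zxcvbn scores passwords 0-4 by modeling actual cracking strategies (dictionary words, keyboard walks, dates). Just to show the shape of a 0-4 scorer, here is a deliberately naive stand-in; it is nowhere near zxcvbn's quality and shouldn't be used for real enforcement:

```python
def naive_score(password):
    """Toy 0-4 score from length plus character variety (NOT zxcvbn)."""
    classes = sum([
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ])
    length_points = min(len(password) // 4, 2)  # up to 2 points for length
    return min(classes // 2 + length_points, 4)
```

The real library returns a score plus crack-time estimates and human-readable feedback, which is what makes it a good UI strength meter.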

Back to the Tech Stack


Photo by Robson Hatsukami Morgan on Unsplash

Project Gnar is intended to be used in a microservice environment, however, the starter app functionality really only needs the single Piste microservice. Off-Piste is included to demonstrate how to handle microservice intra-communication as well as SQS queues.

That’s really all there is to Off-Piste, so let’s just take this opportunity to appreciate an off-piste moment. Honestly, it’s July right now (yes, it took me a while to finish this article) and I love California summers, but this photo is kicking in some serious winter cravings.

Major props to Unsplash for providing their 🤙🤙🤙🤙🤙 service!

Back to the Tech Stack

GitHub is really the obvious choice for code management. When I learned that GitLab had come out with a Complete DevOps Solution, I had to try it. It’s extremely powerful, seamlessly integrates with git, is easy to use, inexpensive, and perfect for Project Gnar.

Back to the Tech Stack


AWS is the original cloud computing platform and remains the dominant player by a 3:1 margin. It’s also the only option I’ve used, so the choice for me was clear. There’s really no reason to look elsewhere — their offering is comprehensive, inexpensive, and offers a generous free tier which we take full advantage of at Project Gnar. The AWS features we’ll be using are:

  • RDS: One db.t2.micro Postgres instance (12 mo. free tier)
  • EC2: Two t2.micro instances (1x 12 mo. free tier, 1x @ $9.50/mo)
  • ECR: Three repositories (132 MB total used, 1 GB/mo. perpetual free tier)
  • ELB: One Internet-facing ELB with TLS termination (12 mo. free tier)
  • Route 53: One Hosted Zone plus data ~$0.60/mo. total (no free tier)
  • EBS: 230 GB/mo., 30 GB/mo. free (12 mo. free tier), rest @ $20/mo total
  • S3: perpetual free tier: 5 GB/mo + 20K GET requests, 2K PUT requests
  • SES: 62K emails/mo perpetual free tier
  • SQS: 1M requests/mo perpetual free tier
  • ElastiCache: One t1.micro cache node (12 mo. free tier)
  • Data Transfer: 15 GB/mo provided under the perpetual free tier

Total Cost: ~ $30.10/mo first twelve months, $76/mo thereafter.

Disclaimer: While I’ve made my best attempt to provide an accurate estimate of AWS costs, I make no guarantee, either express or implied, of the accuracy of these estimates. AWS configurations can be complex and simple configuration missteps can quickly become costly mistakes. You assume full and complete responsibility for all costs associated with your own AWS account.

These cost estimates are based on a total of two t2.micro EC2 instances which is sufficient for low traffic with the two microservices that we’ll initially deploy. Your site will likely grow in traffic and in the number of microservices, which will increase your monthly costs based on the additional services you use.

Back to the Tech Stack


My original version of this article included a long set of manual steps to create the base AWS infrastructure. Fortunately, my good friend Yongjun Jeon (Jun) joined the Project Gnar Beta Team and quickly stepped in and gave me a Terraform script to automate the process, pruning at least two hours off the setup time. The Terraform script in Genesis takes less than five minutes to set up our base infrastructure:

  • One RDS Postgres Database
  • One Autovalidated TLS Certificate
  • Three ECR Registries
  • Three IAM User Accounts
  • One Route 53 Hosted Zone
  • One ElastiCache (Redis) Cluster
  • Two S3 Buckets and One S3 Object
  • One Autovalidated SES Domain
  • One SQS Queue

This base infrastructure is separate from the Kubernetes infrastructure.

Some friends who worked through the manual setup thought that it provides a great introduction to AWS configuration—I may publish it separately, if there’s interest.


I investigated several different deployment options before concluding that Kubernetes was clearly the best choice for Project Gnar. In retrospect, I wish I had spent some time documenting my journey at the time. Here’s what I recall.

On my last professional project, we used ECS with Consul (not my choice). For Project Gnar, I used that as a baseline to evaluate other options based on ease of use, scalability, and networking.

The first option I considered was ECS instances with Task Networking. I spent a couple of days trying to set up a two node cluster before deciding to look elsewhere. I don’t think I ever got the cluster working properly. I was hoping that this approach might be simpler than the Consul setup I had used in the past, but I eventually came to the conclusion that it’s at least as frustratingly time consuming.

Next, I looked into Docker Swarm. I have lots of experience working with Docker containers, so I was naturally attracted to a Docker-brewed deployment solution. I spent some time going through Docker’s docs trying to get a toehold towards an implementation before getting frustrated with the lack of a straightforward set of “getting started” steps. I decided to do a bit more research before investing more time in Swarm and I came across an excellent set of blog posts from Chris Wright at Platform9. His takeaways drove me towards Kubernetes.

Kubernetes vs Docker Swarm

Kubernetes has been deployed more widely than Docker Swarm, and is validated by Google. The popularity of Kubernetes is evident in the chart, which shows Kubernetes compared with Swarm on five metrics: news articles and scholarly publications over the last year, Github stars and commits, and web searches on Google.

Kubernetes leads on all metrics: when compared with Docker Swarm, Kubernetes has over 80% of the mindshare for news articles, Github popularity, and web searches.

Kubernetes vs Mesos + Marathon

A look at the mindshare of Kubernetes vs. Mesos + Marathon shows Kubernetes leading with over 70% on all metrics: news articles, web searches, publications, and Github.

Kubernetes offers significant advantages over Mesos + Marathon for three reasons:

  • Much wider adoption by the DevOps and containers community
  • Better scheduling options for pods, useful for complex application stacks
  • Based on over a decade of experience at Google

Kubernetes vs Amazon ECS

Though both container orchestration solutions are validated by notable names, Kubernetes appears to be significantly more popular online, in the media, and among developers.

Above all, Kubernetes eclipses ECS through its ability to deploy on any x86 server (or even a laptop). Unlike ECS, Kubernetes is not restricted to the Amazon cloud. In fact, Kubernetes can be deployed on Amazon in a traditional cloud or hybrid cloud model, and serve as an alternative to those interested in containers using Amazon.

My next step was to test out Minikube. In a few short hours, I had a Minikube cluster set up on my laptop with two instances each of Piste and Off-Piste and communication working between them. Eureka! Or so I thought. I soon discovered that Minikube isn’t designed to be deployed to the cloud. OK, no problem… I moved on to kubectl. I made some decent progress towards a “bare metal” implementation on AWS before discovering kops and ingress‑nginx. True Eureka this time! The combination of kops, ingress‑nginx, and kubectl that I use in Patrol is incredibly powerful and exceptionally straightforward to use. It’s also a lot of fun — just wait till we get to the deployment!

One final note — I was looking forward to EKS, Amazon’s managed Kubernetes service — it was released while I was working on Project Gnar. It costs $145 per cluster / mo — about 5x the cost of the kops cluster we use.

Back to the Tech Stack

Back to the Walkthrough Overview

Photo by Jake Blucker on Unsplash

OK, time to get things set up. The prerequisites (* required) are:

  • * AWS Account - Sign up for a free tier account. If possible, choose either US West (Oregon), US East (Northern Virginia), or EU (Ireland) as your region. Some AWS services (e.g. SES) are only available in specific regions. Choosing one of these three will eliminate the region switching you’ll otherwise have to deal with.
  • * Domain — Use a domain you already own or register a domain using your favorite registrar. Consider using AWS (via Route 53), Google Domains, or Freenom (great price — free)! One gotcha if you choose Freenom — they require that the domain resolves within a week of activation — no problem if you set up your Gnar project today!
  • MacBook - All example code will be given for macOS.

Implementation Steps


Homebrew accurately bills itself as “The missing package manager for macOS”. It’s indispensable, and we’ll be using it to install several packages.

  1. Apple’s Xcode development software is a prerequisite for Homebrew. Install it with:
$ xcode-select --install

2. Install Homebrew.

$ /usr/bin/ruby -e "$(curl -fsSL"

If you get a Permission denied error, try

sudo chown -R $USER /usr/local

and then retry the installation.

3. We’ll be using Homebrew to install command line packages as well as graphical applications. We need to install a brew extension to get access to the graphical apps.

$ brew tap caskroom/cask

Back to the Implementation Steps


Atom is an excellent, open source IDE (integrated development environment, aka code editor). If you don’t already have an IDE, install Atom now.

$ brew cask install atom

Back to the Implementation Steps

Environment Variables

We’ll be adding several environment variables to our shell throughout this walkthrough. If you’re not familiar with the process of adding environment variables to your shell, then read on — otherwise, skip ahead.

If you’re using zsh, the variables should be added to ~/.zshenv. If you don’t have a custom shell installed, install zsh now:

$ brew install zsh

If you’re using vanilla bash or a different custom shell, then I’m sure you know where to put the variables.

Edit your ~/.zshenv file with Atom.

$ atom ~/.zshenv

Adding an environment variable is as simple as adding a line to the file. If you don’t have a preference of where to place the envars, put them at the top of the file. Here’s an example of a single environment variable:

export PYTHONPATH=~/gnar

The name of this environment variable is PYTHONPATH and its value is the expanded form of ~/gnar, i.e. /Users/<< you >>/gnar. The export command marks an environment variable to be exported with any newly forked child processes.

Whenever you add a new environment variable, be sure to save the file and then source it in your shell, i.e.:

$ . ~/.zshenv

Note that . is a shortcut for source.

One thing that often trips people up is forgetting to source ~/.zshenv in other open tabs of their shell after adding new envars. Keep this in mind as we’re adding new environment variables and executing commands in the shell.

All of the project’s environment variables use the prefix GNAR. If you like, you can replace GNAR with a preferred prefix (Gear’s GnarApp accepts an optional env_prefix parameter). If you prefer another prefix, there will be a few commands in this walkthrough you’ll need to adjust.
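Under the hood, a prefixed lookup is simple. This helper and its defaults are illustrative assumptions, not Gear's actual code:

```python
import os

def get_env(name, prefix="GNAR", default=None):
    # GnarApp-style lookup: prefix + '_' + var name, e.g. GNAR_PG_DATABASE.
    return os.environ.get(f"{prefix}_{name}", default)
```

So `get_env("PG_DATABASE")` reads GNAR_PG_DATABASE, while passing a custom prefix would read, say, ACME_PG_DATABASE instead.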

Back to the Implementation Steps


We need to verify an email address so that we can test the app signup process. If you have an email address on the domain you’re using for the Gnar app, you can move on to step 5.

  1. Log into AWS, click on the Services menu (upper-left of the screen), click on Simple Email Service.

2. Click on Email Addresses in the sidebar.

3. Click on Verify a New Email Address. In the modal, enter an email address you’d like to use to test the app signup and click Verify This Email Address. Find the verification email and click on the link.

4. Take a moment to decide on an email address to use as your from address for the signup and reset password emails. Add the email address as an environment variable.

export GNAR_EMAIL_SENDER='<< chosen email address >>'

5. By default, you can only send emails to verified email addresses. If you want to allow anyone to sign up for your app then we need to submit a support ticket to lift that restriction. Head over to the AWS Support Center and create a case. Your case needs to include the two requests shown below. The values I’ve chosen are the minimum AWS values for an unlocked account. It takes about 24 hours for AWS to handle the request.

Back to the Implementation Steps

Terraform IAM Account

  1. Log into AWS, click on the Services menu (upper-left of the screen), select IAM.

2. Click on Users in the sidebar and then click on Add User. Enter terraform as the User name and select Programmatic access.

3. Click on Next: Permissions and then Attach existing policies directly. Select the AdministratorAccess policy.

4. Click Next: Review, review the settings and click Create User.

5. Click on Show under the Secret access key.

6. Add the Access key ID and the Secret access key as environment variables. Also add a var for your AWS region. We need these for the Terraform deployment.

export TF_VAR_aws_access_key='<< Access key ID >>'
export TF_VAR_aws_secret_key='<< Secret access key >>'
export TF_VAR_aws_region='us-west-2'

Oregon is us-west-2. If your region is elsewhere, look up your region code.

Back to the Implementation Steps


  1. Install the AWS CLI tool, which allows us to programmatically manage our AWS services.

$ brew install awscli

2. Configure the AWS CLI for your IAM account.

$ aws configure --profile terraform

3. At the prompts, enter your AWS Access Key ID & AWS Secret Access Key. Enter us-west-2 for the Default region name (or the region of your AWS account) and json as the Default output format.

$ aws configure --profile terraform
AWS Access Key ID [None]: AKIA****************
AWS Secret Access Key [None]: << your Secret access key >>
Default region name [None]: us-west-2
Default output format [None]: json

4. Add an environment variable for the profile.

$ export AWS_PROFILE=terraform

Back to the Implementation Steps


  1. Head over to the Genesis repo and fork it.

2. Click on the Copy URL to clipboard button and clone your fork of the repo. Make sure you’re in the gnar folder when you run this command.

3. We’ll be forking and cloning the Genesis, Piste, Off-Piste, Powder, and Patrol repos. Decide where you’d like to store them on your disk — I suggest ~/gnar. If you prefer another location, there will be a few commands in this walkthrough you’ll need to adjust. You’re also welcome to rename genesis, piste, off-piste, powder, and patrol — the same caveat applies.

$ cd ~/gnar && git clone << you >>/genesis.git

4. Determine your default VPC id.

$ aws ec2 describe-vpcs | grep VpcId
"VpcId": "vpc-99999999",

5. Find two available S3 bucket names — one will be public, for our app’s assets and the other will be private, for our Kubernetes deployment state. S3 bucket names must be unique within the AWS ecosphere, so you’ll need to try your bucket names with

$ aws s3api head-bucket --bucket 'test'

until you get a 404 — Not Found response. My suggestion is:

  • public: << your domain >>-<< tld >>
  • private: << your domain >>-<< tld >>-kops-state-store

For example, gnar-ca and gnar-ca-kops-state-store, respectively.
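That naming suggestion is just your domain with the dots replaced by hyphens. A tiny hypothetical helper makes the convention concrete:

```python
def bucket_names(domain):
    """Derive the suggested public/private S3 bucket names from a domain."""
    base = domain.replace(".", "-")  # e.g. 'gnar.ca' -> 'gnar-ca'
    return base, f"{base}-kops-state-store"
```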

6. Add these environment variables:

export TF_VAR_domain='<< your domain >>'
export TF_VAR_vpc_default_id='vpc-99999999' # <- your VPC id
export TF_VAR_db_identifier='gnar'
export TF_VAR_db_username='dbAdmin'
export TF_VAR_db_password='<< choose a password >>'
export TF_VAR_s3_app_bucket_name='<< app bucket name >>'
export TF_VAR_s3_kops_bucket_name='<< kops bucket name >>'
export TF_VAR_logo_file_name='logo-125x125.png'

7. The Project Gnar logo is in the Genesis repo. If you’d like to use a custom logo, create a 125px wide version of it, preferably in png format. A good naming strategy is to include the image dimensions in the file name, for example logo-125x<< image height >>.png. Save the image in the assets folder and update the associated environment variable (above). This logo is uploaded to S3 and used by the activation and reset password email template in Piste.

8. Run the CIDR script and add the environment variables it outputs.

$ python scripts/
export TF_VAR_subnet_a=' ... '
export TF_VAR_subnet_b=' ... '
export TF_VAR_subnet_c=' ... '
export GNAR_KOPS_SUBNET_ZONE=' ... '
export GNAR_KOPS_SUBNET_ID=' ... '

9. Install Terraform.

$ cd ~/gnar/genesis && \
  brew install terraform && \
  terraform init manifest

10. Deploy the infrastructure. Enter yes at the prompt. It will take a few minutes for the infrastructure creation process to complete.

$ terraform apply manifest

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

11. The Terraform run may not succeed on the first attempt. If you see Error: Error applying plan, run terraform apply manifest again.

12. When the infrastructure is ready, Terraform will output a set of information we need to complete the deployment. This info can be retrieved at any time with:

$ terraform output

When you see << terraform_output.variable >> later on in the walkthrough, it refers to variables in this output.

13. Add the environment variables from the environment_variables section of the Terraform output.

export GNAR_ECR_URI_OFF_PISTE=' ... '
export GNAR_ECR_URI_PISTE=' ... '
export GNAR_ECR_URI_POWDER=' ... '
export GNAR_EMAIL_HOST=' ... '
export GNAR_EMAIL_LOGO_URL=' ... '
export GNAR_PG_ENDPOINT=' ... '
export GNAR_SES_ACCESS_KEY_ID=' ... '
export GNAR_SES_REGION_NAME=' ... '
export GNAR_SQS_ACCESS_KEY_ID=' ... '
export GNAR_SQS_REGION_NAME=' ... '
export KOPS_CLUSTER_NAME=' ... '
export KOPS_STATE_STORE=' ... '

14. Create an AWS profile for kops using the values from the Terraform output.

$ aws configure --profile kops
AWS Access Key ID [None]: << iam_kops_access_key_id >>
AWS Secret Access Key [None]: << iam_kops_secret_access_key >>
Default region name [None]: us-west-2
Default output format [None]: json

15. Use the AWS CLI to get your redis cluster’s primary endpoint — Terraform doesn’t support providing it at this time.

$ aws elasticache describe-cache-clusters \
  --cache-cluster-id gnar-cache \
  --show-cache-node-info | grep Address

16. Add an environment variable for it.

export GNAR_REDIS_ENDPOINT='gnar-cache. ...'

17. Our base infrastructure is up! Infrastructure setup party time!

Back to the Implementation Steps

  1. If you’re not using AWS as your domain registrar, then head over to your domain registrar to add the DNS records for our domain. We’ll need the domain_nameserver_records from the Terraform output.
  • On Google Domains, go to the Configure DNS section and enter the records.
  • On Freenom, go to Nameservers under Management Tools.

Back to the Implementation Steps

$ brew cask install pgadmin4

2. Look for the Postgres icon in the menu bar. If it’s not there, start PgAdmin 4 using Spotlight Search (⌘ + Space).

3. Open a new pgAdmin 4 window.

4. Ctrl + Click on Servers and select Create > Server...

5. Enter your project name as the server name.

6. Enter << terraform_output.db_endpoint >> as the Host name/address, dbAdmin as the Username, your password, and then click Save.

Back to the Implementation Steps

Database Migration

  1. Expand the Browser tree as shown, Ctrl + Click on the postgres database and select CREATE Script.

2. Copy the following code into the Query editor (replace any other code).

3. Click ⚡ or use the dropdown.

4. Expand Login/Group roles in the tree, Ctrl + Click on appUser and select Properties....

5. Click on the Definition tab, choose a password, and click Save.

6. Add these environment variables.

export GNAR_PG_DATABASE='postgres'
export GNAR_PG_PASSWORD='<< appUser password >>'
export GNAR_PG_USERNAME='appUser'

Back to the Implementation Steps

Google reCAPTCHA

We use the free Google reCAPTCHA service to help guard the app’s public endpoints from bots.

  1. Head over to Google’s reCAPTCHA admin site, enter a label for your site, choose Invisible reCAPTCHA, enter your domain name and also localhost, accept the terms and click Register.

2. Add environment variables for your site key and secret key.

export GNAR_RECAPTCHA_SITE_KEY='<< your reCAPTCHA site key >>'
export GNAR_RECAPTCHA_SECRET_KEY='<< your reCAPTCHA secret key >>'

Back to the Implementation Steps


  1. Head over to and sign up for a free account in their cloud.

2. If you already have a Github (or Google, Twitter, Bitbucket) account, you can skip the registration step and sign in with OAuth. Otherwise, go through the registration steps.

Back to the Implementation Steps


  1. We’ll be forking and cloning the Piste, Off-Piste, Powder, and Patrol repos. Decide where you’d like to store them on your disk — I suggest ~/gnar. If you prefer another location, there will be a few commands in this walkthrough you’ll need to adjust. You’re also welcome to rename piste, off-piste, powder, and patrol — the same caveat applies.
$ mkdir ~/gnar && cd ~/gnar

2. Add (or set) this folder to the PYTHONPATH environment variable in ~/.zshenv or ~/.bashrc and source the file (. ~/.zshenv). This enables Python to find our piste imports, e.g. from piste import main (piste must be a subdirectory of a directory listed in PYTHONPATH).



export PYTHONPATH=~/gnar

3. Head over to the Piste repo and fork it.

4. Click on the Copy URL to Clipboard button and clone your fork of the repo. Make sure you’re in the gnar folder when you run this command.

$ cd ~/gnar && git clone << you >>/piste.git

5. Set up the Python environment for Piste. A couple of things to note here:

  • Docker is used to build images for the Kubernetes deployment
  • libev is a prerequisite for bjoern, our production WSGI server
  • Pyenv and virtualenv enable us to manage project-specific Python environments
  • We’re using Python 3.6.4 because that’s what we’ll have in our production docker container — it’s based on Gnar Base which is based on Alpine Linux 3.8 (latest version). When we install Python in alpine:3.8, we get version 3.6.4.
$ cd ~/gnar/piste && \
  brew install \
    docker \
    docker-compose \
    libev \
    pyenv \
    pyenv-virtualenv && \
  pyenv install 3.6.4 && \
  pyenv virtualenv 3.6.4 piste-venv-3.6.4 && \
  pyenv local piste-venv-3.6.4 && \
  pip3 install --upgrade pip setuptools && \
  pip3 install -r app/requirements.txt && \
  pip3 install -r test/requirements.txt

6. If you chose to override the GNAR environment variable prefix, edit app/ and change

GnarApp('piste', production, port, sqs=sqs).run()


to

GnarApp('piste', production, port, env_prefix='<< PREFIX >>', sqs=sqs).run()

and also add the prefix to GNAR_REDIS_ENDPOINT in this file.

7. Choose a JWT secret. Add these environment variables.

export GNAR_EMAIL_PROJECT_NAME='<< your project name >>'
export GNAR_JWT_SECRET_KEY='SuperSecretSecret'

8. Start the Piste microservice.

$ python3 app/ -p 9400

9. If everything goes well, you should get a nice set of startup logs 😅. If you get any errors, look for an indication of what’s gone wrong in the logs and retrace your steps of that section in this tutorial.

>> Logging at INFO
>> Argon2 PasswordHasher instantiated with defaults
>> Database connected at ***
>> Listening on /10-45
>> Listening on /user
>> JWT configured, tokens expire in 15 minutes
>> Gnar-Piste development server is up

The Gnar development server is a fault-tolerant Flask WSGI development server. It reloads when any app files change. It will also log the stack trace of any module that fails to load, and continue to watch for file changes.

The Gnar production server is a high performance Bjoern WSGI server. If you like, you can test out the production server with

GnarApp('piste', True, port).run()

10. In a separate shell, log into your ECR registry via Docker.

$ $(aws ecr get-login --profile kops --no-include-email)

11. Build a docker image for Piste and push it to the AWS ECR. If you’ve renamed Piste, you also need to change PROJECT_NAME in Dockerfile.

$ docker build -t $GNAR_ECR_URI_PISTE\:latest . && \
docker push $GNAR_ECR_URI_PISTE\:latest

12. It will take a few minutes to perform the initial image push — subsequent pushes will be quicker because docker only pushes new layers of the build. The output should look similar to (but probably not identical to):

>> The push refers to repository []
>> 039b85baa664: Pushed
>> 266d8ce9901a: Pushed
>> cd0ae9e1cc86: Pushed
>> 60cacbc5c84a: Pushed
>> 357077ade9c3: Pushed
>> 2ab94e06c201: Pushed
>> e8b0400dc8a1: Pushed
>> 73046094a9b8: Pushed
>> latest: digest: sha256:<< hash >> size: 1992

13. Piste is ready for deployment!

14. And now I show you… the dance of my people.

Piste implementation / customization notes

The Piste microservice is quite concise. There are only 24 files in the entire repo. The “heavy lifting” is handled by Gnar Gear and its dependencies. My goal with Gnar Gear was to make microservice creation as simple as possible.

The most important files in Gnar Piste are app/user/ and app/user/ These are the files you’ll edit when you want to change Piste’s behavior. If you want to extend Piste to handle, say administration tasks, add a new folder under app (e.g. admin) with an and file and code out the new endpoints and behavior, emulating the examples in user. Before extending Piste, first consider if creating a new microservice is more appropriate.

If you’d like to create a Gnar Gear-based microservice from scratch (instead of cloning the repo), be sure to read the Application Structure and Blueprints sections of the docs.

Both Piste and Gear have 100% test coverage. To test Piste, run pytest. To test Gear, clone its repo and run tox.

If you have problems running tox, it’s likely because it can’t find the versions of Python it needs. It will be looking for python3.5, python3.6, and python3.7 under /usr/bin and /usr/local/bin. Pyenv (brew install pyenv) is the best tool for installing versions of Python, but it may not add the symlink tox needs. To fix the issue, create the symlink manually, for example:

$ pyenv install 3.6.5
$ ln -s ~/.pyenv/versions/3.6.5/bin/python3.6 /usr/bin/python3.6
$ tox

Back to the Implementation Steps


2. Click on the Copy URL to clipboard button and clone your fork of the repo. Make sure you’re in the gnar folder when you run this command.

$ cd ~/gnar && git clone << you >>/off-piste.git

3. Set up the Python environment for Off-Piste.

$ cd ~/gnar/off-piste && \
  pyenv virtualenv 3.6.4 off-piste-venv-3.6.4 && \
  pyenv local off-piste-venv-3.6.4 && \
  pip3 install --upgrade pip setuptools && \
  pip3 install -r app/requirements.txt && \
  pip3 install -r test/requirements.txt

4. Start the Off-Piste microservice. Feel free to choose a different port, as long as it doesn’t conflict with Piste — be sure to use the same port in the GNAR_OFF_PISTE_SERVICE_PORT environment variable. Note that Gnar Gear looks for a GNAR-prefixed environment variable when the app is in development mode. In production mode, Gnar Gear drops the GNAR prefix — the Kubernetes deployment sets the environment variable in each pod.

$ python3 app/ -p 9401

5. Set the peer environment variable for Off-Piste.

export GNAR_OFF_PISTE_SERVICE_PORT='9401'
6. Switch over to your Piste shell and exit Piste (Ctrl + C). Source ~/.zshenv or ~/.bashrc and restart Piste (python3 app/ -p 9400).

7. Test out our microservice intra-communication!

$ curl http://localhost:9400/10-45>> "Off-Piste checking in! Check back with Piste at 
/10-45/get-message/<< your private message id >> in 1 minute for a secret message."

We’re not quite ready to pick up our secret message right now. That will have to wait until after we deploy the app, because we’ll be using the EC2 instance that hosts our deployment’s nodes to communicate with Redis. Make a note of the message id until then.
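The mechanics behind the delayed pickup are easy to sketch: the message is stored under its id together with a ready time, and the endpoint refuses to hand it over early. Here is a minimal, Redis-free illustration of the pattern (hypothetical names, with an in-memory dict standing in for Redis — not the actual Off-Piste code):

```python
import time
import uuid

# In-memory stand-in for Redis; keys are message ids,
# values are (message, ready_at) pairs.
_messages = {}

def check_in(delay_seconds=60):
    """Store a secret message and return the id the caller polls later."""
    message_id = str(uuid.uuid4())
    _messages[message_id] = ("The secret message", time.time() + delay_seconds)
    return message_id

def get_message(message_id):
    """Return the message once the delay has elapsed, else a wait notice."""
    entry = _messages.get(message_id)
    if entry is None:
        return "Unknown message id"
    message, ready_at = entry
    if time.time() < ready_at:
        return "Check back later"
    return message

msg_id = check_in(delay_seconds=0)
print(get_message(msg_id))  # delay already elapsed, so the message is returned
```

In the real deployment, a Redis key with a TTL-style timestamp plays the role of the dict, which is why the message survives across pods and why we need Redis access to retrieve it.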

8. Check out the line of code in Piste (app/10-45/ that we use to communicate with Off-Piste.

response = app.peer('off-piste').get('check-in').json()

This is a really important piece of the microservice puzzle that I want to point out — this communication layer is zero configuration in production. In development, you only need to add the envar.
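To make the zero-configuration claim concrete, here is a sketch of how such a peer lookup could resolve a base URL — in development from a GNAR-prefixed envar, in production from the bare name that the Kubernetes deployment injects. This is a hypothetical helper, not Gear’s actual implementation:

```python
from os import environ

def peer_base_url(name, production, env_prefix='GNAR'):
    """Resolve a peer service's base URL from environment variables.

    In development the variable carries the prefix
    (e.g. GNAR_OFF_PISTE_SERVICE_PORT); in production the prefix is
    dropped because the deployment sets the bare variable in each pod.
    """
    key = "{}_SERVICE_PORT".format(name.upper().replace('-', '_'))
    if not production:
        key = '{}_{}'.format(env_prefix, key)
    port = environ[key]
    host = 'localhost' if not production else '{}-service'.format(name)
    return 'http://{}:{}'.format(host, port)

environ['GNAR_OFF_PISTE_SERVICE_PORT'] = '9401'
print(peer_base_url('off-piste', production=False))  # http://localhost:9401
```

The caller never spells out a hostname or port — which is exactly what makes app.peer('off-piste') feel like zero configuration.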

9. In a separate shell, log into your ECR registry via Docker (in case the login from earlier has expired).

$ $(aws ecr get-login --profile kops --no-include-email)

10. Build a docker image for Off-Piste and push it to the AWS ECR. If you’ve renamed Off-Piste, you also need to change PROJECT_NAME in Dockerfile.

$ docker build -t $GNAR_ECR_URI_OFF_PISTE\:latest . && \
docker push $GNAR_ECR_URI_OFF_PISTE\:latest

11. After a couple of minutes you should get output that looks similar to:

>> The push refers to repository []
>> 3d605d675f75: Pushed
>> 3bbf6e120a1c: Pushed
>> e5a01c4c8c1c: Pushed
>> db5d6e16444f: Pushed
>> 357077ade9c3: Pushed
>> 2ab94e06c201: Pushed
>> e8b0400dc8a1: Pushed
>> 73046094a9b8: Pushed
>> latest: digest: sha256:<< hash >> size: 1992

12. Off-Piste is ready for deployment!

Back to the Implementation Steps


  1. Head over to the Powder repo and fork it.

2. Click on the Copy URL to clipboard button and clone your fork of the repo. Make sure you’re in the gnar folder when you run this command.

$ cd ~/gnar && git clone << you >>/powder.git

3. Install node.

$ brew install node

4. Set up the npm environment for Powder.

$ npm install

5. If you’d like to customize Powder for your project, replace src/assets/images/logo.png with your project’s logo and update the language files (i.e. all the .json files under src/js/i18n).

6. Just before we start the app, check that the GNAR_RECAPTCHA_SITE_KEY envar is defined in your current shell.


If it’s not defined, source your ~/.zshenv or ~/.bashrc file.

7. Start the UI. Make sure Piste is still running… we’ll be testing it out in the next steps!

$ npm run start:open

The :open part of the command opens a new browser tab. If you later restart the app and want to use the same browser tab, use

$ npm run start

8. If everything goes well, you’ll be greeted with your project’s login screen.

9. Try out the Internationalization (I18N) support.

10. Sign yourself up!

11. Check your email. If everything is set up properly, you should receive an activation email within a few seconds of pressing submit. If you don’t get the email, check Piste’s logs to see what’s gone wrong.

12. The activation link is a base64 encoded string which contains your email address and a unique activation id (UUID). The activation id is tied to your account in the database and the link expires after an hour. Piste is designed to use http://localhost:3000 in the activation link when run in development and your production domain when run in production. When you click on the activation link you should briefly see an activation message and then be redirected to the set password screen. I’ll let you in on a little secret — the activation message is just a dramatic pause, a 2s delay in src/js/redux/sagas/activate.js — the account isn’t “activated” until a password is set and, of course, it only takes a few milliseconds to do that.
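To make the token format concrete, here is a minimal sketch of how an email address and a fresh activation id could be bundled into a URL-safe string (hypothetical helpers, not necessarily Piste’s exact encoding). Note that base64 is encoding, not encryption — the security comes from the random UUID and the one-hour expiry enforced in the database:

```python
import base64
import json
import uuid

def make_activation_token(email):
    """Bundle the email and a fresh activation id into a URL-safe token."""
    payload = {'email': email, 'activation_id': str(uuid.uuid4())}
    raw = json.dumps(payload).encode()
    return base64.urlsafe_b64encode(raw).decode()

def read_activation_token(token):
    """Decode the token back into its email and activation id."""
    raw = base64.urlsafe_b64decode(token.encode())
    return json.loads(raw)

token = make_activation_token('you@example.com')
print(read_activation_token(token)['email'])  # you@example.com
```

The server then looks up the activation id for that account and rejects the request if the id doesn’t match or the hour has passed.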

13. Set a password for your account. The basic password requirements are 8 characters and no spaces. Once the user has entered at least 8 characters, we start scoring their password using zxcvbn in Piste via /user/score-password. This provides a subjective score on a scale from 0 to 4 — we require a score of 4. Once the user has entered a password that has achieved a score of 4, we move on to checking the password against every known hacked account using haveibeenpwned.com. We use their range API to protect the value of our user’s new password (via k-anonymity). We require that the user’s password does not match any known hacked account.
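The k-anonymity trick is worth spelling out: only the first five characters of the password’s SHA-1 hash ever leave the server. The range API returns every known-breached suffix sharing that prefix, and the match is checked locally. A small sketch of the client side (the helper name is mine; the network call is shown only as a comment):

```python
import hashlib

def pwned_range_query(password):
    """Split the SHA-1 of a password for a haveibeenpwned range lookup.

    Only the 5-character prefix is sent to the API; the service returns
    every breached hash suffix sharing that prefix, and the full match is
    checked locally, so the password itself is never disclosed.
    """
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_query('password')
print(prefix)  # 5BAA6
# GET https://api.pwnedpasswords.com/range/<prefix>, then search the
# response body for `suffix` to see whether (and how often) this
# password appears in known breaches.
```

If the suffix appears in the response, Piste rejects the password before it ever reaches the hashing step.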

14. Once you’ve created a password that passes the strength tests, confirm the password and submit it. Piste will make the same checks before saving a salted hash of the password in the database. We use Argon2, the winner of the 2015 Password Hashing Competition, to create and verify our password hashes. Argon2 provides excellent resistance against GPU cracking attacks as well as rainbow tables. After we save your password hash you’ll be redirected to the login screen to use your newly minted creds. Once you log in, you’ll find yourself at the Personal Details screen.

15. I’ve intentionally not added much content to Project Gnar, but I wanted to have something interesting on the landing page. I figured an address form might be the most broadly useful component, so there you go. Each of the fields has its autocomplete property set according to the HTML5 spec, which plays nicely with Chrome and other browsers. I also managed to get the Province / State select to (mostly) cooperate.

16. There are a couple of other pages in the account view — click Back and Next. I included these pages in case you might have a use for them in your project. The one other page worth mentioning is the Forgot Your Password? page, accessible from the login screen. This will send a unique reset password link using essentially the same process as the account activation.

17. All the elegant UI components are thanks to Material UI. It’s a fantastic implementation of Google’s Material Design by a small, dedicated core team with massive community support. In addition to a huge toolbox of components, we also get awesome responsive UI support via the Grid. The Material UI library is also built with ES6 module support, which is critical for keeping Powder’s build size small.

18. We don’t expose Piste’s 10-45 endpoint in the UI, but we can test it out in the JavaScript console. Open the console up (⌥ + ⌘ + J in Chrome) and give it a whirl.

>> Off-Piste checking in! Check back with Piste at 
/10-45/get-message/<< your private message id >> in 1 minute for a secret message.

As with the curl to 10-45, we’re not ready to pick up this message — keep a note of this message id as well.

19. Let’s take a quick look at the build. We use webpack and babel to produce an optimized production build that has good cross-browser support.

$ npm run build:prod:analyze

We split the build into several chunks to take advantage of the browser’s concurrent connections. When the app loads, the bundles are loaded simultaneously. We also enable gzip in the nginx server that we bundle in Powder’s docker container. The result is a snappy load time of ~ 200-300ms (without cache).
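The exact nginx configuration bundled in Powder’s container isn’t reproduced here, but a typical gzip block looks something like the following (illustrative values, not necessarily Powder’s):

```nginx
gzip on;
# Compress the text-based assets webpack emits; images are already compressed.
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;  # skip tiny responses where gzip overhead isn't worth it
gzip_comp_level 5;     # middle ground between CPU cost and compression ratio
```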

I’m not going to go into detail on Powder’s architecture or build in this walkthrough. If you’re interested in a deeper dive, please be sure to check out Powder’s README.

20. The previous command built the app — it’s now ready to containerize. If you chose to use a custom environment prefix, rebuild the app with

$ npm run build:prod -- --env.envPrefix='<< PREFIX >>'

21. In a separate shell, log into your ECR registry via Docker (in case the login from earlier has expired).

$ $(aws ecr get-login --profile kops --no-include-email)

22. Build a docker image for Powder and push it to the AWS ECR.

$ docker build -t $GNAR_ECR_URI_POWDER\:latest . && \
docker push $GNAR_ECR_URI_POWDER\:latest

23. Just like the Piste & Off-Piste builds, this step will take a couple of minutes.

>> The push refers to repository []
>> 5bafb8b45345: Pushed
>> a936f6b0e459: Pushed
>> 7ab428981537: Pushed
>> 82b81d779f83: Pushed
>> d626a8ad97a1: Pushed
>> latest: digest: sha256:<< hash >> size: 1366

24. Powder is ready for deployment!

It’s time to deploy! Let the fireworks begin!

Back to the Implementation Steps

2. Click on the Copy URL to clipboard button and clone your fork of the repo. Make sure you’re in the gnar folder when you run this command.

$ cd ~/gnar && git clone << you >>/patrol.git

3. Create an RSA key pair. Skip this step if you already have an RSA key pair (look for id_rsa in ~/.ssh). Save the key pair to the default location.

$ ssh-keygen -t rsa

4. Install kubectl (Kubernetes CLI tool) and kops (Kubernetes Cloud integration).

$ brew install kubernetes-cli kops

5. Create a key pair. We’ll need this if we ever want to ssh into the EC2 instances. Open the AWS Services menu and click on EC2.

6. Click on Key Pairs under Resources.

7. Click on Create Key Pair, enter ec2 as the name, and click Create.

8. The private key (ec2.pem) is automatically downloaded by the browser. Move it to ~/.ssh, lock down its access rights, and then add it to the SSH authentication agent.

$ mv ~/Downloads/ec2.pem ~/.ssh/ec2.pem && \
  chmod 400 ~/.ssh/ec2.pem && \
  ssh-add ~/.ssh/ec2.pem

9. Set the default AWS profile to kops.

$ export AWS_PROFILE=kops

10. Make sure all the environment variables we need to create the cluster are defined in the current shell.

$ : "${KOPS_CLUSTER_NAME:?KOPS_CLUSTER_NAME is missing}" \
  : "${TF_VAR_vpc_default_id:?TF_VAR_vpc_default_id is missing}" \
  : "${GNAR_KOPS_SUBNET_ZONE:?GNAR_KOPS_SUBNET_ZONE is missing}" \
  : "${GNAR_KOPS_SUBNET_ID:?GNAR_KOPS_SUBNET_ID is missing}"

If any envar is missing, source your ~/.zshenv or ~/.bashrc file and try again. If it’s still missing, check your envars.

11. Create the cluster config.

$ kops create cluster \
  --name $KOPS_CLUSTER_NAME \
  --cloud aws \
  --vpc $TF_VAR_vpc_default_id \
  --zones $GNAR_KOPS_SUBNET_ZONE \
  --subnets $GNAR_KOPS_SUBNET_ID \
  --master-size t2.micro \
  --node-size t2.micro \
  --node-count 1 \
  --dry-run -o yaml > cluster.yml

12. Edit cluster.yml and add these additionalPolicies under spec and also the sshKeyName under sshAccess. During the beta testing, we discovered these additional policies are required to get the Ingress working with older AWS accounts — there’s no harm in adding them even if you have a shiny new account.

...
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": "iam:CreateServiceLinkedRole",
          "Resource": "arn:aws:iam::*:role/aws-service-role/*"
        },
        {
          "Effect": "Allow",
          "Action": [
            "ec2:DescribeAccountAttributes",
            "ec2:DescribeInternetGateways"
          ],
          "Resource": "*"
        }
      ]
...
  sshAccess:
  - sshKeyName: ec2


13. Create the cluster.

$ kops create -f cluster.yml

14. Attach your RSA public key to the cluster.

$ kops create secret --name $KOPS_CLUSTER_NAME \
sshpublickey admin -i ~/.ssh/

This step works around a bug in kops — it shouldn’t be required when we’re using an existing EC2 key pair. The images are actually built with the named key pair, so it’s just a minor inconvenience. Without this step, kops 1.10.0 will error out with:

SSH public key must be specified when running with AWS

15. Deploy the cluster.

$ kops update cluster --name $KOPS_CLUSTER_NAME --yes

16. If everything goes well, you’ll see a bunch of status logs and then finally:

Cluster is starting. It should be ready in a few minutes.

17. Kubernetes is now working to create our AWS infrastructure — 2 EC2 instances, 5 volumes, 1 key pair, 1 load balancer, and 2 security groups. It will take several minutes for AWS to complete this work and for Kubernetes to create its control plane. You can track the progress with

$ kops validate cluster

When the cluster is ready, the output will look like:

Validating cluster

INSTANCE GROUPS
NAME               ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-west-2c  Master  t2.micro    1    1    us-west-2c
nodes              Node    t2.micro    1    1    us-west-2c

NODE STATUS
NAME   ROLE    READY
 ...   master  True
 ...   node    True

Your cluster is ready

18. Check out the newly minted infrastructure in the AWS UI. Open the AWS Services menu and then click on EC2.

The important components are the running instances and the load balancer. The running instances are our two EC2 machines that host the Kubernetes control plane, and all our services and pods. The load balancer is our ingress — it handles connections from the internet, forces and terminates TLS connections, and routes traffic to our services.

19. Click on Security Groups. Click on the default security group in the list of security groups and then click on the Inbound tab.

20. Click on Edit and then click Add Rule. Enter 6379 for the Port Range. In the Source input, type ‘no’, select the nodes security group entry, and then click Save.

21. Make sure all our environment variables we need for Piste are defined in the current shell.



If any envar is missing, source your ~/.zshenv or ~/.bashrc file and try again. If it’s still missing, check your envars.

22. Add our secrets.

$ kubectl create secret generic email \
    --from-literal 'host='$GNAR_EMAIL_HOST \
    --from-literal 'logo-url='$GNAR_EMAIL_LOGO_URL \
    --from-literal 'project-name='$GNAR_EMAIL_PROJECT_NAME \
    --from-literal 'sender='$GNAR_EMAIL_SENDER && \
  kubectl create secret generic jwt \
    --from-literal 'secret-key='$GNAR_JWT_SECRET_KEY && \
  kubectl create secret generic postgres \
    --from-literal 'endpoint='$GNAR_PG_ENDPOINT \
    --from-literal 'database='$GNAR_PG_DATABASE \
    --from-literal 'username='$GNAR_PG_USERNAME \
    --from-literal 'password='$GNAR_PG_PASSWORD && \
  kubectl create secret generic recaptcha \
    --from-literal 'secret-key='$GNAR_RECAPTCHA_SECRET_KEY && \
  kubectl create secret generic redis \
    --from-literal 'endpoint='$GNAR_REDIS_ENDPOINT && \
  kubectl create secret generic ses \
    --from-literal 'region-name='$GNAR_SES_REGION_NAME \
    --from-literal 'access-key-id='$GNAR_SES_ACCESS_KEY_ID \
    --from-literal 'secret-access-key='$GNAR_SES_SECRET_ACCESS_KEY && \
  kubectl create secret generic sqs \
    --from-literal 'region-name='$GNAR_SQS_REGION_NAME \
    --from-literal 'access-key-id='$GNAR_SQS_ACCESS_KEY_ID \
    --from-literal 'secret-access-key='$GNAR_SQS_SECRET_ACCESS_KEY

23. Specify our container images. Update the URIs in

  • manifest/off-piste-deployment.yml
  • manifest/piste-deployment.yml
  • manifest/powder-deployment.yml

24. If you chose to use a custom env_prefix with Piste and Off-Piste, update the envars in piste-deployment.yml and off-piste-deployment.yml.

25. Add our application services.

$ kubectl apply -f manifest

26. Wait until our application pods are up. Check the status with

$ kubectl get all --all-namespaces

When the pods are ready, the output will look like:

NAMESPACE  NAME                            READY  STATUS   RESTARTS
default    pod/off-piste-7d499d8fd6-5vw8q  1/1    Running  0
default    pod/off-piste-7d499d8fd6-vldjf  1/1    Running  0
default    pod/piste-74f6dc97bf-llhq2      1/1    Running  0
default    pod/piste-74f6dc97bf-qklgb      1/1    Running  0
default    pod/powder-74798dbccd-f6dwq     1/1    Running  0
default    pod/powder-74798dbccd-fz8f2     1/1    Running  0

27. Add our TLS Certificate to the NGINX Ingress Controller. Edit ingress/service-l7.yml and enter << terraform_output.certificate_arn >> beside

28. Change the host in app-ingress.yml.

spec:
  rules:
  - host: << >>

29. Add the NGINX Ingress Controller.

$ kubectl apply -f ingress

30. Add an A record for the app subdomain that points to our Ingress Controller / ELB. Open the AWS Services menu and then click on Route 53.

31. Click on your hosted zone and then click on Create Record Set. Enter app as the Name and click on Yes beside Alias. Click into the Alias Target input, select the entry under ELB Classic load balancers and click Create.

If there is no ELB listed, enter the address from the following command as the Alias Target.

$ kubectl get ingress

32. Launch!

Photo by SpaceX on Unsplash

Enter your app’s address in your browser. It typically takes 15–60 minutes for the A record to propagate. Occasionally refresh the browser until you’re redirected to the secure (https) version of the site. The Ingress Controller takes care of forcing a secure connection. You should be able to log in and poke around.

33. Open up the JavaScript console and test out our microservice intra-communication with Off-Piste.

fetch('10-45')
  .then(response => response.json())
  .then(data => { console.log(data); })

>> Off-Piste checking in! Check back with Piste at
/10-45/get-message/<< your private message id >> in 1 minute for a secret message.

34. Retrieve your secret messages — you have one from the last step and two others from the previous steps.

35. Congratulations! You’ve arrived.

Take a moment to reflect and enjoy the view!

Photo by asoggetti on Unsplash

Back to the Implementation Steps

Here are a few kubectl commands and resources you might find helpful:

  • Viewing all resources
$ kubectl get all --all-namespaces
$ kubectl get ingress --all-namespaces # not included with all
  • Viewing resource metadata
$ kubectl describe service/piste-service
$ kubectl describe service/ingress-nginx --namespace ingress-nginx
$ kubectl describe pod/piste-xxxxxxxxxx-xxxxx
$ kubectl describe ingress app-ingress

  • Aggregated logs for all pods in a service
$ kubectl logs service/powder-service
$ kubectl logs service/piste-service
$ kubectl logs service/ingress-nginx --namespace ingress-nginx

  • Logs for an individual pod
$ kubectl logs pod/powder-xxxxxxxxxx-xxxxx
$ kubectl logs pod/piste-xxxxxxxxxx-xxxxx
$ kubectl logs pod/nginx-ingress-controller-xxxxxxxxxx-xxxxx \
  --namespace ingress-nginx

  • Bash into a pod’s docker container (note: remove pod/ prefix from pod id)
$ kubectl exec -it piste-xxxxxxxxxx-xxxxx sh
  • Redeploy a service after updating its docker container
$ kubectl delete deployment piste && \
kubectl apply -f manifest/piste-deployment.yml
  • Recreate a secret after changing its value
$ kubectl delete secret jwt && \
  kubectl create secret generic jwt \
  --from-literal 'secret-key='$GNAR_JWT_SECRET_KEY

Note: in production, use versioned containers and rolling updates, e.g.:

$ kubectl set image deployment/piste \

Back to the Implementation Steps

SSH Access

If you’d like to poke around your EC2 instances, you can SSH into the machine using the ec2 private key that we added to the SSH authentication agent earlier in the walkthrough.

  1. Open the AWS Services menu and click on EC2.

2. Click on the Running Instances.

3. Click on the instance you want to access and then copy its IPv4 Public IP to the clipboard.

4. SSH into the instance. If you prefer, you can use the Public DNS in place of the IPv4 Public IP.

$ ssh -A admin@<< IPv4 Public IP >>

5. You can also use your SSH access to tunnel into the EC2 instance. This can be useful for accessing Redis locally. To access Redis, we need to open a tunnel to the instance — use its IPv4 Public IP in the following command.

$ ssh -f -N -L6379:<< your Elasticache primary endpoint >>:6379 \

This will open a tunnel that runs in the background. If the tunnel is opened successfully, there won’t be any terminal output after running that command. If you want to later close the tunnel, you need to use kill, i.e.:

$ ps aux | grep 6379

Find the process id associated with the ssh tunnel and then close it with

$ kill -9 << process id >>

6. When the tunnel is open, you can set GNAR_REDIS_ENDPOINT to localhost and access Redis locally in Piste. You also need to change this line in Piste

redis_host = production and getenv('GNAR_STAGE_REDIS_ENDPOINT')


to

redis_host = getenv('GNAR_STAGE_REDIS_ENDPOINT')
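The reason the original line yields no host in development is Python’s short-circuiting and: when production is falsy, the getenv call never runs. A quick illustration of why dropping the guard is needed for the tunnel:

```python
from os import environ, getenv

production = False

# `and` short-circuits: with production == False, getenv() never runs and
# redis_host is just False, so no Redis host is configured in development.
redis_host = production and getenv('GNAR_STAGE_REDIS_ENDPOINT')
print(redis_host)  # False

# Dropping the `production and` guard always reads the envar, which is what
# the tunnel workaround needs when pointing Piste at localhost.
environ['GNAR_STAGE_REDIS_ENDPOINT'] = 'localhost'
redis_host = getenv('GNAR_STAGE_REDIS_ENDPOINT')
print(redis_host)  # localhost
```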

Back to the Implementation Steps


When you’re ready to tear down the app, delete the cluster and the infrastructure. Note that you need to manually unlink the nodes security group from the default security group (undo step 20) — everything else should come down automatically.

$ kops delete cluster --name=$KOPS_CLUSTER_NAME --yes
$ cd ~/gnar/genesis && terraform destroy manifest

Back to the Implementation Steps

Back to the Walkthrough Overview

The Road Ahead

Photo by Andreas Selter on Unsplash

I’ve had a lot of fun working on this project. Unfortunately, I’ve put in all the time I can for now and it’s time for me to get a new job. I essentially took a 6 month sabbatical to create Project Gnar. I’m excited to discover how it’s received by the community and I’m also excited to see where my own professional journey leads next (and so is my wife).

If you would like to reward or support my work on Project Gnar, please consider becoming a patron. Alternatively, crypto would be sweet.

  • BTC: 3CEtWDo5XCNJR2PvesaiEQSQp9CSi7GgtB
  • BCH: qp747tmv4kgqp200yk47dahx8kq32u0z4ul4af43j9
  • ETH: 0xf2A4f25edA8Ff9862476FCFb6165196aD27BBa3a
  • ETC: 0x4B25b34f608774355D4D2B6949BE0d11609EC555
  • XRP: rPPa7vR3RsMPxR3DT9XqfwVXFEgwuLSwFk

Based on the interest amongst my beta team and others that have had early access, Project Gnar appears to have a lot of growth potential. The current wish list is:

  • Other Cloud Providers
  • Administration Functionality
  • Behavioral Tests
  • Aggregated Logs

There are also two technologies that I haven’t had a chance to evaluate — I may incorporate these at a later date:

Photo by Connor Wells on Unsplash

Thank You!

Thanks for taking the time to walk through Project Gnar with me. I really appreciate it and I hope you’ve had some fun along the way!

Photo by Nathan Dumlao on Unsplash

This project would not be possible without the open source community. Project Gnar has almost 100 dependencies (each with their own set of dependencies) with contributions from hundreds of engineers across the world. Here’s the npm output from a recent install of Powder:

added 2669 packages 
from 999 contributors

On the backend, Gear has 9 direct dependencies and almost 50 total dependencies. There are also countless contributions to Terraform, Kubernetes, Linux, Docker, Nginx, Node, Python, and JavaScript for which to be grateful. The open source community is truly awesome.

This is my very first Medium article. I’m grateful for the exceptionally user-friendly UI that Medium has created. I look forward to the experience of publishing this article.


On a personal note, I’m eternally grateful for the loving support of my wife, Feihan, and my parents, Alice and Bob. And from my good boy, Dexter!