February 5, 2019

Ready Developer One Implementation

In this post we dive into the technical implementation details of the Ready Developer One platform.  If you are curious about the high-level details of the platform or want to try it out and play some games, check out our first post, Ready Developer One Introduction.  That post provides links to the console and instructions on how to get in and play the various challenges (including full implementations of Zork 1, 2, and 3).  The audience for this post is the developer or architect curious about how a system like this was put together.  This includes the full DevOps stack and the various larger (and smaller) frameworks and tools we used for each component.

Unfortunately the source code for this platform is not open source, but we may share some peeks at it.  Open sourcing the code would be difficult as it’s a hosted system with a datastore.  We have shared quite a bit about the architecture and all the various tools, libraries, and 3rd party APIs we utilized in development.  We hope you find some gems in there that you can utilize in your own development.

We had a lot of fun building the platform and we were blown away by how it was used at the event.  We hope you get some value from the information below.  If you have any questions regarding anything here please do not hesitate to reach out to me (Kevin Grossnicklaus) directly at kvgros@architectnow.net or on Twitter @kvgros.

Key Technologies and DevOps Platform

Knowing what we wanted to build and some of the technology hurdles facing our team we decided to utilize the following core technologies in the implementation:

“Console” and Scoreboard Interfaces

For all the browser-based user interfaces we utilized Google’s Angular framework and the TypeScript development language.  In addition to Angular we used several smaller NPM packages and themes.  These are discussed in later sections.  Our team has a large amount of experience with Angular and it fit very well into the overall technology architecture we were targeting.

Mobile Badge Scanner

To quickly develop mobile applications for multiple platforms (iOS and Android) we used Microsoft’s Xamarin frameworks (primarily Xamarin.Forms in this instance) and C# as a development language.  Using Xamarin allowed us to develop a solution that targeted both iOS and Android platforms with nearly 100% shared code between both platforms.  This level of code sharing isn’t always possible but was nearly reached in this case due to the small size of the overall mobile apps.

Common API

For the APIs and all server-side business logic we leveraged NodeJS and the TypeScript programming language.  This meant that we utilized TypeScript as the development language for the front-end UI code as well as the server-side API code.  To some this might seem unusual, but it is common on our projects.

In addition to utilizing NodeJS as the foundation of our API we leveraged a server-side framework called NestJS to build the APIs.  NestJS provides a very robust API platform on top of Node and allows us to utilize TypeScript and a very “Angular-like” framework to implement APIs.  We utilize NestJS on many projects and it was a natural choice for our team to quickly turn around a very powerful API.

As with the use of Angular, our server-side implementation went beyond just using NodeJS and NestJS.  Many smaller NPM packages were utilized and these are described in more detail below.

Database

All platform data is persisted in a MongoDB Atlas cluster.  As a MongoDB partner we have significant experience with this platform and had capacity on an existing MongoDB cluster.  We knew the platform and infrastructure we had available would be more than enough for the load and availability we needed to support.

Development Tools

Development of all components was done on MacBook Pros using JetBrains WebStorm (for TypeScript development of Angular and NestJS components) and JetBrains Rider (for C# development of Xamarin mobile applications).  Xamarin applications were compiled using Visual Studio for Mac.  (The reason for using Rider for editing C# and VS Mac for compiling is that our team much prefers the development experience in Rider to that of Visual Studio.)

All source code was tracked in multiple Git repositories hosted in Azure DevOps (hosted TFS).  Each component (Console UI and scoreboard, mobile apps, and API) was tracked in its own repository and all automated build cycles versioned each component separately following the GitFlow pattern.

Hosting/Development Environment

As a Microsoft Azure CSP, our team is very familiar with designing and hosting robust solutions within Microsoft’s Azure cloud.  All server-side components are hosted in Docker containers running Linux.  The deployment and management of all Docker containers is handled by a Kubernetes cluster running in Azure.

A continuous deployment cycle was implemented using Azure DevOps to deploy to both a testing and a release environment (building all Docker images and deploying everything to Kubernetes). 

All mobile applications were deployed for testing via Visual Studio AppCenter.  For final deployment of mobile apps to sponsor users we leveraged a 3rd party platform called Appaloosa.  This platform allowed us to deploy updates directly to user phones on demand without the requirement of going through the Apple or Google app stores.

All exceptions caught by any component were logged to Elmah.io and all team members are notified on new exceptions.  We utilize Elmah heavily on all our projects for this very capability.  

All application monitoring and telemetry was logged with Azure’s Application Insights (now Azure Monitor).  All key user interactions were logged as custom events so that we could track usage trends (and include such data in posts like this).

Cross-browser and cross-device testing was performed using BrowserStack.  We were able to publish our mobile builds to BrowserStack and have testers utilize several devices within their cloud testing bank.

Additional APIs and other Technologies

The API relied heavily on large instances of Redis Cache within Azure to optimize all API calls and cache any responses possible.  The API also persisted some aspects of user state (primarily Zork save files) to Azure Blob Storage.

All emails sent by the API were routed through MailGun.  MailGun’s API allowed us to send all emails and their web-hook infrastructure allowed us to be notified on opens or bounces.

Realtime search of session data, speaker data, and sponsor information was achieved using Algolia’s search APIs.  We have extensive experience leveraging Algolia’s APIs making them a relatively easy choice.  The use of Algolia provided a deceptively powerful search capability for our data and offloaded the workload from our own database.  Algolia’s search capabilities support such features as typo-tolerance and faceting.

To support communication in real-time between various components (primarily from the API down to the scoreboard UI and console UI) we leveraged a 3rd party API called Ably.io. Ably supports a pub/sub infrastructure that allows our scoreboard kiosks to subscribe to an event stream and our server-side API code (running in a cluster of Docker containers orchestrated by Kubernetes) to publish events to this stream.  This spared us a lot of headaches we would have had with lower-level frameworks such as Socket.io.  

Implementation Details

The following sections get into a bit more detail on some of the implementation details of each component.

The “Console” User Interface

The console interface was implemented as an Angular application written in TypeScript (as all new Angular applications are).  The initial Angular structure was created with the Angular CLI and the structure remained pretty standard.  All local state is managed via a Redux store.  We kept the TypeScript API proxies in sync with the API thanks to the API supporting Swagger 3.0 and our constant use of NSwag.  NSwag is a large part of our Angular (and Xamarin) development pipeline and eliminates the need for our team to constantly write wrappers around external APIs.

Only a small number of console commands are actually handled locally within the browser.  This includes the following:

  • login – Prompts locally for a username and password and sends the resulting pair to a login endpoint to request a new JSON Web Token (JWT) used for subsequent calls.
  • logout – Clears the current JWT token and local user info
  • clear/home – Used to clear the local result buffer (and clear the screen)

All other console commands are implemented in the API as a set of REPL endpoints.  The Angular UI collects a string of text input from the user and sends it to an API endpoint called ‘execute’.  The execute command returns an array of results (for multi-line responses).

To provide the ability to allow for some type of rich results we implemented a series of Angular components for each resulting line.  If the line was basic text, we rendered one component.  If the text of the line started with a ‘magic string’ (in our case ‘[img]’) we rendered another component that emitted an HTML <img> tag using the URL in the remainder of the line.  If a result line starts with ‘[err]’, a special Angular component renders the result in red.
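
A rough sketch of this prefix-based dispatch (the type names and helper here are illustrative, not the actual Angular components):

```typescript
// Illustrative sketch of prefix-based dispatch for console result lines.
// The "kind" values map to hypothetical Angular components, one per style.
type RendererKind = "text" | "image" | "error" | "qrcode";

interface RenderedLine {
  kind: RendererKind;
  content: string; // the line with any magic-string prefix stripped
}

const MAGIC_PREFIXES: Array<[string, RendererKind]> = [
  ["[img]", "image"],
  ["[err]", "error"],
  ["[QR]", "qrcode"],
];

function classifyResultLine(line: string): RenderedLine {
  for (const [prefix, kind] of MAGIC_PREFIXES) {
    if (line.startsWith(prefix)) {
      return { kind, content: line.slice(prefix.length) };
    }
  }
  return { kind: "text", content: line }; // plain text is the default
}
```

Each classified line would then be handed to the matching Angular component for rendering.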

The console itself is a running queue of response rows and an NgRx listener through which we stream results.  

To support a familiar command prompt, we were required to implement our own text entry (not using an HTML input box except on mobile…for reasons described below).  We keep a list of previously executed commands and allow a user to use the up/down arrow keys to scroll through them (à la most common command consoles).

Unfortunately, our mobile testing was less than successful with our hand-written command prompt (as most mobile devices make it difficult to pop up a keyboard unless focus is on an input element).  For this reason, we had to revert to using a standard HTML input element for mobile devices only.

To solve the problem of attendees losing their physical badge (and wanting to get it scanned) we implemented the ability for attendees to use a ‘whoami’ command that, in addition to displaying data on the current user, displays a QR code identical to that printed on their badge.  To render a QR code the console looked for result rows starting with ‘[QR]’ and then rendered a custom Angular component for that row.  For the actual rendering of a client-side QR code we used the following library:  https://github.com/Cordobo/angularx-qrcode

Another key point worth noting regarding the console API is that, for every request sent to the API and every result sent back from the API, a value called ‘commandPrompt’ is passed.  We utilized this value to track the different states the current user is in.  For example, if a user was playing Tic-Tac-Toe the ‘commandPrompt’ value is ‘Move?’.  This is rendered by the console as the prompt for the user’s next command and is sent back to the server with the user input.  This value allows the API to have some context regarding what the current user is doing as it decides how to execute the command.  More information on how this works is described below in the section on the API.
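
Conceptually, the request/response shapes look something like the following sketch; only ‘commandPrompt’ is described above, so the other field names are assumptions:

```typescript
// Hypothetical shapes for the 'execute' endpoint; only 'commandPrompt'
// is documented in the post, the other field names are assumptions.
interface ExecuteRequest {
  input: string;         // the raw text the user typed
  commandPrompt: string; // prompt state echoed back from the last response
}

interface ExecuteResponse {
  results: string[];     // one entry per console output line
  commandPrompt: string; // the prompt the console should render next
}

// While a user is mid-game the prompt carries that context, e.g. Tic-Tac-Toe:
function nextTicTacToeResponse(lines: string[]): ExecuteResponse {
  return { results: lines, commandPrompt: "Move?" };
}
```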

The console interface was used for anonymous users, authenticated attendees, authenticated sponsor users, and global administrators.  

Our onsite support team logged into the same console (as global administrator level accounts) and were provided with many additional capabilities they could use to manage the overall ecosystem.  This included inviting new users, resetting accounts, assigning user accounts to sponsors, etc.

We utilized BrowserStack to test the console thoroughly on as many mobile device browsers and desktop browsers as we could (given the time allotted).  The ultimate result isn’t perfect and could be improved, but we still supported a massive amount of mobile traffic with few complaints.

API/Data

As mentioned above, the entire API was developed on the NestJS framework using TypeScript.  All endpoints are HTTP/JSON enabled and the API was consumed by both the browser-based UIs and the mobile badge scanning applications.

Before we get into the specifics of the API implementation, for those of you wanting to explore its documentation you can find the Swagger Docs here:  https://api.readydev.one/docs/.

The API itself is heavily based on the NestJS framework.  It utilized NodeJS and Express to manage the underlying HTTP stack.  It supports compression, advanced logging, full CORS support (allowing all anonymous access), Swagger 3.0, consistent error handling, and a full JWT security implementation (via the NestJS Passport plugins).  There are many other low-level features the API exposes as part of its HTTP stack to provide a robust platform for communication.

All MongoDB data access via the API is done via a clean repository layer built around Mongoose and TypeGoose (a TypeScript layer on top of Mongoose).  All MongoDB models are defined in TypeScript classes with TypeGoose decorators.  All data access queries are written in MongoDB’s own query syntax and we utilize a number of projections to simplify scoring.

The API source itself is fairly cleanly organized into a set of layered components.  The primary tiers of the API implementation consist of:

  • Controllers – These NestJS controllers define the surface area of the API.  They are documented via TypeScript decorators that are used to expose a robust Swagger JSON representation of the API.
  • Repositories – This set of classes abstract all MongoDB interaction.
  • Services – All core business logic is abstracted into a set of service classes.
  • Models – All MongoDB document definitions are defined as a set of model classes.
  • ViewModels – The API itself only exposes ViewModel classes and never a direct MongoDB model.  The mapping between these two data representation classes is handled internally.
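
A minimal sketch of the model/viewmodel split and the internal mapping (the classes and fields below are illustrative, not the platform’s real definitions):

```typescript
// Illustrative model vs. viewmodel classes; field names are assumptions.
interface UserModel {
  _id: string;
  email: string;
  passwordHash: string; // internal only, must never leave the API
  displayName: string;
  score: number;
}

interface UserViewModel {
  id: string;
  displayName: string;
  score: number;
}

// Mapping happens internally so the API surface never leaks raw documents.
function toUserViewModel(model: UserModel): UserViewModel {
  return { id: model._id, displayName: model.displayName, score: model.score };
}
```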

The native DI/IoC implementation within NestJS allows for all components to be interconnected via an IoC abstraction.  Adding new capabilities to the API project is extremely easy and (if done correctly) non-obtrusive.  The NestJS development cycle includes a very nice NodeMon setup so that, when code is changed and a save detected, the service stops and restarts (thanks to file watchers via NodeMon).  

API development was commonly done directly from WebStorm using NodeMon and integrated debugging.  Developers also had the capability of running and debugging in Docker containers (which took a bit to build so was only done periodically prior to deployment).

The API was developed to be 100% stateless to support a highly distributed Docker/Kubernetes deployment model.  All inputs are provided via JSON and all responses are returned via JSON.  The only additional information per request (Put/Post/Delete) is the addition of a secure bearer token (JWT).

Some of the key reusable services injected into various components include services providing caching, email features, queuing services (via Bull and described later), and real-time services (via Ably).

Security

The API is secured via JWT bearer tokens.  A ‘security/login’ endpoint validates the user against a proprietary user data store.  We initially evaluated a few ways to use OAuth authentication against external social media providers (Facebook, Twitter, Google, Microsoft, etc.) but couldn’t find a way to make that workflow fit nicely into the “retro” experience we wanted, so we fell back to writing our own.

Another constraint was that our user base was a fixed set of 1,000 users.  We had a master database prior to the conference with the names and emails of everyone who might play our games (or otherwise access the system).  We pre-generated each user document in our database from this data and then automated an invitation email asking each user to accept the account and provide their own password.  Once this was done, they had access to all features.  Random people were not allowed to create accounts during the conference event; only paid attendees, sponsors, or organizers could have an account.

Queued Work

A number of our API endpoints included work that could (and should) be performed asynchronously.  To easily support queued functionality within the technology stack chosen we utilized an NPM package called Bull.  In addition to the core Bull functionality we also utilized a user interface for Bull called bull-arena.  To keep things flexible, we abstracted all queuing into a service we could reuse and then added a configuration option to turn queueing on or off.  If turned off, we also disabled the UI.

Bull utilized our existing Redis infrastructure as its backing data store.

Caching

A huge performance gain within the overall API came from its reliance on a large Redis cache.  We leaned heavily on an NPM library called ioredis for accessing the cache.  The core caching functionality was abstracted into a CacheService and injected via the IoC container into all services requiring its functionality.
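
A minimal cache-aside sketch of this pattern, with a Map standing in for the ioredis client (the class and method names are illustrative):

```typescript
// Cache-aside sketch: a Map stands in for Redis (ioredis in the real
// system); entries expire after a configurable TTL.
class CacheService {
  private store = new Map<string, { value: string; expiresAt: number }>();

  constructor(private readonly ttlMs: number) {}

  getOrSet(key: string, load: () => string): string {
    const hit = this.store.get(key);
    if (hit && hit.expiresAt > Date.now()) {
      return hit.value; // cache hit: skip the database entirely
    }
    const value = load(); // cache miss: fall through to the real lookup
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```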

During development all team members utilized a local Redis cache running in a Docker image managed by Kitematic.  To easily work with the Redis cache we utilized a 3rd party tool called Medis.

Emails

The primary means of communicating with system users throughout the event was a system of automated or otherwise triggered emails.  As mentioned above all emails were routed through a 3rd party email API called MailGun.  MailGun served as our SMTP server and provides a number of powerful email delivery features including open and bounce tracking, web-hook callbacks on email events, and more.  

To perform the formatting of our emails (and support a nice standard set of themes) we leveraged an NPM package called email-templates.  This library made it very easy to externalize our email templates/themes into their own folder structure and bind our data to them prior to sending (using MailGun).  We initially started the process using PUG as our template engine but switched to EJS to allow for embedded JavaScript and a more comfortable format for designing the emails. 

As we knew the primary means of viewing email would be mobile phones, we started with a clean mobile email template and tested thoroughly on as many mobile email clients as possible.

Command Parsing

One of the most useful pieces of the overall API is the ability for the API to accept a text string of input from the “console”, parse it, and return results (a standard REPL loop).  We initially evaluated chatbot frameworks, but they weren’t a clean fit, so we went with a more “manual” approach.  After some research we settled on an NPM library called bot-commander. The core of all the user input processing is handled by this library.  On startup we configure an instance of this component and register all the commands we want to support, the parameters they expect, their help text, and any aliases.  In addition to this we provide a function to be executed (with the provided parameters) when the command is recognized.  This allowed us to build up a “language” that we supported.  Once we got the pattern of accepting commands down the concept grew (and grew and grew).  

An example of this can be seen in the following code:

listCmd
    .command('sponsors [filter]')
    .option(
        '-l, --level [level]',
        'Sponsorship level',
        /^(all|platinum|gold|silver)$/i,
        'all',
    )
    .option(
        '-u, --unscanned',
        'Show only unscanned sponsors who are onsite',
    )

    .description('List Sponsors')
    .action((context: ConsoleContext, filter: string, opts) => {
        // look up sponsors matching the filter/options and write the
        // formatted result rows back to the user's console context
    });

Using this library, we built up a large set of easy-to-use commands we could accept from the web-based “console”.  Depending on the user’s role (attendee, sponsor, or global admin) we could register more or fewer commands to provide additional capabilities.  If a user didn’t have access to a command (and it wasn’t registered) the console didn’t recognize it and we displayed a simple “Syntax Error” message.

We did have to be clever on some of the commands to support async operations as the bot-commander wasn’t natively async friendly.  We worked our own Observable patterns into its usage so we could easily query data asynchronously and return the results in a consistent manner.  Once this hurdle was overcome, we reused it heavily as most commands required some interaction with the database.
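
A rough sketch of the idea (the event code worked Observables into this; a Promise stands in here, and `send` is a hypothetical function that pushes result lines back to the console):

```typescript
// Sketch of adapting an async data lookup to a callback-style command
// action; bot-commander itself is omitted, and 'send' stands in for
// delivering result lines back to the console.
type Send = (lines: string[]) => void;

function makeAsyncAction(
  query: () => Promise<string[]>,
  send: Send,
): () => void {
  return () => {
    query()
      .then(rows => send(rows))                     // success: emit each row
      .catch(err => send(["[err]" + String(err)])); // errors use the [err] style
  };
}
```

The returned function can be registered as a synchronous action while the database work completes in the background.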

Every “response” from the bot-commander framework was tracked with a user “context”.  This context gave access to some common state that we load on entry to the API and track with all bot responses (some of which gets sent back down to the client).  This allowed the console API endpoints to remain stateless while the console still remembers where the user is (i.e. are they playing a game, are they just entering commands, etc.).

As the Angular UI displaying the console output was built using a fixed-width font (a rarity in the web world but a design decision we had to make), the bot-commander responses we generated with all of our commands had to be cognizant of spacing so that things looked “retro” and aligned correctly (at least as best we could).  This meant adding blank rows periodically for vertical spacing and aligning data horizontally by padding spaces (or other characters).  You can see the results in the printed high score list:

In the above output the rank and score were each left-padded with zeros while the player’s name was left-padded to a fixed width using spaces.  This was done to make the output look like it would have been done in the ’80s.
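
That padding logic might be sketched like this (the column widths are illustrative):

```typescript
// Sketch of the retro fixed-width formatting: rank and score left-padded
// with zeros, the player's name left-padded with spaces.
function formatScoreRow(rank: number, name: string, score: number): string {
  const rankCol = String(rank).padStart(3, "0");     // e.g. "001"
  const nameCol = name.toUpperCase().padStart(20, " ");
  const scoreCol = String(score).padStart(8, "0");   // e.g. "00015000"
  return `${rankCol}  ${nameCol}  ${scoreCol}`;
}
```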

Challenges

The core of the “games” part of the console was built around the concept of “challenges”.  For the initial event we were able to implement 10 of these challenges:

We built the infrastructure to allow us to have hidden challenges, be able to enable/disable challenges for all users, track each individual user progress through the challenges, etc.   This was core to our making the event as fun as possible and keeping things going for a large crowd.  We didn’t want to immediately make all challenges available and wanted to expose new ones at periodic times through the three days.

All challenges were initially assigned a 4-digit number (à la a VISUAL BASIC line number) and to run a challenge a user would type:

run 1700


Some people did this, but we quickly realized it was a bit too obscure for most so we came up with aliases for all challenges so you could type various things such as:

run 80smusic


Or

run ttt 


The ttt above is an abbreviation for tic-tac-toe discussed later.  The 80smusic string is an alias for challenge 1400 which is a trivia game for 80’s music (called “When video killed the radio star” in the console).

Two challenges (sponsor badge scanning and session evaluations in the above list) were open for everyone upon their initial login.  They didn’t need to do anything but walk around the conference and get their badge scanned or to attend a session and provide an evaluation to get points for these challenges.  The next six challenges we opened were trivia challenges based on various pop-culture themes.  We developed a reusable trivia engine and abstracted all trivia questions, answers, etc. into a JSON document to drive this engine.

To make the games “stateful” while keeping the API stateless we needed to allow each challenge to store a custom context with the current user.  As a user progresses through answering trivia questions, we needed to remember their answers and be prepared to score the results.  To solve this, we took advantage of the document structure of MongoDB and extended our user model to support an arbitrary “data” element that is untyped JSON (at least as far as our TypeScript goes).

When a command comes into the API from a logged-in user we load that user and assign the current value of that “data” element to the context for this call.  Any actions or games that the current command relates to can use that data to determine their response.  They can also update the data prior to returning results to the user.  The infrastructure will persist the data back to the MongoDB database prior to returning.  This allowed us to keep track of things such as which question a user is currently on, what their answers were to prior questions, etc.  With regards to the Tic-Tac-Toe game discussed next, we store the entire state of a user’s current Tic-Tac-Toe game in this data element.

Once the plumbing was in place to load user data on a command start and save it prior to returning to the user, it opened up a lot of cool features to developers building out challenges, as they could remember (on a user-by-user basis) any state necessary to make a console-based challenge.

We made the entire scoring of all challenges configurable in an external file and did our best to come up with a system that was fun for all players, had scores high enough to be meaningful, and rewarded people willing to work hard.  Looking at the scoring configuration we used for the first event you see:

scoring = {
    minimumHighScore: 15000,
    gameEnabled: true,
    sponsorScan: 1000,
    sessionEval: 1000,
    allSponsorsScan: 10000,
    allSponsorsScan1st: 12000,
    allSponsorsScan2nd: 11000,
    allSessionCount: 10,
    warGames: 1000,
    warGames1st: 3000,
    warGames2nd: 2000,
    zorkPointMultiplier: 100,
    zorkEgg: 5000,
    zorkEgg1st: 7000,
    zorkEgg2nd: 6000,
};

First, no player got on the high score board unless they reached 15,000 points.  This left the scoreboard empty for the start of the event.   Each time an attendee got their badge scanned they were awarded 1,000 points, and every session evaluation accounted for another 1,000 points.   To follow the concept behind Ready Player One, if you got your badge scanned by all sponsors you got another 10,000 points.  BUT if you were the first attendee to get all sponsors to scan your badge, instead of the standard 10,000 points you received 12,000.  If you were the second attendee to do this, you received 11,000.  Every other attendee received only 10,000 for getting all sponsors.

We capped session evaluations points to only be scored for the first 10 sessions which is all a single attendee could physically attend.   This kept people from “gaming the system” and just entering a ton of session evaluations for points.

The “WarGames” section of the challenge scoring relates to Tic-Tac-Toe and the points allotted to people who ultimately win the challenge.  Give it a try to see if you can.

The zorkPointMultiplier and other Zork point items are discussed below in the section on Zork.

We put a lot of thought into the scoring concepts, how we calculate them, and how they are weighted.  We knew we only had a 3-day event with no ability to beta-test the whole concept at the scale we would see during that event.  We wanted to make sure we incentivized the right behaviors and made it as fun as possible.  We weren’t sure if people would play ANY of the challenges, and some of them (primarily Zork) we weren’t publicizing (keeping them in our back pocket as a final Easter egg if things went well).   I had a spreadsheet I played with attempting to calculate points to see where we might end up.  

Ultimately, the top 4 players maxed out every single point possible in the game.  We were stunned.  There were some bugs found and some scenarios we had to address in real-time (such as one sponsor who decided not to scan badges at all).  But, in the end, the point system worked very well.  

If you log into the system now as a guest user, you can play every challenge available (they are all auto enabled currently).   You will not see the session evaluation or sponsor badge scan challenges as they only apply during an event.
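
The load → execute → persist flow for the per-user “data” element described above might be sketched like this (a Map stands in for MongoDB, and the names are illustrative):

```typescript
// Sketch of the per-user challenge state flow: load the user's untyped
// "data" element, let the challenge read/mutate it, persist it back.
interface UserDoc {
  id: string;
  data: Record<string, unknown>; // the untyped per-challenge "data" element
}

const users = new Map<string, UserDoc>(); // stands in for the MongoDB collection

function executeForUser(
  userId: string,
  handler: (data: Record<string, unknown>) => string[],
): string[] {
  const user = users.get(userId) ?? { id: userId, data: {} }; // load on entry
  const results = handler(user.data); // the challenge reads/mutates its state
  users.set(userId, user);            // persist before returning to the user
  return results;
}
```

A trivia challenge, for example, could track which question each user is on without the API holding any in-process state between requests.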

Tic-Tac-Toe

If you are familiar with the movie War Games you know the simple game of Tic-Tac-Toe plays a large part in the movie.   When I received my first home computer (an Apple IIgs) back in the mid-1980s, the first code I ever wrote was based on that movie (in AppleSoft BASIC).  You can find out more about that story in a post I wrote a few years ago here:  http://architectnow.net/2017/11/touch-applesoft-basic/.  

Knowing how much Tic-Tac-Toe worked into that movie we decided to make one of the Ready Developer One challenges a game of Tic-Tac-Toe where attendees played against the computer.   Given the ability to track context on a per-user basis and the fact that Tic-Tac-Toe can be easily rendered via text it seemed straightforward enough.    The turn-based aspect of Tic-Tac-Toe was also a nice fit for our console model.

The following screen snippet shows this challenge in action:

We stored a 3×3 array (the board) in the user context.   When a game ended (whether with a draw or a winner) we reset the board to blank and allowed a new game to start.  With this model, thousands of attendees could be playing against the same computer logic with their own board.

To implement the logic inside the API to play an attendee we researched and implemented a simple “minimax” algorithm.  This concept is well documented on the internet (and in many languages and platforms) in articles such as this.

The outcome of this was that the computer could not be beaten at Tic-Tac-Toe.  Every game was guaranteed to end in a draw OR the computer (i.e. API) winning.  
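
A minimal version of the minimax approach for Tic-Tac-Toe looks like the following (an illustrative sketch of the well-known algorithm, not the event’s actual code):

```typescript
// Minimax sketch for Tic-Tac-Toe: the computer plays "O" and picks the
// move that maximizes its worst-case outcome, so it can never lose.
type Cell = "X" | "O" | " ";
type Board = Cell[]; // 9 cells, row-major

const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

function winner(b: Board): Cell {
  for (const [a, c, d] of LINES) {
    if (b[a] !== " " && b[a] === b[c] && b[a] === b[d]) return b[a];
  }
  return " ";
}

// Score from the computer's perspective: +1 win, -1 loss, 0 draw.
function minimax(b: Board, isComputer: boolean): number {
  const w = winner(b);
  if (w === "O") return 1;
  if (w === "X") return -1;
  if (!b.includes(" ")) return 0; // board full: draw
  const scores: number[] = [];
  for (let i = 0; i < 9; i++) {
    if (b[i] !== " ") continue;
    b[i] = isComputer ? "O" : "X";
    scores.push(minimax(b, !isComputer));
    b[i] = " "; // undo the trial move
  }
  return isComputer ? Math.max(...scores) : Math.min(...scores);
}

function bestMove(b: Board): number {
  let best = -1;
  let bestScore = -Infinity;
  for (let i = 0; i < 9; i++) {
    if (b[i] !== " ") continue;
    b[i] = "O";
    const score = minimax(b, false); // opponent replies optimally
    b[i] = " ";
    if (score > bestScore) {
      bestScore = score;
      best = i;
    }
  }
  return best;
}
```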

We did want to provide a way for players to score points in this challenge and I feel we achieved that in a clever way.  If you want to see how, keep trying to beat the computer (hint: play at least 10 games).

Zork and ZCode Parsing

IMHO one of the coolest additions to the overall challenge system was the fact that we were able to integrate the ability for players to play the classic 80’s text-based game Zork into the console.  This took some thought and planning (and a bit of creativity) but we were able to make it work on a large scale.

We initially thought about just “faking it” and implementing the first bits of some Zork games manually to make it “fun” and seem clever.   Then, after some beers and some research, we came to believe we may be able to integrate the full Zork apps.

First, the original Zork games are written in a language called “Z Code”.  This same language (and its associated runtimes) was utilized for a number of games in the similar “interactive fiction” genre (primarily written by Infocom).   Over the years many developers have ported the Zork games to other languages and a number of Z Machine runtimes have been built for various platforms.   Our initial research was for a good runtime to play Zork on our own machines (MacBook Pros).   We quickly stumbled upon the Frotz Z-Machine Interpreter and the associated GitLab repo.   We were able to clone this repo, build all source (using a makefile), and use the product locally.   We were able to find the original Z Code Zork game files here.

Using the Frotz interpreter from a command prompt to play the Zork1.dat file was pretty straightforward.   But we needed to be able to do this in code (within the API) at scale for up to 1,000 simultaneous players (remembering where each of them are currently at within the game).  

Knowing the API was going to be hosted in Docker images, one of our team members set about extending our Docker build files to clone the Frotz GitLab repo and build the project from source on each Docker image (note:  our Docker images were built upon the Linux node:10-alpine image).  Once we accomplished this, each of our API Docker images had access to the Frotz runtime.

We now needed a way for the API to call this interpreter via code (our TypeScript API code), sending in a command and getting a result.  Some heavy Google research led us to an NPM package called frotz-interfacer.   This package serves as an ES6 wrapper around Frotz and gave us a nice API for sending commands and retrieving results.   An example of its usage would look like the following (taken from their Readme):

const frotz = require('frotz-interfacer');

let interfacer = new frotz({
  executable: '/path/to/executable',
  gameImage: '/path/to/game/file',
  saveFile: '/path/to/save',
  outputFilter: frotz.filter
});

interfacer.iteration('look', (error, output) => {
  if (error && error.error) {
    console.log(error.error);
  } else {
    console.log(output.pretty);
  }
});

As you can see in the above code, we are able to specify a path to the Frotz executable (which we had previously built into our Docker stack), a game image (we found the Zork1.dat file freely available online), a location for a save file, and a filter for the output (we didn’t use anything beyond the defaults).

The call to ‘interfacer.iteration’ above passes a text command (in this case ‘look’) to the Frotz engine.  The engine uses the save file to determine the user’s current place in the game (or creates a new one if none is found and the user is starting at the beginning).  “Interactive fiction” games such as Zork are based around the user entering a text command and getting a response printed to the console.  Users can enter things like “Look around” or “Pick up sword” and the engine will attempt to parse the input and give a result (or indicate it doesn’t recognize the command).  The above code shows how we send a user-entered command to the engine, have the Frotz engine parse that command given the current game and the user’s save file (i.e. state), and get results sent back to us.  A full list of supported Zork commands can be found here.

To support a large number of users we needed to assign each user their own save game file.  We also had to account for the fact that our API Docker images were deployed in a cluster, so we couldn’t guarantee that multiple HTTP requests would be routed to the same image.  This meant our Zork save game files couldn’t reside solely on the local Linux file system.  A little local research revealed that an entire Zork save game file is only around 300–500 bytes (yes, BYTES).  Rather than persist this data in our MongoDB data store, we decided it was fast enough to stream the game files to Azure Blob Storage.  In our API pipeline, when a user playing Zork sends a command, we take the user’s GUID and look for an existing save game file in blob storage.  If we find one, we stream it down to the local Linux file system on the current Docker image and save it in a temporary location.  If we don’t find a save file, we simply create a temporary path in the same location.  When we create the frotz-interfacer we use this path (whether or not a file exists there yet).  After we have passed the current command to the interpreter and gotten a response, we can assume the save game file has been updated.  We then take the local Linux file and save it back to blob storage so that it will be found on the next request.
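The per-request round trip above can be sketched roughly as follows.  The function and helper names are ours, not the Azure SDK’s, and the blob store here is an in-memory stand-in for the real Azure Blob Storage calls (which are asynchronous in practice):

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// In-memory stand-in for Azure Blob Storage, keyed by the user's GUID.
const blobStore = new Map<string, Buffer>();

function handleZorkCommand(
  userId: string,
  command: string,
  // Stand-in for the frotz-interfacer call, which reads and updates
  // the save file at savePath as a side effect of running the command.
  runInterpreter: (savePath: string, cmd: string) => string
): string {
  const savePath = path.join(os.tmpdir(), `zork-${userId}.sav`);

  // Restore the user's prior state if a save blob exists.
  const existing = blobStore.get(userId);
  if (existing) fs.writeFileSync(savePath, existing);

  // Run the command; the interpreter creates/updates the save file.
  const output = runInterpreter(savePath, command);

  // Persist the (possibly new) save file so the next request finds it,
  // regardless of which Docker instance handles that request.
  blobStore.set(userId, fs.readFileSync(savePath));
  return output;
}
```

Because the save file is pulled down before the command and pushed back up after, any instance in the cluster can serve any request.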

Once the save game file persistence was in place and the Frotz engine and frotz-interfacer components were hooked up, we could accept commands from any number of users and have each play their own “instance” of Zork.

Each command passed to the Zork interpreter returns multiple strings of text in response.  We split the results on CR/LF and sent them back as our standard console response lines.  Thus, a user of our console who is playing Zork sees the exact Zork output they would see if they were playing the game locally.
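The splitting step is simple; a minimal sketch (the function name is ours):

```typescript
// Normalize the interpreter's line endings (CR/LF or bare CR/LF variants)
// and split the raw response into the individual lines we return
// to the console as standard response lines.
function toConsoleLines(raw: string): string[] {
  return raw.replace(/\r\n?/g, '\n').split('\n');
}
```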

Scoring Zork

The biggest remaining piece we had to overcome with the Zork games (we included all three in the console; you can play them by typing ‘run zork -v [version number]’) was how to give the player points for playing.

First, when playing Zork, the game itself keeps track of the user’s score.  This value is sent back with every command on a distinct line that reads something like “Score: 10”.  The first Zork game has a maximum score of 350 points.

Since we had control of the responses sent back to the user, we parsed all the response rows before returning them and looked for the line starting with “Score:”.  If that line was found, we passed the data off to our scoring service to update the current user’s Ready Developer One points.  We implemented a simple (and configurable) ratio of 100 platform points per single Zork point.  If a player earns a total of 10 points in Zork they receive 1,000 points in the Ready Developer One Zork challenge.  This was simple enough, and we didn’t think it would have much of an effect on our overall game.  It turned out to be the largest single scoring mechanism in our event.
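The score scan and conversion described above can be sketched like this (function names and the exact line format assumed are ours, matching the “Score: 10” line described above):

```typescript
// Configurable ratio: platform points awarded per single Zork point.
const POINTS_PER_ZORK_POINT = 100;

// Scan the response lines for one starting with "Score:" and pull
// out the numeric value; returns null if no score line is present.
function extractZorkScore(lines: string[]): number | null {
  for (const line of lines) {
    const match = line.trim().match(/^Score:\s*(-?\d+)/);
    if (match) return parseInt(match[1], 10);
  }
  return null;
}

// Convert a Zork score into Ready Developer One points.
function toPlatformPoints(zorkScore: number): number {
  return zorkScore * POINTS_PER_ZORK_POINT;
}
```

The Easter-egg check described below works the same way: scan the returned lines for a trigger phrase instead of a score.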

Next, I wanted to implement an “Easter egg” within the Zork game: something that, if a user did it, earned them bonus points.  Early in Zork 1 a user can climb a tree and find a “jewel-encrusted egg”.  Within the game there are a number of (non-trivial) ways a player can open this egg.  If a player opens the egg inside the Zork game they are awarded bonus points: the first player to do so earns the most, the second a bit less, and everyone after that a smaller fixed amount.  To achieve this, we added some checks on the Zork output in the same way that we tracked score.  If any line sent back from the interpreter contained specific text (i.e. the user opened the egg), we assigned the bonus accordingly.

Again, we didn’t expect the opening of the egg even to be noticed, but it became a much larger thing once the top players were vying for position on the leaderboard.

It is worth noting that, during the devup 2018 conference’s three-day event, 5 players scored 100% of every point in Zork 1, 2, AND 3.  We had gamers drawing maps and consulting cheat sheets.  It was much crazier than we ever expected.  Going into the event we thought that having Zork in the console would be an Easter egg itself; we weren’t sure anyone was geeky enough to play beyond the first few moves for nostalgia’s sake.  That turned out to be a false assumption.

We tracked a large amount of telemetry throughout the event via Azure Application Insights, including over 35,000 individual Zork commands handled by the API during those 3 days.

If you have read this far and have never played Zork please go and try it now at www.readydev.one.

Mobile Badge Scanner

As described in the introductory post Ready Developer One Introduction, the need to write our own badge scanning app was driven by the need to track all scans in our own API and assign points to attendees when they get scanned.  We evaluated commercial scanning apps but couldn’t find one robust enough to support our needs (i.e. web-hooks/callbacks we could listen for).

The badge scanning application itself was written very quickly in Xamarin.Forms.  As mentioned previously we achieved a massive amount of code sharing (C#) between the iOS and Android versions of the application.  

For the actual scanning we used an external library called Zebra Crossing (ZXing).  We had used this library on other projects, and it has proven very powerful and quick at scanning via a mobile camera.  The library supports most QR/barcode formats; we chose one we knew worked well and coordinated with the conference badge printers to ensure the detail we needed was encoded in a supported QR code on the badges.

Conference Badge

Knowing that the conference WiFi would be suspect (as it usually is in most large venues), we needed to ensure that the scanning application could function with little to no access to the API.  This is a common requirement on many of our mobile projects.  To support it, we used Akavache as a local data store, caching data we had already retrieved from the API and queueing up data (in this case badge scans) that needed to be synced with the API.  The Slack UI is an example of a heavily used application that also uses Akavache.

In addition to Akavache for local cache storage, we used Redux.NET for state management.  We use Redux in all of our Angular applications and are very familiar with its concepts.  Having a C#/.NET implementation available on our mobile applications works well for us and provides the same benefits we have come to expect.

We also utilized the service locator (IoC/DI) provider from Splat to manage injecting services into our Xamarin views.

As mentioned earlier, we decided against publishing the sponsor badge scanning application to the iOS or Android stores due to the limited duration of the conference.  Getting store approvals for the initial deployment would have been fine, but if a bug had been found during a 3-day event we would not have been able to deploy a fix quickly.  For this reason, we performed a direct deployment via the Appaloosa platform.  This solved our problem but added a layer of headache when communicating with users.  Unlike a simple install from an app store, our model required extra steps of every user (for example, providing their device’s UDID on iOS).  We documented this heavily and did our best to communicate it, but it still caused some headaches both before and during the event.

Scoreboard Kiosks

To make the game fun for a large group of people we needed a way to keep everyone up to date on the current leaderboard.  We ultimately put 55” 4K LED TVs on stands throughout the conference center, with Raspberry Pi 3B+ computers mounted to the back of each TV to stream web content to the screen.  Originally we had purchased Raspberry Pi Zeros but quickly learned that they did not have enough compute power to handle the graphics of the web browser.  The content we displayed was the scoreboard, a list of upcoming sessions (with speaker and room info), and a rolling ticker we could use to present other pertinent information.  An example is below:

Kiosk and Scoreboard

We built in some administrative console commands to let us remotely update the scoreboard kiosks and send messages to the ticker.  We also gave ourselves the ability to send warning messages to the kiosks that popped up an overlay like this:

Kiosk set to message mode via realtime message

For simplicity’s sake we built the entire kiosk user interface into the same Angular front-end as the console and just set up direct routes to it:

Full kiosk:  https://www.readydev.one/kiosk

Scoreboard only:  https://www.readydev.one/kiosk/scoreboard

Sessions only:  https://www.readydev.one/kiosk/sessions

Having reusable components in Angular, and distinct routes we could use to isolate content, gave us the flexibility to show as much or as little as we wanted on individual kiosks.
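The route setup amounts to mapping each URL above to its component.  A minimal sketch follows; the component names are illustrative, and we mirror the shape of Angular’s Routes array with a local interface so the sketch stays self-contained rather than importing the framework:

```typescript
// Shape mirroring Angular's Route entries for this sketch.
interface Route { path: string; component: unknown; }

class KioskComponent {}       // scoreboard + sessions + ticker
class ScoreboardComponent {}  // scoreboard only
class SessionsComponent {}    // sessions only

// One route per kiosk view, matching the URLs listed above.
const kioskRoutes: Route[] = [
  { path: 'kiosk', component: KioskComponent },
  { path: 'kiosk/scoreboard', component: ScoreboardComponent },
  { path: 'kiosk/sessions', component: SessionsComponent },
];
```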

Each Raspberry Pi 3B+ controller was originally configured to use the conference’s WiFi network but was ultimately hard-wired due to WiFi connectivity issues.  We configured a Raspbian Lite image to auto-login a default user and boot directly into Firefox ESR running in full-screen/kiosk mode via the mKiosk extension (hiding the rest of the OS).  We chose Firefox over Chromium because it performed better in our testing.  When the browser opened in this mode, we defaulted the home page to one of the routes above, using a combination of the following posts as our initial guidance:

https://tamarisk.it/raspberry-pi-kiosk-mode-using-raspbian-lite/ 
https://die-antwort.eu/techblog/2017-12-setup-raspberry-pi-for-kiosk-mode/

The .xsession configuration we utilized is shown below (to get you started):

export URL=https://www.readydev.one/kiosk

xset s off &
xset -dpms &
xset s noblank &

unclutter -idle 10 -noevents &

# Get screen width and height
WIDTH=$(sudo fbset -s | grep "geometry" | cut -d " " -f6)
HEIGHT=$(sudo fbset -s | grep "geometry" | cut -d " " -f7)

# Start browser with window dimensions set to fullscreen and url.
/usr/bin/firefox-esr -width ${WIDTH} -height ${HEIGHT} -private -url ${URL}

Once the configuration was complete on one Raspberry Pi’s SD card, we cloned the remaining SD cards using ApplePi-Baker (each clone took about an hour).

As the API updated content (e.g. scores, session updates, etc.), it published a small message to an Ably.io channel indicating that data had changed.  The kiosk component of the Angular front-end subscribed to the same Ably.io channel and, when such a message was received, knew to refresh its content accordingly.  We used the same Ably.io publish/subscribe capabilities to send messages from the API down to all kiosks for warnings or to update the ticker content.  A final “failsafe” option was to send a full refresh request via Ably.io telling every kiosk to reload its browser.  This was required when we changed client-side Angular code and published those changes to our web servers; the only way that code would make its way down to the kiosks was if they refreshed themselves and pulled down the new packages.
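The pattern itself is simple publish/subscribe.  The sketch below uses a small in-memory channel class purely to illustrate the flow (the real code uses Ably.io’s channel API, not this class):

```typescript
type Handler = (data: unknown) => void;

// In-memory stand-in for a realtime pub/sub channel: the API side
// publishes small "data changed" messages, and every kiosk subscriber
// reacts by refreshing its own content.
class EventChannel {
  private handlers = new Map<string, Handler[]>();

  subscribe(event: string, handler: Handler): void {
    const list = this.handlers.get(event);
    if (list) {
      list.push(handler);
    } else {
      this.handlers.set(event, [handler]);
    }
  }

  publish(event: string, data: unknown): void {
    for (const h of this.handlers.get(event) || []) {
      h(data);
    }
  }
}
```

In the kiosk, one subscription handles incremental data refreshes while a separate “failsafe” event triggers a full browser reload so new Angular bundles get pulled down.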

Once the whole infrastructure was in place (and the TVs, Raspberry Pis, etc. were configured and deployed throughout the venue) we could do all manipulation remotely, and the kiosks were pretty much forgotten about.

We did learn a significant amount about using Raspberry Pis in this manner and how to manage this type of environment.   Moving forward we are looking at a number of other usage scenarios for a very similar setup.

What’s Next?

The main reason for writing such a long-winded overview of the platform is that we rarely see project post-mortems written anymore.  If a project isn’t open source, it’s hard to know much about what the team used to build it (unless it’s very obvious by looking at it).  We try to share info on all we do, but as contractors on other people’s projects that’s not always possible.  The ArchitectNow team took on the development and support of Ready Developer One to help promote our capabilities at a local conference and to build something we’d have fun seeing used.  We leveraged much of what we already knew and threw in some fun stuff along the way.  We also wanted to make sure others could learn as much as possible from it.

During the event we gathered a ton of feedback and have been brainstorming on what to do for next year.  We had features in the backlog we didn’t get to and whole conceptual areas we left unexplored.

We are starting to plan for 2019 and how we might leverage this foundation for something bigger that can be used by others (beyond just the conference we started at).  We are not sure a “console” will be at the core of it, but the concepts we’ve learned will definitely carry forward.  Thanks again for reading this far, and if you have any questions regarding anything here please do not hesitate to reach out to me (Kevin Grossnicklaus) directly at kvgros@architectnow.net or on Twitter @kvgros.