QLD Traffic

In 2016, Queensland’s Department of Transport and Main Roads (TMR) embarked on the ‘Next Gen Traffic and Travel Information (TTI) project’. The aim was to provide a customer-focussed and responsive traffic and travel information service that also met TMR's business objective: an integrated transport system that supports the safe, efficient and reliable movement of people and goods.

The key objectives of the TTI system were to a) provide a single source of truth for all traffic and travel information; b) disseminate timely, reliable and accurate information; and c) facilitate informed travel decisions for Queensland citizens.

Our solution was to develop an app, which TMR named QLD Traffic, to help Queenslanders plan their journeys.

The app provides real-time, statewide information on road conditions, including traffic imagery, incidents and hazards, closures and restrictions, roadworks and special events. It also sends personalised push notifications to flag issues as they arise on a commuter’s favourite routes.

Traffic and travel information is maintained using a decentralised model, where TMR's district offices input local data. District offices are supported by a 24/7 statewide Traffic Management Centre to keep information as up to date as possible.

Given how critical the timeliness of the information is, Adapptor developed a Python-based server that integrates with Firebase Cloud Messaging, with a custom query layer for location-based incident reporting to drive local-area alert functionality.
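
As a rough illustration of how such a query layer and FCM integration could fit together, the Python sketch below finds incidents near a point and pushes an alert to a topic. It is a minimal sketch under assumptions: an "incidents" collection with a 2dsphere index on its location field, topic-based FCM subscriptions for saved routes, and an illustrative 5 km radius; none of these names reflect the production system.

    # Minimal sketch of a location-based alert pipeline (assumed names/values).
    import firebase_admin
    from firebase_admin import credentials, messaging
    from pymongo import MongoClient

    firebase_admin.initialize_app(credentials.ApplicationDefault())
    db = MongoClient("mongodb://localhost:27017")["traffic"]

    def incidents_near(lon, lat, radius_m=5000):
        """Find incidents within radius_m of a point (requires a 2dsphere index)."""
        return db.incidents.find({
            "location": {
                "$near": {
                    "$geometry": {"type": "Point", "coordinates": [lon, lat]},
                    "$maxDistance": radius_m,
                }
            }
        })

    def push_incident_alert(incident, topic):
        """Fan an incident out to an FCM topic, e.g. subscribers to a saved route."""
        message = messaging.Message(
            notification=messaging.Notification(
                title="{} on {}".format(incident["type"], incident["road"]),
                body=incident["description"],
            ),
            topic=topic,
        )
        return messaging.send(message)  # returns the FCM message ID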

The app allows commuters to set preferences based on their experiences and needs.

They can save favourite routes and places, receive personalised traffic alerts as push notifications, see live traffic alerts and events on a map, and filter traffic alerts to see a range of information, such as:

  • Real-time route incidents, roadworks and traffic

  • Real-time nearby incidents, roadworks and traffic

  • Incidents near calendarised events (if you’ve set a location for the event)

  • Crashes, flooding and hazards

  • Road closures and restrictions

  • Roadworks and special events

  • Traffic flow and congestion

  • Traffic web cameras

Launched in February 2017, the app has been downloaded over 400,000 times. Used as TMR’s primary traffic and travel information source during Cyclone Debbie in 2017, the app was downloaded and used by 20,000 people in a single weekend.

QLDTraffic scalability

The original deployment of QLDTraffic was a fixed cluster of three AWS EC2 instances. Each instance ran a web server (NGINX) fronting an app server instance, and each also served as one node of a three-node MongoDB replica set. A load balancer forwarded traffic from the internet to one of the instances. Because the MongoDB configuration was fixed, there was no way to add instances through auto-scaling (horizontal scaling); the only way to increase capacity was to upsize instances to a higher-spec type (vertical scaling). Another issue was that all nodes of the cluster were hosted within a single AWS availability zone, so a datacenter outage would take the entire system offline.

The new deployment separates the MongoDB datastore host from the web/app server components, allowing each to be independently scaled.

Significant work went into reworking the app server so it could be hosted in a Docker container. This leverages the auto-scaling features of Amazon Elastic Container Service (ECS), allowing the cluster of app servers to dynamically scale horizontal capacity up or down depending on CloudWatch metrics. The scaling strategy distributes app server instances across three availability zones, mapping to separate AWS datacenters in Sydney; if one datacenter suffers an outage, the app will continue to function.

We currently use simple resource metrics to control scaling; these will require tuning once the application goes live, to account for real-world usage of the services.
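
To make the mechanism concrete, the sketch below shows one way such a policy can be expressed with boto3: a target-tracking policy on the ECS service that adds or removes tasks to hold average CPU near a target. The cluster and service names, capacity bounds and 60% CPU target are illustrative assumptions, not the deployed configuration.

    # Sketch of ECS service auto scaling via Application Auto Scaling (boto3).
    import boto3

    autoscaling = boto3.client("application-autoscaling")
    resource_id = "service/qldtraffic-cluster/qldtraffic-app"  # hypothetical names

    # Register the ECS service's desired task count as a scalable target.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=3,   # at least one task per availability zone
        MaxCapacity=12,
    )

    # Target tracking: ECS scales out/in to hold average CPU near 60%.
    autoscaling.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
            "ScaleOutCooldown": 60,
            "ScaleInCooldown": 120,
        },
    )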

The MongoDB datastore cluster lives in a separate VPC managed by the MongoDB Atlas service, peered with the app server VPC for maximum responsiveness and distributed across availability zones. The current MongoDB Atlas service plan is a fixed three-node replica set, but it can be upgraded to a scaling cluster.
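
For completeness, here is a minimal sketch of how an app server could connect to the Atlas replica set over the peering link using pymongo. The SRV hostname is hypothetical (Atlas supplies the real connection string per cluster), and the read preference shown is one reasonable choice, not necessarily the production setting.

    # Sketch of a replica-set-aware connection to Atlas (hypothetical URI).
    from pymongo import MongoClient

    client = MongoClient(
        "mongodb+srv://cluster0.example.mongodb.net/traffic",
        retryWrites=True,                    # retry a write once on failover
        readPreference="primaryPreferred",   # fall back to a secondary if needed
        serverSelectionTimeoutMS=5000,
    )

    # The driver discovers the three-node topology automatically and
    # re-routes reads and writes when the replica set elects a new primary.
    incidents = client.traffic.incidents
    print(incidents.estimated_document_count())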