Latency 2023 roundup
Brett Adams
Some of us at Adapptor had the pleasure of attending Latency this year, and it was a great conference. Below we highlight the talks that caught our attention.
Keynote – Peter Sbarski
The conference kicked off with an entertaining Welcome to Country featuring a lesson in kangaroo dancing. The following keynote speech was delivered by Peter Sbarski, an AWS Serverless Hero. He advocated for Lambdas (serverless functions on AWS) to build backend solutions, an approach he used at A Cloud Guru to scale rapidly as their customer base took off.
Peter related a fascinating story about a controversial blog post in which the Prime Video tech team described switching from a serverless architecture to a monolith, reducing costs by 90%. The serverless community was baffled as to why Amazon, the leader in the serverless space, would condone an article from its own team that dissed one of its flagship products. Many tweets and blog posts debated the merits of the article. Some pointed out that the proposed solution wasn't exactly a monolith, since it was a containerised app that could be scaled horizontally. Others noted that because Prime Video was chopping up real-time video streams for analysis, Lambdas were not the best choice in the first place.
The moral of the Prime Video story was, as always, to use the right tool for the job, and Peter's argument was that AWS Lambdas or Step Functions are often the right tool for your backend job, at least in the first instance. A serverless prototype has the best chance of scaling gracefully when load ramps up. If you then discover cost or performance issues, you can re-architect as necessary, moving or combining functions into containers.
Peter also shared tools he's loved over the years, the most interesting of which was Serverless Stack (SST). This tool solves the problem that Lambda functions must be deployed before they can be run and tested. It allows developers to debug Lambda functions locally by deploying a stub to the cloud that proxies requests and responses back to your local development environment, allowing rapid testing and iteration while the function is called the same way it will be called when deployed.
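That workflow leaves the handler code itself untouched. As a rough sketch of the kind of function such tooling lets you iterate on locally (a hypothetical handler with invented names, not code from the talk):

```typescript
// Minimal sketch of a Lambda-style handler (hypothetical example).
// With live-development tooling like SST, this same code runs on your
// machine while a deployed stub forwards real invocations to it.

interface ApiEvent {
  queryStringParameters?: Record<string, string>;
}

interface ApiResult {
  statusCode: number;
  body: string;
}

const handler = async (event: ApiEvent): Promise<ApiResult> => {
  // Read an optional query parameter, falling back to a default.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

Because the proxied invocation carries the same event shape as a real API Gateway call, the local debugging session exercises the handler exactly as production would.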
Talk #2 – Unlocking Your APIs with GraphQL – Dylan Johnston
The origin of GraphQL can be traced to Facebook’s need to improve their mobile app user experience. They initially wanted to use their existing API, which had been designed for the web application. But they soon realised it was a poor fit for the mobile app, particularly the newsfeed. This realisation led to the creation of GraphQL, a new query language for APIs. To illustrate the limitations of the existing API, the famous “vending machine” example is often used, highlighting the inefficiency of fetching excessive amounts of data or making multiple requests. GraphQL was designed to enable a more efficient and flexible approach to fetching data.
The speaker, Dylan from Qustodio, described problems that can arise when using a REST API, including the N+1 query problem, over- and under-fetching, and multiple round trips. REST works well when you have a single client fetching data; add a second and third client, and API bloat threatens.
Dylan provided a simple example. A single request fetches a hero, and all is well.
Client 1 → GET /hero
But soon Client 2 enters the fray, and its needs aren't identical: Client 2 would also like to fetch the hero's friends.
Client 2 → GET /hero?includeFriends=true
Add Client 3, which additionally would like spaceships, and you can see where this is going.
Client 3 → GET /hero?includeFriends=true&includeSpaceships=true
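The server-side effect of those flags can be sketched in TypeScript (a hypothetical handler, not Dylan's code): each new client adds another conditional to the endpoint.

```typescript
// Hypothetical sketch of the /hero endpoint as client-specific flags pile up.
// Every new client adds another branch, and the endpoint slowly bloats.

interface Hero {
  name: string;
  friends?: string[];
  spaceships?: string[];
}

function getHero(query: {
  includeFriends?: boolean;
  includeSpaceships?: boolean;
}): Hero {
  const hero: Hero = { name: "Leia" };
  if (query.includeFriends) {
    hero.friends = ["Luke", "Han"]; // extra lookup added for Client 2
  }
  if (query.includeSpaceships) {
    hero.spaceships = ["Millennium Falcon"]; // extra lookup added for Client 3
  }
  return hero;
}
```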
As the number of clients grew, the at-first simple REST API grew in complexity, becoming hard to maintain and requiring constant updates and new endpoints, much like the problems facing Facebook engineers attempting to use the newsfeed API from the mobile app. This is where GraphQL came in. It solved over- and under-fetching by grabbing precisely the data required by each client. It also helped unify their microservices by defining queries against a single schema. The result lowered server load by reducing the number of calls needed to fetch all the data each client needed.
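By contrast, a GraphQL client states exactly the shape of the data it wants against a single shared schema. A hypothetical sketch in the style of the hero example, covering all three clients' needs:

```graphql
# Hypothetical schema serving all three clients
type Hero {
  name: String!
  friends: [Hero!]!
  spaceships: [Spaceship!]!
}

type Spaceship {
  name: String!
}

type Query {
  hero: Hero
}

# Client 3 asks for precisely the fields it needs, in one request;
# Clients 1 and 2 simply omit the fields they don't want.
query {
  hero {
    name
    friends { name }
    spaceships { name }
  }
}
```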
GraphQL is not without drawbacks, and understanding when you should, or more importantly should not, use it provides food for thought. The speaker described the steep learning curve of implementing GraphQL: a deep understanding of all the data and its relationships is needed in order to define the schema. For a smaller application that is the only client of an API, a RESTful solution is perfectly fine. But for larger applications, or a microservice backend that has multiple clients all trying to grab data from a single source, GraphQL might be a good solution.
Talk #3 – Building with Generative AI: A Bedtime Story – Brian Foody
AI is currently the hot buzzword. Is it a job killer, white magic that will solve the world's problems, or simply another IT fad? In this talk, long-time Adapptor friend Brian Foody demystified some of the magic around what large language models such as ChatGPT do, and what kinds of products developers can create with them.
Brian and his team have produced a web app called Bedtime Story. The app uses ChatGPT to produce custom children's stories based on very simple prompts. The "magic" turns out to lie in the construction of the prompt sent to ChatGPT, together with the presentation. Using a streaming response style means it looks like the computer is writing to you on the fly. Magic!
Originally, Brian’s team used a simple prompt to produce a story and another AI service, Midjourney, to generate an appropriate image for the story. They have since evolved this to allow the user to take part in a “choose your own adventure” style story. This requires parsing the response from ChatGPT into the story content and presenting the options. These options are then used to trigger a subsequent call to ChatGPT.
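The pattern Brian described, building a prompt and then parsing the response into story text plus options, can be sketched as follows. The prompt wording and the OPTION: delimiter here are entirely hypothetical, not the Bedtime Story implementation:

```typescript
// Hypothetical sketch of the prompt-plus-parsing pattern from the talk.
// The prompt text, delimiter, and option format are invented for illustration.

function buildStoryPrompt(childName: string, theme: string): string {
  return (
    `Write one page of a children's bedtime story about ${theme} ` +
    `starring ${childName}. End with exactly three choices for what ` +
    `happens next, each on its own line starting with "OPTION:".`
  );
}

// Split a model response into the story text and the follow-up choices,
// which the app would then render as "choose your own adventure" buttons.
function parseStoryResponse(response: string): {
  story: string;
  options: string[];
} {
  const lines = response.split("\n");
  const options = lines
    .filter((line) => line.startsWith("OPTION:"))
    .map((line) => line.slice("OPTION:".length).trim());
  const story = lines
    .filter((line) => !line.startsWith("OPTION:"))
    .join("\n")
    .trim();
  return { story, options };
}
```

Each selected option would then be folded into the next prompt, giving the model the context it needs to continue the same story.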
Brian discussed the evolution of the product: he got to market quickly using a toolset familiar to him, namely AWS, then iterated on different platforms using the team's learnings. An important lesson for small startups: it's more valuable to be first in the field than the best. You can always improve your product.
His talk then shifted to another AI-driven product: chatbots. Specifically, chatbots that can help parents with small children on nutritional issues, meal plans, and the like. Similar to Bedtime Story, Brian is again using crafted prompts to create a friendly "personality", based on a crafted biography, that responds to queries in a style appropriate to the topic, which promotes trust in the end user. Along with the personality, he's using Midjourney to generate avatars for these characters.
Again, the magic lies in the sense that the user is talking to a character who looks and feels like someone they expect to be able to answer their questions, when in fact, under the hood, it's ChatGPT primed with a context that produces responses in a certain style.
Chatbots will be huge, whether the use-case be customer support, research assistants, healthcare advocates, or just fun.
Talk #4 – await Presentation() – Aiden Ziegelaar
This talk explored the inner workings of the Node.js event loop, diving into the details of libUV, a core component of the Node.js ecosystem. Aiden took a deep look at how Node.js handles asynchronous operations, and emphasised the importance of understanding these details in becoming an expert in Node.js asynchronous programming.
libUV is a multi-platform C library that provides the asynchronous event-driven programming framework for Node.js. It abstracts the underlying operating system interfaces and provides consistent APIs for handling various I/O operations, timers, child processes, and more. Its event loop is the core mechanism that allows Node.js to perform non-blocking I/O operations efficiently, and is responsible for handling and dispatching events such as I/O events, timers, and callbacks.
Besides the event loop, it also provides an asynchronous I/O model that allows apps to perform I/O operations without blocking the execution of other tasks:
Timer functionality, enabling the scheduling of callbacks to be executed after a certain period or at regular intervals
A thread pool for executing blocking operations asynchronously, which helps offload blocking tasks, such as file I/O or cryptographic operations, to separate threads
APIs to create and manage child processes
Error handling mechanisms to propagate and report errors that occur during I/O operations or other asynchronous tasks
Aiden described the event loop in particular, listing the phases of each loop iteration: update loop time, due timers, pending callbacks, idle handles, prepare handles, polling, check handles, and close callbacks.
Software engineering relies on abstracting away the crazily complex inner workings of our digital platforms. But, as mobile app developers, knowing when to peek beneath the interfaces can mean the difference between a good solution and a great one.
All up, this year’s Latency provided plenty of food for thought. It highlighted trends in the tech and product landscape, confirmed some of our choices, and put new ones on our radar. Nice work Latency 👏