mirai is an async evaluation framework for R that enables parallel processing through background daemons (persistent worker processes). It provides a simple interface where mirai() evaluates expressions asynchronously and daemons() manages the worker pool.
The package uses a hub architecture where daemons connect to a host, allowing dynamic scaling from local machines to HPC clusters and cloud platforms. It features microsecond-level performance through NNG networking, supports custom serialization for torch tensors and Arrow/Polars data formats, and includes distributed tracing via OpenTelemetry. mirai serves as the parallel backend for major R packages including Shiny, purrr, tidymodels, and targets.
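The two functions named above can be combined in a few lines. A minimal sketch (assuming mirai is installed; the `m[]` collection shorthand is available in recent versions of the package):

```r
library(mirai)

daemons(4)                 # launch four background daemons (persistent workers)

m <- mirai({
  Sys.sleep(1)             # stand-in for a long-running computation
  1 + 1
})

m$data                     # 'unresolved' while the task is still running
result <- m[]              # waits for and collects the result
daemons(0)                 # shut down the daemon pool

result
```

Here `daemons()` manages the worker pool while `mirai()` returns immediately with a 'mirai' object, so the main R session stays responsive until the result is collected.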
Contributors
Resources featuring mirai
Purrrfectly parallel, purrrfectly distributed (Charlie Gao, Posit) | posit::conf(2025)
Abstract:
purrr is a powerful functional programming toolkit that has long been a cornerstone of the tidyverse. In 2025, it receives a modernization that lets you harness the power of all the computing cores on your machine, dramatically speeding up map operations.
More excitingly, it opens the door to distributed computing. Through the mirai framework used by purrr, this becomes embarrassingly simple. Whether you work in a small business or a large one, any spare server on your network can now be put to good use in a few straightforward steps.
Let us show you how distributed computing is no longer the preserve of those with access to high-performance compute clusters.
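The parallel map described in this talk can be sketched as follows, assuming purrr 1.1.0 or later, which provides `in_parallel()` for dispatching map calls to mirai daemons:

```r
library(purrr)
library(mirai)

# Daemons may be local processes, or remote servers across the network
daemons(4)

# Each call to the crated function runs on a background daemon
results <- map(1:8, in_parallel(\(x) x^2))

daemons(0)

results
```

The function passed to `in_parallel()` must be self-contained, since it is serialized and shipped to the daemons for evaluation.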
Materials - https://shikokuchuo-posit2025.share.connect.posit.cloud/

Lift Off! Building REST APIs that Fly (Joe Kirincic, RESTORE-Skills) | posit::conf(2025)
Abstract:
Picture the scene: you’ve successfully deployed your ML model as a plumber API into production. Your company loves it! One team uses the API’s predictions as an input to their own ML model. Another team displays the predictions in an internal Shiny app. But once adoption reaches a certain point, your API’s performance starts to degrade. What can you do to help your service maintain high performance in the face of high demand? In this talk, we’ll show some strategies for taking your API performance to the next level. Using two R packages, {yyjsonr} and {mirai}, we can augment our API with faster JSON processing and better responsiveness through asynchronous computing, allowing our services to do great things at scale at no additional cost.
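The asynchronous pattern from this talk can be sketched as a plumber endpoint that returns a mirai, which is promise-compatible so the API process is not blocked while the work runs (a hypothetical endpoint; the route name and payload are illustrative):

```r
# plumber.R
library(mirai)
library(promises)

daemons(4)  # start the worker pool once, at app launch

#* Score a request without blocking the main plumber process
#* @get /predict
function() {
  mirai({
    Sys.sleep(2)               # stand-in for slow model inference
    list(prediction = 42)
  })
}
```

Because mirai objects can be used as promises, plumber serves other requests while the daemon evaluates the expression, which is the "better responsiveness" the abstract refers to.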
{mirai} and {crew}: next-generation async to supercharge {promises}, Plumber, Shiny, and {targets}
{mirai} is a minimalist, futuristic, and reliable way to parallelise computations – either on the local machine, or across the network. It combines the latest scheduling technologies with fast, secure connection types. With built-in integration with {promises}, {mirai} provides a simple and efficient asynchronous back-end for Shiny and Plumber apps. The {crew} package extends {mirai} to batch computing environments for massively parallel statistical pipelines, e.g. Bayesian modeling, simulations, and machine learning. It consolidates tasks in a central {R6} controller, auto-scales workers, and helps users create plug-ins for platforms like SLURM and AWS Batch. It is the new workhorse powering high-performance computing in {targets}.
Talk by Charlie Gao and Will Landau
Slides: https://wlandau.github.io/posit2024
GitHub Repo: https://github.com/wlandau/posit2024
mirai: https://shikokuchuo.net/mirai/
crew: https://wlandau.github.io/crew/
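The central-controller workflow described in the abstract can be sketched with crew's local controller (a minimal sketch; the task name and worker count are illustrative):

```r
library(crew)

# A local controller whose workers auto-scale up to 4 processes
controller <- crew_controller_local(workers = 4)
controller$start()

# Push a task to the controller; it is dispatched to a worker via mirai
controller$push(name = "example", command = Sys.getpid())

controller$wait()                # block until the task completes
result <- controller$pop()       # retrieve the finished task's results

controller$terminate()           # shut down workers
```

On cluster or cloud platforms, the same controller interface is backed by plug-ins (e.g. for SLURM or AWS Batch), which is how {targets} scales pipelines without changing user code.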
