I'm excited to announce the initial alpha release of fate, a modern data client for React & tRPC. fate combines view composition, normalized caching, data masking, Async React features, and tRPC's type safety.
fate is designed to make data fetching and state management in React applications more composable, declarative, and predictable. The framework has a minimal API, no DSL, and no magic—it's just JavaScript.
GraphQL and Relay introduced several novel ideas: fragments co‑located with components, a normalized cache keyed by global identifiers, and a compiler that hoists fragments into a single network request. These innovations made it possible to build large applications where data requirements are modular and self‑contained.
Nakazawa Tech builds apps and games primarily with GraphQL and Relay. We advocate for these technologies in talks and provide templates (server, client) to help developers get started quickly.
However, GraphQL comes with its own type system and query language. If you are already using tRPC or another type‑safe RPC framework, it's a significant investment to adopt and implement GraphQL on the backend. This investment often prevents teams from adopting Relay on the frontend.
Many React data frameworks lack Relay's ergonomics, especially fragment composition, co-located data requirements, predictable caching, and deep integration with modern React features. Optimistic updates usually require manual key management and imperative cache updates, which is error-prone and tedious.
fate takes the great ideas from Relay and puts them on top of tRPC. You get the best of both worlds: type safety between the client and server, and GraphQL-like ergonomics for data fetching. Using fate usually looks like this:
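For example, a minimal UserCard component (the same one the larger example below composes) could look like the following sketch; the User type and its name field are assumptions for illustration:

```tsx
// UserCard.tsx: a minimal fate component. The `name` field is illustrative.
import type { User } from '@org/server/views.ts';
import { useView, view, ViewRef } from 'react-fate';

// Declare the fields this component reads, co-located with the component.
export const UserView = view<User>()({ id: true, name: true });

export const UserCard = ({ user }: { user: ViewRef<'User'> }) => {
  const { name } = useView(UserView, user);
  return <strong>{name}</strong>;
};
```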
I was part of the original Relay and React teams at Facebook in 2013, but I didn't build Relay. While I worked on deploying the first server-side rendering engine for React and migrating Relay from React mixins to higher-order components through codemods, I honestly didn't fully grasp how far ahead everyone else on the Relay team was back then.
In the following years, Relay became the default data framework at Facebook. It was such an elegant way to handle client-side data that I had assumed it would gain widespread adoption. That didn't happen, and its backend companion GraphQL has become divisive in the web ecosystem.
Fetch-based data clients lead to boilerplate that is repetitive and okay, but not great. The real problems start when data changes. Mutations tend to involve complex logic, with detailed patches to the local cache and rollback handling. For example:
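Here is a sketch with TanStack Query, following its documented optimistic-update pattern; `api.likePost` and the query keys are hypothetical:

```tsx
import { useMutation, useQueryClient } from '@tanstack/react-query';

// Hypothetical API client; stands in for your app's fetch layer.
declare const api: { likePost: (id: string) => Promise<unknown> };

export const useLikePost = (postId: string) => {
  const queryClient = useQueryClient();

  return useMutation({
    mutationFn: () => api.likePost(postId),
    onMutate: async () => {
      // Prevent in-flight refetches from overwriting the optimistic write.
      await queryClient.cancelQueries({ queryKey: ['post', postId] });
      const previous = queryClient.getQueryData(['post', postId]);
      // Manually patch this cache entry; any feed or list that embeds the
      // same post needs its own patch.
      queryClient.setQueryData(
        ['post', postId],
        (old: { likes: number } | undefined) =>
          old && { ...old, likes: old.likes + 1 },
      );
      return { previous };
    },
    onError: (_error, _variables, context) => {
      // Roll back by hand on failure.
      queryClient.setQueryData(['post', postId], context?.previous);
    },
    onSettled: () => {
      // Defensive refetch to restore consistency.
      queryClient.invalidateQueries({ queryKey: ['post', postId] });
    },
  });
};
```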
When your data client is an abstraction over fetch, keeping client state consistent gets hard quickly. Correctly handling mutations often requires knowing every place in your application that might fetch the same data. That often leads to defensive refetching and waterfalls down the component tree. Component trees frequently look like this:
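An illustrative sketch (component names and query keys are invented):

```
<App>            → useQuery(['viewer'])
  <Feed>         → useQuery(['posts'])      // waits for viewer
    <PostCard>   → useQuery(['post', id])   // waits for posts
      <UserCard> → useQuery(['user', id])   // waits for post
```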
To be clear: These libraries are great at fetching data. I know better patterns are available in most of these libraries, and advanced developers can avoid many of the downsides. Sync engines address these problems, but they're challenging to adopt and also come with trade-offs.
Still, it's too easy to get something wrong. Codebases become brittle and hard to maintain. Looking ahead to a world where AI writes more and more of our code and gravitates towards simple, idiomatic APIs, the problem is that request-centric fetch APIs exist at all.
I did not want to compromise on the key insights from Relay: a normalized cache, declarative data dependencies, and view co-location. At around the same time, I watched Ricky Hanlon's two-part React Conf talk about Async React and got excited to start building.
When fetch-based APIs cache data based on requests, people think about when to fetch data, and requests happen at every level of the component tree. This leads to boilerplate, complexity, and inconsistency. Instead, fate caches data by objects, shifts thinking to what data is required, and composes data requirements up to a single request at the root.
A typical component tree in a React application using fate might look like this:
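For example (FeedView is hypothetical; PostView and UserView appear in the code below):

```
<App>            → composes all views into a single request at the root
  <Feed>         → FeedView
    <PostCard>   → PostView (selects author: UserView)
      <UserCard> → UserView
```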
Let me show you a basic fate code example that declares its data requirements co-located with a component. fate requires you to explicitly "select" each field you plan to use in your components; together, those selections form a "view" into your data:
```tsx
import type { Post } from '@org/server/views.ts';
import { UserCard, UserView } from './UserCard.tsx';
import { useView, view, ViewRef } from 'react-fate';
// `Card` is assumed to come from your app's UI components.
import { Card } from './ui.tsx';

export const PostView = view<Post>()({
  author: UserView,
  content: true,
  id: true,
  title: true,
});

export const PostCard = ({ post: postRef }: { post: ViewRef<'Post'> }) => {
  const post = useView(PostView, postRef);

  return (
    <Card>
      <h2>{post.title}</h2>
      <p>{post.content}</p>
      <UserCard user={post.author} />
    </Card>
  );
};
```
A ViewRef is a reference to a concrete object of a specific type, for example a Post with id 7. It contains the object's unique ID, its type name, and some fate-specific metadata.
fate creates and manages these references for you, and you can pass them around your components as needed to resolve them against their views.
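Conceptually, a ViewRef looks something like this (a sketch only; the actual fields are internal to fate and not part of its public API):

```tsx
// Conceptual shape only; the real metadata is internal to fate.
type ViewRefShape<TypeName extends string> = {
  __typename: TypeName; // e.g. 'Post'
  id: string;           // e.g. '7'
  // ...plus fate-specific metadata for resolving views against the cache
};
```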
fate does not provide hooks for mutations like traditional data fetching libraries do. Instead, all tRPC mutations are exposed as actions for use with useActionState and React Actions. They support optimistic updates out of the box.
A LikeButton component using fate Actions and an async component library might look like this:
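Here is a sketch, assuming the tRPC likePost mutation is exposed as a fate action; the actions.ts import path, the action's name, and the plain form markup are stand-ins, while the view API matches the example above:

```tsx
import { useActionState } from 'react';
import { useView, view, ViewRef } from 'react-fate';
import type { Post } from '@org/server/views.ts';
// Hypothetical: wherever your app exposes fate actions for tRPC mutations.
import { likePost } from './actions.ts';

// Select only the fields this component reads.
const LikeButtonView = view<Post>()({ id: true, likes: true });

export const LikeButton = ({ post: postRef }: { post: ViewRef<'Post'> }) => {
  const post = useView(LikeButtonView, postRef);

  // React's useActionState drives the pending state; fate applies the
  // optimistic update and rolls it back automatically if the action fails.
  const [, formAction, isPending] = useActionState(async () => {
    await likePost({ id: post.id });
    return null;
  }, null);

  return (
    <form action={formAction}>
      <button disabled={isPending}>♥ {post.likes}</button>
    </form>
  );
};
```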
When this action is called, fate automatically updates all views that depend on the likes field of the particular Post object. It doesn't re-render components that didn't select that field. There's no need to manually patch or invalidate cache entries. If the action fails, fate rolls back the optimistic update automatically and re-renders all affected components.
All of the above works because fate has a normalized data cache under the hood: objects are stored by their ID and type name (__typename, e.g. Post or User). It also relies on a tRPC backend that conforms to fate's requirements by exposing byId and list queries for each data type.
You can adopt fate incrementally in an existing tRPC codebase without changing your existing schema by adding these queries alongside your existing procedures.
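A sketch of what such procedures could look like in a tRPC router with Prisma (the router layout and model names are illustrative; the exact shape fate expects may differ):

```ts
import { initTRPC } from '@trpc/server';
import { PrismaClient } from '@prisma/client';
import { z } from 'zod';

const t = initTRPC.create();
const db = new PrismaClient();

// Illustrative: one `byId` and one `list` query for the Post type, so that
// fate can fetch and normalize objects by ID and type name.
export const postRouter = t.router({
  byId: t.procedure
    .input(z.object({ id: z.string() }))
    .query(({ input }) => db.post.findUnique({ where: { id: input.id } })),
  list: t.procedure.query(() => db.post.findMany({ take: 20 })),
});
```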
With these code examples, we've covered almost the entire client API surface of fate. As a result, the mental model of using fate is dramatically simpler than the status quo. fate's API is a joy to use and requires less code, less boilerplate, and less manual state management.
It's this clarity, together with the reduced API surface, that helps humans and AI write better code.
fate-template comes with a simple tRPC backend and a React frontend using fate. It features modern tools to deliver an incredibly fast development experience. Follow its README.md to get started.
fate is not complete yet. The library lacks core features such as garbage collection and a compiler that extracts view definitions statically ahead of time, and the backend still requires too much boilerplate. The current implementation of fate is not tied to tRPC or Prisma; those are just the integrations we are starting with. We welcome contributions and ideas to improve fate. Here are some features we'd like to add:
- Support for Drizzle
- Support for backends other than tRPC
- Persistent storage for offline support
- Garbage collection for the cache
- Better code generation and less type repetition
- Live views and real-time updates via useLiveView and SSE
NOTE
80% of fate's code was written by OpenAI's Codex – four versions per task, carefully curated by a human. The remaining 20% was written by @cnakazawa. You get to decide which parts are the good ones! The docs were 100% written by a human.