This post highlights the state-management pattern used in Dust, a Next.js app.

The problem we’re trying to solve is that the app has two states that need to be kept in sync:

  1. The client-side state, stored in the user’s browser.
  2. The server-side state, generally stored in a database.

The client-side state (1) can be managed with a normal useState or useReducer hook. Or, we could use Redux, MobX, or whatever.

But what about the server-side state (2)? Client and server state are “synchronized” on each page load, when the server-side components fetch whatever data they need to render. But as soon as mutations start to happen, the client and server immediately get out of sync.

We can dispatch calls to the server when client state updates, but then we have two separate mechanisms for state management that need to coordinate somehow.

Also, sometimes we can’t fully update the client state until the server call is complete, e.g. when creating a new resource for which the server has to assign an ID.

A few solutions come to mind here:

  1. One option is to reload the page after every mutation (note that this doesn’t imply a browser refresh, just a magic Next.js page reload). This is inefficient; you have to reload all the page’s data for every mutation, even if it just affects a tiny part of the overall state. But, this is the simplest to implement and has low risk of a state desync issue.
  2. The server could be the source of truth. After dispatching a mutation, the server would have to respond with all the information clients need to update their state. The client UI can’t update until the server mutation is complete, resulting in some loading spinners, but there should never be a state desync issue.
  3. The client could be the source of truth. After the client-side state is updated, some state observer function detects the change and notifies the server of the new data. There are no loading spinners, and client code is as simple as possible — but, there has to be a consistent way to map client-side state to the server, which is tricky except in specific cases (GraphQL). This approach also risks a state desync issue, especially if there are multiple users editing the same data.
  4. We could use a CRDT to perform updates on both client and server and then merge them later. This sounds cool, and if there were a use case with multiple people editing the same data, it feels like the best approach, but it also feels like it has the most overhead to code up. Maybe for a future project, eh?
  5. We could define client and server reducers to independently update the data on the client and the server, and somehow wire them together automatically. This approach allows instant updating on the client and maximally efficient server code (only needs to query for exactly the data that’s needed to do the update), but risks state desync issues, especially with concurrent editors of the same data.

In this post, I’m talking about (5).

Also, before I get too into the weeds, here’s the implementation from my app for reference.

Three weird reducers

At a high level, the weird reducer pattern is an extension of the useReducer hook to support both client and server state.

The rest of this post assumes you’re familiar with the useReducer hook, so if you’re not, I recommend reading the docs, in particular Extracting State Logic into a Reducer.

The main difference is that instead of defining a single reducer for the client state, you define three weird reducers:

  1. The stateReducer accepts the current client state and an action object, and returns an updated client state, just like useReducer.
  2. The effectReducer accepts just an action object, and performs async client-side side effects (e.g. showing toast dialogs).
  3. The serverReducer accepts just an action object, and is expected to execute some async server-side mutation code with Server Actions (or fetch, or any other server updating mechanism).

Why are these reducers “weird”? Well, the so-called effectReducer and serverReducer aren’t really reducers at all, because they don’t accept the state as a first argument. They’re impure effect handlers that we call “reducers” only to signal how they’re meant to be used.

All three reducers are passed into a single hook, like so:

const [state, dispatch] = useClientServerReducer(
  stateReducer,
  effectReducer,
  serverReducer,
  initialState
);

Then you can just dispatch actions using dispatch, and all three reducers will run over the action:

dispatch({ type: 'delete-task', taskId: '123' });
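
For instance, a (hypothetical) task list component can wire a delete button straight to dispatch, using the reducers defined below; initialTasks would be whatever the server-rendered page passed in:

function TaskList({ initialTasks }) {
  const [state, dispatch] = useClientServerReducer(
    stateReducer,
    effectReducer,
    serverReducer,
    { tasks: initialTasks }
  );

  return (
    <ul>
      {state.tasks.map((task) => (
        <li key={task.id}>
          {task.name}{' '}
          <button onClick={() => dispatch({ type: 'delete-task', taskId: task.id })}>
            Delete
          </button>
        </li>
      ))}
    </ul>
  );
}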

Here’s a simple reducer implementation for handling the above delete-task action:

// Pure, client-side reducer -- runs multiple times, maybe.
function stateReducer(state, action) {
  switch (action.type) {
    case 'delete-task':
      // update client state
      return {
        ...state,
        tasks: state.tasks.filter(({ id }) => id !== action.taskId)
      };
  }
  return state;
}

// Impure client-side "reducer" -- runs once
async function effectReducer(action) {
  // nothing to do here (yet)
}

// Impure "server-side" "reducer" -- runs once
async function serverReducer(action) {
  switch (action.type) {
    case 'delete-task':
      // call a server action
      return await deleteTask(action.taskId);
  }
}

A few notes:

  1. The stateReducer is a sync function of the form (oldState, action) => newState, and may be called multiple times, so it must not have side effects.
  2. The effectReducer and serverReducer are async functions of the form (action) => Promise<void | NewAction>, and are only called once.
    • They return a Promise that can optionally resolve with a new action to dispatch when the call is finished.
  3. We have to implement deletion logic twice — once in the stateReducer, and once in the implementation of the deleteTask(taskId) server action (not shown).
    • It’s important that the logic in both places is exactly the same (and also deterministic).
  4. Despite the name, the serverReducer runs on the client, not the server. The name is meant to indicate that it’s expected to only perform server mutations. Client-side side effects should go in the effectReducer.

Error handling

What happens if the client and server get out of sync somehow, or the user loses Internet access, or something? We should add error handling.

The useClientServerReducer hook has a convenient feature that helps us here:

  • If the serverReducer promise rejects, a server-error action is automatically dispatched.
  • If the effectReducer promise rejects, an effect-error action is automatically dispatched.

The server-error and effect-error actions include the error that occurred as well as the originating action that failed.
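
So an automatically dispatched error action looks roughly like this (the exact field names here are my guess at the shape; check the hook implementation for the real ones):

const serverErrorAction = {
  type: 'server-error',
  error: new Error('Failed to fetch'),
  originatingAction: { type: 'delete-task', taskId: '123' },
};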

Since these errors are also “just actions”, we can handle them in any of our three reducers, depending on how we want to respond.

A simple approach:

  1. If the serverReducer rejects, we should notify the user (e.g. with a toast) and refresh the page to resync with server state.
  2. If the effectReducer rejects, we can notify the user but no other action is needed.

Since the response in both cases is a client-side effect, we’ll want to implement these in the effectReducer:

async function effectReducer(action) {
  switch (action.type) {
    case 'server-error':
      // notify user of errors and re-sync with server
      Toast.error('Error saving data to the server!');
      clientRouter.refresh();
      break;
    case 'effect-error':
      // notify user of error -- no resync needed
      Toast.error('Effect error!');
      break;
  }
}

What if the effectReducer fails when handling the effect-error? Can an infinite loop occur?

There’s a short circuit in useClientServerReducer: it won’t dispatch an effect-error action if the originating action that caused the error was itself an effect-error. Thus, infinite loops are prevented.
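
To make the mechanics concrete, here’s a minimal sketch of how a hook like this could be wired up. This is my own simplification (using the same assumed error-action shape as above), not the actual Dust implementation:

import { useCallback, useReducer } from 'react';

function useClientServerReducer(stateReducer, effectReducer, serverReducer, initialState) {
  const [state, rawDispatch] = useReducer(stateReducer, initialState);

  const dispatch = useCallback((action) => {
    // 1. Update client state synchronously; React may run stateReducer more than once.
    rawDispatch(action);

    // 2. Run client-side effects once; a resolved action is re-dispatched.
    effectReducer(action)
      .then((next) => next && dispatch(next))
      .catch((error) => {
        // Short circuit: a failing effect-error handler is not reported again.
        if (action.type !== 'effect-error') {
          dispatch({ type: 'effect-error', error, originatingAction: action });
        }
      });

    // 3. Run the server mutation once; a resolved action is re-dispatched.
    serverReducer(action)
      .then((next) => next && dispatch(next))
      .catch((error) => {
        dispatch({ type: 'server-error', error, originatingAction: action });
      });
  }, [effectReducer, serverReducer]);

  return [state, dispatch];
}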

Action chains

Here’s another interesting case. Suppose we want to create a new task:

dispatch({
  type: 'add-task',
  data: { name: "my new task" }
});

This can be a problem because we don’t know the database ID of the task until we run a server mutation. (This actually applies to any field that’s automatically computed on the server, not just IDs — but IDs are a common example.)

This is where we can take advantage of the ability to return new actions from the serverReducer to update the client with the server-generated ID.

Here’s a simple solution (N.B. error handling omitted):

function stateReducer(state, action) {
  switch (action.type) {
    case 'add-task':
      return {
        ...state,
        isLoading: true
      };
    case 'add-task-finished':
      // update client state
      return {
        ...state,
        tasks: [...state.tasks, action.task],
        isLoading: false
      };
  }
  return state;
}

async function serverReducer(action) {
  switch (action.type) {
    case 'add-task':
      // addTask is a server action
      const newTask = await addTask(action.data);
      // returned object is dispatched as an action
      return { type: 'add-task-finished', task: newTask };
  }
}

Here we use two actions:

  1. The add-task action dispatches a server mutation and updates the client state with a loading spinner (sketched after this list).
  2. The add-task-finished action is dispatched when the server mutation is finished, and updates the client state with the new task.
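
On the UI side, state.isLoading can drive the spinner while the round trip is in flight. This component (and Spinner) is purely hypothetical:

function AddTaskButton({ state, dispatch }) {
  return (
    <>
      <button
        disabled={state.isLoading}
        onClick={() => dispatch({ type: 'add-task', data: { name: 'my new task' } })}
      >
        Add task
      </button>
      {state.isLoading && <Spinner />}
    </>
  );
}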

Simple! Simple?

Temporary IDs

One final interesting pattern here: instead of using a loading spinner, we can optimistically update the client state with a temporary ID.

To use this pattern, we need to generate a temporary ID when initially dispatching the action:

dispatch({
  type: 'add-task',
  data: { name: "my new task" },
  tempId: crypto.randomUUID(),
});

Then, in the stateReducer, we can update the state.tasks immediately, without waiting for the add-task-finished action:

case 'add-task':
  // add task with temporary id
  return {
    ...state,
    tasks: [...state.tasks, {
      ...action.data,
      id: action.tempId,
    }]
  };

When the add-task-finished action comes in, all we have to do is replace the temporary ID:

case 'add-task-finished':
  // update task with server values
  return {
    ...state,
    tasks: state.tasks.map(task => task.id === action.replacingTempId ? {
      ...task,
      ...action.task
    } : task),
  };

Here’s the full example with both stateReducer and serverReducer defined:

function stateReducer(state, action) {
  switch (action.type) {
    case 'add-task':
      // add task with temporary id
      return {
        ...state,
        tasks: [...state.tasks, {
          ...action.data,
          id: action.tempId,
        }]
      };
    case 'add-task-finished':
      // update task with server values
      return {
        ...state,
        tasks: state.tasks.map(task => task.id === action.replacingTempId ? {
          ...task,
          ...action.task
        } : task),
      };
  }
  return state;
}

async function serverReducer(action) {
  switch (action.type) {
    case 'add-task':
      const newTask = await addTask(action.data);
      return {
        type: 'add-task-finished',
        task: newTask,
        replacingTempId: action.tempId
      };
  }
}

This way the user sees an immediate response when adding a task, and assuming there’s no error, the data synchronizes with the server shortly after.

If we combine this temporary ID approach with the error handling in the Error handling section, we can have a reasonably robust, very responsive, client-side app with server-side “pseudo” synchronization.

One thing glossed over in this example is how tasks are ordered.

The stateReducer adds this task to the end of the state.tasks array. If state.tasks is sorted by something other than chronological order of creation, then a desync occurs (i.e. if the user refreshes the page, the task’s position in the list may change).

This highlights the importance of making sure the client and server are updating state in exactly the same way. If the server is returning tasks in a certain order, we need to insert the task in the correct order on the client as well instead of simply doing an Array.push().
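
For example, if the server happens to return tasks sorted by name (an assumption purely for illustration), the client should insert new tasks the same way instead of pushing to the end:

// Keeps the client's ordering consistent with the (assumed) server ordering.
function insertTaskSorted(tasks, newTask) {
  return [...tasks, newTask].sort((a, b) => a.name.localeCompare(b.name));
}

// ...then in the stateReducer:
// tasks: insertTaskSorted(state.tasks, { ...action.data, id: action.tempId })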

Immer

Code like this (from the above example) is a little painful:

// update task with server values
return {
  ...state,
  tasks: state.tasks.map(task => task.id === action.replacingTempId ? {
    ...task,
    ...action.task
  } : task),
};

We need this awkward copying logic because useReducer, like useState, requires you to return new objects whenever their properties change. In other words, the reducers must be pure: they can’t actually mutate the state object; they have to return a new state object if anything changed.

There’s actually a cool little library called Immer (and use-immer) that makes it a lot simpler to write this type of mutation. There’s even a section in the React docs about it.

In my actual Dust app, the stateReducer is, in fact, an Immer reducer. So we could write the above mutation without worrying about producing new objects:

// update task with server values, Immer style
const taskToUpdate = state.tasks.find(({ id }) =>
  id === action.replacingTempId
);
Object.assign(taskToUpdate, action.task);

If we were to rewrite the whole stateReducer from the above example in Immer style, it becomes clearer what’s actually changing:

function stateReducer(draftState, action) {
  switch (action.type) {
    case 'add-task':
      // add task with temporary id
      draftState.tasks.push({
        ...action.data,
        id: action.tempId
      });
      break;
    case 'add-task-finished':
      // update task with server values
      const taskToUpdate = draftState.tasks.find(({ id }) =>
        id === action.replacingTempId
      );
      Object.assign(taskToUpdate, action.task);
  }
}

Note there is no need to return the state for an Immer-style reducer, unless you want to change the identity of the base draftState object itself. See the use-immer docs for more details.

Fin

Cool, we’ve covered everything! This wasn’t the most rigorous explanation, but hopefully it sparked some ideas for your next app.

If you are curious for more details (e.g. how it’s typed, how it’s implemented), see the useClientServerReducer hook implementation in the Dust codebase. It’s less than 100 lines.