OAuth Patterns

Advanced OAuth implementation patterns for atproto applications

This page covers advanced OAuth implementation details that most developers won't need to handle directly. If you're using one of the reference SDKs, these details are handled automatically.

For getting started with OAuth, see About OAuth. For understanding permission scopes, see Permission Sets. For the full specification, see the OAuth spec.

How is OAuth Different in AT Protocol?

Atproto specifies a particular "profile" of the OAuth standards, using OAuth 2.1 as the foundation.

There are a few details that might catch you off guard if you're accustomed to other OAuth systems.

  • Atproto is distributed: Usually, when an app has a "sign in with..." button, it provides a choice of which authorities it allows to authenticate users (usually one of a few big corporations). With atproto OAuth, the app has no prior relationship with the authentication provider: a user's PDS. This is also the reason why atproto OAuth is not compatible with OIDC, which requires a pre-established relationship.

  • Migration: Atproto users can migrate their accounts between servers (PDSes) over time. To facilitate this, atproto has a flexible identity layer: usernames (handles) resolve to a stable user ID (DID), which in turn can be resolved to locate the user's PDS. When a user logs in to an app, the OAuth client dynamically resolves these relationships.

  • Client IDs: In other OAuth ecosystems, client apps often need to pre-register with the authorization server. This is not viable in a decentralized system (with many clients and many servers). In atproto, the client ID is a URL, which is fetched at auth time to determine OAuth client settings like redirect URIs and allowed scopes. See Client ID Metadata Documents.

Understanding DPoP

DPoP (Demonstrating Proof of Possession) cryptographically binds OAuth tokens to a key held by the client. Even if an attacker intercepts your access token, they can't use it without your private key. This is especially important for confidential clients with long-lived sessions (up to 2 years).

DPoP is mandatory in atproto OAuth. If you're using an SDK, it's handled automatically. If you're implementing OAuth manually (e.g., for a sidecar that writes records), the key things to know are:

  • Generate an ES256 key pair once per session and reuse it for all requests
  • Every request needs a fresh DPoP proof JWT in the DPoP header, with Authorization: DPoP {access_token}
  • Track nonces separately for the authorization server and PDS — they may differ
  • When you get a 401 with error="use_dpop_nonce", retry with the nonce from the DPoP-Nonce response header
  • For PDS requests, include an ath claim containing the SHA-256 hash of the access token
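The steps above can be sketched with Node's built-in crypto module. This is a minimal illustration, not a production implementation; the function and variable names are illustrative, not part of any SDK:

```typescript
import { createHash, generateKeyPairSync, randomUUID, sign } from "node:crypto";

// One ES256 key pair per session, reused for every DPoP proof
const { publicKey, privateKey } = generateKeyPairSync("ec", { namedCurve: "P-256" });

const b64url = (input: Buffer | string): string =>
  Buffer.from(input).toString("base64url");

// Build a fresh DPoP proof JWT for a single request
function dpopProof(method: string, url: string, nonce?: string, accessToken?: string): string {
  const jwk = publicKey.export({ format: "jwk" }); // public key goes in the JWT header
  const header = b64url(JSON.stringify({ alg: "ES256", typ: "dpop+jwt", jwk }));
  const claims: Record<string, unknown> = {
    jti: randomUUID(),                  // unique per proof
    htm: method,                        // HTTP method of the request
    htu: url,                           // request URL, without query or fragment
    iat: Math.floor(Date.now() / 1000), // issued-at, in seconds
  };
  if (nonce) claims.nonce = nonce;      // server-issued nonce, when one is known
  if (accessToken) {
    // PDS requests bind the proof to the access token via its SHA-256 hash
    claims.ath = createHash("sha256").update(accessToken).digest("base64url");
  }
  const signingInput = `${header}.${b64url(JSON.stringify(claims))}`;
  const signature = sign("sha256", Buffer.from(signingInput), {
    key: privateKey,
    dsaEncoding: "ieee-p1363", // JOSE ES256 wants raw R||S, not DER
  });
  return `${signingInput}.${b64url(signature)}`;
}
```

The resulting JWT goes in the DPoP header of the request, alongside Authorization: DPoP {access_token}.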

The cookbook/python-oauth-web-app contains a complete manual DPoP implementation in atproto_oauth.py. For the full specification, see DPoP in the OAuth spec.
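The nonce dance can be wrapped in a small retry helper. This simplified sketch (all names are hypothetical) retries once on any 401 that carries a DPoP-Nonce header; a production client should also check for error="use_dpop_nonce" in the WWW-Authenticate header:

```typescript
// doFetch rebuilds the DPoP proof for each attempt, embedding the given nonce
async function dpopFetch(
  doFetch: (nonce?: string) => Promise<Response>,
  nonces: Map<string, string>, // nonce cache, keyed per server origin
  origin: string,
): Promise<Response> {
  let res = await doFetch(nonces.get(origin));
  const fresh = res.headers.get("DPoP-Nonce");
  if (fresh) nonces.set(origin, fresh);
  if (res.status === 401 && fresh) {
    // The server rejected the proof and supplied a nonce: retry once with it
    res = await doFetch(fresh);
    const next = res.headers.get("DPoP-Nonce");
    if (next) nonces.set(origin, next);
  }
  return res;
}
```

Keeping the cache keyed by origin is what lets the authorization server and the PDS carry different nonces.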

PKCE and PAR

Two other security mechanisms are mandatory in atproto OAuth:

  • PKCE (Proof Key for Code Exchange) prevents authorization code interception attacks. The client generates a random secret for each authorization request and proves possession of it when exchanging the code for tokens.

  • PAR (Pushed Authorization Requests) sends authorization parameters directly to the server before redirecting the user, reducing URL exposure and enabling better request validation.

Like DPoP, both are handled automatically by the SDKs. If implementing OAuth manually, see the OAuth spec for requirements.
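For reference, the PKCE S256 derivation is small enough to show inline. A minimal sketch using Node's crypto module (helper name is illustrative):

```typescript
import { createHash, randomBytes } from "node:crypto";

// PKCE S256: a fresh secret per authorization request, plus its derived challenge
function makePkcePair(): { verifier: string; challenge: string } {
  const verifier = randomBytes(32).toString("base64url"); // kept secret, sent with the token exchange
  const challenge = createHash("sha256").update(verifier).digest("base64url"); // sent in the (pushed) authorization request
  return { verifier, challenge };
}
```

The server stores the challenge and later verifies that the verifier presented at token exchange hashes to it.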

Token Refresh

OAuth access tokens in atproto have a limited lifetime. When a token expires, you'll need to use the refresh token to obtain a new access token. The SDKs handle this automatically, but if you're implementing OAuth manually:

  • Store the refresh token securely alongside the access token
  • When an API call fails with a 401, attempt a token refresh before retrying
  • The new access token will be bound to the same DPoP key pair
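The refresh-on-401 pattern can be sketched as a generic wrapper. All names here are hypothetical; the call and refresh functions stand in for your HTTP layer and token-endpoint request (which must use the same DPoP key):

```typescript
type Session = { accessToken: string; refreshToken: string };

// Run an API call; on 401, refresh once and retry with the new token
async function withRefresh<T>(
  session: Session,
  call: (accessToken: string) => Promise<{ status: number; body?: T }>,
  refresh: (refreshToken: string) => Promise<Session>,
): Promise<{ status: number; body?: T }> {
  const first = await call(session.accessToken);
  if (first.status !== 401) return first;
  const renewed = await refresh(session.refreshToken);
  session.accessToken = renewed.accessToken;   // persist both rotated tokens:
  session.refreshToken = renewed.refreshToken; // refresh responses typically rotate the refresh token
  return call(session.accessToken);
}
```

Persisting the rotated refresh token immediately matters: losing it after a rotation strands the session.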

Session Storage

For web applications using the BFF pattern, you'll need to associate OAuth sessions with user sessions. Common approaches include:

  • Cookie-based sessions: Store the OAuth tokens server-side, keyed by a session ID stored in an HTTP-only cookie
  • Database-backed sessions: Store tokens in a database with the user's account, enabling multi-device sessions
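The cookie-based approach reduces to a lookup keyed by a random session ID. A minimal in-memory sketch (use a real database or session store in production; all names are illustrative):

```typescript
import { randomBytes } from "node:crypto";

type OAuthSession = { did: string; accessToken: string; refreshToken: string };

// Tokens stay server-side; only the opaque session ID goes in the HTTP-only cookie
const sessions = new Map<string, OAuthSession>();

function createSession(data: OAuthSession): string {
  const sid = randomBytes(16).toString("base64url"); // value for the cookie
  sessions.set(sid, data);
  return sid;
}

function getSession(sid: string): OAuthSession | undefined {
  return sessions.get(sid);
}
```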

See Types of App for architectural patterns.

Progressive scope requests

An application does not always need all of its permissions at sign-in. The scope parameter in the client metadata document declares the maximum set of scopes the application may ever request, but individual authorization flows can request a subset. This makes it possible to start with minimal scopes and request more later, as the user opts into features that need them.

Selecting scopes before authorization

For client-side applications where different users have different needs, one approach is to let users choose which permissions to grant before the OAuth redirect. The application builds the scope string from the user's selections and passes it to authorize():

```typescript
const BASE_SCOPES = ["atproto"];

// Request specific record collection access instead of broad permissions
const OPTIONAL_SCOPES = [
  // Access to your app's Lexicon records
  { id: "posts", scope: "repo:com.example.post", label: "Create posts" },
  { id: "likes", scope: "repo:com.example.like", label: "Like content" },
  // Or access to Bluesky profile data
  { id: "profile", scope: "rpc:app.bsky.actor.getProfile?aud=did:web:api.bsky.app#bsky_appview", label: "Read profile" },
  { id: "blob", scope: "blob:*/*", label: "Upload images/video" },
];

function buildScopeString(selected: Set<string>): string {
  const granular = OPTIONAL_SCOPES
    .filter((s) => selected.has(s.id))
    .map((s) => s.scope);
  return [...BASE_SCOPES, ...granular].join(" ");
}

// Pass the built scope string to the authorization flow
const url = await oauthClient.authorize(handle, {
  scope: buildScopeString(userSelections),
});
```

Note that repo:com.example.post grants full create/update/delete access to that collection. You can restrict to specific actions with query parameters: repo:com.example.post?action=create&action=update. See the permission spec for full syntax.

After the callback, the application tracks which scopes were granted for the session, so that features can be shown or hidden accordingly.
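One way to track this is to parse the scope string returned with the tokens into a set and gate features on membership (a minimal sketch; the helper names are illustrative):

```typescript
// Record the scopes actually granted for this session
function grantedScopes(tokenResponseScope: string): Set<string> {
  return new Set(tokenResponseScope.split(" ").filter(Boolean));
}

// Gate a feature on whether its scope was granted
function hasScope(granted: Set<string>, needed: string): boolean {
  return granted.has(needed);
}
```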

Upgrading scopes for an existing session

For applications using the BFF pattern with optional integrations — features that require scopes outside the application's primary namespace authority — scope upgrades can happen after the user is already signed in. The application stores the user's preference, initiates a new OAuth flow with the expanded scope set, and replaces the old session on callback:

```typescript
// Your app's core permissions
const BASE_SCOPE =
  "atproto blob:*/* repo:com.example.post repo:com.example.like";

// Add permissions for another app's Lexicon (cross-namespace integration)
const EXTENDED_SCOPE =
  `${BASE_SCOPE} repo:social.othernet.feed.play repo:social.othernet.actor.status`;

// Start a new OAuth flow with the expanded scope
const url = await oauthClient.authorize(handle, {
  scope: wantsIntegration ? EXTENDED_SCOPE : BASE_SCOPE,
});
```

The server stores the old session identifier alongside the OAuth state so the callback handler knows to replace it. On subsequent logins, the server can look up the user's stored preferences and include the expanded scopes automatically.
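That bookkeeping can be sketched as a map from OAuth state to the session being replaced (a minimal in-memory illustration; names are hypothetical, and a real server would persist this alongside its other OAuth state):

```typescript
import { randomBytes } from "node:crypto";

// state value -> existing session ID to replace on callback
const upgradePending = new Map<string, string>();

function beginUpgrade(oldSessionId: string): string {
  const state = randomBytes(16).toString("base64url");
  upgradePending.set(state, oldSessionId);
  return state; // pass as the state for the new authorization flow
}

function completeUpgrade(state: string): string | undefined {
  const old = upgradePending.get(state);
  upgradePending.delete(state); // state is single-use
  return old; // session for the callback handler to replace
}
```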

Note that permission sets can reduce the need for explicit scope upgrades: when a client refreshes its tokens, the computed permissions may be updated to reflect changes to the resolved sets. If a new capability falls within an existing permission set's namespace authority, updating the published set is sufficient — no re-authorization required. Cross-namespace integrations will still require an explicit upgrade. See Rolling out changes for guidance on coordinating scope changes with application releases.

Further Reading and Resources