No Plugin Required: How We Connected Claude Code to Linear's GraphQL API in One Session

by Alien Brain Trust AI Learning

Meta Description: 5 projects, 9 labels, 6 views, 19 issues — created in Linear via a Claude-written Node.js script using live AWS SSM credentials. No plugin, no integration layer.

Most people connect AI to their project tracker through integrations, plugins, or automation platforms. We did it differently: Claude Code wrote a Node.js script mid-session, queried Linear’s GraphQL API directly, and created our entire backlog while we watched.

Here’s the technical pattern — and why it matters beyond this specific use case.

Why Direct API Over an Integration Platform

We could have used Zapier, Make, or a Linear MCP integration. We didn’t.

The reason is control. An integration platform gives you buttons to click. A direct API call gives you exactly what you asked for, structured exactly how you want it, with no translation layer adding noise or constraints.

When you’re creating 19 issues with specific labels, projects, descriptions, due dates, and priorities — a form-based integration would take longer than writing the script. And the script is reusable.

More importantly: we weren’t just creating issues. We were introspecting the schema, querying existing state, making decisions based on what was there, and sequencing operations that depended on each other. That’s not a workflow you can build in a no-code tool without substantial friction.

The Credential Pattern: AWS SSM at Runtime

Before any API call, we needed the Linear API key. The key is stored in AWS SSM Parameter Store — not in environment variables, not in a .env file, not in the repo.

Every script Claude Code wrote in this session fetches it at runtime:

const { execSync } = require('node:child_process');

const key = execSync(
  'aws ssm get-parameter --name "/abt/linear/api-key" --with-decryption ' +
  '--region us-east-1 --query Parameter.Value --output text',
  {
    encoding: 'utf8',
    env: { ...process.env, MSYS_NO_PATHCONV: '1', AWS_PROFILE: 'deployer' }
  }
).trim();

This is the pattern we use for every secret at ABT: nothing hardcoded, nothing in files that could be committed, everything in SSM with KMS encryption. The script fails fast and clearly if credentials aren’t available — which is the correct behavior.
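A sketch of what that fail-fast check can look like. The `requireLinearKey` helper is hypothetical, and the `lin_api_` prefix check assumes Linear's personal API key format; adjust it if your keys look different:

```javascript
// Hypothetical guard, run right after the SSM fetch. The 'lin_api_' prefix
// check assumes a Linear personal API key format.
function requireLinearKey(key) {
  if (!key || !key.startsWith('lin_api_')) {
    throw new Error('Linear API key not available from SSM, aborting before any API call');
  }
  return key;
}
```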

Step 1: Query Before You Create

Before creating anything, Claude Code queried Linear to understand the current state:

const query = `{
  team(id: "...") {
    states { nodes { id name type } }
    labels { nodes { id name } }
    projects { nodes { id name } }
  }
  issues(filter: { team: { key: { eq: "ABT" } } }, first: 10) {
    nodes { id identifier title state { name } }
  }
}`;

Result: four default onboarding tickets (ABT-1 to ABT-4, never used), one real ticket (ABT-5, security baseline), no projects, no custom labels.

That query determined the entire cleanup plan before we touched a mutation.
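The `gql` helper used in the steps below is a thin wrapper over `fetch`. A minimal sketch, assuming Node 18+ (built-in `fetch`) and the `key` fetched from SSM above; the request builder is split out so it can be inspected without a network call:

```javascript
// Minimal sketch of the gql() wrapper used throughout. Assumes Node 18+
// (global fetch) and the `key` variable fetched from SSM earlier.
function buildRequest(query, variables, apiKey) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: apiKey },
    body: JSON.stringify({ query, variables })
  };
}

async function gql(query, variables = {}) {
  const res = await fetch('https://api.linear.app/graphql',
    buildRequest(query, variables, key));
  const json = await res.json();
  if (json.errors) throw new Error(JSON.stringify(json.errors));
  return json.data;
}
```

Note that Linear accepts a personal API key directly in the `Authorization` header; an OAuth token would need a `Bearer` prefix instead.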

Step 2: Create Labels First

Assigning a label at creation time requires the label's ID, so labels have to exist before the issues that reference them.

async function createLabel(name, color) {
  return gql(
    `mutation($input: IssueLabelCreateInput!) {
       issueLabelCreate(input: $input) {
         success issueLabel { id name }
       }
     }`,
    { input: { teamId: TEAM_ID, name, color } }
  );
}

// Priority tier: p0 (urgent) through p3 (low)
// Area labels: workshop, sapb, course, infra, admin

Nine labels created in sequence. The IDs come back in the response and get stored as constants for use in subsequent mutations.
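Capturing those IDs looks roughly like this. The label names match the tiers above; the colors here are placeholders, not the ones actually used, and `createLabel` is the helper from the previous snippet passed in so the loop is easy to test in isolation:

```javascript
// Sketch of the sequencing: create each label, keep its ID for later
// mutations. Colors are illustrative placeholders.
async function createAllLabels(createLabel) {
  const LABELS = [
    ['p0', '#EF4444'], ['p1', '#F59E0B'], ['p2', '#3B82F6'], ['p3', '#9CA3AF'],
    ['workshop', '#8B5CF6'], ['sapb', '#10B981'], ['course', '#F97316'],
    ['infra', '#6366F1'], ['admin', '#64748B']
  ];
  const ids = {};
  for (const [name, color] of LABELS) {
    const data = await createLabel(name, color);
    ids[name] = data.issueLabelCreate.issueLabel.id;
  }
  return ids;
}
```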

Step 3: Create Projects

Same pattern — create first, capture the ID, use it when creating issues.

await createProject('AI Builder Sprint', 
  'Lead product. 30-day cohort workshop: Zero to Shipped.',
  '#8B5CF6', '2026-06-18');

Five projects created: AI Builder Sprint, SAPB Lab, AI-1001 Course, Blog & Content, Ops & Infra. Each with a target date and description that explains what done looks like.
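The `createProject` helper itself is another thin wrapper over a mutation. A sketch, with the payload split from the call so it can be checked without the network; the input fields follow Linear's `ProjectCreateInput` (which takes `teamIds` as an array), but verify with introspection as in Step 6 if unsure:

```javascript
// Sketch of createProject: build the payload, then run it through gql().
// TEAM_ID is the team's UUID from the Step 1 query; placeholder here.
const TEAM_ID = process.env.LINEAR_TEAM_ID || 'TEAM_UUID_FROM_STEP_1';

function projectCreatePayload(name, description, color, targetDate) {
  return {
    query: `mutation($input: ProjectCreateInput!) {
      projectCreate(input: $input) {
        success project { id name }
      }
    }`,
    variables: { input: { teamIds: [TEAM_ID], name, description, color, targetDate } }
  };
}

async function createProject(name, description, color, targetDate) {
  const { query, variables } = projectCreatePayload(name, description, color, targetDate);
  return gql(query, variables);
}
```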

Step 4: Clean Up Existing Tickets

ABT-1 through ABT-4 were Linear’s default onboarding tickets — never useful. Cancel them programmatically:

const CANCELED_STATE = 'bf421b92-6080-4a76-9e61-d83b72fc641f'; // queried in step 1

for (const id of onboardingTicketIds) {
  await updateIssue(id, { stateId: CANCELED_STATE });
}

ABT-5 (AWS security baseline) moved to Done and assigned to the Ops & Infra project. Clean board before adding 19 new issues.
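The `updateIssue` helper follows the same payload-plus-`gql` shape. A sketch, assuming the `gql` wrapper from earlier:

```javascript
// Sketch of updateIssue: issueUpdate takes the issue's UUID and an
// IssueUpdateInput. Payload split out for inspection without a network call.
function issueUpdatePayload(id, input) {
  return {
    query: `mutation($id: String!, $input: IssueUpdateInput!) {
      issueUpdate(id: $id, input: $input) {
        success issue { identifier state { name } }
      }
    }`,
    variables: { id, input }
  };
}

async function updateIssue(id, input) {
  const { query, variables } = issueUpdatePayload(id, input);
  return gql(query, variables);
}
```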

Step 5: Create 19 Issues in Sequence

Each issue creation is a single mutation with all fields populated:

async function createIssue(issue) {
  const input = {
    teamId: TEAM,
    title: issue.title,
    description: issue.description,  // full markdown, with context
    stateId: issue.state,             // Todo or Backlog
    priority: issue.priority,         // 1=Urgent, 2=High, 3=Medium
    labelIds: issue.labels,           // [p0, workshop] etc
    projectId: issue.project,
    dueDate: issue.dueDate
  };
  return gql(`mutation($input: IssueCreateInput!) {
    issueCreate(input: $input) {
      success issue { identifier title }
    }
  }`, { input });
}

150ms delay between each call to avoid rate limiting. Total time: under 5 seconds for 19 issues.
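The pacing loop is trivial but worth showing, since it is the difference between 19 clean creates and a burst that trips the rate limiter. A sketch, with `createIssue` injected so the loop can be tested with a stub:

```javascript
// Sequential creation with a fixed delay between mutations.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function createInSequence(issues, createIssue, delayMs = 150) {
  const created = [];
  for (const issue of issues) {
    created.push(await createIssue(issue));
    await sleep(delayMs); // stay comfortably under the rate limit
  }
  return created;
}
```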

Step 6: Create Custom Views

Linear’s customViewCreate mutation accepts filterData as an IssueFilter object. Claude Code introspected the schema first:

// What does IssueFilter accept?
const r = await gql(`{
  __type(name: "IssueFilter") {
    inputFields { name type { name kind } }
  }
}`);

Then used the schema to build correct filter objects:

// "This Week" view: p0 label + not done or cancelled
filterData: {
  and: [
    { labels: { some: { name: { in: ['p0'] } } } },
    { state: { type: { nin: ['completed', 'cancelled'] } } }
  ]
}

// "Launch Countdown" view: due on or before May 5, still open
filterData: {
  and: [
    { dueDate: { lte: '2026-05-05' } },
    { state: { type: { nin: ['completed', 'cancelled'] } } }
  ]
}
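Each view is then one `customViewCreate` mutation. A sketch of the `createView` helper, assuming `CustomViewCreateInput` takes `teamId`, `name`, and `filterData` (the icon field is omitted here); confirm the exact fields via introspection as above:

```javascript
// Sketch of createView. teamId scopes the view to the team; filterData is
// the IssueFilter object built from the introspected schema.
const TEAM_ID = process.env.LINEAR_TEAM_ID || 'TEAM_UUID_FROM_STEP_1';

function customViewPayload(name, filterData) {
  return {
    query: `mutation($input: CustomViewCreateInput!) {
      customViewCreate(input: $input) {
        success customView { id name }
      }
    }`,
    variables: { input: { teamId: TEAM_ID, name, filterData } }
  };
}

async function createView(name, filterData) {
  const { query, variables } = customViewPayload(name, filterData);
  return gql(query, variables);
}
```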

Six views created. One failed on icon validation (Linear's icon names aren't well documented); the retry succeeded without the icon.

The Pattern Is the Point

What we built here is not a one-off script for Linear. It’s a repeatable pattern:

  1. Fetch credentials from SSM at runtime — never hardcode
  2. Query before you mutate — understand current state
  3. Introspect the schema if you’re uncertain — don’t guess field names
  4. Create dependencies in order — labels before issues, projects before issues
  5. Small delay between bulk mutations — rate limits are real
  6. Fail fast and log clearly — silent failures waste debugging time

This same pattern applies to any API: GitHub, Notion, Airtable, HubSpot. The specific mutations change. The approach doesn’t.

What This Unlocks

The interesting outcome isn’t the 19 tickets — it’s that Claude Code can now maintain the backlog the same way it built it.

Need to add 5 new tickets for a new workstream? Describe them in a session, Claude writes the script, runs it, done. Need to bulk-update priorities before a sprint? Same pattern. Need to close out a milestone and create the next one? Still the same pattern.

The backlog stops being something you manage manually and becomes something you maintain through conversation.


Next in this series: from vague workshop idea to AI Builder Sprint — how the planning session shaped a product that didn’t exist at the start.

Tags: #automation #implementation #technical #workflows #building-in-public
