Linear Import Failed: CSV Encoding Broke Everything (How We Fixed It)
Meta Description: CSV encoding errors corrupted our first Linear import. We tried manual fixes, then learned to delete and reimport with validation. Here’s what went wrong.
We set up Linear to track our Secure AI Prompt Builder course development. The plan was simple: export our project plan to CSV, import to Linear, and start tracking work properly.
The import succeeded. 44 issues created. Everything looked fine.
Except it wasn’t. The CSV encoding was wrong, and every single issue was corrupted.
Here’s what we learned about debugging import failures and why “just fix it manually” is never the answer.
The Setup: Moving to Linear for Project Management
Context:
- Building 5-module Secure AI Prompt Builder course
- 44 tasks across 3 milestones (MVP, Beta, Launch)
- Needed proper project tracking (not just markdown files)
- Chose Linear for clean UI, API access, and custom views
The import plan:
- Create CSV with all 44 issues (title, description, milestone, labels, estimates)
- Import via Linear’s GraphQL API using @linear/sdk
- Configure 8 custom views (Sprint Board, MVP Roadmap, Testing Pipeline, etc.)
- Start tracking work
What we expected: Clean import, all issues ready to work
What we got: 44 corrupted issues with encoding errors
The First Sign of Trouble
After running the import script, we checked Linear. Issues were there, but titles looked wrong:
Expected: "Set up automated jailbreak testing framework"
Actual: "Set up automated jailbreak testing framework
Wait. That’s the same… or is it?
Looking closer at the API response:
Title: "Set up automated jailbreak testing framework\r"
The problem: Carriage returns (\r) were being included in the title field.
Why it happened: The CSV export used Windows line endings (CRLF = \r\n). When we parsed the CSV, we split on \n, leaving \r at the end of each field.
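Here’s a minimal repro (a standalone sketch, not our actual import code; the CSV content is made up, with title as the last column):
// Minimal repro: the last field on each CRLF line silently picks up a \r
const csvContent = 'estimate,title\r\n5,Set up automated jailbreak testing framework\r\n';
const rows = csvContent.split('\n');
const title = rows[1].split(',')[1];
console.log(JSON.stringify(title));
// => "Set up automated jailbreak testing framework\r"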
Attempt 1: Manual Fixes (Failed)
The approach: “It’s just 44 issues. I’ll update them manually in the Linear UI.”
Why we abandoned it:
- Encoding errors weren’t consistent (some had \r, some had other invisible characters)
- Descriptions had the same problem
- No way to validate that manual edits fixed everything
- If we needed to re-import later (for testing changes), we’d have to do it again
Time wasted: 20 minutes
Lesson: Don’t manually fix data import issues. Fix the source.
Attempt 2: Update via API (Also Failed)
The approach: “Let’s write a script to update all the issues via the API.”
We wrote a validation script that:
- Fetched all 44 issues from Linear
- Detected encoding issues (trailing \r, embedded control characters)
- Attempted to update issues with cleaned data
The code:
const { LinearClient } = require('@linear/sdk');
const client = new LinearClient({ apiKey: process.env.LINEAR_API_KEY });

// Fetch every issue in the project, then strip carriage returns
const issues = await client.issues({ filter: { project: { id: projectId } } });

for (const issue of issues.nodes) {
  const cleanTitle = issue.title.trim().replace(/\r/g, '');
  const cleanDescription = issue.description?.replace(/\r/g, '') || '';

  // Only hit the API when something actually changed
  if (cleanTitle !== issue.title || cleanDescription !== issue.description) {
    await client.updateIssue(issue.id, {
      title: cleanTitle,
      description: cleanDescription
    });
  }
}
Why this failed:
- The CSV itself had more encoding problems than just \r
- Some fields had mixed encodings we couldn’t detect programmatically
- We’d still need to validate that relationships (milestones, labels) were correct
- This approach treats the symptom, not the cause
Time wasted: 30 minutes
Lesson: Patching broken data is tech debt. Delete and reimport correctly.
The Right Fix: Delete Everything and Reimport
The approach:
- Delete all 44 corrupted issues from Linear
- Fix the CSV parsing in the import script
- Add validation before import
- Reimport with clean data
Step 1: Delete All Issues
// Remove every corrupted issue so the reimport starts from a clean slate
const issues = await client.issues({
  filter: { project: { id: projectId } }
});

console.log(`Found ${issues.nodes.length} issues to delete`);

for (const issue of issues.nodes) {
  await client.deleteIssue(issue.id);
  console.log(`  ✅ Deleted: ${issue.identifier}`);
}
Result: Clean slate. 0 issues in Linear.
Step 2: Fix the CSV Parsing
The bug (in import-to-linear.js):
// BROKEN - splits on \n, leaves \r at end of fields
const rows = csvContent.split('\n');
The fix: Don’t parse CSV manually. Use a proper CSV parser that handles encoding.
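Something like this would have worked (a sketch using csv-parse, assuming npm install csv-parse; the filename is illustrative):
// Sketch: let csv-parse handle line endings, quoting, and BOMs
const fs = require('fs');
const { parse } = require('csv-parse/sync');

const csvContent = fs.readFileSync('issues.csv', 'utf8');
const records = parse(csvContent, {
  columns: true,          // first row becomes object keys
  bom: true,              // strip a UTF-8 byte order mark if present
  skip_empty_lines: true
});

console.log(records[0].title); // no trailing \r -- CRLF handled for us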
But we had a better idea: Skip CSV parsing entirely.
Step 3: Hardcode the Data (No CSV Parsing)
The realization:
- We only need to import this data once
- The CSV was an intermediate format (easier to edit than JS)
- Parsing CSV correctly is harder than just writing the data in JavaScript
The solution:
Create import-to-linear-FIXED.js with hardcoded issue data:
const issues = [
{
title: "Set up automated jailbreak testing framework",
description: "Create Node.js test runner...",
milestone: "MVP",
labels: ["infrastructure", "testing"],
estimate: 5,
state: "Todo"
},
{
title: "Test 20 prompts against 15 attack vectors",
description: "Run comprehensive security testing...",
milestone: "MVP",
labels: ["testing", "security"],
estimate: 8,
state: "Todo"
},
// ... 42 more issues
];
Benefits:
- No encoding issues (it’s all UTF-8 JavaScript strings)
- Easy to validate (syntax errors show up immediately)
- Can version control the import script
- Runs consistently
Step 4: Add Validation Before Import
Before creating any issues, validate that all the data is correct:
console.log('Validating import data...');
// Check each issue for encoding problems, missing fields, and broken references
// (milestoneMap maps milestone names to Linear project IDs)
issues.forEach((issue, i) => {
if (issue.title.includes('\r') || issue.title.includes('\n')) {
throw new Error(`Issue ${i}: Title contains line breaks`);
}
if (!issue.milestone) {
throw new Error(`Issue ${i}: Missing milestone`);
}
if (!milestoneMap[issue.milestone]) {
throw new Error(`Issue ${i}: Invalid milestone "${issue.milestone}"`);
}
});
console.log('✅ Validation passed');
Result: If data is bad, the script fails BEFORE creating any issues in Linear.
Step 5: Reimport
node import-to-linear-FIXED.js
Output:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
LINEAR IMPORT: Secure AI Prompt Builder
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Validating import data...
✅ Validation passed
Creating issues...
✅ SAPB-1: Set up automated jailbreak testing framework
✅ SAPB-2: Test 20 prompts against 15 attack vectors
✅ SAPB-3: Create Linear project tracking structure
... (41 more)
✅ SAPB-44: Complete beta testing cycle
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Import complete!
Created: 44 issues
Skipped: 0 (already exist)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Verification in Linear:
- All 44 issues created correctly
- Titles clean (no \r, no encoding issues)
- Descriptions formatted properly
- Milestones assigned correctly
- Labels applied
- Estimates set
Total time (including debugging): 90 minutes
Time to reimport after fix: 2 minutes
What We Learned
Lesson 1: CSV Parsing Is Harder Than It Looks
The trap: “It’s just comma-separated text. I’ll split on commas.”
The reality:
- Line endings vary (LF vs CRLF vs CR)
- Quoted fields can contain commas
- Character encoding differs across systems
- Escape sequences need handling
The fix:
Use a proper CSV library (papaparse, csv-parse) or skip CSV entirely.
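To see why, here’s a sketch with papaparse (assuming npm install papaparse; the field values are made up):
// Sketch: quoted fields can contain commas, which naive splitting mangles
const Papa = require('papaparse');

const line = '"Test 20 prompts, then report",8\r\n';

// Naive split: wrong field boundary AND a stray \r\n on the last field
console.log(line.split(','));
// => [ '"Test 20 prompts', ' then report"', '8\r\n' ]

// papaparse: quoting and CRLF handled correctly
console.log(Papa.parse(line, { skipEmptyLines: true }).data);
// => [ [ 'Test 20 prompts, then report', '8' ] ]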
Lesson 2: Don’t Fix Bad Data Manually
The temptation: “It’s only 44 issues. Manual fixes will be faster.”
Why this fails:
- Inconsistent fixes (you’ll miss some)
- No validation
- Not reproducible
- Time-consuming at scale
The better approach: Fix the import process, delete, and reimport.
Lesson 3: Validate Before You Import
Before implementing validation:
- Import failed silently (data looked fine in Linear UI)
- Encoding errors only visible in API responses
- No way to catch errors before they corrupted the database
After implementing validation:
- Script fails BEFORE creating any issues if data is bad
- Clear error messages pointing to exact problems
- Confidence that imported data is clean
The validation pattern:
function validateImportData(issues) {
issues.forEach((issue, index) => {
// Check required fields
if (!issue.title) throw new Error(`Issue ${index}: Missing title`);
if (!issue.milestone) throw new Error(`Issue ${index}: Missing milestone`);
// Check for encoding issues
if (/[\r\n]/.test(issue.title)) {
throw new Error(`Issue ${index}: Title contains line breaks`);
}
// Check foreign key references
if (!milestoneMap[issue.milestone]) {
throw new Error(`Issue ${index}: Invalid milestone "${issue.milestone}"`);
}
});
}
Lesson 4: Hardcoded Data Is Fine for One-Time Imports
The conventional wisdom: “Don’t hardcode data. Use configuration files.”
When this doesn’t apply:
- One-time imports (not a recurring process)
- Small datasets (44 issues, not 10,000)
- Data won’t change after import
Benefits of hardcoding:
- No parsing errors
- Easier to validate (syntax highlighting catches errors)
- Version controlled
- Runs consistently
Lesson 5: Test the Import Script Before Running in Production
What we should have done:
- Create a test Linear workspace
- Run import script there
- Verify data looks correct
- THEN import to production workspace
What we actually did:
- Import directly to production
- Realize data is corrupted
- Spend 90 minutes debugging
Lesson: Always test data imports in a non-production environment first.
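One cheap guardrail is to make the script default to a test workspace and require an explicit opt-in for production. A hypothetical sketch (the environment variable names are made up):
// Default to the test workspace; production must be requested explicitly
const TEAM_ID = process.env.LINEAR_IMPORT_TARGET === 'production'
  ? process.env.LINEAR_PROD_TEAM_ID
  : process.env.LINEAR_TEST_TEAM_ID;

if (!TEAM_ID) throw new Error('No team ID configured for this target');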
The Final Import Script Structure
// import-to-linear-FIXED.js
const { LinearClient } = require('@linear/sdk');

const client = new LinearClient({ apiKey: process.env.LINEAR_API_KEY });
const TEAM_ID = process.env.LINEAR_TEAM_ID;
// milestoneMap (milestone name → Linear project ID) is defined elsewhere in the script
// Hardcoded issue data (no CSV parsing)
const issues = [
{ title: "...", description: "...", milestone: "MVP", ... },
// ... 43 more
];
// Validation before import
function validateImportData(issues) {
issues.forEach((issue, i) => {
if (!issue.title) throw new Error(`Issue ${i}: Missing title`);
if (/[\r\n]/.test(issue.title)) throw new Error(`Issue ${i}: Line breaks in title`);
// ... more checks
});
}
async function importIssues() {
console.log('Validating import data...');
validateImportData(issues);
console.log('✅ Validation passed\n');
console.log('Creating issues...');
for (const issue of issues) {
const result = await client.createIssue({
teamId: TEAM_ID,
title: issue.title,
description: issue.description,
projectId: milestoneMap[issue.milestone],
// ... other fields
});
const createdIssue = await result.issue;
console.log(`  ✅ ${createdIssue.identifier}: ${issue.title}`);
}
console.log('\n✅ Import complete!');
}
importIssues();
Debugging Checklist for Data Imports
When your import fails:
- Check the data source - Is the CSV/JSON encoded correctly?
- Validate before importing - Add validation that fails early
- Test in non-production first - Use a test workspace/database
- Don’t fix manually - Fix the source and reimport
- Use proper parsers - Don’t roll your own CSV parser
- Check encoding - UTF-8? UTF-16? Windows line endings?
- Verify foreign keys - Do referenced entities exist?
- Add logging - Know exactly what got imported
- Make it idempotent - Can you run it twice safely? (see the sketch below)
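For that last item, here’s what an idempotency guard could look like on top of the Linear SDK (the skip logic is our sketch, not something the SDK provides):
// Sketch: skip issues that already exist so the script can run twice safely
const existing = await client.issues({
  filter: { project: { id: projectId } }
});
const existingTitles = new Set(existing.nodes.map((i) => i.title));

for (const issue of issues) {
  if (existingTitles.has(issue.title)) {
    console.log(`  ⏭️ Skipped (already exists): ${issue.title}`);
    continue;
  }
  await client.createIssue({ teamId: TEAM_ID, title: issue.title /* ... */ });
}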
What’s Next
Now that we have clean Linear data:
- 44 issues across 3 milestones (MVP, Beta, Launch)
- 8 custom views configured (Sprint Board, Testing Pipeline, etc.)
- Ready to track prompt hardening work
- API access for automated updates
Next post: “8 Custom Linear Views for Managing a Security Testing Pipeline”
Files mentioned:
- 00-Project-Management/import-to-linear-FIXED.js - The working import script
- 00-Project-Management/linear-views-setup.md - View configuration guide
- 00-Project-Management/VERIFICATION-CHECKLIST.md - Import validation checklist