#pipelines #nightloop #workflow

The pipeline that runs while you sleep

A step-by-step walkthrough of the NightLoop pipeline. Pre-check, implement, validate. Why this order matters and what happens when things break at 3am.

It started as 47 lines of bash

Before Zowl had a visual editor, before it had a name, NightLoop was a bash script on my machine at home. I called it nightloop.sh and it did three things: read the codebase, run Claude, check if the output was broken. That was it.

I'd kick it off before bed, and most mornings I woke up to code that actually worked. Not always. But often enough that I kept running it.

The thing is, those three steps weren't random. I tried other orderings. I tried skipping the first step. Every shortcut led to garbage output. The three-step sequence is load-bearing, and I want to explain why.

Step 1: Pre-check

Before the agent writes a single line of code, it reads.

It reads your project structure. It reads relevant files. It checks what frameworks you're using, what patterns exist, what tests are already there. It builds a mental map of your codebase.

This sounds obvious. It's not.

If you've ever pasted a task into Claude Code or Cursor without giving it context, you know what happens. The agent invents its own architecture. It creates files that already exist. It installs packages you already have. It rewrites your auth layer because it didn't know you had one.

Pre-check kills this problem. The agent enters the implement phase already knowing what's there.

Here's what the old bash version looked like:

pre_check() {
  find "$PROJECT_DIR" -name "*.ts" -o -name "*.tsx" | head -200 > /tmp/file_list.txt
  cat /tmp/file_list.txt | xargs head -30 > /tmp/context.txt
  echo "Project uses: $(cat package.json | jq -r '.dependencies | keys[]')" >> /tmp/context.txt
}

Ugly? Absolutely. But it worked. It grabbed file names, read the first 30 lines of each, and dumped your dependency list into a context file. The agent got this context before doing anything else.
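That `xargs` line is also the part that later broke on paths with spaces. For what it's worth, a NUL-delimited variant sidesteps the whole IFS mess. This is a sketch, not the script I actually ran back then; it assumes GNU coreutils (`head -z`) and guards the `jq` call in case it isn't installed:

```shell
pre_check() {
  # NUL-delimited paths survive spaces and other shell-hostile characters
  find "$PROJECT_DIR" \( -name '*.ts' -o -name '*.tsx' \) -print0 \
    | head -z -n 200 > /tmp/file_list.txt
  # -0 matches the -print0 above; -r skips the command if the list is empty
  xargs -0 -r head -n 30 < /tmp/file_list.txt > /tmp/context.txt
  if command -v jq >/dev/null 2>&1; then
    echo "Project uses: $(jq -r '.dependencies | keys[]' "$PROJECT_DIR/package.json")" >> /tmp/context.txt
  fi
}
```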

In Zowl's visual editor, this is now a draggable "Pre-check" block. You configure which directories to scan, what file types matter, and how deep to go. Same idea, no bash required.

Step 2: Implement

This is where the agent actually writes code.

It takes the PRD (your task description) plus the context from pre-check and generates the implementation. Nothing fancy here conceptually. But the quality difference between "implement with context" and "implement blind" is enormous.

I ran a test last month. Same 10 tasks, same agent, same model. One batch with pre-check, one without. The pre-check batch had a 73% first-pass success rate. Without pre-check? 31%. That's not a marginal improvement. That's the difference between a useful tool and a frustrating toy.

The old bash:

implement() {
  local task="$1"
  local context
  context=$(cat /tmp/context.txt)
  # printf, because bash doesn't expand \n inside plain double quotes
  claude --print "$(printf 'Context:\n%s\n\nTask:\n%s' "$context" "$task")" > /tmp/output.txt
  apply_changes /tmp/output.txt
}

The new way in Zowl: drag an "Implement" block, connect it to the pre-check output, point it at your PRD. Done.

Step 3: Validate

Here's where most people's scripts stopped. Mine almost did too.

For the first two weeks of nightloop.sh, I didn't have validation. I'd wake up, the agent had "completed" everything, and half the code didn't compile. No tests ran. No build checks. The agent just... declared victory and moved on.

Validate runs your project. It executes npm run build, runs your test suite, checks the acceptance criteria from your PRD. If something fails, it doesn't just log an error and quit.

It retries.

validate() {
  set -o pipefail  # without this, a pipeline's status is tee's, not npm's
  local attempt=1
  while [ "$attempt" -le 3 ]; do
    if npm run build 2>&1 | tee /tmp/build_output.txt \
        && npm test 2>&1 | tee /tmp/test_output.txt; then
      echo "PASS on attempt $attempt"
      return 0
    fi
    echo "FAIL attempt $attempt, feeding errors back..."
    local errors
    errors=$(cat /tmp/build_output.txt /tmp/test_output.txt)
    implement "Fix these errors: $errors"
    attempt=$((attempt + 1))
  done
  echo "FAILED after 3 attempts" >> /tmp/failures.txt
  return 1
}

The retry loop feeds the error output back into the implement step. The agent sees exactly what broke and tries to fix it. Three attempts. If it still fails after three, the task gets flagged for human review.

This loop is where the magic happens at 3am. You're asleep. The agent hits a type error. It reads the error, fixes the import, rebuilds. Passes. Moves to the next task. You never knew anything went wrong.

Why this order is non-negotiable

I've talked to other devs building agent workflows. A surprising number skip pre-check entirely. They go straight to implement, then wonder why the agent keeps hallucinating file paths.

Look, there's a token cost to pre-check. You're spending tokens on reading before you spend tokens on writing. Some people see that as waste. I see it as the cheapest insurance you'll ever buy.

Without pre-check, the implement step burns tokens generating code that conflicts with your existing codebase. Then the validate step catches those conflicts, triggers retries, and each retry burns more tokens. You end up spending 3x the tokens you "saved" by skipping pre-check.

Pre-check before implement. Implement before validate. Validate with retries. That's the loop.
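Glued together, the loop looks roughly like this. A sketch, not the literal script: `main`, the `tasks.txt` file, and the failure log path are stand-ins, and `pre_check`, `implement`, and `validate` are the functions from the sections above:

```shell
main() {
  local tasks_file="${1:-tasks.txt}"   # one task description per line
  while IFS= read -r task; do
    [ -z "$task" ] && continue         # skip blank lines
    pre_check                          # read the codebase first
    implement "$task"                  # then write, with context
    # validate retries internally; if it still fails, flag for a human
    validate || echo "needs human review: $task" >> /tmp/failures.txt
  done < "$tasks_file"
}
```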

From bash functions to visual blocks

Honestly, the bash script worked fine for me. I'm comfortable in a terminal. But I watched three friends try to set it up and all three gave up within 20 minutes. One of them literally said "bro I'm not debugging your bash script at midnight."

Fair.

That's why NightLoop ships as a default template in Zowl. Open the app, click "NightLoop template," and you get the three-step pipeline pre-configured. Pre-check, implement, validate. Each block is visual, configurable, and connected. You can tweak scan depth, retry counts, which test commands to run. You can add steps if you want (I sometimes add a "commit" step at the end).
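That commit step, sketched as bash for the curious. The function name and message format are my own convention, and it assumes the project dir is already a git repo:

```shell
commit_step() {
  cd "$PROJECT_DIR" || return 1
  git add -A
  # a no-op commit (nothing changed) shouldn't fail the pipeline
  git commit -m "nightloop: $1" || echo "nothing to commit for: $1"
}
```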

But you don't have to touch a terminal.

The 80% pipeline

NightLoop isn't the only pipeline pattern that works. Some tasks need different flows. Code review pipelines look different. Migration pipelines look different. But for the bread-and-butter work of "here's a feature, go build it," pre-check, implement, validate covers about 80% of cases.

That's why it's the default. Not because it's the only way, but because it's the right starting point.

I still run a version of nightloop.sh on my own machine sometimes, just for nostalgia. But these days it's mostly the Zowl template doing the same job with a lot less cursing at 6am when something in the bash breaks because of a space in a file path.

Some things are better with a UI. I didn't want to admit that for a long time. But after watching my own pipeline break because of IFS issues for the third time, I built the visual editor. NightLoop graduated from a script to a product, and it turns out that was the right call.