Workflow Architecture: Ports, Processors, and Portable Definitions

William Warne

Software Engineer | Fractional CTO | Founder

Introduction

Agent workflows are powerful—they turn notes into drafts, briefs into brand-aligned prompts, and performance data into actionable feedback. But they get messy when workflow logic is tangled with triggers, APIs, and storage. The fix is a clear architecture: workflows as portable definitions with ports and adapters, and processors that run them. This post explains how we structure workflows so they stay reusable, testable, and independent of any one stack.

Workflows vs Processors

Workflows are first-class definitions. Each workflow defines what it needs: a trigger input, content or context access, a generator, and a sink for output. It does not assume how it is run or where content lives. Think of a workflow as a contract: "I need this input, this context, and this output shape."

Processors (or runners) are the execution layer. They load workflow definitions, supply adapters that implement the workflow’s ports (file reader, HTTP client, API, etc.), and expose triggers (CLI, webhook, queue). The same workflow can run in different processors—our Node-based runner, an n8n workflow, or a future custom one—with different adapters for different environments.

Why this separation matters: workflows stay portable. You can swap processors, add new triggers, or change where content lives without rewriting workflow logic. Content access is an explicit step and an explicit port, never a hardcoded path or API call.
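As a minimal sketch of that contract, a workflow can be written as a plain function over a ports interface. The names here (DraftWorkflowPorts, draftFromNote) are illustrative, not our real API, and real ports would likely be async; everything is kept synchronous for brevity.

```typescript
// A workflow declares the ports it needs as an interface...
interface DraftWorkflowPorts {
  trigger: { note: string };                                // input that starts the run
  readBrandContext: () => string;                           // context reader port
  generate: (note: string, ctx: string) => string;          // generator port
  sink: (draft: string) => void;                            // output port
}

// ...and its run logic only talks to those ports. It never touches
// files, HTTP, or storage directly, so it runs under any processor.
function draftFromNote(ports: DraftWorkflowPorts): string {
  const ctx = ports.readBrandContext();
  const draft = ports.generate(ports.trigger.note, ctx);
  ports.sink(draft);
  return draft;
}
```

Because the function depends only on the interface, a processor can satisfy it with file-based adapters locally and HTTP-backed adapters in production without the workflow changing.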

Ports and Adapters

Each workflow declares ports—interfaces it needs. Typical ports:

  • Trigger — Input that starts the workflow (e.g. a note, a message, a webhook payload).
  • Context reader — Access to brand context, content, or knowledge (e.g. brand store, file system, API).
  • Generator — Logic that produces output (e.g. draft generator, LLM call).
  • Sink — Where output goes (e.g. file, API, email).

The processor supplies adapters that implement these ports. A file-based adapter reads from the filesystem; an HTTP adapter calls an API; a Notion adapter reads from Notion. The workflow never knows the difference—it just calls the port. That keeps workflows decoupled from storage, APIs, and infrastructure.
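To make that concrete, here is a hedged sketch of one port with two interchangeable adapters. The ContextReader interface, the class names, and the on-disk layout are assumptions for illustration, not our actual implementations.

```typescript
import { readFileSync } from "node:fs";

// The port: the only thing workflow code ever sees.
interface ContextReader {
  read(key: string): string;
}

// In-memory adapter: handy for tests and local development.
class InMemoryContextReader implements ContextReader {
  constructor(private store: Record<string, string>) {}
  read(key: string): string {
    return this.store[key] ?? "";
  }
}

// File-based adapter: reads context from disk (path layout is assumed).
class FileContextReader implements ContextReader {
  constructor(private dir: string) {}
  read(key: string): string {
    return readFileSync(`${this.dir}/${key}.md`, "utf8");
  }
}

// Workflow code calls reader.read("brand-voice") and never knows
// whether the bytes came from memory, disk, an API, or Notion.
```

An HTTP or Notion adapter would implement the same interface; swapping one in is a wiring change in the processor, not a workflow change.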

Our Workflows

We run four workflows today:

| Workflow | What it does | Ports |
|----------|--------------|-------|
| draft-from-note | Note or message → draft content (LinkedIn, email, image brief) | Trigger, brand reader, generator, sink |
| image-from-brand | Brief input + brand context → image brief and prompt files | Trigger, context reader, sink |
| linkedin-feedback | LinkedIn performance data → feedback and suggested brand store updates | Performance source, feedback sink |
| user-notes | User notes → suggested brand store updates | Notes source, sink |

Each workflow lives in its own folder with a ports contract, run logic, and optional shared orchestration. The processor loads them, wires adapters, and exposes triggers (CLI, webhook). You can run the same workflow with different adapters—e.g. file-based for local dev, HTTP for production.
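The processor's wiring step can be sketched like this. The Ports shape, the "local"/"production" switch, and the stubbed adapters are assumptions to show the idea, not our real runner.

```typescript
type Ports = {
  readBrandContext: () => string;
  sink: (draft: string) => void;
};

// The processor picks adapters by environment and hands them to the
// workflow. The outbox stands in for a real file or HTTP sink.
function buildPorts(env: "local" | "production", outbox: string[]): Ports {
  if (env === "local") {
    // File/in-memory adapters for local dev.
    return {
      readBrandContext: () => "local brand context",
      sink: (draft) => { outbox.push(draft); },
    };
  }
  // HTTP-backed adapters for production (stubbed here).
  return {
    readBrandContext: () => "context fetched over HTTP",
    sink: (draft) => { outbox.push(`POST /drafts ${draft}`); },
  };
}

// The same run logic receives either set of ports unchanged.
function runWorkflow(ports: Ports): string {
  const draft = `draft based on: ${ports.readBrandContext()}`;
  ports.sink(draft);
  return draft;
}
```

Exposing a trigger is then just calling runWorkflow from a CLI handler or a webhook route; the workflow body is identical in both.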

Content Access as a Port

Content and context must not be hardcoded. They are an explicit step in the workflow and a port. The processor provides the implementation: a file reader, an API client, a knowledge store, or a Notion adapter. Whatever the source, the workflow sees a consistent interface. That keeps workflows tool-agnostic and makes it easy to swap content sources without changing workflow logic.

Conclusion

Workflows as portable definitions, ports as contracts, and processors as the execution layer—that’s the architecture that keeps agent workflows maintainable and reusable. If you want workflows built and managed for you, we do that. See our services and workflows for what we offer.

Managed Agent Workflow — We build and manage your agent workflows to save you time, so you can make the most of the AI revolution.