Description
Overview
This automation workflow performs conditional routing and data enrichment on multiple JSON items triggered manually. The pipeline starts when the user executes it and passes each item through a deterministic IF node that evaluates item properties.
Designed for developers and automation engineers, this workflow addresses the need for multi-item processing with conditional branching using a manual trigger and function nodes.
Key Benefits
- Manual trigger initiates controlled, user-driven execution of the automation workflow.
- Generates multiple data items programmatically for item-by-item conditional evaluation.
- Implements conditional routing based on item properties with deterministic logic.
- Applies consistent data enrichment by setting specific fields per conditional outcome.
Product Overview
This automation workflow begins with a manual trigger node that requires a user action to start the process. Upon activation, a function node outputs two JSON objects, each containing an `id` field with values 0 and 1 respectively. These items are then individually evaluated by an IF node, which performs a self-referential equality check on the `id` field. Because comparing a value with itself always succeeds, the workflow routes every item through the true branch. In this branch, a set node assigns a new attribute `name` with the static value “n8n” to each item. The false branch, which would assign a different value, remains unused because the condition can never be false. The workflow completes synchronously with enriched JSON objects containing both `id` and `name` fields. Error handling and retries are managed by platform defaults, as no explicit mechanisms are configured. This workflow demonstrates fundamental conditional data processing within a manually triggered integration pipeline.
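The whole four-node pipeline can be sketched as a few plain JavaScript steps (a minimal simulation for illustration, not the workflow JSON itself; item shapes follow n8n's `{ json: ... }` wrapping convention):

```javascript
// Simulate the pipeline: manual trigger -> function -> IF -> set.
// n8n wraps each item's payload in a `json` property.

// Function node: emit two items with distinct ids.
const generated = [{ json: { id: 0 } }, { json: { id: 1 } }];

// IF node: compare each item's id to itself -- a tautology.
const trueBranch = generated.filter((item) => item.json.id === item.json.id);
const falseBranch = generated.filter((item) => item.json.id !== item.json.id);

// Set node (true branch only): add a static `name` field.
const output = trueBranch.map((item) => ({
  json: { ...item.json, name: "n8n" },
}));

// Every item reaches the true branch; falseBranch stays empty.
console.log(output.map((item) => item.json));
```

Running the sketch shows both items emerging with `id` preserved and `name` set to `"n8n"`, mirroring the workflow's synchronous result.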
Features and Outcomes
Core Automation
This automation workflow leverages a manual trigger and conditional logic to process multiple JSON items deterministically. It uses an IF node to evaluate each item’s `id` property and route accordingly within the orchestration pipeline.
- Processes multiple items in a single execution cycle using a function node output.
- Evaluates each item with a self-comparison (`id` equals `id`), guaranteeing deterministic routing to the true branch.
- Applies data enrichment by setting new fields based on conditional outcomes.
Integrations and Intake
The workflow intake consists exclusively of a manual trigger node requiring user initiation. No external API integrations or authentication methods are involved. Input data is generated internally through a function node emitting an array of JSON objects with predefined fields.
- Manual trigger node initiates the automation without external dependencies.
- Function node supplies static JSON objects as input data for processing.
- No external credentials or API keys required for execution.
Outputs and Consumption
The workflow outputs enriched JSON objects containing original and added fields. Execution is synchronous, with output available immediately after processing. Each item includes an `id` and a `name` field assigned by conditional logic.
- Outputs JSON objects with properties: `id` and `name`.
- Delivers results synchronously upon manual trigger execution.
- Supports downstream consumption or further processing in automated pipelines.
Workflow — End-to-End Execution
Step 1: Trigger
The workflow is initiated manually by the user invoking the “On clicking ‘execute’” node. This manual trigger does not require any input data or headers and serves as the explicit start point for the automation sequence.
Step 2: Processing
The function node generates two JSON items with distinct `id` fields (0 and 1). No schema validation or transformation beyond static data creation occurs; items pass through unchanged except for their initial generation.
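In an n8n Function node the generation step amounts to returning an array of wrapped items; the sketch below wraps that body in a plain function so it can run outside n8n (the exact code in the imported workflow may differ):

```javascript
// Sketch of the Function node body. n8n executes the body directly,
// so each returned element becomes one workflow item with its payload
// under the `json` key.
function functionNodeBody() {
  return [
    { json: { id: 0 } },
    { json: { id: 1 } },
  ];
}

const items = functionNodeBody();
```

Two items with `id` values 0 and 1 then flow downstream to the IF node.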
Step 3: Analysis
The IF node evaluates each item’s `id` property against itself using an equality condition, which always returns true. This deterministic check routes all items to the true branch for further processing, effectively bypassing the false branch.
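The IF node's check reduces to the tautology below (a JavaScript sketch of the routing logic, not the node's actual JSON configuration):

```javascript
// Comparing a value against itself always succeeds, so every item
// is routed to the true branch.
function routeItem(item) {
  return item.json.id === item.json.id ? "true" : "false";
}

const routes = [{ json: { id: 0 } }, { json: { id: 1 } }].map(routeItem);
// Both items route to "true"; the false branch never receives an item.
```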
Step 4: Delivery
Items routed on the true branch are processed by a set node that assigns a static `name` field with the value “n8n”. The workflow completes synchronously, outputting enriched JSON data with `id` and `name` fields for each item.
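The set node's effect on each true-branch item can be sketched as a shallow merge (assuming the node is configured to keep existing fields, so the original `id` survives):

```javascript
// Set node: merge a static `name` field into each item's payload,
// preserving the original `id`.
function setName(item) {
  return { json: { ...item.json, name: "n8n" } };
}

const enriched = [{ json: { id: 0 } }, { json: { id: 1 } }].map(setName);
```

Each enriched item carries both the generated `id` and the static `name`.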
Use Cases
Scenario 1
When testing conditional logic in multi-item processing, this automation workflow provides a controlled environment. By manually triggering and generating test data, developers can verify conditional routing and enrichment functions, ensuring consistent output across items.
Scenario 2
For demonstration purposes, this orchestration pipeline illustrates how to handle multiple JSON objects within a single run with deterministic routing. Users can apply this pattern to validate data transformation steps before integrating external event sources.
Scenario 3
When implementing no-code integration templates that require manual initiation, this workflow exemplifies synchronous processing of multiple data items with conditional branching. The deterministic outcomes assist in debugging and workflow design validation.
How to use
Integrate this automation workflow into your n8n instance by importing the workflow JSON. No external credentials or APIs are required. To operate, manually trigger execution using the designated node, which initiates processing of the predefined JSON items. Observe the conditional routing in the IF node and the subsequent field setting in the set node. Outputs will contain the enriched `name` field alongside the original `id` values. This workflow is suitable for testing logic and conditional routing within manual or development environments.
Comparison — Manual Process vs. Automation Workflow
| Attribute | Manual/Alternative | This Workflow |
|---|---|---|
| Steps required | Manual data creation, conditional checks, and enrichment performed separately. | Single triggered execution handling creation, evaluation, and enrichment automatically. |
| Consistency | Subject to human error and inconsistent application of conditions. | Deterministic conditional logic ensures uniform processing of all items. |
| Scalability | Limited by manual effort and complexity of multi-item management. | Supports scalable multi-item processing within a single automated run. |
| Maintenance | Requires manual updates and validation for each condition or data change. | Centralized workflow logic allows easier updates and reduces failure surface. |
Technical Specifications
| Environment | n8n automation platform |
|---|---|
| Tools / APIs | Manual Trigger, Function, IF, Set nodes |
| Execution Model | Synchronous processing upon manual trigger |
| Input Formats | Internal JSON objects generated in function node |
| Output Formats | JSON objects with enriched fields |
| Data Handling | Transient in-memory processing without persistence |
| Credentials | None required |
Implementation Requirements
- n8n instance with access to manual trigger, function, IF, and set nodes.
- User interaction to initiate the manual trigger node for execution.
- No external API keys or authentication required for this workflow.
Configuration & Validation
- Import the workflow JSON into an n8n environment supporting required nodes.
- Execute the manual trigger node and observe the generation of two JSON items.
- Verify that both items pass the IF node’s condition and receive the `name` field set to “n8n”.
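The verification step above can be automated with a couple of assertions (a sketch; `executionOutput` is a hypothetical stand-in for data copied from the n8n execution view):

```javascript
// Hypothetical output copied from the n8n execution view after one run.
const executionOutput = [
  { id: 0, name: "n8n" },
  { id: 1, name: "n8n" },
];

// Validate: two items, ids preserved, every item enriched with name "n8n".
if (executionOutput.length !== 2) throw new Error("expected two items");
executionOutput.forEach((item, index) => {
  if (item.id !== index) throw new Error(`id mismatch at index ${index}`);
  if (item.name !== "n8n") throw new Error(`missing name at index ${index}`);
});
```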
Data Provenance
- Triggered by the “On clicking ‘execute’” manual trigger node.
- Function node generates base JSON objects with `id` fields 0 and 1.
- IF node evaluates each item’s `id` property for routing; Set node applies final enrichment.
FAQ
How is the automation workflow triggered?
It is triggered manually by a user action on the manual trigger node labeled “On clicking ‘execute’”.
Which tools or models does the orchestration pipeline use?
The pipeline uses native n8n nodes: manual trigger, function, IF conditional, and set nodes to process and enrich JSON data.
What does the response look like for client consumption?
The output consists of JSON objects each containing `id` and `name` fields, delivered synchronously after processing.
Is any data persisted by the workflow?
No data persistence is configured; all processing happens transiently in memory during workflow execution.
How are errors handled in this integration flow?
No explicit error handling is defined; the workflow relies on platform default mechanisms for retries or failure management.
Conclusion
This automation workflow provides a simple, manual trigger-based orchestration pipeline that processes multiple JSON items with deterministic conditional routing. It enriches each item consistently by setting static fields based on a condition that always evaluates true. The workflow operates synchronously without external dependencies or persistence, suitable for testing or demonstration of conditional logic in multi-item processing. One constraint is its reliance on manual execution, which requires explicit user initiation for each run. Overall, it offers a transparent, reliable method to validate basic branching and data enrichment within n8n’s automation environment.