Description
Overview
This fact-checking automation workflow detects hallucinations and verifies factual correctness in textual content by breaking the input into discrete claims and comparing them against a reference set of verified facts. Built as a low-code integration pipeline, it enables precise identification of inaccurate statements through a specialized language model.
Intended for content reviewers, editors, and data accuracy specialists, this orchestration pipeline begins with a manual or external workflow trigger and processes text to provide structured factual validation of each sentence. The workflow employs a code node to segment input text into sentences, preserving complex date formats and list structures for accuracy.
Key Benefits
- Automates sentence-level fact verification by splitting text into individual claims for granular analysis.
- Applies a specialized language model to detect hallucinations, improving content reliability in automation workflows.
- Supports both manual and external workflow triggers, enabling flexible integration in varied operational contexts.
- Filters and aggregates only inaccurate claims, streamlining the review process in the orchestration pipeline.
Product Overview
This fact-checking automation workflow initiates via manual trigger or invocation from another workflow, receiving two primary inputs: a block of verified factual data and a text passage to analyze. The core logic begins with a JavaScript code node that parses the input text into sentences using a regex designed to preserve date formats and list items, ensuring precise claim segmentation.
Each claim is merged with the verified facts data and individually processed through a specialized Ollama language model, “bespoke-minicheck,” optimized for hallucination detection and factual validation. The model outputs a binary correctness indicator for each claim. Subsequent filtering isolates claims flagged as inaccurate.
The workflow aggregates these inaccurate claims and uses a secondary language model chain to generate a structured summary report detailing the number and content of factual errors. Error handling relies on platform default mechanisms without customized retry or backoff logic. The workflow maintains data confidentiality by processing inputs transiently and does not persist any user data beyond execution.
Features and Outcomes
Core Automation
This fact-checking automation workflow processes input text by splitting it into sentences, each evaluated for accuracy against known facts within a low-code integration pipeline. The “Code” node implements custom sentence segmentation, while the “Basic LLM Chain4” node and the “Ollama Chat Model” perform binary fact validation.
- Single-pass sentence segmentation preserving complex date and list formats.
- Binary yes/no evaluation of claim correctness per sentence.
- Structured aggregation of inaccurate claims for downstream analysis.
Integrations and Intake
The workflow integrates with Ollama’s language modeling API, authenticating via API credentials to access the “bespoke-minicheck” model specialized in hallucination detection. Input events originate from manual triggers or other workflows, requiring two input parameters: verified facts and the text to analyze.
- Ollama Chat Model for fact-checking with API key authentication.
- Manual and external workflow triggers for flexible event intake.
- Custom JavaScript node for precise text-to-sentence transformation.
Outputs and Consumption
Outputs consist of a structured summary report detailing the number of factual inaccuracies and listing individual incorrect claims. This synchronous orchestration pipeline returns aggregated results suitable for editorial review or integration with downstream content validation systems.
- JSON-formatted summary of incorrect factual statements.
- Aggregated array of inaccurate claims extracted from the source text.
- Binary yes/no correctness flags per claim for precise filtering.
Workflow — End-to-End Execution
Step 1: Trigger
The workflow initiates either manually via a user action or through a call from another workflow, requiring two inputs: a factual reference document and the text passage to verify. This trigger model supports flexible integration into larger automation systems.
Step 2: Processing
The input text undergoes sentence segmentation using a custom JavaScript function that respects sentence-ending punctuation while preserving date expressions and list markers. This ensures that sentences representing individual factual claims are accurately extracted without erroneous splitting.
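A minimal sketch of this segmentation step, assuming a split on sentence-ending punctuation guarded by a month-abbreviation lookbehind; the workflow's actual regex is not reproduced here, so treat the pattern below as illustrative:

```javascript
// Hypothetical version of the "Code" node's sentence splitter.
// It splits on ., !, or ? only when followed by whitespace and a
// capital letter, and never directly after a month abbreviation,
// so a date like "Jan. 5, 2024." stays inside one sentence.
function splitIntoSentences(text) {
  const pattern =
    /(?<!\b(?:Jan|Feb|Mar|Apr|Jun|Jul|Aug|Sept?|Oct|Nov|Dec))([.!?])\s+(?=[A-Z])/g;
  return text
    .replace(pattern, "$1\u0000") // mark sentence boundaries
    .split("\u0000")
    .map((s) => s.trim())
    .filter(Boolean);
}
```

List markers such as “1.” are also left intact by this sketch, because an enumerator is normally followed by lowercase text rather than whitespace plus a capital letter.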
Step 3: Analysis
Each sentence is merged with the verified facts and sent to a specialized Ollama language model designed for hallucination detection. The model evaluates each claim’s factual accuracy, outputting a “yes” or “no” response. Claims marked as incorrect are filtered for aggregation and further summarization.
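The per-claim check can be sketched as follows. The Document/Claim prompt shape follows bespoke-minicheck's published convention, and the commented request targets Ollama's standard `/api/generate` endpoint; both are assumptions to verify against your model card and Ollama setup:

```javascript
// Build the prompt the Ollama Chat Model node sends for one claim.
// bespoke-minicheck expects the verified facts as the "Document" and
// the sentence under test as the "Claim".
function buildPrompt(verifiedFacts, claim) {
  return `Document: ${verifiedFacts}\nClaim: ${claim}`;
}

// The model answers "Yes" or "No"; normalize that to a boolean flag.
function isClaimSupported(modelReply) {
  return /^yes\b/i.test(modelReply.trim());
}

// Illustrative call (Node 18+ global fetch), not executed here:
// const res = await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({
//     model: "bespoke-minicheck",
//     prompt: buildPrompt(facts, claim),
//     stream: false,
//   }),
// });
// const { response } = await res.json();
// const correct = isClaimSupported(response);
```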
Step 4: Delivery
The workflow aggregates all claims flagged as incorrect and generates a concise summary report via another language model chain. This report enumerates the number of factual errors and lists the problematic statements, providing structured output suitable for editorial review or automated downstream consumption.
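The Filter and Aggregate stages reduce to a small transformation like the one below; the field names (`claim`, `correct`) are illustrative stand-ins, not the workflow's exact item keys:

```javascript
// Sketch of the Filter → Aggregate stage feeding the summary chain.
function summarizeErrors(results) {
  const inaccurate = results
    .filter((r) => !r.correct) // Filter node: keep only "no" verdicts
    .map((r) => r.claim);      // Aggregate node: collect the claim text
  return {
    errorCount: inaccurate.length,
    inaccurateClaims: inaccurate,
  };
}
```

The resulting object is what the final LLM chain would expand into the prose summary report.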
Use Cases
Scenario 1
A content editor receives a scientific article draft and needs to verify factual accuracy efficiently. Using this fact-checking automation workflow, the editor inputs the draft and verified facts, receiving a detailed list of inaccurate claims in one processing cycle, expediting review and correction.
Scenario 2
A data integrity team monitors published content for hallucinations introduced by AI-generated text. By integrating this orchestration pipeline into their validation system, they automatically flag and review sentences inconsistent with known facts, improving reliability of the published information.
Scenario 3
Academic researchers compiling literature reviews use this automation workflow to validate claim accuracy against a curated fact base. The pipeline returns structured factual assessments, enabling researchers to identify and exclude erroneous statements before publication.
How to use
To deploy this fact-checking automation workflow, import it into your n8n instance and configure the required credentials for the Ollama API. Provide two inputs: a string containing verified factual information and the target text for analysis. Execute the workflow manually or trigger it from another automation. The output will include an aggregated summary of incorrect claims to guide content correction efforts. Regularly update the verified facts input to maintain contextual accuracy.
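When invoking the workflow from another automation, a small guard on the two required inputs avoids silent failures. The key names (`facts`, `text`) below are assumptions; align them with whatever your trigger node actually expects:

```javascript
// Validate the payload passed to the workflow trigger.
// Both inputs must be non-empty strings.
function validateInput(payload) {
  const ok =
    typeof payload.facts === "string" && payload.facts.length > 0 &&
    typeof payload.text === "string" && payload.text.length > 0;
  if (!ok) {
    throw new Error("Workflow requires non-empty 'facts' and 'text' strings");
  }
  return payload;
}
```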
Comparison — Manual Process vs. Automation Workflow
| Attribute | Manual/Alternative | This Workflow |
|---|---|---|
| Steps required | Multiple manual reviews and cross-referencing for each claim. | Automated sentence splitting and fact-checking in a single pipeline. |
| Consistency | Subject to human error and variable judgment. | Consistent binary evaluation by a specialized language model. |
| Scalability | Limited by human reviewer availability and speed. | Scales with system resources and API throughput. |
| Maintenance | Requires ongoing training and oversight of reviewers. | Relies on periodic updates to verified facts and model versions. |
Technical Specifications
| Environment | n8n automation platform |
|---|---|
| Tools / APIs | Ollama language model API (“bespoke-minicheck”) |
| Execution Model | Synchronous request–response with manual or triggered start |
| Input Formats | Plain text string for both facts and source text |
| Output Formats | Aggregated JSON summary of inaccurate claims with binary correctness flags |
| Data Handling | Transient processing with no persistent storage |
| Credentials | Ollama API key authentication |
| Known Constraints | Relies on external Ollama API availability and verified facts input accuracy |
Implementation Requirements
- Access to n8n automation platform with version supporting JavaScript code and language model nodes.
- Configured Ollama API credentials with permission to use “bespoke-minicheck” model.
- Provision of verified factual data and target text inputs in proper string format.
Configuration & Validation
- Set up Ollama API credentials in n8n credentials manager before executing the workflow.
- Input verified factual content and the text to analyze via the trigger node or external workflow call.
- Confirm the workflow outputs a JSON summary listing any claims marked as factually incorrect.
Data Provenance
- Trigger nodes: manual trigger and external workflow trigger for flexible initiation.
- Processing nodes: “Code” node for sentence extraction, “Basic LLM Chain4” and “Ollama Chat Model” for claim validation.
- Output nodes: “Filter” to isolate incorrect claims, “Aggregate” for collecting errors, and “Basic LLM Chain” for generating summary report.
FAQ
How is the fact-checking automation workflow triggered?
The workflow can be triggered manually via a user-initiated action or invoked from another workflow, requiring input parameters for verified facts and the text to analyze.
Which tools or models does the orchestration pipeline use?
It integrates with the Ollama API, leveraging the “bespoke-minicheck” specialized language model for hallucination detection and factual verification.
What does the response look like for client consumption?
The output is a JSON-formatted summary report enumerating the number of incorrect factual statements along with a list of those claims for editorial review.
Is any data persisted by the workflow?
No data is stored persistently; the workflow processes inputs transiently and only outputs aggregated results without maintaining data beyond execution.
How are errors handled in this integration flow?
Error handling is managed by the n8n platform’s default mechanisms; no custom retry or backoff strategies are configured within the workflow.
Conclusion
This fact-checking automation workflow offers a precise and deterministic method for validating textual claims against verified facts by leveraging a specialized language model within an n8n orchestration pipeline. It ensures consistent identification of hallucinations at the sentence level, facilitating reliable content accuracy assessments. While the workflow depends on external Ollama API availability and the quality of the provided factual inputs, it delivers structured summaries to support editorial and data integrity processes without persisting user data. Its design prioritizes accuracy, scalability, and integration flexibility in automated fact verification contexts.







