Description
Overview
This product description details a data import automation workflow designed to transfer product information from an Excel spreadsheet into a PostgreSQL database. This orchestration pipeline addresses the common challenge of manual data entry by programmatically reading a binary Excel file, parsing it into structured JSON, and inserting records directly into a database table.
Targeted at database administrators and data engineers, the workflow begins with a “Read Binary File” node that loads the “spreadsheet.xls” file, providing deterministic ingestion of spreadsheet data for downstream processing.
Key Benefits
- Automates Excel-to-database data transfer, eliminating manual entry errors and delays.
- Parses spreadsheet content into JSON format for flexible downstream processing.
- Supports batch insertion of product records with specified columns (name, EAN) into PostgreSQL.
- Uses a no-code integration pipeline for seamless connectivity between the file system and the database.
Product Overview
This automation workflow initiates with a “Read Binary File” node that loads the Excel spreadsheet (“spreadsheet.xls”) from the local file system as raw binary data. The binary content is then passed to the “Spreadsheet File1” node, which parses the Excel file into structured JSON objects, representing rows and columns as key-value pairs. This transformation supports precise data handling and validation in subsequent steps.
The parsed JSON data flows into the “Insert Rows1” node, which connects to a PostgreSQL database using stored credentials. This node executes insert operations into the “product” table, specifically targeting the “name” and “ean” columns. The workflow operates synchronously from file read to database insertion, ensuring complete data transfer per execution cycle.
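The end-to-end behavior can be approximated in plain Python. The sketch below is illustrative only: it uses the standard-library `sqlite3` module as a stand-in for PostgreSQL, and hard-coded sample rows (with made-up EAN values) in place of the parsed spreadsheet; the real workflow uses n8n nodes and stored credentials rather than this code.

```python
import sqlite3

# Stand-in for the Spreadsheet File node's output: each dict is one parsed
# row, keyed by the spreadsheet's column headers (sample values only).
parsed_rows = [
    {"name": "Widget A", "ean": "4006381333931"},
    {"name": "Widget B", "ean": "4006381333948"},
]

# sqlite3 stands in here for the PostgreSQL connection that the
# "Insert Rows" node opens with its stored credentials.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (name TEXT, ean TEXT)")

# Insert only the targeted columns, one statement per row, mirroring the
# node's per-row insert behavior.
conn.executemany(
    "INSERT INTO product (name, ean) VALUES (:name, :ean)",
    parsed_rows,
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM product").fetchone()[0]
print(count)  # 2
```

Because each row becomes its own parameterized insert, a file with N data rows produces N records per execution cycle, matching the synchronous behavior described above.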
Error handling relies on platform defaults without explicit retry or backoff strategies configured. Security considerations are addressed by using credential storage for database authentication, with no data persistence outside the database insertion step.
Features and Outcomes
Core Automation
The automation workflow accepts a local binary Excel file as input, parses its data using a spreadsheet parsing node, and deterministically inserts product records into the database. This no-code integration pipeline processes each row independently without manual intervention.
- Single-pass evaluation from binary read to row insertion ensures data integrity.
- Structured JSON transformation enables extensible data handling.
- Direct database insertion avoids intermediate storage and accidental duplication.
Integrations and Intake
The workflow integrates with the local file system for intake and PostgreSQL for data storage. Authentication to the database is managed via stored credentials, enabling secure connectivity. The trigger relies on a static file path, requiring the presence of “spreadsheet.xls” in the designated location.
- File system node reads binary Excel files from local storage.
- Spreadsheet parser converts Excel sheets to JSON objects automatically.
- PostgreSQL node inserts data into specified tables using credential-based authentication.
Outputs and Consumption
Output consists of inserted rows within the PostgreSQL “product” table, specifically populating the “name” and “ean” columns. The workflow completes synchronously, confirming data persistence after each execution cycle. No additional output formats or external notifications are produced.
- Data stored as relational records in PostgreSQL database.
- Synchronous operation ensures completion before workflow termination.
- Output restricted to database insertion confirmation events.
Workflow — End-to-End Execution
Step 1: Trigger
The workflow begins by reading a binary Excel file named “spreadsheet.xls” from a fixed local file path. This node acts as the initiating event, loading the raw file content into the workflow for subsequent processing.
Step 2: Processing
The binary data from the file read node is parsed by a spreadsheet file node, which extracts the spreadsheet content into structured JSON. This parsing converts each row into an object with keys matching the spreadsheet columns, enabling schema-aligned data for insertion.
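As an illustration of what this parsing step produces, the following minimal Python sketch (not the node’s actual implementation, which handles real .xls decoding) converts a header row plus data rows into the kind of key-value objects described above:

```python
# Sketch of the row-to-object transformation performed by the spreadsheet
# parsing step (illustrative only; sample data, not real spreadsheet content).

def rows_to_objects(header, rows):
    """Map each data row to a dict keyed by the column names in `header`."""
    return [dict(zip(header, row)) for row in rows]

# Example content after the binary file has been decoded:
header = ["name", "ean"]
rows = [
    ["Widget A", "4006381333931"],
    ["Widget B", "4006381333948"],
]

parsed = rows_to_objects(header, rows)
print(parsed[0])  # {'name': 'Widget A', 'ean': '4006381333931'}
```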
Step 3: Analysis
No complex logic or conditional branching is applied. The workflow deterministically maps spreadsheet rows to database insertions, relying on the presence of “name” and “ean” fields in the parsed JSON to populate corresponding database columns.
Step 4: Delivery
Parsed rows are inserted into the “product” table of a PostgreSQL database using stored credentials. The workflow completes synchronously, confirming data persistence upon successful insertion of each row.
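Conceptually, each parsed row supplies the parameters of one insert statement. The sketch below is an assumption-laden illustration (the PostgreSQL node generates the actual SQL; the placeholder style and the extra `price` column are hypothetical), showing how only the targeted columns are picked out of a parsed row:

```python
# Illustrative mapping from a parsed JSON row to insert parameters.
# The real statement is generated by the PostgreSQL node, not this code.

SQL = "INSERT INTO product (name, ean) VALUES (%s, %s)"  # psycopg-style placeholders

def to_params(row):
    """Pick out only the targeted columns; extra spreadsheet columns are ignored."""
    return (row["name"], row["ean"])

# Hypothetical parsed row with an extra column the workflow does not target:
row = {"name": "Widget A", "ean": "4006381333931", "price": "9.99"}
print(to_params(row))  # ('Widget A', '4006381333931')
```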
Use Cases
Scenario 1
A retail company needs to migrate product data from legacy Excel spreadsheets into a centralized PostgreSQL database. This automation workflow imports product names and EAN codes with no manual data entry, producing accurate database records in a single execution cycle.
Scenario 2
A data engineering team requires a repeatable process to update product catalogs stored in Excel files. The workflow parses spreadsheet content into JSON and inserts new product rows into the database, ensuring consistent data integration without manual intervention.
Scenario 3
An inventory system needs batch uploads of product identifiers from Excel exports. This orchestration pipeline reads the binary file, converts it to structured data, and inserts records into PostgreSQL, enabling reliable bulk data ingestion.
How to use
To deploy this workflow, import it into an n8n instance with access to the local file system containing “spreadsheet.xls”. Configure the PostgreSQL credentials under the node settings to enable database connectivity. Execute the workflow manually or schedule it to run periodically. Upon execution, expect synchronous reading, parsing, and insertion of product records with “name” and “ean” fields into the specified database table.
Comparison — Manual Process vs. Automation Workflow
| Attribute | Manual/Alternative | This Workflow |
|---|---|---|
| Steps required | Multiple manual steps: open Excel, review, enter data manually. | Single automated sequence: file read, parse, database insert. |
| Consistency | Prone to human errors and inconsistencies in data entry. | Deterministic data parsing and insertion ensure uniform results. |
| Scalability | Limited by manual throughput and human capacity. | Scales linearly with file size and database capacity. |
| Maintenance | Requires ongoing manual effort and verification. | Low maintenance; configured once with credential updates as needed. |
Technical Specifications
| Environment | n8n workflow running on local or hosted instance with file system access |
|---|---|
| Tools / APIs | Read Binary File node, Spreadsheet File parser node, PostgreSQL node |
| Execution Model | Synchronous sequential processing from file read to database insertion |
| Input Formats | Binary Excel file (.xls) |
| Output Formats | Relational rows inserted in PostgreSQL “product” table |
| Data Handling | Binary file read, JSON transformation, direct DB insert |
| Known Constraints | Requires local presence of “spreadsheet.xls”; fixed columns (“name”, “ean”) expected |
| Credentials | PostgreSQL stored credentials for secure database connection |
Implementation Requirements
- Local file “spreadsheet.xls” must exist and be accessible by the workflow environment.
- PostgreSQL credentials configured and valid for database connection.
- n8n instance with access to required nodes and network permissions for database access.
Configuration & Validation
- Verify the presence of “spreadsheet.xls” in the expected file path accessible by n8n.
- Configure PostgreSQL node credentials with correct authentication details.
- Test workflow execution to confirm rows from the spreadsheet are correctly inserted into the “product” table with “name” and “ean” fields populated.
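The first check above can be scripted. A minimal pre-flight sketch, assuming Python is available on the n8n host and using a hypothetical relative path in place of your actual file location:

```python
from pathlib import Path

# Hypothetical input path; substitute the path your n8n instance reads from.
spreadsheet = Path("spreadsheet.xls")

def preflight(path):
    """Return True when the input file exists and is non-empty."""
    return path.is_file() and path.stat().st_size > 0

print(preflight(spreadsheet))
```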
Data Provenance
- Trigger: “Read Binary File” node reads local Excel file as binary input.
- Transformation: “Spreadsheet File1” node parses binary Excel into JSON objects.
- Destination: “Insert Rows1” node inserts data into PostgreSQL “product” table using stored credentials.
FAQ
How is the data import automation workflow triggered?
The workflow is triggered by reading a binary Excel file named “spreadsheet.xls” from a local file path, initiating the data processing sequence.
Which tools or models does the orchestration pipeline use?
The pipeline uses n8n nodes including a binary file reader, a spreadsheet file parser to convert Excel data into JSON, and a PostgreSQL node for database insertion.
What does the response look like for client consumption?
The workflow outputs are database insertions into the “product” table, with no external response payload. Confirmation of insertion is implicit upon successful execution.
Is any data persisted by the workflow?
Data persistence occurs only in the PostgreSQL database. The workflow does not store data outside the database insertion step.
How are errors handled in this integration flow?
Error handling relies on n8n’s default behavior; no custom retry or backoff mechanisms are configured in this workflow.
Conclusion
This automation workflow reliably imports product data from an Excel spreadsheet into a PostgreSQL database, eliminating manual data entry and ensuring consistent data transfer. It operates synchronously, parsing binary Excel files into structured JSON for direct insertion of product names and EAN codes. The workflow depends on the availability of the local spreadsheet file and configured PostgreSQL credentials to function correctly. Its deterministic, no-code integration pipeline supports scalable and maintainable data migrations with minimal operational overhead.