Triggers & Inputs

URL Fetch & Remote Input Documentation

Ingest remote files into your workflows automatically over HTTP/HTTPS, with variable support and SSRF protection.


Quick Answer: What is the URL Fetch Node?

[!NOTE] The URL Fetch Node allows your workflow to automatically download one or more files from the internet without requiring a manual upload. It is the core bridge for ingesting documents from public APIs, signed storage URLs, or CDN links.

Core Capabilities

1. Dynamic Injection

The URL Fetch node isn't just for static links. You can use data from a previous AI Extract Node to build a URL on the fly (e.g., https://api.docs.com/v1/download/{{extract_1.file_id}}).
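Conceptually, this kind of dynamic injection is plain template substitution: placeholders like `{{extract_1.file_id}}` are replaced with values produced by earlier nodes. A minimal sketch (the `render_url` helper and the context keys are hypothetical illustrations, not the platform's actual API):

```python
import re

def render_url(template: str, context: dict) -> str:
    """Replace {{node.key}} placeholders with values from prior node outputs."""
    def substitute(match: re.Match) -> str:
        key = match.group(1).strip()
        if key not in context:
            # An unresolved variable would produce a malformed URL,
            # so fail loudly instead of fetching a broken link.
            raise KeyError(f"Unresolved workflow variable: {key}")
        return str(context[key])
    return re.sub(r"\{\{([^}]+)\}\}", substitute, template)

url = render_url(
    "https://api.docs.com/v1/download/{{extract_1.file_id}}",
    {"extract_1.file_id": "abc123"},
)
```

Failing on an unresolved variable mirrors the "Variable Sanitization" best practice below: a half-formed URL should never reach the fetch step.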

2. Intelligent Metadata Discovery

When fetching a file, the node accurately determines the filename and type by prioritizing the Content-Disposition header, falling back to the URL path only when necessary.
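The priority order described above can be sketched with the standard library's header parsing. This is an illustrative approximation of the behavior, not the node's actual implementation:

```python
import posixpath
from email.message import Message
from typing import Optional
from urllib.parse import urlparse

def resolve_filename(url: str, content_disposition: Optional[str]) -> str:
    """Pick a filename: Content-Disposition first, URL path as fallback."""
    if content_disposition:
        # email.message.Message knows how to parse the header's
        # filename parameter, including quoting.
        msg = Message()
        msg["Content-Disposition"] = content_disposition
        name = msg.get_filename()
        if name:
            return name
    # Fall back to the last segment of the URL path.
    return posixpath.basename(urlparse(url).path) or "download"
```

A signed storage URL often ends in an opaque token, which is exactly why the header takes priority when the server supplies it.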

3. Resilience & Logging

When fetching a batch of URLs (say, 10), the node completes successfully even if some fail, as long as at least one file is retrieved. Failed URLs are recorded in the execution log for auditing.
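The "at least one succeeds" semantics can be expressed as a small partial-failure collector. In this sketch, `fetch` is an injected hypothetical callable that returns the downloaded bytes or raises on error:

```python
from typing import Callable, List, Tuple

def fetch_batch(
    urls: List[str], fetch: Callable[[str], bytes]
) -> Tuple[List[Tuple[str, bytes]], List[Tuple[str, str]]]:
    """Fetch every URL; fail the batch only if *all* of them fail."""
    results: List[Tuple[str, bytes]] = []
    failures: List[Tuple[str, str]] = []
    for url in urls:
        try:
            results.append((url, fetch(url)))
        except Exception as exc:
            # Record the failure for the execution log instead of aborting.
            failures.append((url, str(exc)))
    if not results:
        raise RuntimeError(f"All {len(urls)} URLs failed: {failures}")
    return results, failures
```

The returned `failures` list is what an execution log would surface for auditing.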

Configuration Guide

| Field | Description | Requirement |
| --- | --- | --- |
| URLs | A list of one or more fully qualified remote links. | Mandatory |
| Header Passthrough | (Advanced) Optional headers for basic authentication. | Optional |
| Timeout | The maximum duration allowed for the download. | Default: 30s |

Best Practices

  • Direct Links: Ensure your URLs point directly to the file stream. The node will fetch HTML landing pages as text, which may break downstream conversion nodes.
  • Variable Sanitization: Use a Wait Node or simple logic to ensure dynamic URLs are fully formed before the fetch occurs.
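A quick way to catch the "HTML landing page" pitfall from the first bullet is to inspect the response before handing it to downstream nodes. This heuristic check is a hypothetical sketch, not a feature of the node itself:

```python
def looks_like_landing_page(content_type: str, body: bytes) -> bool:
    """Heuristic: did the URL return an HTML page instead of a file?"""
    # An HTML Content-Type is the clearest signal.
    if "text/html" in content_type.lower():
        return True
    # Some servers mislabel responses, so also sniff the body prefix.
    prefix = body.lstrip()[:9].lower()
    return prefix.startswith(b"<html") or prefix.startswith(b"<!doctype")
```

If a URL trips this check, it likely needs a direct-download variant (for example, a link that returns the file stream rather than a preview page).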

[!TIP] Working with protected storage? For Google Drive or Dropbox, use the specialized Cloud Storage Nodes instead for OAuth-backed access.