Directly query a Large Language Model (LLM) without a browser session to generate text or variables for your workflow.
The LLM Completion block allows you to leverage the power of a Large Language Model (LLM) directly within your workflow, without needing an active browser session or webpage content. You can send a prompt to the LLM and receive a response that can be used as plain text or captured as a variable for use in subsequent blocks.
This is useful for tasks like generating dynamic content from scratch, formatting data based on abstract rules, answering general knowledge questions (based on the LLM’s training), translating text, or creating dynamic values for other workflow steps based on non-web data.
To use the LLM Completion block, you write a prompt and choose how the response is returned. Variables from previous blocks (e.g. {{some_previous_data}}, not necessarily from a webpage) can be used within the prompt to make it dynamic. The response can be output as plain text or captured as a named variable (e.g. generated_slogan, current_date, translated_text). That variable (e.g. {{current_date}}) can then be used in subsequent blocks.

Screenshot: LLM Completion block showing a prompt to get today's date and outputting it as a variable named 'date'
Example: Output as a Variable named current_date. The LLM responds with today's date (e.g. 20250605), and this value is now available as {{current_date}}.
Usage in a subsequent Open Websites block:
https://example.com/archive/{{current_date}}/news
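Outside the Jsonify UI, the logic of this example can be sketched in a few lines of Python. The `llm_complete` function here is a hypothetical stand-in for the model call the block performs, with the answer simulated deterministically so the sketch is self-contained:

```python
from datetime import date

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for the LLM call made by the block;
    # a real run would send the prompt to a model and return its answer.
    return date(2025, 6, 5).strftime("%Y%m%d")  # simulated model answer

# Capture the response as a variable, then interpolate it into the URL,
# just as {{current_date}} is interpolated by the Open Websites block.
current_date = llm_complete("Return today's date in YYYYMMDD format, nothing else.")
url = f"https://example.com/archive/{current_date}/news"
print(url)  # https://example.com/archive/20250605/news
```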
Example: Output as Text. The raw LLM response (for instance, a generated list of company names) is passed downstream.

Usage in a subsequent Extract Data block (described as "A list of items") with the following field:
| NAME | EXAMPLE VALUE OR A LONGER DESCRIPTION |
|---|---|
| company_name | Extract each company name from the provided list. |
The Extract Data block will process the text generated by the LLM and output a structured list of ten company names.
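As a rough sketch of what happens in this step, assume the LLM's Text output is a numbered list; the Extract Data block's `company_name` field then pulls each name out of it. The sample output and parsing below are illustrative, not the block's actual implementation:

```python
import re

# Simulated Text output from the LLM Completion block (a numbered list).
llm_output = """1. Acme Corp
2. Globex
3. Initech"""

# Extract each company name from the provided list, mirroring the
# 'company_name' field configured in the Extract Data block.
companies = [
    {"company_name": re.sub(r"^\s*\d+\.\s*", "", line).strip()}
    for line in llm_output.splitlines()
    if line.strip()
]
print(companies)
# [{'company_name': 'Acme Corp'}, {'company_name': 'Globex'}, {'company_name': 'Initech'}]
```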
Example: Output as a Variable named city. {{city}} will contain the randomly generated city name.

Usage in a subsequent Open Websites block:
https://en.wikipedia.org/wiki/{{city}}
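One practical wrinkle worth noting: city names can contain spaces, which are not valid in a URL path. A small sketch of how {{city}} could be made URL-safe before interpolation (the `build_wiki_url` helper is hypothetical, not part of Jsonify):

```python
from urllib.parse import quote

def build_wiki_url(city: str) -> str:
    # Wikipedia article titles use underscores for spaces;
    # quote() percent-escapes any remaining unsafe characters.
    return "https://en.wikipedia.org/wiki/" + quote(city.replace(" ", "_"))

print(build_wiki_url("Rio de Janeiro"))
# https://en.wikipedia.org/wiki/Rio_de_Janeiro
```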
The LLM Completion block offers a flexible way to integrate generative AI capabilities directly into the logic of your Jsonify workflows, especially for tasks that don't require direct webpage interaction.