FlowDown - Open & Fast AI

Developer: 子衎 王

China
App ID: 6740553198
Category: Utilities (Paid)
Price: USD 7.99
In-app purchases
Ratings: 0
Yesterday's downloads
Last updated
2026-02-24
First released
2025-03-03
Version statistics
  • 14 days 19 hours

    Time since the latest version went live

  • 72

    Version updates in the past year

  • 2025-03-03

    Earliest global release date

Version history
  • Version: 4.6.1

    Release date

    2026-02-24

    Changelog

    We improved how we display math formulas.

    Videos/Screenshots

    [10 app screenshots]

    App description

    FlowDown is a fast, minimal AI chat app that makes working with large language models feel smooth, focused, and under your control.

    For guides, tips, and advanced setup help, visit flowdown.ai.

    Why FlowDown

    Lightweight and efficient
    Focus on your conversation, not the app. FlowDown launches quickly, stays out of your way, and keeps long chats responsive, even on busy machines.

    Rich formatting with Markdown
    Get clear, readable answers. FlowDown renders Markdown beautifully, so code blocks, bullet lists, tables, and headings in AI responses are easy to scan and use.

    Works with the tools you already have
    Connect to any OpenAI-compatible API, from major cloud providers to services running in your own home lab. If it speaks the OpenAI API, FlowDown can talk to it.
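    The only contract here is the OpenAI chat-completions wire format. As a rough sketch of what that means (the base URL and model name below are placeholders, not FlowDown settings), any compatible server accepts a request shaped like this:

    ```python
    import json
    import urllib.request

    def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
        """Build an OpenAI-compatible /v1/chat/completions request.

        Any server that accepts this shape, cloud or home lab, can serve a
        client like FlowDown. Nothing here is FlowDown-specific.
        """
        payload = {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,  # ask the server to stream tokens as they are generated
        }
        return urllib.request.Request(
            url=f"{base_url.rstrip('/')}/v1/chat/completions",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    # Hypothetical local endpoint; the request is built but not sent here.
    req = build_chat_request("http://localhost:11434", "llama3", "Hello!")
    ```

    Swapping providers is then just a matter of changing the base URL, model name, and API key.
    
    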

    Talk with text, images, and audio
    Show, not just tell. Paste screenshots, drop in images, or record your voice to ask questions. Use it to:
    - Transcribe meetings or lectures
    - Summarize voice notes
    - Ask about the content inside a photo

    Smooth, instant-feeling conversations
    Blazing-fast text rendering makes responses stream in fluidly, so you can start reading and refining your prompt without waiting for the model to finish.

    Automatic chat titles
    Stop hunting for the right thread. FlowDown names your conversations automatically based on their content, so you can quickly find that "API bug fix" or "Trip planning" chat later.

    Shortcut integrations
    Use FlowDown inside your workflows. With Shortcuts support, you can:
    - Send selected text from another app to your model
    - Generate drafts, summaries, or replies with one tap
    - Build custom automations that call your favourite model

    Privacy by design
    Your data is yours. FlowDown does not collect your chats. For maximum confidentiality, you can run models locally on your own machine with no external server involved.

    Open source and transparent
    Every line of code is available on GitHub. Inspect it, audit it, or adapt it for your own setups. You can see exactly how your AI client works.

    Free trial and open code

    FlowDown is fully open source, and we provide a free trial so you can confirm it fits your workflow before buying.

    Source code and trial builds are available at:
    https://github.com/Lakr233/FlowDown
    To try it, open the Releases section and download the latest trial version.

    Local models

    For offline, private usage, FlowDown can run models locally on Apple Silicon devices with sufficient memory.

    Supported model families include:
    Cohere, Gemma, Gemma2, Gemma3, InternLM2, Llama / Mistral, OpenELM, Phi, Phi3, PhiMoE, Qwen2, Qwen3, Starcoder2, MiMo, and GLM4.

    This gives you:
    - Full control over your data
    - No dependency on external servers
    - The ability to experiment with different open models

    Remote models

    A complimentary remote model is included so you can start chatting immediately. This starter service is best-effort, with rate limits and no uptime guarantee.

    For consistent, reliable use, we recommend:
    - Bringing your own API, or
    - Hosting your own inference server on local or home-lab hardware

    FlowDown supports all OpenAI-compatible endpoints, making it ideal for connecting to tools like Ollama and LM Studio running on your own machine for maximum privacy.
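    In practice, "OpenAI-compatible" means these tools serve the same standard paths under one base URL. A small sketch (the localhost ports below are the usual Ollama and LM Studio defaults, but they are assumptions and may differ on your setup):

    ```python
    def openai_endpoints(base_url: str) -> dict:
        """Derive the standard OpenAI-compatible API paths from a server base URL.

        Typical local-server bases (assumed defaults; verify against your setup):
          Ollama:    http://localhost:11434/v1
          LM Studio: http://localhost:1234/v1
        """
        base = base_url.rstrip("/")
        return {
            "models": f"{base}/models",          # GET: list the models the server offers
            "chat": f"{base}/chat/completions",  # POST: run a chat completion
        }

    # A client that targets these paths can switch servers by changing only base_url.
    local = openai_endpoints("http://localhost:11434/v1")
    ```

    This shared path layout is why one client can talk to cloud providers and home-lab servers interchangeably.
    
    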

    Some setups require technical knowledge. If you are new to self-hosted AI or custom endpoints, please read our documentation carefully; the app can be challenging without it.

    We strongly recommend using the free trial before purchasing, as refunds may not be available. For advanced or enterprise use, configuration examples and best practices are documented on our website.

    Important note

    AI-generated content can be inaccurate or misleading. Always verify critical information and use FlowDown responsibly; you are responsible for how you use the output.

    © 2025 FlowDown Team. All rights reserved.
  • Version: 4.6.0

    Release date

    2026-02-21

    Changelog

    We once again improved our handling of local models.

    App description

    No app description available.

  • Version: 4.5.7

    Release date

    2026-02-10

    Changelog

    We updated several dependencies to improve inference reliability.

  • Version: 4.5.4

    Release date

    2026-02-01

    Changelog

    We fixed several issues when loading the most recent models.

    App description

    No app description available.

  • Version: 4.5.3

    Release date

    2026-01-19

    Changelog

    We fixed several issues when loading new models.

    App description

    No app description available.

  • Version: 4.5.0

    Release date

    2026-01-12

    Changelog

    We updated the version number to sync across platforms.

  • Version: 4.4.10

    Release date

    2026-01-09

    Changelog

    We fixed several dead links.

  • Version: 4.4.9

    Release date

    2026-01-07

    Changelog

    You can now track the conversation's progress in real-time from the background using Live Activities. Due to system limitations, enabling this feature requires you to also turn on audio feedback in the general settings. Additionally, we have fixed some long-standing bugs.

  • Version: 4.4.8

    Release date

    2026-01-06

    Changelog

    We have restored support for iOS 16 by conditionally disabling the iCloud sync feature (which requires iOS 17+ APIs) at runtime. Additionally, we fixed some UI bugs.

  • Version: 4.4.5

    Release date

    2025-12-24

    Changelog

    This update includes some bug fixes.
