
Welcome to Apiction

A unified workspace to interact with any AI model from any platform's API

🔗 Source Code

This project is open source! View the code, report issues, or contribute on GitHub:

github.com/abhish-shrivastava/apiction

🌐 Supported Platforms

Apiction works with a variety of AI platforms and providers:

  • OpenAI — https://api.openai.com/v1/chat/completions
  • Anthropic — https://api.anthropic.com/v1/messages
  • Google Gemini — https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent
  • OpenRouter — https://openrouter.ai/api/v1/chat/completions
  • HuggingFace — https://router.huggingface.co/v1/chat/completions
  • Pollinations — https://gen.pollinations.ai/v1/chat/completions
  • Fireworks.ai — https://api.fireworks.ai/inference/v1/chat/completions
  • NanoGPT — https://nano-gpt.com/api/v1/chat/completions
  • fal.ai — https://fal.run/{model_id}
  • Local AI servers — Ollama, llama.cpp, Open WebUI, LM Studio, and others

Please note that all OpenAI-compatible endpoints are supported regardless of platform — simply set the adapter to OpenAI and provide your endpoint URL. For example, https://api.together.xyz/v1/chat/completions for Together or https://api.groq.com/openai/v1/chat/completions for Groq.
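As a rough sketch of what "OpenAI-compatible" means in practice, the snippet below builds (but does not send) a chat-completions request; only the endpoint URL and token change between providers. The model name is just an example — substitute one your provider actually serves.

```python
import json
import urllib.request

def build_chat_request(endpoint: str, token: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request without sending it."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        # Most OpenAI-compatible hosts use Bearer token auth.
        "Authorization": f"Bearer {token}",
    }
    return urllib.request.Request(endpoint, data=body, headers=headers, method="POST")

# Example: point the same request shape at Groq's OpenAI-compatible endpoint.
req = build_chat_request(
    "https://api.groq.com/openai/v1/chat/completions",
    "YOUR_API_TOKEN",
    "llama-3.1-8b-instant",  # example model name; check your provider's model list
    "Hello!",
)
# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())["choices"][0]["message"]["content"]
```

Swapping in Together, OpenRouter, or a local server is just a matter of changing the endpoint (and, for local servers, usually dropping the token).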

🔒 Privacy

Your privacy matters. This app and its server do not retain your messages or API tokens. All chat data and keys are kept only in your browser (IndexedDB) unless you explicitly export or share them. Clear your browser data to remove all traces.

Please note that when you send a request, it is forwarded to the AI provider you configure — that provider may collect or retain data under its own policies. Check the provider's documentation for details.

For some providers, such as OpenRouter, you can also send requests directly from the browser to the AI provider (bypassing the PHP proxy) by enabling "Direct API" in settings. This may have implications for CORS and security and may not work with all providers.

🚀 Getting Started

  • Click the + button to create a new chat tab
  • Expand Settings (click the gear icon) to configure your AI provider
  • Enter your API endpoint and token, or use a preset
  • Start chatting!

🏠 Using Local AI platforms (Ollama, llama.cpp, Open WebUI, etc.)

You can also connect to locally-hosted AI servers:

  • First, install this app locally on your system using the GitHub link above, following the installation instructions.
  • Set the API URL to your local server's endpoint (e.g., http://localhost:11434/v1/chat/completions)
  • Leave the API token empty for local servers (unless configured otherwise)
  • Use the model name as shown in your local server (e.g., llama3, mistral)
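The steps above can be sketched as a request against a local Ollama server's OpenAI-compatible endpoint. The port and model name are examples — use whatever your local server reports. Note that no Authorization header is set, matching the empty-token default for local servers.

```python
import json
import urllib.request

# Build a request for a local Ollama server's OpenAI-compatible endpoint.
# No Authorization header is needed unless your server is configured to require one.
endpoint = "http://localhost:11434/v1/chat/completions"
body = json.dumps({
    "model": "llama3",  # use the model name exactly as your local server lists it
    "messages": [{"role": "user", "content": "Hello from a local model!"}],
}).encode("utf-8")
req = urllib.request.Request(
    endpoint,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# To send it (requires the local server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```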

⚠️ Troubleshooting

DNS Error: Could not resolve host
This is usually a temporary network issue. Try refreshing the page or checking your internet connection. On Windows, you can try flushing the DNS cache: ipconfig /flushdns
CORS errors in browser console
The app uses a PHP proxy to avoid CORS issues. Make sure you're accessing via the PHP server (not opening the HTML file directly).
SSL/TLS errors
Ensure your PHP installation has valid CA certificates. On Windows/XAMPP, you may need to configure curl.cainfo in php.ini.
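For example, the fix on XAMPP is typically a one-line php.ini change; the path below is only an illustration — point it at wherever a current cacert.pem bundle lives on your system, then restart Apache.

```ini
; php.ini — point curl at a CA bundle so HTTPS requests to AI providers verify.
; Example path; adjust to your own cacert.pem location.
curl.cainfo = "C:\xampp\php\extras\ssl\cacert.pem"
```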
Local AI not connecting
Ensure your local AI server is running. Check that the port matches your configuration. Some servers may need CORS headers enabled.

Made by Abhishek Shrivastava

Sometimes I write on newgenstack.com