A unified workspace to interact with any AI model from any platform's API
This project is open source! View the code, report issues, or contribute on GitHub:
github.com/abhish-shrivastava/apiction

Apiction works with a variety of AI platforms and providers:
- OpenAI: https://api.openai.com/v1/chat/completions
- Anthropic: https://api.anthropic.com/v1/messages
- Google Gemini: https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent
- OpenRouter: https://openrouter.ai/api/v1/chat/completions
- Hugging Face: https://router.huggingface.co/v1/chat/completions
- Pollinations: https://gen.pollinations.ai/v1/chat/completions
- Fireworks: https://api.fireworks.ai/inference/v1/chat/completions
- NanoGPT: https://nano-gpt.com/api/v1/chat/completions
- fal: https://fal.run/{model_id}

Please note that any OpenAI-compatible endpoint is supported regardless of platform: simply set the adapter to OpenAI and provide your endpoint URL, e.g. https://api.together.xyz/v1/chat/completions for Together or https://api.groq.com/openai/v1/chat/completions for Groq.
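Since most of the endpoints above speak the OpenAI-compatible chat-completions format, a request can be sketched as follows. This is a minimal sketch, not Apiction's actual implementation; the endpoint, model name, and messages are illustrative placeholders you would replace with your provider's values.

```javascript
// Build an OpenAI-compatible chat-completions request body.
// The model id is whatever your chosen provider expects.
function buildChatRequest(model, messages) {
  return {
    model,            // e.g. "gpt-4o-mini", "llama-3.1-8b-instant", ...
    messages,         // [{ role: "system" | "user" | "assistant", content: "..." }]
    stream: false,    // set true for server-sent-event streaming
  };
}

// Send the request to any OpenAI-compatible endpoint.
// Most such APIs authenticate with a Bearer token in the Authorization header.
async function chat(endpoint, apiKey, body) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses put the reply under choices[0].message.
  return data.choices[0].message.content;
}
```

Only the endpoint URL and model id change from provider to provider; the body and response shapes stay the same, which is why a single OpenAI adapter covers them all.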
Your privacy matters. This app and its server do not retain your messages or API tokens. All chat data and keys are kept only in your browser (IndexedDB) unless you explicitly export or share them. Clear your browser data to remove all traces.
Please note that when you send a request, it is forwarded to the AI provider you configure, and that provider may collect or retain data under its own policies. See the provider's documentation for details.
For some providers, such as OpenRouter, you can also send requests directly from the browser to the AI provider (bypassing the PHP proxy) by enabling "Direct API" in Settings. This requires the provider to allow cross-origin (CORS) requests and sends your API key from the browser, so it may not work with all providers.
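The two routing modes can be sketched as below. Note this is an assumption-laden illustration: "proxy.php" is a hypothetical proxy path, not necessarily Apiction's actual file name, and the query-parameter scheme is invented for the example.

```javascript
// Sketch: pick where a request is sent depending on the "Direct API" setting.
// Direct mode:  browser -> provider (needs CORS support from the provider).
// Proxy mode:   browser -> your PHP server -> provider (no CORS requirement).
function resolveEndpoint(providerUrl, useDirectApi) {
  if (useDirectApi) {
    return providerUrl; // the API key travels from the browser to the provider
  }
  // "proxy.php" and its "target" parameter are hypothetical names for this sketch.
  return `proxy.php?target=${encodeURIComponent(providerUrl)}`;
}
```

The trade-off is that proxy mode hides CORS issues but routes traffic through your server, while direct mode keeps the server entirely out of the path.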
You can also connect to locally-hosted AI servers:
- Ollama exposes an OpenAI-compatible endpoint (e.g. http://localhost:11434/v1/chat/completions); use the name of a locally pulled model (e.g. llama3, mistral).
- If the local server is unreachable on Windows, try flushing the DNS cache with ipconfig /flushdns.
- If the PHP proxy reports certificate errors, check the curl.cainfo setting in php.ini.

Click on Settings above to configure your AI, then send a message.
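A request to a local server uses the same OpenAI-compatible shape as the hosted providers. The sketch below assumes Ollama's default port (11434) and that no API key is required locally; the model name is whatever you have pulled.

```javascript
// Sketch: build a request for a local Ollama server via its
// OpenAI-compatible endpoint. Default port 11434 is assumed here.
const OLLAMA_ENDPOINT = "http://localhost:11434/v1/chat/completions";

function buildLocalRequest(model, prompt) {
  return {
    url: OLLAMA_ENDPOINT,
    body: {
      model, // a locally pulled model, e.g. "llama3" or "mistral"
      messages: [{ role: "user", content: prompt }],
    },
    // No Authorization header: local Ollama does not require an API key.
  };
}
```

Because the shape matches the hosted providers, the same OpenAI adapter works unchanged; only the endpoint URL points at localhost.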