Quick Start Guide
We have built a simple-to-use, no-code solution for deploying your AI models on bare metal servers. This product is called OpenxAI Studio.
OpenxAI Studio
OpenxAI Studio is a drag-and-drop interface that allows you to deploy your models on bare metal servers with just a few clicks. The system supports a broad range of CPUs and GPUs, providing flexibility for diverse AI workloads, including large-scale open-source models such as LLaMA 2 and Mixtral. Backend optimizations ensure fast and stable deployments.
To get started, visit OpenxAI Studio and log in with your Web3 wallet.
We have also set up a demo project where no Web3 wallet is needed, so you can try the platform right away: interact with models, deploy, and re-deploy onto a bare metal server.
Select your model
Choose from supported models in the OpenxAI App Store. The platform can recommend suitable models based on your needs.
Select your bare metal machine
Pick a machine based on CPU, GPU, RAM, and storage. If your model requires a specialized GPU, click on the GPU options to select from the available GPUs. Supported operating systems are listed for each machine.
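If you are unsure which GPU tier to pick, a rough rule of thumb is to estimate the model's memory footprint from its parameter count and quantization level. The sketch below is purely illustrative and not part of OpenxAI Studio; the parameter counts and the 20% overhead factor are assumptions, not platform requirements.

```python
# Illustrative sizing helper (not part of OpenxAI Studio): estimate how much
# GPU memory a model needs so you can pick a machine with enough VRAM.
# The 20% runtime overhead and the parameter counts below are rough assumptions.

def estimate_vram_gb(num_params_billion: float, bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Estimate VRAM in GB: parameters * bytes per parameter, plus overhead
    for the KV cache and runtime buffers."""
    return num_params_billion * bytes_per_param * overhead

if __name__ == "__main__":
    # FP16 weights use ~2 bytes/param; 4-bit quantized weights use ~0.5 bytes/param.
    for name, params in [("LLaMA 2 7B", 7), ("LLaMA 2 13B", 13), ("Mixtral 8x7B", 47)]:
        fp16 = estimate_vram_gb(params, bytes_per_param=2.0)
        q4 = estimate_vram_gb(params, bytes_per_param=0.5)
        print(f"{name}: ~{fp16:.0f} GB (FP16), ~{q4:.0f} GB (4-bit)")
```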
Deploy your model
Hit the deploy button and your model will be deployed onto a bare metal server. This process can take a few minutes. Deployment is reliable, with the hardware you selected configured automatically for optimal performance.
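Once the deployment finishes, you can verify that the model server is reachable. The snippet below is a minimal sketch that assumes the deployment exposes a standard Ollama endpoint on port 11434; the server address is a placeholder, so check your deployment details in OpenxAI Studio for the actual host.

```python
# Minimal reachability check, assuming the deployed machine runs a standard
# Ollama server on port 11434. Replace SERVER_IP with the address shown for
# your deployment in OpenxAI Studio (the value below is a placeholder).
import requests

SERVER_IP = "203.0.113.10"  # placeholder, not a real deployment address

resp = requests.get(f"http://{SERVER_IP}:11434/api/tags", timeout=10)
resp.raise_for_status()
models = resp.json().get("models", [])
print("Models available on the server:")
for m in models:
    print(" -", m.get("name"))
```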
Interact with your model
Ollama provides a simple-to-use chat interface that allows you to interact with your model. In the future, you will be able to build custom apps hosted on your bare metal machine and even use the Ollama API to interact with your model programmatically.
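As an example of programmatic access, the sketch below calls Ollama's /api/generate endpoint from Python. The server address and model name are placeholders; substitute the values from your own deployment.

```python
# Minimal sketch of calling the Ollama API on your deployed machine.
# SERVER_IP and the model name are placeholders for your own deployment.
import requests

SERVER_IP = "203.0.113.10"  # placeholder address

payload = {
    "model": "llama2",  # any model available on your server
    "prompt": "Explain what a bare metal server is in one sentence.",
    "stream": False,    # return the full response as a single JSON object
}
resp = requests.post(f"http://{SERVER_IP}:11434/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])
```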