Miqu LLM: Versions, Prompt Templates & Hardware Requirements
Explore all versions of the model, their file formats like GGML, GPTQ, and HF, and understand the hardware requirements for local inference.
Miqu is a large language model that has garnered significant attention and speculation within the AI community. It is rumored to be a leaked version of one of MistralAI's models, possibly falling under the "Mistral Medium" category or an older Mixture of Experts (MoE) experiment. The model, known as miqu-1-70b, was released as a 5-bit quantized version; no unquantized weights are available at the moment.
The model has a large 32K-token context window and has been developed further into variations, including a 120-billion-parameter frankenmerge that offers better performance than the base model. Miqu is arguably the strongest model you can download and use (non-commercially) on consumer hardware.
Hardware requirements
The performance of a Miqu model depends heavily on the hardware it's running on. For recommendations on the best computer hardware configurations to handle Miqu models smoothly, check out this guide: Best Computer for Running LLaMA and LLama-2 Models.
Below are the Miqu hardware requirements for 4-bit quantization:
For 65B and 70B Parameter Models
When you step up to the big 65B and 70B models, you need some serious hardware. For GPU inference with GPTQ formats, you'll want a top-shelf GPU with at least 40GB of VRAM: an A100 40GB, dual RTX 3090s or 4090s, an A40, RTX A6000, or RTX 8000. You'll also need 64GB of system RAM. For GGML / GGUF CPU inference, have around 40GB of RAM available for both the 65B and 70B models.
| Format | RAM Requirements | VRAM Requirements |
|---|---|---|
| GPTQ (GPU inference) | 64GB (Swap to Load*) | 40GB |
| GGML / GGUF (CPU inference) | 40GB | 600MB |
| Combination of GPTQ and GGML / GGUF (offloading) | 20GB | 20GB |
*RAM needed to load the model initially. Not required for inference. If your system doesn't have quite enough RAM to fully load the model at startup, you can create a swap file to help with the loading.
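The figures in the table above can be approximated from the parameter count and quantization width. The sketch below is a rough estimate, not a measured value; the 1.2 overhead factor for the KV cache and runtime buffers is an assumption:

```python
def model_memory_gb(n_params_b, bits_per_weight, overhead=1.2):
    """Rough RAM/VRAM footprint for a quantized model.

    n_params_b: parameter count in billions.
    bits_per_weight: quantization width (e.g. 4 for Q4, 5 for Q5).
    overhead: assumed fudge factor for KV cache and runtime buffers.
    """
    # 1 billion params at 8 bits each is ~1 GB, so scale by bits/8.
    weights_gb = n_params_b * bits_per_weight / 8
    return weights_gb * overhead

# A 4-bit 70B model: roughly 70 * 4/8 * 1.2 = 42 GB
print(round(model_memory_gb(70, 4), 1))
```

Real quantization formats mix bit widths per tensor (e.g. Q5_K_M), so treat the result as a ballpark, not an exact file size.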
Memory speed
When running Miqu AI models, pay attention to how RAM bandwidth and model size impact inference speed. These large language models must be read entirely from RAM or VRAM each time they generate a new token (piece of text). For example, a 4-bit 7B-parameter model takes up around 4.0GB of RAM.
Suppose you have a Ryzen 5 5600X processor and DDR4-3200 RAM with a theoretical max bandwidth of 50 GBps. In this scenario, you can expect to generate approximately 9 tokens per second. Typically, this performance is about 70% of your theoretical maximum speed due to several limiting factors such as inference software, latency, system overhead, and workload characteristics, which prevent reaching the peak speed. To achieve a higher inference speed, say 16 tokens per second, you would need more bandwidth. For example, a system with DDR5-5600 offering around 90 GBps could be enough.
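The arithmetic above can be sketched as a one-line bandwidth-bound estimate. This is a simplification (the 0.7 efficiency factor is the ~70%-of-peak figure mentioned above, not a guarantee):

```python
def tokens_per_second(bandwidth_gbps, model_size_gb, efficiency=0.7):
    # Each generated token requires streaming all weights through memory once,
    # so throughput is roughly bandwidth / model size, scaled by an
    # assumed real-world efficiency factor (~70% of theoretical peak).
    return efficiency * bandwidth_gbps / model_size_gb

# DDR4-3200 (~50 GB/s) with a 4.0 GB quantized model:
print(round(tokens_per_second(50, 4.0), 1))   # ~8.8, i.e. roughly 9 tokens/s
# DDR5-5600 (~90 GB/s) with the same model:
print(round(tokens_per_second(90, 4.0)))      # ~16 tokens/s
```

The same formula explains why GPU VRAM bandwidth (hundreds of GB/s) delivers much higher token rates than system RAM.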
For comparison, high-end GPUs like the Nvidia RTX 3090 boast nearly 930 GBps of bandwidth for their VRAM, while DDR5-6400 RAM can provide up to 100 GB/s. Therefore, understanding and optimizing bandwidth is crucial for running models like Miqu efficiently.
Recommendations:
- For Best Performance: Opt for a machine with a high-end GPU (like NVIDIA's RTX 3090 or RTX 4090) or a dual-GPU setup to accommodate the largest models (65B and 70B). A system with adequate RAM (minimum 16 GB, but 64 GB is best) would be optimal.
- For Budget Constraints: If you're limited by budget, focus on Miqu GGML/GGUF models that fit within the system RAM. Remember, while you can offload some weights to the system RAM, it will come at a performance cost.
Remember, these are recommendations, and the actual performance will depend on several factors, including the specific task, model implementation, and other system processes.
CPU requirements
For best performance, a modern multi-core CPU is recommended. An Intel Core i7 from 8th gen onward or an AMD Ryzen 5 from 3rd gen onward will work well. A CPU with 6 or 8 cores is ideal. Higher clock speeds also improve prompt processing, so aim for 3.6GHz or more.
CPU instruction sets like AVX, AVX2, and AVX-512 can further improve performance if available. The key is a reasonably modern consumer-level CPU with a decent core count and clock speed, along with baseline vector processing through AVX2 (required for CPU inference with llama.cpp). With those specs, the CPU should handle Miqu's model sizes.
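On Linux, you can check for these instruction sets by reading `/proc/cpuinfo`. The helper below is a minimal sketch that parses a cpuinfo-style dump for a flag (the sample string is a hypothetical excerpt, not output from a specific CPU):

```python
def has_cpu_flag(flag, cpuinfo_text):
    """Check a /proc/cpuinfo-style dump (Linux) for an instruction-set flag."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line lists space-separated feature names.
            return flag in line.split()
    return False

# Hypothetical cpuinfo excerpt for illustration:
sample = "flags\t\t: fpu sse sse2 avx avx2 fma"
print(has_cpu_flag("avx2", sample))  # True for this sample
```

On a real system you would pass `open("/proc/cpuinfo").read()` instead of the sample string; on macOS or Windows, different tools (e.g. `sysctl` or CPU-Z) report these flags.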