Nous-Hermes LLM: Versions, Prompt Templates & Hardware Requirements

Updated: 2024-02-25

Explore all versions of the model, their file formats like GGML, GPTQ, and HF, and understand the hardware requirements for local inference.

The Nous-Hermes series comprises language models fine-tuned on more than 300,000 instructions. Developed by Nous Research and sponsored by Redmond AI, these models are based on the LLaMA, Llama-2, Mixtral, and Yi LLMs and aim to offer solid performance across a variety of tasks.

Version Highlights

Nous-Hermes
This model produces long responses with a low rate of hallucination and includes no censorship mechanisms. It was trained on an 8x A100 80GB DGX machine with a sequence length of 2000 for over 50 hours.

Nous-Hermes Llama-2
This version aims to stay consistent with the original Hermes model while enhancing its capabilities. Like its predecessor, it offers long responses and a low hallucination rate. Training was conducted on an 8x A100 80GB DGX machine with a 4096 sequence length.

Data Sources:
Both versions are trained on synthetic GPT-4 outputs and a variety of datasets, including GPTeacher, roleplay versions 1 & 2, code instruct, and unpublished datasets like Nous Instruct & PDACTL. They also incorporate subject-specific datasets from Camel-AI and Airoboros.

Consistency and Adaptability:
The Llama2-13b version is designed to be consistent with the previous Hermes model, catering to users who prefer not to shift to a radically different system.

The Nous-Hermes-13b series represents a balanced approach to fine-tuning and dataset curation in language models.

LimaRP, LLongMA, and SuperHOT

A subset of the Nous-Hermes models was fine-tuned for role play with the LimaRP LoRA.

This smaller dataset of high-quality examples aims to yield strong performance. This contrasts with typical strategies that use tens of thousands or even millions of training examples of widely varying quality.

For LimaRP, the conversational data is almost entirely human-generated. Each training instance is handpicked to meet subjective quality standards, with minimal chance of problematic responses.

When used to fine-tune LLaMA models like Nous-Hermes, the selective and targeted LimaRP dataset provides a customized boost for imaginative and high-quality conversational roleplaying abilities. The human-crafted examples offer nuanced training in a small package.

Hardware requirements

The performance of a Nous-Hermes model depends heavily on the hardware it's running on. For recommendations on the best computer hardware configurations to handle Nous-Hermes models smoothly, check out this guide: Best Computer for Running LLaMA and LLama-2 Models.

Below are the Nous-Hermes hardware requirements for 4-bit quantization:
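As a rough rule of thumb, a 4-bit quantized model needs about half a gigabyte per billion parameters, plus some headroom for the KV cache and runtime buffers. A minimal sketch (the 20% overhead factor is an illustrative assumption, not a published figure):

```python
def estimate_4bit_memory_gb(params_billions: float, overhead: float = 0.2) -> float:
    """Rough memory footprint of a 4-bit quantized model.

    4-bit weights take ~0.5 bytes per parameter; `overhead` is an
    assumed fudge factor for the KV cache and runtime buffers.
    """
    return params_billions * 0.5 * (1 + overhead)

for size in (7, 13, 33, 70):
    print(f"{size}B -> ~{estimate_4bit_memory_gb(size):.1f} GB")
```

The estimates (roughly 4, 8, 20, and 42 GB) line up with the per-size requirements in the tables below.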

For 7B Parameter Models

If a 7B Nous-Hermes model is what you're after, you'll want to think about hardware in two ways. First, for the GPTQ version, you'll want a decent GPU with at least 6GB VRAM. The GTX 1660 or 2060, AMD 5700 XT, or RTX 3050 or 3060 would all work nicely. But for the GGML / GGUF format, it's more about having enough RAM. You'll need around 4GB free to run it smoothly.

| Format | RAM Requirements | VRAM Requirements |
| --- | --- | --- |
| GPTQ (GPU inference) | 6GB (Swap to Load*) | 6GB |
| GGML / GGUF (CPU inference) | 4GB | 300MB |
| Combination of GPTQ and GGML / GGUF (offloading) | 2GB | 2GB |

*RAM needed to load the model initially. Not required for inference. If your system doesn't have quite enough RAM to fully load the model at startup, you can create a swap file to help with the loading.
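Before loading a model, you can check whether enough physical RAM is currently free. A Linux-only sketch using stdlib `os.sysconf` (on other OSes you would need platform-specific tools or a library like psutil):

```python
import os

def available_ram_gb() -> float:
    """Free physical memory in GB (Linux/glibc only, via sysconf)."""
    page_size = os.sysconf("SC_PAGE_SIZE")        # bytes per memory page
    avail_pages = os.sysconf("SC_AVPHYS_PAGES")   # currently free pages
    return page_size * avail_pages / 1024**3

def can_load(model_gb: float) -> bool:
    """True if the model should fit in currently free RAM (no swap needed)."""
    return available_ram_gb() >= model_gb

print(f"Free RAM: {available_ram_gb():.1f} GB")
print(f"Fits 4-bit 7B (~4 GB): {can_load(4.0)}")
```

If `can_load` returns False, a swap file (as described above) can still get the model loaded, at the cost of slow startup.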

For 13B Parameter Models

For beefier models like the Nous-Hermes-13B-SuperHOT-8K-fp16, you'll need more powerful hardware. If you're using the GPTQ version, you'll want a strong GPU with at least 10GB of VRAM. An AMD 6900 XT, RTX 2060 12GB, RTX 3060 12GB, or RTX 3080 would do the trick. For the CPU inference (GGML / GGUF) format, having enough RAM is key. You'll want your system to have around 8GB available to run it smoothly.

| Format | RAM Requirements | VRAM Requirements |
| --- | --- | --- |
| GPTQ (GPU inference) | 12GB (Swap to Load*) | 10GB |
| GGML / GGUF (CPU inference) | 8GB | 500MB |
| Combination of GPTQ and GGML / GGUF (offloading) | 10GB | 10GB |

*RAM needed to load the model initially. Not required for inference. If your system doesn't have quite enough RAM to fully load the model at startup, you can create a swap file to help with the loading.

For 30B, 33B, and 34B Parameter Models

If you're venturing into the realm of larger models, the hardware requirements shift noticeably. GPTQ models benefit from GPUs like the RTX 3080 20GB, A4500, A5000, and the like, demanding roughly 20GB of VRAM. Conversely, GGML-formatted models will require a significant chunk of your system's RAM, nearing 20GB.

| Format | RAM Requirements | VRAM Requirements |
| --- | --- | --- |
| GPTQ (GPU inference) | 32GB (Swap to Load*) | 20GB |
| GGML / GGUF (CPU inference) | 20GB | 500MB |
| Combination of GPTQ and GGML / GGUF (offloading) | 10GB | 4GB |

*RAM needed to load the model initially. Not required for inference. If your system doesn't have quite enough RAM to fully load the model at startup, you can create a swap file to help with the loading.

For 65B and 70B Parameter Models

When you step up to the big 65B and 70B models, you need some serious hardware. For GPU inference with GPTQ formats, you'll want a top-shelf GPU with at least 40GB of VRAM. We're talking an A100 40GB, dual RTX 3090s or 4090s, an A40, RTX A6000, or Quadro RTX 8000. You'll also need 64GB of system RAM. For GGML / GGUF CPU inference, have around 40GB of RAM available for both the 65B and 70B models.

| Format | RAM Requirements | VRAM Requirements |
| --- | --- | --- |
| GPTQ (GPU inference) | 64GB (Swap to Load*) | 40GB |
| GGML / GGUF (CPU inference) | 40GB | 600MB |
| Combination of GPTQ and GGML / GGUF (offloading) | 20GB | 20GB |

*RAM needed to load the model initially. Not required for inference. If your system doesn't have quite enough RAM to fully load the model at startup, you can create a swap file to help with the loading.

Memory speed

When running Nous-Hermes AI models, you need to pay attention to how RAM bandwidth and model size impact inference speed. These large language models must read the full set of weights from RAM or VRAM each time they generate a new token (piece of text). For example, a 4-bit quantized 7B parameter Nous-Hermes model takes up around 4.0GB of RAM.

Suppose you have a Ryzen 5 5600X processor and DDR4-3200 RAM with a theoretical max bandwidth of 50 GB/s. In this scenario, you can expect to generate approximately 9 tokens per second. Typically, real-world performance is about 70% of the theoretical maximum due to limiting factors such as inference software, latency, system overhead, and workload characteristics. To achieve a higher inference speed, say 16 tokens per second, you would need more bandwidth; for example, a system with DDR5-5600 offering around 90 GB/s would be enough.

For comparison, high-end GPUs like the Nvidia RTX 3090 boast nearly 930 GB/s of bandwidth for their VRAM, while DDR5-6400 RAM tops out around 100 GB/s. Therefore, understanding and optimizing memory bandwidth is crucial for running models like Nous-Hermes efficiently.
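Since every token requires one full pass over the weights, the back-of-the-envelope numbers above reduce to a one-line formula: bandwidth divided by model size, scaled by the ~70% efficiency factor the article assumes. A quick sketch:

```python
def tokens_per_second(bandwidth_gbs: float, model_gb: float,
                      efficiency: float = 0.7) -> float:
    """Estimated token rate: each token requires reading all weights once.

    `efficiency` (~70%) is the assumed fraction of theoretical bandwidth
    actually achieved in practice (software, latency, overhead).
    """
    return bandwidth_gbs / model_gb * efficiency

# 4-bit 7B model (~4.0 GB) on various memory systems:
print(round(tokens_per_second(50, 4.0)))   # DDR4-3200  -> 9 tok/s
print(round(tokens_per_second(90, 4.0)))   # DDR5-5600  -> 16 tok/s
print(round(tokens_per_second(930, 4.0)))  # RTX 3090 VRAM -> 163 tok/s
```

This reproduces the 9 and 16 tokens-per-second figures from the example above and shows why GPU VRAM is an order of magnitude faster.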

Recommendations:

  1. For Best Performance: Opt for a machine with a high-end GPU (like NVIDIA's RTX 3090 or RTX 4090) or a dual-GPU setup to accommodate the largest models (65B and 70B). A system with adequate RAM (minimum 16GB, but 64GB is best) would be optimal.
  2. For Budget Constraints: If you're limited by budget, focus on Nous-Hermes GGML/GGUF models that fit within the system RAM. Remember, while you can offload some weights to the system RAM, it will come at a performance cost.

Remember, these are recommendations, and the actual performance will depend on several factors, including the specific task, model implementation, and other system processes.
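The offloading trade-off in the budget recommendation can be sketched as a simple split: put as much of the model as fits on the GPU, keep the rest in system RAM. The 1 GB VRAM reserve below is an assumed allowance for the KV cache and activations, not a measured value:

```python
def split_model(model_gb: float, vram_gb: float,
                reserve_gb: float = 1.0) -> tuple:
    """Split a model between GPU VRAM and system RAM for partial offloading.

    `reserve_gb` is an assumed slice of VRAM kept free for the KV cache
    and activations; whatever doesn't fit on the GPU stays in RAM.
    Returns (gpu_gb, cpu_gb).
    """
    gpu_gb = min(model_gb, max(vram_gb - reserve_gb, 0.0))
    cpu_gb = model_gb - gpu_gb
    return gpu_gb, cpu_gb

# A ~8 GB 4-bit 13B model on a 6 GB card: ~5 GB on GPU, ~3 GB in RAM.
print(split_model(8.0, 6.0))
```

The larger the CPU-side share, the more each token is bottlenecked by system RAM bandwidth rather than VRAM, which is where the performance cost comes from.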

CPU requirements

For best performance, a modern multi-core CPU is recommended. An Intel Core i7 from the 8th gen onward or an AMD Ryzen 5 from the 3rd gen onward will work well. A CPU with 6 or 8 cores is ideal. Higher clock speeds also improve prompt processing, so aim for 3.6GHz or more.

Support for CPU instruction sets like AVX, AVX2, and AVX-512 can further improve performance where available. The key is a reasonably modern consumer-level CPU with a decent core count and clock speed, along with baseline vector processing through AVX2 (required for CPU inference with llama.cpp). With those specs, the CPU should handle Nous-Hermes models of this size.
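You can check which of these instruction sets your CPU exposes before attempting CPU inference. A Linux-only sketch that parses `/proc/cpuinfo` (other OSes need platform-specific tools such as `sysctl` on macOS):

```python
import platform

def cpu_flags() -> set:
    """Return lowercase CPU feature flags (Linux: parsed from /proc/cpuinfo)."""
    if platform.system() != "Linux":
        return set()  # other OSes require platform-specific tools
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                # "flags : fpu vme ... avx avx2 ..." -> set of flag names
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx", "avx2", "avx512f"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```

If `avx2` is missing, expect llama.cpp CPU inference to be significantly slower or to require a build without AVX2 support.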