Supported LLM Base Models

Navigator supports a variety of open-source LLMs, and more are added all the time. Below is the current list of supported models, with links to their Hugging Face model pages where you can learn more about each model's strengths and weaknesses. You can change which base model you train in your model configuration.

For most use cases where you expect to train or run inference on consumer hardware, we recommend a 7B-parameter model. This is the current sweet spot between model performance and size for consumer devices. All LLM features in Navigator work well with 7B-parameter models.

Full list of supported models

TinyLlama/TinyLlama-1.1B-Chat-v1.0
gradientai/Llama-3-8B-Instruct-262k
mlx-community/Meta-Llama-3-8B-Instruct  # 4 bit & 8 bit
mlx-community/gemma-1.1-2b-it-4bit
mlx-community/Mixtral-8x22B-Instruct-v0.1-4bit
mlx-community/WizardLM2-8x22B-4bit-mlx
mlx-community/Meta-Llama-3-70B-Instruct-4bit
mlx-community/Phi-3-mini-4k-instruct  # 4 bit & 8 bit
mlx-community/Phi-3-mini-128k-instruct  # 4 bit & 8 bit
mlx-community/OpenELM-270M-Instruct
mlx-community/OpenELM-450M-Instruct
mlx-community/OpenELM-1_1B-Instruct  # 4 bit & 8 bit
mlx-community/Qwen1.5-1.8B-Chat-4bit
mlx-community/Qwen1.5-0.5B-Chat-4bit
mlx-community/Qwen1.5-7B-Chat-4bit
mlx-community/Qwen1.5-72B-Chat-4bit
Maykeye/TinyLLama-v0
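
The identifiers above are standard Hugging Face model IDs, so you can also load one directly for a quick local sanity check before configuring it in Navigator. The sketch below is an illustration only, using the open-source mlx-lm package (which the mlx-community models are packaged for); it is not Navigator's own training or inference API, and the prompt and model choice are just examples.

```python
# Minimal sketch: load a supported base model by its Hugging Face ID
# with the open-source mlx-lm package and run a short generation.
# Illustration only; Navigator configures the base model for you.
from mlx_lm import load, generate

# A small chat model from the supported list; swap in a 7B model such as
# mlx-community/Qwen1.5-7B-Chat-4bit if your hardware allows it.
model_id = "mlx-community/Qwen1.5-0.5B-Chat-4bit"

model, tokenizer = load(model_id)
response = generate(
    model,
    tokenizer,
    prompt="Briefly explain what a base model is.",
    max_tokens=128,
)
print(response)
```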