Mobilint Model Zoo#

Mobilint’s Model Zoo (mblt-model-zoo) is a curated collection of deep learning models optimized for Mobilint’s Neural Processing Units (NPUs).

Designed to help developers accelerate deployment, Mobilint’s Model Zoo offers access to over 400 public, pre-trained, and pre-quantized vision, language, and multimodal models in the .mxq format, optimized for Mobilint’s runtime. Along with performance results, we provide pre- and post-processing tools so developers can evaluate, fine-tune, and integrate the models with ease.

New models are added regularly to support evolving AI workloads and customer requests.
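
The Model IDs listed in the tables below follow the Hugging Face Hub naming convention (mobilint/<model-name>). Assuming the pre-quantized artifacts are published under those IDs on the Hub, the minimal sketch below shows one way to fetch a model locally with huggingface_hub; the repo ID is taken from the Large Language Models table, and the exact hosting location and file layout may differ for your setup.

```python
# Minimal download sketch, assuming the listed Model IDs are Hugging Face Hub
# repositories. Replace repo_id with any Model ID from the tables below.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="mobilint/Llama-3.2-1B-Instruct")
print(f"Model files downloaded to: {local_dir}")
```

The downloaded directory should contain the pre-quantized .mxq artifact consumed by Mobilint’s runtime, along with any tokenizer or configuration files published with the model.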

See also

To download and access the Model Zoo, click here.

Tip

To check which MXQ files are compatible with your runtime environment, refer to the MXQ compatibility matrix.

Supported Models#

The following is a non-exhaustive list of the models currently available in Mobilint’s Model Zoo.

Transformer Models#

Large Language Models#

| Model | Model ID | Original Source |
|---|---|---|
| EXAONE-3.5-2.4B-Instruct | mobilint/EXAONE-3.5-2.4B-Instruct | Link |
| EXAONE-4.0-1.2B | mobilint/EXAONE-4.0-1.2B | Link |
| EXAONE-Deep-2.4B | mobilint/EXAONE-Deep-2.4B | Link |
| HyperCLOVAX-SEED-Text-Instruct-1.5B | mobilint/HyperCLOVAX-SEED-Text-Instruct-1.5B | Link |
| Llama-3.1-8B-Instruct | mobilint/Llama-3.1-8B-Instruct | Link |
| Llama-3.2-1B-Instruct | mobilint/Llama-3.2-1B-Instruct | Link |
| Llama-3.2-3B-Instruct | mobilint/Llama-3.2-3B-Instruct | Link |
| c4ai-command-r7b-12-2024 | mobilint/c4ai-command-r7b-12-2024 | Link |
| bert-base-uncased | mobilint/bert-base-uncased | Link |
| Qwen2.5-7B-Instruct | mobilint/Qwen2.5-7B-Instruct | Link |

Speech-To-Text Models#

| Model | Model ID | Original Source |
|---|---|---|
| whisper-small | mobilint/whisper-small | Link |

Vision Language Models#

| Model | Model ID | Original Source |
|---|---|---|
| aya-vision-8b | mobilint/aya-vision-8b | Link |
| blip-image-captioning-large | mobilint/blip-image-captioning-large | Link |
| Qwen2-VL-2B-Instruct | mobilint/Qwen2-VL-2B-Instruct | Link |

Vision Models#

Image Classification (ImageNet)#

| Model | Original Source |
|---|---|
| AlexNet | Link |
| ConvNeXt_Tiny | Link |
| ConvNeXt_Small | Link |
| ConvNeXt_Base | Link |
| ConvNeXt_Large | Link |
| DenseNet121 | Link |
| DenseNet169 | Link |
| DenseNet201 | Link |
| GoogLeNet | Link |
| Inception_V3 | Link |
| MNASNet1_0 | Link |
| MNASNet1_3 | Link |
| MobileNet_V2 | Link |
| RegNet_X_400MF | Link |
| RegNet_X_800MF | Link |
| RegNet_X_1_6GF | Link |
| RegNet_X_3_2GF | Link |
| RegNet_X_8GF | Link |
| RegNet_X_16GF | Link |
| RegNet_X_32GF | Link |
| RegNet_Y_400MF | Link |
| RegNet_Y_800MF | Link |
| RegNet_Y_1_6GF | Link |
| RegNet_Y_3_2GF | Link |
| RegNet_Y_8GF | Link |
| RegNet_Y_16GF | Link |
| RegNet_Y_32GF | Link |
| ResNet18 | Link |
| ResNet34 | Link |
| ResNet50 | Link |
| ResNet101 | Link |
| ResNet152 | Link |
| ResNeXt50_32X4D | Link |
| ResNeXt101_32X8D | Link |
| ResNeXt101_64X4D | Link |
| ShuffleNet_V2_X1_0 | Link |
| ShuffleNet_V2_X1_5 | Link |
| ShuffleNet_V2_X2_0 | Link |
| VGG11 | Link |
| VGG11_BN | Link |
| VGG13 | Link |
| VGG13_BN | Link |
| VGG16 | Link |
| VGG16_BN | Link |
| VGG19 | Link |
| VGG19_BN | Link |
| Wide_ResNet50_2 | Link |
| Wide_ResNet101_2 | Link |
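
The classification models above originate from torchvision and expect ImageNet-style inputs. As an illustrative sketch only (the Model Zoo ships its own pre- and post-processing tools, and the exact input size, layout, and normalization for a given .mxq model may differ, e.g. Inception_V3 uses 299×299 inputs), typical ImageNet pre-processing and top-1 post-processing look like this:

```python
# Illustrative sketch of standard ImageNet evaluation pre-processing
# (resize, center-crop, normalize) and top-1 post-processing.
# Not the Model Zoo's own tooling; parameters may differ per model.
import numpy as np
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")
input_tensor = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

# Post-processing: model logits (placeholder array here) -> top-1 class index.
logits = np.random.randn(1, 1000)              # stand-in for NPU output
top1_class = int(np.argmax(logits, axis=1)[0])
print(top1_class)
```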

Object Detection (COCO)#

| Model | Original Source |
|---|---|
| yolov3u | Link |
| yolov3-spp | Link |
| yolov3-sppu | Link |
| yolov5su | Link |
| yolov5mu | Link |
| yolov5lu | Link |
| yolov5l6 | Link |
| yolov5xu | Link |
| yolov5x6 | Link |
| yolov7 | Link |
| yolov7x | Link |
| yolov8s | Link |
| yolov8m | Link |
| yolov8l | Link |
| yolov8x | Link |
| yolov9m | Link |
| yolov9c | Link |
| yolo11s | Link |
| yolo11m | Link |
| yolo11l | Link |
| yolo11x | Link |
| yolo12s | Link |
| yolo12m | Link |

Instance Segmentation (COCO)#

| Model | Original Source |
|---|---|
| yolov5l-seg | Link |
| yolov5x-seg | Link |
| yolov8s-seg | Link |
| yolov8m-seg | Link |
| yolov8l-seg | Link |
| yolov8x-seg | Link |
| yolov9c-seg | Link |
| yolo11s-seg | Link |
| yolo11m-seg | Link |
| yolo11l-seg | Link |
| yolo11x-seg | Link |