ncnn is a high-performance neural network inference framework optimized for mobile platforms. Designed from the ground up with mobile deployment in mind, it has no third-party dependencies, is cross-platform, and delivers faster CPU inference on mobile devices than all known open-source frameworks.
Project Address: https://github.com/Tencent/ncnn
Development Team: Tencent Open Source Project
| Platform/Hardware | Windows | Linux | Android | macOS | iOS |
|---|---|---|---|---|---|
| Intel CPU | ✔️ | ✔️ | ❔ | ✔️ | / |
| Intel GPU | ✔️ | ✔️ | ❔ | ❔ | / |
| AMD CPU | ✔️ | ✔️ | ❔ | ✔️ | / |
| AMD GPU | ✔️ | ✔️ | ❔ | ❔ | / |
| NVIDIA GPU | ✔️ | ✔️ | ❔ | ❔ | / |
| Qualcomm | ❔ | ✔️ | ✅ | / | / |
| ARM CPU | ❔ | ❔ | ✅ | / | / |
| Apple CPU | / | / | / | ✔️ | ✅ |
✅ = Known to run and perform excellently; ✔️ = Known to run; ❔ = Theoretically feasible but not confirmed; / = Not applicable
ncnn is currently used in several core Tencent applications, including WeChat, QQ, and Qzone.
ncnn supports building on all of the platforms listed in the table above.
It is recommended to start with the Using ncnn with AlexNet tutorial, which provides detailed step-by-step instructions and is especially suitable for beginners.
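To give a taste of what that tutorial covers, a minimal inference sketch with ncnn's C++ API might look like the following. The file names `alexnet.param`/`alexnet.bin` and the blob names `data`/`prob` are assumptions based on a typical AlexNet conversion; check the tutorial for the exact names your exported model uses.

```cpp
#include <cstdio>
#include <vector>
#include "net.h"  // ncnn

int main()
{
    ncnn::Net net;
    // Load the network description and weights. The file names here are
    // assumptions; use whatever your model conversion step produced.
    if (net.load_param("alexnet.param") != 0 || net.load_model("alexnet.bin") != 0)
        return -1;

    // A dummy 227x227 BGR image buffer; in a real application this would
    // come from your image loader.
    std::vector<unsigned char> pixels(227 * 227 * 3, 0);
    ncnn::Mat in = ncnn::Mat::from_pixels(pixels.data(), ncnn::Mat::PIXEL_BGR, 227, 227);

    // Run a forward pass and read out the classification scores.
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in);
    ncnn::Mat out;
    ex.extract("prob", out);

    for (int i = 0; i < out.w; i++)
        printf("class %d score %f\n", i, out[i]);
    return 0;
}
```

Building this requires linking against the ncnn library; the tutorial walks through that setup step by step.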
ncnn is an ideal choice for mobile AI application development, especially for developers and enterprises that need to deploy deep learning models on mobile devices.