The Coral USB Accelerator is an awesome little device that adds an Edge TPU coprocessor to your existing system. Just connect it to a host computer over its USB-C port to get accelerated ML inferencing on a wide range of systems, including Linux, Mac, and Windows.
High-speed ML Inferencing
The nifty on-board Edge TPU is a small ASIC designed by Google that accelerates TensorFlow Lite models in a remarkably power-efficient way: it delivers a whopping 4 trillion operations per second (4 TOPS) while drawing just 2 watts, which works out to 2 TOPS per watt!
To give you an idea of what this little device is capable of, one Edge TPU can run state-of-the-art mobile vision models such as MobileNet v2 at almost 400 frames per second. Because everything happens right on your device, on-device ML processing boosts speed, reduces latency, keeps your data private, and works without a constant internet connection. It's a little powerhouse that makes ML inferencing a piece of cake!
Compatible with all major platforms
Simply connect via USB to any system running Debian Linux (including Raspberry Pi), macOS, or Windows 10 and you’re good to go! It’s a versatile little gadget that can be integrated into various setups.
TensorFlow Lite Support
You won’t need to build models from the ground up either. TensorFlow Lite models can be compiled to run seamlessly on the Edge TPU.
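As a sketch of that workflow (assuming the `edgetpu_compiler` package from Coral's apt repository is installed on a Linux host, and using a placeholder model filename): the compiler takes a fully int8-quantized TensorFlow Lite model and emits an Edge TPU-compatible version next to the input file.

```shell
# Compile a quantized TensorFlow Lite model for the Edge TPU.
# Requires the edgetpu_compiler package (Linux only); the input
# model must already be fully int8-quantized.
edgetpu_compiler mobilenet_v2_quant.tflite

# The compiler writes mobilenet_v2_quant_edgetpu.tflite alongside
# the input, which you then load on the host with the Edge TPU
# runtime delegate at inference time.
```

Only the model's filename changes from your point of view; any ops the compiler can't map to the Edge TPU simply fall back to running on the host CPU.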
So, if you want high-speed ML inferencing on almost any platform, the Coral USB Accelerator is the way to go. Just plug it in and start inferencing!
ML accelerator: Google Edge TPU coprocessor; 4 TOPS (int8), 2 TOPS per watt
Connector: USB 3.0 Type-C* (data/power)
Dimensions: 65 mm x 30 mm
*Compatible with USB 2.0, but inferencing speed is slower.