We now have over 30 distributors around the world who carry Pixy and other Charmed Labs products! Check out the full list here: http://charmedlabs.com/default/where-to-order-pixy/
If you’d like to apply for distributor status, please send us an email with your web store and some details about your location and business. Some of our underserved markets are Russia, Taiwan, and Israel. Get in touch at [email protected]
Here’s a video with the highlights of this release:
OK, that’s the big news. Some smaller news: we have a Python API for our Linux, Raspberry Pi, and BeagleBone Black users. And we’ve added lots of new distributors, including ones in Canada, Australia, China, Japan, Korea, Singapore, Italy, and two in Germany. (If you know of an online retailer that you think would be a good fit for Pixy, please send us a note!)
But back to the new release…
Way back in July we sent out a survey asking what you wanted us to work on, and a significant percentage of you asked for improved detection accuracy. So that’s just what we did: we paused what we were working on, refocused our efforts like little Pixys tracking lasers, and rewrote large sections of Pixy’s firmware with the goal of improving detection accuracy. It took us a long time, but we’re happy with the results.
Improved detection accuracy — a weakness of previous firmware versions was false positives, where Pixy detected things that you didn’t intend. Pixy’s new firmware is more robust, using a more accurate color filtering algorithm. (The new color filtering algorithm is more computationally expensive too, but we spent lots of time optimizing, and it runs at 50 frames per second like before — yay!)
Simplified and effective color signature adjustments — another problem with previous firmware versions was that it was somewhat difficult to “tweak” things if Pixy didn’t reliably detect the object you taught it. There were minimum saturation, hue range, and a couple of other parameters you could adjust, but the process was unintuitive, and you always needed to re-teach after tweaking. The new firmware uses a single slider for each color signature: slide it to the left and the signature becomes less inclusive, slide it to the right and it becomes more inclusive, with everything adjusted on-the-fly. It’s super easy… and kinda fun to be honest, but we’re sorta weird in that way.
Improved button-teach method — you’ve always been able to teach Pixy an object by pressing the button and holding your object in front of Pixy. This feature had room for improvement though. Sometimes Pixy would complain that the object’s hue wasn’t saturated enough. Sometimes it would learn your object, but detection accuracy would be an issue. The new firmware can learn objects with a huge range of color saturation levels. And when it learns an object, the detection accuracy is greatly improved.
Improved implementation of color codes — you may have noticed that our color code implementation never made it out of beta status. That’s because we simply weren’t happy with it. In this firmware version color codes are much improved — more accurate and easier to use.
New features added to the serial interfaces — many of you wanted to be able to control Pixy’s camera brightness/exposure as well as Pixy’s RGB LED from an Arduino. These controls have been added to the Arduino API. We’ve also added pan/tilt servo control to the UART and I2C interfaces, alongside the camera brightness and LED controls, and added “SPI with slave select” as an interface option.
Saving and loading of onboard Pixy parameters — you can save Pixy’s parameters, including color signatures, on your computer and restore them to your Pixy or even copy them to another Pixy. This was in the previous beta release, but it’s also been improved.
More developments coming!
Many of you have asked about the status of the GCC-compatible version of the firmware (what we’re calling the “Firmware SDK”). It’s next on our list. Much of the work is already done, so we’re hoping it will be released soon. And we’re going to release a face detection algorithm after that. These projects have been piled up behind this release and are running far behind schedule, so we’ll be glad to move on to these next tasks and get them out the door.
We can always use help — if you’re a developer of any sort and want to help with the CMUcam5 Pixy project, please send us a note!
Hope everyone is having a great summer! We have a new version of PixyMon and new firmware that support color codes. It’s a beta release, so please check it out and tell us what you think. (And if you don’t own a Pixy yet, go here!)
We also have a new library for communicating with Pixy over USB — we call it (appropriately) libpixyusb. It’s great if you want to talk to Pixy from a Raspberry Pi, a BeagleBone, or any other board with a USB host port. (PixyMon already runs on these platforms, but if you wanted to write your own program to talk to Pixy over USB, there was no convenient way to do it. Libpixyusb makes it easy.)
We are actively working on a GCC port for the firmware. (Currently, Pixy’s firmware only compiles using the Keil compiler, which is a great tool, but it costs money.) The GCC port is coming along well and we hope to release it by the end of August. When it’s released you’ll be able to compile Pixy firmware with a free IDE (LPCXpresso) or just plain GCC. And, of course, this will bring Pixy firmware development to anyone who wants to develop their own vision algorithms. It’ll be awesome — there are lots of great ideas out there!
Speaking of algorithms, we’ve brought in some outside help to work on face detection for Pixy. We have the core algorithm working and are starting to work on optimization and platform support. A release date hasn’t materialized yet — we’ll keep you posted!