I experimented with bounding boxes a few years ago with OpenCV and Processing — I couldn’t get enough of them.
— ah yes, I have labelme, and it's a lot of fun.
— my direction lately has been looking hard at the pre-processing stages, which cause many of the problems with "accidentally racist" facial recognition that mislabels Black Americans far too often and has been rolled out to so many police departments.
My 2016 experiments: https://www.youtube.com/watch?v=1tKOe6DzKjk (if you have seizures, don't watch; I do random thresholds).
Graphics are usually not processed in color. Instead, pipelines often do something drop-dead stupid like picking only one of the RGB channels and using it as the grayscale (it's fast), and stupidity in the preprocessing stages carries over into the final result.
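To make the contrast concrete, here is a minimal sketch, using a tiny synthetic image rather than a real photo: the "fast" approach that keeps a single channel versus the standard ITU-R BT.601 luma weighting that combines all three. A pixel whose detail lives mostly in the discarded channels simply vanishes under the naive approach.

```python
import numpy as np

# Synthetic 2x2 RGB image, values in [0, 1]; a real pipeline would load a photo.
rgb = np.zeros((2, 2, 3), dtype=np.float64)
rgb[..., 0] = 0.9   # strong red channel
rgb[..., 1] = 0.1   # weak green
rgb[..., 2] = 0.1   # weak blue

# The fast-but-wrong approach: keep only one channel (here, green).
# All the information carried by red and blue is thrown away.
naive_gray = rgb[..., 1]

# ITU-R BT.601 luma weights, which approximate perceived brightness
# by combining all three channels.
luma_gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

print(naive_gray[0, 0])  # 0.1    -- the red detail is gone
print(luma_gray[0, 0])   # 0.3392 -- closer to perceived brightness
```

The gap between 0.1 and 0.34 on the same pixel is exactly the kind of preprocessing loss that never shows up when the test set happens to be uniform, but matters when it isn't.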
Developers are likely to work only with data that works, in order to meet their deadlines. So if you get a training set of 100,000 images that all work, it feels complete to the bosses, and they can roll it out, make big announcements, and sell or give away products to governments as much as they like, all with flaws that don't show up until something god-awful happens. Only then do they realize that having an all-white-male programming team and an all-white-male administration might have caused them not to consider diversity of appearance in the training set and in beta testing, EVEN IF they believe they are diverse, have otherwise diverse policies, etc.
Ethics has to be carried through the whole pipeline. It's not just that they were fooled into joining the program; it's that deploying the system says THIS is representative of Black Americans.
I'll give an example with white faces. I'm not a Republican, and this bothered me: these "typical American faces" were not American, yet they were used as if they were, in a US re-election campaign.