Happy Monday everyone! It’s hard (and quite sad) to believe that we’re at the beginning of week 9, and that in exactly two weeks I’ll be on a plane headed for London. We’re pushing ahead at full speed to get everything done before the end of the internship, both work- and travel-wise. This weekend, all 8 of us piled onto another Knight bus and took a 36-hour trip to Hampi, which Avia talks more about. Sunday afternoon we went and saw a play (Noises Off), gorged ourselves on chaat, and then poked around Blossom’s bookstore for a while.
On the work front, we’ve been working hard to complete the necessary enhancements to the SABT and the BWT. Kannada Braille has been a little harder to implement than expected, likely because certain Kannada characters are represented as multiple characters when typed out (a “base” character accompanied by a type of accent mark).

We’ve also been making good progress on the new tactile graphics project, which Poornima and Aditya have talked about. Poornima is writing a GUI (Graphical User Interface, i.e. the front end users interact with) in Java, while Aditya and I have been working on some Python code to pull down pictures from Google Images and then select the “best” one. Ideally a sighted teacher would pick the best image from a set of choices, but we want our program to do something reasonable if a blind teacher is using it. Having a program choose the “best” image, however, is much easier said than done. It’s tricky for a couple of reasons. First, “best” is a loose term, and even people can’t always come to a consensus (we may or may not have been having heated arguments over which hippo pictures are better). Second, even if a consensus is reached, it is hard to quantify exactly what makes that image better. As of right now, we are using three simple metrics: how many disjoint components the image has, what proportion of the image is white space, and what the image’s search rank on Google was. These metrics are based on three assumptions, respectively: more disjoint components mean the image is more complicated; more dark pixels mean there are most likely extraneous details we don’t need; and Google is usually pretty good at ranking things. All three metrics are relatively simple to obtain, the most difficult being the number of connected components (which is calculated with a slightly modified flood-fill algorithm).
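To give a flavor of what those first two metrics look like, here’s a minimal Python sketch (not our actual code): the image is a toy grayscale grid, and the component counter uses a plain breadth-first flood fill rather than our modified version. The `threshold` parameter for “dark vs. white” is an assumption for illustration.

```python
from collections import deque

def count_components(pixels, threshold=128):
    """Count disjoint dark regions in a grayscale image (list of rows
    of 0-255 values) using a 4-connected breadth-first flood fill."""
    h, w = len(pixels), len(pixels[0])
    seen = [[False] * w for _ in range(h)]
    components = 0
    for y in range(h):
        for x in range(w):
            if pixels[y][x] < threshold and not seen[y][x]:
                components += 1
                # Flood-fill outward from this seed pixel,
                # marking every reachable dark pixel as visited.
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx]
                                and pixels[ny][nx] < threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return components

def white_space_ratio(pixels, threshold=128):
    """Fraction of pixels at or above the threshold (i.e. 'white')."""
    total = sum(len(row) for row in pixels)
    white = sum(1 for row in pixels for p in row if p >= threshold)
    return white / total
```

On a tiny 3×3 test grid with two isolated dark pixels on the top row and a dark bottom row, this counts three components, which matches the intuition that each separate blob of ink becomes one raised region on a tactile graphic.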
We’re still trying to figure out how best to combine these metrics to select a picture, but the algorithm is already doing decently: it picks Koala B as the simplest (in our opinion, at least).
[Images: Koala B and Koala C]
Right now I’m working on adding a component to the algorithm that will detect how many separate objects are in the image (which is slightly different, and more holistic, than counting disjoint components) so we can penalize images with backgrounds, as those are more complicated. This will most likely involve calculating bounding boxes for the shapes and then testing for containment (it’s a bit of an asymptotic-complexity nightmare, but yay for constants?). Stay tuned for next week’s update, when we’ll be wrapping everything up!
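The bounding-box idea might look something like the sketch below (hypothetical helpers, not our implementation). Boxes are `(x1, y1, x2, y2)` tuples; the pairwise containment check is the O(n²) part, which is fine since an image rarely has more than a handful of shapes:

```python
def bounding_box(points):
    """Axis-aligned bounding box of a set of (x, y) points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def contains(outer, inner):
    """True if box `inner` lies entirely inside box `outer`."""
    ox1, oy1, ox2, oy2 = outer
    ix1, iy1, ix2, iy2 = inner
    return ox1 <= ix1 and oy1 <= iy1 and ix2 <= ox2 and iy2 <= oy2

def count_objects(boxes):
    """Count boxes not contained in any other box, treating each
    top-level box as one separate object. O(n^2) pairwise checks."""
    return sum(
        1 for i, b in enumerate(boxes)
        if not any(i != j and contains(other, b) for j, other in enumerate(boxes))
    )
```

For example, a small box nested inside a big one (say, an eye inside a hippo outline) collapses into a single object, while a detached box elsewhere in the image counts as a second object.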