The first step in development was to identify the capabilities and limits of the new Scan ML test. I used an example Lens provided by Snap to discover which items in my home were most easily detected and understood, from the food in my freezer to my shoes to my bed.
Once the categories with the highest detection rates were identified, it was time to figure out what form the Lens would take. We ended up deciding on a habit-forming app that could detect whether the user was reading a book, eating, drinking, about to go for a walk, going to sleep, or about to brush their teeth. The Lens would detect the activity, prompt the user with a button to confirm that the detection was correct, and then plot it on a screen showing which habits were performed each day over the course of a week.
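To make the detect → confirm → weekly-chart flow concrete, here is a minimal TypeScript sketch of how the confirmed activities could be modeled and grouped by day for the weekly screen. The category names, types, and function names are illustrative assumptions, not the Lens's actual source.

```typescript
// Illustrative data model for the detect -> confirm -> weekly-chart flow.
// Names and shapes are assumptions made for this sketch.

type Habit =
  | "reading"
  | "eating"
  | "drinking"
  | "walking"
  | "sleeping"
  | "brushingTeeth";

interface HabitEntry {
  habit: Habit;
  timestamp: number; // epoch ms when the user tapped the confirmation button
}

// Group confirmed entries by calendar day so the weekly screen can show
// which habits were performed on each of the last seven days.
function habitsByDay(entries: HabitEntry[]): Map<string, Set<Habit>> {
  const byDay = new Map<string, Set<Habit>>();
  for (const e of entries) {
    const day = new Date(e.timestamp).toISOString().slice(0, 10); // YYYY-MM-DD
    if (!byDay.has(day)) byDay.set(day, new Set());
    byDay.get(day)!.add(e.habit);
  }
  return byDay;
}
```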
Because Snap’s local storage functionality is limited to 24-hour saves, we ended up re-saving a week’s worth of data every time the user loaded the Lens, so that the stats persisted over the course of a week. As long as the user got in the habit of loading the Lens at least once every 24 hours, the data would safely persist.
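A minimal sketch of that re-save workaround is below, assuming Lens Studio's persistent storage store with its putString/getString methods; the storage key, JSON shape, and function names are illustrative assumptions rather than the project's actual code.

```typescript
// Sketch of the weekly re-save workaround. Assumes Lens Studio's
// persistentStorageSystem store (getString/putString); key names and
// the JSON shape are illustrative, not the project's actual code.

// Minimal typing so the sketch stands alone outside Lens Studio.
declare const global: {
  persistentStorageSystem: {
    store: {
      getString(key: string): string;
      putString(key: string, value: string): void;
    };
  };
};

interface HabitEntry {
  habit: string;     // e.g. "reading", "walking"
  timestamp: number; // epoch ms when the user confirmed the activity
}

const STORE_KEY = "habitLog"; // illustrative key name
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// On every Lens load: read the saved log, drop entries older than a week,
// and immediately write it back so the 24-hour persistence window restarts.
function refreshWeeklyLog(): HabitEntry[] {
  const raw = global.persistentStorageSystem.store.getString(STORE_KEY);
  const entries: HabitEntry[] = raw ? JSON.parse(raw) : [];
  const cutoff = Date.now() - WEEK_MS;
  const recent = entries.filter((e) => e.timestamp >= cutoff);
  global.persistentStorageSystem.store.putString(STORE_KEY, JSON.stringify(recent));
  return recent;
}

// Called when the user taps the confirmation button for a detected activity.
function recordHabit(entries: HabitEntry[], habit: string): void {
  entries.push({ habit, timestamp: Date.now() });
  global.persistentStorageSystem.store.putString(STORE_KEY, JSON.stringify(entries));
}
```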
Once the fundamentals of the Lens were locked down, we iterated frequently with Snap up through the Lens's rollout at the Snap Partner Summit, where the Scan ML functionality was first introduced.