Project 10 Sousveillance Tool: Multi-Reality Camera (Giphy API)

The camera may take up to one minute to load all of the APIs. Please wait patiently for it...

p5.js Link

The Presentation Slide

Description

Sousveillance = the individual point of view of the current world

Current World = Multiple realms of context: the Internet Realm + the actual reality seen through the eyes

This is a camera that deep dives into the idea of sousveillance by detecting/scanning the user's reality/environment and applying the corresponding content from the Giphy API, to juxtapose what kind of world we live in "now".

Design Concept & Design Process

Continuing with what I started on the multi-dimension reality camera, this time I planned to figure out the technical part that didn't work out last time. Previously, neither of the two functions, detecting the image data and drawing the corresponding image, completely worked out: they would start scanning and drawing at the same time, so the images drawn on the camera screen were also being scanned, and that really messed up the detection function. Therefore, this time I made two buttons for the two functions so the user can activate each one at their desired time. And I actually made it happen! I spent a really long time trying to rewrite the code, because the logic/format I originally wrote would just crash the sketch.
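The rough structure looks something like the sketch below. This is a simplified, hedged reconstruction rather than my exact code: the detection and GIF-overlay calls are placeholders, and the variable names (scanning, drawing, lastLabel, detectObjects) are illustrative, not the ones in my sketch.

```javascript
// Minimal p5.js sketch of the two-button separation described above.
let video;
let scanning = false; // only read camera pixels while this is true
let drawing = false;  // only overlay fetched GIFs while this is true
let lastLabel = '';

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();

  // Separate buttons so scanning and drawing never run at the same time
  createButton('Scan').mousePressed(() => {
    scanning = true;
    drawing = false; // stop overlaying so the overlay isn't scanned too
  });
  createButton('Draw').mousePressed(() => {
    drawing = true;
    scanning = false;
  });
}

function draw() {
  image(video, 0, 0); // live camera feed, redrawn every frame

  if (scanning) {
    // Placeholder: run the classifier on the clean camera frame only
    // lastLabel = detectObjects(video);
  }
  if (drawing && lastLabel !== '') {
    // Placeholder: overlay the GIF fetched for lastLabel
    // image(gif, width / 2, height / 2);
  }
}
```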

Once the camera starts working, the user sees two buttons at the bottom of the screen. One is "Scan"; the other is "Draw". Here I separated the two functions: scanning the visual information about the environment taken by the camera, and searching for the identified objects in the Giphy API to draw on top of the camera screen. Giving these two functions their own buttons gives the user control over what the camera executes at any given moment. This way, the camera is not constantly scanning and taking visual data from the user's environment. Moreover, it only starts drawing once the user has granted permission to do so.
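For reference, the Giphy lookup step can be written roughly like this, using GIPHY's standard Search endpoint. The API key, function name, and variables here are placeholders, so treat it as a hedged sketch rather than the code from my actual sketch file.

```javascript
// Hedged sketch of fetching one GIF for a detected label via the
// GIPHY Search endpoint. YOUR_GIPHY_KEY is a placeholder.
let gif; // p5.Image holding the fetched GIF

function searchGiphy(label) {
  const url =
    'https://api.giphy.com/v1/gifs/search' +
    '?api_key=YOUR_GIPHY_KEY' +
    '&q=' + encodeURIComponent(label) +
    '&limit=1';

  fetch(url)
    .then((res) => res.json())
    .then((json) => {
      if (json.data.length > 0) {
        // loadImage in p5.js 1.x can load animated GIFs directly
        gif = loadImage(json.data[0].images.original.url);
      }
    });
}
```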

Originally, however, I was planning to have the images drawn out with a painting effect. But there are too many things (including the consentful interface) running in this one sketch, and the painting effect didn't work out no matter how hard I tried. The main reason is that the painting function requires the canvas to keep its previous frames layered on top of each other, while the camera function requires the canvas to be redrawn continuously. That is why they can't happen on the same canvas.
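The conflict can be seen in a minimal sketch like the one below. It also shows one possible direction I did not manage to get working in the project: painting onto a separate offscreen layer made with createGraphics(), which survives the camera's per-frame redraw. This is an assumption about a fix, not my final code.

```javascript
// Sketch of the camera-vs-painting conflict, with an offscreen layer
// (an assumed workaround, not the code from the actual project).
let video;
let paintLayer;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  paintLayer = createGraphics(640, 480); // persists between frames
}

function draw() {
  // The camera repaints the whole canvas every frame, which is what
  // wipes out strokes painted directly onto the main canvas...
  image(video, 0, 0);

  // ...but strokes stored on the offscreen layer survive the redraw
  if (mouseIsPressed) {
    paintLayer.noStroke();
    paintLayer.fill(255, 0, 0);
    paintLayer.circle(mouseX, mouseY, 20);
  }
  image(paintLayer, 0, 0);
}
```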

Originally, I was also planning to improve the UI more, since I thought I would have time once I figured out all the functionality (which I did, aside from the way the images are drawn out on the canvas). However, with the time given and all my other finals going on in the same week, I wasn't able to do so.

Reflection

I was disappointed that I couldn't at least get the painting function to work. I think there must be a way to do it, but with the limited coding knowledge I have, it didn't happen. This might be something I continue to work on in the future, along with the other AR stuff I have been working on. For example, I could make an array or an updating 3D model asset. Then, every time the user opens this camera, they would be able to scan their environment and see 3D models displayed right beside the things they see in front of them. It would be like AR, but linked to real-time APIs. They could choose which kind of API they want to display, like "3D model API related to COVID", "3D model API related to Instagram", etc. I might look into this during winter break.