Project #8 Experimental Camera: Multidimensional Reality Camera

The camera may take up to one minute to load all the APIs. Please wait patiently...
Description
Giphy API (P5.JS Link)
This one works, but only on phones or tablets (because it uses the back camera instead of the front camera).
Google API (P5.JS Link)
This one still doesn't work perfectly, and it also only works on phones or tablets.
The camera detects what it captures and paints the matching object found through the web API on top of it.

"The camera are your eyes that helps you see multidimensional (real world + virtual world from the extternal api) reality."
Design Concept
What is "this"? | The world we see
You have to admit that we have all been living in a multidimensional reality for a while now. There is the real world, and then there is the internet world, where we interact, look, and act very differently than in the real world. The real world can be seen as the "moment" / the "present". The internet world, however, is a huge repository that contains all past events and information. But how do we bring these together?
Is the future a state/phase or a continuous constant? | Real World Reality + Virtual World Reality
This camera is created with the intent to draw the virtual world on top of the existing world. For example, if you point your camera at an apple, the camera will identify that it is an apple, search the API for an image of an apple, and then start drawing that image on top of the existing apple in the camera view. Therefore, you get two versions of an apple: one is what you see in real life, and the other is a possible apple from the internet. Linking these two pieces of visual information together creates this multidimensional phenomenon. It shows you all the possible apples, connecting your present (the real-life apple) with the future (the virtual apple from the internet API).
Design Process
Developing the Code | Problems Along the way
ATTEMPT 1 (Front Camera)
The first thing I did was try out how to make the front camera work on phones with the p5 editor (this sketch will not load on laptops, only on phones).
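The camera setup can be sketched roughly like this. p5.js lets createCapture() take a standard getUserMedia constraints object, so the back camera can be requested with facingMode "environment". The helper name cameraConstraints and its parameter are my own illustration, not code from the project.

```javascript
// Build the constraints object for p5's createCapture().
// cameraConstraints / useBackCamera are illustrative names (assumptions).
function cameraConstraints(useBackCamera) {
  return {
    video: {
      // "environment" asks for the back camera, "user" for the front one;
      // laptops usually have no back camera, which is why the sketch
      // only loads on phones or tablets.
      facingMode: useBackCamera ? "environment" : "user"
    },
    audio: false
  };
}

// In the p5.js sketch (browser only):
// let video;
// function setup() {
//   createCanvas(windowWidth, windowHeight);
//   video = createCapture(cameraConstraints(true));
//   video.hide(); // hide the raw <video> element; draw() paints the feed
// }
// function draw() {
//   image(video, 0, 0, width, height);
// }
```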
ATTEMPT 2 (Google API)
Then I tried to create a working API-fetching function by registering an API key, a Search Engine ID, etc. on the Google Developers website. (API Key: https://developers.google.com/custom-search/json-api/v1/overview) (Search Engine ID: https://cse.google.com/all) By changing the name in the let query = "" variable and running the code, it will search for an image of whatever object name you put in. However, the Google API has a daily fetching limit; therefore, in the final code, I changed it to the Giphy API, which has no fetching limit.
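A minimal sketch of the two request URLs, assuming the standard endpoints for each service; the helper names and the "YOUR_KEY" placeholders are illustrative, standing in for the credentials from the Google and Giphy developer sites.

```javascript
// Google Custom Search JSON API: needs an API key and a Search Engine ID (cx).
// buildGoogleImageUrl is an illustrative helper name (assumption).
function buildGoogleImageUrl(apiKey, engineId, query) {
  return "https://www.googleapis.com/customsearch/v1" +
    "?key=" + encodeURIComponent(apiKey) +
    "&cx=" + encodeURIComponent(engineId) +
    "&searchType=image" +          // restrict results to images
    "&q=" + encodeURIComponent(query);
}

// Giphy search endpoint: no search-engine ID, just an API key.
function buildGiphyUrl(apiKey, query) {
  return "https://api.giphy.com/v1/gifs/search" +
    "?api_key=" + encodeURIComponent(apiKey) +
    "&q=" + encodeURIComponent(query) +
    "&limit=1";                    // only the first matching GIF is needed
}

// In the p5.js sketch the request itself could be:
// let query = "apple";
// loadJSON(buildGiphyUrl("YOUR_KEY", query), (data) => {
//   // data.data[0].images.original.url is the first GIF's URL
// });
```

Swapping Google for Giphy then only changes which URL builder feeds loadJSON(), which is why the daily-limit problem could be fixed late in the project without restructuring the sketch.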
ATTEMPT 3 (Object Detection)
Then I stumbled upon the TensorFlow COCO (Common Objects in Context) model, a long-used machine-learning model sponsored by Microsoft and Facebook. By watching "Object Detection – Webcam Tracking in p5.js/TensorFlow" by Jeff Thompson (https://www.youtube.com/watch?v=WPOY2IEqUMg), I made working code that is able to identify objects from the camera.
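The detection loop can be sketched roughly as below, following the ml5.js objectDetector API used in that tutorial; confidentLabels is my own illustrative helper for filtering out low-confidence guesses, not code from the project.

```javascript
// Keep only detections above a confidence threshold and return their labels.
// confidentLabels is an illustrative helper name (assumption).
// Each detection from COCO-SSD has a label and a confidence between 0 and 1.
function confidentLabels(detections, threshold) {
  return detections
    .filter((d) => d.confidence >= threshold)
    .map((d) => d.label);
}

// In the p5.js sketch (assumes ml5.js is loaded alongside p5.js):
// let video, detector, detections = [];
// function setup() {
//   createCanvas(640, 480);
//   video = createCapture(VIDEO, () => {
//     detector = ml5.objectDetector("cocossd", () => detect());
//   });
//   video.hide();
// }
// function detect() {
//   detector.detect(video, (err, results) => {
//     if (!err) detections = results;
//     detect(); // keep detecting frame after frame
//   });
// }
// function draw() {
//   image(video, 0, 0);
//   for (const label of confidentLabels(detections, 0.5)) {
//     text(label, 10, 20); // e.g. "apple" -- the name fed to the image API
//   }
// }
```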
Then, in my final project, I combined these features. However, the drawing functions kept messing up the detection function. I tried to split them into two functions that only activate when buttons are pressed, so they don't affect each other, but within the time limit I didn't fully get it working.
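One way the button idea could be sketched: a single mode flag that the buttons flip, so only one of the two steps runs per frame. The nextMode helper and the mode names are my own illustration of the approach, not the project's code.

```javascript
// Pure state transition: pressing "detect" or "draw" switches to that mode;
// pressing the active mode's button again turns it off.
// nextMode is an illustrative helper name (assumption).
function nextMode(current, pressed) {
  return current === pressed ? "idle" : pressed;
}

// In the p5.js sketch:
// let mode = "idle";
// function setup() {
//   createButton("detect").mousePressed(() => (mode = nextMode(mode, "detect")));
//   createButton("draw").mousePressed(() => (mode = nextMode(mode, "draw")));
// }
// function draw() {
//   if (mode === "detect") { /* run the detector, leave the canvas alone */ }
//   if (mode === "draw")   { /* paint the fetched API image on top */ }
// }
```

Because draw() checks the flag each frame, the drawing code and the detection code never run in the same frame, which is the separation the buttons were meant to enforce.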
Reflection
I feel like this semester I have already done projects using the idea of pixels (face generator, exquisite corpse, tomorrow game, etc.), so this time I was really digging deeper and thinking about how we interpret what we see in the camera, and how all of these pixels coming together can have different meanings. I absolutely enjoyed practicing coding in this way. I hope this can be something that I continue to develop and really get working!