Thursday, February 22, 2018

Google Vision API using Python

Stumbled on this guy's cool project: http://labonnesoupe.org/2018/02/14/introducing-qrocodile/ and wanted to play around with the idea a bit. Using a Raspberry Pi and the gmusicapi module, I was able to build a proof of concept that uses a webcam to read a QR code and stream a song from Google Play.

Fast forward a few hours after that proof of concept, and an idea hit me: what if I could use a webcam to detect someone's emotion and then stream a random song from Google Play Music? This post covers exactly the first piece of that: taking a picture with a webcam and bouncing it off the Google Vision API. I'd call this Part 1 of Part X for this project, but I'm not sure I'll remember to post the other parts...ha.


Setting up the API

This was not as easy as I'd hoped. You have to enable billing on your Google Cloud project. Don't fret, though, unless you plan to do more than 1,000 lookups per month.



  • Open up the Google Cloud Console and turn on billing.
  • Click the APIs & Services option.
  • Search for the Vision API and enable it.
  • Create a new service account or use an existing one (I used the Compute Engine default service account). Download the JSON key file and save it in the folder you will run your script from.
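If you'd rather script those steps than click through the console, something like the following should work with the gcloud CLI (a sketch, assuming gcloud is installed and you're logged in; the project ID and service account name are placeholders):

```shell
# Enable the Vision API on your project ("my-project-id" is a placeholder)
gcloud services enable vision.googleapis.com --project my-project-id

# Create a service account and download a JSON key for it
gcloud iam service-accounts create emotion-demo --project my-project-id
gcloud iam service-accounts keys create cred.json \
    --iam-account emotion-demo@my-project-id.iam.gserviceaccount.com
```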

You can then complete authentication by referencing the JSON file you downloaded.

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'C:\\Python27\\Projects\\emotion\\cred.json'
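In full, the credential hookup looks something like this (the path and the client setup are just an example; the vision import assumes google-cloud-vision is installed):

```python
import os

# Point the client library at the service-account key you downloaded.
# This path is just an example -- use wherever you saved cred.json.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'C:\\Python27\\Projects\\emotion\\cred.json'

def make_client():
    # Imported here so the snippet loads even without google-cloud-vision installed.
    from google.cloud import vision
    # The client picks up credentials from GOOGLE_APPLICATION_CREDENTIALS.
    return vision.ImageAnnotatorClient()
```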

Python

OK, if you made it this far, awesome. Sorry for the lame instructions above.
You will need some Python packages to make this happen.

  • pip install pillow
  • pip install google-cloud
  • pip install google-cloud-vision
  • Install CV2 (OpenCV; pip install opencv-python should do it)
  • Maybe some other stuff

Script

So basically I just hijacked an example to get this working quick and dirty. I open the camera, capture a picture, then feed that picture into the detect_faces function. I added code to spit out the results on a web page, mark up the image with a green box around the recognized face, and print the emotion analysis to the console. The script can handle a lot of different request types; again, I am just using the faces portion.


Call the script like so: python c:\Python27\Projects\emotion\emotion.py faces blah

Notice I didn't even take the time to fix the arguments handling. The blah in the sample above just gets overridden by a hard-coded path...I know, I know...lazy bum.
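The core of the flow can be sketched like this (a quick-and-dirty sketch rather than the exact script: the function and file names are illustrative, capture_frame assumes opencv-python, and detect_faces follows the google-cloud-vision face detection sample):

```python
# Likelihood enum values used by the Vision API, in ascending order.
LIKELIHOODS = ('UNKNOWN', 'VERY_UNLIKELY', 'UNLIKELY', 'POSSIBLE',
               'LIKELY', 'VERY_LIKELY')

def likelihood_name(value):
    """Turn a numeric likelihood from the API into a readable label."""
    return LIKELIHOODS[value]

def capture_frame(path='capture.jpg'):
    """Grab one frame from the default webcam and save it as a JPEG."""
    import cv2  # third-party: pip install opencv-python
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError('could not read from the webcam')
    cv2.imwrite(path, frame)
    return path

def detect_faces(path):
    """Send the image to the Vision API and print emotion likelihoods."""
    import io
    from google.cloud import vision  # pip install google-cloud-vision
    client = vision.ImageAnnotatorClient()
    with io.open(path, 'rb') as f:
        image = vision.Image(content=f.read())
    faces = client.face_detection(image=image).face_annotations
    for face in faces:
        print('joy:      %s' % likelihood_name(face.joy_likelihood))
        print('sorrow:   %s' % likelihood_name(face.sorrow_likelihood))
        print('anger:    %s' % likelihood_name(face.anger_likelihood))
        print('surprise: %s' % likelihood_name(face.surprise_likelihood))
    return faces
```

Calling detect_faces(capture_frame()) takes the picture and prints a likelihood label for each emotion on each detected face.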

Update

I found an example of how to use the Microsoft Face API from Python. It is pretty straightforward, so I have pasted it below. The MS Face API returns some extra data, like approximate age and some other cool details about the image. I still need to figure out the quotas on the API calls, but it seems like a cool substitute for the Google API.

You actually upload your file to the web service and then it spits out the results.
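The upload-and-detect shape looks roughly like this (a sketch: the endpoint host, /face/v1.0/detect path, and attribute names follow the classic Face API docs, but treat them as assumptions, and you'd supply your own region endpoint and subscription key):

```python
def build_detect_request(endpoint, key):
    """Build the URL, headers and params for a Face API detect call."""
    url = endpoint.rstrip('/') + '/face/v1.0/detect'
    headers = {
        'Ocp-Apim-Subscription-Key': key,            # your Face API key
        'Content-Type': 'application/octet-stream',  # raw image bytes
    }
    params = {'returnFaceAttributes': 'age,gender,emotion'}
    return url, headers, params

def detect_faces_ms(endpoint, key, image_path):
    """Upload an image file and return the parsed JSON response."""
    import requests  # third-party: pip install requests
    url, headers, params = build_detect_request(endpoint, key)
    with open(image_path, 'rb') as f:
        resp = requests.post(url, headers=headers, params=params, data=f.read())
    resp.raise_for_status()
    # Each face in the list carries a faceRectangle plus the requested attributes.
    return resp.json()
```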
