Emojify - Create your own emoji with Deep Learning

Deep Learning project for beginners – Taking you closer to your Data Science dream

Emojis and avatars are ways to convey nonverbal cues. These cues have become an essential part of online chatting, product reviews, brand communication, and much more. They have also led to a growing body of data science research dedicated to emoji-driven storytelling.

With advancements in computer vision and deep learning, it is now possible to detect human emotions from images. In this deep learning project, we will classify human facial expressions to filter and map corresponding emojis or avatars.

About the Dataset

The FER2013 (Facial Expression Recognition) dataset consists of 48×48-pixel grayscale face images. The faces are centered and occupy a similar amount of space in each image. The dataset covers the following emotion categories:

  • 0: angry
  • 1: disgust
  • 2: fear
  • 3: happy
  • 4: sad
  • 5: surprise
  • 6: neutral

Download Dataset: Facial Expression Recognition Dataset

Download Project Code

Before proceeding, please download the source code: Emoji Creator Project Source Code

Create your emoji with Deep Learning

We will build a deep learning model to classify facial expressions from the images. Then we will map the classified emotion to an emoji or an avatar.

Facial Emotion Recognition using CNN

In the steps below, we will build a convolutional neural network architecture and train the model on the FER2013 dataset for emotion recognition from images.

Download the dataset from the above link and extract it into the data folder, with separate train and test directories.
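After extraction, the data folder should look roughly like the sketch below, with one sub-folder per emotion class inside both train and test (the exact folder names depend on how the downloaded dataset is packaged; Keras assigns label indices to them in alphabetical order):

    data/
        train/
            angry/
            disgusted/
            fearful/
            happy/
            neutral/
            sad/
            surprised/
        test/
            angry/
            ...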

Make a file train.py and follow the steps:

1. Imports:

    import numpy as np
    import cv2

    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.layers import Conv2D
    from keras.optimizers import Adam
    from keras.layers import MaxPooling2D
    from keras.preprocessing.image import ImageDataGenerator

2. Initialize the training and validation generators:

    train_dir = 'data/train'
    val_dir = 'data/test'
    train_datagen = ImageDataGenerator(rescale=1./255)
    val_datagen = ImageDataGenerator(rescale=1./255)

    train_generator = train_datagen.flow_from_directory(
            train_dir,
            target_size=(48,48),
            batch_size=64,
            color_mode="grayscale",
            class_mode='categorical')
    validation_generator = val_datagen.flow_from_directory(
            val_dir,
            target_size=(48,48),
            batch_size=64,
            color_mode="grayscale",
            class_mode='categorical')
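If you want to verify which label index Keras assigned to each class folder (indices follow the alphabetical order of the folder names), you can optionally print the generator's class_indices. This is just a sanity check and not part of the original script:

    print(train_generator.class_indices)
    # e.g. {'angry': 0, 'disgusted': 1, 'fearful': 2, 'happy': 3, 'neutral': 4, 'sad': 5, 'surprised': 6}
    # the exact keys depend on how your class folders are named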

3. Build the convolutional network architecture:

    emotion_model = Sequential()
    emotion_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48,48,1)))
    emotion_model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Dropout(0.25))
    emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Dropout(0.25))
    emotion_model.add(Flatten())
    emotion_model.add(Dense(1024, activation='relu'))
    emotion_model.add(Dropout(0.5))
    emotion_model.add(Dense(7, activation='softmax'))
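Before training, you can optionally print a model summary to sanity-check the layer output shapes and parameter counts:

    emotion_model.summary()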

4. Compile and train the model:

    emotion_model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.0001, decay=1e-6), metrics=['accuracy'])
    emotion_model_info = emotion_model.fit_generator(
            train_generator,
            steps_per_epoch=28709 // 64,
            epochs=50,
            validation_data=validation_generator,
            validation_steps=7178 // 64)
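The History object returned by fit_generator stores the per-epoch metrics, so you can optionally plot the training curves afterwards. A minimal sketch with matplotlib, not part of the original script:

    import matplotlib.pyplot as plt

    # depending on your Keras version the keys may be 'acc'/'val_acc'
    # instead of 'accuracy'/'val_accuracy'
    plt.plot(emotion_model_info.history['accuracy'], label='train accuracy')
    plt.plot(emotion_model_info.history['val_accuracy'], label='validation accuracy')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()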

5. Save the model weights:

    emotion_model.save_weights('model.h5')
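This saves only the weights; gui.py below re-declares the same architecture before calling load_weights. If you prefer not to repeat the architecture, you can also serialize it to JSON. An optional sketch:

    # optional: save the architecture alongside the weights
    with open('model.json', 'w') as json_file:
        json_file.write(emotion_model.to_json())
    # it can later be restored with keras.models.model_from_json
    # before calling load_weights('model.h5')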

6. Using the OpenCV haarcascade XML file, detect the bounding boxes of the faces in the webcam feed and predict the emotions:

    cv2.ocl.setUseOpenCL(False)
    emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy", 4: "Neutral", 5: "Sad", 6: "Surprised"}
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        # adjust this path to wherever the haarcascade file lives on your system
        bounding_box = cv2.CascadeClassifier('/home/shivam/.local/lib/python3.6/site-packages/cv2/data/haarcascade_frontalface_default.xml')
        gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        num_faces = bounding_box.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in num_faces:
            cv2.rectangle(frame, (x, y-50), (x+w, y+h+10), (255, 0, 0), 2)
            roi_gray_frame = gray_frame[y:y + h, x:x + w]
            cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)
            emotion_prediction = emotion_model.predict(cropped_img)
            maxindex = int(np.argmax(emotion_prediction))
            cv2.putText(frame, emotion_dict[maxindex], (x+20, y-60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
        cv2.imshow('Video', cv2.resize(frame, (1200,860), interpolation=cv2.INTER_CUBIC))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            cap.release()
            cv2.destroyAllWindows()
            break
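Note that this loop uses the emotion_model trained in the steps above, so it can simply run at the end of train.py. If you move it into its own script instead, rebuild the same architecture and load the saved weights before predicting, exactly as gui.py does below:

    # assumes the same Sequential model as above has been re-declared in this script
    emotion_model.load_weights('model.h5')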

Code for GUI and mapping with emojis

Create a folder named emojis and save the emojis corresponding to each of the seven emotions in the dataset.
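The file names must match the paths used in the emoji_dist dictionary in the code below, so the folder would contain:

    emojis/
        angry.png
        disgusted.png
        fearful.png
        happy.png
        neutral.png
        sad.png
        surpriced.png   (this spelling matches the path used in gui.py)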

Paste the below code in gui.py and run the file.

    import tkinter as tk
    from tkinter import *
    import cv2
    from PIL import Image, ImageTk
    import os
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.layers import Conv2D
    from keras.optimizers import Adam
    from keras.layers import MaxPooling2D
    from keras.preprocessing.image import ImageDataGenerator

    # rebuild the same architecture used in train.py, then load the trained weights
    emotion_model = Sequential()
    emotion_model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48,48,1)))
    emotion_model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Dropout(0.25))
    emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
    emotion_model.add(MaxPooling2D(pool_size=(2, 2)))
    emotion_model.add(Dropout(0.25))
    emotion_model.add(Flatten())
    emotion_model.add(Dense(1024, activation='relu'))
    emotion_model.add(Dropout(0.5))
    emotion_model.add(Dense(7, activation='softmax'))
    emotion_model.load_weights('model.h5')

    cv2.ocl.setUseOpenCL(False)

    emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy", 4: "Neutral", 5: "Sad", 6: "Surprised"}
    emoji_dist = {0: "./emojis/angry.png", 1: "./emojis/disgusted.png", 2: "./emojis/fearful.png",
                  3: "./emojis/happy.png", 4: "./emojis/neutral.png", 5: "./emojis/sad.png",
                  6: "./emojis/surpriced.png"}

    global last_frame1
    last_frame1 = np.zeros((480, 640, 3), dtype=np.uint8)
    global cap1
    show_text = [0]

    def show_vid():
        # note: this reopens the webcam on every call; it works, but the capture
        # could also be created once outside the function
        cap1 = cv2.VideoCapture(0)
        if not cap1.isOpened():
            print("cant open the camera1")
        flag1, frame1 = cap1.read()
        frame1 = cv2.resize(frame1, (600, 500))
        # adjust this path to wherever the haarcascade file lives on your system
        bounding_box = cv2.CascadeClassifier('/home/shivam/.local/lib/python3.6/site-packages/cv2/data/haarcascade_frontalface_default.xml')
        gray_frame = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
        num_faces = bounding_box.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)
        for (x, y, w, h) in num_faces:
            cv2.rectangle(frame1, (x, y-50), (x+w, y+h+10), (255, 0, 0), 2)
            roi_gray_frame = gray_frame[y:y + h, x:x + w]
            cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)
            prediction = emotion_model.predict(cropped_img)
            maxindex = int(np.argmax(prediction))
            cv2.putText(frame1, emotion_dict[maxindex], (x+20, y-60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
            show_text[0] = maxindex
        if flag1 is None:
            print("Major error!")
        elif flag1:
            global last_frame1
            last_frame1 = frame1.copy()
            pic = cv2.cvtColor(last_frame1, cv2.COLOR_BGR2RGB)
            img = Image.fromarray(pic)
            imgtk = ImageTk.PhotoImage(image=img)
            lmain.imgtk = imgtk
            lmain.configure(image=imgtk)
            lmain.after(10, show_vid)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            exit()

    def show_vid2():
        # show the emoji matching the most recently predicted emotion
        frame2 = cv2.imread(emoji_dist[show_text[0]])
        pic2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2RGB)
        img2 = Image.fromarray(pic2)
        imgtk2 = ImageTk.PhotoImage(image=img2)
        lmain2.imgtk2 = imgtk2
        lmain3.configure(text=emotion_dict[show_text[0]], font=('arial', 45, 'bold'))
        lmain2.configure(image=imgtk2)
        lmain2.after(10, show_vid2)

    if __name__ == '__main__':
        root = tk.Tk()
        img = ImageTk.PhotoImage(Image.open("logo.png"))
        heading = Label(root, image=img, bg='black')
        heading.pack()
        heading2 = Label(root, text="Photo to Emoji", pady=20, font=('arial', 45, 'bold'), bg='black', fg='#CDCDCD')
        heading2.pack()
        lmain = tk.Label(master=root, padx=50, bd=10)
        lmain2 = tk.Label(master=root, bd=10)
        lmain3 = tk.Label(master=root, bd=10, fg="#CDCDCD", bg='black')
        lmain.pack(side=LEFT)
        lmain.place(x=50, y=250)
        lmain3.pack()
        lmain3.place(x=960, y=250)
        lmain2.pack(side=RIGHT)
        lmain2.place(x=900, y=350)
        root.title("Photo To Emoji")
        root.geometry("1400x900+100+10")
        root['bg'] = 'black'
        exitbutton = Button(root, text='Quit', fg="red", command=root.destroy, font=('arial', 25, 'bold')).pack(side=BOTTOM)
        show_vid()
        show_vid2()
        root.mainloop()
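With model.h5 from train.py, the emojis folder, and a logo.png image in the project directory (the code loads it for the window header), start the interface with python3 gui.py; the webcam feed with the predicted emotion appears on the left and the matching emoji on the right.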

Summary

In this deep learning project for beginners, we built a convolutional neural network to recognize facial emotions and trained it on the FER2013 dataset. We then mapped the predicted emotions to the corresponding emojis or avatars.

Using OpenCV's Haar cascade XML, we detect the bounding boxes of the faces in the webcam feed and feed the cropped faces to the trained model for classification.

