
Python Project - Deep Surveillance with Deep Learning - Intelligent Video Surveillance Project

Updated: August 7, 2020

Surveillance security is a very tedious and time-consuming job. In this tutorial, we will build a system to automate the task of analyzing video surveillance. We will analyze the video feed in real-time and identify any abnormal activities like violence or theft.

There is a great deal of ongoing industry research on video surveillance, and within it the role of CCTV video has grown rapidly. CCTV cameras are installed everywhere for surveillance and security.

In the last decade, there have been major advances in deep learning algorithms for deep surveillance. These advances have established a clear trend in deep surveillance and promise drastic efficiency gains. Typical applications of deep surveillance are theft identification, violence detection, and detection of potential explosions.

Network architecture:

We have generally seen deep neural networks used for computer vision, image classification, and object detection tasks. In this project, we extend deep neural networks to three dimensions to learn the spatio-temporal features of the video feed.

For this video surveillance project, we will introduce a spatio-temporal autoencoder based on a 3D convolutional network. The encoder part extracts the spatial and temporal information, and the decoder then reconstructs the frames. Abnormal events are identified by computing the reconstruction loss as the Euclidean distance between the original and reconstructed batch.
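As a minimal sketch of this idea (assuming batch and reconstructed are NumPy arrays of the same shape, with reconstructed produced by the autoencoder built later in this article), the reconstruction score can be computed like this:

  import numpy as np

  def reconstruction_score(batch, reconstructed):
      # Euclidean distance between the original and reconstructed clip,
      # averaged over the number of elements
      diff = batch - reconstructed
      return np.sqrt((diff ** 2).sum()) / diff.size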

[Figure: spatio-temporal autoencoder architecture for intelligent video surveillance with deep learning]

We will use this spatio-temporal autoencoder to identify abnormal activities.

Datasets for abnormal event detection in video surveillance:

The following are comprehensive datasets used to train models for anomaly detection tasks.

CUHK Avenue Dataset:

This dataset contains 16 training and 21 testing video clips, with 30,652 frames in total.

The training videos contain only normal situations. The testing videos contain both normal and abnormal events.

Dataset Download Link: Avenue Dataset for Abnormal Event Detection

UCSD Pedestrian Dataset:

This dataset contains videos of pedestrians. It includes groups of people walking towards, away from, and parallel to the camera. Abnormal events include:

  • Non-pedestrian entities
  • Anomalous pedestrian motion patterns

Dataset Download Link: UCSD Anomaly Detection Dataset

Project Source Code

Before proceeding, please download the source code used in this deep learning project: Video Surveillance Project Code

Video Surveillance – Anomaly Event Detection Code:

First, download one of the datasets above and put it in a directory named “train”.
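As a quick sanity check (a minimal sketch; the directory name matches the training script below), you can verify the videos are where train.py expects them:

  import os

  train_path = './train'
  assert os.path.isdir(train_path), "create a 'train' directory containing the dataset videos"
  print(os.listdir(train_path))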

Create a new Python file, train.py, and paste the code described in the following steps:

1. Imports:

  from keras.preprocessing.image import img_to_array, load_img
  import numpy as np
  import glob
  import os
  import cv2

  from keras.layers import Conv3D, ConvLSTM2D, Conv3DTranspose
  from keras.models import Sequential
  from keras.callbacks import ModelCheckpoint, EarlyStopping
  import imutils

2. Initialize the directory path variables and define a function to process and store video frames:

  # List that will hold every pre-processed grayscale frame
  store_image = []
  train_path = './train'
  fps = 5
  train_videos = os.listdir(train_path)
  train_images_path = train_path + '/frames'
  os.makedirs(train_images_path, exist_ok=True)

  def store_inarray(image_path):
      # Load a frame, resize it to 227x227 and convert it to grayscale
      image = load_img(image_path)
      image = img_to_array(image)
      image = cv2.resize(image, (227, 227), interpolation=cv2.INTER_AREA)
      gray = 0.2989 * image[:, :, 0] + 0.5870 * image[:, :, 1] + 0.1140 * image[:, :, 2]
      store_image.append(gray)

3. Extract frames from video and call store function:

  # Extract frames from each training video with ffmpeg, naming the frames
  # after the source video so that clips do not overwrite each other
  for video in train_videos:
      os.system('ffmpeg -i {0}/{1} -r 1/{2} {0}/frames/{1}_%03d.jpg'.format(train_path, video, fps))

  # Load every extracted frame into the store_image list
  images = os.listdir(train_images_path)
  for image in images:
      image_path = train_images_path + '/' + image
      store_inarray(image_path)
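Note that this step shells out to ffmpeg, so ffmpeg must be installed and available on your PATH. With fps=5, the option -r 1/5 extracts one frame for every five seconds of video; lower this value to sample frames more densely.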

4. Store the store_image list in a numpy file “training.npy”:

  # Stack the frames, move the frame axis to the end, and normalize pixel values
  store_image = np.array(store_image)           # shape: (frames, 227, 227)
  store_image = np.moveaxis(store_image, 0, 2)  # shape: (227, 227, frames)
  store_image = (store_image - store_image.mean()) / store_image.std()
  store_image = np.clip(store_image, 0, 1)
  np.save('training.npy', store_image)

5. Create the spatio-temporal autoencoder architecture:

  stae_model = Sequential()

  # Spatial encoder: two 3D convolutions reduce each 227x227 frame to 26x26 feature maps
  stae_model.add(Conv3D(filters=128, kernel_size=(11, 11, 1), strides=(4, 4, 1), padding='valid', input_shape=(227, 227, 10, 1), activation='tanh'))
  stae_model.add(Conv3D(filters=64, kernel_size=(5, 5, 1), strides=(2, 2, 1), padding='valid', activation='tanh'))

  # Temporal encoder/decoder: a stack of convolutional LSTMs learns motion across the clip
  stae_model.add(ConvLSTM2D(filters=64, kernel_size=(3, 3), strides=1, padding='same', dropout=0.4, recurrent_dropout=0.3, return_sequences=True))
  stae_model.add(ConvLSTM2D(filters=32, kernel_size=(3, 3), strides=1, padding='same', dropout=0.3, return_sequences=True))
  stae_model.add(ConvLSTM2D(filters=64, kernel_size=(3, 3), strides=1, return_sequences=True, padding='same', dropout=0.5))

  # Spatial decoder: transposed 3D convolutions reconstruct the original 227x227 frames
  stae_model.add(Conv3DTranspose(filters=128, kernel_size=(5, 5, 1), strides=(2, 2, 1), padding='valid', activation='tanh'))
  stae_model.add(Conv3DTranspose(filters=1, kernel_size=(11, 11, 1), strides=(4, 4, 1), padding='valid', activation='tanh'))

  stae_model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
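To sanity-check the architecture, print the layer shapes: with this configuration the encoder maps each (227, 227, 10, 1) clip down to (26, 26, 10, 64) feature maps, and the transposed convolutions map it back to (227, 227, 10, 1), matching the input.

  stae_model.summary()
  # The final output shape should match the input clips
  assert stae_model.output_shape == (None, 227, 227, 10, 1)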

6. Train the autoencoder on the “training.npy” file and save the model as “saved_model.h5”:

  # Load the pre-processed frames and group them into clips of 10 consecutive frames
  training_data = np.load('training.npy')                  # shape: (227, 227, frames)
  frames = training_data.shape[2]
  frames = frames - frames % 10                            # keep a multiple of 10 frames
  training_data = training_data[:, :, :frames]
  training_data = training_data.reshape(227, 227, -1, 10)  # (227, 227, clips, 10)
  training_data = np.moveaxis(training_data, 2, 0)         # (clips, 227, 227, 10)
  training_data = np.expand_dims(training_data, axis=4)    # (clips, 227, 227, 10, 1)
  target_data = training_data.copy()                       # the autoencoder reconstructs its own input

  epochs = 5
  batch_size = 1

  # No validation split is used, so both callbacks monitor the training loss
  callback_save = ModelCheckpoint("saved_model.h5", monitor="loss", save_best_only=True)
  callback_early_stopping = EarlyStopping(monitor='loss', patience=3)

  stae_model.fit(training_data, target_data, batch_size=batch_size, epochs=epochs, callbacks=[callback_save, callback_early_stopping])
  stae_model.save("saved_model.h5")

Run this script to train and save the autoencoder model.

Now create another Python file, “test.py”, to observe the results of abnormal event detection on any custom video.

Paste the code below into “test.py”:

  import cv2
  import numpy as np
  from keras.models import load_model
  import argparse
  from PIL import Image
  import imutils

  def mean_squared_loss(x1, x2):
      # Euclidean distance between the original and reconstructed clip,
      # normalized by the number of elements
      difference = x1 - x2
      a, b, c, d, e = difference.shape
      n_samples = a * b * c * d * e
      sq_difference = difference ** 2
      Sum = sq_difference.sum()
      distance = np.sqrt(Sum)
      mean_distance = distance / n_samples
      return mean_distance

  model = load_model("saved_model.h5")

  cap = cv2.VideoCapture("__path_to_custom_test_video")
  print(cap.isOpened())

  while cap.isOpened():
      imagedump = []
      # Read a clip of 10 consecutive frames and pre-process each one
      for i in range(10):
          ret, frame = cap.read()
          if not ret or frame is None:
              break
          image = imutils.resize(frame, width=700, height=600)
          frame = cv2.resize(frame, (227, 227), interpolation=cv2.INTER_AREA)
          gray = 0.2989 * frame[:, :, 0] + 0.5870 * frame[:, :, 1] + 0.1140 * frame[:, :, 2]
          gray = (gray - gray.mean()) / gray.std()
          gray = np.clip(gray, 0, 1)
          imagedump.append(gray)

      # Stop when the video runs out of full 10-frame clips
      if len(imagedump) < 10:
          break

      imagedump = np.array(imagedump)                # (10, 227, 227)
      imagedump = np.moveaxis(imagedump, 0, 2)       # (227, 227, 10)
      imagedump = np.expand_dims(imagedump, axis=0)  # (1, 227, 227, 10)
      imagedump = np.expand_dims(imagedump, axis=4)  # (1, 227, 227, 10, 1)

      # Reconstruct the clip and measure the reconstruction loss
      output = model.predict(imagedump)
      loss = mean_squared_loss(imagedump, output)

      if cv2.waitKey(10) & 0xFF == ord('q'):
          break

      if loss > 0.00068:
          print('Abnormal Event Detected')
          cv2.putText(image, "Abnormal Event", (100, 80), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 4)

      cv2.imshow("video", image)

  cap.release()
  cv2.destroyAllWindows()

Now run this script and observe the results of the video surveillance; it will highlight the abnormal events.

[Figure: video surveillance project execution]

Summary:

In this deep learning project, we trained an autoencoder for abnormal event detection. We train the autoencoder on videos of normal situations and identify abnormal events based on the Euclidean distance between the custom video feed and the frames reconstructed by the autoencoder.

We set a threshold value for abnormal events. In this project, it is 0.00068; you can vary this threshold to experiment with getting better results.
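If you want to choose the threshold systematically rather than by trial and error, one minimal sketch (assuming you have collected the per-clip reconstruction losses from test.py into a list called losses) is to inspect their distribution and flag the clips above a chosen percentile:

  import numpy as np

  losses = np.array(losses)              # per-clip reconstruction losses from test.py
  threshold = np.percentile(losses, 95)  # e.g. flag the top 5% of clips as abnormal
  print("suggested threshold:", threshold)
  anomalies = losses > threshold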

