Biology Project



Raspberry Pi Object Motion Monitoring System


Hardware Specifications

  1. Raspberry Pi Model B+. The board used in the experiment is the first generation (2012); newer revisions include the second generation (2015) and the third generation (2016).
  2. Raspberry Pi camera module (with infrared imaging capability)
  3. Infrared light
  4. USB wireless network adapter
  5. Power bank (powering the Raspberry Pi)
  6. Dry-cell battery pack (powering the infrared light)



Software Operation


At boot, the system automatically runs a program written in Python.
The program's operating logic:

1. At startup
1.1 Define the region of the camera frame to be monitored for motion.

1.2 Use an OpenCV foreground/background separation algorithm (Gaussian Mixture-based Background/Foreground Segmentation Algorithm) to detect the edges of moving objects.

2. Capture one frame from the camera
2.1 Count the moving edge points inside the detection region.
2.2 Count the moving edge points in the whole frame.
2.3 If the count inside the detection region exceeds an experimentally determined threshold, perform the following:
2.3.1 Compute the ratio of moving edge points to the size of the detection region.
2.3.2 Compute the ratio of moving edge points to the size of the whole frame.
2.3.3 If the region's change ratio is greater than the whole frame's change ratio, and the region's count of moving edge points is below an experimentally determined upper bound, judge that an insect has been detected.

3. Repeat step 2.
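The decision rule in steps 2.1 through 2.3.3 can be sketched in Python as follows. This is a minimal sketch, not the project's code: detect_insect is a hypothetical helper, and the threshold and upper_bound defaults are placeholders for the experimentally tuned settings.

```python
import numpy as np

def detect_insect(window_fgmask, frame_fgmask, threshold=100, upper_bound=5_000_000):
    """Decide whether the change in the detection region looks like an insect.

    window_fgmask: foreground mask of the detection region (step 2.1)
    frame_fgmask:  foreground mask of the whole frame (step 2.2)
    """
    window_change = window_fgmask.sum()   # moving edge points in the region
    if window_change <= threshold:        # step 2.3: not enough change
        return False
    window_rate = window_change / window_fgmask.size     # step 2.3.1
    frame_rate = frame_fgmask.sum() / frame_fgmask.size  # step 2.3.2
    # step 2.3.3: change is concentrated in the region, but not overwhelming
    return bool(window_rate > frame_rate and window_change < upper_bound)

# toy masks: every pixel moves inside a small region, nothing moves elsewhere
window = np.full((10, 10), 255, dtype=np.uint8)
frame = np.zeros((100, 100), dtype=np.uint8)
frame[:10, :10] = window
print(detect_insect(window, frame))  # True: the change is concentrated in the region
```

Comparing per-pixel change rates rather than raw sums is what lets the rule distinguish a small insect in the region from scene-wide disturbances such as wind or lighting shifts.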


Test Results


March 19, 2019


The equipment was not set up properly:
  • The background was not masked off, causing false detections: movement of background objects was mistaken for the target fruit flies.
  • The infrared light's power supply came loose, so detection could not run at night.

Adjusting the Detection Region


The detection region (green frame) and the background region (blue frame) were adjusted to avoid false positives.
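The green-frame detection region corresponds to a NumPy slice of the captured frame, and in this program the first array axis (called x here) runs vertically, so f_x and f_h index rows. A minimal sketch of that slicing, using illustrative coordinates rather than the tuned ones:

```python
import numpy as np

# a fake 480x640 single-channel frame (rows x cols), standing in for one BGR channel
frame = np.zeros((480, 640), dtype=np.uint8)

# focus-window parameters in the program's convention: f_x/f_h index rows
f_x, f_y, f_w, f_h = 50, 150, 375, 350

# slicing returns a view, so writing into `window` also writes into `frame`
window = frame[f_x:f_x + f_h, f_y:f_y + f_w]
window[:] = 255

print(window.shape)    # (350, 375): height x width
print(frame[50, 150])  # 255: inside the region, written through the view
print(frame[0, 0])     # 0: outside the region, untouched
```

Because the slice is a view, the main program can draw the foreground mask back into the frame without any copy.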


Source Code



#! /usr/bin/python3

import sys
import os
import argparse
from distutils.version import LooseVersion

import numpy as np
import cv2
from time import gmtime, strftime

parser = argparse.ArgumentParser(description="Fruit-fly motion detection in a focus window")
parser.add_argument("-samples", "--samples", dest="samples", type=int, default=86400, required=False, help="the number of samples for recording")
parser.add_argument("-threshold", "--threshold", dest="threshold", type=int, default=100, required=False, help="the threshold of the change in the focus window")
parser.add_argument("-f_x", "--focus_x", dest="f_x", type=int, default=50, required=False, help="the x position of the focus window")
parser.add_argument("-f_y", "--focus_y", dest="f_y", type=int, default=150, required=False, help="the y position of the focus window")
parser.add_argument("-f_w", "--focus_w", dest="f_w", type=int, default=375, required=False, help="the width of the focus window")
parser.add_argument("-f_h", "--focus_h", dest="f_h", type=int, default=350, required=False, help="the height of the focus window")

parser.add_argument("-c", "--conf", dest="conf", type=str, default=None, required=False, help="path to the JSON configuration file")
parser.add_argument("-d", "--debug", dest="debug", action="store_true", required=False, help="debug mode on/off")
parser.add_argument("-w", "--window", dest="window", action="store_true", required=False, help="display frames in a window")
args = parser.parse_args()

dbg = args.debug

# generate folder's prefix
nextChar = 'A'
for letter in range(65, 91): # A ~ Z
    capChar = chr(letter)
    for name in os.listdir("/media/pi/usb/data/"):
        if name == capChar:
            nextChar = chr(letter+1)

outputFolder="/media/pi/usb/data/" + nextChar + '/'
os.mkdir(outputFolder)
logPath = outputFolder + "md.log"
logF = open(logPath, "a+")
print(args, file=logF)
print("outFolder:", outputFolder, file=logF)

cap = cv2.VideoCapture(0) # Capture video from camera

# Get the width and height of the video frame
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) + 0.5)
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) + 0.5)
print("camera screen:", width, height, file=logF)

request_samples = args.samples
print("To record ", request_samples, "pictures", file=logF)

# The focus window; in this program x indexes the vertical (row) axis
f_x = args.f_x
f_y = args.f_y
f_w = args.f_w
f_h = args.f_h


# check focus window's size
if (args.f_w == 0):
    f_w = width
if (args.f_h == 0):
    f_h = height
print("(f_x,f_y)=", f_x, f_y, "(f_w,f_h)=", f_w, f_h, file=logF)

# detection threshold for the focus window (argparse default is 100)
threshold_detected = args.threshold if args.threshold != 0 else 100

# show frames in a window?
display_window = bool(args.window)
# Define the codec for video output (note: no VideoWriter is created in this version)
fourcc = cv2.VideoWriter_fourcc(*'mp4v') # Be sure to use the lower case

# create background subtraction objects; the MOG API moved between OpenCV versions
# (in 3.x+ it lives in the opencv-contrib bgsegm module)
if LooseVersion(cv2.__version__) >= LooseVersion("3.0"):
    window_fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()
    frame_fgbg = cv2.bgsegm.createBackgroundSubtractorMOG()
else:
    window_fgbg = cv2.BackgroundSubtractorMOG()
    frame_fgbg = cv2.BackgroundSubtractorMOG()

detected_count = 0

while(cap.isOpened()):
    ret, frame = cap.read() # frame in BGR mode
    timestamp = strftime("%H%M%S", gmtime())
    if ret == True:
        #frame = cv2.flip(frame,0)
        # Slicing an array returns a view of it:
        #window = frame[ f_x:(f_x + f_w) , f_y:(f_y + f_h), 0 ]
        window = frame[ f_x:(f_x + f_h) , f_y:(f_y + f_w), 0 ]
        # CV2 use x as the vertical axis
        #window = frame[ f_y:(f_y + f_w) , f_x:(f_x + f_h), 0 ]
        window_fgmask = window_fgbg.apply(window)
        frame_fgmask = frame_fgbg.apply(frame)
        # check if the change is over the threshold
        #print("Compare changes, window:frame=: ", window_fgmask.sum(), frame_fgmask.sum() , file=logF)
        text = ""
        if window_fgmask.sum() > threshold_detected :
            # check whether the change is scene-wide or confined to the focus window
            # calculate the change rate
            window_chg_rate = window_fgmask.sum() / window_fgmask.size
            frame_chg_rate = frame_fgmask.sum() / frame_fgmask.size
            #print( window_chg_rate, frame_chg_rate , file=logF)
            # 5000000 is an experimentally chosen upper bound on the window change
            if (window_chg_rate > frame_chg_rate) and ( window_fgmask.sum() < 5000000 ) :
                print("Detected at ", timestamp, ",window > frame (change rate): ", window_chg_rate, frame_chg_rate , file=logF)  
                text = "Detected"
                detected_count += 1
        # add a label at the bottom-left below the focus window (HERSHEY_DUPLEX, size 1, blue in BGR)
        #cv2.putText(frame, text , (f_y+5, f_x+f_h+5+80), cv2.FONT_HERSHEY_DUPLEX , 1, (255,0,0))
        # draw the focus window's border line
        cv2.rectangle(frame, (f_y-1, f_x-1), (f_y + f_w + 1, f_x + f_h + 1), (0, 255, 0), 2)
        # draw the fgmask on the frame
        frame[ f_x:(f_x + f_h) , f_y:(f_y + f_w), 0 ] = window_fgmask
        #frame[ f_x:(f_x + f_h) , f_y:(f_y + f_w), 1 ] = window_fgmask
        #frame[ f_x:(f_x + f_h) , f_y:(f_y + f_w), 2 ] = window_fgmask
        if ( len(text) > 7) or ( (request_samples % 60) == 0  ): # save detected frames, plus one frame every 60 samples
            outputFile = outputFolder + timestamp + "-" + text +".jpg"
            cv2.imwrite(outputFile, frame)
            #print(outputFile, "recorded")
        if ( display_window ):
            #cv2.imshow('frame',window_fgmask)
            cv2.imshow('frame',frame)
        request_samples -= 1
        if ( request_samples <= 0 ):
            break
        if (cv2.waitKey(1) & 0xFF) == ord('q'): # Hit `q` to exit
            print(frame.shape, file=logF)
            print(window.shape, file=logF)
            print(window_fgmask.shape, file=logF)
            print(window_fgmask.size, file=logF)
            break
    else:
        break

print("Detected count = ", detected_count, file=logF)
# Release everything if job is finished
cap.release()
cv2.destroyAllWindows()

