Today I'd like to go through the code review we planned to do at the Modulabs (모두연) Self-driving lab.
The review covers a GitHub project that a friend of Byunghak's put together while taking the Udacity course.
windowsub0406/Behavior-Cloning
If you run into problems executing the code, you can use the code from the repository below.
The code uses Udacity's simulator, called car-sim.
udacity/self-driving-car-sim
1. Test video
2. Main code
import argparse
import base64
import json
import cv2
import numpy as np
import socketio
import eventlet
import eventlet.wsgi
import time
from PIL import Image
from PIL import ImageOps
from flask import Flask, render_template
from io import BytesIO
from keras.models import model_from_json
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array
import tensorflow as tf

tf.python.control_flow_ops = tf

num = 0
Rows, Cols = 128, 128
steering = []

def crop_img(image):
    """ crop unnecessary parts """
    cropped_img = image[63:136, 0:319]
    resized_img = cv2.resize(cropped_img, (Cols, Rows), cv2.INTER_AREA)
    img = cv2.cvtColor(resized_img, cv2.COLOR_BGR2RGB)
    return img

"""
def smoothing(angles, pre_frame):
    # collect frames & print average line
    angles = np.squeeze(angles)
    avg_angle = 0
    for ii, ang in enumerate(reversed(angles)):
        if ii == pre_frame:
            break
        avg_angle += ang
    avg_angle = avg_angle / pre_frame
    return avg_angle
"""

sio = socketio.Server()
app = Flask(__name__)
model = None
prev_image_array = None

@sio.on('telemetry')
def telemetry(sid, data):
    # The current steering angle of the car
    steering_angle = data["steering_angle"]
    # The current throttle of the car
    throttle = data["throttle"]
    # The current speed of the car
    speed = float(data["speed"])
    # The current image from the center camera of the car
    imgString = data["image"]
    image = Image.open(BytesIO(base64.b64decode(imgString)))
    image_pre = np.asarray(image)
    #image_pre = cv2.resize(image_pre, (128,128))
    #image_array = image_pre
    image_array = crop_img(image_pre)
    transformed_image_array = image_array[None, :, :, :]
    # This model currently assumes that the features of the model are just the images. Feel free to change this.
    steering_angle = 1.0*float(model.predict(transformed_image_array, batch_size=1))
    # The driving model currently just outputs a constant throttle. Feel free to edit this.

    # smoothing by using previous steering angles
    #steering.append(steering_angle)
    #if len(steering) > 3:
    #    steering_angle = smoothing(steering, 3)

    #global num
    #cv2.imwrite('center_'+ str(num) +'.png', image_array)
    #num += 1

    boost = 1 - speed/30.2 + 0.3
    throttle = boost if (boost < 1) else 1
    if abs(steering_angle) > 0.3:
        throttle *= 0.2
    #throttle = 0.3
    print("steering_angle : {:.3f}, throttle : {:.2f}".format(steering_angle, throttle))
    send_control(steering_angle, throttle)

@sio.on('connect')
def connect(sid, environ):
    print("connect ", sid)
    send_control(0, 0)

def send_control(steering_angle, throttle):
    sio.emit("steer", data={
        'steering_angle': steering_angle.__str__(),
        'throttle': throttle.__str__()
    }, skip_sid=True)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Remote Driving')
    parser.add_argument('model', type=str,
                        help='Path to model definition json. Model weights should be on the same path.')
    args = parser.parse_args()
    with open(args.model, 'r') as jfile:
        model = model_from_json(json.load(jfile))

    model.compile("adam", "mse")
    weights_file = args.model.replace('json', 'h5')
    model.load_weights(weights_file)

    # wrap Flask application with engineio's middleware
    app = socketio.Middleware(sio, app)

    # deploy as an eventlet WSGI server
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)
2-1. Basic parameters
This is the variable setup section.
num is an unnecessary variable; it is only touched in commented-out debugging code.
Rows and Cols are the resolution of the image that will be fed into the network.
steering also does not appear to be used in this script (it only shows up in the commented-out smoothing logic).
num = 0
Rows, Cols = 128, 128
steering = []
2-2. Image Crop Method
The image is cropped to a region of interest (ROI), resized, and converted from BGR to RGB.
The ROI crop appears to be there so that only the road area is kept.
The resize to (128, 128) appears to be done so the image matches the network's input size.
The BGR-to-RGB conversion seems intended to change the channel order from (blue, green, red), the usual order for OpenCV images, to (red, green, blue).
def crop_img(image):
    """ crop unnecessary parts """
    cropped_img = image[63:136, 0:319]
    resized_img = cv2.resize(cropped_img, (Cols, Rows), cv2.INTER_AREA)
    img = cv2.cvtColor(resized_img, cv2.COLOR_BGR2RGB)
    return img
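As a quick sanity check on the shapes involved, here is a minimal sketch of calling this crop on a dummy frame. The 160x320x3 frame size is an assumption based on the crop indices (it is not stated in the post), and the interpolation flag is passed as a keyword here for clarity:

import cv2
import numpy as np

Rows, Cols = 128, 128

def crop_img(image):
    """Keep only the road region, resize to the network input size, and convert BGR -> RGB."""
    cropped_img = image[63:136, 0:319]
    resized_img = cv2.resize(cropped_img, (Cols, Rows), interpolation=cv2.INTER_AREA)
    return cv2.cvtColor(resized_img, cv2.COLOR_BGR2RGB)

frame = np.zeros((160, 320, 3), dtype=np.uint8)  # dummy frame at an assumed simulator camera resolution
print(crop_img(frame).shape)                     # (128, 128, 3), i.e. (Rows, Cols, 3)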
2-3. Server / Client setup
sio = socketio.Server()
app = Flask(__name__)
model = None
prev_image_array = None
2-4. Communication
@sio.on is a Python decorator; it registers a handler for the "telemetry" event.
Inside the telemetry method, the steering angle, throttle, speed, and image data coming from car-sim are received,
run through a conversion pipeline (base64 -> image -> ndarray -> cropped image),
and then handed to the network model as its input.
The value the model predicts is sent back to car-sim through the send_control method.
@sio.on('telemetry')
def telemetry(sid, data):
    # The current steering angle of the car
    steering_angle = data["steering_angle"]
    # The current throttle of the car
    throttle = data["throttle"]
    # The current speed of the car
    speed = float(data["speed"])
    # The current image from the center camera of the car
    imgString = data["image"]
    image = Image.open(BytesIO(base64.b64decode(imgString)))
    image_pre = np.asarray(image)
    #image_pre = cv2.resize(image_pre, (128,128))
    #image_array = image_pre
    image_array = crop_img(image_pre)
    transformed_image_array = image_array[None, :, :, :]
    # This model currently assumes that the features of the model are just the images. Feel free to change this.
    steering_angle = 1.0*float(model.predict(transformed_image_array, batch_size=1))
    # The driving model currently just outputs a constant throttle. Feel free to edit this.

    # smoothing by using previous steering angles
    #steering.append(steering_angle)
    #if len(steering) > 3:
    #    steering_angle = smoothing(steering, 3)

    #global num
    #cv2.imwrite('center_'+ str(num) +'.png', image_array)
    #num += 1

    boost = 1 - speed/30.2 + 0.3
    throttle = boost if (boost < 1) else 1
    if abs(steering_angle) > 0.3:
        throttle *= 0.2
    #throttle = 0.3
    print("steering_angle : {:.3f}, throttle : {:.2f}".format(steering_angle, throttle))
    send_control(steering_angle, throttle)
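Note that only the steering angle comes from the network; the throttle is a hand-tuned heuristic that eases off as the speed term in the formula grows and cuts power sharply during hard turns. Below is a minimal standalone sketch of that rule with a few illustrative inputs (the sample values are made up):

def heuristic_throttle(speed, steering_angle):
    """Reproduce the throttle rule from telemetry(): back off at high speed and in sharp turns."""
    boost = 1 - speed / 30.2 + 0.3        # larger when slow, shrinks as speed grows
    throttle = boost if boost < 1 else 1  # cap throttle at 1
    if abs(steering_angle) > 0.3:         # sharp turn -> cut throttle to 20%
        throttle *= 0.2
    return throttle

for speed, angle in [(5.0, 0.0), (20.0, 0.0), (29.0, 0.0), (20.0, 0.5)]:
    print("speed {:4.1f}, angle {:+.2f} -> throttle {:.2f}".format(
        speed, angle, heuristic_throttle(speed, angle)))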
This part handles initialization when the car-sim simulator first connects to the network model.
A handler is registered for the connect event, and initialization is done by calling send_control with both parameters set to 0.
The send_control method simply forwards the parameters it receives to car-sim.
@sio.on('connect')
def connect(sid, environ):
    print("connect ", sid)
    send_control(0, 0)

def send_control(steering_angle, throttle):
    sio.emit("steer", data={
        'steering_angle': steering_angle.__str__(),
        'throttle': throttle.__str__()
    }, skip_sid=True)
2-5. main function
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Remote Driving')
    parser.add_argument('model', type=str,
                        help='Path to model definition json. Model weights should be on the same path.')
    args = parser.parse_args()
    with open(args.model, 'r') as jfile:
        model = model_from_json(json.load(jfile))

    model.compile("adam", "mse")
    weights_file = args.model.replace('json', 'h5')
    model.load_weights(weights_file)

    # wrap Flask application with engineio's middleware
    app = socketio.Middleware(sio, app)

    # deploy as an eventlet WSGI server
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)
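The script expects the model architecture in a .json file and the weights in a .h5 file at the same path. For reference, here is a minimal sketch of how such a pair could be produced on the training side; the tiny placeholder network below is an assumption for illustration only, not the architecture from the repository, and the Keras 1.x-style layer API matches the era of this project:

import json
from keras.models import Sequential
from keras.layers import Convolution2D, Flatten, Dense

# Placeholder network (illustrative only) with the same 128x128x3 input that drive.py feeds in.
model = Sequential()
model.add(Convolution2D(16, 3, 3, activation='relu', input_shape=(128, 128, 3)))
model.add(Flatten())
model.add(Dense(1))            # single output: the steering angle
model.compile('adam', 'mse')

# ... train the model here ...

# drive.py calls json.load(...) before model_from_json, so the architecture
# string is itself dumped through json, and the weights are saved separately.
with open('model.json', 'w') as f:
    json.dump(model.to_json(), f)
model.save_weights('model.h5')

The simulator is then started in autonomous mode and the script is launched with python drive.py model.json; the weights file is found by replacing the json extension with h5.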