• Hi!
    I'm Lily Hsu

I am interested in machine learning & robotic vision.

    This is my CV

  • I am
    a Programmer

A Python & C++ developer, skilled in ROS.

    View My GitHub Repo

About Me

Hi, I'm Lily Hsu!

I am interested in robotics, machine learning, and computer vision. During my graduate research, I concentrated on simulation and robotic vision, especially the reality gap problem of virtual image datasets. I also apply these skills to robotic arms and unmanned ground vehicles (UGVs) to tackle automation problems.

Beyond robotics, I always keep a positive attitude toward learning new skills, hoping to keep up with the latest technology trends.

My dream is to use robots to make the world a better place.

Education

  • M.S. in Graduate Degree Program of Robotics, National Chiao Tung University (NCTU), Taiwan.
    (2019/06~present)
  • B.S. in Electrical and Computer Engineering (ECE), National Chiao Tung University (NCTU), Taiwan.
    (2015/09~2019/06)
  • National Hsinchu Girls' Senior High School (HGSH).
    (2012/09~2015/06)
My Specialty

    Professional Skills

    Languages

    Chinese (native), English (fluent)

    Programming

    C/C++, C#, Java, Python

    Middleware, Libraries and others

    Robot Operating System (ROS), OpenCV, PCL (Point Cloud Library), PyTorch

    Simulation

    Gazebo, Unity

    Software

    SketchUp, SolidWorks, 3D Builder, MeshLab, Photoshop

    Experience

    Teaching Experience

    Teaching Assistant

  • Creative Software Project (2017)
  • Human Centric Computing (2019)
  • Introduction to Artificial Intelligence (2019)
  • Sensing and Intelligent Systems (2020)
    Research Experience

    Duckietown – Robotics Education & Demo Experience 2017-2019


    I have learned a lot from Duckietown since my junior year, when I started working on the project with Professor Nick Wang. The Duckiebot was my first robot: a self-driving car platform for education, research, and outreach. Along the way I picked up the basics of self-driving cars, ROS (Robot Operating System), and OpenCV.
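
    As a flavor of that work, here is a minimal sketch of a Duckiebot-style ROS node that subscribes to the camera and masks lane-colored pixels with OpenCV. The topic name and HSV thresholds are illustrative, not the actual Duckietown configuration.

    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    bridge = CvBridge()

    def on_image(msg):
        # Convert the ROS image message into an OpenCV BGR frame
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Mask roughly-white pixels (illustrative range for lane lines)
        mask = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
        rospy.loginfo_throttle(1.0, "lane pixels: %d" % cv2.countNonZero(mask))

    rospy.init_node("lane_mask")
    rospy.Subscriber("/camera/image_raw", Image, on_image)
    rospy.spin()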

    2019 Sydney RoboCup

    In July 2019, I also traveled to Sydney to take part in RoboCup as an exhibitor promoting Duckietown. I gained a lot of valuable experience running demos for students and visitors without a background in robotics or self-driving cars.

    Duckietown – Simulation 2017-2018


    During my junior-year project, I used Gazebo to build a virtual environment in which the Duckiebot could complete a driving test on its own. From this project, I learned how to run my system in Gazebo and how to build both the robot model and the map with SolidWorks and SketchUp.
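
    For a sense of how such a model ends up in the simulator, the sketch below loads an exported SDF model into a running Gazebo world through the standard gazebo_ros spawn service; the file and model names are placeholders rather than my actual project files.

    import rospy
    from gazebo_msgs.srv import SpawnModel
    from geometry_msgs.msg import Pose

    rospy.init_node("spawn_map")
    # gazebo_ros exposes this service while a world is running
    rospy.wait_for_service("/gazebo/spawn_sdf_model")
    spawn = rospy.ServiceProxy("/gazebo/spawn_sdf_model", SpawnModel)

    # An SDF model, e.g. one converted from a SketchUp/SolidWorks mesh
    with open("duckietown_map.sdf") as f:
        model_xml = f.read()

    spawn(model_name="duckietown_map", model_xml=model_xml,
          robot_namespace="", initial_pose=Pose(), reference_frame="world")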

    Pick-and-Place System – Virtual Data (Unity) 2018-2019


    In my senior year, I joined a pick-and-place team in our lab. Our goal was to have a robot arm pick up an object and place it on the correct shelf, front side facing out, by detecting its brand name.


    On this team, I was responsible for building the object models and generating the virtual datasets; virtual data greatly improves the efficiency of collecting training data for the FCN model. Our paper was published at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), and this is our website: Reasoning Pose-aware Placements with Semantic Labels - Brandname-based Affordance Prediction and Cooperative Dual-Arm Active Manipulation.

    Pick-and-Place System – Synthetic Data, Fast Neural Style Transfer (FNST) & FCN Model 2019-2020


    To make virtual data more effective for detection, I applied fast neural style transfer to the virtual dataset from my prior work. When trained with the FCN model, the stylized synthetic dataset performed much better than the original virtual dataset and significantly reduced the amount of real data we needed: only a small real dataset was required for fine-tuning.
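
    A minimal sketch of this stylization step, assuming a pretrained fast-style-transfer network saved as a whole PyTorch module (the checkpoint path and dataset folders are placeholders):

    from pathlib import Path

    import torch
    from PIL import Image
    from torchvision import transforms

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Assumed checkpoint: a pretrained FNST transformer network saved whole
    style_net = torch.load("fnst_style_net.pth", map_location=device)
    style_net.eval()

    to_tensor = transforms.ToTensor()
    to_image = transforms.ToPILImage()

    out_dir = Path("synthetic_dataset")
    out_dir.mkdir(exist_ok=True)
    for path in sorted(Path("virtual_dataset").glob("*.png")):
        img = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        with torch.no_grad():
            # Push the rendered image toward the style of real photos
            stylized = style_net(img).clamp(0, 1)
        to_image(stylized.squeeze(0).cpu()).save(out_dir / path.name)

    A nice property of this step is that style transfer alters appearance but not geometry, so the original virtual-data annotations still align with the stylized images.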

    Pick-and-Place System for Industry & Mask R-CNN model 2020-2021


    In 2020, I led the team in extending our system to a factory assembly line for picking electronic components. We used a UR5 robotic arm and replaced the FCN model with Mask R-CNN, because Mask R-CNN produces better segmentation results and adds instance-level predictions, which are very useful for separating detections of the same object class.

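    To illustrate why the instance masks matter, here is a small sketch using torchvision's off-the-shelf Mask R-CNN; in our system the model would be fine-tuned on the component dataset rather than COCO, and the image path is a placeholder.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # COCO-pretrained Mask R-CNN; a fine-tuned checkpoint would replace this
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("workspace.jpg").convert("RGB")
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]

    # Unlike FCN's single per-class map, every detection carries its own
    # mask, so two components of the same class remain separable
    keep = pred["scores"] > 0.5
    for box, label, mask in zip(pred["boxes"][keep], pred["labels"][keep],
                                pred["masks"][keep]):
        instance_mask = mask[0] > 0.5  # binary mask for this one instance
        print(int(label), [round(v, 1) for v in box.tolist()])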

    Unity Virtual Dataset & Reality Gap 2020-2021


    My thesis uses Unity to generate a virtual dataset that overcomes the reality gap through domain randomization and automatically produces labeled images and annotations, for training a Mask R-CNN model to detect electronic components. The end result is a pick-and-place system that runs in the real world on this robotic vision alone, without any real-world data for training the model.

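    The randomization itself runs on the Unity side, but the core idea can be sketched as sampling fresh scene parameters for every rendered frame; all parameter names and ranges below are hypothetical.

    import random

    def sample_scene_parameters():
        # Hypothetical randomization ranges; the real values live in Unity
        return {
            "light_intensity": random.uniform(0.3, 1.5),
            "light_color_rgb": [random.uniform(0.7, 1.0) for _ in range(3)],
            "camera_jitter_deg": random.uniform(-10.0, 10.0),
            "background_texture": random.choice(["wood", "metal", "noise"]),
            "distractor_count": random.randint(0, 8),
        }

    # Rendering each frame under different parameters keeps the detector
    # from overfitting to a single virtual "look", narrowing the reality gap
    for frame_id in range(10000):
        params = sample_scene_parameters()
        # render_and_label(frame_id, params)  # performed by the Unity side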

    Get in Touch

    Contact

    thpss92093@gmail.com
    thpss92093.eed04@nctu.edu.tw