CSE Doctoral Student Seminar: Zihao Deng and Hang Yan

Dec 15, 2017
12:30 p.m. to 2 p.m.
Lopata Hall, Room 101

"Efficient Algorithm for Semantic Role Labeling"

Zihao Deng
Adviser: Brendan Juba

Semantic role labeling is a critical step in the typical natural language processing pipeline. To produce coherent solutions, a successful new model was developed that simultaneously learns the semantic labels of words and their relations, rather than relying on the sequential pipeline. This model is based on the metric labeling problem, which has the flexibility to incorporate background verbal knowledge from other texts.
However, the metric labeling problem is known to be computationally intractable in general, and no fast algorithm exists for this specific model. To solve it accurately and more efficiently, we formulate it as a semidefinite programming (SDP) hierarchy and round the Boolean solution using a spectral algorithm. The SDP hierarchy is a novel convex relaxation that can be solved in polynomial time, and many recent results suggest it can yield polynomial-time algorithms achieving arbitrary accuracy for a wide range of NP-hard problems. A spectral algorithm is a fast rounding procedure that computes solution vectors from a matrix associated with the problem, and it has recently found use in many important fields. Our algorithm runs the SDP hierarchy to compute a high-degree covariance matrix of the solution vectors and factors an optimal solution vector out of this matrix. Since the number of solutions to semantic role labeling is small by nature, the built-in randomness of our spectral algorithm has a high probability of hitting one of the desired solutions. Therefore, our algorithm can output an accurate result with higher efficiency.
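The relax-then-round pipeline described above can be illustrated, in a much-simplified form, on a toy two-label problem. The sketch below is an assumption-laden analogue, not the speaker's algorithm: it uses cvxpy (assumed available) for a degree-2 SDP relaxation, and rounds by randomly perturbing the leading eigenvector of the resulting moment matrix; the matrix W, the problem size n, and the perturbation scheme are all placeholders rather than real semantic-role-labeling data.

# Illustrative sketch only: degree-2 SDP relaxation of a two-label assignment
# problem, rounded by reading a solution vector off the moment (covariance)
# matrix. W is a toy pairwise-agreement matrix, not SRL data.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 8                                   # number of variables (toy size)
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                       # symmetric pairwise scores

# X approximates the second-moment matrix E[x x^T] of Boolean labels
# x in {-1, +1}^n.
X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.trace(W @ X))
constraints = [cp.diag(X) == 1]         # x_i^2 = 1 for Boolean variables
cp.Problem(objective, constraints).solve()  # default SDP solver (e.g. SCS)

# Spectral rounding: factor a candidate solution vector out of the moment
# matrix via its leading eigenvector, then add small random perturbations so
# the procedure has a chance of hitting each of the few good labelings.
eigvals, eigvecs = np.linalg.eigh(X.value)
v = eigvecs[:, -1]                      # leading eigenvector
best, best_val = None, -np.inf
for _ in range(20):
    x = np.sign(v + 0.1 * rng.standard_normal(n))
    val = x @ W @ x
    if val > best_val:
        best, best_val = x, val
print("rounded labeling:", best)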


"Robust IMU Double Integration"

Hang Yan
Adviser: Tao Ju

In this work we propose a novel data-driven approach to inertial navigation, which estimates trajectories of natural human motions using only the inertial measurement unit (IMU) found in every smartphone. The key observation is that human motions are repetitive and consist of a few major modes (e.g., standing, walking, or turning). Our algorithm learns to regress a velocity vector from the history of linear accelerations and angular velocities, then uses it to correct low-frequency bias in the linear accelerations. The corrected linear accelerations are integrated twice to estimate positions. We have acquired training data with ground-truth motions across multiple human subjects and multiple phone placements (e.g., in a bag or held in a hand). Qualitative and quantitative evaluations demonstrate that our algorithm produces results comparable to full visual-inertial navigation. To our knowledge, this work is the first to integrate sophisticated machine learning techniques with inertial navigation, potentially opening up a new line of research in data-driven inertial navigation.
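The correct-then-double-integrate step can be made concrete with a small numerical sketch. The snippet below is a hedged illustration under toy assumptions, not the paper's pipeline: the learned velocity regressor is replaced by a placeholder, the data are synthetic 1-D signals, and the bias estimate is simply a smoothed derivative of the discrepancy between integrated and regressed velocity.

# Illustrative sketch only: remove a low-frequency acceleration bias using a
# regressed velocity, then integrate twice for position. 1-D synthetic data.
import numpy as np

dt = 0.01                               # assumed 100 Hz IMU rate
t = np.arange(0, 10, dt)
true_acc = 0.5 * np.sin(t)              # toy ground-truth acceleration
bias = 0.05                             # slowly varying sensor bias (constant here)
noise = 0.02 * np.random.default_rng(0).standard_normal(t.size)
measured_acc = true_acc + bias + noise

def regressed_velocity(acc_history):
    """Placeholder for the learned regressor: returns the noise-free
    integral of the true acceleration instead of a network prediction."""
    return np.cumsum(true_acc) * dt

# Velocity from raw integration drifts because of the bias term.
raw_vel = np.cumsum(measured_acc) * dt
reg_vel = regressed_velocity(measured_acc)

# Estimate the low-frequency bias as the smoothed derivative of the velocity
# discrepancy, then subtract it from the measured accelerations.
window = 200                            # ~2 s moving average (assumed)
kernel = np.ones(window) / window
bias_est = np.convolve(np.gradient(raw_vel - reg_vel, dt), kernel, mode="same")
corrected_acc = measured_acc - bias_est

# Double integration: acceleration -> velocity -> position.
pos = np.cumsum(np.cumsum(corrected_acc) * dt) * dt
true_pos = np.cumsum(np.cumsum(true_acc) * dt) * dt
raw_pos = np.cumsum(np.cumsum(measured_acc) * dt) * dt
print("position drift  raw: %.2f m   corrected: %.2f m"
      % (abs(raw_pos[-1] - true_pos[-1]), abs(pos[-1] - true_pos[-1])))

Even in this toy setting, the uncorrected double integration accumulates meter-scale drift over ten seconds, while removing the estimated bias keeps the error small, which is the intuition behind correcting accelerations with a regressed velocity before integrating.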