HI, MY NAME IS RUSSELL

View Resume View GitHub

2015 - 2019

Project Portfolio

Automated Colon Segmentation

This is Helmsley-funded research at the Qualcomm Institute at UCSD. Using U-Net and volume rendering algorithms, we have developed a system that can accurately segment the colon area from MRI scans and generate a manipulable 3D volume in virtual reality and the CAVE to help study Crohn's disease. The last milestone of this project is documented in this Research Paper. Also check out our Demo and our GitHub repository.
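To give a feel for how segmentation quality is typically scored in work like this, the sketch below computes the Dice coefficient between a predicted mask and a ground-truth mask. This is an illustrative plain-Python toy, not the project's actual evaluation code, and the masks are made up:

```python
def dice_coefficient(pred, truth):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks given as flat lists."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 1D "masks": 1 = colon voxel, 0 = background.
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 0, 1, 1]
print(dice_coefficient(pred, truth))  # 0.75
```

A Dice score of 1.0 means the prediction matches the ground truth exactly; segmentation papers usually report this averaged over a test set.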

Bodylogical AR

Bodylogical AR is a 3D visual analytics tool for HoloLens that I developed at the Qualcomm Institute for the Bodylogical team at PricewaterhouseCoopers (PwC). It supports rendering, manipulating and analyzing 10,000+ data points with high-dimensional features, along with a multi-user shared experience and dynamic file loading/exporting. Check out this quick overview, or read more in my article on Medium. Alternatively, you can watch this YouTube Demo on my Channel.

Calcflow AR

Calcflow AR is an augmented reality application for Magic Leap One that I developed at Nanome. Calcflow is originally a virtual reality application where users can study and visualize vector calculus in an interactive environment. During my internship at Nanome, I was the principal developer responsible for porting the application's core features to the Magic Leap One AR platform using Unity. In the app, users can generate and manipulate 3D graphs based on input parametric functions.

ARTEMIS - Remote Surgery

ARTEMIS stands for Augmented Reality Technology Enabled reMote Integrated Surgery, a research project to help the Naval Medical Center develop an Augmented Reality telementorship platform. I work on 3D localization and tracking of users' hands, heads and eyes. Check out our Research Symposium for more information. I also work on the user interaction design aspect of the project. View our Design Site to learn more about our current progress.

DeepMotion

DeepMotion is a novel solution to the handwriting-to-text task. The system recognizes handwritten letters based on pen motion. We built it with an Arduino Uno R3, MPU9250 motion sensors, and our own 3D-printed molds. With 5,000+ self-collected temporal motion data samples, we explored different neural network models using TensorFlow, including a CNN and an RNN with LSTM. The details of this project are documented in this Research Paper, or you can check out this project on its GitHub repository.
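As a rough sketch of the kind of preprocessing temporal sensor data needs before it reaches a CNN or LSTM, the snippet below slices a raw motion stream into fixed-length overlapping windows. This is illustrative only; the window size and stride are invented for the example, not the parameters we actually used:

```python
def sliding_windows(samples, window=4, stride=2):
    """Split a sensor stream into fixed-length overlapping windows."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]

# Toy accelerometer magnitudes from one pen stroke.
stream = [0.1, 0.4, 0.9, 1.2, 0.8, 0.3, 0.2, 0.1]
for w in sliding_windows(stream):
    print(w)
```

Each window then becomes one training example, so a single stroke recording yields several samples with shared temporal context.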

HoloBear Tour

HoloBear Tour is an augmented reality experience that guides you through UCSD's Stuart Collection statue, Hawkinson's Bear. The purpose is to explore embedded visualization in the context of a mixed reality environment, research I conducted with Dr. Ali Sarvghad at the Ubiquitous Computing Lab. I have published a more elaborate introduction to this project on Medium. You may also check out this Demo Video to get a taste of how it actually works. Find this project on GitHub.

Re: Art

Re: Art is an award-winning web app that uses a convolutional neural network to transform a user-uploaded video into an animated artwork, as if it were drawn by a great artist like Van Gogh. We built this app at SD Hacks 2016, making it the first video style transfer web app on the internet, and it won the Top Prize at SD Hacks. Check out our project on Devpost or my GitHub to see how it works! We also received a Press Mention in CSE News.
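At the heart of neural style transfer is the Gram matrix of feature activations, whose entries measure correlations between feature channels; matching the Gram matrices of the stylized output and the artwork is what transfers the artist's style. A minimal plain-Python version of the computation (in the real app this runs over CNN feature maps, not tiny lists):

```python
def gram_matrix(features):
    """features: list of flattened feature channels; returns channel-to-channel correlations."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

# Two toy feature channels sampled at four spatial positions.
channels = [[1.0, 0.0, 2.0, 1.0],
            [0.0, 1.0, 1.0, 0.0]]
print(gram_matrix(channels))  # [[6.0, 2.0], [2.0, 2.0]]
```

Because the Gram matrix discards spatial layout and keeps only channel correlations, it captures texture and brushwork independently of where things appear in the frame.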

LowPoly ZenGarden

LowPoly ZenGarden is an interactive game demo with beautiful graphics and meditative music to help players relax and calm their minds. We developed the game using C++ and OpenGL with custom shaders. Riding a lonely boat on limpid low-poly water, players can explore the procedurally generated terrain, appreciate the snow particle effects and capture great moments with the depth-of-field camera. Feel free to check out our Demo and enjoy a peaceful break. See the GitHub page for details.
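Procedural terrain like the demo's is often generated by midpoint displacement: repeatedly insert jittered midpoints between existing height samples, halving the jitter at each level. The toy 1D version below is only an illustration of the idea, not the demo's actual generator (which runs in C++ on a 2D grid):

```python
import random

def midpoint_displace(heights, roughness, rng):
    """One refinement pass: insert a jittered midpoint between each adjacent pair."""
    out = []
    for a, b in zip(heights, heights[1:]):
        out.append(a)
        out.append((a + b) / 2 + rng.uniform(-roughness, roughness))
    out.append(heights[-1])
    return out

rng = random.Random(42)           # fixed seed for a repeatable ridge line
terrain = [0.0, 0.0]
for level in range(3):            # each pass doubles the detail, halves the jitter
    terrain = midpoint_displace(terrain, 1.0 / (2 ** level), rng)
print(len(terrain))  # 9
```

Halving the displacement per level is what gives the result its natural, self-similar roughness; a fixed seed makes the same terrain reproducible across runs.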

PixelBase

PixelBase is a blockchain-based web app that provides a publicly verifiable record of image copyright. We designed it to make full use of blockchain's decentralized nature so that copyright records cannot be modified after the fact. I also extended the verification process by implementing similar-image recognition using OpenCV. The app was developed at CalHacks 2.0 at UC Berkeley. You can find more about PixelBase in our submission on Devpost.
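To give a feel for why a blockchain-backed record is tamper-evident, here is a toy hash chain using only Python's standard library. It is a drastic simplification of a real blockchain, and the field names are invented for illustration:

```python
import hashlib
import json

def add_record(chain, image_hash, owner):
    """Append a copyright record whose hash commits to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "image": image_hash, "owner": owner}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = []
add_record(chain, "a1b2c3", "alice")
add_record(chain, "d4e5f6", "bob")
# Each block's hash covers the previous block's hash, so editing an
# earlier record would break every later link in the chain.
print(chain[1]["prev"] == chain[0]["hash"])  # True
```

Since rewriting one record invalidates all records after it, tampering is detectable by anyone who re-verifies the chain.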

GroundCrew VR

GroundCrew VR is a virtual reality game for HTC Vive that we designed for the San Diego Air and Space Museum, a project I collaborated on with the VR Club at UCSD. Although it is still in development, players can either explore freely in the Gallery scene to interact with real aircraft, or jump into the Ground Crew scene to challenge the levels shown in our Game Play Demo. We plan to deploy the application at the museum once it's completed. Check out the GitHub page for more.

Dream Journal

DreamJournal is a web app that creates multimedia AI-generated journals using Microsoft Cognitive Services, the Facebook API and Amper Music. It imports the user's photos from their Facebook timeline and generates a conversational interaction to understand the story behind the pictures. It then uses a pre-trained deep learning model to generate compelling media that preserve the user's memories. Feel free to view our submission on Devpost for LA Hacks 2017.

SaitoGemu VR

SaitoGemu VR is an action-packed virtual reality horror game inspired by the Playtest episode of Black Mirror. You are trapped in a graveyard full of different kinds of skeleton zombies with nothing in your hands, and you must do your best to find all the components needed to solve the puzzle. You may find guns left behind by a mysterious person, but your fists can be weapons as well. Can you escape in the end? See our Game Play Demo on YouTube or check out the GitHub page.

Progressive Meshes

Progressive Meshes is a naive implementation of progressive meshes and surface simplification based on Quadric Error Metrics. The details are documented in my blog here. You may also check out a Video Demo of this program. The results are rendered using OpenGL.
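In Quadric Error Metrics simplification, each face contributes a plane a·x + b·y + c·z + d = 0 (with a unit normal), and a vertex's error is the sum of its squared distances to the planes of its incident faces; edge collapses are ordered by how little error they add. A bare-bones Python sketch of the error term only (illustrative; a real implementation accumulates 4×4 quadric matrices instead of looping over planes):

```python
def plane_error(vertex, plane):
    """Squared distance from vertex (x, y, z) to a unit-normal plane (a, b, c, d)."""
    x, y, z = vertex
    a, b, c, d = plane
    dist = a * x + b * y + c * z + d
    return dist * dist

def quadric_error(vertex, planes):
    """Sum of squared distances to all planes incident on the vertex."""
    return sum(plane_error(vertex, p) for p in planes)

# A vertex lying on the plane z = 0 but one unit away from the plane x = 1.
planes = [(0.0, 0.0, 1.0, 0.0), (1.0, 0.0, 0.0, -1.0)]
print(quadric_error((0.0, 0.0, 0.0), planes))  # 1.0
```

Summing squared plane distances is what lets the simplifier collapse edges in flat regions almost for free while preserving sharp features, where any movement is expensive.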

ABOUT ME

Wanze (Russell) Xie

Stanford University

Computer Science Department

Hi there! This is Russell. I am a graduate student in Computer Science at Stanford, specializing in Artificial Intelligence and Computer Graphics.

I earned a B.S. magna cum laude in CS from UC San Diego, and I was a staff researcher at the Qualcomm Institute at Calit2. My most recent work focused on reconstructing 3D colon models from MR Enterography scans and automating medical image segmentation using deep learning.

I am also an enthusiast for VR/AR/XR! Although this emerging technology is still at an early stage in today's consumer market, it won't be long before it becomes part of our daily routine, given its huge potential in entertainment, education and healthcare. I hope to invest my career in this promising field and create revolutionary experiences that augment the way people learn and work.

Beyond my career in technology, life to me is more about travel and art than code and research. I keep updating my sketches and color paintings on DeviantArt, where you can find my work at russellxie. Recently, I have been compiling a blog that collects my photographs and memories from last year's stay in Europe and my upcoming travel to Australia. It will soon be available under the Blog section.

Keep smiling :)