Inputting Image Data into TensorFlow for Unsupervised Deep Learning

Standard

Everyone is looking to TensorFlow to begin their deep learning journey. One issue for aspiring deep learners is that it is unclear how to use their own datasets. Tutorials go into great detail about network topology and training (rightly so), but most begin with, and never move beyond, the MNIST dataset. Even new models people create with TensorFlow, like this Variational Autoencoder, remain fixated on MNIST. While MNIST is interesting, people new to the field already have their sights set on more interesting problems with more exciting data (e.g. Learning Clothing Styles).

In order to help people more rapidly leverage their own data and the wealth of unsupervised models that are being created with TensorFlow, I developed a solution that (1) translates image datasets into a file structured similarly to the MNIST datasets (github repo) and (2) loads these datasets for use in new models.

Continue reading
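As a sketch of step (1), the snippet below packs images that are already loaded as a NumPy array into the big-endian IDX format that the MNIST files use (decoding the image files themselves, e.g. with PIL, is omitted, and the function names are mine, not necessarily those in the repo):

```python
import struct
import numpy as np

def write_idx_images(path, images):
    """Write a (N, rows, cols) uint8 array in the MNIST IDX3 image format."""
    images = np.asarray(images, dtype=np.uint8)
    n, rows, cols = images.shape
    with open(path, "wb") as f:
        # magic number 2051 = 0x00000803: unsigned byte data, 3 dimensions
        f.write(struct.pack(">IIII", 2051, n, rows, cols))
        f.write(images.tobytes())

def read_idx_images(path):
    """Read an IDX3 image file back into a (N, rows, cols) uint8 array."""
    with open(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an IDX3 image file"
        data = np.frombuffer(f.read(), dtype=np.uint8)
    return data.reshape(n, rows, cols)
```

A matching IDX1 label file differs only in its magic number (2049) and in having a single dimension.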

Visualizing deaf-friendly businesses

Standard

In order to prepare for the Insight Data Science program, I have been spending some time on acquiring/cleaning data, learning to use a database (MySQL) to store that data, and trying to find patterns. It is uncommon in academia to search for patterns in data in order to improve a company’s business, so I thought I should get some practice putting myself in that mindset. I thought an interesting idea would be to visualize the rated organizations from deaffriendly.com on a map of the U.S. to identify patterns and provide some insights for the Deaf community.
This could be useful for a variety of reasons:

  • We could get a sense of where in the U.S. the website is being used.
  • We could identify cities that receive low ratings, either because businesses are unaware of how to improve or because the residents of those cities have different rating thresholds. This could help calibrate reviews across the country.
  • We could identify regions in cities that do not receive high reviews to target those areas for outreach.
  • Further work to provide a visual version of the website could allow users to find businesses on the map in order to initiate the review process.

Deaf Friendly Map


Continue reading

Find your Dream Job!

Standard

A Recurrent Neural Network learns Indeed.com Job Postings

A few months ago Andrej Karpathy wrote an excellent introductory article on recurrent neural networks, The Unreasonable Effectiveness of Recurrent Neural Networks. With the article, he released code (and a larger version) that lets anyone train character-level language models. While RNNs have been around for a long time (Jeff Elman of UCSD Cognitive Science did pioneering work in the field), the current trend is to use deep learning techniques to implement architecturally different networks that attain higher performance, such as Long Short-Term Memory (LSTM) networks. Andrej demonstrated the model’s ability to learn the writing styles of Paul Graham and Shakespeare. He also demonstrated that the model can learn the structure of documents, allowing it to produce Wikipedia articles, LaTeX documents, and Linux source code.
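To make the character-level idea concrete, here is a minimal NumPy sketch of one forward step of a vanilla RNN over a toy vocabulary. This illustrates the mechanism only, and is not Karpathy's code; LSTMs replace the tanh update below with a gated cell:

```python
import numpy as np

def one_hot(index, size):
    """Encode a character index as a one-hot column vector."""
    v = np.zeros((size, 1))
    v[index] = 1.0
    return v

def rnn_step(x, h_prev, Wxh, Whh, Why, bh, by):
    """One vanilla RNN step: update hidden state, emit next-char probabilities."""
    h = np.tanh(Wxh @ x + Whh @ h_prev + bh)   # new hidden state
    y = Why @ h + by                           # unnormalized log-probabilities
    p = np.exp(y - y.max())                    # numerically stable softmax
    return h, p / p.sum()

# Toy setup: vocabulary from a string, small random weights
text = "hello world"
chars = sorted(set(text))
V, H = len(chars), 16
rng = np.random.default_rng(0)
Wxh = rng.normal(scale=0.01, size=(H, V))
Whh = rng.normal(scale=0.01, size=(H, H))
Why = rng.normal(scale=0.01, size=(V, H))
bh, by = np.zeros((H, 1)), np.zeros((V, 1))

# Feed the text through one character at a time
h = np.zeros((H, 1))
for ch in text:
    x = one_hot(chars.index(ch), V)
    h, p = rnn_step(x, h, Wxh, Whh, Why, bh, by)
```

After each step, `p` is a distribution over the vocabulary; sampling from it (and feeding the sample back in) is how the trained model generates text.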

Continue reading

Robust Principal Component Analysis via ADMM in Python

Standard

Principal Component Analysis (PCA) is an effective tool for dimensionality reduction, transforming high-dimensional data into a representation with fewer dimensions (although these new dimensions are not drawn from the original set). This new set of dimensions captures the variation within the high-dimensional dataset. How do you find this space? PCA is equivalent to finding the decomposition M = L + E, where L is a low-rank matrix (spanned by a small number of linearly independent vectors, our new dimensions) and E is a matrix of errors (corruption in the data) that the optimization minimizes. One assumption in this optimization problem, though, is that the corruption, E, is characterized by Gaussian noise [1].


Sparse but large errors affect the recovered low-dimensional space.
From: Introduction to RPCA

Continue reading

Low-cost Raspberry Pi robot with computer vision

Standard

The ever-decreasing cost of hardware and the rise of Maker culture are allowing hobbyists to take advantage of state-of-the-art tools in robotics and computer vision for a fraction of the price. During my informal public talk at San Diego’s Pint of Science event, “Machines: Train or be Trained”, I discussed this trend and showed off the results of a side project I had been working on. My aim was to create a robot that could act autonomously, had computer vision capabilities, and was affordable for researchers and hobbyists.

Continue reading

Cognitive Science to Data Science

Standard

I joined the UCSD Cognitive Science PhD program with the aim of investigating multi-agent systems. A few years in, I joined a project studying the interactions of bottlenose dolphins. The research group had a massive collection of audio and video recordings that was too big to handle without computational techniques, and I joined to provide that computational support. Along the way, I discovered that working with big data is motivating in its own right and that I wanted to pursue the data science path instead of staying in academia.

Continue reading

Detection Error Tradeoff (DET) curves

Standard

In July 2015, I attended DCLDE 2015 (Detection, Classification, Localization, and Density Estimation), a week-long workshop focusing on methods to improve the state of the art in bioacoustics for marine mammal research.

While I was there, I had a conversation with Tyler Helble about efforts to detect and classify blue whale and fin whale calls recorded off the coast of Southern California. While most researchers use Receiver Operating Characteristic (ROC) curves or Precision-Recall (PR) curves to display classifier performance, one alternative we discussed was the Detection Error Tradeoff (DET) curve [1]. DET curves can be a good choice for a binary classification problem between two species when you care about how the classifier errs on each species. They have been used several times in speech processing studies and, in the past, to examine classification results in marine bioacoustics [2].
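As a sketch, a DET curve plots the miss (false negative) rate against the false alarm rate as the decision threshold sweeps, with both axes on a normal-deviate scale. A minimal version might look like the following (function names are mine; the stdlib NormalDist stands in for a probit function):

```python
import numpy as np
from statistics import NormalDist

def det_curve(scores, labels):
    """Return (false_alarm_rate, miss_rate) at every score threshold.
    labels: 1 = target class, 0 = non-target; higher score = more target-like."""
    order = np.argsort(scores)[::-1]   # accept in descending score order
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)             # targets accepted at each cutoff
    fp = np.cumsum(1 - labels)         # non-targets accepted at each cutoff
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    fpr = fp / n_neg                   # false alarm rate
    fnr = 1 - tp / n_pos               # miss rate
    return fpr, fnr

def normal_deviate(rates, eps=1e-6):
    """Map rates onto the normal-deviate (probit) axis used for DET plots."""
    nd = NormalDist()
    return np.array([nd.inv_cdf(min(max(r, eps), 1 - eps)) for r in rates])
```

Plotting `normal_deviate(fpr)` against `normal_deviate(fnr)` turns a well-behaved classifier's tradeoff into a roughly straight line, which makes systems easier to compare than on ROC axes.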

Continue reading

Batch text file conversion with Python: {.pdf, .doc, .docx, .odt} → .txt

Standard

Attending conferences and presenting research are frequent events in the life of an academic. The organizing committees that plan these events have a lot on their plate. Even small conferences, such as the one I organized in 2015 (iSLC 2015), can be very demanding.

One thing that conference organizers have to consider is how to implement an article and/or abstract submission process that allows attendees, reviewers, and organizers to fluidly access and distribute these documents. Some of these services are free, while others are paid; some provide a better, more adaptive pipeline for the process than others.

An important feature of these abstract submission sites is allowing the tagging of abstracts so that organizers can appropriately distribute the content to the best reviewers.
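One way to sketch the batch conversion itself (this reflects common command-line tools and my own function names, not necessarily the post's approach) is to shell out to pdftotext for PDFs and to a headless LibreOffice for the word-processing formats; both tools are assumed to be installed:

```python
import subprocess
from pathlib import Path

def conversion_command(src, out_dir):
    """Build the command that converts one document to plain text.
    Assumes pdftotext (poppler-utils) and libreoffice are on the PATH."""
    src, out_dir = Path(src), Path(out_dir)
    ext = src.suffix.lower()
    if ext == ".pdf":
        return ["pdftotext", str(src), str(out_dir / (src.stem + ".txt"))]
    if ext in {".doc", ".docx", ".odt"}:
        return ["libreoffice", "--headless", "--convert-to", "txt",
                "--outdir", str(out_dir), str(src)]
    raise ValueError(f"unsupported extension: {ext}")

def batch_convert(in_dir, out_dir):
    """Convert every supported document under in_dir into out_dir."""
    for src in Path(in_dir).iterdir():
        if src.suffix.lower() in {".pdf", ".doc", ".docx", ".odt"}:
            subprocess.run(conversion_command(src, out_dir), check=True)
```

Splitting command construction from execution keeps the dispatch logic testable without the external tools present.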

Continue reading

AlexNet visualization

Standard

I’ve been diving into deep learning methods during the course of my PhD, which focuses on analyzing audio and video data to uncover patterns relevant to dolphin communication. I obtained an NVIDIA Titan Z through an academic hardware grant and installed Caffe to run jobs on the GPU using Python.

As a first step, I have been using Caffe’s implementation of AlexNet (original paper) to detect dolphins in video using R-CNN (by Ross B. Girshick) and to classify whale calls in audio (work presented at DCLDE 2015).


Continue reading

Interview Preparation

Standard

A few months ago I interviewed for a Software Engineering role at Google. Several friends have asked how I prepared for the interviews, so I thought I would share my preparation in the hopes that it will help them and others. First off, the recruiters will give you documents providing an overview of what’s expected; read them and get up to speed with all that information. But be warned: once you start the interview process (which is when you receive those documents), the interviews will come very quickly. With that in mind, it’s good to prepare early. Below are links to the things that helped me prepare:

Continue reading