MIT News - Electrical Engineering & Computer Science (eecs)
http://news.mit.edu/topic/mitelectrical-engineering-computer-science-eecs-rss.xml
MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
en | Fri, 18 Oct 2019 00:00:00 -0400

Computer science in service of medicine
http://news.mit.edu/2019/kristy-carpenter-student-medicine-1018
Senior Kristy Carpenter aims to leverage artificial intelligence and other computational tools to develop new, more affordable drugs.
Fri, 18 Oct 2019 00:00:00 -0400 | Shafaq Patel | MIT News correspondent

<p>MIT’s <a href="https://web.mit.edu/facilities/construction/completed/stata.html">Ray and Maria Stata Center</a> (Building 32), known for its striking outward appearance, is also designed to foster collaboration among the people inside. Sitting in the famous building’s amphitheater on a brisk fall day, Kristy Carpenter smiles as she speaks enthusiastically about how interdisciplinary efforts between the fields of computer science and molecular biology are helping accelerate the process of drug discovery and design.</p> <p>Carpenter, an MIT senior with a joint major in&nbsp;both subjects, says she didn’t want to specialize in only one or the other — it’s the intersection between both disciplines, and the application of that work to improving human health, that she finds compelling.</p> <p>“For me, to be really fulfilled in my work as a scientist, I want to have some tangible impact,” she says.&nbsp;</p> <p>Carpenter explains that artificial intelligence, which can help predict which compounds would work best in a particular drug, can reduce trial-and-error time and ideally quicken the process of designing new medicines.</p> <p>“I feel like helping make drugs in a more efficient manner, or coming up with some new medicine or way to tackle cancer or
Alzheimer’s or something, would really make me feel fulfilled,” she says.</p> <p>In the future, Carpenter hopes to get a PhD and pursue computational approaches to biomedicine, perhaps at one of the national laboratories or the National Institutes of Health. She also plans to continue advocating for diversity and inclusion in science, technology, engineering, and mathematics (STEM) throughout her career, drawing in part from her experiences as part of the leadership of the MIT chapter of the American Indian Science and Engineering Society (<a href="http://mit.edu/aises/www/">AISES</a>) and the <a href="http://wilg.mit.edu/">MIT Women’s Independent Living Group</a>.</p> <p><strong>Finding her niche in STEM</strong></p> <p>Carpenter was first drawn to computer science and coding in middle school. She recalls becoming engrossed in a program called <a href="https://scratch.mit.edu/">Scratch</a>, spending hours in the computer lab playing with the block-based visual programming language, which, as it happens, was developed at MIT’s Media Lab.</p> <p>As an MIT student, Carpenter found her way into the computational biology major after a summer internship at Lawrence Livermore National Laboratory, where researchers were using computer simulations and physics to look at a particular protein implicated in tumors.</p> <p>Next, she got hooked on using computational biology for drug discovery and design during her sophomore year, as an intern at Massachusetts General Hospital.
There, she learned that developing a new drug can be a very long, tedious, and&nbsp;complicated process that can take years, but that using machine learning and screening drugs virtually can help hasten this process.&nbsp;She followed that internship with an Undergraduate Research Opportunities Program (UROP) project in the lab of Professor Collin Stultz, within the MIT Research Laboratory of Electronics.</p> <p><strong>Building community </strong></p> <p>For Carpenter, who is part Japanese-American and part Alaskan Native and grew up outside of Seattle, the fact that there were Native American students at MIT, albeit just about a dozen of them, was an important factor in deciding where to attend college.&nbsp;</p> <p>Soon after Carpenter was admitted, a senior from MIT’s AISES chapter called her and told her about the organization.&nbsp;</p> <p>“They sort of recruited me before I even came here,” she recalls.&nbsp;</p> <p>Carpenter is now the vice president of the chapter. The people in the organization, which Carpenter describes as a cultural group at MIT, have become her close friends.&nbsp;</p> <p>“AISES has been a really important part of my time here,” Carpenter says. “At MIT, it’s mostly about having a community of Native students since it’s very easy for us to get isolated here. It’s hard to find people of a similar background, and so AISES is a place where we can all gather just to hang out, socialize, check in with each other.”</p> <p>The organization also puts on movie screenings and other events to “show that we exist and that there are Native people at MIT because a lot of people forget that.”</p> <p>Carpenter first became a member of the national AISES organization as a high school student, when she and her father made serious efforts to reconnect with their Alutiiq heritage. 
She began educating herself more about the history of Alaska Natives on Kodiak Island and learning the Alutiiq language, which is severely endangered — only a couple hundred people still speak it, and even fewer speak it fluently.&nbsp;</p> <p>Carpenter started to teach herself the language and then took an online class in high school through Kodiak College.&nbsp;She says she learned the basics and can manage simple sentences and personal introductions.</p> <p>“I feel like learning the language was one of the best ways to connect to my culture and sort of legitimize myself in a way.&nbsp;Also, I knew it was important to keep the culture around,” she says.&nbsp;“I would always be telling my friends about it and trying to teach them what I was learning.”</p> <p>Carpenter has also built her MIT community through the Women’s Independent Living Group, one of the few all-women housing options at the Institute. She joined the group of about 40 women in the spring semester of her sophomore year.</p> <p>“I really appreciate the group because there’s a lot of diversity in major and diversity in [graduation] year,” she says. “The living group is meant to be a strong community of women at MIT.”</p> <p>Carpenter is now the president of the living group, which has been a significant source of support for her. When she was trying to increase her iron intake so she could donate blood, her friends in the living group helped cook meals and cheered her on.</p> <p>Carpenter also hopes to rise in the ranks at the organizations where she ends up working after MIT, taking a leadership role in advocating for diversity, equity, and inclusion.</p> <p>“I don’t want to lose sight of where I came from or my heritage or being a woman in STEM,” Carpenter says.
“Wherever I end up working, I hopefully will move up and keep my Native and Asian identity visible, to be an example for others.”</p>

Kristy Carpenter. Image: Jared Charney
Topics: Students, Profile, Undergraduate, Electrical Engineering & Computer Science (eecs), Biology, Medicine, Health care, Drug development, Diversity and inclusion, Women in STEM, Artificial intelligence, Machine learning, Computer science and technology

Recovering “lost dimensions” of images and video
http://news.mit.edu/2019/model-lost-data-images-video-1016
Model could recreate video from motion-blurred images and “corner cameras,” may someday retrieve 3D data from 2D medical images.
Wed, 16 Oct 2019 00:00:01 -0400 | Rob Matheson | MIT News Office

<p>MIT researchers have developed a model that recovers valuable data lost from images and video that have been “collapsed” into lower dimensions.</p> <p>The model could be used to recreate video from motion-blurred images, or from new types of cameras that capture a person’s movement around corners but only as vague one-dimensional lines. While more testing is needed, the researchers think this approach could someday be used to convert 2D medical images into more informative — but more expensive — 3D body scans, which could benefit medical imaging in poorer nations.</p> <p>“In all these cases, the visual data has one dimension — in time or space — that’s completely lost,” says Guha Balakrishnan, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and first author on a paper describing the model, which is being presented at next week’s International Conference on Computer Vision.
“If we recover that lost dimension, it can have a lot of important applications.”</p> <p>Captured visual data often collapses multiple dimensions of time and space into one or two dimensions, called “projections.” X-rays, for example, collapse three-dimensional data about anatomical structures into a flat image. Or, consider a long-exposure shot of stars moving across the sky: The stars, whose positions change over time, appear as blurred streaks in the still shot.</p> <p>Likewise, “<a href="http://news.mit.edu/2017/artificial-intelligence-for-your-blind-spot-mit-csail-cornercameras-1009">corner cameras</a>,” recently invented at MIT, detect moving people around corners. These could be useful for, say, firefighters finding people in burning buildings. But the cameras aren’t exactly user-friendly. Currently they produce only projections that resemble blurry, squiggly lines, corresponding to a person’s trajectory and speed.</p> <p>The researchers invented a “visual deprojection” model that uses a neural network to “learn” patterns that match low-dimensional projections to their original high-dimensional images and videos. Given new projections, the model uses what it has learned to recreate all the original data from a projection.</p> <p>In experiments, the model synthesized accurate video frames showing people walking, by extracting information from single, one-dimensional lines similar to those produced by corner cameras. The model also recovered video frames from single, motion-blurred projections of digits moving around a screen, from the popular <a href="http://www.cs.toronto.edu/~nitish/unsupervised_video/">Moving MNIST</a> dataset.</p> <p>Joining Balakrishnan on the paper are: Amy Zhao, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and CSAIL; EECS professors John Guttag, Fredo Durand, and William T.
Freeman; and Adrian Dalca, a faculty member in radiology at Harvard Medical School.</p> <p><strong>Clues in pixels</strong></p> <p>The work started as a “cool inversion problem” to recreate movement that causes motion blur in long-exposure photography, Balakrishnan says. A projection’s pixels still contain clues about the high-dimensional source.</p> <p>Digital cameras capturing long-exposure shots, for instance, essentially aggregate photons over a period of time at each pixel. In capturing an object’s movement over time, the camera takes the average value of the movement-capturing pixels. Then, it applies those average values to corresponding heights and widths of a still image, which creates the signature blurry streaks of the object’s trajectory. From variations in pixel intensity, the movement can theoretically be recreated.</p> <p>As the researchers realized, the same problem arises in many areas: X-rays, for instance, capture height, width, and depth information of anatomical structures, but they use a similar pixel-averaging technique to collapse depth into a 2D image. Corner cameras — invented in 2017 by Freeman, Durand, and other researchers —&nbsp;capture reflected light signals around a hidden scene that carry two-dimensional information about a person’s distance from walls and objects. The pixel-averaging technique then collapses that data into a one-dimensional video — basically, measurements of different lengths over time in a single line.
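The pixel averaging described above is easy to make concrete. The following NumPy sketch is not the researchers’ code, and the array shapes and random data are purely illustrative; it only shows how averaging along one axis collapses a video into a long-exposure-style still, a volume into an X-ray-like flat image, and each video frame into a corner-camera-style 1D line:

```python
import numpy as np

# Hypothetical grayscale video: 24 frames of a 64x64 scene,
# axes ordered (time, height, width), values in [0, 1].
rng = np.random.default_rng(0)
video = rng.random((24, 64, 64))

# Long-exposure "projection": averaging over time collapses the
# time dimension into a single blurred still.
long_exposure = video.mean(axis=0)      # shape (64, 64)

# X-ray-like projection: averaging a 3D volume (depth, height,
# width) over depth collapses it into a flat 2D image.
volume = rng.random((32, 64, 64))
xray_like = volume.mean(axis=0)         # shape (64, 64)

# Corner-camera-like projection: collapsing one spatial axis of
# each frame leaves a single 1D line per time step.
line_per_frame = video.mean(axis=1)     # shape (24, 64)
```

Deprojection is the inverse problem: given only `long_exposure` or `line_per_frame`, infer plausible high-dimensional signals whose averages match the observed projection.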
</p> <p>The researchers built a general model, based on a convolutional neural network (CNN) —&nbsp;a machine-learning model that’s become a powerhouse for image-processing tasks — that captures clues about any lost dimension in averaged pixels.</p> <p><strong>Synthesizing signals</strong></p> <p>In training, the researchers fed the CNN thousands of pairs of projections and their high-dimensional sources, called “signals.” The CNN learns pixel patterns in the projections that match those in the signals. Powering the CNN is a framework called a “variational autoencoder,” which evaluates how well the CNN’s outputs match its inputs in terms of statistical probability. From that, the model learns a “space” of all possible signals that could have produced a given projection. This creates, in essence, a type of blueprint for how to go from a projection to all possible matching signals.</p> <p>When shown previously unseen projections, the model notes the pixel patterns and follows the blueprints to all possible signals that could have produced that projection. Then, it synthesizes new images that combine all data from the projection and all data from the signal. This recreates the high-dimensional signal.</p> <p>For one experiment, the researchers collected a dataset of 35 videos of 30 people walking in a specified area. They collapsed all frames into projections that they used to train and test the model. From a hold-out set of six unseen projections, the model accurately recreated 24 frames of the person’s gait, down to the position of their legs and the person’s size as they walked toward or away from the camera. The model seems to learn, for instance, that pixels that get darker and wider with time likely correspond to a person walking closer to the camera.</p> <p>“It’s almost like magic that we’re able to recover this detail,” Balakris