
Recipients from FY17 TSG Cycle

Below are the recipients of the FY17 Transdisciplinary Seed Grant (TSG) awards, an internal funding mechanism:

Each entry below gives the name of the project, the team members, the co-sponsors (if applicable), and a project summary.

Los Angeles as a Lab for Environmental Culture and Science: A Public Media Collaboration

PI:  Allison Carruth - English (Humanities)


Co-PI:  Kristy Guevara-Flanagan - Film, TV and Digital Media (TFT)


Other Collaborators:

Jessica Cattelino - Anthropology and Gender Studies (Social Sciences)

Jon Christensen - History and IoES (Social Sciences)

Ursula Heise - English and IoES (Humanities)

Co-sponsors: N/A

The Laboratory for Environmental Narrative Strategies (LENS) launched in fall 2016 to incubate new research methods for studying the cultural dimensions of past and present environmental challenges and to produce innovative models of science communication and environmental storytelling for diverse public audiences. "Los Angeles as a Lab for Environmental Culture and Science" is a yearlong collaboration between LENS and KCET, Southern California's leading public television and digital media organization. Working in partnership with KCET editors and producers, LENS faculty and students will research, produce and publish a series of video shorts and digital stories about the complex histories of, and contemporary visions for, urban biodiversity, climate change, environmental equity, food and water sustainability and green space in the Los Angeles region. For the initial phase of the project, three teams will each work on one of three "storylines": (1) the biodiversity of Los Angeles past and present in the context of native and introduced species, conservation and traditional ecological knowledge, (2) the implications of climate change for different microclimates and communities in the L.A. basin and an investigation of different plans for addressing our climate challenges and (3) California's environmental vanguard and creative visions of L.A. as an "eco-city."

How Stories Live: Using Big Data to Understand the Diversity Dynamics of Folktales

PI:  Jacob Foster - Sociology (Social Sciences)


Co-PI:  Timothy Tangherlini - Scandinavian Section, Germanic Languages and Literatures / Asian Languages and Cultures (Humanities)


Other Collaborators:

Michael Alfaro - Ecology & Evolutionary Biology (Life Sciences)

Peter Broadwell - Digital Library Department (Young Research Library)

Co-sponsors: N/A

Storytelling is a fundamental form of cultural expression. Yet we know very little about the birth, life, and death of the stories we tell. Our project will develop generalizable methods to study when, where, and why new stories appear, and old stories vanish. Similar questions about the origination and extinction of species are standard in studying long-term biological diversity, and we draw directly on current computational approaches to biological diversity dynamics. As a test case, we will analyze a collection of ~30K Danish folktales. To use the biological methods, we first need to group these folktales into lineages. While human readers can recognize that two stories are part of a single lineage, it is impractical to do this manually for large collections. Hence, we will develop machine learning approaches to automatically create cultural lineages. We will then develop and fit efficient Bayesian statistical models of cultural lineage dynamics, building to models that incorporate competition, geography, and social or political drivers of diversity. While we tackle these challenges in the context of folktales, they are generic problems for the large-scale study of cultural diversity. Our solutions will put cultural diversity dynamics at the center of the emerging computational study of culture.
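
As a hedged illustration of the lineage-grouping step, the sketch below clusters a handful of invented tale summaries by TF-IDF similarity; agglomerative clustering stands in for the machine learning approach the project will develop, and the corpus and distance threshold are purely illustrative.

```python
# A minimal sketch of the lineage-grouping step, assuming folktale texts are
# available as plain strings. TF-IDF plus agglomerative clustering (requires
# scikit-learn >= 1.2 for the `metric` argument) stands in for the project's
# machine-learning approach; the tiny corpus and the distance threshold are
# purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

tales = [
    "A poor boy tricks a troll under the bridge and wins its gold.",
    "A clever lad outwits the troll beneath the old bridge and takes its treasure.",
    "A girl spins straw into gold for a strange little man who demands her child.",
    "A maiden must guess the name of the imp who spun her straw into gold.",
]

# Represent each tale as a TF-IDF vector over its vocabulary.
X = TfidfVectorizer(stop_words="english").fit_transform(tales).toarray()

# Group similar tellings into candidate lineages by cosine distance.
clusterer = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.85, metric="cosine", linkage="average"
)
lineages = clusterer.fit_predict(X)

for lineage, tale in zip(lineages, tales):
    print(lineage, tale)
```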

Reducing Homelessness in LA using Big Data and Predictive Modeling

PI:  Mark Handcock - Statistics (Physical Sciences)


Co-PI:  Till von Wachter - Economics (Social Sciences)

Other Collaborators:

Hunter Owens - City of Los Angeles (Information Technology Agency)

Co-sponsors: Clinical & Translational Science Institute

In this project we analyze how Big Data techniques can be used to predict outcomes of social processes and government services in a data rich environment. As many local and state governments do, the City of Los Angeles collects a vast array of different data on its population of poor, near-poor, and homeless individuals. The City has approached us to help it use this rich and complex data to better understand which individuals and families are likely to become homeless and which services are likely to work. We propose to combine state-of-the-art predictive methodologies based on machine learning with economic modeling to solve this prediction problem and provide an approach to better target services when outcomes are uncertain. This project is a natural collaboration between Statistics and Economics and will expand the frontier of uses of Big Data techniques. It will also create a lasting data infrastructure that will be available for future projects. The project addresses the need for public policy to harness increasingly available, complex Big Data-type sources. Hence, we expect that this unique collaboration with the City will yield a stream of projects going beyond this seed grant for which we will raise additional funding.
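
As a hedged illustration of the prediction component, the sketch below fits a gradient-boosted classifier to synthetic administrative-style records; every feature name and the data-generating process are invented, since the City's actual data are not public.

```python
# Hypothetical sketch: predicting homelessness risk from administrative-style
# records. All feature names, the synthetic outcome model, and the sample size
# are illustrative assumptions, not the City's data or the project's model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000
records = pd.DataFrame({
    "prior_shelter_stays": rng.poisson(0.5, n),
    "months_since_last_eviction": rng.exponential(24, n),
    "monthly_income": rng.normal(1500, 600, n).clip(0),
    "received_rental_assistance": rng.integers(0, 2, n),
})
# Synthetic outcome: risk rises with shelter history and falls with income.
logit = (0.8 * records["prior_shelter_stays"]
         - 0.001 * records["monthly_income"]
         - 0.4 * records["received_rental_assistance"])
records["became_homeless"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = records.drop(columns="became_homeless")
y = records["became_homeless"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]   # predicted risk per individual
print(f"held-out AUC: {roc_auc_score(y_test, scores):.2f}")
```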

Querying and Constructing the SSWL database: Tools for Linguistic Theory

PI:  Hilda Koopman - Linguistics (Humanities)


Co-PI:  Yizhou Sun - Computer Science (HSSEAS)


Other Collaborators:

Dennis Shasha - NYU (Courant Institute of Mathematical Sciences)

Co-sponsors: Henry Samueli School of Engineering & Applied Science

This team will collaborate on the development of tools to query and mine the data in the Syntactic Structures of the World's Languages (SSWL) database, and on new tools to help further construct it. SSWL is an open-ended, expert-sourced linguistic database. The general intellectual goal is to gather and store fine-grained comparative data on linguistic properties for as many languages as possible and to develop tools that can support fundamental research in the linguistic sciences.

Our project has two aims:

(i) Develop new or enhanced query and analytical tools to enable the exploration of the fine-grained linguistic data in SSWL (a minimal query sketch appears after this list).

(ii) Combine machine learning methods with SSWL data-generating tools to automatically construct the database with minimal supervision and ultimately help speed up the linguistic description of as many languages as possible. In particular, we will explore existing translation tools to aid and speed up the extraction of linguistic properties and examples for the database, and propose text-embedding-based approaches to automatically detect basic linguistic examples and properties for low-density languages with (minimal) text corpora.
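
A toy illustration of the query-tool side (item i above): if the SSWL data are exported as a languages-by-properties table of Yes/No values, simple property queries and co-occurrence summaries can be expressed directly. The languages, properties, and values below are placeholders, not actual SSWL entries.

```python
# Toy sketch, assuming SSWL can be exported as a languages-by-properties table
# of Yes/No values. The languages, properties, and values are placeholders.
import pandas as pd

sswl = pd.DataFrame(
    {
        "Subject_Verb": ["Yes", "Yes", "No"],
        "Verb_Object":  ["Yes", "No",  "No"],
    },
    index=["Language_A", "Language_B", "Language_C"],
)

# Query: which languages allow Subject-Verb order but not Verb-Object order?
match = sswl[(sswl["Subject_Verb"] == "Yes") & (sswl["Verb_Object"] == "No")]
print(match.index.tolist())

# Simple analytical summary: how often the two properties co-occur.
print(pd.crosstab(sswl["Subject_Verb"], sswl["Verb_Object"]))
```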

Searching for Repeating Foreshocks by Data-mining Massive Continuous Seismic Waveforms

PI:  Lingsen Meng - EPSS (Physical Sciences)


Co-PI:  Rick Schoenberg - Statistics (Physical Sciences)

Co-sponsors: B. John Garrick Institute for the Risk Sciences

Over the last several decades, mega-earthquakes have frequently occurred around the world, causing casualties, injuries and loss of property. It is thus crucial to study the precursory processes preceding large earthquakes to improve earthquake prediction and hazard mitigation. Acceleration of repeating earthquakes (events with identical locations and waveforms) is one of the promising precursors that have been observed before some recent large earthquakes. Their occurrence is a manifestation of background slow-slip processes (slow creep that does not emit seismic waves), which reflect the final phase of the unlocking process of the mainshock rupture. However, many other large earthquakes are not observed to be preceded by such precursors. One explanation is that most repeating earthquakes are small in magnitude, and thus a large portion of them may be too weak to be detected. Detection can be improved with template-matching and auto-correlation algorithms but is currently limited by their computational cost. We propose to adapt efficient data-mining algorithms to search for repeating events among large volumes of seismic waveforms. A more complete picture of precursory repeating earthquakes will improve our understanding of earthquake preparation processes and better constrain earthquake forecasting models.
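
As a hedged illustration of the detection step the proposal aims to accelerate, the sketch below slides a toy template waveform over a synthetic continuous trace using normalized cross-correlation; the signals and the detection threshold are illustrative assumptions, not project values.

```python
# A toy sketch of repeating-event detection by normalized cross-correlation
# (template matching). The synthetic template, noise level, and 0.6 detection
# threshold are illustrative assumptions, not values from the project.
import numpy as np

def normalized_cross_correlation(template, trace):
    """Pearson correlation of `template` against every window of `trace`."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(trace) - n + 1)
    for i in range(len(cc)):
        window = trace[i:i + n]
        cc[i] = np.sum(t * (window - window.mean()) / (window.std() + 1e-12))
    return cc

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)  # toy "event" waveform
trace = rng.normal(scale=0.3, size=5000)                             # continuous noise record
trace[1200:1400] += template                                         # bury two repeats of the event
trace[3600:3800] += 0.7 * template

cc = normalized_cross_correlation(template, trace)
onsets = np.where(cc > 0.6)[0]   # lags whose similarity exceeds the threshold
print("candidate repeating-event onsets (samples):", onsets)
```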

Visual Big Data: Using Images to Understand Protests

PI:  Zachary Steinert-Threlkeld - Public Policy (L-SPA)


Co-PI:  Jungseock Joo - Communication Studies (Social Sciences)

Co-sponsors: N/A

The collaborators propose to use large amounts of social media data to analyze the dynamics of protest movements, with an initial focus on the Black Lives Matter (BLM) movement. They propose to answer two questions using images shared from those protests. First, is there a correlation between the demographic diversity of protestors on the first day of a protest and the subsequent size of the protest? Second, did the sharing on Twitter of images of people making the hands-up, surrendering gesture cause the gesture to spread to other cities?


Professor Steinert-Threlkeld uses social media data to study how individuals decide to protest. Professor Joo is an expert in computational image analysis and is interested in how images broadcast in the media spread the behaviors documented in those images. Professor Steinert-Threlkeld has collected over six billion tweets with GPS coordinates, and he continues to collect five million per day. These tweets will provide the raw material for the study. Tweets with images from the United States during BLM protests will be analyzed for the poses individuals strike as well as the sex, age, and race of participants.
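
As a hedged illustration of the image-analysis pipeline's first step, the sketch below counts faces in a single crowd photo with OpenCV's bundled Haar cascade; pose, sex, age, and race estimation would require trained models beyond this example, and the file name is a placeholder.

```python
# Hypothetical first step: counting faces in a protest photo with OpenCV's
# bundled Haar cascade. The project's actual pipeline (pose, sex, age, and
# race estimation) would require trained models beyond this sketch; the
# file name "protest_photo.jpg" is an assumed placeholder.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("protest_photo.jpg")            # any crowd image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"detected {len(faces)} faces")
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("protest_photo_annotated.jpg", image)  # faces boxed for inspection
```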

Harvesting Data for a Regional Seismic Risk and Sustainability Assessment Tool for California's Bridge Stock

PI:  Ertugrul Taciroglu - Civil & Environmental Engineering (HSSEAS)


Co-PI:  J.R. DeShazo - Urban Planning (L-SPA)

Co-sponsors: Henry Samueli School of Engineering & Applied Science

An innovative, publicly accessible, pervasive, and low-cost seismic safety assessment tool for California's existing bridge stock, dubbed ShakeReady, is being developed. This tool uses data harvested continuously from the public domain (e.g., the Caltrans and National Bridge Inventory databases, Google Earth imagery) and provides facility-specific seismic risk and loss estimates for a range of probable seismic events at an unprecedented level of granularity. At present, there is no repository that provides seismic risk estimates for California's highway bridges, which form the backbone of the state's transportation network. The availability of such data, the ability for a broad group of users (e.g., Caltrans engineers, researchers) to edit and update those data, and integrated analysis capabilities (e.g., for first responders and city planners) would transform our understanding of how these lifelines respond to natural and man-made hazards. In the present TSG project, we will carry out a testbed study of 170 bridges along I-405 and prepare ShakeReady for broader use and public launch. The present work will also enable us to pursue large-scale extramural grants.
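
As a hedged illustration of the data-harvesting step, the sketch below loads a National Bridge Inventory extract with pandas and filters for bridges carrying I-405; the file name and column names follow the public NBI record layout but are assumptions to verify against the actual download.

```python
# Illustrative sketch of the data-harvesting step: loading a National Bridge
# Inventory extract with pandas and pulling out bridges carrying I-405.
# "CA22.txt" and the column names are assumptions based on the public NBI
# record layout and should be checked against the actual download.
import pandas as pd

nbi = pd.read_csv("CA22.txt", dtype=str, low_memory=False)

# Keep a few fields relevant to a seismic screening: ID, year built,
# structure type, and the route carried.
cols = ["STRUCTURE_NUMBER_008", "YEAR_BUILT_027", "STRUCTURE_KIND_043A", "FACILITY_CARRIED_007"]
bridges = nbi[cols].copy()

# Crude filter for the I-405 testbed corridor (string matching on the
# facility-carried field; a production tool would use route/milepost fields).
i405 = bridges[bridges["FACILITY_CARRIED_007"].str.contains("405", na=False)]
print(f"{len(i405)} candidate I-405 bridges")
print(i405.head())
```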

Leveraging Highly Granular Data in Sampling and Analysis of Political Surveys

PI:  Lynn Vavreck - Political Science (Social Sciences)

Co-PI:  Erin Hartman - Statistics (Physical Sciences)

Co-sponsors: N/A

The outcome of the 2016 election stunned the world; no one predicted Trump would win, despite fundamentals suggesting the race would be close. Now many institutions are left to review their practices: parties, registrars, the media, and pollsters. Pollsters face a particular challenge, as they have been under a bright spotlight for many years now, with notable newsworthy misses. Technology has changed the way people live and made it harder for pollsters to reach them using traditional methods. Innovations have emerged, mainly in survey mode and Internet samples, but there has been less progress on how to construct representative samples of the population given significant non-response. In 2012, the Obama campaign made great strides in this area by leveraging vast databases on voters, but the work was done under proprietary data arrangements. Those methods are now ready to be publicly tested so they can be shared with the scientific community. Not only will this work be a first step toward helping the polling industry recover, it will also provide an opportunity to track the attitudes of Americans as a new president takes office and both political parties struggle to reconnect with the electorate after the difficult 2016 cycle.
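
As a hedged illustration of the representativeness problem, the sketch below applies simple post-stratification cell weights to a tiny invented sample so that its age-by-education composition matches assumed population shares; real raking on voter-file frames involves many more margins.

```python
# A minimal post-stratification sketch: reweight a tiny invented sample so its
# age-by-education composition matches assumed population shares. Real raking
# on voter-file frames uses many more margins; everything here is hypothetical.
import pandas as pd

sample = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-64", "65+", "35-64", "65+"],
    "education": ["college", "no_college", "college", "no_college", "no_college", "college"],
    "support":   [1, 0, 1, 0, 1, 1],
})
sample["cell"] = sample["age_group"] + "|" + sample["education"]

# Assumed population share of each age x education cell (must sum to 1).
population_share = {
    "18-34|college": 0.10, "18-34|no_college": 0.20,
    "35-64|college": 0.15, "35-64|no_college": 0.30,
    "65+|college":   0.08, "65+|no_college":   0.17,
}

sample_share = sample["cell"].value_counts(normalize=True)
sample["weight"] = sample["cell"].map(lambda c: population_share[c] / sample_share[c])

weighted = (sample["support"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted support: {sample['support'].mean():.2f}")
print(f"weighted support:   {weighted:.2f}")
```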

Real-time Precision Mapping in Understanding Metropolitan Los Angeles Transportation Networks

PI:  Chee Wei Wong - Electrical Engineering (HSSEAS)

Co-PI:  Rui Wang - Urban Planning (L-SPA)

Co-sponsors: Henry Samueli School of Engineering & Applied Science

All metropolitan areas use travel demand models to forecast road traffic in order to assist decision making for improving regional transportation system efficiency, infrastructure investment, environmental protection and land use planning. The Southern California Association of Governments runs an advanced four-step computational model over a petabyte-scale dataset to forecast future traffic and population requirements. The ability of the model to produce base-year volume estimates within acceptable tolerances of actual ground counts is essential to validating the entire travel demand model. Currently, ground counts come primarily from expensive and infrequent door-to-door household surveys and from inductive-loop or traffic-camera sensors. Current validation is also inadequate in terms of vehicle type, non-vehicle traffic and vehicular path.

Precision real-time laser mapping provides a low-cost and automated way of collecting and analyzing large amounts of data across locations and time periods to improve our regional transportation network model. We will perform real-time mapping of traffic data across multiple critical spatial and temporal zones and integrate the results into the transportation network model. This not only provides calibration to the network