Savvas Petridis

I build tools that leverage both human and machine intelligence to tackle complex problems, such as designing visual representations for abstract ideas, fact-checking claims, and predicting the histology of biopsies.

I am a fourth-year PhD student in computer science at Columbia University, advised by Prof. Lydia Chilton. I'm broadly interested in HCI, Interactive Machine Learning, and Visualization. I've completed research internships at Adobe (Summer 2019) and IBM (Summers 2016 and 2017). I'm also currently collaborating with Prof. Jack Grinband on a deep neural network that predicts the histology of biopsies.

If you want to collaborate, email me: savvas@cs.columbia.edu

Check out my research and professional experience in depth: CV | Google Scholar

Office: 703 CEPSR

Projects


SymbolFinder: Brainstorming diverse symbols using local semantic networks

Collaborators: Hijung Valentina Shin (Adobe Research), Lydia B. Chilton (Columbia)

Visual symbols are the building blocks of visual communication. They convey abstract concepts like reform and participation through concrete objects like scaffolding and keys. Student designers struggle to brainstorm diverse symbols because they must recall associations rather than recognize them, and because they fixate on a few associations instead of exploring different related contexts. We present SymbolFinder, an interactive tool for finding visual symbols for abstract concepts. SymbolFinder turns symbol finding into a recognition task rather than a recall task by introducing the user to diverse clusters of words associated with the concept. Users can dive into these clusters to find related, concrete objects that can symbolize the concept.
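To make the clustering idea concrete, here is a minimal sketch in the same spirit: it groups a concept's word associations into semantically diverse clusters using pretrained word vectors. The embedding model, the toy association list, and the use of k-means are illustrative choices, not SymbolFinder's actual implementation.

```python
# Minimal sketch: group a concept's word associations into diverse
# semantic clusters, the core idea behind SymbolFinder's interface.
# Assumes pretrained GloVe vectors via gensim; the association list is toy data.
import numpy as np
import gensim.downloader as api
from sklearn.cluster import KMeans

vectors = api.load("glove-wiki-gigaword-100")  # pretrained word embeddings

concept = "reform"
# In practice these would come from an association resource; toy list here.
associations = ["law", "court", "protest", "school", "curriculum",
                "prison", "election", "ballot", "policy", "tax"]

words = [w for w in associations if w in vectors]
X = np.stack([vectors[w] for w in words])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for k in range(3):
    cluster = [w for w, l in zip(words, labels) if l == k]
    print(f"cluster {k}: {cluster}")
```

Each printed cluster plays the role of a distinct context the user can recognize and explore, instead of having to recall associations unaided.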

[paper] [project page]

VisiBlends: A Flexible Workflow for Visual Blends

Collaborators: Lydia B. Chilton (Columbia) and Maneesh Agrawala (Stanford)

Visual blends are an advanced graphic design technique for drawing attention to a message. They combine two objects in a way that is novel and useful in conveying a message symbolically. VisiBlends is a flexible workflow for creating visual blends that follows the iterative design process. We introduce a design pattern for blending symbols based on principles of human visual object recognition, and we decompose the process into computational techniques and human microtasks. The workflow allows users to collaboratively generate visual blends through steps of brainstorming, synthesis, and iteration.
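As a rough illustration of the matching step, the sketch below pairs candidate images of two symbols when their annotated basic shapes agree, the condition under which one object can visually substitute for the other. The data structure, field names, and threshold are hypothetical, not the paper's code.

```python
# Sketch of shape matching in a VisiBlends-style workflow: pair candidate
# images of two symbols when their annotated basic shapes agree, so one
# object can stand in for the other. Fields and threshold are illustrative.
from dataclasses import dataclass
from itertools import product

@dataclass
class Candidate:
    name: str
    shape: str          # annotated basic shape: "sphere", "cylinder", ...
    coverage: float     # fraction of the object the shape covers, 0..1

def blendable(a: Candidate, b: Candidate, min_coverage: float = 0.8) -> bool:
    """Two objects can blend if they share a basic shape that dominates
    both silhouettes, so swapping one in still reads correctly."""
    return a.shape == b.shape and min(a.coverage, b.coverage) >= min_coverage

oranges = [Candidate("orange", "sphere", 0.95)]
globes  = [Candidate("globe", "sphere", 0.90), Candidate("map", "flat", 0.70)]

pairs = [(a.name, b.name) for a, b in product(oranges, globes) if blendable(a, b)]
print(pairs)  # [('orange', 'globe')]
```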

[paper]

Human Errors in Interpreting Visual Metaphor

How do people interpret visual metaphors? What errors do they make? And how can we learn from these errors to improve both visual communication and automatic machine understanding of advertisements? In this work, we provide evidence for four distinct types of errors people make when interpreting visual metaphors. We also show that people's ability to interpret a visual message is not simply a function of image content but also of message familiarity. We discuss how our findings can be applied toward improving visual communication.

[paper]

Where is your Evidence: Improving Fact-checking by Justification Modeling

Fact-checking is a journalistic practice that compares a publicly made claim against trusted sources of facts. We extend the LIAR dataset by automatically extracting, for each claim, the justification that human fact-checkers wrote in the corresponding fact-checking article. We show that modeling the extracted justification in conjunction with the claim (and its metadata) yields a significant improvement regardless of the machine learning model used.
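A minimal sketch of this setup, with a bag-of-words classifier standing in for the paper's models: the point is simply that the extracted justification is paired with the claim as input. The toy examples and the [SEP] convention are illustrative, not drawn from the dataset.

```python
# Minimal sketch of the claim+justification setup: concatenate the
# extracted justification with the claim before classification.
# A TF-IDF + logistic regression pipeline stands in for the paper's
# models; the two training instances are toy data, not from LIAR.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [
    ("Crime rates doubled last year.",
     "FBI statistics show a 3% decrease over the period.", "false"),
    ("The bill funds 500 new clinics.",
     "The budget text allocates funds for 500 clinics.", "true"),
]

# Pair each claim with its justification so the model sees both.
X = [f"{claim} [SEP] {justification}" for claim, justification, _ in train]
y = [label for _, _, label in train]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X, y)
print(model.predict(
    ["Unemployment is at a record low. [SEP] Bureau data confirm the record low."]))
```

Dropping the justification (training on the claim alone) gives the claim-only baseline the paper compares against.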

[paper]

AMuSe: Large-scale WiFi video distribution - experimentation on the ORBIT testbed

AMuSe is a scalable system for WiFi multicast video delivery. It includes a scheme for dynamically selecting a subset of the receivers as feedback nodes, along with a rate adaptation algorithm, MuDRA, that maximizes channel utilization while meeting QoS requirements. We implemented AMuSe on the ORBIT testbed and evaluated its performance with 150 to 200 nodes. We also present a dynamic web-based application that demonstrates AMuSe's operation using traces collected on the testbed across several experiments; it lets us compare AMuSe with other multicast schemes and evaluate video delivery performance. This demo was presented at the NYC Media Lab Summit and won second place!
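The sketch below conveys the flavor of the rate-selection decision: pick the highest bitrate at which enough feedback nodes still report acceptable packet loss. The thresholds and the report format are illustrative simplifications, not MuDRA's actual rules.

```python
# Simplified sketch of multicast rate adaptation in the spirit of MuDRA:
# choose the highest bitrate at which enough feedback nodes still report
# acceptable packet loss. Thresholds and reports are illustrative; the
# real algorithm's rules for changing rates are more involved.
RATES_MBPS = [6, 12, 24, 36, 48, 54]   # 802.11 multicast rates
MAX_LOSS = 0.05                        # per-node QoS loss bound
MIN_SATISFIED = 0.95                   # fraction of nodes that must meet it

def select_rate(loss_reports_by_rate: dict) -> int:
    """loss_reports_by_rate maps a candidate rate to the loss fraction
    each feedback node measured (or predicted) at that rate."""
    best = RATES_MBPS[0]
    for rate in RATES_MBPS:
        reports = loss_reports_by_rate.get(rate, [])
        if not reports:
            continue
        satisfied = sum(l <= MAX_LOSS for l in reports) / len(reports)
        if satisfied >= MIN_SATISFIED:
            best = rate          # highest QoS-satisfying rate wins
    return best

print(select_rate({24: [0.01, 0.02, 0.03], 36: [0.02, 0.09, 0.12]}))  # 24
```

Reporting loss only from the selected feedback nodes, rather than every receiver, is what keeps the scheme scalable to hundreds of nodes.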

[paper 1] [paper 2]

Publications

SymbolFinder: Brainstorming diverse symbols using local semantic networks
Savvas Petridis, Hijung Valentina Shin, and Lydia B. Chilton.
Under submission.

Human Errors in Interpreting Visual Metaphor
Savvas Petridis and Lydia B. Chilton.
Creativity and Cognition 2019 (Oral Presentation; acceptance rate: 30%).

VisiBlends: A Flexible Workflow for Visual Blends
Lydia B. Chilton, Savvas Petridis, and Maneesh Agrawala.
CHI 2019 (Oral Presentation; acceptance rate: 23.8%).

Where is your Evidence: Improving Fact-checking by Justification Modeling
Tariq Alhindi, Savvas Petridis, and Smaranda Muresan.
FEVER Workshop at EMNLP 2018.

AMuSe: Large-scale WiFi video distribution - experimentation on the ORBIT testbed
Varun Gupta, Raphael Norwitz, Savvas Petridis, Craig Gutterman, Gil Zussman, and Yigal Bejerano.
Demo description in Proc. IEEE INFOCOM'16, 2016.

WiFi multicast to very large groups - experimentation on the ORBIT testbed
Varun Gupta, Raphael Norwitz, Savvas Petridis, Craig Gutterman, Gil Zussman, and Yigal Bejerano.
Demo at IEEE LCN'15, 2015.