From experiments to AI products. Grounded in a decade of scientific research. Integrates AI into workflows. Ships AI tools.

Cornell Ph.D. in Biological & Environmental Engineering (DNA Materials Lab), grounded in Chemistry and Computer Science, integrating AI into workflows and building AI-powered tools that make it out of the lab.

See how I work ↓

Domain expertise is the foundation. Mastering AI elevates it to a new dimension.

How I Integrate AI into Research Workflows

From a 33-hour, 11 km² eDNA tracer campaign (~1,000 qPCR measurements; ~1,400 GPS coordinates) to a 100-day continuous nucleic-acid reactor, I use AI-assisted coding to turn raw outputs into reproducible analyses, publication figures, and structured reports.

01

Automated Reporting

Feed experiment design + raw qPCR data in; get a structured tracking report out. No manual formatting.

Automated reporting pipeline screenshot
Data pipeline processing screenshot
02

Data Pipeline

Automated a 33-hour field campaign pipeline: ~1,000 qPCR measurements and ~1,400 GPS coordinates, from raw to analysis-ready in minutes.
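The core step in a pipeline like this is joining per-sample qPCR results with their GPS fixes and flagging detections. A minimal sketch of that merge step, assuming invented names throughout (`merge_qpcr_gps`, the sample-ID scheme, and the Cq cutoff of 40 are all illustrative assumptions, not the actual pipeline):

```python
# Hypothetical sketch: join qPCR measurements with GPS fixes by sample ID,
# then flag detections below a Cq cutoff. All names/values are invented.
from dataclasses import dataclass

@dataclass
class Record:
    sample_id: str
    cq: float        # qPCR quantification cycle
    lat: float
    lon: float
    detected: bool

def merge_qpcr_gps(qpcr_rows, gps_rows, cq_cutoff=40.0):
    """qpcr_rows: (sample_id, cq) tuples; gps_rows: (sample_id, lat, lon)."""
    gps = {sid: (lat, lon) for sid, lat, lon in gps_rows}
    merged = []
    for sid, cq in qpcr_rows:
        if sid not in gps:       # drop measurements without a GPS fix
            continue
        lat, lon = gps[sid]
        merged.append(Record(sid, cq, lat, lon, detected=cq < cq_cutoff))
    return merged

qpcr = [("S001", 28.4), ("S002", 41.2), ("S003", 33.0)]
gps = [("S001", 42.44, -76.50), ("S002", 42.45, -76.49)]
records = merge_qpcr_gps(qpcr, gps)
print(len(records))              # S003 has no GPS fix, so 2 records
```

In practice the raw inputs would be CSV exports rather than in-memory lists, but the join-and-flag logic is the same shape.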

03

Scientific Visualization

Publication figures, journal cover art, and 3D molecular renders. ~20 Python scripts, all AI-assisted.

Scientific visualization examples

How I Build and Ship AI Tools

CoBrain

WIP

A shared, evidence-first brain for your lab. Upload PDFs, capture your group's definitions and standards, and ask questions to get cited, auditable answers, with built-in quality gates and refusal when evidence is thin.

How I Think About AI

"Domain judgment sets the direction; AI shortens the loop."

I don't treat AI like a reliable calculator or a colleague who "gets" what I mean. I treat it like a fast executor that's great at generating drafts, code, and alternatives, but still needs clear structure and careful checking. The work starts with me: I decide what question matters, break it into smaller steps, and define what a good answer should look like. Then I let the model run: write the first pass, explore options, refactor code, and surface patterns. Finally, I verify everything against the domain reality (controls, assumptions, edge cases) and only keep what holds up.

Teaching AI at Cornell

I design hands-on AI tooling modules for Cornell bioengineering undergrads. I co-taught one course (~45 students; ~50% of the content) and gave guest lectures in two others. The focus is practical: how to break a messy question into steps, use AI-assisted scripting to move faster, and still keep scientific checks in place.

BEE 3400 β€” AI Tooling Module

  • Co-instructor (~50% of content) · ~45 students
  • Labs: problem breakdown → AI-assisted scripting → data processing + visualization

AI Integration Module

  • Built confidence-gated exam grading and automated notebook assessment
  • Reduced grading turnaround to <24 hours
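The idea behind confidence-gated grading is to auto-accept a model's grade only when its self-reported confidence clears a threshold, and queue everything else for human review. A minimal sketch under that assumption (the function name, tuple layout, and 0.9 threshold are all hypothetical, not the actual grader):

```python
# Hypothetical sketch of confidence-gated grading. All names are invented.

def gate(graded, threshold=0.9):
    """graded: list of (student_id, score, confidence) tuples.
    Returns (auto-accepted, needs-human-review) lists of (student_id, score)."""
    accepted, needs_review = [], []
    for student_id, score, confidence in graded:
        bucket = accepted if confidence >= threshold else needs_review
        bucket.append((student_id, score))
    return accepted, needs_review

auto, manual = gate([("a1", 9, 0.97), ("a2", 6, 0.55), ("a3", 10, 0.92)])
print(len(auto), len(manual))  # 2 1
```

The threshold trades grading speed against how much lands back on the instructor; only the high-confidence bucket skips human review.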

From the Lecture Slides

01 Embrace Uncertainty
02 Transformer: Attention
03 Context Engineering
04 Components of Context
05 What is an Agent?
06 AI Planning Team
07 The 40-20-40 Rule
08 Test & Iterate
09 Where It Gets Hard


AI Fluency Index

The AI Fluency Index is a research framework developed by Professors Rick Dakan and Joseph Feller in collaboration with Anthropic. It measures 11 observable behaviors across three dimensions (Description, Delegation, and Discernment) that characterize effective human-AI collaboration, baselined against 9,830 Claude.ai conversations. Below is my personal profile scored across ~40 of my own conversations.

Radar chart: my score vs. the population average across the 11 behaviors.
Composite Score: 74 (vs. 40 population avg)
Description: 79 (vs. 51)
Delegation: 79 (vs. 43)
Discernment: 63 (vs. 18)
Key Finding

Discernment score (63%) is 3.5x the population average: fact-checking, questioning reasoning, and catching missing context at 3-4x typical rates.

Framework: Anthropic 4D AI Fluency (Dakan & Feller, 2026). Baselines from 9,830 conversations.

Get Your Own AI Fluency Index

I engineered the prompt below from the original research paper; it turns a static publication into a reproducible self-assessment anyone can run. Paste it into a new Claude.ai conversation to generate your own dashboard.