---
permalink: /
title: "About Me"
excerpt: "About me"
author_profile: true
redirect_from:
  - /about/
  - /about.html
---
Hi! 👋 I'm Essam, currently a master's student at Harvard studying CS. I'm interested in machine learning, natural language processing, and multi-modal learning.

I do ML research working with LLMs and large vision-language models. I previously spent time at Twitch/Amazon as an applied scientist intern, where I worked on their ML team and built their first ML-based video analysis platform. I've also worked on a startup to automate meeting scheduling and ran a student org doing software consulting for tech companies.

## Industry/Research Experience

- Research intern at Harvard & MIT working on NLP and vision-language learning.
- 3x applied science and engineering intern at Twitch/Amazon, where I built their first ML-based video analysis platform and trained their first Twitch-wide backbone vision model. The work went into production and is currently in use. I was the only undergrad on the ML science team.
- Research intern at UC Davis working on multimodal representation learning and adversarial robustness.
- Research intern at Stanford, where I worked on AI for healthcare.
- Co-Founder/CTO at Komma, where I led full-stack development.
- Co-Founder/President at CodeLab, the largest CS student org, with 100+ members, where we build software for tech companies.

<!-- ## Twitch/Amazon 3X Internship 📺

- 2022 (Applied Science): Developed an unsupervised, continual learning framework on Twitch streaming data. The framework allows ML models developed by Twitch to continuously and intelligently update on new streams, such that a model learns new representations without forgetting old ones.

- 2021 Fall (Software Engineer): Designed and developed an end-to-end, real-time media-analysis backend service, integrated with both other AWS services and my trained ML model, to moderate content from Twitch's 9.36M+ streamers. The service is in production!

- 2021 Summer (Applied Science): Developed a Twitch content-representation image embedding trained using self-supervised learning. Improved the prior method's performance on downstream tasks such as game stream classification by ~10%. -->