CV

Basics

Name Tejas Agrawal
Label Student Researcher
Email tejasagrawal55@gmail.com
Url https://Tej-55.github.io/
Summary Currently a final-year undergraduate at BITS Pilani, Goa, I am a student researcher with interests in ML, DL, NLP, and CV. I am also a member of SAiDL.

Work

  • 2024.05 - Present
    Research Intern
    CSE Department, IIT Bombay
    Working on advancing Automatic Speech Recognition (ASR) systems for low-resource accented speech under the guidance of Prof. Preethi Jyothi.
    • Automatic Speech Recognition
    • Low-resource NLP
  • 2024.01 - 2024.05
    Student Researcher
    APPCAIR, BITS Pilani
    Worked on extending the CRMs paper by Prof. Srinivasan and Tirtharaj Dash, a form of 'explainable neural network', through large-scale pretraining and analysis of the resulting models' behavior.
    • Symbolic AI
    • Autoencoders
  • 2023.08 - 2023.12
    Student Researcher
    APPCAIR, BITS Pilani
    Under the guidance of Senior Prof. Ashwin Srinivasan, Dr. Lovekesh Vig, and Dr. Gautam Shroff, I worked in collaboration with TCS Research to study and analyze LLMs' ability to reason over arguments by introducing argumentative reasoning tasks.
    • Large Language Models
    • LLM Agents
  • 2023.05 - 2023.07
    Summer Intern
    Dino.co.in
    Independently managed a project to develop a GPT-powered chatbot using LangChain, facilitating content creation and enhancing customer support.
    • LangChain
    • Prompting

Volunteer

  • 2023.06 - Present
    Core Member
    SAiDL, BITS Pilani, Goa Campus
    SAiDL is a student-led research group at BITS Pilani, Goa, working on research and applications across various fields of ML. As a core member, I have worked on various projects, organized events, and conducted courses.

Education

  • 2021.11 - 2025.06
    B.E. (Hons.) in Electrical and Electronics Engineering (Minor in Data Science)
    BITS Pilani, Goa, India
    • Machine Learning
    • Large Language Models
    • Foundations of Data Science
    • Optimization
    • Deep Learning

Achievements

  • 2024.03.20
    IKDD Uplink Research Internship
    IKDD
    One of 12 students in the country selected for a 3-month research internship.
  • 2023.10.01
    LLM Hackathon
    Moveworks
    Secured second place among the 20 shortlisted teams (around 80 participants in total).

Online courses

  • CS 224N (NLP), Stanford University
  • CS 224W (ML with Graphs), Stanford University (incomplete)
  • CS 229 (ML), Stanford University
  • Deep Learning Specialisation (DL), DeepLearning.AI
  • CS 231N (CV), Stanford University

Skills

Frameworks
  • PyTorch
  • ESPnet
  • Hugging Face
  • LangChain
  • Keras
  • TensorFlow
  • Scikit-learn

Languages

  • Hindi: Fluent
  • English: Fluent

Projects

  • 2024.02 - 2024.04
    CountCLIP
    Reproduced the ICCV 2023 paper 'Teaching CLIP to Count to Ten', making the implementation accessible and training on a specialized counting dataset.
  • 2024.02 - 2024.05
    Rank-N-Contrast for graphs
    Reproduced the NeurIPS Spotlight paper Rank-N-Contrast and evaluated its performance on graph regression tasks.
  • 2023.09 - 2023.11
    ALBERT with Perceiver layers
    Implemented the ALBERT model with Perceiver layers and compared its performance against standard Transformer layers. Both models were pre-trained on the same corpus and evaluated by fine-tuning on paraphrase tasks using the MSR Paraphrase Corpus.
  • 2023.03 - 2023.04
    Code-Mixed Sentence Generation and Language Model Fine-Tuning
    Examined code-mixed sentences containing informal language for abuse detection. Pre-trained language models (BERT and mBERT) were fine-tuned to classify code-mixed sentences, and their performance was assessed.
  • 2023.03 - 2023.04
    Zero-Shot Image Segmentation using CLIP
    Achieved text-guided image segmentation by leveraging CLIP's text-image embeddings, using a contrastive loss to align predictions with ground-truth segmentation maps.
  • 2023.03 - 2023.04
    Variations of Softmax
    Analyzed how various Softmax variants affect model performance and training time, evaluating them on classification tasks with large numbers of classes, and explored the trade-off between computational complexity and model accuracy to improve computational efficiency.