Hello, I'm Tri Vikram Dharmavarapu
a Ph.D. student, experienced software engineer, and curious builder of high-performance systems.

Skills
Technologies
Tools
Domain Expertise
About Me
I am a Ph.D. student at Florida State University working in systems research, with a focus on high-performance computing, file systems, and parallel training of large-scale LLMs.
As a software engineer at ServiceNow, I developed user interfaces with modern frameworks and implemented server-side logic and APIs in JavaScript. I played a key role in troubleshooting and resolving backend and UI issues, which significantly improved application stability, and I contributed to strengthening test automation. That experience deepened my passion for building high-performance, scalable systems while exploring cutting-edge technologies.

At Florida State University, I focused on distributed systems, parallel programming, and advanced operating systems. I worked with MPI and RDMA for parallel programming and with socket programming to optimize system-level communication, and my work with AI and deep learning applications strengthened my skills in machine learning optimization. My research centers on optimizing file systems with CNTR and ExtFUSE for better performance, and on using model pipelining to run large language models (LLMs) in parallel, with the aim of improving computational efficiency and the model-training process.
My goal is to combine my expertise in systems research, distributed computing, and full-stack development to build scalable and efficient infrastructure. I aspire to become a full-stack software engineer, leveraging my skills in system optimization, AI/ML, and backend technologies.
Research
March 2025
Optimizing Container Filesystems with ExtFUSE + CNTR
Integrated eBPF to reduce context switching in CNTR containers. Enabled dynamic image loading and measured performance on HPC systems with parallel training workloads.
This work improves container startup times by reducing reliance on traditional file I/O. Benchmarks show improved IOPS and reduced overhead in metadata resolution across slim/fat containers.
Education

Ph.D. in Computer Science
Florida State University
Aug 2025 – Present
Focused on systems research, with an emphasis on file-system optimization and on integrating LLMs into scalable distributed architectures

M.S. in Computer Science
Florida State University
Jan 2024 – May 2025
Specialized in Advanced Operating Systems, Parallel and Distributed Programming, and applications of AI/Deep Learning in high-performance systems

M.Tech in Software Systems (Data Analytics)
BITS Pilani (WILP)
Jul 2021 – Jun 2023
Worked on scalable ML systems and gained expertise in distributed computing, AI/ML, and data analytics

B.Tech in Electronics and Communication Engineering
KL University
Jul 2016 – Jun 2020
Built strong foundations in communication systems, digital logic, and embedded systems, and represented the university at the ACM ICPC
Experience

Graduate Assistant (Research & Teaching)
Graduate Research Assistant (Jan 2025 – Present)
- Researching file systems, container optimization, and scalable parallel training for ML models.
- Improving runtime efficiency of CNTR and integrating ExtFUSE for optimized file handling in containers.
- Investigating parallelization to scale machine learning model training workloads efficiently.
Graduate Teaching Assistant (Jan 2024 – Present)
- Supported teaching for the Parallel & Distributed Programming course.
- Mentored students on Flask, Python, debugging, and parallel programming techniques.
- Helped prepare lectures and assignments to deepen understanding of system-level programming.

Software Engineer
- Developed user interfaces and implemented server-side logic using JavaScript, improving usability and reducing response times by 15%.
- Designed and integrated RESTful and GraphQL APIs, enhancing frontend-backend communication and improving system observability.
- Improved test automation coverage to 80%, cutting manual testing time by 40% and significantly reducing post-release bugs.
- Led Slack and observability-pipeline integrations, improving monitoring and alerting.
- Collaborated cross-functionally in Agile sprints, contributing to code reviews, sprint planning, and iterative delivery of scalable features.
- Mentored junior engineers and contributed to internal code quality and performance standards.
- Recognized with multiple monthly awards and honored with the prestigious quarterly LAMA award for outstanding contributions.

Engineering Intern
- Built a web application using Angular 7 with content-extraction features for improved document management.
- Developed a document classification POC that boosted categorization accuracy by 25%.
Project Highlights
CNTR + ExtFUSE Integration
2025
Integrated ExtFUSE into the CNTR framework for container runtime optimization using eBPF. Achieved reduced startup latency and improved IOPS for slim/fat containers.
TinyImageNet Distributed Training
2025
Implemented parallel training using DDP and model parallelism for TinyImageNet on multi-node GPU clusters. Addressed bottlenecks in data loading and checkpointing.
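To give a flavor of the DDP setup used in this project, here is a minimal sketch. It is not the project's actual training code: the model (ResNet-18), dataset path, batch size, and other hyperparameters are placeholder assumptions, and it shows only the data-sharding and checkpointing pattern referenced above.

```python
# Minimal DDP training sketch (illustrative; not the project's actual code).
# Launch with: torchrun --nproc_per_node=<gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, models, transforms

def main():
    dist.init_process_group(backend="nccl")      # one process per GPU, set up by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder dataset path; TinyImageNet uses an ImageFolder-style layout.
    dataset = datasets.ImageFolder("tiny-imagenet-200/train",
                                   transform=transforms.ToTensor())
    sampler = DistributedSampler(dataset)        # shards the data across ranks
    loader = DataLoader(dataset, batch_size=128, sampler=sampler,
                        num_workers=4, pin_memory=True)

    model = DDP(models.resnet18(num_classes=200).cuda(local_rank),
                device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(10):
        sampler.set_epoch(epoch)                 # reshuffle the shards each epoch
        for images, labels in loader:
            images, labels = images.cuda(local_rank), labels.cuda(local_rank)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                      # gradients are all-reduced across ranks here
            optimizer.step()
        if dist.get_rank() == 0:                 # checkpoint only on rank 0
            torch.save(model.module.state_dict(), f"checkpoint_epoch{epoch}.pt")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```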