To take on challenging roles that utilize my present skills and enhance them, nourish my professional growth through developing new skills, and allow me to prove myself as a cohesive member of a winning team.
I am a Computer and Software Engineer from NUST. I have experience in Python development across the full stack, i.e., DevOps, MLOps, and back-end work with Django and Flask. I have contributed to and helped develop ML products, with the ML side spanning subcategories such as NLP, object detection, and form processing (which includes OCR and NER), and the Python development side covering end-to-end pipeline maintenance and back-end development.
Experience
Ricult

I’m currently part of a lean, international scrum team of 10, working to maintain and enhance an existing product. My role involves creating backend APIs using FastAPI, Pydantic, and SQLAlchemy, and I’ve been deeply involved in designing a new backend server for the product. Over time, I’ve built more than 50 REST APIs, implementing complex features and layers of caching to handle heavy workloads (around a million users) on limited compute resources. These efforts have helped attract 10 new customers for the company.
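To give a rough flavor of that endpoint work, here is a minimal sketch of a cached FastAPI route backed by Pydantic and SQLAlchemy; the route, models, and cache policy are illustrative placeholders, not the actual Ricult code.
```python
from datetime import datetime, timedelta

from fastapi import Depends, FastAPI
from pydantic import BaseModel
from sqlalchemy import select
from sqlalchemy.orm import Session

from app.db import get_session          # hypothetical session dependency
from app.models import FarmerRecord     # hypothetical SQLAlchemy model

app = FastAPI()

class FarmerOut(BaseModel):
    id: int
    name: str
    region: str

    class Config:
        orm_mode = True  # Pydantic v1 style; lets Pydantic read SQLAlchemy objects

# Simple in-process TTL cache; in practice this layer could be Redis or similar.
_cache: dict[str, tuple[datetime, list[FarmerOut]]] = {}
CACHE_TTL = timedelta(minutes=5)

@app.get("/farmers/{region}", response_model=list[FarmerOut])
def list_farmers(region: str, session: Session = Depends(get_session)):
    cached = _cache.get(region)
    if cached and datetime.utcnow() - cached[0] < CACHE_TTL:
        return cached[1]  # serve from cache to spare the database

    rows = session.execute(
        select(FarmerRecord).where(FarmerRecord.region == region)
    ).scalars().all()
    result = [FarmerOut.from_orm(r) for r in rows]
    _cache[region] = (datetime.utcnow(), result)
    return result
```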
Previously at the same company, I led the complete migration of a server-side application from Flask to FastAPI, designing and implementing the end-to-end architecture. This involved not only reaching 85% code coverage with thorough unit tests but also spearheading the development of data migration scripts to transfer information smoothly from legacy systems. I also pioneered setting up CI/CD pipelines through GitLab CI/CD, which included automated regression testing and greatly reduced manual deployment time and effort. Throughout, I leaned on VS Code, with tools like Pylint and Black Formatter, to write dependable and maintainable code efficiently.
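A minimal example of the style of unit test that drove that coverage number, using FastAPI’s TestClient; the endpoint and module names are placeholders, and in the pipeline these ran under pytest with the pytest-cov plugin.
```python
# Sketch of a migration regression test; app.main and /health are hypothetical.
from fastapi.testclient import TestClient

from app.main import app  # the migrated FastAPI application (placeholder import)

client = TestClient(app)

def test_health_endpoint_matches_legacy_contract():
    # The migrated endpoint must return the same payload the old Flask app did.
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```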
In my current position, I continue to work with FastAPI and Python, utilizing tools such as Poetry for dependency management and Docker for containerization. I’ve also employed AWS for deployments and integrated everything with GitLab for continuous integration and delivery. My focus includes building a robust and scalable database architecture using Alembic and SQLAlchemy. I’ve been responsible for designing database schemas, setting up migrations, and optimizing performance through SQL and other database technologies. My approach ensures that our systems are scalable and efficient in handling large datasets.
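A typical Alembic revision from that schema work looks roughly like this; the table, column, and revision identifiers below are invented for illustration.
```python
"""Hypothetical Alembic revision adding a column and an index for a hot query."""
from alembic import op
import sqlalchemy as sa

# Revision identifiers used by Alembic (placeholder values).
revision = "a1b2c3d4e5f6"
down_revision = None
branch_labels = None
depends_on = None

def upgrade():
    op.add_column("farmers", sa.Column("region", sa.String(64), nullable=True))
    op.create_index("ix_farmers_region", "farmers", ["region"])

def downgrade():
    op.drop_index("ix_farmers_region", table_name="farmers")
    op.drop_column("farmers", "region")
```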
Additionally, I’ve worked with AWS SageMaker for training models and designed data collection pipelines using AWS services. For logging and monitoring, I’ve used Python’s logging library in tandem with AWS CloudWatch to ensure everything runs smoothly. Whether it’s working with Docker for containerization, JIRA for project management, or simply coding the best solutions, I enjoy the variety of challenges that come with building something new.
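The logging side is essentially the standard library’s logging module, with records shipped on to CloudWatch at the handler or agent level; a stripped-down sketch with illustrative names:
```python
import logging

# Standard library logging; in deployment these records are forwarded to AWS
# CloudWatch (e.g. via a CloudWatch log handler or the agent reading the
# container's stdout). All names here are illustrative.
logger = logging.getLogger("pipeline")
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)
logger.addHandler(handler)

def run_training_job(job_name: str) -> None:
    logger.info("starting SageMaker training job %s", job_name)
    # ... submit the job, poll its status, etc. (omitted in this sketch)
    logger.info("job %s submitted", job_name)
```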
Veeve
Veeve is an AI-based company providing a cashier-less shopping experience by integrating the billing and checkout system into the shopping cart.
I contributed to an agile and flexible team, where I initially focused on researching and fine-tuning state-of-the-art deep learning models like YOLO-V3 and ResNet. For instance, I used YOLO-V3 to handle object localization within shopping carts and ResNet to nail down the identification of specific products, such as distinguishing between brands and types. I took it a step further by experimenting with multi-model architectures, which significantly improved accuracy and performance.
I built out machine learning pipelines using TensorFlow to train and evaluate these deep learning models and maintained Python-based services within Docker containers to orchestrate processes in parallel. I also tinkered with the source code of OpenCV to refine certain algorithms, ultimately improving our object detection system.
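A condensed sketch of what one of those TensorFlow training pipelines looks like, with an off-the-shelf ResNet50 backbone standing in for the production architecture; paths, class count, and hyperparameters are placeholders.
```python
import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH_SIZE = 32
NUM_CLASSES = 40  # illustrative number of product classes

# Load labeled product images from directories (placeholder paths).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=BATCH_SIZE
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=IMG_SIZE, batch_size=BATCH_SIZE
)

# ResNet50 backbone with a small classification head on top.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg"
)
base.trainable = False  # fine-tune later by unfreezing selected layers

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(tf.keras.applications.resnet50.preprocess_input),
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.fit(train_ds, validation_data=val_ds, epochs=10)
model.evaluate(val_ds)
```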
On the deployment side, I containerized applications with Docker (Dockerfile, docker-compose), hosted images on cloud services, and set up continuous deployment pipelines. I used NVIDIA GPUs for training and Jetson devices to handle offline inference at the edge. I also linked the system to an Android app through Flask, ensuring seamless communication via sockets for a smooth cashier-less shopping experience. We used ClickUp initially and later Jira for project management and ticketing. All of this work was driven by a robust data collection pipeline that I built to ensure accurate model training, along with TensorRT and our own custom pipeline written in C++.
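Conceptually, the Flask bridge between the cart-side inference service and the Android app worked along these lines; the sketch below uses plain HTTP endpoints and in-memory state for brevity (the real link used socket communication), and all names are illustrative.
```python
# Minimal sketch of a Flask service bridging cart-side inference and the
# Android app; endpoints, payloads, and storage are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Latest detections per cart, keyed by cart ID (in-memory for this sketch).
cart_items: dict[str, list[dict]] = {}

@app.route("/carts/<cart_id>/detections", methods=["POST"])
def post_detections(cart_id):
    # The inference service pushes recognized products here.
    payload = request.get_json(force=True)
    items = payload.get("items", [])
    cart_items.setdefault(cart_id, []).extend(items)
    return jsonify({"stored": len(items)})

@app.route("/carts/<cart_id>/items", methods=["GET"])
def get_items(cart_id):
    # The Android app polls this endpoint to render the running bill.
    return jsonify({"cart_id": cart_id, "items": cart_items.get(cart_id, [])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```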
Phelix

Phelix provides medical tools and AI assistants for hospitals and patients.
I’ve had the opportunity to lead efforts on fine-tuning and optimizing ML architectures, pushing the accuracy of our models to outperform major industry players like Google and Amazon in the area of comprehending medical documents. Working in a fast-paced, agile environment, I’ve developed robust pipelines that processed over 100K images daily using Python. I also implemented OCR (Optical Character Recognition) and NER (Named Entity Recognition) to streamline medical record processing and improve accuracy.
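As an illustration of that OCR + NER flow, here is a simplified sketch in which pytesseract and an off-the-shelf spaCy model stand in for the fine-tuned in-house models; file paths and labels are placeholders.
```python
# Sketch of a document-processing step: OCR the page image, then run NER on
# the extracted text. pytesseract and spaCy stand in for the production models.
from PIL import Image
import pytesseract
import spacy

nlp = spacy.load("en_core_web_sm")  # placeholder for the fine-tuned medical NER model

def process_page(image_path: str) -> dict:
    # Step 1: OCR the scanned page.
    text = pytesseract.image_to_string(Image.open(image_path))

    # Step 2: extract named entities from the recognized text.
    doc = nlp(text)
    entities = [{"text": ent.text, "label": ent.label_} for ent in doc.ents]

    return {"text": text, "entities": entities}

if __name__ == "__main__":
    print(process_page("sample_referral.png"))  # hypothetical input file
```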
In addition to model development, I automated our build, test, and deployment processes using Docker, Pytest, and Splinter, enabling CI/CD and dramatically speeding up our development cycles. This automation cut deployment times from 4 hours down to less than 10 minutes, and the automated testing reduced production bug occurrence by over 70%.
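The browser-level regression checks with Splinter looked roughly like this; the URL, form fields, and credentials are hypothetical stand-ins for the real application.
```python
# Sketch of a Splinter browser smoke test run in CI; all values are placeholders.
from splinter import Browser

def test_login_smoke():
    with Browser("chrome", headless=True) as browser:
        browser.visit("https://staging.example.com/login")
        browser.fill("username", "ci-user")
        browser.fill("password", "ci-password")
        browser.find_by_css("button[type='submit']").first.click()
        # The dashboard should load once the login round-trip succeeds.
        assert browser.is_text_present("Dashboard")
```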
Collaborating with a highly agile international team of five, I took the lead in mentoring mid-level and junior developers while spearheading the training and maintenance of different ML models. I developed a microservice-based architecture using Flask and Django to serve our models, with seamless communication between services via REST APIs and sockets. I deployed the system on AWS, integrating services like Textract and Amazon Comprehend Medical for document analysis, though our fine-tuned in-house models eventually replaced these with superior results. Hugging Face’s LayoutLM was a big part of this, and I also customized Tesseract for OCR and trained models for document segmentation, enhancing document comprehension across the board.
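The model-serving side of that microservice architecture came down to thin REST wrappers around the models; a simplified Flask sketch, where the model loader, endpoint, and payload shape are placeholders.
```python
# Sketch of a Flask microservice exposing a document-understanding model over
# REST; model loading and the payload shape are illustrative placeholders.
from flask import Flask, jsonify, request

from inference import load_layout_model  # hypothetical wrapper around the fine-tuned model

app = Flask(__name__)
model = load_layout_model()  # loaded once at startup, reused across requests

@app.route("/v1/analyze", methods=["POST"])
def analyze_document():
    # Other services POST a document (e.g. encoded page images) and get back
    # the extracted fields and entities.
    payload = request.get_json(force=True)
    result = model.predict(payload["pages"])
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```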
To keep everything on track, I enhanced our data visualization and model monitoring using tools like Streamlit and Matplotlib. These improvements created a visual layer that helped us stay on top of our training processes and performance. Ultimately, I built an ecosystem where robust machine learning meets efficient deployment and seamless automation—making it possible to iterate faster and with better results.
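A Streamlit dashboard along these lines is what that monitoring layer amounted to; the metric names and data source are illustrative.
```python
# Sketch of a Streamlit dashboard for tracking training runs; the CSV schema
# and metric names are placeholders for the real tracking data.
import pandas as pd
import streamlit as st

st.title("Model training monitor")

runs = pd.read_csv("training_runs.csv")  # hypothetical log of per-epoch metrics

latest = runs.iloc[-1]
st.metric("Latest validation accuracy", f"{latest['val_accuracy']:.3f}")
st.metric("Latest training loss", f"{latest['train_loss']:.3f}")

# Loss and accuracy curves over epochs.
st.line_chart(runs.set_index("epoch")[["train_loss", "val_loss"]])
st.line_chart(runs.set_index("epoch")[["train_accuracy", "val_accuracy"]])

st.dataframe(runs.tail(20))  # most recent epochs in tabular form
```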
NCAI (National Centre for AI)

At NCAI, I had the incredible opportunity to lead a team of talented researchers and developers while exploring cutting-edge machine-learning architectures to tackle some pretty complex challenges. One of the projects I’m most proud of centered around Urdu language transcription. I spearheaded our Speech-to-Text team, where we dove deep into evaluating tools like Kaldi and DeepSpeech. After thorough testing, we found Kaldi to be the better choice due to its superior performance and lower latency, especially in low-data environments. Although working with Kaldi presented its own set of challenges, we were able to push through and deliver a robust speech recognition solution for Urdu transcription.
To make it all work, I had to flex my C++ and Bash scripting skills while building out multiple machine-learning pipelines to ensure our AI applications were production-ready. Our agile environment helped us move quickly, and I made sure we embraced automation and continuous deployment practices to keep everything efficient. This involved integrating REST APIs, automating deployments with Docker, and maintaining CI/CD pipelines through Jenkins and GCP. We used a suite of MLOps strategies, relying on tools like TensorFlow, Keras, PyTorch, and cloud services (AWS and GCP) to streamline the entire model training, deployment, and monitoring process.
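Bridging Kaldi’s C++ and Bash tooling into those Python services typically meant shelling out to the decoding scripts; a simplified sketch of that glue, with hypothetical script names and output format.
```python
# Sketch of Python glue around a Kaldi decoding script; the script name,
# arguments, and output format are hypothetical placeholders.
import subprocess
from pathlib import Path

def transcribe(wav_path: str, work_dir: str = "decode_tmp") -> str:
    Path(work_dir).mkdir(exist_ok=True)

    # Call the Bash/Kaldi decoding pipeline and capture its output.
    result = subprocess.run(
        ["bash", "scripts/decode_urdu.sh", wav_path, work_dir],
        capture_output=True,
        text=True,
        check=True,
    )

    # Assume the script prints the final transcript on its last stdout line.
    return result.stdout.strip().splitlines()[-1]

if __name__ == "__main__":
    print(transcribe("samples/utterance_001.wav"))
```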
On top of all this, the project also had a web development aspect, so I worked with frameworks like Django and handled data pipeline management using Scrapy and SQL. Leading this project not only taught me a lot about AI development but also how to turn cutting-edge research into practical solutions for users.
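On the Scrapy side, a collection spider of the kind feeding that data pipeline looks roughly like this; the target site, selectors, and item fields are placeholders.
```python
# Sketch of a Scrapy spider feeding the data pipeline; the target site and
# CSS selectors are illustrative, not the actual sources used at NCAI.
import scrapy

class ArticleSpider(scrapy.Spider):
    name = "articles"
    start_urls = ["https://example.com/news"]  # placeholder source

    def parse(self, response):
        for article in response.css("article"):
            yield {
                "title": article.css("h2::text").get(),
                "url": response.urljoin(article.css("a::attr(href)").get()),
            }

        # Follow pagination links, if present.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```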