- William & Mary
- Williamsburg, Virginia
- https://www.wm.edu/
- in/md-zahidul-haque-6298781a7
Stars
Marketing skills for Claude Code and AI agents. CRO, copywriting, SEO, analytics, and growth engineering.
Use this skill to enable Claude Code to communicate directly with your Google NotebookLM notebooks. Query your uploaded documents and get source-grounded, citation-backed answers from Gemini. Featu…
An agentic skills framework & software development methodology that works.
A curated list of awesome Claude Skills, resources, and tools for customizing Claude AI workflows
2026 AI/ML internship & new graduate job list updated daily
The Roslyn .NET compiler provides C# and Visual Basic languages with rich code analysis APIs.
Replication package of the paper "Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models" (TOSEM 2025)
A high-accuracy, high-efficiency multi-task fine-tuning framework for Code LLMs. This work has been accepted by KDD 2024.
Official repository for the paper "LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code"
[NeurIPS 2024] EffiBench: Benchmarking the Efficiency of Automatically Generated Code
A Python backend for the GumTree diff tool.
[TOSEM 2026] A Systematic Literature Review on Large Language Models for Automated Program Repair
My take on a practical implementation of Linformer for PyTorch.
[ICLR 2024] Efficient Streaming Language Models with Attention Sinks
[NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamically sparse attention computation reduces inference latency by up to 10x for pre-filli…
A low-latency & high-throughput serving engine for LLMs
Layer-Condensed KV cache with a 10x larger batch size, fewer parameters, and less computation. Dramatic speedup with better task performance. Accepted to ACL 2024.
Dynamic Memory Management for Serving LLMs without PagedAttention
LangChain & Prompt Engineering tutorials on Large Language Models (LLMs) such as ChatGPT with custom data. Jupyter notebooks on loading and indexing data, creating prompt templates, CSV agents, and…
A high-throughput and memory-efficient inference and serving engine for LLMs
⛔️ DEPRECATED – See https://github.com/ageron/handson-ml3 or handson-mlp instead.
12 weeks, 26 lessons, 52 quizzes, classic Machine Learning for all
Turn any web page into a desktop app (lightweight, ~3MB)
Notebooks from the Machine Learning Specialization