Displaying 1 to 30 of 70 repositories
| Description | Updated | Pulls | Tags |
|---|---|---|---|
| Multimodal AI model with 35B MoE architecture for coding agents, reasoning, and vision tasks | 2d | 10K+ | — |
| 1T MoE multimodal agentic model with long-horizon coding, swarm orchestration, and native vision | 4d | 3.8K | — |
| 119B MoE model with switchable reasoning mode, multimodal vision, and 256k context window | 4d | 882 | — |
| 119B parameter hybrid model with reasoning, vision, and code capabilities (1M token context) | 5d | 1.1K | — |
| Multimodal LLM with 35B parameters for coding, agentic tasks, and vision-language understanding | 9d | 2.9K | — |
| Gemma 4: multimodal open AI models by Google, optimized for reasoning, coding, and long context. | 23d | 10K+ | — |
| 397B MoE model with 17B activation for reasoning, coding, agents, and multimodal understanding | 26d | 100K+ | 7 |
| 397B-parameter MoE multimodal LLM with 17B active params, 262K context, 201 languages | 26d | 10K+ | 1 |
| 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8) | 2m | 10K+ | 3 |
| Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging | 2m | 10K+ | 1 |
| Efficient 80B MoE coding model with 3B activated params, 256K context, and agentic capabilities | 2m | 50K+ | 1 |
| Image generation model that uses a base latent diffusion model plus a refiner. | 3m | 10K+ | 7 |
| GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 3m | 10K+ | 4 |
| GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 3m | 10K+ | 1 |
| Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 3m | 10K+ | 4 |
| FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 4m | 6.1K | 1 |
| FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 4m | 9.4K | 2 |
| Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 5m | 50K+ | 2 |
| Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 5m | 10K+ | 1 |
| DeepSeek-V3.2 boosts efficiency and reasoning with DSA, scalable RL, and agentic data; IMO/IOI wins. | 5m | 50K+ | 10 |
| Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 5m | 10K+ | 4 |
| Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 5m | 50K+ | 2 |
| Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 5m | 10K+ | 3 |
| Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 5m | 10K+ | — |
| Snowflake’s Arctic-Embed v2.0 boosts multilingual retrieval and efficiency | 6m | 5.0K | — |
| Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 6m | 10K+ | 1 |
| Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 6m | 10K+ | — |
| OpenAI’s open-weight models designed for powerful reasoning and agentic tasks | 6m | 100K+ | 44 |